| column | dtype | range / distinct values |
|--------------|---------------|-------------------------|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 5 to 112 |
| repo_url | stringlengths | 34 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 757 |
| labels | stringlengths | 4 to 664 |
| body | stringlengths | 3 to 261k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 232k |
| binary_label | int64 | 0 to 1 |
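The summary above describes a GitHub-issues classification dataset: `label` holds the string class (`defect` / `non_defect`) and `binary_label` its integer encoding. A minimal stdlib sketch of that relationship, using two abridged rows copied from the samples below (column names taken from the summary; this is an illustration, not the dataset's own loading code):

```python
# Two abridged rows copied from the dump; column names per the schema above.
rows = [
    {"type": "IssuesEvent", "action": "closed",
     "title": "Panel: RTL mode down arrow is showing wrong",
     "label": "defect", "binary_label": 1},
    {"type": "IssuesEvent", "action": "opened",
     "title": "Explain how to store documents in local storage",
     "label": "non_defect", "binary_label": 0},
]

# In every sample row shown, binary_label == 1 exactly when label == "defect".
for row in rows:
    assert row["binary_label"] == int(row["label"] == "defect")
```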
125,618
12,264,252,860
IssuesEvent
2020-05-07 03:39:19
geesoon/github-slideshow
https://api.github.com/repos/geesoon/github-slideshow
closed
Issue Testing 1
documentation
### Testing 1 - [ ] activity 1 @githubteacher Hello, what issue do you have? > dasds > dsads > **dsadas** ### `include`
1.0
Issue Testing 1 - ### Testing 1 - [ ] activity 1 @githubteacher Hello, what issue do you have? > dasds > dsads > **dsadas** ### `include`
non_defect
issue testing testing activity githubteacher hello what issue do you have dasds dsads dsadas include
0
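Comparing the `text_combine` and `text` fields of the row above suggests the cleaning step behind the `text` column: lowercase, drop URLs, turn punctuation and markdown syntax into spaces, and drop any token containing a digit. A plausible reconstruction (an assumption inferred from the sample rows, not the actual pipeline; at least one row below also strips a bracketed preamble that this sketch would keep):

```python
import re

def clean_text(raw: str) -> str:
    """Approximate the dump's `text` column from `text_combine`."""
    text = raw.lower()
    text = re.sub(r"https?://\S+", " ", text)   # URLs do not survive in `text`
    text = re.sub(r"[^\w\s]|_", " ", text)      # punctuation/markdown -> space
    # Tokens containing digits ("1", "x3daudio") are absent from `text`.
    tokens = [t for t in text.split() if not any(c.isdigit() for c in t)]
    return " ".join(tokens)

combined = ("Issue Testing 1 - ### Testing 1 - [ ] activity 1 @githubteacher "
            "Hello, what issue do you have? > dasds > dsads > **dsadas** ### `include`")
assert clean_text(combined) == ("issue testing testing activity githubteacher "
                                "hello what issue do you have dasds dsads dsadas include")
```

The `[^\w\s]` class is Unicode-aware, which matches the dump's behavior of preserving Cyrillic words in one of the rows below.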
312,628
23,436,361,348
IssuesEvent
2022-08-15 10:17:04
yorkie-team/yorkie-js-sdk
https://api.github.com/repos/yorkie-team/yorkie-js-sdk
opened
Explain how to store documents in local storage
documentation 📔
<!-- Please only use this template for submitting common issues --> **Description**: Explain how to store documents in local storage. Users can edit the document even while offline. However, because the document is stored in memory, all changes are lost when the users close the browser. Let's explain how to store the document's snapshot and local changes in local storage so that the edits remain even when the users close the browser. Related to https://github.com/yorkie-team/yorkie-js-sdk/pull/364. **Why**: - Help developers implement storing documents in local storage.
1.0
Explain how to store documents in local storage - <!-- Please only use this template for submitting common issues --> **Description**: Explain how to store documents in local storage. Users can edit the document even while offline. However, because the document is stored in memory, all changes are lost when the users close the browser. Let's explain how to store the document's snapshot and local changes in local storage so that the edits remain even when the users close the browser. Related to https://github.com/yorkie-team/yorkie-js-sdk/pull/364. **Why**: - Help developers implement storing documents in local storage.
non_defect
explain how to store documents in local storage description explain how to store documents in local storage users can edit the document even while offline however because the document is stored in memory all changes are lost when the users close the browser let s explain how to store the document s snapshot and local changes in local storage so that the edits remain even when the users close the browser related to why help developers implement storing documents in local storage
0
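In every sample row, the `text_combine` field equals the `title` joined to the `body` with a literal " - " separator. A one-line helper capturing that observed pattern (an inference from the rows, not code from the dataset's authors):

```python
def combine(title: str, body: str) -> str:
    # Observed pattern in every sample row: "<title> - <body>".
    return f"{title} - {body}"

# Checked against the uikit row in this dump.
title = "uikit: No save when creating a new calendar"
body = ("As stated, however, if you edit an existing calendar "
        "there is a save & close button along with the cancel.")
assert combine(title, body).startswith("uikit: No save when creating a new calendar - As stated")
```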
5,813
30,790,981,731
IssuesEvent
2023-07-31 16:08:37
obi1kenobi/trustfall
https://api.github.com/repos/obi1kenobi/trustfall
closed
Test-drive adapters to ensure common edge cases are handled correctly
A-adapter A-errors C-enhancement C-maintainability E-help-wanted E-mentor E-medium
Before using a new adapter, Trustfall could "test drive" it to make sure it adequately handles edge cases: - call `resolve_property` with a `None` active vertex for some property and assert that it got a `FieldValue::Null` property value - call `resolve_neighbors` with a `None` active vertex for some edge and assert that it got an empty iterable of neighbors - call `resolve_coercion` with a `None` active vertex for some plausible type coercion (if any) and assert that it got a `false` result - perhaps even assert that sending multiple contexts into these functions means the contexts are returned in the same order. This will require some schema introspection (to generate valid type / property / edge / coercion values) but should be cheap perf-wise. It could be implemented transparently in the `trustfall` crate with an optional default-enabled feature. Particularly perf-sensitive applications could opt out of the feature.
True
Test-drive adapters to ensure common edge cases are handled correctly - Before using a new adapter, Trustfall could "test drive" it to make sure it adequately handles edge cases: - call `resolve_property` with a `None` active vertex for some property and assert that it got a `FieldValue::Null` property value - call `resolve_neighbors` with a `None` active vertex for some edge and assert that it got an empty iterable of neighbors - call `resolve_coercion` with a `None` active vertex for some plausible type coercion (if any) and assert that it got a `false` result - perhaps even assert that sending multiple contexts into these functions means the contexts are returned in the same order. This will require some schema introspection (to generate valid type / property / edge / coercion values) but should be cheap perf-wise. It could be implemented transparently in the `trustfall` crate with an optional default-enabled feature. Particularly perf-sensitive applications could opt out of the feature.
non_defect
test drive adapters to ensure common edge cases are handled correctly before using a new adapter trustfall could test drive it to make sure it adequately handles edge cases call resolve property with a none active vertex for some property and assert that it got a fieldvalue null property value call resolve neighbors with a none active vertex for some edge and assert that it got an empty iterable of neighbors call resolve coercion with a none active vertex for some plausible type coercion if any and assert that it got a false result perhaps even assert that sending multiple contexts into these functions means the contexts are returned in the same order this will require some schema introspection to generate valid type property edge coercion values but should be cheap perf wise it could be implemented transparently in the trustfall crate with an optional default enabled feature particularly perf sensitive applications could opt out of the feature
0
55,074
14,174,018,740
IssuesEvent
2020-11-12 19:14:31
SAP/fundamental-ngx
https://api.github.com/repos/SAP/fundamental-ngx
closed
Panel: RTL mode down arrow is showing wrong
Defect Hunting Medium RTL bug platform
Description: RTL mode down arrow is showing wrong Expected: RTL mode down arrow should show properly Screen shot: ![image](https://user-images.githubusercontent.com/32538291/96883201-c1df7400-149d-11eb-9524-7d23f84010cd.png) Expected: ![image](https://user-images.githubusercontent.com/32538291/96883223-c9068200-149d-11eb-8530-4914cd9c2980.png)
1.0
Panel: RTL mode down arrow is showing wrong - Description: RTL mode down arrow is showing wrong Expected: RTL mode down arrow should show properly Screen shot: ![image](https://user-images.githubusercontent.com/32538291/96883201-c1df7400-149d-11eb-9524-7d23f84010cd.png) Expected: ![image](https://user-images.githubusercontent.com/32538291/96883223-c9068200-149d-11eb-8530-4914cd9c2980.png)
defect
panel rtl mode down arrow is showing wrong description rtl mode down arrow is showing wrong expected rtl mode down arrow should show properly screen shot expected
1
309,676
23,302,742,514
IssuesEvent
2022-08-07 15:13:50
cython/cython
https://api.github.com/repos/cython/cython
closed
Switch readthedocs to show stable branch by default
Documentation
Right now https://cython.readthedocs.io shows documentation for the "latest" version of the code, which is apparently an alpha? You can configure the project to show "stable" docs by default, and that way users will see the documentation for latest stable release automatically. It's in the settings somewhere, you can set what branch to identifier to show by default, and "stable" is one of the options.
1.0
Switch readthedocs to show stable branch by default - Right now https://cython.readthedocs.io shows documentation for the "latest" version of the code, which is apparently an alpha? You can configure the project to show "stable" docs by default, and that way users will see the documentation for latest stable release automatically. It's in the settings somewhere, you can set what branch to identifier to show by default, and "stable" is one of the options.
non_defect
switch readthedocs to show stable branch by default right now shows documentation for the latest version of the code which is apparently an alpha you can configure the project to show stable docs by default and that way users will see the documentation for latest stable release automatically it s in the settings somewhere you can set what branch to identifier to show by default and stable is one of the options
0
105,331
23,033,148,440
IssuesEvent
2022-07-22 15:44:14
Azure/azure-dev
https://api.github.com/repos/Azure/azure-dev
closed
Clean up extra .ps1 files included in .vsix release
engsys vscode
On release we noticed a couple extra powershell files in the .vsix files.
1.0
Clean up extra .ps1 files included in .vsix release - On release we noticed a couple extra powershell files in the .vsix files.
non_defect
clean up extra files included in vsix release on release we noticed a couple extra powershell files in the vsix files
0
80,949
23,343,120,645
IssuesEvent
2022-08-09 15:30:31
xamarin/xamarin-android
https://api.github.com/repos/xamarin/xamarin-android
closed
Xamarin.Android adds automatically WRITE_EXTERNAL_STORAGE permissions to manifest
Area: App+Library Build
### Steps to Reproduce 1. Create a new Xamarin.Forms project with Android and iOS app. 2. Build release android. 3. Check manifest in /obj/Release/android/AndroidManifest.xml ### Expected Behavior Manifest in `obj` should not contain `WRITE_EXTERNAL_STORAGE` if it is not declared in the project. This permission should not be included in the manifest in APK either (after running Archive for Publishing). ### Actual Behavior Xamarin.Android adds `WRITE_EXTERNAL_STORAGE` to permissions even though it is not selected in the manifest. ### Version Information Xamarin.Android should not add `WRITE_EXTERNAL_STORAGE` permission if not checked in the manifest. ``` === Visual Studio Community 2019 for Mac === Version 8.7.8 (build 4) Installation UUID: 0be7cb3e-0171-43b8-99c3-b7ba76999bd0 GTK+ 2.24.23 (Raleigh theme) Xamarin.Mac 6.18.0.23 (d16-6 / 088c73638) Package version: 612000093 === Mono Framework MDK === Runtime: Mono 6.12.0.93 (2020-02/620cf538206) (64-bit) Package version: 612000093 === Roslyn (Language Service) === 3.7.0-6.20427.1+18ede13943b0bfae1b44ef078b2f3923159bcd32 === NuGet === Version: 5.7.0.6702 === .NET Core SDK === SDK: /usr/local/share/dotnet/sdk/3.1.402/Sdks SDK Versions: 3.1.402 3.1.200 3.1.102 3.1.101 MSBuild SDKs: /Library/Frameworks/Mono.framework/Versions/6.12.0/lib/mono/msbuild/Current/bin/Sdks === .NET Core Runtime === Runtime: /usr/local/share/dotnet/dotnet Runtime Versions: 3.1.8 3.1.2 3.1.1 2.1.22 2.1.16 2.1.15 === Xamarin.Profiler === Version: 1.6.13.11 Location: /Applications/Xamarin Profiler.app/Contents/MacOS/Xamarin Profiler === Updater === Version: 11 === Xamarin.Android === Version: 11.0.2.0 (Visual Studio Community) Commit: xamarin-android/d16-7/025fde9 Android SDK: /Users/wkulik/Library/Android/sdk Supported Android versions: None installed SDK Tools Version: 26.1.1 SDK Platform Tools Version: 30.0.4 SDK Build Tools Version: 29.0.3 Build Information: Mono: 83105ba Java.Interop: xamarin/java.interop/d16-7@1f3388a ProGuard: Guardsquare/proguard/proguard6.2.2@ebe9000 SQLite: xamarin/sqlite/3.32.1@1a3276b Xamarin.Android Tools: xamarin/xamarin-android-tools/d16-7@017078f === Microsoft OpenJDK for Mobile === Java SDK: /Users/wkulik/Library/Developer/Xamarin/jdk/microsoft_dist_openjdk_1.8.0.25 1.8.0-25 Android Designer EPL code available here: https://github.com/xamarin/AndroidDesigner.EPL === Android SDK Manager === Version: 16.7.0.13 Hash: 8380518 Branch: remotes/origin/d16-7~2 Build date: 2020-09-16 05:12:24 UTC === Android Device Manager === Version: 16.7.0.24 Hash: bb090a3 Branch: remotes/origin/d16-7 Build date: 2020-09-16 05:12:46 UTC === Xamarin Designer === Version: 16.7.0.495 Hash: 03d50a221 Branch: remotes/origin/d16-7-vsmac Build date: 2020-08-28 13:12:52 UTC === Apple Developer Tools === Xcode 12.1 (17222) Build 12A7403 === Xamarin.Mac === Xamarin.Mac not installed. Can't find /Library/Frameworks/Xamarin.Mac.framework/Versions/Current/Version. === Xamarin.iOS === Version: 14.0.0.0 (Visual Studio Community) Hash: 7ec3751a1 Branch: xcode12 Build date: 2020-09-16 11:33:15-0400 === Build Information === Release ID: 807080004 Git revision: 9ea7bef96d65cdc3f4288014a799026ccb1993bc Build date: 2020-09-16 17:22:54-04 Build branch: release-8.7 Xamarin extensions: 9ea7bef96d65cdc3f4288014a799026ccb1993bc === Operating System === Mac OS X 10.15.7 Darwin 19.6.0 Darwin Kernel Version 19.6.0 Mon Aug 31 22:12:52 PDT 2020 root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64 ```
1.0
Xamarin.Android adds automatically WRITE_EXTERNAL_STORAGE permissions to manifest - ### Steps to Reproduce 1. Create a new Xamarin.Forms project with Android and iOS app. 2. Build release android. 3. Check manifest in /obj/Release/android/AndroidManifest.xml ### Expected Behavior Manifest in `obj` should not contain `WRITE_EXTERNAL_STORAGE` if it is not declared in the project. This permission should not be included in the manifest in APK either (after running Archive for Publishing). ### Actual Behavior Xamarin.Android adds `WRITE_EXTERNAL_STORAGE` to permissions even though it is not selected in the manifest. ### Version Information Xamarin.Android should not add `WRITE_EXTERNAL_STORAGE` permission if not checked in the manifest. ``` === Visual Studio Community 2019 for Mac === Version 8.7.8 (build 4) Installation UUID: 0be7cb3e-0171-43b8-99c3-b7ba76999bd0 GTK+ 2.24.23 (Raleigh theme) Xamarin.Mac 6.18.0.23 (d16-6 / 088c73638) Package version: 612000093 === Mono Framework MDK === Runtime: Mono 6.12.0.93 (2020-02/620cf538206) (64-bit) Package version: 612000093 === Roslyn (Language Service) === 3.7.0-6.20427.1+18ede13943b0bfae1b44ef078b2f3923159bcd32 === NuGet === Version: 5.7.0.6702 === .NET Core SDK === SDK: /usr/local/share/dotnet/sdk/3.1.402/Sdks SDK Versions: 3.1.402 3.1.200 3.1.102 3.1.101 MSBuild SDKs: /Library/Frameworks/Mono.framework/Versions/6.12.0/lib/mono/msbuild/Current/bin/Sdks === .NET Core Runtime === Runtime: /usr/local/share/dotnet/dotnet Runtime Versions: 3.1.8 3.1.2 3.1.1 2.1.22 2.1.16 2.1.15 === Xamarin.Profiler === Version: 1.6.13.11 Location: /Applications/Xamarin Profiler.app/Contents/MacOS/Xamarin Profiler === Updater === Version: 11 === Xamarin.Android === Version: 11.0.2.0 (Visual Studio Community) Commit: xamarin-android/d16-7/025fde9 Android SDK: /Users/wkulik/Library/Android/sdk Supported Android versions: None installed SDK Tools Version: 26.1.1 SDK Platform Tools Version: 30.0.4 SDK Build Tools Version: 29.0.3 Build Information: Mono: 83105ba Java.Interop: xamarin/java.interop/d16-7@1f3388a ProGuard: Guardsquare/proguard/proguard6.2.2@ebe9000 SQLite: xamarin/sqlite/3.32.1@1a3276b Xamarin.Android Tools: xamarin/xamarin-android-tools/d16-7@017078f === Microsoft OpenJDK for Mobile === Java SDK: /Users/wkulik/Library/Developer/Xamarin/jdk/microsoft_dist_openjdk_1.8.0.25 1.8.0-25 Android Designer EPL code available here: https://github.com/xamarin/AndroidDesigner.EPL === Android SDK Manager === Version: 16.7.0.13 Hash: 8380518 Branch: remotes/origin/d16-7~2 Build date: 2020-09-16 05:12:24 UTC === Android Device Manager === Version: 16.7.0.24 Hash: bb090a3 Branch: remotes/origin/d16-7 Build date: 2020-09-16 05:12:46 UTC === Xamarin Designer === Version: 16.7.0.495 Hash: 03d50a221 Branch: remotes/origin/d16-7-vsmac Build date: 2020-08-28 13:12:52 UTC === Apple Developer Tools === Xcode 12.1 (17222) Build 12A7403 === Xamarin.Mac === Xamarin.Mac not installed. Can't find /Library/Frameworks/Xamarin.Mac.framework/Versions/Current/Version. === Xamarin.iOS === Version: 14.0.0.0 (Visual Studio Community) Hash: 7ec3751a1 Branch: xcode12 Build date: 2020-09-16 11:33:15-0400 === Build Information === Release ID: 807080004 Git revision: 9ea7bef96d65cdc3f4288014a799026ccb1993bc Build date: 2020-09-16 17:22:54-04 Build branch: release-8.7 Xamarin extensions: 9ea7bef96d65cdc3f4288014a799026ccb1993bc === Operating System === Mac OS X 10.15.7 Darwin 19.6.0 Darwin Kernel Version 19.6.0 Mon Aug 31 22:12:52 PDT 2020 root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64 ```
non_defect
xamarin android adds automatically write external storage permissions to manifest steps to reproduce create a new xamarin forms project with android and ios app build release android check manifest in obj release android androidmanifest xml expected behavior manifest in obj should not contain write external storage if it is not declared in the project this permission should not be included in the manifest in apk either after running archive for publishing actual behavior xamarin android adds write external storage to permissions even though it is not selected in the manifest version information xamarin android should not add write external storage permission if not checked in the manifest visual studio community for mac version build installation uuid gtk raleigh theme xamarin mac package version mono framework mdk runtime mono bit package version roslyn language service nuget version net core sdk sdk usr local share dotnet sdk sdks sdk versions msbuild sdks library frameworks mono framework versions lib mono msbuild current bin sdks net core runtime runtime usr local share dotnet dotnet runtime versions xamarin profiler version location applications xamarin profiler app contents macos xamarin profiler updater version xamarin android version visual studio community commit xamarin android android sdk users wkulik library android sdk supported android versions none installed sdk tools version sdk platform tools version sdk build tools version build information mono java interop xamarin java interop proguard guardsquare proguard sqlite xamarin sqlite xamarin android tools xamarin xamarin android tools microsoft openjdk for mobile java sdk users wkulik library developer xamarin jdk microsoft dist openjdk android designer epl code available here android sdk manager version hash branch remotes origin build date utc android device manager version hash branch remotes origin build date utc xamarin designer version hash branch remotes origin vsmac build date utc apple developer tools xcode build xamarin mac xamarin mac not installed can t find library frameworks xamarin mac framework versions current version xamarin ios version visual studio community hash branch build date build information release id git revision build date build branch release xamarin extensions operating system mac os x darwin darwin kernel version mon aug pdt root xnu release
0
60,159
17,023,352,895
IssuesEvent
2021-07-03 01:34:53
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Mapnik renders names for some items that otherwise aren't rendered
Component: mapnik Priority: minor Resolution: wontfix Type: defect
**[Submitted to the original trac issue database at 10.50pm, Tuesday, 20th January 2009]** An example is the dismantled railway here: http://www.openstreetmap.org/?lat=53.1788&lon=-1.3573&zoom=17&layers=B000FTF This is a stretch of former railway where there's no trace left on the ground. It's set as "railway=dismantled" (I changed this stretch some time ago from "railway=abandoned" precisely because there is no trace left on the ground). The railway doesn't show, but the name does. Would it be possible for Mapnik to only render the name of a way if there is a renderable tag on the way?
1.0
Mapnik renders names for some items that otherwise aren't rendered - **[Submitted to the original trac issue database at 10.50pm, Tuesday, 20th January 2009]** An example is the dismantled railway here: http://www.openstreetmap.org/?lat=53.1788&lon=-1.3573&zoom=17&layers=B000FTF This is a stretch of former railway where there's no trace left on the ground. It's set as "railway=dismantled" (I changed this stretch some time ago from "railway=abandoned" precisely because there is no trace left on the ground). The railway doesn't show, but the name does. Would it be possible for Mapnik to only render the name of a way if there is a renderable tag on the way?
defect
mapnik renders names for some items that otherwise aren t rendered an example is the dismantled railway here this is a stretch of former railway where there s no trace left on the ground it s set as railway dismantled i changed this stretch some time ago from railway abandoned precisely because there is no trace left on the ground the railway doesn t show but the name does would it be possible for mapnik to only render the name of a way if there is a renderable tag on the way
1
77,226
26,862,496,520
IssuesEvent
2023-02-03 19:47:07
dotCMS/core
https://api.github.com/repos/dotCMS/core
opened
Adjusting action names: Copy vs Mark for Copy
Type : Defect OKR : User Experience Triage Type : UX Design
### Problem Statement The current Site Browser "copy" options are a little confusing. ![Screenshot 2023-02-03 at 2 41 22 PM](https://user-images.githubusercontent.com/102264829/216693035-5552719f-e1c7-41d3-898c-7b5df1f8e776.png) | Action | Behavior | |-------|--------| | `Copy` | Make an immediate duplicate of the item | | `Mark for Copy` | Place item into dotCMS's clipboard for pasting | The `Mark for Copy` pattern is the one typically associated most Copy behaviors since time immemorial, while the `Copy` pattern is what, e.g., Apple calls `Duplicate`. ### Steps to Reproduce Right-click an item in the Site Browser on Auth or on a local container (note that at the time of writing, someone appears to have renamed Copy to Duplicate on the live Demo). It's not really a bug, also, but I think it's counterintuitive enough to warrant the Defect label. ### Acceptance Criteria I think we should consider the following changes: - `Copy` -> `Duplicate` - `Mark for Copy` -> `Copy` The requirements are slightly different for each, as the first is a Workflow Action and the other is a System Action (but hopefully both are fairly simple). ### dotCMS Version Seen on versions from 22.09 to 23.01, probably goes back much further. ### Proposed Objective User Experience ### Proposed Priority Priority 4 - Trivial ### External Links... Slack Conversations, Support Tickets, Figma Designs, etc. _No response_ ### Assumptions & Initiation Needs If anyone feels strongly about the current setup, may call for a Debate Club session or something? ### Sub-Tasks & Estimates _No response_
1.0
Adjusting action names: Copy vs Mark for Copy - ### Problem Statement The current Site Browser "copy" options are a little confusing. ![Screenshot 2023-02-03 at 2 41 22 PM](https://user-images.githubusercontent.com/102264829/216693035-5552719f-e1c7-41d3-898c-7b5df1f8e776.png) | Action | Behavior | |-------|--------| | `Copy` | Make an immediate duplicate of the item | | `Mark for Copy` | Place item into dotCMS's clipboard for pasting | The `Mark for Copy` pattern is the one typically associated most Copy behaviors since time immemorial, while the `Copy` pattern is what, e.g., Apple calls `Duplicate`. ### Steps to Reproduce Right-click an item in the Site Browser on Auth or on a local container (note that at the time of writing, someone appears to have renamed Copy to Duplicate on the live Demo). It's not really a bug, also, but I think it's counterintuitive enough to warrant the Defect label. ### Acceptance Criteria I think we should consider the following changes: - `Copy` -> `Duplicate` - `Mark for Copy` -> `Copy` The requirements are slightly different for each, as the first is a Workflow Action and the other is a System Action (but hopefully both are fairly simple). ### dotCMS Version Seen on versions from 22.09 to 23.01, probably goes back much further. ### Proposed Objective User Experience ### Proposed Priority Priority 4 - Trivial ### External Links... Slack Conversations, Support Tickets, Figma Designs, etc. _No response_ ### Assumptions & Initiation Needs If anyone feels strongly about the current setup, may call for a Debate Club session or something? ### Sub-Tasks & Estimates _No response_
defect
adjusting action names copy vs mark for copy problem statement the current site browser copy options are a little confusing action behavior copy make an immediate duplicate of the item mark for copy place item into dotcms s clipboard for pasting the mark for copy pattern is the one typically associated most copy behaviors since time immemorial while the copy pattern is what e g apple calls duplicate steps to reproduce right click an item in the site browser on auth or on a local container note that at the time of writing someone appears to have renamed copy to duplicate on the live demo it s not really a bug also but i think it s counterintuitive enough to warrant the defect label acceptance criteria i think we should consider the following changes copy duplicate mark for copy copy the requirements are slightly different for each as the first is a workflow action and the other is a system action but hopefully both are fairly simple dotcms version seen on versions from to probably goes back much further proposed objective user experience proposed priority priority trivial external links slack conversations support tickets figma designs etc no response assumptions initiation needs if anyone feels strongly about the current setup may call for a debate club session or something sub tasks estimates no response
1
407,336
11,912,199,634
IssuesEvent
2020-03-31 09:52:17
JEvents/JEvents
https://api.github.com/repos/JEvents/JEvents
closed
uikit: No save when creating a new calendar
Priority - High
As stated, however, if you edit an existing calendar there is a save & close button along with the cancel.
1.0
uikit: No save when creating a new calendar - As stated, however, if you edit an existing calendar there is a save & close button along with the cancel.
non_defect
uikit no save when creating a new calendar as stated however if you edit an existing calendar there is a save close button along with the cancel
0
79,362
28,128,914,130
IssuesEvent
2023-03-31 20:27:42
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
opened
Move ZeroSourceSumHATsurf in DataHeatBalance to ChilledCeilingPanelSimple
Defect
Issue overview -------------- Followup to #9921. The variable ZeroSourceSumHATsurf stores the sum of zone (or space?) surfaces radiant heat. This methodology is used for all radiant heaters. The discussion is in regard to more than 1 radiant heater in the same zone and how this zone surface data summation is used within each radiant model. The ElectricBaseBoard branch #9921 moved this variable to be a member of the electric baseboard instead of being indexed to the zone where the baseboard was installed. There is uncertainty of how this information should be used within a radiant heat transfer model. ### Details Some additional details for this issue (if relevant): - Platform (Operating system, version) - Version of EnergyPlus (if using an intermediate build, include SHA) - Unmethours link or helpdesk ticket number ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [ ] Defect file added (list location of defect file here) - [ ] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
1.0
Move ZeroSourceSumHATsurf in DataHeatBalance to ChilledCeilingPanelSimple - Issue overview -------------- Followup to #9921. The variable ZeroSourceSumHATsurf stores the sum of zone (or space?) surfaces radiant heat. This methodology is used for all radiant heaters. The discussion is in regard to more than 1 radiant heater in the same zone and how this zone surface data summation is used within each radiant model. The ElectricBaseBoard branch #9921 moved this variable to be a member of the electric baseboard instead of being indexed to the zone where the baseboard was installed. There is uncertainty of how this information should be used within a radiant heat transfer model. ### Details Some additional details for this issue (if relevant): - Platform (Operating system, version) - Version of EnergyPlus (if using an intermediate build, include SHA) - Unmethours link or helpdesk ticket number ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [ ] Defect file added (list location of defect file here) - [ ] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
defect
move zerosourcesumhatsurf in dataheatbalance to chilledceilingpanelsimple issue overview followup to the variable zerosourcesumhatsurf stores the sum of zone or space surfaces radiant heat this methodology is used for all radiant heaters the discussion is in regard to more than radiant heater in the same zone and how this zone surface data summation is used within each radiant model the electricbaseboard branch moved this variable to be a member of the electric baseboard instead of being indexed to the zone where the baseboard was installed there is uncertainty of how this information should be used within a radiant heat transfer model details some additional details for this issue if relevant platform operating system version version of energyplus if using an intermediate build include sha unmethours link or helpdesk ticket number checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
1
5,923
2,610,217,999
IssuesEvent
2015-02-26 19:09:21
chrsmith/somefinders
https://api.github.com/repos/chrsmith/somefinders
opened
x3daudio dll
auto-migrated Priority-Medium Type-Defect
``` '''Арефий Тетерин''' День добрый никак не могу найти .x3daudio dll. где то видел уже '''Алан Яковлев''' Вот держи линк http://bit.ly/16svV0a '''Арвид Кононов''' Спасибо вроде то но просит телефон вводить '''Адриан Фомичёв''' Не это не влияет на баланс '''Велислав Буров''' Неа все ок у меня ничего не списало Информация о файле: x3daudio dll Загружен: В этом месяце Скачан раз: 1172 Рейтинг: 206 Средняя скорость скачивания: 1105 Похожих файлов: 29 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 12:34
1.0
x3daudio dll - ``` '''Арефий Тетерин''' День добрый никак не могу найти .x3daudio dll. где то видел уже '''Алан Яковлев''' Вот держи линк http://bit.ly/16svV0a '''Арвид Кононов''' Спасибо вроде то но просит телефон вводить '''Адриан Фомичёв''' Не это не влияет на баланс '''Велислав Буров''' Неа все ок у меня ничего не списало Информация о файле: x3daudio dll Загружен: В этом месяце Скачан раз: 1172 Рейтинг: 206 Средняя скорость скачивания: 1105 Похожих файлов: 29 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 12:34
defect
dll арефий тетерин день добрый никак не могу найти dll где то видел уже алан яковлев вот держи линк арвид кононов спасибо вроде то но просит телефон вводить адриан фомичёв не это не влияет на баланс велислав буров неа все ок у меня ничего не списало информация о файле dll загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at
1
139,025
20,758,730,744
IssuesEvent
2022-03-15 14:28:40
raft-tech/TANF-app
https://api.github.com/repos/raft-tech/TANF-app
opened
As a grantee pilot user I want to be able to access TDP help/support content in one place
Research & Design
**Description** Blocked by other onboarding content tickets as it requires insight into the full set of content. Delivers a design for an in-app knowledge base which aggregates all onboarding/support content. May also contain existing materials like data coding instructions. **AC:** - [ ] Email has a clear sequence of next steps for the user to follow - [ ] Each step is complete with a link to the place that step will be carried out - [ ] The design is consistent with the team’s past decisions, or a change is clearly documented - [ ] The design is usable, meaning... - [ ] It uses [USWDS components and follows it’s UX guidance](https://designsystem.digital.gov/components/), or a deviation is clearly documented - [ ] Language is intentional and [plain](https://plainlanguage.gov/guidelines/); placeholders are clearly documented - [ ] It follows [accessibility guidelines](https://accessibility.digital.gov/) (e.g. clear information hierarchy, color is not the only way meaning is communicated, etc.) - [ ] If feedback identifies bigger questions or unknowns, create additional issues to investigate - [ ] Relevant user stories are documented. - [ ] Relevant accessibility implementation notes are documented - [ ] Recommended pa11y checks are documented. - [ ] User flow is included/updated. - [ ] Design includes implementation notes for accessibility for instances where standard best practices won't deliver the desired experience. 
- [ ] Dev/Design handoff has occurred **Notes** Likely knowledge base areas: - Creating & managing a login.gov account - Logging into TDP, Getting Access - Framing TDP functionality (Data Submission/Resubmission, Requirements for Data and How to Submit Complete Resubmissions) - TDP Release Notes (Potential way to frame feature overview from homepage & Getting Started email) - How To: Ask Questions, Get Support, Give Feedback May also be a good place to host some content from https://tanfdata.org/ Design will likely be modelled after informational USWDS page templates, e.g. USWDS' own getting started guide https://designsystem.digital.gov/documentation/getting-started/developers/phase-one-install/ **Tasks** - [ ] Draft information architecture (page hierarchy, order, labeling) - [ ] Draft knowledge base design in [figma](url) - [ ] Add links to all content to be hosted in knowledge base - [ ] Review / critique opportunity w/ DIGIT team (and/or regional staff?) - [ ] Revision as needed - [ ] Dev/Design handoff sync **Documentation** - Figma - Content links hack.md **DD** - [ ] @lfrohlich has reviewed and signed off
1.0
As a grantee pilot user I want to be able to access TDP help/support content in one place - **Description** Blocked by other onboarding content tickets as it requires insight into the full set of content. Delivers a design for an in-app knowledge base which aggregates all onboarding/support content. May also contain existing materials like data coding instructions. **AC:** - [ ] Email has a clear sequence of next steps for the user to follow - [ ] Each step is complete with a link to the place that step will be carried out - [ ] The design is consistent with the team’s past decisions, or a change is clearly documented - [ ] The design is usable, meaning... - [ ] It uses [USWDS components and follows it’s UX guidance](https://designsystem.digital.gov/components/), or a deviation is clearly documented - [ ] Language is intentional and [plain](https://plainlanguage.gov/guidelines/); placeholders are clearly documented - [ ] It follows [accessibility guidelines](https://accessibility.digital.gov/) (e.g. clear information hierarchy, color is not the only way meaning is communicated, etc.) - [ ] If feedback identifies bigger questions or unknowns, create additional issues to investigate - [ ] Relevant user stories are documented. - [ ] Relevant accessibility implementation notes are documented - [ ] Recommended pa11y checks are documented. - [ ] User flow is included/updated. - [ ] Design includes implementation notes for accessibility for instances where standard best practices won't deliver the desired experience. 
- [ ] Dev/Design handoff has occurred **Notes** Likely knowledge base areas: - Creating & managing a login.gov account - Logging into TDP, Getting Access - Framing TDP functionality (Data Submission/Resubmission, Requirements for Data and How to Submit Complete Resubmissions) - TDP Release Notes (Potential way to frame feature overview from homepage & Getting Started email) - How To: Ask Questions, Get Support, Give Feedback May also be a good place to host some content from https://tanfdata.org/ Design will likely be modelled after informational USWDS page templates, e.g. USWDS' own getting started guide https://designsystem.digital.gov/documentation/getting-started/developers/phase-one-install/ **Tasks** - [ ] Draft information architecture (page hierarchy, order, labeling) - [ ] Draft knowledge base design in [figma](url) - [ ] Add links to all content to be hosted in knowledge base - [ ] Review / critique opportunity w/ DIGIT team (and/or regional staff?) - [ ] Revision as needed - [ ] Dev/Design handoff sync **Documentation** - Figma - Content links hack.md **DD** - [ ] @lfrohlich has reviewed and signed off
non_defect
as a grantee pilot user i want to be able to access tdp help support content in one place description blocked by other onboarding content tickets as it requires insight into the full set of content delivers a design for an in app knowledge base which aggregates all onboarding support content may also contain existing materials like data coding instructions ac email has a clear sequence of next steps for the user to follow each step is complete with a link to the place that step will be carried out the design is consistent with the team’s past decisions or a change is clearly documented the design is usable meaning it uses or a deviation is clearly documented language is intentional and placeholders are clearly documented it follows e g clear information hierarchy color is not the only way meaning is communicated etc if feedback identifies bigger questions or unknowns create additional issues to investigate relevant user stories are documented relevant accessibility implementation notes are documented recommended checks are documented user flow is included updated design includes implementation notes for accessibility for instances where standard best practices won t deliver the desired experience dev design handoff has occurred notes likely knowledge base areas creating managing a login gov account logging into tdp getting access framing tdp functionality data submission resubmission requirements for data and how to submit complete resubmissions tdp release notes potential way to frame feature overview from homepage getting started email how to ask questions get support give feedback may also be a good place to host some content from design will likely be modelled after informational uswds page templates e g uswds own getting started guide tasks draft information architecture page hierarchy order labeling draft knowledge base design in url add links to all content to be hosted in knowledge base review critique opportunity w digit team and or regional staff revision
as needed dev design handoff sync documentation figma content links hack md dd lfrohlich has reviewed and signed off
0
59,195
17,016,412,272
IssuesEvent
2021-07-02 12:42:14
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
opened
Russian geocoding support in Nominatim
Component: nominatim Priority: major Type: defect
**[Submitted to the original trac issue database at 8.14pm, Saturday, 19th October 2013]** As of now, Russian geocoding support in Nominatim is totally broken. I'm filing this meta-ticket to track progress on individual tickets and to gather relevant information. The background is that I've tried to conduct a sociological study that involved computing coordinates for hundreds of thousands of addresses. For that, I planned to deploy a local Nominatim instance, but it turned out that for most of addresses it simply doesn't work. For now, I resort to using Yandex (Russia's #1 search engine) geocoding API that works like a charm, but is not suitable for bulk queries. Another point is that there are desktop applications being developed that use geocode-glib library (GNOME Maps, for example) that, in turn, uses Nominatim API inside. The problem is that Russian addresses nomenclature is very diverse and informal. Here is a brief summary; if needed, I can create a wiki article on that. 1) The "street" term includes not only "" (a street proper), but also "" (side-street), "" (passage), "" (avenue), "" (highway), "" (cul-de-sac), "" (bridge), "" (square) and some others. These are used in full or abbreviated form ("" -> ".", "" -> "-"), and can be both appended or prepended to the name. Sometimes, "" (major) or "" (minor) are the part of the name, and the word order is arbitrary. Thus, " ." and " " refer to the same. #4703 Examples: ". ", " ", " .", " " 2) The building number nomenclature is also very diverse. Usually, there is a top-level prefix: "" (house) or "" (property), followed by the main number. These prefixes can be abbreviated as "." r "." or even omitted. #4647 Besides the main number, there can be also letter indexes, different sub-numbers and combinations of those: - letter index is a letter (usually "", "", "") appended to the building number without a space; - sub-building is either a "" or "". These are similar, but not interchangeable. 
These can be spelled full-form (" 1 2)" or abbreviated in different ways: ". 1 . 2", ". 12", "3 . 1", "31". As you see, are short form (". 3") and one-letter form ("3"); both period and space can be omitted when appending it to the main number. Moreover, a sub-building number can have a letter index itself; - finally, the slash syntax is used when the building has dual address. For example, a building on the corner of two streets can be addressed as both ". 30" and " ., 61", while full address is " 30/61". 3) Rarely, but there can be ranges used as building numbers. For example, there is one single building with an address " ., 10-16". This means that this building should be a hit for requests like ", 12" or ", 14" (but not ", 11" - there are even and odd sides of the street usually). 4) The "" (ie) and "" (yo) letters should be treated as identical; the queries should be case insensitive. #2467 #4819 #2758 As a solution, I can imagine some code that canonicalizes the requested address. For this to work, all the Russian addresses in OSM will need to be canonicalized, too (probably, with the help of the same code).
1.0
Russian geocoding support in Nominatim - **[Submitted to the original trac issue database at 8.14pm, Saturday, 19th October 2013]** As of now, Russian geocoding support in Nominatim is totally broken. I'm filing this meta-ticket to track progress on individual tickets and to gather relevant information. The background is that I've tried to conduct a sociological study that involved computing coordinates for hundreds of thousands of addresses. For that, I planned to deploy a local Nominatim instance, but it turned out that for most of addresses it simply doesn't work. For now, I resort to using Yandex (Russia's #1 search engine) geocoding API that works like a charm, but is not suitable for bulk queries. Another point is that there are desktop applications being developed that use geocode-glib library (GNOME Maps, for example) that, in turn, uses Nominatim API inside. The problem is that Russian addresses nomenclature is very diverse and informal. Here is a brief summary; if needed, I can create a wiki article on that. 1) The "street" term includes not only "" (a street proper), but also "" (side-street), "" (passage), "" (avenue), "" (highway), "" (cul-de-sac), "" (bridge), "" (square) and some others. These are used in full or abbreviated form ("" -> ".", "" -> "-"), and can be both appended or prepended to the name. Sometimes, "" (major) or "" (minor) are the part of the name, and the word order is arbitrary. Thus, " ." and " " refer to the same. #4703 Examples: ". ", " ", " .", " " 2) The building number nomenclature is also very diverse. Usually, there is a top-level prefix: "" (house) or "" (property), followed by the main number. These prefixes can be abbreviated as "." r "." or even omitted. #4647 Besides the main number, there can be also letter indexes, different sub-numbers and combinations of those: - letter index is a letter (usually "", "", "") appended to the building number without a space; - sub-building is either a "" or "". 
These are similar, but not interchangeable. These can be spelled full-form (" 1 2)" or abbreviated in different ways: ". 1 . 2", ". 12", "3 . 1", "31". As you see, are short form (". 3") and one-letter form ("3"); both period and space can be omitted when appending it to the main number. Moreover, a sub-building number can have a letter index itself; - finally, the slash syntax is used when the building has dual address. For example, a building on the corner of two streets can be addressed as both ". 30" and " ., 61", while full address is " 30/61". 3) Rarely, but there can be ranges used as building numbers. For example, there is one single building with an address " ., 10-16". This means that this building should be a hit for requests like ", 12" or ", 14" (but not ", 11" - there are even and odd sides of the street usually). 4) The "" (ie) and "" (yo) letters should be treated as identical; the queries should be case insensitive. #2467 #4819 #2758 As a solution, I can imagine some code that canonicalizes the requested address. For this to work, all the Russian addresses in OSM will need to be canonicalized, too (probably, with the help of the same code).
defect
russian geocoding support in nominatim as of now russian geocoding support in nominatim is totally broken i m filing this meta ticket to track progress on individual tickets and to gather relevant information the background is that i ve tried to conduct a sociological study that involved computing coordinates for hundreds of thousands of addresses for that i planned to deploy a local nominatim instance but it turned out that for most of addresses it simply doesn t work for now i resort to using yandex russia s search engine geocoding api that works like a charm but is not suitable for bulk queries another point is that there are desktop applications being developed that use geocode glib library gnome maps for example that in turn uses nominatim api inside the problem is that russian addresses nomenclature is very diverse and informal here is a brief summary if needed i can create a wiki article on that the street term includes not only a street proper but also side street passage avenue highway cul de sac bridge square and some others these are used in full or abbreviated form and can be both appended or prepended to the name sometimes major or minor are the part of the name and the word order is arbitrary thus and refer to the same examples the building number nomenclature is also very diverse usually there is a top level prefix house or property followed by the main number these prefixes can be abbreviated as r or even omitted besides the main number there can be also letter indexes different sub numbers and combinations of those letter index is a letter usually appended to the building number without a space sub building is either a or these are similar but not interchangeable these can be spelled full form or abbreviated in different ways as you see are short form and one letter form both period and space can be omitted when appending it to the main number moreover a sub building number can have a letter index itself finally the slash syntax is used when the 
building has dual address for example a building on the corner of two streets can be addressed as both and while full address is rarely but there can be ranges used as building numbers for example there is one single building with an address this means that this building should be a hit for requests like or but not there are even and odd sides of the street usually the ie and yo letters should be treated as identical the queries should be case insensitive as a solution i can imagine some code that canonicalizes the requested address for this to work all the russian addresses in osm will need to be canonicalized too probably with the help of the same code
1
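Two of the canonicalization rules described in the ticket above can be sketched directly: point 4 (case-insensitive matching with the "yo"/"ie" letters treated as identical) and point 3 (parity-aware building-number ranges like "10-16"). This illustrates the requested behaviour, not Nominatim's implementation:

```python
# Illustration of two canonicalization rules from the ticket (this is the
# requested behaviour, not Nominatim's actual code).

def normalize(text):
    # Point 4: queries are case-insensitive and the letters "ё" and "е"
    # are treated as identical.
    return text.casefold().replace("ё", "е")

def range_matches(range_addr, number):
    # Point 3: a range address like "10-16" covers one side of the street,
    # so the number must fall inside it and have matching parity.
    lo, hi = (int(part) for part in range_addr.split("-"))
    return lo <= number <= hi and number % 2 == lo % 2

assert normalize("Ёлки") == normalize("елки")
assert range_matches("10-16", 12)
assert not range_matches("10-16", 11)  # odd number on an even-sided range
assert not range_matches("10-16", 18)  # outside the range
```

A full canonicalizer along these lines would also need the street-type synonym table and the building-number prefix/sub-number grammar the ticket describes.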
2,621
2,607,932,514
IssuesEvent
2015-02-26 00:27:22
chrsmithdemos/minify
https://api.github.com/repos/chrsmithdemos/minify
closed
Re-arranging CSS to minimize the gzip output
auto-migrated Priority-Medium Type-Defect
``` I've found an interesting article about CSS compression and gzip. It shows techniques to improve data compression by simply reordering CSS properties. I think it would be interesting to improve minify :) Here is the link http://www.barryvan.com.au/2009/08/css-minifier-and-alphabetiser/ Thanks for your tool ! (PS : Sorry, I've no time to implement that to minify myself, so I just give you the idea :) ) ``` ----- Original issue reported on code.google.com by `bist...@gmail.com` on 10 Sep 2009 at 1:54
1.0
Re-arranging CSS to minimize the gzip output - ``` I've found an interesting article about CSS compression and gzip. It shows techniques to improve data compression by simply reordering CSS properties. I think it would be interesting to improve minify :) Here is the link http://www.barryvan.com.au/2009/08/css-minifier-and-alphabetiser/ Thanks for your tool ! (PS : Sorry, I've no time to implement that to minify myself, so I just give you the idea :) ) ``` ----- Original issue reported on code.google.com by `bist...@gmail.com` on 10 Sep 2009 at 1:54
defect
re arranging css to minimize the gzip output i ve found an interesting article about css compression and gzip it shows techniques to improve data compression by simply reordering css properties i think it would be interesting to improve minify here is the link thanks for your tool ps sorry i ve no time to implement that to minify myself so i just give you the idea original issue reported on code google com by bist gmail com on sep at
1
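The trick from the linked article can be sketched with a toy reorderer: alphabetising declarations inside each rule makes equivalent rules byte-identical, so gzip's back-references pay off. Real CSS needs a proper parser; this assumes simple `selector{prop:val;prop:val}` input with no nesting:

```python
# Toy version of the reordering trick from the linked article: sort the
# declarations inside each rule alphabetically before minifying.
import re
import zlib

def alphabetise(css):
    def sort_block(match):
        decls = [d.strip() for d in match.group(1).split(";") if d.strip()]
        return "{" + ";".join(sorted(decls)) + "}"
    return re.sub(r"\{([^}]*)\}", sort_block, css)

css = "a{margin:0;color:red}p{color:red;margin:0}"
out = alphabetise(css)
assert out == "a{color:red;margin:0}p{color:red;margin:0}"
# The two blocks are now identical, so the deflate stream can back-reference
# the first block instead of encoding the second one from scratch.
assert len(zlib.compress(out.encode())) <= len(zlib.compress(css.encode()))
```

Reordering is safe only when declarations within a rule are independent; duplicated properties that rely on cascade order would need special handling.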
62,743
17,187,480,063
IssuesEvent
2021-07-16 05:46:18
Questie/Questie
https://api.github.com/repos/Questie/Questie
closed
Gathering Leather (768)
Questie - Journey Type - Defect
**Misplaced in the Journey feature.** **Gathering Leather** (768) This is a "Skinning" quest, but appears for everyone. Placed in Thunder Bluff section. Should be placed in Profession section. [Wowhead Link](https://classic.wowhead.com/quest=768/gathering-leather)
1.0
Gathering Leather (768) - **Misplaced in the Journey feature.** **Gathering Leather** (768) This is a "Skinning" quest, but appears for everyone. Placed in Thunder Bluff section. Should be placed in Profession section. [Wowhead Link](https://classic.wowhead.com/quest=768/gathering-leather)
defect
gathering leather misplaced in the journey feature gathering leather this is a skinning quest but appears for everyone placed in thunder bluff section should be placed in profession section
1
77,602
27,070,086,768
IssuesEvent
2023-02-14 05:49:35
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
gitter: shows "active" notification, but doesn't seem to go away no matter what I look at
T-Defect
### Steps to reproduce 1. Go to app.gitter.im 2. See the * on the title bar 3. Look at lots of things to try to make it go away 4. Cry ### Outcome #### What did you expect? No asterisk. #### What happened instead? Tears. ### Operating system macOS ### Browser information Chrome ### URL for webapp app.gitter.im ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? Yes
1.0
gitter: shows "active" notification, but doesn't seem to go away no matter what I look at - ### Steps to reproduce 1. Go to app.gitter.im 2. See the * on the title bar 3. Look at lots of things to try to make it go away 4. Cry ### Outcome #### What did you expect? No asterisk. #### What happened instead? Tears. ### Operating system macOS ### Browser information Chrome ### URL for webapp app.gitter.im ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? Yes
defect
gitter shows active notification but doesn t seem to go away no matter what i look at steps to reproduce go to app gitter im see the on the title bar look at lots of things to try to make it go away cry outcome what did you expect no asterisk what happened instead tears operating system macos browser information chrome url for webapp app gitter im application version no response homeserver no response will you send logs yes
1
57,046
14,101,821,590
IssuesEvent
2020-11-06 07:38:32
siyam4u/pingon
https://api.github.com/repos/siyam4u/pingon
opened
CVE-2018-19838 (Medium) detected in opennmsopennms-source-25.1.0-1, node-sass-4.14.1.tgz
security vulnerability
## CVE-2018-19838 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opennmsopennms-source-25.1.0-1</b>, <b>node-sass-4.14.1.tgz</b></p></summary> <p> <details><summary><b>node-sass-4.14.1.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p> <p>Path to dependency file: pingon/package.json</p> <p>Path to vulnerable library: pingon/node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - laravel-mix-1.7.2.tgz (Root Library) - :x: **node-sass-4.14.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/siyam4u/pingon/commit/4d774e9827008423397c5c157b52249afe1da317">4d774e9827008423397c5c157b52249afe1da317</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In LibSass prior to 3.5.5, functions inside ast.cpp for IMPLEMENT_AST_OPERATORS expansion allow attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, as demonstrated by recursive calls involving clone(), cloneChildren(), and copy(). 
<p>Publish Date: 2018-12-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19838>CVE-2018-19838</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sass/libsass/blob/3.6.0/src/ast.cpp">https://github.com/sass/libsass/blob/3.6.0/src/ast.cpp</a></p> <p>Release Date: 2019-07-01</p> <p>Fix Resolution: LibSass - 3.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-19838 (Medium) detected in opennmsopennms-source-25.1.0-1, node-sass-4.14.1.tgz - ## CVE-2018-19838 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opennmsopennms-source-25.1.0-1</b>, <b>node-sass-4.14.1.tgz</b></p></summary> <p> <details><summary><b>node-sass-4.14.1.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p> <p>Path to dependency file: pingon/package.json</p> <p>Path to vulnerable library: pingon/node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - laravel-mix-1.7.2.tgz (Root Library) - :x: **node-sass-4.14.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/siyam4u/pingon/commit/4d774e9827008423397c5c157b52249afe1da317">4d774e9827008423397c5c157b52249afe1da317</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In LibSass prior to 3.5.5, functions inside ast.cpp for IMPLEMENT_AST_OPERATORS expansion allow attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, as demonstrated by recursive calls involving clone(), cloneChildren(), and copy(). 
<p>Publish Date: 2018-12-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19838>CVE-2018-19838</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sass/libsass/blob/3.6.0/src/ast.cpp">https://github.com/sass/libsass/blob/3.6.0/src/ast.cpp</a></p> <p>Release Date: 2019-07-01</p> <p>Fix Resolution: LibSass - 3.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in opennmsopennms source node sass tgz cve medium severity vulnerability vulnerable libraries opennmsopennms source node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file pingon package json path to vulnerable library pingon node modules node sass package json dependency hierarchy laravel mix tgz root library x node sass tgz vulnerable library found in head commit a href found in base branch master vulnerability details in libsass prior to functions inside ast cpp for implement ast operators expansion allow attackers to cause a denial of service resulting from stack consumption via a crafted sass file as demonstrated by recursive calls involving clone clonechildren and copy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
0
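The advisory above describes stack exhaustion driven by recursive `clone()`/`cloneChildren()` calls on crafted input. A toy Python illustration of that vulnerability class (this is not libsass code):

```python
# Toy illustration of the vulnerability class in the advisory: crafted,
# deeply nested input drives a recursive clone() until the stack runs out.

class Node:
    def __init__(self, child=None):
        self.child = child

    def clone(self):
        # Recurses once per nesting level, like clone()/cloneChildren().
        return Node(self.child.clone() if self.child else None)

# Build the hostile input iteratively -- construction needs no recursion.
deep = None
for _ in range(50_000):
    deep = Node(deep)

try:
    deep.clone()
    crashed = False
except RecursionError:
    crashed = True
assert crashed  # recursive copying of hostile input exhausts the stack
```

The upstream fix in LibSass 3.6.0 bounds this behaviour; the sketch only shows why attacker-controlled nesting depth turns a plain copy routine into a denial-of-service vector.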
61,675
17,023,754,837
IssuesEvent
2021-07-03 03:40:14
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Tagging a way freezes tagging menu
Component: potlatch2 Priority: major Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 2.48am, Sunday, 30th October 2011]** I can add as many ways as I like without problems, but once I add tags to a way, the menu on the left freezes. If I am using 'simple': once I select a road type, the drop-down menu remains down. I can move it back up by clicking on the tab, but if I click anywhere previously covered by the drop-down menu, the menu reappears. I can add new ways, and categorise them differently as long as it appears in the same section (e.g. if the first way is highway=trunk, I can select other types of road, but not switch to (e.g.) rail). The advanced tab no longer works, so I can't add other tags. If I am using 'advanced': again I can add as many ways as I like, but once I create some tags, I am stuck using just this way - any clicks on the map adds a new node on the same way, double-clicking or deleting don't work. I can finish the way by pressing enter, but I can't then create any more ways or select the original way. (I can still drag nodes onto the map.) I initially was using Bing imagery, I've turned this off to no effect. I first noticed this on my 6-year old Mac at home, I initially thought it was because of my dial-up connection (slow, maybe not completely loading), but it is also happening on my new Windows 7 computer at work, broadband connection.
1.0
Tagging a way freezes tagging menu - **[Submitted to the original trac issue database at 2.48am, Sunday, 30th October 2011]** I can add as many ways as I like without problems, but once I add tags to a way, the menu on the left freezes. If I am using 'simple': once I select a road type, the drop-down menu remains down. I can move it back up by clicking on the tab, but if I click anywhere previously covered by the drop-down menu, the menu reappears. I can add new ways, and categorise them differently as long as it appears in the same section (e.g. if the first way is highway=trunk, I can select other types of road, but not switch to (e.g.) rail). The advanced tab no longer works, so I can't add other tags. If I am using 'advanced': again I can add as many ways as I like, but once I create some tags, I am stuck using just this way - any clicks on the map adds a new node on the same way, double-clicking or deleting don't work. I can finish the way by pressing enter, but I can't then create any more ways or select the original way. (I can still drag nodes onto the map.) I initially was using Bing imagery, I've turned this off to no effect. I first noticed this on my 6-year old Mac at home, I initially thought it was because of my dial-up connection (slow, maybe not completely loading), but it is also happening on my new Windows 7 computer at work, broadband connection.
defect
tagging a way freezes tagging menu i can add as many ways as i like without problems but once i add tags to a way the menu on the left freezes if i am using simple once i select a road type the drop down menu remains down i can move it back up by clicking on the tab but if i click anywhere previously covered by the drop down menu the menu reappears i can add new ways and categorise them differently as long as it appears in the same section e g if the first way is highway trunk i can select other types of road but not switch to e g rail the advanced tab no longer works so i can t add other tags if i am using advanced again i can add as many ways as i like but once i create some tags i am stuck using just this way any clicks on the map adds a new node on the same way double clicking or deleting don t work i can finish the way by pressing enter but i can t then create any more ways or select the original way i can still drag nodes onto the map i initially was using bing imagery i ve turned this off to no effect i first noticed this on my year old mac at home i initially thought it was because of my dial up connection slow maybe not completely loading but it is also happening on my new windows computer at work broadband connection
1
38,358
8,786,462,134
IssuesEvent
2018-12-20 15:48:18
techo/voluntariado-eventual
https://api.github.com/repos/techo/voluntariado-eventual
closed
Si hago una búsqueda, al cambiar el presente, asistencia o pago (individual) en un usuario se cambia en otro
Defecto
**Describí el error** Al buscar un usuario y cambiarle la asistencia, si busco otro y hago lo mismo se modifican los dos. **Para reproducirlo** Pasos para reproducir el comportamiento: 1. Ir a Inscripciones **2. Clickear en búqueda, tipear una búsqueda** 3. Cambiar el estado, asistencia o pago (individual) 4. Borrar la búsqueda. 5. Ver como el usuario que se modificó no quedó modificado, y se modificó otro random. **Comportamiento esperando** 5. Se modifica correctamente el usuario **Capturas de pantalla** ![image](https://user-images.githubusercontent.com/94343/49674908-4499ea00-fa52-11e8-9687-fc74d039abee.png) **Información adicional** Probar el caso donde haya varias páginas
1.0
Si hago una búsqueda, al cambiar el presente, asistencia o pago (individual) en un usuario se cambia en otro - **Describí el error** Al buscar un usuario y cambiarle la asistencia, si busco otro y hago lo mismo se modifican los dos. **Para reproducirlo** Pasos para reproducir el comportamiento: 1. Ir a Inscripciones **2. Clickear en búqueda, tipear una búsqueda** 3. Cambiar el estado, asistencia o pago (individual) 4. Borrar la búsqueda. 5. Ver como el usuario que se modificó no quedó modificado, y se modificó otro random. **Comportamiento esperando** 5. Se modifica correctamente el usuario **Capturas de pantalla** ![image](https://user-images.githubusercontent.com/94343/49674908-4499ea00-fa52-11e8-9687-fc74d039abee.png) **Información adicional** Probar el caso donde haya varias páginas
defect
si hago una búsqueda al cambiar el presente asistencia o pago individual en un usuario se cambia en otro describí el error al buscar un usuario y cambiarle la asistencia si busco otro y hago lo mismo se modifican los dos para reproducirlo pasos para reproducir el comportamiento ir a inscripciones clickear en búqueda tipear una búsqueda cambiar el estado asistencia o pago individual borrar la búsqueda ver como el usuario que se modificó no quedó modificado y se modificó otro random comportamiento esperando se modifica correctamente el usuario capturas de pantalla información adicional probar el caso donde haya varias páginas
1
30,918
2,729,473,202
IssuesEvent
2015-04-16 08:48:31
calblueprint/foodshift
https://api.github.com/repos/calblueprint/foodshift
closed
Add logo upload utility
high priority in progress
We want recipients and donors to upload their logos to their profiles so that these can show up on the "About" page.
1.0
Add logo upload utility - We want recipients and donors to upload their logos to their profiles so that these can show up on the "About" page.
non_defect
add logo upload utility we want recipients and donors to upload their logos to their profiles so that these can show up on the about page
0
76,782
9,963,494,054
IssuesEvent
2019-07-08 00:12:13
bongnv/kitgen
https://api.github.com/repos/bongnv/kitgen
opened
Add overview and introduction to the project
documentation
It would be more helpful if there is an overview and introduction to the project.
1.0
Add overview and introduction to the project - It would be more helpful if there is an overview and introduction to the project.
non_defect
add overview and introduction to the project it would be more helpful if there is an overview and introduction to the project
0
68,910
21,953,060,573
IssuesEvent
2022-05-24 09:35:53
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
ng-template won't load, missing internal SharedModule export inside p-menubar component
defect
**I'm submitting a bug report** (check one with "x") ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Current behavior** <!-- Describe how the bug manifests. --> `<ng-template pTemplate="start">...</ng-template>` and `<ng-template pTemplate="end">...</ng-template>` are not loaded into DOM when using p-menubar inside a component of a lazy loaded module. But if I import the undocumented SharedModule into the LazyLoaded module, it works fine. **Expected behavior** <!-- Describe what the behavior would be without the bug. --> `<ng-template pTemplate="start">...</ng-template>` and `<ng-template pTemplate="end">...</ng-template>` must be loaded by p-menubar when using it inside a component inside a lazy loaded Angular Module without having to import SharedModule (the primeng/api SharedModule) like other component does (ex : p-dropdown that export SharedModule internally, see : https://github.com/primefaces/primeng/blob/master/src/app/components/dropdown/dropdown.ts). **Minimal reproduction of the problem with instructions** <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> Exemple of the bug (with comment showing the solution and the problem in the contact-base.component.html and contact.module.ts : https://stackblitz.com/edit/primeng-menubar-demo-29wkyw **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> It's an undocumented behaviour. The motivation is to improve consistency between the components of primeng (as some export SharedModule but p-menubar doesn't export it. And it will solve a bug too. **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... --> * **Angular version:** 13.x (bug is not related to Angular) <!-- Check whether this is still an issue in the most recent Angular version --> * **PrimeNG version:** 13.4.0 <!-- Check whether this is still an issue in the most recent Angular version --> * **Browser:** all <!-- All browsers where this could be reproduced --> Affect all browsers as this bug is not browser related. * **Language:** TypeScript ~4.6.2 (the version provided with the latest Angular version when generating a new Angular project.) * **Node (for AoT issues):** `node --version` = 16.14.2 On windows with NPM 8.8.0 Issue is not environment related. Note : if you want, I can make a PR to solve the problem by exporting the PrimeNG internal SharedModule from p-menubar component (e.g : like primeng already does inside p-dropdown). If you accept PR, I will modify this file by exporting SharedModule in it https://github.com/primefaces/primeng/blob/master/src/app/components/menubar/menubar.ts. Thank you for reading
1.0
ng-template won't load, missing internal SharedModule export inside p-menubar component - **I'm submitting a bug report** (check one with "x") ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Current behavior** <!-- Describe how the bug manifests. --> `<ng-template pTemplate="start">...</ng-template>` and `<ng-template pTemplate="end">...</ng-template>` are not loaded into DOM when using p-menubar inside a component of a lazy loaded module. But if I import the undocumented SharedModule into the LazyLoaded module, it works fine. **Expected behavior** <!-- Describe what the behavior would be without the bug. --> `<ng-template pTemplate="start">...</ng-template>` and `<ng-template pTemplate="end">...</ng-template>` must be loaded by p-menubar when using it inside a component inside a lazy loaded Angular Module without having to import SharedModule (the primeng/api SharedModule) like other component does (ex : p-dropdown that export SharedModule internally, see : https://github.com/primefaces/primeng/blob/master/src/app/components/dropdown/dropdown.ts). **Minimal reproduction of the problem with instructions** <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> Exemple of the bug (with comment showing the solution and the problem in the contact-base.component.html and contact.module.ts : https://stackblitz.com/edit/primeng-menubar-demo-29wkyw **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> It's an undocumented behaviour. The motivation is to improve consistency between the components of primeng (as some export SharedModule but p-menubar doesn't export it. And it will solve a bug too. **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... --> * **Angular version:** 13.x (bug is not related to Angular) <!-- Check whether this is still an issue in the most recent Angular version --> * **PrimeNG version:** 13.4.0 <!-- Check whether this is still an issue in the most recent Angular version --> * **Browser:** all <!-- All browsers where this could be reproduced --> Affect all browsers as this bug is not browser related. * **Language:** TypeScript ~4.6.2 (the version provided with the latest Angular version when generating a new Angular project.) * **Node (for AoT issues):** `node --version` = 16.14.2 On windows with NPM 8.8.0 Issue is not environment related. Note : if you want, I can make a PR to solve the problem by exporting the PrimeNG internal SharedModule from p-menubar component (e.g : like primeng already does inside p-dropdown). If you accept PR, I will modify this file by exporting SharedModule in it https://github.com/primefaces/primeng/blob/master/src/app/components/menubar/menubar.ts. Thank you for reading
defect
ng template won t load missing internal sharedmodule export inside p menubar component i m submitting a bug report check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see current behavior and are not loaded into dom when using p menubar inside a component of a lazy loaded module but if i import the undocumented sharedmodule into the lazyloaded module it works fine expected behavior and must be loaded by p menubar when using it inside a component inside a lazy loaded angular module without having to import sharedmodule the primeng api sharedmodule like other component does ex p dropdown that export sharedmodule internally see minimal reproduction of the problem with instructions if the current behavior is a bug or you can illustrate your feature request better with an example please provide the steps to reproduce and if possible a minimal demo of the problem via or similar you can use this template as a starting point exemple of the bug with comment showing the solution and the problem in the contact base component html and contact module ts what is the motivation use case for changing the behavior it s an undocumented behaviour the motivation is to improve consistency between the components of primeng as some export sharedmodule but p menubar doesn t export it and it will solve a bug too please tell us about your environment angular version x bug is not related to angular primeng version browser all affect all browsers as this bug is not browser related language typescript the version provided with the latest angular version when generating a new angular project node for aot issues node version on windows with npm issue is not environment related note if you want i can make a pr to solve the problem by exporting the primeng internal sharedmodule from p menubar component e g like primeng already does inside p dropdown if you accept pr i will modify this file by exporting sharedmodule in it thank you for reading
1
278,722
8,649,581,433
IssuesEvent
2018-11-26 19:50:43
Klepac-Ceraj-Lab/snakemake_workflows
https://api.github.com/repos/Klepac-Ceraj-Lab/snakemake_workflows
opened
Add finalizer rule
Medium priority enhancement
Add a rule or series of rules to wrap up, eg compress files, clear out temporary files etc.
1.0
Add finalizer rule - Add a rule or series of rules to wrap up, eg compress files, clear out temporary files etc.
non_defect
add finalizer rule add a rule or series of rules to wrap up eg compress files clear out temporary files etc
0
657,291
21,789,661,705
IssuesEvent
2022-05-14 17:37:54
RoboJackets/robocup-firmware
https://api.github.com/repos/RoboJackets/robocup-firmware
closed
Document build system
priority / high exp / expert area / support
Currently we do not have solid documentation for the majority of our build system. @petersonev has agreed to put some work into making it more understandable for all of us.
1.0
Document build system - Currently we do not have solid documentation for the majority of our build system. @petersonev has agreed to put some work into making it more understandable for all of us.
non_defect
document build system currently we do not have solid documentation for the majority of our build system petersonev has agreed to put some work into making it more understandable for all of us
0
79,739
28,588,559,043
IssuesEvent
2023-04-22 00:37:01
zed-industries/community
https://api.github.com/repos/zed-industries/community
opened
Copilot doesn't re-check auth after finding no active copilot subscription
defect copilot admin read
### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it From our in-app feedback form: I just linked with Copilot and ended up in a stuck state: 1. Click the sign in with Copilot button 2. Link my Github account using the modal 3. Follow the 2nd modal to subscribe to Copilot 4. Now Zed shows me as logged out of Copilot, but the sign in button does nothing I was able to recover from this state: 1. Run the copilot auth: sign out command 2. Run the copilot auth: sign in command 3. Link my Github account using the modal 4. This time, since I'm subscribed, it waits for the response from Github and then everything works I'm guessing the issue is that I didn't have a Copilot subscription prior to signing in through Zed. Copying here so others can see the work around if they hit it. ### Environment N/A ### If applicable, add mockups / screenshots to help explain present your vision of the feature n/a ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. n/a
1.0
Copilot doesn't re-check auth after finding no active copilot subscription - ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it From our in-app feedback form: I just linked with Copilot and ended up in a stuck state: 1. Click the sign in with Copilot button 2. Link my Github account using the modal 3. Follow the 2nd modal to subscribe to Copilot 4. Now Zed shows me as logged out of Copilot, but the sign in button does nothing I was able to recover from this state: 1. Run the copilot auth: sign out command 2. Run the copilot auth: sign in command 3. Link my Github account using the modal 4. This time, since I'm subscribed, it waits for the response from Github and then everything works I'm guessing the issue is that I didn't have a Copilot subscription prior to signing in through Zed. Copying here so others can see the work around if they hit it. ### Environment N/A ### If applicable, add mockups / screenshots to help explain present your vision of the feature n/a ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. n/a
defect
copilot doesn t re check auth after finding no active copilot subscription check for existing issues completed describe the bug provide steps to reproduce it from our in app feedback form i just linked with copilot and ended up in a stuck state click the sign in with copilot button link my github account using the modal follow the modal to subscribe to copilot now zed shows me as logged out of copilot but the sign in button does nothing i was able to recover from this state run the copilot auth sign out command run the copilot auth sign in command link my github account using the modal this time since i m subscribed it waits for the response from github and then everything works i m guessing the issue is that i didn t have a copilot subscription prior to signing in through zed copying here so others can see the work around if they hit it environment n a if applicable add mockups screenshots to help explain present your vision of the feature n a if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last n a
1
27,329
4,965,460,981
IssuesEvent
2016-12-04 09:44:10
otros-systems/otroslogviewer
https://api.github.com/repos/otros-systems/otroslogviewer
closed
Hang when open 3 large log files at once
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1.Click "Open log with Autodetect type". 2.Click "Connection" and fill up the remote server details for SFTP. 3.Select 3 log files (use CTRL+mouse click) with sizes 60Mb each after connect to server. 4.Click open. What is the expected output? What do you see instead? Expected: Can load all 3 log files and view in otroslogviewer. Actual: Hang when all 3 files completed 60% of the total file sizes. What version of the product are you using? On what operating system? version:2012-04-25 OS: Windows7 Please provide any additional information below. JVM memory defined in olv.sh: -Xmx2048m VM available shown in otroslogviewer: 910Mb ``` Original issue reported on code.google.com by `rinoawon...@gmail.com` on 22 Jun 2012 at 12:29
1.0
Hang when open 3 large log files at once - ``` What steps will reproduce the problem? 1.Click "Open log with Autodetect type". 2.Click "Connection" and fill up the remote server details for SFTP. 3.Select 3 log files (use CTRL+mouse click) with sizes 60Mb each after connect to server. 4.Click open. What is the expected output? What do you see instead? Expected: Can load all 3 log files and view in otroslogviewer. Actual: Hang when all 3 files completed 60% of the total file sizes. What version of the product are you using? On what operating system? version:2012-04-25 OS: Windows7 Please provide any additional information below. JVM memory defined in olv.sh: -Xmx2048m VM available shown in otroslogviewer: 910Mb ``` Original issue reported on code.google.com by `rinoawon...@gmail.com` on 22 Jun 2012 at 12:29
defect
hang when open large log files at once what steps will reproduce the problem click open log with autodetect type click connection and fill up the remote server details for sftp select log files use ctrl mouse click with sizes each after connect to server click open what is the expected output what do you see instead expected can load all log files and view in otroslogviewer actual hang when all files completed of the total file sizes what version of the product are you using on what operating system version os please provide any additional information below jvm memory defined in olv sh vm available shown in otroslogviewer original issue reported on code google com by rinoawon gmail com on jun at
1
106,608
23,258,438,933
IssuesEvent
2022-08-04 11:25:31
Onelinerhub/onelinerhub
https://api.github.com/repos/Onelinerhub/onelinerhub
closed
Short solution needed: "How to create Node.js project" (nodejs)
help wanted good first issue code nodejs
Please help us write most modern and shortest code solution for this issue: **How to create Node.js project** (technology: [nodejs](https://onelinerhub.com/nodejs)) ### Fast way Just write the code solution in the comments. ### Prefered way 1. Create [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) explain solution. 3. Link to this issue in comments of pull request.
1.0
Short solution needed: "How to create Node.js project" (nodejs) - Please help us write most modern and shortest code solution for this issue: **How to create Node.js project** (technology: [nodejs](https://onelinerhub.com/nodejs)) ### Fast way Just write the code solution in the comments. ### Prefered way 1. Create [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) explain solution. 3. Link to this issue in comments of pull request.
non_defect
short solution needed how to create node js project nodejs please help us write most modern and shortest code solution for this issue how to create node js project technology fast way just write the code solution in the comments prefered way create with a new code file inside don t forget to explain solution link to this issue in comments of pull request
0
365,618
25,544,753,703
IssuesEvent
2022-11-29 17:49:59
robocorp/rpaframework
https://api.github.com/repos/robocorp/rpaframework
closed
`RPA.Email.Exchange` code example(s) & docstrings should use OAuth2
documentation
https://github.com/robocorp/rpaframework/blob/02f90774c15d9b2f9e7d90f74ab30d9f47d63d3b/packages/main/src/RPA/Email/Exchange.py#L91 Exchange basic authentication flow will be [deprecated](https://docs.microsoft.com/en-us/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online) from October 1st, 2022. The examples should be updated to use Oauth2 Auth Code flow. Example here: https://robocorp.com/portal/robot/robocorp/example-oauth-email
1.0
`RPA.Email.Exchange` code example(s) & docstrings should use OAuth2 - https://github.com/robocorp/rpaframework/blob/02f90774c15d9b2f9e7d90f74ab30d9f47d63d3b/packages/main/src/RPA/Email/Exchange.py#L91 Exchange basic authentication flow will be [deprecated](https://docs.microsoft.com/en-us/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online) from October 1st, 2022. The examples should be updated to use Oauth2 Auth Code flow. Example here: https://robocorp.com/portal/robot/robocorp/example-oauth-email
non_defect
rpa email exchange code example s docstrings should use exchange basic authentication flow will be from october the examples should be updated to use auth code flow example here
0
67,395
27,829,235,665
IssuesEvent
2023-03-20 02:14:15
cloudflare/terraform-provider-cloudflare
https://api.github.com/repos/cloudflare/terraform-provider-cloudflare
closed
Resource cloudflare_notification_policy_webhooks should be replaced when url is updated
kind/bug service/notifications triage/debug-log-attached
### Confirmation - [X] My issue isn't already found on the issue tracker. - [X] I have replicated my issue using the latest version of the provider and it is still present. ### Terraform and Cloudflare provider version Terraform v1.3.9 on darwin_amd64 + provider registry.terraform.io/cloudflare/cloudflare v4.0.0 ### Affected resource(s) cloudflare_notification_policy_webhooks ### Terraform configuration files ```hcl resource "cloudflare_notification_policy_webhooks" "main" { account_id = var.account_id name = "test-webhook" url = "https://google.com" } ``` ### Link to debug output https://gist.github.com/yannispgs/481838d0a54bcda0f0f0c2a05bcf33a8 ### Panic output _No response_ ### Expected output Webhook **recreated** to target URL "https://google.com" instead of "https://example.com" ### Actual output Failed attempt to update the Webhook in-place. URL is not updated ### Steps to reproduce 1. Run `terraform apply` with : ``` resource "cloudflare_notification_policy_webhooks" "main" { account_id = var.account_id name = "test-webhook" url = "https://example.com" } ``` 2. Run `terraform apply` with : ``` resource "cloudflare_notification_policy_webhooks" "main" { account_id = var.account_id name = "test-webhook" url = "https://google.com" } ``` ### Additional factoids On the Cloudflare Dashboard, when trying to update an existing Webhook, we are explicitly told the URL cannot be updated in-place. The Webhook must be recreated to have a different URL. <img width="762" alt="image" src="https://user-images.githubusercontent.com/67374171/222167771-443e22d7-5378-4061-aca4-98bede961b57.png"> EDIT: The `replace_triggered_by` lifecycle option is a workaround, but this may not cover everybody's use case. ### References _No response_
1.0
Resource cloudflare_notification_policy_webhooks should be replaced when url is updated - ### Confirmation - [X] My issue isn't already found on the issue tracker. - [X] I have replicated my issue using the latest version of the provider and it is still present. ### Terraform and Cloudflare provider version Terraform v1.3.9 on darwin_amd64 + provider registry.terraform.io/cloudflare/cloudflare v4.0.0 ### Affected resource(s) cloudflare_notification_policy_webhooks ### Terraform configuration files ```hcl resource "cloudflare_notification_policy_webhooks" "main" { account_id = var.account_id name = "test-webhook" url = "https://google.com" } ``` ### Link to debug output https://gist.github.com/yannispgs/481838d0a54bcda0f0f0c2a05bcf33a8 ### Panic output _No response_ ### Expected output Webhook **recreated** to target URL "https://google.com" instead of "https://example.com" ### Actual output Failed attempt to update the Webhook in-place. URL is not updated ### Steps to reproduce 1. Run `terraform apply` with : ``` resource "cloudflare_notification_policy_webhooks" "main" { account_id = var.account_id name = "test-webhook" url = "https://example.com" } ``` 2. Run `terraform apply` with : ``` resource "cloudflare_notification_policy_webhooks" "main" { account_id = var.account_id name = "test-webhook" url = "https://google.com" } ``` ### Additional factoids On the Cloudflare Dashboard, when trying to update an existing Webhook, we are explicitly told the URL cannot be updated in-place. The Webhook must be recreated to have a different URL. <img width="762" alt="image" src="https://user-images.githubusercontent.com/67374171/222167771-443e22d7-5378-4061-aca4-98bede961b57.png"> EDIT: The `replace_triggered_by` lifecycle option is a workaround, but this may not cover everybody's use case. ### References _No response_
non_defect
resource cloudflare notification policy webhooks should be replaced when url is updated confirmation my issue isn t already found on the issue tracker i have replicated my issue using the latest version of the provider and it is still present terraform and cloudflare provider version terraform on darwin provider registry terraform io cloudflare cloudflare affected resource s cloudflare notification policy webhooks terraform configuration files hcl resource cloudflare notification policy webhooks main account id var account id name test webhook url link to debug output panic output no response expected output webhook recreated to target url instead of actual output failed attempt to update the webhook in place url is not updated steps to reproduce run terraform apply with resource cloudflare notification policy webhooks main account id var account id name test webhook url run terraform apply with resource cloudflare notification policy webhooks main account id var account id name test webhook url additional factoids on the cloudflare dashboard when trying to update an existing webhook we are explicitly told the url cannot be updated in place the webhook must be recreated to have a different url img width alt image src edit the replace triggered by lifecycle option is a workaround but this may not cover everybody s use case references no response
0
345,930
24,880,192,784
IssuesEvent
2022-10-27 23:47:12
Ciclo4-ucaldas/GX-equipo0
https://api.github.com/repos/Ciclo4-ucaldas/GX-equipo0
closed
Mockup paginas
Diseñador de interfaz de usuario documentation
crear el mockup de cada actor y la pagina principal - [ ] tener una pagina principal - [ ] tener los primeros formularios
1.0
Mockup paginas - crear el mockup de cada actor y la pagina principal - [ ] tener una pagina principal - [ ] tener los primeros formularios
non_defect
mockup paginas crear el mockup de cada actor y la pagina principal tener una pagina principal tener los primeros formularios
0
26,656
4,776,674,020
IssuesEvent
2016-10-27 14:24:50
wheeler-microfluidics/microdrop
https://api.github.com/repos/wheeler-microfluidics/microdrop
opened
Exe not working on Windows XP (Trac #77)
defect Incomplete Migration microdrop Migrated from Trac
Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/77 ```json { "status": "closed", "changetime": "2014-04-17T19:39:01", "description": "When run from the command line I get the error:\n\n\"The system cannot execute the specified program.\"\n\nWhen run by clicking on the exe, I get the error box (screenshot attached) message:\n\n\"C:\\Documents and Settings\\User\\My Documents\\dev\\microdrop\\dist\\microdrop\\microdrop.exe\n\nThis application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem.\"\n\nTested on v0.1-408-g4fb6983.", "reporter": "ryan", "cc": "", "resolution": "fixed", "_ts": "1397763541728826", "component": "microdrop", "summary": "Exe not working on Windows XP", "priority": "blocker", "keywords": "", "version": "0.1", "time": "2012-03-16T23:41:34", "milestone": "Microdrop 1.0", "owner": "cfobel", "type": "defect" } ```
1.0
Exe not working on Windows XP (Trac #77) - Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/77 ```json { "status": "closed", "changetime": "2014-04-17T19:39:01", "description": "When run from the command line I get the error:\n\n\"The system cannot execute the specified program.\"\n\nWhen run by clicking on the exe, I get the error box (screenshot attached) message:\n\n\"C:\\Documents and Settings\\User\\My Documents\\dev\\microdrop\\dist\\microdrop\\microdrop.exe\n\nThis application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem.\"\n\nTested on v0.1-408-g4fb6983.", "reporter": "ryan", "cc": "", "resolution": "fixed", "_ts": "1397763541728826", "component": "microdrop", "summary": "Exe not working on Windows XP", "priority": "blocker", "keywords": "", "version": "0.1", "time": "2012-03-16T23:41:34", "milestone": "Microdrop 1.0", "owner": "cfobel", "type": "defect" } ```
defect
exe not working on windows xp trac migrated from json status closed changetime description when run from the command line i get the error n n the system cannot execute the specified program n nwhen run by clicking on the exe i get the error box screenshot attached message n n c documents and settings user my documents dev microdrop dist microdrop microdrop exe n nthis application has failed to start because the application configuration is incorrect reinstalling the application may fix this problem n ntested on reporter ryan cc resolution fixed ts component microdrop summary exe not working on windows xp priority blocker keywords version time milestone microdrop owner cfobel type defect
1
187,740
6,760,822,076
IssuesEvent
2017-10-24 22:06:58
18F/web-design-standards
https://api.github.com/repos/18F/web-design-standards
closed
[Dropdown] duplicate focus border within dropdown box in Firefox
[Priority] Minor [Skill] Front end [Type] Bug
**Steps to recreate:** In Firefox, click on the dropdown box to set focus. The correct focus styling appears (the blue halo) but another focus style appears as a thin dotted line inside the dropdown box. <img width="406" alt="screen shot 2015-09-03 at 1 33 32 pm" src="https://cloud.githubusercontent.com/assets/10144074/9665845/e01cb4ac-5240-11e5-903c-a321c9e287f6.png"> **Expected behavior:** Would expect that the thin dotted line would not appear on focus, and you would only see the blue halo. **Tested in** Macbook Pro Yosemite 10.10.5 Firefox 39.0.3 staging
1.0
[Dropdown] duplicate focus border within dropdown box in Firefox - **Steps to recreate:** In Firefox, click on the dropdown box to set focus. The correct focus styling appears (the blue halo) but another focus style appears as a thin dotted line inside the dropdown box. <img width="406" alt="screen shot 2015-09-03 at 1 33 32 pm" src="https://cloud.githubusercontent.com/assets/10144074/9665845/e01cb4ac-5240-11e5-903c-a321c9e287f6.png"> **Expected behavior:** Would expect that the thin dotted line would not appear on focus, and you would only see the blue halo. **Tested in** Macbook Pro Yosemite 10.10.5 Firefox 39.0.3 staging
non_defect
duplicate focus border within dropdown box in firefox steps to recreate in firefox click on the dropdown box to set focus the correct focus styling appears the blue halo but another focus style appears as a thin dotted line inside the dropdown box img width alt screen shot at pm src expected behavior would expect that the thin dotted line would not appear on focus and you would only see the blue halo tested in macbook pro yosemite firefox staging
0
100,496
12,529,485,574
IssuesEvent
2020-06-04 11:25:45
ajency/Finaegis-Backend
https://api.github.com/repos/ajency/Finaegis-Backend
opened
Personal Information - Ui and text issues
Design Issue Must do Web Application
**Describe the Issue** 1. Extra horizontal line shown 3. The back button is lower compared to the Back button on the Investment Account page 4. Horizontal Line should be lighter **Screenshots** ![image](https://user-images.githubusercontent.com/52652632/83751149-08196f80-a684-11ea-8603-3f840df24d6b.png) Reference: ![image](https://user-images.githubusercontent.com/52652632/83745377-1747ef80-a67b-11ea-8721-3aec6959db05.png)
1.0
Personal Information - Ui and text issues - **Describe the Issue** 1. Extra horizontal line shown 3. The back button is lower compared to the Back button on the Investment Account page 4. Horizontal Line should be lighter **Screenshots** ![image](https://user-images.githubusercontent.com/52652632/83751149-08196f80-a684-11ea-8603-3f840df24d6b.png) Reference: ![image](https://user-images.githubusercontent.com/52652632/83745377-1747ef80-a67b-11ea-8721-3aec6959db05.png)
non_defect
personal information ui and text issues describe the issue extra horizontal line shown the back button is lower compared to the back button on the investment account page horizontal line should be lighter screenshots reference
0
18,881
3,090,947,602
IssuesEvent
2015-08-26 10:04:03
gbif/ipt
https://api.github.com/repos/gbif/ipt
closed
Allow non-occurrence datasets to be registered regardless of license assigned
bug Component-i18n Milestone-Release2.3 Priority-Critical Translation Type-Defect Usability
Currently GBIF requires occurrence datasets, or datasets with associated occurrence records, to be assigned a GBIF supported license in order to be registered with GBIF. IPT 2.2 requires all datasets be assigned a GBIF supported license in order to be registered. This requirement should be waived for non-occurrence datasets.
1.0
Allow non-occurrence datasets to be registered regardless of license assigned - Currently GBIF requires occurrence datasets, or datasets with associated occurrence records, to be assigned a GBIF supported license in order to be registered with GBIF. IPT 2.2 requires all datasets be assigned a GBIF supported license in order to be registered. This requirement should be waived for non-occurrence datasets.
defect
allow non occurrence datasets to be registered regardless of license assigned currently gbif requires occurrence datasets or datasets with associated occurrence records to be assigned a gbif supported license in order to be registered with gbif ipt requires all datasets be assigned a gbif supported license in order to be registered this requirement should be waived for non occurrence datasets
1
53,251
13,261,257,288
IssuesEvent
2020-08-20 19:33:49
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
ShiftAlongTrack() is useless (Trac #1046)
Migrated from Trac combo core defect
The only difference between ```text I3Position p = particle.ShiftAlongTrack(distance); ``` and ```text I3Position p = particle.GetPos() + distance*particle.GetDir(); ``` is that the former will [more or less silently] return I3Position(NaN,NaN,NaN) if particle.IsTrack() returns false. The conditions under which that happens are numerous and opaque enough to make it not worth the trouble, and so the entire method should probably be removed. Discuss. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1046">https://code.icecube.wisc.edu/projects/icecube/ticket/1046</a>, reported by jvansantenand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:58", "_ts": "1550067178841456", "description": "The only difference between \n\n{{{\nI3Position p = particle.ShiftAlongTrack(distance);\n}}}\n\nand\n\n{{{\nI3Position p = particle.GetPos() + distance*particle.GetDir();\n}}}\n\nis that the former will [more or less silently] return I3Position(NaN,NaN,NaN) if particle.IsTrack() returns false. The conditions under which that happens are numerous and opaque enough to make it not worth the trouble, and so the entire method should probably be removed. Discuss.\n\n", "reporter": "jvansanten", "cc": "mzoll", "resolution": "fixed", "time": "2015-07-13T14:08:39", "component": "combo core", "summary": "ShiftAlongTrack() is useless", "priority": "normal", "keywords": "", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
1.0
ShiftAlongTrack() is useless (Trac #1046) - The only difference between ```text I3Position p = particle.ShiftAlongTrack(distance); ``` and ```text I3Position p = particle.GetPos() + distance*particle.GetDir(); ``` is that the former will [more or less silently] return I3Position(NaN,NaN,NaN) if particle.IsTrack() returns false. The conditions under which that happens are numerous and opaque enough to make it not worth the trouble, and so the entire method should probably be removed. Discuss. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1046">https://code.icecube.wisc.edu/projects/icecube/ticket/1046</a>, reported by jvansantenand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:58", "_ts": "1550067178841456", "description": "The only difference between \n\n{{{\nI3Position p = particle.ShiftAlongTrack(distance);\n}}}\n\nand\n\n{{{\nI3Position p = particle.GetPos() + distance*particle.GetDir();\n}}}\n\nis that the former will [more or less silently] return I3Position(NaN,NaN,NaN) if particle.IsTrack() returns false. The conditions under which that happens are numerous and opaque enough to make it not worth the trouble, and so the entire method should probably be removed. Discuss.\n\n", "reporter": "jvansanten", "cc": "mzoll", "resolution": "fixed", "time": "2015-07-13T14:08:39", "component": "combo core", "summary": "ShiftAlongTrack() is useless", "priority": "normal", "keywords": "", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
defect
shiftalongtrack is useless trac the only difference between text p particle shiftalongtrack distance and text p particle getpos distance particle getdir is that the former will return nan nan nan if particle istrack returns false the conditions under which that happens are numerous and opaque enough to make it not worth the trouble and so the entire method should probably be removed discuss migrated from json status closed changetime ts description the only difference between n n p particle shiftalongtrack distance n n nand n n p particle getpos distance particle getdir n n nis that the former will return nan nan nan if particle istrack returns false the conditions under which that happens are numerous and opaque enough to make it not worth the trouble and so the entire method should probably be removed discuss n n reporter jvansanten cc mzoll resolution fixed time component combo core summary shiftalongtrack is useless priority normal keywords milestone owner olivas type defect
1
55,089
14,194,456,868
IssuesEvent
2020-11-15 03:55:20
FoldingAtHome/fah-issues
https://api.github.com/repos/FoldingAtHome/fah-issues
closed
Finish doesn't work from web control
1.Type - Defect 3.Component - Web Control
If I tell FAH to "Finish up, then stop" from the web control, nothing happens, and this is the case for every computer I try this on. To get around this, I have to open up the advanced control and click "Finish" from there.
1.0
Finish doesn't work from web control - If I tell FAH to "Finish up, then stop" from the web control, nothing happens, and this is the case for every computer I try this on. To get around this, I have to open up the advanced control and click "Finish" from there.
defect
finish doesn t work from web control if i tell fah to finish up then stop from the web control nothing happens and this is the case for every computer i try this on to get around this i have to open up the advanced control and click finish from there
1
7,032
2,838,336,541
IssuesEvent
2015-05-27 06:52:04
Microsoft/TypeScript
https://api.github.com/repos/Microsoft/TypeScript
closed
is this a Generic bug?
By Design Canonical
consider the following class. ``` class GenericPractice<T>{ private entity: T; constructor(entity: T) { this.entity = entity; } public add(item: T): string { return item.toString(); } } class CouponInfo { public toString(): string { return "couponInfo"; } } class Snake{ } var genericPractice = new GenericPractice<CouponInfo>(new CouponInfo()); genericPractice.add(new Snake("Sammy the Python"))); ``` theoretically, the above last line: ``` genericPractice.add(new Snake("Sammy the Python"))); ``` should have compile time error since the generic of class GenericPractice should only allow CouponInfo in method add, not Snake. However, the compile still passes which violates the fundamental concept of generic. is this a bug?
1.0
is this a Generic bug? - consider the following class. ``` class GenericPractice<T>{ private entity: T; constructor(entity: T) { this.entity = entity; } public add(item: T): string { return item.toString(); } } class CouponInfo { public toString(): string { return "couponInfo"; } } class Snake{ } var genericPractice = new GenericPractice<CouponInfo>(new CouponInfo()); genericPractice.add(new Snake("Sammy the Python"))); ``` theoretically, the above last line: ``` genericPractice.add(new Snake("Sammy the Python"))); ``` should have compile time error since the generic of class GenericPractice should only allow CouponInfo in method add, not Snake. However, the compile still passes which violates the fundamental concept of generic. is this a bug?
non_defect
is this a generic bug consider the following class class genericpractice private entity t constructor entity t this entity entity public add item t string return item tostring class couponinfo public tostring string return couponinfo class snake var genericpractice new genericpractice new couponinfo genericpractice add new snake sammy the python theoretically the above last line genericpractice add new snake sammy the python should have compile time error since the generic of class genericpractice should only allow couponinfo in method add not snake however the compile still passes which violates the fundamental concept of generic is this a bug
0
329,117
10,012,452,708
IssuesEvent
2019-07-15 13:14:33
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.qwant.com - site is not usable
browser-firefox-mobile engine-gecko priority-important
<!-- @browser: Firefox Mobile 65.0 --> <!-- @ua_header: QwantMobile/3.0 (Android 7.0; Mobile; rv:67.0) Gecko/67.0 Firefox/65.0 QwantBrowser/67.0.4 --> <!-- @reported_with: mobile-reporter --> **URL**: https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr **Browser / Version**: Firefox Mobile 65.0 **Operating System**: Android 7.0 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: nothing appear **Steps to Reproduce**: Nothing appear [![Screenshot Description](https://webcompat.com/uploads/2019/7/1c671aa2-0741-42df-91b9-3899d2730ae5-thumb.jpeg)](https://webcompat.com/uploads/2019/7/1c671aa2-0741-42df-91b9-3899d2730ae5.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190619220335</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: default</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "Échec du chargement pour l'élément <script> dont la source est https://www.qwant.com/js/app.js?1562856259875." {file: "https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr" line: 149}]', u'[JavaScript Error: "ReferenceError: config_set is not defined" {file: "https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr" line: 88}]\nwindow.configOverloadProxy@https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr:88:13\n@https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr:152:9\n', u'[JavaScript Error: "ReferenceError: config_get is not defined" {file: "https://www.qwant.com/js/bootstrap.js?1562856259875" line: 1}]\n@https://www.qwant.com/js/bootstrap.js?1562856259875:1:1\n'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.qwant.com - site is not usable - <!-- @browser: Firefox Mobile 65.0 --> <!-- @ua_header: QwantMobile/3.0 (Android 7.0; Mobile; rv:67.0) Gecko/67.0 Firefox/65.0 QwantBrowser/67.0.4 --> <!-- @reported_with: mobile-reporter --> **URL**: https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr **Browser / Version**: Firefox Mobile 65.0 **Operating System**: Android 7.0 **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: nothing appear **Steps to Reproduce**: Nothing appear [![Screenshot Description](https://webcompat.com/uploads/2019/7/1c671aa2-0741-42df-91b9-3899d2730ae5-thumb.jpeg)](https://webcompat.com/uploads/2019/7/1c671aa2-0741-42df-91b9-3899d2730ae5.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190619220335</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: default</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "Échec du chargement pour l'élément <script> dont la source est https://www.qwant.com/js/app.js?1562856259875." {file: "https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr" line: 149}]', u'[JavaScript Error: "ReferenceError: config_set is not defined" {file: "https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr" line: 88}]\nwindow.configOverloadProxy@https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr:88:13\n@https://www.qwant.com/?client=qwantbrowser&topsearch=true&lb=fr:152:9\n', u'[JavaScript Error: "ReferenceError: config_get is not defined" {file: "https://www.qwant.com/js/bootstrap.js?1562856259875" line: 1}]\n@https://www.qwant.com/js/bootstrap.js?1562856259875:1:1\n'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
site is not usable url browser version firefox mobile operating system android tested another browser no problem type site is not usable description nothing appear steps to reproduce nothing appear browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel default console messages u nwindow configoverloadproxy u n from with ❤️
0
31,268
8,679,069,247
IssuesEvent
2018-11-30 22:10:19
angular/angular-cli
https://api.github.com/repos/angular/angular-cli
closed
Hybrid/ngUpdate build does not output a changed chunk when updating AngularJS javascript sources
comp: devkit/build-webpack freq1: low severity3: broken type: bug/fix
### Bug Report or Feature Request (mark with an `x`) ``` - [x] bug report -> please search issues before submitting - [ ] feature request ``` ### Command (mark with an `x`) ``` - [ ] new - [ ] build - [x] serve - [ ] test - [ ] e2e - [ ] generate - [ ] add - [ ] update - [ ] lint - [ ] xi18n - [ ] run - [ ] config - [ ] help - [ ] version - [ ] doc ``` ### Versions Windows 7, Node 10 LTS, Angular 7.1.0 ### Failure Description I've an Angular project, which is currently upgraded using NgUpgrade from an older AngularJS project. Yesterday, I've updated from Angular 6 to Angular 7 using `ng update` and started the project as usual with `ng serve`. However, I've noticed a strange behaviour with the update. When I've used Angular 6 and I changed some code within the JS-Files from the AngularJS project, the CLI recompiled the app and outputted one changed chunk. With Angular (CLI) 7, the CLI recompiles after a change as well, but does not output a changed chunk, so the changed code is not there after the live reload. Source Maps are also broken at this point. However, if I change code within a template file of the AngularJS app, then a new chunk will be generated. In both versions it worked perfectly when changing TypeScript sources (but still, won't update the JS sources within the changed chunk) ### Desired functionality It should work like it did before, so whenever I change code within the AngularJS code base, the recompilation should output a changed chunk with the new code. I've also noticed, that if I remove `allowJs` from my `tsconfig.app.json`, it works as expected with Angular 7. ### Package Versions: Angular (CLI/ Compiler CLI, Language Service etc.): 7.1.0 @angular-devkit/build-angular: 0.11.0
1.0
Hybrid/ngUpdate build does not output a changed chunk when updating AngularJS javascript sources - ### Bug Report or Feature Request (mark with an `x`) ``` - [x] bug report -> please search issues before submitting - [ ] feature request ``` ### Command (mark with an `x`) ``` - [ ] new - [ ] build - [x] serve - [ ] test - [ ] e2e - [ ] generate - [ ] add - [ ] update - [ ] lint - [ ] xi18n - [ ] run - [ ] config - [ ] help - [ ] version - [ ] doc ``` ### Versions Windows 7, Node 10 LTS, Angular 7.1.0 ### Failure Description I've an Angular project, which is currently upgraded using NgUpgrade from an older AngularJS project. Yesterday, I've updated from Angular 6 to Angular 7 using `ng update` and started the project as usual with `ng serve`. However, I've noticed a strange behaviour with the update. When I've used Angular 6 and I changed some code within the JS-Files from the AngularJS project, the CLI recompiled the app and outputted one changed chunk. With Angular (CLI) 7, the CLI recompiles after a change as well, but does not output a changed chunk, so the changed code is not there after the live reload. Source Maps are also broken at this point. However, if I change code within a template file of the AngularJS app, then a new chunk will be generated. In both versions it worked perfectly when changing TypeScript sources (but still, won't update the JS sources within the changed chunk) ### Desired functionality It should work like it did before, so whenever I change code within the AngularJS code base, the recompilation should output a changed chunk with the new code. I've also noticed, that if I remove `allowJs` from my `tsconfig.app.json`, it works as expected with Angular 7. ### Package Versions: Angular (CLI/ Compiler CLI, Language Service etc.): 7.1.0 @angular-devkit/build-angular: 0.11.0
non_defect
hybrid ngupdate build does not output a changed chunk when updating angularjs javascript sources bug report or feature request mark with an x bug report please search issues before submitting feature request command mark with an x new build serve test generate add update lint run config help version doc versions windows node lts angular failure description i ve an angular project which is currently upgraded using ngupgrade from an older angularjs project yesterday i ve updated from angular to angular using ng update and started the project as usual with ng serve however i ve noticed a strange behaviour with the update when i ve used angular and i changed some code within the js files from the angularjs project the cli recompiled the app and outputted one changed chunk with angular cli the cli recompiles after a change as well but does not output a changed chunk so the changed code is not there after the live reload source maps are also broken at this point however if i change code within a template file of the angularjs app then a new chunk will be generated in both versions it worked perfectly when changing typescript sources but still won t update the js sources within the changed chunk desired functionality it should work like it did before so whenever i change code within the angularjs code base the recompilation should output a changed chunk with the new code i ve also noticed that if i remove allowjs from my tsconfig app json it works as expected with angular package versions angular cli compiler cli language service etc angular devkit build angular
0
72,226
7,292,440,280
IssuesEvent
2018-02-25 01:03:46
kuj0210/IoT-Pet-Home-System
https://api.github.com/repos/kuj0210/IoT-Pet-Home-System
closed
Temporary code to use when using ultrasonic sensors
TEST system
``` from grovepi import * import time import threading class UltrasonicSensor(threading.Thread): def __init__(self): self.ultrasonic_ranger = 3 self.distant = 0 self.feedcount = 0 self.watercount = 0 self.doorcount = 0 self.position = 0 self.feedposition = 20 self.waterposition = 25 self.doorposition = 35 self.now = time.localtime() self.sleep_start = 22 self.sleep_end = 6 self.on_time = 10 threading.Thread.__init__(self) self.ultrasonicEvent = threading.Event() def run(self): while True: self.now = time.localtime() if self.sleep_end <= self.now.tm_hour and self.now.tm_hour <= self.sleep_start: if self.now.tm_min < self.on_time: try: self.distant = ultrasonicRead(self.ultrasonic_ranger) if self.feedposition < self.distant and self.distant <= self.feedposition +5 : if self.feedcount == 3: print self.distant time.sleep(1) self.position = 1 print self.position elif self.feedcount == 2: print self.distant time.sleep(1) self.feedcount = 3 elif self.feedcount == 1: print self.distant time.sleep(1) self.feedcount = 2 elif self. 
feedcount == 0: print self.distant time.sleep(1) self.feedcount = 1 else: self.feedcount = 0 if self.waterposition < self.distant and self.distant <= self.waterposition + 5: if self.watercount == 3: print self.distant time.sleep(1) self.position = 2 print self.position elif self.watercount == 2: print self.distant time.sleep(1) self.watercount = 3 elif self.watercount == 1: print self.distant time.sleep(1) self.watercount = 2 elif self.watercount == 0: print self.distant time.sleep(1) self.watercount = 1 else: self.watercount = 0 if self.doorposition < self.distant and self.distant <= self.doorposition + 10: if self.doorcount == 3: print self.distant time.sleep(1) self.position = 3 print self.position elif self.doorcount == 2: print self.distant time.sleep(1) self.doorcount = 3 elif self.doorcount == 1: print self.distant time.sleep(1) self.doorcount = 2 elif self.doorcount == 0: print self.distant time.sleep(1) self.doorcount = 1 else: self.doorcount = 0 except TypeError: print("Type Error") self.ultrasonicEvent.clear() except IOError: print ("IO Error") self.ultrasonicEvent.clear() except KeyboardInterrupt: exit() def setUltrasonicSensor(self): self.ultrasonicEvent.set() ```
1.0
Temporary code to use when using ultrasonic sensors - ``` from grovepi import * import time import threading class UltrasonicSensor(threading.Thread): def __init__(self): self.ultrasonic_ranger = 3 self.distant = 0 self.feedcount = 0 self.watercount = 0 self.doorcount = 0 self.position = 0 self.feedposition = 20 self.waterposition = 25 self.doorposition = 35 self.now = time.localtime() self.sleep_start = 22 self.sleep_end = 6 self.on_time = 10 threading.Thread.__init__(self) self.ultrasonicEvent = threading.Event() def run(self): while True: self.now = time.localtime() if self.sleep_end <= self.now.tm_hour and self.now.tm_hour <= self.sleep_start: if self.now.tm_min < self.on_time: try: self.distant = ultrasonicRead(self.ultrasonic_ranger) if self.feedposition < self.distant and self.distant <= self.feedposition +5 : if self.feedcount == 3: print self.distant time.sleep(1) self.position = 1 print self.position elif self.feedcount == 2: print self.distant time.sleep(1) self.feedcount = 3 elif self.feedcount == 1: print self.distant time.sleep(1) self.feedcount = 2 elif self.feedcount == 0: print self.distant time.sleep(1) self.feedcount = 1 else: self.feedcount = 0 if self.waterposition < self.distant and self.distant <= self.waterposition + 5: if self.watercount == 3: print self.distant time.sleep(1) self.position = 2 print self.position elif self.watercount == 2: print self.distant time.sleep(1) self.watercount = 3 elif self.watercount == 1: print self.distant time.sleep(1) self.watercount = 2 elif self.watercount == 0: print self.distant time.sleep(1) self.watercount = 1 else: self.watercount = 0 if self.doorposition < self.distant and self.distant <= self.doorposition + 10: if self.doorcount == 3: print self.distant time.sleep(1) self.position = 3 print self.position elif self.doorcount == 2: print self.distant time.sleep(1) self.doorcount = 3 elif self.doorcount == 1: print self.distant time.sleep(1) self.doorcount = 2 elif self.doorcount == 0: print self.distant time.sleep(1) self.doorcount = 1 else: self.doorcount = 0 except TypeError: print("Type Error") self.ultrasonicEvent.clear() except IOError: print ("IO Error") self.ultrasonicEvent.clear() except KeyboardInterrupt: exit() def setUltrasonicSensor(self): self.ultrasonicEvent.set() ```
non_defect
temporary code to use when using ultrasonic sensors from grovepi import import time import threading class ultrasonicsensor threading thread def init self self ultrasonic ranger self distant self feedcount self watercount self doorcount self position self feedposition self waterposition self doorposition self now time localtime self sleep start self sleep end self on time threading thread init self self ultrasonicevent threading event def run self while true self now time localtime if self sleep end self now tm hour and self now tm hour self sleep start if self now tm min self on time try self distant ultrasonicread self ultrasonic ranger if self feedposition self distant and self distant self feedposition if self feedcount print self distant time sleep self position print self position elif self feedcount print self distant time sleep self feedcount elif self feedcount print self distant time sleep self feedcount elif self feedcount print self distant time sleep self feedcount else self feedcount if self waterposition self distant and self distant self waterposition if self watercount print self distant time sleep self position print self position elif self watercount print self distant time sleep self watercount elif self watercount print self distant time sleep self watercount elif self watercount print self distant time sleep self watercount else self watercount if self doorposition self distant and self distant self doorposition if self doorcount print self distant time sleep self position print self position elif self doorcount print self distant time sleep self doorcount elif self doorcount print self distant time sleep self doorcount elif self doorcount print self distant time sleep self doorcount else self doorcount except typeerror print type error self ultrasonicevent clear except ioerror print io error self ultrasonicevent clear except keyboardinterrupt exit def setultrasonicsensor self self ultrasonicevent set
0
6,963
2,610,319,671
IssuesEvent
2015-02-26 19:43:08
chrsmith/republic-at-war
https://api.github.com/repos/chrsmith/republic-at-war
closed
Gameplay Error
auto-migrated Priority-Medium Type-Defect
``` The following planetary abilities don't work at all and are defective: 1. Bakura= Cost of all repulsion lift vehicles is reduced to the abundance of of repuslor lift components. * I noticed this when my AAT and other repuslor vehicle prices were still the same even when I did not have control of the planet anymore. * Also a typo in its ability descripon it states " this area of space can only be corrupted by conquering it ". 2. Ithor = medical droid reducion cost ability did not lower the cost of medical droids at all. The droids were still at their base price of 100 even when I lost control of the planet. 3. Anaxes = ability defective cost of all victory star destroyers on all planets is reduced by 20%. Same thing cost was not reduced on all planets at all. ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 8 May 2011 at 2:11
1.0
Gameplay Error - ``` The following planetary abilities don't work at all and are defective: 1. Bakura= Cost of all repulsion lift vehicles is reduced to the abundance of of repuslor lift components. * I noticed this when my AAT and other repuslor vehicle prices were still the same even when I did not have control of the planet anymore. * Also a typo in its ability descripon it states " this area of space can only be corrupted by conquering it ". 2. Ithor = medical droid reducion cost ability did not lower the cost of medical droids at all. The droids were still at their base price of 100 even when I lost control of the planet. 3. Anaxes = ability defective cost of all victory star destroyers on all planets is reduced by 20%. Same thing cost was not reduced on all planets at all. ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 8 May 2011 at 2:11
defect
gameplay error the following planetary abilities don t work at all and are defective bakura cost of all repulsion lift vehicles is reduced to the abundance of of repuslor lift components i noticed this when my aat and other repuslor vehicle prices were still the same even when i did not have control of the planet anymore also a typo in its ability descripon it states this area of space can only be corrupted by conquering it ithor medical droid reducion cost ability did not lower the cost of medical droids at all the droids were still at their base price of even when i lost control of the planet anaxes ability defective cost of all victory star destroyers on all planets is reduced by same thing cost was not reduced on all planets at all original issue reported on code google com by gmail com on may at
1
43,736
11,812,663,760
IssuesEvent
2020-03-19 20:37:20
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
closed
Inconsistent vhost config for demo environments
Defect DevOps
New "docs" demo site does not return from apache (CMS or WEB) http://docs.demo.ci.cms.va.gov/ Ethan's demo site does: http://ethan.demo.ci.cms.va.gov/ http://ethan.demo.ci.cms.va.gov/ and http://ethan.web.demo.ci.cms.va.gov/ work fine. Domain | Status ----------------------------------------|------------ http://weekend.demo.ci.cms.va.gov | ❌ http://docs.demo.ci.cms.va.gov | ❌ http://howie.demo.ci.cms.va.gov | ❌
1.0
Inconsistent vhost config for demo environments - New "docs" demo site does not return from apache (CMS or WEB) http://docs.demo.ci.cms.va.gov/ Ethan's demo site does: http://ethan.demo.ci.cms.va.gov/ http://ethan.demo.ci.cms.va.gov/ and http://ethan.web.demo.ci.cms.va.gov/ work fine. Domain | Status ----------------------------------------|------------ http://weekend.demo.ci.cms.va.gov | ❌ http://docs.demo.ci.cms.va.gov | ❌ http://howie.demo.ci.cms.va.gov | ❌
defect
inconsistent vhost config for demo environments new docs demo site does not return from apache cms or web ethan s demo site does and work fine domain status ❌ ❌ ❌
1
70,975
23,392,939,439
IssuesEvent
2022-08-11 19:45:33
matrix-org/synapse
https://api.github.com/repos/matrix-org/synapse
closed
The default OIDC handler ignores email scope when creating new users
A-SSO S-Minor T-Defect X-Needs-Info
### Description I've set up a SSO interface using OIDC (OpenID Connect) and the default mapping provider that goes with it. I can see quite clearly from the debug logs that it's actually able to retrieve the email scope successfully, but it won't reflect in the client as such. homeserver.yaml ```yaml oidc_providers: - idp_id: backend idp_name: "Hrest.org" issuer: "https://api.myweb.site/oauth2" client_id: "synapse" client_secret: "synapse-secret" scopes: ["openid", "profile", "email"] user_profile_method: "userinfo_endpoint" allow_existing_users: true user_mapping_provider: config: localpart_template: "{{ user.sub }}" display_name_template: "{{ user.name }}" email_template: "{{ user.email }}" ``` homeserver.log ``` 2022-05-02 01:07:47,603 - synapse.handlers.oidc - 882 - DEBUG - GET-10- Userinfo for OIDC login: {'sub': 'Brody40', 'name': 'Ilya Kowalewski', 'given_name': 'Ilya', 'family_name': 'K', 'preferred_username': 'Brody40', 'email': 'badt@aletheia.icu', 'email_verified': True, 'locale': 'en'}j 2022-05-02 01:07:47,620 - synapse.handlers.sso - 519 - DEBUG - GET-10- Retrieved user attributes from user mapping provider: UserAttributes(localpart='brody40', confirm_localpart=False, display_name='Ilya K', emails=['badt@aletheia.icu']) (attempt 0) ``` Even though that seems to be the case, when opening the user profile in Element web, it doesn't show said email as belonging to said user. ### Steps to reproduce - Set up a OIDC provider with default mapping - Make an authorisation using client_secret method - Confirm with logs that the email has been retrieved - Check for user's email information, it's not been set. I would expect the user to indeed have the login email set. ### Version information Our homeserver is running Synapse [v1.57.1](https://hub.docker.com/layers/synapse/matrixdotorg/synapse/v1.57.1/images/sha256-c30ec59505505392ae13e51f6cadf912a8873b12e7fa3765e5fbbefb86ede6a3?context=explore). 
- **Version**: v1.57.1 - **Install method**: Docker, docker-compose - **Platform**: Debian 10
1.0
The default OIDC handler ignores email scope when creating new users - ### Description I've set up a SSO interface using OIDC (OpenID Connect) and the default mapping provider that goes with it. I can see quite clearly from the debug logs that it's actually able to retrieve the email scope successfully, but it won't reflect in the client as such. homeserver.yaml ```yaml oidc_providers: - idp_id: backend idp_name: "Hrest.org" issuer: "https://api.myweb.site/oauth2" client_id: "synapse" client_secret: "synapse-secret" scopes: ["openid", "profile", "email"] user_profile_method: "userinfo_endpoint" allow_existing_users: true user_mapping_provider: config: localpart_template: "{{ user.sub }}" display_name_template: "{{ user.name }}" email_template: "{{ user.email }}" ``` homeserver.log ``` 2022-05-02 01:07:47,603 - synapse.handlers.oidc - 882 - DEBUG - GET-10- Userinfo for OIDC login: {'sub': 'Brody40', 'name': 'Ilya Kowalewski', 'given_name': 'Ilya', 'family_name': 'K', 'preferred_username': 'Brody40', 'email': 'badt@aletheia.icu', 'email_verified': True, 'locale': 'en'}j 2022-05-02 01:07:47,620 - synapse.handlers.sso - 519 - DEBUG - GET-10- Retrieved user attributes from user mapping provider: UserAttributes(localpart='brody40', confirm_localpart=False, display_name='Ilya K', emails=['badt@aletheia.icu']) (attempt 0) ``` Even though that seems to be the case, when opening the user profile in Element web, it doesn't show said email as belonging to said user. ### Steps to reproduce - Set up a OIDC provider with default mapping - Make an authorisation using client_secret method - Confirm with logs that the email has been retrieved - Check for user's email information, it's not been set. I would expect the user to indeed have the login email set. 
### Version information Our homeserver is running Synapse [v1.57.1](https://hub.docker.com/layers/synapse/matrixdotorg/synapse/v1.57.1/images/sha256-c30ec59505505392ae13e51f6cadf912a8873b12e7fa3765e5fbbefb86ede6a3?context=explore). - **Version**: v1.57.1 - **Install method**: Docker, docker-compose - **Platform**: Debian 10
defect
the default oidc handler ignores email scope when creating new users description i ve set up a sso interface using oidc openid connect and the default mapping provider that goes with it i can see quite clearly from the debug logs that it s actually able to retrieve the email scope successfully but it won t reflect in the client as such homeserver yaml yaml oidc providers idp id backend idp name hrest org issuer client id synapse client secret synapse secret scopes user profile method userinfo endpoint allow existing users true user mapping provider config localpart template user sub display name template user name email template user email homeserver log synapse handlers oidc debug get userinfo for oidc login sub name ilya kowalewski given name ilya family name k preferred username email badt aletheia icu email verified true locale en j synapse handlers sso debug get retrieved user attributes from user mapping provider userattributes localpart confirm localpart false display name ilya k emails attempt even though that seems to be the case when opening the user profile in element web it doesn t show said email as belonging to said user steps to reproduce set up a oidc provider with default mapping make an authorisation using client secret method confirm with logs that the email has been retrieved check for user s email information it s not been set i would expect the user to indeed have the login email set version information our homeserver is running synapse version install method docker docker compose platform debian
1
413,550
27,958,556,306
IssuesEvent
2023-03-24 14:07:09
nextflow-io/training
https://api.github.com/repos/nextflow-io/training
closed
Improve indentation, code style and pipeline/workflow usage consistency
documentation
- [x] Some code blocks are indented with 4 spaces. Others with 2. - [x] Make new lines between blocks in processes consistent - [x] Sometimes there are spaces around parenthesis. Some other times there isn't any space or a mix of it - [x] Sometimes `option:value`, some other times `option: value` (`header: true`, `flat:true`) - [ ] Sometimes operators are separated by a line. Some other times, they aren't. - [x] The words _workflow_ and _pipeline_ are used interchangeably, but if we mean the same thing we should stick one. If we don't (like in the Tower documentation), then we should stick to the proper name in every situation. - [x] Image name and image name tag are both called tags which can be confusing. - [ ] NCBI instructions when translated should have the words in English the way they are in the NCBI portal. They're currently partially translated into the Portuguese translation - [x] method/channel factories are used interchangeably but they should be introduced as methods and then only called factories. - [x] Sometimes, a process is called a task, and a task is called a process. A task is a process instance and should be always clear to the reader. - [x] Sometimes code within the string block is indented, and sometimes it's not. - [x] Avoid implicit syntax such as not mentioning `script:` before `"""` - [x] Use `debug true` instead of `echo true` - [x] process names should be uppercase - [x] Keep the basic training consistent in terms of Nextflow version (remove `export NXF_VER=20.10.0` in Tower section) - [x] Add space around values separated by `,` ([1, 2] instead of [1,2]) or regular operators (`4 % 2` or `4 + 2` instead of `4%2` or `4+2`). Exceptions here are bash that requires no spaces such as in `export name=example` or globbing in `{1,2}`.
1.0
Improve indentation, code style and pipeline/workflow usage consistency - - [x] Some code blocks are indented with 4 spaces. Others with 2. - [x] Make new lines between blocks in processes consistent - [x] Sometimes there are spaces around parenthesis. Some other times there isn't any space or a mix of it - [x] Sometimes `option:value`, some other times `option: value` (`header: true`, `flat:true`) - [ ] Sometimes operators are separated by a line. Some other times, they aren't. - [x] The words _workflow_ and _pipeline_ are used interchangeably, but if we mean the same thing we should stick one. If we don't (like in the Tower documentation), then we should stick to the proper name in every situation. - [x] Image name and image name tag are both called tags which can be confusing. - [ ] NCBI instructions when translated should have the words in English the way they are in the NCBI portal. They're currently partially translated into the Portuguese translation - [x] method/channel factories are used interchangeably but they should be introduced as methods and then only called factories. - [x] Sometimes, a process is called a task, and a task is called a process. A task is a process instance and should be always clear to the reader. - [x] Sometimes code within the string block is indented, and sometimes it's not. - [x] Avoid implicit syntax such as not mentioning `script:` before `"""` - [x] Use `debug true` instead of `echo true` - [x] process names should be uppercase - [x] Keep the basic training consistent in terms of Nextflow version (remove `export NXF_VER=20.10.0` in Tower section) - [x] Add space around values separated by `,` ([1, 2] instead of [1,2]) or regular operators (`4 % 2` or `4 + 2` instead of `4%2` or `4+2`). Exceptions here are bash that requires no spaces such as in `export name=example` or globbing in `{1,2}`.
non_defect
improve indentation code style and pipeline workflow usage consistency some code blocks are indented with spaces others with make new lines between blocks in processes consistent sometimes there are spaces around parenthesis some other times there isn t any space or a mix of it sometimes option value some other times option value header true flat true sometimes operators are separated by a line some other times they aren t the words workflow and pipeline are used interchangeably but if we mean the same thing we should stick one if we don t like in the tower documentation then we should stick to the proper name in every situation image name and image name tag are both called tags which can be confusing ncbi instructions when translated should have the words in english the way they are in the ncbi portal they re currently partially translated into the portuguese translation method channel factories are used interchangeably but they should be introduced as methods and then only called factories sometimes a process is called a task and a task is called a process a task is a process instance and should be always clear to the reader sometimes code within the string block is indented and sometimes it s not avoid implicit syntax such as not mentioning script before use debug true instead of echo true process names should be uppercase keep the basic training consistent in terms of nextflow version remove export nxf ver in tower section add space around values separated by instead of or regular operators or instead of or exceptions here are bash that requires no spaces such as in export name example or globbing in
0
283,959
8,728,661,150
IssuesEvent
2018-12-10 17:59:56
SenitCorp/bugtracker
https://api.github.com/repos/SenitCorp/bugtracker
closed
mobile app sync taking forever users not getting transaction data ios
High Priority bug iOS
So having issues with sync and iOS client 1. Sent $20 usd to Robert 2. Transaction Completed 3. Activity feed and balance were instantly update when i was returned (didnt see any rabits) PERFECT then IMMEDIATELY 1. I Sent $20 to Sam 2. Transaction completed 3. balance was not updated and transaction was not in my activity feed when i was returned 4. waited 5 seconds nothing 5. manually synced the activity feed by dragging down 6. hung syncing forever - like 3 seocnds 7. after 3 seconds my balance changed 8. after 5 seconds sams transaction turned up so sync took over 5+ seconds which is a eternity not sure if this is because i did 2 transaction back to back ?
1.0
mobile app sync taking forever users not getting transaction data ios - So having issues with sync and iOS client 1. Sent $20 usd to Robert 2. Transaction Completed 3. Activity feed and balance were instantly update when i was returned (didnt see any rabits) PERFECT then IMMEDIATELY 1. I Sent $20 to Sam 2. Transaction completed 3. balance was not updated and transaction was not in my activity feed when i was returned 4. waited 5 seconds nothing 5. manually synced the activity feed by dragging down 6. hung syncing forever - like 3 seocnds 7. after 3 seconds my balance changed 8. after 5 seconds sams transaction turned up so sync took over 5+ seconds which is a eternity not sure if this is because i did 2 transaction back to back ?
non_defect
mobile app sync taking forever users not getting transaction data ios so having issues with sync and ios client sent usd to robert transaction completed activity feed and balance were instantly update when i was returned didnt see any rabits perfect then immediately i sent to sam transaction completed balance was not updated and transaction was not in my activity feed when i was returned waited seconds nothing manually synced the activity feed by dragging down hung syncing forever like seocnds after seconds my balance changed after seconds sams transaction turned up so sync took over seconds which is a eternity not sure if this is because i did transaction back to back
0
51,434
13,207,472,658
IssuesEvent
2020-08-14 23:14:12
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
ttrigger fails to build w/ -Wextra (Trac #408)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/408">https://code.icecube.wisc.edu/projects/icecube/ticket/408</a>, reported by negaand owned by dunkman</em></summary> <p> ```json { "status": "closed", "changetime": "2014-09-30T18:11:26", "_ts": "1412100686268639", "description": "You probably want to take a look at logic related to these errors\n\n{{{\n[ 83%] Building C object ttrigger/CMakeFiles/ttrigger.dir/private/cluster/cluster.c.o\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c: In function \u2018cluster_checked_insert\u2019:\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:158:2: error: comparison of unsigned expression >= 0 is always true [-Werror=type-limits]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c: In function \u2018cluster_partition_destroy\u2019:\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:308:16: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c: In function \u2018cluster_partition_build\u2019:\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:344:16: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:362:17: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:374:18: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:399:5: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:460:16: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\ncc1: all warnings being treated as errors\n\nmake[2]: *** 
[ttrigger/CMakeFiles/ttrigger.dir/private/cluster/cluster.c.o] Error 1\nmake[1]: *** [ttrigger/CMakeFiles/ttrigger.dir/all] Error 2\nmake: *** [all] Error 2\n[maru:~/i3/icerec/build] \n}}}", "reporter": "nega", "cc": "jvs", "resolution": "fixed", "time": "2012-05-30T19:26:03", "component": "combo reconstruction", "summary": "ttrigger fails to build w/ -Wextra", "priority": "normal", "keywords": "ttrigger", "milestone": "", "owner": "dunkman", "type": "defect" } ``` </p> </details>
1.0
ttrigger fails to build w/ -Wextra (Trac #408) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/408">https://code.icecube.wisc.edu/projects/icecube/ticket/408</a>, reported by negaand owned by dunkman</em></summary> <p> ```json { "status": "closed", "changetime": "2014-09-30T18:11:26", "_ts": "1412100686268639", "description": "You probably want to take a look at logic related to these errors\n\n{{{\n[ 83%] Building C object ttrigger/CMakeFiles/ttrigger.dir/private/cluster/cluster.c.o\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c: In function \u2018cluster_checked_insert\u2019:\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:158:2: error: comparison of unsigned expression >= 0 is always true [-Werror=type-limits]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c: In function \u2018cluster_partition_destroy\u2019:\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:308:16: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c: In function \u2018cluster_partition_build\u2019:\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:344:16: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:362:17: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:374:18: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:399:5: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\n/home/nega/i3/icerec/src/ttrigger/private/cluster/cluster.c:460:16: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]\ncc1: all warnings being 
treated as errors\n\nmake[2]: *** [ttrigger/CMakeFiles/ttrigger.dir/private/cluster/cluster.c.o] Error 1\nmake[1]: *** [ttrigger/CMakeFiles/ttrigger.dir/all] Error 2\nmake: *** [all] Error 2\n[maru:~/i3/icerec/build] \n}}}", "reporter": "nega", "cc": "jvs", "resolution": "fixed", "time": "2012-05-30T19:26:03", "component": "combo reconstruction", "summary": "ttrigger fails to build w/ -Wextra", "priority": "normal", "keywords": "ttrigger", "milestone": "", "owner": "dunkman", "type": "defect" } ``` </p> </details>
defect
ttrigger fails to build w wextra trac migrated from json status closed changetime ts description you probably want to take a look at logic related to these errors n n n building c object ttrigger cmakefiles ttrigger dir private cluster cluster c o n home nega icerec src ttrigger private cluster cluster c in function checked insert n home nega icerec src ttrigger private cluster cluster c error comparison of unsigned expression is always true n home nega icerec src ttrigger private cluster cluster c in function partition destroy n home nega icerec src ttrigger private cluster cluster c error comparison between signed and unsigned integer expressions n home nega icerec src ttrigger private cluster cluster c in function partition build n home nega icerec src ttrigger private cluster cluster c error comparison between signed and unsigned integer expressions n home nega icerec src ttrigger private cluster cluster c error comparison between signed and unsigned integer expressions n home nega icerec src ttrigger private cluster cluster c error comparison between signed and unsigned integer expressions n home nega icerec src ttrigger private cluster cluster c error comparison between signed and unsigned integer expressions n home nega icerec src ttrigger private cluster cluster c error comparison between signed and unsigned integer expressions all warnings being treated as errors n nmake error nmake error nmake error n n reporter nega cc jvs resolution fixed time component combo reconstruction summary ttrigger fails to build w wextra priority normal keywords ttrigger milestone owner dunkman type defect
1
64,030
18,144,768,208
IssuesEvent
2021-09-25 08:10:54
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
[Important] Built-in matrix and slack integration no longer work
T-Defect
### Steps to reproduce **About the issue** Recently, I tried to integrate Matrix with Slack using Element (Matrix) built-in integration but when going to configure auth set up on Slack then page doesn't work. [Matrix] Go to one room on my matrix space https://matrix.to/#/#wplk:matrix.org [Matrix] Go to room info and add "widgets, bridges and bots" (screen1) https://i.ibb.co/3kRV796/screen1.png [Matrix] pick slack and add bridge (screen 2, 3) https://i.ibb.co/QC6gYV7/screen2.png https://i.ibb.co/tmtLfJN/screen3.png [Matrix] add slack (screen 4, 5) https://i.ibb.co/yh78PTP/screen4.png https://i.ibb.co/GVFNGbX/screen5.png https://wplk.slack.com/ [Slack] allow permissions (screen 6) https://i.ibb.co/5TxbCVV/screen6.png [Slack] this page isn't working (screen 7) https://i.ibb.co/rpLtwNP/screen7.png **Slack support response** ![image](https://user-images.githubusercontent.com/73447325/134669182-11dbc462-65e0-46a8-ba0b-25e9f111c50f.png) **Discussion on EMS room** ![image](https://user-images.githubusercontent.com/73447325/134669438-20d2147a-4054-48b5-bf0a-6cb5e9ddf305.png) ![image](https://user-images.githubusercontent.com/73447325/134669486-149e841e-6148-4a5e-82f2-c78d23ba3379.png) ### What happened? ### What did you expect? Work ### What happened? No longer work ### Operating system Ubuntu 21.04 ### Browser information Chromium ### URL for webapp _No response_ ### Homeserver matrix.org ### Have you submitted a rageshake? No
1.0
[Important] Built-in matrix and slack integration no longer work - ### Steps to reproduce **About the issue** Recently, I tried to integrate Matrix with Slack using Element (Matrix) built-in integration but when going to configure auth set up on Slack then page doesn't work. [Matrix] Go to one room on my matrix space https://matrix.to/#/#wplk:matrix.org [Matrix] Go to room info and add "widgets, bridges and bots" (screen1) https://i.ibb.co/3kRV796/screen1.png [Matrix] pick slack and add bridge (screen 2, 3) https://i.ibb.co/QC6gYV7/screen2.png https://i.ibb.co/tmtLfJN/screen3.png [Matrix] add slack (screen 4, 5) https://i.ibb.co/yh78PTP/screen4.png https://i.ibb.co/GVFNGbX/screen5.png https://wplk.slack.com/ [Slack] allow permissions (screen 6) https://i.ibb.co/5TxbCVV/screen6.png [Slack] this page isn't working (screen 7) https://i.ibb.co/rpLtwNP/screen7.png **Slack support response** ![image](https://user-images.githubusercontent.com/73447325/134669182-11dbc462-65e0-46a8-ba0b-25e9f111c50f.png) **Discussion on EMS room** ![image](https://user-images.githubusercontent.com/73447325/134669438-20d2147a-4054-48b5-bf0a-6cb5e9ddf305.png) ![image](https://user-images.githubusercontent.com/73447325/134669486-149e841e-6148-4a5e-82f2-c78d23ba3379.png) ### What happened? ### What did you expect? Work ### What happened? No longer work ### Operating system Ubuntu 21.04 ### Browser information Chromium ### URL for webapp _No response_ ### Homeserver matrix.org ### Have you submitted a rageshake? No
defect
built in matrix and slack integration no longer work steps to reproduce about the issue recently i tried to integrate matrix with slack using element matrix built in integration but when going to configure auth set up on slack then page doesn t work go to one room on my matrix space go to room info and add widgets bridges and bots pick slack and add bridge screen add slack screen allow permissions screen this page isn t working screen slack support response discussion on ems room what happened what did you expect work what happened no longer work operating system ubuntu browser information chromium url for webapp no response homeserver matrix org have you submitted a rageshake no
1
11,611
3,211,710,380
IssuesEvent
2015-10-06 12:24:29
edeposit/edeposit
https://api.github.com/repos/edeposit/edeposit
closed
zkontrolovat dva pripady pro aleph link export
otestovat TODO
1.pripad - "archive only" epublikace v Alephu 2.pripad - "open access" epublikace v Alephu Tj. - v aleph.nkp.cz dohledat epublikaci. - kliknout na linku na zpristupneni - ziskat odpovidajici odpoved
1.0
zkontrolovat dva pripady pro aleph link export - 1.pripad - "archive only" epublikace v Alephu 2.pripad - "open access" epublikace v Alephu Tj. - v aleph.nkp.cz dohledat epublikaci. - kliknout na linku na zpristupneni - ziskat odpovidajici odpoved
non_defect
zkontrolovat dva pripady pro aleph link export pripad archive only epublikace v alephu pripad open access epublikace v alephu tj v aleph nkp cz dohledat epublikaci kliknout na linku na zpristupneni ziskat odpovidajici odpoved
0
64,878
18,951,449,360
IssuesEvent
2021-11-18 15:33:17
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
Jitsi crash on Android 7
T-Defect Z-Crash A-Jitsi S-Major O-Occasional
Thread: OkHttp Dispatcher, Exception: java.lang.NoClassDefFoundError: Failed resolution of: Landroid/graphics/ColorSpace; at com.facebook.imagepipeline.image.EncodedImage.parseMetaData(EncodedImage.java:34) See the rageshakes for more details
1.0
Jitsi crash on Android 7 - Thread: OkHttp Dispatcher, Exception: java.lang.NoClassDefFoundError: Failed resolution of: Landroid/graphics/ColorSpace; at com.facebook.imagepipeline.image.EncodedImage.parseMetaData(EncodedImage.java:34) See the rageshakes for more details
defect
jitsi crash on android thread okhttp dispatcher exception java lang noclassdeffounderror failed resolution of landroid graphics colorspace at com facebook imagepipeline image encodedimage parsemetadata encodedimage java see the rageshakes for more details
1
66,763
20,622,828,791
IssuesEvent
2022-03-07 19:11:00
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Geometry class is used by open-source edition, but docs say it is only supported in commercial editions
T: Defect
### Expected behavior The `Geometry` class shouldn't be documented as only available in commercial editions if it is used in the open-source edition. OR Generating code from a database that uses the PostgreSQL `GEOMETRY` type using the jOOQ 3.16 open-source edition shouldn't use the new `Geometry` class. ### Actual behavior `GEOMETRY` columns are bound to the jOOQ `Geometry` class in Kotlin code that's generated by the open-source edition, despite the docs ([example](https://www.jooq.org/javadoc/3.16.5-SNAPSHOT/org.jooq/org/jooq/Geometry.html)) saying the class is only supported by the commercial editions. ### Steps to reproduce the problem The problem may just be a documentation issue, but it doesn't require any custom configuration of the code generator; any table with a GEOMETRY column like ``` CREATE TABLE demo (location GEOMETRY); ``` will result in generated code that references the jOOQ `Geometry` class, even with the open-source edition of jOOQ 3.16: ``` val LOCATION: TableField<DemoRecord, Geometry?> = createField(DSL.name("location"), SQLDataType.GEOMETRY, this, "") ``` ### Versions - jOOQ: 3.16.4 - Java: `openjdk version "17.0.2" 2022-01-18` - Database (include vendor): PostgreSQL 14.1, PostGIS 3.1.4 - OS: OS X 12.2.1 - JDBC Driver (include name if inofficial driver): PostgreSQL 42.3.3
1.0
Geometry class is used by open-source edition, but docs say it is only supported in commercial editions - ### Expected behavior The `Geometry` class shouldn't be documented as only available in commercial editions if it is used in the open-source edition. OR Generating code from a database that uses the PostgreSQL `GEOMETRY` type using the jOOQ 3.16 open-source edition shouldn't use the new `Geometry` class. ### Actual behavior `GEOMETRY` columns are bound to the jOOQ `Geometry` class in Kotlin code that's generated by the open-source edition, despite the docs ([example](https://www.jooq.org/javadoc/3.16.5-SNAPSHOT/org.jooq/org/jooq/Geometry.html)) saying the class is only supported by the commercial editions. ### Steps to reproduce the problem The problem may just be a documentation issue, but it doesn't require any custom configuration of the code generator; any table with a GEOMETRY column like ``` CREATE TABLE demo (location GEOMETRY); ``` will result in generated code that references the jOOQ `Geometry` class, even with the open-source edition of jOOQ 3.16: ``` val LOCATION: TableField<DemoRecord, Geometry?> = createField(DSL.name("location"), SQLDataType.GEOMETRY, this, "") ``` ### Versions - jOOQ: 3.16.4 - Java: `openjdk version "17.0.2" 2022-01-18` - Database (include vendor): PostgreSQL 14.1, PostGIS 3.1.4 - OS: OS X 12.2.1 - JDBC Driver (include name if inofficial driver): PostgreSQL 42.3.3
defect
geometry class is used by open source edition but docs say it is only supported in commercial editions expected behavior the geometry class shouldn t be documented as only available in commercial editions if it is used in the open source edition or generating code from a database that uses the postgresql geometry type using the jooq open source edition shouldn t use the new geometry class actual behavior geometry columns are bound to the jooq geometry class in kotlin code that s generated by the open source edition despite the docs saying the class is only supported by the commercial editions steps to reproduce the problem the problem may just be a documentation issue but it doesn t require any custom configuration of the code generator any table with a geometry column like create table demo location geometry will result in generated code that references the jooq geometry class even with the open source edition of jooq val location tablefield createfield dsl name location sqldatatype geometry this versions jooq java openjdk version database include vendor postgresql postgis os os x jdbc driver include name if inofficial driver postgresql
1
138,675
12,826,365,666
IssuesEvent
2020-07-06 16:25:36
465b/nemf
https://api.github.com/repos/465b/nemf
closed
doc: inconsistencies
documentation
I mixed some concept in the description, i.e. * model_class & load_model * model_path & no path * loading reference data into introduction
1.0
doc: inconsistencies - I mixed some concepts in the description, i.e.
* model_class & load_model
* model_path & no path
* loading reference data into introduction
non_defect
doc inconsistencies i mixed some concept in the description i e model class load model model path no path loading reference data into introduction
0
69,714
3,313,818,855
IssuesEvent
2015-11-06 00:18:50
bhwarren/PhotoCal
https://api.github.com/repos/bhwarren/PhotoCal
closed
take picture, and maybe allow calendar upload later
app side potential feature top priority
get whether we cancelled on the Add to Calendar intent, and if so, save for later w/ popup
1.0
take picture, and maybe allow calendar upload later - get whether we cancelled on the Add to Calendar intent, and if so, save for later w/ popup
non_defect
take picture and maybe allow calendar upload later get whether we cancelled on the add to calendar intent and if so save for later w popup
0
301,926
9,247,276,226
IssuesEvent
2019-03-15 00:00:05
gitchecking/alerts
https://api.github.com/repos/gitchecking/alerts
closed
Gitkanban:inbox-stale-issue-6h-gitchecking/third#3
gitchecking/third-repo inbox-queue inbox-stale-issue-6h-constraint low-priority
### Read-Me: Issue-url - https://github.com/gitchecking/third/issues/3 <table border="1"><tr><th>priority</th><td>low</td></tr><tr><th>issue_no</th><td>3</td></tr><tr><th>issue_url</th><td>https://api.github.com/repos/gitchecking/third/issues/3</td></tr><tr><th>issue_html_url</th><td>https://github.com/gitchecking/third/issues/3</td></tr><tr><th>issue_title</th><td>test3</td></tr><tr><th>constraint_name</th><td>inbox-stale-issue-6h</td></tr><tr><th>queue_name</th><td>inbox</td></tr><tr><th>person_name</th><td>rsrp94</td></tr><tr><th>repo_name</th><td>gitchecking/third</td></tr><tr><th>issue_creation_time</th><td>2019-02-26T18:36:27Z</td></tr><tr><th>repo_group_name</th><td>None</td></tr></table>
1.0
Gitkanban:inbox-stale-issue-6h-gitchecking/third#3 - ### Read-Me: Issue-url - https://github.com/gitchecking/third/issues/3 <table border="1"><tr><th>priority</th><td>low</td></tr><tr><th>issue_no</th><td>3</td></tr><tr><th>issue_url</th><td>https://api.github.com/repos/gitchecking/third/issues/3</td></tr><tr><th>issue_html_url</th><td>https://github.com/gitchecking/third/issues/3</td></tr><tr><th>issue_title</th><td>test3</td></tr><tr><th>constraint_name</th><td>inbox-stale-issue-6h</td></tr><tr><th>queue_name</th><td>inbox</td></tr><tr><th>person_name</th><td>rsrp94</td></tr><tr><th>repo_name</th><td>gitchecking/third</td></tr><tr><th>issue_creation_time</th><td>2019-02-26T18:36:27Z</td></tr><tr><th>repo_group_name</th><td>None</td></tr></table>
non_defect
gitkanban inbox stale issue gitchecking third read me issue url priority low issue no issue url
0
32,824
13,934,178,852
IssuesEvent
2020-10-22 09:40:52
angular/angular
https://api.github.com/repos/angular/angular
closed
service worker won't register after .unregister()
comp: service-worker
<!-- IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION YOUR ISSUE MIGHT BE CLOSED WITHOUT INVESTIGATING --> ### Bug Report or Feature Request (mark with an `x`) ``` - [ x ] bug report -> please search issues before submitting - [ ] feature request ``` ### Area ``` - [ x? ] devkit - [ ] schematics ``` ### Versions <!-- Output from: `node --version` and `npm --version`. Windows (7/8/10). Linux (incl. distribution). macOS (El Capitan? Sierra? High Sierra?) --> node --version v8.11.2 npm -v 6.1.0 ### Repro steps <!-- Simple steps to reproduce this bug. Please include: commands run, packages added, related code changes. A link to a sample repo would help too. --> on logout we do: ```` // unregister service worker navigator.serviceWorker.getRegistrations().then(function (registrations) { for (const registration of registrations) { registration.unregister(); } }); ```` Then we reload the site.. when user loggs back in there is no service worker registered ### The log given by the failure <!-- Normally this include a stack trace and some more information. --> ### Desired functionality <!-- What would like to see implemented? What is the usecase? --> Service worker should register ### Mention any other details that might be useful <!-- Please include a link to the repo if this is related to an OSS project. --> If we manually do this in the console: ```` navigator.serviceWorker.register('/ngsw-worker.js') ```` then things start working again
1.0
service worker won't register after .unregister() - <!-- IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION YOUR ISSUE MIGHT BE CLOSED WITHOUT INVESTIGATING --> ### Bug Report or Feature Request (mark with an `x`) ``` - [ x ] bug report -> please search issues before submitting - [ ] feature request ``` ### Area ``` - [ x? ] devkit - [ ] schematics ``` ### Versions <!-- Output from: `node --version` and `npm --version`. Windows (7/8/10). Linux (incl. distribution). macOS (El Capitan? Sierra? High Sierra?) --> node --version v8.11.2 npm -v 6.1.0 ### Repro steps <!-- Simple steps to reproduce this bug. Please include: commands run, packages added, related code changes. A link to a sample repo would help too. --> on logout we do: ```` // unregister service worker navigator.serviceWorker.getRegistrations().then(function (registrations) { for (const registration of registrations) { registration.unregister(); } }); ```` Then we reload the site.. when user loggs back in there is no service worker registered ### The log given by the failure <!-- Normally this include a stack trace and some more information. --> ### Desired functionality <!-- What would like to see implemented? What is the usecase? --> Service worker should register ### Mention any other details that might be useful <!-- Please include a link to the repo if this is related to an OSS project. --> If we manually do this in the console: ```` navigator.serviceWorker.register('/ngsw-worker.js') ```` then things start working again
non_defect
service worker won t register after unregister if you don t fill out the following information your issue might be closed without investigating bug report or feature request mark with an x bug report please search issues before submitting feature request area devkit schematics versions output from node version and npm version windows linux incl distribution macos el capitan sierra high sierra node version npm v repro steps simple steps to reproduce this bug please include commands run packages added related code changes a link to a sample repo would help too on logout we do unregister service worker navigator serviceworker getregistrations then function registrations for const registration of registrations registration unregister then we reload the site when user loggs back in there is no service worker registered the log given by the failure desired functionality what would like to see implemented what is the usecase service worker should register mention any other details that might be useful if we manually do this in the console navigator serviceworker register ngsw worker js then things start working again
0
37,880
6,650,790,066
IssuesEvent
2017-09-28 17:37:05
librosa/librosa
https://api.github.com/repos/librosa/librosa
closed
librosa.dtw with subseq=True is not symmetric w.r.t. its inputs
discussion documentation question
cc @stefan-balke My expectation for subsequence matching is that implies that either `X` can be matched to a subsequence of `Y` or `Y` can be matched to a subsequence of `X`. `librosa.dtw` doesn't exhibit this behavior: ```Python In [1]: import librosa In [2]: import numpy as np In [3]: X = np.array([10., 10., 0., 1., 2., 3., 10., 10.]).reshape([1, -1]) In [4]: Y = np.arange(4).astype(float).reshape([1, -1]) In [5]: librosa.dtw(X, Y, subseq=True) Out[5]: (array([[ 10., 9., 8., 7.], [ 20., 18., 16., 14.], [ 20., 19., 18., 17.], [ 21., 19., 19., 19.], [ 23., 20., 19., 20.], [ 26., 22., 20., 19.], [ 36., 31., 28., 26.], [ 46., 40., 36., 33.]]), array([[7, 3], [6, 3], [5, 3], [4, 2], [3, 1], [2, 1], [1, 1], [0, 1]])) In [6]: librosa.dtw(Y, X, subseq=True) Out[6]: (array([[ 10., 10., 0., 1., 2., 3., 10., 10.], [ 19., 19., 1., 0., 1., 3., 12., 19.], [ 27., 27., 3., 1., 0., 1., 9., 17.], [ 34., 34., 6., 3., 1., 0., 7., 14.]]), array([[3, 5], [2, 4], [1, 3], [0, 2]])) ``` This is in contrast to `djitw`'s behavior (my expected behavior, obviously ;) ```Python In [7]: import djitw In [8]: import scipy In [9]: djitw.dtw(scipy.spatial.distance.cdist(X.T, Y.T)) Out[9]: (array([2, 3, 4, 5]), array([0, 1, 2, 3]), 0.0) In [10]: djitw.dtw(scipy.spatial.distance.cdist(Y.T, X.T)) Out[10]: (array([0, 1, 2, 3]), array([2, 3, 4, 5]), 0.0) ``` What is the motivation for making it non-symmetric? We should at least document it more specifically, saying that `subseq=True` allows for a subsequence of `Y` to be matched to the entirety of `X` but not vice-versa.
1.0
librosa.dtw with subseq=True is not symmetric w.r.t. its inputs - cc @stefan-balke My expectation for subsequence matching is that implies that either `X` can be matched to a subsequence of `Y` or `Y` can be matched to a subsequence of `X`. `librosa.dtw` doesn't exhibit this behavior: ```Python In [1]: import librosa In [2]: import numpy as np In [3]: X = np.array([10., 10., 0., 1., 2., 3., 10., 10.]).reshape([1, -1]) In [4]: Y = np.arange(4).astype(float).reshape([1, -1]) In [5]: librosa.dtw(X, Y, subseq=True) Out[5]: (array([[ 10., 9., 8., 7.], [ 20., 18., 16., 14.], [ 20., 19., 18., 17.], [ 21., 19., 19., 19.], [ 23., 20., 19., 20.], [ 26., 22., 20., 19.], [ 36., 31., 28., 26.], [ 46., 40., 36., 33.]]), array([[7, 3], [6, 3], [5, 3], [4, 2], [3, 1], [2, 1], [1, 1], [0, 1]])) In [6]: librosa.dtw(Y, X, subseq=True) Out[6]: (array([[ 10., 10., 0., 1., 2., 3., 10., 10.], [ 19., 19., 1., 0., 1., 3., 12., 19.], [ 27., 27., 3., 1., 0., 1., 9., 17.], [ 34., 34., 6., 3., 1., 0., 7., 14.]]), array([[3, 5], [2, 4], [1, 3], [0, 2]])) ``` This is in contrast to `djitw`'s behavior (my expected behavior, obviously ;) ```Python In [7]: import djitw In [8]: import scipy In [9]: djitw.dtw(scipy.spatial.distance.cdist(X.T, Y.T)) Out[9]: (array([2, 3, 4, 5]), array([0, 1, 2, 3]), 0.0) In [10]: djitw.dtw(scipy.spatial.distance.cdist(Y.T, X.T)) Out[10]: (array([0, 1, 2, 3]), array([2, 3, 4, 5]), 0.0) ``` What is the motivation for making it non-symmetric? We should at least document it more specifically, saying that `subseq=True` allows for a subsequence of `Y` to be matched to the entirety of `X` but not vice-versa.
non_defect
librosa dtw with subseq true is not symmetric w r t its inputs cc stefan balke my expectation for subsequence matching is that implies that either x can be matched to a subsequence of y or y can be matched to a subsequence of x librosa dtw doesn t exhibit this behavior python in import librosa in import numpy as np in x np array reshape in y np arange astype float reshape in librosa dtw x y subseq true out array array in librosa dtw y x subseq true out array array this is in contrast to djitw s behavior my expected behavior obviously python in import djitw in import scipy in djitw dtw scipy spatial distance cdist x t y t out array array in djitw dtw scipy spatial distance cdist y t x t out array array what is the motivation for making it non symmetric we should at least document it more specifically saying that subseq true allows for a subsequence of y to be matched to the entirety of x but not vice versa
0
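The asymmetry reported in the librosa record above follows directly from how subsequence DTW is usually defined: the first argument is treated as a query that must be matched in full, while the start and end of the alignment are free only along the second argument. The following is a minimal pure-Python sketch of that formulation (the helper name `dtw_subseq` is hypothetical, not librosa's API); with the arrays from the issue it reproduces the corner values of the accumulated-cost matrices shown there.

```python
def dtw_subseq(x, y):
    """Subsequence DTW: align ALL of x against some subsequence of y.

    The alignment may start and end anywhere in y (free boundaries on
    the y axis only), so the function is deliberately NOT symmetric in
    its arguments -- mirroring the behavior discussed in the issue.
    Returns the minimal accumulated absolute-difference cost.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j] = cost of aligning x[:i+1] with a subsequence of y ending at j
    D = [[INF] * m for _ in range(n)]
    for j in range(m):
        D[0][j] = abs(x[0] - y[j])          # free start anywhere in y
    for i in range(1, n):
        for j in range(m):
            best = D[i - 1][j]              # advance in x only
            if j > 0:
                best = min(best, D[i][j - 1], D[i - 1][j - 1])
            D[i][j] = abs(x[i] - y[j]) + best
    return min(D[n - 1])                    # free end anywhere in y

X = [10.0, 10.0, 0.0, 1.0, 2.0, 3.0, 10.0, 10.0]
Y = [0.0, 1.0, 2.0, 3.0]
# Y occurs verbatim inside X, but only the (Y, X) direction can exploit that:
print(dtw_subseq(Y, X))  # -> 0.0  (all of Y aligned to X[2:6])
print(dtw_subseq(X, Y))  # -> 33.0 (all of X forced onto the short Y)
```

The 33.0 is exactly the minimum of the last row `[46., 40., 36., 33.]` of the accumulated-cost matrix in the issue's `Out[5]`, and the 0.0 matches the zero reachable in `Out[6]` — which is why documenting *which* argument `subseq=True` treats as the full query matters.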
5,996
13,465,032,937
IssuesEvent
2020-09-09 20:10:48
Azure/azure-sdk
https://api.github.com/repos/Azure/azure-sdk
opened
Board Review: Text Analytics - review before 5.1.0 GA
architecture board-review
Thank you for starting the process for approval of the client library for your Azure service. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure. ** Before submitting, ensure you adjust the title of the issue appropriately ** To ensure consistency, all Tier-1 languages (C#, TypeScript, Java, Python) will generally be reviewed together. In expansive libraries, we will pair dynamic languages (Python, TypeScript) together, and strongly typed languages (C#, Java) together in separate meetings. ## The Basics * Service team responsible for the client library: Azure SDK Team * Link to documentation describing the service: * Contact email (if service team, provide PM and Dev Lead): @mayurid * We're calling this meeting to go over the features we've added since our previous GA * We would like to schedule the meeting for end of September / early October ## About this client library * Name of the client library: Text Analytics * Languages for this review: Python, Java, .NET, JS * Link to the service REST APIs: https://github.com/Azure/azure-rest-api-specs/blob/master/specification/cognitiveservices/data-plane/TextAnalytics/preview/v3.1-preview.2/TextAnalytics.json ## Artifacts required (per language) We use an API review tool ([apiview](https://apiview.azurewebsites.net)) to support .NET and Java API reviews. For Python and TypeScript, use the API extractor tool, then submit the output as a Draft PR to the relevant repository (azure-sdk-for-python or azure-sdk-for-js). ### .NET * Upload DLL to [apiview](https://apiview.azurewebsites.net). Link: * Link to samples for champion scenarios: ### Java * Upload JAR to [apiview](https://apiview.azurewebsites.net). Link: * Link to samples for champion scenarios: ### Python * Upload the api as a Draft PR. Link to PR: * Link to samples for champion scenarios: ### TypeScript * Upload output of api-extractor as a Draft PR. Link to PR: * Link to samples for champion scenarios: ## Champion Scenarios A champion scenario is a use case that the consumer of the client library is commonly expected to perform. Champion scenarios are used to ensure the developer experience is exemplary for the common cases. You need to show the entire code sample (including error handling, as an example) for the champion scenarios. * Champion Scenario 1: * Describe the champion scenario * Estimate the percentage of developers using the service who would use the champion scenario * Link to the code sample _ Repeat for each champion scenario _ Examples of good scenarios are technology agnostic (i.e. the customer can do the same thing in multiple ways), and are expected to be used by > 20% of users: * Upload a file * Update firmware on the device * Recognize faces in an uploaded image Examples of bad scenarios: * Create a client (it's part of a scenario, and we'll see it often enough in true champion scenarios) * Send a batch of events (again, part of the scenario) * Create a page blob (it's not used by enough of the user base) ## Agenda for the review A board review is generally split into two parts, with additional meetings as required Part 1 - Introducing the board to the service: - Review of the service (no more than 10 minutes). - Review of the champion scenarios. - Get feedback on the API patterns used in the champion scenarios. After part 1, you may schedule additional meetings with architects to refine the API and work on implementation. Part 2 - the "GA" meeting - Scheduled at least one week after the APIs have been uploaded for review. - Will go over controversial feedback from the line-by-line API review. - Exit meeting with concrete changes necessary to meet quality bar. ## Thank you for your submission
1.0
Board Review: Text Analytics - review before 5.1.0 GA - Thank you for starting the process for approval of the client library for your Azure service. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure. ** Before submitting, ensure you adjust the title of the issue appropriately ** To ensure consistency, all Tier-1 languages (C#, TypeScript, Java, Python) will generally be reviewed together. In expansive libraries, we will pair dynamic languages (Python, TypeScript) together, and strongly typed languages (C#, Java) together in separate meetings. ## The Basics * Service team responsible for the client library: Azure SDK Team * Link to documentation describing the service: * Contact email (if service team, provide PM and Dev Lead): @mayurid * We're calling this meeting to go over the features we've added since our previous GA * We would like to schedule the meeting for end of September / early October ## About this client library * Name of the client library: Text Analytics * Languages for this review: Python, Java, .NET, JS * Link to the service REST APIs: https://github.com/Azure/azure-rest-api-specs/blob/master/specification/cognitiveservices/data-plane/TextAnalytics/preview/v3.1-preview.2/TextAnalytics.json ## Artifacts required (per language) We use an API review tool ([apiview](https://apiview.azurewebsites.net)) to support .NET and Java API reviews. For Python and TypeScript, use the API extractor tool, then submit the output as a Draft PR to the relevant repository (azure-sdk-for-python or azure-sdk-for-js). ### .NET * Upload DLL to [apiview](https://apiview.azurewebsites.net). Link: * Link to samples for champion scenarios: ### Java * Upload JAR to [apiview](https://apiview.azurewebsites.net). Link: * Link to samples for champion scenarios: ### Python * Upload the api as a Draft PR. Link to PR: * Link to samples for champion scenarios: ### TypeScript * Upload output of api-extractor as a Draft PR. Link to PR: * Link to samples for champion scenarios: ## Champion Scenarios A champion scenario is a use case that the consumer of the client library is commonly expected to perform. Champion scenarios are used to ensure the developer experience is exemplary for the common cases. You need to show the entire code sample (including error handling, as an example) for the champion scenarios. * Champion Scenario 1: * Describe the champion scenario * Estimate the percentage of developers using the service who would use the champion scenario * Link to the code sample _ Repeat for each champion scenario _ Examples of good scenarios are technology agnostic (i.e. the customer can do the same thing in multiple ways), and are expected to be used by > 20% of users: * Upload a file * Update firmware on the device * Recognize faces in an uploaded image Examples of bad scenarios: * Create a client (it's part of a scenario, and we'll see it often enough in true champion scenarios) * Send a batch of events (again, part of the scenario) * Create a page blob (it's not used by enough of the user base) ## Agenda for the review A board review is generally split into two parts, with additional meetings as required Part 1 - Introducing the board to the service: - Review of the service (no more than 10 minutes). - Review of the champion scenarios. - Get feedback on the API patterns used in the champion scenarios. After part 1, you may schedule additional meetings with architects to refine the API and work on implementation. Part 2 - the "GA" meeting - Scheduled at least one week after the APIs have been uploaded for review. - Will go over controversial feedback from the line-by-line API review. - Exit meeting with concrete changes necessary to meet quality bar. ## Thank you for your submission
non_defect
board review text analytics review before ga thank you for starting the process for approval of the client library for your azure service thorough review of your client library ensures that your apis are consistent with the guidelines and the consumers of your client library have a consistently good experience when using azure before submitting ensure you adjust the title of the issue appropriately to ensure consistency all tier languages c typescript java python will generally be reviewed together in expansive libraries we will pair dynamic languages python typescript together and strongly typed languages c java together in separate meetings the basics service team responsible for the client library azure sdk team link to documentation describing the service contact email if service team provide pm and dev lead mayurid we re calling this meeting to go over the features we ve added since our previous ga we would like to schedule the meeting for end of september early october about this client library name of the client library text analytics languages for this review python java net js link to the service rest apis artifacts required per language we use an api review tool to support net and java api reviews for python and typescript use the api extractor tool then submit the output as a draft pr to the relevant repository azure sdk for python or azure sdk for js net upload dll to link link to samples for champion scenarios java upload jar to link link to samples for champion scenarios python upload the api as a draft pr link to pr link to samples for champion scenarios typescript upload output of api extractor as a draft pr link to pr link to samples for champion scenarios champion scenarios a champion scenario is a use case that the consumer of the client library is commonly expected to perform champion scenarios are used to ensure the developer experience is exemplary for the common cases you need to show the entire code sample including error handling as an example for the champion scenarios champion scenario describe the champion scenario estimate the percentage of developers using the service who would use the champion scenario link to the code sample repeat for each champion scenario examples of good scenarios are technology agnostic i e the customer can do the same thing in multiple ways and are expected to be used by of users upload a file update firmware on the device recognize faces in an uploaded image examples of bad scenarios create a client it s part of a scenario and we ll see it often enough in true champion scenarios send a batch of events again part of the scenario create a page blob it s not used by enough of the user base agenda for the review a board review is generally split into two parts with additional meetings as required part introducing the board to the service review of the service no more than minutes review of the champion scenarios get feedback on the api patterns used in the champion scenarios after part you may schedule additional meetings with architects to refine the api and work on implementation part the ga meeting scheduled at least one week after the apis have been uploaded for review will go over controversial feedback from the line by line api review exit meeting with concrete changes necessary to meet quality bar thank you for your submission
0
264,725
8,318,970,029
IssuesEvent
2018-09-25 15:55:26
biocore/qiita
https://api.github.com/repos/biocore/qiita
closed
different queues in qiita per user
CMI-request enhancement group input performance priority: high
We should have queues per user. The current queues we thought of:
demos (16+ cores), only available for official demo presenters
reserved (8+ cores), only for web development/improvement
general (rest), everyone else.
1.0
different queues in qiita per user - We should have queues per user. The current queues we thought of:
demos (16+ cores), only available for official demo presenters
reserved (8+ cores), only for web development/improvement
general (rest), everyone else.
non_defect
different queues in qiita per user we should have queues per user the current queues we thought of demos cores only available for official demo presenters reserved cores only for web development improvement general rest everyone else
0
56,942
8,132,412,410
IssuesEvent
2018-08-18 11:33:15
nodemcu/nodemcu-firmware
https://api.github.com/repos/nodemcu/nodemcu-firmware
opened
Add (better) documentation about missing standard Lua modules
documentation
This is a sequel to #2461, and @timg11's work may be re-used here. Shall we add some extra chapter(s) targeting Lua developers new to NodeMCU? There's the great FAQ from Terry, but some questions this audience might have are somewhat buried in there. Questions like:
- Which standard Lua modules are *not* available on NodeMCU? Our README says [right there in the summary](https://github.com/nodemcu/nodemcu-firmware#summary) but `io` happens to be missing.
- How does standard functionality X map to the NodeMCU-world, if at all (could contain code snippets)?
This list may grow over time.
1.0
Add (better) documentation about missing standard Lua modules - This is a sequel to #2461, and @timg11's work may be re-used here. Shall we add some extra chapter(s) targeting Lua developers new to NodeMCU? There's the great FAQ from Terry, but some questions this audience might have are somewhat buried in there. Questions like:
- Which standard Lua modules are *not* available on NodeMCU? Our README says [right there in the summary](https://github.com/nodemcu/nodemcu-firmware#summary) but `io` happens to be missing.
- How does standard functionality X map to the NodeMCU-world, if at all (could contain code snippets)?
This list may grow over time.
non_defect
add better documentation about missing standard lua modules this is a sequel to and s work maybe re used here shall we add some extra chapter s targeting lua developers new to nodemcu there s the great faq from terry but some questions this audience might have is somewhat buried in there questions like which standard lua modules are not available on nodemcu our readme says but io happens to be missing how does standard functionality x map to the nodemcu world if at all could contain code snippets this list may grow over time
0
290,326
21,875,768,836
IssuesEvent
2022-05-19 09:59:16
appsmithorg/appsmith
https://api.github.com/repos/appsmithorg/appsmith
closed
[Docs] #12144 [Bug]: Disable run button in query editor page after having run the query once.
Documentation User Education Pod
> TODO - [ ] Evaluate if this task is needed. If not add the "Skip Docs" label on the parent ticket - [ ] Fill these fields - [ ] Prepare first draft - [ ] Add label: "Ready for Docs Team" Field | Details -----|----- **POD** | FE Coders Pod **Parent Ticket** | #12144 Engineer | Release Date | Live Date | First Draft | Auto Assign | Priority | Environment |
1.0
[Docs] #12144 [Bug]: Disable run button in query editor page after having run the query once. - > TODO - [ ] Evaluate if this task is needed. If not add the "Skip Docs" label on the parent ticket - [ ] Fill these fields - [ ] Prepare first draft - [ ] Add label: "Ready for Docs Team" Field | Details -----|----- **POD** | FE Coders Pod **Parent Ticket** | #12144 Engineer | Release Date | Live Date | First Draft | Auto Assign | Priority | Environment |
non_defect
disable run button in query editor page after having run the query once todo evaluate if this task is needed if not add the skip docs label on the parent ticket fill these fields prepare first draft add label ready for docs team field details pod fe coders pod parent ticket engineer release date live date first draft auto assign priority environment
0
67,101
3,266,304,584
IssuesEvent
2015-10-22 20:05:09
geecko86/QuickLyric
https://api.github.com/repos/geecko86/QuickLyric
closed
Make some advanced testing for LRC support
bug Hi Priority
* Define a blacklist: players that don't broadcast their location * Add something on screen to know how far QL thinks we're at in the song * Test lots of songs
1.0
Make some advanced testing for LRC support - * Define a blacklist: players that don't broadcast their location * Add something on screen to know how far QL thinks we're at in the song * Test lots of songs
non_defect
make some advanced testing for lrc support define a blacklist players that don t broadcast their location add something on screen to know how far ql thinks we re at in the song test lots of songs
0
66,140
20,016,489,873
IssuesEvent
2022-02-01 12:37:00
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
DataTable: Dynamically rendered columns are not filterable/sortable
defect
**Describe the defect** In our application we have cases where columns are added/removed dynamically by switching the "rendered"-attribute of each column. The column gets added but sorting and filtering is not working. This worked in previous versions of Primefaces. You can see the issue in the showcase "DataTable - Dynamic Columns" when adding a new column "representative". I have also added a reproducer which showcases this problem in smaller size. **Reproducer** [primefaces-test.zip](https://github.com/primefaces/primefaces/files/7977071/primefaces-test.zip) https://www.primefaces.org/showcase/ui/data/datatable/columns.xhtml?jfwid=87ffd **Environment:** - PF Version: _11.0.0_ - Affected browsers: _ALL_ **To Reproduce** Steps to reproduce the behavior: 1. Start Reproducer 2. Test Sorting/Filtering functionality on the initial column -> works 3. Press Button "Show Column" 4. Test Sorting/Filtering functionality on the 2nd column -> does not work **Expected behavior** Same behaviour as in Primefaces 10 (working). **Further Info** I am quite sure that the underlying problem is similar to #8159, that being, that the sortByAsMap/filterByAsMap values are not updated and therefore the newly added column has nothing to go by when sorting/filtering. I have opened a PR #8358 which I believe contains the fix to the problem. #8159 was closed with the suggestion to reset the values of sortByAsMap/filterByAsMap to null, which **would also work** for this reproducer and our application. However, in my opinion this might not be the best solution, since every application using this feature has to be migrated in order to work. If we decide to use this approach, we should leave a hint in the migration guide and update the showcase for PF11, where it is currently used. The fix I was proposing in my PR adds two lines which were previously there in PF10 and got lost in transition to PF11. Adding those back fixes the problem. With that solution it wouldn't be necessary to update the showcase as well. But I am open for other ideas or arguments against my approach. **Excerpt from my comment under #8159** After some time debugging I noticed that the functions isColumnSortable() / isColumnFilterable() from UITable are always called from within the encodeColumnHeader function of the DataTableRenderer. In PF10 it was here, where the sortByAsMap property was newly set. For some reason though a line was dropped from PF10 to PF11 which resets the value. Please see below: Primefaces 10 default boolean isColumnSortable(FacesContext context, UIColumn column) { Map<String, SortMeta> sortBy = getSortByAsMap(); if (sortBy.containsKey(column.getColumnKey())) { return true; } SortMeta s = SortMeta.of(context, getVar(), column); if (s == null) { return false; } // unlikely to happen, in case columns change between two ajax requests sortBy.put(s.getColumnKey(), s); setSortByAsMap(sortBy); return true; } Primefaces 11 default boolean isColumnSortable(FacesContext context, UIColumn column) { Map<String, SortMeta> sortBy = getSortByAsMap(); if (sortBy.containsKey(column.getColumnKey())) { return true; } // lazy init - happens in cases where the column is initially not rendered SortMeta s = SortMeta.of(context, getVar(), column); if (s != null) { sortBy.put(s.getColumnKey(), s); } // setSortByAsMap(sortBy); is missing here return s != null; } Although isColumnFilterable looks different to isColumnSortable in PF10, in PF11 they almost look identical. Adding a setFilterByAsMap(filterBy) at the same position as above fixed the filtering for me as well.
1.0
DataTable: Dynamically rendered columns are not filterable/sortable - **Describe the defect** In our application we have cases where columns are added/removed dynamically by switching the "rendered"-attribute of each column. The column gets added but sorting and filtering is not working. This worked in previous versions of Primefaces. You can see the issue in the showcase "DataTable - Dynamic Columns" when adding a new column "representative". I have also added a reproducer which showcases this problem in smaller size. **Reproducer** [primefaces-test.zip](https://github.com/primefaces/primefaces/files/7977071/primefaces-test.zip) https://www.primefaces.org/showcase/ui/data/datatable/columns.xhtml?jfwid=87ffd **Environment:** - PF Version: _11.0.0_ - Affected browsers: _ALL_ **To Reproduce** Steps to reproduce the behavior: 1. Start Reproducer 2. Test Sorting/Filtering functionality on the initial column -> works 3. Press Button "Show Column" 4. Test Sorting/Filtering functionality on the 2nd column -> does not work **Expected behavior** Same behaviour as in Primefaces 10 (working). **Further Info** I am quite sure that the underlying problem is similar to #8159, that being, that the sortByAsMap/filterByAsMap values are not updated and therefore the newly added column has nothing to go by when sorting/filtering. I have opened a PR #8358 which I believe contains the fix to the problem. #8159 was closed with the suggestion to reset the values of sortByAsMap/filterByAsMap to null, which **would also work** for this reproducer and our application. However, in my opinion this might not be the best solution, since every application using this feature has to be migrated in order to work. If we decide to use this approach, we should leave a hint in the migration guide and update the showcase for PF11, where it is currently used. The fix I was proposing in my PR adds two lines which were previously there in PF10 and got lost in transition to PF11. Adding those back fixes the problem. With that solution it wouldn't be necessary to update the showcase as well. But I am open for other ideas or arguments against my approach. **Excerpt from my comment under #8159** After some time debugging I noticed that the functions isColumnSortable() / isColumnFilterable() from UITable are always called from within the encodeColumnHeader function of the DataTableRenderer. In PF10 it was here, where the sortByAsMap property was newly set. For some reason though a line was dropped from PF10 to PF11 which resets the value. Please see below: Primefaces 10 default boolean isColumnSortable(FacesContext context, UIColumn column) { Map<String, SortMeta> sortBy = getSortByAsMap(); if (sortBy.containsKey(column.getColumnKey())) { return true; } SortMeta s = SortMeta.of(context, getVar(), column); if (s == null) { return false; } // unlikely to happen, in case columns change between two ajax requests sortBy.put(s.getColumnKey(), s); setSortByAsMap(sortBy); return true; } Primefaces 11 default boolean isColumnSortable(FacesContext context, UIColumn column) { Map<String, SortMeta> sortBy = getSortByAsMap(); if (sortBy.containsKey(column.getColumnKey())) { return true; } // lazy init - happens in cases where the column is initially not rendered SortMeta s = SortMeta.of(context, getVar(), column); if (s != null) { sortBy.put(s.getColumnKey(), s); } // setSortByAsMap(sortBy); is missing here return s != null; } Although isColumnFilterable looks different to isColumnSortable in PF10, in PF11 they almost look identical. Adding a setFilterByAsMap(filterBy) at the same position as above fixed the filtering for me as well.
defect
1
65,873
19,727,279,067
IssuesEvent
2022-01-13 21:20:25
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
closed
Service Location Address conditional fields not showing correctly
Defect Needs refining Sitewide CMS Team
[It may make sense to work this along with #7166] ## Describe the defect On paragraph `service_location_address`, the field `field_address` is a conditionally necessary field based on another field `field_use_facility_address`. On the node-edit form, `field_address` is hidden until the checkbox tied to `field_use_facility_address` is unchecked (it is checked by default). This behavior is expected and functioning well. However, on initial page load of subsequent edits, the conditional `field_address` is hidden _regardless of the state of_ `field_use_facility_address`. It should be initially visible if the `field_use_facility_address` is false (unchecked). ## To Reproduce Steps to reproduce the behavior: 1. Go to https://staging.cms.va.gov/node/6704/edit 2. Scroll down to section "Service Locations" 3. Click to expand "Address". 4. Note the unchecked box "Use the facility's street address?" 5. Note that address fields are not visible below the checkbox. ## Expected behavior If the boolean field `field_use_facility_address` is set to false (box unchecked), the conditional address field should be visible on initial page load (instead, an editor needs to check the box, then uncheck the box to toggle the initially hidden form fields). ## Screenshots ![image](https://user-images.githubusercontent.com/6863534/144762097-13716ead-31a7-4c94-960b-d5ad04cfedb6.png) ## Labels (You can delete this section once it's complete) - [x] Issue type (red) (defaults to "Defect") - [ ] CMS subsystem (green) - [ ] CMS practice area (blue) - [x] CMS workstream (orange) (not needed for bug tickets) - [ ] CMS-supported product (black) ### CMS Team - [ ] `Platform CMS Team` - [x] `Sitewide CMS Team`
1.0
defect
1
133,660
10,854,437,436
IssuesEvent
2019-11-13 16:25:21
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
opened
[Flaky Test] gce-master-scale-performance flakes due to kube-controller-manager losing lease watch
kind/failing-test
/sig scalability Debugging done by @mborsz Failed test attempt: https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1193936861135900677 It failed with a number of "WaitingFor*Pods: objects timed out" errors (46 failed objects in total). Sample failing pods: ```[measurement call WaitForControlledPodsRunning - WaitForRunningJobs error: 3 objects timed out: Jobs: test-aeoqzf-1/big-job-0, test-aeoqzf-6/big-job-0, test-aeoqzf-37/big-job-0``` 580 nodes were not ready at 18:37: ``` I1111 18:37:28.387103 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-66d8] I1111 18:37:28.498675 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-259b] I1111 18:37:28.518832 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tzqn] I1111 18:37:30.121908 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-lwlg] I1111 18:37:30.815458 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nbps] I1111 18:37:30.890932 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-jkg1] I1111 18:37:31.292585 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-356q] I1111 18:37:31.852058 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-fsqx] I1111 18:37:31.995249 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-hzvt] I1111 18:37:32.214507 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-gstw] I1111 18:37:32.311879 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-7s8f] I1111 18:37:33.084482 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-sh4h]
... (the log continues with analogous "Update ready status of pods on node [...]" entries from controller_utils.go:121, one per not-ready node, timestamps 18:37:33 through 18:38:32) ...
[gce-scale-cluster-minion-group-1-m0s7] I1111 18:38:32.323073 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-2fkm] I1111 18:38:32.545522 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fwdd] I1111 18:38:32.761190 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-lzhj] I1111 18:38:33.018175 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fhgm] I1111 18:38:33.496766 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-r4qm] I1111 18:38:33.681105 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-mx33] I1111 18:38:33.987314 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-swtn] I1111 18:38:34.347237 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-c01c] I1111 18:38:34.615267 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-2652] I1111 18:38:34.716355 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-fzhk] I1111 18:38:35.087431 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-9zq3] I1111 18:38:35.550418 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-wrhh] I1111 18:38:35.839456 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-xfms] I1111 18:38:36.115179 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-j9dz] I1111 18:38:36.456882 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-v77j] I1111 18:38:36.541538 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-4-5b5n] I1111 18:38:36.877008 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-3qvp] I1111 18:38:37.218718 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-tkx3] I1111 18:38:38.363314 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-mm7q] I1111 18:38:38.699014 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-573f] I1111 18:38:38.935157 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-2nlr] I1111 18:38:39.122364 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-qlx8] I1111 18:38:39.223464 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-1zzt] I1111 18:38:39.359691 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-hs19] I1111 18:38:39.488680 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-v3v6] I1111 18:38:39.788296 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-6wwf] I1111 18:38:40.259162 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-1gt7] I1111 18:38:40.366147 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wr1t] I1111 18:38:40.434989 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-jhbm] I1111 18:38:40.833900 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-7z0d] I1111 18:38:40.913712 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-73m6] I1111 18:38:41.202076 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-1-5tdt] I1111 18:38:41.479094 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-zzz7] I1111 18:38:41.881878 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-14kc] I1111 18:38:42.183027 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-7j01] I1111 18:38:42.438968 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-9klz] I1111 18:38:42.533411 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-r0zh] I1111 18:38:42.587141 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gj65] I1111 18:38:42.990843 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wf8n] I1111 18:38:43.300660 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-pljw] I1111 18:38:43.643535 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-vg08] I1111 18:38:43.699697 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-lqm0] I1111 18:38:43.972340 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0c4b] I1111 18:38:45.130681 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4z5j] I1111 18:38:45.372096 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-p1z4] I1111 18:38:45.692477 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-mkqf] I1111 18:38:45.834688 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-l2nj] I1111 18:38:45.993775 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-2-hqdc] I1111 18:38:46.100336 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-p058] I1111 18:38:46.231845 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-dtr8] I1111 18:38:46.574274 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-wg6l] I1111 18:38:46.770622 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-f7gm] I1111 18:38:47.046924 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-bq8g] I1111 18:38:47.337844 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-hdk4] I1111 18:38:47.716548 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-sg1n] I1111 18:38:48.007164 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-bzjz] I1111 18:38:48.356598 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tfdz] I1111 18:38:48.418540 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-j7s7] I1111 18:38:48.880005 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-1cg6] I1111 18:38:49.148773 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-38k4] I1111 18:38:49.339790 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-tt43] I1111 18:38:49.587573 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-r99r] I1111 18:38:49.742519 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-n3s4] I1111 18:38:50.088846 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-4-10ps] I1111 18:38:50.525943 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-gbqs] I1111 18:38:50.778937 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-lpvb] I1111 18:38:51.187043 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-nl6r] I1111 18:38:51.651307 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-gn6n] I1111 18:38:52.232016 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-7kt6] I1111 18:38:52.566470 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gz2k] I1111 18:38:52.876868 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5c8h] I1111 18:38:52.985187 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-lv9g] I1111 18:38:53.145669 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-7mh1] I1111 18:38:53.477834 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-24hm] I1111 18:38:53.737045 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-s4pn] I1111 18:38:54.039465 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-mcg5] I1111 18:38:54.317484 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-xsmg] I1111 18:38:54.598026 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-dxsx] I1111 18:38:55.091282 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-9p0g] I1111 18:38:55.427845 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-3-t7lm] I1111 18:38:55.543477 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5m1r] I1111 18:38:56.028029 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-z8jc] I1111 18:38:56.070403 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-v13n] I1111 18:38:56.365921 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-qhjh] I1111 18:38:56.407344 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-6j02] I1111 18:38:56.647467 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-wdhd] I1111 18:38:57.131829 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-p5kh] I1111 18:38:57.521473 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-k1mh] I1111 18:38:57.852846 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-g821] I1111 18:38:58.067074 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wr0t] I1111 18:38:58.339612 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-2ccx] I1111 18:38:58.905254 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-0t73] I1111 18:38:59.762190 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-rwzp] I1111 18:39:00.054413 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-1bjx] I1111 18:39:00.324002 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-4k9t] I1111 18:39:00.439786 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-1-8jnb] I1111 18:39:00.543561 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-sxf2] I1111 18:39:00.632589 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-745x] I1111 18:39:00.926210 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-7mcj] I1111 18:39:01.404694 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-n2dl] I1111 18:39:01.709573 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-jr2k] I1111 18:39:02.027130 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-x7nm] I1111 18:39:02.263907 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-txbf] I1111 18:39:02.699439 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-l4xr] I1111 18:39:02.961453 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-mwk3] I1111 18:39:03.316577 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-6mp9] I1111 18:39:03.586097 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-mdl0] I1111 18:39:03.899580 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-j38d] I1111 18:39:04.256914 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-qkkc] I1111 18:39:04.432297 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-2frx] I1111 18:39:04.700258 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-ftxz] I1111 18:39:05.350233 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-2-3v49] I1111 18:39:05.440266 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-wr2k] I1111 18:39:05.543044 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-x3lm] I1111 18:39:05.958126 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-6g89] I1111 18:39:06.062887 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-25ww] I1111 18:39:06.907431 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-w0n3] I1111 18:39:06.996672 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-wgml] I1111 18:39:07.482092 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-sbv3] I1111 18:39:07.739111 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-9zql] I1111 18:39:08.268033 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tfxw] I1111 18:39:08.469658 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wl7d] I1111 18:39:08.494823 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-8vjt] I1111 18:39:08.831920 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-xbng] I1111 18:39:08.953907 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-l7mq] I1111 18:39:09.373633 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-328r] I1111 18:39:09.567176 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-s0r1] I1111 18:39:09.648895 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-3-4mc9] I1111 18:39:10.017439 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-bps6] I1111 18:39:10.282764 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-k18f] I1111 18:39:10.600084 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-p5d2] I1111 18:39:10.857286 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-ckqw] I1111 18:39:11.233301 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-4hvb] I1111 18:39:11.588868 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-5b23] I1111 18:39:11.782381 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-d7kc] I1111 18:39:11.879102 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-b8dp] I1111 18:39:12.138797 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-7t2c] I1111 18:39:12.427890 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-0xgb] I1111 18:39:12.850075 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-81dt] I1111 18:39:13.385066 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-g68w] I1111 18:39:14.230811 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-pj57] I1111 18:39:14.991263 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-kncq] I1111 18:39:15.272829 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-qfst] I1111 18:39:15.450647 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-67dg] I1111 18:39:15.696548 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0qwp] I1111 18:39:15.771299 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0zg3] I1111 18:39:15.845018 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-99sx] I1111 18:39:16.059427 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-pzcs] I1111 18:39:16.353765 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-mrw2] I1111 18:39:16.577633 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-wjxk] I1111 18:39:16.982767 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-htgd] I1111 18:39:17.544936 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-bmg5] I1111 18:39:17.649214 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3v4t] I1111 18:39:18.154252 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-8qdk] I1111 18:39:18.310060 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3rxn] I1111 18:39:18.617075 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-8gl8] I1111 18:39:19.016250 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-46hp] I1111 18:39:19.477577 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-cwrh] I1111 18:39:19.938012 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-mps7] I1111 18:39:20.147223 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-1-dl41] I1111 18:39:20.791216 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hpbs] I1111 18:39:21.700306 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-x1k6] I1111 18:39:22.175236 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-414x] I1111 18:39:22.459859 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gjvw] I1111 18:39:22.587232 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nsgb] I1111 18:39:22.902872 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-3kql] I1111 18:39:23.133551 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nmlf] I1111 18:39:23.384778 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-12km] I1111 18:39:23.589037 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-nk8v] I1111 18:39:23.660390 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-fnrp] I1111 18:39:23.845723 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-r9b5] I1111 18:39:24.010260 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-91q7] I1111 18:39:24.297671 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-82xj] I1111 18:39:24.603582 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-761n] I1111 18:39:24.837122 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-jb5n] I1111 18:39:25.209054 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-gsx4] I1111 18:39:25.490787 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-nwfv] I1111 18:39:25.781753 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-zr4r] I1111 18:39:25.828573 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-kql9] I1111 18:39:26.197595 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-n7xb] I1111 18:39:26.910286 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-mqhm] I1111 18:39:26.999475 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-rjbx] I1111 18:39:27.338519 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-61f6] I1111 18:39:27.647301 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-bnkp] I1111 18:39:28.033623 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-qpn6] I1111 18:39:28.944493 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-pblt] I1111 18:39:29.622154 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-714d] I1111 18:39:29.839487 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-573w] I1111 18:39:30.030796 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hfwn] I1111 18:39:30.278218 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-fzh9] I1111 18:39:30.396450 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-zx5h] I1111 18:39:30.565589 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-vnw6] I1111 18:39:30.710700 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-76mt] I1111 18:39:31.075067 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-61g9] I1111 18:39:31.236304 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-w7mk] I1111 18:39:31.494766 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-bd41] I1111 18:39:31.605305 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-2t34] I1111 18:39:31.972326 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-dhp2] I1111 18:39:32.477650 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-704b] I1111 18:39:33.034549 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-8skg] I1111 18:39:33.382584 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-9qzm] I1111 18:39:33.588702 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-z816] I1111 18:39:33.986465 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-rj3v] I1111 18:39:34.249107 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-xz9p] I1111 18:39:34.567923 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-gsz2] I1111 18:39:34.915820 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-p3kz] I1111 18:39:35.157996 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-m3kz] I1111 18:39:35.628456 1 controller_utils.go:121] Update ready status of pods on node 
[gce-scale-cluster-minion-group-wlsm]
I1111 18:39:35.779676 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-tqlw]
I1111 18:39:36.918460 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-g3sm]
I1111 18:39:37.549209 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-8sl7]
I1111 18:39:37.810307 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-4sf8]
I1111 18:39:38.409946 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-vms6]
I1111 18:39:38.475139 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-krsm]
I1111 18:39:38.592150 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-qlj2]
I1111 18:39:38.810591 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-jvsk]
I1111 18:39:39.152628 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0gk0]
I1111 18:39:39.568955 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wjwh]
I1111 18:39:39.714656 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3wtw]
I1111 18:39:39.820303 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-v5vd]
I1111 18:39:40.048672 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-tp6q]
I1111 18:39:40.261932 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-tfmq]
I1111 18:39:40.577344 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-096v]
I1111 18:39:41.046671 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-04s2]
I1111 18:39:41.434857 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-0l73]
I1111 18:39:41.915326 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-kdmk]
I1111 18:39:42.080964 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2br9]
I1111 18:39:42.339260 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-4chv]
I1111 18:39:42.711780 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-frx9]
I1111 18:39:43.110570 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-ntlh]
I1111 18:39:43.425878 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-q7mg]
I1111 18:39:43.779251 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5xmx]
I1111 18:39:43.989802 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1lp9]
I1111 18:39:44.297255 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2r4g]
I1111 18:39:45.357398 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-vk0l]
I1111 18:39:46.383214 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-g8sw]
I1111 18:39:46.555960 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-l4r8]
I1111 18:39:46.661191 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-pv3n]
I1111 18:39:47.113505 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-6ppm]
I1111 18:39:47.188476 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-s581]
I1111 18:39:47.304973 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-jzzd]
I1111 18:39:47.556619 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-92m2]
I1111 18:39:47.704804 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-cpm9]
I1111 18:39:48.037517 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-7x13]
I1111 18:39:48.432171 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-j2hg]
I1111 18:39:48.680666 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-fz0l]
I1111 18:39:49.015524 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-p88r]
I1111 18:39:49.391521 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-3r8m]
I1111 18:39:49.733150 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-p6hh]
I1111 18:39:50.234236 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-f6f9]
I1111 18:39:50.738040 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-580w]
I1111 18:39:50.895688 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nvrw]
I1111 18:39:51.217524 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gkld]
I1111 18:39:51.562688 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-1672]
I1111 18:39:51.664724 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-vrc8]
I1111 18:39:51.998702 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nlxt]
I1111 18:39:52.282487 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-lkd8]
I1111 18:39:52.487063 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-2wmv]
I1111 18:39:53.279273 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-9fhd]
I1111 18:39:53.748607 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-blx8]
I1111 18:39:54.252631 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-gf66]
I1111 18:39:54.849170 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-q4q0]
I1111 18:39:55.078434 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0g6p]
I1111 18:39:55.301198 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-r3kh]
I1111 18:39:55.408948 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-dh60]
I1111 18:39:55.573297 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gw8l]
I1111 18:39:56.093055 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-8ggm]
I1111 18:39:56.373142 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-mpj9]
I1111 18:39:56.509212 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-j0m7]
I1111 18:39:56.814137 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-mxdc]
I1111 18:39:57.158808 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-8wxd]
I1111 18:39:57.495696 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-bkk3]
I1111 18:39:57.931057 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-hkff]
I1111 18:39:58.470851 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-c6gk]
I1111 18:39:58.796070 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-jggc]
I1111 18:39:59.065624 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-93bc]
I1111 18:39:59.421789 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-j473]
I1111 18:39:59.664666 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-csmv]
I1111 18:39:59.988656 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-xcfz]
I1111 18:40:00.172523 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-j49d]
I1111 18:40:00.495009 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-tz9z]
I1111 18:40:00.818639 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-v4pz]
I1111 18:40:01.073872 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-2qmd]
I1111 18:40:01.482237 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5q5w]
I1111 18:40:02.103121 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fhkz]
I1111 18:40:02.740083 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-wcql]
I1111 18:40:03.432864 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-6j43]
I1111 18:40:03.539459 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-lzw5]
I1111 18:40:03.661212 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-t85p]
I1111 18:40:03.839092 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-bdf9]
I1111 18:40:04.070572 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-bfc9]
I1111 18:40:04.238901 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-pz5k]
I1111 18:40:04.318291 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-43d4]
I1111 18:40:04.715717 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-xvbt]
I1111 18:40:04.946384 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-f51r]
I1111 18:40:05.574220 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-qcgf]
I1111 18:40:06.028681 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-gcsx]
I1111 18:40:06.235450 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-wq6z]
I1111 18:40:06.463222 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wd89]
I1111 18:40:06.669367 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-tst0]
I1111 18:40:06.991029 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-6ffp]
I1111 18:40:07.290680 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-d4rc]
I1111 18:40:07.625417 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-qb0l]
I1111 18:40:07.989539 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-16wm]
I1111 18:40:08.298710 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-6gtg]
I1111 18:40:08.544512 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-jbj7]
I1111 18:40:08.903102 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-m3qz]
I1111 18:40:09.288837 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2v46]
I1111 18:40:09.712452 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-645m]
I1111 18:40:09.956980 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-hq6h]
I1111 18:40:10.293164 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-th53]
I1111 18:40:11.232052 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-tvq1]
I1111 18:40:11.563648 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-25t9]
I1111 18:40:11.906882 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-m78v]
I1111 18:40:11.954090 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-sd0g]
I1111 18:40:12.172313 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-cbbd]
I1111 18:40:12.283434 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-31g6]
I1111 18:40:12.366786 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hd4f]
I1111 18:40:12.453117 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-s883]
I1111 18:40:12.743109 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-cgqp]
I1111 18:40:13.369057 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-v316]
I1111 18:40:13.733467 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-07fp]
I1111 18:40:14.042805 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-nxv1]
I1111 18:40:14.342895 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-x5v7]
I1111 18:40:14.612709 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-bt6c]
I1111 18:40:14.897639 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-8mph]
I1111 18:40:15.233431 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-jrv8]
I1111 18:40:15.694456 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-bn89]
I1111 18:40:16.291492 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-6621]
I1111 18:40:16.339440 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-19p2]
I1111 18:40:16.594489 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-n2pd]
I1111 18:40:16.970156 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-bjhk]
I1111 18:40:17.143069 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-w1ct]
I1111 18:40:17.305550 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-2xp9]
I1111 18:40:17.570090 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-swn7]
I1111 18:40:17.967891 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-dxwr]
I1111 18:40:19.436201 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-xt77]
I1111 18:40:19.894755 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-p527]
I1111 18:40:20.038867 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tbsw]
I1111 18:40:20.174110 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-q8vp]
I1111 18:40:20.459233 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-s04p]
I1111 18:40:20.625268 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-7p7q]
I1111 18:40:20.848241 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-gqrf]
I1111 18:40:20.924132 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-gk58]
I1111 18:40:21.126664 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-x5pm]
I1111 18:40:21.692854 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-2x5n]
I1111 18:40:21.755381 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-vhcl]
I1111 18:40:22.008464 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-nms7]
I1111 18:40:22.436191 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-btxw]
I1111 18:40:22.754123 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-1s6l]
I1111 18:40:23.360269 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-c9px]
I1111 18:40:23.637447 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-3ncs]
I1111 18:40:24.181781 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-0q2t]
I1111 18:40:24.647872 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0ml6]
I1111 18:40:24.669665 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-gfn7]
I1111 18:40:24.884514 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-qzdk]
I1111 18:40:25.124432 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fdjf]
I1111 18:40:25.405321 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-pz1w]
I1111 18:40:26.895677 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-sjr3]
I1111 18:40:27.307843 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-kzzr]
I1111 18:40:27.668214 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-m30f]
I1111 18:40:27.757480 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-dk0d]
I1111 18:40:28.011769 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hvhw]
I1111 18:40:28.403362 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-rkpf]
I1111 18:40:28.473591 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-h79n]
I1111 18:40:28.978385 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-x7s0]
I1111 18:40:29.189443 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-7tzz]
I1111 18:40:29.483829 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-z2sr]
I1111 18:40:30.131831 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-fg4d]
I1111 18:40:30.154205 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-52rs]
I1111 18:40:30.656704 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4l1n]
I1111 18:40:30.860403 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-xjdl]
I1111 18:40:31.032045 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-8vsg]
I1111 18:40:31.185006 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-x9dq]
I1111 18:40:31.543804 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-hc2k]
I1111 18:40:31.660632 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-cvg7]
I1111 18:40:32.001493 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hqtf]
I1111 18:40:32.039168 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5mcp]
I1111 18:40:32.445498 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-xkf6]
I1111 18:40:32.588909 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-lr3k]
I1111 18:40:32.959447 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-qj4g]
I1111 18:40:33.147214 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-9bt2]
I1111 18:40:34.669491 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-f6lx]
```

node_lifecycle_controller said that gce-scale-cluster-minion-group-2-66d8's status hasn't
been updated for 41 seconds:

```
I1111 18:37:28.370392 1 node_lifecycle_controller.go:1137] node gce-scale-cluster-minion-group-2-66d8 hasn't been updated for 41.740028231s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-11 18:33:03 +0000 UTC,LastTransitionTime:2019-11-11 17:12:17 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I1111 18:37:28.370515 1 node_lifecycle_controller.go:1137] node gce-scale-cluster-minion-group-2-66d8 hasn't been updated for 41.740159195s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-11 18:33:03 +0000 UTC,LastTransitionTime:2019-11-11 17:12:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,}
I1111 18:37:28.370533 1 node_lifecycle_controller.go:1137] node gce-scale-cluster-minion-group-2-66d8 hasn't been updated for 41.740177955s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-11 18:33:03 +0000 UTC,LastTransitionTime:2019-11-11 17:12:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,}
I1111 18:37:28.370547 1 node_lifecycle_controller.go:1137] node gce-scale-cluster-minion-group-2-66d8 hasn't been updated for 41.740192115s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-11 18:33:03 +0000 UTC,LastTransitionTime:2019-11-11 17:12:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,}
```

The apiserver's logs show that the kubelet was doing `PUT /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/gce-scale-cluster-minion-group-2-66d8` every 10 seconds.
Something weird happened to the watch from 'shared-informers':

```
I1111 18:32:20.160965 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3560113 labels= fields= timeout=7m14s
I1111 18:32:20.161197 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3560113&timeout=7m14s&timeoutSeconds=434&watch=true: (384.017µs) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:32:21.303182 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3560961 labels= fields= timeout=5m46s
I1111 18:36:15.241329 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3560961&timeout=5m46s&timeoutSeconds=346&watch=true: (3m53.938351655s) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:36:15.284570 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3732753 labels= fields= timeout=5m48s
I1111 18:36:15.324265 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3732753&timeout=5m48s&timeoutSeconds=348&watch=true: (42.014758ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:36:16.442047 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3733880 labels= fields= timeout=6m14s
I1111 18:36:52.300950 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3733880&timeout=6m14s&timeoutSeconds=374&watch=true: (35.859084497s) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:36:52.314218 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3760136 labels= fields= timeout=9m43s
I1111 18:36:52.320361 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3760136&timeout=9m43s&timeoutSeconds=583&watch=true: (9.10633ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:40:34.906921 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3949153 labels= fields= timeout=6m22s
I1111 18:42:20.654833 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3949153&timeout=6m22s&timeoutSeconds=382&watch=true: (1m45.764834382s) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:42:20.697678 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4028224 labels= fields= timeout=5m44s
I1111 18:42:20.702168 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4028224&timeout=5m44s&timeoutSeconds=344&watch=true: (4.82265ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:42:21.822285 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4029047 labels= fields= timeout=8m7s
I1111 18:43:07.453394 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4029047&timeout=8m7s&timeoutSeconds=487&watch=true: (45.631249675s) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:43:07.533402 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4064578 labels= fields= timeout=7m16s
I1111 18:43:07.592559 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4064578&timeout=7m16s&timeoutSeconds=436&watch=true: (60.648364ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:43:07.604225 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4064660 labels= fields= timeout=7m47s
I1111 18:43:07.658464 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4064660&timeout=7m47s&timeoutSeconds=467&watch=true: (54.605721ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:43:07.659558 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4064853 labels= fields= timeout=8m33s
I1111 18:43:07.716122 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4064853&timeout=8m33s&timeoutSeconds=513&watch=true: (56.895361ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
I1111 18:43:07.723797 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4064886 labels= fields= timeout=7m17s
I1111 18:43:07.724210 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4064886&timeout=7m17s&timeoutSeconds=437&watch=true: (785.817µs) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282]
```

One watch started at 18:36:52 and finished 9 ms later. The next one started at 18:40:34.906921, so it seems that there was a gap in the lease watch between 18:36:52 and 18:40:34. Between 18:36:52 and 18:40:34 there are a number of lease LIST requests in the following pattern:

* a single `/leases?limit=500` LIST request that finishes with 200
* a single `/leases?continue=XXX` request that finishes with 400

The kube-controller-manager is retrying every 1 second, up to 18:40:34.
18:37:59.101574 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-fgt6] I1111 18:37:59.244297 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-9mcm] I1111 18:37:59.367966 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-mtq4] I1111 18:37:59.622707 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-cdrb] I1111 18:37:59.783529 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-n8b9] I1111 18:38:00.111485 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1hcl] I1111 18:38:00.616859 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-72hv] I1111 18:38:00.810590 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-bzfd] I1111 18:38:01.168910 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-bx4z] I1111 18:38:01.578337 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-lqjt] I1111 18:38:01.770263 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-jqk1] I1111 18:38:02.061361 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-lpsm] I1111 18:38:02.396538 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-s6m6] I1111 18:38:02.770775 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hvz7] I1111 18:38:03.228379 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-55xd] I1111 18:38:03.315612 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-g620] I1111 
18:38:03.936931 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wg2m] I1111 18:38:04.424293 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-3tzk] I1111 18:38:05.281710 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-r8m5] I1111 18:38:05.602427 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-zdwl] I1111 18:38:05.788528 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-39j4] I1111 18:38:05.993550 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-r4tf] I1111 18:38:06.261329 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-78cp] I1111 18:38:06.419663 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fdjs] I1111 18:38:06.484446 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-v3gq] I1111 18:38:06.781222 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-vh76] I1111 18:38:07.032394 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-csqg] I1111 18:38:07.087125 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-1fw0] I1111 18:38:07.339819 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-2hh3] I1111 18:38:07.641678 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-pw6p] I1111 18:38:07.949707 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-sqq9] I1111 18:38:08.109691 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-111g] I1111 
18:38:08.325421 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wzcg] I1111 18:38:08.602802 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-xfht] I1111 18:38:08.880364 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-lhtd] I1111 18:38:09.144392 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-z1r0] I1111 18:38:09.219673 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-8rzk] I1111 18:38:09.573570 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-nfhw] I1111 18:38:10.010771 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gc5n] I1111 18:38:10.362357 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-cxqb] I1111 18:38:10.642776 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-86rb] I1111 18:38:11.123493 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-n4c8] I1111 18:38:12.047791 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-njpn] I1111 18:38:12.306582 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-b8nn] I1111 18:38:12.590385 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-z1l0] I1111 18:38:12.757332 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-tlkw] I1111 18:38:12.943619 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-9kt7] I1111 18:38:13.194187 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-3stg] I1111 
18:38:13.488149 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-hj4t] I1111 18:38:13.704476 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-cpf8] I1111 18:38:13.937571 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-kr9f] I1111 18:38:14.033259 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hz3d] I1111 18:38:14.173053 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-h5bc] I1111 18:38:14.428397 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-jp9f] I1111 18:38:14.635603 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-t20b] I1111 18:38:14.846848 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-9l39] I1111 18:38:14.970107 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-qq4k] I1111 18:38:15.215528 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-600q] I1111 18:38:15.505309 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-tr2v] I1111 18:38:15.895880 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-nz5p] I1111 18:38:16.081949 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-qcbz] I1111 18:38:16.305558 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4mc0] I1111 18:38:16.548506 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-qswx] I1111 18:38:16.689381 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-jb08] I1111 
18:38:16.714843 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-k85q] I1111 18:38:16.979169 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-r2fb] I1111 18:38:17.467256 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tbpc] I1111 18:38:18.637341 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-s12t] I1111 18:38:19.007701 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-7mg4] I1111 18:38:19.244470 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-t7tn] I1111 18:38:19.474166 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-zhbp] I1111 18:38:19.659036 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-h6q1] I1111 18:38:19.776465 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-zmxw] I1111 18:38:19.919427 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-dq45] I1111 18:38:20.011586 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-vbtg] I1111 18:38:20.141681 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-p0kd] I1111 18:38:20.371146 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-dvm7] I1111 18:38:20.616428 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-88tf] I1111 18:38:20.896307 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-fwlh] I1111 18:38:21.174803 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-ct6f] I1111 
18:38:21.508416 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-7b94] I1111 18:38:21.734200 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tks4] I1111 18:38:22.209506 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-4gwx] I1111 18:38:22.507205 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-vg65] I1111 18:38:22.874686 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-q6pn] I1111 18:38:23.600527 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-m5x3] I1111 18:38:24.024932 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-c5cx] I1111 18:38:24.243836 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-mz0h] I1111 18:38:25.315200 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-tnz1] I1111 18:38:25.696849 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-057r] I1111 18:38:25.770130 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-4vxt] I1111 18:38:25.925778 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5hm4] I1111 18:38:26.326549 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-wrf7] I1111 18:38:26.508675 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-5ww9] I1111 18:38:26.732965 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-b798] I1111 18:38:26.925785 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-r09j] I1111 
18:38:27.101672 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-d74f] I1111 18:38:27.540538 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-twxc] I1111 18:38:27.871267 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-j1rr] I1111 18:38:28.109543 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-vnx7] I1111 18:38:28.447338 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-0rsl] I1111 18:38:28.773923 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-g1w1] I1111 18:38:29.166330 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-p0rl] I1111 18:38:29.458795 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-5cgh] I1111 18:38:29.599650 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-zh56] I1111 18:38:29.818921 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-9450] I1111 18:38:30.117421 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-glg9] I1111 18:38:30.467943 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-blcf] I1111 18:38:30.908641 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-dl56] I1111 18:38:30.997161 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-v2ff] I1111 18:38:31.853190 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-hrnk] I1111 18:38:32.101083 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-f0gw] I1111 
18:38:32.252692 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-m0s7] I1111 18:38:32.323073 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-2fkm] I1111 18:38:32.545522 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fwdd] I1111 18:38:32.761190 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-lzhj] I1111 18:38:33.018175 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fhgm] I1111 18:38:33.496766 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-r4qm] I1111 18:38:33.681105 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-mx33] I1111 18:38:33.987314 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-swtn] I1111 18:38:34.347237 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-c01c] I1111 18:38:34.615267 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-2652] I1111 18:38:34.716355 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-fzhk] I1111 18:38:35.087431 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-9zq3] I1111 18:38:35.550418 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-wrhh] I1111 18:38:35.839456 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-xfms] I1111 18:38:36.115179 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-j9dz] I1111 18:38:36.456882 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-v77j] I1111 
18:38:36.541538 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-5b5n] I1111 18:38:36.877008 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-3qvp] I1111 18:38:37.218718 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-tkx3] I1111 18:38:38.363314 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-mm7q] I1111 18:38:38.699014 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-573f] I1111 18:38:38.935157 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-2nlr] I1111 18:38:39.122364 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-qlx8] I1111 18:38:39.223464 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-1zzt] I1111 18:38:39.359691 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-hs19] I1111 18:38:39.488680 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-v3v6] I1111 18:38:39.788296 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-6wwf] I1111 18:38:40.259162 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-1gt7] I1111 18:38:40.366147 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wr1t] I1111 18:38:40.434989 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-jhbm] I1111 18:38:40.833900 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-7z0d] I1111 18:38:40.913712 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-73m6] I1111 
18:38:41.202076 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-5tdt] I1111 18:38:41.479094 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-zzz7] I1111 18:38:41.881878 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-14kc] I1111 18:38:42.183027 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-7j01] I1111 18:38:42.438968 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-9klz] I1111 18:38:42.533411 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-r0zh] I1111 18:38:42.587141 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gj65] I1111 18:38:42.990843 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wf8n] I1111 18:38:43.300660 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-pljw] I1111 18:38:43.643535 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-vg08] I1111 18:38:43.699697 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-lqm0] I1111 18:38:43.972340 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0c4b] I1111 18:38:45.130681 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4z5j] I1111 18:38:45.372096 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-p1z4] I1111 18:38:45.692477 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-mkqf] I1111 18:38:45.834688 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-l2nj] I1111 
18:38:45.993775 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-hqdc] I1111 18:38:46.100336 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-p058] I1111 18:38:46.231845 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-dtr8] I1111 18:38:46.574274 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-wg6l] I1111 18:38:46.770622 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-f7gm] I1111 18:38:47.046924 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-bq8g] I1111 18:38:47.337844 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-hdk4] I1111 18:38:47.716548 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-sg1n] I1111 18:38:48.007164 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-bzjz] I1111 18:38:48.356598 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tfdz] I1111 18:38:48.418540 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-j7s7] I1111 18:38:48.880005 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-1cg6] I1111 18:38:49.148773 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-38k4] I1111 18:38:49.339790 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-tt43] I1111 18:38:49.587573 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-r99r] I1111 18:38:49.742519 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-n3s4] I1111 
18:38:50.088846 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-10ps] I1111 18:38:50.525943 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-gbqs] I1111 18:38:50.778937 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-lpvb] I1111 18:38:51.187043 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-nl6r] I1111 18:38:51.651307 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-gn6n] I1111 18:38:52.232016 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-7kt6] I1111 18:38:52.566470 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gz2k] I1111 18:38:52.876868 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5c8h] I1111 18:38:52.985187 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-lv9g] I1111 18:38:53.145669 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-7mh1] I1111 18:38:53.477834 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-24hm] I1111 18:38:53.737045 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-s4pn] I1111 18:38:54.039465 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-mcg5] I1111 18:38:54.317484 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-xsmg] I1111 18:38:54.598026 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-dxsx] I1111 18:38:55.091282 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-9p0g] I1111 
18:38:55.427845 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-t7lm] I1111 18:38:55.543477 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5m1r] I1111 18:38:56.028029 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-z8jc] I1111 18:38:56.070403 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-v13n] I1111 18:38:56.365921 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-qhjh] I1111 18:38:56.407344 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-6j02] I1111 18:38:56.647467 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-wdhd] I1111 18:38:57.131829 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-p5kh] I1111 18:38:57.521473 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-k1mh] I1111 18:38:57.852846 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-g821] I1111 18:38:58.067074 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wr0t] I1111 18:38:58.339612 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-2ccx] I1111 18:38:58.905254 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-0t73] I1111 18:38:59.762190 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-rwzp] I1111 18:39:00.054413 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-1bjx] I1111 18:39:00.324002 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-4k9t] I1111 
18:39:00.439786 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-8jnb] I1111 18:39:00.543561 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-sxf2] I1111 18:39:00.632589 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-745x] I1111 18:39:00.926210 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-7mcj] I1111 18:39:01.404694 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-n2dl] I1111 18:39:01.709573 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-jr2k] I1111 18:39:02.027130 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-x7nm] I1111 18:39:02.263907 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-txbf] I1111 18:39:02.699439 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-l4xr] I1111 18:39:02.961453 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-mwk3] I1111 18:39:03.316577 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-6mp9] I1111 18:39:03.586097 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-mdl0] I1111 18:39:03.899580 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-j38d] I1111 18:39:04.256914 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-qkkc] I1111 18:39:04.432297 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-2frx] I1111 18:39:04.700258 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-ftxz] I1111 
18:39:05.350233 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-3v49] I1111 18:39:05.440266 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-wr2k] I1111 18:39:05.543044 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-x3lm] I1111 18:39:05.958126 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-6g89] I1111 18:39:06.062887 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-25ww] I1111 18:39:06.907431 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-w0n3] I1111 18:39:06.996672 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-wgml] I1111 18:39:07.482092 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-sbv3] I1111 18:39:07.739111 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-9zql] I1111 18:39:08.268033 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tfxw] I1111 18:39:08.469658 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wl7d] I1111 18:39:08.494823 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-8vjt] I1111 18:39:08.831920 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-xbng] I1111 18:39:08.953907 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-l7mq] I1111 18:39:09.373633 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-328r] I1111 18:39:09.567176 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-s0r1] I1111 
18:39:09.648895 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-4mc9] I1111 18:39:10.017439 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-bps6] I1111 18:39:10.282764 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-k18f] I1111 18:39:10.600084 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-p5d2] I1111 18:39:10.857286 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-ckqw] I1111 18:39:11.233301 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-4hvb] I1111 18:39:11.588868 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-5b23] I1111 18:39:11.782381 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-d7kc] I1111 18:39:11.879102 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-b8dp] I1111 18:39:12.138797 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-7t2c] I1111 18:39:12.427890 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-0xgb] I1111 18:39:12.850075 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-81dt] I1111 18:39:13.385066 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-g68w] I1111 18:39:14.230811 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-pj57] I1111 18:39:14.991263 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-kncq] I1111 18:39:15.272829 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-qfst] I1111 
18:39:15.450647 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-67dg] I1111 18:39:15.696548 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0qwp] I1111 18:39:15.771299 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0zg3] I1111 18:39:15.845018 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-99sx] I1111 18:39:16.059427 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-pzcs] I1111 18:39:16.353765 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-mrw2] I1111 18:39:16.577633 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-wjxk] I1111 18:39:16.982767 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-htgd] I1111 18:39:17.544936 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-bmg5] I1111 18:39:17.649214 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3v4t] I1111 18:39:18.154252 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-8qdk] I1111 18:39:18.310060 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3rxn] I1111 18:39:18.617075 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-8gl8] I1111 18:39:19.016250 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-46hp] I1111 18:39:19.477577 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-cwrh] I1111 18:39:19.938012 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-mps7] I1111 
18:39:20.147223 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-dl41] I1111 18:39:20.791216 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hpbs] I1111 18:39:21.700306 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-x1k6] I1111 18:39:22.175236 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-414x] I1111 18:39:22.459859 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gjvw] I1111 18:39:22.587232 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nsgb] I1111 18:39:22.902872 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-3kql] I1111 18:39:23.133551 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nmlf] I1111 18:39:23.384778 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-12km] I1111 18:39:23.589037 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-nk8v] I1111 18:39:23.660390 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-fnrp] I1111 18:39:23.845723 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-r9b5] I1111 18:39:24.010260 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-91q7] I1111 18:39:24.297671 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-82xj] I1111 18:39:24.603582 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-761n] I1111 18:39:24.837122 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-jb5n] I1111 
18:39:25.209054 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gsx4] I1111 18:39:25.490787 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-nwfv] I1111 18:39:25.781753 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-zr4r] I1111 18:39:25.828573 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-kql9] I1111 18:39:26.197595 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-n7xb] I1111 18:39:26.910286 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-mqhm] I1111 18:39:26.999475 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-rjbx] I1111 18:39:27.338519 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-61f6] I1111 18:39:27.647301 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-bnkp] I1111 18:39:28.033623 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-qpn6] I1111 18:39:28.944493 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-pblt] I1111 18:39:29.622154 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-714d] I1111 18:39:29.839487 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-573w] I1111 18:39:30.030796 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hfwn] I1111 18:39:30.278218 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-fzh9] I1111 18:39:30.396450 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-zx5h] I1111 
18:39:30.565589 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-vnw6] I1111 18:39:30.710700 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-76mt] I1111 18:39:31.075067 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-61g9] I1111 18:39:31.236304 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-w7mk] I1111 18:39:31.494766 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-bd41] I1111 18:39:31.605305 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-2t34] I1111 18:39:31.972326 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-dhp2] I1111 18:39:32.477650 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-704b] I1111 18:39:33.034549 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-8skg] I1111 18:39:33.382584 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-9qzm] I1111 18:39:33.588702 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-z816] I1111 18:39:33.986465 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-rj3v] I1111 18:39:34.249107 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-xz9p] I1111 18:39:34.567923 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-gsz2] I1111 18:39:34.915820 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-p3kz] I1111 18:39:35.157996 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-m3kz] I1111 
18:39:35.628456 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-wlsm] I1111 18:39:35.779676 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-tqlw] I1111 18:39:36.918460 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-g3sm] I1111 18:39:37.549209 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-8sl7] I1111 18:39:37.810307 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-4sf8] I1111 18:39:38.409946 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-vms6] I1111 18:39:38.475139 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-krsm] I1111 18:39:38.592150 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-qlj2] I1111 18:39:38.810591 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-jvsk] I1111 18:39:39.152628 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0gk0] I1111 18:39:39.568955 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wjwh] I1111 18:39:39.714656 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3wtw] I1111 18:39:39.820303 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-v5vd] I1111 18:39:40.048672 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-tp6q] I1111 18:39:40.261932 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-tfmq] I1111 18:39:40.577344 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-096v] I1111 
18:39:41.046671 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-04s2] I1111 18:39:41.434857 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-0l73] I1111 18:39:41.915326 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-kdmk] I1111 18:39:42.080964 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2br9] I1111 18:39:42.339260 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-4chv] I1111 18:39:42.711780 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-frx9] I1111 18:39:43.110570 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-ntlh] I1111 18:39:43.425878 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-q7mg] I1111 18:39:43.779251 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5xmx] I1111 18:39:43.989802 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1lp9] I1111 18:39:44.297255 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2r4g] I1111 18:39:45.357398 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-vk0l] I1111 18:39:46.383214 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-g8sw] I1111 18:39:46.555960 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-l4r8] I1111 18:39:46.661191 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-pv3n] I1111 18:39:47.113505 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-6ppm] I1111 
18:39:47.188476 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-s581] I1111 18:39:47.304973 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-jzzd] I1111 18:39:47.556619 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-92m2] I1111 18:39:47.704804 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-cpm9] I1111 18:39:48.037517 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-7x13] I1111 18:39:48.432171 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-j2hg] I1111 18:39:48.680666 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-fz0l] I1111 18:39:49.015524 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-p88r] I1111 18:39:49.391521 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-3r8m] I1111 18:39:49.733150 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-p6hh] I1111 18:39:50.234236 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-f6f9] I1111 18:39:50.738040 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-580w] I1111 18:39:50.895688 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nvrw] I1111 18:39:51.217524 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gkld] I1111 18:39:51.562688 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-1672] I1111 18:39:51.664724 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-vrc8] I1111 
18:39:51.998702 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-nlxt] I1111 18:39:52.282487 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-lkd8] I1111 18:39:52.487063 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-2wmv] I1111 18:39:53.279273 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-9fhd] I1111 18:39:53.748607 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-blx8] I1111 18:39:54.252631 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-gf66] I1111 18:39:54.849170 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-q4q0] I1111 18:39:55.078434 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0g6p] I1111 18:39:55.301198 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-r3kh] I1111 18:39:55.408948 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-dh60] I1111 18:39:55.573297 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-gw8l] I1111 18:39:56.093055 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-8ggm] I1111 18:39:56.373142 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-mpj9] I1111 18:39:56.509212 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-j0m7] I1111 18:39:56.814137 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-mxdc] I1111 18:39:57.158808 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-8wxd] I1111 
18:39:57.495696 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-bkk3] I1111 18:39:57.931057 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-hkff] I1111 18:39:58.470851 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-c6gk] I1111 18:39:58.796070 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-jggc] I1111 18:39:59.065624 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-93bc] I1111 18:39:59.421789 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-j473] I1111 18:39:59.664666 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-csmv] I1111 18:39:59.988656 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-xcfz] I1111 18:40:00.172523 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-j49d] I1111 18:40:00.495009 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-tz9z] I1111 18:40:00.818639 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-v4pz] I1111 18:40:01.073872 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-2qmd] I1111 18:40:01.482237 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5q5w] I1111 18:40:02.103121 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fhkz] I1111 18:40:02.740083 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-wcql] I1111 18:40:03.432864 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-6j43] I1111 
18:40:03.539459 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-lzw5] I1111 18:40:03.661212 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-t85p] I1111 18:40:03.839092 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-bdf9] I1111 18:40:04.070572 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-bfc9] I1111 18:40:04.238901 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-pz5k] I1111 18:40:04.318291 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-43d4] I1111 18:40:04.715717 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-xvbt] I1111 18:40:04.946384 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-f51r] I1111 18:40:05.574220 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-qcgf] I1111 18:40:06.028681 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-gcsx] I1111 18:40:06.235450 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-wq6z] I1111 18:40:06.463222 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-wd89] I1111 18:40:06.669367 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-tst0] I1111 18:40:06.991029 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-6ffp] I1111 18:40:07.290680 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-d4rc] I1111 18:40:07.625417 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-qb0l] I1111 18:40:07.989539 
1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-16wm] I1111 18:40:08.298710 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-6gtg] I1111 18:40:08.544512 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-jbj7] I1111 18:40:08.903102 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-m3qz] I1111 18:40:09.288837 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2v46] I1111 18:40:09.712452 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-645m] I1111 18:40:09.956980 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-hq6h] I1111 18:40:10.293164 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-th53] I1111 18:40:11.232052 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-tvq1] I1111 18:40:11.563648 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-25t9] I1111 18:40:11.906882 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-m78v] I1111 18:40:11.954090 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-sd0g] I1111 18:40:12.172313 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-cbbd] I1111 18:40:12.283434 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-31g6] I1111 18:40:12.366786 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hd4f] I1111 18:40:12.453117 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-s883] I1111 18:40:12.743109 1 
controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-cgqp] I1111 18:40:13.369057 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-v316] I1111 18:40:13.733467 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-07fp] I1111 18:40:14.042805 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-nxv1] I1111 18:40:14.342895 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-x5v7] I1111 18:40:14.612709 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-bt6c] I1111 18:40:14.897639 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-8mph] I1111 18:40:15.233431 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-jrv8] I1111 18:40:15.694456 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-bn89] I1111 18:40:16.291492 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-6621] I1111 18:40:16.339440 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-19p2] I1111 18:40:16.594489 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-n2pd] I1111 18:40:16.970156 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-bjhk] I1111 18:40:17.143069 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-w1ct] I1111 18:40:17.305550 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-2xp9] I1111 18:40:17.570090 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-swn7] I1111 18:40:17.967891 1 
controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-dxwr] I1111 18:40:19.436201 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-xt77] I1111 18:40:19.894755 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-p527] I1111 18:40:20.038867 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-tbsw] I1111 18:40:20.174110 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-q8vp] I1111 18:40:20.459233 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-s04p] I1111 18:40:20.625268 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-7p7q] I1111 18:40:20.848241 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-gqrf] I1111 18:40:20.924132 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-gk58] I1111 18:40:21.126664 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-x5pm] I1111 18:40:21.692854 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-2x5n] I1111 18:40:21.755381 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-vhcl] I1111 18:40:22.008464 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-nms7] I1111 18:40:22.436191 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-btxw] I1111 18:40:22.754123 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-1s6l] I1111 18:40:23.360269 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-c9px] I1111 18:40:23.637447 1 
controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-3ncs] I1111 18:40:24.181781 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-0q2t] I1111 18:40:24.647872 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-0ml6] I1111 18:40:24.669665 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-gfn7] I1111 18:40:24.884514 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-qzdk] I1111 18:40:25.124432 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-fdjf] I1111 18:40:25.405321 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-pz1w] I1111 18:40:26.895677 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-sjr3] I1111 18:40:27.307843 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-kzzr] I1111 18:40:27.668214 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-m30f] I1111 18:40:27.757480 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-dk0d] I1111 18:40:28.011769 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hvhw] I1111 18:40:28.403362 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-rkpf] I1111 18:40:28.473591 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-h79n] I1111 18:40:28.978385 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-x7s0] I1111 18:40:29.189443 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-7tzz] I1111 18:40:29.483829 1 
controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-z2sr] I1111 18:40:30.131831 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-fg4d] I1111 18:40:30.154205 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-52rs] I1111 18:40:30.656704 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4l1n] I1111 18:40:30.860403 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-xjdl] I1111 18:40:31.032045 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-8vsg] I1111 18:40:31.185006 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-x9dq] I1111 18:40:31.543804 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-hc2k] I1111 18:40:31.660632 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-cvg7] I1111 18:40:32.001493 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-4-hqtf] I1111 18:40:32.039168 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-5mcp] I1111 18:40:32.445498 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-xkf6] I1111 18:40:32.588909 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-3-lr3k] I1111 18:40:32.959447 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-qj4g] I1111 18:40:33.147214 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-9bt2] I1111 18:40:34.669491 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-2-f6lx]
```

node_lifecycle_controller said that gce-scale-cluster-minion-group-2-66d8's status hasn't been updated for 41 seconds:

```
I1111 18:37:28.370392 1 node_lifecycle_controller.go:1137] node gce-scale-cluster-minion-group-2-66d8 hasn't been updated for 41.740028231s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-11 18:33:03 +0000 UTC,LastTransitionTime:2019-11-11 17:12:17 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I1111 18:37:28.370515 1 node_lifecycle_controller.go:1137] node gce-scale-cluster-minion-group-2-66d8 hasn't been updated for 41.740159195s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-11 18:33:03 +0000 UTC,LastTransitionTime:2019-11-11 17:12:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,}
I1111 18:37:28.370533 1 node_lifecycle_controller.go:1137] node gce-scale-cluster-minion-group-2-66d8 hasn't been updated for 41.740177955s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-11 18:33:03 +0000 UTC,LastTransitionTime:2019-11-11 17:12:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,}
I1111 18:37:28.370547 1 node_lifecycle_controller.go:1137] node gce-scale-cluster-minion-group-2-66d8 hasn't been updated for 41.740192115s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-11 18:33:03 +0000 UTC,LastTransitionTime:2019-11-11 17:12:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,}
```

apiserver's logs show that kubelet was doing 'PUT /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/gce-scale-cluster-minion-group-2-66d8' every 10 seconds.
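For anyone digging through the controller_utils.go lines above, the per-node update cadence is easier to eyeball after extracting (timestamp, node) pairs. A throwaway sketch; the regex only assumes the glog format visible in the logs here:

```python
import re

# Matches glog-style lines like:
# I1111 18:39:00.439786 1 controller_utils.go:121] Update ready status of pods on node [gce-scale-cluster-minion-group-1-8jnb]
PATTERN = re.compile(
    r"I(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+controller_utils\.go:\d+\] "
    r"Update ready status of pods on node \[([\w.-]+)\]"
)

def node_update_times(log_text):
    """Return (timestamp, node) pairs for each 'Update ready status' entry."""
    return [(m.group(2), m.group(3)) for m in PATTERN.finditer(log_text)]

sample = (
    "I1111 18:39:00.439786 1 controller_utils.go:121] "
    "Update ready status of pods on node [gce-scale-cluster-minion-group-1-8jnb] "
    "I1111 18:39:00.543561 1 controller_utils.go:121] "
    "Update ready status of pods on node [gce-scale-cluster-minion-group-4-sxf2]"
)
print(node_update_times(sample))
```

It works on the run-on dump as-is, since it scans the whole text rather than going line by line.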
Something weird happened to watch by 'shared-informers': ``` I1111 18:32:20.160965 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3560113 labels= fields= timeout=7m14s I1111 18:32:20.161197 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3560113&timeout=7m14s&timeoutSeconds=434&watch=true: (384.017µs) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:32:21.303182 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3560961 labels= fields= timeout=5m46s I1111 18:36:15.241329 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3560961&timeout=5m46s&timeoutSeconds=346&watch=true: (3m53.938351655s) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:36:15.284570 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3732753 labels= fields= timeout=5m48s I1111 18:36:15.324265 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3732753&timeout=5m48s&timeoutSeconds=348&watch=true: (42.014758ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:36:16.442047 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3733880 labels= fields= timeout=6m14s I1111 18:36:52.300950 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3733880&timeout=6m14s&timeoutSeconds=374&watch=true: (35.859084497s) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:36:52.314218 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3760136 labels= fields= timeout=9m43s I1111 18:36:52.320361 1 httplog.go:90] GET 
/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3760136&timeout=9m43s&timeoutSeconds=583&watch=true: (9.10633ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:40:34.906921 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=3949153 labels= fields= timeout=6m22s I1111 18:42:20.654833 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=3949153&timeout=6m22s&timeoutSeconds=382&watch=true: (1m45.764834382s) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:42:20.697678 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4028224 labels= fields= timeout=5m44s I1111 18:42:20.702168 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4028224&timeout=5m44s&timeoutSeconds=344&watch=true: (4.82265ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:42:21.822285 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4029047 labels= fields= timeout=8m7s I1111 18:43:07.453394 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4029047&timeout=8m7s&timeoutSeconds=487&watch=true: (45.631249675s) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:43:07.533402 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4064578 labels= fields= timeout=7m16s I1111 18:43:07.592559 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4064578&timeout=7m16s&timeoutSeconds=436&watch=true: (60.648364ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:43:07.604225 1 get.go:251] Starting watch for 
/apis/coordination.k8s.io/v1/leases, rv=4064660 labels= fields= timeout=7m47s I1111 18:43:07.658464 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4064660&timeout=7m47s&timeoutSeconds=467&watch=true: (54.605721ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:43:07.659558 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4064853 labels= fields= timeout=8m33s I1111 18:43:07.716122 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4064853&timeout=8m33s&timeoutSeconds=513&watch=true: (56.895361ms) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] I1111 18:43:07.723797 1 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=4064886 labels= fields= timeout=7m17s I1111 18:43:07.724210 1 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=4064886&timeout=7m17s&timeoutSeconds=437&watch=true: (785.817µs) 0 [kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/a05efc6/shared-informers [::1]:53282] ```

One watch started at 18:36:52 and finished 9ms later; the next one did not start until 18:40:34.906921. So it seems there was a gap in the lease watch between 18:36:52 and 18:40:34.

Between 18:36:52 and 18:40:34 there are a number of leases LIST requests in the following pattern:

* a single /leases?limit=500 LIST request that finishes with 200
* a single /leases?continue=XXX request that finishes with 400

The kube-controller-manager was retrying this every 1 second, up until 18:40:34.
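That 200-then-400 pattern is consistent with a paginated relist whose continue token keeps expiring, forcing the client to restart the full LIST from scratch on each retry. A self-contained simulation of that loop (the fake API, item counts, and retry shape are assumptions for illustration, not client-go's reflector code):

```go
package main

import (
	"errors"
	"fmt"
)

// fakeAPI simulates an apiserver whose continue tokens expire for the first
// few paginated LISTs, mimicking the 200-then-400 pattern in the logs.
type fakeAPI struct {
	failuresLeft int
}

var errExpired = errors.New("400: continue token expired")

// listPage returns one page of results; requests using a continue token fail
// while failuresLeft > 0, then eventually succeed.
func (a *fakeAPI) listPage(cont string) (items int, next string, err error) {
	if cont == "" {
		return 500, "token-1", nil // first page succeeds (HTTP 200)
	}
	if a.failuresLeft > 0 {
		a.failuresLeft--
		return 0, "", errExpired // continue request fails (HTTP 400)
	}
	return 120, "", nil // final page succeeds, no further continue token
}

// relistUntilSuccess restarts the whole paginated LIST from scratch whenever
// a continue request fails, counting how many attempts the relist took.
func relistUntilSuccess(a *fakeAPI) (attempts, total int) {
	for {
		attempts++
		total = 0
		cont := ""
		ok := true
		for {
			n, next, err := a.listPage(cont)
			if err != nil {
				ok = false // drop partial results and retry the full LIST
				break
			}
			total += n
			if next == "" {
				break
			}
			cont = next
		}
		if ok {
			return attempts, total
		}
	}
}

func main() {
	api := &fakeAPI{failuresLeft: 3}
	attempts, total := relistUntilSuccess(api)
	fmt.Printf("attempts=%d items=%d\n", attempts, total)
}
```

In the real cluster each failed attempt also means the informer still has no lease watch running, so a long run of such retries would explain the multi-minute watch gap observed above.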
non_defect
gce master scale performance flakes due to kube controller manager loosing lease watch sig scalability debugging done by mborsz
controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node controller utils go update ready status of pods on node node lifecycle controller said that gce scale cluster minion group s status hasn t been updated for seconds node lifecycle controller go node gce scale cluster minion group hasn t been updated for last ready is nodecondition type ready status true lastheartbeattime utc lasttransitiontime utc reason kubeletready message kubelet is posting ready status apparmor enabled node lifecycle controller go node gce scale cluster minion group hasn t been updated for last memorypressure is nodecondition type memorypressure status false lastheartbeattime utc lasttransitiontime utc reason kubelethassufficientmemory message kubelet has sufficient memory available 
node lifecycle controller go node gce scale cluster minion group hasn t been updated for last diskpressure is nodecondition type diskpressure status false lastheartbeattime utc lasttransitiontime utc reason kubelethasnodiskpressure message kubelet has no disk pressure node lifecycle controller go node gce scale cluster minion group hasn t been updated for last pidpressure is nodecondition type pidpressure status false lastheartbeattime utc lasttransitiontime utc reason kubelethassufficientpid message kubelet has sufficient pid available apiserver s logs show that kubelet was doing put apis coordination io namespaces kube node lease leases gce scale cluster minion group every seconds something weird happened to watch by shared informers get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields 
timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true get go starting watch for apis coordination io leases rv labels fields timeout httplog go get apis coordination io leases allowwatchbookmarks true resourceversion timeout timeoutseconds watch true one watch started at and finished later the next one started at so it seems that there was a gap in lease watche between and between and there is a number of leases list requests in the following pattern a single leases limit list request that finishes with a single leases continue xxx request that finishes with the kube controller manager is retrying every second up to
0
20,275
29,504,502,470
IssuesEvent
2023-06-03 06:01:23
ThosRTanner/inforss
https://api.github.com/repos/ThosRTanner/inforss
opened
Options window information tab doesn't work properly on waterfox classic
Browser compatibility
For some reason the translators and contributors entries don't come up. There's a workround to stop this blanking the options window entirely, but it's not clear why this doesn't work.
True
Options window information tab doesn't work properly on waterfox classic - For some reason the translators and contributors entries don't come up. There's a workround to stop this blanking the options window entirely, but it's not clear why this doesn't work.
non_defect
options window information tab doesn t work properly on waterfox classic for some reason the translators and contributors entries don t come up there s a workround to stop this blanking the options window entirely but it s not clear why this doesn t work
0
69,806
22,680,135,899
IssuesEvent
2022-07-04 09:09:10
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Resized Room Topic input goes over other elements
T-Defect S-Tolerable A-Room-Settings O-Occasional Z-Community-Testing
### Steps to reproduce 1. Where are you starting? What can you see? -I have the Room settings modal open 2. What do you click? -I add a multi-line topic 3. More steps… -I resize the input element Before resizing: ![image](https://user-images.githubusercontent.com/18530109/176430141-91c5e1c2-7689-4332-914a-8d3770117bc0.png) After: ![image](https://user-images.githubusercontent.com/18530109/176430388-a8d5bc4f-f503-476d-82f2-3bc9969cb0ad.png) ### Outcome #### What did you expect? I expected other elements to be pushed down, giving the Room Topic enough room to be displayed. #### What happened instead? Room Topic goes over other elements instead. ### Operating system Windows 11 Home 21H2 ### Application version Element Nightly version: 0.0.1-nightly.2022062901 Olm version: 3.2.8 ### How did you install the app? From https://element.io/get-started#nightly ### Homeserver matrix.org ### Will you send logs? No
1.0
Resized Room Topic input goes over other elements - ### Steps to reproduce 1. Where are you starting? What can you see? -I have the Room settings modal open 2. What do you click? -I add a multi-line topic 3. More steps… -I resize the input element Before resizing: ![image](https://user-images.githubusercontent.com/18530109/176430141-91c5e1c2-7689-4332-914a-8d3770117bc0.png) After: ![image](https://user-images.githubusercontent.com/18530109/176430388-a8d5bc4f-f503-476d-82f2-3bc9969cb0ad.png) ### Outcome #### What did you expect? I expected other elements to be pushed down, giving the Room Topic enough room to be displayed. #### What happened instead? Room Topic goes over other elements instead. ### Operating system Windows 11 Home 21H2 ### Application version Element Nightly version: 0.0.1-nightly.2022062901 Olm version: 3.2.8 ### How did you install the app? From https://element.io/get-started#nightly ### Homeserver matrix.org ### Will you send logs? No
defect
resized room topic input goes over other elements steps to reproduce where are you starting what can you see i have the room settings modal open what do you click i add a multi line topic more steps… i resize the input element before resizing after outcome what did you expect i expected other elements to be pushed down giving the room topic enough room to be displayed what happened instead room topic goes over other elements instead operating system windows home application version element nightly version nightly olm version how did you install the app from homeserver matrix org will you send logs no
1
315,868
27,112,548,995
IssuesEvent
2023-02-15 16:14:53
wazuh/wazuh
https://api.github.com/repos/wazuh/wazuh
closed
Release 4.4.0 - Release Candidate 1 - Service
team/cicd type/release tracking release test/4.4.0
### Packages tests metrics information ||| | :-- | :-- | | **Main release candidate issue** | #16132 | | **Main packages metrics issue** | #16142 | | **Version** | 4.4.0 | | **Release candidate** | RC1 | | **Tag** | https://github.com/wazuh/wazuh/tree/v4.4.0-rc1 | --- | System | Status | Build | | :-- | :--: | :-- | | CentOS 6 | :green_circle: |https://ci.wazuh.info/view/Tests/job/Test_service/5686/ | | CentOS 7 | :green_circle: | https://ci.wazuh.info/view/Tests/job/Test_service/5687/ | | Ubuntu Trusty | :green_circle: | https://ci.wazuh.info/view/Tests/job/Test_service/5688/ | | Ubuntu Bionic | :green_circle: | https://ci.wazuh.info/view/Tests/job/Test_service/5689/ | --- Status legend: :black_circle: - Pending/In progress :white_circle: - Skipped :red_circle: - Rejected :yellow_circle: - Ready to review :green_circle: - Approved --- ## Auditor's validation In order to close and proceed with the release or the next candidate version, the following auditors must give the green light to this RC. - [x] @alberpilot - [x] @okynos ---
1.0
Release 4.4.0 - Release Candidate 1 - Service - ### Packages tests metrics information ||| | :-- | :-- | | **Main release candidate issue** | #16132 | | **Main packages metrics issue** | #16142 | | **Version** | 4.4.0 | | **Release candidate** | RC1 | | **Tag** | https://github.com/wazuh/wazuh/tree/v4.4.0-rc1 | --- | System | Status | Build | | :-- | :--: | :-- | | CentOS 6 | :green_circle: |https://ci.wazuh.info/view/Tests/job/Test_service/5686/ | | CentOS 7 | :green_circle: | https://ci.wazuh.info/view/Tests/job/Test_service/5687/ | | Ubuntu Trusty | :green_circle: | https://ci.wazuh.info/view/Tests/job/Test_service/5688/ | | Ubuntu Bionic | :green_circle: | https://ci.wazuh.info/view/Tests/job/Test_service/5689/ | --- Status legend: :black_circle: - Pending/In progress :white_circle: - Skipped :red_circle: - Rejected :yellow_circle: - Ready to review :green_circle: - Approved --- ## Auditor's validation In order to close and proceed with the release or the next candidate version, the following auditors must give the green light to this RC. - [x] @alberpilot - [x] @okynos ---
non_defect
release release candidate service packages tests metrics information main release candidate issue main packages metrics issue version release candidate tag system status build centos green circle centos green circle ubuntu trusty green circle ubuntu bionic green circle status legend black circle pending in progress white circle skipped red circle rejected yellow circle ready to review green circle approved auditor s validation in order to close and proceed with the release or the next candidate version the following auditors must give the green light to this rc alberpilot okynos
0
62,337
17,023,900,594
IssuesEvent
2021-07-03 04:27:13
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Cannot find postcode from postal_code relation
Component: nominatim Priority: major Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 2.20pm, Sunday, 16th March 2014]** The query for the postalcode "04457" (http://nominatim.openstreetmap.org/search.php?postalcode=04457) results in two adresses: "Mölkau, Zweinaundorf, Leipzig, Sachsen, 04457, Deutschland, European Union" and "Lincoln, Piscataquis, Maine, 04457, United States of America" but the result in germany actually has the postalcode 04316. Furthermore querys for streets in this german region deliver wrong postcodes for the addresses. For example the query (http://nominatim.openstreetmap.org/search.php?q=Paul-Kl%C3%B6psch-Stra%C3%9Fe)delivers "Paul-Klöpsch-Straße, Holzhausen, Zweinaundorf, Leipzig, Sachsen, 04288, Deutschland, Europäische Union" but the postal code should be 04316.
1.0
Cannot find postcode from postal_code relation - **[Submitted to the original trac issue database at 2.20pm, Sunday, 16th March 2014]** The query for the postalcode "04457" (http://nominatim.openstreetmap.org/search.php?postalcode=04457) results in two adresses: "Mölkau, Zweinaundorf, Leipzig, Sachsen, 04457, Deutschland, European Union" and "Lincoln, Piscataquis, Maine, 04457, United States of America" but the result in germany actually has the postalcode 04316. Furthermore querys for streets in this german region deliver wrong postcodes for the addresses. For example the query (http://nominatim.openstreetmap.org/search.php?q=Paul-Kl%C3%B6psch-Stra%C3%9Fe)delivers "Paul-Klöpsch-Straße, Holzhausen, Zweinaundorf, Leipzig, Sachsen, 04288, Deutschland, Europäische Union" but the postal code should be 04316.
defect
cannot find postcode from postal code relation the query for the postalcode results in two adresses mölkau zweinaundorf leipzig sachsen deutschland european union and lincoln piscataquis maine united states of america but the result in germany actually has the postalcode furthermore querys for streets in this german region deliver wrong postcodes for the addresses for example the query paul klöpsch straße holzhausen zweinaundorf leipzig sachsen deutschland europäische union but the postal code should be
1
4,172
2,610,088,669
IssuesEvent
2015-02-26 18:26:53
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
How to best treat acne in Shenzhen
auto-migrated Priority-Medium Type-Defect
``` How to best treat acne in Shenzhen [Shenzhen Hanfang Keyan nationwide hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. The chain is built on the Korean formula Hanfang Keyan, a state-licensed treatment-grade cosmetic authority and premier acne remedy. Using a secret Korean formula together with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe color-light" instrument, it pioneered contract-guaranteed treatment of pimples and acne in China and has successfully cleared acne from many customers' faces. ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:29
1.0
How to best treat acne in Shenzhen - ``` How to best treat acne in Shenzhen [Shenzhen Hanfang Keyan nationwide hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. The chain is built on the Korean formula Hanfang Keyan, a state-licensed treatment-grade cosmetic authority and premier acne remedy. Using a secret Korean formula together with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe color-light" instrument, it pioneered contract-guaranteed treatment of pimples and acne in China and has successfully cleared acne from many customers' faces. ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:29
defect
how to best treat acne in shenzhen how to best treat acne in shenzhen shenzhen hanfang keyan nationwide hotline hour qq shenzhen hanfang keyan is a professional acne removal chain the chain is built on the korean formula hanfang keyan a state licensed treatment grade cosmetic authority and premier acne remedy using a secret korean formula together with a professional no rebound healthy acne removal technique and an advanced deluxe color light instrument it pioneered contract guaranteed treatment of pimples and acne in china and has successfully cleared acne from many customers faces original issue reported on code google com by szft com on may at
1
304,495
23,068,991,734
IssuesEvent
2022-07-25 16:11:45
chicago-cdac/nm-exp-active-netrics
https://api.github.com/repos/chicago-cdac/nm-exp-active-netrics
closed
Create a repo for data pipeline to Netrics data
documentation data
Documentation on how to obtain data collected by Netrics deployment so that it can be further visualized and analyzed.
1.0
Create a repo for data pipeline to Netrics data - Documentation on how to obtain data collected by Netrics deployment so that it can be further visualized and analyzed.
non_defect
create a repo for data pipeline to netrics data documentation on how to obtain data collected by netrics deployment so that it can be further visualized and analyzed
0
19,323
3,188,821,829
IssuesEvent
2015-09-29 00:10:29
aBitNomadic/shimeji-ee
https://api.github.com/repos/aBitNomadic/shimeji-ee
closed
Will not run
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Just run the app exe 2. Nothing happends on the screens. What is the expected output? What do you see instead? To see the animations What version of the product are you using? On what operating system? 1.0.3 on Win7 w/SP1 Pro x64 Please provide any additional information below. Java 7.0.4 x32 & 64 installed Tri-Monitor setup dual nvidia 260GTX cards ``` Original issue reported on code.google.com by `anif...@gmail.com` on 1 May 2012 at 10:58
1.0
Will not run - ``` What steps will reproduce the problem? 1. Just run the app exe 2. Nothing happends on the screens. What is the expected output? What do you see instead? To see the animations What version of the product are you using? On what operating system? 1.0.3 on Win7 w/SP1 Pro x64 Please provide any additional information below. Java 7.0.4 x32 & 64 installed Tri-Monitor setup dual nvidia 260GTX cards ``` Original issue reported on code.google.com by `anif...@gmail.com` on 1 May 2012 at 10:58
defect
will not run what steps will reproduce the problem just run the app exe nothing happends on the screens what is the expected output what do you see instead to see the animations what version of the product are you using on what operating system on w pro please provide any additional information below java installed tri monitor setup dual nvidia cards original issue reported on code google com by anif gmail com on may at
1
73,603
24,711,866,199
IssuesEvent
2022-10-20 01:55:07
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Unable to join video room after update to 0.3.0
T-Defect
### Steps to reproduce 1. Click existing video room that worked fine with 0.2.14 2. Click join ![element_call_0 3 0](https://user-images.githubusercontent.com/414984/196789833-d61b6e25-1f6b-4e30-a226-5ba1c702739b.png) 3. See "joining ..." being displayed without anything happening According the browser console it appears Element is actually trying to join https://call.ems.host instead of the configured https://call.domain.de server from config.json ``` "element_call": { "url": "https://call.domain.de" }, ``` ![element_call_0 3 0_00](https://user-images.githubusercontent.com/414984/196792409-a4484763-6262-4d69-b523-e26d1414d3e3.png) ![element_call_0 3 0_01](https://user-images.githubusercontent.com/414984/196791882-ba633a34-8df8-439e-8da8-e1b8f2000317.png) Element Version used is 1.11.10 which was working just fine with the previous element call 0.2.14, just replaced the docker image with 0.3.0, everything else stayed the same. Reverting back to v0.2.14 with ``` # docker pull ghcr.io/vector-im/element-call@sha256:ec88c86bb22843ae3f81bfff31cbd376afe415ba441818be17630ad8ed2efdce # docker run --name call --add-host call.domain.de:xxx.xxx.xxx.xxx -t -d -p xxx.xxx.xxx.xxx:8080:8080 -e "VITE_DEFAULT_HOMESERVER=https://matrix.domain.de" --restart always --cap-add MKNOD --privileged ghcr.io/vector-im/element-call@sha256:ec88c86bb22843ae3f81bfff31cbd376afe415ba441818be17630ad8ed2efdce ``` makes things work again. ### Outcome #### What did you expect? Being able to join the video room #### What happened instead? Being stuck in "joining ..." ### Operating system Linux ### Browser information Chrome 106.0.5249.119 ### URL for webapp call.domain.de ### Will you send logs? Yes
1.0
Unable to join video room after update to 0.3.0 - ### Steps to reproduce 1. Click existing video room that worked fine with 0.2.14 2. Click join ![element_call_0 3 0](https://user-images.githubusercontent.com/414984/196789833-d61b6e25-1f6b-4e30-a226-5ba1c702739b.png) 3. See "joining ..." being displayed without anything happening According the browser console it appears Element is actually trying to join https://call.ems.host instead of the configured https://call.domain.de server from config.json ``` "element_call": { "url": "https://call.domain.de" }, ``` ![element_call_0 3 0_00](https://user-images.githubusercontent.com/414984/196792409-a4484763-6262-4d69-b523-e26d1414d3e3.png) ![element_call_0 3 0_01](https://user-images.githubusercontent.com/414984/196791882-ba633a34-8df8-439e-8da8-e1b8f2000317.png) Element Version used is 1.11.10 which was working just fine with the previous element call 0.2.14, just replaced the docker image with 0.3.0, everything else stayed the same. Reverting back to v0.2.14 with ``` # docker pull ghcr.io/vector-im/element-call@sha256:ec88c86bb22843ae3f81bfff31cbd376afe415ba441818be17630ad8ed2efdce # docker run --name call --add-host call.domain.de:xxx.xxx.xxx.xxx -t -d -p xxx.xxx.xxx.xxx:8080:8080 -e "VITE_DEFAULT_HOMESERVER=https://matrix.domain.de" --restart always --cap-add MKNOD --privileged ghcr.io/vector-im/element-call@sha256:ec88c86bb22843ae3f81bfff31cbd376afe415ba441818be17630ad8ed2efdce ``` makes things work again. ### Outcome #### What did you expect? Being able to join the video room #### What happened instead? Being stuck in "joining ..." ### Operating system Linux ### Browser information Chrome 106.0.5249.119 ### URL for webapp call.domain.de ### Will you send logs? Yes
defect
unable to join video room after update to steps to reproduce click existing video room that worked fine with click join see joining being displayed without anything happening according the browser console it appears element is actually trying to join instead of the configured server from config json element call url element version used is which was working just fine with the previous element call just replaced the docker image with everything else stayed the same reverting back to with docker pull ghcr io vector im element call docker run name call add host call domain de xxx xxx xxx xxx t d p xxx xxx xxx xxx e vite default homeserver restart always cap add mknod privileged ghcr io vector im element call makes things work again outcome what did you expect being able to join the video room what happened instead being stuck in joining operating system linux browser information chrome url for webapp call domain de will you send logs yes
1
764,902
26,822,722,811
IssuesEvent
2023-02-02 10:36:13
pkp/pkp-lib
https://api.github.com/repos/pkp/pkp-lib
closed
Improve submission wizard after usability tests
Community:2:Priority Enhancement:1:Minor
**Describe the problem you would like to solve** @Devika008 ran usability testing sessions on the new submission wizard (#7191) with editors and authors. The following feedback had broad consensus: - Change the order of the steps in the wizard so that submission details are first. The preferred order is: 1) Details, 2) Upload Files, 3) Contributors, 4) For the Editors. - Move the keywords field into the Details step, because it is a fundamental part of the submission itself. - In the screen to start a new submission, move Language above Title, so that authors feel they are entering the title in the language they selected. - Clarify the error message when a contributor is missing the given name in a language (see below). **What application are you using?** `main` branch (3.4 pre-release) **Additional Information** When a contributor is missing the given name in the required language, the message says: > One or more of the contributors are missing details in {$language}. This should be updated to say that the given name must be completed in that language for all contributors. ![Screenshot from 2023-01-26 10-55-47](https://user-images.githubusercontent.com/2306629/214819040-f0fc269e-2687-4c9b-900f-deda2b3ca837.png)
1.0
Improve submission wizard after usability tests - **Describe the problem you would like to solve** @Devika008 ran usability testing sessions on the new submission wizard (#7191) with editors and authors. The following feedback had broad consensus: - Change the order of the steps in the wizard so that submission details are first. The preferred order is: 1) Details, 2) Upload Files, 3) Contributors, 4) For the Editors. - Move the keywords field into the Details step, because it is a fundamental part of the submission itself. - In the screen to start a new submission, move Language above Title, so that authors feel they are entering the title in the language they selected. - Clarify the error message when a contributor is missing the given name in a language (see below). **What application are you using?** `main` branch (3.4 pre-release) **Additional Information** When a contributor is missing the given name in the required language, the message says: > One or more of the contributors are missing details in {$language}. This should be updated to say that the given name must be completed in that language for all contributors. ![Screenshot from 2023-01-26 10-55-47](https://user-images.githubusercontent.com/2306629/214819040-f0fc269e-2687-4c9b-900f-deda2b3ca837.png)
non_defect
improve submission wizard after usability tests describe the problem you would like to solve ran usability testing sessions on the new submission wizard with editors and authors the following feedback had broad consensus change the order of the steps in the wizard so that submission details are first the preferred order is details upload files contributors for the editors move the keywords field into the details step because it is a fundamental part of the submission itself in the screen to start a new submission move language above title so that authors feel they are entering the title in the language they selected clarify the error message when a contributor is missing the given name in a language see below what application are you using main branch pre release additional information when a contributor is missing the given name in the required language the message says one or more of the contributors are missing details in language this should be updated to say that the given name must be completed in that language for all contributors
0
105,235
4,232,938,165
IssuesEvent
2016-07-05 04:28:17
xcat2/xcat-core
https://api.github.com/repos/xcat2/xcat-core
closed
[fvt]2.12.1xcatprobe image -c return errors if no IMAGENAME and IMAGEUUID in compute node's xcatinfo
component:xcatprobe priority:normal type:bug xCAT 2.12.1 Sprint 2
env:linux build: lsdef - Version 2.12.1 (git commit fddd78e1ec6a9d59bc921bef24e300e8d94f69e4, built Sun Jun 19 06:45:32 EDT 2016) How to reproduce: ``` c910f04x30v01:/opt/xcat/bin # ./xcatprobe image -c Smartmatch is experimental at ./xcatprobe line 64. Smartmatch is experimental at ./xcatprobe line 173. node is c910f04x30v11 c910f04x30v11 is diskless node is switch-10-4-25-1 [ OK ] Pinging c910f04x30v11 ---- Gathering information from node c910f04x30v11 ---- [FAIL] Not able to determine os image name or uuid of the image installed on any compute node. ``` ----------------------> no IMAGENAME and IMAGEUUID in compute node's xcatinfo so returns error here
1.0
[fvt]2.12.1xcatprobe image -c return errors if no IMAGENAME and IMAGEUUID in compute node's xcatinfo - env:linux build: lsdef - Version 2.12.1 (git commit fddd78e1ec6a9d59bc921bef24e300e8d94f69e4, built Sun Jun 19 06:45:32 EDT 2016) How to reproduce: ``` c910f04x30v01:/opt/xcat/bin # ./xcatprobe image -c Smartmatch is experimental at ./xcatprobe line 64. Smartmatch is experimental at ./xcatprobe line 173. node is c910f04x30v11 c910f04x30v11 is diskless node is switch-10-4-25-1 [ OK ] Pinging c910f04x30v11 ---- Gathering information from node c910f04x30v11 ---- [FAIL] Not able to determine os image name or uuid of the image installed on any compute node. ``` ----------------------> no IMAGENAME and IMAGEUUID in compute node's xcatinfo so returns error here
non_defect
image c return errors if no imagename and imageuuid in compute node s xcatinfo env linux build lsdef version git commit built sun jun edt how to reproduce opt xcat bin xcatprobe image c smartmatch is experimental at xcatprobe line smartmatch is experimental at xcatprobe line node is is diskless node is switch pinging gathering information from node not able to determine os image name or uuid of the image installed on any compute node no imagename and imageuuid in compute node s xcatinfo so returns error here
0
12,197
9,635,717,474
IssuesEvent
2019-05-16 02:30:04
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
Verify safe removal of legacy testing IVTs
4 - In Review Area-Infrastructure
The following IVTs are likely unused, in which case they can be removed: * RoslynETAHost * RoslynTaoActions
1.0
Verify safe removal of legacy testing IVTs - The following IVTs are likely unused, in which case they can be removed: * RoslynETAHost * RoslynTaoActions
non_defect
verify safe removal of legacy testing ivts the following ivts are likely unused in which case they can be removed roslynetahost roslyntaoactions
0
11,648
5,060,796,053
IssuesEvent
2016-12-22 13:26:07
junit-team/junit5
https://api.github.com/repos/junit-team/junit5
closed
Generate binary test results in JUnit Platform Gradle Plugin
build enhancement Platform up-for-grabs
## Status Quo The JUnit Platform Gradle Plugin ("JUnit plugin") is currently capable of generating XML reports in the de facto standard "JUnit 4 XML report" format; however, the JUnit plugin does not generate so called _binary test results_ analogous to the standard Gradle `test` task. Consequently, test results tracked by the JUnit plugin do not show up in Gradle's test reports and _build scans_. ## Related Issues - #315 - #475 ## Research The `executeTests()` method in the Gradle [`Test`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/tasks/testing/Test.java) task orchestrates the entire process for the standard `test` task within a Gradle build script. The `Test` task uses a [`TestResultSerializer`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/internal/tasks/testing/junit/result/TestResultSerializer.java) to serialize results into a binary file named `results.bin` located in the `build/test-results/test/binary` folder for the current project. The `/test/` portion of the path is the name of the `Test` task in the current project. The path to this folder can be set via the `binResultsDir` property of the `Test` task. The binary results are actually generated by invoking the `write(Collection<TestClassResult> results)` method on the `TestResultSerializer`. The collection of `results` is composed of instances of [`TestClassResult`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/internal/tasks/testing/junit/result/TestClassResult.java) which in turn are composed of instances of [`TestMethodResult`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/internal/tasks/testing/junit/result/TestMethodResult.java). 
For the standard `Test` task, the class and method results are collected via the [`TestReportDataCollector`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/internal/tasks/testing/junit/result/TestReportDataCollector.java) which implements Gradle's [`TestListener`](https://github.com/gradle/gradle/blob/master/subprojects/testing-base/src/main/java/org/gradle/api/tasks/testing/TestListener.java) API. For the JUnit Platform, it should be possible to generate and collect instances of `TestClassResult` and `TestMethodResult` within a JUnit `TestExecutionListener` (perhaps analogous to the existing `XmlReportsWritingListener`) and then serialize them into the required binary format via Gradle's `TestResultSerializer`. ## Deliverables - [ ] Generate _binary test results_ in the JUnit Platform Gradle Plugin
1.0
Generate binary test results in JUnit Platform Gradle Plugin - ## Status Quo The JUnit Platform Gradle Plugin ("JUnit plugin") is currently capable of generating XML reports in the de facto standard "JUnit 4 XML report" format; however, the JUnit plugin does not generate so called _binary test results_ analogous to the standard Gradle `test` task. Consequently, test results tracked by the JUnit plugin do not show up in Gradle's test reports and _build scans_. ## Related Issues - #315 - #475 ## Research The `executeTests()` method in the Gradle [`Test`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/tasks/testing/Test.java) task orchestrates the entire process for the standard `test` task within a Gradle build script. The `Test` task uses a [`TestResultSerializer`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/internal/tasks/testing/junit/result/TestResultSerializer.java) to serialize results into a binary file named `results.bin` located in the `build/test-results/test/binary` folder for the current project. The `/test/` portion of the path is the name of the `Test` task in the current project. The path to this folder can be set via the `binResultsDir` property of the `Test` task. The binary results are actually generated by invoking the `write(Collection<TestClassResult> results)` method on the `TestResultSerializer`. The collection of `results` is composed of instances of [`TestClassResult`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/internal/tasks/testing/junit/result/TestClassResult.java) which in turn are composed of instances of [`TestMethodResult`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/internal/tasks/testing/junit/result/TestMethodResult.java). 
For the standard `Test` task, the class and method results are collected via the [`TestReportDataCollector`](https://github.com/gradle/gradle/blob/master/subprojects/testing-jvm/src/main/java/org/gradle/api/internal/tasks/testing/junit/result/TestReportDataCollector.java) which implements Gradle's [`TestListener`](https://github.com/gradle/gradle/blob/master/subprojects/testing-base/src/main/java/org/gradle/api/tasks/testing/TestListener.java) API. For the JUnit Platform, it should be possible to generate and collect instances of `TestClassResult` and `TestMethodResult` within a JUnit `TestExecutionListener` (perhaps analogous to the existing `XmlReportsWritingListener`) and then serialize them into the required binary format via Gradle's `TestResultSerializer`. ## Deliverables - [ ] Generate _binary test results_ in the JUnit Platform Gradle Plugin
non_defect
generate binary test results in junit platform gradle plugin status quo the junit platform gradle plugin junit plugin is currently capable of generating xml reports in the de facto standard junit xml report format however the junit plugin does not generate so called binary test results analogous to the standard gradle test task consequently test results tracked by the junit plugin do not show up in gradle s test reports and build scans related issues research the executetests method in the gradle task orchestrates the entire process for the standard test task within a gradle build script the test task uses a to serialize results into a binary file named results bin located in the build test results test binary folder for the current project the test portion of the path is the name of the test task in the current project the path to this folder can be set via the binresultsdir property of the test task the binary results are actually generated by invoking the write collection results method on the testresultserializer the collection of results is composed of instances of which in turn are composed of instances of for the standard test task the class and method results are collected via the which implements gradle s api for the junit platform it should be possible to generate and collect instances of testclassresult and testmethodresult within a junit testexecutionlistener perhaps analogous to the existing xmlreportswritinglistener and then serialize them into the required binary format via gradle s testresultserializer deliverables generate binary test results in the junit platform gradle plugin
0
11,602
2,659,910,471
IssuesEvent
2015-03-19 00:28:26
perfsonar/project
https://api.github.com/repos/perfsonar/project
closed
Find Replacement for Anaconda Interactive Mode
Milestone-Future Priority-Medium Type-Defect
Original [issue 723](https://code.google.com/p/perfsonar-ps/issues/detail?id=723) created by arlake228 on 2013-05-08T16:57:35.000Z: Anaconda, which is what manages the install process for the NetInstall, has deprecated interactive mode. Interactive mode is what prompts users for root passwords, hard drive selection and a few other things. We recently ran into a bug with partitioning under interactive mode that caused quite a few headaches. Some options to explore may be Fedora Spins or see if we can tie into the default CentOS kickstart generator somehow. See some more discussion on the issue below: http://fedoraproject.org/wiki/Anaconda/Changes#Kickstart_Changes https://bugzilla.redhat.com/show_bug.cgi?id=660754
1.0
Find Replacement for Anaconda Interactive Mode - Original [issue 723](https://code.google.com/p/perfsonar-ps/issues/detail?id=723) created by arlake228 on 2013-05-08T16:57:35.000Z: Anaconda, which is what manages the install process for the NetInstall, has deprecated interactive mode. Interactive mode is what prompts users for root passwords, hard drive selection and a few other things. We recently ran into a bug with partitioning under interactive mode that caused quite a few headaches. Some options to explore may be Fedora Spins or see if we can tie into the default CentOS kickstart generator somehow. See some more discussion on the issue below: http://fedoraproject.org/wiki/Anaconda/Changes#Kickstart_Changes https://bugzilla.redhat.com/show_bug.cgi?id=660754
defect
find replacement for anaconda interactive mode original created by on anaconda which is what manages the install process for the netinstall has deprecated interactive mode interactive mode is what prompts users for root passwords hard drive selection and a few other things we recently ran into a bug with partitioning under interactive mode that caused quite a few headaches some options to explore may be fedora spins or see if we can tie into the default centos kickstart generator somehow see some more discussion on the issue below
1
268,402
28,565,958,343
IssuesEvent
2023-04-21 02:08:14
andygonzalez2010/store
https://api.github.com/repos/andygonzalez2010/store
closed
CVE-2019-16943 (High) detected in jackson-databind-2.9.8.jar - autoclosed
Mend: dependency security vulnerability
## CVE-2019-16943 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p> <p> Dependency Hierarchy: - jackson-datatype-hibernate5-2.9.8.jar (Root Library) - :x: **jackson-databind-2.9.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/andygonzalez2010/store/commit/3f6d614029f4d6cfdddfcef8468949cb7822503c">3f6d614029f4d6cfdddfcef8468949cb7822503c</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the p6spy (3.8.6) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of com.p6spy.engine.spy.P6DataSource mishandling. 
<p>Publish Date: 2019-10-01 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-16943>CVE-2019-16943</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16943">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16943</a></p> <p>Release Date: 2019-10-01</p> <p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.9.10.1</p> <p>Direct dependency fix Resolution (com.fasterxml.jackson.datatype:jackson-datatype-hibernate5): 2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-16943 (High) detected in jackson-databind-2.9.8.jar - autoclosed - ## CVE-2019-16943 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p> <p> Dependency Hierarchy: - jackson-datatype-hibernate5-2.9.8.jar (Root Library) - :x: **jackson-databind-2.9.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/andygonzalez2010/store/commit/3f6d614029f4d6cfdddfcef8468949cb7822503c">3f6d614029f4d6cfdddfcef8468949cb7822503c</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the p6spy (3.8.6) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of com.p6spy.engine.spy.P6DataSource mishandling. 
<p>Publish Date: 2019-10-01 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-16943>CVE-2019-16943</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16943">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16943</a></p> <p>Release Date: 2019-10-01</p> <p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.9.10.1</p> <p>Direct dependency fix Resolution (com.fasterxml.jackson.datatype:jackson-datatype-hibernate5): 2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jackson datatype jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of com engine spy mishandling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution com fasterxml jackson datatype jackson datatype step up your open source security game with mend
0
8,974
27,295,087,239
IssuesEvent
2023-02-23 19:37:41
bcgov/api-services-portal
https://api.github.com/repos/bcgov/api-services-portal
closed
Cypress Test -Change Authorization scope from 'Kong API Key with ACL' to 'Outh2 Client Credential'
automation aps-demo
- [ ] Check manually If 'Kong API Key with ACL' to 'Outh2 Client Credential' works correctly - [ ] Prepare Automation test to change authorization scope Change Authorization profile from Kong ACL-API to Client Credential 1.1 Authenticates api owner 1.2 Activates the namespace 1.3 Create an authorization profile 1.4 Deactivate the service for Test environment 1.5 Update the authorization scope from Kong ACL-API to Client Credential 1.6 applies authorization plugin to service published to Kong Gateway 1.7 activate the service for Test environment 2.Developer creates an access request for Client ID/Secret authenticator 2.1 Developer logs in 2.2 Creates an application 2.3 Creates an access request Access manager approves developer access request for Client ID/Secret authenticator 3.1 Access Manager logs in 3.2 Access Manager approves developer access request 3.3 approves an access request Make an API request using Client ID, Secret, and Access Token 4.1 Get access token using client ID and secret; make API request
1.0
Cypress Test -Change Authorization scope from 'Kong API Key with ACL' to 'Outh2 Client Credential' - - [ ] Check manually If 'Kong API Key with ACL' to 'Outh2 Client Credential' works correctly - [ ] Prepare Automation test to change authorization scope Change Authorization profile from Kong ACL-API to Client Credential 1.1 Authenticates api owner 1.2 Activates the namespace 1.3 Create an authorization profile 1.4 Deactivate the service for Test environment 1.5 Update the authorization scope from Kong ACL-API to Client Credential 1.6 applies authorization plugin to service published to Kong Gateway 1.7 activate the service for Test environment 2.Developer creates an access request for Client ID/Secret authenticator 2.1 Developer logs in 2.2 Creates an application 2.3 Creates an access request Access manager approves developer access request for Client ID/Secret authenticator 3.1 Access Manager logs in 3.2 Access Manager approves developer access request 3.3 approves an access request Make an API request using Client ID, Secret, and Access Token 4.1 Get access token using client ID and secret; make API request
non_defect
cypress test change authorization scope from kong api key with acl to client credential check manually if kong api key with acl to client credential works correctly prepare automation test to change authorization scope change authorization profile from kong acl api to client credential authenticates api owner activates the namespace create an authorization profile deactivate the service for test environment update the authorization scope from kong acl api to client credential applies authorization plugin to service published to kong gateway activate the service for test environment developer creates an access request for client id secret authenticator developer logs in creates an application creates an access request access manager approves developer access request for client id secret authenticator access manager logs in access manager approves developer access request approves an access request make an api request using client id secret and access token get access token using client id and secret make api request
0
143,284
5,513,242,262
IssuesEvent
2017-03-17 11:55:25
geosolutions-it/MapStore2
https://api.github.com/repos/geosolutions-it/MapStore2
opened
Home page still links to gh-pages
Priority: Blocker question
The "home page" link in the MapStore 2 home still link to this site: https://geosolutions-it.github.io/MapStore2/ Should it link to http://mapstore2.geo-solutions.it/mapstore/docs/ instead ?
1.0
Home page still links to gh-pages - The "home page" link in the MapStore 2 home still link to this site: https://geosolutions-it.github.io/MapStore2/ Should it link to http://mapstore2.geo-solutions.it/mapstore/docs/ instead ?
non_defect
home page still links to gh pages the home page link in the mapstore home still link to this site should it link to instead
0
37,634
18,680,489,293
IssuesEvent
2021-11-01 04:32:18
pingcap/ticdc
https://api.github.com/repos/pingcap/ticdc
opened
RegionCache can be shared with multiple KV clients
component/kv-client subject/performance
### Is your feature request related to a problem? TiCDC may open many gorutines for `(*RegionCache).asyncCheckAndResolveLoop`. The number of goroutines equals to the number of captured tables. We can save these goroutines by sharing RegionCache with mulitple KV clients. ---- We can replace the region cache at line 329 with a shared one. https://github.com/pingcap/ticdc/blob/ecd6f551e4c9868dc96f9a69f272a813fea2a027/cdc/kv/client.go#L296-L333 ### Describe the feature you'd like RegionCache can be shared with multiple KV clients. ### Describe alternatives you've considered _No response_ ### Teachability, Documentation, Adoption, Migration Strategy _No response_
True
RegionCache can be shared with multiple KV clients - ### Is your feature request related to a problem? TiCDC may open many gorutines for `(*RegionCache).asyncCheckAndResolveLoop`. The number of goroutines equals to the number of captured tables. We can save these goroutines by sharing RegionCache with mulitple KV clients. ---- We can replace the region cache at line 329 with a shared one. https://github.com/pingcap/ticdc/blob/ecd6f551e4c9868dc96f9a69f272a813fea2a027/cdc/kv/client.go#L296-L333 ### Describe the feature you'd like RegionCache can be shared with multiple KV clients. ### Describe alternatives you've considered _No response_ ### Teachability, Documentation, Adoption, Migration Strategy _No response_
non_defect
regioncache can be shared with multiple kv clients is your feature request related to a problem ticdc may open many gorutines for regioncache asynccheckandresolveloop the number of goroutines equals to the number of captured tables we can save these goroutines by sharing regioncache with mulitple kv clients we can replace the region cache at line with a shared one describe the feature you d like regioncache can be shared with multiple kv clients describe alternatives you ve considered no response teachability documentation adoption migration strategy no response
0
249,978
27,015,625,136
IssuesEvent
2023-02-10 19:05:01
pustovitDmytro/curly-express
https://api.github.com/repos/pustovitDmytro/curly-express
closed
CVE-2022-46175 (High) detected in multiple libraries - autoclosed
security vulnerability
## CVE-2022-46175 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>json5-0.5.1.tgz</b>, <b>json5-1.0.1.tgz</b>, <b>json5-2.2.0.tgz</b></p></summary> <p> <details><summary><b>json5-0.5.1.tgz</b></p></summary> <p>JSON for the ES5 era.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-0.5.1.tgz">https://registry.npmjs.org/json5/-/json5-0.5.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/find-babel-config/node_modules/json5/package.json</p> <p> Dependency Hierarchy: - babel-plugin-module-resolver-4.1.0.tgz (Root Library) - find-babel-config-1.2.0.tgz - :x: **json5-0.5.1.tgz** (Vulnerable Library) </details> <details><summary><b>json5-1.0.1.tgz</b></p></summary> <p>JSON for humans.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-1.0.1.tgz">https://registry.npmjs.org/json5/-/json5-1.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/tsconfig-paths/node_modules/json5/package.json</p> <p> Dependency Hierarchy: - eslint-plugin-import-2.25.4.tgz (Root Library) - tsconfig-paths-3.12.0.tgz - :x: **json5-1.0.1.tgz** (Vulnerable Library) </details> <details><summary><b>json5-2.2.0.tgz</b></p></summary> <p>JSON for humans.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-2.2.0.tgz">https://registry.npmjs.org/json5/-/json5-2.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/json5/package.json</p> <p> Dependency Hierarchy: - core-7.17.0.tgz (Root Library) - :x: **json5-2.2.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/pustovitDmytro/curly-express/commit/415bb9a321527a52154a3ded11ed251389a0961b">415bb9a321527a52154a3ded11ed251389a0961b</a></p> <p>Found in 
base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including versions 1.0.1 and 2.2.1 does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 versions 1.0.2, 2.2.2, and later. 
<p>Publish Date: 2022-12-24 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p> <p>Release Date: 2022-12-24</p> <p>Fix Resolution (json5): 1.0.2</p> <p>Direct dependency fix Resolution (eslint-plugin-import): 2.26.0</p><p>Fix Resolution (json5): 2.2.2</p> <p>Direct dependency fix Resolution (@babel/core): 7.17.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-46175 (High) detected in multiple libraries - autoclosed - ## CVE-2022-46175 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>json5-0.5.1.tgz</b>, <b>json5-1.0.1.tgz</b>, <b>json5-2.2.0.tgz</b></p></summary> <p> <details><summary><b>json5-0.5.1.tgz</b></p></summary> <p>JSON for the ES5 era.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-0.5.1.tgz">https://registry.npmjs.org/json5/-/json5-0.5.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/find-babel-config/node_modules/json5/package.json</p> <p> Dependency Hierarchy: - babel-plugin-module-resolver-4.1.0.tgz (Root Library) - find-babel-config-1.2.0.tgz - :x: **json5-0.5.1.tgz** (Vulnerable Library) </details> <details><summary><b>json5-1.0.1.tgz</b></p></summary> <p>JSON for humans.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-1.0.1.tgz">https://registry.npmjs.org/json5/-/json5-1.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/tsconfig-paths/node_modules/json5/package.json</p> <p> Dependency Hierarchy: - eslint-plugin-import-2.25.4.tgz (Root Library) - tsconfig-paths-3.12.0.tgz - :x: **json5-1.0.1.tgz** (Vulnerable Library) </details> <details><summary><b>json5-2.2.0.tgz</b></p></summary> <p>JSON for humans.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-2.2.0.tgz">https://registry.npmjs.org/json5/-/json5-2.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/json5/package.json</p> <p> Dependency Hierarchy: - core-7.17.0.tgz (Root Library) - :x: **json5-2.2.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a 
href="https://github.com/pustovitDmytro/curly-express/commit/415bb9a321527a52154a3ded11ed251389a0961b">415bb9a321527a52154a3ded11ed251389a0961b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including versions 1.0.1 and 2.2.1 does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 versions 1.0.2, 2.2.2, and later. 
<p>Publish Date: 2022-12-24 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p> <p>Release Date: 2022-12-24</p> <p>Fix Resolution (json5): 1.0.2</p> <p>Direct dependency fix Resolution (eslint-plugin-import): 2.26.0</p><p>Fix Resolution (json5): 2.2.2</p> <p>Direct dependency fix Resolution (@babel/core): 7.17.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries tgz tgz tgz tgz json for the era library home page a href path to dependency file package json path to vulnerable library node modules find babel config node modules package json dependency hierarchy babel plugin module resolver tgz root library find babel config tgz x tgz vulnerable library tgz json for humans library home page a href path to dependency file package json path to vulnerable library node modules tsconfig paths node modules package json dependency hierarchy eslint plugin import tgz root library tsconfig paths tgz x tgz vulnerable library tgz json for humans library home page a href path to dependency file package json path to vulnerable library node modules package json dependency hierarchy core tgz root library x tgz vulnerable library found in head commit a href found in base branch master vulnerability details is an extension to the popular json file format that aims to be easier to write and maintain by hand e g for config files the parse method of the library before and including versions and does not restrict parsing of keys named proto allowing specially crafted strings to pollute the prototype of the resulting object this vulnerability pollutes the prototype of the object returned by parse and not the global object prototype which is the commonly understood definition of prototype pollution however polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations this vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from parse the actual impact will depend on how applications utilize the returned object and how they filter unwanted keys but could include denial of service cross site scripting elevation of privilege and in extreme cases remote code execution parse should restrict parsing of proto keys when 
parsing json strings to objects as a point of reference the json parse method included in javascript ignores proto keys simply changing parse to json parse in the examples above mitigates this vulnerability this vulnerability is patched in versions and later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution direct dependency fix resolution eslint plugin import fix resolution direct dependency fix resolution babel core step up your open source security game with mend
0
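The advisory in the record above notes that `JSON.parse` ignores `__proto__` keys while vulnerable JSON5 versions did not, and that only the returned object's prototype is polluted. A minimal TypeScript sketch of that mechanism (it does not use the json5 package itself; the naive assignment loop below merely stands in for how the vulnerable parser effectively behaved):

```typescript
// A naive parser that assigns keys with `obj[key] = value` triggers the
// inherited `__proto__` accessor, replacing that object's prototype.
function naiveParseAssign(pairs: Array<[string, unknown]>): Record<string, unknown> {
  const obj: Record<string, unknown> = {};
  for (const [k, v] of pairs) {
    (obj as any)[k] = v; // does not special-case "__proto__"
  }
  return obj;
}

const polluted = naiveParseAssign([["__proto__", { isAdmin: true }]]);
console.log((polluted as any).isAdmin); // true: this object's prototype was replaced

// JSON.parse, by contrast, defines "__proto__" as an ordinary own property,
// so the prototype chain is untouched (the mitigation the advisory suggests):
const safe = JSON.parse('{"__proto__": {"isAdmin": true}}');
console.log(safe.isAdmin); // undefined
```

Note that, consistent with the advisory's wording, the pollution here affects only the single returned object, not the global `Object.prototype`; the impact then depends on whether that object is later used in trusted operations.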
46,953
13,056,006,413
IssuesEvent
2020-07-30 03:22:15
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
dataclasses I3Orientation python printing fails (Trac #2182)
Incomplete Migration Migrated from Trac combo core defect
Migrated from https://code.icecube.wisc.edu/ticket/2182 ```json { "status": "closed", "changetime": "2018-08-23T18:33:34", "description": "when printing an I3Orientation directly in python, using print(str(<I3Orientation>))\nit prints any orientation like this:\n\nI3Orientation:\n Dir: (6.94834e-310,6.94834e-310,1.64477e-316)\n Up: (6.95303e-310,6.94833e-310,6.94833e-310)\n Right: (0,6.94832e-310,0)\n\nso with only zeros in each component of the directions.\nI'm using the combo stable.\n\n(this is my first ticket, so if I forgot something important just ask; I don't seem to be able to log in as something else than icecube, my user name is elohfink)", "reporter": "icecube", "cc": "", "resolution": "fixed", "_ts": "1535049214287878", "component": "combo core", "summary": "dataclasses I3Orientation python printing fails", "priority": "normal", "keywords": "", "time": "2018-08-23T15:31:03", "milestone": "", "owner": "cweaver", "type": "defect" } ```
1.0
dataclasses I3Orientation python printing fails (Trac #2182) - Migrated from https://code.icecube.wisc.edu/ticket/2182 ```json { "status": "closed", "changetime": "2018-08-23T18:33:34", "description": "when printing an I3Orientation directly in python, using print(str(<I3Orientation>))\nit prints any orientation like this:\n\nI3Orientation:\n Dir: (6.94834e-310,6.94834e-310,1.64477e-316)\n Up: (6.95303e-310,6.94833e-310,6.94833e-310)\n Right: (0,6.94832e-310,0)\n\nso with only zeros in each component of the directions.\nI'm using the combo stable.\n\n(this is my first ticket, so if I forgot something important just ask; I don't seem to be able to log in as something else than icecube, my user name is elohfink)", "reporter": "icecube", "cc": "", "resolution": "fixed", "_ts": "1535049214287878", "component": "combo core", "summary": "dataclasses I3Orientation python printing fails", "priority": "normal", "keywords": "", "time": "2018-08-23T15:31:03", "milestone": "", "owner": "cweaver", "type": "defect" } ```
defect
dataclasses python printing fails trac migrated from json status closed changetime description when printing an directly in python using print str nit prints any orientation like this n n dir n up n right n nso with only zeros in each component of the directions ni m using the combo stable n n this is my first ticket so if i forgot something important just ask i don t seem to be able to log in as something else than icecube my user name is elohfink reporter icecube cc resolution fixed ts component combo core summary dataclasses python printing fails priority normal keywords time milestone owner cweaver type defect
1
52,502
13,224,790,904
IssuesEvent
2020-08-17 19:51:19
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
[filterscripts] Shadow filter in simulations does not vary moon/sun position (Trac #2347)
Incomplete Migration Migrated from Trac combo simulation defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2347">https://code.icecube.wisc.edu/projects/icecube/ticket/2347</a>, reported by icecubeand owned by sschindler</em></summary> <p> ```json { "status": "closed", "changetime": "2020-06-24T12:31:42", "_ts": "1593001902142004", "description": "Simulations of the shadow filter (all events in a window around moon/sun) do not randomize the position of moon/sun used to determine the window in which events pass the filter. Consequently all events passing the filter have only a limited zenith range around the fixed moon/sun position, instead of being a good sample of possible zeniths.\n\nCause:\n * use of uninitialized `I3Time` object to determine moon/sun position in `filterscripts/private/filterscripts/I3ShadowFilter_13.cxx::CheckShadow` (line 448)\n * `eventtime` (declared at the top of the function) stays uninitialized if `corsikaMode_ = True` and `frame.Has(corsikaMJDName_ + \"MJD\" ) = False`, which should be the case at least once for each frame (except if a random MJD for moon/sun is written to the frame somewhere prior to the shadow filter)\n * in this case also `corsika_reuse = False` and line 448 is executed without `eventtime` ever getting initialized as something other than its default constructor by `I3Time eventtime;`\n * moon/sun position in `shadowZenith_` and `shadowAzimuth_` are calculated from this default time and used later in `I3ShadowFilter_13::InShadowWindow` to determine the filter result by checking whether events are inside a window around this position\n\nCheck against simulation data (for moon):\n * 10 simulation datasets (from 2011 to 2016) choosen representative of all software versions used for L2 in simulation according to simulation production table\n * attachment ''moonfilter_bug_zenith_diff_default.png'' (vertical lines below 0 show maximum and minimum): distribution (for events passing moon filter) of zenith difference between 
reconstruction (filter_globals.muon_llhfit) and default moon position (from `astro.I3GetMoonDirection(dataclasses.I3Time())`):\n \t* 2015 and 2016 datasets contained within \u00b1 10 deg window around default moon position --> moon position the same for all events\n \t* 2011 and 2013 datasets not contained --> moon position varies\n \t* 2012 datasets have no events passing the moon filter due to known bug No. 7 in https://wiki.icecube.wisc.edu/index.php/Known_Simulaton_Bugs\n * on the other hand: use CorsikaMoonMJD frame object, which contains a properly randomized moon time which was written to the frame (line 458), but due to the bug not regarded for calculating the moon position\n * attachment ''moonfilter_bug_zenith_diff_randomized.png'': distribution of zenith difference between reconstruction and the moon position corresponding to time CorsikaMoonMJD (no data for 2011, as 2011 simulations had their CorsikaMoonMJD frame object removed it seems):\n \t* 2013 datasets contained in \u00b1 10 deg window around simulated moon position --> moon filter works\n \t* 2015 and 2016 datasets not contained in a window --> moon filter disregarded CorsikaMoonMJD information\n\nAffects: simulation data after 2013\n\nProposed fix: add `eventtime = I3Time(timeMJD_);` after line 431 to initialize `eventtime` with the correctly randomized time from `timeMJD_`\n\nOne could easily try to re-simulate the moon filter and just ignore the filter mask entry in the frame. However, as frames that have '''all''' filters failing (condition_passed is 0) are removed from the datasets, one introduces a systematic by doing so. Only events that have some or several other filter(s) passing will be present in the dataset and thus included in the re-simulated moon filter. 
Such a re-simulated moon filter would therefore be biased by not containing any events that have no other filter passing.\n", "reporter": "icecube", "cc": "", "resolution": "fixed", "time": "2019-08-30T12:29:20", "component": "combo simulation", "summary": "[filterscripts] Shadow filter in simulations does not vary moon/sun position", "priority": "normal", "keywords": "shadow filter, moon filter, sun filter", "milestone": "Autumnal Equinox 2020", "owner": "sschindler", "type": "defect" } ``` </p> </details>
1.0
[filterscripts] Shadow filter in simulations does not vary moon/sun position (Trac #2347) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2347">https://code.icecube.wisc.edu/projects/icecube/ticket/2347</a>, reported by icecubeand owned by sschindler</em></summary> <p> ```json { "status": "closed", "changetime": "2020-06-24T12:31:42", "_ts": "1593001902142004", "description": "Simulations of the shadow filter (all events in a window around moon/sun) do not randomize the position of moon/sun used to determine the window in which events pass the filter. Consequently all events passing the filter have only a limited zenith range around the fixed moon/sun position, instead of being a good sample of possible zeniths.\n\nCause:\n * use of uninitialized `I3Time` object to determine moon/sun position in `filterscripts/private/filterscripts/I3ShadowFilter_13.cxx::CheckShadow` (line 448)\n * `eventtime` (declared at the top of the function) stays uninitialized if `corsikaMode_ = True` and `frame.Has(corsikaMJDName_ + \"MJD\" ) = False`, which should be the case at least once for each frame (except if a random MJD for moon/sun is written to the frame somewhere prior to the shadow filter)\n * in this case also `corsika_reuse = False` and line 448 is executed without `eventtime` ever getting initialized as something other than its default constructor by `I3Time eventtime;`\n * moon/sun position in `shadowZenith_` and `shadowAzimuth_` are calculated from this default time and used later in `I3ShadowFilter_13::InShadowWindow` to determine the filter result by checking whether events are inside a window around this position\n\nCheck against simulation data (for moon):\n * 10 simulation datasets (from 2011 to 2016) choosen representative of all software versions used for L2 in simulation according to simulation production table\n * attachment ''moonfilter_bug_zenith_diff_default.png'' (vertical lines below 0 show maximum and 
minimum): distribution (for events passing moon filter) of zenith difference between reconstruction (filter_globals.muon_llhfit) and default moon position (from `astro.I3GetMoonDirection(dataclasses.I3Time())`):\n \t* 2015 and 2016 datasets contained within \u00b1 10 deg window around default moon position --> moon position the same for all events\n \t* 2011 and 2013 datasets not contained --> moon position varies\n \t* 2012 datasets have no events passing the moon filter due to known bug No. 7 in https://wiki.icecube.wisc.edu/index.php/Known_Simulaton_Bugs\n * on the other hand: use CorsikaMoonMJD frame object, which contains a properly randomized moon time which was written to the frame (line 458), but due to the bug not regarded for calculating the moon position\n * attachment ''moonfilter_bug_zenith_diff_randomized.png'': distribution of zenith difference between reconstruction and the moon position corresponding to time CorsikaMoonMJD (no data for 2011, as 2011 simulations had their CorsikaMoonMJD frame object removed it seems):\n \t* 2013 datasets contained in \u00b1 10 deg window around simulated moon position --> moon filter works\n \t* 2015 and 2016 datasets not contained in a window --> moon filter disregarded CorsikaMoonMJD information\n\nAffects: simulation data after 2013\n\nProposed fix: add `eventtime = I3Time(timeMJD_);` after line 431 to initialize `eventtime` with the correctly randomized time from `timeMJD_`\n\nOne could easily try to re-simulate the moon filter and just ignore the filter mask entry in the frame. However, as frames that have '''all''' filters failing (condition_passed is 0) are removed from the datasets, one introduces a systematic by doing so. Only events that have some or several other filter(s) passing will be present in the dataset and thus included in the re-simulated moon filter. 
Such a re-simulated moon filter would therefore be biased by not containing any events that have no other filter passing.\n", "reporter": "icecube", "cc": "", "resolution": "fixed", "time": "2019-08-30T12:29:20", "component": "combo simulation", "summary": "[filterscripts] Shadow filter in simulations does not vary moon/sun position", "priority": "normal", "keywords": "shadow filter, moon filter, sun filter", "milestone": "Autumnal Equinox 2020", "owner": "sschindler", "type": "defect" } ``` </p> </details>
defect
shadow filter in simulations does not vary moon sun position trac migrated from json status closed changetime ts description simulations of the shadow filter all events in a window around moon sun do not randomize the position of moon sun used to determine the window in which events pass the filter consequently all events passing the filter have only a limited zenith range around the fixed moon sun position instead of being a good sample of possible zeniths n ncause n use of uninitialized object to determine moon sun position in filterscripts private filterscripts cxx checkshadow line n eventtime declared at the top of the function stays uninitialized if corsikamode true and frame has corsikamjdname mjd false which should be the case at least once for each frame except if a random mjd for moon sun is written to the frame somewhere prior to the shadow filter n in this case also corsika reuse false and line is executed without eventtime ever getting initialized as something other than its default constructor by eventtime n moon sun position in shadowzenith and shadowazimuth are calculated from this default time and used later in inshadowwindow to determine the filter result by checking whether events are inside a window around this position n ncheck against simulation data for moon n simulation datasets from to choosen representative of all software versions used for in simulation according to simulation production table n attachment moonfilter bug zenith diff default png vertical lines below show maximum and minimum distribution for events passing moon filter of zenith difference between reconstruction filter globals muon llhfit and default moon position from astro dataclasses n t and datasets contained within deg window around default moon position moon position the same for all events n t and datasets not contained moon position varies n t datasets have no events passing the moon filter due to known bug no in on the other hand use corsikamoonmjd frame object which 
contains a properly randomized moon time which was written to the frame line but due to the bug not regarded for calculating the moon position n attachment moonfilter bug zenith diff randomized png distribution of zenith difference between reconstruction and the moon position corresponding to time corsikamoonmjd no data for as simulations had their corsikamoonmjd frame object removed it seems n t datasets contained in deg window around simulated moon position moon filter works n t and datasets not contained in a window moon filter disregarded corsikamoonmjd information n naffects simulation data after n nproposed fix add eventtime timemjd after line to initialize eventtime with the correctly randomized time from timemjd n none could easily try to re simulate the moon filter and just ignore the filter mask entry in the frame however as frames that have all filters failing condition passed is are removed from the datasets one introduces a systematic by doing so only events that have some or several other filter s passing will be present in the dataset and thus included in the re simulated moon filter such a re simulated moon filter would therefore be biased by not containing any events that have no other filter passing n reporter icecube cc resolution fixed time component combo simulation summary shadow filter in simulations does not vary moon sun position priority normal keywords shadow filter moon filter sun filter milestone autumnal equinox owner sschindler type defect
1
8,906
2,612,928,998
IssuesEvent
2015-02-27 17:34:10
chrsmith/windows-package-manager
https://api.github.com/repos/chrsmith/windows-package-manager
closed
Help with my own repository
auto-migrated Type-Defect
``` What steps will reproduce the problem? 1. Start npackdg.exe 2. Automatically detects installed applications 3. What is the expected output? What do you see instead? Only application packages from my own repository. I see everything installed on my computer. After installing package from my own repository there's two packages in the list. One from "controlpanel" and one from my repository. What version of the product are you using? On what operating system? 1.18.7 Windows 7 SP1 Please provide any additional information below. I want to stop the auto detect of installed packages. Is it possible? ``` Original issue reported on code.google.com by `fredrik....@utb.motala.se` on 21 Mar 2014 at 11:11
1.0
Help with my own repository - ``` What steps will reproduce the problem? 1. Start npackdg.exe 2. Automatically detects installed applications 3. What is the expected output? What do you see instead? Only application packages from my own repository. I see everything installed on my computer. After installing package from my own repository there's two packages in the list. One from "controlpanel" and one from my repository. What version of the product are you using? On what operating system? 1.18.7 Windows 7 SP1 Please provide any additional information below. I want to stop the auto detect of installed packages. Is it possible? ``` Original issue reported on code.google.com by `fredrik....@utb.motala.se` on 21 Mar 2014 at 11:11
defect
help with my own repository what steps will reproduce the problem start npackdg exe automatically detects installed applications what is the expected output what do you see instead only application packages from my own repository i see everything installed on my computer after installing package from my own repository there s two packages in the list one from controlpanel and one from my repository what version of the product are you using on what operating system windows please provide any additional information below i want to stop the auto detect of installed packages is it possible original issue reported on code google com by fredrik utb motala se on mar at
1
673
2,578,018,000
IssuesEvent
2015-02-12 20:32:23
pymc-devs/pymc
https://api.github.com/repos/pymc-devs/pymc
closed
Covariance matrix in Metropolis Step
defects
I've been having some issues with using a covariance matrix in the Metrepolis step. Here is some example code: ``` import pymc n = 10 with pymc.Model() as model: alpha = pymc.distributions.Flat('alpha', shape=(n)) model.AddPotential(-1 * (alpha ** 2.).sum()) start = pymc.find_MAP() h = pymc.approx_hessian(start) step = pymc.Metropolis(model.vars, S=h) trace = pymc.sample(3000, step, start) ``` This code raises an error: ``` pymc/step_methods/metropolis.pyc in astep(self, q0, logp) 95 self.accepted = 0 96 ---> 97 delta = self.proposal_dist() * self.scaling 98 99 q = q0 + delta /pymc/step_methods/metropolis.pyc in __call__(self) 21 class NormalProposal(Proposal): 22 def __call__(self): ---> 23 return normal(scale=self.s) 24 25 numpy/random/mtrand.so in mtrand.RandomState.normal (numpy/random/mtrand/mtrand.c:9268)() ``` If I disable the proposal covariance, things run fine: ``` with pymc.Model() as model: alpha = pymc.distributions.Flat('alpha', shape=(n)) model.AddPotential(-1 * (alpha ** 2.).sum()) start = pymc.find_MAP() h = pymc.approx_hess(start) step = pymc.Metropolis(model.vars) trace = pymc.sample(3000, step, start) ``` If I use an identity matrix for the proposal distribution, I *still* get the error. Am I building my model incorrectly, or is there something wrong here?
1.0
Covariance matrix in Metroplis Step - I've been having some issues with using a covariance matrix in the Metrepolis step. Here is some example code: ``` import pymc n = 10 with pymc.Model() as model: alpha = pymc.distributions.Flat('alpha', shape=(n)) model.AddPotential(-1 * (alpha ** 2.).sum()) start = pymc.find_MAP() h = pymc.approx_hessian(start) step = pymc.Metropolis(model.vars, S=h) trace = pymc.sample(3000, step, start) ``` This code raises an error: ``` pymc/step_methods/metropolis.pyc in astep(self, q0, logp) 95 self.accepted = 0 96 ---> 97 delta = self.proposal_dist() * self.scaling 98 99 q = q0 + delta /pymc/step_methods/metropolis.pyc in __call__(self) 21 class NormalProposal(Proposal): 22 def __call__(self): ---> 23 return normal(scale=self.s) 24 25 numpy/random/mtrand.so in mtrand.RandomState.normal (numpy/random/mtrand/mtrand.c:9268)() ``` If I disable the proposal covariance, things run fine: ``` with pymc.Model() as model: alpha = pymc.distributions.Flat('alpha', shape=(n)) model.AddPotential(-1 * (alpha ** 2.).sum()) start = pymc.find_MAP() h = pymc.approx_hess(start) step = pymc.Metropolis(model.vars) trace = pymc.sample(3000, step, start) ``` If I use an identity matrix for the proposal distribution, I *still* get the error. Am I building my model incorrectly, or is there something wrong here?
defect
covariance matrix in metroplis step i ve been having some issues with using a covariance matrix in the metrepolis step here is some example code import pymc n with pymc model as model alpha pymc distributions flat alpha shape n model addpotential alpha sum start pymc find map h pymc approx hessian start step pymc metropolis model vars s h trace pymc sample step start this code raises an error pymc step methods metropolis pyc in astep self logp self accepted delta self proposal dist self scaling q delta pymc step methods metropolis pyc in call self class normalproposal proposal def call self return normal scale self s numpy random mtrand so in mtrand randomstate normal numpy random mtrand mtrand c if i disable the proposal covariance things run fine with pymc model as model alpha pymc distributions flat alpha shape n model addpotential alpha sum start pymc find map h pymc approx hess start step pymc metropolis model vars trace pymc sample step start if i use an identity matrix for the proposal distribution i still get the error am i building my model incorrectly or is there something wrong here
1
3,997
2,610,085,488
IssuesEvent
2015-02-26 18:26:01
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳长痘痘怎么排毒
auto-migrated Priority-Medium Type-Defect
``` 深圳长痘痘怎么排毒【深圳韩方科颜全国热线400-869-1818,24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:05
1.0
深圳长痘痘怎么排毒 - ``` 深圳长痘痘怎么排毒【深圳韩方科颜全国热线400-869-1818,24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:05
defect
深圳长痘痘怎么排毒 深圳长痘痘怎么排毒【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 original issue reported on code google com by szft com on may at
1
31,371
6,502,501,478
IssuesEvent
2017-08-23 13:51:49
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
dataTable default styleClass is undefined
defect
<!-- - IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING. - IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours. --> **I'm submitting a ...** (check one with "x") ``` [X ] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** http://plnkr.co/edit/7RW3ptnOO5V88f5cTmmY?p=preview **Current behavior** When using p-dataTable if styleClass attribute not filled with any value by developer "table's" default class becomes undefined. **Expected behavior** "styleClass" attribute should be optional. If developer do not pass any value for field, it should be empty. **Minimal reproduction of the problem with instructions** http://plnkr.co/edit/7RW3ptnOO5V88f5cTmmY?p=preview ![image](https://user-images.githubusercontent.com/8249485/27789935-0683b40e-5ff7-11e7-8c4d-5209f3f167c3.png) * **PrimeNG version:** 4.1.0-rc.2
1.0
dataTable default styleClass is undefined - <!-- - IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING. - IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours. --> **I'm submitting a ...** (check one with "x") ``` [X ] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** http://plnkr.co/edit/7RW3ptnOO5V88f5cTmmY?p=preview **Current behavior** When using p-dataTable if styleClass attribute not filled with any value by developer "table's" default class becomes undefined. **Expected behavior** "styleClass" attribute should be optional. If developer do not pass any value for field, it should be empty. **Minimal reproduction of the problem with instructions** http://plnkr.co/edit/7RW3ptnOO5V88f5cTmmY?p=preview ![image](https://user-images.githubusercontent.com/8249485/27789935-0683b40e-5ff7-11e7-8c4d-5209f3f167c3.png) * **PrimeNG version:** 4.1.0-rc.2
defect
datatable default styleclass is undefined if you don t fill out the following information we might close your issue without investigating if you d like to secure our response you may consider primeng pro support where support is provided within hours i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports current behavior when using p datatable if styleclass attribute not filled with any value by developer table s default class becomes undefined expected behavior styleclass attribute should be optional if developer do not pass any value for field it should be empty minimal reproduction of the problem with instructions primeng version rc
1
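The literal `undefined` class described in the record above is the classic symptom of concatenating an unset optional input into a class string. A hypothetical TypeScript sketch of the mechanism and the fix (the `ui-datatable` base class and function names are illustrative, not PrimeNG's actual source):

```typescript
// Buggy pattern: concatenating an unset optional input leaks the string
// "undefined" into the element's class attribute.
function tableClass(styleClass?: string): string {
  return "ui-datatable " + styleClass; // undefined -> "ui-datatable undefined"
}

// Fixed pattern: default the optional input to an empty string so the
// attribute stays optional, as the report requests.
function tableClassFixed(styleClass?: string): string {
  return ("ui-datatable " + (styleClass || "")).trim();
}

console.log(tableClass(undefined));      // "ui-datatable undefined"
console.log(tableClassFixed(undefined)); // "ui-datatable"
```

Defaulting at the point of concatenation keeps a developer-supplied class working while leaving the class list clean when the attribute is omitted.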
66,886
20,748,495,934
IssuesEvent
2022-03-15 03:27:36
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
ZTS on FreeBSD 13 - all tests on this are failing in the moment :(
Component: Test Suite Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name |FreeBSD Distribution Version |13.0-STABLE Kernel Version | Architecture |x86_64 OpenZFS Version |all <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing All tests with FreeBSD 13 fail currently ... the reason is this: ``` + uname -p + ABI=amd64 + freebsd-version -r + VERSION=13.0-STABLE + cd /tmp + fetch https://download.freebsd.org/ftp/snapshots/amd64/13.0-STABLE/src.txz fetch: https://download.freebsd.org/ftp/snapshots/amd64/13.0-STABLE/src.txz: Not Found + fetch https://download.freebsd.org/ftp/releases/amd64/13.0-STABLE/src.txz fetch: https://download.freebsd.org/ftp/releases/amd64/13.0-STABLE/src.txz: Not Found + sudo tar xpf src.txz -C / tar: Error opening archive: Failed to open 'src.txz' + rm src.txz ``` ### Possible solution `git clone --branch=stable/13 --depth=1 https://git.FreeBSD.org/src.git /usr/src` ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> This is a [link to an example log](http://build.zfsonlinux.org/builders/FreeBSD%20stable%2F13%20amd64%20%28TEST%29/builds/4266/steps/shell/logs/stdio)
1.0
ZTS on FreeBSD 13 - all tests on this are failing in the moment :( - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name |FreeBSD Distribution Version |13.0-STABLE Kernel Version | Architecture |x86_64 OpenZFS Version |all <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing All tests with FreeBSD 13 fail currently ... the reason is this: ``` + uname -p + ABI=amd64 + freebsd-version -r + VERSION=13.0-STABLE + cd /tmp + fetch https://download.freebsd.org/ftp/snapshots/amd64/13.0-STABLE/src.txz fetch: https://download.freebsd.org/ftp/snapshots/amd64/13.0-STABLE/src.txz: Not Found + fetch https://download.freebsd.org/ftp/releases/amd64/13.0-STABLE/src.txz fetch: https://download.freebsd.org/ftp/releases/amd64/13.0-STABLE/src.txz: Not Found + sudo tar xpf src.txz -C / tar: Error opening archive: Failed to open 'src.txz' + rm src.txz ``` ### Possible solution `git clone --branch=stable/13 --depth=1 https://git.FreeBSD.org/src.git /usr/src` ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> This is a [link to an example log](http://build.zfsonlinux.org/builders/FreeBSD%20stable%2F13%20amd64%20%28TEST%29/builds/4266/steps/shell/logs/stdio)
defect
zts on freebsd all tests on this are failing in the moment thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name freebsd distribution version stable kernel version architecture openzfs version all command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing all tests with freebsd fail currently the reason is this uname p abi freebsd version r version stable cd tmp fetch fetch not found fetch fetch not found sudo tar xpf src txz c tar error opening archive failed to open src txz rm src txz possible solution git clone branch stable depth usr src include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with this is a
1
166,503
26,365,923,479
IssuesEvent
2023-01-11 16:33:57
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
closed
Supplemental Claims | Evidence summary
frontend Supplemental Claims needs-design benefits-team-1 squad-2
## Value Statement **_As a_** Veteran **_I want to_** review the evidence that I am submitting for my claim **_So that_** I can confirm that I've upload and given access to the correct evidence. --- ## Acceptance Criteria - [x] User may edit or remove form fields in each section - [x] User clicking edit takes them back to the page they filled - [x] User clicking edit for uploads takes them back to the upload page - [x] User may add more evidence - link to VA evidence request page - [ ] Show an error message if all evidence is removed, and the user tries to continue - [ ] Unit tests complete (90% coverage) - [ ] E2E tests complete - [ ] Accessibility testing complete - [ ] Reviewed and approved by product and/or design ## Designs and Build Notes [DESIGN](https://www.sketch.com/s/d2416db4-9a4f-4919-abe4-20ba4bdcfd89/a/09Dg1Me) [ERROR MESSAGE](https://www.sketch.com/s/d2416db4-9a4f-4919-abe4-20ba4bdcfd89/a/nR1OPvz) [CONTENT](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/decision-reviews/Supplemental-Claims/design/sourceofcontenttruth.md#step-3-of-4-new-and-relevant-evidence-10) - When veteran clicks **Edit**, it takes them back to the page where they entered the information - After editing, veteran will step through the flow to get back to this page (MVP) - When veteran clicks **Remove**, it removes the entry. - When a veteran clicks **Add more evidence**, they will step through the beginning of the evidence flow again. 
## Definition of Ready - [ ] Clear value description - [ ] Testable acceptance criteria - [ ] Accessibility added to acceptance criteria - [ ] Approved designs attached - [ ] Sample data provided where appropriate - [ ] Estimated to fit within the sprint - [ ] Dependencies and blockers linked ## Definition of Done - [ ] Meets acceptance criteria - [ ] Passed E2E testing (90% coverage) - [ ] Passed unit testing (90% coverage) - [ ] Passed integration testing (if applicable) - [ ] Code reviewed (internal) - [ ] Submitted to staging - [ ] Reviewed and approved by product and/or design
1.0
Supplemental Claims | Evidence summary - ## Value Statement **_As a_** Veteran **_I want to_** review the evidence that I am submitting for my claim **_So that_** I can confirm that I've upload and given access to the correct evidence. --- ## Acceptance Criteria - [x] User may edit or remove form fields in each section - [x] User clicking edit takes them back to the page they filled - [x] User clicking edit for uploads takes them back to the upload page - [x] User may add more evidence - link to VA evidence request page - [ ] Show an error message if all evidence is removed, and the user tries to continue - [ ] Unit tests complete (90% coverage) - [ ] E2E tests complete - [ ] Accessibility testing complete - [ ] Reviewed and approved by product and/or design ## Designs and Build Notes [DESIGN](https://www.sketch.com/s/d2416db4-9a4f-4919-abe4-20ba4bdcfd89/a/09Dg1Me) [ERROR MESSAGE](https://www.sketch.com/s/d2416db4-9a4f-4919-abe4-20ba4bdcfd89/a/nR1OPvz) [CONTENT](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/decision-reviews/Supplemental-Claims/design/sourceofcontenttruth.md#step-3-of-4-new-and-relevant-evidence-10) - When veteran clicks **Edit**, it takes them back to the page where they entered the information - After editing, veteran will step through the flow to get back to this page (MVP) - When veteran clicks **Remove**, it removes the entry. - When a veteran clicks **Add more evidence**, they will step through the beginning of the evidence flow again. 
## Definition of Ready - [ ] Clear value description - [ ] Testable acceptance criteria - [ ] Accessibility added to acceptance criteria - [ ] Approved designs attached - [ ] Sample data provided where appropriate - [ ] Estimated to fit within the sprint - [ ] Dependencies and blockers linked ## Definition of Done - [ ] Meets acceptance criteria - [ ] Passed E2E testing (90% coverage) - [ ] Passed unit testing (90% coverage) - [ ] Passed integration testing (if applicable) - [ ] Code reviewed (internal) - [ ] Submitted to staging - [ ] Reviewed and approved by product and/or design
non_defect
supplemental claims evidence summary value statement as a veteran i want to review the evidence that i am submitting for my claim so that i can confirm that i ve upload and given access to the correct evidence acceptance criteria user may edit or remove form fields in each section user clicking edit takes them back to the page they filled user clicking edit for uploads takes them back to the upload page user may add more evidence link to va evidence request page show an error message if all evidence is removed and the user tries to continue unit tests complete coverage tests complete accessibility testing complete reviewed and approved by product and or design designs and build notes when veteran clicks edit it takes them back to the page where they entered the information after editing veteran will step through the flow to get back to this page mvp when veteran clicks remove it removes the entry when a veteran clicks add more evidence they will step through the beginning of the evidence flow again definition of ready clear value description testable acceptance criteria accessibility added to acceptance criteria approved designs attached sample data provided where appropriate estimated to fit within the sprint dependencies and blockers linked definition of done meets acceptance criteria passed testing coverage passed unit testing coverage passed integration testing if applicable code reviewed internal submitted to staging reviewed and approved by product and or design
0
81,312
15,612,096,198
IssuesEvent
2021-03-19 15:00:51
NixOS/nixpkgs
https://api.github.com/repos/NixOS/nixpkgs
closed
Vulnerability roundup 100: git-2.29.3: 4 advisories [6.4]
1.severity: security
[search](https://search.nix.gsc.io/?q=git&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=git+in%3Apath&type=Code) * [ ] [CVE-2018-1000182](https://nvd.nist.gov/vuln/detail/CVE-2018-1000182) CVSSv3=6.4 (nixos-20.09) * [ ] [CVE-2020-2136](https://nvd.nist.gov/vuln/detail/CVE-2020-2136) CVSSv3=5.4 (nixos-20.09) * [ ] [CVE-2018-1000110](https://nvd.nist.gov/vuln/detail/CVE-2018-1000110) CVSSv3=5.3 (nixos-20.09) * [ ] [CVE-2019-1003010](https://nvd.nist.gov/vuln/detail/CVE-2019-1003010) CVSSv3=4.3 (nixos-20.09) Scanned versions: nixos-20.09: 12d9950bf47. Cc @globin Cc @peti Cc @primeos Cc @wmertens
True
Vulnerability roundup 100: git-2.29.3: 4 advisories [6.4] - [search](https://search.nix.gsc.io/?q=git&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=git+in%3Apath&type=Code) * [ ] [CVE-2018-1000182](https://nvd.nist.gov/vuln/detail/CVE-2018-1000182) CVSSv3=6.4 (nixos-20.09) * [ ] [CVE-2020-2136](https://nvd.nist.gov/vuln/detail/CVE-2020-2136) CVSSv3=5.4 (nixos-20.09) * [ ] [CVE-2018-1000110](https://nvd.nist.gov/vuln/detail/CVE-2018-1000110) CVSSv3=5.3 (nixos-20.09) * [ ] [CVE-2019-1003010](https://nvd.nist.gov/vuln/detail/CVE-2019-1003010) CVSSv3=4.3 (nixos-20.09) Scanned versions: nixos-20.09: 12d9950bf47. Cc @globin Cc @peti Cc @primeos Cc @wmertens
non_defect
vulnerability roundup git advisories nixos nixos nixos nixos scanned versions nixos cc globin cc peti cc primeos cc wmertens
0
38,790
8,966,907,830
IssuesEvent
2019-01-29 00:55:00
svigerske/Ipopt
https://api.github.com/repos/svigerske/Ipopt
closed
Patch for r2368
Ipopt defect
Issue created by migration from Trac. Original creator: @ghackebeil Original creation time: 2013-09-02 23:16:02 Assignee: ipopt-team Debug build is failing... ``` Index: Interfaces/IpStdInterfaceTNLP.cpp =================================================================== --- Interfaces/IpStdInterfaceTNLP.cpp (revision 2370) +++ Interfaces/IpStdInterfaceTNLP.cpp (working copy) `@``@` -387,7 +387,7 `@``@` non_const_x_[i] = x[i]; } } - DBG_ASSERT(non_const_x && "non_const_x is NULL after apply_new_x"); + DBG_ASSERT(non_const_x_ && "non_const_x is NULL after apply_new_x"); } } // namespace Ipopt ```
1.0
Patch for r2368 - Issue created by migration from Trac. Original creator: @ghackebeil Original creation time: 2013-09-02 23:16:02 Assignee: ipopt-team Debug build is failing... ``` Index: Interfaces/IpStdInterfaceTNLP.cpp =================================================================== --- Interfaces/IpStdInterfaceTNLP.cpp (revision 2370) +++ Interfaces/IpStdInterfaceTNLP.cpp (working copy) `@``@` -387,7 +387,7 `@``@` non_const_x_[i] = x[i]; } } - DBG_ASSERT(non_const_x && "non_const_x is NULL after apply_new_x"); + DBG_ASSERT(non_const_x_ && "non_const_x is NULL after apply_new_x"); } } // namespace Ipopt ```
defect
patch for issue created by migration from trac original creator ghackebeil original creation time assignee ipopt team debug build is failing index interfaces ipstdinterfacetnlp cpp interfaces ipstdinterfacetnlp cpp revision interfaces ipstdinterfacetnlp cpp working copy non const x x dbg assert non const x non const x is null after apply new x dbg assert non const x non const x is null after apply new x namespace ipopt
1
127,311
12,312,124,513
IssuesEvent
2020-05-12 13:29:46
ampproject/amp.dev
https://api.github.com/repos/ampproject/amp.dev
closed
Allow documents to be in more than one category
Category: Documentation P2: Medium Type: New
Currently category filtering works by using Grow's `$category` frontmatter key. This only allows a document to be in one category at a time. Though we might want to allow examples and components to be in more than one by using a tagging behaviour.
1.0
Allow documents to be in more than one category - Currently category filtering works by using Grow's `$category` frontmatter key. This only allows a document to be in one category at a time. Though we might want to allow examples and components to be in more than one by using a tagging behaviour.
non_defect
allow documents to be in more than one category currently category filtering works by using grow s category frontmatter key this only allows a document to be in one category at a time though we might want to allow examples and components to be in more than one by using a tagging behaviour
0
31,403
6,518,631,576
IssuesEvent
2017-08-28 08:58:20
johan-adriaans/hackbar
https://api.github.com/repos/johan-adriaans/hackbar
closed
Hackbar unusable in Firefox 33.
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Install Firefox 33. 2. Install Hackbar. 3. Attempt to use Hackbar, discover that the browser renders the webpage over Hackbar. What is the expected output? What do you see instead? Expect to have the full hackbar visible. Instead it is covered up by the webpage. What version of the product are you using? On what operating system? Hackbar version 1.6.2, Firefox 33.0 on Mac OS X 10.9.5. Please provide any additional information below. ``` Original issue reported on code.google.com by `ForboElM...@gmail.com` on 22 Oct 2014 at 6:44
1.0
Hackbar unusable in Firefox 33. - ``` What steps will reproduce the problem? 1. Install Firefox 33. 2. Install Hackbar. 3. Attempt to use Hackbar, discover that the browser renders the webpage over Hackbar. What is the expected output? What do you see instead? Expect to have the full hackbar visible. Instead it is covered up by the webpage. What version of the product are you using? On what operating system? Hackbar version 1.6.2, Firefox 33.0 on Mac OS X 10.9.5. Please provide any additional information below. ``` Original issue reported on code.google.com by `ForboElM...@gmail.com` on 22 Oct 2014 at 6:44
defect
hackbar unusable in firefox what steps will reproduce the problem install firefox install hackbar attempt to use hackbar discover that the browser renders the webpage over hackbar what is the expected output what do you see instead expect to have the full hackbar visible instead it is covered up by the webpage what version of the product are you using on what operating system hackbar version firefox on mac os x please provide any additional information below original issue reported on code google com by forboelm gmail com on oct at
1
6,263
2,610,224,405
IssuesEvent
2015-02-26 19:11:07
chrsmith/somefinders
https://api.github.com/repos/chrsmith/somefinders
opened
MVD employment tests
auto-migrated Priority-Medium Type-Defect
``` '''Geleon Bogdanov''' Hi everyone, could anyone tell me where to find the MVD employment tests? They were posted here once already '''Borimir Nikitin''' Here is a good site where you can download them http://bit.ly/17zUnOb '''Afanasy Kalashnikov''' Thanks, looks like the right one, but it asks me to enter a phone number '''Antip Kornilov''' Nah, everything is fine, nothing was charged to me '''Vil Petukhov''' No, it does not affect your balance File information: MVD employment tests Uploaded: this month Times downloaded: 638 Rating: 586 Average download speed: 1076 Similar files: 18 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 18 Dec 2013 at 9:26
1.0
MVD employment tests - ``` '''Geleon Bogdanov''' Hi everyone, could anyone tell me where to find the MVD employment tests? They were posted here once already '''Borimir Nikitin''' Here is a good site where you can download them http://bit.ly/17zUnOb '''Afanasy Kalashnikov''' Thanks, looks like the right one, but it asks me to enter a phone number '''Antip Kornilov''' Nah, everything is fine, nothing was charged to me '''Vil Petukhov''' No, it does not affect your balance File information: MVD employment tests Uploaded: this month Times downloaded: 638 Rating: 586 Average download speed: 1076 Similar files: 18 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 18 Dec 2013 at 9:26
defect
mvd employment tests geleon bogdanov hi everyone could anyone tell me where to find the mvd employment tests they were posted here once already borimir nikitin here is a good site where you can download them afanasy kalashnikov thanks looks like the right one but it asks me to enter a phone number antip kornilov nah everything is fine nothing was charged to me vil petukhov no it does not affect your balance file information mvd employment tests uploaded this month times downloaded rating average download speed similar files original issue reported on code google com by kondense gmail com on dec at
1
1,886
2,603,972,962
IssuesEvent
2015-02-24 19:00:43
chrsmith/nishazi6
https://api.github.com/repos/chrsmith/nishazi6
opened
Shenyang: bumps inside the foreskin
auto-migrated Priority-Medium Type-Defect
``` Shenyang: bumps inside the foreskin -- Shenyang Military Region Political Department Hospital, STD department -- TEL: 024-31023308 -- Founded in 1946, devoted for 68 years to the research and treatment of sexually transmitted diseases. Located at No. 32 Erwei Road, Shenhe District, Shenyang. A hospital with a long and glorious history, established together with New China, with fine equipment, authoritative technology, and a gathering of experts; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research, and rehabilitation. Among the country's first public grade-A military hospitals and the nation's first standardized designated medical units, and a teaching hospital of well-known universities such as the Fourth Military Medical University and Southeast University. Rated an advanced unit in health work by the Health Department of the PLA Air Force Logistics Department, and twice awarded a collective second-class merit. ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:03
1.0
Shenyang: bumps inside the foreskin - ``` Shenyang: bumps inside the foreskin -- Shenyang Military Region Political Department Hospital, STD department -- TEL: 024-31023308 -- Founded in 1946, devoted for 68 years to the research and treatment of sexually transmitted diseases. Located at No. 32 Erwei Road, Shenhe District, Shenyang. A hospital with a long and glorious history, established together with New China, with fine equipment, authoritative technology, and a gathering of experts; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research, and rehabilitation. Among the country's first public grade-A military hospitals and the nation's first standardized designated medical units, and a teaching hospital of well-known universities such as the Fourth Military Medical University and Southeast University. Rated an advanced unit in health work by the Health Department of the PLA Air Force Logistics Department, and twice awarded a collective second-class merit. ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:03
defect
shenyang bumps inside the foreskin shenyang bumps inside the foreskin shenyang military region political department hospital std department tel founded in devoted for years to the research and treatment of sexually transmitted diseases located at no erwei road shenhe district shenyang a hospital with a long and glorious history established together with new china with fine equipment authoritative technology and a gathering of experts a comprehensive hospital integrating prevention health care medical treatment scientific research and rehabilitation among the country s first public grade a military hospitals and the nation s first standardized designated medical units and a teaching hospital of well known universities such as the fourth military medical university and southeast university rated an advanced unit in health work by the health department of the pla air force logistics department and twice awarded a collective second class merit original issue reported on code google com by gmail com on jun at
1
19,107
3,141,903,411
IssuesEvent
2015-09-13 01:36:58
cakephp/debug_kit
https://api.github.com/repos/cakephp/debug_kit
closed
Toolbar tabs autoclose
defect
**What I did** Used `"cakephp/debug_kit": "2.2.*@dev"` with Cake `2.7.3`. Loaded a page, clicked the variable tab, expanded a variable, then clicked the grey bar at the bottom of the drop down. **What happened** The bar vanishes. Then it will display and hide only when hovering over the toolbar. **Workaround** You have to click the toolbar tab again a few times until it becomes 'sticky' again.
1.0
Toolbar tabs autoclose - **What I did** Used `"cakephp/debug_kit": "2.2.*@dev"` with Cake `2.7.3`. Loaded a page, clicked the variable tab, expanded a variable, then clicked the grey bar at the bottom of the drop down. **What happened** The bar vanishes. Then it will display and hide only when hovering over the toolbar. **Workaround** You have to click the toolbar tab again a few times until it becomes 'sticky' again.
defect
toolbar tabs autoclose what i did used cakephp debug kit dev with cake loaded a page clicked the variable tab expanded a variable then clicked the grey bar at the bottom of the drop down what happened the bar vanishes then it will display and hide only when hovering over the toolbar workaround you have to click the toolbar tab again a few times until it becomes sticky again
1
206,955
23,411,613,380
IssuesEvent
2022-08-12 18:12:07
turkdevops/gradle
https://api.github.com/repos/turkdevops/gradle
opened
CVE-2020-9493 (High) detected in log4j-1.2.8.jar
security vulnerability
## CVE-2020-9493 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.8.jar</b></p></summary> <p></p> <p>Path to vulnerable library: /.2.8.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/gradle/commit/2731f430cbe595273e6858dba029ef5f2a2f3c30">2731f430cbe595273e6858dba029ef5f2a2f3c30</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution. <p>Publish Date: 2021-06-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493>CVE-2020-9493</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.openwall.com/lists/oss-security/2021/06/16/1">https://www.openwall.com/lists/oss-security/2021/06/16/1</a></p> <p>Release Date: 2021-06-16</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-9493 (High) detected in log4j-1.2.8.jar - ## CVE-2020-9493 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.8.jar</b></p></summary> <p></p> <p>Path to vulnerable library: /.2.8.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/gradle/commit/2731f430cbe595273e6858dba029ef5f2a2f3c30">2731f430cbe595273e6858dba029ef5f2a2f3c30</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution. <p>Publish Date: 2021-06-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493>CVE-2020-9493</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.openwall.com/lists/oss-security/2021/06/16/1">https://www.openwall.com/lists/oss-security/2021/06/16/1</a></p> <p>Release Date: 2021-06-16</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in jar cve high severity vulnerability vulnerable library jar path to vulnerable library jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch master vulnerability details a deserialization flaw was found in apache chainsaw versions prior to which could lead to malicious code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos step up your open source security game with mend
0
44,968
23,847,569,934
IssuesEvent
2022-09-06 15:05:45
umee-network/umee
https://api.github.com/repos/umee-network/umee
opened
Create an index for `MaxCollateralShare` and other global indexes
C:x/leverage T:Performance
<!-- markdownlint-disable MD041 --> ## Summary In https://github.com/umee-network/umee/pull/1329 , https://github.com/umee-network/umee/pull/1330 and https://github.com/umee-network/umee/pull/1331 we added new indexes. Recomputing them all the time is time consuming. Instead we should store them as as an index value on chain. --- ## For Admin Use - [ ] Not duplicate issue - [ ] Appropriate labels applied - [ ] Appropriate contributors tagged - [ ] Contributor assigned/self-assigned
True
Create an index for `MaxCollateralShare` and other global indexes - <!-- markdownlint-disable MD041 --> ## Summary In https://github.com/umee-network/umee/pull/1329 , https://github.com/umee-network/umee/pull/1330 and https://github.com/umee-network/umee/pull/1331 we added new indexes. Recomputing them all the time is time consuming. Instead we should store them as as an index value on chain. --- ## For Admin Use - [ ] Not duplicate issue - [ ] Appropriate labels applied - [ ] Appropriate contributors tagged - [ ] Contributor assigned/self-assigned
non_defect
create an index for maxcollateralshare and other global indexes summary in and we added new indexes recomputing them all the time is time consuming instead we should store them as as an index value on chain for admin use not duplicate issue appropriate labels applied appropriate contributors tagged contributor assigned self assigned
0
67,417
20,961,610,824
IssuesEvent
2022-03-27 21:49:09
abedmaatalla/sipdroid
https://api.github.com/repos/abedmaatalla/sipdroid
closed
Registration timeout with tcp while switching wifi to 3g
Priority-Medium Type-Defect auto-migrated
``` Registration times out when switiching from wifi to 3g data. This is happening when i started using pbxes.org and tcp. Before with sipgate and udp it worked well. I think the registration request is send too soon. 3g data is not really connected yet, and registration is attempted. If i retry it registers perfectly, without restarting app or phone (just open sipdroid) What steps will reproduce the problem? 1. account with pbxes and tcp used 2. disconnect wifi switch to 3g data (vodafone.de and o2/ZTE Blade/Huawei u8600) 3. registration is attempted really quickly (yellow dot) goes red after timeout and stays red What is the expected output? What do you see instead? What version of the product are you using? On what device/operating system? sipdroid 2.8 beta android 2.3.7 and 2.3.5, Huawei u8600 and ZTE Blade Which SIP server are you using? What happens with PBXes? pbxes.org (nuernberg) with tcp pbxes with udp and sipgate with udp switch over just fine!!! Which type of network are you using? Wifi (open) to 3g data switch over Please provide any additional information below. ``` Original issue reported on code.google.com by `ramp...@gmail.com` on 6 Jan 2013 at 9:54
1.0
Registration timeout with tcp while switching wifi to 3g - ``` Registration times out when switiching from wifi to 3g data. This is happening when i started using pbxes.org and tcp. Before with sipgate and udp it worked well. I think the registration request is send too soon. 3g data is not really connected yet, and registration is attempted. If i retry it registers perfectly, without restarting app or phone (just open sipdroid) What steps will reproduce the problem? 1. account with pbxes and tcp used 2. disconnect wifi switch to 3g data (vodafone.de and o2/ZTE Blade/Huawei u8600) 3. registration is attempted really quickly (yellow dot) goes red after timeout and stays red What is the expected output? What do you see instead? What version of the product are you using? On what device/operating system? sipdroid 2.8 beta android 2.3.7 and 2.3.5, Huawei u8600 and ZTE Blade Which SIP server are you using? What happens with PBXes? pbxes.org (nuernberg) with tcp pbxes with udp and sipgate with udp switch over just fine!!! Which type of network are you using? Wifi (open) to 3g data switch over Please provide any additional information below. ``` Original issue reported on code.google.com by `ramp...@gmail.com` on 6 Jan 2013 at 9:54
defect
registration timeout with tcp while switching wifi to registration times out when switiching from wifi to data this is happening when i started using pbxes org and tcp before with sipgate and udp it worked well i think the registration request is send too soon data is not really connected yet and registration is attempted if i retry it registers perfectly without restarting app or phone just open sipdroid what steps will reproduce the problem account with pbxes and tcp used disconnect wifi switch to data vodafone de and zte blade huawei registration is attempted really quickly yellow dot goes red after timeout and stays red what is the expected output what do you see instead what version of the product are you using on what device operating system sipdroid beta android and huawei and zte blade which sip server are you using what happens with pbxes pbxes org nuernberg with tcp pbxes with udp and sipgate with udp switch over just fine which type of network are you using wifi open to data switch over please provide any additional information below original issue reported on code google com by ramp gmail com on jan at
1