column        dtype          min / classes  max
Unnamed: 0    int64          0              832k
id            float64        2.49B          32.1B
type          stringclasses  1 value
created_at    stringlengths  19             19
repo          stringlengths  5              112
repo_url      stringlengths  34             141
action        stringclasses  3 values
title         stringlengths  1              1k
labels        stringlengths  4              1.38k
body          stringlengths  1              262k
index         stringclasses  16 values
text_combine  stringlengths  96             262k
label         stringclasses  2 values
text          stringlengths  96             252k
binary_label  int64          0              1
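The schema above pairs a two-class string column (label) with a numeric binary_label, and every record below maps "priority" to 1 and "non_priority" to 0. A minimal sketch of that derivation, assuming pandas and two toy rows in place of the real dataset (whose file name and path are not given here):

```python
import pandas as pd

# Two toy rows standing in for the real dataset; the actual file is unknown.
df = pd.DataFrame({"label": ["non_priority", "priority"]})

# binary_label appears to be 1 for "priority" and 0 for "non_priority".
df["binary_label"] = (df["label"] == "priority").astype("int64")
```
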
Unnamed: 0: 364,417
id: 25,490,185,439
type: IssuesEvent
created_at: 2022-11-27 00:22:42
repo: projetosala/projetosalabadge
repo_url: https://api.github.com/repos/projetosala/projetosalabadge
action: opened
title: Criar documentação do projeto
labels: documentation priority
body: Adicionar os seguintes pontos na documentação: - [ ] Ambiente de desenvolvimento - [ ] Fluxo de trabalho - [ ] Como contribuir - [ ] Como executar - [ ] Design da aplicação - [ ] Licença utilizada
index: 1.0
text_combine: Criar documentação do projeto - Adicionar os seguintes pontos na documentação: - [ ] Ambiente de desenvolvimento - [ ] Fluxo de trabalho - [ ] Como contribuir - [ ] Como executar - [ ] Design da aplicação - [ ] Licença utilizada
label: non_priority
text: criar documentação do projeto adicionar os seguintes pontos na documentação ambiente de desenvolvimento fluxo de trabalho como contribuir como executar design da aplicação licença utilizada
binary_label: 0
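Comparing text with text_combine in the record above suggests the text column is a lowercased copy of text_combine with URLs, punctuation, and digit-bearing tokens dropped. A rough reconstruction follows; this is an inference from the visible records, not the dataset's actual pipeline, and it does not reproduce every detail (e.g. the emoji kept in one record below):

```python
import re

def normalize(text: str) -> str:
    # Inferred from the records: strip bare URLs, lowercase, split into
    # word tokens, keep only purely alphabetic tokens (drops "2", "k8s",
    # "IMG_1828", and similar).
    text = re.sub(r"https?://\S+", " ", text.lower())
    tokens = re.findall(r"\w+", text)
    return " ".join(t for t in tokens if t.isalpha())
```

Applied to the text_combine value above, this reproduces the record's text value exactly.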
Unnamed: 0: 145,398
id: 5,575,075,543
type: IssuesEvent
created_at: 2017-03-28 00:23:51
repo: projectcalico/calico
repo_url: https://api.github.com/repos/projectcalico/calico
action: closed
title: Remove Docker Vagrant-Ubuntu Guide
labels: area/demo content/out-of-date priority/P1 size/S
body: I don't think we really need to be maintaining 2 vagrant guides. Let's stick with CoreOS guide to match the k8s guide.
index: 1.0
text_combine: Remove Docker Vagrant-Ubuntu Guide - I don't think we really need to be maintaining 2 vagrant guides. Let's stick with CoreOS guide to match the k8s guide.
label: priority
text: remove docker vagrant ubuntu guide i don t think we really need to be maintaining vagrant guides let s stick with coreos guide to match the guide
binary_label: 1
Unnamed: 0: 313,214
id: 9,558,471,500
type: IssuesEvent
created_at: 2019-05-03 14:18:39
repo: entrepreneur-interet-general/gobelins
repo_url: https://api.github.com/repos/entrepreneur-interet-general/gobelins
action: closed
title: Sur l'écran d'accueil Beta, lancer une recherche bloque le scroll.
labels: Priority 4
body: **Description du défaut** Sur l'écran d'accueil Beta, si l'utilisateur, avant de faire défiler jusqu'aux objets, lance une recherche depuis le bas de son écran, alors le viewport est bloqué lors de l'affichage des résultats. **Comportement attendu** Interagir avec l'interface de recherche devrait faire scroller et faire disparaître l'écran d'accueil Beta. Problème sur iphone SE 👍 ![IMG_1828](https://user-images.githubusercontent.com/36261410/55716311-f3e1d000-59f6-11e9-938f-1f115c6c1d7f.PNG)
index: 1.0
text_combine: Sur l'écran d'accueil Beta, lancer une recherche bloque le scroll. - **Description du défaut** Sur l'écran d'accueil Beta, si l'utilisateur, avant de faire défiler jusqu'aux objets, lance une recherche depuis le bas de son écran, alors le viewport est bloqué lors de l'affichage des résultats. **Comportement attendu** Interagir avec l'interface de recherche devrait faire scroller et faire disparaître l'écran d'accueil Beta. Problème sur iphone SE 👍 ![IMG_1828](https://user-images.githubusercontent.com/36261410/55716311-f3e1d000-59f6-11e9-938f-1f115c6c1d7f.PNG)
label: priority
text: sur l écran d accueil beta lancer une recherche bloque le scroll description du défaut sur l écran d accueil beta si l utilisateur avant de faire défiler jusqu aux objets lance une recherche depuis le bas de son écran alors le viewport est bloqué lors de l affichage des résultats comportement attendu interagir avec l interface de recherche devrait faire scroller et faire disparaître l écran d accueil beta problème sur iphone se 👍
binary_label: 1
Unnamed: 0: 703,990
id: 24,180,409,942
type: IssuesEvent
created_at: 2022-09-23 08:21:21
repo: space-wizards/space-station-14
repo_url: https://api.github.com/repos/space-wizards/space-station-14
action: opened
title: APC construction doesn't require a cell and will not pass through cell charge
labels: Priority: 2-Before Release Issue: Feature Request Difficulty: 2-Medium
body: ## Description <!-- Explain your issue in detail. Issues without proper explanation are liable to be closed by maintainers. --> So APC construction has been improved but still doesn't require a power cell like it does in SS13. This is effectively the battery component of the APC and allows APCs to be upgraded in game just by swapping the cell, or to bump start a rooms power by swapping cells. Although the construction system lets you create containers to store a cell, I'm not sure there's a way to pass through the charge percentage of the cell in proportion to the APC entity battery level, or do away with one of these things entirely so the cell charge is the battery comp for the APC. This issue replaces #6962
index: 1.0
text_combine: APC construction doesn't require a cell and will not pass through cell charge - ## Description <!-- Explain your issue in detail. Issues without proper explanation are liable to be closed by maintainers. --> So APC construction has been improved but still doesn't require a power cell like it does in SS13. This is effectively the battery component of the APC and allows APCs to be upgraded in game just by swapping the cell, or to bump start a rooms power by swapping cells. Although the construction system lets you create containers to store a cell, I'm not sure there's a way to pass through the charge percentage of the cell in proportion to the APC entity battery level, or do away with one of these things entirely so the cell charge is the battery comp for the APC. This issue replaces #6962
label: priority
text: apc construction doesn t require a cell and will not pass through cell charge description so apc construction has been improved but still doesn t require a power cell like it does in this is effectively the battery component of the apc and allows apcs to be upgraded in game just by swapping the cell or to bump start a rooms power by swapping cells although the construction system lets you create containers to store a cell i m not sure there s a way to pass through the charge percentage of the cell in proportion to the apc entity battery level or do away with one of these things entirely so the cell charge is the battery comp for the apc this issue replaces
binary_label: 1
Unnamed: 0: 754,279
id: 26,380,182,809
type: IssuesEvent
created_at: 2023-01-12 07:57:34
repo: Together-Java/TJ-Bot
repo_url: https://api.github.com/repos/Together-Java/TJ-Bot
action: closed
title: Auto-post advice on empty help threads
labels: enhancement priority: normal valid help wanted
body: Sometimes new users create a help thread and then dont post any details in it. In such a situation, the bot should (after some waiting time, maybe 5 minutes) post a message where it pings the author and asks them to post some details, shortly explaining how the system works. This could be setup as scheduled task, scheduled by `ImplictAskListener.java` and `AskCommand.java` (via a separate helper class). Where we could schedule this "explain-job" in 5 minutes after creating the threads. The job would then check the thread history to see if there are any messages already (other than from the bot) - and if not, it would post this message. Also, we would have to setup a small database to memorize help thread channel ID <-> author ID in order to be able to know who to ping.
index: 1.0
text_combine: Auto-post advice on empty help threads - Sometimes new users create a help thread and then dont post any details in it. In such a situation, the bot should (after some waiting time, maybe 5 minutes) post a message where it pings the author and asks them to post some details, shortly explaining how the system works. This could be setup as scheduled task, scheduled by `ImplictAskListener.java` and `AskCommand.java` (via a separate helper class). Where we could schedule this "explain-job" in 5 minutes after creating the threads. The job would then check the thread history to see if there are any messages already (other than from the bot) - and if not, it would post this message. Also, we would have to setup a small database to memorize help thread channel ID <-> author ID in order to be able to know who to ping.
label: priority
text: auto post advice on empty help threads sometimes new users create a help thread and then dont post any details in it in such a situation the bot should after some waiting time maybe minutes post a message where it pings the author and asks them to post some details shortly explaining how the system works this could be setup as scheduled task scheduled by implictasklistener java and askcommand java via a separate helper class where we could schedule this explain job in minutes after creating the threads the job would then check the thread history to see if there are any messages already other than from the bot and if not it would post this message also we would have to setup a small database to memorize help thread channel id author id in order to be able to know who to ping
binary_label: 1
Unnamed: 0: 296,073
id: 9,104,018,432
type: IssuesEvent
created_at: 2019-02-20 17:07:50
repo: medic/medic
repo_url: https://api.github.com/repos/medic/medic
action: closed
title: Navigate away from form dialog shown when completing a task
labels: Priority: 1 - High Type: Bug
body: The dialog that you are navigating away from a un-saved form is shown on submit from a task. **Steps to reproduce**: Navigate to a person/place that has a task. Or navigate to the tasks page. Complete a task **What should happen**: The task is completed and user is returned. **What actually happens**: The task is completed and the dialog `This form is not finished. You will lose your data if you leave now. A form is only complete when you hit Submit. Are you sure you want to leave?` is shown. **Environment**: - Instance: gamma.dev - Browser: Chrome - Client platform: Linux - App: Webapp - Version: 3.3.0.beta.6
index: 1.0
text_combine: Navigate away from form dialog shown when completing a task - The dialog that you are navigating away from a un-saved form is shown on submit from a task. **Steps to reproduce**: Navigate to a person/place that has a task. Or navigate to the tasks page. Complete a task **What should happen**: The task is completed and user is returned. **What actually happens**: The task is completed and the dialog `This form is not finished. You will lose your data if you leave now. A form is only complete when you hit Submit. Are you sure you want to leave?` is shown. **Environment**: - Instance: gamma.dev - Browser: Chrome - Client platform: Linux - App: Webapp - Version: 3.3.0.beta.6
label: priority
text: navigate away from form dialog shown when completing a task the dialog that you are navigating away from a un saved form is shown on submit from a task steps to reproduce navigate to a person place that has a task or navigate to the tasks page complete a task what should happen the task is completed and user is returned what actually happens the task is completed and the dialog this form is not finished you will lose your data if you leave now a form is only complete when you hit submit are you sure you want to leave is shown environment instance gamma dev browser chrome client platform linux app webapp version beta
binary_label: 1
Unnamed: 0: 20,481
id: 3,814,422,532
type: IssuesEvent
created_at: 2016-03-28 13:14:36
repo: Exa-Networks/exabgp
repo_url: https://api.github.com/repos/Exa-Networks/exabgp
action: closed
title: exabgp ibgp down
labels: duplicate fixed-need-testing
body: ``` exabgp: 29770 network Peer ip1 ASN asnumber out loop, peer reset, message [] error[] exabgp: 29770 message Peer ip2 ASN asnumber<< KEEPALIVE exabgp: 29770 message Peer ip2 ASN asnumber << NOTIFICATION ******************************************************************************** EXABGP MISBEHAVED / HELP US FIX IT ******************************************************************************** Sorry, you encountered a problem with ExaBGP, as the problem only affects one peer, we are trying to keep the program running. There are a few things you can do to help us (and yourself): - make sure you are running the latest version of the code available at https://github.com/Exa-Networks/exabgp/releases/latest - if so report the issue on https://github.com/Exa-Networks/exabgp/issues so it can be fixed (github can be searched for similar reports) PLEASE, when reporting, do include as much information as you can: - do not obfuscate any data (feel free to send us a private email with the extra information if your business policy is strict on information sharing) https://github.com/Exa-Networks/exabgp/wiki/FAQ - if you can reproduce the issue, run ExaBGP with the command line option -d it provides us with much needed information to fix problems quickly - include the information presented below Should you not receive an acknowledgment of your issue on github (assignement, comment, or similar) within a few hours, feel free to email us to make sure it was not overlooked. (please keep in mind the authors are based in GMT/Europe) ******************************************************************************** -- Please provide ALL the information below on : -- https://github.com/Exa-Networks/exabgp/issues ******************************************************************************** ExaBGP version : 3.4.13 Python version : 2.6.6 (r266:84292, May 29 2014, 05:49:27) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] System Uname : #1 SMP Mon Jan 28 17:12:52 CST 2013 System MaxInt : 9223372036854775807 peer ip2 ASN asnumber <type 'exceptions.KeyError'> 3 Traceback (most recent call last): File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/peer.py", line 587, in _run for action in self._main(direction): File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/peer.py", line 456, in _main for message in proto.read_message(): File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/protocol.py", line 168, in read_message self.peer.reactor.processes.message(msg_id,self.peer,message,'','') File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/api/processes.py", line 270, in closure return function(self,*args) File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/api/processes.py", line 306, in message self._dispatch[message_id](self,peer,message,header,*body) KeyError: 3 ```
index: 1.0
text_combine: exabgp ibgp down - ``` exabgp: 29770 network Peer ip1 ASN asnumber out loop, peer reset, message [] error[] exabgp: 29770 message Peer ip2 ASN asnumber<< KEEPALIVE exabgp: 29770 message Peer ip2 ASN asnumber << NOTIFICATION ******************************************************************************** EXABGP MISBEHAVED / HELP US FIX IT ******************************************************************************** Sorry, you encountered a problem with ExaBGP, as the problem only affects one peer, we are trying to keep the program running. There are a few things you can do to help us (and yourself): - make sure you are running the latest version of the code available at https://github.com/Exa-Networks/exabgp/releases/latest - if so report the issue on https://github.com/Exa-Networks/exabgp/issues so it can be fixed (github can be searched for similar reports) PLEASE, when reporting, do include as much information as you can: - do not obfuscate any data (feel free to send us a private email with the extra information if your business policy is strict on information sharing) https://github.com/Exa-Networks/exabgp/wiki/FAQ - if you can reproduce the issue, run ExaBGP with the command line option -d it provides us with much needed information to fix problems quickly - include the information presented below Should you not receive an acknowledgment of your issue on github (assignement, comment, or similar) within a few hours, feel free to email us to make sure it was not overlooked. (please keep in mind the authors are based in GMT/Europe) ******************************************************************************** -- Please provide ALL the information below on : -- https://github.com/Exa-Networks/exabgp/issues ******************************************************************************** ExaBGP version : 3.4.13 Python version : 2.6.6 (r266:84292, May 29 2014, 05:49:27) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] System Uname : #1 SMP Mon Jan 28 17:12:52 CST 2013 System MaxInt : 9223372036854775807 peer ip2 ASN asnumber <type 'exceptions.KeyError'> 3 Traceback (most recent call last): File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/peer.py", line 587, in _run for action in self._main(direction): File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/peer.py", line 456, in _main for message in proto.read_message(): File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/protocol.py", line 168, in read_message self.peer.reactor.processes.message(msg_id,self.peer,message,'','') File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/api/processes.py", line 270, in closure return function(self,*args) File "/home/work/agentInstall/sh.a1_sender01/exabgpAgent/newlyExabgp/exabgp/lib/exabgp/reactor/api/processes.py", line 306, in message self._dispatch[message_id](self,peer,message,header,*body) KeyError: 3 ```
label: non_priority
text: exabgp ibgp down exabgp network peer asn asnumber out loop peer reset message error exabgp message peer asn asnumber keepalive exabgp message peer asn asnumber notification exabgp misbehaved help us fix it sorry you encountered a problem with exabgp as the problem only affects one peer we are trying to keep the program running there are a few things you can do to help us and yourself make sure you are running the latest version of the code available at if so report the issue on so it can be fixed github can be searched for similar reports please when reporting do include as much information as you can do not obfuscate any data feel free to send us a private email with the extra information if your business policy is strict on information sharing if you can reproduce the issue run exabgp with the command line option d it provides us with much needed information to fix problems quickly include the information presented below should you not receive an acknowledgment of your issue on github assignement comment or similar within a few hours feel free to email us to make sure it was not overlooked please keep in mind the authors are based in gmt europe please provide all the information below on exabgp version python version may system uname smp mon jan cst system maxint peer asn asnumber traceback most recent call last file home work agentinstall sh exabgpagent newlyexabgp exabgp lib exabgp reactor peer py line in run for action in self main direction file home work agentinstall sh exabgpagent newlyexabgp exabgp lib exabgp reactor peer py line in main for message in proto read message file home work agentinstall sh exabgpagent newlyexabgp exabgp lib exabgp reactor protocol py line in read message self peer reactor processes message msg id self peer message file home work agentinstall sh exabgpagent newlyexabgp exabgp lib exabgp reactor api processes py line in closure return function self args file home work agentinstall sh exabgpagent newlyexabgp exabgp lib exabgp reactor api processes py line in message self dispatch self peer message header body keyerror
binary_label: 0
Unnamed: 0: 95,572
id: 27,553,770,128
type: IssuesEvent
created_at: 2023-03-07 16:30:30
repo: dotnet/runtime
repo_url: https://api.github.com/repos/dotnet/runtime
action: opened
title: AV during System.Net.Http.Functional.Tests on windows x86
labels: blocking-clean-ci Known Build Error
body: ## Build Information Build: https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_build/results?buildId=195488 Build error leg or test failing: System.Net.Http.Functional.Tests.WorkItemExecution Pull request: https://github.com/dotnet/runtime/pull/83068 <!-- Error message template --> ## Error Message Fill the error message using [known issues guidance](https://github.com/dotnet/arcade/blob/main/Documentation/Projects/Build%20Analysis/KnownIssues.md#how-to-fill-out-a-known-issue-error-section). ```json { "ErrorMessage": "exit code -1073741819", "BuildRetry": false, "ErrorPattern": "", "ExcludeConsoleLog": false } ``` Seems like failure is in the xunit host runtime?
index: 1.0
text_combine: AV during System.Net.Http.Functional.Tests on windows x86 - ## Build Information Build: https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_build/results?buildId=195488 Build error leg or test failing: System.Net.Http.Functional.Tests.WorkItemExecution Pull request: https://github.com/dotnet/runtime/pull/83068 <!-- Error message template --> ## Error Message Fill the error message using [known issues guidance](https://github.com/dotnet/arcade/blob/main/Documentation/Projects/Build%20Analysis/KnownIssues.md#how-to-fill-out-a-known-issue-error-section). ```json { "ErrorMessage": "exit code -1073741819", "BuildRetry": false, "ErrorPattern": "", "ExcludeConsoleLog": false } ``` Seems like failure is in the xunit host runtime?
label: non_priority
text: av during system net http functional tests on windows build information build build error leg or test failing system net http functional tests workitemexecution pull request error message fill the error message using json errormessage exit code buildretry false errorpattern excludeconsolelog false seems like failure is in the xunit host runtime
binary_label: 0
Unnamed: 0: 76,226
id: 14,583,065,292
type: IssuesEvent
created_at: 2020-12-18 13:25:33
repo: odpi/egeria
repo_url: https://api.github.com/repos/odpi/egeria
action: opened
title: Security Analysis - Poor Logging Practice
labels: code-quality security
body: Ubrella issue for Code quality improvements identified OWASP [Poor Logging Practice](https://owasp.org/www-community/vulnerabilities/Poor_Logging_Practice)
index: 1.0
text_combine: Security Analysis - Poor Logging Practice - Ubrella issue for Code quality improvements identified OWASP [Poor Logging Practice](https://owasp.org/www-community/vulnerabilities/Poor_Logging_Practice)
label: non_priority
text: security analysis poor logging practice ubrella issue for code quality improvements identified owasp
binary_label: 0
Unnamed: 0: 123,296
id: 16,473,849,633
type: IssuesEvent
created_at: 2021-05-23 23:37:31
repo: microsoft/vscode
repo_url: https://api.github.com/repos/microsoft/vscode
action: closed
title: Incorrect Type, expected array with terminal
labels: *as-designed
body: Issue Type: <b>Bug</b> At any time go into settings, edit user settings in settings.json. Write the following: "terminal.integrated.profiles.windows": { //automatically generated code here, remove comma below if necessary , "DevPowerShell64": { "path": "C:\\Windows\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe", "args": "-noe -c &{Import-Module \"\"\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll\"\"\"; Enter-VsDevShell efd669b9}", // } } } What this does: Uses the incorrect type of string, creating a warning, but that isn't the cool part. The cool part is that you can spawn a developer powershell with this code, if you have visual studio community edition installed. In theory, this should work for any visual studio edition. Why this is cool: It summons msbuild ect. into your integrated terminal, you simply have to redirect the terminal to the directory you want to use it in. I used this to build a cmake project. Less windows, more clean. Why this isn't cool: Presumably at some point someone could patch out taking literal strings like this for any reason. Why don't I just use 'args' as intended, with an array to get this behavior: The array of arguments ("args") are supplied as a string literal to powershell, presumably after some work is done on the array to make them a single string. For some reason, this causes -c to be understood as some kind of program. It also raises the question of whether code would pass backtick wrapped text to every terminal it was using... I imagine that'd cause some kind of bug somewhere eventually. Workarounds if patched so it forces the use of an array: make it clear why the patch occurred somewhere and a solution could be. A simple workaround is just to run an external terminal started in developer mode of course! What might be cool: If there was an extension, addon, or set of instructions to put the VS Developer command prompt into the integrated terminal by default if you had it. Maybe just a button you could click somewhere that pasted the right command into integrated terminals? This is primarily a bug because it's clearly not taking the right type, unfortunately. VS Code version: Code 1.56.2 (054a9295330880ed74ceaedda236253b4f39a335, 2021-05-12T17:13:13.157Z) OS version: Windows_NT x64 10.0.19042 <!-- generated by issue reporter -->
index: 1.0
text_combine: Incorrect Type, expected array with terminal - Issue Type: <b>Bug</b> At any time go into settings, edit user settings in settings.json. Write the following: "terminal.integrated.profiles.windows": { //automatically generated code here, remove comma below if necessary , "DevPowerShell64": { "path": "C:\\Windows\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe", "args": "-noe -c &{Import-Module \"\"\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll\"\"\"; Enter-VsDevShell efd669b9}", // } } } What this does: Uses the incorrect type of string, creating a warning, but that isn't the cool part. The cool part is that you can spawn a developer powershell with this code, if you have visual studio community edition installed. In theory, this should work for any visual studio edition. Why this is cool: It summons msbuild ect. into your integrated terminal, you simply have to redirect the terminal to the directory you want to use it in. I used this to build a cmake project. Less windows, more clean. Why this isn't cool: Presumably at some point someone could patch out taking literal strings like this for any reason. Why don't I just use 'args' as intended, with an array to get this behavior: The array of arguments ("args") are supplied as a string literal to powershell, presumably after some work is done on the array to make them a single string. For some reason, this causes -c to be understood as some kind of program. It also raises the question of whether code would pass backtick wrapped text to every terminal it was using... I imagine that'd cause some kind of bug somewhere eventually. Workarounds if patched so it forces the use of an array: make it clear why the patch occurred somewhere and a solution could be. A simple workaround is just to run an external terminal started in developer mode of course! What might be cool: If there was an extension, addon, or set of instructions to put the VS Developer command prompt into the integrated terminal by default if you had it. Maybe just a button you could click somewhere that pasted the right command into integrated terminals? This is primarily a bug because it's clearly not taking the right type, unfortunately. VS Code version: Code 1.56.2 (054a9295330880ed74ceaedda236253b4f39a335, 2021-05-12T17:13:13.157Z) OS version: Windows_NT x64 10.0.19042 <!-- generated by issue reporter -->
label: non_priority
text: incorrect type expected array with terminal issue type bug at any time go into settings edit user settings in settings json write the following terminal integrated profiles windows automatically generated code here remove comma below if necessary path c windows windowspowershell powershell exe args noe c import module c program files microsoft visual studio community tools microsoft visualstudio devshell dll enter vsdevshell what this does uses the incorrect type of string creating a warning but that isn t the cool part the cool part is that you can spawn a developer powershell with this code if you have visual studio community edition installed in theory this should work for any visual studio edition why this is cool it summons msbuild ect into your integrated terminal you simply have to redirect the terminal to the directory you want to use it in i used this to build a cmake project less windows more clean why this isn t cool presumably at some point someone could patch out taking literal strings like this for any reason why don t i just use args as intended with an array to get this behavior the array of arguments args are supplied as a string literal to powershell presumably after some work is done on the array to make them a single string for some reason this causes c to be understood as some kind of program it also raises the question of whether code would pass backtick wrapped text to every terminal it was using i imagine that d cause some kind of bug somewhere eventually workarounds if patched so it forces the use of an array make it clear why the patch occurred somewhere and a solution could be a simple workaround is just to run an external terminal started in developer mode of course what might be cool if there was an extension addon or set of instructions to put the vs developer command prompt into the integrated terminal by default if you had it maybe just a button you could click somewhere that pasted the right command into integrated terminals this is primarily a bug because it s clearly not taking the right type unfortunately vs code version code os version windows nt
binary_label: 0
Unnamed: 0: 352,817
id: 25,083,467,542
type: IssuesEvent
created_at: 2022-11-07 21:26:59
repo: SWE-OutOfBounds/Documents
repo_url: https://api.github.com/repos/SWE-OutOfBounds/Documents
action: opened
title: Creazione preventivo orari
labels: documentation
body: Definizione di un documento per la specifica degli orari da rispettare
index: 1.0
text_combine: Creazione preventivo orari - Definizione di un documento per la specifica degli orari da rispettare
label: non_priority
text: creazione preventivo orari definizione di un documento per la specifica degli orari da rispettare
binary_label: 0
80,060
29,935,850,321
IssuesEvent
2023-06-22 12:43:18
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
MapStore Write-Behind Batching produces too small batches
Type: Defect
**Problem** Our write-behind queue is growing very large because it is not processed efficiently. We are trying to use batches of size 1000, but we see that most batches are smaller than 10 items. **Expected behavior** Since we are using coalescing write-behind queues - and therefore only the latest version of a map entry needs to be persisted, we would expect that the batches are 1000 items in size if the write-behind queue is growing large. **To Reproduce** 1. To reproduce the issue, you will need a workload that mixes store and delete operations. We use around 80% store and 20% delete operations. 2. Configure a map which has a coalescing write-behind map store configured and uses batches of size 1000. 3. Execute the workload and look at the MapStore.storeAll() and MapStore.deleteAll() callbacks. I already opened a PR which contains a test that shows the problem: https://github.com/hazelcast/hazelcast/pull/24672 **Additional context** In the above mentioned PR I already implemented a fix which seems easy enough - please check, approve, and merge :-)
1.0
MapStore Write-Behind Batching produces too small batches - **Problem** Our write-behind queue is growing very large because it is not processed efficiently. We are trying to use batches of size 1000, but we see that most batches are smaller than 10 items. **Expected behavior** Since we are using coalescing write-behind queues - and therefore only the latest version of a map entry needs to be persisted, we would expect that the batches are 1000 items in size if the write-behind queue is growing large. **To Reproduce** 1. To reproduce the issue, you will need a workload that mixes store and delete operations. We use around 80% store and 20% delete operations. 2. Configure a map which has a coalescing write-behind map store configured and uses batches of size 1000. 3. Execute the workload and look at the MapStore.storeAll() and MapStore.deleteAll() callbacks. I already opened a PR which contains a test that shows the problem: https://github.com/hazelcast/hazelcast/pull/24672 **Additional context** In the above mentioned PR I already implemented a fix which seems easy enough - please check, approve, and merge :-)
non_priority
mapstore write behind batching produces too small batches problem our write behind queue is growing very large because it is not processed efficiently we are trying to use batches of size but we see that most batches are smaller than items expected behavior since we are using coalescing write behind queues and therefore only the latest version of a map entry needs to be persisted we would expect that the batches are items in size if the write behind queue is growing large to reproduce to reproduce the issue you will need a workload that mixes store and delete operations we use around store and delete operations configure a map which has a coalescing write behind map store configured and uses batches of size execute the workload and look at the mapstore storeall and mapstore deleteall callbacks i already opened a pr which contains a test that shows the problem additional context in the above mentioned pr i already implemented a fix which seems easy enough please check approve and merge
0
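The coalescing write-behind behavior described in the MapStore record above can be sketched with a toy queue model (hypothetical Python, not Hazelcast's actual implementation — the class and method names here are invented for illustration):

```python
from collections import OrderedDict

class CoalescingWriteBehindQueue:
    """Toy model of a coalescing write-behind queue: only the latest
    pending operation per key is kept, and a flush drains up to
    batch_size entries into storeAll/deleteAll style batches."""

    def __init__(self, batch_size=1000):
        self.batch_size = batch_size
        self.pending = OrderedDict()  # key -> ("store", value) or ("delete", None)

    def store(self, key, value):
        self.pending.pop(key, None)          # coalesce: drop any older op for key
        self.pending[key] = ("store", value)

    def delete(self, key):
        self.pending.pop(key, None)
        self.pending[key] = ("delete", None)

    def flush(self):
        """Drain one batch in FIFO order; a large queue should yield full batches."""
        stores, deletes = {}, []
        while self.pending and len(stores) + len(deletes) < self.batch_size:
            key, (op, value) = self.pending.popitem(last=False)
            if op == "store":
                stores[key] = value
            else:
                deletes.append(key)
        return stores, deletes

q = CoalescingWriteBehindQueue(batch_size=1000)
for i in range(5000):
    if i % 5 == 4:
        q.delete(f"k{i}")    # ~20% delete operations
    else:
        q.store(f"k{i}", i)  # ~80% store operations
stores, deletes = q.flush()
print(len(stores) + len(deletes))  # 1000
```

With coalescing over 5000 distinct pending keys, each flush drains a full batch of 1000, which is the behavior the reporter expects instead of batches smaller than 10.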
545,033
15,934,592,489
IssuesEvent
2021-04-14 08:51:37
Automattic/Edit-Flow
https://api.github.com/repos/Automattic/Edit-Flow
opened
Migrating Travis CI to GitHub Actions
[Priority] High enhancement
As the title, we consider moving from Travis CI to GitHub Actions as https://travis-ci.org/ will be moved to http://www.travis-ci.com/ soon. Good start https://docs.github.com/en/actions/learn-github-actions/migrating-from-travis-ci-to-github-actions
1.0
Migrating Travis CI to GitHub Actions - As the title, we consider moving from Travis CI to GitHub Actions as https://travis-ci.org/ will be moved to http://www.travis-ci.com/ soon. Good start https://docs.github.com/en/actions/learn-github-actions/migrating-from-travis-ci-to-github-actions
priority
migrating travis ci to github actions as the title we consider moving from travis ci to github actions as will be moved to soon good start
1
328,736
9,999,337,652
IssuesEvent
2019-07-12 10:23:06
ushahidi/opensourcedesign
https://api.github.com/repos/ushahidi/opensourcedesign
reopened
Square logo to be created
Design Highest Priority In progress
We need a logo that will work on social icons/profiles that is square orientation. max size 1080x1080px - High res version in .png .jpg .svg min size 50x50px low res version .png .jpg .svg Favicon .favi (use website to create this)
1.0
Square logo to be created - We need a logo that will work on social icons/profiles that is square orientation. max size 1080x1080px - High res version in .png .jpg .svg min size 50x50px low res version .png .jpg .svg Favicon .favi (use website to create this)
priority
square logo to be created we need a logo that will work on social icons profiles that is square orientation max size high res version in png jpg svg min size low res version png jpg svg favicon favi use website to create this
1
499,363
14,445,962,477
IssuesEvent
2020-12-08 00:08:44
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
Cloud Worlds - auto-updated to version 0.9.3.0
Category: Cloud Worlds Priority: High Status: Fixed
My Cloud World auto updated to version 0.9.3.0 , I'm guessing other Cloud Worlds might've done the same if they were set to auto-update: ![image](https://user-images.githubusercontent.com/964559/101179053-9d2efe00-35fe-11eb-98f2-44f07810a7bf.png)
1.0
Cloud Worlds - auto-updated to version 0.9.3.0 - My Cloud World auto updated to version 0.9.3.0 , I'm guessing other Cloud Worlds might've done the same if they were set to auto-update: ![image](https://user-images.githubusercontent.com/964559/101179053-9d2efe00-35fe-11eb-98f2-44f07810a7bf.png)
priority
cloud worlds auto updated to version my cloud world auto updated to version i m guessing other cloud worlds might ve done the same if they were set to auto update
1
262,213
19,767,688,987
IssuesEvent
2022-01-17 06:00:01
Real-Dev-Squad/mobile-app
https://api.github.com/repos/Real-Dev-Squad/mobile-app
opened
Make a README.md
documentation
Make the README.md file for the repository explaining the file structure and local installation and running details.
1.0
Make a README.md - Make the README.md file for the repository explaining the file structure and local installation and running details.
non_priority
make a readme md make the readme md file for the repository explaining the file structure and local installation and running details
0
270,252
8,453,646,251
IssuesEvent
2018-10-20 17:37:21
CS2113-AY1819S1-T09-1/main
https://api.github.com/repos/CS2113-AY1819S1-T09-1/main
closed
add : add a print manually
priority.medium type.enhancement type.story
As a student , i want to be able to add a print to a particular queue manually so that i can ensure that my print job will be processed.
1.0
add : add a print manually - As a student , i want to be able to add a print to a particular queue manually so that i can ensure that my print job will be processed.
priority
add add a print manually as a student i want to be able to add a print to a particular queue manually so that i can ensure that my print job will be processed
1
416,739
28,097,845,128
IssuesEvent
2023-03-30 17:04:47
microsoft/torchgeo
https://api.github.com/repos/microsoft/torchgeo
closed
torchgeo install in google colab
documentation
### Description I wanted to check a potential bug with a reproducible example in a colab notebook and found that there is some installation issue. ``` %pip install torchgeo from torchgeo.trainers import ClassificationTask ``` `ContextualVersionConflict: (Pygments 2.6.1 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('pygments<3.0.0,>=2.14.0'), {'rich'})` ### Steps to reproduce [Notebook](https://colab.research.google.com/drive/1zK4uXLPGOkNWqRuqFszR9opg2bnFzDb1?usp=sharing) ### Version 0.4.0
1.0
torchgeo install in google colab - ### Description I wanted to check a potential bug with a reproducible example in a colab notebook and found that there is some installation issue. ``` %pip install torchgeo from torchgeo.trainers import ClassificationTask ``` `ContextualVersionConflict: (Pygments 2.6.1 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('pygments<3.0.0,>=2.14.0'), {'rich'})` ### Steps to reproduce [Notebook](https://colab.research.google.com/drive/1zK4uXLPGOkNWqRuqFszR9opg2bnFzDb1?usp=sharing) ### Version 0.4.0
non_priority
torchgeo install in google colab description i wanted to check a potential bug with a reproducible example in a colab notebook and found that there is some installation issue pip install torchgeo from torchgeo trainers import classificationtask contextualversionconflict pygments usr local lib dist packages requirement parse pygments rich steps to reproduce version
0
307,483
9,417,762,153
IssuesEvent
2019-04-10 17:31:56
Esri/maps-app-ios
https://api.github.com/repos/Esri/maps-app-ios
closed
Remove ATS allows arbitrary loads.
Effort - Small Priority - Low Status - Backlog Type - Enhancement
ArcGIS Online has updated its owning system URL. https://www.arcgis.com/sharing/rest/info?f=pjson This change should be reflected in - [ ] XCode - [ ] README
1.0
Remove ATS allows arbitrary loads. - ArcGIS Online has updated its owning system URL. https://www.arcgis.com/sharing/rest/info?f=pjson This change should be reflected in - [ ] XCode - [ ] README
priority
remove ats allows arbitrary loads arcgis online has updated its owning system url this change should be reflected in xcode readme
1
49,588
3,003,706,314
IssuesEvent
2015-07-25 05:49:21
jayway/powermock
https://api.github.com/repos/jayway/powermock
closed
Add support for Java 8
bug imported Priority-Medium
_From [iir...@gmail.com](https://code.google.com/u/113896979631297957884/) on January 06, 2014 23:51:28_ Some features may not work. At least PowerMock agent module didn't work. Here's my fix for that module: https://github.com/iirekm/powermock/commit/339ae4a17ec7e7adde3c78ea5f37843647bbbc3d _Original issue: http://code.google.com/p/powermock/issues/detail?id=475_
1.0
Add support for Java 8 - _From [iir...@gmail.com](https://code.google.com/u/113896979631297957884/) on January 06, 2014 23:51:28_ Some features may not work. At least PowerMock agent module didn't work. Here's my fix for that module: https://github.com/iirekm/powermock/commit/339ae4a17ec7e7adde3c78ea5f37843647bbbc3d _Original issue: http://code.google.com/p/powermock/issues/detail?id=475_
priority
add support for java from on january some features may not work at least powermock agent module didn t work here s my fix for that module original issue
1
67,531
20,978,278,355
IssuesEvent
2022-03-28 17:13:03
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
opened
auth API: fetching individual RRsets skips disabled records
auth defect
- Program: Authoritative - Issue type: Bug report ### Short description Fetching of individual RRsets via the API, as introduced in #11389, skips disabled records. Given that full zone listings do include those records, this is inconsistent and potentially confusing. ### Other notes Because fixing this looks like a minor refactor of how we use SQL, we will most likely not backport the fix to 4.6.
1.0
auth API: fetching individual RRsets skips disabled records - - Program: Authoritative - Issue type: Bug report ### Short description Fetching of individual RRsets via the API, as introduced in #11389, skips disabled records. Given that full zone listings do include those records, this is inconsistent and potentially confusing. ### Other notes Because fixing this looks like a minor refactor of how we use SQL, we will most likely not backport the fix to 4.6.
non_priority
auth api fetching individual rrsets skips disabled records program authoritative issue type bug report short description fetching of individual rrsets via the api as introduced in skips disabled records given that full zone listings do include those records this is inconsistent and potentially confusing other notes because fixing this looks like a minor refactor of how we use sql we will most likely not backport the fix to
0
202,237
7,045,636,547
IssuesEvent
2018-01-01 22:32:46
pybel/pybel
https://api.github.com/repos/pybel/pybel
closed
Integrate document utilities from pybel-tools
enhancement low priority
Since there's already a feature to make bel scripts, the robust solution might as well be in place here too
1.0
Integrate document utilities from pybel-tools - Since there's already a feature to make bel scripts, the robust solution might as well be in place here too
priority
integrate document utilities from pybel tools since there s already a feature to make bel scripts the robust solution might as well be in place here too
1
709,272
24,372,602,728
IssuesEvent
2022-10-03 20:42:39
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
exporter/loadbalancingexporter: use hashicorp/memberlist instead of DNS to create hashring
enhancement priority:p3 exporter/loadbalancing
**Is your feature request related to a problem? Please describe.** Doing DNS lookups every 5 seconds seems to put some strain on the infrastructure. Also scaling events that change the number of pods may change where a traceID goes, causing disruption. I've been thinking of adding an HPA for the load-balancer front-end and the tail-sampling backends, and during a load-increase, a scale-up of the backends would probably lose a lot of traces when we need them most (we're getting more load). Grafana/tempo ingesters/distributors use a memberlist mechanism to distribute traces in a more reliable way, and I suspect we should move the load-balancer more in that direction for more robust trace-distribution. **Describe the solution you'd like** Seems https://github.com/hashicorp/memberlist is a robust system, used a lot in the loki and tempo backends (and many other places). **Describe alternatives you've considered** None come to mind. **Additional context**
1.0
exporter/loadbalancingexporter: use hashicorp/memberlist instead of DNS to create hashring - **Is your feature request related to a problem? Please describe.** Doing DNS lookups every 5 seconds seems to put some strain on the infrastructure. Also scaling events that change the number of pods may change where a traceID goes, causing disruption. I've been thinking of adding an HPA for the load-balancer front-end and the tail-sampling backends, and during a load-increase, a scale-up of the backends would probably lose a lot of traces when we need them most (we're getting more load). Grafana/tempo ingesters/distributors use a memberlist mechanism to distribute traces in a more reliable way, and I suspect we should move the load-balancer more in that direction for more robust trace-distribution. **Describe the solution you'd like** Seems https://github.com/hashicorp/memberlist is a robust system, used a lot in the loki and tempo backends (and many other places). **Describe alternatives you've considered** None come to mind. **Additional context**
priority
exporter loadbalancingexporter use hashicorp memberlist instead of dns to create hashring is your feature request related to a problem please describe doing dns lookups every seconds seems to put some strain on the infrastructure also scaling events that change the number of pods may change where a traceid goes causing disruption i ve been thinking of adding an hpa for the load balancer front end and the tail sampling backends and during a load increase a scale up of the backends would probably lose a lot of traces when we need them most we re getting more load grafana tempo ingesters distributors use a memberlist mechanism to distribute traces in a more reliable way and i suspect we should move the load balancer more in that direction for more robust trace distribution describe the solution you d like seems is a robust system used a lot in the loki and tempo backends and many other places describe alternatives you ve considered none come to mind additional context
1
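The trace-ID routing discussed in the load-balancing exporter record above can be illustrated with a minimal consistent-hash ring (a Python sketch under simplified assumptions; the real exporter and hashicorp/memberlist add membership gossip and failure detection that are omitted here):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: each backend owns many virtual
    nodes, so adding or removing one backend remaps only a fraction
    of trace IDs instead of reshuffling everything."""

    def __init__(self, backends, vnodes=100):
        self.ring = []  # sorted list of (hash, backend)
        for b in backends:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{b}-{i}"), b))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def route(self, trace_id):
        # First virtual node clockwise from the trace ID's hash, wrapping around.
        idx = bisect.bisect(self.keys, self._hash(trace_id)) % len(self.ring)
        return self.ring[idx][1]

old = HashRing(["backend-1", "backend-2", "backend-3"])
new = HashRing(["backend-1", "backend-2", "backend-3", "backend-4"])
moved = sum(old.route(f"trace-{i}") != new.route(f"trace-{i}") for i in range(1000))
print(moved < 500)  # scaling up moves only a minority of trace IDs
```

Because each backend owns many small slices of the ring, adding backend-4 remaps roughly a quarter of the trace IDs rather than all of them — the disruption the issue wants to minimize during scale-up.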
557,537
16,510,987,357
IssuesEvent
2021-05-26 04:05:57
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
mcumgr: shell output gets truncated
area: Shell area: mcumgr bug priority: low
**Describe the bug** The mcumgr output from shell command gets truncated at predefined length, determined by buffer length. Providing `shell help` as an example but the issue impacts all shell commands that return some output. **To Reproduce** Steps to reproduce the behavior: 1. Build `west build -b nrf52840dk_nrf52840 zephyr/samples/subsys/mgmt/mcumgr/smp_svr -t menuconfig -- -DOVERLAY_CONFIG='overlay-fs.conf;overlay-shell.conf'` build and sign with your key 2. Flash to board and reset 3. Attempt to get help from shell `mcumgr -t 1200 --conntype serial --connstring dev=/dev/ttyACM0,baud=115200 shell exec help` 4. See error ![image](https://user-images.githubusercontent.com/56024351/109523811-0d3f0a00-7ab0-11eb-807a-1c20337e33bf.png) **Expected behavior** Shell output does not get truncated: ![image](https://user-images.githubusercontent.com/56024351/109524462-d9181900-7ab0-11eb-8919-a492e7f91744.png) **Impact** Most shell commands that give output longer than fits in buffer unusable. **Environment (please complete the following information):** - OS: Linux 5.4.0-65-generic x86_64 - Toolchain zephyr-sdk 0.12 - Zephyr 9df116ed0f6841fb9fbe1f804a5736622787f030
1.0
mcumgr: shell output gets truncated - **Describe the bug** The mcumgr output from shell command gets truncated at predefined length, determined by buffer length. Providing `shell help` as an example but the issue impacts all shell commands that return some output. **To Reproduce** Steps to reproduce the behavior: 1. Build `west build -b nrf52840dk_nrf52840 zephyr/samples/subsys/mgmt/mcumgr/smp_svr -t menuconfig -- -DOVERLAY_CONFIG='overlay-fs.conf;overlay-shell.conf'` build and sign with your key 2. Flash to board and reset 3. Attempt to get help from shell `mcumgr -t 1200 --conntype serial --connstring dev=/dev/ttyACM0,baud=115200 shell exec help` 4. See error ![image](https://user-images.githubusercontent.com/56024351/109523811-0d3f0a00-7ab0-11eb-807a-1c20337e33bf.png) **Expected behavior** Shell output does not get truncated: ![image](https://user-images.githubusercontent.com/56024351/109524462-d9181900-7ab0-11eb-8919-a492e7f91744.png) **Impact** Most shell commands that give output longer than fits in buffer unusable. **Environment (please complete the following information):** - OS: Linux 5.4.0-65-generic x86_64 - Toolchain zephyr-sdk 0.12 - Zephyr 9df116ed0f6841fb9fbe1f804a5736622787f030
priority
mcumgr shell output gets truncated describe the bug the mcumgr output from shell command gets truncated at predefined length determined by buffer length providing shell help as an example but the issue impacts all shell commands that return some output to reproduce steps to reproduce the behavior build west build b zephyr samples subsys mgmt mcumgr smp svr t menuconfig doverlay config overlay fs conf overlay shell conf build and sign with your key flash to board and reset attempt to get help from shell mcumgr t conntype serial connstring dev dev baud shell exec help see error expected behavior shell output does not get truncated impact most shell commands that give output longer than fits in buffer unusable environment please complete the following information os linux generic toolchain zephyr sdk zephyr
1
112,264
24,245,494,144
IssuesEvent
2022-09-27 10:11:44
appsmithorg/appsmith
https://api.github.com/repos/appsmithorg/appsmith
closed
[Bug]: Space or newline in action binding causes Action selector not to update properly
Bug Low Low effort Papercut FE Coders Pod Evaluated Value
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior If we add a space after the binding curly braces, then the action selector value is not updated properly. Refer to the following video: https://user-images.githubusercontent.com/10436935/142255827-6268bc4f-e53e-463a-ae9b-c2e4dc65a79e.mov ### Steps To Reproduce 1. Add action binding in JS Edit mode ex: `{{showModal('Modal1')}}` 2. See how "Modal1" gets updated once we toggle the JSEdit mode 3. Now, enable JS Edit mode and change the binding to `{{ showModal('Modal1') }}` (Adds space) 4. disable JSEdit mode and see that "Modal1" is not updated into the field. ### Environment Production ### Version Cloud
1.0
[Bug]: Space or newline in action binding causes Action selector not to update properly - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior If we add a space after the binding curly braces, then the action selector value is not updated properly. Refer to the following video: https://user-images.githubusercontent.com/10436935/142255827-6268bc4f-e53e-463a-ae9b-c2e4dc65a79e.mov ### Steps To Reproduce 1. Add action binding in JS Edit mode ex: `{{showModal('Modal1')}}` 2. See how "Modal1" gets updated once we toggle the JSEdit mode 3. Now, enable JS Edit mode and change the binding to `{{ showModal('Modal1') }}` (Adds space) 4. disable JSEdit mode and see that "Modal1" is not updated into the field. ### Environment Production ### Version Cloud
non_priority
space or newline in action binding causes action selector not to update properly is there an existing issue for this i have searched the existing issues current behavior if we add a space after the binding curly braces then the action selector value is not updated properly refer to the following video steps to reproduce add action binding in js edit mode ex showmodal see how gets updated once we toggle the jsedit mode now enable js edit mode and change the binding to showmodal adds space disable jsedit mode and see that is not updated into the field environment production version cloud
0
783,901
27,550,582,276
IssuesEvent
2023-03-07 14:42:20
NewGraphEnvironment/fish_passage_skeena_2022_reporting
https://api.github.com/repos/NewGraphEnvironment/fish_passage_skeena_2022_reporting
closed
Populate cost estimate objects and add to memos and results
priority low
I labelled as low priority for now because I can get to this later once memos and other stuff is done first. Just wondering if I'm following a similar workflow to how it was done in the bulkley report? Helpful issue from bulkley repo [here](https://github.com/NewGraphEnvironment/fish_passage_bulkley_2022_reporting/issues/16)
1.0
Populate cost estimate objects and add to memos and results - I labelled as low priority for now because I can get to this later once memos and other stuff is done first. Just wondering if I'm following a similar workflow to how it was done in the bulkley report? Helpful issue from bulkley repo [here](https://github.com/NewGraphEnvironment/fish_passage_bulkley_2022_reporting/issues/16)
priority
populate cost estimate objects and add to memos and results i labelled as low priority for now because i can get to this later once memos and other stuff is done first just wondering if i m following a similar workflow to how it was done in the bulkley report helpful issue from bulkley repo
1
299,921
9,205,971,251
IssuesEvent
2019-03-08 12:17:00
qissue-bot/QGIS
https://api.github.com/repos/qissue-bot/QGIS
closed
toolbars re-arrange at their will
Category: GUI Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
--- Author Name: **Maciej Sieczka -** (Maciej Sieczka -) Original Redmine Issue: 1275, https://issues.qgis.org/issues/1275 Original Assignee: nobody - --- r9249. See the attached screendumps. 1. On ok.png toolbars are organized the way I want them. 2. Then I quit QGIS and start it again. 3. Toolbars are organized differently - bad.png. --- - [ok.png](https://issues.qgis.org/attachments/download/2122/ok.png) (Maciej Sieczka -) - [bad.png](https://issues.qgis.org/attachments/download/2121/bad.png) (Maciej Sieczka -)
1.0
toolbars re-arrange at their will - --- Author Name: **Maciej Sieczka -** (Maciej Sieczka -) Original Redmine Issue: 1275, https://issues.qgis.org/issues/1275 Original Assignee: nobody - --- r9249. See the attached screendumps. 1. On ok.png toolbars are organized the way I want them. 2. Then I quit QGIS and start it again. 3. Toolbars are organized differently - bad.png. --- - [ok.png](https://issues.qgis.org/attachments/download/2122/ok.png) (Maciej Sieczka -) - [bad.png](https://issues.qgis.org/attachments/download/2121/bad.png) (Maciej Sieczka -)
priority
toolbars re arrange at their will author name maciej sieczka maciej sieczka original redmine issue original assignee nobody see the attached screendumps on ok png toolbars are organized the way i want them then i quit qgis and start it again toolbars are organized differently bad png maciej sieczka maciej sieczka
1
591,814
17,862,437,766
IssuesEvent
2021-09-06 04:06:06
ArkEcosystem/exchange-json-rpc
https://api.github.com/repos/ArkEcosystem/exchange-json-rpc
closed
Upgrade `@arkecosystem/crypto` to `3.0.0-next.34`
Priority: Critical
Required to support the new voting transactions.
1.0
Upgrade `@arkecosystem/crypto` to `3.0.0-next.34` - Required to support the new voting transactions.
priority
upgrade arkecosystem crypto to next required to support the new voting transactions
1
803,030
29,115,577,360
IssuesEvent
2023-05-17 00:26:29
returntocorp/semgrep
https://api.github.com/repos/returntocorp/semgrep
closed
Support `dot dotdotdot dot` ellipsis in swift
priority:medium parsing-pattern lang:swift
**Describe the bug** Currently the engine does not support `$VALUE. ... .FOO(...)` or `$VALUE. ...` which makes it fairly hard to write certain rules in swift. **To Reproduce** Visit https://semgrep.dev/s/JK8W click run **Expected behavior** https://semgrep.dev/s/JK8W should pass `$VALUE. ... .$FOO(...)` and `$VALUE. ...` **What is the priority of the bug to you?** - [x] P1: important to fix or quite annoying **Use case** Help writing swift rules
1.0
Support `dot dotdotdot dot` ellipsis in swift - **Describe the bug** Currently the engine does not support `$VALUE. ... .FOO(...)` or `$VALUE. ...` which makes it fairly hard to write certain rules in swift. **To Reproduce** Visit https://semgrep.dev/s/JK8W click run **Expected behavior** https://semgrep.dev/s/JK8W should pass `$VALUE. ... .$FOO(...)` and `$VALUE. ...` **What is the priority of the bug to you?** - [x] P1: important to fix or quite annoying **Use case** Help writing swift rules
priority
support dot dotdotdot dot ellipsis in swift describe the bug currently the engine does not support value foo or value which makes it fairly hard to write certain rules in swift to reproduce visit click run expected behavior should pass value foo and value what is the priority of the bug to you important to fix or quite annoying use case help writing swift rules
1
70,173
13,434,317,551
IssuesEvent
2020-09-07 11:10:38
hackjunction/hackplatform
https://api.github.com/repos/hackjunction/hackplatform
opened
Search all snake_case variables and see if they are possible to convert to camelCase
chore code style
Meta issue to figure out extent of work needed to do this.
1.0
Search all snake_case variables and see if they are possible to convert to camelCase - Meta issue to figure out extent of work needed to do this.
non_priority
search all snake case variables and see if they are possible to convert to camelcase meta issue to figure out extent of work needed to do this
0
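The rename asked about in the record above can be prototyped with a small converter (a sketch only; a real repo-wide migration would also need scope-aware tooling such as an AST-based codemod, and the function name here is invented for illustration):

```python
import re

def snake_to_camel(name):
    """Convert a snake_case identifier to camelCase, preserving any
    leading underscores (e.g. _private_field) and leaving names that
    contain no underscores untouched."""
    prefix = re.match(r"_*", name).group(0)  # keep leading underscores
    parts = name[len(prefix):].split("_")
    return prefix + parts[0] + "".join(p.title() for p in parts[1:] if p)

print(snake_to_camel("snake_case_variable"))  # snakeCaseVariable
print(snake_to_camel("_private_field"))       # _privateField
print(snake_to_camel("alreadyCamel"))         # alreadyCamel
```

Running such a converter over identifiers found by a search would give a first estimate of the extent of the work, which is what the meta issue is trying to figure out.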
755,943
26,448,574,018
IssuesEvent
2023-01-16 09:28:36
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.sonyliv.com - site is not usable
priority-normal browser-focus-geckoview engine-gecko
<!-- @browser: Firefox Mobile 108.0 --> <!-- @ua_header: Mozilla/5.0 (Android 12; Mobile; rv:108.0) Gecko/108.0 Firefox/108.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/116784 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://www.sonyliv.com/shows/shark-tank-india-1700000741/investing-in-the-future-of-india-1000203475?watch=true&utm_source=Search&utm_medium=paid&utm_campaign=IN_MSixSonyLIV_168429_Branding_Hindi-Show_SharkTank_Pre-Launch_Search_Traffic_India_Jan_2023_NA&utm_content=SharkTank_Text_Jan_11&gclid=EAIaIQobChMIko_UwqfG_AIV_pJmAh3rvAVfEAAYASAAEgLQXfD_BwE **Browser / Version**: Firefox Mobile 108.0 **Operating System**: Android 12 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: The page didn not even load... <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/f0444bb6-d76c-43c0-8795-de25659487ee.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230104165113</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2023/1/1daae132-981b-4f3f-8d3d-1f503faf4ed1) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.sonyliv.com - site is not usable - <!-- @browser: Firefox Mobile 108.0 --> <!-- @ua_header: Mozilla/5.0 (Android 12; Mobile; rv:108.0) Gecko/108.0 Firefox/108.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/116784 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://www.sonyliv.com/shows/shark-tank-india-1700000741/investing-in-the-future-of-india-1000203475?watch=true&utm_source=Search&utm_medium=paid&utm_campaign=IN_MSixSonyLIV_168429_Branding_Hindi-Show_SharkTank_Pre-Launch_Search_Traffic_India_Jan_2023_NA&utm_content=SharkTank_Text_Jan_11&gclid=EAIaIQobChMIko_UwqfG_AIV_pJmAh3rvAVfEAAYASAAEgLQXfD_BwE **Browser / Version**: Firefox Mobile 108.0 **Operating System**: Android 12 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: The page didn not even load... <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/f0444bb6-d76c-43c0-8795-de25659487ee.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230104165113</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2023/1/1daae132-981b-4f3f-8d3d-1f503faf4ed1) _From [webcompat.com](https://webcompat.com/) with ❤️_
priority
site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce the page didn not even load view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
1
100,682
8,752,748,083
IssuesEvent
2018-12-14 04:59:05
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
closed
Testing 14 : ApiV1DashboardCountTimeBetweenGetPathParamTodateMongodbNoSqlInjectionTimebound
Testing 14
Project : Testing 14 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZjcxM2RhOTQtNGZlNy00NzY0LTgxMjktZjBhMDc2YzNkODdk; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 14 Dec 2018 04:53:31 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/dashboard/count-time-between?toDate= Request : Response : { "timestamp" : "2018-12-14T04:53:32.734+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/dashboard/count-time-between" } Logs : Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [1498 < 7000 OR 1498 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
1.0
Testing 14 : ApiV1DashboardCountTimeBetweenGetPathParamTodateMongodbNoSqlInjectionTimebound - Project : Testing 14 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZjcxM2RhOTQtNGZlNy00NzY0LTgxMjktZjBhMDc2YzNkODdk; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 14 Dec 2018 04:53:31 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/dashboard/count-time-between?toDate= Request : Response : { "timestamp" : "2018-12-14T04:53:32.734+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/dashboard/count-time-between" } Logs : Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [1498 < 7000 OR 1498 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
non_priority
testing project testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api dashboard count time between logs assertion resolved to result assertion resolved to result fx bot
0
133,846
18,959,496,799
IssuesEvent
2021-11-19 01:41:05
airshipit/airshipctl
https://api.github.com/repos/airshipit/airshipctl
closed
Airship specific implementation of KRM Function Specification
enhancement priority/low design needed
**Problem description (if applicable)** At the moment Airship uses two types of generic containers: krm (kyaml/runfn) and airship (airship specific docker wrapper). kyaml/runfn has some limitations that do not allow to use it everywhere. For example, sometimes we want a container to be run from a privileged user. Also kyaml/runfn does not provide some of the functionality we potentially would like to have (e.g. running timeout). **Proposed change** Instead of using two container runtimes, let's use just one. The suggestion is 1) Get rid of Airship specific docker client 2) Copy the code from kyaml/runfn and modify it to cover Airship specific needs The goal is to have one container runtime that would implement KRM Function Specification https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md#krm-functions-specification and have all necessary functionality needed for Airship. The same approach is used in https://github.com/GoogleContainerTools/kpt. They copied kyaml/runfn and modified the code according to their needs.
1.0
Airship specific implementation of KRM Function Specification - **Problem description (if applicable)** At the moment Airship uses two types of generic containers: krm (kyaml/runfn) and airship (airship specific docker wrapper). kyaml/runfn has some limitations that do not allow to use it everywhere. For example, sometimes we want a container to be run from a privileged user. Also kyaml/runfn does not provide some of the functionality we potentially would like to have (e.g. running timeout). **Proposed change** Instead of using two container runtimes, let's use just one. The suggestion is 1) Get rid of Airship specific docker client 2) Copy the code from kyaml/runfn and modify it to cover Airship specific needs The goal is to have one container runtime that would implement KRM Function Specification https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md#krm-functions-specification and have all necessary functionality needed for Airship. The same approach is used in https://github.com/GoogleContainerTools/kpt. They copied kyaml/runfn and modified the code according to their needs.
non_priority
airship specific implementation of krm function specification problem description if applicable at the moment airship uses two types of generic containers krm kyaml runfn and airship airship specific docker wrapper kyaml runfn has some limitations that do not allow to use it everywhere for example sometimes we want a container to be run from a privileged user also kyaml runfn does not provide some of the functionality we potentially would like to have e g running timeout proposed change instead of using two container runtimes let s use just one the suggestion is get rid of airship specific docker client copy the code from kyaml runfn and modify it to cover airship specific needs the goal is to have one container runtime that would implement krm function specification and have all necessary functionality needed for airship the same approach is used in they copied kyaml runfn and modified the code according to their needs
0
268,769
8,411,465,752
IssuesEvent
2018-10-12 13:58:48
CS2113-AY1819S1-T16-3/main
https://api.github.com/repos/CS2113-AY1819S1-T16-3/main
closed
As a user, I want to have administrator priviledges
priority.high type.story
so that I can use superuser commands and manage employees.
1.0
As a user, I want to have administrator priviledges - so that I can use superuser commands and manage employees.
priority
as a user i want to have administrator priviledges so that i can use superuser commands and manage employees
1
132,882
10,772,264,803
IssuesEvent
2019-11-02 13:45:55
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
[Failing Test] gce-cos-master-scalability-100 (ci-kubernetes-e2e-gci-gce-scalability)
kind/failing-test priority/critical-urgent sig/scheduling
**Which jobs are failing**: `gce-cos-master-scalability-100 (ci-kubernetes-e2e-gci-gce-scalability)` **Since when has it been failing**: `1st Nov 01:32 PDT` **Testgrid link**: https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-scalability-100 **Reason for failure**: ```console W1101 14:14:36.135] Run: ('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=e2e-big', '--gcp-network=e2e-big', '--check-leaked-resources', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-nodes=100', '--gcp-project-type=scalability-project', '--gcp-zone=us-east1-b', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gci-gce-scalability-1190270490954960897', '--test-cmd-args=--nodes=100', '--test-cmd-args=--prometheus-scrape-node-exporter', '--test-cmd-args=--provider=gce', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/chaosmonkey/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_pvs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', 
'--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/use_simple_latency_query.yaml', '--test-cmd-args=--testoverrides=./testing/load/gce/throughput_override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=120m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/1190270490954960897/artifacts') W1101 14:14:36.163] 2019/11/01 14:14:36 main.go:332: Limiting testing to 2h0m0s W1101 14:14:36.163] 2019/11/01 14:14:36 util.go:164: Please use kubetest --gcp-node-size=n1-standard-1 (instead of deprecated NODE_SIZE=n1-standard-1) W1101 14:14:36.163] 2019/11/01 14:14:36 main.go:725: --gcp-project is missing, trying to fetch a project from boskos. W1101 14:14:36.164] (for local runs please set --gcp-project to your dev project) W1101 14:14:36.164] 2019/11/01 14:14:36 main.go:737: provider gce, will acquire project type scalability-project from boskos W1101 14:19:47.663] 2019/11/01 14:19:47 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml. W1101 14:19:47.665] 2019/11/01 14:19:47 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}" W1101 14:19:47.976] 2019/11/01 14:19:47 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. 
kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 312.587809ms W1101 14:19:47.976] 2019/11/01 14:19:47 main.go:319: Something went wrong: failed to prepare test environment: --provider=gce boskos failed to acquire project: resources not found W1101 14:19:47.980] Traceback (most recent call last): W1101 14:19:47.981] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module> W1101 14:19:47.981] main(parse_args()) W1101 14:19:47.982] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main W1101 14:19:47.982] mode.start(runner_args) W1101 14:19:47.982] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start W1101 14:19:47.982] check_env(env, self.command, *args) W1101 14:19:47.982] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env W1101 14:19:47.982] subprocess.check_call(cmd, env=env) W1101 14:19:47.983] File "/usr/lib/python2.7/subprocess.py", line 190, in check_call W1101 14:19:47.983] raise CalledProcessError(retcode, cmd) W1101 14:19:47.985] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=e2e-big', '--gcp-network=e2e-big', '--check-leaked-resources', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-nodes=100', '--gcp-project-type=scalability-project', '--gcp-zone=us-east1-b', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gci-gce-scalability-1190270490954960897', '--test-cmd-args=--nodes=100', '--test-cmd-args=--prometheus-scrape-node-exporter', '--test-cmd-args=--provider=gce', '--test-cmd-args=--report-dir=/workspace/_artifacts', 
'--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/chaosmonkey/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_pvs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/use_simple_latency_query.yaml', '--test-cmd-args=--testoverrides=./testing/load/gce/throughput_override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=120m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/1190270490954960897/artifacts')' returned non-zero exit status 1 E1101 14:19:47.993] Command failed ``` **Anything else we need to know**: /cc @alenkacz @hasheddan @Verolop /milestone v1.17 /priority critical-urgent /sig scheduling
1.0
[Failing Test] gce-cos-master-scalability-100 (ci-kubernetes-e2e-gci-gce-scalability) - **Which jobs are failing**: `gce-cos-master-scalability-100 (ci-kubernetes-e2e-gci-gce-scalability)` **Since when has it been failing**: `1st Nov 01:32 PDT` **Testgrid link**: https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-scalability-100 **Reason for failure**: ```console W1101 14:14:36.135] Run: ('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=e2e-big', '--gcp-network=e2e-big', '--check-leaked-resources', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-nodes=100', '--gcp-project-type=scalability-project', '--gcp-zone=us-east1-b', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gci-gce-scalability-1190270490954960897', '--test-cmd-args=--nodes=100', '--test-cmd-args=--prometheus-scrape-node-exporter', '--test-cmd-args=--provider=gce', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/chaosmonkey/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_pvs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', 
'--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/use_simple_latency_query.yaml', '--test-cmd-args=--testoverrides=./testing/load/gce/throughput_override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=120m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/1190270490954960897/artifacts') W1101 14:14:36.163] 2019/11/01 14:14:36 main.go:332: Limiting testing to 2h0m0s W1101 14:14:36.163] 2019/11/01 14:14:36 util.go:164: Please use kubetest --gcp-node-size=n1-standard-1 (instead of deprecated NODE_SIZE=n1-standard-1) W1101 14:14:36.163] 2019/11/01 14:14:36 main.go:725: --gcp-project is missing, trying to fetch a project from boskos. W1101 14:14:36.164] (for local runs please set --gcp-project to your dev project) W1101 14:14:36.164] 2019/11/01 14:14:36 main.go:737: provider gce, will acquire project type scalability-project from boskos W1101 14:19:47.663] 2019/11/01 14:19:47 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml. W1101 14:19:47.665] 2019/11/01 14:19:47 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}" W1101 14:19:47.976] 2019/11/01 14:19:47 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. 
kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 312.587809ms W1101 14:19:47.976] 2019/11/01 14:19:47 main.go:319: Something went wrong: failed to prepare test environment: --provider=gce boskos failed to acquire project: resources not found W1101 14:19:47.980] Traceback (most recent call last): W1101 14:19:47.981] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module> W1101 14:19:47.981] main(parse_args()) W1101 14:19:47.982] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main W1101 14:19:47.982] mode.start(runner_args) W1101 14:19:47.982] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start W1101 14:19:47.982] check_env(env, self.command, *args) W1101 14:19:47.982] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env W1101 14:19:47.982] subprocess.check_call(cmd, env=env) W1101 14:19:47.983] File "/usr/lib/python2.7/subprocess.py", line 190, in check_call W1101 14:19:47.983] raise CalledProcessError(retcode, cmd) W1101 14:19:47.985] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=e2e-big', '--gcp-network=e2e-big', '--check-leaked-resources', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-nodes=100', '--gcp-project-type=scalability-project', '--gcp-zone=us-east1-b', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gci-gce-scalability-1190270490954960897', '--test-cmd-args=--nodes=100', '--test-cmd-args=--prometheus-scrape-node-exporter', '--test-cmd-args=--provider=gce', '--test-cmd-args=--report-dir=/workspace/_artifacts', 
'--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/chaosmonkey/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_pvs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/use_simple_latency_query.yaml', '--test-cmd-args=--testoverrides=./testing/load/gce/throughput_override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=120m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/1190270490954960897/artifacts')' returned non-zero exit status 1 E1101 14:19:47.993] Command failed ``` **Anything else we need to know**: /cc @alenkacz @hasheddan @Verolop /milestone v1.17 /priority critical-urgent /sig scheduling
non_priority
gce cos master scalability ci kubernetes gci gce scalability which jobs are failing gce cos master scalability ci kubernetes gci gce scalability since when has it been failing nov pdt testgrid link reason for failure console run kubetest dump workspace artifacts gcp service account etc service account service account json up down provider gce cluster big gcp network big check leaked resources extract ci latest gcp node image gci gcp nodes gcp project type scalability project gcp zone us b test cmd go src io perf tests run sh test cmd args cluster test cmd args experimental gcp snapshot prometheus disk true test cmd args experimental prometheus disk snapshot name ci kubernetes gci gce scalability test cmd args nodes test cmd args prometheus scrape node exporter test cmd args provider gce test cmd args report dir workspace artifacts test cmd args testconfig testing density config yaml test cmd args testconfig testing load config yaml test cmd args testoverrides testing chaosmonkey override yaml test cmd args testoverrides testing load experimental overrides enable configmaps yaml test cmd args testoverrides testing load experimental overrides enable daemonsets yaml test cmd args testoverrides testing load experimental overrides enable jobs yaml test cmd args testoverrides testing load experimental overrides enable pvs yaml test cmd args testoverrides testing load experimental overrides enable secrets yaml test cmd args testoverrides testing load experimental overrides enable statefulsets yaml test cmd args testoverrides testing experiments enable prometheus api responsiveness yaml test cmd args testoverrides testing experiments enable restart count check yaml test cmd args testoverrides testing experiments use simple latency query yaml test cmd args testoverrides testing load gce throughput override yaml test cmd name timeout logexporter gcs path gs kubernetes jenkins logs ci kubernetes gci gce scalability artifacts main go limiting testing to util go please use 
kubetest gcp node size standard instead of deprecated node size standard main go gcp project is missing trying to fetch a project from boskos for local runs please set gcp project to your dev project main go provider gce will acquire project type scalability project from boskos process go saved xml output to workspace artifacts junit runner xml process go running bash c hack lib version sh kube root kube version get version vars echo kube git version process go step bash c hack lib version sh kube root kube version get version vars echo kube git version finished in main go something went wrong failed to prepare test environment provider gce boskos failed to acquire project resources not found traceback most recent call last file workspace test infra jenkins scenarios kubernetes py line in main parse args file workspace test infra jenkins scenarios kubernetes py line in main mode start runner args file workspace test infra jenkins scenarios kubernetes py line in start check env env self command args file workspace test infra jenkins scenarios kubernetes py line in check env subprocess check call cmd env env file usr lib subprocess py line in check call raise calledprocesserror retcode cmd subprocess calledprocesserror command kubetest dump workspace artifacts gcp service account etc service account service account json up down provider gce cluster big gcp network big check leaked resources extract ci latest gcp node image gci gcp nodes gcp project type scalability project gcp zone us b test cmd go src io perf tests run sh test cmd args cluster test cmd args experimental gcp snapshot prometheus disk true test cmd args experimental prometheus disk snapshot name ci kubernetes gci gce scalability test cmd args nodes test cmd args prometheus scrape node exporter test cmd args provider gce test cmd args report dir workspace artifacts test cmd args testconfig testing density config yaml test cmd args testconfig testing load config yaml test cmd args testoverrides testing 
chaosmonkey override yaml test cmd args testoverrides testing load experimental overrides enable configmaps yaml test cmd args testoverrides testing load experimental overrides enable daemonsets yaml test cmd args testoverrides testing load experimental overrides enable jobs yaml test cmd args testoverrides testing load experimental overrides enable pvs yaml test cmd args testoverrides testing load experimental overrides enable secrets yaml test cmd args testoverrides testing load experimental overrides enable statefulsets yaml test cmd args testoverrides testing experiments enable prometheus api responsiveness yaml test cmd args testoverrides testing experiments enable restart count check yaml test cmd args testoverrides testing experiments use simple latency query yaml test cmd args testoverrides testing load gce throughput override yaml test cmd name timeout logexporter gcs path gs kubernetes jenkins logs ci kubernetes gci gce scalability artifacts returned non zero exit status command failed anything else we need to know cc alenkacz hasheddan verolop milestone priority critical urgent sig scheduling
0
797,107
28,138,039,092
IssuesEvent
2023-04-01 16:09:12
OWASP/threat-dragon
https://api.github.com/repos/OWASP/threat-dragon
closed
BrowserStack End to End Tests failing
bug version-2.0 priority
**Describe the bug** The workflow action BrowserStack End to End Tests are failing, along with all other pipelines, with: ``` Run npm i -g pnpm ERR_PNPM_FROZEN_LOCKFILE_WITH_OUTDATED_LOCKFILE Cannot perform a frozen installation because the version of the lockfile is incompatible with this version of pnpm ``` **Expected behaviour** this action should not fail on `pnpm install` **Environment** - Version: 2.0.1 - Platform: pipeline - OS: NA - Browser: NA **To Reproduce** https://github.com/OWASP/threat-dragon/actions/runs/4552007331 **Any additional context, screenshots, etc**
1.0
BrowserStack End to End Tests failing - **Describe the bug** The workflow action BrowserStack End to End Tests are failing, along with all other pipelines, with: ``` Run npm i -g pnpm ERR_PNPM_FROZEN_LOCKFILE_WITH_OUTDATED_LOCKFILE Cannot perform a frozen installation because the version of the lockfile is incompatible with this version of pnpm ``` **Expected behaviour** this action should not fail on `pnpm install` **Environment** - Version: 2.0.1 - Platform: pipeline - OS: NA - Browser: NA **To Reproduce** https://github.com/OWASP/threat-dragon/actions/runs/4552007331 **Any additional context, screenshots, etc**
priority
browserstack end to end tests failing describe the bug the workflow action browserstack end to end tests are failing along with all other pipelines with run npm i g pnpm err pnpm frozen lockfile with outdated lockfile cannot perform a frozen installation because the version of the lockfile is incompatible with this version of pnpm expected behaviour this action should not fail on pnpm install environment version platform pipeline os na browser na to reproduce any additional context screenshots etc
1
53,016
6,670,390,703
IssuesEvent
2017-10-03 23:23:02
phetsims/ohms-law
https://api.github.com/repos/phetsims/ohms-law
closed
Describing size of the current arrows
design:a11y
From https://github.com/phetsims/ohms-law/issues/80, @emily-phet said > Current arrows relative description doesn't seem to update (stays at "tiny arrows represent..."). @jessegreenberg said: > I think that the problem is just that the current arrows grow like 1 / R. The max height of the arrows is 1243 view coordinates. For the vast majority of voltage and resistance values, the arrows are less than 200 view coordinates. The arrows grow 80% in the last 40 ohms. @emily-phet @terracoda how would you like to handle this? If we describe the size of the arrows relative to their max/min heights, the current behavior is correct. If we want to indicate ```I = V / R``` and describe the arrow sizes relative to their min/max allowable heights, this is the correct behavior, but the current arrows will almost always be "Very tiny".
1.0
Describing size of the current arrows - From https://github.com/phetsims/ohms-law/issues/80, @emily-phet said > Current arrows relative description doesn't seem to update (stays at "tiny arrows represent..."). @jessegreenberg said: > I think that the problem is just that the current arrows grow like 1 / R. The max height of the arrows is 1243 view coordinates. For the vast majority of voltage and resistance values, the arrows are less than 200 view coordinates. The arrows grow 80% in the last 40 ohms. @emily-phet @terracoda how would you like to handle this? If we describe the size of the arrows relative to their max/min heights, the current behavior is correct. If we want to indicate ```I = V / R``` and describe the arrow sizes relative to their min/max allowable heights, this is the correct behavior, but the current arrows will almost always be "Very tiny".
non_priority
describing size of the current arrows from emily phet said current arrows relative description doesn t seem to update stays at tiny arrows represent jessegreenberg said i think that the problem is just that the current arrows grow like r the max height of the arrows is view coordinates for the vast majority of voltage and resistance values the arrows are less than view coordinates the arrows grow in the last ohms emily phet terracoda how would you like to handle this if we describe the size of the arrows relative to their max min heights the current behavior is correct if we want to indicate i v r and describe the arrow sizes relative to their min max allowable heights this is the correct behavior but the current arrows will almost always be very tiny
0
433,946
30,387,583,787
IssuesEvent
2023-07-13 03:00:36
Jeong29Hyeon/jpa-study
https://api.github.com/repos/Jeong29Hyeon/jpa-study
closed
벌크 연산
documentation
# 벌크연산 - 재고가 10개 미만인 모든 상품의 가격을 10% 상승시키려면 ? - JPA 변경 감지 기능으로 실행하려면 너무 많은 SQL을 실행해야한다. - 재고가 10개 미만인 상품 리스트를 조회한다. - 상품 엔티티의 가격을 10% 증가시킨다. - 트랜잭션 커밋 시점에 변경감지가 동작한다. - 변경된 데이터가 100건이라면 100번의 UPDATE SQL 실행됨. 위의 문제점을 해결하기 위한 메소드가 존재한다! `executeUpdate()` !! 쿼리 한 번으로 여러 테이블 데이터를 변경한다. executeUpdate() 의 결과는 영향받은 엔티티의 수임 UPDATE, DELETE 지원 INSERT(insert into ... select , 하이버네이트 지원) ```java String qlString = "update Product p " + "set p.price = p.price * 1.1 " + "where p.stockAmount < :stockAmount"; int resultCount = em.createQuery(qlString) .setParameter("stockAmount", 10) .executeUpdate(); ``` ## 벌크 연산 주의 벌크 연산은 영속성 컨텍스트를 무시하고 데이터베이스에 직접 쿼리를 날린다. 그렇기 때문에 작업 안에서 영속성 컨텍스트에 올라가있는 엔티티를 DB에 수정해버린다면 어플리케이션에 있는 엔티티의 값과 DB의 값이 달라 문제가 발생한다. 해결책 - 벌크 연산을 먼저 실행 - 벌크 연산 수행 후 영속성 컨텍스트 초기화 `em.clear()`
1.0
벌크 연산 - # 벌크연산 - 재고가 10개 미만인 모든 상품의 가격을 10% 상승시키려면 ? - JPA 변경 감지 기능으로 실행하려면 너무 많은 SQL을 실행해야한다. - 재고가 10개 미만인 상품 리스트를 조회한다. - 상품 엔티티의 가격을 10% 증가시킨다. - 트랜잭션 커밋 시점에 변경감지가 동작한다. - 변경된 데이터가 100건이라면 100번의 UPDATE SQL 실행됨. 위의 문제점을 해결하기 위한 메소드가 존재한다! `executeUpdate()` !! 쿼리 한 번으로 여러 테이블 데이터를 변경한다. executeUpdate() 의 결과는 영향받은 엔티티의 수임 UPDATE, DELETE 지원 INSERT(insert into ... select , 하이버네이트 지원) ```java String qlString = "update Product p " + "set p.price = p.price * 1.1 " + "where p.stockAmount < :stockAmount"; int resultCount = em.createQuery(qlString) .setParameter("stockAmount", 10) .executeUpdate(); ``` ## 벌크 연산 주의 벌크 연산은 영속성 컨텍스트를 무시하고 데이터베이스에 직접 쿼리를 날린다. 그렇기 때문에 작업 안에서 영속성 컨텍스트에 올라가있는 엔티티를 DB에 수정해버린다면 어플리케이션에 있는 엔티티의 값과 DB의 값이 달라 문제가 발생한다. 해결책 - 벌크 연산을 먼저 실행 - 벌크 연산 수행 후 영속성 컨텍스트 초기화 `em.clear()`
non_priority
벌크 연산 벌크연산 재고가 미만인 모든 상품의 가격을 상승시키려면 jpa 변경 감지 기능으로 실행하려면 너무 많은 sql을 실행해야한다 재고가 미만인 상품 리스트를 조회한다 상품 엔티티의 가격을 증가시킨다 트랜잭션 커밋 시점에 변경감지가 동작한다 변경된 데이터가 update sql 실행됨 위의 문제점을 해결하기 위한 메소드가 존재한다 executeupdate 쿼리 한 번으로 여러 테이블 데이터를 변경한다 executeupdate 의 결과는 영향받은 엔티티의 수임 update delete 지원 insert insert into select 하이버네이트 지원 java string qlstring update product p set p price p price where p stockamount stockamount int resultcount em createquery qlstring setparameter stockamount executeupdate 벌크 연산 주의 벌크 연산은 영속성 컨텍스트를 무시하고 데이터베이스에 직접 쿼리를 날린다 그렇기 때문에 작업 안에서 영속성 컨텍스트에 올라가있는 엔티티를 db에 수정해버린다면 어플리케이션에 있는 엔티티의 값과 db의 값이 달라 문제가 발생한다 해결책 벌크 연산을 먼저 실행 벌크 연산 수행 후 영속성 컨텍스트 초기화 em clear
0
758,291
26,549,024,526
IssuesEvent
2023-01-20 05:05:44
myalley-project/myalley-be
https://api.github.com/repos/myalley-project/myalley-be
closed
메이트 모집글 작성 기능
back-end 1st Priority
**할 일** - [x] 메이트 모집 페키지 구조 틀 만들기 - [x] 도메인 변수명, 변수 목록 확정하기 - [x] 메이트 모집글 삭제 repository, domain 도 만들기 - [x] 글 작성용 dto 추가하기 - [x] 전시글과 연동해서 등록시키기 - [x] 토큰 받아서 회원 정보 연결해서 등록시키기 - [x] service & controller 구현하기 - [x] 연관관계 설정 추가하기
1.0
메이트 모집글 작성 기능 - **할 일** - [x] 메이트 모집 페키지 구조 틀 만들기 - [x] 도메인 변수명, 변수 목록 확정하기 - [x] 메이트 모집글 삭제 repository, domain 도 만들기 - [x] 글 작성용 dto 추가하기 - [x] 전시글과 연동해서 등록시키기 - [x] 토큰 받아서 회원 정보 연결해서 등록시키기 - [x] service & controller 구현하기 - [x] 연관관계 설정 추가하기
priority
메이트 모집글 작성 기능 할 일 메이트 모집 페키지 구조 틀 만들기 도메인 변수명 변수 목록 확정하기 메이트 모집글 삭제 repository domain 도 만들기 글 작성용 dto 추가하기 전시글과 연동해서 등록시키기 토큰 받아서 회원 정보 연결해서 등록시키기 service controller 구현하기 연관관계 설정 추가하기
1
6,001
8,808,922,197
IssuesEvent
2018-12-27 16:54:37
linnovate/root
https://api.github.com/repos/linnovate/root
closed
office documents from tasks bug
2.0.6 Fixed Process bug critical
after creating a task, and then going to the documents tab, clicking on manage documents create new item doesnt update the list, and after editing the document it isnt saved after refreshing the page ![image](https://user-images.githubusercontent.com/38312178/50145494-65680800-02ba-11e9-96ac-36338e64ced9.png)
1.0
office documents from tasks bug - after creating a task, and then going to the documents tab, clicking on manage documents create new item doesnt update the list, and after editing the document it isnt saved after refreshing the page ![image](https://user-images.githubusercontent.com/38312178/50145494-65680800-02ba-11e9-96ac-36338e64ced9.png)
non_priority
office documents from tasks bug after creating a task and then going to the documents tab clicking on manage documents create new item doesnt update the list and after editing the document it isnt saved after refreshing the page
0
61,604
7,480,629,550
IssuesEvent
2018-04-04 18:00:48
oracle/weblogic-deploy-tooling
https://api.github.com/repos/oracle/weblogic-deploy-tooling
opened
Resource messages from external projects need to be added to wlsdeploy_rb.properties file
design workaround
It looks like the way that things are currently designed, the new weblogic-deploy-testing project has to use the same wlsdeploy_rb.properties file as the weblogic-deploy-tooling project, in order to use Java or python classes in the weblogic-deploy-tooling project. This is because currently, it looks like PlatformLogger python wrapper only knows about a single bundle (xxx_rb.properties) file. It loads that when it's initialized, and will only resolve message IDs (e.g. WLSDPLY-09804) contained in that single bundle file. This is not a bug. It's more of a design or implementation choice that was made, which can be worked around...hence the "design workaround" label.
1.0
Resource messages from external projects need to be added to wlsdeploy_rb.properties file - It looks like the way that things are currently designed, the new weblogic-deploy-testing project has to use the same wlsdeploy_rb.properties file as the weblogic-deploy-tooling project, in order to use Java or python classes in the weblogic-deploy-tooling project. This is because currently, it looks like PlatformLogger python wrapper only knows about a single bundle (xxx_rb.properties) file. It loads that when it's initialized, and will only resolve message IDs (e.g. WLSDPLY-09804) contained in that single bundle file. This is not a bug. It's more of a design or implementation choice that was made, which can be worked around...hence the "design workaround" label.
non_priority
resource messages from external projects need to be added to wlsdeploy rb properties file it looks like the way that things are currently designed the new weblogic deploy testing project has to use the same wlsdeploy rb properties file as the weblogic deploy tooling project in order to use java or python classes in the weblogic deploy tooling project this is because currently it looks like platformlogger python wrapper only knows about a single bundle xxx rb properties file it loads that when it s initialized and will only resolve message ids e g wlsdply contained in that single bundle file this is not a bug it s more of a design or implementation choice that was made which can be worked around hence the design workaround label
0
42,993
23,067,955,436
IssuesEvent
2022-07-25 15:22:21
flutter/flutter
https://api.github.com/repos/flutter/flutter
opened
[iOS] The drag behavior is laggy and seems to not have 120hz adjustment
created via performance template
As title of issue descripes, this issue will happen when I use an iPhone with **Promotion** (eg:iPhone13Pro) When I turn on the power save mode(back to 60HZ) The jank and laggy will not happen. But when I back to 120HZ, it happens again. **The scroll animation is fine.But drag is not.** (Tip:Drag, instead of swipe on the screen) ## Details **Target Platform:** iOS **Target OS version/browser:** 15.5 **Devices:** iPhone13 Pro
True
[iOS] The drag behavior is laggy and seems to not have 120hz adjustment - As title of issue descripes, this issue will happen when I use an iPhone with **Promotion** (eg:iPhone13Pro) When I turn on the power save mode(back to 60HZ) The jank and laggy will not happen. But when I back to 120HZ, it happens again. **The scroll animation is fine.But drag is not.** (Tip:Drag, instead of swipe on the screen) ## Details **Target Platform:** iOS **Target OS version/browser:** 15.5 **Devices:** iPhone13 Pro
non_priority
the drag behavior is laggy and seems to not have adjustment as title of issue descripes this issue will happen when i use an iphone with promotion eg when i turn on the power save mode back to the jank and laggy will not happen but when i back to it happens again the scroll animation is fine but drag is not tip drag instead of swipe on the screen details target platform ios target os version browser devices pro
0
728,360
25,076,163,323
IssuesEvent
2022-11-07 15:38:57
magento/magento2
https://api.github.com/repos/magento/magento2
closed
Duplicate orders with 2 parallel GraphQL requests
Issue: Confirmed Reproduced on 2.4.x Priority: P1 Severity: S1 Progress: done Evaluated
#### Precondition As a Guest user follow steps by GraphQL checkout tutorial to step 10 Place order https://devdocs.magento.com/guides/v2.4/graphql/tutorials/checkout/index.html #### Steps Execute Place the order mutation in parallel in 2 browsers (I did it in Postman) . ``` mutation { placeOrder(input: {cart_id: "{ CART_ID }"}) { order { order_number } } } ``` #### (x) Actual result 2 orders created #### (/) Expected result 1 order created
1.0
Duplicate orders with 2 parallel GraphQL requests - #### Precondition As a Guest user follow steps by GraphQL checkout tutorial to step 10 Place order https://devdocs.magento.com/guides/v2.4/graphql/tutorials/checkout/index.html #### Steps Execute Place the order mutation in parallel in 2 browsers (I did it in Postman) . ``` mutation { placeOrder(input: {cart_id: "{ CART_ID }"}) { order { order_number } } } ``` #### (x) Actual result 2 orders created #### (/) Expected result 1 order created
priority
duplicate orders with parallel graphql requests precondition as a guest user follow steps by graphql checkout tutorial to step place order steps execute place the order mutation in parallel in browsers i did it in postman mutation placeorder input cart id cart id order order number x actual result orders created expected result order created
1
46,217
13,152,203,721
IssuesEvent
2020-08-09 20:53:20
Jacksole/Learning-JavaScript
https://api.github.com/repos/Jacksole/Learning-JavaScript
closed
CVE-2019-2391 (Medium) detected in bson-1.1.3.tgz
security vulnerability
## CVE-2019-2391 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bson-1.1.3.tgz</b></p></summary> <p>A bson parser for node.js and the browser</p> <p>Library home page: <a href="https://registry.npmjs.org/bson/-/bson-1.1.3.tgz">https://registry.npmjs.org/bson/-/bson-1.1.3.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/Learning-JavaScript/meanauthapp/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/Learning-JavaScript/meanauthapp/node_modules/bson/package.json</p> <p> Dependency Hierarchy: - mongoose-5.7.13.tgz (Root Library) - :x: **bson-1.1.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Jacksole/Learning-JavaScript/commit/c9ca295725f33eb0d8e03f930a5da88ebb01cedf">c9ca295725f33eb0d8e03f930a5da88ebb01cedf</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Incorrect parsing of certain JSON input may result in js-bson not correctly serializing BSON. This may cause unexpected application behaviour including data disclosure. <p>Publish Date: 2020-03-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-2391>CVE-2019-2391</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/mongodb/js-bson/pull/336/commits">https://github.com/mongodb/js-bson/pull/336/commits</a></p> <p>Release Date: 2020-03-31</p> <p>Fix Resolution: v1.1.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-2391 (Medium) detected in bson-1.1.3.tgz - ## CVE-2019-2391 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bson-1.1.3.tgz</b></p></summary> <p>A bson parser for node.js and the browser</p> <p>Library home page: <a href="https://registry.npmjs.org/bson/-/bson-1.1.3.tgz">https://registry.npmjs.org/bson/-/bson-1.1.3.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/Learning-JavaScript/meanauthapp/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/Learning-JavaScript/meanauthapp/node_modules/bson/package.json</p> <p> Dependency Hierarchy: - mongoose-5.7.13.tgz (Root Library) - :x: **bson-1.1.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Jacksole/Learning-JavaScript/commit/c9ca295725f33eb0d8e03f930a5da88ebb01cedf">c9ca295725f33eb0d8e03f930a5da88ebb01cedf</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Incorrect parsing of certain JSON input may result in js-bson not correctly serializing BSON. This may cause unexpected application behaviour including data disclosure. 
<p>Publish Date: 2020-03-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-2391>CVE-2019-2391</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/mongodb/js-bson/pull/336/commits">https://github.com/mongodb/js-bson/pull/336/commits</a></p> <p>Release Date: 2020-03-31</p> <p>Fix Resolution: v1.1.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve medium detected in bson tgz cve medium severity vulnerability vulnerable library bson tgz a bson parser for node js and the browser library home page a href path to dependency file tmp ws scm learning javascript meanauthapp package json path to vulnerable library tmp ws scm learning javascript meanauthapp node modules bson package json dependency hierarchy mongoose tgz root library x bson tgz vulnerable library found in head commit a href vulnerability details incorrect parsing of certain json input may result in js bson not correctly serializing bson this may cause unexpected application behaviour including data disclosure publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
324,898
27,828,203,239
IssuesEvent
2023-03-20 00:29:26
FasterXML/jackson-dataformat-xml
https://api.github.com/repos/FasterXML/jackson-dataformat-xml
closed
Enabled FromXmlParser.Feature.EMPTY_ELEMENT_AS_NULL breaks up old functionality since 2.12.0
test-needed
I have a class ``` @Data public class FooView { private String str; } ``` and enabled feature **FromXmlParser.Feature.EMPTY_ELEMENT_AS_NULL** for XmlMapper. Then i trying to call ` FooView fooView = xmlMapper.readValue(content, FooView.class) ` and set content as: ` <Content/> ` As result i've had a NP: ` java.lang.NullPointerException: Cannot invoke method getStr() on null object ` Before 2.12.0 returned result object (fooView) was not null object. **Version information** 2.12.1 and greater. 2.12.0 gets another bug too.
1.0
Enabled FromXmlParser.Feature.EMPTY_ELEMENT_AS_NULL breaks up old functionality since 2.12.0 - I have a class ``` @Data public class FooView { private String str; } ``` and enabled feature **FromXmlParser.Feature.EMPTY_ELEMENT_AS_NULL** for XmlMapper. Then i trying to call ` FooView fooView = xmlMapper.readValue(content, FooView.class) ` and set content as: ` <Content/> ` As result i've had a NP: ` java.lang.NullPointerException: Cannot invoke method getStr() on null object ` Before 2.12.0 returned result object (fooView) was not null object. **Version information** 2.12.1 and greater. 2.12.0 gets another bug too.
non_priority
enabled fromxmlparser feature empty element as null breaks up old functionality since i have a class data public class fooview private string str and enabled feature fromxmlparser feature empty element as null for xmlmapper then i trying to call fooview fooview xmlmapper readvalue content fooview class and set content as as result i ve had a np java lang nullpointerexception cannot invoke method getstr on null object before returned result object fooview was not null object version information and greater gets another bug too
0
6,281
5,346,098,181
IssuesEvent
2017-02-17 18:45:14
jruby/jruby
https://api.github.com/repos/jruby/jruby
closed
Improve Bignum string parsing
JRuby 1.7.x JRuby 9000 performance stdlib
JRuby currently uses Java's BigInteger to handle String#to_i when it might need a Bignum. However, this algorithm is quite a bit slower than the one in MRI 2.1. ``` system ~/projects/jruby/jruby-tmp $ rvm ruby-head do ruby -v -rjson -rbenchmark -e 'p Benchmark.realtime { ("1" * 1000000).to_i }' ruby 2.1.0dev (2013-09-30) [x86_64-darwin12.5.0] 0.895446 system ~/projects/jruby/jruby-tmp $ rvm jruby-1.7.8 do ruby -v -rjson -rbenchmark -e 'p Benchmark.realtime { ("1" * 1000000).to_i }' jruby 1.7.8 (1.9.3p392) 2013-11-18 0ce429e on Java HotSpot(TM) 64-Bit Server VM 1.8.0-ea-b115 +indy [darwin-x86_64] 24.387 system ~/projects/jruby/jruby-tmp $ jruby -v -rjson -rbenchmark -e 'p Benchmark.realtime { ("1" * 1000000).to_i }' jruby 9000.dev (2.1.0.dev) 2013-11-23 f73ef68 on Java HotSpot(TM) 64-Bit Server VM 1.8.0-ea-b115 +indy [darwin-x86_64] 24.227 ``` I had a quick look at the algorithm and I don't think it would be hard to mimic.
True
Improve Bignum string parsing - JRuby currently uses Java's BigInteger to handle String#to_i when it might need a Bignum. However, this algorithm is quite a bit slower than the one in MRI 2.1. ``` system ~/projects/jruby/jruby-tmp $ rvm ruby-head do ruby -v -rjson -rbenchmark -e 'p Benchmark.realtime { ("1" * 1000000).to_i }' ruby 2.1.0dev (2013-09-30) [x86_64-darwin12.5.0] 0.895446 system ~/projects/jruby/jruby-tmp $ rvm jruby-1.7.8 do ruby -v -rjson -rbenchmark -e 'p Benchmark.realtime { ("1" * 1000000).to_i }' jruby 1.7.8 (1.9.3p392) 2013-11-18 0ce429e on Java HotSpot(TM) 64-Bit Server VM 1.8.0-ea-b115 +indy [darwin-x86_64] 24.387 system ~/projects/jruby/jruby-tmp $ jruby -v -rjson -rbenchmark -e 'p Benchmark.realtime { ("1" * 1000000).to_i }' jruby 9000.dev (2.1.0.dev) 2013-11-23 f73ef68 on Java HotSpot(TM) 64-Bit Server VM 1.8.0-ea-b115 +indy [darwin-x86_64] 24.227 ``` I had a quick look at the algorithm and I don't think it would be hard to mimic.
non_priority
improve bignum string parsing jruby currently uses java s biginteger to handle string to i when it might need a bignum however this algorithm is quite a bit slower than the one in mri system projects jruby jruby tmp rvm ruby head do ruby v rjson rbenchmark e p benchmark realtime to i ruby system projects jruby jruby tmp rvm jruby do ruby v rjson rbenchmark e p benchmark realtime to i jruby on java hotspot tm bit server vm ea indy system projects jruby jruby tmp jruby v rjson rbenchmark e p benchmark realtime to i jruby dev dev on java hotspot tm bit server vm ea indy i had a quick look at the algorithm and i don t think it would be hard to mimic
0
234,375
17,952,810,722
IssuesEvent
2021-09-13 01:10:50
UnBArqDsw2021-1/2021.1_G6_Curumim
https://api.github.com/repos/UnBArqDsw2021-1/2021.1_G6_Curumim
closed
Guia de estilo
documentation
### Descrição: Issue direcionada para a criação do guia de estilo da aplicação. ### Tarefas: - [ ] Criar guia de estilo; - [ ] Todos os integrantes pontuarem a issue. ### Critérios de aceitação: - [ ] Guia de estilo criado; - [ ] Issue pontuada.
1.0
Guia de estilo - ### Descrição: Issue direcionada para a criação do guia de estilo da aplicação. ### Tarefas: - [ ] Criar guia de estilo; - [ ] Todos os integrantes pontuarem a issue. ### Critérios de aceitação: - [ ] Guia de estilo criado; - [ ] Issue pontuada.
non_priority
guia de estilo descrição issue direcionada para a criação do guia de estilo da aplicação tarefas criar guia de estilo todos os integrantes pontuarem a issue critérios de aceitação guia de estilo criado issue pontuada
0
137,131
12,746,260,786
IssuesEvent
2020-06-26 15:37:33
department-of-veterans-affairs/caseflow
https://api.github.com/repos/department-of-veterans-affairs/caseflow
opened
Document Special Issues Findings
Priority: Medium Product: caseflow-dispatch Product: caseflow-queue Team: Echo 🐬 Type: Documentation Type: Tech-Improvement
After the completion of the Special Issues investigations, we want to capture the information we learned about Special Issues in Queue. ### AC - [ ] Create a Special Issues page - Document general special issues information - Document the disconnection between Special Issues in Queue vs in Caseflow Dispatch - Link to Queue SI page / section, Link to Caseflow Dispatch SI section - [ ] Create a Queue Special Issues page - document history - document use - document disconnect - [ ] Update the Caseflow Dispatch page with a Special Issues section - document history - document use - document disconnect
1.0
Document Special Issues Findings - After the completion of the Special Issues investigations, we want to capture the information we learned about Special Issues in Queue. ### AC - [ ] Create a Special Issues page - Document general special issues information - Document the disconnection between Special Issues in Queue vs in Caseflow Dispatch - Link to Queue SI page / section, Link to Caseflow Dispatch SI section - [ ] Create a Queue Special Issues page - document history - document use - document disconnect - [ ] Update the Caseflow Dispatch page with a Special Issues section - document history - document use - document disconnect
non_priority
document special issues findings after the completion of the special issues investigations we want to capture the information we learned about special issues in queue ac create a special issues page document general special issues information document the disconnection between special issues in queue vs in caseflow dispatch link to queue si page section link to caseflow dispatch si section create a queue special issues page document history document use document disconnect update the caseflow dispatch page with a special issues section document history document use document disconnect
0
720,895
24,810,240,431
IssuesEvent
2022-10-25 08:51:23
spidernet-io/spiderpool
https://api.github.com/repos/spidernet-io/spiderpool
closed
Edit subnet is rejected
issue/not-assign priority/important-soon kind/bug
Describe the version version about: spiderpool - v0.2.2 **Describe the bug** A Subnet, without any IP assigned to the IPPool,But it is not possible to change the size of the ips **Output of the failure** ``` # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # # spidersubnets.spiderpool.spidernet.io "v4-ss-10" was not valid: # * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs' # apiVersion: spiderpool.spidernet.io/v1 kind: SpiderSubnet metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"spiderpool.spidernet.io/v1","kind":"SpiderSubnet","metadata":{"annotations":{},"deletionGracePeriodSeconds":0,"finalizers":["spiderpool.spidernet.io"],"generation":2,"name":"v4-ss-10","resourceVersion":"43769"},"spec":{"gateway":"10.118.88.1","ipVersion":4,"ips":["10.118.88.2-10.118.88.201"],"subnet":"10.118.88.0/24","vlan":0}} creationTimestamp: "2022-10-25T03:23:49Z" finalizers: - spiderpool.spidernet.io generation: 1 name: v4-ss-10 resourceVersion: "127189" uid: 1f9b1561-a7a8-4a14-80ba-105bf77198be spec: gateway: 10.118.88.1 ipVersion: 4 ips: - 10.118.88.2-10.118.88.100 subnet: 10.118.88.0/24 vlan: 0 status: allocatedIPCount: 0 totalIPCount: 200 ``` **Additional** ``` # * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs' status: allocatedIPCount: 0 totalIPCount: 200 ```
1.0
Edit subnet is rejected - Describe the version version about: spiderpool - v0.2.2 **Describe the bug** A Subnet, without any IP assigned to the IPPool,But it is not possible to change the size of the ips **Output of the failure** ``` # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # # spidersubnets.spiderpool.spidernet.io "v4-ss-10" was not valid: # * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs' # apiVersion: spiderpool.spidernet.io/v1 kind: SpiderSubnet metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"spiderpool.spidernet.io/v1","kind":"SpiderSubnet","metadata":{"annotations":{},"deletionGracePeriodSeconds":0,"finalizers":["spiderpool.spidernet.io"],"generation":2,"name":"v4-ss-10","resourceVersion":"43769"},"spec":{"gateway":"10.118.88.1","ipVersion":4,"ips":["10.118.88.2-10.118.88.201"],"subnet":"10.118.88.0/24","vlan":0}} creationTimestamp: "2022-10-25T03:23:49Z" finalizers: - spiderpool.spidernet.io generation: 1 name: v4-ss-10 resourceVersion: "127189" uid: 1f9b1561-a7a8-4a14-80ba-105bf77198be spec: gateway: 10.118.88.1 ipVersion: 4 ips: - 10.118.88.2-10.118.88.100 subnet: 10.118.88.0/24 vlan: 0 status: allocatedIPCount: 0 totalIPCount: 200 ``` **Additional** ``` # * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs' status: allocatedIPCount: 0 totalIPCount: 200 ```
priority
edit subnet is rejected describe the version version about spiderpool describe the bug a subnet without any ip assigned to the ippool,but it is not possible to change the size of the ips output of the failure please edit the object below lines beginning with a will be ignored and an empty file will abort the edit if an error occurs while saving this file will be reopened with the relevant failures spidersubnets spiderpool spidernet io ss was not valid spec ips forbidden remove some ip ranges that is being used total ip addresses of an subnet are jointly determined by spec ips and spec excludeips apiversion spiderpool spidernet io kind spidersubnet metadata annotations kubectl kubernetes io last applied configuration apiversion spiderpool spidernet io kind spidersubnet metadata annotations deletiongraceperiodseconds finalizers generation name ss resourceversion spec gateway ipversion ips subnet vlan creationtimestamp finalizers spiderpool spidernet io generation name ss resourceversion uid spec gateway ipversion ips subnet vlan status allocatedipcount totalipcount additional spec ips forbidden remove some ip ranges that is being used total ip addresses of an subnet are jointly determined by spec ips and spec excludeips status allocatedipcount totalipcount
1
631,370
20,151,185,281
IssuesEvent
2022-02-09 12:33:45
ita-social-projects/horondi_admin
https://api.github.com/repos/ita-social-projects/horondi_admin
closed
(SP:1)[Admin page: Категорії] The button 'Зберегти' isn't clickable and error message doesn't appear after creating new category without image
bug priority: medium Admin
Environment: Windows Server 2019 Standart 64-bit, Google Chrome v.88.0.4324.190 Reproducible: always Build found: the last commit from https://github.com/ita-social-projects/horondi_admin Pre-conditions: Go to https://horondi-admin-staging.azurewebsites.net Log into Administrator page as Administrator: login 'qualityadmin@gmail.com', password 'qualityControl123' Click on 'Category' menu item. Steps to reproduce: 1.Click 'Додати категоріїю' button 2.Enter valid data into the field 'Код категорії' 3.Enter valid data in the field 'Назва категорії' 4.Enter valid data in the field 'Name of category' Actual result: The button 'Зберегти' isn't clickable Expected result: The button 'Зберегти' is clickable and warning message is displayed: 'Додайте зображення до категорії' after clicking on 'Зберегти' button [User story] (https://jira.softserve.academy/browse/LVHRB-15) [TC] (https://jira.softserve.academy/browse/LVHRB-68)
1.0
(SP:1)[Admin page: Категорії] The button 'Зберегти' isn't clickable and error message doesn't appear after creating new category without image - Environment: Windows Server 2019 Standart 64-bit, Google Chrome v.88.0.4324.190 Reproducible: always Build found: the last commit from https://github.com/ita-social-projects/horondi_admin Pre-conditions: Go to https://horondi-admin-staging.azurewebsites.net Log into Administrator page as Administrator: login 'qualityadmin@gmail.com', password 'qualityControl123' Click on 'Category' menu item. Steps to reproduce: 1.Click 'Додати категоріїю' button 2.Enter valid data into the field 'Код категорії' 3.Enter valid data in the field 'Назва категорії' 4.Enter valid data in the field 'Name of category' Actual result: The button 'Зберегти' isn't clickable Expected result: The button 'Зберегти' is clickable and warning message is displayed: 'Додайте зображення до категорії' after clicking on 'Зберегти' button [User story] (https://jira.softserve.academy/browse/LVHRB-15) [TC] (https://jira.softserve.academy/browse/LVHRB-68)
priority
sp the button зберегти isn t clickable and error message doesn t appear after creating new category without image environment windows server standart bit google chrome v reproducible always build found the last commit from pre conditions go to log into administrator page as administrator login qualityadmin gmail com password click on category menu item steps to reproduce click додати категоріїю button enter valid data into the field код категорії enter valid data in the field назва категорії enter valid data in the field name of category actual result the button зберегти isn t clickable expected result the button зберегти is clickable and warning message is displayed додайте зображення до категорії after clicking on зберегти button
1
209,226
7,166,486,608
IssuesEvent
2018-01-29 17:22:23
openshift/origin
https://api.github.com/repos/openshift/origin
closed
oc cluster up v3.9.0-alpha.3 fails due to panic
kind/bug priority/P1 sig/master
`oc cluster up` fails because the server start panics inside the container. ##### Version ``` $ oc version oc v3.9.0-alpha.3+4f709b4-198-dirty kubernetes v1.9.1+a0ce1bc657 features: Basic-Auth GSSAPI Kerberos SPNEGO ``` ##### Steps To Reproduce 1. Build from latest master (`4f709b48f8e52e8c6012bd8b91945f022a437a6a`) 2. `oc cluster up` ##### Current Result ``` Starting OpenShift using openshift/origin:v3.9.0-alpha.3 ... -- Checking OpenShift client ... OK -- Checking Docker client ... OK -- Checking Docker version ... OK -- Checking for existing OpenShift container ... Deleted existing OpenShift container -- Checking for openshift/origin:v3.9.0-alpha.3 image ... OK -- Checking Docker daemon configuration ... OK -- Checking for available ports ... WARNING: Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients. -- Checking type of volume mount ... Using nsenter mounter for OpenShift volumes -- Creating host directories ... OK -- Finding server IP ... Using 127.0.0.1 as the server IP -- Starting OpenShift container ... 
Creating initial OpenShift configuration Starting OpenShift using container 'origin' FAIL Error: could not start OpenShift container "origin" Details: Last 10 lines of "origin" container log: github.com/openshift/origin/pkg/cmd/server/start.NewCommandStartAllInOne.func1(0xc420219b00, 0xc4210a5640, 0x0, 0x2) /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/start/start_allinone.go:89 +0x125 github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).execute(0xc420219b00, 0xc4210a54c0, 0x2, 0x2, 0xc420219b00, 0xc4210a54c0) /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:603 +0x234 github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420219680, 0xc42000e018, 0xc420219680, 0x8) /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:689 +0x2fe github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420219680, 0x9, 0xc420219680) /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:648 +0x2b main.main() /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/cmd/openshift/openshift.go:36 +0x24b $ docker logs 4cdc83a70125 W0125 22:09:23.250592 20853 server.go:160] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP. E0125 22:09:23.250648 20853 controllers.go:121] Server isn't healthy yet. Waiting a little while. 
I0125 22:09:23.254652 20853 server.go:556] Version: v1.9.1+a0ce1bc657 I0125 22:09:23.254911 20853 server.go:586] starting metrics server on 0.0.0.0:10251 I0125 22:09:23.255502 20853 controllermanager.go:109] Version: v1.9.1+a0ce1bc657 E0125 22:09:23.255553 20853 controllermanager.go:117] unable to register configz: register config "componentconfig" twice E0125 22:09:23.257902 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:595: Failed to list *v1.Pod: Get https://127.0.0.1:8443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused E0125 22:09:23.258727 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://127.0.0.1:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused E0125 22:09:23.258735 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://127.0.0.1:8443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused E0125 22:09:23.258745 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolumeClaim: Get https://127.0.0.1:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused E0125 22:09:23.258897 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.PodDisruptionBudget: Get https://127.0.0.1:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused 
E0125 22:09:23.258965 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://127.0.0.1:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused E0125 22:09:23.258977 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://127.0.0.1:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused E0125 22:09:23.259119 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Node: Get https://127.0.0.1:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused E0125 22:09:23.259164 20853 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://127.0.0.1:8443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused I0125 22:09:23.267037 20853 leaderelection.go:174] attempting to acquire leader lease... E0125 22:09:23.267335 20853 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:8443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager: dial tcp 127.0.0.1:8443: getsockopt: connection refused W0125 22:09:23.267737 20853 admission.go:66] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. I0125 22:09:23.708350 20853 master_config.go:356] Will report 172.31.0.101 as public IP address. 
2018-01-25 22:09:23.713938 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:4001: getsockopt: connection refused"; Reconnecting to {127.0.0.1:4001 <nil>}
I0125 22:09:23.714893   20853 start_master.go:532] Starting master on 0.0.0.0:8443 (v3.9.0-alpha.3+78ddc10)
I0125 22:09:23.714907   20853 start_master.go:533] Public master address is https://127.0.0.1:8443
I0125 22:09:23.714917   20853 start_master.go:540] Using images from "openshift/origin-<component>:v3.9.0-alpha.3"
2018-01-25 22:09:23.715025 I | embed: peerTLS: cert = /var/lib/origin/openshift.local.config/master/etcd.server.crt, key = /var/lib/origin/openshift.local.config/master/etcd.server.key, ca = /var/lib/origin/openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2018-01-25 22:09:23.715118 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:4001: getsockopt: connection refused"; Reconnecting to {127.0.0.1:4001 <nil>}
2018-01-25 22:09:23.715669 I | embed: listening for peers on https://0.0.0.0:7001
2018-01-25 22:09:23.715718 I | embed: listening for client requests on 0.0.0.0:4001
2018-01-25 22:09:23.733531 I | etcdserver: name = openshift.local
2018-01-25 22:09:23.733549 I | etcdserver: data dir = /var/lib/origin/openshift.local.etcd
2018-01-25 22:09:23.733555 I | etcdserver: member dir = /var/lib/origin/openshift.local.etcd/member
2018-01-25 22:09:23.733559 I | etcdserver: heartbeat = 100ms
2018-01-25 22:09:23.733562 I | etcdserver: election = 1000ms
2018-01-25 22:09:23.733566 I | etcdserver: snapshot count = 100000
2018-01-25 22:09:23.733577 I | etcdserver: advertise client URLs = https://127.0.0.1:4001
2018-01-25 22:09:23.733583 I | etcdserver: initial advertise peer URLs = https://127.0.0.1:7001
2018-01-25 22:09:23.733591 I | etcdserver: initial cluster = openshift.local=https://127.0.0.1:7001
2018-01-25 22:09:23.753838 I | etcdserver: starting member 51cc720fdd39e048 in cluster dcf5ba954f7ebe11
2018-01-25 22:09:23.753888 I | raft: 51cc720fdd39e048 became follower at term 0
2018-01-25 22:09:23.753904 I | raft: newRaft 51cc720fdd39e048 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-01-25 22:09:23.753917 I | raft: 51cc720fdd39e048 became follower at term 1
2018-01-25 22:09:23.779416 W | auth: simple token is not cryptographically signed
2018-01-25 22:09:23.792959 I | etcdserver: starting server... [version: 3.2.8, cluster version: to_be_decided]
2018-01-25 22:09:23.793014 I | embed: ClientTLS: cert = /var/lib/origin/openshift.local.config/master/etcd.server.crt, key = /var/lib/origin/openshift.local.config/master/etcd.server.key, ca = /var/lib/origin/openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2018-01-25 22:09:23.793420 I | etcdserver/membership: added member 51cc720fdd39e048 [https://127.0.0.1:7001] to cluster dcf5ba954f7ebe11
2018-01-25 22:09:24.754253 I | raft: 51cc720fdd39e048 is starting a new election at term 1
2018-01-25 22:09:24.754328 I | raft: 51cc720fdd39e048 became candidate at term 2
2018-01-25 22:09:24.756263 I | raft: 51cc720fdd39e048 received MsgVoteResp from 51cc720fdd39e048 at term 2
2018-01-25 22:09:24.756299 I | raft: 51cc720fdd39e048 became leader at term 2
2018-01-25 22:09:24.756312 I | raft: raft.node: 51cc720fdd39e048 elected leader 51cc720fdd39e048 at term 2
2018-01-25 22:09:24.756586 I | etcdserver: setting up the initial cluster version to 3.2
2018-01-25 22:09:24.759626 N | etcdserver/membership: set the initial cluster version to 3.2
2018-01-25 22:09:24.759670 I | etcdserver/api: enabled capabilities for version 3.2
2018-01-25 22:09:24.759701 I | embed: ready to serve client requests
I0125 22:09:24.759728   20853 run.go:81] Started etcd at 127.0.0.1:4001
2018-01-25 22:09:24.759774 I | etcdserver: published {Name:openshift.local ClientURLs:[https://127.0.0.1:4001]} to cluster dcf5ba954f7ebe11
2018-01-25 22:09:24.769033 I | embed: serving client requests on [::]:4001
W0125 22:09:24.783916   20853 run_components.go:49] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
W0125 22:09:24.784355   20853 server.go:85] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
2018-01-25 22:09:24.784374 I | etcdserver/api/v3rpc: Failed to dial 0.0.0.0:4001: connection error: desc = "transport: remote error: tls: bad certificate"; please retry.
I0125 22:09:24.784565   20853 logs.go:41] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I0125 22:09:24.784584   20853 logs.go:41] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
I0125 22:09:24.884868   20853 run_components.go:75] DNS listening at 0.0.0.0:8053
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc0 pc=0x4144536]

goroutine 1 [running]:
github.com/openshift/origin/pkg/cmd/server/origin.(*MasterConfig).buildHandlerChain.func1(0xef81600, 0xc420aec220, 0xc4205088c0, 0xc420aec220, 0x51ee3c0)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/origin/master.go:288 +0x146
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server.completedConfig.New.func1(0xef81600, 0xc420aec220, 0xef81600, 0xc420aec220)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/config.go:437 +0x45
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server.NewAPIServerHandler(0x5773af6, 0x17, 0xef92d80, 0xc42033a810, 0xefc01c0, 0xc4216918c0, 0xc420aec1a0, 0x0, 0x0, 0x7fefcbed28b0)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/handler.go:103 +0x338
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server.completedConfig.New(0xc4205088c0, 0x0, 0x0, 0x5773af6, 0x17, 0xefdabe0, 0xc420bc97d0, 0x524a560, 0xc421a50001, 0xc420aec180)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/server/config.go:439 +0x149
github.com/openshift/origin/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver.completedConfig.New(0xc420aec180, 0xc420aec048, 0xefdabe0, 0xc420bc97d0, 0x0, 0x0, 0x0)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go:125 +0x87
github.com/openshift/origin/pkg/cmd/server/origin.createAPIExtensionsServer(0xc420aec040, 0xefdabe0, 0xc420bc97d0, 0xc4206b3b90, 0xef81280, 0xc420d4b8e0)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/origin/apiextensions.go:35 +0x5d
github.com/openshift/origin/pkg/cmd/server/origin.(*MasterConfig).withAPIExtensions(0xc42074a400, 0xefdabe0, 0xc420bc97d0, 0xc4212dcc30, 0xc4212772c0, 0xef73440, 0xc4206b3b90, 0xef81280, 0xc420d4b8e0, 0x0, ...)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/origin/master.go:109 +0xf9
github.com/openshift/origin/pkg/cmd/server/origin.(*MasterConfig).Run(0xc42074a400, 0xefc6e40, 0xc42140c240, 0xc420068120, 0x34, 0xc4210ca280)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/origin/master.go:220 +0x122
github.com/openshift/origin/pkg/cmd/server/start.StartAPI(0xc42074a400, 0xefc6e40, 0xc42140c240, 0x1, 0x1)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/start/start_master.go:571 +0xb2
github.com/openshift/origin/pkg/cmd/server/start.(*Master).Start(0xc42116ab30, 0xc42116ab30, 0x0)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/start/start_master.go:542 +0x9de
github.com/openshift/origin/pkg/cmd/server/start.MasterOptions.RunMaster(0xc420185400, 0x1, 0x2da, 0x721, 0x7fff5a677e44, 0x40, 0xef7abc0, 0xc42000e018, 0x0, 0x0, ...)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/start/start_master.go:303 +0x2ef
github.com/openshift/origin/pkg/cmd/server/start.AllInOneOptions.StartAllInOne(0xc420d6aae0, 0xc420a0bf00, 0x2da, 0x721, 0x0, 0x576f0c2, 0x16, 0x7fff5a677e93, 0x46, 0x0, ...)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/start/start_allinone.go:305 +0x1ea
github.com/openshift/origin/pkg/cmd/server/start.NewCommandStartAllInOne.func1(0xc420219b00, 0xc4210a5640, 0x0, 0x2)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/start/start_allinone.go:89 +0x125
github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).execute(0xc420219b00, 0xc4210a54c0, 0x2, 0x2, 0xc420219b00, 0xc4210a54c0)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:603 +0x234
github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420219680, 0xc42000e018, 0xc420219680, 0x8)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420219680, 0x9, 0xc420219680)
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:648 +0x2b
main.main()
    /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/cmd/openshift/openshift.go:36 +0x24b
```
oc cluster up v3.9.0-alpha.3 fails due to panic

`oc cluster up` fails because the server start panics inside the container.

##### Version
```
$ oc version
oc v3.9.0-alpha.3+4f709b4-198-dirty
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
```

##### Steps To Reproduce
1. Build from latest master (`4f709b48f8e52e8c6012bd8b91945f022a437a6a`)
2. `oc cluster up`

##### Current Result
```
Starting OpenShift using openshift/origin:v3.9.0-alpha.3 ...
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ...
   Deleted existing OpenShift container
-- Checking for openshift/origin:v3.9.0-alpha.3 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ...
   WARNING: Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients.
-- Checking type of volume mount ...
   Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
   Using 127.0.0.1 as the server IP
-- Starting OpenShift container ...
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
FAIL
   Error: could not start OpenShift container "origin"
   Details:
     Last 10 lines of "origin" container log:
     github.com/openshift/origin/pkg/cmd/server/start.NewCommandStartAllInOne.func1(0xc420219b00, 0xc4210a5640, 0x0, 0x2)
     /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/pkg/cmd/server/start/start_allinone.go:89 +0x125
     github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).execute(0xc420219b00, 0xc4210a54c0, 0x2, 0x2, 0xc420219b00, 0xc4210a54c0)
     /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:603 +0x234
     github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420219680, 0xc42000e018, 0xc420219680, 0x8)
     /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
     github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420219680, 0x9, 0xc420219680)
     /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:648 +0x2b
     main.main()
     /tmp/openshift/build-rpms/rpm/BUILD/origin-3.9.0/_output/local/go/src/github.com/openshift/origin/cmd/openshift/openshift.go:36 +0x24b

$ docker logs 4cdc83a70125
W0125 22:09:23.250592   20853 server.go:160] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP.
E0125 22:09:23.250648   20853 controllers.go:121] Server isn't healthy yet. Waiting a little while.
```
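The top stack frame is a nil dereference inside a closure created by `buildHandlerChain` (`master.go:288`). As a hedged illustration only (this is not the actual origin code; the types and field names below are invented for the sketch), the failure class is a handler-builder closure that captures a config field which was never initialized:

```go
package main

import "fmt"

// auditConfig stands in for some optional sub-configuration.
type auditConfig struct{ backend string }

// masterConfig stands in for a server config whose optional field may be nil.
type masterConfig struct {
	audit *auditConfig // nil unless explicitly configured
}

// buildHandler returns a closure that (buggily) assumes c.audit is non-nil.
// The dereference only happens when the handler runs, far from construction,
// which is why the panic surfaces deep inside server startup.
func (c *masterConfig) buildHandler() func() string {
	return func() string { return c.audit.backend }
}

// safeCall invokes h and reports whether it panicked, using recover in a
// deferred function so the nil dereference does not kill the process.
func safeCall(h func() string) (out string, panicked bool) {
	defer func() {
		if r := recover(); r != nil {
			panicked = true
		}
	}()
	return h(), false
}

func main() {
	ok := &masterConfig{audit: &auditConfig{backend: "log"}}
	broken := &masterConfig{} // audit left nil, as in the reported crash

	out, p := safeCall(ok.buildHandler())
	fmt.Println(out, p) // prints: log false

	_, p = safeCall(broken.buildHandler())
	fmt.Println(p) // prints: true (runtime error: invalid memory address or nil pointer dereference)
}
```

In the real report there is no `recover`, so the runtime prints the `panic: runtime error: invalid memory address or nil pointer dereference` banner and the goroutine stack shown above, and the container exits.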
github com openshift origin pkg cmd server origin masterconfig withapiextensions tmp openshift build rpms rpm build origin output local go src github com openshift origin pkg cmd server origin master go github com openshift origin pkg cmd server origin masterconfig run tmp openshift build rpms rpm build origin output local go src github com openshift origin pkg cmd server origin master go github com openshift origin pkg cmd server start startapi tmp openshift build rpms rpm build origin output local go src github com openshift origin pkg cmd server start start master go github com openshift origin pkg cmd server start master start tmp openshift build rpms rpm build origin output local go src github com openshift origin pkg cmd server start start master go github com openshift origin pkg cmd server start masteroptions runmaster tmp openshift build rpms rpm build origin output local go src github com openshift origin pkg cmd server start start master go github com openshift origin pkg cmd server start allinoneoptions startallinone tmp openshift build rpms rpm build origin output local go src github com openshift origin pkg cmd server start start allinone go github com openshift origin pkg cmd server start newcommandstartallinone tmp openshift build rpms rpm build origin output local go src github com openshift origin pkg cmd server start start allinone go github com openshift origin vendor github com cobra command execute tmp openshift build rpms rpm build origin output local go src github com openshift origin vendor github com cobra command go github com openshift origin vendor github com cobra command executec tmp openshift build rpms rpm build origin output local go src github com openshift origin vendor github com cobra command go github com openshift origin vendor github com cobra command execute tmp openshift build rpms rpm build origin output local go src github com openshift origin vendor github com cobra command go main main tmp openshift build rpms rpm 
build origin output local go src github com openshift origin cmd openshift openshift go
1
299,850
9,205,911,527
IssuesEvent
2019-03-08 12:04:52
qissue-bot/QGIS
https://api.github.com/repos/qissue-bot/QGIS
closed
Relative paths within QGIS project file
Category: Project Loading/Saving Component: Easy fix? Component: Pull Request or Patch supplied Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Feature request
--- Author Name: **springmeyer -** (springmeyer -) Original Redmine Issue: 1211, https://issues.qgis.org/issues/1211 Original Assignee: nobody - --- It would be extremely useful to allow for the use of relative paths to files loaded within a Qgis project file that rest on the filesystem. This would be a user controlled setting so that when checked project files by default store the relative paths to loaded files.
1.0
Relative paths within QGIS project file - --- Author Name: **springmeyer -** (springmeyer -) Original Redmine Issue: 1211, https://issues.qgis.org/issues/1211 Original Assignee: nobody - --- It would be extremely useful to allow for the use of relative paths to files loaded within a Qgis project file that rest on the filesystem. This would be a user controlled setting so that when checked project files by default store the relative paths to loaded files.
priority
relative paths within qgis project file author name springmeyer springmeyer original redmine issue original assignee nobody it would be extremely useful to allow for the use of relative paths to files loaded within a qgis project file that rest on the filesystem this would be a user controlled setting so that when checked project files by default store the relative paths to loaded files
1
445,720
12,835,441,163
IssuesEvent
2020-07-07 12:53:34
yalla-coop/tempo
https://api.github.com/repos/yalla-coop/tempo
closed
Update get all Spend Activities query
3-points back-end backlog priority-3
__Is this part of a User Journey?__ #616 --- ### Acceptance Criteria: - [x] Update DB to ensure spend activity and spend venue profiles are now in line with latest architecture #577 - [x] Query to fetch all spend venue listings (spend venue has marked themselves as public AND the spend activity has been marked as public) - [x] In the query we also need the total transactions for each activity and to order the activities based on this size (this is indication of popularity) - [ ] Connect with Front end pages to find spend venues https://github.com/yalla-coop/tempo/issues/617 - [ ] Update member dashboard query to ensure it's fetching the three most popular spend activities and connect with front end
1.0
Update get all Spend Activities query - __Is this part of a User Journey?__ #616 --- ### Acceptance Criteria: - [x] Update DB to ensure spend activity and spend venue profiles are now in line with latest architecture #577 - [x] Query to fetch all spend venue listings (spend venue has marked themselves as public AND the spend activity has been marked as public) - [x] In the query we also need the total transactions for each activity and to order the activities based on this size (this is indication of popularity) - [ ] Connect with Front end pages to find spend venues https://github.com/yalla-coop/tempo/issues/617 - [ ] Update member dashboard query to ensure it's fetching the three most popular spend activities and connect with front end
priority
update get all spend activities query is this part of a user journey acceptance criteria update db to ensure spend activity and spend venue profiles are now in line with latest architecture query to fetch all spend venue listings spend venue has marked themselves as public and the spend activity has been marked as public in the query we also need the total transactions for each activity and to order the activities based on this size this is indication of popularity connect with front end pages to find spend venues update member dashboard query to ensure it s fetching the three most popular spend activities and connect with front end
1
178,658
14,675,973,062
IssuesEvent
2020-12-30 18:57:46
Andrew-Chen-Wang/cookiecutter-django-ecs-github
https://api.github.com/repos/Andrew-Chen-Wang/cookiecutter-django-ecs-github
opened
Tutorial: Adding custom VPC
documentation enhancement
For [Donate Anything](https://donate-anything.org/) ([GitHub link](https://github.com/Donate-Anything/Donate-Anything/)), I created a custom VPC so that I could still use my default VPC for other tests the required an EC2 instance. It's also beneficial to do so so you can separate your security groups and instances based on VPC for security measures and better filterability. The next time I deploy a website, I'll upload a tutorial in these comment sections. But for pro tips: - Make sure you enable the internet gateway and your route tables are connected properly if not setup. - There is a doc in AWS that shows you how to add the VPC I believe - Somewhere around the ECS configuration is when you add the VPC. Same with ALB.
1.0
Tutorial: Adding custom VPC - For [Donate Anything](https://donate-anything.org/) ([GitHub link](https://github.com/Donate-Anything/Donate-Anything/)), I created a custom VPC so that I could still use my default VPC for other tests the required an EC2 instance. It's also beneficial to do so so you can separate your security groups and instances based on VPC for security measures and better filterability. The next time I deploy a website, I'll upload a tutorial in these comment sections. But for pro tips: - Make sure you enable the internet gateway and your route tables are connected properly if not setup. - There is a doc in AWS that shows you how to add the VPC I believe - Somewhere around the ECS configuration is when you add the VPC. Same with ALB.
non_priority
tutorial adding custom vpc for i created a custom vpc so that i could still use my default vpc for other tests the required an instance it s also beneficial to do so so you can separate your security groups and instances based on vpc for security measures and better filterability the next time i deploy a website i ll upload a tutorial in these comment sections but for pro tips make sure you enable the internet gateway and your route tables are connected properly if not setup there is a doc in aws that shows you how to add the vpc i believe somewhere around the ecs configuration is when you add the vpc same with alb
0
47,392
5,890,404,012
IssuesEvent
2017-05-17 14:54:14
RestComm/Restcomm-Connect
https://api.github.com/repos/RestComm/Restcomm-Connect
closed
CallManager properly handle numbers that start with +
1. Bug Testing unplanned
bug discovered from failing test case: org.restcomm.connect.testsuite.telephony.TestDialVerbPartThree#testDialClientAliceWithPlusSign
1.0
CallManager properly handle numbers that start with + - bug discovered from failing test case: org.restcomm.connect.testsuite.telephony.TestDialVerbPartThree#testDialClientAliceWithPlusSign
non_priority
callmanager properly handle numbers that start with bug discovered from failing test case org restcomm connect testsuite telephony testdialverbpartthree testdialclientalicewithplussign
0
279,129
24,201,650,836
IssuesEvent
2022-09-24 16:57:18
wearable-learning-cloud-platform/wlcp-issues
https://api.github.com/repos/wearable-learning-cloud-platform/wlcp-issues
closed
24 Hour Sessions
enhancement game manager game editor high priority game player ready for testing
Sessions in the WLCP should only last 24 hours then you should be automatically logged out.
1.0
24 Hour Sessions - Sessions in the WLCP should only last 24 hours then you should be automatically logged out.
non_priority
hour sessions sessions in the wlcp should only last hours then you should be automatically logged out
0
145,571
11,698,990,987
IssuesEvent
2020-03-06 14:52:23
OpenLiberty/open-liberty
https://api.github.com/repos/OpenLiberty/open-liberty
closed
Test defect persistent timers demo
team:Zombie Apocalypse test bug
A test defect has been found during a build where the AutomaticDatabase timer in com.ibm.ws.concurrent.persistent_fat_demo_timers had it's `@PostConstruct` method called a second time. Likely indicating there may have been a race condition issue, or was prematurely destroyed. As a stateless bean (and not singleton) this behavior would be acceptable? To increase the rigidity of this test we should actually look at the database to see how many times the timer has run.
1.0
Test defect persistent timers demo - A test defect has been found during a build where the AutomaticDatabase timer in com.ibm.ws.concurrent.persistent_fat_demo_timers had it's `@PostConstruct` method called a second time. Likely indicating there may have been a race condition issue, or was prematurely destroyed. As a stateless bean (and not singleton) this behavior would be acceptable? To increase the rigidity of this test we should actually look at the database to see how many times the timer has run.
non_priority
test defect persistent timers demo a test defect has been found during a build where the automaticdatabase timer in com ibm ws concurrent persistent fat demo timers had it s postconstruct method called a second time likely indicating there may have been a race condition issue or was prematurely destroyed as a stateless bean and not singleton this behavior would be acceptable to increase the rigidity of this test we should actually look at the database to see how many times the timer has run
0
147,370
5,638,436,514
IssuesEvent
2017-04-06 11:57:58
kuzzleio/kuzzle
https://api.github.com/repos/kuzzleio/kuzzle
closed
The Redis cache must be disable in RoleRepository and ProfileRepository
bug priority-high
The `RoleRepository` and `ProfileRepository` was ment to force the disabling of Redis cache as they already have inmemory cache, but was somehow re-enabled by mistake.
1.0
The Redis cache must be disable in RoleRepository and ProfileRepository - The `RoleRepository` and `ProfileRepository` was ment to force the disabling of Redis cache as they already have inmemory cache, but was somehow re-enabled by mistake.
priority
the redis cache must be disable in rolerepository and profilerepository the rolerepository and profilerepository was ment to force the disabling of redis cache as they already have inmemory cache but was somehow re enabled by mistake
1
429,169
12,421,737,942
IssuesEvent
2020-05-23 18:20:33
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Problem trying to add a text box
Priority:P2 Reporting/Dashboards Type:Bug
Info --- **OS** ```CentOS release 6.9 (Final)``` **Metabase** ```Metabase version v0.29.0``` **Metabase hosting** ```Docker version 1.7.1, build 786b29d/1.7.1``` **DB** ```MySQL Community Server (GPL) 5.7.20``` ---- Hi, I have a problem when I try to add a "text box" in all dashboards, this is the error: ```javascript { "message":"Column 'card_id' cannot be null", "type":"class com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException", "stacktrace":[ "models.dashboard_card$fn__35931$create_dashboard_card_BANG___35936$fn__35937$fn__35939.invoke(dashboard_card.clj:151)", "models.dashboard_card$fn__35931$create_dashboard_card_BANG___35936$fn__35937.invoke(dashboard_card.clj:150)", "models.dashboard_card$fn__35931$create_dashboard_card_BANG___35936.invoke(dashboard_card.clj:144)", "models.dashboard$add_dashcard_BANG_.invokeStatic(dashboard.clj:215)", "models.dashboard$add_dashcard_BANG_.doInvoke(dashboard.clj:202)", "api.dashboard$fn__41383$fn__41386.invoke(dashboard.clj:256)", "api.common.internal$do_with_caught_api_exceptions.invokeStatic(internal.clj:254)", "api.common.internal$do_with_caught_api_exceptions.invoke(internal.clj:249)", "api.dashboard$fn__41383.invokeStatic(dashboard.clj:248)", "api.dashboard$fn__41383.invoke(dashboard.clj:248)", "middleware$enforce_authentication$fn__38506.invoke(middleware.clj:118)", "api.routes$fn__49378.invokeStatic(routes.clj:66)", "api.routes$fn__49378.invoke(routes.clj:66)", "routes$fn__49469$fn__49470.doInvoke(routes.clj:108)", "routes$fn__49469.invokeStatic(routes.clj:103)", "routes$fn__49469.invoke(routes.clj:103)", "middleware$log_api_call$fn__38605$fn__38607.invoke(middleware.clj:349)", "middleware$log_api_call$fn__38605.invoke(middleware.clj:348)", "middleware$add_security_headers$fn__38555.invoke(middleware.clj:251)", "core$wrap_streamed_json_response$fn__54821.invoke(core.clj:67)", "middleware$bind_current_user$fn__38510.invoke(middleware.clj:139)", 
"middleware$maybe_set_site_url$fn__38559.invoke(middleware.clj:275)" ], "sql-exception-chain":[ "MySQLIntegrityConstraintViolationException:", "Message: Column 'card_id' cannot be null", "SQLState: 23000", "Error Code: 1048" ] } ``` I figured out how solve this problem, the column `card_id` in `report_dashboardcard` have a `NOT NULL` property and if I change this property to false, the text box is added without problems. **Why ?** I think is because this text box is not a question and is not created in `report_card` and its value is saved in `visualization_settings` a column of `report_dashboardcard`. I dont know if this is a bug....
1.0
Problem trying to add a text box - Info --- **OS** ```CentOS release 6.9 (Final)``` **Metabase** ```Metabase version v0.29.0``` **Metabase hosting** ```Docker version 1.7.1, build 786b29d/1.7.1``` **DB** ```MySQL Community Server (GPL) 5.7.20``` ---- Hi, I have a problem when I try to add a "text box" in all dashboards, this is the error: ```javascript { "message":"Column 'card_id' cannot be null", "type":"class com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException", "stacktrace":[ "models.dashboard_card$fn__35931$create_dashboard_card_BANG___35936$fn__35937$fn__35939.invoke(dashboard_card.clj:151)", "models.dashboard_card$fn__35931$create_dashboard_card_BANG___35936$fn__35937.invoke(dashboard_card.clj:150)", "models.dashboard_card$fn__35931$create_dashboard_card_BANG___35936.invoke(dashboard_card.clj:144)", "models.dashboard$add_dashcard_BANG_.invokeStatic(dashboard.clj:215)", "models.dashboard$add_dashcard_BANG_.doInvoke(dashboard.clj:202)", "api.dashboard$fn__41383$fn__41386.invoke(dashboard.clj:256)", "api.common.internal$do_with_caught_api_exceptions.invokeStatic(internal.clj:254)", "api.common.internal$do_with_caught_api_exceptions.invoke(internal.clj:249)", "api.dashboard$fn__41383.invokeStatic(dashboard.clj:248)", "api.dashboard$fn__41383.invoke(dashboard.clj:248)", "middleware$enforce_authentication$fn__38506.invoke(middleware.clj:118)", "api.routes$fn__49378.invokeStatic(routes.clj:66)", "api.routes$fn__49378.invoke(routes.clj:66)", "routes$fn__49469$fn__49470.doInvoke(routes.clj:108)", "routes$fn__49469.invokeStatic(routes.clj:103)", "routes$fn__49469.invoke(routes.clj:103)", "middleware$log_api_call$fn__38605$fn__38607.invoke(middleware.clj:349)", "middleware$log_api_call$fn__38605.invoke(middleware.clj:348)", "middleware$add_security_headers$fn__38555.invoke(middleware.clj:251)", "core$wrap_streamed_json_response$fn__54821.invoke(core.clj:67)", "middleware$bind_current_user$fn__38510.invoke(middleware.clj:139)", 
"middleware$maybe_set_site_url$fn__38559.invoke(middleware.clj:275)" ], "sql-exception-chain":[ "MySQLIntegrityConstraintViolationException:", "Message: Column 'card_id' cannot be null", "SQLState: 23000", "Error Code: 1048" ] } ``` I figured out how solve this problem, the column `card_id` in `report_dashboardcard` have a `NOT NULL` property and if I change this property to false, the text box is added without problems. **Why ?** I think is because this text box is not a question and is not created in `report_card` and its value is saved in `visualization_settings` a column of `report_dashboardcard`. I dont know if this is a bug....
priority
problem trying to add a text box info os centos release final metabase metabase version metabase hosting docker version build db mysql community server gpl hi i have a problem when i try to add a text box in all dashboards this is the error javascript message column card id cannot be null type class com mysql jdbc exceptions mysqlintegrityconstraintviolationexception stacktrace models dashboard card fn create dashboard card bang fn fn invoke dashboard card clj models dashboard card fn create dashboard card bang fn invoke dashboard card clj models dashboard card fn create dashboard card bang invoke dashboard card clj models dashboard add dashcard bang invokestatic dashboard clj models dashboard add dashcard bang doinvoke dashboard clj api dashboard fn fn invoke dashboard clj api common internal do with caught api exceptions invokestatic internal clj api common internal do with caught api exceptions invoke internal clj api dashboard fn invokestatic dashboard clj api dashboard fn invoke dashboard clj middleware enforce authentication fn invoke middleware clj api routes fn invokestatic routes clj api routes fn invoke routes clj routes fn fn doinvoke routes clj routes fn invokestatic routes clj routes fn invoke routes clj middleware log api call fn fn invoke middleware clj middleware log api call fn invoke middleware clj middleware add security headers fn invoke middleware clj core wrap streamed json response fn invoke core clj middleware bind current user fn invoke middleware clj middleware maybe set site url fn invoke middleware clj sql exception chain mysqlintegrityconstraintviolationexception message column card id cannot be null sqlstate error code i figured out how solve this problem the column card id in report dashboardcard have a not null property and if i change this property to false the text box is added without problems why i think is because this text box is not a question and is not created in report card and its value is saved in visualization settings a 
column of report dashboardcard i dont know if this is a bug
1
46,132
13,150,684,608
IssuesEvent
2020-08-09 12:57:35
shaundmorris/ddf
https://api.github.com/repos/shaundmorris/ddf
closed
CVE-2018-10237 Medium Severity Vulnerability detected by WhiteSource
security vulnerability wontfix
## CVE-2018-10237 - Medium Severity Vulnerability <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>guava-18.0.jar</b>, <b>guava-14.0.1.jar</b>, <b>guava-19.0.jar</b>, <b>guava-16.0.1.jar</b>, <b>guava-20.0.jar</b></p></summary> <p> <details><summary><b>guava-18.0.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. Guava has only one code dependency - javax.annotation, per the JSR-305 spec.</p> <p>path: /ddf/distribution/ddf/target/dependencies/apache-karaf-4.2.2/system/com/google/guava/guava/18.0/guava-18.0.jar</p> <p> <p>Library home page: <a href=http://code.google.com/p/guava-libraries/guava>http://code.google.com/p/guava-libraries/guava</a></p> Dependency Hierarchy: - :x: **guava-18.0.jar** (Vulnerable Library) </details> <details><summary><b>guava-14.0.1.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. Guava has two code dependencies - javax.annotation per the JSR-305 spec and javax.inject per the JSR-330 spec.</p> <p>path: /ddf/distribution/kernel/target/dependencies/solr/server/solr-webapp/webapp/WEB-INF/lib/guava-14.0.1.jar,/ddf/distribution/solr-distro/target/solr-7.4.0/server/solr-webapp/webapp/WEB-INF/lib/guava-14.0.1.jar,/ddf/distribution/ddf/target/dependencies/solr/server/solr-webapp/webapp/WEB-INF/lib/guava-14.0.1.jar</p> <p> <p>Library home page: <a href=http://code.google.com/p/guava-libraries/guava>http://code.google.com/p/guava-libraries/guava</a></p> Dependency Hierarchy: - :x: **guava-14.0.1.jar** (Vulnerable Library) </details> <details><summary><b>guava-19.0.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. 
Guava has only one code dependency - javax.annotation, per the JSR-305 spec.</p> <p>path: /ddf/distribution/ddf/target/dependencies/apache-karaf-4.2.2/system/com/google/guava/guava/19.0/guava-19.0.jar</p> <p> <p>Library home page: <a href=https://github.com/google/guava/guava>https://github.com/google/guava/guava</a></p> Dependency Hierarchy: - :x: **guava-19.0.jar** (Vulnerable Library) </details> <details><summary><b>guava-16.0.1.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. Guava has only one code dependency - javax.annotation, per the JSR-305 spec.</p> <p>path: /ddf/distribution/ddf/target/dependencies/apache-karaf-4.2.2/system/com/google/guava/guava/16.0.1/guava-16.0.1.jar</p> <p> <p>Library home page: <a href=http://code.google.com/p/guava-libraries/guava>http://code.google.com/p/guava-libraries/guava</a></p> Dependency Hierarchy: - :x: **guava-16.0.1.jar** (Vulnerable Library) </details> <details><summary><b>guava-20.0.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. 
Guava has only one code dependency - javax.annotation, per the JSR-305 spec.</p> <p>path: /root/.m2/repository/com/google/guava/guava/20.0/guava-20.0.jar,2/repository/com/google/guava/guava/20.0/guava-20.0.jar</p> <p> <p>Library home page: <a href=https://github.com/google/guava/guava>https://github.com/google/guava/guava</a></p> Dependency Hierarchy: - :x: **guava-20.0.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/shaundmorris/ddf/commit/ea35fc52f05b85ef437e9a9e39a887ad51692ff0">ea35fc52f05b85ef437e9a9e39a887ad51692ff0</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable. <p>Publish Date: 2018-04-26 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-10237>CVE-2018-10237</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://www.securitytracker.com/id/1041707">http://www.securitytracker.com/id/1041707</a></p> <p>Fix Resolution: Red Hat has issued a fix. The Red Hat advisory is available at: https://access.redhat.com/errata/RHSA-2018:2740 https://access.redhat.com/errata/RHSA-2018:2741 https://access.redhat.com/errata/RHSA-2018:2742 https://access.redhat.com/errata/RHSA-2018:2743</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-10237 Medium Severity Vulnerability detected by WhiteSource - ## CVE-2018-10237 - Medium Severity Vulnerability <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>guava-18.0.jar</b>, <b>guava-14.0.1.jar</b>, <b>guava-19.0.jar</b>, <b>guava-16.0.1.jar</b>, <b>guava-20.0.jar</b></p></summary> <p> <details><summary><b>guava-18.0.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. Guava has only one code dependency - javax.annotation, per the JSR-305 spec.</p> <p>path: /ddf/distribution/ddf/target/dependencies/apache-karaf-4.2.2/system/com/google/guava/guava/18.0/guava-18.0.jar</p> <p> <p>Library home page: <a href=http://code.google.com/p/guava-libraries/guava>http://code.google.com/p/guava-libraries/guava</a></p> Dependency Hierarchy: - :x: **guava-18.0.jar** (Vulnerable Library) </details> <details><summary><b>guava-14.0.1.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. 
Guava has two code dependencies - javax.annotation per the JSR-305 spec and javax.inject per the JSR-330 spec.</p> <p>path: /ddf/distribution/kernel/target/dependencies/solr/server/solr-webapp/webapp/WEB-INF/lib/guava-14.0.1.jar,/ddf/distribution/solr-distro/target/solr-7.4.0/server/solr-webapp/webapp/WEB-INF/lib/guava-14.0.1.jar,/ddf/distribution/ddf/target/dependencies/solr/server/solr-webapp/webapp/WEB-INF/lib/guava-14.0.1.jar</p> <p> <p>Library home page: <a href=http://code.google.com/p/guava-libraries/guava>http://code.google.com/p/guava-libraries/guava</a></p> Dependency Hierarchy: - :x: **guava-14.0.1.jar** (Vulnerable Library) </details> <details><summary><b>guava-19.0.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. Guava has only one code dependency - javax.annotation, per the JSR-305 spec.</p> <p>path: /ddf/distribution/ddf/target/dependencies/apache-karaf-4.2.2/system/com/google/guava/guava/19.0/guava-19.0.jar</p> <p> <p>Library home page: <a href=https://github.com/google/guava/guava>https://github.com/google/guava/guava</a></p> Dependency Hierarchy: - :x: **guava-19.0.jar** (Vulnerable Library) </details> <details><summary><b>guava-16.0.1.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. 
Guava has only one code dependency - javax.annotation, per the JSR-305 spec.</p> <p>path: /ddf/distribution/ddf/target/dependencies/apache-karaf-4.2.2/system/com/google/guava/guava/16.0.1/guava-16.0.1.jar</p> <p> <p>Library home page: <a href=http://code.google.com/p/guava-libraries/guava>http://code.google.com/p/guava-libraries/guava</a></p> Dependency Hierarchy: - :x: **guava-16.0.1.jar** (Vulnerable Library) </details> <details><summary><b>guava-20.0.jar</b></p></summary> <p>Guava is a suite of core and expanded libraries that include utility classes, google's collections, io classes, and much much more. Guava has only one code dependency - javax.annotation, per the JSR-305 spec.</p> <p>path: /root/.m2/repository/com/google/guava/guava/20.0/guava-20.0.jar,2/repository/com/google/guava/guava/20.0/guava-20.0.jar</p> <p> <p>Library home page: <a href=https://github.com/google/guava/guava>https://github.com/google/guava/guava</a></p> Dependency Hierarchy: - :x: **guava-20.0.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/shaundmorris/ddf/commit/ea35fc52f05b85ef437e9a9e39a887ad51692ff0">ea35fc52f05b85ef437e9a9e39a887ad51692ff0</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Unbounded memory allocation in Google Guava 11.0 through 24.x before 24.1.1 allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker-provided data, because the AtomicDoubleArray class (when serialized with Java serialization) and the CompoundOrdering class (when serialized with GWT serialization) perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable. 
<p>Publish Date: 2018-04-26 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-10237>CVE-2018-10237</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://www.securitytracker.com/id/1041707">http://www.securitytracker.com/id/1041707</a></p> <p>Fix Resolution: Red Hat has issued a fix. The Red Hat advisory is available at: https://access.redhat.com/errata/RHSA-2018:2740 https://access.redhat.com/errata/RHSA-2018:2741 https://access.redhat.com/errata/RHSA-2018:2742 https://access.redhat.com/errata/RHSA-2018:2743</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve medium severity vulnerability detected by whitesource cve medium severity vulnerability vulnerable libraries guava jar guava jar guava jar guava jar guava jar guava jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more guava has only one code dependency javax annotation per the jsr spec path ddf distribution ddf target dependencies apache karaf system com google guava guava guava jar library home page a href dependency hierarchy x guava jar vulnerable library guava jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more guava has two code dependencies javax annotation per the jsr spec and javax inject per the jsr spec path ddf distribution kernel target dependencies solr server solr webapp webapp web inf lib guava jar ddf distribution solr distro target solr server solr webapp webapp web inf lib guava jar ddf distribution ddf target dependencies solr server solr webapp webapp web inf lib guava jar library home page a href dependency hierarchy x guava jar vulnerable library guava jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more guava has only one code dependency javax annotation per the jsr spec path ddf distribution ddf target dependencies apache karaf system com google guava guava guava jar library home page a href dependency hierarchy x guava jar vulnerable library guava jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more guava has only one code dependency javax annotation per the jsr spec path ddf distribution ddf target dependencies apache karaf system com google guava guava guava jar library home page a href dependency hierarchy x guava jar vulnerable library guava jar guava is a suite of core and expanded libraries that include utility classes 
google s collections io classes and much much more guava has only one code dependency javax annotation per the jsr spec path root repository com google guava guava guava jar repository com google guava guava guava jar library home page a href dependency hierarchy x guava jar vulnerable library found in head commit a href vulnerability details unbounded memory allocation in google guava through x before allows remote attackers to conduct denial of service attacks against servers that depend on this library and deserialize attacker provided data because the atomicdoublearray class when serialized with java serialization and the compoundordering class when serialized with gwt serialization perform eager allocation without appropriate checks on what a client has sent and whether the data size is reasonable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href fix resolution red hat has issued a fix the red hat advisory is available at step up your open source security game with whitesource
0
22,702
15,386,811,982
IssuesEvent
2021-03-03 08:44:24
cividi/spatial-data-package-platform
https://api.github.com/repos/cividi/spatial-data-package-platform
closed
Re-Setup autodeploy to stage
enhancement infrastructure
After migrating to master stage should be reconnected to automatically update on every push / merged pull request to master.
1.0
Re-Setup autodeploy to stage - After migrating to master stage should be reconnected to automatically update on every push / merged pull request to master.
non_priority
re setup autodeploy to stage after migrating to master stage should be reconnected to automatically update on every push merged pull request to master
0
394,277
11,634,393,722
IssuesEvent
2020-02-28 10:17:12
Seniru/merchant
https://api.github.com/repos/Seniru/merchant
opened
Set a maximum limit of workers a company can have
difficulty: hard good first issue help wanted priority: high status: unassigned type: enhancement type: feature request
This issue is mostly related with the changes planned in #79 The company's growth will be retarded when the maximum salary is set to a certain maximum level - since there's no obvious to invest it anymore. That'd make a bad experience to players also, since they can't improve themselves nor their companies. Because of this issue it has been discussed to set a maximum limit of players that a company can bear (maximum limit should be no higher than **30**) *Also note the maximum limit at the beginning should be 5 or a few more.* How this thing is related with investment is that you can increase the limit by investing more. The amount that should invest in order to achieve more limit should be exponential, so players can spend their money earned by high paying salaries on them. For example, the maximum limit and the amount invested should have a relationship like below ![image](https://user-images.githubusercontent.com/34127015/75539973-29ddd780-5a41-11ea-914d-4da81eaa58d1.png) So players have to buy new companies as it is hard to buy more space for their workers. This would add more challenge to the game as well
1.0
Set a maximum limit of workers a company can have - This issue is mostly related with the changes planned in #79 The company's growth will be retarded when the maximum salary is set to a certain maximum level - since there's no obvious to invest it anymore. That'd make a bad experience to players also, since they can't improve themselves nor their companies. Because of this issue it has been discussed to set a maximum limit of players that a company can bear (maximum limit should be no higher than **30**) *Also note the maximum limit at the beginning should be 5 or a few more.* How this thing is related with investment is that you can increase the limit by investing more. The amount that should invest in order to achieve more limit should be exponential, so players can spend their money earned by high paying salaries on them. For example, the maximum limit and the amount invested should have a relationship like below ![image](https://user-images.githubusercontent.com/34127015/75539973-29ddd780-5a41-11ea-914d-4da81eaa58d1.png) So players have to buy new companies as it is hard to buy more space for their workers. This would add more challenge to the game as well
priority
set a maximum limit of workers a company can have this issue is mostly related with the changes planned in the company s growth will be retarded when the maximum salary is set to a certain maximum level since there s no obvious to invest it anymore that d make a bad experience to players also since they can t improve themselves nor their companies because of this issue it has been discussed to set a maximum limit of players that a company can bear maximum limit should be no higher than also note the maximum limit at the beginning should be or a few more how this thing is related with investment is that you can increase the limit by investing more the amount that should invest in order to achieve more limit should be exponential so players can spend their money earned by high paying salaries on them for example the maximum limit and the amount invested should have a relationship like below so players have to buy new companies as it is hard to buy more space for their workers this would add more challenge to the game as well
1
300,390
9,210,259,390
IssuesEvent
2019-03-09 03:09:33
smacademic/project-bdf
https://api.github.com/repos/smacademic/project-bdf
closed
Create product requirements
priority: high type: missing
In order to stay organized and on task, we will need to create a product requirement document. We may follow the CS305 guidelines for product requirements. - Every requirement should be verifiable and traceable - Every requirement should have: - a unique ID - a priority to show urgency - a status to indicate standing - a relationship to other requirements: dependency, grouping, etc. This issue may be related to issues #3 and #4.
1.0
Create product requirements - In order to stay organized and on task, we will need to create a product requirement document. We may follow the CS305 guidelines for product requirements. - Every requirement should be verifiable and traceable - Every requirement should have: - a unique ID - a priority to show urgency - a status to indicate standing - a relationship to other requirements: dependency, grouping, etc. This issue may be related to issues #3 and #4.
priority
create product requirements in order to stay organized and on task we will need to create a product requirement document we may follow the guidelines for product requirements every requirement should be verifiable and traceable every requirement should have a unique id a priority to show urgency a status to indicate standing a relationship to other requirements dependency grouping etc this issue may be related to issues and
1
124,185
26,417,262,025
IssuesEvent
2023-01-13 16:55:48
patternfly/pf-codemods
https://api.github.com/repos/patternfly/pf-codemods
closed
Popover - Remove deprecated props
codemod
Follow up to breaking change PR https://github.com/patternfly/patternfly-react/pull/8201 - Any consumer references to Popover's `boundary` and `tippyProps` props should be removed. - Any consumers defining and passing Popover a `shouldClose` function with more than one parameter, needs to be given a warning that the first parameter has been removed (or we can remove it for them if possible). - Any consumers defining parameters as part of functions passed to the following Popover props should be warned that those function types have been updated to remove all parameters: - onHidden - onHide - onMount - onShow - onShown _Required actions:_ 1. Build codemod 2. Build test 3. Update readme with description & example
1.0
Popover - Remove deprecated props - Follow up to breaking change PR https://github.com/patternfly/patternfly-react/pull/8201 - Any consumer references to Popover's `boundary` and `tippyProps` props should be removed. - Any consumers defining and passing Popover a `shouldClose` function with more than one parameter, needs to be given a warning that the first parameter has been removed (or we can remove it for them if possible). - Any consumers defining parameters as part of functions passed to the following Popover props should be warned that those function types have been updated to remove all parameters: - onHidden - onHide - onMount - onShow - onShown _Required actions:_ 1. Build codemod 2. Build test 3. Update readme with description & example
non_priority
popover remove deprecated props follow up to breaking change pr any consumer references to popover s boundary and tippyprops props should be removed any consumers defining and passing popover a shouldclose function with more than one parameter needs to be given a warning that the first parameter has been removed or we can remove it for them if possible any consumers defining parameters as part of functions passed to the following popover props should be warned that those function types have been updated to remove all parameters onhidden onhide onmount onshow onshown required actions build codemod build test update readme with description example
0
38,046
18,900,944,073
IssuesEvent
2021-11-16 00:46:27
keras-team/keras
https://api.github.com/repos/keras-team/keras
closed
fix Cropping2D layer return empty list if crop is higher than data shape
type:bug/performance
As we did in #14970. This issue solve the `Cropping2D` layer of return and empty list if the `cropping` parameter is higher than data shape. We solved in the same way that we did in #14970. Please refer to #14970 for more info.
True
fix Cropping2D layer return empty list if crop is higher than data shape - As we did in #14970. This issue solve the `Cropping2D` layer of return and empty list if the `cropping` parameter is higher than data shape. We solved in the same way that we did in #14970. Please refer to #14970 for more info.
non_priority
fix layer return empty list if crop is higher than data shape as we did in this issue solve the layer of return and empty list if the cropping parameter is higher than data shape we solved in the same way that we did in please refer to for more info
0
22,823
6,303,713,992
IssuesEvent
2017-07-21 14:21:33
Microsoft/WindowsTemplateStudio
https://api.github.com/repos/Microsoft/WindowsTemplateStudio
opened
Port existing C# templates to VB
Generated Code
As a continuation of #371 and X-Ref https://github.com/Microsoft/WindowsTemplateStudio/wiki/Visual-Basic - Port existing C# templates to VB. - Also, need strategy and automation for the handling of future changes to C# templates and localization.
1.0
Port existing C# templates to VB - As a continuation of #371 and X-Ref https://github.com/Microsoft/WindowsTemplateStudio/wiki/Visual-Basic - Port existing C# templates to VB. - Also, need strategy and automation for the handling of future changes to C# templates and localization.
non_priority
port existing c templates to vb as a continuation of and x ref port existing c templates to vb also need strategy and automation for the handling of future changes to c templates and localization
0
238,326
19,712,245,414
IssuesEvent
2022-01-13 07:15:08
milvus-io/milvus
https://api.github.com/repos/milvus-io/milvus
opened
[Bug]: [benchmark][cluster][performance] 50 million data sets, create ivf_flat index, increase the number of indexnodes, the time reduction of index creation does not meet expectations
kind/bug needs-triage test/benchmark performance tuning
### Is there an existing issue for this? - [X] I have searched the existing issues ### Environment ```markdown - Milvus version: - Deployment mode(standalone or cluster): - SDK version(e.g. pymilvus v2.0.0rc2): - OS(Ubuntu or CentOS): - CPU/Memory: - GPU: - Others: ``` ### Current Behavior **50 million data sets, create ivf_flat index, increase the number of indexnodes, the time reduction of index creation does not meet expectations** <img width="1338" alt="截屏2022-01-13 15 14 07" src="https://user-images.githubusercontent.com/26307815/149283036-a6ebc3c9-c4d6-4510-aa38-d3c683137e7f.png"> ### Expected Behavior _No response_ ### Steps To Reproduce ```markdown 1、create collection 2、insert 50 million dataset 3、flush collection 4、build index ``` ### Anything else? _No response_
1.0
[Bug]: [benchmark][cluster][performance] 50 million data sets, create ivf_flat index, increase the number of indexnodes, the time reduction of index creation does not meet expectations - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Environment ```markdown - Milvus version: - Deployment mode(standalone or cluster): - SDK version(e.g. pymilvus v2.0.0rc2): - OS(Ubuntu or CentOS): - CPU/Memory: - GPU: - Others: ``` ### Current Behavior **50 million data sets, create ivf_flat index, increase the number of indexnodes, the time reduction of index creation does not meet expectations** <img width="1338" alt="截屏2022-01-13 15 14 07" src="https://user-images.githubusercontent.com/26307815/149283036-a6ebc3c9-c4d6-4510-aa38-d3c683137e7f.png"> ### Expected Behavior _No response_ ### Steps To Reproduce ```markdown 1、create collection 2、insert 50 million dataset 3、flush collection 4、build index ``` ### Anything else? _No response_
non_priority
million data sets create ivf flat index increase the number of indexnodes the time reduction of index creation does not meet expectations is there an existing issue for this i have searched the existing issues environment markdown milvus version deployment mode standalone or cluster sdk version e g pymilvus os ubuntu or centos cpu memory gpu others current behavior million data sets create ivf flat index increase the number of indexnodes the time reduction of index creation does not meet expectations img width alt src expected behavior no response steps to reproduce markdown 、create collection 、insert million dataset 、flush collection 、build index anything else no response
0
710,999
24,446,596,026
IssuesEvent
2022-10-06 18:31:58
systemd/systemd
https://api.github.com/repos/systemd/systemd
closed
Support `libbpf` v1.0.0
RFE 🎁 pid1 priority bpf
### Component systemd ### Is your feature request related to a problem? Please describe systemd does not support the [recently released `libbpf` v1.0.0](https://github.com/libbpf/libbpf/releases/tag/v1.0.0). ### Describe the solution you'd like - Bump the version of `libbpf.so`: https://github.com/systemd/systemd/blob/1a0e065e9f154f46fd68cd45f46310bc7df7a51c/src/shared/bpf-dlopen.c#L49 - Adapt implementation which currently relies on functions which got removed in v1.0.0 (`bpf_create_map`, `bpf_map__resize`, `bpf_probe_prog_type`, maybe others). ### Describe alternatives you've considered None ### The systemd version you checked that didn't have the feature you are asking for _No response_
1.0
Support `libbpf` v1.0.0 - ### Component systemd ### Is your feature request related to a problem? Please describe systemd does not support the [recently released `libbpf` v1.0.0](https://github.com/libbpf/libbpf/releases/tag/v1.0.0). ### Describe the solution you'd like - Bump the version of `libbpf.so`: https://github.com/systemd/systemd/blob/1a0e065e9f154f46fd68cd45f46310bc7df7a51c/src/shared/bpf-dlopen.c#L49 - Adapt implementation which currently relies on functions which got removed in v1.0.0 (`bpf_create_map`, `bpf_map__resize`, `bpf_probe_prog_type`, maybe others). ### Describe alternatives you've considered None ### The systemd version you checked that didn't have the feature you are asking for _No response_
priority
support libbpf component systemd is your feature request related to a problem please describe systemd does not support the describe the solution you d like bump the version of libbpf so adapt implementation which currently relies on functions which got removed in bpf create map bpf map resize bpf probe prog type maybe others describe alternatives you ve considered none the systemd version you checked that didn t have the feature you are asking for no response
1
382,598
11,308,777,867
IssuesEvent
2020-01-19 08:33:45
dirkwhoffmann/vAmiga
https://api.github.com/repos/dirkwhoffmann/vAmiga
closed
Test bltpri3 fails
Blitter Priority-High bug
Amiga 500 8A 🥰: ![bltpri3_A500_8A](https://user-images.githubusercontent.com/12561945/72666206-52b98880-3a10-11ea-9577-8c5ea7a3e961.jpeg) UAE 👍: <img width="721" alt="bltpri3_uae" src="https://user-images.githubusercontent.com/12561945/72666308-176b8980-3a11-11ea-9931-41e71f8b7de1.png"> vAmiga: 🙈 <img width="898" alt="Bildschirmfoto 2020-01-18 um 16 26 01" src="https://user-images.githubusercontent.com/12561945/72666214-5ea54a80-3a10-11ea-9554-e14f8f4be1f3.png"> If I interpret the images correctly, vAmiga's CPU is delayed too much here. In contrast to test bltpri2 which has all bitplanes disabled, this test runs with 6 bitplanes enabled. Possible explanation: Up to know I thought BLTPRI is a thing just between the CPU and the Blitter: If the Blitter has stolen n cycles from the CPU and BLTPRI equals 0, the Blitter gives a cycle to the CPU. New hypothesis: If n cycles were stolen from the CPU (bitplane DMA can steal as well), the Blitter is nice and gives a cycle to the CPU.
1.0
Test bltpri3 fails - Amiga 500 8A 🥰: ![bltpri3_A500_8A](https://user-images.githubusercontent.com/12561945/72666206-52b98880-3a10-11ea-9577-8c5ea7a3e961.jpeg) UAE 👍: <img width="721" alt="bltpri3_uae" src="https://user-images.githubusercontent.com/12561945/72666308-176b8980-3a11-11ea-9931-41e71f8b7de1.png"> vAmiga: 🙈 <img width="898" alt="Bildschirmfoto 2020-01-18 um 16 26 01" src="https://user-images.githubusercontent.com/12561945/72666214-5ea54a80-3a10-11ea-9554-e14f8f4be1f3.png"> If I interpret the images correctly, vAmiga's CPU is delayed too much here. In contrast to test bltpri2 which has all bitplanes disabled, this test runs with 6 bitplanes enabled. Possible explanation: Up to know I thought BLTPRI is a thing just between the CPU and the Blitter: If the Blitter has stolen n cycles from the CPU and BLTPRI equals 0, the Blitter gives a cycle to the CPU. New hypothesis: If n cycles were stolen from the CPU (bitplane DMA can steal as well), the Blitter is nice and gives a cycle to the CPU.
priority
test fails amiga 🥰 uae 👍 img width alt uae src vamiga 🙈 img width alt bildschirmfoto um src if i interpret the images correctly vamiga s cpu is delayed too much here in contrast to test which has all bitplanes disabled this test runs with bitplanes enabled possible explanation up to know i thought bltpri is a thing just between the cpu and the blitter if the blitter has stolen n cycles from the cpu and bltpri equals the blitter gives a cycle to the cpu new hypothesis if n cycles were stolen from the cpu bitplane dma can steal as well the blitter is nice and gives a cycle to the cpu
1
25,726
12,728,842,110
IssuesEvent
2020-06-25 03:58:24
yalelibrary/YUL-DC
https://api.github.com/repos/yalelibrary/YUL-DC
opened
Can't Deploy to ECS from efs-mounts branch
bug performance team
``` bin/build-cluster.sh yul-cd # runs successfully bin/add-alb.sh yul-dc # runs successfully bin/deploy-psql.sh yul-cd # runs successfully bin/deploy-solr.sh yul-cd # runs successfully bin/deploy-main.sh yul-cd Target cluster: yul-cd Using AWS_PROFILE=dce-hosting AWS_DEFAULT_REGION=us-east-1 arn:aws:elasticloadbalancing:us-east-1:229792048549:targetgroup/tg-yul-cd-blacklight/ffaaa73891e018a1 arn:aws:elasticloadbalancing:us-east-1:229792048549:targetgroup/tg-yul-cd-images/cd6adae7f473a55b arn:aws:elasticloadbalancing:us-east-1:229792048549:targetgroup/tg-yul-cd-manifests/0d14aa662b255cd1 arn:aws:elasticloadbalancing:us-east-1:229792048549:targetgroup/tg-yul-cd-management/927dbfe5c2eed903 ERRO[0001] Error registering task definition error="ClientException: Container.image should not be null or empty." family=yul-cd-main ERRO[0001] Create task definition failed error="ClientException: Container.image should not be null or empty." FATA[0001] ClientException: Container.image should not be null or empty. ```
True
Can't Deploy to ECS from efs-mounts branch - ``` bin/build-cluster.sh yul-cd # runs successfully bin/add-alb.sh yul-dc # runs successfully bin/deploy-psql.sh yul-cd # runs successfully bin/deploy-solr.sh yul-cd # runs successfully bin/deploy-main.sh yul-cd Target cluster: yul-cd Using AWS_PROFILE=dce-hosting AWS_DEFAULT_REGION=us-east-1 arn:aws:elasticloadbalancing:us-east-1:229792048549:targetgroup/tg-yul-cd-blacklight/ffaaa73891e018a1 arn:aws:elasticloadbalancing:us-east-1:229792048549:targetgroup/tg-yul-cd-images/cd6adae7f473a55b arn:aws:elasticloadbalancing:us-east-1:229792048549:targetgroup/tg-yul-cd-manifests/0d14aa662b255cd1 arn:aws:elasticloadbalancing:us-east-1:229792048549:targetgroup/tg-yul-cd-management/927dbfe5c2eed903 ERRO[0001] Error registering task definition error="ClientException: Container.image should not be null or empty." family=yul-cd-main ERRO[0001] Create task definition failed error="ClientException: Container.image should not be null or empty." FATA[0001] ClientException: Container.image should not be null or empty. ```
non_priority
can t deploy to ecs from efs mounts branch bin build cluster sh yul cd runs successfully bin add alb sh yul dc runs successfully bin deploy psql sh yul cd runs successfully bin deploy solr sh yul cd runs successfully bin deploy main sh yul cd target cluster yul cd using aws profile dce hosting aws default region us east arn aws elasticloadbalancing us east targetgroup tg yul cd blacklight arn aws elasticloadbalancing us east targetgroup tg yul cd images arn aws elasticloadbalancing us east targetgroup tg yul cd manifests arn aws elasticloadbalancing us east targetgroup tg yul cd management erro error registering task definition error clientexception container image should not be null or empty family yul cd main erro create task definition failed error clientexception container image should not be null or empty fata clientexception container image should not be null or empty
0
50,891
7,643,206,020
IssuesEvent
2018-05-08 11:55:26
shoebot/shoebot
https://api.github.com/repos/shoebot/shoebot
opened
Our index.html is just an index..
documentation
Maybe this should be more of a front-page. I've been using a lot of python projects recently, and my favorites: - Tell you what they are - Have small code examples + screenshots demonstrating how awsome they are on their landing page. The worst offenders, say things like "Frob" is an implementation of "Baa", when you click to "Baa" it tells you it is a "Thingathing" etc. We're not as bad as this, but it would be easy given the complex family tree. Our strengths are the simplicity of our code and our nice vector based output, we should be showing these off. This ticket is a little vague - I'll try and link in good + bad examples if I get time.
1.0
Our index.html is just an index.. - Maybe this should be more of a front-page. I've been using a lot of python projects recently, and my favorites: - Tell you what they are - Have small code examples + screenshots demonstrating how awsome they are on their landing page. The worst offenders, say things like "Frob" is an implementation of "Baa", when you click to "Baa" it tells you it is a "Thingathing" etc. We're not as bad as this, but it would be easy given the complex family tree. Our strengths are the simplicity of our code and our nice vector based output, we should be showing these off. This ticket is a little vague - I'll try and link in good + bad examples if I get time.
non_priority
our index html is just an index maybe this should be more of a front page i ve been using a lot of python projects recently and my favorites tell you what they are have small code examples screenshots demonstrating how awsome they are on their landing page the worst offenders say things like frob is an implementation of baa when you click to baa it tells you it is a thingathing etc we re not as bad as this but it would be easy given the complex family tree our strengths are the simplicity of our code and our nice vector based output we should be showing these off this ticket is a little vague i ll try and link in good bad examples if i get time
0
89,629
3,798,156,782
IssuesEvent
2016-03-23 11:14:57
BrcMapsTeam/europe-15-situational-awareness
https://api.github.com/repos/BrcMapsTeam/europe-15-situational-awareness
opened
Add in High Importance Events to Forecasting Graphs to add Context
high priority new feature
Would be good to incorporate the high importance events from the situational awareness dash to the forecasting graphs for better context
1.0
Add in High Importance Events to Forecasting Graphs to add Context - Would be good to incorporate the high importance events from the situational awareness dash to the forecasting graphs for better context
priority
add in high importance events to forecasting graphs to add context would be good to incorporate the high importance events from the situational awareness dash to the forecasting graphs for better context
1
134,663
18,491,463,097
IssuesEvent
2021-10-19 01:14:24
serhii73/ukrdict
https://api.github.com/repos/serhii73/ukrdict
opened
CVE-2020-7212 (High) detected in urllib3-1.25.3-py2.py3-none-any.whl
security vulnerability
## CVE-2020-7212 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.25.3-py2.py3-none-any.whl</b></p></summary> <p>HTTP library with thread-safe connection pooling, file post, and more.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/e6/60/247f23a7121ae632d62811ba7f273d0e58972d75e58a94d329d51550a47d/urllib3-1.25.3-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/e6/60/247f23a7121ae632d62811ba7f273d0e58972d75e58a94d329d51550a47d/urllib3-1.25.3-py2.py3-none-any.whl</a></p> <p> Dependency Hierarchy: - requests-2.26.0-py2.py3-none-any.whl (Root Library) - :x: **urllib3-1.25.3-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/serhii73/ukrdict/commit/0fabe3ecbabd59957519819e22667e8700ce22c9">0fabe3ecbabd59957519819e22667e8700ce22c9</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). 
<p>Publish Date: 2020-03-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7212>CVE-2020-7212</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-hmv2-79q8-fv6g">https://github.com/advisories/GHSA-hmv2-79q8-fv6g</a></p> <p>Release Date: 2020-03-09</p> <p>Fix Resolution: urllib3 - 1.25.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7212 (High) detected in urllib3-1.25.3-py2.py3-none-any.whl - ## CVE-2020-7212 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.25.3-py2.py3-none-any.whl</b></p></summary> <p>HTTP library with thread-safe connection pooling, file post, and more.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/e6/60/247f23a7121ae632d62811ba7f273d0e58972d75e58a94d329d51550a47d/urllib3-1.25.3-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/e6/60/247f23a7121ae632d62811ba7f273d0e58972d75e58a94d329d51550a47d/urllib3-1.25.3-py2.py3-none-any.whl</a></p> <p> Dependency Hierarchy: - requests-2.26.0-py2.py3-none-any.whl (Root Library) - :x: **urllib3-1.25.3-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/serhii73/ukrdict/commit/0fabe3ecbabd59957519819e22667e8700ce22c9">0fabe3ecbabd59957519819e22667e8700ce22c9</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). 
<p>Publish Date: 2020-03-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7212>CVE-2020-7212</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-hmv2-79q8-fv6g">https://github.com/advisories/GHSA-hmv2-79q8-fv6g</a></p> <p>Release Date: 2020-03-09</p> <p>Fix Resolution: urllib3 - 1.25.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve high detected in none any whl cve high severity vulnerability vulnerable library none any whl http library with thread safe connection pooling file post and more library home page a href dependency hierarchy requests none any whl root library x none any whl vulnerable library found in head commit a href vulnerability details the encode invalid chars function in util url py in the library through for python allows a denial of service cpu consumption because of an inefficient algorithm the percent encodings array contains all matches of percent encodings it is not deduplicated for a url of length n the size of percent encodings may be up to o n the next step normalize existing percent encoded bytes also takes up to o n for each step so the total time is o n if percent encodings were deduplicated the time to compute encode invalid chars would be o kn where k is at most publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
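The CVE-2020-7212 record above describes an O(N^2) blow-up: `_encode_invalid_chars` in urllib3 1.25.2–1.25.7 collects every percent-encoding match in a URL without deduplicating, then does O(N) work per match. The sketch below is not the urllib3 source; it is a minimal illustration of the deduplication idea the advisory points at, with a hypothetical `normalize_percent_encodings` helper.

```python
import re

def normalize_percent_encodings(url):
    # Collect every "%XX" sequence in the URL. For a URL of length N,
    # the list of matches can itself be O(N) long.
    matches = re.findall(r"%[0-9A-Fa-f]{2}", url)
    # Deduplicating first bounds the normalization loop at the number
    # of distinct encodings (at most 484 per the advisory), instead of
    # running an O(N) step once per raw match.
    for enc in set(matches):
        url = url.replace(enc, enc.upper())
    return url

print(normalize_percent_encodings("a%2fb%2fc"))  # a%2Fb%2Fc
```

With deduplication the total cost is O(kN) with k bounded by a constant, which is the complexity the advisory gives for the fixed behavior in urllib3 1.25.8.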
3,890
2,711,123,998
IssuesEvent
2015-04-09 02:10:24
openaustralia/righttoknow
https://api.github.com/repos/openaustralia/righttoknow
closed
'View as HTML' label is meaningless to the citizen
bug design wording
For the link for citizens to view a document attached to a request in their browser: ![screen shot 2015-04-07 at 12 16 03 pm](https://cloud.githubusercontent.com/assets/1239550/7015983/1630e22a-dd20-11e4-87fc-5bceeee014db.png) Why are we telling them what format it's in? Do they know what it means that it's in HTML? Maybe a label like: 'View online' or 'View in your browser' or 'View text version' would be better? On the document view page we also say "This is an HTML version of an attachment to the Freedom of Information request". Same question applies here.
1.0
'View as HTML' label is meaningless to the citizen - For the link for citizens to view a document attached to a request in their browser: ![screen shot 2015-04-07 at 12 16 03 pm](https://cloud.githubusercontent.com/assets/1239550/7015983/1630e22a-dd20-11e4-87fc-5bceeee014db.png) Why are we telling them what format it's in? Do they know what it means that it's in HTML? Maybe a label like: 'View online' or 'View in your browser' or 'View text version' would be better? On the document view page we also say "This is an HTML version of an attachment to the Freedom of Information request". Same question applies here.
non_priority
view as html label is meaningless to the citizen for the link for citizens to view a document attached to a request in their browser why are we telling them what format it s in do they know what it means that it s in html maybe a label like view online or view in your browser or view text version would be better on the document view page we also say this is an html version of an attachment to the freedom of information request same question applies here
0
130,476
12,429,876,678
IssuesEvent
2020-05-25 09:17:37
enzoampil/fastquant
https://api.github.com/repos/enzoampil/fastquant
closed
Add auto-optimization feature in readme
documentation
The new auto-optimization feature is not yet in the readme.
1.0
Add auto-optimization feature in readme - The new auto-optimization feature is not yet in the readme.
non_priority
add auto optimization feature in readme the new auto optimization feature is not yet in the readme
0
179,953
14,729,302,636
IssuesEvent
2021-01-06 11:13:56
rl-institut/oemof-B3
https://api.github.com/repos/rl-institut/oemof-B3
opened
Create model and scenario factsheets on OEP
documentation enhancement help wanted
Framework Factsheet: [oemof](https://openenergy-platform.org/factsheets/frameworks/5/) Model Factsheet: [oemof-B3](https://openenergy-platform.org/factsheets/models/175/) Scenario Factsheet: (ToDo) * study * scenario 1 * scenario 2 * scenario 3
1.0
Create model and scenario factsheets on OEP - Framework Factsheet: [oemof](https://openenergy-platform.org/factsheets/frameworks/5/) Model Factsheet: [oemof-B3](https://openenergy-platform.org/factsheets/models/175/) Scenario Factsheet: (ToDo) * study * scenario 1 * scenario 2 * scenario 3
non_priority
create model and scenario factsheets on oep framework factsheet model factsheet scenario factsheet todo study scenario scenario scenario
0
240,309
7,800,917,804
IssuesEvent
2018-06-09 15:04:09
tine20/Tine-2.0-Open-Source-Groupware-and-CRM
https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
closed
0011954: setup.php (webbased) "accept conditions"-dialog broken now
Bug Mantis Setup high priority
**Reported by Tesla42 on 18 Jun 2016 23:20** **Version:** Egon Community Edition (2016.03.3) Just tested the new 2016.03.3. After first install opening localhost/tine20setup.php in browser accepting terms and conditions is broken. **Steps to reproduce:** I checked the two checkboxes and clicked the accept button up left. Nothing happend. **Additional information:** Was a clean new install. Tine20 never installed on this system.
1.0
0011954: setup.php (webbased) "accept conditions"-dialog broken now - **Reported by Tesla42 on 18 Jun 2016 23:20** **Version:** Egon Community Edition (2016.03.3) Just tested the new 2016.03.3. After first install opening localhost/tine20setup.php in browser accepting terms and conditions is broken. **Steps to reproduce:** I checked the two checkboxes and clicked the accept button up left. Nothing happend. **Additional information:** Was a clean new install. Tine20 never installed on this system.
priority
setup php webbased accept conditions dialog broken now reported by on jun version egon community edition just tested the new after first install opening localhost php in browser accepting terms and conditions is broken steps to reproduce i checked the two checkboxes and clicked the accept button up left nothing happend additional information was a clean new install never installed on this system
1
11,502
5,012,929,959
IssuesEvent
2016-12-13 13:02:15
CleverRaven/Cataclysm-DDA
https://api.github.com/repos/CleverRaven/Cataclysm-DDA
closed
linux compile error
Build
src/string_id.h:66:50: error: instantiation of variable 'string_id<ter_t>::NULL_ID' required here, but no definition is available [-Werror,-Wundefined-var-template] string_id( const null_id_type & ) : _id( NULL_ID._id ), _cid( NULL_ID._cid ) {} ^ src/mapdata.h:70:50: note: in instantiation of member function 'string_id<ter_t>::string_id' requested here drop_group("EMPTY_GROUP"), ter_set(NULL_ID), furn_set(NULL_ID) {}; ^ src/string_id.h:154:35: note: forward declaration of template entity is here static const string_id<T> NULL_ID; ^ src/string_id.h:66:50: note: add an explicit instantiation declaration to suppress this warning if 'string_id<ter_t>::NULL_ID' is explicitly instantiated in another translation unit string_id( const null_id_type & ) : _id( NULL_ID._id ), _cid( NULL_ID._cid ) {}
1.0
linux compile error - src/string_id.h:66:50: error: instantiation of variable 'string_id<ter_t>::NULL_ID' required here, but no definition is available [-Werror,-Wundefined-var-template] string_id( const null_id_type & ) : _id( NULL_ID._id ), _cid( NULL_ID._cid ) {} ^ src/mapdata.h:70:50: note: in instantiation of member function 'string_id<ter_t>::string_id' requested here drop_group("EMPTY_GROUP"), ter_set(NULL_ID), furn_set(NULL_ID) {}; ^ src/string_id.h:154:35: note: forward declaration of template entity is here static const string_id<T> NULL_ID; ^ src/string_id.h:66:50: note: add an explicit instantiation declaration to suppress this warning if 'string_id<ter_t>::NULL_ID' is explicitly instantiated in another translation unit string_id( const null_id_type & ) : _id( NULL_ID._id ), _cid( NULL_ID._cid ) {}
non_priority
linux compile error src string id h error instantiation of variable string id null id required here but no definition is available string id const null id type id null id id cid null id cid src mapdata h note in instantiation of member function string id string id requested here drop group empty group ter set null id furn set null id src string id h note forward declaration of template entity is here static const string id null id src string id h note add an explicit instantiation declaration to suppress this warning if string id null id is explicitly instantiated in another translation unit string id const null id type id null id id cid null id cid
0
459,098
13,186,289,506
IssuesEvent
2020-08-12 23:40:31
Automattic/woocommerce-payments
https://api.github.com/repos/Automattic/woocommerce-payments
closed
Checkout does not work when creating an account using 3DS cards
Priority: High [Feature] Checkout [Type] Bug
**Bug description** When customer creation is enabled on checkout and customers try to pay with a 3DS card, the payment is not processed successfully after the card authentication, due to nonce verification issues. This seems to happen because the `wcpay_update_order_status_nonce` is created while the customer is still a guest, but it's only used when the customer already has an account. Since the UID is different for guest vs customer, the nonce verification fails and the order status is not updated. **Expected behavior** Guest customers should be able to pay with a 3DS card when creating an account during checkout.
1.0
Checkout does not work when creating an account using 3DS cards - **Bug description** When customer creation is enabled on checkout and customers try to pay with a 3DS card, the payment is not processed successfully after the card authentication, due to nonce verification issues. This seems to happen because the `wcpay_update_order_status_nonce` is created while the customer is still a guest, but it's only used when the customer already has an account. Since the UID is different for guest vs customer, the nonce verification fails and the order status is not updated. **Expected behavior** Guest customers should be able to pay with a 3DS card when creating an account during checkout.
priority
checkout does not work when creating an account using cards bug description when customer creation is enabled on checkout and customers try to pay with a card the payment is not processed successfully after the card authentication due to nonce verification issues this seems to happen because the wcpay update order status nonce is created while the customer is still a guest but it s only used when the customer already has an account since the uid is different for guest vs customer the nonce verification fails and the order status is not updated expected behavior guest customers should be able to pay with a card when creating an account during checkout
1
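The woocommerce-payments record above attributes the 3DS checkout failure to a nonce minted for a guest but verified against a freshly created customer account. The following is a hedged, generic sketch of that failure mode (not WooCommerce or WordPress code): when a nonce is derived from the user's identity, any identity change between creation and verification makes it invalid. The secret key and action name are illustrative assumptions.

```python
import hashlib
import hmac

SECRET = b"example-secret"  # hypothetical key, for illustration only

def make_nonce(user_id, action):
    # Bind the nonce to the current user identity, as WordPress-style
    # nonces do.
    msg = f"{user_id}:{action}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_nonce(nonce, user_id, action):
    return hmac.compare_digest(nonce, make_nonce(user_id, action))

ACTION = "wcpay_update_order_status"
# Nonce minted while the shopper is still a guest (uid 0)...
nonce = make_nonce(0, ACTION)
# ...verifies for the guest, but fails once checkout has created an
# account and the request is attributed to the new customer (uid 42):
print(verify_nonce(nonce, 0, ACTION))   # True
print(verify_nonce(nonce, 42, ACTION))  # False
```

This is why the fix described in the issue has to account for the guest-to-customer transition rather than assume a stable UID across the 3DS round trip.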
228,200
18,164,796,300
IssuesEvent
2021-09-27 13:37:55
eclipse-openj9/openj9
https://api.github.com/repos/eclipse-openj9/openj9
opened
java/lang/invoke/MethodHandlesAsCollectorTest.java testAsCollector null AssertionError
comp:jit test failure
https://github.com/eclipse-openj9/openj9/issues/13457 jdk_lang_0 -Xdump:system:none -Xdump:heap:none -Xdump:system:events=gpf+abort+traceassert+corruptcache -XX:-JITServerTechPreviewMessage -XX:+UseCompressedOops java/lang/invoke/MethodHandlesAsCollectorTest.java ``` 23:03:21 JavaTest Message: JUnit Failure: testAsCollector(test.java.lang.invoke.MethodHandlesAsCollectorTest): null 23:03:21 java.lang.AssertionError 23:03:21 at java.base/java.lang.invoke.LambdaFormBuffer.insertExpression(LambdaFormBuffer.java:388) 23:03:21 at java.base/java.lang.invoke.LambdaFormEditor.spreadArgumentsForm(LambdaFormEditor.java:624) 23:03:21 at java.base/java.lang.invoke.MethodHandle.asSpreader(MethodHandle.java:1028) 23:03:21 at java.base/java.lang.invoke.MethodHandle.asSpreader(MethodHandle.java:982) 23:03:21 at java.base/java.lang.invoke.Invokers.spreadInvoker(Invokers.java:225) 23:03:21 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:732) 23:03:21 at test.java.lang.invoke.MethodHandlesAsCollectorTest.testAsCollector(MethodHandlesAsCollectorTest.java:88) 23:03:21 at test.java.lang.invoke.MethodHandlesAsCollectorTest.testAsCollector0(MethodHandlesAsCollectorTest.java:65) 23:03:21 at test.java.lang.invoke.MethodHandlesAsCollectorTest$$Lambda$25/0x000000002308c650.run(Unknown Source) 23:03:21 at test.java.lang.invoke.lib.CodeCacheOverflowProcessor$$Lambda$26/0x000000002308d028.run(Unknown Source) 23:03:21 at jdk.test.lib.Utils.filterException(Utils.java:624) 23:03:21 at test.java.lang.invoke.lib.CodeCacheOverflowProcessor.runMHTest(CodeCacheOverflowProcessor.java:71) 23:03:21 at test.java.lang.invoke.MethodHandlesAsCollectorTest.testAsCollector(MethodHandlesAsCollectorTest.java:49)
1.0
java/lang/invoke/MethodHandlesAsCollectorTest.java testAsCollector null AssertionError - https://github.com/eclipse-openj9/openj9/issues/13457 jdk_lang_0 -Xdump:system:none -Xdump:heap:none -Xdump:system:events=gpf+abort+traceassert+corruptcache -XX:-JITServerTechPreviewMessage -XX:+UseCompressedOops java/lang/invoke/MethodHandlesAsCollectorTest.java ``` 23:03:21 JavaTest Message: JUnit Failure: testAsCollector(test.java.lang.invoke.MethodHandlesAsCollectorTest): null 23:03:21 java.lang.AssertionError 23:03:21 at java.base/java.lang.invoke.LambdaFormBuffer.insertExpression(LambdaFormBuffer.java:388) 23:03:21 at java.base/java.lang.invoke.LambdaFormEditor.spreadArgumentsForm(LambdaFormEditor.java:624) 23:03:21 at java.base/java.lang.invoke.MethodHandle.asSpreader(MethodHandle.java:1028) 23:03:21 at java.base/java.lang.invoke.MethodHandle.asSpreader(MethodHandle.java:982) 23:03:21 at java.base/java.lang.invoke.Invokers.spreadInvoker(Invokers.java:225) 23:03:21 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:732) 23:03:21 at test.java.lang.invoke.MethodHandlesAsCollectorTest.testAsCollector(MethodHandlesAsCollectorTest.java:88) 23:03:21 at test.java.lang.invoke.MethodHandlesAsCollectorTest.testAsCollector0(MethodHandlesAsCollectorTest.java:65) 23:03:21 at test.java.lang.invoke.MethodHandlesAsCollectorTest$$Lambda$25/0x000000002308c650.run(Unknown Source) 23:03:21 at test.java.lang.invoke.lib.CodeCacheOverflowProcessor$$Lambda$26/0x000000002308d028.run(Unknown Source) 23:03:21 at jdk.test.lib.Utils.filterException(Utils.java:624) 23:03:21 at test.java.lang.invoke.lib.CodeCacheOverflowProcessor.runMHTest(CodeCacheOverflowProcessor.java:71) 23:03:21 at test.java.lang.invoke.MethodHandlesAsCollectorTest.testAsCollector(MethodHandlesAsCollectorTest.java:49)
non_priority
java lang invoke methodhandlesascollectortest java testascollector null assertionerror jdk lang xdump system none xdump heap none xdump system events gpf abort traceassert corruptcache xx jitservertechpreviewmessage xx usecompressedoops java lang invoke methodhandlesascollectortest java javatest message junit failure testascollector test java lang invoke methodhandlesascollectortest null java lang assertionerror at java base java lang invoke lambdaformbuffer insertexpression lambdaformbuffer java at java base java lang invoke lambdaformeditor spreadargumentsform lambdaformeditor java at java base java lang invoke methodhandle asspreader methodhandle java at java base java lang invoke methodhandle asspreader methodhandle java at java base java lang invoke invokers spreadinvoker invokers java at java base java lang invoke methodhandle invokewitharguments methodhandle java at test java lang invoke methodhandlesascollectortest testascollector methodhandlesascollectortest java at test java lang invoke methodhandlesascollectortest methodhandlesascollectortest java at test java lang invoke methodhandlesascollectortest lambda run unknown source at test java lang invoke lib codecacheoverflowprocessor lambda run unknown source at jdk test lib utils filterexception utils java at test java lang invoke lib codecacheoverflowprocessor runmhtest codecacheoverflowprocessor java at test java lang invoke methodhandlesascollectortest testascollector methodhandlesascollectortest java
0
124,631
16,619,923,052
IssuesEvent
2021-06-02 22:25:46
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
closed
Move type:repo below type:symbol
design refresh team/search-product
I've inadvertently elevated the most problematic type command above its general stature. (It used to be the tab after symbol). To hopefully reduce instances of this item's misuse, move the filter below type:symbol.
1.0
Move type:repo below type:symbol - I've inadvertently elevated the most problematic type command above its general stature. (It used to be the tab after symbol). To hopefully reduce instances of this item's misuse, move the filter below type:symbol.
non_priority
move type repo below type symbol i ve inadvertently elevated the most problematic type command above its general stature it used to be the tab after symbol to hopefully reduce instances of this item s misuse move the filter below type symbol
0
721,421
24,826,022,506
IssuesEvent
2022-10-25 20:43:07
googleapis/google-cloud-java
https://api.github.com/repos/googleapis/google-cloud-java
closed
Flaky or broken dialogflow tests
type: bug priority: p2
https://source.cloud.google.com/results/invocations/2bc9384a-a547-427f-8dd7-4a1f987c52cb/targets ``` [INFO] Running com.google.cloud.dialogflow.v2.it.ITSystemTest [ERROR] Tests run: 9, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 1.834 s <<< FAILURE! - in com.google.cloud.dialogflow.v2.it.ITSystemTest [ERROR] com.google.cloud.dialogflow.v2.it.ITSystemTest.getAgentTest Time elapsed: 0.17 s <<< FAILURE! org.junit.ComparisonFailure: expected:<...gle-cloud-java-tests[]> but was:<...gle-cloud-java-tests[-3b2d2e]> at org.junit.Assert.assertEquals(Assert.java:117) at org.junit.Assert.assertEquals(Assert.java:146) at com.google.cloud.dialogflow.v2.it.ITSystemTest.getAgentTest(ITSystemTest.java:203) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581) [ERROR] com.google.cloud.dialogflow.v2.it.ITSystemTest.searchAgentsTest Time elapsed: 0.117 s <<< FAILURE! 
org.junit.ComparisonFailure: expected:<...gle-cloud-java-tests[]> but was:<...gle-cloud-java-tests[-3b2d2e]> at org.junit.Assert.assertEquals(Assert.java:117) at org.junit.Assert.assertEquals(Assert.java:146) at com.google.cloud.dialogflow.v2.it.ITSystemTest.searchAgentsTest(ITSystemTest.java:188) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581) [INFO] [INFO] Results: [INFO] [ERROR] Failures: [ERROR] ITSystemTest.getAgentTest:203 expected:<...gle-cloud-java-tests[]> but was:<...gle-cloud-java-tests[-3b2d2e]> [ERROR] ITSystemTest.searchAgentsTest:188 expected:<...gle-cloud-java-tests[]> but was:<...gle-cloud-java-tests[-3b2d2e]> [INFO] [ERROR] Tests run: 10, Failures: 2, Errors: 0, Skipped: 0 [INFO] ```
1.0
Flaky or broken dialogflow tests - https://source.cloud.google.com/results/invocations/2bc9384a-a547-427f-8dd7-4a1f987c52cb/targets ``` [INFO] Running com.google.cloud.dialogflow.v2.it.ITSystemTest [ERROR] Tests run: 9, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 1.834 s <<< FAILURE! - in com.google.cloud.dialogflow.v2.it.ITSystemTest [ERROR] com.google.cloud.dialogflow.v2.it.ITSystemTest.getAgentTest Time elapsed: 0.17 s <<< FAILURE! org.junit.ComparisonFailure: expected:<...gle-cloud-java-tests[]> but was:<...gle-cloud-java-tests[-3b2d2e]> at org.junit.Assert.assertEquals(Assert.java:117) at org.junit.Assert.assertEquals(Assert.java:146) at com.google.cloud.dialogflow.v2.it.ITSystemTest.getAgentTest(ITSystemTest.java:203) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581) [ERROR] com.google.cloud.dialogflow.v2.it.ITSystemTest.searchAgentsTest Time 
elapsed: 0.117 s <<< FAILURE! org.junit.ComparisonFailure: expected:<...gle-cloud-java-tests[]> but was:<...gle-cloud-java-tests[-3b2d2e]> at org.junit.Assert.assertEquals(Assert.java:117) at org.junit.Assert.assertEquals(Assert.java:146) at com.google.cloud.dialogflow.v2.it.ITSystemTest.searchAgentsTest(ITSystemTest.java:188) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87) at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75) at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581) [INFO] [INFO] Results: [INFO] [ERROR] Failures: [ERROR] ITSystemTest.getAgentTest:203 expected:<...gle-cloud-java-tests[]> but was:<...gle-cloud-java-tests[-3b2d2e]> [ERROR] ITSystemTest.searchAgentsTest:188 expected:<...gle-cloud-java-tests[]> but was:<...gle-cloud-java-tests[-3b2d2e]> [INFO] [ERROR] Tests run: 10, Failures: 2, Errors: 0, Skipped: 0 [INFO] ```
priority
flaky or broken dialogflow tests running com google cloud dialogflow it itsystemtest tests run failures errors skipped time elapsed s failure in com google cloud dialogflow it itsystemtest com google cloud dialogflow it itsystemtest getagenttest time elapsed s failure org junit comparisonfailure expected but was at org junit assert assertequals assert java at org junit assert assertequals assert java at com google cloud dialogflow it itsystemtest getagenttest itsystemtest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit 
runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executelazy junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java com google cloud dialogflow it itsystemtest searchagentstest time elapsed s failure org junit comparisonfailure expected but was at org junit assert assertequals assert java at org junit assert assertequals assert java at com google cloud dialogflow it itsystemtest searchagentstest itsystemtest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit 
internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executelazy junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire 
junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java results failures itsystemtest getagenttest expected but was itsystemtest searchagentstest expected but was tests run failures errors skipped
1
423,917
12,303,697,658
IssuesEvent
2020-05-11 19:09:31
qutebrowser/qutebrowser
https://api.github.com/repos/qutebrowser/qutebrowser
closed
qutebrowser doesn't close properly with qt5-webkit 5.212.0alpha3
bug: segfault/crash/hang component: QtWebKit priority: 1 - middle
``` $ qutebrowser --temp-basedir 15:33:38 WARNING: /usr/lib/python3.7/subprocess.py:858: ResourceWarning: subprocess 1936 is still running ResourceWarning, source=self) $ ASSERTION FAILED: m_cacheDirectory.isNull() /build/qt5-webkit/src/Source/WebCore/loader/appcache/ApplicationCacheStorage.cpp(369) : void WebCore::ApplicationCacheStorage::setCacheDirectory(const WTF::String&) 1 0x7fb4dcfe7a98 /usr/lib/libQt5WebKit.so.5(WTFCrash+0x18) [0x7fb4dcfe7a98] 2 0x7fb4dd2c192d /usr/lib/libQt5WebKit.so.5(_ZN7WebCore23ApplicationCacheStorage17setCacheDirectoryERKN3WTF6StringE+0x6d) [0x7fb4dd2c192d] 3 0x7fb4dc475566 /usr/lib/libQt5WebKit.so.5(_ZN12QWebSettings33setOfflineWebApplicationCachePathERK7QString+0x46) [0x7fb4dc475566] 4 0x7fb4e09106c3 /usr/lib/python3.7/site-packages/PyQt5/QtWebKit.so(+0x196c3) [0x7fb4e09106c3] 5 0x7fb4e78e8252 /usr/lib/libpython3.7m.so.1.0(_PyMethodDef_RawFastCallKeywords+0x262) [0x7fb4e78e8252] 6 0x7fb4e78e8394 /usr/lib/libpython3.7m.so.1.0(_PyCFunction_FastCallKeywords+0x24) [0x7fb4e78e8394] 7 0x7fb4e7920d4c /usr/lib/libpython3.7m.so.1.0(+0x143d4c) [0x7fb4e7920d4c] 8 0x7fb4e79626b2 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x5252) [0x7fb4e79626b2] 9 0x7fb4e790dc03 /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallKeywords+0x103) [0x7fb4e790dc03] 10 0x7fb4e7920c30 /usr/lib/libpython3.7m.so.1.0(+0x143c30) [0x7fb4e7920c30] 11 0x7fb4e79626b2 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x5252) [0x7fb4e79626b2] 12 0x7fb4e790dc03 /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallKeywords+0x103) [0x7fb4e790dc03] 13 0x7fb4e7920c30 /usr/lib/libpython3.7m.so.1.0(+0x143c30) [0x7fb4e7920c30] 14 0x7fb4e79626b2 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x5252) [0x7fb4e79626b2] 15 0x7fb4e790cd18 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalCodeWithName+0x2f8) [0x7fb4e790cd18] 16 0x7fb4e790dda3 /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallKeywords+0x2a3) [0x7fb4e790dda3] 17 0x7fb4e7920c30 
/usr/lib/libpython3.7m.so.1.0(+0x143c30) [0x7fb4e7920c30] 18 0x7fb4e795eb96 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x1736) [0x7fb4e795eb96] 19 0x7fb4e790cd18 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalCodeWithName+0x2f8) [0x7fb4e790cd18] 20 0x7fb4e790dda3 /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallKeywords+0x2a3) [0x7fb4e790dda3] 21 0x7fb4e7920c30 /usr/lib/libpython3.7m.so.1.0(+0x143c30) [0x7fb4e7920c30] 22 0x7fb4e795eb96 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x1736) [0x7fb4e795eb96] 23 0x7fb4e790e27b /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallDict+0x11b) [0x7fb4e790e27b] 24 0x7fb4e78dc3d8 /usr/lib/libpython3.7m.so.1.0(_PyObject_Call_Prepend+0x68) [0x7fb4e78dc3d8] 25 0x7fb4e790e98e /usr/lib/libpython3.7m.so.1.0(PyObject_Call+0x7e) [0x7fb4e790e98e] 26 0x7fb4e6256731 /usr/lib/python3.7/site-packages/PyQt5/QtCore.so(+0x1fe731) [0x7fb4e6256731] 27 0x7fb4e6256c01 /usr/lib/python3.7/site-packages/PyQt5/QtCore.so(+0x1fec01) [0x7fb4e6256c01] 28 0x7fb4e6256ef1 /usr/lib/python3.7/site-packages/PyQt5/QtCore.so(+0x1feef1) [0x7fb4e6256ef1] 29 0x7fb4e62579c0 /usr/lib/python3.7/site-packages/PyQt5/QtCore.so(+0x1ff9c0) [0x7fb4e62579c0] 30 0x7fb4e5df1acd /usr/lib/libQt5Core.so.5(_ZN11QMetaObject8activateEP7QObjectiiPPv+0x87d) [0x7fb4e5df1acd] 31 0x7fb4e155ffba /usr/lib/libQt5Widgets.so.5(_ZN14QWidgetPrivate12close_helperENS_9CloseModeE+0x36a) [0x7fb4e155ffba] ``` I'm using the latest commit.
1.0
qutebrowser doesn't close properly with qt5-webkit 5.212.0alpha3 - ``` $ qutebrowser --temp-basedir 15:33:38 WARNING: /usr/lib/python3.7/subprocess.py:858: ResourceWarning: subprocess 1936 is still running ResourceWarning, source=self) $ ASSERTION FAILED: m_cacheDirectory.isNull() /build/qt5-webkit/src/Source/WebCore/loader/appcache/ApplicationCacheStorage.cpp(369) : void WebCore::ApplicationCacheStorage::setCacheDirectory(const WTF::String&) 1 0x7fb4dcfe7a98 /usr/lib/libQt5WebKit.so.5(WTFCrash+0x18) [0x7fb4dcfe7a98] 2 0x7fb4dd2c192d /usr/lib/libQt5WebKit.so.5(_ZN7WebCore23ApplicationCacheStorage17setCacheDirectoryERKN3WTF6StringE+0x6d) [0x7fb4dd2c192d] 3 0x7fb4dc475566 /usr/lib/libQt5WebKit.so.5(_ZN12QWebSettings33setOfflineWebApplicationCachePathERK7QString+0x46) [0x7fb4dc475566] 4 0x7fb4e09106c3 /usr/lib/python3.7/site-packages/PyQt5/QtWebKit.so(+0x196c3) [0x7fb4e09106c3] 5 0x7fb4e78e8252 /usr/lib/libpython3.7m.so.1.0(_PyMethodDef_RawFastCallKeywords+0x262) [0x7fb4e78e8252] 6 0x7fb4e78e8394 /usr/lib/libpython3.7m.so.1.0(_PyCFunction_FastCallKeywords+0x24) [0x7fb4e78e8394] 7 0x7fb4e7920d4c /usr/lib/libpython3.7m.so.1.0(+0x143d4c) [0x7fb4e7920d4c] 8 0x7fb4e79626b2 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x5252) [0x7fb4e79626b2] 9 0x7fb4e790dc03 /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallKeywords+0x103) [0x7fb4e790dc03] 10 0x7fb4e7920c30 /usr/lib/libpython3.7m.so.1.0(+0x143c30) [0x7fb4e7920c30] 11 0x7fb4e79626b2 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x5252) [0x7fb4e79626b2] 12 0x7fb4e790dc03 /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallKeywords+0x103) [0x7fb4e790dc03] 13 0x7fb4e7920c30 /usr/lib/libpython3.7m.so.1.0(+0x143c30) [0x7fb4e7920c30] 14 0x7fb4e79626b2 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x5252) [0x7fb4e79626b2] 15 0x7fb4e790cd18 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalCodeWithName+0x2f8) [0x7fb4e790cd18] 16 0x7fb4e790dda3 /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallKeywords+0x2a3) 
[0x7fb4e790dda3] 17 0x7fb4e7920c30 /usr/lib/libpython3.7m.so.1.0(+0x143c30) [0x7fb4e7920c30] 18 0x7fb4e795eb96 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x1736) [0x7fb4e795eb96] 19 0x7fb4e790cd18 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalCodeWithName+0x2f8) [0x7fb4e790cd18] 20 0x7fb4e790dda3 /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallKeywords+0x2a3) [0x7fb4e790dda3] 21 0x7fb4e7920c30 /usr/lib/libpython3.7m.so.1.0(+0x143c30) [0x7fb4e7920c30] 22 0x7fb4e795eb96 /usr/lib/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x1736) [0x7fb4e795eb96] 23 0x7fb4e790e27b /usr/lib/libpython3.7m.so.1.0(_PyFunction_FastCallDict+0x11b) [0x7fb4e790e27b] 24 0x7fb4e78dc3d8 /usr/lib/libpython3.7m.so.1.0(_PyObject_Call_Prepend+0x68) [0x7fb4e78dc3d8] 25 0x7fb4e790e98e /usr/lib/libpython3.7m.so.1.0(PyObject_Call+0x7e) [0x7fb4e790e98e] 26 0x7fb4e6256731 /usr/lib/python3.7/site-packages/PyQt5/QtCore.so(+0x1fe731) [0x7fb4e6256731] 27 0x7fb4e6256c01 /usr/lib/python3.7/site-packages/PyQt5/QtCore.so(+0x1fec01) [0x7fb4e6256c01] 28 0x7fb4e6256ef1 /usr/lib/python3.7/site-packages/PyQt5/QtCore.so(+0x1feef1) [0x7fb4e6256ef1] 29 0x7fb4e62579c0 /usr/lib/python3.7/site-packages/PyQt5/QtCore.so(+0x1ff9c0) [0x7fb4e62579c0] 30 0x7fb4e5df1acd /usr/lib/libQt5Core.so.5(_ZN11QMetaObject8activateEP7QObjectiiPPv+0x87d) [0x7fb4e5df1acd] 31 0x7fb4e155ffba /usr/lib/libQt5Widgets.so.5(_ZN14QWidgetPrivate12close_helperENS_9CloseModeE+0x36a) [0x7fb4e155ffba] ``` I'm using the latest commit.
priority
qutebrowser doesn t close properly with webkit qutebrowser temp basedir warning usr lib subprocess py resourcewarning subprocess is still running resourcewarning source self assertion failed m cachedirectory isnull build webkit src source webcore loader appcache applicationcachestorage cpp void webcore applicationcachestorage setcachedirectory const wtf string usr lib so wtfcrash usr lib so usr lib so usr lib site packages qtwebkit so usr lib so pymethoddef rawfastcallkeywords usr lib so pycfunction fastcallkeywords usr lib so usr lib so pyeval evalframedefault usr lib so pyfunction fastcallkeywords usr lib so usr lib so pyeval evalframedefault usr lib so pyfunction fastcallkeywords usr lib so usr lib so pyeval evalframedefault usr lib so pyeval evalcodewithname usr lib so pyfunction fastcallkeywords usr lib so usr lib so pyeval evalframedefault usr lib so pyeval evalcodewithname usr lib so pyfunction fastcallkeywords usr lib so usr lib so pyeval evalframedefault usr lib so pyfunction fastcalldict usr lib so pyobject call prepend usr lib so pyobject call usr lib site packages qtcore so usr lib site packages qtcore so usr lib site packages qtcore so usr lib site packages qtcore so usr lib so usr lib so helperens i m using the latest commit
1
215,052
24,126,414,993
IssuesEvent
2022-09-21 01:08:09
DavidSpek/pipelines
https://api.github.com/repos/DavidSpek/pipelines
opened
CVE-2022-35997 (Medium) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl
security vulnerability
## CVE-2022-35997 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: /contrib/components/openvino/ovms-deployer/containers/requirements.txt</p> <p>Path to vulnerable library: /contrib/components/openvino/ovms-deployer/containers/requirements.txt,/samples/core/ai_platform/training</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/DavidSpek/pipelines/commit/6f7433f006e282c4f25441e7502b80d73751e38f">6f7433f006e282c4f25441e7502b80d73751e38f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an open source platform for machine learning. If `tf.sparse.cross` receives an input `separator` that is not a scalar, it gives a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 83dcb4dbfa094e33db084e97c4d0531a559e0ebf. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. 
There are no known workarounds for this issue. <p>Publish Date: 2022-09-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35997>CVE-2022-35997</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p7hr-f446-x6qf">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p7hr-f446-x6qf</a></p> <p>Release Date: 2022-09-16</p> <p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-35997 (Medium) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2022-35997 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: /contrib/components/openvino/ovms-deployer/containers/requirements.txt</p> <p>Path to vulnerable library: /contrib/components/openvino/ovms-deployer/containers/requirements.txt,/samples/core/ai_platform/training</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/DavidSpek/pipelines/commit/6f7433f006e282c4f25441e7502b80d73751e38f">6f7433f006e282c4f25441e7502b80d73751e38f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an open source platform for machine learning. If `tf.sparse.cross` receives an input `separator` that is not a scalar, it gives a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 83dcb4dbfa094e33db084e97c4d0531a559e0ebf. The fix will be included in TensorFlow 2.10.0. 
We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue. <p>Publish Date: 2022-09-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35997>CVE-2022-35997</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p7hr-f446-x6qf">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-p7hr-f446-x6qf</a></p> <p>Release Date: 2022-09-16</p> <p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file contrib components openvino ovms deployer containers requirements txt path to vulnerable library contrib components openvino ovms deployer containers requirements txt samples core ai platform training dependency hierarchy x tensorflow whl vulnerable library found in head commit a href found in base branch master vulnerability details tensorflow is an open source platform for machine learning if tf sparse cross receives an input separator that is not a scalar it gives a check fail that can be used to trigger a denial of service attack we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range there are no known workarounds for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend
0
573,088
17,023,587,610
IssuesEvent
2021-07-03 02:47:57
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Option to hide bots from history
Component: website Priority: major Resolution: duplicate Type: enhancement
**[Submitted to the original trac issue database at 2.57pm, Monday, 10th May 2010]** The history tab urgently needs an option to hide bot-made edits. The list is usually littered with changesets by xybot, which could be filtered out by having the option of hiding changesets with bot=yes.
1.0
Option to hide bots from history - **[Submitted to the original trac issue database at 2.57pm, Monday, 10th May 2010]** The history tab urgently needs an option to hide bot-made edits. The list is usually littered with changesets by xybot, which could be filtered out by having the option of hiding changesets with bot=yes.
priority
option to hide bots from history the history tab urgently needs an option to hide bot made edits the list is usually littered with changesets by xybot which could be filtered out by having the option of hiding changesets with bot yes
1
200,837
7,017,339,052
IssuesEvent
2017-12-21 09:15:46
OpenNebula/one
https://api.github.com/repos/OpenNebula/one
closed
Make ipset size configurable
Category: Drivers - Network Priority: High Sponsored Status: Accepted Type: Feature
# Enhancement Request ## Description ipset default size is 65536, which is not adequate for security group definitions. Affected line 149 in security_groups_iptables.rb: cmds.add :ipset, "create #{set} hash:net,port family #{family} maxattrs 1000000" ## Use case Users specifying a large port range and multiple IPs in their security groups. ## Interface Changes Make an option available in OpenNebulaNetwork.conf # Progress Status - [x] Branch created - [x] Code committed to development branch - [x] Testing - QA - [x] Documentation - [x] Release notes - resolved issues, compatibility, known issues - [x] Code committed to upstream release/hotfix branches - [x] Documentation committed to upstream release/hotfix branches
1.0
Make ipset size configurable - # Enhancement Request ## Description ipset default size is 65536, which is not adequate for security group definitions. Affected line 149 in security_groups_iptables.rb: cmds.add :ipset, "create #{set} hash:net,port family #{family} maxattrs 1000000" ## Use case Users specifying a large port range and multiple IPs in their security groups. ## Interface Changes Make an option available in OpenNebulaNetwork.conf # Progress Status - [x] Branch created - [x] Code committed to development branch - [x] Testing - QA - [x] Documentation - [x] Release notes - resolved issues, compatibility, known issues - [x] Code committed to upstream release/hotfix branches - [x] Documentation committed to upstream release/hotfix branches
priority
make ipset size configurable enhancement request description ipset default size is which is not adequate for security group definitions affected line in security groups iptables rb cmds add ipset create set hash net port family family maxattrs use case users specifying a large port range and multiple ips in their security groups interface changes make an option available in opennebulanetwork conf progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches
1
237,923
19,684,394,179
IssuesEvent
2022-01-11 20:17:29
denoland/deno
https://api.github.com/repos/denoland/deno
closed
🐛 `deno coverage` line and branch counts are incorrect
bug testing
I'm getting some broken coverage results. For the code: ```js export function f(b: boolean) { var result; if (b) { result = true; } else { result = false; } return result; } ``` as a Deno script (from the repo https://github.com/rivy-t/cover-deno): ```shell C:>deno test --unstable -A --coverage=coverage && deno coverage --unstable coverage --lcov ... SF:C:\Users\Roy\OneDrive\Projects\deno\dexter\repo.t-cover-deno\src\mod.ts FN:2,f FNDA:4,f FNF:1 FNH:1 BRDA:4,1,0,2 BRF:1 BRH:1 DA:1,2 DA:2,6 DA:3,6 DA:4,8 DA:5,8 DA:6,8 DA:7,6 DA:8,6 DA:9,2 LH:9 LF:9 end_of_record ``` vs the same code as ESM, using nyc/ava (from https://github.com/rivy-t/cover-esm): ```shell C:>npm i --silent && npx nyc --silent ava && npx nyc report --reporter=text-lcov ... TN: SF:src\esm.js FN:1,f FNF:1 FNH:1 FNDA:2,f DA:3,2 DA:4,1 DA:6,1 DA:8,2 LF:4 LH:4 BRDA:3,0,0,1 BRDA:3,0,1,1 BRF:2 BRH:2 end_of_record ``` Specifically, the Deno branch coverage is incorrect; the line coverage has odd counts, off-by-one's, and, arguably, counts non-executing lines.
1.0
🐛 `deno coverage` line and branch counts are incorrect - I'm getting some broken coverage results. For the code: ```js export function f(b: boolean) { var result; if (b) { result = true; } else { result = false; } return result; } ``` as a Deno script (from the repo https://github.com/rivy-t/cover-deno): ```shell C:>deno test --unstable -A --coverage=coverage && deno coverage --unstable coverage --lcov ... SF:C:\Users\Roy\OneDrive\Projects\deno\dexter\repo.t-cover-deno\src\mod.ts FN:2,f FNDA:4,f FNF:1 FNH:1 BRDA:4,1,0,2 BRF:1 BRH:1 DA:1,2 DA:2,6 DA:3,6 DA:4,8 DA:5,8 DA:6,8 DA:7,6 DA:8,6 DA:9,2 LH:9 LF:9 end_of_record ``` vs the same code as ESM, using nyc/ava (from https://github.com/rivy-t/cover-esm): ```shell C:>npm i --silent && npx nyc --silent ava && npx nyc report --reporter=text-lcov ... TN: SF:src\esm.js FN:1,f FNF:1 FNH:1 FNDA:2,f DA:3,2 DA:4,1 DA:6,1 DA:8,2 LF:4 LH:4 BRDA:3,0,0,1 BRDA:3,0,1,1 BRF:2 BRH:2 end_of_record ``` Specifically, the Deno branch coverage is incorrect; the line coverage has odd counts, off-by-one's, and, arguably, counts non-executing lines.
non_priority
🐛 deno coverage line and branch counts are incorrect i m getting some broken coverage results for the code js export function f b boolean var result if b result true else result false return result as a deno script from the repo shell c deno test unstable a coverage coverage deno coverage unstable coverage lcov sf c users roy onedrive projects deno dexter repo t cover deno src mod ts fn f fnda f fnf fnh brda brf brh da da da da da da da da da lh lf end of record vs the same code as esm using nyc ava from shell c npm i silent npx nyc silent ava npx nyc report reporter text lcov tn sf src esm js fn f fnf fnh fnda f da da da da lf lh brda brda brf brh end of record specifically the deno branch coverage is incorrect the line coverage has odd counts off by one s and arguably counts non executing lines
0
117,410
9,934,628,904
IssuesEvent
2019-07-02 14:50:47
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Error while trying to rotate certificates on a cluster with one service certificate rotated
[zube]: To Test area/ui kind/bug-qa team/az
**What kind of request is this (question/bug/enhancement/feature request):** bug **Steps to reproduce (least amount of steps as possible):** 1. Have a cluster(DO cluster-3 nodes) in v2.2.4 - with custom image, giving 30 minutes of expiry for the cluster. 2. Upgrade to v2.2-head with UI tag `2.2.81`. 3. Warning shown on the UI - `This cluster has certs expiring in 30 days or less. Please rotate your certificates now.` 4. Click on rotate from the warning message 5. Select an individual service to rotate - say - etcd service 6. Click on rotate. 7. After the cluster has finished updating, click on rotate from the warning message. **Expected Result:** User should be able to perform Rotate Certificates action **Actual Result:** The UI is stuck, on the console - `Uncaught TypeError: Cannot read property 'daysUntil' of undefined` is seen. <img width="1915" alt="Screen Shot 2019-06-27 at 10 07 26 AM" src="https://user-images.githubusercontent.com/26032343/60287119-fc46fa00-98c5-11e9-9e75-7a7ea91de790.png"> <img width="1912" alt="Screen Shot 2019-06-27 at 10 04 33 AM" src="https://user-images.githubusercontent.com/26032343/60287175-18e33200-98c6-11e9-990a-f9d0cad6ff11.png"> **Other details that may be helpful:** **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): rancher:v2.2-head with UI tag `2.2.81` - Installation option (single install/HA): single <!-- If the reported issue is regarding a created cluster, please provide requested info below --> **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): DO - Kubernetes version (use `kubectl version`): ``` 1.13 ```
1.0
Error while trying to rotate certificates on a cluster with one service certificate rotated - **What kind of request is this (question/bug/enhancement/feature request):** bug **Steps to reproduce (least amount of steps as possible):** 1. Have a cluster(DO cluster-3 nodes) in v2.2.4 - with custom image, giving 30 minutes of expiry for the cluster. 2. Upgrade to v2.2-head with UI tag `2.2.81`. 3. Warning shown on the UI - `This cluster has certs expiring in 30 days or less. Please rotate your certificates now.` 4. Click on rotate from the warning message 5. Select an individual service to rotate - say - etcd service 6. Click on rotate. 7. After the cluster has finished updating, click on rotate from the warning message. **Expected Result:** User should be able to perform Rotate Certificates action **Actual Result:** The UI is stuck, on the console - `Uncaught TypeError: Cannot read property 'daysUntil' of undefined` is seen. <img width="1915" alt="Screen Shot 2019-06-27 at 10 07 26 AM" src="https://user-images.githubusercontent.com/26032343/60287119-fc46fa00-98c5-11e9-9e75-7a7ea91de790.png"> <img width="1912" alt="Screen Shot 2019-06-27 at 10 04 33 AM" src="https://user-images.githubusercontent.com/26032343/60287175-18e33200-98c6-11e9-990a-f9d0cad6ff11.png"> **Other details that may be helpful:** **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): rancher:v2.2-head with UI tag `2.2.81` - Installation option (single install/HA): single <!-- If the reported issue is regarding a created cluster, please provide requested info below --> **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): DO - Kubernetes version (use `kubectl version`): ``` 1.13 ```
non_priority
error while trying to rotate certificates on a cluster with one service certificate rotated what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible have a cluster do cluster nodes in with custom image giving minutes of expiry for the cluster upgrade to head with ui tag warning shown on the ui this cluster has certs expiring in days or less please rotate your certificates now click on rotate from the warning message select an individual service to rotate say etcd service click on rotate after the cluster has finished updating click on rotate from the warning message expected result user should be able to perform rotate certificates action actual result the ui is stuck on the console uncaught typeerror cannot read property daysuntil of undefined is seen img width alt screen shot at am src img width alt screen shot at am src other details that may be helpful environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui rancher head with ui tag installation option single install ha single if the reported issue is regarding a created cluster please provide requested info below cluster information cluster type hosted infrastructure provider custom imported do kubernetes version use kubectl version
0
19,114
11,148,579,688
IssuesEvent
2019-12-23 15:56:05
microsoft/botbuilder-dotnet
https://api.github.com/repos/microsoft/botbuilder-dotnet
closed
Microsoft BOT Framework BOT is stopped working intermittently
Bot Services customer-replied-to customer-reported
I am facing intermittent issue in BOT. I have two bots which are written using c# and BOT Framework V4 and both are same and are hosted on IIS. Also i have hosted two WebChat Node.js applications and both are same. I have two different Azure registration for two bots. One bot is constantly throwing issue and it stops working intermittently. Then we have to recycle its application pool to make it start again. Server : Windows Server 2012 R2 IIS : 8.5.9600 BOT Framework V4 SDK version : 4.3.2.0 The error is given below, ``` fail: EnterpriseTestBot.Startup[0] Exception Occured :-> The operation was canceled.----- at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken) at System.Threading.Tasks.ValueTask`1.get_Result() at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Threading.Tasks.ValueTask`1.get_Result() at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask) at System.Threading.Tasks.ValueTask`1.get_Result() at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken) at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts) at Microsoft.Bot.Connector.Conversations.ReplyToActivityWithHttpMessagesAsync(String conversationId, String activityId, Activity activity, Dictionary`2 customHeaders, CancellationToken cancellationToken) at Microsoft.Bot.Connector.ConversationsExtensions.ReplyToActivityAsync(IConversations operations, String conversationId, String activityId, Activity activity, CancellationToken cancellationToken) at Microsoft.Bot.Builder.BotFrameworkAdapter.SendActivitiesAsync(ITurnContext turnContext, Activity[] activities, CancellationToken cancellationToken) at Microsoft.Bot.Builder.TurnContext.<>c__DisplayClass22_0.<<SendActivitiesAsync>g__SendActivitiesThroughAdapter|1>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at Microsoft.Bot.Builder.TurnContext.SendActivityAsync(IActivity activity, CancellationToken cancellationToken) at Microsoft.Bot.Builder.TemplateManager.TemplateManager.ReplyWith(ITurnContext turnContext, String templateId, Object data) at EnterpriseTestBot.Dialogs.Main.MainDialog.OnStartAsync(DialogContext dc, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\Dialogs\Main\MainDialog.cs:line 101 at EnterpriseTestBot.Dialogs.Shared.RouterDialog.OnContinueDialogAsync(DialogContext innerDc, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\Dialogs\Shared\RouterDialog.cs:line 35 at Microsoft.Bot.Builder.Dialogs.ComponentDialog.BeginDialogAsync(DialogContext outerDc, Object options, CancellationToken cancellationToken) at Microsoft.Bot.Builder.Dialogs.DialogContext.BeginDialogAsync(String dialogId, Object options, CancellationToken cancellationToken) at EnterpriseTestBot.EnterpriseTestBot.OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\EnterpriseTestBot.cs:line 96 at EnterpriseTestBot.Middleware.GetSetUserDataMiddleware.OnTurnAsync(ITurnContext turnContext, NextDelegate next, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\Middleware\GetSetUserDataMiddleware.cs:line 98 at Microsoft.Bot.Builder.AutoSaveStateMiddleware.OnTurnAsync(ITurnContext turnContext, NextDelegate next, CancellationToken cancellationToken) at EnterpriseTestBot.Middleware.SetLocaleMiddleware.OnTurnAsync(ITurnContext context, NextDelegate next, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\Middleware\SetLocaleMiddleware.cs:line 49 at Microsoft.Bot.Builder.MiddlewareSet.ReceiveActivityWithStatusAsync(ITurnContext turnContext, BotCallbackHandler callback, CancellationToken cancellationToken) at Microsoft.Bot.Builder.BotAdapter.RunPipelineAsync(ITurnContext turnContext, BotCallbackHandler callback, CancellationToken cancellationToken)-----System.IO.IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request. ---> System.Net.Sockets.SocketException: The I/O operation has been aborted because of either a thread exit or an application request --- End of inner exception stack trace --- at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error) at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.GetResult(Int16 token) at System.Net.Http.HttpConnection.ReadBufferedAsyncCore(Memory`1 destination) at System.Net.Http.HttpConnection.RawConnectionStream.ReadAsync(Memory`1 buffer, CancellationToken cancellationToken) at System.Net.FixedSizeReader.ReadPacketAsync(Stream transport, AsyncProtocolRequest request) at System.Net.Security.SslState.ThrowIfExceptional() at System.Net.Security.SslState.InternalEndProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.Security.SslState.EndProcessAuthentication(IAsyncResult result) at System.Net.Security.SslStream.EndAuthenticateAsClient(IAsyncResult asyncResult) at System.Net.Security.SslStream.<>c.<AuthenticateAsClientAsync>b__47_1(IAsyncResult iar) at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization) --- End of stack trace from previous location where exception was thrown --- at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken) info: Microsoft.Bot.Builder.Integration.IAdapterIntegration[0] Sending activity. ReplyToId: 5Ly5VgcOJHH info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1] Request starting HTTP/1.1 POST http://TestBot.MyCompany.com/ATOM/api/messages application/json; charset=utf-8 377 info: Microsoft.Bot.Builder.Integration.IAdapterIntegration[0] Received an incoming activity. ActivityId: 6WV3KIRaT3y info: Microsoft.Bot.Builder.Integration.IAdapterIntegration[0] Sending activity. ReplyToId: 6WV3KIRaT3y ```
1.0
Microsoft BOT Framework BOT is stopped working intermittently - I am facing intermittent issue in BOT. I have two bots which are written using c# and BOT Framework V4 and both are same and are hosted on IIS. Also i have hosted two WebChat Node.js applications and both are same. I have two different Azure registration for two bots. One bot is constantly throwing issue and it stops working intermittently. Then we have to recycle its application pool to make it start again. Server : Windows Server 2012 R2 IIS : 8.5.9600 BOT Framework V4 SDK version : 4.3.2.0 The error is given below, ``` fail: EnterpriseTestBot.Startup[0] Exception Occured :-> The operation was canceled.----- at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken) at System.Threading.Tasks.ValueTask`1.get_Result() at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Threading.Tasks.ValueTask`1.get_Result() at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask) at System.Threading.Tasks.ValueTask`1.get_Result() at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken) at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts) at Microsoft.Bot.Connector.Conversations.ReplyToActivityWithHttpMessagesAsync(String conversationId, String activityId, Activity activity, Dictionary`2 customHeaders, CancellationToken cancellationToken) at Microsoft.Bot.Connector.ConversationsExtensions.ReplyToActivityAsync(IConversations operations, String conversationId, String activityId, Activity activity, CancellationToken cancellationToken) at Microsoft.Bot.Builder.BotFrameworkAdapter.SendActivitiesAsync(ITurnContext turnContext, Activity[] activities, CancellationToken cancellationToken) at Microsoft.Bot.Builder.TurnContext.<>c__DisplayClass22_0.<<SendActivitiesAsync>g__SendActivitiesThroughAdapter|1>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at Microsoft.Bot.Builder.TurnContext.SendActivityAsync(IActivity activity, CancellationToken cancellationToken) at Microsoft.Bot.Builder.TemplateManager.TemplateManager.ReplyWith(ITurnContext turnContext, String templateId, Object data) at EnterpriseTestBot.Dialogs.Main.MainDialog.OnStartAsync(DialogContext dc, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\Dialogs\Main\MainDialog.cs:line 101 at EnterpriseTestBot.Dialogs.Shared.RouterDialog.OnContinueDialogAsync(DialogContext innerDc, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\Dialogs\Shared\RouterDialog.cs:line 35 at Microsoft.Bot.Builder.Dialogs.ComponentDialog.BeginDialogAsync(DialogContext outerDc, Object options, CancellationToken cancellationToken) at Microsoft.Bot.Builder.Dialogs.DialogContext.BeginDialogAsync(String dialogId, Object options, CancellationToken cancellationToken) at EnterpriseTestBot.EnterpriseTestBot.OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\EnterpriseTestBot.cs:line 96 at EnterpriseTestBot.Middleware.GetSetUserDataMiddleware.OnTurnAsync(ITurnContext turnContext, NextDelegate next, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\Middleware\GetSetUserDataMiddleware.cs:line 98 at Microsoft.Bot.Builder.AutoSaveStateMiddleware.OnTurnAsync(ITurnContext turnContext, NextDelegate next, CancellationToken cancellationToken) at EnterpriseTestBot.Middleware.SetLocaleMiddleware.OnTurnAsync(ITurnContext context, NextDelegate next, CancellationToken cancellationToken) in D:\Test\BOT\Enterprise_BOT\EnterpriseTestBot\Middleware\SetLocaleMiddleware.cs:line 49 at Microsoft.Bot.Builder.MiddlewareSet.ReceiveActivityWithStatusAsync(ITurnContext turnContext, BotCallbackHandler callback, CancellationToken cancellationToken) at Microsoft.Bot.Builder.BotAdapter.RunPipelineAsync(ITurnContext turnContext, BotCallbackHandler callback, CancellationToken cancellationToken)-----System.IO.IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request. ---> System.Net.Sockets.SocketException: The I/O operation has been aborted because of either a thread exit or an application request --- End of inner exception stack trace --- at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error) at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.GetResult(Int16 token) at System.Net.Http.HttpConnection.ReadBufferedAsyncCore(Memory`1 destination) at System.Net.Http.HttpConnection.RawConnectionStream.ReadAsync(Memory`1 buffer, CancellationToken cancellationToken) at System.Net.FixedSizeReader.ReadPacketAsync(Stream transport, AsyncProtocolRequest request) at System.Net.Security.SslState.ThrowIfExceptional() at System.Net.Security.SslState.InternalEndProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.Security.SslState.EndProcessAuthentication(IAsyncResult result) at System.Net.Security.SslStream.EndAuthenticateAsClient(IAsyncResult asyncResult) at System.Net.Security.SslStream.<>c.<AuthenticateAsClientAsync>b__47_1(IAsyncResult iar) at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization) --- End of stack trace from previous location where exception was thrown --- at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken) info: Microsoft.Bot.Builder.Integration.IAdapterIntegration[0] Sending activity. ReplyToId: 5Ly5VgcOJHH info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1] Request starting HTTP/1.1 POST http://TestBot.MyCompany.com/ATOM/api/messages application/json; charset=utf-8 377 info: Microsoft.Bot.Builder.Integration.IAdapterIntegration[0] Received an incoming activity. ActivityId: 6WV3KIRaT3y info: Microsoft.Bot.Builder.Integration.IAdapterIntegration[0] Sending activity. ReplyToId: 6WV3KIRaT3y ```
non_priority
microsoft bot framework bot is stopped working intermittently i am facing intermittent issue in bot i have two bots which are written using c and bot framework and both are same and are hosted on iis also i have hosted two webchat node js applications and both are same i have two different azure registration for two bots one bot is constantly throwing issue and it stops working intermittently then we have to recycle its application pool to make it start again server windows server iis bot framework sdk version the error is given below fail enterprisetestbot startup exception occured the operation was canceled at system net http connecthelper establishsslconnectionasynccore stream stream sslclientauthenticationoptions ssloptions cancellationtoken cancellationtoken at system threading tasks valuetask get result at system net http httpconnectionpool createconnectionasync httprequestmessage request cancellationtoken cancellationtoken at system threading tasks valuetask get result at system net http httpconnectionpool waitforcreatedconnectionasync valuetask creationtask at system threading tasks valuetask get result at system net http httpconnectionpool sendwithretryasync httprequestmessage request boolean dorequestauth cancellationtoken cancellationtoken at system net http redirecthandler sendasync httprequestmessage request cancellationtoken cancellationtoken at system net http httpclient finishsendasyncbuffered task sendtask httprequestmessage request cancellationtokensource cts boolean disposects at microsoft bot connector conversations replytoactivitywithhttpmessagesasync string conversationid string activityid activity activity dictionary customheaders cancellationtoken cancellationtoken at microsoft bot connector conversationsextensions replytoactivityasync iconversations operations string conversationid string activityid activity activity cancellationtoken cancellationtoken at microsoft bot builder botframeworkadapter sendactivitiesasync iturncontext turncontext activity activities cancellationtoken cancellationtoken at microsoft bot builder turncontext c g sendactivitiesthroughadapter d movenext end of stack trace from previous location where exception was thrown at microsoft bot builder turncontext sendactivityasync iactivity activity cancellationtoken cancellationtoken at microsoft bot builder templatemanager templatemanager replywith iturncontext turncontext string templateid object data at enterprisetestbot dialogs main maindialog onstartasync dialogcontext dc cancellationtoken cancellationtoken in d test bot enterprise bot enterprisetestbot dialogs main maindialog cs line at enterprisetestbot dialogs shared routerdialog oncontinuedialogasync dialogcontext innerdc cancellationtoken cancellationtoken in d test bot enterprise bot enterprisetestbot dialogs shared routerdialog cs line at microsoft bot builder dialogs componentdialog begindialogasync dialogcontext outerdc object options cancellationtoken cancellationtoken at microsoft bot builder dialogs dialogcontext begindialogasync string dialogid object options cancellationtoken cancellationtoken at enterprisetestbot enterprisetestbot onturnasync iturncontext turncontext cancellationtoken cancellationtoken in d test bot enterprise bot enterprisetestbot enterprisetestbot cs line at enterprisetestbot middleware getsetuserdatamiddleware onturnasync iturncontext turncontext nextdelegate next cancellationtoken cancellationtoken in d test bot enterprise bot enterprisetestbot middleware getsetuserdatamiddleware cs line at microsoft bot builder autosavestatemiddleware onturnasync iturncontext turncontext nextdelegate next cancellationtoken cancellationtoken at enterprisetestbot middleware setlocalemiddleware onturnasync iturncontext context nextdelegate next cancellationtoken cancellationtoken in d test bot enterprise bot enterprisetestbot middleware setlocalemiddleware cs line at microsoft bot builder middlewareset receiveactivitywithstatusasync iturncontext turncontext botcallbackhandler callback cancellationtoken cancellationtoken at microsoft bot builder botadapter runpipelineasync iturncontext turncontext botcallbackhandler callback cancellationtoken cancellationtoken system io ioexception unable to read data from the transport connection the i o operation has been aborted because of either a thread exit or an application request system net sockets socketexception the i o operation has been aborted because of either a thread exit or an application request end of inner exception stack trace at system net sockets socket awaitablesocketasynceventargs throwexception socketerror error at system net sockets socket awaitablesocketasynceventargs getresult token at system net http httpconnection readbufferedasynccore memory destination at system net http httpconnection rawconnectionstream readasync memory buffer cancellationtoken cancellationtoken at system net fixedsizereader readpacketasync stream transport asyncprotocolrequest request at system net security sslstate throwifexceptional at system net security sslstate internalendprocessauthentication lazyasyncresult lazyresult at system net security sslstate endprocessauthentication iasyncresult result at system net security sslstream endauthenticateasclient iasyncresult asyncresult at system net security sslstream c b iasyncresult iar at system threading tasks taskfactory fromasynccorelogic iasyncresult iar func endfunction action endaction task promise boolean requiressynchronization end of stack trace from previous location where exception was thrown at system net http connecthelper establishsslconnectionasynccore stream stream sslclientauthenticationoptions ssloptions cancellationtoken cancellationtoken info microsoft bot builder integration iadapterintegration sending activity replytoid info microsoft aspnetcore hosting internal webhost request starting http post application json charset utf info microsoft bot builder integration iadapterintegration received an incoming activity activityid info microsoft bot builder integration iadapterintegration sending activity replytoid
0
606,759
18,768,083,653
IssuesEvent
2021-11-06 09:43:38
AY2122S1-CS2103T-W12-3/tp
https://api.github.com/repos/AY2122S1-CS2103T-W12-3/tp
closed
[PE-D] Emails case sensitive
priority.Medium fixed
A really minor nitpick again. Emails are not supposed to be case sensitive. It would make more sense to parse the email into an all lowercase string. ![image.png](https://raw.githubusercontent.com/bryanwee023/ped/main/files/d648efc1-168c-4e3e-94d8-297686c9b57b.png) <!--session: 1635495680379-15d85747-342d-48bd-9fc8-b2cb4d5af0d3--> <!--Version: Desktop v3.4.1--> ------------- Labels: `severity.Low` `type.FeatureFlaw` original: bryanwee023/ped#12
1.0
[PE-D] Emails case sensitive - A really minor nitpick again. Emails are not supposed to be case sensitive. It would make more sense to parse the email into an all lowercase string. ![image.png](https://raw.githubusercontent.com/bryanwee023/ped/main/files/d648efc1-168c-4e3e-94d8-297686c9b57b.png) <!--session: 1635495680379-15d85747-342d-48bd-9fc8-b2cb4d5af0d3--> <!--Version: Desktop v3.4.1--> ------------- Labels: `severity.Low` `type.FeatureFlaw` original: bryanwee023/ped#12
priority
emails case sensitive a really minor nitpick again emails are not supposed to be case sensitive it would make more sense to parse the email into an all lowercase string labels severity low type featureflaw original ped
1
117,388
17,480,327,941
IssuesEvent
2021-08-09 00:07:15
jasonrundell/jasonrundell2020
https://api.github.com/repos/jasonrundell/jasonrundell2020
opened
CVE-2021-32640 (Medium) detected in ws-7.4.5.tgz
security vulnerability
## CVE-2021-32640 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-7.4.5.tgz</b></p></summary> <p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-7.4.5.tgz">https://registry.npmjs.org/ws/-/ws-7.4.5.tgz</a></p> <p>Path to dependency file: jasonrundell2020/package.json</p> <p>Path to vulnerable library: jasonrundell2020/node_modules/ws</p> <p> Dependency Hierarchy: - gatsby-3.10.2.tgz (Root Library) - eslint-plugin-graphql-4.0.0.tgz - graphql-config-3.3.0.tgz - url-loader-6.10.1.tgz - :x: **ws-7.4.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jasonrundell/jasonrundell2020/commit/22fef3aad064f346bcbc2ef303a9f40e73697ea9">22fef3aad064f346bcbc2ef303a9f40e73697ea9</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options. <p>Publish Date: 2021-05-25 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p> <p>Release Date: 2021-05-25</p> <p>Fix Resolution: ws - 7.4.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-32640 (Medium) detected in ws-7.4.5.tgz - ## CVE-2021-32640 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-7.4.5.tgz</b></p></summary> <p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-7.4.5.tgz">https://registry.npmjs.org/ws/-/ws-7.4.5.tgz</a></p> <p>Path to dependency file: jasonrundell2020/package.json</p> <p>Path to vulnerable library: jasonrundell2020/node_modules/ws</p> <p> Dependency Hierarchy: - gatsby-3.10.2.tgz (Root Library) - eslint-plugin-graphql-4.0.0.tgz - graphql-config-3.3.0.tgz - url-loader-6.10.1.tgz - :x: **ws-7.4.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jasonrundell/jasonrundell2020/commit/22fef3aad064f346bcbc2ef303a9f40e73697ea9">22fef3aad064f346bcbc2ef303a9f40e73697ea9</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options. <p>Publish Date: 2021-05-25 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p> <p>Release Date: 2021-05-25</p> <p>Fix Resolution: ws - 7.4.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_priority
cve medium detected in ws tgz cve medium severity vulnerability vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client and server for node js library home page a href path to dependency file package json path to vulnerable library node modules ws dependency hierarchy gatsby tgz root library eslint plugin graphql tgz graphql config tgz url loader tgz x ws tgz vulnerable library found in head commit a href found in base branch main vulnerability details ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ws step up your open source security game with whitesource
0
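The ws record above says the slowdown can be mitigated by bounding request-header size (Node's `--max-http-header-size` / `maxHeaderSize` options). As an illustration of that class of fix, here is a minimal Python sketch of a length guard applied before any parsing of a subprotocol header; the function name and the 8192-byte limit are assumptions for the example, not taken from ws itself:

```python
# Illustrative guard: reject oversized header values before doing any
# expensive parsing, the same class of mitigation the ws advisory
# suggests via Node's --max-http-header-size / maxHeaderSize options.
MAX_HEADER_LEN = 8192  # assumed limit for the sketch, not from ws

def safe_subprotocols(header_value):
    """Return the list of requested subprotocols, or None if the
    header is too large to hand to a parser safely."""
    if len(header_value) > MAX_HEADER_LEN:
        return None  # drop instead of feeding a huge string downstream
    return [p.strip() for p in header_value.split(",") if p.strip()]
```

A well-formed header parses normally (`safe_subprotocols("chat, superchat")` yields `["chat", "superchat"]`), while an oversized one is refused before the parser ever sees it.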
213,272
23,975,410,775
IssuesEvent
2022-09-13 11:10:21
snowdensb/CSPBR
https://api.github.com/repos/snowdensb/CSPBR
reopened
CVE-2022-31129 (High) detected in moment-2.9.0.js
security vulnerability
## CVE-2022-31129 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.9.0.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.9.0/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.9.0/moment.js</a></p> <p>Path to vulnerable library: /public/plugins/daterangepicker/moment.js</p> <p> Dependency Hierarchy: - :x: **moment-2.9.0.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/CSPBR/commit/86334f0df744f6cfa35a438987cbb4d18d65b5c2">86334f0df744f6cfa35a438987cbb4d18d65b5c2</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> moment is a JavaScript date library for parsing, validating, manipulating, and formatting dates. Affected versions of moment were found to use an inefficient parsing algorithm. Specifically using string-to-date parsing in moment (more specifically rfc2822 parsing, which is tried by default) has quadratic (N^2) complexity on specific inputs. Users may notice a noticeable slowdown is observed with inputs above 10k characters. Users who pass user-provided strings without sanity length checks to moment constructor are vulnerable to (Re)DoS attacks. The problem is patched in 2.29.4, the patch can be applied to all affected versions with minimal tweaking. Users are advised to upgrade. Users unable to upgrade should consider limiting date lengths accepted from user input. 
<p>Publish Date: 2022-07-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31129>CVE-2022-31129</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g">https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g</a></p> <p>Release Date: 2022-07-06</p> <p>Fix Resolution: moment - 2.29.4</p> </p> </details> <p></p>
True
CVE-2022-31129 (High) detected in moment-2.9.0.js - ## CVE-2022-31129 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.9.0.js</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.9.0/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.9.0/moment.js</a></p> <p>Path to vulnerable library: /public/plugins/daterangepicker/moment.js</p> <p> Dependency Hierarchy: - :x: **moment-2.9.0.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/CSPBR/commit/86334f0df744f6cfa35a438987cbb4d18d65b5c2">86334f0df744f6cfa35a438987cbb4d18d65b5c2</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> moment is a JavaScript date library for parsing, validating, manipulating, and formatting dates. Affected versions of moment were found to use an inefficient parsing algorithm. Specifically using string-to-date parsing in moment (more specifically rfc2822 parsing, which is tried by default) has quadratic (N^2) complexity on specific inputs. Users may notice a noticeable slowdown is observed with inputs above 10k characters. Users who pass user-provided strings without sanity length checks to moment constructor are vulnerable to (Re)DoS attacks. The problem is patched in 2.29.4, the patch can be applied to all affected versions with minimal tweaking. Users are advised to upgrade. Users unable to upgrade should consider limiting date lengths accepted from user input. 
<p>Publish Date: 2022-07-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31129>CVE-2022-31129</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g">https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g</a></p> <p>Release Date: 2022-07-06</p> <p>Fix Resolution: moment - 2.29.4</p> </p> </details> <p></p>
non_priority
cve high detected in moment js cve high severity vulnerability vulnerable library moment js parse validate manipulate and display dates library home page a href path to vulnerable library public plugins daterangepicker moment js dependency hierarchy x moment js vulnerable library found in head commit a href found in base branch master vulnerability details moment is a javascript date library for parsing validating manipulating and formatting dates affected versions of moment were found to use an inefficient parsing algorithm specifically using string to date parsing in moment more specifically parsing which is tried by default has quadratic n complexity on specific inputs users may notice a noticeable slowdown is observed with inputs above characters users who pass user provided strings without sanity length checks to moment constructor are vulnerable to re dos attacks the problem is patched in the patch can be applied to all affected versions with minimal tweaking users are advised to upgrade users unable to upgrade should consider limiting date lengths accepted from user input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment
0
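The moment.js record above notes that, on versions before 2.29.4, the advised mitigation is a sanity length check on user-supplied date strings before parsing. A minimal Python analogue of that pattern is sketched below; the 64-character limit and the function name are assumptions for illustration — the advisory only says to limit input length:

```python
from datetime import datetime

MAX_DATE_LEN = 64  # assumed sanity limit; the advisory just says "limit lengths"

def parse_date_guarded(text, fmt="%Y-%m-%d"):
    """Parse a user-supplied date string only after a cheap length check,
    mirroring the pre-2.29.4 moment.js mitigation of bounding input size."""
    if len(text) > MAX_DATE_LEN:
        raise ValueError("date string too long")
    return datetime.strptime(text, fmt)
```

The guard costs O(1) and runs before the potentially expensive parse, so a pathological multi-kilobyte input is rejected without exercising the parser at all.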
318,942
9,725,522,746
IssuesEvent
2019-05-30 08:56:48
intel-analytics/analytics-zoo
https://api.github.com/repos/intel-analytics/analytics-zoo
closed
Error while running the openvino example
high priority
Run by downloading the prebuilt package. https://github.com/intel-analytics/analytics-zoo/tree/master/pyzoo/zoo/examples/openvino Traceback (most recent call last): File "/tmp/1558587157102-0/model-optimizer/mo_tf.py", line 28, in <module> from mo.main import main File "/tmp/1558587157102-0/model-optimizer/mo/main.py", line 28, in <module> from mo.utils.cli_parser import get_placeholder_shapes, get_tuple_values, get_model_name, \ File "/tmp/1558587157102-0/model-optimizer/mo/utils/cli_parser.py", line 26, in <module> from mo.front.extractor import split_node_in_port File "/tmp/1558587157102-0/model-optimizer/mo/front/extractor.py", line 21, in <module> import networkx as nx ModuleNotFoundError: No module named 'networkx'
1.0
Error while running the openvino example - Run by downloading the prebuilt package. https://github.com/intel-analytics/analytics-zoo/tree/master/pyzoo/zoo/examples/openvino Traceback (most recent call last): File "/tmp/1558587157102-0/model-optimizer/mo_tf.py", line 28, in <module> from mo.main import main File "/tmp/1558587157102-0/model-optimizer/mo/main.py", line 28, in <module> from mo.utils.cli_parser import get_placeholder_shapes, get_tuple_values, get_model_name, \ File "/tmp/1558587157102-0/model-optimizer/mo/utils/cli_parser.py", line 26, in <module> from mo.front.extractor import split_node_in_port File "/tmp/1558587157102-0/model-optimizer/mo/front/extractor.py", line 21, in <module> import networkx as nx ModuleNotFoundError: No module named 'networkx'
priority
error while running the openvino example run by downloading the prebuilt package traceback most recent call last file tmp model optimizer mo tf py line in from mo main import main file tmp model optimizer mo main py line in from mo utils cli parser import get placeholder shapes get tuple values get model name file tmp model optimizer mo utils cli parser py line in from mo front extractor import split node in port file tmp model optimizer mo front extractor py line in import networkx as nx modulenotfounderror no module named networkx
1
131,654
5,163,603,530
IssuesEvent
2017-01-17 07:33:37
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
Fix the issue with processing the service
hi priority version _wf-base
Fix the issue with processing the service with a large number of print forms: https://docs.google.com/document/d/1Fx1tKPXEyYsc5_P7hHWm-joPivznDWbDIcINdhWoJHM/edit
1.0
Fix the issue with processing the service - Fix the issue with processing the service with a large number of print forms: https://docs.google.com/document/d/1Fx1tKPXEyYsc5_P7hHWm-joPivznDWbDIcINdhWoJHM/edit
priority
fix the issue with processing the service fix the issue with processing the service with a large number of print forms
1
307,648
9,419,844,959
IssuesEvent
2019-04-10 23:31:15
fedora-infra/bodhi
https://api.github.com/repos/fedora-infra/bodhi
opened
Bodhi dies when trying to send e-mails for fedora_modular
Composer Crash High priority
I get this e-mail regularly: Message ------- [2019-04-10 07:32:56][bodhi.server ERROR] ```python release_name = 'fedora_modular' ``` Process Details --------------- - host: bodhi-backend01.phx2.fedoraproject.org - PID: 63827 - name: fedmsg-hub-3 - command: /usr/bin/python3 /usr/bin/fedmsg-hub-3 - msg_id: Callstack that lead to the logging statement -------------------------------------------- ```python File "/usr/lib64/python3.7/threading.py", line 885 in _bootstrap File "/usr/lib64/python3.7/threading.py", line 917 in _bootstrap_inner File "/usr/lib/python3.7/site-packages/bodhi/server/consumers/masher.py", line 344 in run File "/usr/lib/python3.7/site-packages/bodhi/server/consumers/masher.py", line 429 in work File "/usr/lib/python3.7/site-packages/bodhi/server/consumers/masher.py", line 76 in wrapper File "/usr/lib/python3.7/site-packages/bodhi/server/consumers/masher.py", line 747 in send_stable_announcements File "/usr/lib/python3.7/site-packages/bodhi/server/models.py", line 2876 in send_update_notice ``` See also #2633
1.0
Bodhi dies when trying to send e-mails for fedora_modular - I get this e-mail regularly: Message ------- [2019-04-10 07:32:56][bodhi.server ERROR] ```python release_name = 'fedora_modular' ``` Process Details --------------- - host: bodhi-backend01.phx2.fedoraproject.org - PID: 63827 - name: fedmsg-hub-3 - command: /usr/bin/python3 /usr/bin/fedmsg-hub-3 - msg_id: Callstack that lead to the logging statement -------------------------------------------- ```python File "/usr/lib64/python3.7/threading.py", line 885 in _bootstrap File "/usr/lib64/python3.7/threading.py", line 917 in _bootstrap_inner File "/usr/lib/python3.7/site-packages/bodhi/server/consumers/masher.py", line 344 in run File "/usr/lib/python3.7/site-packages/bodhi/server/consumers/masher.py", line 429 in work File "/usr/lib/python3.7/site-packages/bodhi/server/consumers/masher.py", line 76 in wrapper File "/usr/lib/python3.7/site-packages/bodhi/server/consumers/masher.py", line 747 in send_stable_announcements File "/usr/lib/python3.7/site-packages/bodhi/server/models.py", line 2876 in send_update_notice ``` See also #2633
priority
bodhi dies when trying to send e mails for fedora modular i get this e mail regularly message python release name fedora modular process details host bodhi fedoraproject org pid name fedmsg hub command usr bin usr bin fedmsg hub msg id callstack that lead to the logging statement python file usr threading py line in bootstrap file usr threading py line in bootstrap inner file usr lib site packages bodhi server consumers masher py line in run file usr lib site packages bodhi server consumers masher py line in work file usr lib site packages bodhi server consumers masher py line in wrapper file usr lib site packages bodhi server consumers masher py line in send stable announcements file usr lib site packages bodhi server models py line in send update notice see also
1
242,599
20,254,552,712
IssuesEvent
2022-02-14 21:32:20
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: pgjdbc failed
C-test-failure O-robot O-roachtest branch-release-21.1
roachtest.pgjdbc [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4236957&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4236957&tab=artifacts#/pgjdbc) on release-21.1 @ [0fd6eead6c6eb7b2529deb39197cc3c95e93ded8](https://github.com/cockroachdb/cockroach/commits/0fd6eead6c6eb7b2529deb39197cc3c95e93ded8): ``` The test failed on branch=release-21.1, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/pgjdbc/run_1 orm_helpers.go:228,orm_helpers.go:154,java_helpers.go:216,pgjdbc.go:177,pgjdbc.go:190,test_runner.go:733: Tests run on Cockroach v21.1.13-45-g0fd6eead6c Tests run against pgjdbc REL42.2.19 5298 Total Tests Run 4461 tests passed 837 tests failed 60 tests skipped 185 tests ignored 0 tests passed unexpectedly 1 test failed unexpectedly 0 tests expected failed but skipped 0 tests expected failed but not run --- --- FAIL: org.postgresql.test.jdbc2.CursorFetchTest.testBasicFetch[binary = REGULAR] - unknown (unexpected) For a full summary look at the pgjdbc artifacts An updated blocklist (pgjdbcBlockList21_1) is available in the artifacts' pgjdbc log ``` <details><summary>Reproduce</summary> <p> <p>To reproduce, try: ```bash # From https://go.crdb.dev/p/roachstress, perhaps edited lightly. caffeinate ./roachstress.sh pgjdbc ``` </p> </p> </details> <details><summary>Same failure on other branches</summary> <p> - #75589 roachtest: pgjdbc failed [C-test-failure O-roachtest O-robot branch-master] - #75547 roachtest: pgjdbc failed [C-test-failure O-roachtest O-robot branch-release-21.2] </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*pgjdbc.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: pgjdbc failed - roachtest.pgjdbc [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4236957&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4236957&tab=artifacts#/pgjdbc) on release-21.1 @ [0fd6eead6c6eb7b2529deb39197cc3c95e93ded8](https://github.com/cockroachdb/cockroach/commits/0fd6eead6c6eb7b2529deb39197cc3c95e93ded8): ``` The test failed on branch=release-21.1, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/pgjdbc/run_1 orm_helpers.go:228,orm_helpers.go:154,java_helpers.go:216,pgjdbc.go:177,pgjdbc.go:190,test_runner.go:733: Tests run on Cockroach v21.1.13-45-g0fd6eead6c Tests run against pgjdbc REL42.2.19 5298 Total Tests Run 4461 tests passed 837 tests failed 60 tests skipped 185 tests ignored 0 tests passed unexpectedly 1 test failed unexpectedly 0 tests expected failed but skipped 0 tests expected failed but not run --- --- FAIL: org.postgresql.test.jdbc2.CursorFetchTest.testBasicFetch[binary = REGULAR] - unknown (unexpected) For a full summary look at the pgjdbc artifacts An updated blocklist (pgjdbcBlockList21_1) is available in the artifacts' pgjdbc log ``` <details><summary>Reproduce</summary> <p> <p>To reproduce, try: ```bash # From https://go.crdb.dev/p/roachstress, perhaps edited lightly. caffeinate ./roachstress.sh pgjdbc ``` </p> </p> </details> <details><summary>Same failure on other branches</summary> <p> - #75589 roachtest: pgjdbc failed [C-test-failure O-roachtest O-robot branch-master] - #75547 roachtest: pgjdbc failed [C-test-failure O-roachtest O-robot branch-release-21.2] </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*pgjdbc.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
non_priority
roachtest pgjdbc failed roachtest pgjdbc with on release the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts pgjdbc run orm helpers go orm helpers go java helpers go pgjdbc go pgjdbc go test runner go tests run on cockroach tests run against pgjdbc total tests run tests passed tests failed tests skipped tests ignored tests passed unexpectedly test failed unexpectedly tests expected failed but skipped tests expected failed but not run fail org postgresql test cursorfetchtest testbasicfetch unknown unexpected for a full summary look at the pgjdbc artifacts an updated blocklist is available in the artifacts pgjdbc log reproduce to reproduce try bash from perhaps edited lightly caffeinate roachstress sh pgjdbc same failure on other branches roachtest pgjdbc failed roachtest pgjdbc failed cc cockroachdb sql experience
0
421,191
28,310,648,291
IssuesEvent
2023-04-10 15:06:15
AY2223S2-CS2103T-W11-4/tp
https://api.github.com/repos/AY2223S2-CS2103T-W11-4/tp
closed
Try PDF conversions early
documentation priority.High
The DG and UG should be PDF-friendly. Don't use expandable panels, embedded videos, animated GIFs etc. Reason: The UG and DG used in the final grading will be in PDF format:
1.0
Try PDF conversions early - The DG and UG should be PDF-friendly. Don't use expandable panels, embedded videos, animated GIFs etc. Reason: The UG and DG used in the final grading will be in PDF format:
non_priority
try pdf conversions early the dg and ug should be pdf friendly don t use expandable panels embedded videos animated gifs etc reason the ug and dg used in the final grading will be in pdf format
0
818,612
30,696,215,529
IssuesEvent
2023-07-26 18:49:07
microsoft/STL
https://api.github.com/repos/microsoft/STL
closed
`<string>`: LibFuzzer failure during startup due to `basic_string::append` failing under ASAN
bug invalid high priority ASAN
# Describe the bug We are seeing ASAN-originated LibFuzzer failures with the following stacktrace: ``` ==6048==ERROR: AddressSanitizer: container-overflow on address 0x11d0e55a0f17 at pc 0x7ff7e1b09a3c bp 0x009688c8eeb0 sp 0x009688c8e640 WRITE of size 2 at 0x11d0e55a0f17 thread T0 #0 0x7ff7e1b09a3b in __asan_wrap_memmove D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\sanitizer_common\sanitizer_common_interceptors.inc:813 #1 0x7ff7e1c4daf5 in std::_Char_traits<char,int>::move VCCRT\crtw32\stdhpp\xstring:119 #2 0x7ff7e1c4daf5 in std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>::append(char const *const, unsigned __int64) VCCRT\crtw32\stdhpp\xstring:3313 #3 0x7ff7e1b542ad in std::basic_string<char,std::char_traits<char>,std::allocator<char> >::append D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xstring:3279 #4 0x7ff7e1b542ad in std::operator+ D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xstring:5003 #5 0x7ff7e1b542ad in fuzzer::DirPlusFile(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIO.cpp:133 #6 0x7ff7e1b51dd8 in fuzzer::ListFilesInDirRecursive(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, long *, class std::vector<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>> *, bool) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIOWindows.cpp:137 #7 0x7ff7e1b548b3 in fuzzer::GetSizedFilesFromDir(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, class std::vector<struct fuzzer::SizedFile, class std::allocator<struct fuzzer::SizedFile>> *) 
D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIO.cpp:125 #8 0x7ff7e1b617fa in fuzzer::ReadCorpora D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerDriver.cpp:631 #9 0x7ff7e1b5ec81 in fuzzer::FuzzerDriver(int *, char ***, int (__cdecl *)(unsigned char const *, unsigned __int64)) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerDriver.cpp:911 #10 0x7ff7e1b50ab2 in main D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerMain.cpp:20 #11 0x7ff7e1b3f22b in invoke_main VCCRT\vcstartup\src\startup\exe_common.inl:78 #12 0x7ff7e1b3f22b in __scrt_common_main_seh VCCRT\vcstartup\src\startup\exe_common.inl:288 #13 0x7ffc9c9b7613 (C:\Windows\System32\KERNEL32.DLL+0x180017613) #14 0x7ffc9cc226f0 (C:\Windows\SYSTEM32\ntdll.dll+0x1800526f0) ``` The ASAN failure (two different binaries here): ``` WRITE of size 2 at 0x11d0e55a0f17 thread T0 0x11d0e55a0f17 is located 103 bytes inside of 112-byte region [0x11d0e55a0eb0,0x11d0e55a0f20) ``` ``` WRITE of size 2 at 0x11d013da1077 thread T0 0x11d013da1077 is located 103 bytes inside of 112-byte region [0x11d013da1010,0x11d013da1080) ``` And the allocation information is: ``` 0x11d0e55a0f17 is located 103 bytes inside of 112-byte region [0x11d0e55a0eb0,0x11d0e55a0f20) allocated by thread T0 here: #0 0x7ff7e1b23fae in operator new(unsigned __int64) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\asan\asan_win_new_scalar_thunk.cpp:40 #1 0x7ff7e1b53a77 in std::allocator<char>::allocate D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xmemory:951 #2 0x7ff7e1b53a77 in std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>(struct std::_String_constructor_concat_tag, class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, char const *const, unsigned __int64, char const *const, unsigned __int64) 
D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xstring:2833 #3 0x7ff7e1b54292 in std::operator+ D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xstring:4988 #4 0x7ff7e1b54292 in fuzzer::DirPlusFile(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIO.cpp:133 #5 0x7ff7e1b51dd8 in fuzzer::ListFilesInDirRecursive(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, long *, class std::vector<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>> *, bool) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIOWindows.cpp:137 #6 0x7ff7e1b548b3 in fuzzer::GetSizedFilesFromDir(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, class std::vector<struct fuzzer::SizedFile, class std::allocator<struct fuzzer::SizedFile>> *) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIO.cpp:125 #7 0x7ff7e1b617fa in fuzzer::ReadCorpora D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerDriver.cpp:631 #8 0x7ff7e1b5ec81 in fuzzer::FuzzerDriver(int *, char ***, int (__cdecl *)(unsigned char const *, unsigned __int64)) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerDriver.cpp:911 #9 0x7ff7e1b50ab2 in main D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerMain.cpp:20 #10 0x7ff7e1b3f22b in invoke_main VCCRT\vcstartup\src\startup\exe_common.inl:78 #11 0x7ff7e1b3f22b in __scrt_common_main_seh VCCRT\vcstartup\src\startup\exe_common.inl:288 #12 0x7ffc9c9b7613 (C:\Windows\System32\KERNEL32.DLL+0x180017613) #13 0x7ffc9cc226f0 (C:\Windows\SYSTEM32\ntdll.dll+0x1800526f0) ``` # 
Command-line test case I've not yet created a minimal repro; all the code in the stack in question is provided by VC/STL (LibFuzzer libraries). I can provide the problematic binaries internally.
1.0
`<string>`: LibFuzzer failure during startup due to `basic_string::append` failing under ASAN - # Describe the bug We are seeing ASAN-originated LibFuzzer failures with the following stacktrace: ``` ==6048==ERROR: AddressSanitizer: container-overflow on address 0x11d0e55a0f17 at pc 0x7ff7e1b09a3c bp 0x009688c8eeb0 sp 0x009688c8e640 WRITE of size 2 at 0x11d0e55a0f17 thread T0 #0 0x7ff7e1b09a3b in __asan_wrap_memmove D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\sanitizer_common\sanitizer_common_interceptors.inc:813 #1 0x7ff7e1c4daf5 in std::_Char_traits<char,int>::move VCCRT\crtw32\stdhpp\xstring:119 #2 0x7ff7e1c4daf5 in std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>::append(char const *const, unsigned __int64) VCCRT\crtw32\stdhpp\xstring:3313 #3 0x7ff7e1b542ad in std::basic_string<char,std::char_traits<char>,std::allocator<char> >::append D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xstring:3279 #4 0x7ff7e1b542ad in std::operator+ D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xstring:5003 #5 0x7ff7e1b542ad in fuzzer::DirPlusFile(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIO.cpp:133 #6 0x7ff7e1b51dd8 in fuzzer::ListFilesInDirRecursive(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, long *, class std::vector<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>> *, bool) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIOWindows.cpp:137 #7 0x7ff7e1b548b3 in fuzzer::GetSizedFilesFromDir(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, class 
std::vector<struct fuzzer::SizedFile, class std::allocator<struct fuzzer::SizedFile>> *) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIO.cpp:125 #8 0x7ff7e1b617fa in fuzzer::ReadCorpora D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerDriver.cpp:631 #9 0x7ff7e1b5ec81 in fuzzer::FuzzerDriver(int *, char ***, int (__cdecl *)(unsigned char const *, unsigned __int64)) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerDriver.cpp:911 #10 0x7ff7e1b50ab2 in main D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerMain.cpp:20 #11 0x7ff7e1b3f22b in invoke_main VCCRT\vcstartup\src\startup\exe_common.inl:78 #12 0x7ff7e1b3f22b in __scrt_common_main_seh VCCRT\vcstartup\src\startup\exe_common.inl:288 #13 0x7ffc9c9b7613 (C:\Windows\System32\KERNEL32.DLL+0x180017613) #14 0x7ffc9cc226f0 (C:\Windows\SYSTEM32\ntdll.dll+0x1800526f0) ``` The ASAN failure (two different binaries here): ``` WRITE of size 2 at 0x11d0e55a0f17 thread T0 0x11d0e55a0f17 is located 103 bytes inside of 112-byte region [0x11d0e55a0eb0,0x11d0e55a0f20) ``` ``` WRITE of size 2 at 0x11d013da1077 thread T0 0x11d013da1077 is located 103 bytes inside of 112-byte region [0x11d013da1010,0x11d013da1080) ``` And the allocation information is: ``` 0x11d0e55a0f17 is located 103 bytes inside of 112-byte region [0x11d0e55a0eb0,0x11d0e55a0f20) allocated by thread T0 here: #0 0x7ff7e1b23fae in operator new(unsigned __int64) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\asan\asan_win_new_scalar_thunk.cpp:40 #1 0x7ff7e1b53a77 in std::allocator<char>::allocate D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xmemory:951 #2 0x7ff7e1b53a77 in std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>(struct std::_String_constructor_concat_tag, class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, char const *const, 
unsigned __int64, char const *const, unsigned __int64) D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xstring:2833 #3 0x7ff7e1b54292 in std::operator+ D:\a\_work\1\s\src\vctools\crt\github\stl\inc\xstring:4988 #4 0x7ff7e1b54292 in fuzzer::DirPlusFile(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIO.cpp:133 #5 0x7ff7e1b51dd8 in fuzzer::ListFilesInDirRecursive(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, long *, class std::vector<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>> *, bool) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIOWindows.cpp:137 #6 0x7ff7e1b548b3 in fuzzer::GetSizedFilesFromDir(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &, class std::vector<struct fuzzer::SizedFile, class std::allocator<struct fuzzer::SizedFile>> *) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerIO.cpp:125 #7 0x7ff7e1b617fa in fuzzer::ReadCorpora D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerDriver.cpp:631 #8 0x7ff7e1b5ec81 in fuzzer::FuzzerDriver(int *, char ***, int (__cdecl *)(unsigned char const *, unsigned __int64)) D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerDriver.cpp:911 #9 0x7ff7e1b50ab2 in main D:\a\_work\1\s\src\vctools\asan\llvm\compiler-rt\lib\fuzzer\FuzzerMain.cpp:20 #10 0x7ff7e1b3f22b in invoke_main VCCRT\vcstartup\src\startup\exe_common.inl:78 #11 0x7ff7e1b3f22b in __scrt_common_main_seh VCCRT\vcstartup\src\startup\exe_common.inl:288 #12 0x7ffc9c9b7613 (C:\Windows\System32\KERNEL32.DLL+0x180017613) #13 0x7ffc9cc226f0 
(C:\Windows\SYSTEM32\ntdll.dll+0x1800526f0) ``` # Command-line test case I've not yet created a minimal repro; all the code in the stack in question is provided by VC/STL (LibFuzzer libraries). I can provide the problematic binaries internally.
priority
libfuzzer failure during startup due to basic string append failing under asan describe the bug we are seeing asan originated libfuzzer failures with the following stacktrace error addresssanitizer container overflow on address at pc bp sp write of size at thread in asan wrap memmove d a work s src vctools asan llvm compiler rt lib sanitizer common sanitizer common interceptors inc in std char traits move vccrt stdhpp xstring in std basic string class std allocator append char const const unsigned vccrt stdhpp xstring in std basic string std allocator append d a work s src vctools crt github stl inc xstring in std operator d a work s src vctools crt github stl inc xstring in fuzzer dirplusfile class std basic string class std allocator const class std basic string class std allocator const d a work s src vctools asan llvm compiler rt lib fuzzer fuzzerio cpp in fuzzer listfilesindirrecursive class std basic string class std allocator const long class std vector class std allocator class std allocator class std allocator bool d a work s src vctools asan llvm compiler rt lib fuzzer fuzzeriowindows cpp in fuzzer getsizedfilesfromdir class std basic string class std allocator const class std vector d a work s src vctools asan llvm compiler rt lib fuzzer fuzzerio cpp in fuzzer readcorpora d a work s src vctools asan llvm compiler rt lib fuzzer fuzzerdriver cpp in fuzzer fuzzerdriver int char int cdecl unsigned char const unsigned d a work s src vctools asan llvm compiler rt lib fuzzer fuzzerdriver cpp in main d a work s src vctools asan llvm compiler rt lib fuzzer fuzzermain cpp in invoke main vccrt vcstartup src startup exe common inl in scrt common main seh vccrt vcstartup src startup exe common inl c windows dll c windows ntdll dll the asan failure two different binaries here write of size at thread is located bytes inside of byte region write of size at thread is located bytes inside of byte region and the allocation information is is located bytes inside of byte 
region allocated by thread here in operator new unsigned d a work s src vctools asan llvm compiler rt lib asan asan win new scalar thunk cpp in std allocator allocate d a work s src vctools crt github stl inc xmemory in std basic string class std allocator basic string class std allocator struct std string constructor concat tag class std basic string class std allocator const char const const unsigned char const const unsigned d a work s src vctools crt github stl inc xstring in std operator d a work s src vctools crt github stl inc xstring in fuzzer dirplusfile class std basic string class std allocator const class std basic string class std allocator const d a work s src vctools asan llvm compiler rt lib fuzzer fuzzerio cpp in fuzzer listfilesindirrecursive class std basic string class std allocator const long class std vector class std allocator class std allocator class std allocator bool d a work s src vctools asan llvm compiler rt lib fuzzer fuzzeriowindows cpp in fuzzer getsizedfilesfromdir class std basic string class std allocator const class std vector d a work s src vctools asan llvm compiler rt lib fuzzer fuzzerio cpp in fuzzer readcorpora d a work s src vctools asan llvm compiler rt lib fuzzer fuzzerdriver cpp in fuzzer fuzzerdriver int char int cdecl unsigned char const unsigned d a work s src vctools asan llvm compiler rt lib fuzzer fuzzerdriver cpp in main d a work s src vctools asan llvm compiler rt lib fuzzer fuzzermain cpp in invoke main vccrt vcstartup src startup exe common inl in scrt common main seh vccrt vcstartup src startup exe common inl c windows dll c windows ntdll dll command line test case i ve not yet created a minimal repro all the code in the stack in question is provided by vc stl libfuzzer libraries i can provide the problematic binaries internally
1
805,499
29,522,363,293
IssuesEvent
2023-06-05 03:57:01
okTurtles/group-income
https://api.github.com/repos/okTurtles/group-income
opened
Proposals from one group appear in another group sometimes
Kind:Bug App:Frontend Priority:High Note:Contracts
### Problem While showing the app to a friend, who created his own group and invited me (resulting in me being in 2 groups), I saw in his group a cancelled proposal from our group: ![Screenshot 2023-06-04 at 6 57 39 PM copy](https://github.com/okTurtles/group-income/assets/138706/f656d282-182d-41f1-887c-5880f03e0b69) The steps to reproduce are similar to #1636, although I'm not exactly sure how to reproduce consistently. ### Solution Find and fix bug. Proposals from one group should never appear on another group.
1.0
Proposals from one group appear in another group sometimes - ### Problem While showing the app to a friend, who created his own group and invited me (resulting in me being in 2 groups), I saw in his group a cancelled proposal from our group: ![Screenshot 2023-06-04 at 6 57 39 PM copy](https://github.com/okTurtles/group-income/assets/138706/f656d282-182d-41f1-887c-5880f03e0b69) The steps to reproduce are similar to #1636, although I'm not exactly sure how to reproduce consistently. ### Solution Find and fix bug. Proposals from one group should never appear on another group.
priority
proposals from one group appear in another group sometimes problem while showing the app to a friend who created his own group and invited me resulting in me being in groups i saw in his group a cancelled proposal from our group the steps to reproduce are similar to although i m not exactly sure how to reproduce consistently solution find and fix bug proposals from one group should never appear on another group
1
37,852
5,145,889,368
IssuesEvent
2017-01-12 22:57:24
NAVADMC/ADSM
https://api.github.com/repos/NAVADMC/ADSM
opened
User Story - Control Protocol Assignment Test Case
Test Case
### As a user, I need to assign a control protocol to a production type - [ ] Are all my production types listed? - [ ] Are all my control protocols presented on the pull-down list? - [ ] Can I null out an assignment? - [ ] Can I add a note? - [ ] Do my status lights populate by production type? - [ ] When a change is made, does the apply or cancel button become active? - [ ] Do I understand which fields are required? - [ ] In this case, is this an easy enough presentation that it is intuitive to the user? Are other tests discovered during the exercise of this functionality?
1.0
User Story - Control Protocol Assignment Test Case - ### As a user, I need to assign a control protocol to a production type - [ ] Are all my production types listed? - [ ] Are all my control protocols presented on the pull-down list? - [ ] Can I null out an assignment? - [ ] Can I add a note? - [ ] Do my status lights populate by production type? - [ ] When a change is made, does the apply or cancel button become active? - [ ] Do I understand which fields are required? - [ ] In this case, is this an easy enough presentation that it is intuitive to the user? Are other tests discovered during the exercise of this functionality?
non_priority
user story control protocol assignment test case as a user i need to assign a control protocol to a production type are all my production types listed are all my control protocol presented on the pull down list can i null out an assignment can i add a note do my status lights populate by production type when a change is made does the apply or cancel button become active do i understand which fields are required in this case is this an easy enough presentation that it is intuitive to the user are other tests discovered during the exercise of this functionality
0