Dataset preview: GitHub issue events labeled `defect` / `non_defect`. Column schema (dtype and observed range or distinct values):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 5 to 112 |
| repo_url | stringlengths | 34 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 757 |
| labels | stringlengths | 4 to 664 |
| body | stringlengths | 3 to 261k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 232k |
| binary_label | int64 | 0 to 1 |

Sample records (one per block, fields listed in schema order):
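Rows with this schema can be filtered programmatically. A minimal sketch using pandas, with two rows abridged from the sample records below (pandas availability is assumed; a real dump would be loaded from file instead of constructed inline):

```python
# Minimal sketch: two abridged rows from this preview, in the schema's columns.
import pandas as pd

df = pd.DataFrame(
    {
        "id": [6_953_398_012, 24_036_785_909],
        "type": ["IssuesEvent", "IssuesEvent"],
        "repo": ["Dzhuneyt/jquery-tubular", "GoogleForCreators/web-stories-wp"],
        "action": ["closed", "closed"],
        "label": ["defect", "non_defect"],
        "binary_label": [1, 0],  # 1 = defect, 0 = non_defect
    }
)

# binary_label mirrors label: keep only the defect rows
defects = df[df["binary_label"] == 1]
print(len(defects))             # 1
print(defects["repo"].iloc[0])  # Dzhuneyt/jquery-tubular
```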
---
Unnamed: 0: 32,855
id: 6,953,398,012
type: IssuesEvent
created_at: 2017-12-06 20:52:59
repo: Dzhuneyt/jquery-tubular
repo_url: https://api.github.com/repos/Dzhuneyt/jquery-tubular
action: closed
title: Mute:true not working in google chrome
labels: auto-migrated Priority-Medium Type-Defect
body:
```
What steps will reproduce the problem?
1. setting mute:true in plugin options
What is the expected output? What do you see instead?
In Google Chrome it reproduces audio although i set "mute:true" in options
array.
What version of the product are you using? On what operating system?
Windows 7. Release 1.0.1
```
Original issue reported on code.google.com by `giovanni...@gmail.com` on 27 Nov 2012 at 9:55
index: 1.0
text_combine:
Mute:true not working in google chrome - ```
What steps will reproduce the problem?
1. setting mute:true in plugin options
What is the expected output? What do you see instead?
In Google Chrome it reproduces audio although i set "mute:true" in options
array.
What version of the product are you using? On what operating system?
Windows 7. Release 1.0.1
```
Original issue reported on code.google.com by `giovanni...@gmail.com` on 27 Nov 2012 at 9:55
label: defect
text:
mute true not working in google chrome what steps will reproduce the problem setting mute true in plugin options what is the expected output what do you see instead in google chrome it reproduces audio although i set mute true in options array what version of the product are you using on what operating system windows release original issue reported on code google com by giovanni gmail com on nov at
binary_label: 1
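The `text` value above looks like a cleaned form of `text_combine`: lowercased, with punctuation and digits stripped and whitespace collapsed. The exact pipeline used for this dataset is not documented here, so the following is an inferred sketch of such a transform:

```python
# Inferred sketch of the apparent text_combine -> text cleaning:
# lowercase, drop anything that is not a letter or whitespace
# (punctuation AND digits), then collapse runs of whitespace.
import re

def clean(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)  # drop punctuation and digits
    return re.sub(r"\s+", " ", text).strip()

print(clean("Mute:true not working in google chrome"))
# mute true not working in google chrome
```

Applied to the record's title, this reproduces the start of the `text` value shown above; note how version numbers like "1.0.1" vanish entirely, which matches the observed field.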
---
Unnamed: 0: 48,147
id: 13,067,478,834
type: IssuesEvent
created_at: 2020-07-31 00:35:17
repo: icecube-trac/tix2
repo_url: https://api.github.com/repos/icecube-trac/tix2
action: closed
title: make ZeroMQ a required dependency of SNDAQ (Trac #1860)
labels: Migrated from Trac defect other
body:
In BeerTrooper_XIII 0MQ is not a required dependency in CMake, but the code will not build properly if it's not found on the system. Either make 0MQ a required dependency or figure out how make the build fail elegantly if 0MQ is not found.
Migrated from https://code.icecube.wisc.edu/ticket/1860
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:44",
"description": "In BeerTrooper_XIII 0MQ is not a required dependency in CMake, but the code will not build properly if it's not found on the system. Either make 0MQ a required dependency or figure out how make the build fail elegantly if 0MQ is not found.",
"reporter": "sybenzvi",
"cc": "gmoment, rcross",
"resolution": "fixed",
"_ts": "1550067284370534",
"component": "other",
"summary": "make ZeroMQ a required dependency of SNDAQ",
"priority": "minor",
"keywords": "sndaq",
"time": "2016-09-13T18:19:33",
"milestone": "",
"owner": "sybenzvi",
"type": "defect"
}
```
index: 1.0
text_combine:
make ZeroMQ a required dependency of SNDAQ (Trac #1860) - In BeerTrooper_XIII 0MQ is not a required dependency in CMake, but the code will not build properly if it's not found on the system. Either make 0MQ a required dependency or figure out how make the build fail elegantly if 0MQ is not found.
Migrated from https://code.icecube.wisc.edu/ticket/1860
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:44",
"description": "In BeerTrooper_XIII 0MQ is not a required dependency in CMake, but the code will not build properly if it's not found on the system. Either make 0MQ a required dependency or figure out how make the build fail elegantly if 0MQ is not found.",
"reporter": "sybenzvi",
"cc": "gmoment, rcross",
"resolution": "fixed",
"_ts": "1550067284370534",
"component": "other",
"summary": "make ZeroMQ a required dependency of SNDAQ",
"priority": "minor",
"keywords": "sndaq",
"time": "2016-09-13T18:19:33",
"milestone": "",
"owner": "sybenzvi",
"type": "defect"
}
```
label: defect
text:
make zeromq a required dependency of sndaq trac in beertrooper xiii is not a required dependency in cmake but the code will not build properly if it s not found on the system either make a required dependency or figure out how make the build fail elegantly if is not found migrated from json status closed changetime description in beertrooper xiii is not a required dependency in cmake but the code will not build properly if it s not found on the system either make a required dependency or figure out how make the build fail elegantly if is not found reporter sybenzvi cc gmoment rcross resolution fixed ts component other summary make zeromq a required dependency of sndaq priority minor keywords sndaq time milestone owner sybenzvi type defect
binary_label: 1
---
Unnamed: 0: 39,333
id: 9,393,603,325
type: IssuesEvent
created_at: 2019-04-07 13:00:25
repo: combatopera/pym2149
repo_url: https://api.github.com/repos/combatopera/pym2149
action: opened
title: timer effect api revamp
labels: Concern defect
body:
* instead of the current shambles, add a timerframe method to Note (or perhaps a method each for noteon/noteoff) in which you can write code just like in the regular methods, and instead of executing the code we use static analysis to apply a performant timer effect, and reject anything too fancy
* it would also be good to have a nudge method to apply changes, the use-case for this is tone waveform reset by setting period to a small number. this may be a separate ticket
index: 1.0
text_combine:
timer effect api revamp - * instead of the current shambles, add a timerframe method to Note (or perhaps a method each for noteon/noteoff) in which you can write code just like in the regular methods, and instead of executing the code we use static analysis to apply a performant timer effect, and reject anything too fancy
* it would also be good to have a nudge method to apply changes, the use-case for this is tone waveform reset by setting period to a small number. this may be a separate ticket
label: defect
text:
timer effect api revamp instead of the current shambles add a timerframe method to note or perhaps a method each for noteon noteoff in which you can write code just like in the regular methods and instead of executing the code we use static analysis to apply a performant timer effect and reject anything too fancy it would also be good to have a nudge method to apply changes the use case for this is tone waveform reset by setting period to a small number this may be a separate ticket
binary_label: 1
---
Unnamed: 0: 19,725
id: 3,249,295,492
type: IssuesEvent
created_at: 2015-10-18 01:54:40
repo: adsbypasser/adsbypasser
repo_url: https://api.github.com/repos/adsbypasser/adsbypasser
action: closed
title: eafyfsuh.net broken again
labels: defect need_infomation type:link
body:
link example: `http://www.eafyfsuh.net/DFItk`
It does not redirect. I tried to disable extensions (excludes GM) and load the page. However it didn't redirect.
How can I do to work on eafysuh?
adsbypasser: 5.37.1
fx 41 with extensions GM 3.4.1, uBlock₀, NoScript, Ghostery, etc.
console log:
```
AdsBypasser: working on
http://www.eafyfsuh.net/DFItk
with
{"version":3,"alignCenter":true,"changeBackground":true,"externalServerSupport":false,"redirectImage":true,"scaleImage":true,"logLevel":1}"
```
thanks.
index: 1.0
text_combine:
eafyfsuh.net broken again - link example: `http://www.eafyfsuh.net/DFItk`
It does not redirect. I tried to disable extensions (excludes GM) and load the page. However it didn't redirect.
How can I do to work on eafysuh?
adsbypasser: 5.37.1
fx 41 with extensions GM 3.4.1, uBlock₀, NoScript, Ghostery, etc.
console log:
```
AdsBypasser: working on
http://www.eafyfsuh.net/DFItk
with
{"version":3,"alignCenter":true,"changeBackground":true,"externalServerSupport":false,"redirectImage":true,"scaleImage":true,"logLevel":1}"
```
thanks.
label: defect
text:
eafyfsuh net broken again link example it does not redirect i tried to disable extensions excludes gm and load the page however it didn t redirect how can i do to work on eafysuh adsbypasser fx with extensions gm ublock₀ noscript ghostery etc console log adsbypasser working on with version aligncenter true changebackground true externalserversupport false redirectimage true scaleimage true loglevel thanks
binary_label: 1
---
Unnamed: 0: 110,931
id: 24,036,785,909
type: IssuesEvent
created_at: 2022-09-15 19:59:32
repo: GoogleForCreators/web-stories-wp
repo_url: https://api.github.com/repos/GoogleForCreators/web-stories-wp
action: closed
title: Code Quality: Remove `enableThumbnailCaching` feature flag code
labels: P2 Type: Code Quality Pod: Prometheus Pod: WP
body:
<!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Task Description
<!-- A clear and concise description of what this task is about. -->
This feature has been enabled by default for a while. Time to remove all code around the `enableThumbnailCaching` flag check.
index: 1.0
text_combine:
Code Quality: Remove `enableThumbnailCaching` feature flag code - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Task Description
<!-- A clear and concise description of what this task is about. -->
This feature has been enabled by default for a while. Time to remove all code around the `enableThumbnailCaching` flag check.
label: non_defect
text:
code quality remove enablethumbnailcaching feature flag code task description this feature has been enabled by default for a while time to remove all code around the enablethumbnailcaching flag check
binary_label: 0
---
Unnamed: 0: 58,002
id: 16,327,924,096
type: IssuesEvent
created_at: 2021-05-12 05:09:51
repo: openzfs/zfs
repo_url: https://api.github.com/repos/openzfs/zfs
action: closed
title: corrupted pool with 2.1.0-rc5
labels: Status: Triage Needed Type: Defect
body:
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | EndeavourOS
Distribution Version |
Linux Kernel | 5.12.2 (zen)
Architecture | x86 (AMD)
ZFS Version | 2.1.0-rc5
SPL Version | 2.1.0-rc5
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
Today I gave 2.1.0-rc5 a try. I had 2.0.4 installed before. install with dkms was going fine, but after boot one of my pools is throwing an error:
```
pool: zstore
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 07:35:40 with 0 errors on Thu Apr 1 07:35:41 2021
config:
NAME STATE READ WRITE CKSUM
zstore ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
WD40EFRX-WCC7K6XALFKX ONLINE 0 0 0
WD40EFRX-WCC7K2YAJA69 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
WD40EFRX-WCC7K6YX6SAA ONLINE 0 0 0
WD40EFRX-WCC7K4JKN36U ONLINE 0 0 0
errors: Permanent errors have been detected in the following files:
zstore/data/BACKUP/rakete_home:<0x0>
```
I did not do a feature upgrade yet. Obvioulsy it is an issue with dataset `zstore/data/BACKUP/rakete_home`. It is not mounted and if I try to mount it I get:
```
6# zfs mount zstore/data/BACKUP/rakete_home
cannot mount 'zstore/data/BACKUP/rakete_home': Input/output error
```
How should I approach this now?
index: 1.0
text_combine:
corrupted pool with 2.1.0-rc5 - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | EndeavourOS
Distribution Version |
Linux Kernel | 5.12.2 (zen)
Architecture | x86 (AMD)
ZFS Version | 2.1.0-rc5
SPL Version | 2.1.0-rc5
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
Today I gave 2.1.0-rc5 a try. I had 2.0.4 installed before. install with dkms was going fine, but after boot one of my pools is throwing an error:
```
pool: zstore
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 07:35:40 with 0 errors on Thu Apr 1 07:35:41 2021
config:
NAME STATE READ WRITE CKSUM
zstore ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
WD40EFRX-WCC7K6XALFKX ONLINE 0 0 0
WD40EFRX-WCC7K2YAJA69 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
WD40EFRX-WCC7K6YX6SAA ONLINE 0 0 0
WD40EFRX-WCC7K4JKN36U ONLINE 0 0 0
errors: Permanent errors have been detected in the following files:
zstore/data/BACKUP/rakete_home:<0x0>
```
I did not do a feature upgrade yet. Obvioulsy it is an issue with dataset `zstore/data/BACKUP/rakete_home`. It is not mounted and if I try to mount it I get:
```
6# zfs mount zstore/data/BACKUP/rakete_home
cannot mount 'zstore/data/BACKUP/rakete_home': Input/output error
```
How should I approach this now?
label: defect
text:
corrupted pool with thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name endeavouros distribution version linux kernel zen architecture amd zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing today i gave a try i had installed before install with dkms was going fine but after boot one of my pools is throwing an error pool zstore state online status one or more devices has experienced an error resulting in data corruption applications may be affected action restore the file in question if possible otherwise restore the entire pool from backup see scan scrub repaired in with errors on thu apr config name state read write cksum zstore online mirror online online online mirror online online online errors permanent errors have been detected in the following files zstore data backup rakete home i did not do a feature upgrade yet obvioulsy it is an issue with dataset zstore data backup rakete home it is not mounted and if i try to mount it i get zfs mount zstore data backup rakete home cannot mount zstore data backup rakete home input output error how should i approach this now
binary_label: 1
---
Unnamed: 0: 79,707
id: 3,541,708,152
type: IssuesEvent
created_at: 2016-01-19 03:09:35
repo: pathwaysmedical/frasernw
repo_url: https://api.github.com/repos/pathwaysmedical/frasernw
action: closed
title: sc_item share with divisions not working
labels: Bug High Priority
body:
This is a bug from my last implementation I can fix it.
index: 1.0
text_combine:
sc_item share with divisions not working - This is a bug from my last implementation I can fix it.
label: non_defect
text:
sc item share with divisions not working this is a bug from my last implementation i can fix it
binary_label: 0
---
Unnamed: 0: 66,767
id: 20,624,227,346
type: IssuesEvent
created_at: 2022-03-07 20:37:59
repo: SeleniumHQ/selenium
repo_url: https://api.github.com/repos/SeleniumHQ/selenium
action: opened
title: [🐛 Bug]:
labels: I-defect needs-triaging
body:
### What happened?
I am following an example on microsofts website for selenium 4. I am using the msEdgeDriver.exe for 99.0. I am using edge 99.0. it is a simple basic script. I am running in VsTools. The browser will launch, and it goes to the site. In the debugger though I am getting an error. "Exception has occurred: NoSuchElementException
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="sb_form_q"]"}
(Session info: MicrosoftEdge=99.0.1150.30)"
Any help would be appreciated Here is the example I followed https://docs.microsoft.com/en-us/microsoft-edge/webdriver-chromium/?tabs=c-sharp.
### How can we reproduce the issue?
```shell
Should not be hard. VSCODE, MS Edge, are both default installs.
#Testing new webdriver
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.edge.service import Service
dPath = '.\msedgedriver99.exe'
driver = webdriver.Edge(dPath)
service = Service(executable_path=dPath)
driver.get('https://www.google.com')
element = driver.find_element(By.ID, 'sb_form_q')
element.send_keys('WebDriver')
element.submit()
time.sleep(5)
driver.quit()
```
### Relevant log output
```shell
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="sb_form_q"]"}
(Session info: MicrosoftEdge=99.0.1150.30)
Stacktrace:
Backtrace:
Microsoft::Applications::Events::EventProperties::unpack [0x005A4E63+58211]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004837C1+1400481]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x0027406E+3470]
Microsoft::Applications::Events::GUID_t::GUID_t [0x0029DC30+100304]
Microsoft::Applications::Events::GUID_t::GUID_t [0x0029DDB0+100688]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002C1252+245234]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002B1D34+182484]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002BFAD3+239219]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002B1A66+181766]
Microsoft::Applications::Events::GUID_t::GUID_t [0x00294C66+63494]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002959F6+66966]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x0049D895+1507189]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x007212E2+115298]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00721046+114630]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00724D60+130272]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x0072197C+116988]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x00495237+1472791]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004A0078+1517400]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004A0202+1517794]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004B1EB2+1590674]
BaseThreadInitThunk [0x75A5FA29+25]
RtlGetAppContainerNamedObjectPath [0x77177A9E+286]
RtlGetAppContainerNamedObjectPath [0x77177A6E+238]
File "C:\Users\TJ423JZ\OneDrive - EY\Documents\VSCode\Workspaces\Selenium 4 Test\seleniumTest.py", line 11, in <module>
element = driver.find_element(By.ID, 'sb_form_q')
```
### Operating System
Windows 10
### Selenium version
Python 3.10.2 VScode v1.64.2
### What are the browser(s) and version(s) where you see this issue?
Edge 99.0
### What are the browser driver(s) and version(s) where you see this issue?
EdgeDriver99
### Are you using Selenium Grid?
4.0
index: 1.0
text_combine:
[🐛 Bug]: - ### What happened?
I am following an example on microsofts website for selenium 4. I am using the msEdgeDriver.exe for 99.0. I am using edge 99.0. it is a simple basic script. I am running in VsTools. The browser will launch, and it goes to the site. In the debugger though I am getting an error. "Exception has occurred: NoSuchElementException
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="sb_form_q"]"}
(Session info: MicrosoftEdge=99.0.1150.30)"
Any help would be appreciated Here is the example I followed https://docs.microsoft.com/en-us/microsoft-edge/webdriver-chromium/?tabs=c-sharp.
### How can we reproduce the issue?
```shell
Should not be hard. VSCODE, MS Edge, are both default installs.
#Testing new webdriver
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.edge.service import Service
dPath = '.\msedgedriver99.exe'
driver = webdriver.Edge(dPath)
service = Service(executable_path=dPath)
driver.get('https://www.google.com')
element = driver.find_element(By.ID, 'sb_form_q')
element.send_keys('WebDriver')
element.submit()
time.sleep(5)
driver.quit()
```
### Relevant log output
```shell
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="sb_form_q"]"}
(Session info: MicrosoftEdge=99.0.1150.30)
Stacktrace:
Backtrace:
Microsoft::Applications::Events::EventProperties::unpack [0x005A4E63+58211]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004837C1+1400481]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x0027406E+3470]
Microsoft::Applications::Events::GUID_t::GUID_t [0x0029DC30+100304]
Microsoft::Applications::Events::GUID_t::GUID_t [0x0029DDB0+100688]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002C1252+245234]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002B1D34+182484]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002BFAD3+239219]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002B1A66+181766]
Microsoft::Applications::Events::GUID_t::GUID_t [0x00294C66+63494]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002959F6+66966]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x0049D895+1507189]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x007212E2+115298]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00721046+114630]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00724D60+130272]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x0072197C+116988]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x00495237+1472791]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004A0078+1517400]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004A0202+1517794]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004B1EB2+1590674]
BaseThreadInitThunk [0x75A5FA29+25]
RtlGetAppContainerNamedObjectPath [0x77177A9E+286]
RtlGetAppContainerNamedObjectPath [0x77177A6E+238]
File "C:\Users\TJ423JZ\OneDrive - EY\Documents\VSCode\Workspaces\Selenium 4 Test\seleniumTest.py", line 11, in <module>
element = driver.find_element(By.ID, 'sb_form_q')
```
### Operating System
Windows 10
### Selenium version
Python 3.10.2 VScode v1.64.2
### What are the browser(s) and version(s) where you see this issue?
Edge 99.0
### What are the browser driver(s) and version(s) where you see this issue?
EdgeDriver99
### Are you using Selenium Grid?
4.0
label: defect
text:
what happened i am following an example on microsofts website for selenium i am using the msedgedriver exe for i am using edge it is a simple basic script i am running in vstools the browser will launch and it goes to the site in the debugger though i am getting an error exception has occurred nosuchelementexception message no such element unable to locate element method css selector selector session info microsoftedge any help would be appreciated here is the example i followed how can we reproduce the issue shell should not be hard vscode ms edge are both default installs testing new webdriver from selenium import webdriver from selenium webdriver common by import by import time from selenium webdriver edge service import service dpath exe driver webdriver edge dpath service service executable path dpath driver get element driver find element by id sb form q element send keys webdriver element submit time sleep driver quit relevant log output shell message no such element unable to locate element method css selector selector session info microsoftedge stacktrace backtrace microsoft applications events eventproperties unpack microsoft applications events isemanticcontext setcommonfield microsoft applications events ilogconfiguration operator microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events isemanticcontext setcommonfield microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events isemanticcontext setcommonfield 
microsoft applications events isemanticcontext setcommonfield microsoft applications events isemanticcontext setcommonfield microsoft applications events isemanticcontext setcommonfield basethreadinitthunk rtlgetappcontainernamedobjectpath rtlgetappcontainernamedobjectpath file c users onedrive ey documents vscode workspaces selenium test seleniumtest py line in element driver find element by id sb form q operating system windows selenium version python vscode what are the browser s and version s where you see this issue edge what are the browser driver s and version s where you see this issue are you using selenium grid
binary_label: 1
---
Unnamed: 0: 79,887
id: 29,499,801,528
type: IssuesEvent
created_at: 2023-06-02 20:26:23
repo: dotCMS/core
repo_url: https://api.github.com/repos/dotCMS/core
action: closed
title: Remove hardcoded referrer from Reach Page conditions
labels: Type : Defect Merged QA : Passed Internal Team : Falcon dotCMS : Experiments Next Release
body:
### Parent Issue
https://github.com/dotCMS/core/issues/22760
### Problem Statement
When an Experiment is configured with a Reach Page goal, there's a internal condition hardcoded which is that the referrer needs to be the page where the Experiment is created. This is not convenient since it lefts out cases like for example reaching a thank-you page, which can be a number of hops ahead of the page being tested, for example a particular product.
The referrer needs to be empty.
### Steps to Reproduce
* Configure a Experiment with a reach page goal to a certain page
* Hit the page not coming straight from the page in the Experiment
Current Behavior: The goal will not be counted.
Expected: The goal should be counted.
### Acceptance Criteria
When configuring an Experiment with a Reach Page goal, reaching the specified Page does not necessarily need to come from the page in the Experiment.
IMPORTANT: The Experiment page need to be visited previous to reaching the specified Page (anytime before)
Automated test
### dotCMS Version
master / 23.06
### Proposed Objective
Core Features
### Proposed Priority
Priority 2 - Important
### External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
_No response_
### Assumptions & Initiation Needs
_No response_
### Quality Assurance Notes & Workarounds
_No response_
### Sub-Tasks & Estimates
_No response_
index: 1.0
text_combine:
Remove hardcoded referrer from Reach Page conditions - ### Parent Issue
https://github.com/dotCMS/core/issues/22760
### Problem Statement
When an Experiment is configured with a Reach Page goal, there's a internal condition hardcoded which is that the referrer needs to be the page where the Experiment is created. This is not convenient since it lefts out cases like for example reaching a thank-you page, which can be a number of hops ahead of the page being tested, for example a particular product.
The referrer needs to be empty.
### Steps to Reproduce
* Configure a Experiment with a reach page goal to a certain page
* Hit the page not coming straight from the page in the Experiment
Current Behavior: The goal will not be counted.
Expected: The goal should be counted.
### Acceptance Criteria
When configuring an Experiment with a Reach Page goal, reaching the specified Page does not necessarily need to come from the page in the Experiment.
IMPORTANT: The Experiment page need to be visited previous to reaching the specified Page (anytime before)
Automated test
### dotCMS Version
master / 23.06
### Proposed Objective
Core Features
### Proposed Priority
Priority 2 - Important
### External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
_No response_
### Assumptions & Initiation Needs
_No response_
### Quality Assurance Notes & Workarounds
_No response_
### Sub-Tasks & Estimates
_No response_
label: defect
text:
remove hardcoded referrer from reach page conditions parent issue problem statement when an experiment is configured with a reach page goal there s a internal condition hardcoded which is that the referrer needs to be the page where the experiment is created this is not convenient since it lefts out cases like for example reaching a thank you page which can be a number of hops ahead of the page being tested for example a particular product the referrer needs to be empty steps to reproduce configure a experiment with a reach page goal to a certain page hit the page not coming straight from the page in the experiment current behavior the goal will not be counted expected the goal should be counted acceptance criteria when configuring an experiment with a reach page goal reaching the specified page does not necessarily need to come from the page in the experiment important the experiment page need to be visited previous to reaching the specified page anytime before automated test dotcms version master proposed objective core features proposed priority priority important external links slack conversations support tickets figma designs etc no response assumptions initiation needs no response quality assurance notes workarounds no response sub tasks estimates no response
binary_label: 1
---
Unnamed: 0: 223,679
id: 24,733,934,029
type: IssuesEvent
created_at: 2022-10-20 20:06:34
repo: lukebrogan-mend/NodeGoat
repo_url: https://api.github.com/repos/lukebrogan-mend/NodeGoat
action: opened
title: CVE-2021-32804 (High) detected in tar-4.4.8.tgz
labels: security vulnerability
body:
## CVE-2021-32804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.8.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.8.tgz">https://registry.npmjs.org/tar/-/tar-4.4.8.tgz</a></p>
<p>
Dependency Hierarchy:
- forever-2.0.0.tgz (Root Library)
- forever-monitor-2.0.0.tgz
- chokidar-2.1.8.tgz
- fsevents-1.2.9.tgz
- node-pre-gyp-0.12.0.tgz
- :x: **tar-4.4.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lukebrogan-mend/NodeGoat/commit/f95d3c97683d86893d0c292585ba5dd1abf0a95e">f95d3c97683d86893d0c292585ba5dd1abf0a95e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804>CVE-2021-32804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9">https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution (tar): 4.4.14</p>
<p>Direct dependency fix Resolution (forever): 3.0.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2021-32804 (High) detected in tar-4.4.8.tgz - ## CVE-2021-32804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.8.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.8.tgz">https://registry.npmjs.org/tar/-/tar-4.4.8.tgz</a></p>
<p>
Dependency Hierarchy:
- forever-2.0.0.tgz (Root Library)
- forever-monitor-2.0.0.tgz
- chokidar-2.1.8.tgz
- fsevents-1.2.9.tgz
- node-pre-gyp-0.12.0.tgz
- :x: **tar-4.4.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lukebrogan-mend/NodeGoat/commit/f95d3c97683d86893d0c292585ba5dd1abf0a95e">f95d3c97683d86893d0c292585ba5dd1abf0a95e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has an arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804>CVE-2021-32804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9">https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution (tar): 4.4.14</p>
<p>Direct dependency fix Resolution (forever): 3.0.0</p>
</p>
</details>
<p></p>
|
non_defect
|
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href dependency hierarchy forever tgz root library forever monitor tgz chokidar tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library found in head commit a href found in base branch master vulnerability details the npm package tar aka node tar before versions and has a arbitrary file creation overwrite vulnerability due to insufficient absolute path sanitization node tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the preservepaths flag is not set to true this is achieved by stripping the absolute path root from any absolute file paths contained in a tar file for example home user bashrc would turn into home user bashrc this logic was insufficient when file paths contained repeated path roots such as home user bashrc node tar would only strip a single path root from such paths when given an absolute file path with repeating path roots the resulting path e g home user bashrc would still resolve to an absolute path thus allowing arbitrary file creation and overwrite this issue was addressed in releases and users may work around this vulnerability without upgrading by creating a custom onentry method which sanitizes the entry path or a filter method which removes entries with absolute paths see referenced github advisory for details be aware of cve which fixes a similar bug in later versions of tar publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution forever
| 0
|
213,840
| 16,540,655,832
|
IssuesEvent
|
2021-05-27 16:22:29
|
chapel-lang/chapel
|
https://api.github.com/repos/chapel-lang/chapel
|
closed
|
Using operator methods instead of standalone operator functions may thwart out intent optimizations?
|
area: Compiler area: Tests
|
### Summary of Problem
When trying to adjust the following program to use an operator method instead of an operator function, the output generated by the program greatly increased.
Here is the output for the simplified program when the operator is defined as a standalone function:
```
test1
init 1 1
(x = 1, ptr = {xx = 1})
deinit 1 1
```
And here is the increased output when the operator is defined as a method:
```
test1
init (default)
init (default)
init 1 1
lhs(0 0) = rhs(1 1)
deinit 1 1
lhs(0 0) = rhs(1 1)
deinit 1 1
(x = 1, ptr = {xx = 1})
deinit 1 1
```
### Steps to Reproduce
Here's the shorter version of the modified program. The program contains other variations with out intents and at a quick glance they seem similarly impacted. Similar programs that used in or inout intents did not seem impacted.
**Source Code:**
```chapel
class C {
var xx: int = 0;
}
record R {
var x: int = 0;
var ptr: shared C = new shared C(0);
proc init() {
this.x = 0;
this.ptr = new shared C(0);
writeln("init (default)");
}
proc init(arg:int) {
this.x = arg;
this.ptr = new shared C(arg);
writeln("init ", arg, " ", arg);
}
proc init=(other: R) {
this.x = other.x;
this.ptr = new shared C(other.ptr.xx);
writeln("init= ", other.x, " ", other.ptr.xx);
}
proc deinit() {
writeln("deinit ", x, " ", ptr.xx);
}
proc toString() {
return "(" + this.x:string + " " + this.ptr.xx:string + ")";
}
proc set1() {
this.x = 1;
this.ptr.xx = 1;
return this;
}
}
operator R.=(ref lhs:R, rhs:R) { // This line causes the difference in output, though maybe indirectly
writeln("lhs", lhs.toString(), " = rhs", rhs.toString());
lhs.x = rhs.x;
lhs.ptr = new shared C(rhs.ptr.xx);
}
proc makeR() {
return new R(1);
}
proc makeR(arg: int) {
return new R(arg);
}
proc out1(out arg: R) {
arg = makeR();
}
proc test1() {
writeln("test1");
var x: R;
out1(x);
writeln(x);
}
test1();
```
**Compile command:**
`chpl foo.chpl`
**Execution command:**
`./foo`
**Associated Future Test(s):**
This prevented the update of test/types/records/intents/out-intent.chpl to use operator methods instead of standalone operator functions. It may also be responsible for mismatches when updating the following programs in similar ways:
- test/types/records/split-init/inner-fn.chpl
- test/types/records/split-init/split-init-global1.chpl
- test/types/records/split-init/split-init-out-on.chpl
- test/library/standard/Types/copyable-custom-records.chpl
- test/types/chplhashtable/test-chpl-hashtable.chpl
<!-- Are there any tests in Chapel's test system that demonstrate this issue?
e.g. [`test/path/to/foo.chpl`](
https://github.com/chapel-lang/chapel/blob/master/test/path/to/foo.chpl
) #1234 -->
### Configuration Information
- Output of `chpl --version`: chapel 1.25 (pre-release)
|
1.0
|
Using operator methods instead of standalone operator functions may thwart out intent optimizations? - ### Summary of Problem
When trying to adjust the following program to use an operator method instead of an operator function, the output generated by the program greatly increased.
Here is the output for the simplified program when the operator is defined as a standalone function:
```
test1
init 1 1
(x = 1, ptr = {xx = 1})
deinit 1 1
```
And here is the increased output when the operator is defined as a method:
```
test1
init (default)
init (default)
init 1 1
lhs(0 0) = rhs(1 1)
deinit 1 1
lhs(0 0) = rhs(1 1)
deinit 1 1
(x = 1, ptr = {xx = 1})
deinit 1 1
```
### Steps to Reproduce
Here's the shorter version of the modified program. The program contains other variations with out intents and at a quick glance they seem similarly impacted. Similar programs that used in or inout intents did not seem impacted.
**Source Code:**
```chapel
class C {
var xx: int = 0;
}
record R {
var x: int = 0;
var ptr: shared C = new shared C(0);
proc init() {
this.x = 0;
this.ptr = new shared C(0);
writeln("init (default)");
}
proc init(arg:int) {
this.x = arg;
this.ptr = new shared C(arg);
writeln("init ", arg, " ", arg);
}
proc init=(other: R) {
this.x = other.x;
this.ptr = new shared C(other.ptr.xx);
writeln("init= ", other.x, " ", other.ptr.xx);
}
proc deinit() {
writeln("deinit ", x, " ", ptr.xx);
}
proc toString() {
return "(" + this.x:string + " " + this.ptr.xx:string + ")";
}
proc set1() {
this.x = 1;
this.ptr.xx = 1;
return this;
}
}
operator R.=(ref lhs:R, rhs:R) { // This line causes the difference in output, though maybe indirectly
writeln("lhs", lhs.toString(), " = rhs", rhs.toString());
lhs.x = rhs.x;
lhs.ptr = new shared C(rhs.ptr.xx);
}
proc makeR() {
return new R(1);
}
proc makeR(arg: int) {
return new R(arg);
}
proc out1(out arg: R) {
arg = makeR();
}
proc test1() {
writeln("test1");
var x: R;
out1(x);
writeln(x);
}
test1();
```
**Compile command:**
`chpl foo.chpl`
**Execution command:**
`./foo`
**Associated Future Test(s):**
This prevented the update of test/types/records/intents/out-intent.chpl to use operator methods instead of standalone operator functions. It may also be responsible for mismatches when updating the following programs in similar ways:
- test/types/records/split-init/inner-fn.chpl
- test/types/records/split-init/split-init-global1.chpl
- test/types/records/split-init/split-init-out-on.chpl
- test/library/standard/Types/copyable-custom-records.chpl
- test/types/chplhashtable/test-chpl-hashtable.chpl
<!-- Are there any tests in Chapel's test system that demonstrate this issue?
e.g. [`test/path/to/foo.chpl`](
https://github.com/chapel-lang/chapel/blob/master/test/path/to/foo.chpl
) #1234 -->
### Configuration Information
- Output of `chpl --version`: chapel 1.25 (pre-release)
|
non_defect
|
using operator methods instead of standalone operator functions may thwart out intent optimizations summary of problem when trying to adjust the following program to use an operator method instead of an operator function the output generated by the program greatly increased here is the output for the simplified program when the operator is defined as a standalone function init x ptr xx deinit and here is the increased output when the operator is defined as a method init default init default init lhs rhs deinit lhs rhs deinit x ptr xx deinit steps to reproduce here s the shorter version of the modified program the program contains other variations with out intents and at a quick glance they seem similarly impacted similar programs that used in or inout intents did not seem impacted source code chapel class c var xx int record r var x int var ptr shared c new shared c proc init this x this ptr new shared c writeln init default proc init arg int this x arg this ptr new shared c arg writeln init arg arg proc init other r this x other x this ptr new shared c other ptr xx writeln init other x other ptr xx proc deinit writeln deinit x ptr xx proc tostring return this x string this ptr xx string proc this x this ptr xx return this operator r ref lhs r rhs r this line causes the difference in output though maybe indirectly writeln lhs lhs tostring rhs rhs tostring lhs x rhs x lhs ptr new shared c rhs ptr xx proc maker return new r proc maker arg int return new r arg proc out arg r arg maker proc writeln var x r x writeln x compile command chpl foo chpl execution command foo associated future test s this prevented the update of test types records intents out intent chpl to use operator methods instead of standalone operator functions it may also be responsible for mismatches when updating the following programs in similar ways test types records split init inner fn chpl test types records split init split init chpl test types records split init split init out on chpl test library standard types copyable custom records chpl test types chplhashtable test chpl hashtable chpl are there any tests in chapel s test system that demonstrate this issue e g configuration information output of chpl version chapel pre release
| 0
|
112,944
| 17,112,308,250
|
IssuesEvent
|
2021-07-10 15:28:04
|
tj-actions/tj-actions.github.io
|
https://api.github.com/repos/tj-actions/tj-actions.github.io
|
opened
|
CVE-2021-33502 (High) detected in normalize-url-3.3.0.tgz, normalize-url-1.9.1.tgz
|
security vulnerability
|
## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-3.3.0.tgz</b>, <b>normalize-url-1.9.1.tgz</b></p></summary>
<p>
<details><summary><b>normalize-url-3.3.0.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p>
<p>Path to dependency file: tj-actions.github.io/package.json</p>
<p>Path to vulnerable library: tj-actions.github.io/node_modules/normalize-url</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.13.tgz (Root Library)
- optimize-cssnano-plugin-1.0.6.tgz
- cssnano-preset-default-4.0.8.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **normalize-url-3.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-1.9.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p>
<p>Path to dependency file: tj-actions.github.io/package.json</p>
<p>Path to vulnerable library: tj-actions.github.io/node_modules/normalize-url</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.13.tgz (Root Library)
- mini-css-extract-plugin-0.9.0.tgz
- :x: **normalize-url-1.9.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/tj-actions/tj-actions.github.io/commit/f4f5adc5636b7c5e1b316db8b64ba5f6cac9b920">f4f5adc5636b7c5e1b316db8b64ba5f6cac9b920</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution: normalize-url - 4.5.1, 5.3.1, 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-33502 (High) detected in normalize-url-3.3.0.tgz, normalize-url-1.9.1.tgz - ## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-3.3.0.tgz</b>, <b>normalize-url-1.9.1.tgz</b></p></summary>
<p>
<details><summary><b>normalize-url-3.3.0.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p>
<p>Path to dependency file: tj-actions.github.io/package.json</p>
<p>Path to vulnerable library: tj-actions.github.io/node_modules/normalize-url</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.13.tgz (Root Library)
- optimize-cssnano-plugin-1.0.6.tgz
- cssnano-preset-default-4.0.8.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **normalize-url-3.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-1.9.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p>
<p>Path to dependency file: tj-actions.github.io/package.json</p>
<p>Path to vulnerable library: tj-actions.github.io/node_modules/normalize-url</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.13.tgz (Root Library)
- mini-css-extract-plugin-0.9.0.tgz
- :x: **normalize-url-1.9.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/tj-actions/tj-actions.github.io/commit/f4f5adc5636b7c5e1b316db8b64ba5f6cac9b920">f4f5adc5636b7c5e1b316db8b64ba5f6cac9b920</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution: normalize-url - 4.5.1, 5.3.1, 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in normalize url tgz normalize url tgz cve high severity vulnerability vulnerable libraries normalize url tgz normalize url tgz normalize url tgz normalize a url library home page a href path to dependency file tj actions github io package json path to vulnerable library tj actions github io node modules normalize url dependency hierarchy cli service tgz root library optimize cssnano plugin tgz cssnano preset default tgz postcss normalize url tgz x normalize url tgz vulnerable library normalize url tgz normalize a url library home page a href path to dependency file tj actions github io package json path to vulnerable library tj actions github io node modules normalize url dependency hierarchy cli service tgz root library mini css extract plugin tgz x normalize url tgz vulnerable library found in head commit a href found in base branch main vulnerability details the normalize url package before x before and x before for node js has a redos regular expression denial of service issue because it has exponential performance for data urls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution normalize url step up your open source security game with whitesource
| 0
|
5,610
| 2,610,191,695
|
IssuesEvent
|
2015-02-26 19:00:36
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
详解鼻子上长黑色斑点
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
我不知道它从哪里来,要到哪里去,也不知道它因何而来,��
�何而去。时光的隧道里,它存在了千年,亦穿越了千年。期�
��的冷暖,个中的磨难,无论是深重,还是沧桑,都由它独自
承担。宛若前世预约的邂逅,只消一个照面,便刻在心尖,��
�此相互缠绕,相依相偎,彼此的世界都不再孤单。风起的日�
��,静静聆听它的浅吟低唱,风落的日子,默默细数风过的划
痕。任微风撩起岁月的裙幔,趟过飘香的温暖,漫过飞雪的��
�寒,直至生命烟消云散。但是岁月的脚步,真的无法更改吗�
��脸上的雀斑真的无法去除吗!鼻子上长黑色斑点,
《客户案例》
我是个典型的江南女子,一头长发,乌黑油亮,身材高��
�,眉清目秀,一双眼睛清辙迷人,皮肤也是出奇的白嫩,很�
��人都夸我仙女般漂亮。然而,二十五岁生日刚过的我偶然在
化妆的时候发现自己脸上居然长了黄褐斑,这下可把我吓坏��
�。平时爱说爱笑的我一下子变得沉默寡言了,一脸斑怎么见�
��啊?我的老公是一家电子公司的老板,出入各种派对的时候��
�是携我一起出席。自从长了斑,老公也不愿带我出去应酬了�
��这可把我急坏了。</br>
一次和网友聊天,网友说我长斑是由于我经常面对电脑��
�射,工作压力大,情绪波动大,饮食不规律,喜欢熬夜,就�
��引起内分泌失调,体内不同部位的气滞血瘀及病变,就会在
面部相对部位以斑块表象出来。
内分泌紊乱是色斑形成的根本原因:当人体内分泌紊乱时,��
�病理状态下互耗互损,导致了人体内的气血运行不畅,经脉�
��通,导致瘀血内停,气滞不畅,
心血不能达到皮肤颜面、营养肌肤,而皮肤中的黑色素就不��
�随着人体的正常新陈代谢排出去,长期如此,就形成了色斑�
��朋友介绍所现在有种祛斑产品,黛芙薇尔是专门针对调节内
分泌彻底祛斑的,效果十分好,我听了此话于是就在黛芙薇��
�官网订购了2个周期。</br>
开始我在他们官方网站上订购了两个周期,用的一个周��
�的时候感觉效果不大,我把自己这种情况详细询问了专家,�
��家说这种情况是正常现象,天然精华需要个调理的过程,于
是我就又继续用了第二个周期,这次果然有效果了,斑明显��
�变淡了,皮肤也开始变的滑滑的,我这次彻底放心了,等我�
��第二个周期用完,我的斑已经淡化了很多了,为了能彻底祛
斑,我又订购了一个周期,现在我已经用完四个多月了,脸��
�的斑不仅没有了,皮肤也变的又白又嫩的,摸上去滑溜溜的�
��我的很多姐妹都很羡慕,现在我把我的祛斑经历写下来,希
望更多的姐妹能祛斑成功。
阅读了鼻子上长黑色斑点,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
鼻子上长黑色斑点,同时为您分享祛斑小方法
香蕉牛奶祛斑法
香蕉:全脂牛奶:水=3:6:2,混合后像敷面膜一样涂抹至脸上,然
后轻轻的拍打脸部,然后静静的躺10—30分钟,洗掉后,你会�
��现脸绝对白了一圈。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:14
|
1.0
|
详解鼻子上长黑色斑点 - ```
《摘要》
我不知道它从哪里来,要到哪里去,也不知道它因何而来,��
�何而去。时光的隧道里,它存在了千年,亦穿越了千年。期�
��的冷暖,个中的磨难,无论是深重,还是沧桑,都由它独自
承担。宛若前世预约的邂逅,只消一个照面,便刻在心尖,��
�此相互缠绕,相依相偎,彼此的世界都不再孤单。风起的日�
��,静静聆听它的浅吟低唱,风落的日子,默默细数风过的划
痕。任微风撩起岁月的裙幔,趟过飘香的温暖,漫过飞雪的��
�寒,直至生命烟消云散。但是岁月的脚步,真的无法更改吗�
��脸上的雀斑真的无法去除吗!鼻子上长黑色斑点,
《客户案例》
我是个典型的江南女子,一头长发,乌黑油亮,身材高��
�,眉清目秀,一双眼睛清辙迷人,皮肤也是出奇的白嫩,很�
��人都夸我仙女般漂亮。然而,二十五岁生日刚过的我偶然在
化妆的时候发现自己脸上居然长了黄褐斑,这下可把我吓坏��
�。平时爱说爱笑的我一下子变得沉默寡言了,一脸斑怎么见�
��啊?我的老公是一家电子公司的老板,出入各种派对的时候��
�是携我一起出席。自从长了斑,老公也不愿带我出去应酬了�
��这可把我急坏了。</br>
一次和网友聊天,网友说我长斑是由于我经常面对电脑��
�射,工作压力大,情绪波动大,饮食不规律,喜欢熬夜,就�
��引起内分泌失调,体内不同部位的气滞血瘀及病变,就会在
面部相对部位以斑块表象出来。
内分泌紊乱是色斑形成的根本原因:当人体内分泌紊乱时,��
�病理状态下互耗互损,导致了人体内的气血运行不畅,经脉�
��通,导致瘀血内停,气滞不畅,
心血不能达到皮肤颜面、营养肌肤,而皮肤中的黑色素就不��
�随着人体的正常新陈代谢排出去,长期如此,就形成了色斑�
��朋友介绍所现在有种祛斑产品,黛芙薇尔是专门针对调节内
分泌彻底祛斑的,效果十分好,我听了此话于是就在黛芙薇��
�官网订购了2个周期。</br>
开始我在他们官方网站上订购了两个周期,用的一个周��
�的时候感觉效果不大,我把自己这种情况详细询问了专家,�
��家说这种情况是正常现象,天然精华需要个调理的过程,于
是我就又继续用了第二个周期,这次果然有效果了,斑明显��
�变淡了,皮肤也开始变的滑滑的,我这次彻底放心了,等我�
��第二个周期用完,我的斑已经淡化了很多了,为了能彻底祛
斑,我又订购了一个周期,现在我已经用完四个多月了,脸��
�的斑不仅没有了,皮肤也变的又白又嫩的,摸上去滑溜溜的�
��我的很多姐妹都很羡慕,现在我把我的祛斑经历写下来,希
望更多的姐妹能祛斑成功。
阅读了鼻子上长黑色斑点,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
鼻子上长黑色斑点,同时为您分享祛斑小方法
香蕉牛奶祛斑法
香蕉:全脂牛奶:水=3:6:2,混合后像敷面膜一样涂抹至脸上,然
后轻轻的拍打脸部,然后静静的躺10—30分钟,洗掉后,你会�
��现脸绝对白了一圈。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:14
|
defect
|
详解鼻子上长黑色斑点 《摘要》 我不知道它从哪里来,要到哪里去,也不知道它因何而来,�� �何而去。时光的隧道里,它存在了千年,亦穿越了千年。期� ��的冷暖,个中的磨难,无论是深重,还是沧桑,都由它独自 承担。宛若前世预约的邂逅,只消一个照面,便刻在心尖,�� �此相互缠绕,相依相偎,彼此的世界都不再孤单。风起的日� ��,静静聆听它的浅吟低唱,风落的日子,默默细数风过的划 痕。任微风撩起岁月的裙幔,趟过飘香的温暖,漫过飞雪的�� �寒,直至生命烟消云散。但是岁月的脚步,真的无法更改吗� ��脸上的雀斑真的无法去除吗!鼻子上长黑色斑点, 《客户案例》 我是个典型的江南女子,一头长发,乌黑油亮,身材高�� �,眉清目秀,一双眼睛清辙迷人,皮肤也是出奇的白嫩,很� ��人都夸我仙女般漂亮。然而,二十五岁生日刚过的我偶然在 化妆的时候发现自己脸上居然长了黄褐斑,这下可把我吓坏�� �。平时爱说爱笑的我一下子变得沉默寡言了,一脸斑怎么见� ��啊 我的老公是一家电子公司的老板,出入各种派对的时候�� �是携我一起出席。自从长了斑,老公也不愿带我出去应酬了� ��这可把我急坏了。 一次和网友聊天,网友说我长斑是由于我经常面对电脑�� �射,工作压力大,情绪波动大,饮食不规律,喜欢熬夜,就� ��引起内分泌失调,体内不同部位的气滞血瘀及病变,就会在 面部相对部位以斑块表象出来。 内分泌紊乱是色斑形成的根本原因:当人体内分泌紊乱时,�� �病理状态下互耗互损,导致了人体内的气血运行不畅,经脉� ��通,导致瘀血内停,气滞不畅, 心血不能达到皮肤颜面、营养肌肤,而皮肤中的黑色素就不�� �随着人体的正常新陈代谢排出去,长期如此,就形成了色斑� ��朋友介绍所现在有种祛斑产品,黛芙薇尔是专门针对调节内 分泌彻底祛斑的,效果十分好,我听了此话于是就在黛芙薇�� � 。 开始我在他们官方网站上订购了两个周期,用的一个周�� �的时候感觉效果不大,我把自己这种情况详细询问了专家,� ��家说这种情况是正常现象,天然精华需要个调理的过程,于 是我就又继续用了第二个周期,这次果然有效果了,斑明显�� �变淡了,皮肤也开始变的滑滑的,我这次彻底放心了,等我� ��第二个周期用完,我的斑已经淡化了很多了,为了能彻底祛 斑,我又订购了一个周期,现在我已经用完四个多月了,脸�� �的斑不仅没有了,皮肤也变的又白又嫩的,摸上去滑溜溜的� ��我的很多姐妹都很羡慕,现在我把我的祛斑经历写下来,希 望更多的姐妹能祛斑成功。 阅读了鼻子上长黑色斑点,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 
真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 鼻子上长黑色斑点,同时为您分享祛斑小方法 香蕉牛奶祛斑法 香蕉 全脂牛奶 水 ,混合后像敷面膜一样涂抹至脸上,然 后轻轻的拍打脸部, — ,洗掉后,你会� ��现脸绝对白了一圈。 original issue reported on code google com by additive gmail com on jul at
| 1
|
467,057
| 13,440,021,674
|
IssuesEvent
|
2020-09-07 23:18:20
|
magento/magento2
|
https://api.github.com/repos/magento/magento2
|
closed
|
Submit search button in form-mini should be disabled until minimum search length is reached
|
Area: Frontend Component: Search Fixed in 2.4.x Issue: Confirmed Issue: Format is valid Issue: Ready for Work Priority: P3 Progress: ready for dev Reproduced on 2.4.x Severity: S3
|
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.4.0) and any important information on the environment where bug is reproducible.
-->
1. Base install of Magento 2.4.0 with sample data installed.
2. The issue is prevalent on every operating system and browser.
### Steps to reproduce (*)
1. Focus on form-mini search box.
2. Type one or two alphanumeric characters in search box.
### Expected result (*)
- Submit button (magnifying glass) remains disabled until minimum search length is reached:

### Actual result (*)
- Submit button (magnifying glass) becomes enabled even if minimum search length is not reached:

### Additional Info
Minimum search length is defined in the related component at https://github.com/magento/magento2/blob/2.4-develop/app/code/Magento/Search/view/frontend/web/js/form-mini.js#L33
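The expected guard can be sketched without Magento internals. A minimal, framework-free illustration — the default of 3 mirrors `minSearchLength` in form-mini.js, but the element wiring in the trailing comment is hypothetical, not Magento's markup:

```javascript
// Guard sketch: the submit button should stay disabled until the query
// reaches the minimum search length (3 by default in form-mini.js).
function isSubmitEnabled(query, minSearchLength = 3) {
  return query.trim().length >= minSearchLength;
}

// Illustrative wiring (hypothetical element references):
// searchInput.addEventListener('input', () => {
//   submitButton.disabled = !isSubmitEnabled(searchInput.value);
// });
```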
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.3/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [ X ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
|
1.0
|
Submit search button in form-mini should be disabled until minimum search length is reached - ### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.4.0) and any important information on the environment where bug is reproducible.
-->
1. Base install of Magento 2.4.0 with sample data installed.
2. The issue is prevalent on every operating system and browser.
### Steps to reproduce (*)
1. Focus on form-mini search box.
2. Type one or two alphanumeric characters in search box.
### Expected result (*)
- Submit button (magnifying glass) remains disabled until minimum search length is reached:

### Actual result (*)
- Submit button (magnifying glass) becomes enabled even if minimum search length is not reached:

### Additional Info
Minimum search length is defined in the related component at https://github.com/magento/magento2/blob/2.4-develop/app/code/Magento/Search/view/frontend/web/js/form-mini.js#L33
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.3/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [ X ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
|
non_defect
|
submit search button in form mini should be disabled until minimum search length is reached preconditions provide the exact magento version example and any important information on the environment where bug is reproducible base install of magento with sample data installed the issue is prevalent on every operating system and browser steps to reproduce focus on form mini search box type one or two alphanumeric characters in search box expected result submit button magnifying glass remains disabled until minimum search length is reached actual result submit button magnifying glass becomes enabled even if minimum search length is not reached additional info minimum search length is defined in the related component at please provide assessment for the issue as reporter this information will help during confirmation and issue triage processes severity affects critical data or functionality and leaves users without workaround severity affects critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and does not force users to employ a workaround severity affects aesthetics professional look and feel “quality” or “usability”
| 0
|
69,778
| 22,667,392,035
|
IssuesEvent
|
2022-07-03 04:51:31
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Timestamp position on file panel is wrong
|
T-Defect
|
### Steps to reproduce
1. Open a room
2. Upload a file
3. Open a file panel
### Outcome
#### What did you expect?
The timestamp should be aligned to the right side.

#### What happened instead?
The timestamp is centered.

### Operating system
Debian
### Browser information
Firefox ESR 91
### URL for webapp
app.element.io
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Timestamp position on file panel is wrong - ### Steps to reproduce
1. Open a room
2. Upload a file
3. Open a file panel
### Outcome
#### What did you expect?
The timestamp should be aligned to the right side.

#### What happened instead?
The timestamp is centered.

### Operating system
Debian
### Browser information
Firefox ESR 91
### URL for webapp
app.element.io
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
timestamp position on file panel is wrong steps to reproduce open a room upload a file open a file panel outcome what did you expect the timestamp should be aligned to the right side what happened instead the timestamp is centered operating system debian browser information firefox esr url for webapp app element io application version no response homeserver no response will you send logs no
| 1
|
118,117
| 25,254,829,555
|
IssuesEvent
|
2022-11-15 17:11:21
|
objectos/objectos
|
https://api.github.com/repos/objectos/objectos
|
closed
|
Objectos Code: Class declarations
|
t:feature c:code a:objectos-code
|
## Test cases
- [x] tc01: final modifier
- [x] tc02: single annotation
- [x] tc03: single method
|
2.0
|
Objectos Code: Class declarations - ## Test cases
- [x] tc01: final modifier
- [x] tc02: single annotation
- [x] tc03: single method
|
non_defect
|
objectos code class declarations test cases final modifier single annotation single method
| 0
|
192,079
| 14,599,363,721
|
IssuesEvent
|
2020-12-21 03:58:00
|
github-vet/rangeloop-pointer-findings
|
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
|
closed
|
presslabs/mysql-operator: pkg/controller/mysqlbackupcron/job_backup_test.go; 3 LoC
|
fresh test tiny
|
Found a possible issue in [presslabs/mysql-operator](https://www.github.com/presslabs/mysql-operator) at [pkg/controller/mysqlbackupcron/job_backup_test.go](https://github.com/presslabs/mysql-operator/blob/47436cc5edd3d974239e45c994ccf7e22afa65e0/pkg/controller/mysqlbackupcron/job_backup_test.go#L104-L106)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to b at line 105 may start a goroutine
[Click here to see the code in its original context.](https://github.com/presslabs/mysql-operator/blob/47436cc5edd3d974239e45c994ccf7e22afa65e0/pkg/controller/mysqlbackupcron/job_backup_test.go#L104-L106)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, b := range backups {
c.Delete(context.TODO(), &b)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 47436cc5edd3d974239e45c994ccf7e22afa65e0
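For readers triaging this pattern: the aliasing the analyzer warns about has a close JavaScript analogue — closures over a `var` loop counter all share one binding, much as `&b` points at the single Go loop variable under pre-Go-1.22 semantics. This is an illustration, not the project's code:

```javascript
// All three closures share one function-scoped `i` (analogous to &b
// referring to the single loop variable on every iteration).
function captureVar() {
  var fns = [];
  for (var i = 0; i < 3; i++) {
    fns.push(function () { return i; });
  }
  return fns.map(function (f) { return f(); }); // [3, 3, 3]
}

// `let` creates a fresh binding per iteration, like the conventional
// Go fix of copying the loop variable (`b := b`) inside the loop body.
function captureLet() {
  const fns = [];
  for (let i = 0; i < 3; i++) {
    fns.push(() => i);
  }
  return fns.map((f) => f()); // [0, 1, 2]
}
```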
|
1.0
|
presslabs/mysql-operator: pkg/controller/mysqlbackupcron/job_backup_test.go; 3 LoC -
Found a possible issue in [presslabs/mysql-operator](https://www.github.com/presslabs/mysql-operator) at [pkg/controller/mysqlbackupcron/job_backup_test.go](https://github.com/presslabs/mysql-operator/blob/47436cc5edd3d974239e45c994ccf7e22afa65e0/pkg/controller/mysqlbackupcron/job_backup_test.go#L104-L106)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to b at line 105 may start a goroutine
[Click here to see the code in its original context.](https://github.com/presslabs/mysql-operator/blob/47436cc5edd3d974239e45c994ccf7e22afa65e0/pkg/controller/mysqlbackupcron/job_backup_test.go#L104-L106)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, b := range backups {
c.Delete(context.TODO(), &b)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 47436cc5edd3d974239e45c994ccf7e22afa65e0
|
non_defect
|
presslabs mysql operator pkg controller mysqlbackupcron job backup test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to b at line may start a goroutine click here to show the line s of go which triggered the analyzer go for b range backups c delete context todo b leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
|
43,131
| 11,496,713,933
|
IssuesEvent
|
2020-02-12 08:37:25
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
ConfirmationDialog and Dialog leaves screen grayed on rejection
|
defect
|
I'm trying to update my app to Angular 9 & PrimeNg 9.0.0-rc.4 and have faced this issue (which did not exist in Angular 8/PrimeNg 8):
**I'm submitting a ...** (check one with "x")
```
[X ] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
Plz find the reproduction project in the attachment. [demo.zip](https://github.com/primefaces/primeng/files/4182672/demo.zip)
It is a simple Angular 9 project created by Angular CLI with Prime Ng added to it. The actual code is in app.component.ts/html.
I've also tried reproducing the problem using the issue template:
https://stackblitz.com/edit/github-llfw9v
but was not able to reproduce it. I cannot figure out why is that.
**Current behavior**
When using OnPush change detection the screen stays grayed after the confirmation dialog is dismissed (rejected).
**Expected behavior**
Screen should not be grayed after the confirmation dialog is dismissed.
**Minimal reproduction of the problem with instructions**
1. Click 'Delete' button
2. Confirmation dialog is shown
3. Click 'No'
4. Observe that the screen is grayed.
5. Comment out line 7 in app.component.ts (so that the component does not use OnPush change detection strategy) and repeat the test. Now it works as expected.
**Please tell us about your environment:**
Windows 10, Chrome 80
* **Angular version:** 5.X
9.0.0
* **PrimeNG version:** 5.X
9.0.0-rc.4
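Mechanically, the report is consistent with how OnPush works: the view is only re-read when something marks it dirty. A toy model — plain JavaScript, not Angular — of why the overlay flag goes stale on rejection, and why calling `ChangeDetectorRef.markForCheck()` in the reject callback is a commonly suggested workaround:

```javascript
// Toy OnPush model: detectChanges() re-reads state only if the view
// was marked dirty since the last render (cf. ChangeDetectorRef.markForCheck).
function makeView(state) {
  let dirty = true; // initial render pending
  let rendered = {};
  return {
    markForCheck() { dirty = true; },
    detectChanges() {
      if (dirty) { rendered = { overlay: state.overlay }; dirty = false; }
      return rendered;
    },
  };
}

const state = { overlay: false };
const view = makeView(state);
view.detectChanges();   // initial render, no overlay

state.overlay = true;   // dialog opens and marks the view
view.markForCheck();
view.detectChanges();   // overlay shown

state.overlay = false;  // dialog rejected, but nothing marks the view...
view.detectChanges();   // ...so the rendered overlay stays true (grayed screen)

view.markForCheck();    // the workaround: mark after reject
view.detectChanges();   // overlay finally cleared
```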
|
1.0
|
ConfirmationDialog and Dialog leaves screen grayed on rejection - I'm trying to update my app to Angular 9 & PrimeNg 9.0.0-rc.4 and have faced this issue (which did not exist in Angular 8/PrimeNg 8):
**I'm submitting a ...** (check one with "x")
```
[X ] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
Plz find the reproduction project in the attachment. [demo.zip](https://github.com/primefaces/primeng/files/4182672/demo.zip)
It is a simple Angular 9 project created by Angular CLI with Prime Ng added to it. The actual code is in app.component.ts/html.
I've also tried reproducing the problem using the issue template:
https://stackblitz.com/edit/github-llfw9v
but was not able to reproduce it. I cannot figure out why is that.
**Current behavior**
When using OnPush change detection the screen stays grayed after the confirmation dialog is dismissed (rejected).
**Expected behavior**
Screen should not be grayed after the confirmation dialog is dismissed.
**Minimal reproduction of the problem with instructions**
1. Click 'Delete' button
2. Confirmation dialog is shown
3. Click 'No'
4. Observe that the screen is grayed.
5. Comment out line 7 in app.component.ts (so that the component does not use OnPush change detection strategy) and repeat the test. Now it works as expected.
**Please tell us about your environment:**
Windows 10, Chrome 80
* **Angular version:** 5.X
9.0.0
* **PrimeNG version:** 5.X
9.0.0-rc.4
|
defect
|
confirmationdialog and dialog leaves screen grayed on rejection i m trying to update my app to angular primeng rc and have faced this issue which did not exists in angular primeng i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports plz find the reproduction project in the attachment it is a simple angular project created by angular cli with prime ng added to it the actual code is in app component ts html i ve also tried reproducing the problem using the issue template but was not able to reproduce it i cannot figure out why is that current behavior when using onpush change detection the screen stays grayed after the confirmation dialog is dismissed rejected expected behavior screen should not be grayed after the confirmation dialog is dismissed minimal reproduction of the problem with instructions click delete button confirmation dialog is shown click no observer that the screen is grayed comment out line in app component ts so that the component does not use onpush change detection strategy and repeat the test now it works as expected please tell us about your environment windows chrome angular version x primeng version x rc
| 1
|
10,810
| 2,622,191,091
|
IssuesEvent
|
2015-03-04 00:23:06
|
byzhang/cudpp
|
https://api.github.com/repos/byzhang/cudpp
|
closed
|
Radix Sort fails for 40M elements on GeForce GTX 470
|
auto-migrated Milestone-Release2.0 OpSys-All Priority-High Type-Defect Usability
|
```
From an email to the CUDPP Google Group:
Hi Mark,
What OS?
Ans: CentOS 5.2
What do you mean by "otherwise"? In what circumstances do you get the
error, and in what circumstances does it run?
Ans: When I run radixSort -n=40000000, the error appears.
When you say "change the display card", do you mean physically, or
using the display control panel?
Ans: Change it physically, only one display card in the machine.
Regards,
Lemon
On Jun 3, 5:14 AM, Mark Harris <harr...@gmail.com> wrote:
- Hide quoted text -
> What OS?
> What do you mean by "otherwise"? In what circumstances do you get the
> error, and in what circumstances does it run?
> When you say "change the display card", do you mean physically, or using
the
> display control panel?
> Are you selecting the device in the app using "radixsort -device=<index>
> ..."?
> Mark
> On Thu, Jun 3, 2010 at 1:57 AM, Lemon <lemon...@gmail.com> wrote:
> > Hi, I have another problem of 'radixSort' in NVIDIA_SDK 3.0.
> > When I use GT 240 display card in linux 64 bit platform to run the
> > radixSort. It can sort about 63Million unsigned int keys and value.
> > But I have changed the display card to GTX 470 recently , and run the
> > radixSort. It can only sort about 30Million unsigned int keys and
> > value, otherwise it throws the exception:
> > Cuda error: after radixsort in file 'testradixsort.cpp' in
> > line 287 : invalid configuration argument.
> > I am not sure that is it proper to ask this question here, but anyone
> > can give me some help?
> > Thanks.
> > Lemon
```
Original issue reported on code.google.com by `harr...@gmail.com` on 4 Jun 2010 at 12:15
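One plausible explanation for the "invalid configuration argument" at ~40M elements — an assumption, not confirmed in the thread — is a 1D grid-size overflow: on compute-capability 2.x parts such as the GTX 470, `gridDim.x` is limited to 65535, so a launch sized one-thread-per-element from 40M elements exceeds it unless the grid is split. The arithmetic:

```javascript
// Blocks required for a naive one-thread-per-element kernel launch.
function blocksNeeded(n, threadsPerBlock) {
  return Math.ceil(n / threadsPerBlock);
}

const MAX_GRID_DIM_X = 65535; // 1D grid limit on compute capability <= 2.x

blocksNeeded(40000000, 256);                  // 156250 blocks
blocksNeeded(40000000, 256) > MAX_GRID_DIM_X; // true: such a launch would be rejected
```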
|
1.0
|
Radix Sort fails for 40M elements on GeForce GTX 470 - ```
From an email to the CUDPP Google Group:
Hi Mark,
What OS?
Ans: CentOS 5.2
What do you mean by "otherwise"? In what circumstances do you get the
error, and in what circumstances does it run?
Ans: When I run radixSort -n=40000000, the error appears.
When you say "change the display card", do you mean physically, or
using the display control panel?
Ans: Change it physically, only one display card in the machine.
Regards,
Lemon
On Jun 3, 5:14 AM, Mark Harris <harr...@gmail.com> wrote:
- Hide quoted text -
> What OS?
> What do you mean by "otherwise"? In what circumstances do you get the
> error, and in what circumstances does it run?
> When you say "change the display card", do you mean physically, or using
the
> display control panel?
> Are you selecting the device in the app using "radixsort -device=<index>
> ..."?
> Mark
> On Thu, Jun 3, 2010 at 1:57 AM, Lemon <lemon...@gmail.com> wrote:
> > Hi, I have another problem of 'radixSort' in NVIDIA_SDK 3.0.
> > When I use GT 240 display card in linux 64 bit platform to run the
> > radixSort. It can sort about 63Million unsigned int keys and value.
> > But I have changed the display card to GTX 470 recently , and run the
> > radixSort. It can only sort about 30Million unsigned int keys and
> > value, otherwise it throws the exception:
> > Cuda error: after radixsort in file 'testradixsort.cpp' in
> > line 287 : invalid configuration argument.
> > I am not sure that is it proper to ask this question here, but anyone
> > can give me some help?
> > Thanks.
> > Lemon
```
Original issue reported on code.google.com by `harr...@gmail.com` on 4 Jun 2010 at 12:15
|
defect
|
radix sort fails for elements on geforce gtx from an email to the cudpp google group hi mark what os ans centos what do you mean by otherwise in what circumstances do you get the error and in what circumstances does it run ans when i run radixsort n the error appears when you say change the display card do you mean physically or using the display control panel ans change it physically only one display card in the machine regards lemon on mark harris wrote hide quoted text what os what do you mean by otherwise in what circumstances do you get the error and in what circumstances does it run when you say change the display card do you mean physically or using the display control panel are you selecting the device in the app using radixsort device mark on thu jun at am lemon wrote hi i have another problem of radixsort in nvidia sdk when i use gt display card in linux bit platform to run the radixsort it can sort about unsigned int keys and value but i have changed the display card to gtx recently and run the radixsort it can only sort about unsigned int keys and value otherwise it throws the exception cuda error after radixsort in file testradixsort cpp in line invalid configuration argument i am not sure that is it proper to ask this question here but anyone can give me some help thanks lemon original issue reported on code google com by harr gmail com on jun at
| 1
|
1,017
| 9,168,529,036
|
IssuesEvent
|
2019-03-02 23:12:32
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Question about reboot options during update
|
automation/svc cxp product-question triaged
|
* Does always reboot mean "always reboot if you installed updates" or does it actually mean "always reboot even when no updates were installed"
* How is "Reboot if required" working? Especially for Linux OSes how is it determined that a reboot is needed?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: c3461048-c7fc-3979-a818-39af99d5e6bb
* Version Independent ID: d0e5e766-ef63-d934-b21b-678933a5cc65
* Content: [Manage updates and patches for your Azure Windows VMs](https://docs.microsoft.com/en-us/azure/automation/automation-tutorial-update-management#schedule-an-update-deployment)
* Content Source: [articles/automation/automation-tutorial-update-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-tutorial-update-management.md)
* Service: **automation**
* GitHub Login: @zjalexander
* Microsoft Alias: **zachal**
|
1.0
|
Question about reboot options during update - * Does always reboot mean "always reboot if you installed updates" or does it actually mean "always reboot even when no updates were installed"
* How is "Reboot if required" working? Especially for Linux OSes how is it determined that a reboot is needed?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: c3461048-c7fc-3979-a818-39af99d5e6bb
* Version Independent ID: d0e5e766-ef63-d934-b21b-678933a5cc65
* Content: [Manage updates and patches for your Azure Windows VMs](https://docs.microsoft.com/en-us/azure/automation/automation-tutorial-update-management#schedule-an-update-deployment)
* Content Source: [articles/automation/automation-tutorial-update-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-tutorial-update-management.md)
* Service: **automation**
* GitHub Login: @zjalexander
* Microsoft Alias: **zachal**
|
non_defect
|
question about reboot options during update does always reboot mean always reboot if you installed updates or does it actually mean always reboot even when no updates were installed how is reboot if required working especially for linux oses how is it determined that a reboot is needed document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation github login zjalexander microsoft alias zachal
| 0
|
55,332
| 14,371,402,606
|
IssuesEvent
|
2020-12-01 12:31:31
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
commandLink returns incorrect item on dataTable with liveScroll="true" lazy="true"
|
defect
|
When using a commandLink (or commandButton) on a liveScroll, lazy datatable, the first items (before the first lazy load) report the correct item. After loading the second batch of items, items in the first batch correspond to items in the second batch.
It seems that JSF reuses the indexes when loading more items using ajax.
This has been reproduced on 4.0 and 5.2.
The problem has been reported previously as [Issue 6879: DataTable with lazy loading, livescroll and commandbutton not working](https://code.google.com/p/primefaces/issues/detail?id=6879) on Google Code. It has been marked as WontMoveToGithub.
|
1.0
|
commandLink returns incorrect item on dataTable with liveScroll="true" lazy="true" - When using a commandLink (or commandButton) on a liveScroll, lazy datatable, the first items (before the first lazy load) report the correct item. After loading the second batch of items, items in the first batch correspond to items in the second batch.
It seems that JSF reuses the indexes when loading more items using ajax.
This has been reproduced on 4.0 and 5.2.
The problem has been reported previously as [Issue 6879: DataTable with lazy loading, livescroll and commandbutton not working](https://code.google.com/p/primefaces/issues/detail?id=6879) on Google Code. It has been marked as WontMoveToGithub.
|
defect
|
commandlink returns incorrect item on datatable with livescroll true lazy true when using a commandlink or commandbutton on a livescroll lazy datatable the first items before the first lazy load report the correct item after loading the second batch of items items in the first batch correspond to items in the second batch it seems that jsf reuses the indexes when loading more items using ajax this has been reproduced on and the problem have been reported previously as on google code it has been marked as wontmovetogithub
| 1
|
34,350
| 7,447,464,685
|
IssuesEvent
|
2018-03-28 12:38:23
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
opened
|
Reading binary data from FileStream gets incorrect result
|
defect
|
When reading a remote file data using **File** / **FileStream** objects, distorted binary values could be received.
### Steps To Reproduce
https://deck.net/93b2399f0f095280669e226fda5a7ea6
```csharp
public class Program
{
public static void Main()
{
// The test file by the link below contains 256 bytes, counting from 0 to 255.
const string testFilePath = "https://raw.githubusercontent.com/AndreyChechel/Bridge/Attachments/Issues/Attachments/Issue3495_TestFile.txt";
var bytes = System.IO.File.ReadAllBytes(testFilePath);
if (bytes.Length != 256) {
Console.WriteLine("[Error]: Array length is incorrect.");
return;
}
var hasError = false;
for (var i = 0; i < 256; i++) {
if(bytes[i] != i) {
Console.WriteLine($"Expected: {i}; Actual {bytes[i]}.");
hasError = true;
}
}
if(!hasError) {
Console.WriteLine("Success!");
}
}
}
```
### Expected Result
Console output:
```
Success!
```
### Actual Result
Console output:
```
Expected: 128; Actual 172.
Expected: 130; Actual 26.
Expected: 131; Actual 146.
Expected: 132; Actual 30.
Expected: 133; Actual 38.
Expected: 134; Actual 32.
Expected: 135; Actual 33.
Expected: 136; Actual 198.
Expected: 137; Actual 48.
Expected: 138; Actual 96.
Expected: 139; Actual 57.
Expected: 140; Actual 82.
Expected: 142; Actual 125.
Expected: 145; Actual 24.
Expected: 146; Actual 25.
Expected: 147; Actual 28.
Expected: 148; Actual 29.
Expected: 149; Actual 34.
Expected: 150; Actual 19.
Expected: 151; Actual 20.
Expected: 152; Actual 220.
Expected: 153; Actual 34.
Expected: 154; Actual 97.
Expected: 155; Actual 58.
Expected: 156; Actual 83.
Expected: 158; Actual 126.
Expected: 159; Actual 120.
```
|
1.0
|
Reading binary data from FileStream gets incorrect result - When reading a remote file data using **File** / **FileStream** objects, distorted binary values could be received.
### Steps To Reproduce
https://deck.net/93b2399f0f095280669e226fda5a7ea6
```csharp
public class Program
{
public static void Main()
{
// The test file by the link below contains 256 bytes, counting from 0 to 255.
const string testFilePath = "https://raw.githubusercontent.com/AndreyChechel/Bridge/Attachments/Issues/Attachments/Issue3495_TestFile.txt";
var bytes = System.IO.File.ReadAllBytes(testFilePath);
if (bytes.Length != 256) {
Console.WriteLine("[Error]: Array length is incorrect.");
return;
}
var hasError = false;
for (var i = 0; i < 256; i++) {
if(bytes[i] != i) {
Console.WriteLine($"Expected: {i}; Actual {bytes[i]}.");
hasError = true;
}
}
if(!hasError) {
Console.WriteLine("Success!");
}
}
}
```
### Expected Result
Console output:
```
Success!
```
### Actual Result
Console output:
```
Expected: 128; Actual 172.
Expected: 130; Actual 26.
Expected: 131; Actual 146.
Expected: 132; Actual 30.
Expected: 133; Actual 38.
Expected: 134; Actual 32.
Expected: 135; Actual 33.
Expected: 136; Actual 198.
Expected: 137; Actual 48.
Expected: 138; Actual 96.
Expected: 139; Actual 57.
Expected: 140; Actual 82.
Expected: 142; Actual 125.
Expected: 145; Actual 24.
Expected: 146; Actual 25.
Expected: 147; Actual 28.
Expected: 148; Actual 29.
Expected: 149; Actual 34.
Expected: 150; Actual 19.
Expected: 151; Actual 20.
Expected: 152; Actual 220.
Expected: 153; Actual 34.
Expected: 154; Actual 97.
Expected: 155; Actual 58.
Expected: 156; Actual 83.
Expected: 158; Actual 126.
Expected: 159; Actual 120.
```
|
defect
|
reading binary data from filestream gets incorrect result when reading a remote file data using file filestream objects distorted binary values could be received steps to reproduce csharp public class program public static void main the test file by the link below contains bytes counting from to const string testfilepath var bytes system io file readallbytes testfilepath if bytes length console writeline array length is incorrect return var haserror false for var i i i if bytes i console writeline expected i actual bytes haserror true if haserror console writeline success expected result console output success actual result console output expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual expected actual
| 1
|
39,608
| 12,692,876,901
|
IssuesEvent
|
2020-06-22 01:05:13
|
kenferrara/magento2
|
https://api.github.com/repos/kenferrara/magento2
|
opened
|
CVE-2020-11022 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.12.4.js</b>, <b>jquery-1.12.4.min.js</b>, <b>jquery-1.8.0.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.12.4.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js</a></p>
<p>Path to vulnerable library: /magento2/lib/web/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.12.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js</a></p>
<p>Path to vulnerable library: /magento2/lib/web/jquery/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.0/jquery.min.js</a></p>
<p>Path to vulnerable library: /magento2/app/code/Magento/Swagger/view/frontend/web/swagger-ui/js/lib/jquery-1.8.0.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.0.min.js** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.12.4","isTransitiveDependency":false,"dependencyTree":"jquery:1.12.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.12.4","isTransitiveDependency":false,"dependencyTree":"jquery:1.12.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.8.0","isTransitiveDependency":false,"dependencyTree":"jquery:1.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11022","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-11022 (Medium) detected in multiple libraries - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.12.4.js</b>, <b>jquery-1.12.4.min.js</b>, <b>jquery-1.8.0.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.12.4.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js</a></p>
<p>Path to vulnerable library: /magento2/lib/web/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.12.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js</a></p>
<p>Path to vulnerable library: /magento2/lib/web/jquery/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.0/jquery.min.js</a></p>
<p>Path to vulnerable library: /magento2/app/code/Magento/Swagger/view/frontend/web/swagger-ui/js/lib/jquery-1.8.0.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.0.min.js** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.12.4","isTransitiveDependency":false,"dependencyTree":"jquery:1.12.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.12.4","isTransitiveDependency":false,"dependencyTree":"jquery:1.12.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.8.0","isTransitiveDependency":false,"dependencyTree":"jquery:1.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11022","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries jquery js jquery min js jquery min js jquery js javascript library for dom operations library home page a href path to vulnerable library lib web jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library lib web jquery jquery min js dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library app code magento swagger view frontend web swagger ui js lib jquery min js dependency hierarchy x jquery min js vulnerable library vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery vulnerabilityurl
| 0
|
75,559
| 15,435,845,789
|
IssuesEvent
|
2021-03-07 10:39:53
|
dodekanisou/home-automation
|
https://api.github.com/repos/dodekanisou/home-automation
|
closed
|
WS-2020-0091 (High) detected in http-proxy-1.11.2.tgz, http-proxy-1.15.2.tgz
|
security vulnerability
|
## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>http-proxy-1.11.2.tgz</b>, <b>http-proxy-1.15.2.tgz</b></p></summary>
<p>
<details><summary><b>http-proxy-1.11.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.2.tgz</a></p>
<p>Path to dependency file: home-automation/RpiHost/wwwroot/lib/mustache/package.json</p>
<p>Path to vulnerable library: home-automation/RpiHost/wwwroot/lib/mustache/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- zuul-3.12.0.tgz (Root Library)
- :x: **http-proxy-1.11.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>http-proxy-1.15.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz</a></p>
<p>Path to dependency file: home-automation/RpiHost/wwwroot/lib/admin-lte/package.json</p>
<p>Path to vulnerable library: home-automation/RpiHost/wwwroot/lib/admin-lte/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.7.tgz (Root Library)
- :x: **http-proxy-1.15.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/dodekanisou/home-automation/commit/9b930d6dfb4815ac9f51831ae63b498835ce6700">9b930d6dfb4815ac9f51831ae63b498835ce6700</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2020-0091 (High) detected in http-proxy-1.11.2.tgz, http-proxy-1.15.2.tgz - ## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>http-proxy-1.11.2.tgz</b>, <b>http-proxy-1.15.2.tgz</b></p></summary>
<p>
<details><summary><b>http-proxy-1.11.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.2.tgz</a></p>
<p>Path to dependency file: home-automation/RpiHost/wwwroot/lib/mustache/package.json</p>
<p>Path to vulnerable library: home-automation/RpiHost/wwwroot/lib/mustache/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- zuul-3.12.0.tgz (Root Library)
- :x: **http-proxy-1.11.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>http-proxy-1.15.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz</a></p>
<p>Path to dependency file: home-automation/RpiHost/wwwroot/lib/admin-lte/package.json</p>
<p>Path to vulnerable library: home-automation/RpiHost/wwwroot/lib/admin-lte/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.7.tgz (Root Library)
- :x: **http-proxy-1.15.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/dodekanisou/home-automation/commit/9b930d6dfb4815ac9f51831ae63b498835ce6700">9b930d6dfb4815ac9f51831ae63b498835ce6700</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws high detected in http proxy tgz http proxy tgz ws high severity vulnerability vulnerable libraries http proxy tgz http proxy tgz http proxy tgz http proxying for the masses library home page a href path to dependency file home automation rpihost wwwroot lib mustache package json path to vulnerable library home automation rpihost wwwroot lib mustache node modules http proxy package json dependency hierarchy zuul tgz root library x http proxy tgz vulnerable library http proxy tgz http proxying for the masses library home page a href path to dependency file home automation rpihost wwwroot lib admin lte package json path to vulnerable library home automation rpihost wwwroot lib admin lte node modules http proxy package json dependency hierarchy browser sync tgz root library x http proxy tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http proxy step up your open source security game with whitesource
| 0
|
58,924
| 16,919,281,978
|
IssuesEvent
|
2021-06-25 01:20:31
|
SAP/fundamental-ngx
|
https://api.github.com/repos/SAP/fundamental-ngx
|
opened
|
slider in RTL issue
|
Defect Hunting RTL ariba bug core
|
#### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
Slider RTL arrow keys are going in the wrong direction as well
|
1.0
|
slider in RTL issue - #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
Slider RTL arrow keys are going in the wrong direction as well
|
defect
|
slider in rtl issue is this a bug enhancement or feature request bug briefly describe your proposal slider rtl arrow keys are going in the wrong direction as well
| 1
|
73,814
| 24,812,300,155
|
IssuesEvent
|
2022-10-25 10:22:29
|
MarcusWolschon/osmeditor4android
|
https://api.github.com/repos/MarcusWolschon/osmeditor4android
|
closed
|
Vespucci doesn't show photos from media storage
|
Defect Medium
|
This may be partly related to #1584
Photos don't work at all with Vespucci on my Fairphone 4:
- Photos in `/storage/emulated/0/Download/Vespucci/Pictures` are not shown on photo layer
- Photos from media storage (I guess that means Gallery?) are not shown on photo layer when the option is checked in preferences
- It doesn't matter if the photos were taken with stock photo app or with a Google Photos port
- When I take a picture from within Vespucci, I get an error that it has no GPS coordinates attached to it and that I should check if geotagging is enabled. 
Here are two example images for download:
https://cloud.familie-brandtner.de/s/4gxFDzw7dbKimnZ
Bug report sent from app 2022-10-22 22:21 CEST
Vespucci 18.0.3 from Playstore
Fairphone 4
Android 11
|
1.0
|
Vespucci doesn't show photos from media storage - This may be partly related to #1584
Photos don't work at all with Vespucci on my Fairphone 4:
- Photos in `/storage/emulated/0/Download/Vespucci/Pictures` are not shown on photo layer
- Photos from media storage (I guess that means Gallery?) are not shown on photo layer when the option is checked in preferences
- It doesn't matter if the photos were taken with stock photo app or with a Google Photos port
- When I take a picture from within Vespucci, I get an error that it has no GPS coordinates attached to it and that I should check if geotagging is enabled. 
Here are two example images for download:
https://cloud.familie-brandtner.de/s/4gxFDzw7dbKimnZ
Bug report sent from app 2022-10-22 22:21 CEST
Vespucci 18.0.3 from Playstore
Fairphone 4
Android 11
|
defect
|
vespucci doesn t show photos from media storage this may be partly related to photos don t work at all with vespucci on my fairphone photos in storage emulated download vespucci pictures are not shown on photo layer photos from media storage i guess that means gallery are not shown on photo layer when the option is checked in preferences it doesn t matter if the photos were taken with stock photo app or with a google photos port when i take a picture from within vespucci i get an error that it has no gps coordinates attached to it and that i should check if geotagging is enabled here are two example images for download bug report sent from app cest vespucci from playstore fairphone android
| 1
|
27,818
| 12,708,374,915
|
IssuesEvent
|
2020-06-23 10:25:53
|
EasyAbp/PaymentService
|
https://api.github.com/repos/EasyAbp/PaymentService
|
opened
|
Implement CancelAsync() method in PaymentAppService
|
Module:PaymentService
|
* Users can cancel their own payments.
* Admin users can cancel all the payments.
* Should ensure the payment is not cancelled and not completed.
|
1.0
|
Implement CancelAsync() method in PaymentAppService - * Users can cancel their own payments.
* Admin users can cancel all the payments.
* Should ensure the payment is not cancelled and not completed.
|
non_defect
|
implement cancelasync method in paymentappservice users can cancel their own payments admin users can cancel all the payments should ensure the payment is not cancelled and not completed
| 0
|
32,732
| 6,904,940,584
|
IssuesEvent
|
2017-11-27 03:27:46
|
line/armeria
|
https://api.github.com/repos/line/armeria
|
closed
|
NoSuchElementException from DefaultStreamMessage
|
defect
|
Possibly due to #848 or #836?
```
com.linecorp.armeria.client.retry.RetryingHttpClientTest > differentBackoffBasedOnStatus FAILED
java.util.concurrent.CompletionException: java.util.NoSuchElementException
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at com.linecorp.armeria.client.retry.RetryingHttpClientTest.differentBackoffBasedOnStatus(RetryingHttpClientTest.java:490)
Caused by:
java.util.NoSuchElementException
at java.util.AbstractQueue.remove(AbstractQueue.java:117)
at com.linecorp.armeria.common.stream.DefaultStreamMessage.notifySubscriberWithElements(DefaultStreamMessage.java:384)
at com.linecorp.armeria.common.stream.DefaultStreamMessage.notifySubscriber0(DefaultStreamMessage.java:367)
at com.linecorp.armeria.common.stream.DefaultStreamMessage.notifySubscriber(DefaultStreamMessage.java:298)
at com.linecorp.armeria.common.stream.DefaultStreamMessage$SubscriptionImpl.request(DefaultStreamMessage.java:580)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$StreamMessageProcessor.requestDemand(AbstractStreamMessageDuplicator.java:321)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$DownstreamSubscription.request(AbstractStreamMessageDuplicator.java:576)
at com.linecorp.armeria.internal.HttpHeaderSubscriber.onSubscribe(HttpHeaderSubscriber.java:56)
at com.linecorp.armeria.common.stream.FilteredStreamMessage$FilteringSubscriber.onSubscribe(FilteredStreamMessage.java:129)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$DownstreamSubscription.invokeOnSubscribe(AbstractStreamMessageDuplicator.java:558)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$StreamMessageProcessor.subscribe(AbstractStreamMessageDuplicator.java:272)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$ChildStreamMessage.subscribe0(AbstractStreamMessageDuplicator.java:437)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$ChildStreamMessage.subscribe(AbstractStreamMessageDuplicator.java:409)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$ChildStreamMessage.subscribe(AbstractStreamMessageDuplicator.java:403)
at com.linecorp.armeria.common.stream.StreamMessageWrapper.subscribe(StreamMessageWrapper.java:68)
at com.linecorp.armeria.common.stream.FilteredStreamMessage.subscribe(FilteredStreamMessage.java:89)
at com.linecorp.armeria.client.retry.HttpStatusBasedRetryStrategy.shouldRetry(HttpStatusBasedRetryStrategy.java:52)
at com.linecorp.armeria.client.retry.HttpStatusBasedRetryStrategy.shouldRetry(HttpStatusBasedRetryStrategy.java:35)
at com.linecorp.armeria.client.retry.RetryingHttpClient.doExecute0(RetryingHttpClient.java:132)
at com.linecorp.armeria.client.retry.RetryingHttpClient.doExecute(RetryingHttpClient.java:116)
at com.linecorp.armeria.client.retry.RetryingHttpClient.doExecute(RetryingHttpClient.java:52)
at com.linecorp.armeria.client.retry.RetryingClient.execute(RetryingClient.java:69)
at com.linecorp.armeria.client.UserClient.execute(UserClient.java:160)
at com.linecorp.armeria.client.DefaultHttpClient.execute(DefaultHttpClient.java:55)
at com.linecorp.armeria.client.DefaultHttpClient.execute(DefaultHttpClient.java:68)
at com.linecorp.armeria.client.DefaultHttpClient.execute(DefaultHttpClient.java:64)
at com.linecorp.armeria.client.HttpClient.execute(HttpClient.java:156)
at com.linecorp.armeria.client.HttpClient.get(HttpClient.java:198)
... 1 more
```
|
1.0
|
NoSuchElementException from DefaultStreamMessage - Possibly due to #848 or #836?
```
com.linecorp.armeria.client.retry.RetryingHttpClientTest > differentBackoffBasedOnStatus FAILED
java.util.concurrent.CompletionException: java.util.NoSuchElementException
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at com.linecorp.armeria.client.retry.RetryingHttpClientTest.differentBackoffBasedOnStatus(RetryingHttpClientTest.java:490)
Caused by:
java.util.NoSuchElementException
at java.util.AbstractQueue.remove(AbstractQueue.java:117)
at com.linecorp.armeria.common.stream.DefaultStreamMessage.notifySubscriberWithElements(DefaultStreamMessage.java:384)
at com.linecorp.armeria.common.stream.DefaultStreamMessage.notifySubscriber0(DefaultStreamMessage.java:367)
at com.linecorp.armeria.common.stream.DefaultStreamMessage.notifySubscriber(DefaultStreamMessage.java:298)
at com.linecorp.armeria.common.stream.DefaultStreamMessage$SubscriptionImpl.request(DefaultStreamMessage.java:580)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$StreamMessageProcessor.requestDemand(AbstractStreamMessageDuplicator.java:321)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$DownstreamSubscription.request(AbstractStreamMessageDuplicator.java:576)
at com.linecorp.armeria.internal.HttpHeaderSubscriber.onSubscribe(HttpHeaderSubscriber.java:56)
at com.linecorp.armeria.common.stream.FilteredStreamMessage$FilteringSubscriber.onSubscribe(FilteredStreamMessage.java:129)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$DownstreamSubscription.invokeOnSubscribe(AbstractStreamMessageDuplicator.java:558)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$StreamMessageProcessor.subscribe(AbstractStreamMessageDuplicator.java:272)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$ChildStreamMessage.subscribe0(AbstractStreamMessageDuplicator.java:437)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$ChildStreamMessage.subscribe(AbstractStreamMessageDuplicator.java:409)
at com.linecorp.armeria.common.stream.AbstractStreamMessageDuplicator$ChildStreamMessage.subscribe(AbstractStreamMessageDuplicator.java:403)
at com.linecorp.armeria.common.stream.StreamMessageWrapper.subscribe(StreamMessageWrapper.java:68)
at com.linecorp.armeria.common.stream.FilteredStreamMessage.subscribe(FilteredStreamMessage.java:89)
at com.linecorp.armeria.client.retry.HttpStatusBasedRetryStrategy.shouldRetry(HttpStatusBasedRetryStrategy.java:52)
at com.linecorp.armeria.client.retry.HttpStatusBasedRetryStrategy.shouldRetry(HttpStatusBasedRetryStrategy.java:35)
at com.linecorp.armeria.client.retry.RetryingHttpClient.doExecute0(RetryingHttpClient.java:132)
at com.linecorp.armeria.client.retry.RetryingHttpClient.doExecute(RetryingHttpClient.java:116)
at com.linecorp.armeria.client.retry.RetryingHttpClient.doExecute(RetryingHttpClient.java:52)
at com.linecorp.armeria.client.retry.RetryingClient.execute(RetryingClient.java:69)
at com.linecorp.armeria.client.UserClient.execute(UserClient.java:160)
at com.linecorp.armeria.client.DefaultHttpClient.execute(DefaultHttpClient.java:55)
at com.linecorp.armeria.client.DefaultHttpClient.execute(DefaultHttpClient.java:68)
at com.linecorp.armeria.client.DefaultHttpClient.execute(DefaultHttpClient.java:64)
at com.linecorp.armeria.client.HttpClient.execute(HttpClient.java:156)
at com.linecorp.armeria.client.HttpClient.get(HttpClient.java:198)
... 1 more
```
|
defect
|
nosuchelementexception from defaultstreammessage possibly due to or com linecorp armeria client retry retryinghttpclienttest differentbackoffbasedonstatus failed java util concurrent completionexception java util nosuchelementexception at java util concurrent completablefuture reportjoin completablefuture java at java util concurrent completablefuture join completablefuture java at com linecorp armeria client retry retryinghttpclienttest differentbackoffbasedonstatus retryinghttpclienttest java caused by java util nosuchelementexception at java util abstractqueue remove abstractqueue java at com linecorp armeria common stream defaultstreammessage notifysubscriberwithelements defaultstreammessage java at com linecorp armeria common stream defaultstreammessage defaultstreammessage java at com linecorp armeria common stream defaultstreammessage notifysubscriber defaultstreammessage java at com linecorp armeria common stream defaultstreammessage subscriptionimpl request defaultstreammessage java at com linecorp armeria common stream abstractstreammessageduplicator streammessageprocessor requestdemand abstractstreammessageduplicator java at com linecorp armeria common stream abstractstreammessageduplicator downstreamsubscription request abstractstreammessageduplicator java at com linecorp armeria internal httpheadersubscriber onsubscribe httpheadersubscriber java at com linecorp armeria common stream filteredstreammessage filteringsubscriber onsubscribe filteredstreammessage java at com linecorp armeria common stream abstractstreammessageduplicator downstreamsubscription invokeonsubscribe abstractstreammessageduplicator java at com linecorp armeria common stream abstractstreammessageduplicator streammessageprocessor subscribe abstractstreammessageduplicator java at com linecorp armeria common stream abstractstreammessageduplicator childstreammessage abstractstreammessageduplicator java at com linecorp armeria common stream abstractstreammessageduplicator childstreammessage subscribe abstractstreammessageduplicator java at com linecorp armeria common stream abstractstreammessageduplicator childstreammessage subscribe abstractstreammessageduplicator java at com linecorp armeria common stream streammessagewrapper subscribe streammessagewrapper java at com linecorp armeria common stream filteredstreammessage subscribe filteredstreammessage java at com linecorp armeria client retry httpstatusbasedretrystrategy shouldretry httpstatusbasedretrystrategy java at com linecorp armeria client retry httpstatusbasedretrystrategy shouldretry httpstatusbasedretrystrategy java at com linecorp armeria client retry retryinghttpclient retryinghttpclient java at com linecorp armeria client retry retryinghttpclient doexecute retryinghttpclient java at com linecorp armeria client retry retryinghttpclient doexecute retryinghttpclient java at com linecorp armeria client retry retryingclient execute retryingclient java at com linecorp armeria client userclient execute userclient java at com linecorp armeria client defaulthttpclient execute defaulthttpclient java at com linecorp armeria client defaulthttpclient execute defaulthttpclient java at com linecorp armeria client defaulthttpclient execute defaulthttpclient java at com linecorp armeria client httpclient execute httpclient java at com linecorp armeria client httpclient get httpclient java more
| 1
|
280,118
| 24,280,257,138
|
IssuesEvent
|
2022-09-28 16:45:50
|
Ascentroid/Ascentroid
|
https://api.github.com/repos/Ascentroid/Ascentroid
|
closed
|
Installer uninstall detection problem
|
bug fixed needs testing
|
When the installer is instructed by the user to install on Windows for the current user (not all users), it does not create the uninstall registry entry that would let it detect an existing installation to uninstall when the installer is run again. I'll have to see if I can create a workaround for this. Sigh.
|
1.0
|
Installer uninstall detection problem - When the installer is instructed by the user to install on Windows for the current user (not all users), it does not create the uninstall registry entry that would let it detect an existing installation to uninstall when the installer is run again. I'll have to see if I can create a workaround for this. Sigh.
|
non_defect
|
installer uninstall detection problem when the installer is instructed by the user to install on windows for the current user not all users it does not create the uninstall registry entry in order for it to detect to uninstall when the installer is run again i ll have to see if i can create a workaround for this sigh
| 0
|
18,715
| 3,080,772,479
|
IssuesEvent
|
2015-08-22 02:11:32
|
prettydiff/prettydiff
|
https://api.github.com/repos/prettydiff/prettydiff
|
closed
|
wrong jsx formatting
|
Defect QA
|
```js
export default class MyComponent extends React.Component {
render() {
const options = {a:1};
return (
<div>
<div {...options} />
</div>
);
}
}
```
is being beautified to this:
```js
export default class MyComponent extends React.Component {
render() {
const options = {
a: 1
};
return (
<div>
<div </div {...options}/> >
);
}
}
```
|
1.0
|
wrong jsx formatting - ```js
export default class MyComponent extends React.Component {
render() {
const options = {a:1};
return (
<div>
<div {...options} />
</div>
);
}
}
```
is being beautified to this:
```js
export default class MyComponent extends React.Component {
render() {
const options = {
a: 1
};
return (
<div>
<div </div {...options}/> >
);
}
}
```
|
defect
|
wrong jsx formatting js export default class mycomponent extends react component render const options a return is being beautified to this js export default class mycomponent extends react component render const options a return
| 1
|
411,120
| 27,813,449,595
|
IssuesEvent
|
2023-03-18 12:03:35
|
GDSCITM/GDSCITM-official-website
|
https://api.github.com/repos/GDSCITM/GDSCITM-official-website
|
closed
|
📚License is not identifiable by GitHub
|
documentation area/doc
|
### 💬 Description
Update the readme.md file for the given license bar.

### 👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🔖 Have you read the Code of Conduct?
- [X] Please search to see if an issue already exists for the bug or typo error you wanna add.
|
1.0
|
📚License is not identifiable by GitHub - ### 💬 Description
Update the readme.md file for the given license bar.

### 👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🔖 Have you read the Code of Conduct?
- [X] Please search to see if an issue already exists for the bug or typo error you wanna add.
|
non_defect
|
📚license is not identifiable by github 💬 description update readme md file for the given license bar 👀 have you spent some time to check if this issue has been raised before i checked and didn t find similar issue 🔖 have you read the code of conduct please search to see if an issue already exists for the bug or typo error you wanna add
| 0
|
12,719
| 5,231,653,582
|
IssuesEvent
|
2017-01-30 04:25:33
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
opened
|
java:jwt_token_creds failing
|
BUILDPONY flaky test
|
cloud_to_prod_auth:default:java:jwt_token_creds failed in https://grpc-testing.appspot.com/job/gRPC_interop_pull_requests/10969/consoleFull
```
Jan 29, 2017 5:01:58 AM io.netty.util.internal.MacAddressUtil defaultMachineId
WARNING: Failed to find a usable hardware address from the network interfaces; using random bytes: 69:61:e5:0d:72:6b:0b:d6
Shutting down
Exception in thread "main" io.grpc.StatusRuntimeException: UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:227)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:208)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:141)
at io.grpc.testing.integration.TestServiceGrpc$TestServiceBlockingStub.unaryCall(TestServiceGrpc.java:408)
at io.grpc.testing.integration.AbstractInteropTest.jwtTokenCreds(AbstractInteropTest.java:1184)
at io.grpc.testing.integration.TestServiceClient.runTest(TestServiceClient.java:242)
at io.grpc.testing.integration.TestServiceClient.run(TestServiceClient.java:194)
at io.grpc.testing.integration.TestServiceClient.main(TestServiceClient.java:78)
Jan 29, 2017 5:01:59 AM io.grpc.internal.ManagedChannelImpl maybeTerminateChannel
INFO: [io.grpc.internal.ManagedChannelImpl-1] Terminated
FAILED: cloud_to_prod_auth:default:java:jwt_token_creds [ret=1, pid=22006]
```
|
1.0
|
java:jwt_token_creds failing - cloud_to_prod_auth:default:java:jwt_token_creds failed in https://grpc-testing.appspot.com/job/gRPC_interop_pull_requests/10969/consoleFull
```
Jan 29, 2017 5:01:58 AM io.netty.util.internal.MacAddressUtil defaultMachineId
WARNING: Failed to find a usable hardware address from the network interfaces; using random bytes: 69:61:e5:0d:72:6b:0b:d6
Shutting down
Exception in thread "main" io.grpc.StatusRuntimeException: UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:227)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:208)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:141)
at io.grpc.testing.integration.TestServiceGrpc$TestServiceBlockingStub.unaryCall(TestServiceGrpc.java:408)
at io.grpc.testing.integration.AbstractInteropTest.jwtTokenCreds(AbstractInteropTest.java:1184)
at io.grpc.testing.integration.TestServiceClient.runTest(TestServiceClient.java:242)
at io.grpc.testing.integration.TestServiceClient.run(TestServiceClient.java:194)
at io.grpc.testing.integration.TestServiceClient.main(TestServiceClient.java:78)
Jan 29, 2017 5:01:59 AM io.grpc.internal.ManagedChannelImpl maybeTerminateChannel
INFO: [io.grpc.internal.ManagedChannelImpl-1] Terminated
FAILED: cloud_to_prod_auth:default:java:jwt_token_creds [ret=1, pid=22006]
```
|
non_defect
|
java jwt token creds failing cloud to prod auth default java jwt token creds failed in jan am io netty util internal macaddressutil defaultmachineid warning failed to find a usable hardware address from the network interfaces using random bytes shutting down exception in thread main io grpc statusruntimeexception unauthenticated request had invalid authentication credentials expected oauth access token login cookie or other valid authentication credential see at io grpc stub clientcalls tostatusruntimeexception clientcalls java at io grpc stub clientcalls getunchecked clientcalls java at io grpc stub clientcalls blockingunarycall clientcalls java at io grpc testing integration testservicegrpc testserviceblockingstub unarycall testservicegrpc java at io grpc testing integration abstractinteroptest jwttokencreds abstractinteroptest java at io grpc testing integration testserviceclient runtest testserviceclient java at io grpc testing integration testserviceclient run testserviceclient java at io grpc testing integration testserviceclient main testserviceclient java jan am io grpc internal managedchannelimpl maybeterminatechannel info terminated failed cloud to prod auth default java jwt token creds
| 0
|
274,671
| 23,856,004,682
|
IssuesEvent
|
2022-09-06 23:39:49
|
mountaincharlie/project-five-midi-dragon
|
https://api.github.com/repos/mountaincharlie/project-five-midi-dragon
|
closed
|
Testing
|
Testing
|
responsiveness across screen sizes:
- [x] layout change from mobile to large screens
checkout template:
- [x] check that the page is only rendered if the user has something in their tracklist (else redirected and a message displayed)
- [x] check that the items from the tracklist are correctly listed and the total price is correct
- [x] check that if the user has details in the My Details section of their Profile, it is pre-popped in the Your Details section (may need to wait until the Profile app is created)
- [x] check that the 'My Tracklist' btn takes the user back to their Tracklist
automated testing:
- [x] create test_urls.py to check checkout urls render pages correctly
document:
- [x] document all tests in README
|
1.0
|
Testing - responsiveness across screen sizes:
- [x] layout change from mobile to large screens
checkout template:
- [x] check that the page is only rendered if the user has something in their tracklist (else redirected and a message displayed)
- [x] check that the items from the tracklist are correctly listed and the total price is correct
- [x] check that if the user has details in the My Details section of their Profile, it is pre-popped in the Your Details section (may need to wait until the Profile app is created)
- [x] check that the 'My Tracklist' btn takes the user back to their Tracklist
automated testing:
- [x] create test_urls.py to check checkout urls render pages correctly
document:
- [x] document all tests in README
|
non_defect
|
testing responsiveness across screen sizes layout change from mobile to large screens checkout template check that the page is only rendered if the user has something in their tracklist else redirected and a message displayed check that the items from the tracklist are correctly listed and the total price is correct check that if the user has details in the my details section of their profile it is pre popped in the your details section may need to wait until the profile app is created check that the my tracklist btn takes the user back to their tracklist automated testing create test urls py to check checkout urls render pages correctly document document all tests in readme
| 0
|
219,062
| 7,333,102,339
|
IssuesEvent
|
2018-03-05 18:19:29
|
NCEAS/metacat
|
https://api.github.com/repos/NCEAS/metacat
|
closed
|
Disable the feature of downloading external schemas for unregistered namespaces
|
Component: Bugzilla-Id Priority: Normal Status: Resolved Tracker: Feature
|
---
Author Name: **Jing Tao** (Jing Tao)
Original Redmine Issue: 7096, https://projects.ecoinformatics.org/ecoinfo/issues/7096
Original Date: 2016-08-30
Original Assignee: Jing Tao
---
Currently, Metacat will download the schemas during metadata object insertion if it finds that the namespaces are not present in the local cache system. In order to use the same schemas to validate xml objects, we need to disable this feature.
If the schemas of the object can't be found locally, Metacat will reject the document (throw an exception). However, we need to provide the following features for users:
1. A clear message specifying that the schema is not registered and that the user should contact the Metacat operators.
2. A new admin page for Metacat operators to register schemas with/without namespaces.
|
1.0
|
Disable the feature of downloading external schemas for unregistered namespaces - ---
Author Name: **Jing Tao** (Jing Tao)
Original Redmine Issue: 7096, https://projects.ecoinformatics.org/ecoinfo/issues/7096
Original Date: 2016-08-30
Original Assignee: Jing Tao
---
Currently, Metacat will download the schemas during metadata object insertion if it finds that the namespaces are not present in the local cache system. In order to use the same schemas to validate xml objects, we need to disable this feature.
If the schemas of the object can't be found locally, Metacat will reject the document (throw an exception). However, we need to provide the following features for users:
1. A clear message specifying that the schema is not registered and that the user should contact the Metacat operators.
2. A new admin page for Metacat operators to register schemas with/without namespaces.
|
non_defect
|
disable the feature of downloading external schemas for unregistered namespaces author name jing tao jing tao original redmine issue original date original assignee jing tao currently metacat will download the schemas during the metadata object inserting if it finds the namespaces is not found in the local cache system in order to use the same schemas to validate xml objects we need to disable the feature if the schemas of the object can t be found locally metacat will reject the documents throw an exception however we need to provide those features for users a clear message to specify that the schema is not registered and contact metacat operators a new admin page for metacat operators to register schemas with without namespaces
| 0
|
69,243
| 22,291,344,438
|
IssuesEvent
|
2022-06-12 12:11:43
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
Unable to automatically mount ZFS share - Centos 8 Realtime Kernel
|
Type: Defect Status: Stale Status: Triage Needed
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Centos
Distribution Version | Stream
Linux Kernel | 4.18.0-277.rt7.42.el8.x86_64
Architecture | x64
ZFS Version |2.0.3-1
SPL Version |2.0.3-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
After reboot there is no pool visible.
Using DKMS module, installation seems fine, no errors, but after reboot there is an issue shown during kernel boot and after that zfs datasets are unavailable.
After all shown below services restart, pool is visible and datasets also.
### Describe how to reproduce the problem
old version of ZFS was removed
#sudo dnf remove zfs zfs-kmod spl spl-kmod libzfs2 libnvpair1 libuutil1 libzpool2 zfs-release
centos 8 release repo was installed
#sudo dnf install http://zfsonlinux.org/epel/zfs-release.el8_3.noarch.rpm
key added
#gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
inside zfs.repo testing DKMS repo was enabled
next I installed ZFS 2.0.3 package
#sudo dnf install zfs
and at the end all necessary services was enabled and system restarted
#systemctl enable zfs-import-cache
#systemctl enable zfs-import-scan
#systemctl enable zfs-mount
#systemctl enable zfs-share
#systemctl enable zfs-zed
#systemctl enable zfs.target
#systemctl enable zfs-import.target
#reboot
Of course after reboot there was no pool visible.
###Log messages shown below :
```
Mar 2 01:10:58 nserver kernel: Large kmem_alloc(69120, 0x1000), please file an issue at:#012https://github.com/openzfs/zfs/issues/new
Mar 2 01:10:58 nserver kernel: CPU: 136 PID: 5153 Comm: modprobe Tainted: P OE --------- - - 4.18.0-277.rt7.42.el8.x86_64 #1
Mar 2 01:10:58 nserver kernel: Hardware name: Dell Inc. PowerEdge R830/0VVT0H, BIOS 1.11.0 12/30/2019
Mar 2 01:10:58 nserver kernel: Call Trace:
Mar 2 01:10:58 nserver kernel: dump_stack+0x5c/0x80
Mar 2 01:10:58 nserver kernel: spl_kmem_zalloc.cold.3+0x14/0x19 [spl]
Mar 2 01:10:58 nserver kernel: init_module+0x45/0x1000 [zzstd]
Mar 2 01:10:58 nserver kernel: ? 0xffffffffc0bfa000
Mar 2 01:10:58 nserver kernel: do_one_initcall+0x4f/0x268
Mar 2 01:10:58 nserver kernel: ? __vunmap+0x1f1/0x230
Mar 2 01:10:58 nserver kernel: ? kmem_cache_alloc_trace+0x117/0x360
Mar 2 01:10:58 nserver kernel: do_init_module+0x5a/0x220
Mar 2 01:10:58 nserver kernel: load_module+0x141a/0x1770
Mar 2 01:10:58 nserver kernel: ? __do_sys_init_module+0x141/0x190
Mar 2 01:10:58 nserver kernel: __do_sys_init_module+0x141/0x190
Mar 2 01:10:58 nserver kernel: do_syscall_64+0x87/0x1a0
Mar 2 01:10:58 nserver kernel: entry_SYSCALL_64_after_hwframe+0x65/0xca
Mar 2 01:10:58 nserver kernel: RIP: 0033:0x7f816eb787fe
Mar 2 01:10:58 nserver kernel: Code: 48 8b 0d 8d 16 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 5a 16 2c 00 f7 d8 64 89 01 48
Mar 2 01:10:58 nserver kernel: RSP: 002b:00007ffcde42e618 EFLAGS: 00000246 ORIG_RAX: 00000000000000af
Mar 2 01:10:58 nserver kernel: RAX: ffffffffffffffda RBX: 000055b5588b9e80 RCX: 00007f816eb787fe
Mar 2 01:10:58 nserver kernel: RDX: 000055b5578b97d6 RSI: 00000000000a0fe8 RDI: 000055b5591e7900
Mar 2 01:10:58 nserver kernel: RBP: 000055b5578b97d6 R08: 000055b5588b901a R09: 0000000000000000
Mar 2 01:10:58 nserver kernel: R10: 000055b5588b9010 R11: 0000000000000246 R12: 000055b5591e7900
Mar 2 01:10:58 nserver kernel: R13: 000055b5588b9e30 R14: 0000000000040000 R15: 0000000000000000
Mar 2 01:10:58 nserver kernel: Large kmem_alloc(69120, 0x1000), please file an issue at:#012https://github.com/openzfs/zfs/issues/new
Mar 2 01:10:59 nserver kernel: CPU: 136 PID: 5153 Comm: modprobe Tainted: P OE --------- - - 4.18.0-277.rt7.42.el8.x86_64 #1
Mar 2 01:10:59 nserver kernel: Hardware name: Dell Inc. PowerEdge R830/0VVT0H, BIOS 1.11.0 12/30/2019
Mar 2 01:10:59 nserver kernel: Call Trace:
Mar 2 01:10:59 nserver kernel: dump_stack+0x5c/0x80
Mar 2 01:10:59 nserver kernel: spl_kmem_zalloc.cold.3+0x14/0x19 [spl]
Mar 2 01:10:59 nserver kernel: init_module+0x6a/0x1000 [zzstd]
Mar 2 01:10:59 nserver kernel: ? 0xffffffffc0bfa000
Mar 2 01:10:59 nserver kernel: do_one_initcall+0x4f/0x268
Mar 2 01:10:59 nserver kernel: ? __vunmap+0x1f1/0x230
Mar 2 01:10:59 nserver kernel: ? kmem_cache_alloc_trace+0x117/0x360
Mar 2 01:10:59 nserver kernel: do_init_module+0x5a/0x220
Mar 2 01:10:59 nserver kernel: load_module+0x141a/0x1770
Mar 2 01:10:59 nserver kernel: ? __do_sys_init_module+0x141/0x190
Mar 2 01:10:59 nserver kernel: __do_sys_init_module+0x141/0x190
Mar 2 01:10:59 nserver kernel: do_syscall_64+0x87/0x1a0
Mar 2 01:10:59 nserver kernel: entry_SYSCALL_64_after_hwframe+0x65/0xca
Mar 2 01:10:59 nserver kernel: RIP: 0033:0x7f816eb787fe
Mar 2 01:10:59 nserver kernel: Code: 48 8b 0d 8d 16 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 5a 16 2c 00 f7 d8 64 89 01 48
Mar 2 01:10:59 nserver kernel: RSP: 002b:00007ffcde42e618 EFLAGS: 00000246 ORIG_RAX: 00000000000000af
Mar 2 01:10:59 nserver kernel: RAX: ffffffffffffffda RBX: 000055b5588b9e80 RCX: 00007f816eb787fe
Mar 2 01:10:59 nserver kernel: RDX: 000055b5578b97d6 RSI: 00000000000a0fe8 RDI: 000055b5591e7900
Mar 2 01:10:59 nserver kernel: RBP: 000055b5578b97d6 R08: 000055b5588b901a R09: 0000000000000000
Mar 2 01:10:59 nserver kernel: R10: 000055b5588b9010 R11: 0000000000000246 R12: 000055b5591e7900
Mar 2 01:10:59 nserver kernel: R13: 000055b5588b9e30 R14: 0000000000040000 R15: 0000000000000000
```
|
1.0
|
Unable to automatically mount ZFS share - Centos 8 Realtime Kernel - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Centos
Distribution Version | Stream
Linux Kernel | 4.18.0-277.rt7.42.el8.x86_64
Architecture | x64
ZFS Version |2.0.3-1
SPL Version |2.0.3-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
After reboot there is no pool visible.
Using DKMS module, installation seems fine, no errors, but after reboot there is an issue shown during kernel boot and after that zfs datasets are unavailable.
After all shown below services restart, pool is visible and datasets also.
### Describe how to reproduce the problem
old version of ZFS was removed
#sudo dnf remove zfs zfs-kmod spl spl-kmod libzfs2 libnvpair1 libuutil1 libzpool2 zfs-release
centos 8 release repo was installed
#sudo dnf install http://zfsonlinux.org/epel/zfs-release.el8_3.noarch.rpm
key added
#gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
inside zfs.repo testing DKMS repo was enabled
next I installed ZFS 2.0.3 package
#sudo dnf install zfs
and at the end all necessary services was enabled and system restarted
#systemctl enable zfs-import-cache
#systemctl enable zfs-import-scan
#systemctl enable zfs-mount
#systemctl enable zfs-share
#systemctl enable zfs-zed
#systemctl enable zfs.target
#systemctl enable zfs-import.target
#reboot
Of course after reboot there was no pool visible.
###Log messages shown below :
```
Mar 2 01:10:58 nserver kernel: Large kmem_alloc(69120, 0x1000), please file an issue at:#012https://github.com/openzfs/zfs/issues/new
Mar 2 01:10:58 nserver kernel: CPU: 136 PID: 5153 Comm: modprobe Tainted: P OE --------- - - 4.18.0-277.rt7.42.el8.x86_64 #1
Mar 2 01:10:58 nserver kernel: Hardware name: Dell Inc. PowerEdge R830/0VVT0H, BIOS 1.11.0 12/30/2019
Mar 2 01:10:58 nserver kernel: Call Trace:
Mar 2 01:10:58 nserver kernel: dump_stack+0x5c/0x80
Mar 2 01:10:58 nserver kernel: spl_kmem_zalloc.cold.3+0x14/0x19 [spl]
Mar 2 01:10:58 nserver kernel: init_module+0x45/0x1000 [zzstd]
Mar 2 01:10:58 nserver kernel: ? 0xffffffffc0bfa000
Mar 2 01:10:58 nserver kernel: do_one_initcall+0x4f/0x268
Mar 2 01:10:58 nserver kernel: ? __vunmap+0x1f1/0x230
Mar 2 01:10:58 nserver kernel: ? kmem_cache_alloc_trace+0x117/0x360
Mar 2 01:10:58 nserver kernel: do_init_module+0x5a/0x220
Mar 2 01:10:58 nserver kernel: load_module+0x141a/0x1770
Mar 2 01:10:58 nserver kernel: ? __do_sys_init_module+0x141/0x190
Mar 2 01:10:58 nserver kernel: __do_sys_init_module+0x141/0x190
Mar 2 01:10:58 nserver kernel: do_syscall_64+0x87/0x1a0
Mar 2 01:10:58 nserver kernel: entry_SYSCALL_64_after_hwframe+0x65/0xca
Mar 2 01:10:58 nserver kernel: RIP: 0033:0x7f816eb787fe
Mar 2 01:10:58 nserver kernel: Code: 48 8b 0d 8d 16 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 5a 16 2c 00 f7 d8 64 89 01 48
Mar 2 01:10:58 nserver kernel: RSP: 002b:00007ffcde42e618 EFLAGS: 00000246 ORIG_RAX: 00000000000000af
Mar 2 01:10:58 nserver kernel: RAX: ffffffffffffffda RBX: 000055b5588b9e80 RCX: 00007f816eb787fe
Mar 2 01:10:58 nserver kernel: RDX: 000055b5578b97d6 RSI: 00000000000a0fe8 RDI: 000055b5591e7900
Mar 2 01:10:58 nserver kernel: RBP: 000055b5578b97d6 R08: 000055b5588b901a R09: 0000000000000000
Mar 2 01:10:58 nserver kernel: R10: 000055b5588b9010 R11: 0000000000000246 R12: 000055b5591e7900
Mar 2 01:10:58 nserver kernel: R13: 000055b5588b9e30 R14: 0000000000040000 R15: 0000000000000000
Mar 2 01:10:58 nserver kernel: Large kmem_alloc(69120, 0x1000), please file an issue at:#012https://github.com/openzfs/zfs/issues/new
Mar 2 01:10:59 nserver kernel: CPU: 136 PID: 5153 Comm: modprobe Tainted: P OE --------- - - 4.18.0-277.rt7.42.el8.x86_64 #1
Mar 2 01:10:59 nserver kernel: Hardware name: Dell Inc. PowerEdge R830/0VVT0H, BIOS 1.11.0 12/30/2019
Mar 2 01:10:59 nserver kernel: Call Trace:
Mar 2 01:10:59 nserver kernel: dump_stack+0x5c/0x80
Mar 2 01:10:59 nserver kernel: spl_kmem_zalloc.cold.3+0x14/0x19 [spl]
Mar 2 01:10:59 nserver kernel: init_module+0x6a/0x1000 [zzstd]
Mar 2 01:10:59 nserver kernel: ? 0xffffffffc0bfa000
Mar 2 01:10:59 nserver kernel: do_one_initcall+0x4f/0x268
Mar 2 01:10:59 nserver kernel: ? __vunmap+0x1f1/0x230
Mar 2 01:10:59 nserver kernel: ? kmem_cache_alloc_trace+0x117/0x360
Mar 2 01:10:59 nserver kernel: do_init_module+0x5a/0x220
Mar 2 01:10:59 nserver kernel: load_module+0x141a/0x1770
Mar 2 01:10:59 nserver kernel: ? __do_sys_init_module+0x141/0x190
Mar 2 01:10:59 nserver kernel: __do_sys_init_module+0x141/0x190
Mar 2 01:10:59 nserver kernel: do_syscall_64+0x87/0x1a0
Mar 2 01:10:59 nserver kernel: entry_SYSCALL_64_after_hwframe+0x65/0xca
Mar 2 01:10:59 nserver kernel: RIP: 0033:0x7f816eb787fe
Mar 2 01:10:59 nserver kernel: Code: 48 8b 0d 8d 16 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 5a 16 2c 00 f7 d8 64 89 01 48
Mar 2 01:10:59 nserver kernel: RSP: 002b:00007ffcde42e618 EFLAGS: 00000246 ORIG_RAX: 00000000000000af
Mar 2 01:10:59 nserver kernel: RAX: ffffffffffffffda RBX: 000055b5588b9e80 RCX: 00007f816eb787fe
Mar 2 01:10:59 nserver kernel: RDX: 000055b5578b97d6 RSI: 00000000000a0fe8 RDI: 000055b5591e7900
Mar 2 01:10:59 nserver kernel: RBP: 000055b5578b97d6 R08: 000055b5588b901a R09: 0000000000000000
Mar 2 01:10:59 nserver kernel: R10: 000055b5588b9010 R11: 0000000000000246 R12: 000055b5591e7900
Mar 2 01:10:59 nserver kernel: R13: 000055b5588b9e30 R14: 0000000000040000 R15: 0000000000000000
```
|
defect
|
unable to automatically mount zfs share centos realtime kernel thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name centos distribution version stream linux kernel architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing after reboot there is no pool visible using dkms module installation seems fine no errors but after reboot there is an issue shown during kernel boot and after that zfs datasets are unavailable after all shown below services restart pool is visible and datasets also describe how to reproduce the problem old version od zfs was removed sudo dnf remove zfs zfs kmod spl spl kmod zfs release centos release repo was installed sudo dnf install key added gpg quiet with fingerprint etc pki rpm gpg rpm gpg key zfsonlinux inside zfs repo testing dkms repo was enabled next i installed zfs package sudo dnf install zfs and at the end all necessary services was enabled and system restarted systemctl enable zfs import cache systemctl enable zfs import scan systemctl enable zfs mount systemctl enable zfs share systemctl enable zfs zed systemctl enable zfs target systemctl enable zfs import target reboot of course after reboot there was no pool visible log messages shown below mar nserver kernel large kmem alloc please file an issue at mar nserver kernel cpu pid comm modprobe tainted p oe mar nserver kernel hardware name dell inc poweredge bios mar nserver kernel call trace mar nserver kernel dump stack mar nserver kernel spl kmem zalloc cold mar nserver kernel init module mar nserver kernel mar nserver kernel do one initcall mar nserver kernel vunmap mar nserver kernel kmem cache alloc trace mar nserver kernel 
do init module mar nserver kernel load module mar nserver kernel do sys init module mar nserver kernel do sys init module mar nserver kernel do syscall mar nserver kernel entry syscall after hwframe mar nserver kernel rip mar nserver kernel code ff fa ca af ff ff mar nserver kernel rsp eflags orig rax mar nserver kernel rax ffffffffffffffda rbx rcx mar nserver kernel rdx rsi rdi mar nserver kernel rbp mar nserver kernel mar nserver kernel mar nserver kernel large kmem alloc please file an issue at mar nserver kernel cpu pid comm modprobe tainted p oe mar nserver kernel hardware name dell inc poweredge bios mar nserver kernel call trace mar nserver kernel dump stack mar nserver kernel spl kmem zalloc cold mar nserver kernel init module mar nserver kernel mar nserver kernel do one initcall mar nserver kernel vunmap mar nserver kernel kmem cache alloc trace mar nserver kernel do init module mar nserver kernel load module mar nserver kernel do sys init module mar nserver kernel do sys init module mar nserver kernel do syscall mar nserver kernel entry syscall after hwframe mar nserver kernel rip mar nserver kernel code ff fa ca af ff ff mar nserver kernel rsp eflags orig rax mar nserver kernel rax ffffffffffffffda rbx rcx mar nserver kernel rdx rsi rdi mar nserver kernel rbp mar nserver kernel mar nserver kernel
| 1
|
72,882
| 24,343,731,828
|
IssuesEvent
|
2022-10-02 02:43:31
|
colour-science/colour
|
https://api.github.com/repos/colour-science/colour
|
closed
|
[BUG]: colour.plotting.diagrams.plot_spectral_locus overrides alpha for patch?
|
Defect
|
### Description
The `colour.plotting.diagrams.plot_spectral_locus` function appears to consistently override the alpha setting such that the surrounding borders fail to adopt the colours set in rcparams. This requires always following the `colour.plotting.diagrams.plot_spectral_locus` with a call to `fig.patch.set_alpha(1.0)` to repair.
Is there anything that can be done that would remove the requirement to "fix" every call?
### Code for Reproduction
_No response_
### Exception Message
_No response_
### Environment Information
_No response_
|
1.0
|
[BUG]: colour.plotting.diagrams.plot_spectral_locus overrides alpha for patch? - ### Description
The `colour.plotting.diagrams.plot_spectral_locus` function appears to consistently override the alpha setting such that the surrounding borders fail to adopt the colours set in rcparams. This requires always following the `colour.plotting.diagrams.plot_spectral_locus` with a call to `fig.patch.set_alpha(1.0)` to repair.
Is there anything that can be done that would remove the requirement to "fix" every call?
### Code for Reproduction
_No response_
### Exception Message
_No response_
### Environment Information
_No response_
|
defect
|
colour plotting diagrams plot spectral locus overrides alpha for patch description the colour plotting diagrams plot spectral locus function appears to consistently override the alpha setting such that the surrounding borders fail to adopt the colours set in rcparams this requires always following the colour plotting diagrams plot spectral locus with a call to fig patch set alpha to repair is there anything that can be done that would remove the requirement to fix every call code for reproduction no response exception message no response environment information no response
| 1
|
49,208
| 13,185,294,103
|
IssuesEvent
|
2020-08-12 21:06:25
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
Payload-parsing needs tests for all payload types (Trac #962)
|
Incomplete Migration Migrated from Trac combo core defect
|
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/962
, reported by cweaver and owned by rmaunu</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:26",
"description": "There appear to be no real tests of decoding most payload types. Fixing this probably requires obtaining suitable example data, adding it to the test data, and writing tests of reading it. ",
"reporter": "cweaver",
"cc": "",
"resolution": "fixed",
"_ts": "1550067086520250",
"component": "combo core",
"summary": "Payload-parsing needs tests for all payload types",
"priority": "major",
"keywords": "",
"time": "2015-05-05T17:29:31",
"milestone": "",
"owner": "rmaunu",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Payload-parsing needs tests for all payload types (Trac #962) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/962
, reported by cweaver and owned by rmaunu</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:26",
"description": "There appear to be no real tests of decoding most payload types. Fixing this probably requires obtaining suitable example data, adding it to the test data, and writing tests of reading it. ",
"reporter": "cweaver",
"cc": "",
"resolution": "fixed",
"_ts": "1550067086520250",
"component": "combo core",
"summary": "Payload-parsing needs tests for all payload types",
"priority": "major",
"keywords": "",
"time": "2015-05-05T17:29:31",
"milestone": "",
"owner": "rmaunu",
"type": "defect"
}
```
</p>
</details>
|
defect
|
payload parsing needs tests for all payload types trac migrated from reported by cweaver and owned by rmaunu json status closed changetime description there appear to be no real tests of decoding most payload types fixing this probably requires obtaining suitable example data adding it to the test data and writing tests of reading it reporter cweaver cc resolution fixed ts component combo core summary payload parsing needs tests for all payload types priority major keywords time milestone owner rmaunu type defect
| 1
|
26,380
| 4,689,575,241
|
IssuesEvent
|
2016-10-11 01:11:26
|
cakephp/app
|
https://api.github.com/repos/cakephp/app
|
closed
|
IDs for base layout too generic
|
Defect
|
I knew this was gonna haunt us some day.
When trying to bake some table with a "content" textarea field it gets a "width: 95" because `#content` apparently is used then for both base layout container and the textarea field itself.
So either we
- reapply model prefixing for form fields (which I was always a big fan off to avoid those clashes in the first place)
- or we are less generic with the default classes and ids used in the main layout, e.g. `layout-` prefix: `#layout-content: {...}`
Our base template shipped with the app should be usable without lots of clashes, it should have good and sane default values to work with out of the box.
|
1.0
|
IDs for base layout too generic - I knew this was gonna haunt us some day.
When trying to bake some table with a "content" textarea field it gets a "width: 95" because `#content` apparently is used then for both base layout container and the textarea field itself.
So either we
- reapply model prefixing for form fields (which I was always a big fan off to avoid those clashes in the first place)
- or we are less generic with the default classes and ids used in the main layout, e.g. `layout-` prefix: `#layout-content: {...}`
Our base template shipped with the app should be usable without lots of clashes, it should have good and sane default values to work with out of the box.
|
defect
|
ids for base layout too generic i knew this was gonna haunt us some day when trying to bake some table with a content textarea field it gets a width because content apparently is used then for both base layout container and the textarea field itself so either we reapply model prefixing for form fields which i was always a big fan off to avoid those clashes in the first place or we are less generic with the default classes and ids used in the main layout e g layout prefix layout content our base template shipped with the app should be usable without lots of clashes it should have good and sane default values to work with out of the box
| 1
|
25,743
| 2,683,955,816
|
IssuesEvent
|
2015-03-28 14:20:04
|
ConEmu/old-issues
|
https://api.github.com/repos/ConEmu/old-issues
|
closed
|
Cant use Terminal font
|
1 star bug imported Priority-Medium wontfix
|
_From [alix.peo...@gmail.com](https://code.google.com/u/106507263718644587290/) on October 11, 2010 06:00:38_
When i set main font as terminal i see an error
Failed to create main font, Requested: Terminal 8x12, Created: Courier New.
(screenshot in attachment)
I tried to set name both as "Terminal" and "Terminal 8x12". Font "Terminal" itself exists.
Win7 Home x32. FAR 2.0 build 1.666. Need more details?
**Attachment:** [conemu.png](http://code.google.com/p/conemu-maximus5/issues/detail?id=302)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=302_
|
1.0
|
Cant use Terminal font - _From [alix.peo...@gmail.com](https://code.google.com/u/106507263718644587290/) on October 11, 2010 06:00:38_
When i set main font as terminal i see an error
Failed to create main font, Requested: Terminal 8x12, Created: Courier New.
(screenshot in attachment)
I tried to set name both as "Terminal" and "Terminal 8x12". Font "Terminal" itself exists.
Win7 Home x32. FAR 2.0 build 1.666. Need more details?
**Attachment:** [conemu.png](http://code.google.com/p/conemu-maximus5/issues/detail?id=302)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=302_
|
non_defect
|
cant use terminal font from on october when i set main font as terminal i see an error failed to create main font requested terminal created courier new screenshot in attachment i tried to set name both as terminal and terminal font terminal itself exists home far build need more details attachment original issue
| 0
|
177,095
| 21,464,545,485
|
IssuesEvent
|
2022-04-26 01:20:53
|
Hari-Nagarajan/asyncbits.com
|
https://api.github.com/repos/Hari-Nagarajan/asyncbits.com
|
closed
|
WS-2016-0090 (Medium) detected in jquery-1.9.1.js, jquery-1.7.1.min.js - autoclosed
|
security vulnerability
|
## WS-2016-0090 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.1.js</b>, <b>jquery-1.7.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/asyncbits.com/node_modules/tinycolor2/test/index.html</p>
<p>Path to vulnerable library: /asyncbits.com/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js,/asyncbits.com/node_modules/tinycolor2/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/asyncbits.com/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>Path to vulnerable library: /asyncbits.com/node_modules/sockjs/examples/express-3.x/index.html,/asyncbits.com/node_modules/sockjs/examples/hapi/html/index.html,/asyncbits.com/node_modules/sockjs/examples/express/index.html,/asyncbits.com/node_modules/sockjs/examples/multiplex/index.html,/asyncbits.com/node_modules/sockjs/examples/echo/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Hari-Nagarajan/asyncbits.com/commit/4effdbcc6c26a2145d1a99f61decf496fbcdeb1d">4effdbcc6c26a2145d1a99f61decf496fbcdeb1d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JQuery, before 2.2.0, is vulnerable to Cross-site Scripting (XSS) attacks via text/javascript response with arbitrary code execution.
<p>Publish Date: 2016-11-27
<p>URL: <a href=https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614>WS-2016-0090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614">https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614</a></p>
<p>Release Date: 2019-04-08</p>
<p>Fix Resolution: 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2016-0090 (Medium) detected in jquery-1.9.1.js, jquery-1.7.1.min.js - autoclosed - ## WS-2016-0090 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.1.js</b>, <b>jquery-1.7.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/asyncbits.com/node_modules/tinycolor2/test/index.html</p>
<p>Path to vulnerable library: /asyncbits.com/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js,/asyncbits.com/node_modules/tinycolor2/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/asyncbits.com/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>Path to vulnerable library: /asyncbits.com/node_modules/sockjs/examples/express-3.x/index.html,/asyncbits.com/node_modules/sockjs/examples/hapi/html/index.html,/asyncbits.com/node_modules/sockjs/examples/express/index.html,/asyncbits.com/node_modules/sockjs/examples/multiplex/index.html,/asyncbits.com/node_modules/sockjs/examples/echo/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Hari-Nagarajan/asyncbits.com/commit/4effdbcc6c26a2145d1a99f61decf496fbcdeb1d">4effdbcc6c26a2145d1a99f61decf496fbcdeb1d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JQuery, before 2.2.0, is vulnerable to Cross-site Scripting (XSS) attacks via text/javascript response with arbitrary code execution.
<p>Publish Date: 2016-11-27
<p>URL: <a href=https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614>WS-2016-0090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614">https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614</a></p>
<p>Release Date: 2019-04-08</p>
<p>Fix Resolution: 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws medium detected in jquery js jquery min js autoclosed ws medium severity vulnerability vulnerable libraries jquery js jquery min js jquery js javascript library for dom operations library home page a href path to dependency file tmp ws scm asyncbits com node modules test index html path to vulnerable library asyncbits com node modules test demo jquery js asyncbits com node modules demo jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm asyncbits com node modules sockjs examples express x index html path to vulnerable library asyncbits com node modules sockjs examples express x index html asyncbits com node modules sockjs examples hapi html index html asyncbits com node modules sockjs examples express index html asyncbits com node modules sockjs examples multiplex index html asyncbits com node modules sockjs examples echo index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks via text javascript response with arbitrary code execution publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
28,233
| 8,124,766,749
|
IssuesEvent
|
2018-08-16 18:38:15
|
bitcoin/bitcoin
|
https://api.github.com/repos/bitcoin/bitcoin
|
closed
|
Compilation: warnings, Alpine Linux (ARM)
|
Build system
|
```
# uname -a
Linux localhost 3.4.0-lineageos-gb263a89 #1 SMP PREEMPT Thu May 3 05:26:06 CEST 2018 armv7l GNU/Linux
```
When compiling on up-to-date Alpine Linux 3.7.0 on ARM I get many of similar warnings:
```
/usr/include/c++/6.4.0/bits/stl_map.h:483:4: note: parameter passing for argument of type 'std::_Rb_tree<CSubNet, std::pair<const CSubNet, CBanEntry>, std::_Select1st<std::pair<const CSubNet, CBanEntry> >, std::less<CSubNet>, std::allocator<std::pair<const CSubNet, CBanEntry> > >::const_iterator {aka std::_Rb_tree_const_iterator<std::pair<const CSubNet, CBanEntry> >}' will change in GCC 7.1
__i = _M_t._M_emplace_hint_unique(__i, std::piecewise_construct,
^~~
```
I am wondering if I am doing something wrong or if anything in the code needs to be fixed. The warnings do not show on x86_64 Alpine Linux using the same version of GCC.
The bitcoin version used is c93c360eec4de77af8aaa74d9140dba57160e30d
|
1.0
|
Compilation: warnings, Alpine Linux (ARM) - ```
# uname -a
Linux localhost 3.4.0-lineageos-gb263a89 #1 SMP PREEMPT Thu May 3 05:26:06 CEST 2018 armv7l GNU/Linux
```
When compiling on up-to-date Alpine Linux 3.7.0 on ARM I get many of similar warnings:
```
/usr/include/c++/6.4.0/bits/stl_map.h:483:4: note: parameter passing for argument of type 'std::_Rb_tree<CSubNet, std::pair<const CSubNet, CBanEntry>, std::_Select1st<std::pair<const CSubNet, CBanEntry> >, std::less<CSubNet>, std::allocator<std::pair<const CSubNet, CBanEntry> > >::const_iterator {aka std::_Rb_tree_const_iterator<std::pair<const CSubNet, CBanEntry> >}' will change in GCC 7.1
__i = _M_t._M_emplace_hint_unique(__i, std::piecewise_construct,
^~~
```
I am wondering if I am doing something wrong or if anything in the code needs to be fixed. The warnings do not show on x86_64 Alpine Linux using the same version of GCC.
The bitcoin version used is c93c360eec4de77af8aaa74d9140dba57160e30d
|
non_defect
|
compilation warnings alpine linux arm uname a linux localhost lineageos smp preempt thu may cest gnu linux when compiling on up to date alpine linux on arm i get many of similar warnings usr include c bits stl map h note parameter passing for argument of type std rb tree std std less std allocator const iterator aka std rb tree const iterator will change in gcc i m t m emplace hint unique i std piecewise construct i am wondering if i am doing something wrong or if anything in the code needs to be fixed the warnings do not show on alpine linux using the same version of gcc the bitcoin version used is
| 0
|
60,463
| 17,023,430,553
|
IssuesEvent
|
2021-07-03 01:59:54
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Can't select version for reverting a way
|
Component: potlatch (flash editor) Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 12.26pm, Sunday, 28th June 2009]**
using Seamonkey 1.9.1 branch build (the same Gecko version is used by Firefox 3.5) and Shockwave Flash 10.0 r22 on windows 2003 or IE6 with the same Flash version only as ActiveX Plugin and a display resolution 1280x1024
Steps to reproduce:
1. load http://www.openstreetmap.org/edit?lat=48.74607&lon=9.50743&zoom=17
(I hope the way is still in this area if we get it reverted with other tools)
2. select the boundary way with ID 29412601
3. Press h for history
4. in the history dialog open the dropdown list for selecting the revision to restore
5: You can not select an entry in the list for restoring, could be that the history list is too long ( I need the version from the 4th june)
|
1.0
|
Can't select version for reverting a way - **[Submitted to the original trac issue database at 12.26pm, Sunday, 28th June 2009]**
using Seamonkey 1.9.1 branch build (the same Gecko version is used by Firefox 3.5) and Shockwave Flash 10.0 r22 on windows 2003 or IE6 with the same Flash version only as ActiveX Plugin and a display resolution 1280x1024
Steps to reproduce:
1. load http://www.openstreetmap.org/edit?lat=48.74607&lon=9.50743&zoom=17
(I hope the way is still in this area if we get it reverted with other tools)
2. select the boundary way with ID 29412601
3. Press h for history
4. in the history dialog open the dropdown list for selecting the revision to restore
5: You can not select an entry in the list for restoring, could be that the history list is too long ( I need the version from the 4th june)
|
defect
|
can t select version for reverting a way using seamonkey branch build the same gecko version is used by firefox and shockwave flash on windows or with the same flash version only as activex plugin and a display resolution steps to reproduce load i hope the way is still in this area if we get it reverted with other tools select the boundary way with id press h for history in the history dialog open the dropdown list for selecting the revision to restore you can not select an entry in the list for restoring could be that the history list is too long i need the version from the june
| 1
|
206,766
| 15,773,190,260
|
IssuesEvent
|
2021-03-31 22:54:11
|
microsoft/FluidFramework
|
https://api.github.com/repos/microsoft/FluidFramework
|
closed
|
Fault Injection Document Service Factory Wrapper
|
testing
|
We should build a wrapper document service factory that injects retriable failures on service calls that we can run all tests based on our testing abstraction #4655 in a mode where faults are randomly injected
|
1.0
|
Fault Injection Document Service Factory Wrapper - We should build a wrapper document service factory that injects retriable failures on service calls that we can run all tests based on our testing abstraction #4655 in a mode where faults are randomly injected
|
non_defect
|
fault injection document service factory wrapper we should build a wrapper document service factory that injects retriable failures on service calls that we can run all tests based on our testing abstraction in a mode where faults are randomly injected
| 0
|
779,335
| 27,349,155,093
|
IssuesEvent
|
2023-02-27 08:15:01
|
Physical-Science-Academy/NoCheatPlus
|
https://api.github.com/repos/Physical-Science-Academy/NoCheatPlus
|
closed
|
WaterDog Proxy Support
|
Low Priority | 低优先级 Confirmed | 已确认 Request | 请求
|
Get the real ping from WaterDog to prevent NCP detect legit players in mistakes.
|
1.0
|
WaterDog Proxy Support - Get the real ping from WaterDog to prevent NCP detect legit players in mistakes.
|
non_defect
|
waterdog proxy support get the real ping from waterdog to prevent ncp detect legit players in mistakes
| 0
|
31,745
| 6,612,232,794
|
IssuesEvent
|
2017-09-20 02:25:08
|
extnet/Ext.NET
|
https://api.github.com/repos/extnet/Ext.NET
|
closed
|
NumericAxis minimum adjust issue if < 1
|
2.x 3.x 4.x defect sencha sencha-disclaim
|
http://forums.ext.net/showthread.php?23521
http://www.sencha.com/forum/showthread.php?256685
**Update:** Issue still open after ExtJS 6.0.1 release.
|
1.0
|
NumericAxis minimum adjust issue if < 1 - http://forums.ext.net/showthread.php?23521
http://www.sencha.com/forum/showthread.php?256685
**Update:** Issue still open after ExtJS 6.0.1 release.
|
defect
|
numericaxis minimum adjust issue if update issue still open after extjs release
| 1
|
560,559
| 16,599,558,575
|
IssuesEvent
|
2021-06-01 17:25:32
|
pw-software-engineering/n-team
|
https://api.github.com/repos/pw-software-engineering/n-team
|
closed
|
[Client UI] Offer reviews logic UI
|
Priority: medium
|
- [x] Implement API mockup for client reviews
- [x] Implement logic UI for offer reviews on offer details page
Time spent: 3 h
|
1.0
|
[Client UI] Offer reviews logic UI - - [x] Implement API mockup for client reviews
- [x] Implement logic UI for offer reviews on offer details page
Time spent: 3 h
|
non_defect
|
offer reviews logic ui implement api mockup for client reviews implement logic ui for offer reviews on offer details page time spent h
| 0
|
17,110
| 2,974,598,084
|
IssuesEvent
|
2015-07-15 02:13:31
|
Reimashi/jotai
|
https://api.github.com/repos/Reimashi/jotai
|
closed
|
Include/enable xAP support
|
auto-migrated Priority-Medium Type-Defect
|
```
Feature request:
Reporting of system values in xAP protocol using the PC.status schema.
Configurable with reporting interval/seconds, and choosing which system values
are included in the report.
```
Original issue reported on code.google.com by `B.Lowe...@gmail.com` on 24 Oct 2011 at 3:17
|
1.0
|
Include/enable xAP support - ```
Feature request:
Reporting of system values in xAP protocol using the PC.status schema.
Configurable with reporting interval/seconds, and choosing which system values
are included in the report.
```
Original issue reported on code.google.com by `B.Lowe...@gmail.com` on 24 Oct 2011 at 3:17
|
defect
|
include enable xap support feature request reporting of system values in xap protocol using the pc status schema configurable with reporting interval seconds and choosing which system values are included in the report original issue reported on code google com by b lowe gmail com on oct at
| 1
|
56,353
| 3,079,376,254
|
IssuesEvent
|
2015-08-21 15:54:41
|
songwoo/tracker
|
https://api.github.com/repos/songwoo/tracker
|
closed
|
Entering wrong user id or email address on the login screen should raise an error
|
medium priority
|
place the text "Incorrect user id or password - please try again" above the username text box

|
1.0
|
Entering wrong user id or email address on the login screen should raise an error - place the text "Incorrect user id or password - please try again" above the username text box

|
non_defect
|
entering wrong user id or email address on the login screen should raise an error place the text incorrect user id or password please try again above the username text box
| 0
|
494,450
| 14,258,801,502
|
IssuesEvent
|
2020-11-20 07:02:40
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Parser crashes for invalid annotation attach point
|
Area/Parser Points/0.5 Priority/Blocker Team/CompilerFE Type/Bug
|
**Description:**
Consider the following code snippet
```bal
public const annotation ObjectData Binding on object funciton;
```
It throws the following exception
```
Exception in thread "main" java.lang.IllegalStateException: found close-brace in: const-decl
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.getNextRuleForCloseBrace(BallerinaParserErrorHandler.java:3599)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.getNextRule(BallerinaParserErrorHandler.java:2176)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.fixAndContinue(AbstractParserErrorHandler.java:387)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.fixAndContinue(AbstractParserErrorHandler.java:329)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekMatch(BallerinaParserErrorHandler.java:1053)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.seekMatchInSubTree(AbstractParserErrorHandler.java:200)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.seekInAlternativesPaths(AbstractParserErrorHandler.java:249)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekInStatements(BallerinaParserErrorHandler.java:1908)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekMatchInStmtRelatedAlternativePaths(BallerinaParserErrorHandler.java:1545)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekMatchInAlternativePaths(BallerinaParserErrorHandler.java:1529)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekMatch(BallerinaParserErrorHandler.java:1042)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.seekMatchInSubTree(AbstractParserErrorHandler.java:200)
```
**Suggested Labels (optional):**
Type/Bug, Priority/Blocker, Area/Language, Compiler/Parser
|
1.0
|
Parser crashes for invalid annotation attach point - **Description:**
Consider the following code snippet
```bal
public const annotation ObjectData Binding on object funciton;
```
It throws the following exception
```
Exception in thread "main" java.lang.IllegalStateException: found close-brace in: const-decl
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.getNextRuleForCloseBrace(BallerinaParserErrorHandler.java:3599)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.getNextRule(BallerinaParserErrorHandler.java:2176)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.fixAndContinue(AbstractParserErrorHandler.java:387)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.fixAndContinue(AbstractParserErrorHandler.java:329)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekMatch(BallerinaParserErrorHandler.java:1053)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.seekMatchInSubTree(AbstractParserErrorHandler.java:200)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.seekInAlternativesPaths(AbstractParserErrorHandler.java:249)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekInStatements(BallerinaParserErrorHandler.java:1908)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekMatchInStmtRelatedAlternativePaths(BallerinaParserErrorHandler.java:1545)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekMatchInAlternativePaths(BallerinaParserErrorHandler.java:1529)
at io.ballerinalang.compiler.internal.parser.BallerinaParserErrorHandler.seekMatch(BallerinaParserErrorHandler.java:1042)
at io.ballerinalang.compiler.internal.parser.AbstractParserErrorHandler.seekMatchInSubTree(AbstractParserErrorHandler.java:200)
```
**Suggested Labels (optional):**
Type/Bug, Priority/Blocker, Area/Language, Compiler/Parser
|
non_defect
|
parser crashes for invalid annotation attach point description consider the following code snippet bal public const annotation objectdata binding on object funciton it throws the following exception exception in thread main java lang illegalstateexception found close brace in const decl at io ballerinalang compiler internal parser ballerinaparsererrorhandler getnextruleforclosebrace ballerinaparsererrorhandler java at io ballerinalang compiler internal parser ballerinaparsererrorhandler getnextrule ballerinaparsererrorhandler java at io ballerinalang compiler internal parser abstractparsererrorhandler fixandcontinue abstractparsererrorhandler java at io ballerinalang compiler internal parser abstractparsererrorhandler fixandcontinue abstractparsererrorhandler java at io ballerinalang compiler internal parser ballerinaparsererrorhandler seekmatch ballerinaparsererrorhandler java at io ballerinalang compiler internal parser abstractparsererrorhandler seekmatchinsubtree abstractparsererrorhandler java at io ballerinalang compiler internal parser abstractparsererrorhandler seekinalternativespaths abstractparsererrorhandler java at io ballerinalang compiler internal parser ballerinaparsererrorhandler seekinstatements ballerinaparsererrorhandler java at io ballerinalang compiler internal parser ballerinaparsererrorhandler seekmatchinstmtrelatedalternativepaths ballerinaparsererrorhandler java at io ballerinalang compiler internal parser ballerinaparsererrorhandler seekmatchinalternativepaths ballerinaparsererrorhandler java at io ballerinalang compiler internal parser ballerinaparsererrorhandler seekmatch ballerinaparsererrorhandler java at io ballerinalang compiler internal parser abstractparsererrorhandler seekmatchinsubtree abstractparsererrorhandler java suggested labels optional type bug priority blocker area language compiler parser
| 0
|
21,430
| 10,608,222,059
|
IssuesEvent
|
2019-10-11 06:56:25
|
fufunoyu/example-maven-travis
|
https://api.github.com/repos/fufunoyu/example-maven-travis
|
opened
|
CVE-2016-1000341 (Medium) detected in bcprov-ext-jdk15on-1.49.jar
|
security vulnerability
|
## CVE-2016-1000341 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-ext-jdk15on-1.49.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.7. Note: this package includes the IDEA and NTRU encryption algorithms.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /tmp/ws-scm/example-maven-travis/pom.xml</p>
<p>Path to vulnerable library: epository/org/bouncycastle/bcprov-ext-jdk15on/1.49/bcprov-ext-jdk15on-1.49.jar</p>
<p>
Dependency Hierarchy:
- :x: **bcprov-ext-jdk15on-1.49.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/example-maven-travis/commit/48c9533d746c2e0017ea5f7739a8c4b5eadc874a">48c9533d746c2e0017ea5f7739a8c4b5eadc874a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Bouncy Castle JCE Provider version 1.55 and earlier DSA signature generation is vulnerable to timing attack. Where timings can be closely observed for the generation of signatures, the lack of blinding in 1.55, or earlier, may allow an attacker to gain information about the signature's k value and ultimately the private value as well.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000341>CVE-2016-1000341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000341</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: bcprov-debug-jdk14 - 1.59;bcprov-debug-jdk15on - 1.56;bcprov-ext-debug-jdk15on - 1.56;bcprov-ext-jdk15on - 1.56;bcprov-jdk14 - 1.56;bcprov-jdk15on - 1.56</p>
</p>
</details>
<p></p>
|
True
|
CVE-2016-1000341 (Medium) detected in bcprov-ext-jdk15on-1.49.jar - ## CVE-2016-1000341 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-ext-jdk15on-1.49.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.7. Note: this package includes the IDEA and NTRU encryption algorithms.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /tmp/ws-scm/example-maven-travis/pom.xml</p>
<p>Path to vulnerable library: epository/org/bouncycastle/bcprov-ext-jdk15on/1.49/bcprov-ext-jdk15on-1.49.jar</p>
<p>
Dependency Hierarchy:
- :x: **bcprov-ext-jdk15on-1.49.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/example-maven-travis/commit/48c9533d746c2e0017ea5f7739a8c4b5eadc874a">48c9533d746c2e0017ea5f7739a8c4b5eadc874a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Bouncy Castle JCE Provider version 1.55 and earlier DSA signature generation is vulnerable to timing attack. Where timings can be closely observed for the generation of signatures, the lack of blinding in 1.55, or earlier, may allow an attacker to gain information about the signature's k value and ultimately the private value as well.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000341>CVE-2016-1000341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000341</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: bcprov-debug-jdk14 - 1.59;bcprov-debug-jdk15on - 1.56;bcprov-ext-debug-jdk15on - 1.56;bcprov-ext-jdk15on - 1.56;bcprov-jdk14 - 1.56;bcprov-jdk15on - 1.56</p>
</p>
</details>
<p></p>
|
non_defect
|
cve medium detected in bcprov ext jar cve medium severity vulnerability vulnerable library bcprov ext jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk to jdk note this package includes the idea and ntru encryption algorithms library home page a href path to dependency file tmp ws scm example maven travis pom xml path to vulnerable library epository org bouncycastle bcprov ext bcprov ext jar dependency hierarchy x bcprov ext jar vulnerable library found in head commit a href vulnerability details in the bouncy castle jce provider version and earlier dsa signature generation is vulnerable to timing attack where timings can be closely observed for the generation of signatures the lack of blinding in or earlier may allow an attacker to gain information about the signature s k value and ultimately the private value as well publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bcprov debug bcprov debug bcprov ext debug bcprov ext bcprov bcprov
| 0
|
78,414
| 27,511,699,380
|
IssuesEvent
|
2023-03-06 09:16:25
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: Git pack file is really large
|
I-defect needs-triaging
|
### What happened?
Cloning SeleniumHQ is really large.

It appears that there are a large number of binary files stored in cpp, third_party and website.
Suggest moving any references to Git LFS or just referencing a url to download from the vendor.
### How can we reproduce the issue?
```shell
1. Clone the repo
2. Run the following command from this page
https://stackoverflow.com/questions/10622179/how-to-find-identify-large-commits-in-git-history
git rev-list --objects --all | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' | sed -n 's/^blob //p' | awk '$2 >= 2^20' | sort --numeric-sort --key=2 | cut -c 1-12,41- | $(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest > output.txt
```
```
### Relevant log output
```shell
Some of the larger files...
277ca571dcb3 5.1MiB common/manager/linux/selenium-manager
87a299d8ba46 5.1MiB third_party/closure/bin/compiler-20120123.jar
d958a61882cf 5.1MiB third_party/closure/bin/compiler-20120305.jar
24ae5637d301 5.2MiB third_party/java/android/android-8.jar
b7111d8b95e9 5.2MiB lib/buildtime/jruby-complete-r5974.jar
52a920f28c74 5.2MiB third_party/closure/bin/compiler-20120710.jar
7e0290349482 5.3MiB third_party/selenium/selenium-server-1.0.1.jar
fb4eb39f0119 5.4MiB third_party/dotnet/nuget/NuGet.exe
96430f7a5ee9 5.5MiB third_party/py/Sphinx-1.8.2.tar.gz
e23352ff0bdb 5.6MiB third_party/closure/bin/compiler-20120917.jar
22025c98c19a 5.6MiB website/download/selenium-rc-1.0-SNAPSHOT.zip
c12a17d3e227 5.7MiB website/webapps/ROOT/selenium_v0.1.3.zip
a12f24be30b3 5.7MiB third_party/closure/bin/compiler-20140814.jar
e2338676d56a 6.0MiB third_party/closure/bin/compiler-20130227.jar
d969a3313dca 6.0MiB third_party/closure/bin/compiler.jar
33b44ed43902 6.0MiB third_party/closure/bin/compiler.jar
a297dc0f126d 6.0MiB third_party/closure/bin/compiler.jar
0b0e68c3dd16 6.0MiB third_party/closure/bin/compiler-20130603.jar
e4e6756afc65 6.1MiB third_party/closure/bin/compiler.jar
9cae6957d4c0 6.1MiB third_party/closure/bin/compiler.jar
0d42389c08b7 6.1MiB third_party/closure/bin/compiler.jar
0f83b17f510b 6.2MiB third_party/dotnet/nuget/NuGet.exe
57859f0b2f04 6.3MiB third_party/closure/bin/compiler.jar
5943f680ceae 6.4MiB third_party/dotnet/nuget/NuGet.exe
4fed060629ee 6.5MiB website/download/selenium-remote-control-0.7.zip
67aa45afcffd 6.5MiB third_party/closure/bin/compiler-20140407.jar
a1d81349816d 6.5MiB website/download/selenium-remote-control-0.7.1-SNAPSHOT.zip
ec9f5b0784e2 6.5MiB website/download/selenium-remote-control-0.7.1.zip
d621604551a5 6.9MiB third_party/closure/bin/compiler.jar
239bf9fbae07 7.4MiB third_party/closure/bin/compiler.jar
27358741fe71 7.6MiB third_party/py/Babel-2.6.0.tar.gz
35effd7af200 8.3MiB third_party/py/jython.jar
3f2c98347240 9.0MiB docs/api/java/index-all.html
3800352dd9de 10MiB website/download/selenium-remote-control-0.7.2-SNAPSHOT-924-1023.zip
db0e1befa4a2 11MiB third_party/jruby/jruby-complete-1.4.0RC1.jar
859a8c03220f 11MiB third_party/jruby/jruby-complete-1.5.0.RC2.jar
7572a14d1723 11MiB third_party/java/browsermob_proxy/browsermob-proxy-1.0-SNAPSHOT-release.jar
f32c46305db1 11MiB third_party/java/htmlunit/htmlunit-driver-2.20.0-SNAPSHOT.jar
40289c9469dd 12MiB third_party/jruby/jruby-complete.jar
35f6451ec32f 12MiB third_party/jruby/jruby-complete.jar
15d50e102d34 13MiB third_party/jruby/jruby-complete-1.5.0.RC2.jar
9e01e5caa0c3 14MiB third_party/py/jython.jar
476761b860fc 14MiB third_party/jruby/jruby-complete.jar
7dfaeb5e621e 14MiB third_party/jruby/jruby-complete.jar
e88a9eb7e281 15MiB third_party/jruby/jruby-complete.jar
4a979b82f080 16MiB third_party/jruby/jruby-complete.jar
edd1c82164f5 17MiB third_party/java/android/android-14.jar
96dd3dd7b503 18MiB third_party/gecko-1.9.0.11/mac/lib/XUL
10128805fa57 18MiB third_party/java/dossier/dossier-0.1.1.jar
112b31b46496 18MiB third_party/java/dossier/dossier-0.2.1.jar
fb5d97499e26 18MiB third_party/java/dossier/dossier-0.3.0.jar
4d630f4a8c54 18MiB third_party/java/dossier/dossier-0.4.0.jar
4c9e78044e34 18MiB third_party/java/dossier/dossier-0.4.1.jar
be4e36b9093f 18MiB third_party/gecko-1.9.0.11/linux/lib/libxul.so
ad0da1fa57ba 19MiB third_party/jruby/jruby-complete.jar
5d52d406e53d 19MiB third_party/jruby/jruby-complete.jar
f25bc68c91d8 19MiB third_party/java/dossier/dossier-0.7.1.jar
ac78b3a4923e 19MiB third_party/java/dossier/dossier-0.7.2.jar
58977b359e30 19MiB third_party/java/dossier/dossier-0.6.1.jar
3fa4e9c04cec 19MiB third_party/java/dossier/dossier-0.6.2.jar
4b968cc2db46 19MiB third_party/jruby/jruby-complete.jar
f82d46909648 25MiB third_party/jruby/jruby-complete.jar
eb3e6639c466 25MiB third_party/jruby/jruby-complete.jar
e090d1926e24 27MiB third_party/java/htmlunit/htmlunit-2.11-src.zip
f886fb00a4fb 34MiB third_party/jruby/jruby-complete.jar
fccd11760146 34MiB third_party/jruby/jruby-complete.jar
613bb32ba35d 34MiB third_party/jruby/jruby-complete.jar
```
### Operating System
Windows 11 22H2
### Selenium version
N/A
### What are the browser(s) and version(s) where you see this issue?
N/A
### What are the browser driver(s) and version(s) where you see this issue?
N/A
### Are you using Selenium Grid?
No
|
1.0
|
[🐛 Bug]: Git pack file is really large - ### What happened?
Cloning SeleniumHQ is really large.

It appears that there are a large number of binary files stored in cpp, third_party and website.
Suggest moving any references to Git LFS or just referencing a url to download from the vendor.
### How can we reproduce the issue?
```shell
1. Clone the repo
2. Run the following command from this page
https://stackoverflow.com/questions/10622179/how-to-find-identify-large-commits-in-git-history
git rev-list --objects --all | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' | sed -n 's/^blob //p' | awk '$2 >= 2^20' | sort --numeric-sort --key=2 | cut -c 1-12,41- | $(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest > output.txt
```
```
### Relevant log output
```shell
Some of the larger files...
277ca571dcb3 5.1MiB common/manager/linux/selenium-manager
87a299d8ba46 5.1MiB third_party/closure/bin/compiler-20120123.jar
d958a61882cf 5.1MiB third_party/closure/bin/compiler-20120305.jar
24ae5637d301 5.2MiB third_party/java/android/android-8.jar
b7111d8b95e9 5.2MiB lib/buildtime/jruby-complete-r5974.jar
52a920f28c74 5.2MiB third_party/closure/bin/compiler-20120710.jar
7e0290349482 5.3MiB third_party/selenium/selenium-server-1.0.1.jar
fb4eb39f0119 5.4MiB third_party/dotnet/nuget/NuGet.exe
96430f7a5ee9 5.5MiB third_party/py/Sphinx-1.8.2.tar.gz
e23352ff0bdb 5.6MiB third_party/closure/bin/compiler-20120917.jar
22025c98c19a 5.6MiB website/download/selenium-rc-1.0-SNAPSHOT.zip
c12a17d3e227 5.7MiB website/webapps/ROOT/selenium_v0.1.3.zip
a12f24be30b3 5.7MiB third_party/closure/bin/compiler-20140814.jar
e2338676d56a 6.0MiB third_party/closure/bin/compiler-20130227.jar
d969a3313dca 6.0MiB third_party/closure/bin/compiler.jar
33b44ed43902 6.0MiB third_party/closure/bin/compiler.jar
a297dc0f126d 6.0MiB third_party/closure/bin/compiler.jar
0b0e68c3dd16 6.0MiB third_party/closure/bin/compiler-20130603.jar
e4e6756afc65 6.1MiB third_party/closure/bin/compiler.jar
9cae6957d4c0 6.1MiB third_party/closure/bin/compiler.jar
0d42389c08b7 6.1MiB third_party/closure/bin/compiler.jar
0f83b17f510b 6.2MiB third_party/dotnet/nuget/NuGet.exe
57859f0b2f04 6.3MiB third_party/closure/bin/compiler.jar
5943f680ceae 6.4MiB third_party/dotnet/nuget/NuGet.exe
4fed060629ee 6.5MiB website/download/selenium-remote-control-0.7.zip
67aa45afcffd 6.5MiB third_party/closure/bin/compiler-20140407.jar
a1d81349816d 6.5MiB website/download/selenium-remote-control-0.7.1-SNAPSHOT.zip
ec9f5b0784e2 6.5MiB website/download/selenium-remote-control-0.7.1.zip
d621604551a5 6.9MiB third_party/closure/bin/compiler.jar
239bf9fbae07 7.4MiB third_party/closure/bin/compiler.jar
27358741fe71 7.6MiB third_party/py/Babel-2.6.0.tar.gz
35effd7af200 8.3MiB third_party/py/jython.jar
3f2c98347240 9.0MiB docs/api/java/index-all.html
3800352dd9de 10MiB website/download/selenium-remote-control-0.7.2-SNAPSHOT-924-1023.zip
db0e1befa4a2 11MiB third_party/jruby/jruby-complete-1.4.0RC1.jar
859a8c03220f 11MiB third_party/jruby/jruby-complete-1.5.0.RC2.jar
7572a14d1723 11MiB third_party/java/browsermob_proxy/browsermob-proxy-1.0-SNAPSHOT-release.jar
f32c46305db1 11MiB third_party/java/htmlunit/htmlunit-driver-2.20.0-SNAPSHOT.jar
40289c9469dd 12MiB third_party/jruby/jruby-complete.jar
35f6451ec32f 12MiB third_party/jruby/jruby-complete.jar
15d50e102d34 13MiB third_party/jruby/jruby-complete-1.5.0.RC2.jar
9e01e5caa0c3 14MiB third_party/py/jython.jar
476761b860fc 14MiB third_party/jruby/jruby-complete.jar
7dfaeb5e621e 14MiB third_party/jruby/jruby-complete.jar
e88a9eb7e281 15MiB third_party/jruby/jruby-complete.jar
4a979b82f080 16MiB third_party/jruby/jruby-complete.jar
edd1c82164f5 17MiB third_party/java/android/android-14.jar
96dd3dd7b503 18MiB third_party/gecko-1.9.0.11/mac/lib/XUL
10128805fa57 18MiB third_party/java/dossier/dossier-0.1.1.jar
112b31b46496 18MiB third_party/java/dossier/dossier-0.2.1.jar
fb5d97499e26 18MiB third_party/java/dossier/dossier-0.3.0.jar
4d630f4a8c54 18MiB third_party/java/dossier/dossier-0.4.0.jar
4c9e78044e34 18MiB third_party/java/dossier/dossier-0.4.1.jar
be4e36b9093f 18MiB third_party/gecko-1.9.0.11/linux/lib/libxul.so
ad0da1fa57ba 19MiB third_party/jruby/jruby-complete.jar
5d52d406e53d 19MiB third_party/jruby/jruby-complete.jar
f25bc68c91d8 19MiB third_party/java/dossier/dossier-0.7.1.jar
ac78b3a4923e 19MiB third_party/java/dossier/dossier-0.7.2.jar
58977b359e30 19MiB third_party/java/dossier/dossier-0.6.1.jar
3fa4e9c04cec 19MiB third_party/java/dossier/dossier-0.6.2.jar
4b968cc2db46 19MiB third_party/jruby/jruby-complete.jar
f82d46909648 25MiB third_party/jruby/jruby-complete.jar
eb3e6639c466 25MiB third_party/jruby/jruby-complete.jar
e090d1926e24 27MiB third_party/java/htmlunit/htmlunit-2.11-src.zip
f886fb00a4fb 34MiB third_party/jruby/jruby-complete.jar
fccd11760146 34MiB third_party/jruby/jruby-complete.jar
613bb32ba35d 34MiB third_party/jruby/jruby-complete.jar
```
### Operating System
Windows 11 22H2
### Selenium version
N/A
### What are the browser(s) and version(s) where you see this issue?
N/A
### What are the browser driver(s) and version(s) where you see this issue?
N/A
### Are you using Selenium Grid?
No
|
defect
|
git pack file is really large what happened cloning seleniumhq is really large it appears that there are a large number of binary files stored in cpp third party and website suggest moving any references to git lfs or just referencing a url to download from the vendor how can we reproduce the issue shell clone the repo run the following command from this page git rev list objects all git cat file batch check objecttype objectname objectsize rest sed n s blob p awk sort numeric sort key cut c command v gnumfmt echo numfmt field to iec i suffix b padding round nearest output txt relevant log output shell some of the larger files common manager linux selenium manager third party closure bin compiler jar third party closure bin compiler jar third party java android android jar lib buildtime jruby complete jar third party closure bin compiler jar third party selenium selenium server jar third party dotnet nuget nuget exe third party py sphinx tar gz third party closure bin compiler jar website download selenium rc snapshot zip website webapps root selenium zip third party closure bin compiler jar third party closure bin compiler jar third party closure bin compiler jar third party closure bin compiler jar third party closure bin compiler jar third party closure bin compiler jar third party closure bin compiler jar third party closure bin compiler jar third party closure bin compiler jar third party dotnet nuget nuget exe third party closure bin compiler jar third party dotnet nuget nuget exe website download selenium remote control zip third party closure bin compiler jar website download selenium remote control snapshot zip website download selenium remote control zip third party closure bin compiler jar third party closure bin compiler jar third party py babel tar gz third party py jython jar docs api java index all html website download selenium remote control snapshot zip third party jruby jruby complete jar third party jruby jruby complete jar third party java 
browsermob proxy browsermob proxy snapshot release jar third party java htmlunit htmlunit driver snapshot jar third party jruby jruby complete jar third party jruby jruby complete jar third party jruby jruby complete jar third party py jython jar third party jruby jruby complete jar third party jruby jruby complete jar third party jruby jruby complete jar third party jruby jruby complete jar third party java android android jar third party gecko mac lib xul third party java dossier dossier jar third party java dossier dossier jar third party java dossier dossier jar third party java dossier dossier jar third party java dossier dossier jar third party gecko linux lib libxul so third party jruby jruby complete jar third party jruby jruby complete jar third party java dossier dossier jar third party java dossier dossier jar third party java dossier dossier jar third party java dossier dossier jar third party jruby jruby complete jar third party jruby jruby complete jar third party jruby jruby complete jar third party java htmlunit htmlunit src zip third party jruby jruby complete jar third party jruby jruby complete jar third party jruby jruby complete jar operating system windows selenium version n a what are the browser s and version s where you see this issue n a what are the browser driver s and version s where you see this issue n a are you using selenium grid no
| 1
|
28,761
| 5,348,486,330
|
IssuesEvent
|
2017-02-18 05:32:19
|
jbpt/codebase
|
https://api.github.com/repos/jbpt/codebase
|
closed
|
No jbpt in MavenCentral
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
Open search.maven.org and try to search. Can't add dependency with maven or
gradle
For example you can add junit dependency in one line like that:
testCompile 'junit:junit:4.11'
But without jbpt in Maven I can't include library in config, without putting
binaries into project.
```
Original issue reported on code.google.com by `traceurz...@gmail.com` on 13 Jun 2014 at 4:28
|
1.0
|
No jbpt in MavenCentral - ```
What steps will reproduce the problem?
Open search.maven.org and try to search. Can't add dependency with maven or
gradle
For example you can add junit dependency in one line like that:
testCompile 'junit:junit:4.11'
But without jbpt in Maven I can't include library in config, without putting
binaries into project.
```
Original issue reported on code.google.com by `traceurz...@gmail.com` on 13 Jun 2014 at 4:28
|
defect
|
no jbpt in mavencentral what steps will reproduce the problem open search maven org and try to search can t add dependency with maven or gradle for example you can add junit dependency in one line like that testcompile junit junit but without jbpt in maven i can t include library in config without putting binaries into project original issue reported on code google com by traceurz gmail com on jun at
| 1
|
665,935
| 22,336,119,578
|
IssuesEvent
|
2022-06-14 18:39:43
|
VEuPathDB/EdaNewIssues
|
https://api.github.com/repos/VEuPathDB/EdaNewIssues
|
closed
|
Site search feedback on descriptors / terminologies used in results
|
high priority
|
We need better descriptors for “categories path”, “name”, and “study info”.
Also “entity” is terminology that should only be used internally, so we should rename this also
|
1.0
|
Site search feedback on descriptors / terminologies used in results - We need better descriptors for “categories path”, “name”, and “study info”.
Also “entity” is terminology that should only be used internally, so we should rename this also
|
non_defect
|
site search feedback on descriptors terminologies used in results we need better descriptors for “categories path” “name” and “study info” also “entity” is terminology that should only be used internally so we should rename this also
| 0
|
7,350
| 2,610,364,719
|
IssuesEvent
|
2015-02-26 19:57:51
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
Can't log into blog
|
auto-migrated Priority-Medium tumblr Type-Defect
|
```
What's the problem?
I cannot log into my blog (either Tumblr or my wordpress blogs) from
scribefire. I am able to enter the correct username and password in the
browser, but scribefire returns an error.
What browser are you using?
firefox
What version of ScribeFire are you running?
Unknown
```
-----
Original issue reported on code.google.com by `NCastill...@gmail.com` on 21 Nov 2012 at 1:23
* Merged into: #766
|
1.0
|
Can't log into blog - ```
What's the problem?
I cannot log into my blog (either Tumblr or my wordpress blogs) from
scribefire. I am able to enter the correct username and password in the
browser, but scribefire returns an error.
What browser are you using?
firefox
What version of ScribeFire are you running?
Unknown
```
-----
Original issue reported on code.google.com by `NCastill...@gmail.com` on 21 Nov 2012 at 1:23
* Merged into: #766
|
defect
|
can t log into blog what s the problem i cannot log into my blog either tumblr or my wordpress blogs from scribefire i am able to enter the correct username and password in the browser but scribefire returns an error what browser are you using firefox what version of scribefire are you running unknown original issue reported on code google com by ncastill gmail com on nov at merged into
| 1
|
75,884
| 26,122,791,740
|
IssuesEvent
|
2022-12-28 14:37:08
|
DependencyTrack/dependency-track
|
https://api.github.com/repos/DependencyTrack/dependency-track
|
opened
|
Portfolio Access Control do not apply to child projects
|
defect in triage
|
### Current Behavior
When you add a project with a childs to some team in Portfolio Access Control, members of this team have access **only** to the parent project.
### Steps to Reproduce
1. Enable Portfolio Access Control
2. Create a project hierarchy (parent with some childs)
3. Grant access to the parent project for some team in Portfolio Access Control
4. Login with the team's member account
### Expected Behavior
Parent and child projects should be accessible.
### Dependency-Track Version
4.7.0
### Dependency-Track Distribution
Container Image
### Database Server
N/A
### Database Server Version
_No response_
### Browser
N/A
### Checklist
- [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues)
- [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported
|
1.0
|
Portfolio Access Control do not apply to child projects - ### Current Behavior
When you add a project with a childs to some team in Portfolio Access Control, members of this team have access **only** to the parent project.
### Steps to Reproduce
1. Enable Portfolio Access Control
2. Create a project hierarchy (parent with some childs)
3. Grant access to the parent project for some team in Portfolio Access Control
4. Login with the team's member account
### Expected Behavior
Parent and child projects should be accessible.
### Dependency-Track Version
4.7.0
### Dependency-Track Distribution
Container Image
### Database Server
N/A
### Database Server Version
_No response_
### Browser
N/A
### Checklist
- [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues)
- [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported
|
defect
|
portfolio access control do not apply to child projects current behavior when you add a project with a childs to some team in portfolio access control members of this team have access only to the parent project steps to reproduce enable portfolio access control create a project hierarchy parent with some childs grant access to the parent project for some team in portfolio access control login with the team s member account expected behavior parent and child projects should be accessible dependency track version dependency track distribution container image database server n a database server version no response browser n a checklist i have read and understand the i have checked the for whether this defect was already reported
| 1
|
15,336
| 2,850,636,512
|
IssuesEvent
|
2015-05-31 18:58:06
|
damonkohler/sl4a
|
https://api.github.com/repos/damonkohler/sl4a
|
opened
|
NPE when launching script from desktop shortcut
|
auto-migrated Priority-Medium Type-Defect
|
_From @GoogleCodeExporter on May 31, 2015 11:29_
```
What device(s) are you experiencing the problem on?
HTC Desire Z
What firmware version are you running on the device?
Android 2.2.1
2.6.32.21 kernel
1.82.405.1 software number
What steps will reproduce the problem?
1. Create a python script
2. Add a shortcut on the desktop
3. Reboot the phone (or wait for a long time)
4. Tap on the shortcut
What is the expected output? What do you see instead?
The script should be executed with no error. I see an error popup, instead.
Then, after a while, the script is executed.
What version of the product are you using? On what operating system?
SL4A R4 on Android 2.2.1 with python for android (interpreter version: 16)
Also tried with source build, and got the same error.
Please provide any additional information below.
This issue is somehow like issue n. 432, but the proposed patch for the 432
does not fix this problem.
It seems that the script is executed before the python interpreter is started.
This does not happens every time, and it's not easy to reproduce, since once
that the interpreter is loaded the problem does not show up anymore.
I am attaching a patch that I tried and it seems to solve the problem.
```
Original issue reported on code.google.com by `alessand...@gmail.com` on 3 May 2011 at 6:55
Attachments:
* [log.txt](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-543/comment-0/log.txt)
* [patch_npe_interpreter.diff](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-543/comment-0/patch_npe_interpreter.diff)
_Copied from original issue: damonkohler/android-scripting#543_
|
1.0
|
NPE when launching script from desktop shortcut - _From @GoogleCodeExporter on May 31, 2015 11:29_
```
What device(s) are you experiencing the problem on?
HTC Desire Z
What firmware version are you running on the device?
Android 2.2.1
2.6.32.21 kernel
1.82.405.1 software number
What steps will reproduce the problem?
1. Create a python script
2. Add a shortcut on the desktop
3. Reboot the phone (or wait for a long time)
4. Tap on the shortcut
What is the expected output? What do you see instead?
The script should be executed with no error. I see an error popup, instead.
Then, after a while, the script is executed.
What version of the product are you using? On what operating system?
SL4A R4 on Android 2.2.1 with python for android (interpreter version: 16)
Also tried with source build, and got the same error.
Please provide any additional information below.
This issue is somehow like issue n. 432, but the proposed patch for the 432
does not fix this problem.
It seems that the script is executed before the python interpreter is started.
This does not happens every time, and it's not easy to reproduce, since once
that the interpreter is loaded the problem does not show up anymore.
I am attaching a patch that I tried and it seems to solve the problem.
```
Original issue reported on code.google.com by `alessand...@gmail.com` on 3 May 2011 at 6:55
Attachments:
* [log.txt](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-543/comment-0/log.txt)
* [patch_npe_interpreter.diff](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-543/comment-0/patch_npe_interpreter.diff)
_Copied from original issue: damonkohler/android-scripting#543_
|
defect
|
npe when launching script from desktop shortcut from googlecodeexporter on may what device s are you experiencing the problem on htc desire z what firmware version are you running on the device android kernel software number what steps will reproduce the problem create a python script add a shortcut on the desktop reboot the phone or wait for a long time tap on the shortcut what is the expected output what do you see instead the script should be executed with no error i see an error popup instead then after a while the script is executed what version of the product are you using on what operating system on android with python for android interpreter version also tried with source build and got the same error please provide any additional information below this issue is somehow like issue n but the proposed patch for the does not fix this problem it seems that the script is executed before the python interpreter is started this does not happens every time and it s not easy to reproduce since once that the interpreter is loaded the problem does not show up anymore i am attaching a patch that i tried and it seems to solve the problem original issue reported on code google com by alessand gmail com on may at attachments copied from original issue damonkohler android scripting
| 1
|
421,542
| 12,258,302,097
|
IssuesEvent
|
2020-05-06 14:55:28
|
airshipit/airshipctl
|
https://api.github.com/repos/airshipit/airshipctl
|
closed
|
Enhance `airshipctl cluster init` with logging support
|
enhancement priority/critical
|
**Problem description (if applicable)**
Currently execution or `airshipctl cluster init` command does not provide any logging
**Proposed change**
Enable logging messaged for `airshipctl cluster init`. Debug flag should be handled appropriately by increasing output verbosity level
**Potential impacts**
Cluster API components deployment logs will be visible.
|
1.0
|
Enhance `airshipctl cluster init` with logging support - **Problem description (if applicable)**
Currently execution or `airshipctl cluster init` command does not provide any logging
**Proposed change**
Enable logging messaged for `airshipctl cluster init`. Debug flag should be handled appropriately by increasing output verbosity level
**Potential impacts**
Cluster API components deployment logs will be visible.
|
non_defect
|
enhance airshipctl cluster init with logging support problem description if applicable currently execution or airshipctl cluster init command does not provide any logging proposed change enable logging messaged for airshipctl cluster init debug flag should be handled appropriately by increasing output verbosity level potential impacts cluster api components deployment logs will be visible
| 0
|
81,654
| 15,785,023,710
|
IssuesEvent
|
2021-04-01 15:49:27
|
MicrosoftDocs/intellicode
|
https://api.github.com/repos/MicrosoftDocs/intellicode
|
closed
|
Option to download the model locally and transfer it to the remote instance running VS Code on a host without Internet access over the SSH connection
|
product-feedback vscode
|
**Feature Request for VS Code**
It would be great if there was an option to download the model locally and transfer it to the remote instance running VS Code on a host without Internet access over the SSH connection.
Something similar to the `remote.SSH.allowLocalServerDownload` or the `remote.downloadExtensionsLocally` options that are available in the [vscode-remote extension](https://github.com/microsoft/vscode-remote-release).
A lot of us use VS Code in corporate environments with restricted or no access to the Internet.
|
1.0
|
Option to download the model locally and transfer it to the remote instance running VS Code on a host without Internet access over the SSH connection - **Feature Request for VS Code**
It would be great if there was an option to download the model locally and transfer it to the remote instance running VS Code on a host without Internet access over the SSH connection.
Something similar to the `remote.SSH.allowLocalServerDownload` or the `remote.downloadExtensionsLocally` options that are available in the [vscode-remote extension](https://github.com/microsoft/vscode-remote-release).
A lot of us use VS Code in corporate environments with restricted or no access to the Internet.
|
non_defect
|
option to download the model locally and transfer it to the remote instance running vs code on a host without internet access over the ssh connection feature request for vs code it would be great if there was an option to download the model locally and transfer it to the remote instance running vs code on a host without internet access over the ssh connection something similar to the remote ssh allowlocalserverdownload or the remote downloadextensionslocally options that are available in the a lot of us use vs code in corporate environments with restricted or no access to the internet
| 0
|
80,196
| 30,107,283,015
|
IssuesEvent
|
2023-06-30 03:15:32
|
zed-industries/community
|
https://api.github.com/repos/zed-industries/community
|
opened
|
cursor not on where click spot belongs to when zed got refcoused
|
defect triage admin read
|
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
last cursor focus is on editor, if click terminal area to get zed app refocused to input commands, the focusedposition is still on editor in spite of the position of click cursor, it will mass the source code ...
expect not keep the focused cursor as it was, but focus on where the mouse cursor spot is.
### Environment
Zed: v0.92.3 (stable)
OS: macOS 13.4.0
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_
|
1.0
|
cursor not on where click spot belongs to when zed got refcoused - ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
last cursor focus is on editor, if click terminal area to get zed app refocused to input commands, the focusedposition is still on editor in spite of the position of click cursor, it will mass the source code ...
expect not keep the focused cursor as it was, but focus on where the mouse cursor spot is.
### Environment
Zed: v0.92.3 (stable)
OS: macOS 13.4.0
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_
|
defect
|
cursor not on where click spot belongs to when zed got refcoused check for existing issues completed describe the bug provide steps to reproduce it last cursor focus is on editor if click terminal area to get zed app refocused to input commands the focusedposition is still on editor in spite of the position of click cursor it will mass the source code expect not keep the focused cursor as it was but focus on where the mouse cursor spot is environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
| 1
|
70,351
| 30,639,582,756
|
IssuesEvent
|
2023-07-24 20:40:16
|
Azure/azure-sdk-for-net
|
https://api.github.com/repos/Azure/azure-sdk-for-net
|
closed
|
[Functions] Add ability to specify that System.Text.Json should be used for POCO bindings
|
Service Bus Event Hubs Event Grid Client customer-reported feature-request Tables Functions
|
### Library name and version
Microsoft.Azure.WebJobs.Extensions.ServiceBus 5.3.0
### Query/Question
How do I configure the ServiceBus bindings to use System.Text.Json rather than the unexpected default of Newtonsoft.Json?
I created a POCO message class using `JsonElement` like this:
``` csharp
using System.Text.Json;
namespace MyNamespace;
public sealed record GenericSendRealTimeEventPayload
{
public JsonElement? EventData { get; init; }
public string? EventTarget { get; init; }
public EventTargetTypes? EventTargetType { get; init; }
}
```
and then spent the next couple hours tracking down why the Model Binding was not populating the JsonElement property like the [System.Text.Json docs state](https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-handle-overflow?pivots=dotnet-6-0#deserialize-into-jsonelement-or-jsonnode). This was especially confusing as my unit test:
``` csharp
[Fact]
public void DeserializesAsExpected()
{
string jsonEventData = JsonSerializer.Serialize(ModelMocks.ValidEventData);
GenericSendRealTimeEventPayload expectedPayload = new GenericSendRealTimeEventPayload
{
EventData = JsonDocument.Parse(jsonEventData).RootElement,
EventTarget = ModelMocks.ValidPayload.EventTarget,
EventTargetType = ModelMocks.ValidPayload.EventTargetType,
};
string jsonPayload = JsonSerializer.Serialize(ModelMocks.ValidPayload);
GenericSendRealTimeEventPayload? actualPayload = JsonSerializer.Deserialize<GenericSendRealTimeEventPayload>(jsonPayload);
Assert.Equal(expectedPayload.ToString(), actualPayload?.ToString());
}
```
is passing as expected.
I finally discovered this repo and that the default json serializer is still unexpectedly Newtonsoft.Json. Unfortunately, I can't seem to find any documentation mentioning how to change it (or even what the default is).
Could you please help me understand what I need to do to enable System.Text.Json serializer for my SreviceBus bindings?
### Environment
<PropertyGroup>
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
<PublishReadyToRun>true</PublishReadyToRun>
<RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
Developing locally on Win 10 using Visual Studio 2022 (17.2.0) and .NET 6
|
1.0
|
[Functions] Add ability to specify that System.Text.Json should be used for POCO bindings - ### Library name and version
Microsoft.Azure.WebJobs.Extensions.ServiceBus 5.3.0
### Query/Question
How do I configure the ServiceBus bindings to use System.Text.Json rather than the unexpected default of Newtonsoft.Json?
I created a POCO message class using `JsonElement` like this:
``` csharp
using System.Text.Json;
namespace MyNamespace;
public sealed record GenericSendRealTimeEventPayload
{
public JsonElement? EventData { get; init; }
public string? EventTarget { get; init; }
public EventTargetTypes? EventTargetType { get; init; }
}
```
and then spent the next couple hours tracking down why the Model Binding was not populating the JsonElement property like the [System.Text.Json docs state](https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-handle-overflow?pivots=dotnet-6-0#deserialize-into-jsonelement-or-jsonnode). This was especially confusing as my unit test:
``` csharp
[Fact]
public void DeserializesAsExpected()
{
string jsonEventData = JsonSerializer.Serialize(ModelMocks.ValidEventData);
GenericSendRealTimeEventPayload expectedPayload = new GenericSendRealTimeEventPayload
{
EventData = JsonDocument.Parse(jsonEventData).RootElement,
EventTarget = ModelMocks.ValidPayload.EventTarget,
EventTargetType = ModelMocks.ValidPayload.EventTargetType,
};
string jsonPayload = JsonSerializer.Serialize(ModelMocks.ValidPayload);
GenericSendRealTimeEventPayload? actualPayload = JsonSerializer.Deserialize<GenericSendRealTimeEventPayload>(jsonPayload);
Assert.Equal(expectedPayload.ToString(), actualPayload?.ToString());
}
```
is passing as expected.
I finally discovered this repo and that the default json serializer is still unexpectedly Newtonsoft.Json. Unfortunately, I can't seem to find any documentation mentioning how to change it (or even what the default is).
Could you please help me understand what I need to do to enable System.Text.Json serializer for my SreviceBus bindings?
### Environment
<PropertyGroup>
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
<PublishReadyToRun>true</PublishReadyToRun>
<RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
Developing locally on Win 10 using Visual Studio 2022 (17.2.0) and .NET 6
|
non_defect
|
add ability to specify that system text json should be used for poco bindings library name and version microsoft azure webjobs extensions servicebus query question how do i configure the servicebus bindings to use system text json rather than the unexpected default of newtonsoft json i created a poco message class using jsonelement like this csharp using system text json namespace mynamespace public sealed record genericsendrealtimeeventpayload public jsonelement eventdata get init public string eventtarget get init public eventtargettypes eventtargettype get init and then spent the next couple hours tracking down why the model binding was not populating the jsonelement property like the this was especially confusing as my unit test csharp public void deserializesasexpected string jsoneventdata jsonserializer serialize modelmocks valideventdata genericsendrealtimeeventpayload expectedpayload new genericsendrealtimeeventpayload eventdata jsondocument parse jsoneventdata rootelement eventtarget modelmocks validpayload eventtarget eventtargettype modelmocks validpayload eventtargettype string jsonpayload jsonserializer serialize modelmocks validpayload genericsendrealtimeeventpayload actualpayload jsonserializer deserialize jsonpayload assert equal expectedpayload tostring actualpayload tostring is passing as expected i finally discovered this repo and that the default json serializer is still unexpectedly newtonsoft json unfortunately i can t seem to find any documentation mentioning how to change it or even what the default is could you please help me understand what i need to do to enable system text json serializer for my srevicebus bindings environment true win developing locally on win using visual studio and net
| 0
|
39,624
| 9,582,396,380
|
IssuesEvent
|
2019-05-08 00:27:08
|
IBM/CAST
|
https://api.github.com/repos/IBM/CAST
|
closed
|
Prior failed jobs causing failover problems
|
Comp: Burst Buffer PhaseFound: Development Sev: 3 Status: Closed Type: Defect
|
When LVs are left on one or more CNs due to failures, subsequent failover attempts to a new bbServer attempts to register those LVs and restart any associated transfer definitions for those failed LVs. Not sure of the exposure right now, but this is definitely a problem for spanning handles where not all contributors will be restarted.
|
1.0
|
Prior failed jobs causing failover problems - When LVs are left on one or more CNs due to failures, subsequent failover attempts to a new bbServer attempts to register those LVs and restart any associated transfer definitions for those failed LVs. Not sure of the exposure right now, but this is definitely a problem for spanning handles where not all contributors will be restarted.
|
defect
|
prior failed jobs causing failover problems when lvs are left on one or more cns due to failures subsequent failover attempts to a new bbserver attempts to register those lvs and restart any associated transfer definitions for those failed lvs not sure of the exposure right now but this is definitely a problem for spanning handles where not all contributors will be restarted
| 1
|
61,311
| 17,023,663,914
|
IssuesEvent
|
2021-07-03 03:10:35
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
popups not staying on the home locations map display
|
Component: website Priority: minor Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 10.58am, Monday, 20th December 2010]**
The map display showing your home location and nearby people's home locations, shows popup voice bubble things when you mouseover the markers.
These should probably stay when you click on the marker, because currently it's impossible to click on the username link which is shown there.
|
1.0
|
popups not staying on the home locations map display - **[Submitted to the original trac issue database at 10.58am, Monday, 20th December 2010]**
The map display showing your home location and nearby people's home locations, shows popup voice bubble things when you mouseover the markers.
These should probably stay when you click on the marker, because currently it's impossible to click on the username link which is shown there.
|
defect
|
popups not staying on the home locations map display the map display showing your home location and nearby people s home locations shows popup voice bubble things when you mouseover the markers these should probably stay when you click on the marker because currently it s impossible to click on the username link which is shown there
| 1
|
29,401
| 5,682,023,091
|
IssuesEvent
|
2017-04-13 08:32:35
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
Autocomplete - cursor allways move to the end of the string
|
defect
|
```
[X] bug report => Search github for a similar issue or PR before submitting
```
**Current behavior**
When deleting one character on autocomplete, the cursor allways move to the end of the string
**Expected behavior**
The cursor should stay where we are deleting the string
**Minimal reproduction of the problem with instructions**
<!--
If the current behavior is a bug or you can illustrate your feature request better with an example,
please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5).
-->
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
**Please tell us about your environment:**
<!-- Windows, Chrome -->
* **Angular version:** 4.0.X
* **PrimeNG version:** 2.0.X
<!-- Check whether this is still an issue in the most recent Angular version -->
* **Browser:** [Chrome XX | Firefox XX | IE XX ]
<!-- All browsers where this could be reproduced -->
* **Language:** [TypeScript]
|
1.0
|
Autocomplete - cursor allways move to the end of the string - ```
[X] bug report => Search github for a similar issue or PR before submitting
```
**Current behavior**
When deleting one character on autocomplete, the cursor allways move to the end of the string
**Expected behavior**
The cursor should stay where we are deleting the string
**Minimal reproduction of the problem with instructions**
<!--
If the current behavior is a bug or you can illustrate your feature request better with an example,
please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5).
-->
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
**Please tell us about your environment:**
<!-- Windows, Chrome -->
* **Angular version:** 4.0.X
* **PrimeNG version:** 2.0.X
<!-- Check whether this is still an issue in the most recent Angular version -->
* **Browser:** [Chrome XX | Firefox XX | IE XX ]
<!-- All browsers where this could be reproduced -->
* **Language:** [TypeScript]
|
defect
|
autocomplete cursor allways move to the end of the string bug report search github for a similar issue or pr before submitting current behavior when deleting one character on autocomplete the cursor allways move to the end of the string expected behavior the cursor should stay where we are deleting the string minimal reproduction of the problem with instructions if the current behavior is a bug or you can illustrate your feature request better with an example please provide the steps to reproduce and if possible a minimal demo of the problem via or similar you can use this template as a starting point what is the motivation use case for changing the behavior please tell us about your environment angular version x primeng version x browser language
| 1
|
41,059
| 21,419,958,624
|
IssuesEvent
|
2022-04-22 14:41:22
|
tradle/lambda-plugins
|
https://api.github.com/repos/tradle/lambda-plugins
|
opened
|
Remove s3 downloads after installation
|
performance backlog
|
Currently the s3 installation process downloads files from `s3` into the `/tmp` folder. These files are not necessary after the installation process and could/should be deleted. Deletion may delay the warmup a little longer but lambdas are storage limited and it may become a problem later on if the lamba is on the limit of the available storage.
|
True
|
Remove s3 downloads after installation - Currently the s3 installation process downloads files from `s3` into the `/tmp` folder. These files are not necessary after the installation process and could/should be deleted. Deletion may delay the warmup a little longer but lambdas are storage limited and it may become a problem later on if the lamba is on the limit of the available storage.
|
non_defect
|
remove downloads after installation currently the installation process downloads files from into the tmp folder these files are not necessary after the installation process and could should be deleted deletion may delay the warmup a little longer but lambdas are storage limited and it may become a problem later on if the lamba is on the limit of the available storage
| 0
|
11,204
| 7,107,370,060
|
IssuesEvent
|
2018-01-16 19:44:28
|
truetandem/e-QIP-prototype
|
https://api.github.com/repos/truetandem/e-QIP-prototype
|
closed
|
Previously used address block
|
bug low usability
|
When using an address from the previously used address book -
After address is input it doesn't get the green verified style and check marks
<img width="898" alt="screen shot 2017-10-16 at 3 25 44 pm" src="https://user-images.githubusercontent.com/19935974/31631179-e354155e-b286-11e7-9601-6a6c5ec40900.png">
|
True
|
Previously used address block - When using an address from the previously used address book -
After address is input it doesn't get the green verified style and check marks
<img width="898" alt="screen shot 2017-10-16 at 3 25 44 pm" src="https://user-images.githubusercontent.com/19935974/31631179-e354155e-b286-11e7-9601-6a6c5ec40900.png">
|
non_defect
|
previously used address block when using an address from the previously used address book after address is input it doesn t get the green verified style and check marks img width alt screen shot at pm src
| 0
|
198,406
| 22,638,753,974
|
IssuesEvent
|
2022-06-30 22:08:50
|
snowflakedb/snowflake-jdbc
|
https://api.github.com/repos/snowflakedb/snowflake-jdbc
|
closed
|
SNOW-620299: tika-core-2.4.0.jar: 1 vulnerabilities (highest severity is: 5.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tika-core-2.4.0.jar</b></p></summary>
<p>This is the core Apache Tika? toolkit library from which all other modules inherit functionality. It
also
includes the core facades for the Tika API.</p>
<p>Library home page: <a href="https://tika.apache.org/">https://tika.apache.org/</a></p>
<p>Path to dependency file: /public_pom.xml</p>
<p>Path to vulnerable library: /sitory/org/apache/tika/tika-core/2.4.0/tika-core-2.4.0.jar</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-33879](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33879) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | tika-core-2.4.0.jar | Direct | org.apache.tika:tika-core:1.28.4,2.4.1 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-33879</summary>
### Vulnerable Library - <b>tika-core-2.4.0.jar</b></p>
<p>This is the core Apache Tika? toolkit library from which all other modules inherit functionality. It
also
includes the core facades for the Tika API.</p>
<p>Library home page: <a href="https://tika.apache.org/">https://tika.apache.org/</a></p>
<p>Path to dependency file: /public_pom.xml</p>
<p>Path to vulnerable library: /sitory/org/apache/tika/tika-core/2.4.0/tika-core-2.4.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **tika-core-2.4.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The initial fixes in CVE-2022-30126 and CVE-2022-30973 for regexes in the StandardsExtractingContentHandler were insufficient, and we found a separate, new regex DoS in a different regex in the StandardsExtractingContentHandler. These are now fixed in 1.28.4 and 2.4.1.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33879>CVE-2022-33879</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33879">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33879</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution: org.apache.tika:tika-core:1.28.4,2.4.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
SNOW-620299: tika-core-2.4.0.jar: 1 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tika-core-2.4.0.jar</b></p></summary>
<p>This is the core Apache Tika? toolkit library from which all other modules inherit functionality. It
also
includes the core facades for the Tika API.</p>
<p>Library home page: <a href="https://tika.apache.org/">https://tika.apache.org/</a></p>
<p>Path to dependency file: /public_pom.xml</p>
<p>Path to vulnerable library: /sitory/org/apache/tika/tika-core/2.4.0/tika-core-2.4.0.jar</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-33879](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33879) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | tika-core-2.4.0.jar | Direct | org.apache.tika:tika-core:1.28.4,2.4.1 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-33879</summary>
### Vulnerable Library - <b>tika-core-2.4.0.jar</b></p>
<p>This is the core Apache Tika? toolkit library from which all other modules inherit functionality. It
also
includes the core facades for the Tika API.</p>
<p>Library home page: <a href="https://tika.apache.org/">https://tika.apache.org/</a></p>
<p>Path to dependency file: /public_pom.xml</p>
<p>Path to vulnerable library: /sitory/org/apache/tika/tika-core/2.4.0/tika-core-2.4.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **tika-core-2.4.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The initial fixes in CVE-2022-30126 and CVE-2022-30973 for regexes in the StandardsExtractingContentHandler were insufficient, and we found a separate, new regex DoS in a different regex in the StandardsExtractingContentHandler. These are now fixed in 1.28.4 and 2.4.1.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33879>CVE-2022-33879</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33879">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33879</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution: org.apache.tika:tika-core:1.28.4,2.4.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_defect
|
snow tika core jar vulnerabilities highest severity is vulnerable library tika core jar this is the core apache tika toolkit library from which all other modules inherit functionality it also includes the core facades for the tika api library home page a href path to dependency file public pom xml path to vulnerable library sitory org apache tika tika core tika core jar vulnerabilities cve severity cvss dependency type fixed in remediation available medium tika core jar direct org apache tika tika core details cve vulnerable library tika core jar this is the core apache tika toolkit library from which all other modules inherit functionality it also includes the core facades for the tika api library home page a href path to dependency file public pom xml path to vulnerable library sitory org apache tika tika core tika core jar dependency hierarchy x tika core jar vulnerable library found in base branch master vulnerability details the initial fixes in cve and cve for regexes in the standardsextractingcontenthandler were insufficient and we found a separate new regex dos in a different regex in the standardsextractingcontenthandler these are now fixed in and publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tika tika core rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
| 0
|
15,763
| 2,869,060,846
|
IssuesEvent
|
2015-06-05 23:00:56
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
Deploy Polymer App generates code with errors
|
Area-Pkg Pkg-Observe PolymerMilestone-Next Priority-Medium Triaged Type-Defect
|
Using:
polymer:
description: polymer
source: hosted
version: "0.7.6+4"
polymer_expressions:
description: polymer_expressions
source: hosted
version: "0.7.6+4"
Dart Editor version 0.7.6_r28108
Just ran "deploy polymer app", and it generated code that doesn't compile.
This is from the todo_element sample here: https://github.com/sethladd/dart-polymer-dart-examples/tree/master/web/todo_element
Build looks like this:
#!/usr/bin/env dart
// Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file
// for details. All rights reserved. Use of this source code is governed by a
// BSD-style license that can be found in the LICENSE file.
import 'package:polymer/builder.dart';
void main() {
build(entryPoints: ['web/simple_custom_element/index.html', 'web/todo_element/index.html']);
}
______
**Attachment:**
[Screen Shot 2013-10-05 at 10.02.45 AM.png](https://storage.googleapis.com/google-code-attachments/dart/issue-13849/comment-0/Screen Shot 2013-10-05 at 10.02.45 AM.png) (205.33 KB)
|
1.0
|
Deploy Polymer App generates code with errors - Using:
polymer:
description: polymer
source: hosted
version: "0.7.6+4"
polymer_expressions:
description: polymer_expressions
source: hosted
version: "0.7.6+4"
Dart Editor version 0.7.6_r28108
Just ran "deploy polymer app", and it generated code that doesn't compile.
This is from the todo_element sample here: https://github.com/sethladd/dart-polymer-dart-examples/tree/master/web/todo_element
Build looks like this:
#!/usr/bin/env dart
// Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file
// for details. All rights reserved. Use of this source code is governed by a
// BSD-style license that can be found in the LICENSE file.
import 'package:polymer/builder.dart';
void main() {
build(entryPoints: ['web/simple_custom_element/index.html', 'web/todo_element/index.html']);
}
______
**Attachment:**
[Screen Shot 2013-10-05 at 10.02.45 AM.png](https://storage.googleapis.com/google-code-attachments/dart/issue-13849/comment-0/Screen Shot 2013-10-05 at 10.02.45 AM.png) (205.33 KB)
|
defect
|
deploy polymer app generates code with errors using nbsp nbsp polymer nbsp nbsp nbsp nbsp description polymer nbsp nbsp nbsp nbsp source hosted nbsp nbsp nbsp nbsp version quot quot nbsp nbsp polymer expressions nbsp nbsp nbsp nbsp description polymer expressions nbsp nbsp nbsp nbsp source hosted nbsp nbsp nbsp nbsp version quot quot dart editor version just ran quot deploy polymer app quot and it generated code that doesn t compile this is from the todo element sample here build looks like this usr bin env dart copyright c the dart project authors please see the authors file for details all rights reserved use of this source code is governed by a bsd style license that can be found in the license file import package polymer builder dart void main nbsp nbsp build entrypoints attachment shot at am png kb
| 1
|
7,150
| 2,610,329,380
|
IssuesEvent
|
2015-02-26 19:46:08
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
opened
|
Engineer's Turrets: strong and weak against entries
|
auto-migrated Priority-Medium Type-Defect
|
```
The turrets built in the droid engineer's pad have vanilla FOC entries for
strong against and weak against.
Need to fix this in Groundturrets.xml and Engineerturrets.xml
I have fixed a few issues in Groundturrets.xml and sent it to Z, but I didnt
fix this
```
-----
Original issue reported on code.google.com by `richarda...@gmail.com` on 14 Jun 2011 at 7:12
|
1.0
|
Engineer's Turrets: strong and weak against entries - ```
The turrets built in the droid engineer's pad have vanilla FOC entries for
strong against and weak against.
Need to fix this in Groundturrets.xml and Engineerturrets.xml
I have fixed a few issues in Groundturrets.xml and sent it to Z, but I didnt
fix this
```
-----
Original issue reported on code.google.com by `richarda...@gmail.com` on 14 Jun 2011 at 7:12
|
defect
|
engineer s turrets strong and weak against entries the turrets built in the droid engineer s pad have vanilla foc entries for strong against and weak against need to fix this in groundturrets xml and engineerturrets xml i have fixed a few issues in groundturrets xml and sent it to z but i didnt fix this original issue reported on code google com by richarda gmail com on jun at
| 1
|
71,428
| 3,357,171,582
|
IssuesEvent
|
2015-11-18 23:48:22
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
ABAC authorizer should provide access to unversioned endpoints
|
priority/P1 team/control-plane team/CSI
|
Right now the [abac authorizer](https://github.com/kubernetes/kubernetes/blob/master/pkg/auth/authorizer/abac/abac.go) doesn't provide a way to authorize access to unversioned api endpoints, e.g. /version, /api. This means that you cannot use`kubectl` to interact with resources on an apiserver using `--authorization-mode=ABAC`, since `kubectl` hits /api to get the list of supported api versions.
At least [one user](http://stackoverflow.com/questions/33790438/using-kubectl-with-kubernetes-authorization-mode-abac) has hit this problem. cc @liggitt, @deads2k, @lavalamp
|
1.0
|
ABAC authorizer should provide access to unversioned endpoints - Right now the [abac authorizer](https://github.com/kubernetes/kubernetes/blob/master/pkg/auth/authorizer/abac/abac.go) doesn't provide a way to authorize access to unversioned api endpoints, e.g. /version, /api. This means that you cannot use`kubectl` to interact with resources on an apiserver using `--authorization-mode=ABAC`, since `kubectl` hits /api to get the list of supported api versions.
At least [one user](http://stackoverflow.com/questions/33790438/using-kubectl-with-kubernetes-authorization-mode-abac) has hit this problem. cc @liggitt, @deads2k, @lavalamp
|
non_defect
|
abac authorizer should provide access to unversioned endpoints right now the doesn t provide a way to authorize access to unversioned api endpoints e g version api this means that you cannot use kubectl to interact with resources on an apiserver using authorization mode abac since kubectl hits api to get the list of supported api versions at least has hit this problem cc liggitt lavalamp
| 0
|
313,354
| 9,560,113,443
|
IssuesEvent
|
2019-05-03 18:38:25
|
idaholab/raven
|
https://api.github.com/repos/idaholab/raven
|
opened
|
Update Manuals INL Code reports
|
manuals priority_normal task
|
--------
Issue Description
--------
Update the manuals codes and Printing dates.
This issue should be a floating one (not close but just reference)
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [ ] 1. Is it tagged with a type: defect or task?
- [ ] 2. Is it tagged with a priority: critical, normal or minor?
- [ ] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [ ] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [ ] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
1.0
|
Update Manuals INL Code reports - --------
Issue Description
--------
Update the manuals codes and Printing dates.
This issue should be a floating one (not close but just reference)
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [ ] 1. Is it tagged with a type: defect or task?
- [ ] 2. Is it tagged with a priority: critical, normal or minor?
- [ ] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [ ] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [ ] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
non_defect
|
update manuals inl code reports issue description update the manuals codes and printing dates this issue should be a floating one not close but just reference for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
| 0
|
15,452
| 2,855,284,369
|
IssuesEvent
|
2015-06-02 08:35:14
|
contao/core
|
https://api.github.com/repos/contao/core
|
closed
|
[3.4.5] Artikel Inhaltselement Typ "Download" Beschreibung bei Dateiauswahl
|
defect
|
For a content element of type "Download" in an article, the file picker shows the following description:
>Please select a file or a folder from the file manager.
However, only a file can be selected, not folders.
|
1.0
|
[3.4.5] Article content element type "Download": description in file picker - For a content element of type "Download" in an article, the file picker shows the following description:
>Please select a file or a folder from the file manager.
However, only a file can be selected, not folders.
|
defect
|
artikel inhaltselement typ download beschreibung bei dateiauswahl bei einem inhaltselement des typs download in einem artikel ist bei der dateiauswahl folgende beschreibung bitte wählen sie eine datei oder einen ordner aus der dateiübersicht auswählbar sind ist aber nur eine datei und keine ordner
| 1
|
10,778
| 2,622,188,275
|
IssuesEvent
|
2015-03-04 00:21:53
|
byzhang/cudpp
|
https://api.github.com/repos/byzhang/cudpp
|
closed
|
Document Size Limitations for all Algorithms
|
auto-migrated Component-Docs Milestone-Release2.0 Priority-Medium Type-Defect
|
```
We get a lot of questions about supported size limitations. We need to
document all limitations in the CUDPP docs.
```
Original issue reported on code.google.com by `harr...@gmail.com` on 11 Jun 2009 at 8:22
|
1.0
|
Document Size Limitations for all Algorithms - ```
We get a lot of questions about supported size limitations. We need to
document all limitations in the CUDPP docs.
```
Original issue reported on code.google.com by `harr...@gmail.com` on 11 Jun 2009 at 8:22
|
defect
|
document size limitations for all algorithms we get a lot of questions about supported size limitations we need to document all limitations in the cudpp docs original issue reported on code google com by harr gmail com on jun at
| 1
|
44,232
| 11,413,169,134
|
IssuesEvent
|
2020-02-01 17:52:14
|
aaroncawte/bbncreative-portfolio
|
https://api.github.com/repos/aaroncawte/bbncreative-portfolio
|
closed
|
Upgrade to Django 3
|
Build and deploy
|
The official guide on doing so: https://docs.djangoproject.com/en/3.0/howto/upgrade-version/
Work has already been done around:
- Changing security configuration which becomes default from 3.0
- Unifying working environments to same Python version
- Updating other Python dependencies
|
1.0
|
Upgrade to Django 3 - The official guide on doing so: https://docs.djangoproject.com/en/3.0/howto/upgrade-version/
Work has already been done around:
- Changing security configuration which becomes default from 3.0
- Unifying working environments to same Python version
- Updating other Python dependencies
|
non_defect
|
upgrade to django the official guide on doing so work has already been done around changing security configuration which becomes default from unifying working environments to same python version updating other python dependencies
| 0
|
74,194
| 25,003,298,602
|
IssuesEvent
|
2022-11-03 09:49:24
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
opened
|
rec: startup race can happen that provokes a.root-servers.net to go Bogus on statup
|
rec defect
|
This is a return of #6138
I can provoke it of I only only use the `a` and `j` `root-servers as these return `aa` answers for `DS`queries:
```
$ dig root-servers.net DS @j.root-servers.net +dnssec
; <<>> DiG 9.10.6 <<>> root-servers.net DS @j.root-servers.net +dnssec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51491
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 1472
;; QUESTION SECTION:
;root-servers.net. IN DS
;; AUTHORITY SECTION:
root-servers.net. 3600000 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2022102600 14400 7200 1209600 3600000
;; Query time: 173 msec
;; SERVER: 192.58.128.30#53(192.58.128.30)
;; WHEN: Thu Nov 03 10:40:17 CET 2022
;; MSG SIZE rcvd: 105
```
`a.root-servers.net` sometimes does. So the set of NS that show the `aa` response is different from three years ago.
During thread startup, the threads that handle queries start already doing so while the priming of the root in the handler thread is still underway.
Also, the initial priming does not enforce the the `net. NS` records to be retrieved by calling `primeRootNSZones`, as the housekeeping does.
These things together can cause an early `a.root-servers.net|A` query to find no `.net|A` records in the cache, hitting the #6138 scenario.
To provoke: use `dnssec=validate` and `qname-minimization=no`, restart a few times with healthcheck to the recursor active.
|
1.0
|
rec: startup race can happen that provokes a.root-servers.net to go Bogus on statup - This is a return of #6138
I can provoke it of I only only use the `a` and `j` `root-servers as these return `aa` answers for `DS`queries:
```
$ dig root-servers.net DS @j.root-servers.net +dnssec
; <<>> DiG 9.10.6 <<>> root-servers.net DS @j.root-servers.net +dnssec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51491
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 1472
;; QUESTION SECTION:
;root-servers.net. IN DS
;; AUTHORITY SECTION:
root-servers.net. 3600000 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2022102600 14400 7200 1209600 3600000
;; Query time: 173 msec
;; SERVER: 192.58.128.30#53(192.58.128.30)
;; WHEN: Thu Nov 03 10:40:17 CET 2022
;; MSG SIZE rcvd: 105
```
`a.root-servers.net` sometimes does. So the set of NS that show the `aa` response is different from three years ago.
During thread startup, the threads that handle queries start already doing so while the priming of the root in the handler thread is still underway.
Also, the initial priming does not enforce the the `net. NS` records to be retrieved by calling `primeRootNSZones`, as the housekeeping does.
These things together can cause an early `a.root-servers.net|A` query to find no `.net|A` records in the cache, hitting the #6138 scenario.
To provoke: use `dnssec=validate` and `qname-minimization=no`, restart a few times with healthcheck to the recursor active.
|
defect
|
rec startup race can happen that provokes a root servers net to go bogus on statup this is a return of i can provoke it of i only only use the a and j root servers as these return aa answers for ds queries dig root servers net ds j root servers net dnssec dig root servers net ds j root servers net dnssec global options cmd got answer header opcode query status noerror id flags qr aa rd query answer authority additional warning recursion requested but not available opt pseudosection edns version flags do udp question section root servers net in ds authority section root servers net in soa a root servers net nstld verisign grs com query time msec server when thu nov cet msg size rcvd a root servers net sometimes does so the set of ns that show the aa response is different from three years ago during thread startup the threads that handle queries start already doing so while the priming of the root in the handler thread is still underway also the initial priming does not enforce the the net ns records to be retrieved by calling primerootnszones as the housekeeping does these things together can cause an early a root servers net a query to find no net a records in the cache hitting the scenario to provoke use dnssec validate and qname minimization no restart a few times with healthcheck to the recursor active
| 1
|
5,230
| 7,086,105,191
|
IssuesEvent
|
2018-01-11 13:31:11
|
david0437/internetlicenseservicesstatus
|
https://api.github.com/repos/david0437/internetlicenseservicesstatus
|
opened
|
Planned maintenance of the servers
|
Internet License Service Web Portal
|
Servers will be updated on Sunday, January 14 at 11:55 p.m. Internet license service will not be available during the update (5-30 minutes).
|
1.0
|
Planned maintenance of the servers - Servers will be updated on Sunday, January 14 at 11:55 p.m. Internet license service will not be available during the update (5-30 minutes).
|
non_defect
|
planned maintenance of the servers servers will be updated on sunday january at p m internet license service will not be available during the update minutes
| 0
|
80,138
| 30,034,953,423
|
IssuesEvent
|
2023-06-27 12:10:28
|
vector-im/element-desktop
|
https://api.github.com/repos/vector-im/element-desktop
|
closed
|
Scrolling glitches / set backs
|
T-Defect
|
### Steps to reproduce
1. Where are you starting? What can you see?
I try to scroll down
https://www.youtube.com/watch?v=73w6mDHOOqc&feature=youtu.be
(nothing else is done in this video, just scrolling down)
### Outcome
#### What did you expect?
Smooth scrolling
#### What happened instead?
scrolling starts to glitch and then set back
I have this in many Rooms occasionally
This time it's so bad, that i can't even bypass it by scrolling fast.
### Operating system
Ubuzntu Mate 22.04
### Application version
Version von Element: 1.11.34 Version von Olm: 3.2.14
### How did you install the app?
element.io
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Scrolling glitches / set backs - ### Steps to reproduce
1. Where are you starting? What can you see?
I try to scroll down
https://www.youtube.com/watch?v=73w6mDHOOqc&feature=youtu.be
(nothing else is done in this video, just scrolling down)
### Outcome
#### What did you expect?
Smooth scrolling
#### What happened instead?
scrolling starts to glitch and then set back
I have this in many Rooms occasionally
This time it's so bad, that i can't even bypass it by scrolling fast.
### Operating system
Ubuzntu Mate 22.04
### Application version
Version von Element: 1.11.34 Version von Olm: 3.2.14
### How did you install the app?
element.io
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
scrolling glitches set backs steps to reproduce where are you starting what can you see i try to scroll down nothing else is done in this video just scrolling down outcome what did you expect smooth scrolling what happened instead scrolling starts to glitch and then set back i have this in many rooms occasionally this time it s so bad that i can t even bypass it by scrolling fast operating system ubuzntu mate application version version von element version von olm how did you install the app element io homeserver matrix org will you send logs no
| 1
|
152,712
| 19,693,723,984
|
IssuesEvent
|
2022-01-12 09:56:43
|
tlkh/serverless-transformers
|
https://api.github.com/repos/tlkh/serverless-transformers
|
opened
|
CVE-2022-22815 (Medium) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl
|
security vulnerability
|
## CVE-2022-22815 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/backend/requirements.txt</p>
<p>
Dependency Hierarchy:
- streamlit-0.55.2-py2.py3-none-any.whl (Root Library)
- :x: **Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
path_getbbox in path.c in Pillow before 9.0.0 improperly initializes ImagePath.Path.
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22815>CVE-2022-22815</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22815">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22815</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: Pillow - 9.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-22815 (Medium) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2022-22815 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/backend/requirements.txt</p>
<p>
Dependency Hierarchy:
- streamlit-0.55.2-py2.py3-none-any.whl (Root Library)
- :x: **Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
path_getbbox in path.c in Pillow before 9.0.0 improperly initializes ImagePath.Path.
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22815>CVE-2022-22815</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22815">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22815</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: Pillow - 9.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in pillow whl cve medium severity vulnerability vulnerable library pillow whl python imaging library fork library home page a href path to dependency file requirements txt path to vulnerable library requirements txt backend requirements txt dependency hierarchy streamlit none any whl root library x pillow whl vulnerable library found in base branch main vulnerability details path getbbox in path c in pillow before improperly initializes imagepath path publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution pillow step up your open source security game with whitesource
| 0
|
9,124
| 2,615,132,651
|
IssuesEvent
|
2015-03-01 06:02:58
|
chrsmith/google-api-java-client
|
https://api.github.com/repos/chrsmith/google-api-java-client
|
opened
|
Directory.Members.Get bugs
|
auto-migrated Priority-Medium Type-Defect
|
```
Version of google-api-java-client (e.g. 1.15.0-rc)?
1.15.0-rc
Java environment (e.g. Java 6, Android 2.3, App Engine)?
Java 6
application
Describe the problem.
GET
https://www.googleapis.com/admin/directory/v1/groups/{groupKey}/members/{memberKey}
if I set member's email to {memberKey}, "400 Bad Request" happen
but I set member's ID to {memberKey}, OK
How would you expect it to be fixed?
```
Original issue reported on code.google.com by `fuhong...@isr.co.jp` on 18 Jun 2013 at 2:23
|
1.0
|
Directory.Members.Get bugs - ```
Version of google-api-java-client (e.g. 1.15.0-rc)?
1.15.0-rc
Java environment (e.g. Java 6, Android 2.3, App Engine)?
Java 6
application
Describe the problem.
GET
https://www.googleapis.com/admin/directory/v1/groups/{groupKey}/members/{memberKey}
if I set member's email to {memberKey}, "400 Bad Request" happen
but I set member's ID to {memberKey}, OK
How would you expect it to be fixed?
```
Original issue reported on code.google.com by `fuhong...@isr.co.jp` on 18 Jun 2013 at 2:23
|
defect
|
directory members get bugs version of google api java client e g rc rc java environment e g java android app engine java application describe the problem get ey if i set member s email to memberkey bad request happen but i set member s id to memberkey ok how would you expect it to be fixed original issue reported on code google com by fuhong isr co jp on jun at
| 1
|
53,716
| 13,262,138,807
|
IssuesEvent
|
2020-08-20 21:10:39
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
Can't have two decoders in a single tray (Trac #1929)
|
Migrated from Trac combo core defect
|
It's not possible to have two decoders:
tray.AddSegment(payload_parsing.I3DOMLaunchExtractor...)
in the same tray. We now need this for the combined L1 and L2 processing of the pass2 processing.
Both services used by the decoder use default services names
to set and get services.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1929">https://code.icecube.wisc.edu/projects/icecube/ticket/1929</a>, reported by blaufussand owned by blaufuss</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:30",
"_ts": "1550067210114669",
"description": "It's not possible to have two decoders:\ntray.AddSegment(payload_parsing.I3DOMLaunchExtractor...)\nin the same tray. We now need this for the combined L1 and L2 processing of the pass2 processing.\n\nBoth services used by the decoder use default services names\nto set and get services. ",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2016-12-22T01:32:20",
"component": "combo core",
"summary": "Can't have two decoders in a single tray",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Can't have two decoders in a single tray (Trac #1929) - It's not possible to have two decoders:
tray.AddSegment(payload_parsing.I3DOMLaunchExtractor...)
in the same tray. We now need this for the combined L1 and L2 processing of the pass2 processing.
Both services used by the decoder use default services names
to set and get services.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1929">https://code.icecube.wisc.edu/projects/icecube/ticket/1929</a>, reported by blaufussand owned by blaufuss</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:30",
"_ts": "1550067210114669",
"description": "It's not possible to have two decoders:\ntray.AddSegment(payload_parsing.I3DOMLaunchExtractor...)\nin the same tray. We now need this for the combined L1 and L2 processing of the pass2 processing.\n\nBoth services used by the decoder use default services names\nto set and get services. ",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2016-12-22T01:32:20",
"component": "combo core",
"summary": "Can't have two decoders in a single tray",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
defect
|
can t have two decoders in a single tray trac it s not possible to have two decoders tray addsegment payload parsing in the same tray we now need this for the combined and processing of the processing both services used by the decoder use default services names to set and get services migrated from json status closed changetime ts description it s not possible to have two decoders ntray addsegment payload parsing nin the same tray we now need this for the combined and processing of the processing n nboth services used by the decoder use default services names nto set and get services reporter blaufuss cc resolution fixed time component combo core summary can t have two decoders in a single tray priority normal keywords milestone owner blaufuss type defect
| 1
|
88,390
| 8,140,240,768
|
IssuesEvent
|
2018-08-20 20:25:25
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
teamcity: failed tests on master: testrace/TestInitialPartitioning, test/TestInitialPartitioning
|
C-test-failure O-robot
|
The following tests appear to have failed:
[#848371](https://teamcity.cockroachdb.com/viewLog.html?buildId=848371):
```
--- FAIL: testrace/TestInitialPartitioning (242.380s)
------- Stdout: -------
W180820 01:59:37.399904 1000 server/status/runtime.go:294 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 01:59:37.440283 1000 server/server.go:829 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 01:59:37.440860 1000 base/addr_validation.go:260 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:37.441011 1000 base/addr_validation.go:300 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:37.465470 1000 server/config.go:496 [n?] 1 storage engine initialized
I180820 01:59:37.465594 1000 server/config.go:499 [n?] RocksDB cache size: 128 MiB
I180820 01:59:37.466002 1000 server/config.go:499 [n?] store 0: in-memory, size 0 B
I180820 01:59:37.534309 1000 server/node.go:373 [n?] **** cluster f18aada7-b5f5-463a-956a-d20947d68a02 has been created
I180820 01:59:37.534424 1000 server/server.go:1400 [n?] **** add additional nodes by specifying --join=127.0.0.1:36095
I180820 01:59:37.535873 1000 gossip/gossip.go:383 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36095" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1784-g97db142" started_at:1534730377535346318
I180820 01:59:37.555314 1000 storage/store.go:1535 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180820 01:59:37.556159 1000 server/node.go:476 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180820 01:59:37.556475 1000 storage/stores.go:242 [n1] read 0 node addresses from persistent storage
I180820 01:59:37.556894 1000 server/node.go:697 [n1] connecting to gossip network to verify cluster ID...
I180820 01:59:37.558133 1000 server/node.go:722 [n1] node connected via gossip and verified as part of cluster "f18aada7-b5f5-463a-956a-d20947d68a02"
I180820 01:59:37.558609 1000 server/node.go:546 [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180820 01:59:37.560825 1000 server/status/recorder.go:652 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:37.560941 1000 server/server.go:1806 [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 01:59:37.563520 1234 server/status/recorder.go:652 [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:37.564042 1000 server/server.go:1537 [n1] starting https server at 127.0.0.1:39071 (use: 127.0.0.1:39071)
I180820 01:59:37.564278 1000 server/server.go:1539 [n1] starting grpc/postgres server at 127.0.0.1:36095
I180820 01:59:37.564327 1000 server/server.go:1540 [n1] advertising CockroachDB node at 127.0.0.1:36095
W180820 01:59:37.565697 1000 jobs/registry.go:308 [n1] unable to get node liveness: node not in the liveness table
I180820 01:59:37.620005 1269 storage/replica_command.go:282 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180820 01:59:37.749875 1257 storage/replica_command.go:282 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180820 01:59:37.757732 1195 storage/intent_resolver.go:668 [n1,s1] failed to push during intent resolution: failed to push "sql txn" id=48db07cf key=/Table/SystemConfigSpan/Start rw=true pri=0.00577476 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534730377.611829761,0 orig=1534730377.611829761,0 max=1534730377.611829761,0 wto=false rop=false seq=6
I180820 01:59:37.964827 1274 sql/event_log.go:126 [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180820 01:59:38.070899 1034 rpc/nodedialer/nodedialer.go:92 [consistencyChecker,n1,s1,r1/1:/{Min-System/}] connection to n1 established
I180820 01:59:38.178205 1286 storage/replica_command.go:282 [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180820 01:59:38.264443 1315 storage/replica_command.go:282 [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180820 01:59:38.332477 1319 storage/replica_command.go:282 [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180820 01:59:38.363101 1249 sql/event_log.go:126 [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:root}
I180820 01:59:38.433995 1350 storage/replica_command.go:282 [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180820 01:59:38.564133 1293 sql/event_log.go:126 [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180820 01:59:38.597155 1365 storage/replica_command.go:282 [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180820 01:59:38.725675 1398 storage/replica_command.go:282 [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180820 01:59:38.839826 1442 storage/replica_command.go:282 [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180820 01:59:38.877937 1402 sql/event_log.go:126 [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:gen_random_uuid()::STRING User:root}
I180820 01:59:38.936517 1477 storage/replica_command.go:282 [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180820 01:59:38.997862 1491 storage/replica_command.go:282 [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180820 01:59:39.075124 1432 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180820 01:59:39.130764 1495 storage/replica_command.go:282 [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180820 01:59:39.173011 1388 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180820 01:59:39.201751 1000 server/server.go:1593 [n1] done ensuring all necessary migrations have run
I180820 01:59:39.201888 1000 server/server.go:1596 [n1] serving sql connections
I180820 01:59:39.242310 1420 storage/replica_command.go:282 [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180820 01:59:39.254313 1511 server/server_update.go:67 [n1] no need to upgrade, cluster already at the newest version
I180820 01:59:39.287102 1513 sql/event_log.go:126 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:36095} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1784-g97db142 StartedAt:1534730377535346318 LocalityAddress:[]} ClusterID:f18aada7-b5f5-463a-956a-d20947d68a02 StartedAt:1534730377535346318 LastUp:1534730377535346318}
I180820 01:59:39.321741 1519 storage/replica_command.go:282 [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180820 01:59:39.383384 1424 storage/replica_command.go:282 [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180820 01:59:39.456597 1503 storage/replica_command.go:282 [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180820 01:59:39.506830 1560 storage/replica_command.go:282 [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180820 01:59:39.580998 1622 storage/replica_command.go:282 [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180820 01:59:39.611457 1650 storage/replica_command.go:282 [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180820 01:59:39.664562 1633 storage/replica_command.go:282 [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180820 01:59:39.708227 1616 storage/replica_command.go:282 [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180820 01:59:40.038315 1000 server/status/runtime.go:294 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 01:59:40.175405 1000 server/server.go:829 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 01:59:40.175960 1000 base/addr_validation.go:260 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:40.176063 1000 base/addr_validation.go:300 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:40.197234 1000 server/config.go:496 [n?] 1 storage engine initialized
I180820 01:59:40.197318 1000 server/config.go:499 [n?] RocksDB cache size: 128 MiB
I180820 01:59:40.197351 1000 server/config.go:499 [n?] store 0: in-memory, size 0 B
W180820 01:59:40.197655 1000 gossip/gossip.go:1351 [n?] no incoming or outgoing connections
I180820 01:59:40.198339 1000 server/server.go:1402 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180820 01:59:40.330941 1686 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:42495}
I180820 01:59:40.332184 1766 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:36095
I180820 01:59:40.361698 1000 server/node.go:697 [n?] connecting to gossip network to verify cluster ID...
I180820 01:59:40.362742 1000 server/node.go:722 [n?] node connected via gossip and verified as part of cluster "f18aada7-b5f5-463a-956a-d20947d68a02"
I180820 01:59:40.368091 1727 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:40.377364 1726 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:40.392357 1000 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:40.398408 1000 server/node.go:428 [n?] new node allocated ID 2
I180820 01:59:40.398840 1000 gossip/gossip.go:383 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:42495" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1784-g97db142" started_at:1534730380398506049
I180820 01:59:40.400095 1000 storage/stores.go:242 [n2] read 0 node addresses from persistent storage
I180820 01:59:40.400527 1000 storage/stores.go:261 [n2] wrote 1 node addresses to persistent storage
I180820 01:59:40.402087 1649 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
I180820 01:59:40.435589 1000 server/node.go:672 [n2] bootstrapped store [n2,s2]
I180820 01:59:40.437284 1000 server/node.go:546 [n2] node=2: started with [] engine(s) and attributes []
I180820 01:59:40.440138 1000 server/status/recorder.go:652 [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:40.440250 1000 server/server.go:1806 [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 01:59:40.443189 1867 server/status/recorder.go:652 [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:40.450338 1000 server/server.go:1537 [n2] starting https server at 127.0.0.1:33161 (use: 127.0.0.1:33161)
I180820 01:59:40.450461 1000 server/server.go:1539 [n2] starting grpc/postgres server at 127.0.0.1:42495
I180820 01:59:40.450507 1000 server/server.go:1540 [n2] advertising CockroachDB node at 127.0.0.1:42495
I180820 01:59:40.464060 1000 server/server.go:1593 [n2] done ensuring all necessary migrations have run
I180820 01:59:40.464176 1000 server/server.go:1596 [n2] serving sql connections
I180820 01:59:40.471376 1892 storage/replica_raftstorage.go:538 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot b25c1917 at index 39
I180820 01:59:40.587857 1909 server/server_update.go:67 [n2] no need to upgrade, cluster already at the newest version
I180820 01:59:40.610665 1911 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:42495} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1784-g97db142 StartedAt:1534730380398506049 LocalityAddress:[]} ClusterID:f18aada7-b5f5-463a-956a-d20947d68a02 StartedAt:1534730380398506049 LastUp:1534730380398506049}
I180820 01:59:40.634139 1774 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n1 established
I180820 01:59:40.683215 1892 rpc/nodedialer/nodedialer.go:92 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] connection to n2 established
I180820 01:59:40.683504 1918 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n2 established
I180820 01:59:40.689547 1892 storage/store_snapshot.go:655 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 34, log entries: 29, rate-limit: 2.0 MiB/sec, 6ms
I180820 01:59:40.693713 1970 storage/replica_raftstorage.go:782 [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=b25c1917, encoded size=27443, 1 rocksdb batches, 29 log entries)
I180820 01:59:40.706835 1970 storage/replica_raftstorage.go:788 [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=4ms commit=1ms]
I180820 01:59:40.711592 1892 storage/replica_command.go:792 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=842354451496]
W180820 01:59:40.715136 1000 server/status/runtime.go:294 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 01:59:40.737513 1892 storage/replica.go:3749 [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:40.793267 1975 storage/replica_range_lease.go:554 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] transferring lease to s2
I180820 01:59:40.794671 1976 util/protoutil/randnullability.go:94 inserting null for (roachpb.Lease).Expiration: true
I180820 01:59:40.794826 1976 util/protoutil/randnullability.go:94 inserting null for (roachpb.Lease).DeprecatedStartStasis: false
I180820 01:59:40.804733 1975 storage/replica_range_lease.go:617 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] done transferring lease to s2: <nil>
I180820 01:59:40.807362 2022 storage/replica_raftstorage.go:538 [replicate,n1,s1,r16/1:/Table/{19-20}] generated preemptive snapshot 420af157 at index 19
I180820 01:59:40.807617 1794 storage/replica_proposal.go:214 [n2,s2,r4/2:/System/{NodeLive…-tsd}] new range lease repl=(n2,s2):2 seq=3 start=1534730380.793394013,0 epo=1 pro=1534730380.793406704,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:40.823920 2022 storage/store_snapshot.go:655 [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 2.0 MiB/sec, 15ms
I180820 01:59:40.827380 1961 storage/replica_raftstorage.go:782 [n2,s2,r16/?:{-}] applying preemptive snapshot at index 19 (id=420af157, encoded size=2590, 1 rocksdb batches, 9 log entries)
I180820 01:59:40.831033 1961 storage/replica_raftstorage.go:788 [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 3ms [clear=1ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:40.835309 2022 storage/replica_command.go:792 [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=842353293368]
I180820 01:59:40.854467 1000 server/server.go:829 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 01:59:40.855054 1000 base/addr_validation.go:260 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:40.855176 1000 base/addr_validation.go:300 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:40.874743 1000 server/config.go:496 [n?] 1 storage engine initialized
I180820 01:59:40.874858 1000 server/config.go:499 [n?] RocksDB cache size: 128 MiB
I180820 01:59:40.874917 1000 server/config.go:499 [n?] store 0: in-memory, size 0 B
W180820 01:59:40.875183 1000 gossip/gossip.go:1351 [n?] no incoming or outgoing connections
I180820 01:59:40.880258 2022 storage/replica.go:3749 [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:40.902566 1000 server/server.go:1402 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180820 01:59:40.961695 1977 storage/replica_command.go:792 [replicate,n2,s2,r4/2:/System/{NodeLive…-tsd}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, (n2,s2):2, next=3, gen=842353495096]
I180820 01:59:40.994410 1950 storage/replica_range_lease.go:554 [replicate,n1,s1,r16/1:/Table/{19-20}] transferring lease to s2
I180820 01:59:41.043492 1802 storage/replica_proposal.go:214 [n2,s2,r16/2:/Table/{19-20}] new range lease repl=(n2,s2):2 seq=3 start=1534730380.994544835,0 epo=1 pro=1534730380.994564458,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.051679 1950 storage/replica_range_lease.go:617 [replicate,n1,s1,r16/1:/Table/{19-20}] done transferring lease to s2: <nil>
I180820 01:59:41.053872 2131 storage/replica_raftstorage.go:538 [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot c1dc4863 at index 18
I180820 01:59:41.065717 2131 storage/store_snapshot.go:655 [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 11ms
I180820 01:59:41.068063 2134 storage/replica_raftstorage.go:782 [n2,s2,r12/?:{-}] applying preemptive snapshot at index 18 (id=c1dc4863, encoded size=2386, 1 rocksdb batches, 8 log entries)
I180820 01:59:41.070253 2134 storage/replica_raftstorage.go:788 [n2,s2,r12/?:/Table/1{5-6}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:41.080058 2131 storage/replica_command.go:792 [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=842369604472]
I180820 01:59:41.146477 2131 storage/replica.go:3749 [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:41.152731 2024 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:36095
I180820 01:59:41.154453 2076 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:33963}
I180820 01:59:41.161182 1000 server/node.go:697 [n?] connecting to gossip network to verify cluster ID...
I180820 01:59:41.161467 1000 server/node.go:722 [n?] node connected via gossip and verified as part of cluster "f18aada7-b5f5-463a-956a-d20947d68a02"
I180820 01:59:41.168081 2103 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:41.182505 2203 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:41.205846 1000 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:41.211308 2167 storage/replica_range_lease.go:554 [replicate,n1,s1,r12/1:/Table/1{5-6}] transferring lease to s2
I180820 01:59:41.211081 1977 storage/replica.go:3749 [n2,s2,r4/2:/System/{NodeLive…-tsd}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:41.225992 2167 storage/replica_range_lease.go:617 [replicate,n1,s1,r12/1:/Table/1{5-6}] done transferring lease to s2: <nil>
I180820 01:59:41.227235 1815 storage/replica_proposal.go:214 [n2,s2,r12/2:/Table/1{5-6}] new range lease repl=(n2,s2):2 seq=3 start=1534730381.211426378,0 epo=1 pro=1534730381.211438917,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.247842 2184 storage/replica_raftstorage.go:538 [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot c865c7bc at index 18
I180820 01:59:41.255900 2184 storage/store_snapshot.go:655 [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 7ms
I180820 01:59:41.265761 2207 storage/replica_raftstorage.go:782 [n2,s2,r13/?:{-}] applying preemptive snapshot at index 18 (id=c865c7bc, encoded size=2552, 1 rocksdb batches, 8 log entries)
I180820 01:59:41.268178 2207 storage/replica_raftstorage.go:788 [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:41.276746 2184 storage/replica_command.go:792 [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=842353293512]
I180820 01:59:41.283147 1959 storage/store.go:3717 [n1,s1,r4/1:/System/{NodeLive…-tsd}] added to replica GC queue (peer suggestion)
I180820 01:59:41.294038 2109 storage/replica_command.go:792 [replicate,n2,s2,r16/2:/Table/{19-20}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, (n2,s2):2, next=3, gen=842354532256]
I180820 01:59:41.358093 2126 storage/store.go:2612 [replicaGC,n1,s1,r4/1:/System/{NodeLive…-tsd}] removing replica
I180820 01:59:41.361638 2126 storage/replica.go:880 [replicaGC,n1,s1,r4/1:/System/{NodeLive…-tsd}] removed 27 (19+8) keys in 3ms [clear=2ms commit=0ms]
I180820 01:59:41.426825 2184 storage/replica.go:3749 [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:41.486173 2177 storage/replica_range_lease.go:554 [replicate,n1,s1,r13/1:/Table/1{6-7}] transferring lease to s2
I180820 01:59:41.495523 2109 storage/replica.go:3749 [n2,s2,r16/2:/Table/{19-20}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:41.496978 2177 storage/replica_range_lease.go:617 [replicate,n1,s1,r13/1:/Table/1{6-7}] done transferring lease to s2: <nil>
I180820 01:59:41.498016 1842 storage/replica_proposal.go:214 [n2,s2,r13/2:/Table/1{6-7}] new range lease repl=(n2,s2):2 seq=3 start=1534730381.486300359,0 epo=1 pro=1534730381.486312737,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.499394 2260 storage/replica_raftstorage.go:538 [replicate,n1,s1,r21/1:/Table/5{0-1}] generated preemptive snapshot ee9f03ed at index 18
I180820 01:59:41.521497 1959 storage/store.go:3717 [n1,s1,r16/1:/Table/{19-20}] added to replica GC queue (peer suggestion)
I180820 01:59:41.526706 2260 storage/store_snapshot.go:655 [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 27ms
I180820 01:59:41.528920 2246 storage/replica_raftstorage.go:782 [n2,s2,r21/?:{-}] applying preemptive snapshot at index 18 (id=ee9f03ed, encoded size=2362, 1 rocksdb batches, 8 log entries)
I180820 01:59:41.529422 2145 storage/store.go:2612 [replicaGC,n1,s1,r16/1:/Table/{19-20}] removing replica
I180820 01:59:41.530692 2145 storage/replica.go:880 [replicaGC,n1,s1,r16/1:/Table/{19-20}] removed 8 (0+8) keys in 0ms [clear=0ms commit=0ms]
I180820 01:59:41.531124 2246 storage/replica_raftstorage.go:788 [n2,s2,r21/?:/Table/5{0-1}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:41.538661 2260 storage/replica_command.go:792 [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=842351376688]
I180820 01:59:41.563583 1000 server/node.go:428 [n?] new node allocated ID 3
I180820 01:59:41.565414 1000 gossip/gossip.go:383 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:33963" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1784-g97db142" started_at:1534730381565051877
I180820 01:59:41.567199 1000 storage/stores.go:242 [n3] read 0 node addresses from persistent storage
I180820 01:59:41.567503 1000 storage/stores.go:261 [n3] wrote 2 node addresses to persistent storage
I180820 01:59:41.569937 2264 storage/stores.go:261 [n1] wrote 2 node addresses to persistent storage
I180820 01:59:41.572867 2300 storage/stores.go:261 [n2] wrote 2 node addresses to persistent storage
I180820 01:59:41.587412 2293 storage/replica_command.go:792 [replicate,n2,s2,r12/2:/Table/1{5-6}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, (n2,s2):2, next=3, gen=842368193568]
I180820 01:59:41.596112 2260 storage/replica.go:3749 [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:41.668802 1000 server/node.go:672 [n3] bootstrapped store [n3,s3]
I180820 01:59:41.675572 1000 server/node.go:546 [n3] node=3: started with [] engine(s) and attributes []
I180820 01:59:41.676945 1000 server/status/recorder.go:652 [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:41.677086 1000 server/server.go:1806 [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 01:59:41.689416 2454 server/status/recorder.go:652 [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:41.698379 2456 storage/replica_range_lease.go:554 [replicate,n1,s1,r21/1:/Table/5{0-1}] transferring lease to s2
I180820 01:59:41.708026 1000 server/server.go:1537 [n3] starting https server at 127.0.0.1:44305 (use: 127.0.0.1:44305)
I180820 01:59:41.708181 1000 server/server.go:1539 [n3] starting grpc/postgres server at 127.0.0.1:33963
I180820 01:59:41.708235 1000 server/server.go:1540 [n3] advertising CockroachDB node at 127.0.0.1:33963
I180820 01:59:41.714035 1797 storage/replica_proposal.go:214 [n2,s2,r21/2:/Table/5{0-1}] new range lease repl=(n2,s2):2 seq=3 start=1534730381.699599669,0 epo=1 pro=1534730381.699620560,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.717944 2456 storage/replica_range_lease.go:617 [replicate,n1,s1,r21/1:/Table/5{0-1}] done transferring lease to s2: <nil>
I180820 01:59:41.723740 1000 server/server.go:1593 [n3] done ensuring all necessary migrations have run
I180820 01:59:41.724005 1000 server/server.go:1596 [n3] serving sql connections
I180820 01:59:41.736965 2425 storage/replica_raftstorage.go:538 [replicate,n1,s1,r19/1:/Table/2{2-3}] generated preemptive snapshot bdc91102 at index 18
I180820 01:59:41.766799 2425 storage/store_snapshot.go:655 [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 20ms
I180820 01:59:41.775744 2428 storage/replica_raftstorage.go:782 [n2,s2,r19/?:{-}] applying preemptive snapshot at index 18 (id=bdc91102, encoded size=2386, 1 rocksdb batches, 8 log entries)
I180820 01:59:41.780051 2428 storage/replica_raftstorage.go:788 [n2,s2,r19/?:/Table/2{2-3}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=1ms]
I180820 01:59:41.803377 2425 storage/replica_command.go:792 [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=842354152384]
I180820 01:59:41.858971 2293 storage/replica.go:3749 [n2,s2,r12/2:/Table/1{5-6}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:41.865037 2518 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n1 established
I180820 01:59:41.865247 2517 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n2 established
I180820 01:59:41.897835 2425 storage/replica.go:3749 [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:41.919017 2537 storage/replica_command.go:792 [replicate,n2,s2,r13/2:/Table/1{6-7}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, (n2,s2):2, next=3, gen=842350792840]
I180820 01:59:41.921349 1959 storage/store.go:3717 [n1,s1,r12/1:/Table/1{5-6}] added to replica GC queue (peer suggestion)
I180820 01:59:41.950553 2549 storage/replica_range_lease.go:554 [replicate,n1,s1,r19/1:/Table/2{2-3}] transferring lease to s2
I180820 01:59:41.971555 2317 storage/store.go:2612 [replicaGC,n1,s1,r12/1:/Table/1{5-6}] removing replica
I180820 01:59:41.973569 2317 storage/replica.go:880 [replicaGC,n1,s1,r12/1:/Table/1{5-6}] removed 8 (0+8) keys in 1ms [clear=1ms commit=0ms]
I180820 01:59:41.977686 1812 storage/replica_proposal.go:214 [n2,s2,r19/2:/Table/2{2-3}] new range lease repl=(n2,s2):2 seq=3 start=1534730381.951882389,0 epo=1 pro=1534730381.951898915,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.983193 2549 storage/replica_range_lease.go:617 [replicate,n1,s1,r19/1:/Table/2{2-3}] done transferring lease to s2: <nil>
I180820 01:59:41.987293 2448 storage/replica_raftstorage.go:538 [replicate,n1,s1,r20/1:/Table/{23-50}] generated preemptive snapshot 330fc715 at index 18
I180820 01:59:42.018996 2504 sql/event_log.go:126 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:33963} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1784-g97db142 StartedAt:1534730381565051877 LocalityAddress:[]} ClusterID:f18aada7-b5f5-463a-956a-d20947d68a02 StartedAt:1534730381565051877 LastUp:1534730381565051877}
I180820 01:59:42.067318 2502 server/server_update.go:67 [n3] no need to upgrade, cluster already at the newest version
I180820 01:59:42.073460 2513 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n3 established
I180820 01:59:42.117222 2537 storage/replica.go:3749 [n2,s2,r13/2:/Table/1{6-7}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:42.166046 1959 storage/store.go:3717 [n1,s1,r13/1:/Table/1{6-7}] added to replica GC queue (peer suggestion)
I180820 01:59:42.176827 2630 storage/replica_command.go:792 [replicate,n2,s2,r21/2:/Table/5{0-1}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, (n2,s2):2, next=3, gen=842353497552]
I180820 01:59:42.232685 2568 storage/store.go:2612 [replicaGC,n1,s1,r13/1:/Table/1{6-7}] removing replica
I180820 01:59:42.234108 2568 storage/replica.go:880 [replicaGC,n1,s1,r13/1:/Table/1{6-7}] removed 8 (0+8) keys in 0ms [clear=0ms commit=0ms]
I180820 01:59:42.250247 2462 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n3 established
I180820 01:59:42.250524 2448 rpc/nodedialer/nodedialer.go:92 [replicate,n1,s1,r20/1:/Table/{23-50}] connection to n3 established
I180820 01:59:42.255478 2448 storage/store_snapshot.go:655 [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
I180820 01:59:42.258324 2619 storage/replica_raftstorage.go:782 [n3,s3,r20/?:{-}] applying preemptive snapshot at index 18 (id=330fc715, encoded size=2684, 1 rocksdb batches, 8 log entries)
I180820 01:59:42.261437 2619 storage/replica_raftstorage.go:788 [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=1ms]
I180820 01:59:42.270677 2448 storage/replica_command.go:792 [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=842368780992]
I180820 01:59:42.328562 2630 storage/replica.go:3749 [n2,s2,r21/2:/Table/5{0-1}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:42.333812 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180820 01:59:42.337137 2448 storage/replica.go:3749 [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180820 01:59:42.342500 1959 storage/store.go:3717 [n1,s1,r21/1:/Table/5{0-1}] added to replica GC queue (peer suggestion)
I180820 01:59:42.370495 2641 storage/store.go:2612 [replicaGC,n1,s1,r21/1:/Table/5{0-1}] removing replica
I180820 01:59:42.374037 2641 storage/replica.go:880 [replicaGC,n1,s1,r21/1:/Table/5{0-1}] removed 8 (0+8) keys in 0ms [clear=0ms commit=0ms]
I180820 01:59:42.380666 2707 storage/replica_command.go:282 [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180820 01:59:42.425362 2651 storage/replica_range_lease.go:554 [replicate,n1,s1,r20/1:/Table/{23-50}] transferring lease to s3
I180820 01:59:42.434548 2625 storage/replica_command.go:792 [replicate,n2,s2,r19/2:/Table/2{2-3}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, (n2,s2):2, next=3, gen=842368195968]
I180820 01:59:42.441572 2651 storage/replica_range_lease.go:617 [replicate,n1,s1,r20/1:/Table/{23-50}] done transferring lease to s3: <nil>
I180820 01:59:42.447553 2346 storage/replica_proposal.go:214 [n3,s3,r20/2:/Table/{23-50}] new range lease repl=(n3,s3):2 seq=3 start=1534730382.425507270,0 epo=1 pro=1534730382.425524030,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:42.455030 2754 storage/replica_raftstorage.go:538 [replicate,n1,s1,r9/1:/Table/1{2-3}] generated preemptive snapshot c0d98391 at index 34
I180820 01:59:42.467399 2754 storage/store_snapshot.go:655 [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 69, log entries: 24, rate-limit: 2.0 MiB/sec, 12ms
I180820 01:59:42.470386 2745 storage/replica_raftstorage.go:782 [n3,s3,r9/?:{-}] applying preemptive snapshot at index 34 (id=c0d98391, encoded size=20980, 1 rocksdb batches, 24 log entries)
I180820 01:59:42.475365 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:0s User:root}
I180820 01:59:42.477283 2745 storage/replica_raftstorage.go:788 [n3,s3,r9/?:/Table/1{2-3}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=5ms commit=0ms]
I180820 01:59:42.503618 2754 storage/replica_command.go:792 [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=842378255104]
I180820 01:59:42.528180 2757 storage/replica_command.go:792 [replicate,n3,s3,r20/2:/Table/{23-50}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n3,s3):2, next=3, gen=842353288464]
I180820 01:59:42.604699 2754 storage/replica.go:3749 [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180820 01:59:42.629901 2625 storage/replica.go:3749 [n2,s2,r19/2:/Table/2{2-3}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:42.632037 2752 storage/replica_range_lease.go:554 [replicate,n1,s1,r9/1:/Table/1{2-3}] transferring lease to s3
I180820 01:59:42.640238 1959 storage/store.go:3717 [n1,s1,r19/1:/Table/2{2-3}] added to replica GC queue (peer suggestion)
I180820 01:59:42.674842 2359 storage/replica_proposal.go:214 [n3,s3,r9/2:/Table/1{2-3}] new range lease repl=(n3,s3):2 seq=3 start=1534730382.642656057,0 epo=1 pro=1534730382.642672921,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:42.687613 2752 storage/replica_range_lease.go:617 [replicate,n1,s1,r9/1:/Table/1{2-3}] done transferring lease to s3: <nil>
I180820 01:59:42.712117 2788 storage/store.go:2612 [replicaGC,n1,s1,r19/1:/Table/2{2-3}] removing replica
I180820 01:59:42.712701 2821 storage/replica_raftstorage.go:538 [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot f8d1ccea at index 18
I180820 01:59:42.714085 2788 storage/replica.go:880 [replicaGC,n1,s1,r19/1:/Table/2{2-3}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180820 01:59:42.750060 2757 storage/replica.go:3749 [n3,s3,r20/2:/Table/{23-50}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n3,s3):2] next=3
I180820 01:59:42.750085 2821 storage/store_snapshot.go:655 [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 37ms
I180820 01:59:42.752389 2825 storage/replica_raftstorage.go:782 [n3,s3,r11/?:{-}] applying preemptive snapshot at index 18 (id=f8d1ccea, encoded size=2385, 1 rocksdb batches, 8 log entries)
I180820 01:59:42.754440 2825 storage/replica_raftstorage.go:788 [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:42.756262 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:0s User:root}
I180820 01:59:42.774977 2821 storage/replica_command.go:792 [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=842369600856]
I180820 01:59:42.781267 2725 storage/store.go:3717 [n1,s1,r20/1:/Table/{23-50}] added to replica GC queue (peer suggestion)
--- FAIL: testrace/TestInitialPartitioning: TestInitialPartitioning/TIMESTAMPTZ (46.780s)
soon.go:49: condition failed to evaluate within 45s: expected to scan on n1: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
[n3,s3,r7/2:/Table/{SystemCon…-11}] read completed
full trace:
=== SPAN START: session recording ===
[n1,client=127.0.0.1:43144,user=root] [NoTxn pos:3631] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
[n1,client=127.0.0.1:43144,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE in state: NoTxn
=== SPAN START: sql txn ===
[n1,client=127.0.0.1:43144,user=root] [Open pos:3631] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
[n1,client=127.0.0.1:43144,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE in state: Open
[n1,client=127.0.0.1:43144,user=root] planning starts: SELECT
[n1,client=127.0.0.1:43144,user=root] generating optimizer plan
=== SPAN START: txn coordinator send ===
=== SPAN START: dist sender send ===
[client=127.0.0.1:43144,user=root,txn=f84e2336,n1] querying next range at /Table/2/1/52/"TIMESTAMPTZ"/3/1
[client=127.0.0.1:43144,user=root,txn=f84e2336,n1] r7: sending batch 1 Get to (n3,s3):2
[client=127.0.0.1:43144,user=root,txn=f84e2336,n1] sending request to 127.0.0.1:33963
=== SPAN START: /cockroach.roachpb.Internal/Batch ===
=== SPAN START: /cockroach.roachpb.Internal/Batch ===
[n3] 1 Get
[n3,s3] executing 1 requests
[n3,s3,r7/2:/Table/{SystemCon…-11}] read-only path
[n3,s3,r7/2:/Table/{SystemCon…-11}] command queue
[n3,s3,r7/2:/Table/{SystemCon…-11}] waiting for read lock
[n3,s3,r7/2:/Table/{SystemCon…-11}] read completed
=== SPAN START: txn coordinator send ===
[n1,client=127.0.0.1:43144,user=root] client.Txn did AutoCommit. err: <nil>
[n1,client=127.0.0.1:43144,user=root] added table 'data.public."TIMESTAMPTZ"' to table collection
[n1,client=127.0.0.1:43144,user=root] optimizer plan succeeded
[n1,client=127.0.0.1:43144,user=root] planning ends
[n1,client=127.0.0.1:43144,user=root] checking distributability
[n1,client=127.0.0.1:43144,user=root] distributable plan: true
[n1,client=127.0.0.1:43144,user=root] execution starts: distributed
=== SPAN START: consuming rows ===
[n1,client=127.0.0.1:43144,user=root] creating DistSQL plan with distributedMode=true
[n1,client=127.0.0.1:43144,user=root] querying next range at /Table/81/1
[n1,client=127.0.0.1:43144,user=root] running DistSQL plan
=== SPAN START: flow ===
[n1,client=127.0.0.1:43144,user=root] starting (1 processors, 0 startables)
=== SPAN START: table reader ===
cockroach.processorid: 0
cockroach.stat.tablereader.stalltime: 2µs
cockroach.stat.tablereader.input.rows: 0
[n1,client=127.0.0.1:43144,user=root] Scan /Table/81/1{-/1970-01-11T03:42:28.000515Z/#}
=== SPAN START: txn coordinator send ===
=== SPAN START: dist sender send ===
[client=127.0.0.1:43144,user=root,txn=c1848c5e,n1] querying next range at /Table/81/1
[client=127.0.0.1:43144,user=root,txn=c1848c5e,n1] r247: sending batch 1 Scan to (n1,s1):2
[client=127.0.0.1:43144,user=root,txn=c1848c5e,n1] sending request to local server
=== SPAN START: /cockroach.roachpb.Internal/Batch ===
[n1] 1 Scan
[n1,s1] executing 1 requests
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] read-only path
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] read has no clock uncertainty
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] command queue
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] waiting for read lock
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] read completed
=== SPAN START: ordered aggregator ===
cockroach.stat.aggregator.stalltime: 364µs
cockroach.stat.aggregator.mem.max: 10 KiB
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
[n1,client=127.0.0.1:43144,user=root] accumulation complete
[n1,client=127.0.0.1:43144,user=root] exiting aggregator
[n1,client=127.0.0.1:43144,user=root] execution ends
[n1,client=127.0.0.1:43144,user=root] rows affected: 1
=== SPAN START: txn coordinator send ===
[n1,client=127.0.0.1:43144,user=root] AutoCommit. err: <nil>
[n1,client=127.0.0.1:43144,user=root] releasing 1 tables
[n1,client=127.0.0.1:43144,user=root] [NoTxn pos:3632] executing ExecStmt: SET TRACING = off
[n1,client=127.0.0.1:43144,user=root] executing: SET TRACING = off in state: NoTxn
goroutine 30285 [running]:
runtime/debug.Stack(0xa7a358200, 0xc421671530, 0x3da7de0)
/usr/local/go/src/runtime/debug/stack.go:24 +0xb5
github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3e092c0, 0xc4242d23c0, 0xc421671500)
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc4242d23c0)
/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1156 +0x258
testing.tRunner(0xc4242d23c0, 0xc422af2330)
/usr/local/go/src/testing/testing.go:777 +0x16e
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x565
------- Stdout: -------
I180820 02:02:03.827699 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "create_table", target: 81, info: {TableName:data.public."TIMESTAMPTZ" Statement:CREATE TABLE "TIMESTAMPTZ" (a TIMESTAMP WITH TIME ZONE PRIMARY KEY) PARTITION BY LIST (a) (PARTITION p VALUES IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)) User:root}
I180820 02:02:03.903121 30325 storage/replica_command.go:792 [replicate,n2,s2,r244/3:/Table/80/1/1511705146{8-9}] change replicas (REMOVE_REPLICA (n1,s1):2): read existing descriptor r244:/Table/80/1/1511705146{8-9} [(n1,s1):2, (n2,s2):3, next=4, gen=842353873400]
I180820 02:02:03.996604 30257 storage/replica_command.go:282 [split,n3,s3,r243/1:/{Table/80/2-Max}] initiating a split of this range at key /Table/81 [r246]
I180820 02:02:04.040205 30325 storage/replica.go:3749 [n2,s2,r244/3:/Table/80/1/1511705146{8-9}] proposing REMOVE_REPLICA((n1,s1):2): updated=[(n2,s2):3] next=4
I180820 02:02:04.074974 1959 storage/store.go:3717 [n1,s1,r244/2:/Table/80/1/1511705146{8-9}] added to replica GC queue (peer suggestion)
I180820 02:02:04.106033 30330 storage/store.go:2612 [replicaGC,n1,s1,r244/2:/Table/80/1/1511705146{8-9}] removing replica
I180820 02:02:04.119350 2390 storage/replica_proposal.go:214 [n3,s3,r243/1:/{Table/80/2-Max}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:04.164935 30330 storage/replica.go:880 [replicaGC,n1,s1,r244/2:/Table/80/1/1511705146{8-9}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180820 02:02:04.173040 2036 storage/store.go:3736 [n2,s2,r244/3:/Table/80/1/1511705146{8-9}] added to replica GC queue (contacted deleted peer)
I180820 02:02:04.458180 30379 storage/replica_command.go:282 [split,n3,s3,r246/1:/{Table/81-Max}] initiating a split of this range at key /Table/81/1 [r247]
I180820 02:02:04.504322 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "set_zone_config", target: 81, info: {Target:data."TIMESTAMPTZ"@primary Config:constraints: [+n1] User:root}
I180820 02:02:04.773781 2346 storage/replica_proposal.go:214 [n3,s3,r246/1:/{Table/81-Max}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:04.778877 30416 storage/replica_command.go:282 [split,n3,s3,r247/1:/{Table/81/1-Max}] initiating a split of this range at key /Table/81/2 [r248]
I180820 02:02:05.056413 2365 storage/replica_proposal.go:214 [n3,s3,r247/1:/{Table/81/1-Max}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:05.072201 30519 storage/replica_raftstorage.go:538 [replicate,n3,s3,r247/1:/Table/81/{1-2}] generated preemptive snapshot 4aedac1e at index 15
I180820 02:02:05.086383 30519 storage/store_snapshot.go:655 [replicate,n3,s3,r247/1:/Table/81/{1-2}] streamed snapshot to (n1,s1):?: kv pairs: 9, log entries: 5, rate-limit: 2.0 MiB/sec, 13ms
I180820 02:02:05.090075 30505 storage/replica_raftstorage.go:782 [n1,s1,r247/?:{-}] applying preemptive snapshot at index 15 (id=4aedac1e, encoded size=2391, 1 rocksdb batches, 5 log entries)
I180820 02:02:05.092490 30505 storage/replica_raftstorage.go:788 [n1,s1,r247/?:/Table/81/{1-2}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 02:02:05.099390 30519 storage/replica_command.go:792 [replicate,n3,s3,r247/1:/Table/81/{1-2}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r247:/Table/81/{1-2} [(n3,s3):1, next=2, gen=842397011328]
I180820 02:02:05.181744 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "set_zone_config", target: 81, info: {Target:data."TIMESTAMPTZ".p Config:constraints: [+n2] User:root}
I180820 02:02:05.182911 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:05.183629 30507 storage/replica_command.go:282 [split,n3,s3,r247/1:/Table/81/{1-2}] initiating a split of this range at key /Table/81/1/1970-01-11T03:42:28.000516Z [r249]
I180820 02:02:05.244979 30519 storage/replica.go:3749 [n3,s3,r247/1:/Table/81/{1-2}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180820 02:02:05.256901 30507 storage/split_queue.go:135 [split,n3,s3,r247/1:/Table/81/{1-2}] split saw concurrent descriptor modification; maybe retrying
I180820 02:02:05.258291 30447 storage/replica_command.go:282 [split,n3,s3,r247/1:/Table/81/{1-2}] initiating a split of this range at key /Table/81/1/1970-01-11T03:42:28.000516Z [r250]
I180820 02:02:05.480965 2357 storage/replica_proposal.go:214 [n3,s3,r247/1:/Table/81/{1-2}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:05.489860 30464 storage/replica_command.go:282 [split,n3,s3,r250/1:/Table/81/{1/1970-…-2}] initiating a split of this range at key /Table/81/1/1970-01-11T03:42:28.000516001Z [r251]
I180820 02:02:05.503691 30552 storage/replica_range_lease.go:554 [replicate,n3,s3,r247/1:/Table/81/1{-/1970-0…}] transferring lease to s1
I180820 02:02:05.518673 30552 storage/replica_range_lease.go:617 [replicate,n3,s3,r247/1:/Table/81/1{-/1970-0…}] done transferring lease to s1: <nil>
I180820 02:02:05.519052 1184 storage/replica_proposal.go:214 [n1,s1,r247/2:/Table/81/1{-/1970-0…}] new range lease repl=(n1,s1):2 seq=6 start=1534730525.504798119,0 epo=1 pro=1534730525.504827117,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:05.740465 2346 storage/replica_proposal.go:214 [n3,s3,r250/1:/Table/81/{1/1970-…-2}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:05.766643 30615 storage/replica_range_lease.go:554 [replicate,n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] transferring lease to s1
I180820 02:02:05.824194 30615 storage/replica_range_lease.go:617 [replicate,n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] done transferring lease to s1: <nil>
I180820 02:02:05.826609 1142 storage/replica_proposal.go:214 [n1,s1,r250/2:/Table/81/1/1970-01-11T03:42:…] new range lease repl=(n1,s1):2 seq=6 start=1534730525.766811117,0 epo=1 pro=1534730525.766837525,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
E180820 02:02:05.859136 30647 storage/queue.go:788 [replicate,n3,s3,r251/1:/Table/81/{1/1970-…-2}] no removable replicas from range that needs a removal: [raft progress unknown]
I180820 02:02:05.916344 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:05.986266 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a > '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.098687 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.107318 30708 storage/replica_command.go:792 [replicate,n1,s1,r247/2:/Table/81/1{-/1970-0…}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r247:/Table/81/1{-/1970-01-11T03:42:28.000516Z} [(n3,s3):1, (n1,s1):2, next=3, gen=842394703968]
I180820 02:02:06.234344 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.297677 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)
I180820 02:02:06.323231 30708 storage/replica.go:3749 [n1,s1,r247/2:/Table/81/1{-/1970-0…}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180820 02:02:06.341089 2742 storage/store.go:3717 [n3,s3,r247/1:/Table/81/1{-/1970-0…}] added to replica GC queue (peer suggestion)
I180820 02:02:06.387462 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.388254 30658 storage/store.go:2612 [replicaGC,n3,s3,r247/1:/Table/81/1{-/1970-0…}] removing replica
I180820 02:02:06.410520 30658 storage/replica.go:880 [replicaGC,n3,s3,r247/1:/Table/81/1{-/1970-0…}] removed 8 (0+8) keys in 4ms [clear=4ms commit=0ms]
I180820 02:02:06.425397 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.484472 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)
I180820 02:02:06.539990 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.584637 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)
I180820 02:02:06.624168 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.706597 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)
I180820 02:02:06.756127 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.811101 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.857426 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.893298 30717 storage/replica_range_lease.go:554 [replicate,n1,s1,r250/2:/Table/81/1/1970-01-11T03:42:…] transferring lease to s3
I180820 02:02:06.918430 30717 storage/replica_range_lease.go:617 [replicate,n1,s1,r250/2:/Table/81/1/1970-01-11T03:42:…] done transferring lease to s3: <nil>
I180820 02:02:06.924796 2391 storage/replica_proposal.go:214 [n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] new range lease repl=(n3,s3):1 seq=7 start=1534730526.893461253,0 epo=1 pro=1534730526.893489519,0 following repl=(n1,s1):2 seq=6 start=1534730525.766811117,0 epo=1 pro=1534730525.766837525,0
I180820 02:02:06.929008 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.003260 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.086013 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.132009 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.175074 30773 storage/replica_range_lease.go:554 [replicate,n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] transferring lease to s1
I180820 02:02:07.200241 30773 storage/replica_range_lease.go:617 [replicate,n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] done transferring lease to s1: <nil>
I180820 02:02:07.201078 1173 storage/replica_proposal.go:214 [n1,s1,r250/2:/Table/81/1/1970-01-11T03:42:…] new range lease repl=(n1,s1):2 seq=8 start=1534730527.175232857,0 epo=1 pro=1534730527.175258570,0 following repl=(n3,s3):1 seq=7 start=1534730526.893461253,0 epo=1 pro=1534730526.893489519,0
I180820 02:02:07.246823 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.273414 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.352331 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.380761 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.544076 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.568174 1216 server/status/runtime.go:433 [n1] runtime stats: 1.0 GiB RSS, 648 goroutines, 36 MiB/24 MiB/86 MiB GO alloc/idle/total, 111 MiB/153 MiB CGO alloc/total, 2995.27cgo/sec, 1.64/0.07 %(u/s)time, 0.02 %gc (12x)
I180820 02:02:07.597320 30852 storage/replica_range_lease.go:554 [replicate,n3,s3,r251/1:/Table/81/{1/1970-…-2}] transferring lease to s1
I180820 02:02:07.607393 30852 storage/replica_range_lease.go:617 [replicate,n3,s3,r251/1:/Table/81/{1/1970-…-2}] done transferring lease to s1: <nil>
I180820 02:02:07.626564 1164 storage/replica_proposal.go:214 [n1,s1,r251/2:/Table/81/{1/1970-…-2}] new range lease repl=(n1,s1):2 seq=6 start=1534730527.597504394,0 epo=1 pro=1534730527.597552054,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:07.634070 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.728733 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.779077 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.808378 1234 server/status/recorder.go:652 [n1,summaries] available memory from cgroups (8.0 EiB)
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
teamcity: failed tests on master: testrace/TestInitialPartitioning, test/TestInitialPartitioning - The following tests appear to have failed:
[#848371](https://teamcity.cockroachdb.com/viewLog.html?buildId=848371):
```
--- FAIL: testrace/TestInitialPartitioning (242.380s)
------- Stdout: -------
W180820 01:59:37.399904 1000 server/status/runtime.go:294 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 01:59:37.440283 1000 server/server.go:829 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 01:59:37.440860 1000 base/addr_validation.go:260 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:37.441011 1000 base/addr_validation.go:300 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:37.465470 1000 server/config.go:496 [n?] 1 storage engine initialized
I180820 01:59:37.465594 1000 server/config.go:499 [n?] RocksDB cache size: 128 MiB
I180820 01:59:37.466002 1000 server/config.go:499 [n?] store 0: in-memory, size 0 B
I180820 01:59:37.534309 1000 server/node.go:373 [n?] **** cluster f18aada7-b5f5-463a-956a-d20947d68a02 has been created
I180820 01:59:37.534424 1000 server/server.go:1400 [n?] **** add additional nodes by specifying --join=127.0.0.1:36095
I180820 01:59:37.535873 1000 gossip/gossip.go:383 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36095" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1784-g97db142" started_at:1534730377535346318
I180820 01:59:37.555314 1000 storage/store.go:1535 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180820 01:59:37.556159 1000 server/node.go:476 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180820 01:59:37.556475 1000 storage/stores.go:242 [n1] read 0 node addresses from persistent storage
I180820 01:59:37.556894 1000 server/node.go:697 [n1] connecting to gossip network to verify cluster ID...
I180820 01:59:37.558133 1000 server/node.go:722 [n1] node connected via gossip and verified as part of cluster "f18aada7-b5f5-463a-956a-d20947d68a02"
I180820 01:59:37.558609 1000 server/node.go:546 [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180820 01:59:37.560825 1000 server/status/recorder.go:652 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:37.560941 1000 server/server.go:1806 [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 01:59:37.563520 1234 server/status/recorder.go:652 [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:37.564042 1000 server/server.go:1537 [n1] starting https server at 127.0.0.1:39071 (use: 127.0.0.1:39071)
I180820 01:59:37.564278 1000 server/server.go:1539 [n1] starting grpc/postgres server at 127.0.0.1:36095
I180820 01:59:37.564327 1000 server/server.go:1540 [n1] advertising CockroachDB node at 127.0.0.1:36095
W180820 01:59:37.565697 1000 jobs/registry.go:308 [n1] unable to get node liveness: node not in the liveness table
I180820 01:59:37.620005 1269 storage/replica_command.go:282 [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180820 01:59:37.749875 1257 storage/replica_command.go:282 [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180820 01:59:37.757732 1195 storage/intent_resolver.go:668 [n1,s1] failed to push during intent resolution: failed to push "sql txn" id=48db07cf key=/Table/SystemConfigSpan/Start rw=true pri=0.00577476 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534730377.611829761,0 orig=1534730377.611829761,0 max=1534730377.611829761,0 wto=false rop=false seq=6
I180820 01:59:37.964827 1274 sql/event_log.go:126 [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180820 01:59:38.070899 1034 rpc/nodedialer/nodedialer.go:92 [consistencyChecker,n1,s1,r1/1:/{Min-System/}] connection to n1 established
I180820 01:59:38.178205 1286 storage/replica_command.go:282 [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180820 01:59:38.264443 1315 storage/replica_command.go:282 [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180820 01:59:38.332477 1319 storage/replica_command.go:282 [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180820 01:59:38.363101 1249 sql/event_log.go:126 [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:root}
I180820 01:59:38.433995 1350 storage/replica_command.go:282 [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180820 01:59:38.564133 1293 sql/event_log.go:126 [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180820 01:59:38.597155 1365 storage/replica_command.go:282 [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180820 01:59:38.725675 1398 storage/replica_command.go:282 [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180820 01:59:38.839826 1442 storage/replica_command.go:282 [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180820 01:59:38.877937 1402 sql/event_log.go:126 [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:gen_random_uuid()::STRING User:root}
I180820 01:59:38.936517 1477 storage/replica_command.go:282 [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180820 01:59:38.997862 1491 storage/replica_command.go:282 [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180820 01:59:39.075124 1432 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180820 01:59:39.130764 1495 storage/replica_command.go:282 [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180820 01:59:39.173011 1388 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180820 01:59:39.201751 1000 server/server.go:1593 [n1] done ensuring all necessary migrations have run
I180820 01:59:39.201888 1000 server/server.go:1596 [n1] serving sql connections
I180820 01:59:39.242310 1420 storage/replica_command.go:282 [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180820 01:59:39.254313 1511 server/server_update.go:67 [n1] no need to upgrade, cluster already at the newest version
I180820 01:59:39.287102 1513 sql/event_log.go:126 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:36095} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1784-g97db142 StartedAt:1534730377535346318 LocalityAddress:[]} ClusterID:f18aada7-b5f5-463a-956a-d20947d68a02 StartedAt:1534730377535346318 LastUp:1534730377535346318}
I180820 01:59:39.321741 1519 storage/replica_command.go:282 [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180820 01:59:39.383384 1424 storage/replica_command.go:282 [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180820 01:59:39.456597 1503 storage/replica_command.go:282 [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180820 01:59:39.506830 1560 storage/replica_command.go:282 [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180820 01:59:39.580998 1622 storage/replica_command.go:282 [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180820 01:59:39.611457 1650 storage/replica_command.go:282 [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180820 01:59:39.664562 1633 storage/replica_command.go:282 [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180820 01:59:39.708227 1616 storage/replica_command.go:282 [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180820 01:59:40.038315 1000 server/status/runtime.go:294 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 01:59:40.175405 1000 server/server.go:829 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 01:59:40.175960 1000 base/addr_validation.go:260 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:40.176063 1000 base/addr_validation.go:300 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:40.197234 1000 server/config.go:496 [n?] 1 storage engine initialized
I180820 01:59:40.197318 1000 server/config.go:499 [n?] RocksDB cache size: 128 MiB
I180820 01:59:40.197351 1000 server/config.go:499 [n?] store 0: in-memory, size 0 B
W180820 01:59:40.197655 1000 gossip/gossip.go:1351 [n?] no incoming or outgoing connections
I180820 01:59:40.198339 1000 server/server.go:1402 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180820 01:59:40.330941 1686 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:42495}
I180820 01:59:40.332184 1766 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:36095
I180820 01:59:40.361698 1000 server/node.go:697 [n?] connecting to gossip network to verify cluster ID...
I180820 01:59:40.362742 1000 server/node.go:722 [n?] node connected via gossip and verified as part of cluster "f18aada7-b5f5-463a-956a-d20947d68a02"
I180820 01:59:40.368091 1727 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:40.377364 1726 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:40.392357 1000 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:40.398408 1000 server/node.go:428 [n?] new node allocated ID 2
I180820 01:59:40.398840 1000 gossip/gossip.go:383 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:42495" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1784-g97db142" started_at:1534730380398506049
I180820 01:59:40.400095 1000 storage/stores.go:242 [n2] read 0 node addresses from persistent storage
I180820 01:59:40.400527 1000 storage/stores.go:261 [n2] wrote 1 node addresses to persistent storage
I180820 01:59:40.402087 1649 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
I180820 01:59:40.435589 1000 server/node.go:672 [n2] bootstrapped store [n2,s2]
I180820 01:59:40.437284 1000 server/node.go:546 [n2] node=2: started with [] engine(s) and attributes []
I180820 01:59:40.440138 1000 server/status/recorder.go:652 [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:40.440250 1000 server/server.go:1806 [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 01:59:40.443189 1867 server/status/recorder.go:652 [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:40.450338 1000 server/server.go:1537 [n2] starting https server at 127.0.0.1:33161 (use: 127.0.0.1:33161)
I180820 01:59:40.450461 1000 server/server.go:1539 [n2] starting grpc/postgres server at 127.0.0.1:42495
I180820 01:59:40.450507 1000 server/server.go:1540 [n2] advertising CockroachDB node at 127.0.0.1:42495
I180820 01:59:40.464060 1000 server/server.go:1593 [n2] done ensuring all necessary migrations have run
I180820 01:59:40.464176 1000 server/server.go:1596 [n2] serving sql connections
I180820 01:59:40.471376 1892 storage/replica_raftstorage.go:538 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot b25c1917 at index 39
I180820 01:59:40.587857 1909 server/server_update.go:67 [n2] no need to upgrade, cluster already at the newest version
I180820 01:59:40.610665 1911 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:42495} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1784-g97db142 StartedAt:1534730380398506049 LocalityAddress:[]} ClusterID:f18aada7-b5f5-463a-956a-d20947d68a02 StartedAt:1534730380398506049 LastUp:1534730380398506049}
I180820 01:59:40.634139 1774 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n1 established
I180820 01:59:40.683215 1892 rpc/nodedialer/nodedialer.go:92 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] connection to n2 established
I180820 01:59:40.683504 1918 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n2 established
I180820 01:59:40.689547 1892 storage/store_snapshot.go:655 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 34, log entries: 29, rate-limit: 2.0 MiB/sec, 6ms
I180820 01:59:40.693713 1970 storage/replica_raftstorage.go:782 [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=b25c1917, encoded size=27443, 1 rocksdb batches, 29 log entries)
I180820 01:59:40.706835 1970 storage/replica_raftstorage.go:788 [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=4ms commit=1ms]
I180820 01:59:40.711592 1892 storage/replica_command.go:792 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=842354451496]
W180820 01:59:40.715136 1000 server/status/runtime.go:294 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 01:59:40.737513 1892 storage/replica.go:3749 [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:40.793267 1975 storage/replica_range_lease.go:554 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] transferring lease to s2
I180820 01:59:40.794671 1976 util/protoutil/randnullability.go:94 inserting null for (roachpb.Lease).Expiration: true
I180820 01:59:40.794826 1976 util/protoutil/randnullability.go:94 inserting null for (roachpb.Lease).DeprecatedStartStasis: false
I180820 01:59:40.804733 1975 storage/replica_range_lease.go:617 [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] done transferring lease to s2: <nil>
I180820 01:59:40.807362 2022 storage/replica_raftstorage.go:538 [replicate,n1,s1,r16/1:/Table/{19-20}] generated preemptive snapshot 420af157 at index 19
I180820 01:59:40.807617 1794 storage/replica_proposal.go:214 [n2,s2,r4/2:/System/{NodeLive…-tsd}] new range lease repl=(n2,s2):2 seq=3 start=1534730380.793394013,0 epo=1 pro=1534730380.793406704,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:40.823920 2022 storage/store_snapshot.go:655 [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 2.0 MiB/sec, 15ms
I180820 01:59:40.827380 1961 storage/replica_raftstorage.go:782 [n2,s2,r16/?:{-}] applying preemptive snapshot at index 19 (id=420af157, encoded size=2590, 1 rocksdb batches, 9 log entries)
I180820 01:59:40.831033 1961 storage/replica_raftstorage.go:788 [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 3ms [clear=1ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:40.835309 2022 storage/replica_command.go:792 [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=842353293368]
I180820 01:59:40.854467 1000 server/server.go:829 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 01:59:40.855054 1000 base/addr_validation.go:260 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:40.855176 1000 base/addr_validation.go:300 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 01:59:40.874743 1000 server/config.go:496 [n?] 1 storage engine initialized
I180820 01:59:40.874858 1000 server/config.go:499 [n?] RocksDB cache size: 128 MiB
I180820 01:59:40.874917 1000 server/config.go:499 [n?] store 0: in-memory, size 0 B
W180820 01:59:40.875183 1000 gossip/gossip.go:1351 [n?] no incoming or outgoing connections
I180820 01:59:40.880258 2022 storage/replica.go:3749 [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:40.902566 1000 server/server.go:1402 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180820 01:59:40.961695 1977 storage/replica_command.go:792 [replicate,n2,s2,r4/2:/System/{NodeLive…-tsd}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, (n2,s2):2, next=3, gen=842353495096]
I180820 01:59:40.994410 1950 storage/replica_range_lease.go:554 [replicate,n1,s1,r16/1:/Table/{19-20}] transferring lease to s2
I180820 01:59:41.043492 1802 storage/replica_proposal.go:214 [n2,s2,r16/2:/Table/{19-20}] new range lease repl=(n2,s2):2 seq=3 start=1534730380.994544835,0 epo=1 pro=1534730380.994564458,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.051679 1950 storage/replica_range_lease.go:617 [replicate,n1,s1,r16/1:/Table/{19-20}] done transferring lease to s2: <nil>
I180820 01:59:41.053872 2131 storage/replica_raftstorage.go:538 [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot c1dc4863 at index 18
I180820 01:59:41.065717 2131 storage/store_snapshot.go:655 [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 11ms
I180820 01:59:41.068063 2134 storage/replica_raftstorage.go:782 [n2,s2,r12/?:{-}] applying preemptive snapshot at index 18 (id=c1dc4863, encoded size=2386, 1 rocksdb batches, 8 log entries)
I180820 01:59:41.070253 2134 storage/replica_raftstorage.go:788 [n2,s2,r12/?:/Table/1{5-6}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:41.080058 2131 storage/replica_command.go:792 [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=842369604472]
I180820 01:59:41.146477 2131 storage/replica.go:3749 [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:41.152731 2024 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:36095
I180820 01:59:41.154453 2076 gossip/server.go:219 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:33963}
I180820 01:59:41.161182 1000 server/node.go:697 [n?] connecting to gossip network to verify cluster ID...
I180820 01:59:41.161467 1000 server/node.go:722 [n?] node connected via gossip and verified as part of cluster "f18aada7-b5f5-463a-956a-d20947d68a02"
I180820 01:59:41.168081 2103 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:41.182505 2203 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:41.205846 1000 kv/dist_sender.go:345 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 01:59:41.211308 2167 storage/replica_range_lease.go:554 [replicate,n1,s1,r12/1:/Table/1{5-6}] transferring lease to s2
I180820 01:59:41.211081 1977 storage/replica.go:3749 [n2,s2,r4/2:/System/{NodeLive…-tsd}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:41.225992 2167 storage/replica_range_lease.go:617 [replicate,n1,s1,r12/1:/Table/1{5-6}] done transferring lease to s2: <nil>
I180820 01:59:41.227235 1815 storage/replica_proposal.go:214 [n2,s2,r12/2:/Table/1{5-6}] new range lease repl=(n2,s2):2 seq=3 start=1534730381.211426378,0 epo=1 pro=1534730381.211438917,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.247842 2184 storage/replica_raftstorage.go:538 [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot c865c7bc at index 18
I180820 01:59:41.255900 2184 storage/store_snapshot.go:655 [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 7ms
I180820 01:59:41.265761 2207 storage/replica_raftstorage.go:782 [n2,s2,r13/?:{-}] applying preemptive snapshot at index 18 (id=c865c7bc, encoded size=2552, 1 rocksdb batches, 8 log entries)
I180820 01:59:41.268178 2207 storage/replica_raftstorage.go:788 [n2,s2,r13/?:/Table/1{6-7}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:41.276746 2184 storage/replica_command.go:792 [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=842353293512]
I180820 01:59:41.283147 1959 storage/store.go:3717 [n1,s1,r4/1:/System/{NodeLive…-tsd}] added to replica GC queue (peer suggestion)
I180820 01:59:41.294038 2109 storage/replica_command.go:792 [replicate,n2,s2,r16/2:/Table/{19-20}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, (n2,s2):2, next=3, gen=842354532256]
I180820 01:59:41.358093 2126 storage/store.go:2612 [replicaGC,n1,s1,r4/1:/System/{NodeLive…-tsd}] removing replica
I180820 01:59:41.361638 2126 storage/replica.go:880 [replicaGC,n1,s1,r4/1:/System/{NodeLive…-tsd}] removed 27 (19+8) keys in 3ms [clear=2ms commit=0ms]
I180820 01:59:41.426825 2184 storage/replica.go:3749 [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:41.486173 2177 storage/replica_range_lease.go:554 [replicate,n1,s1,r13/1:/Table/1{6-7}] transferring lease to s2
I180820 01:59:41.495523 2109 storage/replica.go:3749 [n2,s2,r16/2:/Table/{19-20}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:41.496978 2177 storage/replica_range_lease.go:617 [replicate,n1,s1,r13/1:/Table/1{6-7}] done transferring lease to s2: <nil>
I180820 01:59:41.498016 1842 storage/replica_proposal.go:214 [n2,s2,r13/2:/Table/1{6-7}] new range lease repl=(n2,s2):2 seq=3 start=1534730381.486300359,0 epo=1 pro=1534730381.486312737,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.499394 2260 storage/replica_raftstorage.go:538 [replicate,n1,s1,r21/1:/Table/5{0-1}] generated preemptive snapshot ee9f03ed at index 18
I180820 01:59:41.521497 1959 storage/store.go:3717 [n1,s1,r16/1:/Table/{19-20}] added to replica GC queue (peer suggestion)
I180820 01:59:41.526706 2260 storage/store_snapshot.go:655 [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 27ms
I180820 01:59:41.528920 2246 storage/replica_raftstorage.go:782 [n2,s2,r21/?:{-}] applying preemptive snapshot at index 18 (id=ee9f03ed, encoded size=2362, 1 rocksdb batches, 8 log entries)
I180820 01:59:41.529422 2145 storage/store.go:2612 [replicaGC,n1,s1,r16/1:/Table/{19-20}] removing replica
I180820 01:59:41.530692 2145 storage/replica.go:880 [replicaGC,n1,s1,r16/1:/Table/{19-20}] removed 8 (0+8) keys in 0ms [clear=0ms commit=0ms]
I180820 01:59:41.531124 2246 storage/replica_raftstorage.go:788 [n2,s2,r21/?:/Table/5{0-1}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:41.538661 2260 storage/replica_command.go:792 [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=842351376688]
I180820 01:59:41.563583 1000 server/node.go:428 [n?] new node allocated ID 3
I180820 01:59:41.565414 1000 gossip/gossip.go:383 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:33963" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1784-g97db142" started_at:1534730381565051877
I180820 01:59:41.567199 1000 storage/stores.go:242 [n3] read 0 node addresses from persistent storage
I180820 01:59:41.567503 1000 storage/stores.go:261 [n3] wrote 2 node addresses to persistent storage
I180820 01:59:41.569937 2264 storage/stores.go:261 [n1] wrote 2 node addresses to persistent storage
I180820 01:59:41.572867 2300 storage/stores.go:261 [n2] wrote 2 node addresses to persistent storage
I180820 01:59:41.587412 2293 storage/replica_command.go:792 [replicate,n2,s2,r12/2:/Table/1{5-6}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, (n2,s2):2, next=3, gen=842368193568]
I180820 01:59:41.596112 2260 storage/replica.go:3749 [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:41.668802 1000 server/node.go:672 [n3] bootstrapped store [n3,s3]
I180820 01:59:41.675572 1000 server/node.go:546 [n3] node=3: started with [] engine(s) and attributes []
I180820 01:59:41.676945 1000 server/status/recorder.go:652 [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:41.677086 1000 server/server.go:1806 [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 01:59:41.689416 2454 server/status/recorder.go:652 [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 01:59:41.698379 2456 storage/replica_range_lease.go:554 [replicate,n1,s1,r21/1:/Table/5{0-1}] transferring lease to s2
I180820 01:59:41.708026 1000 server/server.go:1537 [n3] starting https server at 127.0.0.1:44305 (use: 127.0.0.1:44305)
I180820 01:59:41.708181 1000 server/server.go:1539 [n3] starting grpc/postgres server at 127.0.0.1:33963
I180820 01:59:41.708235 1000 server/server.go:1540 [n3] advertising CockroachDB node at 127.0.0.1:33963
I180820 01:59:41.714035 1797 storage/replica_proposal.go:214 [n2,s2,r21/2:/Table/5{0-1}] new range lease repl=(n2,s2):2 seq=3 start=1534730381.699599669,0 epo=1 pro=1534730381.699620560,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.717944 2456 storage/replica_range_lease.go:617 [replicate,n1,s1,r21/1:/Table/5{0-1}] done transferring lease to s2: <nil>
I180820 01:59:41.723740 1000 server/server.go:1593 [n3] done ensuring all necessary migrations have run
I180820 01:59:41.724005 1000 server/server.go:1596 [n3] serving sql connections
I180820 01:59:41.736965 2425 storage/replica_raftstorage.go:538 [replicate,n1,s1,r19/1:/Table/2{2-3}] generated preemptive snapshot bdc91102 at index 18
I180820 01:59:41.766799 2425 storage/store_snapshot.go:655 [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 20ms
I180820 01:59:41.775744 2428 storage/replica_raftstorage.go:782 [n2,s2,r19/?:{-}] applying preemptive snapshot at index 18 (id=bdc91102, encoded size=2386, 1 rocksdb batches, 8 log entries)
I180820 01:59:41.780051 2428 storage/replica_raftstorage.go:788 [n2,s2,r19/?:/Table/2{2-3}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=1ms]
I180820 01:59:41.803377 2425 storage/replica_command.go:792 [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=842354152384]
I180820 01:59:41.858971 2293 storage/replica.go:3749 [n2,s2,r12/2:/Table/1{5-6}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:41.865037 2518 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n1 established
I180820 01:59:41.865247 2517 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n2 established
I180820 01:59:41.897835 2425 storage/replica.go:3749 [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 01:59:41.919017 2537 storage/replica_command.go:792 [replicate,n2,s2,r13/2:/Table/1{6-7}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, (n2,s2):2, next=3, gen=842350792840]
I180820 01:59:41.921349 1959 storage/store.go:3717 [n1,s1,r12/1:/Table/1{5-6}] added to replica GC queue (peer suggestion)
I180820 01:59:41.950553 2549 storage/replica_range_lease.go:554 [replicate,n1,s1,r19/1:/Table/2{2-3}] transferring lease to s2
I180820 01:59:41.971555 2317 storage/store.go:2612 [replicaGC,n1,s1,r12/1:/Table/1{5-6}] removing replica
I180820 01:59:41.973569 2317 storage/replica.go:880 [replicaGC,n1,s1,r12/1:/Table/1{5-6}] removed 8 (0+8) keys in 1ms [clear=1ms commit=0ms]
I180820 01:59:41.977686 1812 storage/replica_proposal.go:214 [n2,s2,r19/2:/Table/2{2-3}] new range lease repl=(n2,s2):2 seq=3 start=1534730381.951882389,0 epo=1 pro=1534730381.951898915,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:41.983193 2549 storage/replica_range_lease.go:617 [replicate,n1,s1,r19/1:/Table/2{2-3}] done transferring lease to s2: <nil>
I180820 01:59:41.987293 2448 storage/replica_raftstorage.go:538 [replicate,n1,s1,r20/1:/Table/{23-50}] generated preemptive snapshot 330fc715 at index 18
I180820 01:59:42.018996 2504 sql/event_log.go:126 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:33963} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1784-g97db142 StartedAt:1534730381565051877 LocalityAddress:[]} ClusterID:f18aada7-b5f5-463a-956a-d20947d68a02 StartedAt:1534730381565051877 LastUp:1534730381565051877}
I180820 01:59:42.067318 2502 server/server_update.go:67 [n3] no need to upgrade, cluster already at the newest version
I180820 01:59:42.073460 2513 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n3 established
I180820 01:59:42.117222 2537 storage/replica.go:3749 [n2,s2,r13/2:/Table/1{6-7}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:42.166046 1959 storage/store.go:3717 [n1,s1,r13/1:/Table/1{6-7}] added to replica GC queue (peer suggestion)
I180820 01:59:42.176827 2630 storage/replica_command.go:792 [replicate,n2,s2,r21/2:/Table/5{0-1}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, (n2,s2):2, next=3, gen=842353497552]
I180820 01:59:42.232685 2568 storage/store.go:2612 [replicaGC,n1,s1,r13/1:/Table/1{6-7}] removing replica
I180820 01:59:42.234108 2568 storage/replica.go:880 [replicaGC,n1,s1,r13/1:/Table/1{6-7}] removed 8 (0+8) keys in 0ms [clear=0ms commit=0ms]
I180820 01:59:42.250247 2462 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n3 established
I180820 01:59:42.250524 2448 rpc/nodedialer/nodedialer.go:92 [replicate,n1,s1,r20/1:/Table/{23-50}] connection to n3 established
I180820 01:59:42.255478 2448 storage/store_snapshot.go:655 [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
I180820 01:59:42.258324 2619 storage/replica_raftstorage.go:782 [n3,s3,r20/?:{-}] applying preemptive snapshot at index 18 (id=330fc715, encoded size=2684, 1 rocksdb batches, 8 log entries)
I180820 01:59:42.261437 2619 storage/replica_raftstorage.go:788 [n3,s3,r20/?:/Table/{23-50}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=1ms]
I180820 01:59:42.270677 2448 storage/replica_command.go:792 [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=842368780992]
I180820 01:59:42.328562 2630 storage/replica.go:3749 [n2,s2,r21/2:/Table/5{0-1}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:42.333812 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180820 01:59:42.337137 2448 storage/replica.go:3749 [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180820 01:59:42.342500 1959 storage/store.go:3717 [n1,s1,r21/1:/Table/5{0-1}] added to replica GC queue (peer suggestion)
I180820 01:59:42.370495 2641 storage/store.go:2612 [replicaGC,n1,s1,r21/1:/Table/5{0-1}] removing replica
I180820 01:59:42.374037 2641 storage/replica.go:880 [replicaGC,n1,s1,r21/1:/Table/5{0-1}] removed 8 (0+8) keys in 0ms [clear=0ms commit=0ms]
I180820 01:59:42.380666 2707 storage/replica_command.go:282 [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180820 01:59:42.425362 2651 storage/replica_range_lease.go:554 [replicate,n1,s1,r20/1:/Table/{23-50}] transferring lease to s3
I180820 01:59:42.434548 2625 storage/replica_command.go:792 [replicate,n2,s2,r19/2:/Table/2{2-3}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, (n2,s2):2, next=3, gen=842368195968]
I180820 01:59:42.441572 2651 storage/replica_range_lease.go:617 [replicate,n1,s1,r20/1:/Table/{23-50}] done transferring lease to s3: <nil>
I180820 01:59:42.447553 2346 storage/replica_proposal.go:214 [n3,s3,r20/2:/Table/{23-50}] new range lease repl=(n3,s3):2 seq=3 start=1534730382.425507270,0 epo=1 pro=1534730382.425524030,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:42.455030 2754 storage/replica_raftstorage.go:538 [replicate,n1,s1,r9/1:/Table/1{2-3}] generated preemptive snapshot c0d98391 at index 34
I180820 01:59:42.467399 2754 storage/store_snapshot.go:655 [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 69, log entries: 24, rate-limit: 2.0 MiB/sec, 12ms
I180820 01:59:42.470386 2745 storage/replica_raftstorage.go:782 [n3,s3,r9/?:{-}] applying preemptive snapshot at index 34 (id=c0d98391, encoded size=20980, 1 rocksdb batches, 24 log entries)
I180820 01:59:42.475365 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:0s User:root}
I180820 01:59:42.477283 2745 storage/replica_raftstorage.go:788 [n3,s3,r9/?:/Table/1{2-3}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=5ms commit=0ms]
I180820 01:59:42.503618 2754 storage/replica_command.go:792 [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=842378255104]
I180820 01:59:42.528180 2757 storage/replica_command.go:792 [replicate,n3,s3,r20/2:/Table/{23-50}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n3,s3):2, next=3, gen=842353288464]
I180820 01:59:42.604699 2754 storage/replica.go:3749 [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180820 01:59:42.629901 2625 storage/replica.go:3749 [n2,s2,r19/2:/Table/2{2-3}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 01:59:42.632037 2752 storage/replica_range_lease.go:554 [replicate,n1,s1,r9/1:/Table/1{2-3}] transferring lease to s3
I180820 01:59:42.640238 1959 storage/store.go:3717 [n1,s1,r19/1:/Table/2{2-3}] added to replica GC queue (peer suggestion)
I180820 01:59:42.674842 2359 storage/replica_proposal.go:214 [n3,s3,r9/2:/Table/1{2-3}] new range lease repl=(n3,s3):2 seq=3 start=1534730382.642656057,0 epo=1 pro=1534730382.642672921,0 following repl=(n1,s1):1 seq=2 start=1534730377.539763868,0 exp=1534730386.557630202,0 pro=1534730377.557673004,0
I180820 01:59:42.687613 2752 storage/replica_range_lease.go:617 [replicate,n1,s1,r9/1:/Table/1{2-3}] done transferring lease to s3: <nil>
I180820 01:59:42.712117 2788 storage/store.go:2612 [replicaGC,n1,s1,r19/1:/Table/2{2-3}] removing replica
I180820 01:59:42.712701 2821 storage/replica_raftstorage.go:538 [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot f8d1ccea at index 18
I180820 01:59:42.714085 2788 storage/replica.go:880 [replicaGC,n1,s1,r19/1:/Table/2{2-3}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180820 01:59:42.750060 2757 storage/replica.go:3749 [n3,s3,r20/2:/Table/{23-50}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n3,s3):2] next=3
I180820 01:59:42.750085 2821 storage/store_snapshot.go:655 [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 37ms
I180820 01:59:42.752389 2825 storage/replica_raftstorage.go:782 [n3,s3,r11/?:{-}] applying preemptive snapshot at index 18 (id=f8d1ccea, encoded size=2385, 1 rocksdb batches, 8 log entries)
I180820 01:59:42.754440 2825 storage/replica_raftstorage.go:788 [n3,s3,r11/?:/Table/1{4-5}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 01:59:42.756262 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:0s User:root}
I180820 01:59:42.774977 2821 storage/replica_command.go:792 [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=842369600856]
I180820 01:59:42.781267 2725 storage/store.go:3717 [n1,s1,r20/1:/Table/{23-50}] added to replica GC queue (peer suggestion)
--- FAIL: testrace/TestInitialPartitioning: TestInitialPartitioning/TIMESTAMPTZ (46.780s)
soon.go:49: condition failed to evaluate within 45s: expected to scan on n1: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
[n3,s3,r7/2:/Table/{SystemCon…-11}] read completed
full trace:
=== SPAN START: session recording ===
[n1,client=127.0.0.1:43144,user=root] [NoTxn pos:3631] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
[n1,client=127.0.0.1:43144,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE in state: NoTxn
=== SPAN START: sql txn ===
[n1,client=127.0.0.1:43144,user=root] [Open pos:3631] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
[n1,client=127.0.0.1:43144,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE in state: Open
[n1,client=127.0.0.1:43144,user=root] planning starts: SELECT
[n1,client=127.0.0.1:43144,user=root] generating optimizer plan
=== SPAN START: txn coordinator send ===
=== SPAN START: dist sender send ===
[client=127.0.0.1:43144,user=root,txn=f84e2336,n1] querying next range at /Table/2/1/52/"TIMESTAMPTZ"/3/1
[client=127.0.0.1:43144,user=root,txn=f84e2336,n1] r7: sending batch 1 Get to (n3,s3):2
[client=127.0.0.1:43144,user=root,txn=f84e2336,n1] sending request to 127.0.0.1:33963
=== SPAN START: /cockroach.roachpb.Internal/Batch ===
=== SPAN START: /cockroach.roachpb.Internal/Batch ===
[n3] 1 Get
[n3,s3] executing 1 requests
[n3,s3,r7/2:/Table/{SystemCon…-11}] read-only path
[n3,s3,r7/2:/Table/{SystemCon…-11}] command queue
[n3,s3,r7/2:/Table/{SystemCon…-11}] waiting for read lock
[n3,s3,r7/2:/Table/{SystemCon…-11}] read completed
=== SPAN START: txn coordinator send ===
[n1,client=127.0.0.1:43144,user=root] client.Txn did AutoCommit. err: <nil>
[n1,client=127.0.0.1:43144,user=root] added table 'data.public."TIMESTAMPTZ"' to table collection
[n1,client=127.0.0.1:43144,user=root] optimizer plan succeeded
[n1,client=127.0.0.1:43144,user=root] planning ends
[n1,client=127.0.0.1:43144,user=root] checking distributability
[n1,client=127.0.0.1:43144,user=root] distributable plan: true
[n1,client=127.0.0.1:43144,user=root] execution starts: distributed
=== SPAN START: consuming rows ===
[n1,client=127.0.0.1:43144,user=root] creating DistSQL plan with distributedMode=true
[n1,client=127.0.0.1:43144,user=root] querying next range at /Table/81/1
[n1,client=127.0.0.1:43144,user=root] running DistSQL plan
=== SPAN START: flow ===
[n1,client=127.0.0.1:43144,user=root] starting (1 processors, 0 startables)
=== SPAN START: table reader ===
cockroach.processorid: 0
cockroach.stat.tablereader.stalltime: 2µs
cockroach.stat.tablereader.input.rows: 0
[n1,client=127.0.0.1:43144,user=root] Scan /Table/81/1{-/1970-01-11T03:42:28.000515Z/#}
=== SPAN START: txn coordinator send ===
=== SPAN START: dist sender send ===
[client=127.0.0.1:43144,user=root,txn=c1848c5e,n1] querying next range at /Table/81/1
[client=127.0.0.1:43144,user=root,txn=c1848c5e,n1] r247: sending batch 1 Scan to (n1,s1):2
[client=127.0.0.1:43144,user=root,txn=c1848c5e,n1] sending request to local server
=== SPAN START: /cockroach.roachpb.Internal/Batch ===
[n1] 1 Scan
[n1,s1] executing 1 requests
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] read-only path
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] read has no clock uncertainty
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] command queue
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] waiting for read lock
[n1,s1,r247/2:/Table/81/1{-/1970-0…}] read completed
=== SPAN START: ordered aggregator ===
cockroach.stat.aggregator.stalltime: 364µs
cockroach.stat.aggregator.mem.max: 10 KiB
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
[n1,client=127.0.0.1:43144,user=root] accumulation complete
[n1,client=127.0.0.1:43144,user=root] exiting aggregator
[n1,client=127.0.0.1:43144,user=root] execution ends
[n1,client=127.0.0.1:43144,user=root] rows affected: 1
=== SPAN START: txn coordinator send ===
[n1,client=127.0.0.1:43144,user=root] AutoCommit. err: <nil>
[n1,client=127.0.0.1:43144,user=root] releasing 1 tables
[n1,client=127.0.0.1:43144,user=root] [NoTxn pos:3632] executing ExecStmt: SET TRACING = off
[n1,client=127.0.0.1:43144,user=root] executing: SET TRACING = off in state: NoTxn
goroutine 30285 [running]:
runtime/debug.Stack(0xa7a358200, 0xc421671530, 0x3da7de0)
/usr/local/go/src/runtime/debug/stack.go:24 +0xb5
github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3e092c0, 0xc4242d23c0, 0xc421671500)
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc4242d23c0)
/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1156 +0x258
testing.tRunner(0xc4242d23c0, 0xc422af2330)
/usr/local/go/src/testing/testing.go:777 +0x16e
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x565
------- Stdout: -------
I180820 02:02:03.827699 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "create_table", target: 81, info: {TableName:data.public."TIMESTAMPTZ" Statement:CREATE TABLE "TIMESTAMPTZ" (a TIMESTAMP WITH TIME ZONE PRIMARY KEY) PARTITION BY LIST (a) (PARTITION p VALUES IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)) User:root}
I180820 02:02:03.903121 30325 storage/replica_command.go:792 [replicate,n2,s2,r244/3:/Table/80/1/1511705146{8-9}] change replicas (REMOVE_REPLICA (n1,s1):2): read existing descriptor r244:/Table/80/1/1511705146{8-9} [(n1,s1):2, (n2,s2):3, next=4, gen=842353873400]
I180820 02:02:03.996604 30257 storage/replica_command.go:282 [split,n3,s3,r243/1:/{Table/80/2-Max}] initiating a split of this range at key /Table/81 [r246]
I180820 02:02:04.040205 30325 storage/replica.go:3749 [n2,s2,r244/3:/Table/80/1/1511705146{8-9}] proposing REMOVE_REPLICA((n1,s1):2): updated=[(n2,s2):3] next=4
I180820 02:02:04.074974 1959 storage/store.go:3717 [n1,s1,r244/2:/Table/80/1/1511705146{8-9}] added to replica GC queue (peer suggestion)
I180820 02:02:04.106033 30330 storage/store.go:2612 [replicaGC,n1,s1,r244/2:/Table/80/1/1511705146{8-9}] removing replica
I180820 02:02:04.119350 2390 storage/replica_proposal.go:214 [n3,s3,r243/1:/{Table/80/2-Max}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:04.164935 30330 storage/replica.go:880 [replicaGC,n1,s1,r244/2:/Table/80/1/1511705146{8-9}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180820 02:02:04.173040 2036 storage/store.go:3736 [n2,s2,r244/3:/Table/80/1/1511705146{8-9}] added to replica GC queue (contacted deleted peer)
I180820 02:02:04.458180 30379 storage/replica_command.go:282 [split,n3,s3,r246/1:/{Table/81-Max}] initiating a split of this range at key /Table/81/1 [r247]
I180820 02:02:04.504322 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "set_zone_config", target: 81, info: {Target:data."TIMESTAMPTZ"@primary Config:constraints: [+n1] User:root}
I180820 02:02:04.773781 2346 storage/replica_proposal.go:214 [n3,s3,r246/1:/{Table/81-Max}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:04.778877 30416 storage/replica_command.go:282 [split,n3,s3,r247/1:/{Table/81/1-Max}] initiating a split of this range at key /Table/81/2 [r248]
I180820 02:02:05.056413 2365 storage/replica_proposal.go:214 [n3,s3,r247/1:/{Table/81/1-Max}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:05.072201 30519 storage/replica_raftstorage.go:538 [replicate,n3,s3,r247/1:/Table/81/{1-2}] generated preemptive snapshot 4aedac1e at index 15
I180820 02:02:05.086383 30519 storage/store_snapshot.go:655 [replicate,n3,s3,r247/1:/Table/81/{1-2}] streamed snapshot to (n1,s1):?: kv pairs: 9, log entries: 5, rate-limit: 2.0 MiB/sec, 13ms
I180820 02:02:05.090075 30505 storage/replica_raftstorage.go:782 [n1,s1,r247/?:{-}] applying preemptive snapshot at index 15 (id=4aedac1e, encoded size=2391, 1 rocksdb batches, 5 log entries)
I180820 02:02:05.092490 30505 storage/replica_raftstorage.go:788 [n1,s1,r247/?:/Table/81/{1-2}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 02:02:05.099390 30519 storage/replica_command.go:792 [replicate,n3,s3,r247/1:/Table/81/{1-2}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r247:/Table/81/{1-2} [(n3,s3):1, next=2, gen=842397011328]
I180820 02:02:05.181744 2529 sql/event_log.go:126 [n1,client=127.0.0.1:43144,user=root] Event: "set_zone_config", target: 81, info: {Target:data."TIMESTAMPTZ".p Config:constraints: [+n2] User:root}
I180820 02:02:05.182911 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:05.183629 30507 storage/replica_command.go:282 [split,n3,s3,r247/1:/Table/81/{1-2}] initiating a split of this range at key /Table/81/1/1970-01-11T03:42:28.000516Z [r249]
I180820 02:02:05.244979 30519 storage/replica.go:3749 [n3,s3,r247/1:/Table/81/{1-2}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180820 02:02:05.256901 30507 storage/split_queue.go:135 [split,n3,s3,r247/1:/Table/81/{1-2}] split saw concurrent descriptor modification; maybe retrying
I180820 02:02:05.258291 30447 storage/replica_command.go:282 [split,n3,s3,r247/1:/Table/81/{1-2}] initiating a split of this range at key /Table/81/1/1970-01-11T03:42:28.000516Z [r250]
I180820 02:02:05.480965 2357 storage/replica_proposal.go:214 [n3,s3,r247/1:/Table/81/{1-2}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:05.489860 30464 storage/replica_command.go:282 [split,n3,s3,r250/1:/Table/81/{1/1970-…-2}] initiating a split of this range at key /Table/81/1/1970-01-11T03:42:28.000516001Z [r251]
I180820 02:02:05.503691 30552 storage/replica_range_lease.go:554 [replicate,n3,s3,r247/1:/Table/81/1{-/1970-0…}] transferring lease to s1
I180820 02:02:05.518673 30552 storage/replica_range_lease.go:617 [replicate,n3,s3,r247/1:/Table/81/1{-/1970-0…}] done transferring lease to s1: <nil>
I180820 02:02:05.519052 1184 storage/replica_proposal.go:214 [n1,s1,r247/2:/Table/81/1{-/1970-0…}] new range lease repl=(n1,s1):2 seq=6 start=1534730525.504798119,0 epo=1 pro=1534730525.504827117,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:05.740465 2346 storage/replica_proposal.go:214 [n3,s3,r250/1:/Table/81/{1/1970-…-2}] new range lease repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:05.766643 30615 storage/replica_range_lease.go:554 [replicate,n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] transferring lease to s1
I180820 02:02:05.824194 30615 storage/replica_range_lease.go:617 [replicate,n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] done transferring lease to s1: <nil>
I180820 02:02:05.826609 1142 storage/replica_proposal.go:214 [n1,s1,r250/2:/Table/81/1/1970-01-11T03:42:…] new range lease repl=(n1,s1):2 seq=6 start=1534730525.766811117,0 epo=1 pro=1534730525.766837525,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
E180820 02:02:05.859136 30647 storage/queue.go:788 [replicate,n3,s3,r251/1:/Table/81/{1/1970-…-2}] no removable replicas from range that needs a removal: [raft progress unknown]
I180820 02:02:05.916344 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:05.986266 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a > '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.098687 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.107318 30708 storage/replica_command.go:792 [replicate,n1,s1,r247/2:/Table/81/1{-/1970-0…}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r247:/Table/81/1{-/1970-01-11T03:42:28.000516Z} [(n3,s3):1, (n1,s1):2, next=3, gen=842394703968]
I180820 02:02:06.234344 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.297677 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)
I180820 02:02:06.323231 30708 storage/replica.go:3749 [n1,s1,r247/2:/Table/81/1{-/1970-0…}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180820 02:02:06.341089 2742 storage/store.go:3717 [n3,s3,r247/1:/Table/81/1{-/1970-0…}] added to replica GC queue (peer suggestion)
I180820 02:02:06.387462 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.388254 30658 storage/store.go:2612 [replicaGC,n3,s3,r247/1:/Table/81/1{-/1970-0…}] removing replica
I180820 02:02:06.410520 30658 storage/replica.go:880 [replicaGC,n3,s3,r247/1:/Table/81/1{-/1970-0…}] removed 8 (0+8) keys in 4ms [clear=4ms commit=0ms]
I180820 02:02:06.425397 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.484472 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)
I180820 02:02:06.539990 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.584637 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)
I180820 02:02:06.624168 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.706597 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE)
I180820 02:02:06.756127 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.811101 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.857426 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:06.893298 30717 storage/replica_range_lease.go:554 [replicate,n1,s1,r250/2:/Table/81/1/1970-01-11T03:42:…] transferring lease to s3
I180820 02:02:06.918430 30717 storage/replica_range_lease.go:617 [replicate,n1,s1,r250/2:/Table/81/1/1970-01-11T03:42:…] done transferring lease to s3: <nil>
I180820 02:02:06.924796 2391 storage/replica_proposal.go:214 [n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] new range lease repl=(n3,s3):1 seq=7 start=1534730526.893461253,0 epo=1 pro=1534730526.893489519,0 following repl=(n1,s1):2 seq=6 start=1534730525.766811117,0 epo=1 pro=1534730525.766837525,0
I180820 02:02:06.929008 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.003260 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.086013 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.132009 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.175074 30773 storage/replica_range_lease.go:554 [replicate,n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] transferring lease to s1
I180820 02:02:07.200241 30773 storage/replica_range_lease.go:617 [replicate,n3,s3,r250/1:/Table/81/1/1970-01-11T03:42:…] done transferring lease to s1: <nil>
I180820 02:02:07.201078 1173 storage/replica_proposal.go:214 [n1,s1,r250/2:/Table/81/1/1970-01-11T03:42:…] new range lease repl=(n1,s1):2 seq=8 start=1534730527.175232857,0 epo=1 pro=1534730527.175258570,0 following repl=(n3,s3):1 seq=7 start=1534730526.893461253,0 epo=1 pro=1534730526.893489519,0
I180820 02:02:07.246823 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.273414 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.352331 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.380761 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.544076 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.568174 1216 server/status/runtime.go:433 [n1] runtime stats: 1.0 GiB RSS, 648 goroutines, 36 MiB/24 MiB/86 MiB GO alloc/idle/total, 111 MiB/153 MiB CGO alloc/total, 2995.27cgo/sec, 1.64/0.07 %(u/s)time, 0.02 %gc (12x)
I180820 02:02:07.597320 30852 storage/replica_range_lease.go:554 [replicate,n3,s3,r251/1:/Table/81/{1/1970-…-2}] transferring lease to s1
I180820 02:02:07.607393 30852 storage/replica_range_lease.go:617 [replicate,n3,s3,r251/1:/Table/81/{1/1970-…-2}] done transferring lease to s1: <nil>
I180820 02:02:07.626564 1164 storage/replica_proposal.go:214 [n1,s1,r251/2:/Table/81/{1/1970-…-2}] new range lease repl=(n1,s1):2 seq=6 start=1534730527.597504394,0 epo=1 pro=1534730527.597552054,0 following repl=(n3,s3):1 seq=5 start=1534730408.485714913,0 epo=1 pro=1534730408.485743739,0
I180820 02:02:07.634070 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.728733 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.779077 30285 ccl/partitionccl/partition_test.go:200 query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-11 03:42:28.000516+00:00':::TIMESTAMP WITH TIME ZONE
I180820 02:02:07.808378 1234 server/status/recorder.go:652 [n1,summaries] available memory from cgroups (8.0 EiB)
```
Please assign, take a look and update the issue accordingly.
|
non_defect
|
teamcity failed tests on master testrace testinitialpartitioning test testinitialpartitioning the following tests appear to have failed fail testrace testinitialpartitioning stdout server status runtime go could not parse build timestamp parsing time as cannot parse as server server go monitoring forward clock jumps based on server clock forward jump check enabled base addr validation go server certificate addresses ip dns localhost local cn node base addr validation go web ui certificate addresses ip dns localhost local cn node server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b server node go cluster has been created server server go add additional nodes by specifying join gossip gossip go nodedescriptor set to node id address attrs locality serverversion build tag alpha started at storage store go failed initial metrics computation system config not yet available server node go initialized store disk capacity mib available mib used b logicalbytes kib ranges leases queries writes bytesperreplica pmax writesperreplica pmax storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster server node go node started with engine s and attributes server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go could not start heap profiler worker due to directory to store profiles could not be determined server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go starting https server at use server server go starting grpc postgres server at server server go advertising cockroachdb node at jobs registry go unable to get node liveness node not in the liveness table storage replica command go initiating a split of this range at key system storage 
replica command go initiating a split of this range at key system nodeliveness storage intent resolver go failed to push during intent resolution failed to push sql txn id key table systemconfigspan start rw true pri iso serializable stat pending epo ts orig max wto false rop false seq sql event log go event set cluster setting target info settingname diagnostics reporting enabled value true user root rpc nodedialer nodedialer go connection to established storage replica command go initiating a split of this range at key system nodelivenessmax storage replica command go initiating a split of this range at key system tsd storage replica command go initiating a split of this range at key system tse sql event log go event set cluster setting target info settingname version value user root storage replica command go initiating a split of this range at key table systemconfigspan start sql event log go event set cluster setting target info settingname trace debug enable value false user root storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table sql event log go event set cluster setting target info settingname cluster secret value gen random uuid string user root storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table sql event log go event create database target info databasename defaultdb statement create database if not exists defaultdb user root storage replica command go initiating a split of this range at key table sql event log go event create database target info databasename postgres statement create database if not exists postgres user root server server go done ensuring all necessary migrations have run server server go serving sql connections storage replica command go initiating a split of this range at key 
table server server update go no need to upgrade cluster already at the newest version sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag alpha startedat localityaddress clusterid startedat lastup storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table server status runtime go could not parse build timestamp parsing time as cannot parse as server server go monitoring forward clock jumps based on server clock forward jump check enabled base addr validation go server certificate addresses ip dns localhost local cn node base addr validation go web ui certificate addresses ip dns localhost local cn node server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b gossip gossip go no incoming or outgoing connections server server go no stores bootstrapped and join flag specified awaiting init command gossip server go received initial cluster verification connection from tcp gossip client go started gossip client to server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv dist sender go unable 
to determine this node s attributes for replica selection node is most likely bootstrapping server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality serverversion build tag alpha started at storage stores go read node addresses from persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage server node go bootstrapped store server node go node started with engine s and attributes server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go could not start heap profiler worker due to directory to store profiles could not be determined server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go starting https server at use server server go starting grpc postgres server at server server go advertising cockroachdb node at server server go done ensuring all necessary migrations have run server server go serving sql connections storage replica raftstorage go generated preemptive snapshot at index server server update go no need to upgrade cluster already at the newest version sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag alpha startedat localityaddress clusterid startedat lastup rpc nodedialer nodedialer go connection to established rpc nodedialer nodedialer go connection to established rpc nodedialer nodedialer go connection to established storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor system nodelivenessmax tsd server status runtime go 
could not parse build timestamp parsing time as cannot parse as storage replica go proposing add replica updated next storage replica range lease go transferring lease to util protoutil randnullability go inserting null for roachpb lease expiration true util protoutil randnullability go inserting null for roachpb lease deprecatedstartstasis false storage replica range lease go done transferring lease to storage replica raftstorage go generated preemptive snapshot at index storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table server server go monitoring forward clock jumps based on server clock forward jump check enabled base addr validation go server certificate addresses ip dns localhost local cn node base addr validation go web ui certificate addresses ip dns localhost local cn node server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b gossip gossip go no incoming or outgoing connections storage replica go proposing add replica updated next server server go no stores bootstrapped and join flag specified awaiting init command storage replica command go change replicas remove replica read existing descriptor system nodelivenessmax tsd storage replica range lease go transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage replica range lease go done transferring lease to storage replica raftstorage go generated preemptive snapshot at index storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage 
replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing add replica updated next gossip client go started gossip client to gossip server go received initial cluster verification connection from tcp server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping storage replica range lease go transferring lease to storage replica go proposing remove replica updated next storage replica range lease go done transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage replica raftstorage go generated preemptive snapshot at index storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage store go added to replica gc queue peer suggestion storage replica command go change replicas remove replica read existing descriptor table storage store go removing replica storage replica go removed keys in storage replica go proposing add replica updated next storage replica range lease go transferring lease to storage replica go proposing remove replica updated next storage replica 
range lease go done transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage replica raftstorage go generated preemptive snapshot at index storage store go added to replica gc queue peer suggestion storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage store go removing replica storage replica go removed keys in storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality serverversion build tag alpha started at storage stores go read node addresses from persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage replica command go change replicas remove replica read existing descriptor table storage replica go proposing add replica updated next server node go bootstrapped store server node go node started with engine s and attributes server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go could not start heap profiler worker due to directory to store profiles could not be determined server status recorder go available memory from cgroups eib exceeds system memory gib using system memory storage replica range lease go transferring lease to server server go starting https server at use server server go starting grpc postgres server at server server go advertising cockroachdb node at storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage replica range lease go done transferring lease to 
server server go done ensuring all necessary migrations have run server server go serving sql connections storage replica raftstorage go generated preemptive snapshot at index storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing remove replica updated next rpc nodedialer nodedialer go connection to established rpc nodedialer nodedialer go connection to established storage replica go proposing add replica updated next storage replica command go change replicas remove replica read existing descriptor table storage store go added to replica gc queue peer suggestion storage replica range lease go transferring lease to storage store go removing replica storage replica go removed keys in storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage replica range lease go done transferring lease to storage replica raftstorage go generated preemptive snapshot at index sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag alpha startedat localityaddress clusterid startedat lastup server server update go no need to upgrade cluster already at the newest version rpc nodedialer nodedialer go connection to established storage replica go proposing remove replica updated next storage store go added to replica gc queue peer suggestion storage replica command go change replicas remove replica read existing descriptor table storage store go removing replica storage replica go removed keys in rpc nodedialer nodedialer go connection to established rpc nodedialer nodedialer go connection to established storage store snapshot go streamed 
snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica go proposing remove replica updated next sql event log go event create database target info databasename data statement create database data user root storage replica go proposing add replica updated next storage store go added to replica gc queue peer suggestion storage store go removing replica storage replica go removed keys in storage replica command go initiating a split of this range at key table storage replica range lease go transferring lease to storage replica command go change replicas remove replica read existing descriptor table storage replica range lease go done transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage replica raftstorage go generated preemptive snapshot at index storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries sql event log go event set cluster setting target info settingname server declined reservation timeout value user root storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table storage replica command go change replicas remove replica read existing descriptor table storage replica go proposing add replica updated next storage replica go proposing remove replica updated next storage replica range lease go transferring lease to storage store go added to replica gc queue peer suggestion storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage replica 
range lease go done transferring lease to storage store go removing replica storage replica raftstorage go generated preemptive snapshot at index storage replica go removed keys in storage replica go proposing remove replica updated next storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in sql event log go event set cluster setting target info settingname server failed reservation timeout value user root storage replica command go change replicas add replica read existing descriptor table storage store go added to replica gc queue peer suggestion fail testrace testinitialpartitioning testinitialpartitioning timestamptz soon go condition failed to evaluate within expected to scan on select count from timestamptz where a timestamp with time zone read completed full trace span start session recording executing execstmt select count from timestamptz where a timestamp with time zone executing select count from timestamptz where a timestamp with time zone in state notxn span start sql txn executing execstmt select count from timestamptz where a timestamp with time zone executing select count from timestamptz where a timestamp with time zone in state open planning starts select generating optimizer plan span start txn coordinator send span start dist sender send querying next range at table timestamptz sending batch get to sending request to span start cockroach roachpb internal batch span start cockroach roachpb internal batch get executing requests read only path command queue waiting for read lock read completed span start txn coordinator send client txn did autocommit err added table data public timestamptz to table collection optimizer plan succeeded planning ends checking distributability distributable plan true execution starts distributed span start consuming 
rows creating distsql plan with distributedmode true querying next range at table running distsql plan span start flow starting processors startables span start table reader cockroach processorid cockroach stat tablereader stalltime cockroach stat tablereader input rows scan table span start txn coordinator send span start dist sender send querying next range at table sending batch scan to sending request to local server span start cockroach roachpb internal batch scan executing requests read only path read has no clock uncertainty command queue waiting for read lock read completed span start ordered aggregator cockroach stat aggregator stalltime cockroach stat aggregator mem max kib cockroach processorid cockroach stat aggregator input rows accumulation complete exiting aggregator execution ends rows affected span start txn coordinator send autocommit err releasing tables executing execstmt set tracing off executing set tracing off in state notxn goroutine runtime debug stack usr local go src runtime debug stack go github com cockroachdb cockroach pkg testutils succeedssoon go src github com cockroachdb cockroach pkg testutils soon go github com cockroachdb cockroach pkg ccl partitionccl testinitialpartitioning go src github com cockroachdb cockroach pkg ccl partitionccl partition test go testing trunner usr local go src testing testing go created by testing t run usr local go src testing testing go stdout sql event log go event create table target info tablename data public timestamptz statement create table timestamptz a timestamp with time zone primary key partition by list a partition p values in timestamp with time zone user root storage replica command go change replicas remove replica read existing descriptor table storage replica command go initiating a split of this range at key table storage replica go proposing remove replica updated next storage store go added to replica gc queue peer suggestion storage store go removing replica storage replica 
proposal go new range lease repl seq start epo pro following repl seq start epo pro storage replica go removed keys in storage store go added to replica gc queue contacted deleted peer storage replica command go initiating a split of this range at key table sql event log go event set zone config target info target data timestamptz primary config constraints user root storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro storage replica command go initiating a split of this range at key table storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro storage replica raftstorage go generated preemptive snapshot at index storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table sql event log go event set zone config target info target data timestamptz p config constraints user root ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone storage replica command go initiating a split of this range at key table storage replica go proposing add replica updated next storage split queue go split saw concurrent descriptor modification maybe retrying storage replica command go initiating a split of this range at key table storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro storage replica command go initiating a split of this range at key table storage replica range lease go transferring lease to storage replica range lease go done transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro storage replica proposal go new range lease repl seq start 
epo pro following repl seq start epo pro storage replica range lease go transferring lease to storage replica range lease go done transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro storage queue go no removable replicas from range that needs a removal ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone storage replica command go change replicas remove replica read existing descriptor table ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a in timestamp with time zone storage replica go proposing remove replica updated next storage store go added to replica gc queue peer suggestion ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone storage store go removing replica storage replica go removed keys in ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a in timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a in timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a in timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go 
query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone storage replica range lease go transferring lease to storage replica range lease go done transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone storage replica range lease go transferring lease to storage replica range lease go done transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone server status runtime go runtime stats gib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total sec u s time gc storage replica range lease go transferring lease to storage replica range lease go done transferring lease to storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl 
partition test go query select count from timestamptz where a timestamp with time zone ccl partitionccl partition test go query select count from timestamptz where a timestamp with time zone server status recorder go available memory from cgroups eib please assign take a look and update the issue accordingly
| 0
|
41,388
| 10,437,946,309
|
IssuesEvent
|
2019-09-18 00:00:46
|
DannyCork/python-whois
|
https://api.github.com/repos/DannyCork/python-whois
|
closed
|
Retrieve Domain Status Information From WhoIs Results
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will fix the problem?
1. As it is, no domain status is provided in the dictionary
2. Sample script:
import whois
domain = whois.query('google.com')
print (domain.status)
3. Simple change/addition to '_3_adjust.py' on line 17 (tabbed as necessary):
self.status = data['status']
What is the expected output? What do you see instead?
This returns the entries on a single line.
['clientDeleteProhibited', 'clientRenewProhibited', 'clientTransferProhibited',
'clientUpdateProhibited']
With this change, though output is provided, it should be listed as such:
clientDeleteProhibited
clientRenewProhibited
clientTransferProhibited
clientUpdateProhibited
What version of the product are you using? On what operating system?
Kubuntu 12.10
Python 3.2
whois 0.7
Please provide any additional information below.
Looking to get all of the domain status information as a list; 1 item per line.
The search is also getting "Domain status" in the results.
Found this out when a domain with "OK" is coming back twice.
Domain status: ok
Status: ok
The regex should be made a literal "Status".
```
Original issue reported on code.google.com by `draik...@gmail.com` on 20 Jan 2013 at 6:58
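The change requested above can be sketched as follows — a minimal, hypothetical helper, not the plugin's actual code. The regex anchors on the literal label "Status" as the reporter suggests, so a "Domain status:" line is not picked up a second time:

```python
import re

# Match only lines that begin with the literal label "Status:",
# so "Domain status: ok" does not match alongside "Status: ok".
STATUS_RE = re.compile(r"^Status:\s*(\S+)", re.MULTILINE)

def extract_statuses(whois_text: str) -> list[str]:
    """Return domain status values found in raw whois output."""
    return STATUS_RE.findall(whois_text)

raw = (
    "Domain status: ok\n"
    "Status: clientDeleteProhibited\n"
    "Status: clientRenewProhibited\n"
)
# Joining with newlines prints one status per line, as requested.
print("\n".join(extract_statuses(raw)))
```

Printing the list joined with newlines gives the one-item-per-line output the reporter asks for, rather than a single bracketed list.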
|
1.0
|
Retrieve Domain Status Information From WhoIs Results - ```
What steps will fix the problem?
1. As it is, no domain status is provided in the dictionary
2. Sample script:
import whois
domain = whois.query('google.com')
print (domain.status)
3. Simple change/addition to '_3_adjust.py' on line 17 (tabbed as necessary):
self.status = data['status']
What is the expected output? What do you see instead?
This returns the entries on a single line.
['clientDeleteProhibited', 'clientRenewProhibited', 'clientTransferProhibited',
'clientUpdateProhibited']
With this change, though output is provided, it should be listed as such:
clientDeleteProhibited
clientRenewProhibited
clientTransferProhibited
clientUpdateProhibited
What version of the product are you using? On what operating system?
Kubuntu 12.10
Python 3.2
whois 0.7
Please provide any additional information below.
Looking to get all of the domain status information as a list; 1 item per line.
The search is also getting "Domain status" in the results.
Found this out when a domain with "OK" is coming back twice.
Domain status: ok
Status: ok
The regex should be made a literal "Status".
```
Original issue reported on code.google.com by `draik...@gmail.com` on 20 Jan 2013 at 6:58
|
defect
|
retrieve domain status information from whois results what steps will fix the problem as it is no domain status is provided in the dictionary sample script import whois domain whois query google com print domain status simple change addition to adjust py on line tabbed as necessary self status data what is the expected output what do you see instead this returns the entries on a single line clientdeleteprohibited clientrenewprohibited clienttransferprohibited clientupdateprohibited with this change thought output is provided it should be listed as such clientdeleteprohibited clientrenewprohibited clienttransferprohibited clientupdateprohibited what version of the product are you using on what operating system kubuntu python whois please provide any additional information below looking to get all of the domain status information as a list item per line the search is also getting domain status in the results found this out when a domain with ok is coming back twice domain status ok status ok the regex should be made a literal status original issue reported on code google com by draik gmail com on jan at
| 1
|
4,726
| 2,610,153,546
|
IssuesEvent
|
2015-02-26 18:48:53
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Aayla Secura
|
auto-migrated Priority-Medium Type-Defect
|
```
Missing shadow
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:04
|
1.0
|
Aayla Secura - ```
Missing shadow
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:04
|
defect
|
aayla secura missing shadow original issue reported on code google com by gmail com on jan at
| 1
|
21,200
| 11,597,982,179
|
IssuesEvent
|
2020-02-24 22:01:57
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Explore Cognos reporting options
|
Product: VZA App Service: Apps Workgroup: AMD Workgroup: VZ
|
Objective: Determine whether we can use custom Cognos reports to get Brazos data into PowerBI reliably and efficiently.
Reported fields:
- Can we add Secondary Street to the report that we generate so that we can roughly geolocate citations?
- Are there any other fields that would be useful?
Report delivery:
- Is there a way to access reports via FTP so that we could have a programmatic integration?
- Can we schedule automatic emails?
... @johnclary @SurbhiBakshi - what else?
|
1.0
|
Explore Cognos reporting options - Objective: Determine whether we can use custom Cognos reports to get Brazos data into PowerBI reliably and efficiently.
Reported fields:
- Can we add Secondary Street to the report that we generate so that we can roughly geolocate citations?
- Are there any other fields that would be useful?
Report delivery:
- Is there a way to access reports via FTP so that we could have a programmatic integration?
- Can we schedule automatic emails?
... @johnclary @SurbhiBakshi - what else?
|
non_defect
|
explore cognos reporting options objective determine whether we can use custom cognos reports to get brazos data into powerbi reliably and efficiently reported fields can we add secondary street to the report that we generate so that we can roughly geolocate citations are there any other fields that would be useful report delivery is there a way to access reports via ftp so that we could have a programmatic integration can we schedule automatic emails johnclary surbhibakshi what else
| 0
|
340,391
| 24,652,803,634
|
IssuesEvent
|
2022-10-17 20:11:44
|
softlayer/softlayer-python
|
https://api.github.com/repos/softlayer/softlayer-python
|
closed
|
Developer Resources Twitter link Update
|
Bug Documentation
|
The Twitter link under [Developer Resources](https://softlayer-python.readthedocs.io/en/latest/dev/#developer-resources) shows `@SoftLayerDevs
This account doesn’t exist`


|
1.0
|
Developer Resources Twitter link Update - The Twitter link under [Developer Resources](https://softlayer-python.readthedocs.io/en/latest/dev/#developer-resources) shows `@SoftLayerDevs
This account doesn’t exist`


|
non_defect
|
developer resources twitter link update the twitter link under shows softlayerdevs this account doesn’t exist
| 0
|
387,233
| 26,716,988,753
|
IssuesEvent
|
2023-01-28 16:52:16
|
primefaces/primereact
|
https://api.github.com/repos/primefaces/primereact
|
closed
|
DocSectionCode: Copy Code
|
Component: Documentation
|
### Describe the bug
Hi,
While exploring the new docs, I was trying to copy some import statements. After pasting, I found that the copied text was undefined (broken). This is because inside the DocSectionCode component, the codeLang state is set to javascript by default. However, some pages render DocSectionCode passing a code object prop with only a basic property. If the codeLang state is always javascript by default, there should be a fallback to basic whenever the code object prop only has a basic property.
### Reproducer
_No response_
### PrimeReact version
9.0.0-beta.1
### React version
18.x
### Language
TypeScript
### Build / Runtime
Create React App (CRA)
### Browser(s)
_No response_
### Steps to reproduce the behavior
1. Visit https://primereact.org/dropdown/
2. Click Copy Code icon in Import section
3. Paste and you will see copied text is undefined
### Expected behavior
Copy what was rendered
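The fallback the report describes could look like the sketch below — written in Python for illustration rather than the component's actual JavaScript, with hypothetical names:

```python
def pick_snippet(code: dict, code_lang: str = "javascript") -> str:
    """Return the snippet for code_lang, falling back to the
    "basic" variant when the code object only carries basic."""
    if code_lang in code:
        return code[code_lang]
    return code.get("basic", "")

# A page that only provides a basic snippet no longer copies `undefined`.
print(pick_snippet({"basic": "import { Dropdown } from 'primereact/dropdown';"}))
```

The design point is simply that the lookup key should degrade gracefully instead of indexing a missing property.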
|
1.0
|
DocSectionCode: Copy Code - ### Describe the bug
Hi,
While exploring the new docs, I was trying to copy some import statements. After pasting, I found that the copied text was undefined (broken). This is because inside the DocSectionCode component, the codeLang state is set to javascript by default. However, some pages render DocSectionCode passing a code object prop with only a basic property. If the codeLang state is always javascript by default, there should be a fallback to basic whenever the code object prop only has a basic property.
### Reproducer
_No response_
### PrimeReact version
9.0.0-beta.1
### React version
18.x
### Language
TypeScript
### Build / Runtime
Create React App (CRA)
### Browser(s)
_No response_
### Steps to reproduce the behavior
1. Visit https://primereact.org/dropdown/
2. Click Copy Code icon in Import section
3. Paste and you will see copied text is undefined
### Expected behavior
Copy what was rendered
|
non_defect
|
docsectioncode copy code describe the bug hi while exploring new docs i was trying to copy some import statements after i paste i found that copied text was undefined broken it is because inside docsectioncode component codelang state is set to javascript by default however some pages are rendering docsectioncode with passing code object prop with only basic property if codelang state should be always javascript there should be basic as a fallback whenever code object prop is only have basic property reproducer no response primereact version beta react version x language typescript build runtime create react app cra browser s no response steps to reproduce the behavior visit click copy code icon in import section paste and you will see copied text is undefined expected behavior copy what was rendered
| 0
|
16,917
| 2,962,703,342
|
IssuesEvent
|
2015-07-10 03:22:38
|
robolectric/robolectric
|
https://api.github.com/repos/robolectric/robolectric
|
closed
|
NotFoundException: not such label
|
3.0 defect
|
I'm trying to execute some tests with org.robolectric:robolectric:3.0-SNAPSHOT and I'm having this error:
```
android.content.res.Resources$NotFoundException: no such label com.my.packageINT.debug:string/app_name
at org.robolectric.util.ActivityController.getActivityTitle(ActivityController.java:104)
at org.robolectric.util.ActivityController.attach(ActivityController.java:49)
at org.robolectric.util.ActivityController$1.run(ActivityController.java:121)
at org.robolectric.shadows.ShadowLooper.runPaused(ShadowLooper.java:309)
at org.robolectric.shadows.CoreShadowsAdapter$2.runPaused(CoreShadowsAdapter.java:45)
(....)
```
My project is built with Android Studio, com.android.tools.build:gradle:1.1.3, and has flavors and build types.
Looking for an answer I found different possible solutions without solving the problem.
I've tried some custom RobolectricRunners:
https://github.com/nenick/AndroidStudioAndRobolectric/blob/master/app/src/test/java/com/example/myapplication/CustomRobolectricRunner.java
and this:
https://github.com/nenick/AndroidStudioAndRobolectric/blob/flavors/app%2Fsrc%2Ftest%2Fjava%2Fcom%2Fexample%2Fmyapplication%2FCustomRobolectricRunner.java
and this:
http://stackoverflow.com/questions/22102176/resourcesnotfoundexception-when-using-robolectric-with-gradle
Always with the same error as result :(
|
1.0
|
NotFoundException: not such label - I'm trying to execute some tests with org.robolectric:robolectric:3.0-SNAPSHOT and I'm having this error:
```
android.content.res.Resources$NotFoundException: no such label com.my.packageINT.debug:string/app_name
at org.robolectric.util.ActivityController.getActivityTitle(ActivityController.java:104)
at org.robolectric.util.ActivityController.attach(ActivityController.java:49)
at org.robolectric.util.ActivityController$1.run(ActivityController.java:121)
at org.robolectric.shadows.ShadowLooper.runPaused(ShadowLooper.java:309)
at org.robolectric.shadows.CoreShadowsAdapter$2.runPaused(CoreShadowsAdapter.java:45)
(....)
```
My project is built with Android Studio, com.android.tools.build:gradle:1.1.3, and has flavors and build types.
Looking for an answer I found different possible solutions without solving the problem.
I've tried some custom RobolectricRunners:
https://github.com/nenick/AndroidStudioAndRobolectric/blob/master/app/src/test/java/com/example/myapplication/CustomRobolectricRunner.java
and this:
https://github.com/nenick/AndroidStudioAndRobolectric/blob/flavors/app%2Fsrc%2Ftest%2Fjava%2Fcom%2Fexample%2Fmyapplication%2FCustomRobolectricRunner.java
and this:
http://stackoverflow.com/questions/22102176/resourcesnotfoundexception-when-using-robolectric-with-gradle
Always with the same error as result :(
|
defect
|
notfoundexception not such label i m trying to execute some tests with org robolectric robolectric snapshot and i m having this error android content res resources notfoundexception no such label com my packageint debug string app name at org robolectric util activitycontroller getactivitytitle activitycontroller java at org robolectric util activitycontroller attach activitycontroller java at org robolectric util activitycontroller run activitycontroller java at org robolectric shadows shadowlooper runpaused shadowlooper java at org robolectric shadows coreshadowsadapter runpaused coreshadowsadapter java my project is build with android studio com android tools build gradle and has flavors and buildtypes looking for an answer i found different possible solutions without solving the problem i ve tried some custom robolectricrunners and this and this always with the same error as result
| 1
|
374,971
| 11,098,041,047
|
IssuesEvent
|
2019-12-16 14:30:18
|
kubeflow/kubeflow
|
https://api.github.com/repos/kubeflow/kubeflow
|
closed
|
[gcp] kfctl should check if user has required permissions
|
area/kfctl kind/feature lifecycle/stale platform/gcp priority/p1
|
kfctl should try to detect whether user has required permissions to set up Kubeflow and if not print suitable error messages.
A simple check would be to see if the user can view the IAM policy for the project.
We could also check and warn the user if they are running on a VM and using a compute service account.
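The "simple check" suggested above reduces to a set difference between the permissions kfctl needs and those the caller actually holds. The permission names below are illustrative, not an exact kfctl requirement list:

```python
# Illustrative permissions kfctl-style setup might require (hypothetical list).
REQUIRED = [
    "resourcemanager.projects.setIamPolicy",
    "iam.serviceAccounts.create",
]

def missing_permissions(required, granted):
    """Return the required permissions the caller does not hold, sorted."""
    return sorted(set(required) - set(granted))

# In practice `granted` would come from a Cloud Resource Manager
# testIamPermissions call; here it is a stub.
granted = ["iam.serviceAccounts.create"]
print(missing_permissions(REQUIRED, granted))
```

If the returned list is non-empty, the tool can print exactly which permissions are missing instead of failing later with an opaque error.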
|
1.0
|
[gcp] kfctl should check if user has required permissions - kfctl should try to detect whether user has required permissions to set up Kubeflow and if not print suitable error messages.
A simple check would be to see if the user can view the IAM policy for the project.
We could also check and warn the user if they are running on a VM and using a compute service account.
|
non_defect
|
kfctl should check if user has required permissions kfctl should try to detect whether user has required permissions to set up kubeflow and if not print suitable error messages a simple check would be to see if the user can via the iam policy for the project we could also check and warn the user if they are running on a vm and using a compute service account
| 0
|
381,602
| 11,276,984,606
|
IssuesEvent
|
2020-01-15 01:07:13
|
streamnative/bookkeeper
|
https://api.github.com/repos/streamnative/bookkeeper
|
opened
|
ISSUE-1908: Bookie ledger replication error lead to increased disk usage
|
area/autorecovery area/bookie priority/important triage/week-7 type/bug
|
Original Issue: apache/bookkeeper#1908
---
**BUG REPORT**
The problem happens after rebooting a bookie (bare metal); I can't quite remember the detailed steps at the time.
I found that disk usage continued to grow:

Then, checking the bookie log, I found some ledger replication error logs:
```
00:02:15.246 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4139-E4139, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.246 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4139
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.246 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.259 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4140-E4140, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.259 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4140
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.259 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.262 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4141-E4141, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.262 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4141
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.262 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.269 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4142-E4142, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.269 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4142
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.269 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.275 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4143-E4143, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.275 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4143
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.276 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.280 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4144-E4144, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.280 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4144
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.280 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.280 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error replicating ledger fragments for ledger: 426399
org.apache.bookkeeper.client.BKException$BKLedgerRecoveryException: Error while recovering ledger
at org.apache.bookkeeper.client.BKException.create(BKException.java:78) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$SingleFragmentCallback.processResult(LedgerFragmentReplicator.java:357) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$1.processResult(LedgerFragmentReplicator.java:184) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.proto.BookkeeperInternalCallbacks$MultiCallback.doCallback(BookkeeperInternalCallbacks.java:244) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.proto.BookkeeperInternalCallbacks$MultiCallback.callback(BookkeeperInternalCallbacks.java:236) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.proto.BookkeeperInternalCallbacks$MultiCallback.tick(BookkeeperInternalCallbacks.java:218) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.proto.BookkeeperInternalCallbacks$MultiCallback.processResult(BookkeeperInternalCallbacks.java:254) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:306) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.280 [ReplicationWorker] WARN org.apache.bookkeeper.replication.ReplicationWorker - BKLedgerRecoveryException while replicating the fragment
org.apache.bookkeeper.client.BKException$BKLedgerRecoveryException: Error while recovering ledger
at org.apache.bookkeeper.client.BKException.create(BKException.java:78) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.BookKeeperAdmin.replicateLedgerFragment(BookKeeperAdmin.java:1058) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.BookKeeperAdmin.replicateLedgerFragment(BookKeeperAdmin.java:1036) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.replication.ReplicationWorker.rereplicate(ReplicationWorker.java:254) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.replication.ReplicationWorker.rereplicate(ReplicationWorker.java:207) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.replication.ReplicationWorker.run(ReplicationWorker.java:168) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
```
This doesn't always happen; I have restarted bookies before as well.
I don't know why these six entries [E4139 - E4144] were lost from the bookie and the ledger.
And it is dangerous that the ledger is re-replicated so many times.
After we disabled autorecovery, found the topic using this ledger, and deleted the topic,
the entry log was successfully cleaned up.
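One way to avoid this failure mode is to bound re-replication retries per ledger and back off after repeated failures, instead of retrying in a tight loop that keeps entry logs pinned. A minimal sketch of such a policy (hypothetical names and constants, not the actual ReplicationWorker logic):

```python
# Hypothetical retry policy for a replication worker: after each failed
# re-replication attempt for a ledger, wait exponentially longer before
# retrying, and give up (defer to an operator / raise an alert) once a
# fixed number of attempts has been reached.

MAX_ATTEMPTS = 5
BASE_DELAY_SECONDS = 10

def next_action(attempt):
    """Decide what to do after `attempt` consecutive failures (1-based).

    Returns ("retry", delay_seconds) while attempts remain, or
    ("give_up", None) once MAX_ATTEMPTS is reached.
    """
    if attempt >= MAX_ATTEMPTS:
        return ("give_up", None)
    # Exponential backoff: 10s, 20s, 40s, 80s, ...
    return ("retry", BASE_DELAY_SECONDS * 2 ** (attempt - 1))
```

With a cap like this, a ledger whose entries are permanently unreadable (as with the BKNoSuchEntryException above) stops consuming replication cycles and can be surfaced for manual inspection instead.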
|
1.0
|
ISSUE-1908: Bookie ledger replication error leads to increased disk usage - Original Issue: apache/bookkeeper#1908
---
**BUG REPORT**
The problem happened after rebooting a bookie (bare metal); I can't quite remember the detailed steps at the time.
We found that disk usage kept growing:

Then we checked the bookie log and found some ledger replication error logs:
```
00:02:15.246 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4139-E4139, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.246 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4139
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.246 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.259 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4140-E4140, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.259 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4140
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.259 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.262 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4141-E4141, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.262 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4141
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.262 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.269 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4142-E4142, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.269 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4142
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.269 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.275 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4143-E4143, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.275 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4143
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.276 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.280 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.PendingReadOp - Read of ledger entry failed: L426399 E4144-E4144, Sent to [172.30.92.15:3181, 172.30.92.16:3181], Heard from [] : bitset = {}. First unread entry is -1
00:02:15.280 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error reading ledger entry: 4144
org.apache.bookkeeper.client.BKException$BKNoSuchEntryException: No such entry
at org.apache.bookkeeper.client.BKException.create(BKException.java:84) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:305) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.280 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.proto.BookkeeperInternalCallbacks - Error in multi callback : -13
00:02:15.280 [BookKeeperClientWorker-OrderedExecutor-15-0] ERROR org.apache.bookkeeper.client.LedgerFragmentReplicator - BK error replicating ledger fragments for ledger: 426399
org.apache.bookkeeper.client.BKException$BKLedgerRecoveryException: Error while recovering ledger
at org.apache.bookkeeper.client.BKException.create(BKException.java:78) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$SingleFragmentCallback.processResult(LedgerFragmentReplicator.java:357) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$1.processResult(LedgerFragmentReplicator.java:184) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.proto.BookkeeperInternalCallbacks$MultiCallback.doCallback(BookkeeperInternalCallbacks.java:244) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.proto.BookkeeperInternalCallbacks$MultiCallback.callback(BookkeeperInternalCallbacks.java:236) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.proto.BookkeeperInternalCallbacks$MultiCallback.tick(BookkeeperInternalCallbacks.java:218) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.proto.BookkeeperInternalCallbacks$MultiCallback.processResult(BookkeeperInternalCallbacks.java:254) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerFragmentReplicator$3.readComplete(LedgerFragmentReplicator.java:306) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.LedgerHandle$6.onFailure(LedgerHandle.java:797) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:38) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.common.concurrent.FutureEventListener.accept(FutureEventListener.java:26) [org.apache.bookkeeper-bookkeeper-common-4.7.2.jar:4.7.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) [?:1.8.0_181]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
00:02:15.280 [ReplicationWorker] WARN org.apache.bookkeeper.replication.ReplicationWorker - BKLedgerRecoveryException while replicating the fragment
org.apache.bookkeeper.client.BKException$BKLedgerRecoveryException: Error while recovering ledger
at org.apache.bookkeeper.client.BKException.create(BKException.java:78) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.BookKeeperAdmin.replicateLedgerFragment(BookKeeperAdmin.java:1058) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.client.BookKeeperAdmin.replicateLedgerFragment(BookKeeperAdmin.java:1036) ~[org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.replication.ReplicationWorker.rereplicate(ReplicationWorker.java:254) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.replication.ReplicationWorker.rereplicate(ReplicationWorker.java:207) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at org.apache.bookkeeper.replication.ReplicationWorker.run(ReplicationWorker.java:168) [org.apache.bookkeeper-bookkeeper-server-4.7.2.jar:4.7.2]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.22.Final.jar:4.1.22.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
```
This doesn't always happen; I had also restarted the bookie beforehand.
I don't know why these entries [E4139 - E4144] were lost from both the bookie and the ledger,
and it is dangerous that the ledger gets replicated so many times.
After we disabled autorecovery, found the topic using this ledger, and deleted that topic,
the entry log was successfully cleaned up.
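The disk growth described above is consistent with a retry loop: each re-replication attempt copies the readable entries again, then aborts on the missing ones, so partial copies pile up. The following is a toy Python model of that failure mode (illustration only, not BookKeeper's actual replication code; entry numbers mirror the report):

```python
# Toy model (not BookKeeper's actual code) of why a fragment with
# permanently missing entries gets re-replicated over and over: each
# attempt copies the readable entries again, then fails on the gap,
# so the target keeps accumulating duplicate writes.

def replicate_fragment(entries, missing, max_attempts):
    """Try to copy entries; abort an attempt at the first missing one."""
    copied = []          # every entry ever written to the target bookie
    attempts = 0
    while attempts < max_attempts:
        attempts += 1
        ok = True
        for e in entries:
            if e in missing:     # BKNoSuchEntryException in the real system
                ok = False
                break
            copied.append(e)     # partial copy survives the failed attempt
        if ok:
            break
    return attempts, copied

# entries E4135..E4144, with E4139/E4140 permanently lost
attempts, copied = replicate_fragment(range(4135, 4145), {4139, 4140}, 5)
```

Every attempt re-copies E4135–E4138, so disk usage grows linearly with retries while the fragment never completes — matching the repeated `BKLedgerRecoveryException` log lines above.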
|
non_defect
|
issue bookie ledger replication error lead to increased disk usage original issue apache bookkeeper bug report problem happens after reboot bookie bare metal i can t remember a little bit about the detailed steps at the time found that disk usage continued growth then check the bookie log found some ledger replicate error log error org apache bookkeeper client pendingreadop read of ledger entry failed sent to heard from bitset first unread entry is error org apache bookkeeper client ledgerfragmentreplicator bk error reading ledger entry org apache bookkeeper client bkexception bknosuchentryexception no such entry at org apache bookkeeper client bkexception create bkexception java at org apache bookkeeper client ledgerfragmentreplicator readcomplete ledgerfragmentreplicator java at org apache bookkeeper client ledgerhandle onfailure ledgerhandle java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture completion run completablefuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java error org apache bookkeeper proto bookkeeperinternalcallbacks error in multi callback error org apache bookkeeper client pendingreadop read of ledger entry failed sent to heard from bitset first unread entry is error org apache bookkeeper client ledgerfragmentreplicator bk error reading ledger entry org apache bookkeeper client bkexception bknosuchentryexception no such entry at org apache bookkeeper client 
bkexception create bkexception java at org apache bookkeeper client ledgerfragmentreplicator readcomplete ledgerfragmentreplicator java at org apache bookkeeper client ledgerhandle onfailure ledgerhandle java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture completion run completablefuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java error org apache bookkeeper proto bookkeeperinternalcallbacks error in multi callback error org apache bookkeeper client pendingreadop read of ledger entry failed sent to heard from bitset first unread entry is error org apache bookkeeper client ledgerfragmentreplicator bk error reading ledger entry org apache bookkeeper client bkexception bknosuchentryexception no such entry at org apache bookkeeper client bkexception create bkexception java at org apache bookkeeper client ledgerfragmentreplicator readcomplete ledgerfragmentreplicator java at org apache bookkeeper client ledgerhandle onfailure ledgerhandle java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture completion run completablefuture java at java util 
concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java error org apache bookkeeper proto bookkeeperinternalcallbacks error in multi callback error org apache bookkeeper client pendingreadop read of ledger entry failed sent to heard from bitset first unread entry is error org apache bookkeeper client ledgerfragmentreplicator bk error reading ledger entry org apache bookkeeper client bkexception bknosuchentryexception no such entry at org apache bookkeeper client bkexception create bkexception java at org apache bookkeeper client ledgerfragmentreplicator readcomplete ledgerfragmentreplicator java at org apache bookkeeper client ledgerhandle onfailure ledgerhandle java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture completion run completablefuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java error org apache bookkeeper proto bookkeeperinternalcallbacks error in multi callback error org apache bookkeeper client pendingreadop read of ledger entry failed sent to heard from bitset first unread entry is error org apache bookkeeper client ledgerfragmentreplicator bk error reading ledger entry org apache bookkeeper client bkexception bknosuchentryexception no such entry at org apache 
bookkeeper client bkexception create bkexception java at org apache bookkeeper client ledgerfragmentreplicator readcomplete ledgerfragmentreplicator java at org apache bookkeeper client ledgerhandle onfailure ledgerhandle java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture completion run completablefuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java error org apache bookkeeper proto bookkeeperinternalcallbacks error in multi callback error org apache bookkeeper client pendingreadop read of ledger entry failed sent to heard from bitset first unread entry is error org apache bookkeeper client ledgerfragmentreplicator bk error reading ledger entry org apache bookkeeper client bkexception bknosuchentryexception no such entry at org apache bookkeeper client bkexception create bkexception java at org apache bookkeeper client ledgerfragmentreplicator readcomplete ledgerfragmentreplicator java at org apache bookkeeper client ledgerhandle onfailure ledgerhandle java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture completion run completablefuture 
java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java error org apache bookkeeper proto bookkeeperinternalcallbacks error in multi callback error org apache bookkeeper client ledgerfragmentreplicator bk error replicating ledger fragments for ledger org apache bookkeeper client bkexception bkledgerrecoveryexception error while recovering ledger at org apache bookkeeper client bkexception create bkexception java at org apache bookkeeper client ledgerfragmentreplicator singlefragmentcallback processresult ledgerfragmentreplicator java at org apache bookkeeper client ledgerfragmentreplicator processresult ledgerfragmentreplicator java at org apache bookkeeper proto bookkeeperinternalcallbacks multicallback docallback bookkeeperinternalcallbacks java at org apache bookkeeper proto bookkeeperinternalcallbacks multicallback callback bookkeeperinternalcallbacks java at org apache bookkeeper proto bookkeeperinternalcallbacks multicallback tick bookkeeperinternalcallbacks java at org apache bookkeeper proto bookkeeperinternalcallbacks multicallback processresult bookkeeperinternalcallbacks java at org apache bookkeeper client ledgerfragmentreplicator readcomplete ledgerfragmentreplicator java at org apache bookkeeper client ledgerhandle onfailure ledgerhandle java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at org apache bookkeeper common concurrent futureeventlistener accept futureeventlistener java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture completion run completablefuture java at java util concurrent threadpoolexecutor runworker 
threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java warn org apache bookkeeper replication replicationworker bkledgerrecoveryexception while replicating the fragment org apache bookkeeper client bkexception bkledgerrecoveryexception error while recovering ledger at org apache bookkeeper client bkexception create bkexception java at org apache bookkeeper client bookkeeperadmin replicateledgerfragment bookkeeperadmin java at org apache bookkeeper client bookkeeperadmin replicateledgerfragment bookkeeperadmin java at org apache bookkeeper replication replicationworker rereplicate replicationworker java at org apache bookkeeper replication replicationworker rereplicate replicationworker java at org apache bookkeeper replication replicationworker run replicationworker java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java this won t always happen i also restarted bookie before i don t know why lost these four entries in bookie and ledger and this is dangerous that ledger replicate so many times after we disable autorecovery and find the topic use this ledger then delete the topic entry log was successfully cleaned up
| 0
|
70,970
| 9,466,447,500
|
IssuesEvent
|
2019-04-18 04:35:26
|
golang/go
|
https://api.github.com/repos/golang/go
|
closed
|
x/tools/internal/lsp: add support for document symbols
|
Documentation NeedsFix gopls
|
Tracking issue to add support for the [document symbols request](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentSymbol)
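The tracking issue itself is thin, so purely as a hedged illustration of what the feature returns: a `textDocument/documentSymbol` response is a tree of `DocumentSymbol` objects per the LSP specification. The sketch below builds one such payload (the helper name and the symbols are invented for illustration; kind 23 is `Struct` and kind 6 is `Method` in the spec's `SymbolKind` table — this is not gopls code):

```python
# Minimal sketch of an LSP textDocument/documentSymbol result, following
# the DocumentSymbol shape from the protocol specification.
import json

def document_symbol(name, kind, start_line, end_line, children=None):
    rng = {"start": {"line": start_line, "character": 0},
           "end": {"line": end_line, "character": 0}}
    return {"name": name, "kind": kind,
            "range": rng, "selectionRange": rng,
            "children": children or []}

# hypothetical outline: a struct with one method nested under it
result = [document_symbol("Server", 23, 10, 40,
                          [document_symbol("Handle", 6, 12, 20)])]
payload = json.dumps(result)
```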
|
1.0
|
x/tools/internal/lsp: add support for document symbols - Tracking issue to add support for the [document symbols request](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentSymbol)
|
non_defect
|
x tools internal lsp add support for document symbols tracking issue to add support for the
| 0
|
47,227
| 13,056,064,958
|
IssuesEvent
|
2020-07-30 03:32:59
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
phys-service test failing on ubu 8.04LTS x64 only in Release build. (Trac #164)
|
Migrated from Trac combo core defect
|
Strange:
./bin/phys-services-test -sa
all tests pass, but run singly:
./bin/phys-services-test -s \
TestGeoTrimmers.cxx/FromRecoPulsesWithNeighbors
TestGeoTrimmers.cxx...
FromRecoPulsesWithNeighbors................................. FAIL
/build/slave_build/manual/offline_software/src/phys-services/private/test/TestGeoTrimmers.cxx:104: FAIL
File: /build/slave_build/manual/offline_software/src/phys-services/private/test/TestGeoTrimmers.cxx
Line: 104
Predicate: ENSURE_EQUAL(sz, 8): sz == 7, 8 == 8
Message: Wrong number of OM's
Migrated from https://code.icecube.wisc.edu/ticket/164
```json
{
"status": "closed",
"changetime": "2011-04-22T22:19:05",
"description": "Strange:\n\n ./bin/phys-services-test -sa\n\nall tests pass, but run singly:\n ./bin/phys-services-test -s \\\nTestGeoTrimmers.cxx/FromRecoPulsesWithNeighbors\n\nTestGeoTrimmers.cxx...\n FromRecoPulsesWithNeighbors................................. FAIL\n/build/slave_build/manual/offline_software/src/phys-services/private/test/TestGeoTrimmers.cxx:104: FAIL\n\n\tFile: /build/slave_build/manual/offline_software/src/phys-services/private/test/TestGeoTrimmers.cxx\n\tLine: 104\n\tPredicate: ENSURE_EQUAL(sz, 8): sz == 7, 8 == 8\n\tMessage: Wrong number of OM's\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "wontfix",
"_ts": "1303510745000000",
"component": "combo core",
"summary": "phys-service test failing on ubu 8.04LTS x64 only in Release build.",
"priority": "normal",
"keywords": "",
"time": "2009-06-12T21:04:33",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
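A test that passes inside the full suite (`-sa`) but fails when run singly is the classic signature of shared state mutated by an earlier test. A toy Python illustration of that dependency (not the actual C++ `tut` tests; the names are invented):

```python
# Toy illustration of the failure mode: a test passes when the whole
# suite runs because an earlier test populated shared state, but fails
# (7 instead of 8) when run on its own.
geometry = []                      # shared module-level state

def test_add_neighbor_om():
    geometry.append("OM-extra")    # earlier test mutates shared state

def test_count_oms():
    base = ["OM%d" % i for i in range(7)]
    return len(base) + len(geometry)   # suite expects 8

alone = test_count_oms()           # run singly: geometry still empty -> 7
geometry.clear()
test_add_neighbor_om()             # full suite: earlier test ran first
in_suite = test_count_oms()        # -> 8
```

(The report's case is inverted in direction — suite passes, single run fails — but the mechanism is the same: the count depends on whether another test ran first.)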
|
1.0
|
phys-service test failing on ubu 8.04LTS x64 only in Release build. (Trac #164) - Strange:
./bin/phys-services-test -sa
all tests pass, but run singly:
./bin/phys-services-test -s \
TestGeoTrimmers.cxx/FromRecoPulsesWithNeighbors
TestGeoTrimmers.cxx...
FromRecoPulsesWithNeighbors................................. FAIL
/build/slave_build/manual/offline_software/src/phys-services/private/test/TestGeoTrimmers.cxx:104: FAIL
File: /build/slave_build/manual/offline_software/src/phys-services/private/test/TestGeoTrimmers.cxx
Line: 104
Predicate: ENSURE_EQUAL(sz, 8): sz == 7, 8 == 8
Message: Wrong number of OM's
Migrated from https://code.icecube.wisc.edu/ticket/164
```json
{
"status": "closed",
"changetime": "2011-04-22T22:19:05",
"description": "Strange:\n\n ./bin/phys-services-test -sa\n\nall tests pass, but run singly:\n ./bin/phys-services-test -s \\\nTestGeoTrimmers.cxx/FromRecoPulsesWithNeighbors\n\nTestGeoTrimmers.cxx...\n FromRecoPulsesWithNeighbors................................. FAIL\n/build/slave_build/manual/offline_software/src/phys-services/private/test/TestGeoTrimmers.cxx:104: FAIL\n\n\tFile: /build/slave_build/manual/offline_software/src/phys-services/private/test/TestGeoTrimmers.cxx\n\tLine: 104\n\tPredicate: ENSURE_EQUAL(sz, 8): sz == 7, 8 == 8\n\tMessage: Wrong number of OM's\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "wontfix",
"_ts": "1303510745000000",
"component": "combo core",
"summary": "phys-service test failing on ubu 8.04LTS x64 only in Release build.",
"priority": "normal",
"keywords": "",
"time": "2009-06-12T21:04:33",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
|
defect
|
phys service test failing on ubu only in release build trac strange bin phys services test sa all tests pass but run singly bin phys services test s testgeotrimmers cxx fromrecopulseswithneighbors testgeotrimmers cxx fromrecopulseswithneighbors fail build slave build manual offline software src phys services private test testgeotrimmers cxx fail file build slave build manual offline software src phys services private test testgeotrimmers cxx line predicate ensure equal sz sz message wrong number of om s migrated from json status closed changetime description strange n n bin phys services test sa n nall tests pass but run singly n bin phys services test s ntestgeotrimmers cxx fromrecopulseswithneighbors n ntestgeotrimmers cxx n fromrecopulseswithneighbors fail n build slave build manual offline software src phys services private test testgeotrimmers cxx fail n n tfile build slave build manual offline software src phys services private test testgeotrimmers cxx n tline n tpredicate ensure equal sz sz n tmessage wrong number of om s n reporter blaufuss cc resolution wontfix ts component combo core summary phys service test failing on ubu only in release build priority normal keywords time milestone owner olivas type defect
| 1
|
39,143
| 9,217,593,615
|
IssuesEvent
|
2019-03-11 11:09:09
|
hazelcast/hazelcast-jet
|
https://api.github.com/repos/hazelcast/hazelcast-jet
|
closed
|
Sources.fileWatcher re-emits all the lines on Linux
|
core defect
|
The fileWatcher is supposed to emit new lines added to a file in a specified directory.
On Linux, it emits all the lines in a file every time the line is added and the file is saved. It works properly on macOS.
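Re-emitting the whole file on every save suggests the watcher re-reads from offset 0 on each modify event instead of resuming where it stopped. A hedged Python sketch of the offset tracking needed (illustration of the idea only, not Jet's actual `fileWatcher` source):

```python
# Sketch of the offset tracking a file watcher needs so that a modify
# event emits only the appended lines, not the whole file again.
import os, tempfile

class TailReader:
    def __init__(self, path):
        self.path = path
        self.offset = 0            # bytes already emitted

    def poll(self):
        with open(self.path, "r") as f:
            f.seek(self.offset)    # resume where the last poll stopped
            new = f.readlines()
            self.offset = f.tell() # remember the new position
        return [line.rstrip("\n") for line in new]

tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".log")
tmp.write("line1\n"); tmp.flush()
reader = TailReader(tmp.name)
first = reader.poll()              # only the initial line
tmp.write("line2\n"); tmp.flush()
second = reader.poll()             # only the appended line, not both
tmp.close(); os.unlink(tmp.name)
```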
|
1.0
|
Sources.fileWatcher re-emits all the lines on Linux - The fileWatcher is supposed to emit new lines added to a file in a specified directory.
On Linux, it emits all the lines in a file every time the line is added and the file is saved. It works properly on macOS.
|
defect
|
sources filewatcher re emits all the lines on linux the filewatcher is supposed to emit new lines added to a file in a specified directory on linux it emits all the lines in a file every time the line is added and the file is saved it works properly on macos
| 1
|
63,109
| 17,377,637,573
|
IssuesEvent
|
2021-07-31 02:48:14
|
Questie/Questie
|
https://api.github.com/repos/Questie/Questie
|
opened
|
Detecting if addon updated without game restart logic fails in Questie.lua
|
Type - Defect
|
https://github.com/Questie/Questie/blob/60ef402a359f54970862e3f80d82024d0d7191ab/Questie.lua#L69-L116
Those all always evaluate to true as table reference is assigned to each few lines earlier by for example:
```lua
local QuestieLib = QuestieLoader:ImportModule("QuestieLib");
```
Both `function QuestieLoader:ImportModule(name)` and `function QuestieLoader:CreateModule(name)` are identical functions currently, so both initialize a module same way:
https://github.com/Questie/Questie/blob/60ef402a359f54970862e3f80d82024d0d7191ab/Modules/Libs/QuestieLoader.lua#L14-L18
Note: `versionCheck.lua` should check the version correctly. But I am not sure it will display an error: does StaticPopup_Show() always work in the middle of the addon loading process, while the game UI is not necessarily available yet? Should it also, or instead, add a timer and chat messages?
`versionCheck.lua` error message styles should be unified. Maybe add a function that takes a message and handles passing it to the UI / user.
v6.5.1
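The bug is that `ImportModule` and `CreateModule` initialize a module identically, so a "was this table already loaded?" check always evaluates true. A Python sketch of the registry pattern the Lua code presumably intends, where only `create` registers a new module and `import` only returns an existing one (function names here mirror the Lua but the implementation is invented for illustration):

```python
# Registry sketch: create_module registers a fresh table, import_module
# only returns an existing one. When both paths initialize identically
# (as in the reported Lua), stale-reload detection can never fire.
modules = {}

def create_module(name):
    if name in modules:
        raise RuntimeError("module %r already exists" % name)
    modules[name] = {"name": name, "fresh": True}
    return modules[name]

def import_module(name):
    mod = modules.get(name)
    if mod is None:
        raise KeyError("module %r not loaded yet" % name)
    mod["fresh"] = False           # importing marks the module as in use
    return mod

lib = create_module("QuestieLib")
same = import_module("QuestieLib")  # same table, now flagged as imported
```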
|
1.0
|
Detecting if addon updated without game restart logic fails in Questie.lua - https://github.com/Questie/Questie/blob/60ef402a359f54970862e3f80d82024d0d7191ab/Questie.lua#L69-L116
Those all always evaluate to true as table reference is assigned to each few lines earlier by for example:
```lua
local QuestieLib = QuestieLoader:ImportModule("QuestieLib");
```
Both `function QuestieLoader:ImportModule(name)` and `function QuestieLoader:CreateModule(name)` are identical functions currently, so both initialize a module same way:
https://github.com/Questie/Questie/blob/60ef402a359f54970862e3f80d82024d0d7191ab/Modules/Libs/QuestieLoader.lua#L14-L18
Note: `versionCheck.lua` should check the version correctly. But I am not sure it will display an error: does StaticPopup_Show() always work in the middle of the addon loading process, while the game UI is not necessarily available yet? Should it also, or instead, add a timer and chat messages?
`versionCheck.lua` error message styles should be unified. Maybe add a function that takes a message and handles passing it to the UI / user.
v6.5.1
|
defect
|
detecting if addon updated wihout game restart logic fails in questie lua those all always evaluate to true as table reference is assigned to each few lines earlier by for example lua local questielib questieloader importmodule questielib both function questieloader importmodule name and function questieloader createmodule name are identical functions currently so both initialize a module same way note versioncheck lua should check version okey but i am not sure if it will display error does staticpopup show work always okey at middle of addon loading process while game ui is not necessary available yet should it also or instead add timer and chat messages versioncheck lua error message styles should be unified maybe add function to call with a message and it will handle passing it to ui user
| 1
|
25,821
| 4,466,897,193
|
IssuesEvent
|
2016-08-25 01:09:30
|
AtlasOfLivingAustralia/spatial-portal
|
https://api.github.com/repos/AtlasOfLivingAustralia/spatial-portal
|
closed
|
Tabulations outstanding issues
|
priority-high status-started type-defect
|
*migrated from:* https://code.google.com/p/ala/issues/detail?id=547
*date:* Mon Jan 27 16:59:46 2014
*author:* leebel...@gmail.com
---
In writing a Case Study on Tabulations (Tools | Tabulate), I note a number of outstanding Tabulation issues
1. Defect: The drop down list includes a typo "Tillage practive - dominant (area)(class)". I am unsure where the list comes from as it is ok in the Layers Table (where I can edit it).
2. Enhancement: The tabulations are currently missing the contextual layers
Surface geology (`#60` Miles to get 250m grid data from Kristen Williams)
Koppen Climate Classification*
Koppen Climate Classification – sub areas*
Land Cover Type – IGBP (2011)*
|
1.0
|
Tabulations outstanding issues -
*migrated from:* https://code.google.com/p/ala/issues/detail?id=547
*date:* Mon Jan 27 16:59:46 2014
*author:* leebel...@gmail.com
---
In writing a Case Study on Tabulations (Tools | Tabulate), I note a number of outstanding Tabulation issues
1. Defect: The drop down list includes a typo "Tillage practive - dominant (area)(class)". I am unsure where the list comes from as it is ok in the Layers Table (where I can edit it).
2. Enhancement: The tabulations are currently missing the contextual layers
Surface geology (`#60` Miles to get 250m grid data from Kristen Williams)
Koppen Climate Classification*
Koppen Climate Classification – sub areas*
Land Cover Type – IGBP (2011)*
|
defect
|
tabulations outstanding issues migrated from date mon jan author leebel gmail com in writing a case study on tabulations tools tabulate i note a number of outstanding tabulation issues defect the drop down list includes a typo tillage practive dominant area class i am unsure where the list comes from as it is ok in the layers table where i can edit it enhancement the tabulations are currently missing the contextual layers surface geology miles to get grid data from kristen williams koppen climate classification koppen climate classification – sub areas land cover type – igbp
| 1
|
109,933
| 23,844,179,327
|
IssuesEvent
|
2022-09-06 12:51:11
|
sast-automation-dev/WebGoat-NET-21
|
https://api.github.com/repos/sast-automation-dev/WebGoat-NET-21
|
opened
|
Code Security Report: 81 high severity findings, 177 total findings
|
code security findings
|
# Code Security Report
**Latest Scan:** 2022-09-06 12:47pm
**Total Findings:** 177
**Tested Project Files:** 314
**Detected Programming Languages:** 2
<!-- SAST-MANUAL-SCAN-START -->
- [ ] Check this box to manually trigger a scan
<!-- SAST-MANUAL-SCAN-END -->
## Language: C#
| Severity | CWE | Vulnerability Type | Count |
|-|-|-|-|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-89](https://cwe.mitre.org/data/definitions/89.html)|SQL Injection|34|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-643](https://cwe.mitre.org/data/definitions/643.html)|XPath Injection|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-22](https://cwe.mitre.org/data/definitions/22.html)|Path/Directory Traversal|2|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-73](https://cwe.mitre.org/data/definitions/73.html)|File Manipulation|3|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-79](https://cwe.mitre.org/data/definitions/79.html)|Cross-Site Scripting|41|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-798](https://cwe.mitre.org/data/definitions/798.html)|Hardcoded Password/Credentials|2|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-244](https://cwe.mitre.org/data/definitions/244.html)|Heap Inspection|51|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-209](https://cwe.mitre.org/data/definitions/209.html)|Error Messages Information Exposure|27|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-400](https://cwe.mitre.org/data/definitions/400.html)|Regex Denial of Service (ReDoS)|2|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-434](https://cwe.mitre.org/data/definitions/434.html)|File Upload|2|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-601](https://cwe.mitre.org/data/definitions/601.html)|Unvalidated/Open Redirect|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-117](https://cwe.mitre.org/data/definitions/117.html)|Log Forging|6|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-113](https://cwe.mitre.org/data/definitions/113.html)|HTTP Header Injection|3|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-20](https://cwe.mitre.org/data/definitions/20.html)|Cookie Injection|1|
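The 34 SQL Injection (CWE-89) findings above typically stem from splicing request input directly into a SQL string; the standard fix is parameter binding. A stand-in sketch contrasting the two patterns (Python `sqlite3` here for illustration, not the scanned C# — in the C# providers the analogous fix would be command parameters instead of string concatenation):

```python
# Contrast the injectable pattern flagged above with a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, secret TEXT)")
conn.execute("INSERT INTO customers VALUES ('a@x.com', 's3cret')")

def lookup_unsafe(email):
    # CWE-89: attacker-controlled input is spliced into the statement
    sql = "SELECT secret FROM customers WHERE email = '%s'" % email
    return conn.execute(sql).fetchall()

def lookup_safe(email):
    # placeholder binding: the driver escapes the value, never the query
    return conn.execute(
        "SELECT secret FROM customers WHERE email = ?", (email,)).fetchall()

payload = "' OR '1'='1"
leaked = lookup_unsafe(payload)    # injection makes the WHERE always true
safe = lookup_safe(payload)        # bound value matches no row
```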
### Details
> The below list presents the 20 most relevant findings that need your attention. To view information on the remaining findings, navigate to the [Mend SAST Application](https://dev.whitesourcesoftware.com/sast/#/scans/8f3f4fea-5852-42a3-ace4-6a52bda41f75/details).
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>SQL Injection (CWE-89) : 20</summary>
#### Findings
<details>
<summary>DB/MySqlDbProvider.cs:123</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L118-L123
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/CustomerLogin.aspx.cs#L34
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L112
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L115
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L118
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L123
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:191</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L186-L191
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L89
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L180
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L190
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L191
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:329</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L324-L329
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ForgotPassword.aspx.cs#L28
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L319
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L321
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L329
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:578</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L573-L578
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/Autocomplete.ashx.cs#L25
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L569
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L571
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L578
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:353</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L348-L353
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/Content/ForgotPassword.aspx.cs#L66
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/Content/ForgotPassword.aspx.cs#L75
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L341
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L352
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L353
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:329</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L324-L329
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/Content/ForgotPassword.aspx.cs#L27
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L319
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L321
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L329
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:325</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L320-L325
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/Content/ForgotPassword.aspx.cs#L27
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L313
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L315
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L325
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:564</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L559-L564
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/Autocomplete.ashx.cs#L25
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L557
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L559
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L564
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:228</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L223-L228
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/MainPage.aspx.cs#L27
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L215
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L217
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L228
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:204</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L199-L204
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L89
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L195
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L203
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L204
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:533</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L528-L533
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/Content/SQLInjection.aspx.cs#L25
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L524
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L526
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L533
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:420</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L415-L420
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L55
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L406
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L415
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L419
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L420
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:416</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L411-L416
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L55
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L406
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L415
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L416
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:86</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L81-L86
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/CustomerLogin.aspx.cs#L33
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L73
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L79
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L86
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:522</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L517-L522
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/Content/SQLInjection.aspx.cs#L25
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L515
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L517
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L522
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:420</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L415-L420
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L55
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L408
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L419
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L420
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:217</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L212-L217
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/MainPage.aspx.cs#L27
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L202
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L204
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L217
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:555</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L550-L555
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/Content/SQLInjectionDiscovery.aspx.cs#L27
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L544
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L554
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L555
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:282</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L277-L282
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L41
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L271
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L273
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L282
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:274</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L269-L274
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L41
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L263
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L265
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L274
</details>
</details>
</details>
## Language: JavaScript / Node.js
| Severity | CWE | Vulnerability Type | Count |
|-|-|-|-|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-338](https://cwe.mitre.org/data/definitions/338.html)|Weak Pseudo-Random|1|
### Details
> No high vulnerability findings detected. To view information on the remaining findings, navigate to the [Mend SAST Application](https://dev.whitesourcesoftware.com/sast/#/scans/8f3f4fea-5852-42a3-ace4-6a52bda41f75/details).
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L55
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L408
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L419
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L420
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:217</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L212-L217
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/MainPage.aspx.cs#L27
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L202
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L204
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L217
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:555</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L550-L555
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/Content/SQLInjectionDiscovery.aspx.cs#L27
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L544
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L554
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L555
</details>
</details>
<details>
<summary>DB/MySqlDbProvider.cs:282</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L277-L282
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L41
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L271
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L273
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/MySqlDbProvider.cs#L282
</details>
</details>
<details>
<summary>DB/SqliteDbProvider.cs:274</summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L269-L274
<details>
<summary> Trace </summary>
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/WebGoatCoins/ProductDetails.aspx.cs#L41
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L263
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L265
https://github.com/sast-automation-dev/WebGoat-NET-21/blob/fc4e46cfb179cf5cd1c6405ff5fe7a590638e404/WebGoat-NET-21/WebGoat/App_Code/DB/SqliteDbProvider.cs#L274
</details>
</details>
</details>
## Language: JavaScript / Node.js
| Severity | CWE | Vulnerability Type | Count |
|-|-|-|-|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-338](https://cwe.mitre.org/data/definitions/338.html)|Weak Pseudo-Random|1|
### Details
> No high vulnerability findings detected. To view information on the remaining findings, navigate to the [Mend SAST Application](https://dev.whitesourcesoftware.com/sast/#/scans/8f3f4fea-5852-42a3-ace4-6a52bda41f75/details).
|
non_defect
|
code security report high severity findings total findings code security report latest scan total findings tested project files detected programming languages check this box to manually trigger a scan language c severity cwe vulnerability type count high injection high injection high traversal high manipulation high scripting medium password credentials medium inspection medium messages information exposure medium denial of service redos low upload low redirect low forging low header injection low injection details the below list presents the most relevant findings that need your attention to view information on the remaining findings navigate to the sql injection cwe findings db mysqldbprovider cs trace db sqlitedbprovider cs trace db mysqldbprovider cs trace db sqlitedbprovider cs trace db sqlitedbprovider cs trace db mysqldbprovider cs trace db sqlitedbprovider cs trace db mysqldbprovider cs trace db mysqldbprovider cs trace db mysqldbprovider cs trace db sqlitedbprovider cs trace db mysqldbprovider cs trace db mysqldbprovider cs trace db sqlitedbprovider cs trace db mysqldbprovider cs trace db sqlitedbprovider cs trace db sqlitedbprovider cs trace db sqlitedbprovider cs trace db mysqldbprovider cs trace db sqlitedbprovider cs trace language javascript node js severity cwe vulnerability type count medium pseudo random details no high vulnerability findings detected to view information on the remaining findings navigate to the
| 0
|
44,661
| 12,311,814,389
|
IssuesEvent
|
2020-05-12 13:02:16
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
NodeQueryCacheConfigurator.findQueryCacheConfigFromMapConfig throws java.util.ConcurrentModificationException
|
Module: IMap Source: Community Team: Core Type: Defect
|
Got `ConcurrentModificationException` while setting up Query Cache from multiple threads.
```
java.util.ConcurrentModificationException: null
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
at java.util.ArrayList$Itr.next(ArrayList.java:859)
at com.hazelcast.map.impl.querycache.subscriber.NodeQueryCacheConfigurator.findQueryCacheConfigFromMapConfig(NodeQueryCacheConfigurator.java:83)
at com.hazelcast.map.impl.querycache.subscriber.NodeQueryCacheConfigurator.getOrNull(NodeQueryCacheConfigurator.java:70)
at com.hazelcast.map.impl.querycache.subscriber.AbstractQueryCacheEndToEndConstructor.getOrNullQueryCacheConfig(AbstractQueryCacheEndToEndConstructor.java:157)
at com.hazelcast.map.impl.querycache.subscriber.AbstractQueryCacheEndToEndConstructor.initQueryCacheConfig(AbstractQueryCacheEndToEndConstructor.java:134)
at com.hazelcast.map.impl.querycache.subscriber.AbstractQueryCacheEndToEndConstructor.createNew(AbstractQueryCacheEndToEndConstructor.java:74)
at com.hazelcast.map.impl.querycache.subscriber.AbstractQueryCacheEndToEndConstructor.createNew(AbstractQueryCacheEndToEndConstructor.java:39)
at com.hazelcast.map.impl.querycache.subscriber.QueryCacheEndToEndProvider.tryCreateQueryCache(QueryCacheEndToEndProvider.java:92)
at com.hazelcast.map.impl.querycache.subscriber.QueryCacheEndToEndProvider.getOrCreateQueryCache(QueryCacheEndToEndProvider.java:60)
at com.hazelcast.map.impl.proxy.MapProxyImpl.createQueryCache(MapProxyImpl.java:1074)
at com.hazelcast.map.impl.proxy.MapProxyImpl.getQueryCacheInternal(MapProxyImpl.java:1067)
at com.hazelcast.map.impl.proxy.MapProxyImpl.getQueryCache(MapProxyImpl.java:1032)
```
**Expected behavior**
It is expected that method `getQueryCache` is thread-safe.
**To Reproduce**
This is a race condition so reproducer is a bit complicated. But the code inside `NodeQueryCacheConfigurator` is obviously not thread-safe.
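The stack trace points at an `ArrayList` being iterated inside `findQueryCacheConfigFromMapConfig` while another thread mutates it. A minimal, single-threaded sketch of the same fail-fast mechanism (hypothetical names, not Hazelcast's actual code — in the reported bug the structural modification comes from a second thread, but a single-threaded mutation during iteration trips the identical check deterministically):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    public static void main(String[] args) {
        // Stand-in for the node's list of query-cache configs
        // (hypothetical contents, for illustration only).
        List<String> queryCacheConfigs = new ArrayList<>();
        queryCacheConfigs.add("cache-a");
        queryCacheConfigs.add("cache-b");

        boolean caught = false;
        try {
            // ArrayList iterators are fail-fast: any structural modification
            // after the iterator is created invalidates it, and the next
            // call to next() throws ConcurrentModificationException.
            for (String cfg : queryCacheConfigs) {
                queryCacheConfigs.add(cfg + "-copy");
            }
        } catch (ConcurrentModificationException e) {
            caught = true;
        }
        System.out.println("caught=" + caught);
    }
}
```

A likely fix direction (not confirmed against the actual patch) is to guard the config list with the config object's lock, or hold the configs in a concurrent collection such as `CopyOnWriteArrayList`, whose iterators operate on a snapshot and never throw this exception.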
**Additional context**
This happened during Jet job initialization.
```
...
at com.hazelcast.map.impl.proxy.MapProxyImpl.getQueryCacheInternal(MapProxyImpl.java:1067)
at com.hazelcast.map.impl.proxy.MapProxyImpl.getQueryCache(MapProxyImpl.java:1032)
...
at com.hazelcast.jet.impl.execution.ProcessorTasklet.init(ProcessorTasklet.java:231)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at com.hazelcast.jet.impl.execution.TaskletExecutionService.lambda$submitCooperativeTasklets$4(TaskletExecutionService.java:170)
at com.hazelcast.jet.impl.util.Util.doWithClassLoader(Util.java:440)
at com.hazelcast.jet.impl.execution.TaskletExecutionService.submitCooperativeTasklets(TaskletExecutionService.java:169)
...
at com.hazelcast.jet.impl.MasterJobContext.invokeStartExecution(MasterJobContext.java:462)
at com.hazelcast.jet.impl.MasterJobContext.lambda$onInitStepCompleted$4(MasterJobContext.java:437)
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitToCoordinatorThread$44(JobCoordinationService.java:971)
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitToCoordinatorThread$45(JobCoordinationService.java:990)
at com.hazelcast.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:67)
at com.hazelcast.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
```
Hazelcast Jet 3.2.1
|
1.0
|
NodeQueryCacheConfigurator.findQueryCacheConfigFromMapConfig throws java.util.ConcurrentModificationException - Got `ConcurrentModificationException` while setting up Query Cache from multiple threads.
```
java.util.ConcurrentModificationException: null
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
at java.util.ArrayList$Itr.next(ArrayList.java:859)
at com.hazelcast.map.impl.querycache.subscriber.NodeQueryCacheConfigurator.findQueryCacheConfigFromMapConfig(NodeQueryCacheConfigurator.java:83)
at com.hazelcast.map.impl.querycache.subscriber.NodeQueryCacheConfigurator.getOrNull(NodeQueryCacheConfigurator.java:70)
at com.hazelcast.map.impl.querycache.subscriber.AbstractQueryCacheEndToEndConstructor.getOrNullQueryCacheConfig(AbstractQueryCacheEndToEndConstructor.java:157)
at com.hazelcast.map.impl.querycache.subscriber.AbstractQueryCacheEndToEndConstructor.initQueryCacheConfig(AbstractQueryCacheEndToEndConstructor.java:134)
at com.hazelcast.map.impl.querycache.subscriber.AbstractQueryCacheEndToEndConstructor.createNew(AbstractQueryCacheEndToEndConstructor.java:74)
at com.hazelcast.map.impl.querycache.subscriber.AbstractQueryCacheEndToEndConstructor.createNew(AbstractQueryCacheEndToEndConstructor.java:39)
at com.hazelcast.map.impl.querycache.subscriber.QueryCacheEndToEndProvider.tryCreateQueryCache(QueryCacheEndToEndProvider.java:92)
at com.hazelcast.map.impl.querycache.subscriber.QueryCacheEndToEndProvider.getOrCreateQueryCache(QueryCacheEndToEndProvider.java:60)
at com.hazelcast.map.impl.proxy.MapProxyImpl.createQueryCache(MapProxyImpl.java:1074)
at com.hazelcast.map.impl.proxy.MapProxyImpl.getQueryCacheInternal(MapProxyImpl.java:1067)
at com.hazelcast.map.impl.proxy.MapProxyImpl.getQueryCache(MapProxyImpl.java:1032)
```
**Expected behavior**
It is expected that method `getQueryCache` is thread-safe.
**To Reproduce**
This is a race condition so reproducer is a bit complicated. But the code inside `NodeQueryCacheConfigurator` is obviously not thread-safe.
**Additional context**
This happened during Jet job initialization.
```
...
at com.hazelcast.map.impl.proxy.MapProxyImpl.getQueryCacheInternal(MapProxyImpl.java:1067)
at com.hazelcast.map.impl.proxy.MapProxyImpl.getQueryCache(MapProxyImpl.java:1032)
...
at com.hazelcast.jet.impl.execution.ProcessorTasklet.init(ProcessorTasklet.java:231)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at com.hazelcast.jet.impl.execution.TaskletExecutionService.lambda$submitCooperativeTasklets$4(TaskletExecutionService.java:170)
at com.hazelcast.jet.impl.util.Util.doWithClassLoader(Util.java:440)
at com.hazelcast.jet.impl.execution.TaskletExecutionService.submitCooperativeTasklets(TaskletExecutionService.java:169)
...
at com.hazelcast.jet.impl.MasterJobContext.invokeStartExecution(MasterJobContext.java:462)
at com.hazelcast.jet.impl.MasterJobContext.lambda$onInitStepCompleted$4(MasterJobContext.java:437)
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitToCoordinatorThread$44(JobCoordinationService.java:971)
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitToCoordinatorThread$45(JobCoordinationService.java:990)
at com.hazelcast.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:67)
at com.hazelcast.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
```
Hazelcast Jet 3.2.1
|
defect
|
nodequerycacheconfigurator findquerycacheconfigfrommapconfig throws java util concurrentmodificationexception got concurrentmodificationexception while setting up query cache from multiple threads java util concurrentmodificationexception null at java util arraylist itr checkforcomodification arraylist java at java util arraylist itr next arraylist java at com hazelcast map impl querycache subscriber nodequerycacheconfigurator findquerycacheconfigfrommapconfig nodequerycacheconfigurator java at com hazelcast map impl querycache subscriber nodequerycacheconfigurator getornull nodequerycacheconfigurator java at com hazelcast map impl querycache subscriber abstractquerycacheendtoendconstructor getornullquerycacheconfig abstractquerycacheendtoendconstructor java at com hazelcast map impl querycache subscriber abstractquerycacheendtoendconstructor initquerycacheconfig abstractquerycacheendtoendconstructor java at com hazelcast map impl querycache subscriber abstractquerycacheendtoendconstructor createnew abstractquerycacheendtoendconstructor java at com hazelcast map impl querycache subscriber abstractquerycacheendtoendconstructor createnew abstractquerycacheendtoendconstructor java at com hazelcast map impl querycache subscriber querycacheendtoendprovider trycreatequerycache querycacheendtoendprovider java at com hazelcast map impl querycache subscriber querycacheendtoendprovider getorcreatequerycache querycacheendtoendprovider java at com hazelcast map impl proxy mapproxyimpl createquerycache mapproxyimpl java at com hazelcast map impl proxy mapproxyimpl getquerycacheinternal mapproxyimpl java at com hazelcast map impl proxy mapproxyimpl getquerycache mapproxyimpl java expected behavior it is expected that method getquerycache is thread safe to reproduce this is a race condition so reproducer is a bit complicated but the code inside nodequerycacheconfigurator is obviously not thread safe additional context this happened during jet job initialization at com hazelcast 
map impl proxy mapproxyimpl getquerycacheinternal mapproxyimpl java at com hazelcast map impl proxy mapproxyimpl getquerycache mapproxyimpl java at com hazelcast jet impl execution processortasklet init processortasklet java at java util arraylist foreach arraylist java at com hazelcast jet impl execution taskletexecutionservice lambda submitcooperativetasklets taskletexecutionservice java at com hazelcast jet impl util util dowithclassloader util java at com hazelcast jet impl execution taskletexecutionservice submitcooperativetasklets taskletexecutionservice java at com hazelcast jet impl masterjobcontext invokestartexecution masterjobcontext java at com hazelcast jet impl masterjobcontext lambda oninitstepcompleted masterjobcontext java at com hazelcast jet impl jobcoordinationservice lambda submittocoordinatorthread jobcoordinationservice java at com hazelcast jet impl jobcoordinationservice lambda submittocoordinatorthread jobcoordinationservice java at com hazelcast util executor completablefuturetask run completablefuturetask java at com hazelcast util executor cachedexecutorservicedelegate worker run cachedexecutorservicedelegate java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast util executor hazelcastmanagedthread run hazelcastmanagedthread java hazelcast jet
| 1
|
250,251
| 27,064,619,866
|
IssuesEvent
|
2023-02-13 22:53:01
|
DevOps-PM-PGDip-2022-2023/easybuggy4django
|
https://api.github.com/repos/DevOps-PM-PGDip-2022-2023/easybuggy4django
|
opened
|
CVE-2018-14040 (Medium) detected in bootstrap-3.3.7.min.js
|
security vulnerability
|
## CVE-2018-14040 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /templates/base.html</p>
<p>Path to vulnerable library: /templates/base.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevOps-PM-PGDip-2022-2023/easybuggy4django/commit/f07e7d7c4d846e11a1e7f5cbb4e3d868539da2bd">f07e7d7c4d846e11a1e7f5cbb4e3d868539da2bd</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-14040>CVE-2018-14040</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-14040 (Medium) detected in bootstrap-3.3.7.min.js - ## CVE-2018-14040 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /templates/base.html</p>
<p>Path to vulnerable library: /templates/base.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevOps-PM-PGDip-2022-2023/easybuggy4django/commit/f07e7d7c4d846e11a1e7f5cbb4e3d868539da2bd">f07e7d7c4d846e11a1e7f5cbb4e3d868539da2bd</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-14040>CVE-2018-14040</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file templates base html path to vulnerable library templates base html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap before xss is possible in the collapse data parent attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution org webjars npm bootstrap org webjars bootstrap step up your open source security game with mend
| 0
|
281,313
| 8,693,716,749
|
IssuesEvent
|
2018-12-04 10:31:25
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.uber.com - see bug description
|
browser-firefox-mobile priority-important
|
<!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.uber.com/a/carousel-vs-1?exp=80917_c1
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: cannot unsubscribe please block
**Steps to Reproduce**:
Unsubscribe me
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.uber.com - see bug description - <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.uber.com/a/carousel-vs-1?exp=80917_c1
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: cannot unsubscribe please block
**Steps to Reproduce**:
Unsubscribe me
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
see bug description url browser version firefox mobile operating system android tested another browser unknown problem type something else description cannot unsubscribe please block steps to reproduce unsubcribe me browser configuration none from with ❤️
| 0
|
65,120
| 19,100,713,180
|
IssuesEvent
|
2021-11-29 22:09:01
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: importing webdriver: «File "<frozen importlib._bootstrap_external>", line 1039, in get_data», getAttribute.js FileNotFoundError
|
R-awaiting answer C-py I-defect
|
### What happened?
Importing `webdriver` with `from selenium import webdriver` produces this error:
```python
File "<frozen importlib._bootstrap_external>", line 1039, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/getAttribute.js'
```
### How can we reproduce the issue?
```python
from selenium import webdriver
```
### Relevant log output
```shell
>>> from selenium import webdriver
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/__init__.py", line 18, in <module>
from .firefox.webdriver import WebDriver as Firefox # noqa
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/firefox/webdriver.py", line 25, in <module>
from selenium.webdriver.remote.webdriver import WebDriver as RemoteWebDriver
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 42, in <module>
from .switch_to import SwitchTo
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/switch_to.py", line 25, in <module>
from selenium.webdriver.remote.webelement import WebElement
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/webelement.py", line 37, in <module>
getAttribute_js = pkgutil.get_data(_pkg, 'getAttribute.js').decode('utf8')
File "/usr/lib64/python3.9/pkgutil.py", line 639, in get_data
return loader.get_data(resource_name)
File "<frozen importlib._bootstrap_external>", line 1039, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/getAttribute.js'
```
### Operating System
slackware64-current (Linux kernel 5.15.4)
### Selenium version
Python 3.9.9 with Selenium 4.1.0
### What are the browser(s) and version(s) where you see this issue?
Firefox 91.3.0esr
### What are the browser driver(s) and version(s) where you see this issue?
GeckoDriver 0.30.0
### Are you using Selenium Grid?
_No response_
|
1.0
|
[🐛 Bug]: importing webdriver: «File "<frozen importlib._bootstrap_external>", line 1039, in get_data», getAttribute.js FileNotFoundError - ### What happened?
Importing `webdriver` with `from selenium import webdriver` produces this error:
```python
File "<frozen importlib._bootstrap_external>", line 1039, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/getAttribute.js'
```
### How can we reproduce the issue?
```python
from selenium import webdriver
```
### Relevant log output
```shell
>>> from selenium import webdriver
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/__init__.py", line 18, in <module>
from .firefox.webdriver import WebDriver as Firefox # noqa
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/firefox/webdriver.py", line 25, in <module>
from selenium.webdriver.remote.webdriver import WebDriver as RemoteWebDriver
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 42, in <module>
from .switch_to import SwitchTo
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/switch_to.py", line 25, in <module>
from selenium.webdriver.remote.webelement import WebElement
File "/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/webelement.py", line 37, in <module>
getAttribute_js = pkgutil.get_data(_pkg, 'getAttribute.js').decode('utf8')
File "/usr/lib64/python3.9/pkgutil.py", line 639, in get_data
return loader.get_data(resource_name)
File "<frozen importlib._bootstrap_external>", line 1039, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/usr/lib64/python3.9/site-packages/selenium/webdriver/remote/getAttribute.js'
```
### Operating System
slackware64-current (Linux kernel 5.15.4)
### Selenium version
Python 3.9.9 with Selenium 4.1.0
### What are the browser(s) and version(s) where you see this issue?
Firefox 91.3.0esr
### What are the browser driver(s) and version(s) where you see this issue?
GeckoDriver 0.30.0
### Are you using Selenium Grid?
_No response_
|
defect
|
importing webdriver «file line in get data» getattribute js filenotfounderror what happened importing webdriver with from selenium import webdriver produces this error python file line in get data filenotfounderror no such file or directory usr site packages selenium webdriver remote getattribute js how can we reproduce the issue python from selenium import webdriver relevant log output shell from selenium import webdriver traceback most recent call last file line in file usr site packages selenium webdriver init py line in from firefox webdriver import webdriver as firefox noqa file usr site packages selenium webdriver firefox webdriver py line in from selenium webdriver remote webdriver import webdriver as remotewebdriver file usr site packages selenium webdriver remote webdriver py line in from switch to import switchto file usr site packages selenium webdriver remote switch to py line in from selenium webdriver remote webelement import webelement file usr site packages selenium webdriver remote webelement py line in getattribute js pkgutil get data pkg getattribute js decode file usr pkgutil py line in get data return loader get data resource name file line in get data filenotfounderror no such file or directory usr site packages selenium webdriver remote getattribute js operating system current linux kernel selenium version python with selenium what are the browser s and version s where you see this issue firefox what are the browser driver s and version s where you see this issue geckodriver are you using selenium grid no response
| 1
|
222,047
| 17,389,611,334
|
IssuesEvent
|
2021-08-02 04:49:05
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
RKE2 Provisioning: Cluster Options - CNI - Canal
|
[zube]: To Test area/rke2
|
`canal` as a CNI Option
```
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
name: #string
annotations:
{}
# key: string
labels:
{}
# key: string
namespace: fleet-default
spec:
cloudCredentialSecretName: do
kubernetesVersion: v1.20.5+rke2r1
rkeConfig:
# cni: string
```
|
1.0
|
RKE2 Provisioning: Cluster Options - CNI - Canal - `canal` as a CNI Option
```
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
name: #string
annotations:
{}
# key: string
labels:
{}
# key: string
namespace: fleet-default
spec:
cloudCredentialSecretName: do
kubernetesVersion: v1.20.5+rke2r1
rkeConfig:
# cni: string
```
|
non_defect
|
provisioning cluster options cni canal canal as a cni option apiversion provisioning cattle io kind cluster metadata name string annotations key string labels key string namespace fleet default spec cloudcredentialsecretname do kubernetesversion rkeconfig cni string
| 0
|
220,698
| 7,370,135,294
|
IssuesEvent
|
2018-03-13 07:05:36
|
T-Soft/unismev
|
https://api.github.com/repos/T-Soft/unismev
|
closed
|
Urgent namespaces debugging. Add ability to send custom request/response into a pipline.
|
HIGH PRIORITY feature ui
|
Add ability to manually send previously saved XML message from SMEV into a message processing pipeline. Think of how this could be laid out in UI.
|
1.0
|
Urgent namespaces debugging. Add ability to send custom request/response into a pipline. - Add ability to manually send previously saved XML message from SMEV into a message processing pipeline. Think of how this could be laid out in UI.
|
non_defect
|
urgent namespaces debugging add ability to send custom request response into a pipline add ability to manually send previously saved xml message from smev into a message processing pipeline think of how this could be laid out in ui
| 0
|