| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, len 19) | repo (string, len 5–112) | repo_url (string, len 34–141) | action (string, 3 classes) | title (string, len 1–757) | labels (string, len 4–664) | body (string, len 3–261k) | index (string, 10 classes) | text_combine (string, len 96–261k) | label (string, 2 classes) | text (string, len 96–232k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
274,307 | 8,559,599,189 | IssuesEvent | 2018-11-08 21:47:10 | OpenSRP/opensrp-server-web | https://api.github.com/repos/OpenSRP/opensrp-server-web | opened | Update Sync API to include new domain objects | Priority: High | Depending on how it's implemented, we may need to update the sync API endpoint and underlying logic to support the addition of new domain objects locations, campaigns and tasks.
- [ ] Review the new location, campaign and task entities that have been added to the OpenSRP server
- [ ] Develop a logic model on how to process each item
- [ ] Implement the change
- [ ] Test that sync functionality | 1.0 | Update Sync API to include new domain objects - Depending on how it's implemented, we may need to update the sync API endpoint and underlying logic to support the addition of new domain objects locations, campaigns and tasks.
- [ ] Review the new location, campaign and task entities that have been added to the OpenSRP server
- [ ] Develop a logic model on how to process each item
- [ ] Implement the change
- [ ] Test that sync functionality | non_defect | update sync api to include new domain objects depending on how it s implemented we may need to update the sync api endpoint and underlying logic to support the addition of new domain objects locations campaigns and tasks review the new location campaign and task entities that have been added to the opensrp server develop a logic model on how to process each item implement the change test that sync functionality | 0 |
50,555 | 13,187,577,424 | IssuesEvent | 2020-08-13 03:52:20 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | look at getting nvidia drivers on the bots for clsim testing (Trac #935) | Migrated from Trac defect infrastructure | clsim testing and coverage is woefully weak.
look at getting the nvidia drivers on the bots w/ crusty nvidia cards, or scrounging for some half height cards.
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/935
, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-04-17T00:10:22",
"description": "clsim testing and coverage is woefully weak.\n\nlook at getting the nvidia drivers on the bots w/ crusty nvidia cards, or scrounging for some half height cards.",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1429229422652487",
"component": "infrastructure",
"summary": "look at getting nvidia drivers on the bots for clsim testing",
"priority": "normal",
"keywords": "",
"time": "2015-04-14T20:08:36",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | look at getting nvidia drivers on the bots for clsim testing (Trac #935) - clsim testing and coverage is woefully weak.
look at getting the nvidia drivers on the bots w/ crusty nvidia cards, or scrounging for some half height cards.
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/935
, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-04-17T00:10:22",
"description": "clsim testing and coverage is woefully weak.\n\nlook at getting the nvidia drivers on the bots w/ crusty nvidia cards, or scrounging for some half height cards.",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1429229422652487",
"component": "infrastructure",
"summary": "look at getting nvidia drivers on the bots for clsim testing",
"priority": "normal",
"keywords": "",
"time": "2015-04-14T20:08:36",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | look at getting nvidia drivers on the bots for clsim testing trac clsim testing and coverage is woefully weak look at getting the nvidia drivers on the bots w crusty nvidia cards or scrounging for some half height cards migrated from reported by nega and owned by nega json status closed changetime description clsim testing and coverage is woefully weak n nlook at getting the nvidia drivers on the bots w crusty nvidia cards or scrounging for some half height cards reporter nega cc resolution fixed ts component infrastructure summary look at getting nvidia drivers on the bots for clsim testing priority normal keywords time milestone owner nega type defect | 1 |
69,264 | 22,304,682,366 | IssuesEvent | 2022-06-13 12:00:16 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Remove the "modern IDEs" section from the manual | T: Defect C: Documentation P: Medium R: Fixed E: All Editions | This section was never written, and it won't be, either. Let's just remove it:
https://www.jooq.org/doc/latest/manual/getting-started/tutorials/jooq-in-modern-ides/ | 1.0 | Remove the "modern IDEs" section from the manual - This section was never written, and it won't be, either. Let's just remove it:
https://www.jooq.org/doc/latest/manual/getting-started/tutorials/jooq-in-modern-ides/ | defect | remove the modern ides section from the manual this section was never written and it won t be either let s just remove it | 1 |
26,423 | 20,105,290,970 | IssuesEvent | 2022-02-07 09:54:22 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Problem building CairoContext and accessing netcore3.1 | bug interface/infrastructure | I have just pulled the latest version so I can run a user simulation, but I cannot build the project even after a clean solution and rebuild all.
1. Severity Code Description Project File Line Suppression State
Error CS8141 The tuple element names in the signature of method 'CairoContext.GetPixelExtents(string, bool, bool)' must match the tuple element names of interface method 'IDrawContext.GetPixelExtents(string, bool, bool)' (including on the return type). ApsimNG (netcoreapp3.1) C:\Data\Source\Repos\ApsimX\ApsimNG\Views\Sheet\CairoContext.cs 75 Active
Any advice to get back up and running? | 1.0 | Problem building CairoContext and accessing netcore3.1 - I have just pulled the latest version so I can run a user simulation, but I cannot build the project even after a clean solution and rebuild all.
1. Severity Code Description Project File Line Suppression State
Error CS8141 The tuple element names in the signature of method 'CairoContext.GetPixelExtents(string, bool, bool)' must match the tuple element names of interface method 'IDrawContext.GetPixelExtents(string, bool, bool)' (including on the return type). ApsimNG (netcoreapp3.1) C:\Data\Source\Repos\ApsimX\ApsimNG\Views\Sheet\CairoContext.cs 75 Active
Any advice to get back up and running? | non_defect | problem building cairocontext and accessing i have just pulled the latest version so i can run a user simulation but i cannot build the project even after a clean solution and rebuild all severity code description project file line suppression state error the tuple element names in the signature of method cairocontext getpixelextents string bool bool must match the tuple element names of interface method idrawcontext getpixelextents string bool bool including on the return type apsimng c data source repos apsimx apsimng views sheet cairocontext cs active any advice to get back up and running | 0 |
50,721 | 13,187,697,518 | IssuesEvent | 2020-08-13 04:16:20 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | [coinc-twc] example scrip does not work + missing documentation (Trac #1234) | Migrated from Trac combo reconstruction defect | 1) example testCoincTWC.py does *not* work.
Loading libflat-ntuple....................................FATAL (I3Tray): Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /home/tpalczewski/code-sprint/IceRec-V5/build/lib/libflat-ntuple.so: cannot open shared object file: No such file or directory (I3Tray.py:36 in load)
Traceback (most recent call last):
File "testCoincTWC.py", line 15, in <module>
load("libflat-ntuple")
File "/home/tpalczewski/code-sprint/IceRec-V5/build/lib/I3Tray.py", line 36, in load
% (sys.exc_info()[0], sys.exc_info()[1]), "I3Tray")
File "/home/tpalczewski/code-sprint/IceRec-V5/build/lib/icecube/icetray/i3logging.py", line 150, in log_fatal
raise RuntimeError(message + " (in " + tb[2] + ")")
RuntimeError: Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /home/tpalczewski/code-sprint/IceRec-V5/build/lib/libflat-ntuple.so: cannot open shared object file: No such file or directory (in load)
script tries to use the deprecated libflat-ntuple library.
In addition tests should be placed in the resources/test directory.
2) BTW the resources/scripts directory is empty.
3) Please convert RELEASE_NOTES to .rst format.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1234">https://code.icecube.wisc.edu/ticket/1234</a>, reported by tpalczewski and owned by sderidder</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "1) example testCoincTWC.py does *not* work. \n\nLoading libflat-ntuple....................................FATAL (I3Tray): Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /home/tpalczewski/code-sprint/IceRec-V5/build/lib/libflat-ntuple.so: cannot open shared object file: No such file or directory (I3Tray.py:36 in load)\nTraceback (most recent call last):\n File \"testCoincTWC.py\", line 15, in <module>\n load(\"libflat-ntuple\")\n File \"/home/tpalczewski/code-sprint/IceRec-V5/build/lib/I3Tray.py\", line 36, in load\n % (sys.exc_info()[0], sys.exc_info()[1]), \"I3Tray\")\n File \"/home/tpalczewski/code-sprint/IceRec-V5/build/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /home/tpalczewski/code-sprint/IceRec-V5/build/lib/libflat-ntuple.so: cannot open shared object file: No such file or directory (in load)\n\nscript tries to use the deprecated libflat-ntuple library. \n\nIn addition tests should be placed in the resources/test directory. \n\n2) BTW the resources/scripts directory is empty. \n\n3) Please convert RELEASE_NOTES to .rst format. \n",
"reporter": "tpalczewski",
"cc": "Sam.DeRidder@UGent.be",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[coinc-twc] example scrip does not work + missing documentation",
"priority": "blocker",
"keywords": "",
"time": "2015-08-20T06:51:30",
"milestone": "",
"owner": "sderidder",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [coinc-twc] example scrip does not work + missing documentation (Trac #1234) - 1) example testCoincTWC.py does *not* work.
Loading libflat-ntuple....................................FATAL (I3Tray): Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /home/tpalczewski/code-sprint/IceRec-V5/build/lib/libflat-ntuple.so: cannot open shared object file: No such file or directory (I3Tray.py:36 in load)
Traceback (most recent call last):
File "testCoincTWC.py", line 15, in <module>
load("libflat-ntuple")
File "/home/tpalczewski/code-sprint/IceRec-V5/build/lib/I3Tray.py", line 36, in load
% (sys.exc_info()[0], sys.exc_info()[1]), "I3Tray")
File "/home/tpalczewski/code-sprint/IceRec-V5/build/lib/icecube/icetray/i3logging.py", line 150, in log_fatal
raise RuntimeError(message + " (in " + tb[2] + ")")
RuntimeError: Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /home/tpalczewski/code-sprint/IceRec-V5/build/lib/libflat-ntuple.so: cannot open shared object file: No such file or directory (in load)
script tries to use the deprecated libflat-ntuple library.
In addition tests should be placed in the resources/test directory.
2) BTW the resources/scripts directory is empty.
3) Please convert RELEASE_NOTES to .rst format.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1234">https://code.icecube.wisc.edu/ticket/1234</a>, reported by tpalczewski and owned by sderidder</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "1) example testCoincTWC.py does *not* work. \n\nLoading libflat-ntuple....................................FATAL (I3Tray): Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /home/tpalczewski/code-sprint/IceRec-V5/build/lib/libflat-ntuple.so: cannot open shared object file: No such file or directory (I3Tray.py:36 in load)\nTraceback (most recent call last):\n File \"testCoincTWC.py\", line 15, in <module>\n load(\"libflat-ntuple\")\n File \"/home/tpalczewski/code-sprint/IceRec-V5/build/lib/I3Tray.py\", line 36, in load\n % (sys.exc_info()[0], sys.exc_info()[1]), \"I3Tray\")\n File \"/home/tpalczewski/code-sprint/IceRec-V5/build/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /home/tpalczewski/code-sprint/IceRec-V5/build/lib/libflat-ntuple.so: cannot open shared object file: No such file or directory (in load)\n\nscript tries to use the deprecated libflat-ntuple library. \n\nIn addition tests should be placed in the resources/test directory. \n\n2) BTW the resources/scripts directory is empty. \n\n3) Please convert RELEASE_NOTES to .rst format. \n",
"reporter": "tpalczewski",
"cc": "Sam.DeRidder@UGent.be",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[coinc-twc] example scrip does not work + missing documentation",
"priority": "blocker",
"keywords": "",
"time": "2015-08-20T06:51:30",
"milestone": "",
"owner": "sderidder",
"type": "defect"
}
```
</p>
</details>
| defect | example scrip does not work missing documentation trac example testcoinctwc py does not work loading libflat ntuple fatal failed to load library dlopen dynamic loading error home tpalczewski code sprint icerec build lib libflat ntuple so cannot open shared object file no such file or directory py in load traceback most recent call last file testcoinctwc py line in load libflat ntuple file home tpalczewski code sprint icerec build lib py line in load sys exc info sys exc info file home tpalczewski code sprint icerec build lib icecube icetray py line in log fatal raise runtimeerror message in tb runtimeerror failed to load library dlopen dynamic loading error home tpalczewski code sprint icerec build lib libflat ntuple so cannot open shared object file no such file or directory in load script tries to use the deprecated libflat ntuple library in addition tests should be placed in the resources test directory btw the resources scripts directory is empty please convert release notes to rst format migrated from json status closed changetime description example testcoinctwc py does not work n nloading libflat ntuple fatal failed to load library dlopen dynamic loading error home tpalczewski code sprint icerec build lib libflat ntuple so cannot open shared object file no such file or directory py in load ntraceback most recent call last n file testcoinctwc py line in n load libflat ntuple n file home tpalczewski code sprint icerec build lib py line in load n sys exc info sys exc info n file home tpalczewski code sprint icerec build lib icecube icetray py line in log fatal n raise runtimeerror message in tb nruntimeerror failed to load library dlopen dynamic loading error home tpalczewski code sprint icerec build lib libflat ntuple so cannot open shared object file no such file or directory in load n nscript tries to use the deprecated libflat ntuple library n nin addition tests should be placed in the resources test directory n btw the resources scripts directory is empty n please convert release notes to rst format n reporter tpalczewski cc sam deridder ugent be resolution fixed ts component combo reconstruction summary example scrip does not work missing documentation priority blocker keywords time milestone owner sderidder type defect | 1 |
437,469 | 30,600,502,967 | IssuesEvent | 2023-07-22 10:10:02 | Wintespe/ScanIt | https://api.github.com/repos/Wintespe/ScanIt | opened | CheckReady command added to the console | documentation |
**@CheckReady command added to the console**
_v2.1.5_
Some commands – like Restart 1 - cause Tasmota to restart. @CheckReady tests if the device is ready after this command.
**Parameter list**
```
Pre Wait time in ms before the test.
Retry Number of loops during the test.
Until Waiting time in ms in a loop.
Post Waiting time in ms after the test.
```
Example
@CheckReady Pre:100; Retry:10; Untll:200; Post:10
| 1.0 | CheckReady command added to the console -
**@CheckReady command added to the console**
_v2.1.5_
Some commands – like Restart 1 - cause Tasmota to restart. @CheckReady tests if the device is ready after this command.
**Parameter list**
```
Pre Wait time in ms before the test.
Retry Number of loops during the test.
Until Waiting time in ms in a loop.
Post Waiting time in ms after the test.
```
Example
@CheckReady Pre:100; Retry:10; Untll:200; Post:10
| non_defect | checkready command added to the console checkready command added to the console some commands – like restart cause tasmota to restart checkready tests if the device is ready after this command parameter list pre wait time in ms before the test retry number of loops during the test until waiting time in ms in a loop post waiting time in ms after the test example checkready pre retry untll post | 0 |
6,121 | 2,610,221,533 | IssuesEvent | 2015-02-26 19:10:17 | chrsmith/somefinders | https://api.github.com/repos/chrsmith/somefinders | opened | siemens gigaset 4000 classic инструкция | auto-migrated Priority-Medium Type-Defect | ```
'''Анвар Дементьев'''
Привет всем не подскажите где можно найти
.siemens gigaset 4000 classic инструкция. как то
выкладывали уже
'''Герт Савельев'''
Вот хороший сайт где можно скачать
http://bit.ly/1h3T93s
'''Василько Гущин'''
Спасибо вроде то но просит телефон вводить
'''Вильям Лихачёв'''
Неа все ок у меня ничего не списало
'''Андрон Большаков'''
Не это не влияет на баланс
Информация о файле: siemens gigaset 4000 classic
инструкция
Загружен: В этом месяце
Скачан раз: 175
Рейтинг: 799
Средняя скорость скачивания: 1237
Похожих файлов: 11
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 11:57 | 1.0 | siemens gigaset 4000 classic инструкция - ```
'''Анвар Дементьев'''
Привет всем не подскажите где можно найти
.siemens gigaset 4000 classic инструкция. как то
выкладывали уже
'''Герт Савельев'''
Вот хороший сайт где можно скачать
http://bit.ly/1h3T93s
'''Василько Гущин'''
Спасибо вроде то но просит телефон вводить
'''Вильям Лихачёв'''
Неа все ок у меня ничего не списало
'''Андрон Большаков'''
Не это не влияет на баланс
Информация о файле: siemens gigaset 4000 classic
инструкция
Загружен: В этом месяце
Скачан раз: 175
Рейтинг: 799
Средняя скорость скачивания: 1237
Похожих файлов: 11
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 11:57 | defect | siemens gigaset classic инструкция анвар дементьев привет всем не подскажите где можно найти siemens gigaset classic инструкция как то выкладывали уже герт савельев вот хороший сайт где можно скачать василько гущин спасибо вроде то но просит телефон вводить вильям лихачёв неа все ок у меня ничего не списало андрон большаков не это не влияет на баланс информация о файле siemens gigaset classic инструкция загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at | 1 |
30,148 | 6,033,371,252 | IssuesEvent | 2017-06-09 08:07:03 | moosetechnology/Moose | https://api.github.com/repos/moosetechnology/Moose | closed | Not all shapes support borderColor/borderWidth | Priority-Medium Type-Defect | Originally reported on Google Code with ID 1097
```
Some shapes are using the default stroke and/or the default stroke width, even if the
user sets a borderColor/borderWidth
|v ver circle box poly es |
v := RTView new.
ver := (1 to:5)collect:[:i | Point r:100 degrees:(360/5*i)].
circle := RTEllipse new size: 200; color: Color red; borderWidth:5;borderColor: Color
green.
box := RTBox new size: 200; color: Color red; borderWidth:5;borderColor: Color green.
poly := RTPolygon new size: 200; vertices:ver; color: Color red; borderWidth:5;borderColor:
Color green.
es := circle elementOn:'hello'.
v add: es.
es := box elementOn:'hello'.
v add: es.
es := poly elementOn:'hello'.
v add: es.
v @ RTDraggableView .
RTGridLayout on: v elements.
v
"all shapes should use the provided borderWidth (5) and borderColor (Green)"
moose build 3147
* Type-Defect
* Component-Roassal2
```
Reported by `nicolaihess` on 2014-11-13 11:40:38
<hr>
- _Attachment: shapes.png<br>_
| 1.0 | Not all shapes support borderColor/borderWidth - Originally reported on Google Code with ID 1097
```
Some shapes are using the default stroke and/or the default stroke width, even if the
user sets a borderColor/borderWidth
|v ver circle box poly es |
v := RTView new.
ver := (1 to:5)collect:[:i | Point r:100 degrees:(360/5*i)].
circle := RTEllipse new size: 200; color: Color red; borderWidth:5;borderColor: Color
green.
box := RTBox new size: 200; color: Color red; borderWidth:5;borderColor: Color green.
poly := RTPolygon new size: 200; vertices:ver; color: Color red; borderWidth:5;borderColor:
Color green.
es := circle elementOn:'hello'.
v add: es.
es := box elementOn:'hello'.
v add: es.
es := poly elementOn:'hello'.
v add: es.
v @ RTDraggableView .
RTGridLayout on: v elements.
v
"all shapes should use the provided borderWidth (5) and borderColor (Green)"
moose build 3147
* Type-Defect
* Component-Roassal2
```
Reported by `nicolaihess` on 2014-11-13 11:40:38
<hr>
- _Attachment: shapes.png<br>_
| defect | not all shapes support bordercolor borderwidth originally reported on google code with id some shapes are using the default stroke and or the default stroke width even if the user sets a bordercolor borderwidth v ver circle box poly es v rtview new ver to collect circle rtellipse new size color color red borderwidth bordercolor color green box rtbox new size color color red borderwidth bordercolor color green poly rtpolygon new size vertices ver color color red borderwidth bordercolor color green es circle elementon hello v add es es box elementon hello v add es es poly elementon hello v add es v rtdraggableview rtgridlayout on v elements v all shapes should use the provided borderwidth and bordercolor green moose build type defect component reported by nicolaihess on attachment shapes png | 1 |
38,612 | 8,948,475,582 | IssuesEvent | 2019-01-25 02:34:42 | svigerske/ipopt-donotuse | https://api.github.com/repos/svigerske/ipopt-donotuse | closed | test bug report | Ipopt defect | Issue created by migration from Trac.
Original creator: andreasw@us.ibm.com
Original creation time: 2006-05-02 18:11:07
Assignee: andreasw
Version: 3.0
I'm just testing if the forwarding of ticket changes to the mailing list works... | 1.0 | test bug report - Issue created by migration from Trac.
Original creator: andreasw@us.ibm.com
Original creation time: 2006-05-02 18:11:07
Assignee: andreasw
Version: 3.0
I'm just testing if the forwarding of ticket changes to the mailing list works... | defect | test bug report issue created by migration from trac original creator andreasw us ibm com original creation time assignee andreasw version i m just testing if the forwarding of ticket changes to the mailing list works | 1 |
570,211 | 17,021,256,246 | IssuesEvent | 2021-07-02 19:31:59 | HHS81/c182s | https://api.github.com/repos/HHS81/c182s | closed | Animation: nose gear scissor animation | 3D model animations (XML) bug middle priority | when I found out how the tracking animation is working.....
| 1.0 | Animation: nose gear scissor animation - when I found out how the tracking animation is working.....
| non_defect | animation nose gear scissor animation when i found out how the tracking animation is working | 0 |
35,219 | 7,659,745,381 | IssuesEvent | 2018-05-11 07:57:01 | PowerDNS/pdns | https://api.github.com/repos/PowerDNS/pdns | closed | dnsdist: setVerboseHealthChecks() is missing from the documentation | defect dnsdist docs | ### Short description
<!-- Explain in a few sentences what the issue/request is -->
`setVerboseHealthChecks()` is not documented, it would be nice to document it.
| 1.0 | dnsdist: setVerboseHealthChecks() is missing from the documentation - ### Short description
<!-- Explain in a few sentences what the issue/request is -->
`setVerboseHealthChecks()` is not documented, it would be nice to document it.
| defect | dnsdist setverbosehealthchecks is missing from the documentation short description setverbosehealthchecks is not documented it would be nice to document it | 1 |
29,206 | 5,592,675,754 | IssuesEvent | 2017-03-30 05:45:39 | CenturyLinkCloud/MDW | https://api.github.com/repos/CenturyLinkCloud/MDW | opened | Process version discrepancies | Defect | Workflow process versions are stored in the .proc JSON definition and also in .mdw/versions. Discrepancies can cause major headaches. Removing version from the .proc file would be a major change because it would affect our ability to handle in-flight processes. However, we should at least perform some checks during import/export and when initializing.
One problem this causes is when launching a process from the Workflow tab of MDWHub. If the versions don't match, then the backend REST service may return a 404 response. | 1.0 | Process version discrepancies - Workflow process versions are stored in the .proc JSON definition and also in .mdw/versions. Discrepancies can cause major headaches. Removing version from the .proc file would be a major change because it would affect our ability to handle in-flight processes. However, we should at least perform some checks during import/export and when initializing.
One problem this causes is when launching a process from the Workflow tab of MDWHub. If the versions don't match, then the backend REST service may return a 404 response. | defect | process version discrepancies workflow process versions are stored in the proc json definition and also in mdw versions discrepancies can cause major headaches removing version from the proc file would be a major change because it would affect our ability to handle in flight processes however we should at least perform some checks during import export and when initializing one problem this causes is when launching a process from the workflow tab of mdwhub if the versions don t match then the backend rest service may return a response | 1 |
30,582 | 4,210,361,304 | IssuesEvent | 2016-06-29 09:38:31 | redaxo/redaxo | https://api.github.com/repos/redaxo/redaxo | closed | Editieransicht Hintergrundfarbe | Design / CSS | 
Durch background-color: #9ca5b2 in .rex-main-frame hat die Editieransicht eine ganz andere Hintergrundfarbe als alle anderen Oberflächen in Redaxo sonst. Dort, und nur dort. Schmerzen da nur meine Augen oder empfindet das noch jemand als störend?
Ohne wäre es wie in jeder anderen Unterseite auch:

| 1.0 | Editieransicht Hintergrundfarbe - 
Durch background-color: #9ca5b2 in .rex-main-frame hat die Editieransicht eine ganz andere Hintergrundfarbe als alle anderen Oberflächen in Redaxo sonst. Dort, und nur dort. Schmerzen da nur meine Augen oder empfindet das noch jemand als störend?
Ohne wäre es wie in jeder anderen Unterseite auch:

| non_defect | editieransicht hintergrundfarbe durch background color in rex main frame hat die editieransicht eine ganz andere hintergrundfarbe als alle anderen oberflächen in redaxo sonst dort und nur dort schmerzen da nur meine augen oder empfindet das noch jemand als störend ohne wäre es wie in jeder anderen unterseite auch | 0 |
43,866 | 17,702,775,946 | IssuesEvent | 2021-08-25 01:33:42 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Cost doubt | container-service/svc triaged cxp product-question Pri1 | It's said that AKS is free, but what about the AKS load balancers? AFAIK, AKS uses a Standard LB by default. Is this still free?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 60a7a0a8-97e7-0fda-763c-1a9972f4e9bc
* Version Independent ID: 82b46441-43fc-fe48-97e2-0f3fca3d6eab
* Content: [Use a Public Load Balancer - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/load-balancer-standard)
* Content Source: [articles/aks/load-balancer-standard.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/load-balancer-standard.md)
* Service: **container-service**
* GitHub Login: @palma21
* Microsoft Alias: **jpalma** | 1.0 | Cost doubt - It's said that AKS is free, but what about the AKS load balancers? AFAIK, AKS uses a Standard LB by default. Is this still free?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 60a7a0a8-97e7-0fda-763c-1a9972f4e9bc
* Version Independent ID: 82b46441-43fc-fe48-97e2-0f3fca3d6eab
* Content: [Use a Public Load Balancer - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/load-balancer-standard)
* Content Source: [articles/aks/load-balancer-standard.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/load-balancer-standard.md)
* Service: **container-service**
* GitHub Login: @palma21
* Microsoft Alias: **jpalma** | non_defect | cost doubt it s said that aks is free but what about the aks load balancers afaik aks uses a standard lb by default is this still free document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login microsoft alias jpalma | 0 |
51,244 | 13,207,401,227 | IssuesEvent | 2020-08-14 22:57:53 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | Python on Mac defaulting to system (Trac #90) | IceTray Incomplete Migration Migrated from Trac defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/90">https://code.icecube.wisc.edu/projects/icecube/ticket/90</a>, reported by cgilsand owned by cgils</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-09-07T17:14:11",
"_ts": "1189185251000000",
"description": "15:47 < blaufuss@> blaufuss@teufel[~/..cework/offline/build](I3)% ./examples/resources/scripts/pass1.py\n\n15:47 < blaufuss@> Fatal Python error: Interpreter not initialized (version mismatch?)\n\n15:47 < blaufuss@> Abort trap\n\n15:47 < blaufuss@> blaufuss@teufel[~/..cework/offline/build](I3)% which python\n\n15:47 < blaufuss@> /Users/blaufuss/icework/i3tools/bin/python\n\n15:47 < blaufuss@> I reproduce Georges error on teufel\n\n15:49 < gekolu > OK so I'm not totally mad :-)\n\n15:49 < drool@> hrm\n\n15:50 < blaufuss@> otool -L libithon.so\n\n15:50 < blaufuss@> /System/Library/Frameworks/Python.framework/Versions/2.3/Python (compatibility version 2.3.0, current version 2.3.5)",
"reporter": "cgils",
"cc": "",
"resolution": "fixed",
"time": "2007-08-13T19:58:05",
"component": "IceTray",
"summary": "Python on Mac defaulting to system",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "cgils",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Python on Mac defaulting to system (Trac #90) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/90">https://code.icecube.wisc.edu/projects/icecube/ticket/90</a>, reported by cgilsand owned by cgils</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-09-07T17:14:11",
"_ts": "1189185251000000",
"description": "15:47 < blaufuss@> blaufuss@teufel[~/..cework/offline/build](I3)% ./examples/resources/scripts/pass1.py\n\n15:47 < blaufuss@> Fatal Python error: Interpreter not initialized (version mismatch?)\n\n15:47 < blaufuss@> Abort trap\n\n15:47 < blaufuss@> blaufuss@teufel[~/..cework/offline/build](I3)% which python\n\n15:47 < blaufuss@> /Users/blaufuss/icework/i3tools/bin/python\n\n15:47 < blaufuss@> I reproduce Georges error on teufel\n\n15:49 < gekolu > OK so I'm not totally mad :-)\n\n15:49 < drool@> hrm\n\n15:50 < blaufuss@> otool -L libithon.so\n\n15:50 < blaufuss@> /System/Library/Frameworks/Python.framework/Versions/2.3/Python (compatibility version 2.3.0, current version 2.3.5)",
"reporter": "cgils",
"cc": "",
"resolution": "fixed",
"time": "2007-08-13T19:58:05",
"component": "IceTray",
"summary": "Python on Mac defaulting to system",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "cgils",
"type": "defect"
}
```
</p>
</details>
| defect | python on mac defaulting to system trac migrated from json status closed changetime ts description blaufuss teufel examples resources scripts py n fatal python error interpreter not initialized version mismatch n abort trap n blaufuss teufel which python n users blaufuss icework bin python n i reproduce georges error on teufel n ok so i m not totally mad n hrm n otool l libithon so n system library frameworks python framework versions python compatibility version current version reporter cgils cc resolution fixed time component icetray summary python on mac defaulting to system priority normal keywords milestone owner cgils type defect | 1 |
42,756 | 11,256,140,351 | IssuesEvent | 2020-01-12 14:22:52 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | DataTable: DOM memory leak on widget.refresh() | defect | 1) Environment
- PrimeFaces version: 7.0+
- Does it work on the newest released PrimeFaces version? NO
- Does it work on the newest sources in GitHub? NO
- Application server + version: ALL
- Affected browsers: ALL
## 2) Expected behavior
When calling `widget.refresh(cfg)` there should not be multiple duplicate cloned THEAD objects.
## 3) Actual behavior
For scrollable datatable if you open F12 Console and execute `widget.refresh(cfg)` to refresh the datatable you will see multiple identical THEAD_CLONE DOM elements get created.
```xml
<thead class="ui-datatable-scrollable-theadclone" style="height: 0px;"></thead>
<thead class="ui-datatable-scrollable-theadclone" style="height: 0px;"></thead>
<thead class="ui-datatable-scrollable-theadclone" style="height: 0px;"></thead>
```
## 4) Steps to reproduce
1. Create a Scrollable Datatable.
2. In F12 Console type `widget.refresh(widget.cfg)`
3. Observe duplicate THEAD.
## 5) Sample XHTML
```xml
<p:dataTable id="tblScroll" widgetVar="widget" var="car" value="#{dtScrollView.cars2}" scrollable="true" scrollWidth="100%" resizeMode="expand" paginator="true" rows="2" sortMode="multiple">
<p:column headerText="Id" footerText="Id" sortBy="#{car.id}" filterBy="#{car.id}">
<h:outputText value="#{car.id}" />
</p:column>
<p:column headerText="Year" footerText="Year">
<h:outputText value="#{car.year}" />
</p:column>
<p:column headerText="Brand" footerText="Brand">
<h:outputText value="#{car.brand}" />
</p:column>
<p:column headerText="Color" footerText="Color">
<h:outputText value="#{car.color}" />
</p:column>
</p:dataTable>
```
## 6) Sample bean
From Showcase.
| 1.0 | DataTable: DOM memory leak on widget.refresh() - 1) Environment
- PrimeFaces version: 7.0+
- Does it work on the newest released PrimeFaces version? NO
- Does it work on the newest sources in GitHub? NO
- Application server + version: ALL
- Affected browsers: ALL
## 2) Expected behavior
When calling `widget.refresh(cfg)` there should not be multiple duplicate cloned THEAD objects.
## 3) Actual behavior
For scrollable datatable if you open F12 Console and execute `widget.refresh(cfg)` to refresh the datatable you will see multiple identical THEAD_CLONE DOM elements get created.
```xml
<thead class="ui-datatable-scrollable-theadclone" style="height: 0px;"></thead>
<thead class="ui-datatable-scrollable-theadclone" style="height: 0px;"></thead>
<thead class="ui-datatable-scrollable-theadclone" style="height: 0px;"></thead>
```
## 4) Steps to reproduce
1. Create a Scrollable Datatable.
2. In F12 Console type `widget.refresh(widget.cfg)`
3. Observe duplicate THEAD.
## 5) Sample XHTML
```xml
<p:dataTable id="tblScroll" widgetVar="widget" var="car" value="#{dtScrollView.cars2}" scrollable="true" scrollWidth="100%" resizeMode="expand" paginator="true" rows="2" sortMode="multiple">
<p:column headerText="Id" footerText="Id" sortBy="#{car.id}" filterBy="#{car.id}">
<h:outputText value="#{car.id}" />
</p:column>
<p:column headerText="Year" footerText="Year">
<h:outputText value="#{car.year}" />
</p:column>
<p:column headerText="Brand" footerText="Brand">
<h:outputText value="#{car.brand}" />
</p:column>
<p:column headerText="Color" footerText="Color">
<h:outputText value="#{car.color}" />
</p:column>
</p:dataTable>
```
## 6) Sample bean
From Showcase.
| defect | datatable dom memory leak on widget refresh environment primefaces version does it work on the newest released primefaces version no does it work on the newest sources in github no application server version all affected browsers all expected behavior when calling widget refresh cfg there should not be multiple duplicate cloned thead objects actual behavior for scrollable datatable if you open console and execute widget refresh cfg to refresh the datatable you will see multiple identical thead clone dom elements get created xml steps to reproduce create a scrollable datatable in console type widget refresh widget cfg observe duplicate thead sample xhtml xml sample bean from showcase | 1 |
8,133 | 2,611,453,798 | IssuesEvent | 2015-02-27 05:01:03 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | graphic bug any map doesnot show well | auto-migrated Priority-Medium Type-Defect | ```
when starting a game map shows well but when player move, the graphic does not
show well. the backgroud is limited to a small rectangle and the hedges seems
to stay in the air.
What is the expected output? What do you see instead?
stable graphic
What version of the product are you using? On what operating system?
0.9.14.1
Please provide any additional information below.
```
Original issue reported on code.google.com by `longb...@gmail.com` on 15 Dec 2010 at 8:31 | 1.0 | graphic bug any map doesnot show well - ```
when starting a game map shows well but when player move, the graphic does not
show well. the backgroud is limited to a small rectangle and the hedges seems
to stay in the air.
What is the expected output? What do you see instead?
stable graphic
What version of the product are you using? On what operating system?
0.9.14.1
Please provide any additional information below.
```
Original issue reported on code.google.com by `longb...@gmail.com` on 15 Dec 2010 at 8:31 | defect | graphic bug any map doesnot show well when starting a game map shows well but when player move the graphic does not show well the backgroud is limited to a small rectangle and the hedges seems to stay in the air what is the expected output what do you see instead stable graphic what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by longb gmail com on dec at | 1 |
68,268 | 21,573,455,381 | IssuesEvent | 2022-05-02 11:08:25 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Disable user location caching in LocationPicker | T-Defect O-Uncommon A-Location-Sharing | ### Steps to reproduce
1. Reset perms for element
2. Open the location sharing dialog, select 'own location'
3. Allow geolocation in the browser permissions popup
4. Close the location share dialog
5. Block geolocation permissions
6. Open the location sharing dialog again, select 'own location'
### Outcome
#### What did you expect?
'No permissions' error
#### What happened instead?
The map centered on my current location
Maplibre caches your last known position for performance reasons when `trackUserLocation` config is true.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Disable user location caching in LocationPicker - ### Steps to reproduce
1. Reset perms for element
2. Open the location sharing dialog, select 'own location'
3. Allow geolocation in the browser permissions popup
4. Close the location share dialog
5. Block geolocation permissions
6. Open the location sharing dialog again, select 'own location'
### Outcome
#### What did you expect?
'No permissions' error
#### What happened instead?
The map centered on my current location
Maplibre caches your last known position for performance reasons when `trackUserLocation` config is true.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No | defect | disable user location caching in locationpicker steps to reproduce reset perms for element open the location sharing dialog select own location allow geolocation in the browser permissions popup close the location share dialog block geolocation permissions open the location sharing dialog again select own location outcome what did you expect no permissions error what happened instead the map centered on my current location maplibre caches your last known position for performance reasons when trackuserlocation config is true operating system no response browser information no response url for webapp no response application version no response homeserver no response will you send logs no | 1 |
6,798 | 9,100,909,511 | IssuesEvent | 2019-02-20 09:47:00 | marcelm/cutadapt | https://api.github.com/repos/marcelm/cutadapt | opened | Remove --rest-file option | incompatible | The `--info-file` option should be used instead. Also document how to turn an info file into an equivalent rest file. | True | Remove --rest-file option - The `--info-file` option should be used instead. Also document how to turn an info file into an equivalent rest file. | non_defect | remove rest file option the info file option should be used instead also document how to turn an info file into an equivalent rest file | 0 |
41,601 | 6,922,852,548 | IssuesEvent | 2017-11-30 06:04:08 | KeevanDance/CFThrowdown-lite | https://api.github.com/repos/KeevanDance/CFThrowdown-lite | closed | QC Checklist | Documentation | **Admin**
- [ ] Clicking logout should logout the user and return to the public page removing access to write data
Workouts:
- [ ] Clicking workouts should show all workouts where default sort is RX (or scaled if RX is not an option)
- [ ] Clicking on a workout should take you to the details page of that workout displaying name, division, gender, score type, and steps
- [ ] Clicking on Add Workout should allow you to enter and submit a new workout. This should add the workout to all existing competitors of the same division and gender and to the division that was selected
- [ ] Clicking "submit workout scores" should iterate through all competitors apart of this workout and place them according to their score for both Timed and Weighted workouts
Competitors:
- [ ] Click on Competitors should show all competitors with options to filter by gender and division
- [ ] Clicking Add Competitor should allow you to create a new competitor. This should add all of the existing relevant workouts to the scores array for that competitor based on division and gender
- [ ] Clicking on a Competitor should show more information for that competitor
- [ ] Clicking on Edit Mode should flip the button text to Save Changes and allow you to edit every part of the competitor. Updating gender and/or division should alter the scores array to only have workouts that match the combination
Divisions:
- [ ] Clicking on Divisions should show all available divisions
- [ ] clicking on Add Division should give you the option to add a division
- [ ] clicking on a division should give you a button to then delete the division
- [ ] clicking the delete this division button should delete the division (including workoutIds under that division), workouts associated with this division, and update competitors removing scores and division for those that are associated with this division
**Public**
- [ ] Clicking Login here should take the user to a login page where they can enter admin credentials and be taken to an admin home page
Leaderboard:
Workouts:
- [ ] Clicking Workouts should display all of the workouts for the filter set (default is Men, RX)
Competitors:
- [ ] Clicking on Competitors should show all competitors allowing for the filtering by gender and division | 1.0 | QC Checklist - **Admin**
- [ ] Clicking logout should logout the user and return to the public page removing access to write data
Workouts:
- [ ] Clicking workouts should show all workouts where default sort is RX (or scaled if RX is not an option)
- [ ] Clicking on a workout should take you to the details page of that workout displaying name, division, gender, score type, and steps
- [ ] Clicking on Add Workout should allow you to enter and submit a new workout. This should add the workout to all existing competitors of the same division and gender and to the division that was selected
- [ ] Clicking "submit workout scores" should iterate through all competitors apart of this workout and place them according to their score for both Timed and Weighted workouts
Competitors:
- [ ] Click on Competitors should show all competitors with options to filter by gender and division
- [ ] Clicking Add Competitor should allow you to create a new competitor. This should add all of the existing relevant workouts to the scores array for that competitor based on division and gender
- [ ] Clicking on a Competitor should show more information for that competitor
- [ ] Clicking on Edit Mode should flip the button text to Save Changes and allow you to edit every part of the competitor. Updating gender and/or division should alter the scores array to only have workouts that match the combination
Divisions:
- [ ] Clicking on Divisions should show all available divisions
- [ ] clicking on Add Division should give you the option to add a division
- [ ] clicking on a division should give you a button to then delete the division
- [ ] clicking the delete this division button should delete the division (including workoutIds under that division), workouts associated with this division, and update competitors removing scores and division for those that are associated with this division
**Public**
- [ ] Clicking Login here should take the user to a login page where they can enter admin credentials and be taken to an admin home page
Leaderboard:
Workouts:
- [ ] Clicking Workouts should display all of the workouts for the filter set (default is Men, RX)
Competitors:
- [ ] Clicking on Competitors should show all competitors allowing for the filtering by gender and division | non_defect | qc checklist admin clicking logout should logout the user and return to the public page removing access to write data workouts clicking workouts should show all workouts where default sort is rx or scaled if rx is not an option clicking on a workout should take you to the details page of that workout displaying name division gender score type and steps clicking on add workout should allow you to enter and submit a new workout this should add the workout to all existing competitors of the same division and gender and to the division that was selected clicking submit workout scores should iterate through all competitors apart of this workout and place them according to their score for both timed and weighted workouts competitors click on competitors should show all competitors with options to filter by gender and division clicking add competitor should allow you to create a new competitor this should add all of the existing relevant workouts to the scores array for that competitor based on division and gender clicking on a competitor should show more information for that competitor clicking on edit mode should flip the button text to save changes and allow you to edit every part of the competitor updating gender and or division should alter the scores array to only have workouts that match the combination divisions clicking on divisions should show all available divisions clicking on add division should give you the option to add a division clicking on a division should give you a button to then delete the division clicking the delete this division button should delete the division including workoutids under that division workouts associated with this division and update competitors removing scores and division for those that are associated with this division public clicking login here should take the user to a login page where they can enter 
admin credentials and be taken to an admin home page leaderboard workouts clicking workouts should display all of the workouts for the filter set default is men rx competitors clicking on competitors should show all competitors allowing for the filtering by gender and division | 0 |
20,817 | 3,420,904,147 | IssuesEvent | 2015-12-08 16:35:55 | dkfans/keeperfx | https://api.github.com/repos/dkfans/keeperfx | closed | Imps never claim when gems are available (Stable) | Branch-Stable Priority-High Status-Fixed Type-Defect | Load the attached save and see Orcs break down a door. Notice Imps will not claim the path beyond it no matter how long you wait.
For players this is annoying, but for computer players this might be game breaking.

Save is from KeeperFX v0.4.6 r1737 patch, git 04d7924, dated 2015-11-05 18:59:28 | 1.0 | Imps never claim when gems are available (Stable) - Load the attached save and see Orcs break down a door. Notice Imps will not claim the path beyond it no matter how long you wait.
For players this is annoying, but for computer players this might be game breaking.

Save is from KeeperFX v0.4.6 r1737 patch, git 04d7924, dated 2015-11-05 18:59:28 | defect | imps never claim when gems are available stable load the attached save and see orcs break down a door notice imps will not claim the path beyond it no matter how long you wait for players this is annoying but for computer players this might be game breaking save is from keeperfx patch git dated | 1 |
313,081 | 9,556,782,403 | IssuesEvent | 2019-05-03 09:26:42 | OpenSourceEconomics/soepy | https://api.github.com/repos/OpenSourceEconomics/soepy | closed | PACKAGE_DIR path | pb package priority low size small | `PACKAGE_DIR` points one level higher than `soepy` and at the root of the repository. I would like us to not reference anything outside package itself there. So, instead of
`
PACKAGE_DIR = Path(__file__).parent.parent.absolute()
TEST_RESOURCES_DIR = PACKAGE_DIR / "soepy" / "test" / "resources"
`
we should use
`
PACKAGE_DIR = Path(__file__).parent.absolute()
TEST_RESOURCES_DIR = PACKAGE_DIR / "test" / "resources"
`
However, this requires to check how the tests that use the two paths are affected.
| 1.0 | PACKAGE_DIR path - `PACKAGE_DIR` points one level higher than `soepy` and at the root of the repository. I would like us to not reference anything outside package itself there. So, instead of
`
PACKAGE_DIR = Path(__file__).parent.parent.absolute()
TEST_RESOURCES_DIR = PACKAGE_DIR / "soepy" / "test" / "resources"
`
we should use
`
PACKAGE_DIR = Path(__file__).parent.absolute()
TEST_RESOURCES_DIR = PACKAGE_DIR / "test" / "resources"
`
However, this requires to check how the tests that use the two paths are affected.
| non_defect | package dir path package dir points one level higher than soepy and at the root of the repository i would like us to not reference anything outside package itself there so instead of package dir path file parent parent absolute test resources dir package dir soepy test resources we should use package dir path file parent absolute test resources dir package dir test resources however this requires to check how the tests that use the two paths are affected | 0 |
3,769 | 2,540,122,359 | IssuesEvent | 2015-01-27 19:38:30 | EFForg/privacybadgerchrome | https://api.github.com/repos/EFForg/privacybadgerchrome | closed | Enhancement: 1-click config fixes for common sites | bug High priority | ## Scenario
I'm on youtube, I've installed PB. It seems to work. I want to comment, click comment box. Popup opens then closes. Click it again, chrome blocks popup. Allow popups, click again. Popup opens & closes. Youtube appears to be trying to call to google+ or whatever for commenting. Check privacy badger:
* apis.google.com :yellow_heart:
* gg.google.com :red_circle:
* plus.google.com :yellow_heart:
* www.google.com :yellow_heart:
Ok.... what do I do now? "I just want to comment."
## Probable outcome for many people
Disable/remove pb which is "breaking" the site, or just disable blocking willy nilly (which sort of defeats the purpose of the tool).
## Proposal
It would be nice if the pb icon blinked or something in this scenario so I could click it & be presented with "Allow youtube commenting." These configurations would have to be tailor-made for each site/service, but I believe that a 1-click-fix on e.g. the top 10 services could alleviate 80+% of user pain around this. | 1.0 | Enhancement: 1-click config fixes for common sites - ## Scenario
I'm on youtube, I've installed PB. It seems to work. I want to comment, click comment box. Popup opens then closes. Click it again, chrome blocks popup. Allow popups, click again. Popup opens & closes. Youtube appears to be trying to call to google+ or whatever for commenting. Check privacy badger:
* apis.google.com :yellow_heart:
* gg.google.com :red_circle:
* plus.google.com :yellow_heart:
* www.google.com :yellow_heart:
Ok.... what do I do now? "I just want to comment."
## Probable outcome for many people
Disable/remove pb which is "breaking" the site, or just disable blocking willy nilly (which sort of defeats the purpose of the tool).
## Proposal
It would be nice if the pb icon blinked or something in this scenario so I could click it & be presented with "Allow youtube commenting." These configurations would have to be tailor-made for each site/service, but I believe that a 1-click-fix on e.g. the top 10 services could alleviate 80+% of user pain around this. | non_defect | enhancement click config fixes for common sites scenario i m on youtube i ve installed pb it seems to work i want to comment click comment box popup opens then closes click it again chrome blocks popup allow popups click again popup opens closes youtube appears to be trying to call to google or whatever for commenting check privacy badger apis google com yellow heart gg google com red circle plus google com yellow heart yellow heart ok what do i do now i just want to comment probable outcome for many people disable remove pb which is breaking the site or just disable blocking willy nilly which sort of defeats the purpose of the tool proposal it would be nice if the pb icon blinked or something in this scenario so i could click it be presented with allow youtube commenting these configurations would have to be tailor made for each site service but i believe that a click fix on e g the top services could alleviate of user pain around this | 0 |
9,909 | 2,616,009,823 | IssuesEvent | 2015-03-02 00:53:24 | jasonhall/bwapi | https://api.github.com/repos/jasonhall/bwapi | closed | CoolDown shows zero for several types of Units | auto-migrated Component-Logic Priority-Medium Type-Defect Usability | ```
What steps will reproduce the problem?
1. unit->getAirWeaponCooldown() is always 0
2. unit->getGroundWeaponCooldown() is always 0
3. only happens for tank and goliath
What is the expected output? What do you see instead?
expect to find some non zero value when the unit is attacking.
What version of the product are you using? On what operating system?
BWAPI 2.6.1
Please provide any additional information below.
this happens only on tank and goliath (among the units I tested). nothing
is wrong for marine, ghost, ultralisk...
also, unit->isStartingAttack() always return false for tank and goliath.
```
Original issue reported on code.google.com by `hero...@gmail.com` on 22 Jan 2010 at 7:55 | 1.0 | CoolDown shows zero for several types of Units - ```
What steps will reproduce the problem?
1. unit->getAirWeaponCooldown() is always 0
2. unit->getGroundWeaponCooldown() is always 0
3. only happens for tank and goliath
What is the expected output? What do you see instead?
expect to find some non zero value when the unit is attacking.
What version of the product are you using? On what operating system?
BWAPI 2.6.1
Please provide any additional information below.
this happens only on tank and goliath (among the units I tested). nothing
is wrong for marine, ghost, ultralisk...
also, unit->isStartingAttack() always return false for tank and goliath.
```
Original issue reported on code.google.com by `hero...@gmail.com` on 22 Jan 2010 at 7:55 | defect | cooldown shows zero for several types of units what steps will reproduce the problem unit getairweaponcooldown is always unit getgroundweaponcooldown is always only happens for tank and goliath what is the expected output what do you see instead expect to find some non zero value when the unit is attacking what version of the product are you using on what operating system bwapi please provide any additional information below this happens only on tank and goliath among the units i tested nothing is wrong for marine ghost ultralisk also unit isstartingattack always return false for tank and goliath original issue reported on code google com by hero gmail com on jan at | 1 |
17,071 | 2,974,593,141 | IssuesEvent | 2015-07-15 02:10:23 | Reimashi/jotai | https://api.github.com/repos/Reimashi/jotai | closed | gadget loses info on shutdown/startup in WinXP | auto-migrated Priority-Medium Type-Defect wontfix | ```
Beta .4 works great with WinXP sp3, MSI 790GX-G65 mb & AMD Phenom II processor,
4g memory.
Only problem I've encountered is that the desktop gadget loses the applied info
(fan speed, cpu temp, etc) at system shutdown/startup and it has to be
reapplied. Have I missed something?
Thanks for a great utility.
```
Original issue reported on code.google.com by `ethe...@gmail.com` on 4 Apr 2012 at 11:22 | 1.0 | gadget loses info on shutdown/startup in WinXP - ```
Beta .4 works great with WinXP sp3, MSI 790GX-G65 mb & AMD Phenom II processor,
4g memory.
Only problem I've encountered is that the desktop gadget loses the applied info
(fan speed, cpu temp, etc) at system shutdown/startup and it has to be
reapplied. Have I missed something?
Thanks for a great utility.
```
Original issue reported on code.google.com by `ethe...@gmail.com` on 4 Apr 2012 at 11:22 | defect | gadget loses info on shutdown startup in winxp beta works great with winxp msi mb amd phenom ii processor memory only problem i ve encountered is that the desktop gadget loses the applied info fan speed cpu temp etc at system shutdown startup and it has to be reapplied have i missed something thanks for a great utility original issue reported on code google com by ethe gmail com on apr at | 1 |
120,689 | 17,644,257,790 | IssuesEvent | 2021-08-20 02:04:13 | DavidSpek/kale | https://api.github.com/repos/DavidSpek/kale | opened | CVE-2021-29518 (High) detected in tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl, tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl | security vulnerability | ## CVE-2021-29518 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b>, <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>
<details><summary><b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>Path to vulnerable library: kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</details>
<details><summary><b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: kale/examples/taxi-cab-classification/requirements.txt</p>
<p>Path to vulnerable library: kale/examples/taxi-cab-classification/requirements.txt</p>
<p>
Dependency Hierarchy:
- tfx_bsl-0.21.4-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library)
- :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. In eager mode (default in TF 2.0 and later), session operations are invalid. However, users could still call the raw ops associated with them and trigger a null pointer dereference. The implementation(https://github.com/tensorflow/tensorflow/blob/eebb96c2830d48597d055d247c0e9aebaea94cd5/tensorflow/core/kernels/session_ops.cc#L104) dereferences the session state pointer without checking if it is valid. Thus, in eager mode, `ctx->session_state()` is nullptr and the call of the member function is undefined behavior. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29518>CVE-2021-29518</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-62gx-355r-9fhg">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-62gx-355r-9fhg</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-29518 (High) detected in tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl, tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2021-29518 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b>, <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>
<details><summary><b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>Path to vulnerable library: kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</details>
<details><summary><b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: kale/examples/taxi-cab-classification/requirements.txt</p>
<p>Path to vulnerable library: kale/examples/taxi-cab-classification/requirements.txt</p>
<p>
Dependency Hierarchy:
- tfx_bsl-0.21.4-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library)
- :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. In eager mode (default in TF 2.0 and later), session operations are invalid. However, users could still call the raw ops associated with them and trigger a null pointer dereference. The implementation(https://github.com/tensorflow/tensorflow/blob/eebb96c2830d48597d055d247c0e9aebaea94cd5/tensorflow/core/kernels/session_ops.cc#L104) dereferences the session state pointer without checking if it is valid. Thus, in eager mode, `ctx->session_state()` is nullptr and the call of the member function is undefined behavior. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29518>CVE-2021-29518</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-62gx-355r-9fhg">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-62gx-355r-9fhg</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in tensorflow whl tensorflow whl cve high severity vulnerability vulnerable libraries tensorflow whl tensorflow whl tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file kale examples dog breed classification requirements requirements txt path to vulnerable library kale examples dog breed classification requirements requirements txt dependency hierarchy x tensorflow whl vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file kale examples taxi cab classification requirements txt path to vulnerable library kale examples taxi cab classification requirements txt dependency hierarchy tfx bsl whl root library x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an end to end open source platform for machine learning in eager mode default in tf and later session operations are invalid however users could still call the raw ops associated with them and trigger a null pointer dereference the implementation dereferences the session state pointer without checking if it is valid thus in eager mode ctx session state is nullptr and the call of the member function is undefined behavior the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type 
upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource | 0 |
181,451 | 14,020,733,713 | IssuesEvent | 2020-10-29 20:08:24 | lbowes/keep-to-calendar | https://api.github.com/repos/lbowes/keep-to-calendar | opened | test/pytest-config | incomplete type: test | ### Description
Investigate specifying `pytest` options in a config file
### TODO
todo | 1.0 | test/pytest-config - ### Description
Investigate specifying `pytest` options in a config file
### TODO
todo | non_defect | test pytest config description investigate specifying pytest options in a config file todo todo | 0 |
84,636 | 16,527,687,636 | IssuesEvent | 2021-05-26 22:50:20 | DIT112-V21/group-17 | https://api.github.com/repos/DIT112-V21/group-17 | opened | [PROBLEM] confirmPickupMessage(mailman,receiver) receiver object | Android Bug HighPriority Java code To improve | Description
The mailman object is not found properly: the method Controller.confirmPickupMessage(mailman,receiver) can't find the mailman and, if used in another class, it crashes.
related issue: #48 #65 | 1.0 | [PROBLEM] confirmPickupMessage(mailman,receiver) receiver object - Description
The mailman object is not found properly: the method Controller.confirmPickupMessage(mailman,receiver) can't find the mailman and, if used in another class, it crashes.
related issue: #48 #65 | non_defect | confirmpickupmessage mailman receiver receiver object description mailman object not found properly the confirmpickupmessage mailman receiver the method controller confirmpickupmessage mailman receiver can t find the mailman if used in another class it crashes related issue | 0 |
57,834 | 16,094,839,224 | IssuesEvent | 2021-04-26 21:30:37 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | Discovery issue for VAMC "Links must have discernible text" Defect-1 items | 508-defect-1 508/Accessibility frontend frontend-vamc stretch-goal vsa vsa-facilities | ## Issue Description
"Links must have discernible text" defects were discovered on several VAMC pages. We need to look at them individually to determine if they all have the same root cause and fix.
- https://www.va.gov/black-hills-health-care/policies/
- https://www.va.gov/minneapolis-health-care/programs/
- https://www.va.gov/northern-california-health-care/stories/
<details>
<summary> Policies page </summary>

</details>
<details>
<summary> Programs page </summary>

</details>
<details>
<summary>Stories page has two issues </summary>


</details>
---
## Tasks
- [ ] Dig into code to find source of error (content vs template, etc)
- [ ] For issues which can be addressed from front end, document tasks to resolve
## Acceptance Criteria
- [ ] Defects on the pages listed above have been investigated and a path for resolution is determined.
---
| 1.0 | Discovery issue for VAMC "Links must have discernible text" Defect-1 items - ## Issue Description
"Links must have discernible text" defects were discovered on several VAMC pages. We need to look at them individually to determine if they all have the same root cause and fix.
- https://www.va.gov/black-hills-health-care/policies/
- https://www.va.gov/minneapolis-health-care/programs/
- https://www.va.gov/northern-california-health-care/stories/
<details>
<summary> Policies page </summary>

</details>
<details>
<summary> Programs page </summary>

</details>
<details>
<summary>Stories page has two issues </summary>


</details>
---
## Tasks
- [ ] Dig into code to find source of error (content vs template, etc)
- [ ] For issues which can be addressed from front end, document tasks to resolve
## Acceptance Criteria
- [ ] Defects on the pages listed above have been investigated and a path for resolution is determined.
---
| defect | discovery issue for vamc links must have discernible text defect items issue description links must have discernible text defects were discovered on several vamc pages we need to look at them individually to determine if they all have the same root cause and fix policies page programs page stories page has two issues tasks dig into code to find source of error content vs template etc for issues which can be addressed from front end document tasks to resolve acceptance criteria defects on the pages listed above have been investigated and a path for resolution is determined | 1 |
228,637 | 25,236,121,796 | IssuesEvent | 2022-11-15 01:07:24 | eng-dibo/dibo | https://api.github.com/repos/eng-dibo/dibo | closed | ngx-cms-file:projects/ngx-cms.tgz: 1 vulnerabilities (highest severity is: 7.5) - autoclosed | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ngx-cms-file:projects/ngx-cms.tgz</b></p></summary>
<p></p>
<p>Path to vulnerable library: /node_modules/dicer/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/eng-dibo/dibo/commit/4477bf0a8efee663deac6a071c68de3a2874a056">4477bf0a8efee663deac6a071c68de3a2874a056</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (ngx-cms-file:projects/ngx-cms.tgz version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-24434](https://www.mend.io/vulnerability-database/CVE-2022-24434) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | dicer-0.2.5.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-24434</summary>
### Vulnerable Library - <b>dicer-0.2.5.tgz</b></p>
<p>A very fast streaming multipart parser for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/dicer/-/dicer-0.2.5.tgz">https://registry.npmjs.org/dicer/-/dicer-0.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/dicer/package.json</p>
<p>
Dependency Hierarchy:
- ngx-cms-file:projects/ngx-cms.tgz (Root Library)
- multer-1.4.4.tgz
- busboy-0.2.14.tgz
- :x: **dicer-0.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/eng-dibo/dibo/commit/4477bf0a8efee663deac6a071c68de3a2874a056">4477bf0a8efee663deac6a071c68de3a2874a056</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
This affects all versions of package dicer. A malicious attacker can send a modified form to the server and crash the nodejs service. An attacker could send the payload again and again so that the service continuously crashes.
<p>Publish Date: 2022-05-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24434>CVE-2022-24434</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | True | ngx-cms-file:projects/ngx-cms.tgz: 1 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ngx-cms-file:projects/ngx-cms.tgz</b></p></summary>
<p></p>
<p>Path to vulnerable library: /node_modules/dicer/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/eng-dibo/dibo/commit/4477bf0a8efee663deac6a071c68de3a2874a056">4477bf0a8efee663deac6a071c68de3a2874a056</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (ngx-cms-file:projects/ngx-cms.tgz version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-24434](https://www.mend.io/vulnerability-database/CVE-2022-24434) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | dicer-0.2.5.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-24434</summary>
### Vulnerable Library - <b>dicer-0.2.5.tgz</b></p>
<p>A very fast streaming multipart parser for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/dicer/-/dicer-0.2.5.tgz">https://registry.npmjs.org/dicer/-/dicer-0.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/dicer/package.json</p>
<p>
Dependency Hierarchy:
- ngx-cms-file:projects/ngx-cms.tgz (Root Library)
- multer-1.4.4.tgz
- busboy-0.2.14.tgz
- :x: **dicer-0.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/eng-dibo/dibo/commit/4477bf0a8efee663deac6a071c68de3a2874a056">4477bf0a8efee663deac6a071c68de3a2874a056</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
This affects all versions of package dicer. A malicious attacker can send a modified form to the server and crash the nodejs service. An attacker could send the payload again and again so that the service continuously crashes.
<p>Publish Date: 2022-05-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24434>CVE-2022-24434</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | non_defect | ngx cms file projects ngx cms tgz vulnerabilities highest severity is autoclosed vulnerable library ngx cms file projects ngx cms tgz path to vulnerable library node modules dicer package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in ngx cms file projects ngx cms tgz version remediation available high dicer tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library dicer tgz a very fast streaming multipart parser for node js library home page a href path to dependency file package json path to vulnerable library node modules dicer package json dependency hierarchy ngx cms file projects ngx cms tgz root library multer tgz busboy tgz x dicer tgz vulnerable library found in head commit a href found in base branch main vulnerability details this affects all versions of package dicer a malicious attacker can send a modified form to server and crash the nodejs service an attacker could sent the payload again and again so that the service continuously crashes publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend | 0 |
3,066 | 2,607,982,448 | IssuesEvent | 2015-02-26 00:50:10 | chrsmithdemos/zen-coding | https://api.github.com/repos/chrsmithdemos/zen-coding | closed | Spaces instead of tabs indentation in Notepad++ | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Set Notepad++ to use spaces instead of tabs from
Settings->Preferences->Language Menu/Tab Settings
2. Write div#test>li*5 and expand abbreviation.
What is the expected output? What do you see instead?
<li> is indented using tabs instead of spaces.
What version of the product are you using? On what operating system?
Notepad++ v5.8.7 UNICODE
Windows XP SP3 32bit.
```
-----
Original issue reported on code.google.com by `teh.y...@gmail.com` on 22 Feb 2011 at 8:53 | 1.0 | Spaces instead of tabs indentation in Notepad++ - ```
What steps will reproduce the problem?
1. Set Notepad++ to use spaces instead of tabs from
Settings->Preferences->Language Menu/Tab Settings
2. Write div#test>li*5 and expand abbreviation.
What is the expected output? What do you see instead?
<li> is indented using tabs instead of spaces.
What version of the product are you using? On what operating system?
Notepad++ v5.8.7 UNICODE
Windows XP SP3 32bit.
```
-----
Original issue reported on code.google.com by `teh.y...@gmail.com` on 22 Feb 2011 at 8:53 | defect | spaces instead of tabs indentation in notepad what steps will reproduce the problem set notepad to use spaces instead of tabs from settings preferences language menu tab settings write div test li and expand abbreviation what is the expected output what do you see instead is indented using tabs instead of spaces what version of the product are you using on what operating system notepad unicode windows xp original issue reported on code google com by teh y gmail com on feb at | 1 |
437,768 | 30,608,837,763 | IssuesEvent | 2023-07-23 11:02:41 | Noorin99/HomeTaste | https://api.github.com/repos/Noorin99/HomeTaste | closed | Reset Password | documentation feature testing | - Reset password at login if user forgot password or password is not correct
- Unit tests and documentation should be included | 1.0 | Reset Password - - Reset password at login if user forgot password or password is not correct
- Unit tests and documentation should be included | non_defect | reset password reset password at login if user forgot password or password is not correct unit tests and documentation should be included | 0 |
27,240 | 5,325,715,044 | IssuesEvent | 2017-02-15 00:45:40 | algolia/instantsearch.js | https://api.github.com/repos/algolia/instantsearch.js | closed | Add widgets Algolia requirements | API documentation Website ◾ react-instantsearch | Every widget and connector should be explicit on the kind of Algolia requirement needed.
For example to use a RefinementList, the underlying attributeName should be declared as an attribute for faceting. | 1.0 | Add widgets Algolia requirements - Every widget and connector should be explicit on the kind of Algolia requirement needed.
For example to use a RefinementList, the underlying attributeName should be declared as an attribute for faceting. | non_defect | add widgets algolia requirements every widget and connector should be explicit on the kind of algolia requirement needed for example to use a refinementlist the underlying attributename should be declared as an attribute for faceting | 0 |
174,662 | 6,542,107,662 | IssuesEvent | 2017-09-02 00:42:05 | elementary/switchboard-plug-pantheon-shell | https://api.github.com/repos/elementary/switchboard-plug-pantheon-shell | closed | Wallpaper chooser does not remember the previous state | Priority: Low | If I change the source of the wallpapers to "Pictures", choose a picture from there, close the Switchboard, and reopen it, the state is not remembered. The original wallpapers are shown again, but actually a custom wallpaper is selected. This plug should know that and show the pictures inside the Pictures folder instead.
Launchpad Details: [#LP1504906](https://bugs.launchpad.net/bugs/1504906) Kristjan Vool - 2015-10-11 08:32:07 +0000
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/43027902-wallpaper-chooser-does-not-remember-the-previous-state?utm_campaign=plugin&utm_content=tracker%2F57462545&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F57462545&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | Wallpaper chooser does not remember the previous state - If i change the source of the wallpapers to "Pictures", choose a picture from there, close the Switchboard, reopen it, then the state is not remembered. The original wallpapers are shown again, but actually a custom wallpaper is selected. This plug should know that and show the pictures inside the Pictures folder instead.
Launchpad Details: [#LP1504906](https://bugs.launchpad.net/bugs/1504906) Kristjan Vool - 2015-10-11 08:32:07 +0000
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/43027902-wallpaper-chooser-does-not-remember-the-previous-state?utm_campaign=plugin&utm_content=tracker%2F57462545&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F57462545&utm_medium=issues&utm_source=github).
</bountysource-plugin> | non_defect | wallpaper chooser does not remember the previous state if i change the source of the wallpapers to pictures choose a picture from there close the switchboard reopen it then the state is not remembered the original wallpapers are shown again but actually a custom wallpaper is selected this plug should know that and show the pictures inside the pictures folder instead launchpad details kristjan vool want to back this issue we accept bounties via | 0 |
62,007 | 17,023,830,855 | IssuesEvent | 2021-07-03 04:04:26 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | OpenStreetmap Trac Icon on this webpage should link to https://trac.openstreetmap.org | Component: admin Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 12.00pm, Sunday, 14th October 2012]**
Clicking the icon (top left) links to http://www.openstreetmap.org/
I think it should link to https://trac.openstreetmap.org | 1.0 | OpenStreetmap Trac Icon on this webpage should link to https://trac.openstreetmap.org - **[Submitted to the original trac issue database at 12.00pm, Sunday, 14th October 2012]**
Clicking the icon (top left) links to http://www.openstreetmap.org/
I think it should link to https://trac.openstreetmap.org | defect | openstreetmap trac icon on this webpage should link to clicking at the icon top left refers to i think it should link to | 1 |
66,262 | 20,106,863,987 | IssuesEvent | 2022-02-07 11:22:39 | microsoft/TypeScript | https://api.github.com/repos/microsoft/TypeScript | closed | Discriminated union shows inconsistent error messages | Not a Defect | # Bug Report
### 🔎 Search Terms
discriminated union
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about discriminated unions.
### ⏯ Playground Link
[Playground link with relevant code](https://www.typescriptlang.org/play?ts=4.6.0-dev.20220116#code/C4TwDgpgBAEg9gJwJYC84DtgEMA2BhLBAEwAUE4wBnKAXigG8AoKFqRJCTLYJDAfgBcUAOQALdmi45hzVpVQRBIygFtc0qAB8RKiESQBXFTIC+jUJCgA1CAh4BjXAWJkK1Ok1ZtknbD35CwgButg64Ml7yKIqBuvpGwloiOIQA5hCm5uDQzqTkVLSwEhjY+IR5bkk2dkiOZS75lIyMAPQtUADqiADWTfYYlMBQITV1QrmuBR6yLOy+3LzogSNh0gA0M1BREIGq6pltUAAicBDU6HBDAO49jP3og1DiyJKl4+WT7gybc1z+SyJnqgSuENpEFIEUgh0pkgA)
### 💻 Code
```ts
type HorizontalCardProps = {
orientation?: 'horizontal'
size?: 'small' | 'medium'
}
type VerticalCardProps = {
orientation?: 'vertical'
size?: 'medium' | 'large'
}
type CardProps = HorizontalCardProps | VerticalCardProps
// Works
const vertical: CardProps = {
orientation: 'vertical',
size: 'small'
}
// Does not work
const horizontal: CardProps = {
orientation: 'horizontal',
size: 'large'
}
```
### 🙁 Actual behavior
**Working**: Using a `vertical` orientation, the linter properly shows that the `size` property's value is invalid.

**Not working**: Using a `horizontal` orientation, the linter marks `orientation` as invalid instead of `size`.

### 🙂 Expected behavior
Consistency in the error messages. The ideal scenario would be for the non-working example to show _`Type "large" is not assignable to type "small" | "medium" | undefined.`_, as the property `orientation` with the value `horizontal` matches the `HorizontalCardProps` type.
| 1.0 | Discriminated union shows inconsistent error messages - # Bug Report
### 🔎 Search Terms
discriminated union
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about discriminated unions.
### ⏯ Playground Link
[Playground link with relevant code](https://www.typescriptlang.org/play?ts=4.6.0-dev.20220116#code/C4TwDgpgBAEg9gJwJYC84DtgEMA2BhLBAEwAUE4wBnKAXigG8AoKFqRJCTLYJDAfgBcUAOQALdmi45hzVpVQRBIygFtc0qAB8RKiESQBXFTIC+jUJCgA1CAh4BjXAWJkK1Ok1ZtknbD35CwgButg64Ml7yKIqBuvpGwloiOIQA5hCm5uDQzqTkVLSwEhjY+IR5bkk2dkiOZS75lIyMAPQtUADqiADWTfYYlMBQITV1QrmuBR6yLOy+3LzogSNh0gA0M1BREIGq6pltUAAicBDU6HBDAO49jP3og1DiyJKl4+WT7gybc1z+SyJnqgSuENpEFIEUgh0pkgA)
### 💻 Code
```ts
type HorizontalCardProps = {
orientation?: 'horizontal'
size?: 'small' | 'medium'
}
type VerticalCardProps = {
orientation?: 'vertical'
size?: 'medium' | 'large'
}
type CardProps = HorizontalCardProps | VerticalCardProps
// Works
const vertical: CardProps = {
orientation: 'vertical',
size: 'small'
}
// Does not work
const horizontal: CardProps = {
orientation: 'horizontal',
size: 'large'
}
```
### 🙁 Actual behavior
**Working**: Using a `vertical` orientation, the linter properly shows that the `size` property's value is invalid.

**Not working**: Using a `horizontal` orientation, the linter marks `orientation` as invalid instead of `size`.

### 🙂 Expected behavior
Consistency in the error messages. The ideal scenario would be for the non-working example to show _`Type "large" is not assignable to type "small" | "medium" | undefined.`_, as the property `orientation` with the value `horizontal` matches the `HorizontalCardProps` type.
| defect | discriminated union shows inconsistent error messages bug report 🔎 search terms discriminated union 🕗 version regression information this is the behavior in every version i tried and i reviewed the faq for entries about discriminated unions ⏯ playground link 💻 code ts type horizontalcardprops orientation horizontal size small medium type verticalcardprops orientation vertical size medium large type cardprops horizontalcardprops verticalcardprops works const vertical cardprops orientation vertical size small does not work const horizontal cardprops orientation horizontal size large 🙁 actual behavior working using a vertical orientation linter shows properly that the size property s value is invalid not working using a horizontal orientation linter marks orientation to be invalid instead of size 🙂 expected behavior consistency in the error messages the ideal scenario would be for the non working example to show type large is not assignable to type small medium undefined as the property orientation with the value horizontal matches the horizontalcardprops type | 1 |
71,274 | 23,516,069,696 | IssuesEvent | 2022-08-18 21:31:53 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | Non-unity build results in VPP errors using conda stacks on Macs | C: Framework T: defect P: normal | ## Bug Description
If moose is compiled using a non-unity build and the conda stacks on Macs (both ARM and Intel), we get the following errors:
```
vectorpostprocessors/parallel_consistency.test ................................... [min_cpus=2] FAILED (CRASH)
vectorpostprocessors/parallel_consistency.broadcast .............................. [min_cpus=2] FAILED (CRASH)
auxkernels/vector_postprocessor_visualization.test ............................... [min_cpus=3] FAILED (CRASH)
```
## Steps to Reproduce
```
conda activate moose  # latest moose environment with libmesh
git clone https://www.github.com/idaholab/moose
cd moose/test
MOOSE_UNITY=false make -j8
./run_tests -j8
```
## Impact
Fixing this will enable libtorch-related merges, since libtorch disables the unity build in certain folders.
| 1.0 | defect | 1
194,193 | 6,892,278,366 | IssuesEvent | 2017-11-22 20:20:05 | dashbase/dashbase-tools | https://api.github.com/repos/dashbase/dashbase-tools | closed | print out client-side and server-side latency for all tools | high priority | for all tools, print out a client-side time alongside the server-side time.
The server-side time is returned in the JSON response, and the client-side time can be measured via a timer in the Python code. The goal is to determine the RPC overhead/latency. | 1.0 | non_defect | 0
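The measurement described in this issue can be sketched in Python. This is a minimal sketch, not Dashbase's actual client: `fake_query` stands in for a real tool's RPC call, and the `latencyInMillis` response field is an assumed name.

```python
import time

def measure_latency(call):
    """Time an RPC-style call on the client and compare it with the
    latency the server reports in its JSON response.

    `call` must return a dict carrying the server-side latency in
    milliseconds under "latencyInMillis" (an assumed field name).
    """
    start = time.perf_counter()
    response = call()
    client_ms = (time.perf_counter() - start) * 1000.0
    server_ms = response["latencyInMillis"]
    # RPC overhead = total round-trip time minus time spent on the server
    return client_ms, server_ms, client_ms - server_ms

# Stand-in for a real query: sleeps briefly and reports a fake
# server-side latency, so the sketch runs without a live server.
def fake_query():
    time.sleep(0.01)
    return {"latencyInMillis": 5.0}

client_ms, server_ms, overhead_ms = measure_latency(fake_query)
print(f"client={client_ms:.1f}ms server={server_ms:.1f}ms overhead={overhead_ms:.1f}ms")
```

In the real tools, `fake_query` would be replaced by the HTTP request each tool already issues; printing all three numbers per request gives the RPC overhead directly.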
189,837 | 22,047,134,214 | IssuesEvent | 2022-05-30 03:58:05 | Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492 | https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492 | closed | CVE-2021-3506 (High) detected in linuxlinux-4.19.88 - autoclosed | security vulnerability | ## CVE-2021-3506 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/fs/f2fs/node.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An out-of-bounds (OOB) memory access flaw was found in fs/f2fs/node.c in the f2fs module in the Linux kernel in versions before 5.12.0-rc4. A bounds check failure allows a local attacker to gain access to out-of-bounds memory leading to a system crash or a leak of internal kernel information. The highest threat from this vulnerability is to system availability.
<p>Publish Date: 2021-04-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3506>CVE-2021-3506</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
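The 7.1 base score above follows from the listed metrics via the CVSS v3.0 formula. As a sanity check, here is a small sketch (handling scope "unchanged" vectors only) that reproduces it:

```python
import math

def cvss3_base(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score for scope 'unchanged' vectors only --
    enough for the metrics listed above (AV:L/AC:L/PR:L/UI:N/C:H/I:N/A:H)."""
    AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}[av]
    AC  = {"L": 0.77, "H": 0.44}[ac]
    PR  = {"N": 0.85, "L": 0.62, "H": 0.27}[pr]  # values for scope unchanged
    UI  = {"N": 0.85, "R": 0.62}[ui]
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss                       # scope unchanged
    exploitability = 8.22 * AV * AC * PR * UI
    if impact <= 0:
        return 0.0
    # CVSS "round up" to one decimal place
    return math.ceil(min(impact + exploitability, 10.0) * 10) / 10

# Local / Low / Low / None, C:High I:None A:High -> 7.1, as reported
score = cvss3_base("L", "L", "L", "N", "H", "N", "H")
```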
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3506">https://www.linuxkernelcves.com/cves/CVE-2021-3506</a></p>
<p>Release Date: 2021-04-19</p>
<p>Fix Resolution: v4.19.191,v5.10.36,v5.11.20,v5.12.3,v5.4.118,v5.13-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_defect | 0
57,370 | 6,545,420,817 | IssuesEvent | 2017-09-04 04:45:59 | brave/browser-laptop | https://api.github.com/repos/brave/browser-laptop | closed | Manual test run on OS X for 0.18.x Hotfix 1 (Release Channel) | OS/macOS release-notes/exclude tests | ## Per release specialty tests
- [ ] Enable Brave in the UA for www.netflix.com. ([#10614](https://github.com/brave/browser-laptop/issues/10614))
- [ ] Netflix Streaming Error. ([#10449](https://github.com/brave/browser-laptop/issues/10449))
## Installer
1. [x] Check that installer is close to the size of last release.
2. [x] Check signature: On OS X, run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. On Windows, right-click on the installer exe and go to Properties, go to the Digital Signatures tab and double-click on the signature. Make sure it says "The digital signature is OK" in the popup window.
3. [x] Check Brave, muon, and libchromiumcontent version in About and make sure it is EXACTLY as expected.
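The OS X half of the signature check in step 2 can be scripted so the result is harder to misread. A sketch follows; the helper names are ours, and the exact format of the `spctl` output line is an assumption.

```python
import subprocess

def spctl_accepted(output: str) -> bool:
    """True if any assessment line ends in "accepted", e.g. the assumed
    sample line "/Applications/Brave.app/: accepted"."""
    return any(line.strip().endswith("accepted") for line in output.splitlines())

def check_signature(app_path: str = "/Applications/Brave.app/") -> bool:
    # spctl prints its verdict on stderr when --verbose is given,
    # so collect both streams before parsing.
    result = subprocess.run(
        ["spctl", "--assess", "--verbose", app_path],
        capture_output=True, text=True,
    )
    return spctl_accepted(result.stderr + result.stdout)
```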
## Last changeset test
1. [ ] Test what is covered by the last changeset (you can find this by clicking on the SHA in about:brave).
## Widevine/Netflix test
1. [ ] Test that you can log into Netflix and start a show.
## Ledger
1. [x] Create a wallet with a value other than $5 selected in the monthly budget dropdown. Click on the 'Add Funds' button and check that Coinbase transactions are blocked.
2. [x] Remove all `ledger-*.json` files from `~/Library/Application\ Support/Brave/`. Go to the Payments tab in about:preferences, enable payments, click on `create wallet`. Check that the `add funds` button appears after a wallet is created.
3. [ ] Click on `add funds` and verify that adding funds through Coinbase increases the account balance.
4. [ ] Repeat the step above but add funds by scanning the QR code in a mobile bitcoin app instead of through Coinbase.
5. [x] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
6. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click the register button. In the Payments tab, click `add funds`. Verify that the `transfer funds` button is visible and that clicking on `transfer funds` opens a jsfiddle URL in a new tab.
7. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click `unregister`. Verify that the `transfer funds` button no longer appears in the `add funds` modal.
8. [x] Check that disabling payments and enabling them again does not lose state.
## Sync
1. [x] Verify you are able to sync two devices using the secret code
2. [x] Visit a site on device 1 and change shield setting, ensure that the saved site preference is synced to device 2
3. [x] Enable Browsing history sync on device 1, ensure the history is shown on device 2
4. [x] Import/Add bookmarks on device 1, ensure it is synced on device 2
5. [x] Ensure imported bookmark folder structure is maintained on device 2
6. [x] Ensure bookmark favicons are shown after sync
## Data
1. [x] Make sure that data from the last version appears in the new version OK.
2. [ ] Test that the previous version's cookies are preserved in the next version.
## About pages
1. [x] Test that about:adblock loads
2. [x] Test that about:autofill loads
3. [x] Test that about:bookmarks loads bookmarks
4. [x] Test that about:downloads loads downloads
5. [x] Test that about:extensions loads
6. [x] Test that about:history loads history
7. [x] Test that about:passwords loads
8. [x] Test that about:styles loads
9. [x] Test that about:welcome loads
10. [x] Test that about:preferences changing a preference takes effect right away
11. [x] Test that about:preferences language change takes effect on re-start
## Bookmarks
1. [x] Test that creating a bookmark on the bookmarks toolbar with the star button works
2. [x] Test that creating a bookmark on the bookmarks toolbar by dragging the un/lock icon works
3. [x] Test that creating a bookmark folder on the bookmarks toolbar works
4. [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
5. [x] Test that clicking a bookmark in the toolbar loads the bookmark.
6. [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
1. [x] Make sure context menu items in the URL bar work
2. [x] Make sure context menu items on content work with no selected text.
3. [x] Make sure context menu items on content work with selected text.
4. [x] Make sure context menu items on content work inside an editable control on `about:styles` (input, textarea, or contenteditable).
## Find on page
1. [x] Ensure search box is shown with shortcut
2. [x] Test successful find
3. [x] Test forward and backward find navigation
4. [x] Test failed find shows 0 results
5. [x] Test match case find
## Geolocation
1. [x] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
1. [x] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [x] Test downloading a file works and that all actions on the download item works.
## Fullscreen
1. [x] Test that entering a full-screen window works via View -> Toggle Full Screen, and that you can exit back the same way (not Esc).
2. [x] Test that entering HTML5 full screen works, and that Esc goes back (youtube.com).
## Tabs, Pinning and Tear off tabs
1. [x] Test that tabs are pinnable
2. [x] Test that tabs are unpinnable
3. [x] Test that tabs are draggable to same tabset
4. [x] Test that tabs are draggable to alternate tabset
5. [x] Test that tabs can be torn off into a new window
6. [x] Test that you are able to reattach a tab that was torn off into a new window
7. [x] Test that tab pages can be closed
8. [x] Test that tab pages can be muted
## Zoom
1. [x] Test zoom in / out shortcut works
2. [x] Test hamburger menu zooms.
3. [x] Test zoom saved when you close the browser and restore on a single site.
4. [x] Test zoom saved when you navigate within a single origin site.
5. [x] Test that navigating to a different origin resets the zoom
## Bravery settings
1. [x] Check that HTTPS Everywhere works by loading https://https-everywhere.badssl.com/
2. [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
3. [x] Check that ad replacement works on http://slashdot.org
4. [x] Check that toggling to blocking and allow ads works as expected.
5. [x] Test that clicking through a cert error in https://badssl.com/ works.
6. [x] Test that Safe Browsing works (http://downloadme.org/)
7. [x] Turning Safe Browsing off and shields off both disable safe browsing for http://downloadme.org/.
8. [x] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
10. [x] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
12. [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
13. [x] Test that browser is not detected on https://extensions.inrialpes.fr/brave/
## Content tests
1. [x] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [x] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
5. [x] Open `about:styles` and type some misspellings on a textbox, make sure they are underlined.
6. [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [x] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
9. [x] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
10. [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
## Flash tests
1. [x] Turn on Flash in about:preferences#security. Test that clicking on 'Install Flash' banner on myspace.com shows a notification to allow Flash and that the banner disappears when 'Allow' is clicked.
2. [x] Test that flash placeholder appears on http://www.homestarrunner.com
## Autofill tests
1. [x] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
1. [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave` in Windows, `./config/brave` in Ubuntu)
2. [x] Test that windows and tabs restore when closed, including active tab.
3. [x] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave` in Windows, `./config/brave` in Ubuntu)
## Cookie and Cache
1. [x] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure when you bring the browser back up everything is gone that is specified.
2. [x] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
1. [x] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly.
| 1.0 |
| non_defect | manual test run on os x for x hotfix release channel per release specialty tests enable brave in the ua for netflix streaming error installer check that installer is close to the size of last release check signature if os run spctl assess verbose applications brave app and make sure it returns accepted if windows right click on the installer exe and go to properties go to the digital signatures tab and double click on the signature make sure it says the digital signature is ok in the popup window check brave muon and libchromiumcontent version in about and make sure it is exactly as expected last changeset test test what is covered by the last changeset you can find this by clicking on the sha in about brave widevine netflix test test that you can log into netflix and start a show ledger create a wallet with a value other than selected in the monthly budget dropdown click on the add funds button and check that coinbase transactions are blocked remove all ledger json files from library application support brave go to the payments tab in about preferences enable payments click on create wallet check that the add funds button appears after a wallet is created click on add funds and verify that adding funds through coinbase increases the account balance repeat the step above but add funds by scanning the qr code in a mobile bitcoin app instead of through coinbase visit nytimes com for a few seconds and make sure it shows up in the payments table go to and click the register button in the payments tab click add funds verify that the transfer funds button is visible and that clicking on transfer funds opens a jsfiddle url in a new tab go to and click unregister verify that the transfer funds button no longer appears in the add funds modal check that disabling payments and enabling them again does not lose state sync verify you are able to sync two devices using the secret code visit a site on device and change shield setting ensure that the saved site 
preference is synced to device enable browsing history sync on device ensure the history is shown on device import add bookmarks on device ensure it is synced on device ensure imported bookmark folder structure is maintained on device ensure bookmark favicons are shown after sync data make sure that data from the last version appears in the new version ok test that the previous version s cookies are preserved in the next version about pages test that about adblock loads test that about autofill loads test that about bookmarks loads bookmarks test that about downloads loads downloads test that about extensions loads test that about history loads history test that about passwords loads test that about styles loads test that about welcome loads test that about preferences changing a preference takes effect right away test that about preferences language change takes effect on re start bookmarks test that creating a bookmark on the bookmarks toolbar with the star button works test that creating a bookmark on the bookmarks toolbar by dragging the un lock icon works test that creating a bookmark folder on the bookmarks toolbar works test that moving a bookmark into a folder by drag and drop on the bookmarks folder works test that clicking a bookmark in the toolbar loads the bookmark test that clicking a bookmark in a bookmark toolbar folder loads the bookmark context menus make sure context menu items in the url bar work make sure context menu items on content work with no selected text make sure context menu items on content work with selected text make sure context menu items on content work inside an editable control on about styles input textarea or contenteditable find on page ensure search box is shown with shortcut test successful find test forward and backward find navigation test failed find shows results test match case find geolocation check that works site hacks test sub page loads a video and you can play it downloads test downloading a file works and that 
all actions on the download item works fullscreen test that entering full screen window works view toggle full screen and exit back not esc test that entering full screen works and esc to go back youtube com tabs pinning and tear off tabs test that tabs are pinnable test that tabs are unpinnable test that tabs are draggable to same tabset test that tabs are draggable to alternate tabset test that tabs can be teared off into a new window test that you are able to reattach a tab that is teared off into a new window test that tab pages can be closed test that tab pages can be muted zoom test zoom in out shortcut works test hamburger menu zooms test zoom saved when you close the browser and restore on a single site test zoom saved when you navigate within a single origin site test that navigating to a different origin resets the zoom bravery settings check that https everywhere works by loading turning https everywhere off and shields off both disable the redirect to check that ad replacement works on check that toggling to blocking and allow ads works as expected test that clicking through a cert error in works test that safe browsing works turning safe browsing off and shields off both disable safe browsing for visit and then turn on script blocking nothing should load allow it from the script blocking ui in the url bar and it should work test that about preferences default bravery settings take effect on pages with no site settings test that turning on fingerprinting protection in about preferences shows fingerprints blocked at test that turning it off in the bravery menu shows fingerprints blocked test that party storage results are blank at when party cookies are blocked and not blank when party cookies are unblocked test that audio fingerprint is blocked at when fingerprinting protection is on test that browser is not detected on content tests go to and click on the twitter icon on the top right test that context menus work in the new twitter tab load twitter and 
click on a tweet so the popup div shows click to dismiss and repeat with another div make sure it shows go to and test that clicking on show pops up a notification asking for permission make sure that clicking deny leads to no notifications being shown go to and make sure that the password can be saved make sure the saved password shows up in about passwords then reload and make sure the password is autofilled open about styles and type some misspellings on a textbox make sure they are underlined make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text make sure that command click control click on windows control click on ubuntu on a link opens a new tab but does not switch to it click on it and make sure it is already loaded open an email on or inbox google com and click on a link make sure it works test that pdf is loaded at test that shows up as grey not red no mixed content scripts are run flash tests turn on flash in about preferences security test that clicking on install flash banner on myspace com shows a notification to allow flash and that the banner disappears when allow is clicked test that flash placeholder appears on autofill tests test that autofill works on session storage do not forget to make a backup of your entire library application support brave folder temporarily move away your library application support brave session store and test that clean session storage works appdata brave in windows config brave in ubuntu test that windows and tabs restore when closed including active tab move away your entire library application support brave folder appdata brave in windows config brave in ubuntu cookie and cache make a backup of your profile turn on all clearing in preferences and shut down make sure when you bring the browser back up everything is gone that is specified go to and set an evercookie check that going to prefs clearing site data and cache and going back to the 
evercookie site does not remember the old evercookie value update tests test that updating using brave update version env variable works correctly | 0 |
58,093 | 16,342,428,658 | IssuesEvent | 2021-05-13 00:16:55 | darshan-hpc/darshan | https://api.github.com/repos/darshan-hpc/darshan | opened | internal diagnostic timing routines are skewed if processes call MPI_Finalize() at different times | defect wrapper libraries | In GitLab by @shanedsnyder on Sep 24, 2015, 16:26
Darshan includes a feature that will display internal diagnostic timing information if the DARSHAN_INTERNAL_TIMING environment variable is set. However, the mechanism used to collect timing information assumes that all processes have called MPI_Finalize() simultaneously. If this is not the case, then the diagnostic information will erroneously attribute too much time to Darshan while it waits on the other processes to synchronize. | 1.0 | internal diagnostic timing routines are skewed if processes call MPI_Finalize() at different times - In GitLab by @shanedsnyder on Sep 24, 2015, 16:26
Darshan includes a feature that will display internal diagnostic timing information if the DARSHAN_INTERNAL_TIMING environment variable is set. However, the mechanism used to collect timing information assumes that all processes have called MPI_Finalize() simultaneously. If this is not the case, then the diagnostic information will erroneously attribute too much time to Darshan while it waits on the other processes to synchronize. | defect | internal diagnostic timing routines are skewed if processes call mpi finalize at different times in gitlab by shanedsnyder on sep darshan includes a feature that will display internal diagnostic timing information if the darshan internal timing environment variable is set however the mechanism used to collect timing information assumes that all processes have called mpi finalize simultaneously if this is not the case then the diagnostic information will erroneously attribute too much time to darshan while it waits on the other processes to synchronize | 1 |
14,534 | 2,822,513,913 | IssuesEvent | 2015-05-21 00:04:12 | lispascal/lolblender | https://api.github.com/repos/lispascal/lolblender | closed | Blender 2.69 working | auto-migrated duplicate Priority-Medium Type-Defect | ```
Just get
https://code.google.com/p/lolblender/issues/detail?id=4
patch
and in lolMesh.py comment the line 236 to 244
Launch with the console like :
import io_scene_lol
dir = 'C:\\Users\\Admin\\Downloads'
skn = 'Jayce.skn'
skl = ''
dds = ''
io_scene_lol.import_char(dir,skn,skl,dds)
And return to 3DView and you have your mesh without skin.
Maybe i'll work on this problem later.
```
Original issue reported on code.google.com by `romain...@gmail.com` on 5 Nov 2013 at 4:53 | 1.0 | Blender 2.69 working - ```
Just get
https://code.google.com/p/lolblender/issues/detail?id=4
patch
and in lolMesh.py comment the line 236 to 244
Launch with the console like :
import io_scene_lol
dir = 'C:\\Users\\Admin\\Downloads'
skn = 'Jayce.skn'
skl = ''
dds = ''
io_scene_lol.import_char(dir,skn,skl,dds)
And return to 3DView and you have your mesh without skin.
Maybe i'll work on this problem later.
```
Original issue reported on code.google.com by `romain...@gmail.com` on 5 Nov 2013 at 4:53 | defect | blender working just get patch and in lolmesh py comment the line to launch with the console like import io scene lol dir c users admin downloads skn jayce skn skl dds io scene lol import char dir skn skl dds and return to and you have your mesh without skin maybe i ll work on this problem later original issue reported on code google com by romain gmail com on nov at | 1 |
152,273 | 12,099,910,115 | IssuesEvent | 2020-04-20 13:00:12 | ICIJ/datashare | https://api.github.com/repos/ICIJ/datashare | closed | dynamic settings in datashare front | front need testing | Now that it is possible to change the settings in the UI (see #310 ) we should update in-memory copy of the backend settings.
For example, when the dataDirectory is changed, the directory remains the original one when doing a scan task.
| 1.0 | dynamic settings in datashare front - Now that it is possible to change the settings in the UI (see #310 ) we should update in-memory copy of the backend settings.
For example, when the dataDirectory is changed, the directory remains the original one when doing a scan task.
| non_defect | dynamic settings in datashare front now that it is possible to change the settings in the ui see we should update in memory copy of the backend settings for example when the datadirectory is changed the directory remains the original one when doing a scan task | 0 |
27,644 | 5,071,586,983 | IssuesEvent | 2016-12-26 14:38:18 | WizDevOps/containerschiff | https://api.github.com/repos/WizDevOps/containerschiff | closed | Virtualbox error: Dependency is not satisfiable: libvpx2 (>= 1.4.0) | bug / defect | ```sh
➜ open-source/containerschiff git:(master) ansible-playbook site.yml -K
SUDO password:
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [ubuntu-virtualbox : download the deb file & scan] ************************
changed: [localhost]
TASK [ubuntu-virtualbox : download & scan ExtPack] *****************************
changed: [localhost]
TASK [ubuntu-virtualbox : validate public key] *********************************
ok: [localhost]
TASK [ubuntu-virtualbox : install virtualbox via deb file] *********************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Dependency is n
ot satisfiable: libvpx2 (>= 1.4.0)\n"}
to retry, use: --limit @/tmp/ansible_retry/site.retry
PLAY RECAP *********************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=1
``` | 1.0 | Virtualbox error: Dependency is not satisfiable: libvpx2 (>= 1.4.0) - ```sh
➜ open-source/containerschiff git:(master) ansible-playbook site.yml -K
SUDO password:
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [ubuntu-virtualbox : download the deb file & scan] ************************
changed: [localhost]
TASK [ubuntu-virtualbox : download & scan ExtPack] *****************************
changed: [localhost]
TASK [ubuntu-virtualbox : validate public key] *********************************
ok: [localhost]
TASK [ubuntu-virtualbox : install virtualbox via deb file] *********************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Dependency is n
ot satisfiable: libvpx2 (>= 1.4.0)\n"}
to retry, use: --limit @/tmp/ansible_retry/site.retry
PLAY RECAP *********************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=1
``` | defect | virtualbox error dependency is not satisfiable sh ➜ open source containerschiff git master ansible playbook site yml k sudo password play task ok task changed task changed task ok task fatal failed changed false failed true msg dependency is n ot satisfiable n to retry use limit tmp ansible retry site retry play recap localhost ok changed unreachable failed | 1 |
74,223 | 25,013,678,098 | IssuesEvent | 2022-11-03 17:01:31 | matrix-org/synapse | https://api.github.com/repos/matrix-org/synapse | closed | Full-text-search in Synapse 1.71.0rc1 may be prohibitively expensive (DB IO) | A-Performance S-Major T-Defect A-Message-Search A-Database O-Occasional | From Matrix.org's monitoring:

In the "Toast table blocks read from disk/buffer" and "Toast index block read from buffer" graphs, there are intermittent blue spikes corresponding to the `event_search` table.
Correlated: peaks in "total txn time" due to `search_rooms`

And federation send PDU lag:

and event send time:

(Above graphs only showing the main process).
The event persister also saw pain at similar times:

It's a little tricky to interpret these, because the m.org database was updated this morning (2nd Nov 9.30 UTC) and there was some expensive background processing by postgres afterwards. (All times UTC in the graphs.)
Our suspicion is that the changes in #11635 are to blame. I reverted it, #13410 and #14311 on the hotfixes branch (https://github.com/matrix-org/synapse/commit/37307a5ccef8e21313f0631d414806ea551e6a73) and deployed to matrix.org. We haven't seen the event persisters flare up since then... but it's not completely clear that the changed mentioned were the cause.
I think we should (regrettably) back out the changes on the release branch too, before a final 1.71.0 release. | 1.0 | Full-text-search in Synapse 1.71.0rc1 may be prohibitively expensive (DB IO) - From Matrix.org's monitoring:

In the "Toast table blocks read from disk/buffer" and "Toast index block read from buffer" graphs, there are intermittent blue spikes corresponding to the `event_search` table.
Correlated: peaks in "total txn time" due to `search_rooms`

And federation send PDU lag:

and event send time:

(Above graphs only showing the main process).
The event persister also saw pain at similar times:

It's a little tricky to interpret these, because the m.org database was updated this morning (2nd Nov 9.30 UTC) and there was some expensive background processing by postgres afterwards. (All times UTC in the graphs.)
Our suspicion is that the changes in #11635 are to blame. I reverted it, #13410 and #14311 on the hotfixes branch (https://github.com/matrix-org/synapse/commit/37307a5ccef8e21313f0631d414806ea551e6a73) and deployed to matrix.org. We haven't seen the event persisters flare up since then... but it's not completely clear that the changed mentioned were the cause.
I think we should (regrettably) back out the changes on the release branch too, before a final 1.71.0 release. | defect | full text search in synapse may be prohibitively expensive db io from matrix org s monitoring in the toast table blocks read from disk buffer and toast index block read from buffer graphs there are intermittent blue spikes corresponding to the event search table correlated peaks in total txn time due to search rooms and federation send pdu lag and event send time above graphs only showing the main process the event persister also saw pain at similar times it s a little tricky to interpret these because the m org database was updated this morning nov utc and there was some expensive background processing by postgres afterwards all times utc in the graphs our suspicion is that the changes in are to blame i reverted it and on the hotfixes branch and deployed to matrix org we haven t seen the event persisters flare up since then but it s not completely clear that the changed mentioned were the cause i think we should regrettably back out the changes on the release branch too before a final release | 1 |
364,511 | 25,493,693,978 | IssuesEvent | 2022-11-27 12:10:50 | ophub/amlogic-s9xxx-armbian | https://api.github.com/repos/ophub/amlogic-s9xxx-armbian | closed | armbian安装好OMV 系统,,将新系统备份成img,将新备份系统安装在新电视盒子上吗? | documentation | 1、盒子为cm311-1a,把armbian系统写入盒子存储;
2、安装 好omv,并且在omva上面安装好docker;
3、请问盒子OMV系统,是否可以做成系统备份做成img镜像,把备份img写入u盘,然后把这个系统备份新安装到新电视盒子上呢?是否有操作明细,谢谢。
| 1.0 | armbian安装好OMV 系统,,将新系统备份成img,将新备份系统安装在新电视盒子上吗? - 1、盒子为cm311-1a,把armbian系统写入盒子存储;
2、安装 好omv,并且在omva上面安装好docker;
3、请问盒子OMV系统,是否可以做成系统备份做成img镜像,把备份img写入u盘,然后把这个系统备份新安装到新电视盒子上呢?是否有操作明细,谢谢。
| non_defect | armbian安装好omv 系统 ,将新系统备份成img,将新备份系统安装在新电视盒子上吗? 、 把armbian系统写入盒子存储; 、安装 好omv 并且在omva上面安装好docker 、请问盒子omv系统,是否可以做成系统备份做成img镜像,把备份img写入u盘,然后把这个系统备份新安装到新电视盒子上呢?是否有操作明细,谢谢。 | 0 |
62,093 | 14,656,432,477 | IssuesEvent | 2020-12-28 13:24:50 | fu1771695yongxie/angular.js | https://api.github.com/repos/fu1771695yongxie/angular.js | opened | WS-2018-0590 (High) detected in diff-1.4.0.tgz | security vulnerability | ## WS-2018-0590 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>diff-1.4.0.tgz</b></p></summary>
<p>A javascript text diff implementation.</p>
<p>Library home page: <a href="https://registry.npmjs.org/diff/-/diff-1.4.0.tgz">https://registry.npmjs.org/diff/-/diff-1.4.0.tgz</a></p>
<p>Path to dependency file: angular.js/yarn.lock</p>
<p>Path to vulnerable library: angular.js/yarn.lock</p>
<p>
Dependency Hierarchy:
- promises-aplus-tests-2.1.2.tgz (Root Library)
- mocha-2.5.3.tgz
- :x: **diff-1.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/angular.js/commit/1b1b50185bc6258e96ba849a00be9ba44d9ac05c">1b1b50185bc6258e96ba849a00be9ba44d9ac05c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in diff before v3.5.0, the affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.
<p>Publish Date: 2018-03-05
<p>URL: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1552148>WS-2018-0590</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/kpdecker/jsdiff/commit/2aec4298639bf30fb88a00b356bf404d3551b8c0">https://github.com/kpdecker/jsdiff/commit/2aec4298639bf30fb88a00b356bf404d3551b8c0</a></p>
<p>Release Date: 2019-06-11</p>
<p>Fix Resolution: 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2018-0590 (High) detected in diff-1.4.0.tgz - ## WS-2018-0590 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>diff-1.4.0.tgz</b></p></summary>
<p>A javascript text diff implementation.</p>
<p>Library home page: <a href="https://registry.npmjs.org/diff/-/diff-1.4.0.tgz">https://registry.npmjs.org/diff/-/diff-1.4.0.tgz</a></p>
<p>Path to dependency file: angular.js/yarn.lock</p>
<p>Path to vulnerable library: angular.js/yarn.lock</p>
<p>
Dependency Hierarchy:
- promises-aplus-tests-2.1.2.tgz (Root Library)
- mocha-2.5.3.tgz
- :x: **diff-1.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/angular.js/commit/1b1b50185bc6258e96ba849a00be9ba44d9ac05c">1b1b50185bc6258e96ba849a00be9ba44d9ac05c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in diff before v3.5.0, the affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.
<p>Publish Date: 2018-03-05
<p>URL: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1552148>WS-2018-0590</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/kpdecker/jsdiff/commit/2aec4298639bf30fb88a00b356bf404d3551b8c0">https://github.com/kpdecker/jsdiff/commit/2aec4298639bf30fb88a00b356bf404d3551b8c0</a></p>
<p>Release Date: 2019-06-11</p>
<p>Fix Resolution: 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | ws high detected in diff tgz ws high severity vulnerability vulnerable library diff tgz a javascript text diff implementation library home page a href path to dependency file angular js yarn lock path to vulnerable library angular js yarn lock dependency hierarchy promises aplus tests tgz root library mocha tgz x diff tgz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was found in diff before the affected versions of this package are vulnerable to regular expression denial of service redos attacks publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
327,025 | 24,114,009,606 | IssuesEvent | 2022-09-20 13:32:00 | liuli-moe/to-the-stars | https://api.github.com/repos/liuli-moe/to-the-stars | closed | 文章格式校正计划 | documentation | 目前除第一章和番外之外,所有**粗体**、*斜体*格式均未正确保留,所以需要重新校对,将从以下几个来源进行校对
1. 百合会论坛:<https://bbs.yamibo.com/forum.php?mod=viewthread&tid=206113&extra=&authorid=61676>,主要来源,在第三卷 54 章之前应该主要参考这里
2. bilibili:<https://www.bilibili.com/read/cv692285>,第三卷 54 章之后需要参考这里
如果有人愿意帮忙一起处理,请在该 issue 下面回复,吾辈将会在对应章节后 @用户名 以避免重复处理。请使用 github diff 对比并保留之前的错误修复,避免修复了格式却让之前的错误重现,尤其是和谐的部分请留意修正。在所有章节完成校正之后将发布新的 epub,在此之前,所有格式的修复仅发布至线上网站。
如果你是从网页转换的,则下面有一些校正指南可以参考
- 保证 github markdown 预览时格式是正确的
- 更改 `* * *` 为 `---` 用于章节分割
- 清除行尾额外的空格,一般是两个,这被用于 markdown 换行使用,但在这里不需要,例如 `〈在下文中,〈〉① 中的内容需要拥有相应阅览等级才能查看。圈内的数字代表所需的密级。〉 ① ` => `〈在下文中,〈〉① 中的内容需要拥有相应阅览等级才能查看。圈内的数字代表所需的密级。〉 ①`
- 如果一段加粗的文本以标点符号结尾并且后面有中文,则需要在后面加一个空格,例如 `**真没想到我这么快就要死了,**她有些自暴自弃地想着。` => `**真没想到我这么快就要死了,** 她有些自暴自弃地想着。`
- 每个章节开头的一些引用和虚拟文献不需要使用 `>` 表示引用,仅需要在正文开始之前添加 `---` 分割即可,原作即是如此,参考:<https://www.fanfiction.net/s/7406866/63/To-the-Stars>
- 第三卷 54 章之后还需要结合英文原作确定分割线的位置,因为翻译中的分割线存在缺失
第一卷
- [x] 第一章 许愿 - 量子纠缠 -- @andylizi
- [x] 第二章 幻影 -- @rxliuli
- [x] 第三章 麻美观影记 上 -- @rxliuli
- [x] 第四章 麻美观影记 下 -- @rxliuli
- [x] 第五章 家人 -- @rxliuli
- [x] 第六章 军队 -- @rxliuli
- [x] 第七章 南方组 -- @rxliuli
- [x] 第八章 政与教 -- @rxliuli
- [x] 第九章 回声 -- @rxliuli
- [x] 第十章 准将 -- @rxliuli
- [x] 第十一章 以往生活的残骸 -- @rxliuli
- [x] 第十二章 狩猎魔兽的人 -- @rxliuli
- [x] 第十三章 不对等的信息 -- @rxliuli
- [x] 第十四章 血缘 -- @rxliuli
- [x] 第十五章 萨姆萨拉 -- @rxliuli
- [x] 第十六章 属于天空的光芒 -- @rxliuli
第二卷
- [x] 幕间一 无间迷梦 -- @rxliuli
- [x] 第十八章 落地 -- @rxliuli
- [x] 第十九章 回天 -- @rxliuli
- [x] 第二十章 相对论 -- @rxliuli
- [x] 第二十一章 突变 -- @rxliuli
- [x] 第二十二章 变化的风向 -- @rxliuli
- [x] 第二十三章 失去的爱 -- @rxliuli
- [x] 第二十四章 历史学家 -- @rxliuli
- [x] 第二十五章 穿破天空 -- @rxliuli
- [x] 第二十六章 红与绿的血 -- @rxliuli
- [x] 第二十七章 那些璀璨的明星 -- @rxliuli
- [x] 第二十八章 不屈吾魂 -- @rxliuli
- [x] 第二十九章 救世之女 -- @rxliuli
- [x] 第三十章 权力与使命 -- @rxliuli
- [x] 第三十一章 爱的形式 -- @rxliuli
- [x] 第三十二章 选择与未来 -- @rxliuli
第三卷
- [x] 幕间二 小玛德莱娜 -- @rxliuli
- [x] 第三十四章 造就我们的那些羁绊 -- @rxliuli
- [x] 第三十五章 伤亡 -- @rxliuli
- [x] 第三十六章 追逐幻影 -- @rxliuli
- [x] 第三十七章 炽天使 -- @rxliuli
- [x] 第三十八章 把握今天 -- @rxliuli
- [x] 第三十九章 无法愈合的伤痕 -- @rxliuli
- [x] 第四十章 镜之彼端 -- @rxliuli
- [x] 第四十一章 因与果 -- @rxliuli
- [x] 第四十二章 有光必有影 -- @rxliuli
- [x] 幕间 2.5 尘封的往事 上 -- @rxliuli
- [x] 幕间 2.5 尘封的往事 下 -- @rxliuli
- [x] 第四十五章 带电的肉体 -- @rxliuli
- [x] 第四十六章 轮中之轮 -- @rxliuli
- [x] 第四十七章 光明之城 -- @rxliuli
- [x] 第四十八章 知己知彼 -- @rxliuli
- [x] 第四十九章 过渡态 -- @rxliuli
- [x] 第五十章 玫瑰之下 -- @rxliuli
- [x] 第五十一章 永恒的求索 -- @rxliuli
- [x] 第五十二章 我们是谁 -- @rxliuli
- [x] 第五十三章 立场 -- @rxliuli
- [x] 第五十四章 永恒的青春 @ArgusK17
- [x] 第五十五章 新视点 -- @rxliuli
- [x] 第五十六章 脉冲星 -- @rxliuli
- [x] 第五十七章 三相点 -- @rxliuli
- [x] 第五十八章 托付信任 -- @rxliuli
- [x] 第五十九章 室内游戏 -- @rxliuli
- [x] 第六十章 数码预言 -- @rxliuli
- [x] 第六十一章 遁入虚空 -- @rxliuli
- [x] 第六十二章 消逝的光芒 上 -- @rxliuli
- [x] 第六十三章 消逝的光芒 下 @ArgusK17
| 1.0 | 文章格式校正计划 - 目前除第一章和番外之外,所有**粗体**、*斜体*格式均未正确保留,所以需要重新校对,将从以下几个来源进行校对
1. 百合会论坛:<https://bbs.yamibo.com/forum.php?mod=viewthread&tid=206113&extra=&authorid=61676>,主要来源,在第三卷 54 章之前应该主要参考这里
2. bilibili:<https://www.bilibili.com/read/cv692285>,第三卷 54 章之后需要参考这里
如果有人愿意帮忙一起处理,请在该 issue 下面回复,吾辈将会在对应章节后 @用户名 以避免重复处理。请使用 github diff 对比并保留之前的错误修复,避免修复了格式却让之前的错误重现,尤其是和谐的部分请留意修正。在所有章节完成校正之后将发布新的 epub,在此之前,所有格式的修复仅发布至线上网站。
如果你是从网页转换的,则下面有一些校正指南可以参考
- 保证 github markdown 预览时格式是正确的
- 更改 `* * *` 为 `---` 用于章节分割
- 清除行尾额外的空格,一般是两个,这被用于 markdown 换行使用,但在这里不需要,例如 `〈在下文中,〈〉① 中的内容需要拥有相应阅览等级才能查看。圈内的数字代表所需的密级。〉 ① ` => `〈在下文中,〈〉① 中的内容需要拥有相应阅览等级才能查看。圈内的数字代表所需的密级。〉 ①`
- 如果一段加粗的文本以标点符号结尾并且后面有中文,则需要在后面加一个空格,例如 `**真没想到我这么快就要死了,**她有些自暴自弃地想着。` => `**真没想到我这么快就要死了,** 她有些自暴自弃地想着。`
- 每个章节开头的一些引用和虚拟文献不需要使用 `>` 表示引用,仅需要在正文开始之前添加 `---` 分割即可,原作即是如此,参考:<https://www.fanfiction.net/s/7406866/63/To-the-Stars>
- 第三卷 54 章之后还需要结合英文原作确定分割线的位置,因为翻译中的分割线存在缺失
第一卷
- [x] 第一章 许愿 - 量子纠缠 -- @andylizi
- [x] 第二章 幻影 -- @rxliuli
- [x] 第三章 麻美观影记 上 -- @rxliuli
- [x] 第四章 麻美观影记 下 -- @rxliuli
- [x] 第五章 家人 -- @rxliuli
- [x] 第六章 军队 -- @rxliuli
- [x] 第七章 南方组 -- @rxliuli
- [x] 第八章 政与教 -- @rxliuli
- [x] 第九章 回声 -- @rxliuli
- [x] 第十章 准将 -- @rxliuli
- [x] 第十一章 以往生活的残骸 -- @rxliuli
- [x] 第十二章 狩猎魔兽的人 -- @rxliuli
- [x] 第十三章 不对等的信息 -- @rxliuli
- [x] 第十四章 血缘 -- @rxliuli
- [x] 第十五章 萨姆萨拉 -- @rxliuli
- [x] 第十六章 属于天空的光芒 -- @rxliuli
第二卷
- [x] 幕间一 无间迷梦 -- @rxliuli
- [x] 第十八章 落地 -- @rxliuli
- [x] 第十九章 回天 -- @rxliuli
- [x] 第二十章 相对论 -- @rxliuli
- [x] 第二十一章 突变 -- @rxliuli
- [x] 第二十二章 变化的风向 -- @rxliuli
- [x] 第二十三章 失去的爱 -- @rxliuli
- [x] 第二十四章 历史学家 -- @rxliuli
- [x] 第二十五章 穿破天空 -- @rxliuli
- [x] 第二十六章 红与绿的血 -- @rxliuli
- [x] 第二十七章 那些璀璨的明星 -- @rxliuli
- [x] 第二十八章 不屈吾魂 -- @rxliuli
- [x] 第二十九章 救世之女 -- @rxliuli
- [x] 第三十章 权力与使命 -- @rxliuli
- [x] 第三十一章 爱的形式 -- @rxliuli
- [x] 第三十二章 选择与未来 -- @rxliuli
第三卷
- [x] 幕间二 小玛德莱娜 -- @rxliuli
- [x] 第三十四章 造就我们的那些羁绊 -- @rxliuli
- [x] 第三十五章 伤亡 -- @rxliuli
- [x] 第三十六章 追逐幻影 -- @rxliuli
- [x] 第三十七章 炽天使 -- @rxliuli
- [x] 第三十八章 把握今天 -- @rxliuli
- [x] 第三十九章 无法愈合的伤痕 -- @rxliuli
- [x] 第四十章 镜之彼端 -- @rxliuli
- [x] 第四十一章 因与果 -- @rxliuli
- [x] 第四十二章 有光必有影 -- @rxliuli
- [x] 幕间 2.5 尘封的往事 上 -- @rxliuli
- [x] 幕间 2.5 尘封的往事 下 -- @rxliuli
- [x] 第四十五章 带电的肉体 -- @rxliuli
- [x] 第四十六章 轮中之轮 -- @rxliuli
- [x] 第四十七章 光明之城 -- @rxliuli
- [x] 第四十八章 知己知彼 -- @rxliuli
- [x] 第四十九章 过渡态 -- @rxliuli
- [x] 第五十章 玫瑰之下 -- @rxliuli
- [x] 第五十一章 永恒的求索 -- @rxliuli
- [x] 第五十二章 我们是谁 -- @rxliuli
- [x] 第五十三章 立场 -- @rxliuli
- [x] 第五十四章 永恒的青春 @ArgusK17
- [x] 第五十五章 新视点 -- @rxliuli
- [x] 第五十六章 脉冲星 -- @rxliuli
- [x] 第五十七章 三相点 -- @rxliuli
- [x] 第五十八章 托付信任 -- @rxliuli
- [x] 第五十九章 室内游戏 -- @rxliuli
- [x] 第六十章 数码预言 -- @rxliuli
- [x] 第六十一章 遁入虚空 -- @rxliuli
- [x] 第六十二章 消逝的光芒 上 -- @rxliuli
- [x] 第六十三章 消逝的光芒 下 @ArgusK17
| non_defect | 文章格式校正计划 目前除第一章和番外之外,所有 粗体 、 斜体 格式均未正确保留,所以需要重新校对,将从以下几个来源进行校对 百合会论坛: 章之前应该主要参考这里 bilibili: 章之后需要参考这里 如果有人愿意帮忙一起处理,请在该 issue 下面回复,吾辈将会在对应章节后 用户名 以避免重复处理。请使用 github diff 对比并保留之前的错误修复,避免修复了格式却让之前的错误重现,尤其是和谐的部分请留意修正。在所有章节完成校正之后将发布新的 epub,在此之前,所有格式的修复仅发布至线上网站。 如果你是从网页转换的,则下面有一些校正指南可以参考 保证 github markdown 预览时格式是正确的 更改 为 用于章节分割 清除行尾额外的空格,一般是两个,这被用于 markdown 换行使用,但在这里不需要,例如 〈在下文中,〈〉① 中的内容需要拥有相应阅览等级才能查看。圈内的数字代表所需的密级。〉 ① 〈在下文中,〈〉① 中的内容需要拥有相应阅览等级才能查看。圈内的数字代表所需的密级。〉 ① 如果一段加粗的文本以标点符号结尾并且后面有中文,则需要在后面加一个空格,例如 真没想到我这么快就要死了, 她有些自暴自弃地想着。 真没想到我这么快就要死了, 她有些自暴自弃地想着。 每个章节开头的一些引用和虚拟文献不需要使用 表示引用,仅需要在正文开始之前添加 分割即可,原作即是如此,参考: 第三卷 章之后还需要结合英文原作确定分割线的位置,因为翻译中的分割线存在缺失 第一卷 第一章 许愿 量子纠缠 andylizi 第二章 幻影 rxliuli 第三章 麻美观影记 上 rxliuli 第四章 麻美观影记 下 rxliuli 第五章 家人 rxliuli 第六章 军队 rxliuli 第七章 南方组 rxliuli 第八章 政与教 rxliuli 第九章 回声 rxliuli 第十章 准将 rxliuli 第十一章 以往生活的残骸 rxliuli 第十二章 狩猎魔兽的人 rxliuli 第十三章 不对等的信息 rxliuli 第十四章 血缘 rxliuli 第十五章 萨姆萨拉 rxliuli 第十六章 属于天空的光芒 rxliuli 第二卷 幕间一 无间迷梦 rxliuli 第十八章 落地 rxliuli 第十九章 回天 rxliuli 第二十章 相对论 rxliuli 第二十一章 突变 rxliuli 第二十二章 变化的风向 rxliuli 第二十三章 失去的爱 rxliuli 第二十四章 历史学家 rxliuli 第二十五章 穿破天空 rxliuli 第二十六章 红与绿的血 rxliuli 第二十七章 那些璀璨的明星 rxliuli 第二十八章 不屈吾魂 rxliuli 第二十九章 救世之女 rxliuli 第三十章 权力与使命 rxliuli 第三十一章 爱的形式 rxliuli 第三十二章 选择与未来 rxliuli 第三卷 幕间二 小玛德莱娜 rxliuli 第三十四章 造就我们的那些羁绊 rxliuli 第三十五章 伤亡 rxliuli 第三十六章 追逐幻影 rxliuli 第三十七章 炽天使 rxliuli 第三十八章 把握今天 rxliuli 第三十九章 无法愈合的伤痕 rxliuli 第四十章 镜之彼端 rxliuli 第四十一章 因与果 rxliuli 第四十二章 有光必有影 rxliuli 幕间 尘封的往事 上 rxliuli 幕间 尘封的往事 下 rxliuli 第四十五章 带电的肉体 rxliuli 第四十六章 轮中之轮 rxliuli 第四十七章 光明之城 rxliuli 第四十八章 知己知彼 rxliuli 第四十九章 过渡态 rxliuli 第五十章 玫瑰之下 rxliuli 第五十一章 永恒的求索 rxliuli 第五十二章 我们是谁 rxliuli 第五十三章 立场 rxliuli 第五十四章 永恒的青春 第五十五章 新视点 rxliuli 第五十六章 脉冲星 rxliuli 第五十七章 三相点 rxliuli 第五十八章 托付信任 rxliuli 第五十九章 室内游戏 rxliuli 第六十章 数码预言 rxliuli 第六十一章 遁入虚空 rxliuli 第六十二章 消逝的光芒 上 rxliuli 第六十三章 消逝的光芒 下 | 0 |
1,171 | 2,599,629,911 | IssuesEvent | 2015-02-23 10:23:40 | v-l-m/vlm | https://api.github.com/repos/v-l-m/vlm | closed | option du moteur pour pouvoir faire un run à blanc | C: moteur P: major R: fixed T: defect | **Reported by paparazzia on 20 Sep 2009 10:50 UTC**
Il manque une option dry-run du moteur.
A priori, il faut une option dans le param.php pour pouvoir inhiber les update sql faits par le moteur...
| 1.0 | option du moteur pour pouvoir faire un run à blanc - **Reported by paparazzia on 20 Sep 2009 10:50 UTC**
Il manque une option dry-run du moteur.
A priori, il faut une option dans le param.php pour pouvoir inhiber les update sql faits par le moteur...
| defect | option du moteur pour pouvoir faire un run à blanc reported by paparazzia on sep utc il manque une option dry run du moteur a priori il faut une option dans le param php pour pouvoir inhiber les update sql faits par le moteur | 1 |
283,740 | 24,560,682,539 | IssuesEvent | 2022-10-12 19:58:07 | longhorn/longhorn | https://api.github.com/repos/longhorn/longhorn | opened | [TEST] Add replica deletion and rebuilding case in the upgrade test. | kind/test | ## What's the test to develop? Please describe
Add the case of replica deletion and rebuilding case to the upgrade test https://github.com/longhorn/longhorn-tests/blob/master/manager/integration/tests/test_upgrade.py
## Describe the items of the test development (DoD, definition of done) you'd like
> Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists
- [ ] Implement the test.
- [ ] Run tests with different upgrade paths on Jenkins and observe the test result.
| 1.0 | [TEST] Add replica deletion and rebuilding case in the upgrade test. - ## What's the test to develop? Please describe
Add the case of replica deletion and rebuilding to the upgrade test https://github.com/longhorn/longhorn-tests/blob/master/manager/integration/tests/test_upgrade.py
## Describe the items of the test development (DoD, definition of done) you'd like
> Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists
- [ ] Implement the test.
- [ ] Run tests with different upgrade paths on Jenkins and observe the test result.
| non_defect | add replica deletion and rebuilding case in the upgrade test what s the test to develop please describe add the case of replica deletion and rebuilding case to the upgrade test describe the items of the test development dod definition of done you d like please use a task list for items on a separate line with a clickable checkbox implement the test run tests with different upgrade paths on jenkins and observe the test result | 0 |
42,694 | 11,219,683,506 | IssuesEvent | 2020-01-07 14:23:45 | SasView/sasview | https://api.github.com/repos/SasView/sasview | closed | set reasonable min/max on polydispersity values for fit (Trac #558) | Migrated from Trac SasView Bug Fixing defect major | polydispersity cannot be negative, so set the min to 0.
relative polydispersity doesn't make a lot of sense above 1 (thickness of 30 +/- 200%?)
Migrated from http://trac.sasview.org/ticket/558
```json
{
"status": "new",
"changetime": "2017-10-27T10:49:46",
"_ts": "2017-10-27 10:49:46.234557+00:00",
"description": "polydispersity cannot be negative, so set the min to 0.\n\nrelative polydispersity doesn't make a lot of sense above 1 (thickness of 30 +/- 200%?)\n\n",
"reporter": "pkienzle",
"cc": "",
"resolution": "",
"workpackage": "SasView Bug Fixing",
"time": "2016-04-12T15:20:28",
"component": "SasView",
"summary": "set reasonable min/max on polydispersity values for fit",
"priority": "major",
"keywords": "",
"milestone": "SasView 5.0.0",
"owner": "",
"type": "defect"
}
```
| 1.0 | set reasonable min/max on polydispersity values for fit (Trac #558) - polydispersity cannot be negative, so set the min to 0.
relative polydispersity doesn't make a lot of sense above 1 (thickness of 30 +/- 200%?)
Migrated from http://trac.sasview.org/ticket/558
```json
{
"status": "new",
"changetime": "2017-10-27T10:49:46",
"_ts": "2017-10-27 10:49:46.234557+00:00",
"description": "polydispersity cannot be negative, so set the min to 0.\n\nrelative polydispersity doesn't make a lot of sense above 1 (thickness of 30 +/- 200%?)\n\n",
"reporter": "pkienzle",
"cc": "",
"resolution": "",
"workpackage": "SasView Bug Fixing",
"time": "2016-04-12T15:20:28",
"component": "SasView",
"summary": "set reasonable min/max on polydispersity values for fit",
"priority": "major",
"keywords": "",
"milestone": "SasView 5.0.0",
"owner": "",
"type": "defect"
}
```
| defect | set reasonable min max on polydispersity values for fit trac polydispersity cannot be negative so set the min to relative polydispersity doesn t make a lot of sense above thickness of migrated from json status new changetime ts description polydispersity cannot be negative so set the min to n nrelative polydispersity doesn t make a lot of sense above thickness of n n reporter pkienzle cc resolution workpackage sasview bug fixing time component sasview summary set reasonable min max on polydispersity values for fit priority major keywords milestone sasview owner type defect | 1 |
51,504 | 21,693,641,049 | IssuesEvent | 2022-05-09 17:46:11 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | OOB Deploy Request | frontend operations platform-sre Console-Services | # Instructions
Fill out Details of need and the PRs related to OOB sections below.
## PRs Related to OOB
- Please provide the list of PRs related to the need for this OOB Deploy
> https://github.com/department-of-veterans-affairs/vets-website/pull/20942
## Active Daily Users Impacted
- How many active users are impacted at this time and day.
> 700 an hour
## Has fix been confirmed in Staging?
- [x] Yes
- [ ] No
## Description
- Please provide the details/reason for the OOB Deploy
> There are feature flags that were changed in the backend and the corresponding frontend ticket was not merged/approved in time for the prod deploy which uses the new backend code. This is resulting in all inbound authentication on production not working.
## Verify The following
- [x] The OOB Deploy Request is after the 2pm EST cutoff for regular deploy - [Deployment Policy](https://vfs.atlassian.net/wiki/spaces/SUP/pages/1279787009/Deployment+Policies) and [Deployment Schedules](https://depo-platform-documentation.scrollhelp.site/developer-docs/Deployments.1844641889.html)
- [x] The OOB Deploy Request is critical and must be resolved before the next automated deploy.
*Performed by Platform SRE team*
- [x] PagerDuty OOB Deploy Incident Opened
- [x] OCTO-DE staff acknowledgment of Request
- [x] Notification is posted in the appropriate Slack support and team Channels
- [x] Infrastructure/Operations has acknowledged the Request
- [x] Security Team has Reviewed the requests
- [x] An Incident Post Mortem is created within two business days
CC: @department-of-veterans-affairs/vsp-operations , @department-of-veterans-affairs/vsp-product-support
| 1.0 | OOB Deploy Request - # Instructions
Fill out Details of need and the PRs related to OOB sections below.
## PRs Related to OOB
- Please provide the list of PRs related to the need for this OOB Deploy
> https://github.com/department-of-veterans-affairs/vets-website/pull/20942
## Active Daily Users Impacted
- How many active users are impacted at this time and day.
> 700 an hour
## Has fix been confirmed in Staging?
- [x] Yes
- [ ] No
## Description
- Please provide the details/reason for the OOB Deploy
> There are feature flags that were changed in the backend and the corresponding frontend ticket was not merged/approved in time for the prod deploy which uses the new backend code. This is resulting in all inbound authentication on production not working.
## Verify The following
- [x] The OOB Deploy Request is after the 2pm EST cutoff for regular deploy - [Deployment Policy](https://vfs.atlassian.net/wiki/spaces/SUP/pages/1279787009/Deployment+Policies) and [Deployment Schedules](https://depo-platform-documentation.scrollhelp.site/developer-docs/Deployments.1844641889.html)
- [x] The OOB Deploy Request is critical and must be resolved before the next automated deploy.
*Performed by Platform SRE team*
- [x] PagerDuty OOB Deploy Incident Opened
- [x] OCTO-DE staff acknowledgment of Request
- [x] Notification is posted in the appropriate Slack support and team Channels
- [x] Infrastructure/Operations has acknowledged the Request
- [x] Security Team has Reviewed the requests
- [x] An Incident Post Mortem is created within two business days
CC: @department-of-veterans-affairs/vsp-operations , @department-of-veterans-affairs/vsp-product-support
| non_defect | oob deploy request instructions fill out details of need and the prs related to oob sections below prs related to oob please provide the list of prs related to the need for this oob deploy active daily users impacted how many active users are impacted at this time and day an hour has fix been confirmed in staging yes no description please provide the details reason for the oob deploy there are feature flags that were changed in the backend and the corresponding frontend ticket was not merged approved in time for the prod deploy which uses the new backend code this is resulting in all inbound authentication on production to not work verify the following the oob deploy requests is after the est cutoff for regular deploy and the oob deploy requests is critial and must be resolved before the next automated deploy performed by platform sre team pagerduty oob deploy incident opened octo de staff acknowledgment of request notification is posted in the appropiate slack support and team channels infrastructure operations has acknowledge the requests security team has reviewed the requests an incident post mortem is created within two business days cc department of veterans affairs vsp operations department of veterans affairs vsp product support | 0 |
41,845 | 10,818,377,704 | IssuesEvent | 2019-11-08 11:55:35 | jackadull/jackadull-build | https://api.github.com/repos/jackadull/jackadull-build | opened | Thinner build definitions | Build Config Programming | Pull more of the individual project `build.sbt` into Jackadull-Build, if possible:
* Overload `JackadullBuild.onTravis` so it can directly create the `ProjectInfo` in one go.
This also enables omitting the import, as it is only used once.
* Try getting the command aliases into `JackadullBuild` somehow.
* See if it is possible to get rid of `.aggregate(docs)` for root projects with docs.
This seems to be impossible though. | 1.0 | Thinner build definitions - Pull more of the individual project `build.sbt` into Jackadull-Build, if possible:
* Overload `JackadullBuild.onTravis` so it can directly create the `ProjectInfo` in one go.
This also enables omitting the import, as it is only used once.
* Try getting the command aliases into `JackadullBuild` somehow.
* See if it is possible to get rid of `.aggregate(docs)` for root projects with docs.
This seems to be impossible though. | non_defect | thinner build definitions pull more of the individual project build sbt into jackadull build if possible overload jackadullbuild ontravis so it can directly create the projectinfo in one go this also enables omitting the import as it is only used once try getting the command aliases into jackadullbuild somehow see if it is possible to get rid of aggregate docs for root projects with docs this seems to be impossible though | 0 |
3,673 | 2,610,066,807 | IssuesEvent | 2015-02-26 18:19:37 | chrsmith/jsjsj122 | https://api.github.com/repos/chrsmith/jsjsj122 | opened | 路桥治前列腺炎去哪里效果最好 | auto-migrated Priority-Medium Type-Defect | ```
路桥治前列腺炎去哪里效果最好【台州五洲生殖医院】24小时
健康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、112、901、902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,遗精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免费咨询,拥有专业完善的男科检查治疗设备,严格按照国家标准收费。尖端医疗设备,与世界同步。权威专家,成就专业典范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:40 | 1.0 | 路桥治前列腺炎去哪里效果最好 - ```
路桥治前列腺炎去哪里效果最好【台州五洲生殖医院】24小时
健康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、112、901、902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,遗精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免费咨询,拥有专业完善的男科检查治疗设备,严格按照国家标准收费。尖端医疗设备,与世界同步。权威专家,成就专业典范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:40 | defect | 路桥治前列腺炎去哪里效果最好 路桥治前列腺炎去哪里效果最好【台州五洲生殖医院】 健康咨询热线 微信号tzwzszyy 医院地� �� (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at | 1 |
48,619 | 13,166,730,819 | IssuesEvent | 2020-08-11 09:03:09 | PowerDNS/pdns | https://api.github.com/repos/PowerDNS/pdns | closed | Recursor: forward-zones=.=127.0.0.1:5353 seems to bypass caching (breaking RFC7706 implementation attempt) | defect rec | - Program: Recursor
- Issue type: Bug report
### Short description
Pointing the root zone to a local auth in order to implement some sort of RFC7706 "local root server on loopback" solution using "forward-zones=.=127.0.0.1:5353" does not seem to work as it looks like replies are not cached. Every inbound lookup attempt seems to go out to the configured destination even though previous replies have already indicated that, for example, ".com can be found at x.gtld-servers.net and this won't change for another 86400 seconds".
This breaks any attempts of RFC7706 implementation using the "forward-zones" statement as it just plain overloads the local auth instance (of course depending on the amount of inbound client queries). The overall behavior might be intentional for "forward-zones" but the documentation is not very distinct regarding reply caching and I do have some doubts there.
In any case it would be good if we could forward root queries to a local target either using "forward-zones" or a similar setting. Bonus points for being able to fully slave the root zone out of the box (#3042).
### Environment
<!-- Tell us about the environment -->
- Operating system: Ubuntu 18.04
- Software version: Recursor 4.3.1
- Software source: PowerDNS repository
### Steps to reproduce
<!-- Tell us step-by-step how the issue can be triggered. Please include your configuration files and any (Lua) scripts that are loaded. -->
1. Set up local root zone slave auth instance
2. Point recursor towards the local instance for "." using "forward-zones=.=127.0.0.1:5353"
3. Send a bunch of queries for .com domains and watch local lookup traffic towards the local auth instance
### Expected behaviour
<!-- What would you expect to happen when the reproduction steps are run -->
A single query for the first domain getting back NS replies for .com
(likely multiplicated by the number of threads plus some DS and other queries)
### Actual behaviour
<!-- What did happen? Please (if possible) provide logs, output from `dig` and/or tcpdump/wireshark data -->
All inbound queries are basically being sent to the local auth instance
### Other information
Issue opened as requested, as discussed with the team. | 1.0 | Recursor: forward-zones=.=127.0.0.1:5353 seems to bypass caching (breaking RFC7706 implementation attempt) - - Program: Recursor
- Issue type: Bug report
### Short description
Pointing the root zone to a local auth in order to implement some sort of RFC7706 "local root server on loopback" solution using "forward-zones=.=127.0.0.1:5353" does not seem to work as it looks like replies are not cached. Every inbound lookup attempt seems to go out to the configured destination even though previous replies have already indicated that, for example, ".com can be found at x.gtld-servers.net and this won't change for another 86400 seconds".
This breaks any attempts of RFC7706 implementation using the "forward-zones" statement as it just plain overloads the local auth instance (of course depending on the amount of inbound client queries). The overall behavior might be intentional for "forward-zones" but the documentation is not very distinct regarding reply caching and I do have some doubts there.
In any case it would be good if we could forward root queries to a local target either using "forward-zones" or a similar setting. Bonus points for being able to fully slave the root zone out of the box (#3042).
### Environment
<!-- Tell us about the environment -->
- Operating system: Ubuntu 18.04
- Software version: Recursor 4.3.1
- Software source: PowerDNS repository
### Steps to reproduce
<!-- Tell us step-by-step how the issue can be triggered. Please include your configuration files and any (Lua) scripts that are loaded. -->
1. Set up local root zone slave auth instance
2. Point recursor towards the local instance for "." using "forward-zones=.=127.0.0.1:5353"
3. Send a bunch of queries for .com domains and watch local lookup traffic towards the local auth instance
### Expected behaviour
<!-- What would you expect to happen when the reproduction steps are run -->
A single query for the first domain getting back NS replies for .com
(likely multiplicated by the number of threads plus some DS and other queries)
### Actual behaviour
<!-- What did happen? Please (if possible) provide logs, output from `dig` and/or tcpdump/wireshark data -->
All inbound queries are basically being sent to the local auth instance
### Other information
Issue opened as requested, as discussed with the team. | defect | recursor forward zones seems to bypass caching breaking implementation attempt program recursor issue type bug report short description pointing the root zone to a local auth in order to implement some sort of local root server on loopback solution using forward zones does not seem to work as it looks like replies are not cached every inbound lookup attempt seems to go out to the configured destination even though previous replies have already indicated that for example com can be found at x gtld servers net and this won t change for another seconds this breaks any attempts of implementation using the forward zones statement as it just plain overloads the local auth instance of course depending on the amount of inbound client queries the overall behavior might be intentional for forward zones but the documentation is not very distinct regarding reply caching and i do have some doubts there in any case it would be good if we could forward root queries to a local target either using forward zones or a similar setting bonus points for being able to fully slave the root zone out of the box environment operating system ubuntu software version recursor software source powerdns repository steps to reproduce set up local root zone slave auth instance point recursor towards the local instance for using forward zones send a bunch of queries for com domains and watch local lookup traffic towards the local auth instance expected behaviour a single query for the first domain getting back ns replies for com likely multiplicated by the number of threads plus some ds and other queries actual behaviour all inbound queries are basically being sent to the local auth instance other information issue opened as requested as discussed with the team | 1 |
145,112 | 13,135,059,239 | IssuesEvent | 2020-08-07 01:34:14 | tunisiano187/WME-send-to-slack | https://api.github.com/repos/tunisiano187/WME-send-to-slack | closed | [Bug] Legacy WebHook Slack Migration | NoProgress bug documentation | https://github.com/tunisiano187/WME-send-to-slack/blob/8d6472a574e9618a9a50a896e52fb5c55c993dc3/WME-send-to-slack.user.js#L538-L555
Not sure if the script is working well in Slack, because of this documentation talking about:
_You cannot override the default channel (chosen by the user who installed your app), username, or icon when you're using Incoming Webhooks to post messages. Instead, these values will always inherit from the associated Slack app configuration._
Please check Legacy Docs:
https://api.slack.com/legacy/custom-integrations/incoming-webhooks
and Up-to-Date Slack WebHook Documentation:
https://api.slack.com/messaging/webhooks | 1.0 | [Bug] Legacy WebHook Slack Migration - https://github.com/tunisiano187/WME-send-to-slack/blob/8d6472a574e9618a9a50a896e52fb5c55c993dc3/WME-send-to-slack.user.js#L538-L555
Not sure if the script is working well in Slack, because of this documentation talking about:
_You cannot override the default channel (chosen by the user who installed your app), username, or icon when you're using Incoming Webhooks to post messages. Instead, these values will always inherit from the associated Slack app configuration._
Please check Legacy Docs:
https://api.slack.com/legacy/custom-integrations/incoming-webhooks
and Up-to-Date Slack WebHook Documentation:
https://api.slack.com/messaging/webhooks | non_defect | legacy webhook slack migration not sure if the script is working well in slack because of this documentation talking about you cannot override the default channel chosen by the user who installed your app username or icon when you re using incoming webhooks to post messages instead these values will always inherit from the associated slack app configuration please check legacy docs and up to date slack webhook documentation | 0 |
72,590 | 24,191,965,336 | IssuesEvent | 2022-09-23 18:34:15 | matrix-org/synapse | https://api.github.com/repos/matrix-org/synapse | opened | Faster room joins: Outgoing device list updates not sent if we leave the room before join completes | A-Federated-Join A-Device-List-Tracking S-Minor T-Defect O-Uncommon | We're missing complement tests for this too.
related:
* #12802
* #13891 | 1.0 | Faster room joins: Outgoing device list updates not sent if we leave the room before join completes - We're missing complement tests for this too.
related:
* #12802
* #13891 | defect | faster room joins outgoing device list updates not sent if we leave the room before join completes we re missing complement tests for this too related | 1 |
311,989 | 26,829,236,818 | IssuesEvent | 2023-02-02 14:57:26 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | closed | Measurement Units are not Uniform | Bug Ready to Test Not Yet Working | 
In symmetric charts such as the one shown, Y units must be the same, as otherwise it is hard to compare data. Please fix it and make sure that charts are symmetrical
In symmetric charts such as the one shown, Y units must be the same, as otherwise it is hard to compare data. Please fix it and make sure that charts are symmetrical
252,611 | 8,038,104,748 | IssuesEvent | 2018-07-30 14:32:40 | allenlol/ySpigot-Issues | https://api.github.com/repos/allenlol/ySpigot-Issues | opened | Configurable TPS, Help, Ping Messages | enhancement help wanted low priority | Configurable TPS, Help, Ping Messages Integrated in Spigot | 1.0 | Configurable TPS, Help, Ping Messages - Configurable TPS, Help, Ping Messages Integrated in Spigot | non_defect | configurable tps help ping messages configurable tps help ping messages integrated in spigot | 0 |
105,663 | 9,099,632,715 | IssuesEvent | 2019-02-20 05:30:27 | Microsoft/ptvsd | https://api.github.com/repos/Microsoft/ptvsd | closed | Reconnect to the same debuggee in tests fails | test issue | disconnect and reconnect works partially. The session is able to reconnect but fails to receive any response.
```
@01.875000: Realized Response(18!Request('disconnect', None), ANY):
Response(18!Request('disconnect', None), ANY) by 19!Response(18!Request('disconnect', None), {})
@01.875000: Waiting for ptvsd#5678 to disconnect
@01.875000: Finalizing
@06.890000: bchan#5678 --> "continued"
@06.890000: New debug session with method 'attach_socket_import'
@06.890000: Trying to connect to ptvsd#5678
@06.890000: Successfully connected to ptvsd#5678
@07.000000: ptvsd#5678 <-- {"seq": 1, "type": "request", "command": "initialize", "arguments": {"adapterID": "test"}}
@07.000000: Waiting for Response(2!Request('initialize', {'adapterID': 'test'}), ANY)
@09.109000: Realized Response(2!Request('initialize', {'adapterID': 'test'}), ANY):
Response(2!Request('initialize', {'adapterID': 'test'}), ANY) by 3!Response(2!Request('initialize', {'adapterID': 'test'}), EOFError('No response'))
FAILED
```
run following test to repro:
```
py -3.7 -m pytest -vvs ./pytests/func/test_attach.py::test_reattach[file-attach_socket_cmdline]
``` | 1.0 | Reconnect to the same debuggee in tests fails - disconnect and reconnect works partially. The session is able to reconnect but fails to receive any response.
```
@01.875000: Realized Response(18!Request('disconnect', None), ANY):
Response(18!Request('disconnect', None), ANY) by 19!Response(18!Request('disconnect', None), {})
@01.875000: Waiting for ptvsd#5678 to disconnect
@01.875000: Finalizing
@06.890000: bchan#5678 --> "continued"
@06.890000: New debug session with method 'attach_socket_import'
@06.890000: Trying to connect to ptvsd#5678
@06.890000: Successfully connected to ptvsd#5678
@07.000000: ptvsd#5678 <-- {"seq": 1, "type": "request", "command": "initialize", "arguments": {"adapterID": "test"}}
@07.000000: Waiting for Response(2!Request('initialize', {'adapterID': 'test'}), ANY)
@09.109000: Realized Response(2!Request('initialize', {'adapterID': 'test'}), ANY):
Response(2!Request('initialize', {'adapterID': 'test'}), ANY) by 3!Response(2!Request('initialize', {'adapterID': 'test'}), EOFError('No response'))
FAILED
```
run following test to repro:
```
py -3.7 -m pytest -vvs ./pytests/func/test_attach.py::test_reattach[file-attach_socket_cmdline]
``` | non_defect | reconnect to the same debuggee in tests fails disconnect and reconnect works partially the session is able to reconnect but fails to receive any response realized response request disconnect none any response request disconnect none any by response request disconnect none waiting for ptvsd to disconnect finalizing bchan continued new debug session with method attach socket import trying to connect to ptvsd successfully connected to ptvsd ptvsd seq type request command initialize arguments adapterid test waiting for response request initialize adapterid test any realized response request initialize adapterid test any response request initialize adapterid test any by response request initialize adapterid test eoferror no response failed run following test to repro py m pytest vvs pytests func test attach py test reattach | 0 |
31,898 | 15,116,016,416 | IssuesEvent | 2021-02-09 05:52:57 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | tensorflow lite performance slower than regular tensorflow model | TF 1.12 comp:lite type:performance | i converted object detection model into tensorflow lite and i am trying to run it in raspberry pi but the performance of tensorflow lite is slower in desktop. How can i increase the performance speed of the tensorflow lite? | True | tensorflow lite performance slower than regular tensorflow model - i converted object detection model into tensorflow lite and i am trying to run it in raspberry pi but the performance of tensorflow lite is slower in desktop. How can i increase the performance speed of the tensorflow lite? | non_defect | tensorflow lite performance slower than regular tensorflow model i converted object detection model into tensorflow lite and i am trying to run it in raspberry pi but the performance of tensorflow lite is slower in desktop how can i increase the performance speed of the tensorflow lite | 0 |
68,765 | 17,395,289,022 | IssuesEvent | 2021-08-02 12:46:27 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | opened | [SB] Email field is not allowing the Domain name extension with more than 3 characters | Bug P1 Study builder | A/R:- Email field not allowing the Email with domain name extension with more than 3 characters Eg. Sample@mail.tech
E/R:- Email field should allow the Domain name extension with more than 3 characters

| 1.0 | [SB] Email field is not allowing the Domain name extension with more than 3 characters - A/R:- Email field not allowing the Email with domain name extension with more than 3 characters Eg. Sample@mail.tech
E/R:- Email field should allow the Domain name extension with more than 3 characters

| non_defect | email field is not allowing the domain name extension with more than characters a r email field not allowing the email with domain name extension with more than characters eg sample mail tech e r email field should allow the domain name extension with more than characters | 0 |
95,611 | 16,096,845,230 | IssuesEvent | 2021-04-27 01:57:26 | benlazarine/ansible | https://api.github.com/repos/benlazarine/ansible | opened | CVE-2020-25659 (Medium) detected in cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl | security vulnerability | ## CVE-2020-25659 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>cryptography is a package which provides cryptographic recipes and primitives to Python developers.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/e2/67/4597fc5d5de01bb44887844647ab8e73239079dd478c35c52d58a9eb3d45/cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/e2/67/4597fc5d5de01bb44887844647ab8e73239079dd478c35c52d58a9eb3d45/cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: ansible/lib/ansible/test-requirements.txt</p>
<p>Path to vulnerable library: ansible/lib/ansible/test-requirements.txt,ansible,ansible/test/utils/tox/requirements.txt</p>
<p>
Dependency Hierarchy:
- paramiko-2.6.0-py2.py3-none-any.whl (Root Library)
- :x: **cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
python-cryptography 3.2 is vulnerable to Bleichenbacher timing attacks in the RSA decryption API, via timed processing of valid PKCS#1 v1.5 ciphertext.
<p>Publish Date: 2021-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25659>CVE-2020-25659</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/pyca/cryptography/security/advisories/GHSA-hggm-jpg3-v476">https://github.com/pyca/cryptography/security/advisories/GHSA-hggm-jpg3-v476</a></p>
<p>Release Date: 2020-09-17</p>
<p>Fix Resolution: 3.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-25659 (Medium) detected in cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2020-25659 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>cryptography is a package which provides cryptographic recipes and primitives to Python developers.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/e2/67/4597fc5d5de01bb44887844647ab8e73239079dd478c35c52d58a9eb3d45/cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/e2/67/4597fc5d5de01bb44887844647ab8e73239079dd478c35c52d58a9eb3d45/cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: ansible/lib/ansible/test-requirements.txt</p>
<p>Path to vulnerable library: ansible/lib/ansible/test-requirements.txt,ansible,ansible/test/utils/tox/requirements.txt</p>
<p>
Dependency Hierarchy:
- paramiko-2.6.0-py2.py3-none-any.whl (Root Library)
- :x: **cryptography-2.8-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
python-cryptography 3.2 is vulnerable to Bleichenbacher timing attacks in the RSA decryption API, via timed processing of valid PKCS#1 v1.5 ciphertext.
<p>Publish Date: 2021-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25659>CVE-2020-25659</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/pyca/cryptography/security/advisories/GHSA-hggm-jpg3-v476">https://github.com/pyca/cryptography/security/advisories/GHSA-hggm-jpg3-v476</a></p>
<p>Release Date: 2020-09-17</p>
<p>Fix Resolution: 3.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in cryptography whl cve medium severity vulnerability vulnerable library cryptography whl cryptography is a package which provides cryptographic recipes and primitives to python developers library home page a href path to dependency file ansible lib ansible test requirements txt path to vulnerable library ansible lib ansible test requirements txt ansible ansible test utils tox requirements txt dependency hierarchy paramiko none any whl root library x cryptography whl vulnerable library vulnerability details python cryptography is vulnerable to bleichenbacher timing attacks in the rsa decryption api via timed processing of valid pkcs ciphertext publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
34,751 | 7,459,967,728 | IssuesEvent | 2018-03-30 17:38:22 | kerdokullamae/test_koik_issued | https://api.github.com/repos/kerdokullamae/test_koik_issued | closed | KÜ lisamine ei tööta | C: AIS P: high R: fixed T: defect | **Reported by eilika on 1 Feb 2016 13:19 UTC**
Ei saa lisada uut KÜ-d. Kui valida "Lisa uus arhiiv/alam-üksus", siis tasandi valikust edasi ei lähe. | 1.0 | KÜ lisamine ei tööta - **Reported by eilika on 1 Feb 2016 13:19 UTC**
Ei saa lisada uut KÜ-d. Kui valida "Lisa uus arhiiv/alam-üksus", siis tasandi valikust edasi ei lähe. | defect | kü lisamine ei tööta reported by eilika on feb utc ei saa lisada uut kü d kui valida lisa uus arhiiv alam üksus siis tasandi valikust edasi ei lähe | 1 |
188,316 | 6,775,214,907 | IssuesEvent | 2017-10-27 13:32:59 | mozilla/addons-frontend | https://api.github.com/repos/mozilla/addons-frontend | reopened | Link based install button doesn't appear to send explicit tracking | component: analytics priority: mvp-polish triaged type: regression | The switch-based button had explicit tracking based on the various states of interaction. Having removed that to use a direct link button, we lost that tracking.
STR
- Go to an add-on detail page
- Open the network panel in Devtools
- Click the install button
Expected: a request is made to `https://www.google-analytics.com/r/collect`
Actual: no request is made
QA Notes:
- Please also make sure the Discopane install button does not have regressions. It should already be sending tracking events.
| 1.0 | Link based install button doesn't appear to send explicit tracking - The switch-based button had explicit tracking based on the various states of interaction. Having removed that to use a direct link button, we lost that tracking.
STR
- Go to an add-on detail page
- Open the network panel in Devtools
- Click the install button
Expected: a request is made to `https://www.google-analytics.com/r/collect`
Actual: no request is made
QA Notes:
- Please also make sure the Discopane install button does not have regressions. It should already be sending tracking events.
| non_defect | link based install button doesn t appear to send explicit tracking the switch based button had explicit tracking based on the various states of interaction having removed that to use a direct link button we lost that tracking str go to an add on detail page open the network panel in devtools click the install button expected a request is made to actual no request is made qa notes please also make sure the discopane install button does not have regressions it should already be sending tracking events | 0 |
49,976 | 13,187,302,291 | IssuesEvent | 2020-08-13 02:59:03 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | GRL() function in phys_services.goodrunlist is confused by 2017 level2pass2, as regular level2 should be used (Trac #2396) | Incomplete Migration Migrated from Trac combo core defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2396">https://code.icecube.wisc.edu/ticket/2396</a>, reported by aburgman and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"description": "At the start of season 2017 level2pass2 became the normal level2. But in /data/exp 2017 still has a level2pass2 folder that only contains test data. This confuses the GRL() function in phys_services.goodrunlist, as it attempts to load good run numbers both from the regular level2 2017 good run list and the level2pass2 2017 good run list (and therefore gets duplicates).\n\nThe recomendation to use GRL() comes from https://wiki.icecube.wisc.edu/index.php/Offline_Processing_L2_and_L3#Find_the_Data_Folder_and_the_GCD_File\n\nA minimal working example is:\n{{{\nfrom icecube.phys_services.goodrunlist import GRL\ngrl=GRL(pass2=True)\n}}}\n\nand the error message is:\n{{{\nException Traceback (most recent call last)\n<ipython-input-2-5b8490a4870d> in <module>()\n----> 1 grl=GRL(pass2=True)\n\n/home/aburgman/icecode/meta-projects/icerec/V05-02-04/RHEL_7_x86_64/lib/icecube/phys_services/goodrunlist.pyc in GRL(pass2)\n 422 \n 423 for l in lists:\n--> 424 grl.load(l)\n 425 \n 426 return grl\n\n/home/aburgman/icecode/meta-projects/icerec/V05-02-04/RHEL_7_x86_64/lib/icecube/phys_services/goodrunlist.pyc in load(self, path)\n 285 data['pass'] = 1 if not pass2 else 2\n 286 \n--> 287 self.add_run(data)\n 288 \n 289 if ic79_pass1_mode:\n\n/home/aburgman/icecode/meta-projects/icerec/V05-02-04/RHEL_7_x86_64/lib/icecube/phys_services/goodrunlist.pyc in add_run(self, data)\n 299 \n 300 if self.has_run(run_id):\n--> 301 raise Exception(\"Run {run_id} has already been added to the GRL.\".format(run_id = run_id))\n 302 \n 303 def convert_to_number(num, num_decimals):\n\nException: Run 129550 has already been added to the GRL.\n}}}",
"reporter": "aburgman",
"cc": "",
"resolution": "fixed",
"_ts": "1593001902142004",
"component": "combo core",
"summary": "GRL() function in phys_services.goodrunlist is confused by 2017 level2pass2, as regular level2 should be used",
"priority": "normal",
"keywords": "",
"time": "2020-01-07T23:17:00",
"milestone": "Autumnal Equinox 2020",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| 1.0 | GRL() function in phys_services.goodrunlist is confused by 2017 level2pass2, as regular level2 should be used (Trac #2396) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2396">https://code.icecube.wisc.edu/ticket/2396</a>, reported by aburgman and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"description": "At the start of season 2017 level2pass2 became the normal level2. But in /data/exp 2017 still has a level2pass2 folder that only contains test data. This confuses the GRL() function in phys_services.goodrunlist, as it attempts to load good run numbers both from the regular level2 2017 good run list and the level2pass2 2017 good run list (and therefore gets duplicates).\n\nThe recomendation to use GRL() comes from https://wiki.icecube.wisc.edu/index.php/Offline_Processing_L2_and_L3#Find_the_Data_Folder_and_the_GCD_File\n\nA minimal working example is:\n{{{\nfrom icecube.phys_services.goodrunlist import GRL\ngrl=GRL(pass2=True)\n}}}\n\nand the error message is:\n{{{\nException Traceback (most recent call last)\n<ipython-input-2-5b8490a4870d> in <module>()\n----> 1 grl=GRL(pass2=True)\n\n/home/aburgman/icecode/meta-projects/icerec/V05-02-04/RHEL_7_x86_64/lib/icecube/phys_services/goodrunlist.pyc in GRL(pass2)\n 422 \n 423 for l in lists:\n--> 424 grl.load(l)\n 425 \n 426 return grl\n\n/home/aburgman/icecode/meta-projects/icerec/V05-02-04/RHEL_7_x86_64/lib/icecube/phys_services/goodrunlist.pyc in load(self, path)\n 285 data['pass'] = 1 if not pass2 else 2\n 286 \n--> 287 self.add_run(data)\n 288 \n 289 if ic79_pass1_mode:\n\n/home/aburgman/icecode/meta-projects/icerec/V05-02-04/RHEL_7_x86_64/lib/icecube/phys_services/goodrunlist.pyc in add_run(self, data)\n 299 \n 300 if self.has_run(run_id):\n--> 301 raise Exception(\"Run {run_id} has already been added to the GRL.\".format(run_id = run_id))\n 302 \n 303 def convert_to_number(num, num_decimals):\n\nException: Run 129550 has already been added to the GRL.\n}}}",
"reporter": "aburgman",
"cc": "",
"resolution": "fixed",
"_ts": "1593001902142004",
"component": "combo core",
"summary": "GRL() function in phys_services.goodrunlist is confused by 2017 level2pass2, as regular level2 should be used",
"priority": "normal",
"keywords": "",
"time": "2020-01-07T23:17:00",
"milestone": "Autumnal Equinox 2020",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| defect | grl function in phys services goodrunlist is confused by as regular should be used trac migrated from json status closed changetime description at the start of season became the normal but in data exp still has a folder that only contains test data this confuses the grl function in phys services goodrunlist as it attempts to load good run numbers both from the regular good run list and the good run list and therefore gets duplicates n nthe recomendation to use grl comes from minimal working example is n nfrom icecube phys services goodrunlist import grl ngrl grl true n n nand the error message is n nexception traceback most recent call last n in n grl grl true n n home aburgman icecode meta projects icerec rhel lib icecube phys services goodrunlist pyc in grl n n for l in lists n grl load l n n return grl n n home aburgman icecode meta projects icerec rhel lib icecube phys services goodrunlist pyc in load self path n data if not else n n self add run data n n if mode n n home aburgman icecode meta projects icerec rhel lib icecube phys services goodrunlist pyc in add run self data n n if self has run run id n raise exception run run id has already been added to the grl format run id run id n n def convert to number num num decimals n nexception run has already been added to the grl n reporter aburgman cc resolution fixed ts component combo core summary grl function in phys services goodrunlist is confused by as regular should be used priority normal keywords time milestone autumnal equinox owner type defect | 1 |
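The exception in the traceback above is raised by add_run() as soon as a run id occurs in two loaded lists (here, the 2017 level2 and level2pass2 good run lists). A toy reproduction of the failure mode follows, with a hypothetical skip_duplicates escape hatch that is not part of the real phys_services API:

```python
class GRL:
    """Toy stand-in for a good-run list that rejects duplicate run ids."""
    def __init__(self):
        self.runs = {}

    def add_run(self, run_id, data=None, skip_duplicates=False):
        if run_id in self.runs:
            if skip_duplicates:
                return  # e.g. the same 2017 run listed in both level2 and level2pass2
            raise Exception("Run {} has already been added to the GRL.".format(run_id))
        self.runs[run_id] = data

grl = GRL()
grl.add_run(129550)                         # from the level2 2017 list
grl.add_run(129550, skip_duplicates=True)   # same run from level2pass2: ignored
print(len(grl.runs))  # 1
```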
53,359 | 13,261,465,284 | IssuesEvent | 2020-08-20 19:57:01 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | CoincSuite go through the FIXME's (Trac #1233) | Migrated from Trac combo reconstruction defect | There are several FIXME's. Please make the intended changes so the FIXME's would go away:
private/test/CoincSuiteHelpersTest.cxx
private/CoincSuite/lib/PartialCOG.cxx
private/CoincSuite/Modules/DecisionMaker.cxx
private/CoincSuite/Modules/AfterpulseDiscard.cxx
python/coincsuite.py
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1233">https://code.icecube.wisc.edu/projects/icecube/ticket/1233</a>, reported by jtatar</summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:55",
"_ts": "1550067295757382",
"description": "There are several FIXME's. Please make the intended changes so the FIXME's would go away:\n\nprivate/test/CoincSuiteHelpersTest.cxx\nprivate/CoincSuite/lib/PartialCOG.cxx\nprivate/CoincSuite/Modules/DecisionMaker.cxx\nprivate/CoincSuite/Modules/AfterpulseDiscard.cxx\npython/coincsuite.py",
"reporter": "jtatar",
"cc": "",
"resolution": "wontfix",
"time": "2015-08-19T22:39:20",
"component": "combo reconstruction",
"summary": "CoincSuite go through the FIXME's",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| 1.0 | CoincSuite go through the FIXME's (Trac #1233) - There are several FIXME's. Please make the intended changes so the FIXME's would go away:
private/test/CoincSuiteHelpersTest.cxx
private/CoincSuite/lib/PartialCOG.cxx
private/CoincSuite/Modules/DecisionMaker.cxx
private/CoincSuite/Modules/AfterpulseDiscard.cxx
python/coincsuite.py
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1233">https://code.icecube.wisc.edu/projects/icecube/ticket/1233</a>, reported by jtatar</summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:55",
"_ts": "1550067295757382",
"description": "There are several FIXME's. Please make the intended changes so the FIXME's would go away:\n\nprivate/test/CoincSuiteHelpersTest.cxx\nprivate/CoincSuite/lib/PartialCOG.cxx\nprivate/CoincSuite/Modules/DecisionMaker.cxx\nprivate/CoincSuite/Modules/AfterpulseDiscard.cxx\npython/coincsuite.py",
"reporter": "jtatar",
"cc": "",
"resolution": "wontfix",
"time": "2015-08-19T22:39:20",
"component": "combo reconstruction",
"summary": "CoincSuite go through the FIXME's",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| defect | coincsuite go through the fixme s trac there are several fixme s please make the intended changes so the fixme s would go away private test coincsuitehelperstest cxx private coincsuite lib partialcog cxx private coincsuite modules decisionmaker cxx private coincsuite modules afterpulsediscard cxx python coincsuite py migrated from json status closed changetime ts description there are several fixme s please make the intended changes so the fixme s would go away n nprivate test coincsuitehelperstest cxx nprivate coincsuite lib partialcog cxx nprivate coincsuite modules decisionmaker cxx nprivate coincsuite modules afterpulsediscard cxx npython coincsuite py reporter jtatar cc resolution wontfix time component combo reconstruction summary coincsuite go through the fixme s priority blocker keywords milestone owner type defect | 1 |
54,353 | 13,616,315,778 | IssuesEvent | 2020-09-23 15:28:33 | SAP/fundamental-styles | https://api.github.com/repos/SAP/fundamental-styles | closed | Select: cozy mode select Icon misaligned and other issues | 0.12.0 Bug Defect Hunting | 
Also the focus looks odd

| 1.0 | Select: cozy mode select Icon misaligned and other issues - 
Also the focus looks odd

| defect | select cozy mode select icon misaligned and other issues also the focus looks odd | 1 |
36,323 | 7,889,750,765 | IssuesEvent | 2018-06-28 06:11:30 | CenturyLinkCloud/mdw | https://api.github.com/repos/CenturyLinkCloud/mdw | closed | Workflow tab process instance item in Firefox | defect | Started and Completed dates show as NAN in Firefox browser for the Workflow processes list. | 1.0 | Workflow tab process instance item in Firefox - Started and Completed dates show as NAN in Firefox browser for the Workflow processes list. | defect | workflow tab process instance item in firefox started and completed dates show as nan in firefox browser for the workflow processes list | 1 |
14,887 | 2,831,390,178 | IssuesEvent | 2015-05-24 15:55:02 | nobodyguy/dslrdashboard | https://api.github.com/repos/nobodyguy/dslrdashboard | closed | Bulb not working with D7000 | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
The dialog box appears, I enter the exposure time and then it disappears
What version of the product are you using? On what operating system?
The newest version, Android up to date
Please provide any additional information below.
Would love this to work!
```
Original issue reported on code.google.com by `aro...@gmail.com` on 28 May 2013 at 12:29 | 1.0 | Bulb not working with D7000 - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
The dialog box appears, I enter the exposure time and then it disappears
What version of the product are you using? On what operating system?
The newest version, Android up to date
Please provide any additional information below.
Would love this to work!
```
Original issue reported on code.google.com by `aro...@gmail.com` on 28 May 2013 at 12:29 | defect | bulb not working with what steps will reproduce the problem what is the expected output what do you see instead the dialog box appears i entre the exposure time and then its disappears what version of the product are you using on what operating system the newest version android up to date please provide any additional information below would love this to work original issue reported on code google com by aro gmail com on may at | 1 |
70,316 | 23,116,184,644 | IssuesEvent | 2022-07-27 16:54:00 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | opened | VRF central condenser EIR curves not checked and bounded causing negative cooling electricity | Defect | Issue overview
--------------
It was discovered that in the curve-based AirConditioner:VariableRefrigerantFlow simulation code, the EIR (as a function of PLR) curve bounds were not checked to ensure a >0 value. In this case, when a bad curve is entered, negative cooling electricity will be calculated without any warnings.
It is suggested to catch this exception by:
1. issuing a recurring warning message about the situation;
2. resetting the negative curve value to 0.0 before continuing the simulation.
The same situation is happening to the heating EIR in the code. Similar treatment is suggested for the heating part as well.
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version): All
- Version of EnergyPlus (if using an intermediate build, include SHA): v22.1
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| 1.0 | VRF central condenser EIR curves not checked and bounded causing negative cooling electricity - Issue overview
--------------
It was discovered that in the curve-based AirConditioner:VariableRefrigerantFlow simulation code, the EIR (as a function of PLR) curve bounds were not checked to ensure a >0 value. In this case, when a bad curve is entered, negative cooling electricity will be calculated without any warnings.
It is suggested to catch this exception by:
1. issuing a recurring warning message about the situation;
2. resetting the negative curve value to 0.0 before continuing the simulation.
The same situation is happening to the heating EIR in the code. Similar treatment is suggested for the heating part as well.
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version): All
- Version of EnergyPlus (if using an intermediate build, include SHA): v22.1
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| defect | vrf central condenser eir curves not checked and bounded causing negative cooling electricity issue overview it was discovered that in the curved based airconditioner variablerefrigerantflow simulation code the eir as a function of plr curve bounds were not checked to ensure a value in this case when a bad curve is entered negative cooling electricity will be calculated without any warnings it is suggested to catch this exception by giving with recurring warning messaged about the situation reset the negative curve value to before continuing the simulation the same situation is happening to the heating eir in the code similar treatment is suggested for the heating part as well details some additional details for this issue if relevant platform operating system version all version of energyplus if using an intermediate build include sha unmethours link or helpdesk ticket number checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect | 1 |
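The two suggested treatments in the record above (a recurring warning plus clamping the curve value to zero) can be sketched as follows. This is an illustrative Python sketch, not EnergyPlus's actual C++ code, and the function name is made up:

```python
import warnings

def bounded_eir(curve_value, component_name):
    """Clamp a curve-evaluated EIR modifier at zero, warning when it goes negative."""
    if curve_value < 0.0:
        warnings.warn(
            "{}: EIR-f-PLR curve produced a negative value ({:.3f}); "
            "resetting to 0.0".format(component_name, curve_value)
        )
        return 0.0
    return curve_value

print(bounded_eir(-0.25, "VRF Condenser"))  # 0.0, with a warning
print(bounded_eir(0.85, "VRF Condenser"))   # 0.85
```

Note that Python's warnings module deduplicates repeated warnings by default; EnergyPlus itself would route this through its recurring-warning machinery so the count of occurrences is reported at the end of the run.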
60,527 | 17,023,448,599 | IssuesEvent | 2021-07-03 02:05:09 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Coastline Error checker unavailable | Component: utils Priority: major Resolution: invalid Type: defect | **[Submitted to the original trac issue database at 9.35am, Monday, 27th July 2009]**
The coastline error checker[1] has been offline since June 2009. Not only does this hinder fixing of coastline errors, but since it generates the shapefiles used for the Mapnik coastline data it means that the Mapnik layer is outdated.
[1] http://wiki.openstreetmap.org/wiki/Coastline_error_checker | 1.0 | Coastline Error checker unavailable - **[Submitted to the original trac issue database at 9.35am, Monday, 27th July 2009]**
The coastline error checker[1] has been offline since June 2009. Not only does this hinder fixing of coastline errors, but since it generates the shapefiles used for the Mapnik coastline data it means that the Mapnik layer is outdated.
[1] http://wiki.openstreetmap.org/wiki/Coastline_error_checker | defect | coastline error checker unavailable the coastline error checker has been offline since june not only does this hinder fixing of coastline errors but since it generates the shapefiles used for the mapnik caostline data it means that the mapnik layer is outdated | 1 |
508,148 | 14,690,551,756 | IssuesEvent | 2021-01-02 15:48:36 | ita-social-projects/horondi_client_fe | https://api.github.com/repos/ita-social-projects/horondi_client_fe | opened | [Створити категорію] Category with too short and too long values in mandatory fields is created | Functional bug priority: medium | **Environment:** Windows 10 Pro, Google Chrome, version 86.0.4240.183.
**Reproducible:** always.
**Build found:** commit 1fdc570
**Preconditions:**
- Go to https://horondi-admin-staging.azurewebsites.net
- Log into Administrator page as Administrator
**Steps to reproduce:**
1. Go to 'Категорії' menu item.
2. Upload a photo of less than 15MB.
3. Upload a photo.
4. Fill in 'Код категорії' text field (add less than 2 or more than 30 characters, e.g. 'Thisistestcodeofcategoryfieldmessage').
5. Fill in 'Назва категорії' field (add less than 3 or more than 50 Cyrillic letters, e.g. 'Іа').
6. Fill in 'Name of category' field (add less than 3 or more than 50 English letters, e.g. 'Ia').
**Actual result:**
1. No error messages occur.
2. Category with invalid data is created.
**Expected result:**
1. New category is not created
2. Error messages are displayed near text fields
- 'Код категорії': 'Це поле повинно містити від 2 до 30 символів’
- 'Назва категорії': ‘Назва повинна містити від 3 до 50 символів’
- 'Name of category': 'Name of category ‘ повинно містити від 3 до 50 символів’
**User story and test case links:**
User story [LVHRB-15](https://jira.softserve.academy/browse/LVHRB-15)
Test [LVHRB-46](https://jira.softserve.academy/browse/LVHRB-46) | 1.0 | [Створити категорію] Category with too short and too long values in mandatory fields is created - **Environment:** Windows 10 Pro, Google Chrome, version 86.0.4240.183.
**Reproducible:** always.
**Build found:** commit 1fdc570
**Preconditions:**
- Go to https://horondi-admin-staging.azurewebsites.net
- Log into Administrator page as Administrator
**Steps to reproduce:**
1. Go to 'Категорії' menu item.
2. Upload a photo of less than 15MB.
3. Upload a photo.
4. Fill in 'Код категорії' text field (add less than 2 or more than 30 characters, e.g. 'Thisistestcodeofcategoryfieldmessage').
5. Fill in 'Назва категорії' field (add less than 3 or more than 50 Cyrillic letters, e.g. 'Іа').
6. Fill in 'Name of category' field (add less than 3 or more than 50 English letters, e.g. 'Ia').
**Actual result:**
1. No error messages occur.
2. Category with invalid data is created.
**Expected result:**
1. New category is not created
2. Error messages are displayed near text fields
- 'Код категорії': 'Це поле повинно містити від 2 до 30 символів’
- 'Назва категорії': ‘Назва повинна містити від 3 до 50 символів’
- 'Name of category': 'Name of category ‘ повинно містити від 3 до 50 символів’
**User story and test case links:**
User story [LVHRB-15](https://jira.softserve.academy/browse/LVHRB-15)
Test [LVHRB-46](https://jira.softserve.academy/browse/LVHRB-46) | non_defect | category with too short and atoo long values in mandatory fields is created environment windows pro google chrome version reproducible always build found commit preconditions go to log into administrator page as administrator steps to reproduce go to категорії menu item upload photo more less upload a photo fill in код категорії text field add less than or more than characters e g thisistestcodeofcategoryfieldmessage fill in назва категорії’ field add less than or more than cyrylic letters e g іа fill in name of category field add less than or more than english letters e g ia actual result no error messages occurs category with invalid data is created expected result new category is not created error messages are displayed near text fields код категорії це поле повинно містити від до символів’ назва категорії ‘назва повинна містити від до символів’ name of category name of category ‘ повинно містити від до символів’ user story and test case links user story lvhrb test | 0 |
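The expected behaviour in the record above reduces to per-field length checks (2 to 30 characters for the code, 3 to 50 for each name). A minimal sketch with hypothetical field keys, using the bounds quoted in the report:

```python
def validate_length(value, min_len, max_len):
    """True when the value's length is within the inclusive bounds."""
    return min_len <= len(value) <= max_len

RULES = {
    "code": (2, 30),      # 'Код категорії'
    "name_uk": (3, 50),   # 'Назва категорії'
    "name_en": (3, 50),   # 'Name of category'
}

def invalid_fields(category):
    """List the field keys whose values fall outside their length bounds."""
    return [f for f, (lo, hi) in RULES.items()
            if not validate_length(category.get(f, ""), lo, hi)]

# The exact invalid inputs from the bug report's reproduction steps:
print(invalid_fields({"code": "Thisistestcodeofcategoryfieldmessage",
                      "name_uk": "Іа", "name_en": "Ia"}))
```

A create action guarded by such a check would refuse the category and surface the per-field error messages listed in the expected result.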
539,234 | 15,785,646,531 | IssuesEvent | 2021-04-01 16:37:03 | celo-org/celo-monorepo | https://api.github.com/repos/celo-org/celo-monorepo | opened | Isolate ImpactMarket users impact on Attestation Service completion rates | CAP Component: Identity Component: Verification Priority: P1 | We've seen a significant increase in incompletion rates in the Attestation Service metrics starting around Feb 21st. The hypothesis is that it’s largely driven by abuse within the ImpactMarket app. We’ve tracked down small issues here and there (ex. Twilio outages and AS misconfigurations) but none of them can explain the general increase in abandonment from these set of regions (BR, VE, UG). When investigating the increase from Uganda, I chatted with the ImpactMarket team who mentioned they have been seeing large amounts of abuse from some of these communities. They’ve been taking steps to shut down to abuse they’ve detected as well as improve their sybil resistance going forward.
We should work to isolate the ImpactMarket users from the attestations metrics to validate the hypothesis.
- Find a way to identify ImpactMarket users (ex. transfer events)
- Measure the completion rate for these users compared to the completion rate of everyone else | 1.0 | Isolate ImpactMarket users impact on Attestation Service completion rates - We've seen a significant increase in incompletion rates in the Attestation Service metrics starting around Feb 21st. The hypothesis is that it’s largely driven by abuse within the ImpactMarket app. We’ve tracked down small issues here and there (ex. Twilio outages and AS misconfigurations) but none of them can explain the general increase in abandonment from these set of regions (BR, VE, UG). When investigating the increase from Uganda, I chatted with the ImpactMarket team who mentioned they have been seeing large amounts of abuse from some of these communities. They’ve been taking steps to shut down to abuse they’ve detected as well as improve their sybil resistance going forward.
We should work to isolate the ImpactMarket users from the attestations metrics to validate the hypothesis.
- Find a way to identify ImpactMarket users (ex. transfer events)
- Measure the completion rate for these users compared to the completion rate of everyone else | non_defect | isolate impactmarket users impact on attestation service completion rates we ve seen a significant increase in incompletion rates in the attestation service metrics starting around feb the hypothesis is that it’s largely driven by abuse within the impactmarket app we’ve tracked down small issues here and there ex twilio outages and as misconfigurations but none of them can explain the general increase in abandonment from these set of regions br ve ug when investigating the increase from uganda i chatted with the impactmarket team who mentioned they have been seeing large amounts of abuse from some of these communities they’ve been taking steps to shut down to abuse they’ve detected as well as improve their sybil resistance going forward we should work to isolate the impactmarket users from the attestations metrics to validate the hypothesis find a way to identify impactmarket users ex transfer events measure the completion rate for these users compared to the completion rate of everyone else | 0 |
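The second bullet in the record above (compare the cohort's completion rate against everyone else's) can be prototyped with a simple split; the user ids and flags below are made up for illustration:

```python
def completion_rate(attempts):
    """Fraction of attestation attempts that completed."""
    return sum(1 for a in attempts if a["completed"]) / len(attempts) if attempts else 0.0

attempts = [
    {"user": "a", "impact_market": True,  "completed": False},
    {"user": "b", "impact_market": True,  "completed": False},
    {"user": "c", "impact_market": False, "completed": True},
    {"user": "d", "impact_market": False, "completed": True},
]

# Split attempts into the ImpactMarket cohort and everyone else.
cohort = [a for a in attempts if a["impact_market"]]
others = [a for a in attempts if not a["impact_market"]]
print(completion_rate(cohort), completion_rate(others))  # 0.0 1.0
```

In practice the impact_market flag would come from the identification step in the first bullet (e.g. matching on-chain transfer events), which this sketch leaves out.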
140,026 | 31,815,508,485 | IssuesEvent | 2023-09-13 20:07:56 | LouisSeranne/DICJ_Stimulus_A23 | https://api.github.com/repos/LouisSeranne/DICJ_Stimulus_A23 | closed | Check node prerequisites | 2 enhancement fix code | Check what defines the relations between the nodes and how the priority system is configured | 1.0 | Check node prerequisites - Check what defines the relations between the nodes and how the priority system is configured | non_defect | check node prerequisites check what defines the relations between the nodes and how the priority system is configured | 0
61,079 | 17,023,596,263 | IssuesEvent | 2021-07-03 02:50:17 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | mod_tile's debian package can't be build without munin. | Component: mapnik Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 1.24pm, Sunday, 23rd May 2010]**
I am installing mod_tile on an Ubuntu Lucid (10.04) machine. I don't have munin installed. I tried to build mod_tile (r21420) using ```dpkg-buildpackage``` and it failed with the following error message:
```
...
dh_strip
dh_compress
dh_fixperms
dh_installdeb
dh_shlibdeps
dpkg-shlibdeps: warning: debian/libapache2-mod-tile/usr/lib/apache2/modules/mod_tile.so contains an unresolvable reference to symbol apr_ctime: it's probably a plugin.
dpkg-shlibdeps: warning: 36 other similar warnings have been skipped (use -v to see them all).
dh_gencontrol
dpkg-gencontrol: warning: unused substitution variable ${shlibs:Depends}
dpkg-gencontrol: warning: unused substitution variable ${shlibs:Depends}
dh_md5sums
dh_builddeb
dpkg-deb: building package `libapache2-mod-tile' in `../libapache2-mod-tile_0.3_i386.deb'.
dpkg-deb: building package `renderd' in `../renderd_0.3_i386.deb'.
dpkg-deb: conffile `/etc/munin/plugins/renderd_queue' does not appear in package
dh_builddeb: dpkg-deb --build debian/renderd .. returned exit code 2
make: *** [binary-arch] Error 9
dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2
```
This is because ```debian/renderd.conffiles``` looks like this:
```
/etc/renderd.conf
/etc/munin/plugins/renderd_queue
/etc/munin/plugins/renderd_processed
```
One solution is to remove the munin stuff from that file, with the following patch:
```
Index: debian/renderd.conffiles
===================================================================
--- debian/renderd.conffiles (revision 21420)
+++ debian/renderd.conffiles (working copy)
@@ -1,3 +1 @@
/etc/renderd.conf
-/etc/munin/plugins/renderd_queue
-/etc/munin/plugins/renderd_processed
```
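For reference, the patch above amounts to deleting the two munin entries from the conffiles list; a minimal scripted sketch of the same edit, run here on a scratch copy of the file (assumes GNU sed for `-i`):

```shell
# Recreate the conffiles list as shipped in debian/renderd.conffiles
# (scratch copy for illustration; the real file lives in the mod_tile source tree)
cat > renderd.conffiles <<'EOF'
/etc/renderd.conf
/etc/munin/plugins/renderd_queue
/etc/munin/plugins/renderd_processed
EOF

# Drop every munin-related entry, leaving only /etc/renderd.conf
sed -i '/munin/d' renderd.conffiles

cat renderd.conffiles
```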
This is what I'm using. I'm not sure what effect this will have if you are using munin, and I'm not sure if it's possible to have it work OK regardless of whether you're using munin. If it's not possible to remove those munin conffiles, then renderd should depend on munin. | 1.0 | mod_tile's debian package can't be built without munin. - **[Submitted to the original trac issue database at 1.24pm, Sunday, 23rd May 2010]**
I am installing mod_tile on an Ubuntu Lucid (10.04) machine. I don't have munin installed. I tried to build mod_tile (r21420) using ```dpkg-buildpackage``` and it failed with the following error message:
```
...
dh_strip
dh_compress
dh_fixperms
dh_installdeb
dh_shlibdeps
dpkg-shlibdeps: warning: debian/libapache2-mod-tile/usr/lib/apache2/modules/mod_tile.so contains an unresolvable reference to symbol apr_ctime: it's probably a plugin.
dpkg-shlibdeps: warning: 36 other similar warnings have been skipped (use -v to see them all).
dh_gencontrol
dpkg-gencontrol: warning: unused substitution variable ${shlibs:Depends}
dpkg-gencontrol: warning: unused substitution variable ${shlibs:Depends}
dh_md5sums
dh_builddeb
dpkg-deb: building package `libapache2-mod-tile' in `../libapache2-mod-tile_0.3_i386.deb'.
dpkg-deb: building package `renderd' in `../renderd_0.3_i386.deb'.
dpkg-deb: conffile `/etc/munin/plugins/renderd_queue' does not appear in package
dh_builddeb: dpkg-deb --build debian/renderd .. returned exit code 2
make: *** [binary-arch] Error 9
dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2
```
This is because ```debian/renderd.conffiles``` looks like this:
```
/etc/renderd.conf
/etc/munin/plugins/renderd_queue
/etc/munin/plugins/renderd_processed
```
One solution is to remove the munin stuff from that file, with the following patch:
```
Index: debian/renderd.conffiles
===================================================================
--- debian/renderd.conffiles (revision 21420)
+++ debian/renderd.conffiles (working copy)
@@ -1,3 +1 @@
/etc/renderd.conf
-/etc/munin/plugins/renderd_queue
-/etc/munin/plugins/renderd_processed
```
This is what I'm using. I'm not sure what effect this will have if you are using munin, and I'm not sure if it's possible to have it work OK regardless of whether you're using munin. If it's not possible to remove those munin conffiles, then renderd should depend on munin. | defect | mod tile s debian package can t be build without munin i am installing mod tile on an ubuntu lucid machine i don t have munin installed i tried to build mod tile using dpkg buildpackages and it failed with the following error message dh strip dh compress dh fixperms dh installdeb dh shlibdeps dpkg shlibdeps warning debian mod tile usr lib modules mod tile so contains an unresolvable reference to symbol apr ctime it s probably a plugin dpkg shlibdeps warning other similar warnings have been skipped use v to see them all dh gencontrol dpkg gencontrol warning unused substitution variable shlibs depends dpkg gencontrol warning unused substitution variable shlibs depends dh dh builddeb dpkg deb building package mod tile in mod tile deb dpkg deb building package renderd in renderd deb dpkg deb conffile etc munin plugins renderd queue does not appear in package dh builddeb dpkg deb build debian renderd returned exit code make error dpkg buildpackage error fakeroot debian rules binary gave error exit status this is because debian renderd conffiles is like this etc renderd conf etc munin plugins renderd queue etc munin plugins renderd processed one solution is to remove the munin stuff from that file with the following patch index debian renderd conffiles debian renderd conffiles revision debian renderd conffiles working copy etc renderd conf etc munin plugins renderd queue etc munin plugins renderd processed this is what i m using i m not sure what affect this will have if you are using munin and i m not sure if it s possible to have it work ok regardless if you re using munin if it s not possible to remove that munin conffiles then renderd should depend on munin | 1
172,092 | 21,031,298,950 | IssuesEvent | 2022-03-31 01:19:11 | saurockSaurav/NextGen-SpringSecurityWithJWT-deployed-in-AWS | https://api.github.com/repos/saurockSaurav/NextGen-SpringSecurityWithJWT-deployed-in-AWS | opened | CVE-2022-22950 (Medium) detected in spring-expression-5.3.6.jar | security vulnerability | ## CVE-2022-22950 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-expression-5.3.6.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.3.6/spring-expression-5.3.6.jar</p>
<p>
Dependency Hierarchy:
- spring-context-5.3.6.jar (Root Library)
- :x: **spring-expression-5.3.6.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950>CVE-2022-22950</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22950">https://tanzu.vmware.com/security/cve-2022-22950</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-expression:5.3.17</p>
</p>
</details>
<p></p>
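The suggested fix above resolves to `org.springframework:spring-expression:5.3.17`, but the vulnerable jar arrives transitively through `spring-context`. One hypothetical way to force the patched version in this project's `pom.xml` (the coordinates come from the report; placing them under `dependencyManagement` is an assumption about how this build is organized):

```xml
<!-- Hypothetical pom.xml fragment: pin the patched SpEL version so the
     transitive 5.3.6 pulled in by spring-context is overridden. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-expression</artifactId>
      <version>5.3.17</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

After adding the override, `mvn dependency:tree` should show 5.3.17 wherever spring-expression previously resolved to 5.3.6.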
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-22950 (Medium) detected in spring-expression-5.3.6.jar - ## CVE-2022-22950 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-expression-5.3.6.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.3.6/spring-expression-5.3.6.jar</p>
<p>
Dependency Hierarchy:
- spring-context-5.3.6.jar (Root Library)
- :x: **spring-expression-5.3.6.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950>CVE-2022-22950</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22950">https://tanzu.vmware.com/security/cve-2022-22950</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-expression:5.3.17</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in spring expression jar cve medium severity vulnerability vulnerable library spring expression jar spring expression language spel library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org springframework spring expression spring expression jar dependency hierarchy spring context jar root library x spring expression jar vulnerable library found in base branch master vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide a specially crafted spel expression that may cause a denial of service condition publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring expression step up your open source security game with whitesource | 0 |
51,283 | 13,207,407,748 | IssuesEvent | 2020-08-14 22:59:22 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | lazy frame needs docs (Trac #136) | Incomplete Migration Migrated from Trac defect documentation | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/136">https://code.icecube.wisc.edu/projects/icecube/ticket/136</a>, reported by troy and owned by blaufuss</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:56",
"_ts": "1416713876900096",
"description": "",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"time": "2008-09-30T04:42:37",
"component": "documentation",
"summary": "lazy frame needs docs",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
| 1.0 | lazy frame needs docs (Trac #136) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/136">https://code.icecube.wisc.edu/projects/icecube/ticket/136</a>, reported by troy and owned by blaufuss</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:56",
"_ts": "1416713876900096",
"description": "",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"time": "2008-09-30T04:42:37",
"component": "documentation",
"summary": "lazy frame needs docs",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
| defect | lazy frame needs docs trac migrated from json status closed changetime ts description reporter troy cc resolution wont or cant fix time component documentation summary lazy frame needs docs priority normal keywords milestone owner blaufuss type defect | 1 |
91,675 | 8,316,230,166 | IssuesEvent | 2018-09-25 08:27:10 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | teamcity: failed tests on release-2.1: testrace/TestShowTraceReplica, test/TestShowTraceReplica | C-test-failure O-robot | The following tests appear to have failed:
[#923501](https://teamcity.cockroachdb.com/viewLog.html?buildId=923501):
```
--- FAIL: test/TestShowTraceReplica (55.140s)
------- Stdout: -------
W180925 02:17:31.995877 153043 server/status/runtime.go:310 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180925 02:17:32.035498 153043 server/server.go:848 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180925 02:17:32.035744 153043 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.035780 153043 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.038233 153043 server/config.go:493 [n?] 1 storage engine initialized
I180925 02:17:32.038306 153043 server/config.go:496 [n?] RocksDB cache size: 128 MiB
I180925 02:17:32.038324 153043 server/config.go:496 [n?] store 0: in-memory, size 0 B
I180925 02:17:32.045962 153043 util/stop/stopper.go:537 [n?] quiescing; tasks left:
1 [async] closedts-rangefeed-subscriber
I180925 02:17:32.046220 153043 server/node.go:374 [n?] **** cluster 05f9932a-e9df-4384-b665-efa7c2833d8d has been created
I180925 02:17:32.046240 153043 server/server.go:1412 [n?] **** add additional nodes by specifying --join=127.0.0.1:34643
I180925 02:17:32.046383 153043 gossip/gossip.go:408 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34643" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-beta.20180924-149-ge256966" started_at:1537841852046324373
I180925 02:17:32.047439 153043 storage/store.go:1565 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180925 02:17:32.047518 153043 server/node.go:477 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=7.0 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7183.00 p25=7183.00 p50=7183.00 p75=7183.00 p90=7183.00 pMax=7183.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180925 02:17:32.047568 153043 storage/stores.go:242 [n1] read 0 node addresses from persistent storage
I180925 02:17:32.047634 153043 server/node.go:700 [n1] connecting to gossip network to verify cluster ID...
I180925 02:17:32.047731 153083 gossip/gossip.go:940 [n1] gossip connectivity
I180925 02:17:32.048193 153043 server/node.go:725 [n1] node connected via gossip and verified as part of cluster "05f9932a-e9df-4384-b665-efa7c2833d8d"
I180925 02:17:32.048240 153043 server/node.go:549 [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180925 02:17:32.048464 153043 server/status/recorder.go:611 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180925 02:17:32.048501 153043 server/server.go:1803 [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180925 02:17:32.048572 153043 server/server.go:1549 [n1] starting https server at 127.0.0.1:38015 (use: 127.0.0.1:38015)
I180925 02:17:32.048597 153043 server/server.go:1551 [n1] starting grpc/postgres server at 127.0.0.1:34643
I180925 02:17:32.048618 153043 server/server.go:1552 [n1] advertising CockroachDB node at 127.0.0.1:34643
I180925 02:17:32.053650 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/M{in-ax}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.055681 153231 storage/replica_command.go:298 [n1,split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180925 02:17:32.061431 153239 sql/event_log.go:126 [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180925 02:17:32.063933 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/M{in-ax}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.065536 153162 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.065567 153162 storage/split_queue.go:116 [n1,split,s1,r2/1:/{System/-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.066020 153020 storage/replica_command.go:298 [n1,split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I180925 02:17:32.078569 153231 storage/split_queue.go:170 [n1,split,s1,r1/1:/{Min-System/}] split done
I180925 02:17:32.078697 153231 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.110565 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.110665 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/{System/-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.112473 153172 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.112545 153172 storage/split_queue.go:116 [n1,split,s1,r3/1:/{System/NodeL…-Max}] shouldQueue: shouldQ=true priority=1.0
W180925 02:17:32.113400 153025 storage/intent_resolver.go:668 [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=2bad9e2b key=/Table/SystemConfigSpan/Start rw=true pri=0.02109259 iso=SERIALIZABLE stat=PENDING epo=0 ts=1537841852.089209799,0 orig=1537841852.089209799,0 max=1537841852.089209799,0 wto=false rop=false seq=12
I180925 02:17:32.113615 153267 storage/replica_command.go:298 [n1,split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180925 02:17:32.118138 153020 storage/split_queue.go:170 [n1,split,s1,r2/1:/System/{-NodeLive…}] split done
I180925 02:17:32.121982 153178 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.130540 153178 storage/split_queue.go:116 [n1,split,s1,r4/1:/{System/NodeL…-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.130632 153020 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.130834 153282 storage/replica_command.go:298 [n1,split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180925 02:17:32.131549 153267 storage/split_queue.go:170 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] split done
I180925 02:17:32.138100 153023 sql/event_log.go:126 [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180925 02:17:32.141055 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.155146 153191 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.155173 153191 storage/split_queue.go:116 [n1,split,s1,r5/1:/{System/tsd-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.155624 153315 storage/replica_command.go:298 [n1,split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180925 02:17:32.158731 153282 storage/split_queue.go:170 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] split done
I180925 02:17:32.160319 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.160342 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.160359 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.162749 153201 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.162769 153201 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.163214 153303 storage/replica_command.go:298 [n1,split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180925 02:17:32.168450 153315 storage/split_queue.go:170 [n1,split,s1,r5/1:/System/ts{d-e}] split done
I180925 02:17:32.170178 153209 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.170219 153209 storage/split_queue.go:116 [n1,split,s1,r7/1:/{Table/System…-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.170432 153303 storage/split_queue.go:170 [n1,split,s1,r6/1:/{System/tse-Table/System…}] split done
I180925 02:17:32.170722 153332 storage/replica_command.go:298 [n1,split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180925 02:17:32.178467 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/{Table/System…-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.178507 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178527 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178549 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178571 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178591 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178613 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178691 153073 sql/event_log.go:126 [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180925 02:17:32.180958 153155 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.180991 153155 storage/split_queue.go:116 [n1,split,s1,r8/1:/{Table/11-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.181447 153367 storage/replica_command.go:298 [n1,split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180925 02:17:32.186888 153332 storage/split_queue.go:170 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] split done
I180925 02:17:32.186933 153332 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.189954 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.190032 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193014 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193090 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/{Table/11-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.193139 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193179 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193219 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193259 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193305 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193334 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193376 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/{Table/11-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.193413 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193441 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193484 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193516 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193543 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.194199 153168 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.195768 153168 storage/split_queue.go:116 [n1,split,s1,r9/1:/{Table/12-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.195994 153309 storage/replica_command.go:298 [n1,split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180925 02:17:32.196533 153367 storage/split_queue.go:170 [n1,split,s1,r8/1:/Table/1{1-2}] split done
I180925 02:17:32.196586 153367 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.202938 153312 sql/event_log.go:126 [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:ed0443e4-2a25-494d-8adb-126d140a6b5c User:root}
I180925 02:17:32.203043 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203067 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/{Table/12-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.203088 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203104 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203121 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203137 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203153 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203169 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203189 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.206144 153188 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.206171 153188 storage/split_queue.go:116 [n1,split,s1,r10/1:/{Table/13-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.206727 153337 storage/replica_command.go:298 [n1,split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180925 02:17:32.207279 153309 storage/split_queue.go:170 [n1,split,s1,r9/1:/Table/1{2-3}] split done
I180925 02:17:32.207308 153309 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.216052 153199 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.216098 153199 storage/split_queue.go:116 [n1,split,s1,r11/1:/{Table/14-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.216914 153377 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180925 02:17:32.218703 153289 storage/replica_command.go:298 [n1,split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180925 02:17:32.220667 153337 storage/split_queue.go:170 [n1,split,s1,r10/1:/Table/1{3-4}] split done
I180925 02:17:32.220781 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220816 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220843 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220866 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220892 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220920 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220945 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/{Table/14-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.220970 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220986 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.221013 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.221036 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224735 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224766 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224783 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224800 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/{Table/14-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.224816 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224834 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224854 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224870 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224888 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224905 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224920 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224983 153415 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180925 02:17:32.250030 153214 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.250122 153214 storage/split_queue.go:116 [n1,split,s1,r12/1:/{Table/15-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.250617 153396 storage/replica_command.go:298 [n1,split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180925 02:17:32.252185 153289 storage/split_queue.go:170 [n1,split,s1,r11/1:/Table/1{4-5}] split done
I180925 02:17:32.252243 153289 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254494 153043 server/server.go:1605 [n1] done ensuring all necessary migrations have run
I180925 02:17:32.254516 153043 server/server.go:1608 [n1] serving sql connections
I180925 02:17:32.254802 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254840 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254870 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/{Table/15-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.254894 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254922 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254952 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254986 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255020 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255048 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255073 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255082 153043 server/testserver.go:452 had 12 ranges at startup, expected 22
I180925 02:17:32.255102 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255693 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.286356 153450 sql/event_log.go:126 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:34643} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-beta.20180924-149-ge256966 StartedAt:1537841852046324373 LocalityAddress:[]} ClusterID:05f9932a-e9df-4384-b665-efa7c2833d8d StartedAt:1537841852046324373 LastUp:1537841852046324373}
I180925 02:17:32.286787 153178 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.286816 153178 storage/split_queue.go:116 [n1,split,s1,r13/1:/{Table/16-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.286981 153448 server/server_update.go:67 [n1] no need to upgrade, cluster already at the newest version
I180925 02:17:32.287047 153396 storage/split_queue.go:170 [n1,split,s1,r12/1:/Table/1{5-6}] split done
I180925 02:17:32.287072 153396 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.287193 153258 storage/replica_command.go:298 [n1,split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180925 02:17:32.287466 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.287957 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288060 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288225 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288386 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288509 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288624 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288737 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288836 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288922 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289044 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289150 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289268 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289376 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289554 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.291396 153188 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.291648 153188 storage/split_queue.go:116 [n1,split,s1,r14/1:/{Table/17-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.291847 153452 storage/replica_command.go:298 [n1,split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180925 02:17:32.292377 153258 storage/split_queue.go:170 [n1,split,s1,r13/1:/Table/1{6-7}] split done
I180925 02:17:32.292680 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.292946 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.293310 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.293956 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.294708 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.297231 153197 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.297257 153197 storage/split_queue.go:116 [n1,split,s1,r15/1:/{Table/18-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.297568 153452 storage/split_queue.go:170 [n1,split,s1,r14/1:/Table/1{7-8}] split done
I180925 02:17:32.297718 153404 storage/replica_command.go:298 [n1,split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180925 02:17:32.297839 153043 server/testserver.go:452 had 15 ranges at startup, expected 22
I180925 02:17:32.300318 153043 server/testserver.go:452 had 15 ranges at startup, expected 22
I180925 02:17:32.304867 153043 server/testserver.go:452 had 15 ranges at startup, expected 22
I180925 02:17:32.313633 153043 server/testserver.go:452 had 15 ranges at startup, expected 22
I180925 02:17:32.322044 153208 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.322091 153208 storage/split_queue.go:116 [n1,split,s1,r16/1:/{Table/19-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.322645 153475 storage/replica_command.go:298 [n1,split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180925 02:17:32.325949 153404 storage/split_queue.go:170 [n1,split,s1,r15/1:/Table/1{8-9}] split done
I180925 02:17:32.329975 153161 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.330009 153161 storage/split_queue.go:116 [n1,split,s1,r17/1:/{Table/20-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.330508 153386 storage/replica_command.go:298 [n1,split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180925 02:17:32.331594 153475 storage/split_queue.go:170 [n1,split,s1,r16/1:/Table/{19-20}] split done
I180925 02:17:32.332788 153043 server/testserver.go:452 had 17 ranges at startup, expected 22
I180925 02:17:32.337761 153154 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.337937 153154 storage/split_queue.go:116 [n1,split,s1,r18/1:/{Table/21-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.338355 153506 storage/replica_command.go:298 [n1,split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180925 02:17:32.338460 153386 storage/split_queue.go:170 [n1,split,s1,r17/1:/Table/2{0-1}] split done
I180925 02:17:32.346641 153170 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.346831 153170 storage/split_queue.go:116 [n1,split,s1,r19/1:/{Table/22-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.347002 153501 storage/replica_command.go:298 [n1,split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180925 02:17:32.347586 153506 storage/split_queue.go:170 [n1,split,s1,r18/1:/Table/2{1-2}] split done
I180925 02:17:32.353508 153178 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.353705 153178 storage/split_queue.go:116 [n1,split,s1,r20/1:/{Table/23-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.353883 153503 storage/replica_command.go:298 [n1,split,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180925 02:17:32.354393 153501 storage/split_queue.go:170 [n1,split,s1,r19/1:/Table/2{2-3}] split done
I180925 02:17:32.360821 153187 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.360899 153187 storage/split_queue.go:116 [n1,split,s1,r21/1:/{Table/50-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.361392 153526 storage/replica_command.go:298 [n1,split,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
I180925 02:17:32.362852 153503 storage/split_queue.go:170 [n1,split,s1,r20/1:/Table/{23-50}] split done
I180925 02:17:32.366591 153043 server/testserver.go:452 had 21 ranges at startup, expected 22
I180925 02:17:32.372579 153195 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.372606 153195 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.373047 153526 storage/split_queue.go:170 [n1,split,s1,r21/1:/Table/5{0-1}] split done
W180925 02:17:32.450934 153043 server/status/runtime.go:310 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180925 02:17:32.539594 153043 server/server.go:848 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180925 02:17:32.539763 153043 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.539780 153043 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.542206 153043 server/config.go:493 [n?] 1 storage engine initialized
I180925 02:17:32.542229 153043 server/config.go:496 [n?] RocksDB cache size: 128 MiB
I180925 02:17:32.542238 153043 server/config.go:496 [n?] store 0: in-memory, size 0 B
W180925 02:17:32.542305 153043 gossip/gossip.go:1516 [n?] no incoming or outgoing connections
I180925 02:17:32.542334 153043 server/server.go:1414 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180925 02:17:32.568540 153618 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:34643
I180925 02:17:32.568936 153626 gossip/server.go:226 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46095}
I180925 02:17:32.569679 153043 server/node.go:700 [n?] connecting to gossip network to verify cluster ID...
I180925 02:17:32.569833 153043 server/node.go:725 [n?] node connected via gossip and verified as part of cluster "05f9932a-e9df-4384-b665-efa7c2833d8d"
I180925 02:17:32.570038 153472 gossip/gossip.go:940 [n?] gossip connectivity
n1 [sentinel];
I180925 02:17:32.572918 153043 server/node.go:429 [n?] new node allocated ID 2
I180925 02:17:32.572990 153043 gossip/gossip.go:408 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:46095" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-beta.20180924-149-ge256966" started_at:1537841852572932937
I180925 02:17:32.573047 153043 storage/stores.go:242 [n2] read 0 node addresses from persistent storage
I180925 02:17:32.573081 153043 storage/stores.go:261 [n2] wrote 1 node addresses to persistent storage
I180925 02:17:32.573241 153472 gossip/gossip.go:940 [n2] gossip connectivity
n1 [sentinel];
n2 -> n1;
I180925 02:17:32.573990 153046 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
I180925 02:17:32.574039 153083 gossip/gossip.go:940 [n1] gossip connectivity
n1 [sentinel];
n2 -> n1;
I180925 02:17:32.574861 153043 server/node.go:675 [n2] bootstrapped store [n2,s2]
I180925 02:17:32.574967 153043 server/node.go:549 [n2] node=2: started with [n2=<in-mem>] engine(s) and attributes []
I180925 02:17:32.575178 153043 server/status/recorder.go:611 [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180925 02:17:32.575210 153043 server/server.go:1803 [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180925 02:17:32.575272 153043 server/server.go:1549 [n2] starting https server at 127.0.0.1:43591 (use: 127.0.0.1:43591)
I180925 02:17:32.575285 153043 server/server.go:1551 [n2] starting grpc/postgres server at 127.0.0.1:46095
I180925 02:17:32.575296 153043 server/server.go:1552 [n2] advertising CockroachDB node at 127.0.0.1:46095
I180925 02:17:32.611014 153043 server/server.go:1605 [n2] done ensuring all necessary migrations have run
I180925 02:17:32.611048 153043 server/server.go:1608 [n2] serving sql connections
I180925 02:17:32.642455 153764 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:46095} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-beta.20180924-149-ge256966 StartedAt:1537841852572932937 LocalityAddress:[]} ClusterID:05f9932a-e9df-4384-b665-efa7c2833d8d StartedAt:1537841852572932937 LastUp:1537841852572932937}
I180925 02:17:32.674386 153762 server/server_update.go:67 [n2] no need to upgrade, cluster already at the newest version
I180925 02:17:32.677685 153585 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n2 established
W180925 02:17:32.678685 153043 server/status/runtime.go:310 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180925 02:17:32.685866 153043 server/server.go:848 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180925 02:17:32.686169 153043 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.686187 153043 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.689791 153043 server/config.go:493 [n?] 1 storage engine initialized
I180925 02:17:32.689815 153043 server/config.go:496 [n?] RocksDB cache size: 128 MiB
I180925 02:17:32.689824 153043 server/config.go:496 [n?] store 0: in-memory, size 0 B
W180925 02:17:32.689890 153043 gossip/gossip.go:1516 [n?] no incoming or outgoing connections
I180925 02:17:32.689916 153043 server/server.go:1414 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180925 02:17:32.731758 153860 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:34643
I180925 02:17:32.732189 153825 gossip/server.go:226 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:37279}
I180925 02:17:32.732687 153043 server/node.go:700 [n?] connecting to gossip network to verify cluster ID...
I180925 02:17:32.732715 153043 server/node.go:725 [n?] node connected via gossip and verified as part of cluster "05f9932a-e9df-4384-b665-efa7c2833d8d"
I180925 02:17:32.733747 153858 gossip/gossip.go:940 [n?] gossip connectivity
n1 [sentinel];
n2 -> n1;
I180925 02:17:32.736130 153043 server/node.go:429 [n?] new node allocated ID 3
I180925 02:17:32.736208 153043 gossip/gossip.go:408 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:37279" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-beta.20180924-149-ge256966" started_at:1537841852736140846
I180925 02:17:32.736263 153043 storage/stores.go:242 [n3] read 0 node addresses from persistent storage
I180925 02:17:32.736299 153043 storage/stores.go:261 [n3] wrote 2 node addresses to persistent storage
I180925 02:17:32.736451 153858 gossip/gossip.go:940 [n3] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1;
I180925 02:17:32.737103 153046 storage/stores.go:261 [n1] wrote 2 node addresses to persistent storage
I180925 02:17:32.737220 153083 gossip/gossip.go:940 [n1] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1;
I180925 02:17:32.737464 153562 storage/stores.go:261 [n2] wrote 2 node addresses to persistent storage
I180925 02:17:32.737497 153472 gossip/gossip.go:940 [n2] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1;
I180925 02:17:32.738317 153043 server/node.go:675 [n3] bootstrapped store [n3,s3]
I180925 02:17:32.738455 153043 server/node.go:549 [n3] node=3: started with [n3=<in-mem>] engine(s) and attributes []
I180925 02:17:32.738686 153043 server/status/recorder.go:611 [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180925 02:17:32.738709 153043 server/server.go:1803 [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180925 02:17:32.738780 153043 server/server.go:1549 [n3] starting https server at 127.0.0.1:38813 (use: 127.0.0.1:38813)
I180925 02:17:32.738793 153043 server/server.go:1551 [n3] starting grpc/postgres server at 127.0.0.1:37279
I180925 02:17:32.738805 153043 server/server.go:1552 [n3] advertising CockroachDB node at 127.0.0.1:37279
I180925 02:17:32.775132 153043 server/server.go:1605 [n3] done ensuring all necessary migrations have run
I180925 02:17:32.775267 153043 server/server.go:1608 [n3] serving sql connections
I180925 02:17:32.777498 153987 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n1 established
I180925 02:17:32.793825 153869 sql/event_log.go:126 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:37279} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-beta.20180924-149-ge256966 StartedAt:1537841852736140846 LocalityAddress:[]} ClusterID:05f9932a-e9df-4384-b665-efa7c2833d8d StartedAt:1537841852736140846 LastUp:1537841852736140846}
I180925 02:17:32.810701 153867 server/server_update.go:67 [n3] no need to upgrade, cluster already at the newest version
W180925 02:17:32.818623 153043 server/status/runtime.go:310 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180925 02:17:32.861239 154055 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n3 established
I180925 02:17:32.871796 153043 server/server.go:848 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180925 02:17:32.871940 153043 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.871958 153043 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.874980 153043 server/config.go:493 [n?] 1 storage engine initialized
I180925 02:17:32.875008 153043 server/config.go:496 [n?] RocksDB cache size: 128 MiB
I180925 02:17:32.875017 153043 server/config.go:496 [n?] store 0: in-memory, size 0 B
W180925 02:17:32.875083 153043 gossip/gossip.go:1516 [n?] no incoming or outgoing connections
I180925 02:17:32.875121 153043 server/server.go:1414 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180925 02:17:32.888548 154012 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:34643
I180925 02:17:32.889099 154131 gossip/server.go:226 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46313}
I180925 02:17:32.889545 153043 server/node.go:700 [n?] connecting to gossip network to verify cluster ID...
I180925 02:17:32.889575 153043 server/node.go:725 [n?] node connected via gossip and verified as part of cluster "05f9932a-e9df-4384-b665-efa7c2833d8d"
I180925 02:17:32.890825 154095 gossip/gossip.go:940 [n?] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1;
I180925 02:17:32.892847 153043 server/node.go:429 [n?] new node allocated ID 4
I180925 02:17:32.892919 153043 gossip/gossip.go:408 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:46313" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-beta.20180924-149-ge256966" started_at:1537841852892861499
I180925 02:17:32.892985 153043 storage/stores.go:242 [n4] read 0 node addresses from persistent storage
I180925 02:17:32.893028 153043 storage/stores.go:261 [n4] wrote 3 node addresses to persistent storage
I180925 02:17:32.896237 154095 gossip/gossip.go:940 [n4] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1; n4 -> n1;
I180925 02:17:32.896695 153046 storage/stores.go:261 [n1] wrote 3 node addresses to persistent storage
I180925 02:17:32.896822 153083 gossip/gossip.go:940 [n1] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1; n4 -> n1;
I180925 02:17:32.897147 153794 storage/stores.go:261 [n3] wrote 3 node addresses to persistent storage
I180925 02:17:32.897187 153858 gossip/gossip.go:940 [n3] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1; n4 -> n1;
I180925 02:17:32.897261 153562 storage/stores.go:261 [n2] wrote 3 node addresses to persistent storage
I180925 02:17:32.897294 153472 gossip/gossip.go:940 [n2] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1; n4 -> n1;
I180925 02:17:32.898980 153043 server/node.go:675 [n4] bootstrapped store [n4,s4]
I180925 02:17:32.899116 153043 server/node.go:549 [n4] node=4: started with [n4=<in-mem>] engine(s) and attributes []
I180925 02:17:32.899363 153043 server/status/recorder.go:611 [n4] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180925 02:17:32.899394 153043 server/server.go:1803 [n4] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180925 02:17:32.899471 153043 server/server.go:1549 [n4] starting https server at 127.0.0.1:35711 (use: 127.0.0.1:35711)
I180925 02:17:32.899492 153043 server/server.go:1551 [n4] starting grpc/postgres server at 127.0.0.1:46313
I180925 02:17:32.899509 153043 server/server.go:1552 [n4] advertising CockroachDB node at 127.0.0.1:46313
I180925 02:17:32.903920 153043 server/server.go:1605 [n4] done ensuring all necessary migrations have run
I180925 02:17:32.903951 153043 server/server.go:1608 [n4] serving sql connections
I180925 02:17:32.926798 154137 server/server_update.go:67 [n4] no need to upgrade, cluster already at the newest version
I180925 02:17:32.937592 154139 sql/event_log.go:126 [n4] Event: "node_join", target: 4, info: {Descriptor:{NodeID:4 Address:{NetworkField:tcp AddressField:127.0.0.1:46313} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-beta.20180924-149-ge256966 StartedAt:1537841852892861499 LocalityAddress:[]} ClusterID:05f9932a-e9df-4384-b665-efa7c2833d8d StartedAt:1537841852892861499 LastUp:1537841852892861499}
I180925 02:17:32.944891 154286 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n1 established
I180925 02:17:32.953296 154288 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n4 established
I180925 02:17:32.973139 154287 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n2 established
I180925 02:17:33.001147 154329 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n3 established
I180925 02:17:33.004422 154330 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n4 established
I180925 02:17:33.010972 153220 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011008 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011027 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011046 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011068 153220 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011091 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011113 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011137 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011156 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011172 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011194 153220 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011210 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011232 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011254 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011278 153220 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011300 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011316 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011339 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011362 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011383 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011402 153220 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011424 153220 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.012481 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "set_zone_config", target: 0, info: {Target:.default Config: Options:constraints = '[+n4]' User:root}
I180925 02:17:33.018796 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config: Options:constraints = '[+n4]' User:root}
I180925 02:17:33.019760 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019824 153220 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019862 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019898 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019928 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019965 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019993 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020023 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020057 153220 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020091 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020125 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020158 153220 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020194 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020223 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020252 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020288 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020322 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020400 153220 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020541 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020581 153220 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020604 153220 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020626 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.022220 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "create_database", target: 52, info: {DatabaseName:d Statement:CREATE DATABASE d User:root}
I180925 02:17:33.023531 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023562 153220 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023581 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023603 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023627 153220 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:33.023667 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023686 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023708 153220 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023730 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023750 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023766 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023788 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023809 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023832 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023854 153220 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023877 153220 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023893 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023915 153220 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023932 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023951 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023973 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023996 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.024097 154393 storage/replica_command.go:298 [n1,split,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180925 02:17:33.028836 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "create_table", target: 53, info: {TableName:d.public.t1 Statement:CREATE TABLE d.public.t1 (a INT PRIMARY KEY) User:root}
I180925 02:17:33.030413 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030452 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030475 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030500 153220 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030519 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030535 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030557 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030580 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030605 153220 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030626 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030650 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030690 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030708 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030730 153220 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030751 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030773 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030795 153220 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030811 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030834 153220 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030855 153220 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:33.030878 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030896 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.032101 153212 storage/split_queue.go:116 [n1,split,s1,r22/1:/Table/5{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.032138 153212 storage/split_queue.go:116 [n1,split,s1,r23/1:/{Table/52-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:33.033339 154255 storage/replica_command.go:298 [n1,split,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53 [r24]
I180925 02:17:33.034285 154393 storage/split_queue.go:170 [n1,split,s1,r22/1:/Table/5{1-2}] split done
I180925 02:17:33.034316 154393 storage/split_queue.go:116 [n1,split,s1,r22/1:/Table/5{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.036850 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "create_table", target: 54, info: {TableName:d.public.t2 Statement:CREATE TABLE d.public.t2 (a INT PRIMARY KEY) User:root}
I180925 02:17:33.037769 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037835 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037867 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037906 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037945 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037976 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.038020 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.038059 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.038084 153220 storage/split_queue.go:116 [n1,spli
```
Please assign, take a look and update the issue accordingly.
teamcity: failed tests on release-2.1: testrace/TestShowTraceReplica, test/TestShowTraceReplica

The following tests appear to have failed:
[#923501](https://teamcity.cockroachdb.com/viewLog.html?buildId=923501):
```
--- FAIL: test/TestShowTraceReplica (55.140s)
------- Stdout: -------
W180925 02:17:31.995877 153043 server/status/runtime.go:310 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180925 02:17:32.035498 153043 server/server.go:848 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180925 02:17:32.035744 153043 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.035780 153043 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.038233 153043 server/config.go:493 [n?] 1 storage engine initialized
I180925 02:17:32.038306 153043 server/config.go:496 [n?] RocksDB cache size: 128 MiB
I180925 02:17:32.038324 153043 server/config.go:496 [n?] store 0: in-memory, size 0 B
I180925 02:17:32.045962 153043 util/stop/stopper.go:537 [n?] quiescing; tasks left:
1 [async] closedts-rangefeed-subscriber
I180925 02:17:32.046220 153043 server/node.go:374 [n?] **** cluster 05f9932a-e9df-4384-b665-efa7c2833d8d has been created
I180925 02:17:32.046240 153043 server/server.go:1412 [n?] **** add additional nodes by specifying --join=127.0.0.1:34643
I180925 02:17:32.046383 153043 gossip/gossip.go:408 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34643" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-beta.20180924-149-ge256966" started_at:1537841852046324373
I180925 02:17:32.047439 153043 storage/store.go:1565 [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180925 02:17:32.047518 153043 server/node.go:477 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=7.0 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7183.00 p25=7183.00 p50=7183.00 p75=7183.00 p90=7183.00 pMax=7183.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180925 02:17:32.047568 153043 storage/stores.go:242 [n1] read 0 node addresses from persistent storage
I180925 02:17:32.047634 153043 server/node.go:700 [n1] connecting to gossip network to verify cluster ID...
I180925 02:17:32.047731 153083 gossip/gossip.go:940 [n1] gossip connectivity
I180925 02:17:32.048193 153043 server/node.go:725 [n1] node connected via gossip and verified as part of cluster "05f9932a-e9df-4384-b665-efa7c2833d8d"
I180925 02:17:32.048240 153043 server/node.go:549 [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180925 02:17:32.048464 153043 server/status/recorder.go:611 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180925 02:17:32.048501 153043 server/server.go:1803 [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180925 02:17:32.048572 153043 server/server.go:1549 [n1] starting https server at 127.0.0.1:38015 (use: 127.0.0.1:38015)
I180925 02:17:32.048597 153043 server/server.go:1551 [n1] starting grpc/postgres server at 127.0.0.1:34643
I180925 02:17:32.048618 153043 server/server.go:1552 [n1] advertising CockroachDB node at 127.0.0.1:34643
I180925 02:17:32.053650 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/M{in-ax}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.055681 153231 storage/replica_command.go:298 [n1,split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180925 02:17:32.061431 153239 sql/event_log.go:126 [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180925 02:17:32.063933 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/M{in-ax}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.065536 153162 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.065567 153162 storage/split_queue.go:116 [n1,split,s1,r2/1:/{System/-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.066020 153020 storage/replica_command.go:298 [n1,split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I180925 02:17:32.078569 153231 storage/split_queue.go:170 [n1,split,s1,r1/1:/{Min-System/}] split done
I180925 02:17:32.078697 153231 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.110565 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.110665 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/{System/-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.112473 153172 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.112545 153172 storage/split_queue.go:116 [n1,split,s1,r3/1:/{System/NodeL…-Max}] shouldQueue: shouldQ=true priority=1.0
W180925 02:17:32.113400 153025 storage/intent_resolver.go:668 [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=2bad9e2b key=/Table/SystemConfigSpan/Start rw=true pri=0.02109259 iso=SERIALIZABLE stat=PENDING epo=0 ts=1537841852.089209799,0 orig=1537841852.089209799,0 max=1537841852.089209799,0 wto=false rop=false seq=12
I180925 02:17:32.113615 153267 storage/replica_command.go:298 [n1,split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180925 02:17:32.118138 153020 storage/split_queue.go:170 [n1,split,s1,r2/1:/System/{-NodeLive…}] split done
I180925 02:17:32.121982 153178 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.130540 153178 storage/split_queue.go:116 [n1,split,s1,r4/1:/{System/NodeL…-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.130632 153020 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.130834 153282 storage/replica_command.go:298 [n1,split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180925 02:17:32.131549 153267 storage/split_queue.go:170 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] split done
I180925 02:17:32.138100 153023 sql/event_log.go:126 [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.0-12 User:root}
I180925 02:17:32.141055 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.155146 153191 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.155173 153191 storage/split_queue.go:116 [n1,split,s1,r5/1:/{System/tsd-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.155624 153315 storage/replica_command.go:298 [n1,split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180925 02:17:32.158731 153282 storage/split_queue.go:170 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] split done
I180925 02:17:32.160319 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.160342 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.160359 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.162749 153201 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.162769 153201 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.163214 153303 storage/replica_command.go:298 [n1,split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180925 02:17:32.168450 153315 storage/split_queue.go:170 [n1,split,s1,r5/1:/System/ts{d-e}] split done
I180925 02:17:32.170178 153209 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.170219 153209 storage/split_queue.go:116 [n1,split,s1,r7/1:/{Table/System…-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.170432 153303 storage/split_queue.go:170 [n1,split,s1,r6/1:/{System/tse-Table/System…}] split done
I180925 02:17:32.170722 153332 storage/replica_command.go:298 [n1,split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180925 02:17:32.178467 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/{Table/System…-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.178507 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178527 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178549 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178571 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178591 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178613 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.178691 153073 sql/event_log.go:126 [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180925 02:17:32.180958 153155 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.180991 153155 storage/split_queue.go:116 [n1,split,s1,r8/1:/{Table/11-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.181447 153367 storage/replica_command.go:298 [n1,split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180925 02:17:32.186888 153332 storage/split_queue.go:170 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] split done
I180925 02:17:32.186933 153332 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.189954 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.190032 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193014 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193090 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/{Table/11-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.193139 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193179 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193219 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193259 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193305 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193334 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193376 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/{Table/11-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.193413 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193441 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193484 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193516 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.193543 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.194199 153168 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.195768 153168 storage/split_queue.go:116 [n1,split,s1,r9/1:/{Table/12-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.195994 153309 storage/replica_command.go:298 [n1,split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180925 02:17:32.196533 153367 storage/split_queue.go:170 [n1,split,s1,r8/1:/Table/1{1-2}] split done
I180925 02:17:32.196586 153367 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.202938 153312 sql/event_log.go:126 [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:ed0443e4-2a25-494d-8adb-126d140a6b5c User:root}
I180925 02:17:32.203043 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203067 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/{Table/12-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.203088 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203104 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203121 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203137 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203153 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203169 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.203189 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.206144 153188 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.206171 153188 storage/split_queue.go:116 [n1,split,s1,r10/1:/{Table/13-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.206727 153337 storage/replica_command.go:298 [n1,split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180925 02:17:32.207279 153309 storage/split_queue.go:170 [n1,split,s1,r9/1:/Table/1{2-3}] split done
I180925 02:17:32.207308 153309 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.216052 153199 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.216098 153199 storage/split_queue.go:116 [n1,split,s1,r11/1:/{Table/14-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.216914 153377 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180925 02:17:32.218703 153289 storage/replica_command.go:298 [n1,split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180925 02:17:32.220667 153337 storage/split_queue.go:170 [n1,split,s1,r10/1:/Table/1{3-4}] split done
I180925 02:17:32.220781 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220816 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220843 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220866 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220892 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220920 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220945 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/{Table/14-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.220970 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.220986 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.221013 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.221036 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224735 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224766 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224783 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224800 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/{Table/14-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.224816 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224834 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224854 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224870 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224888 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224905 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224920 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.224983 153415 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180925 02:17:32.250030 153214 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.250122 153214 storage/split_queue.go:116 [n1,split,s1,r12/1:/{Table/15-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.250617 153396 storage/replica_command.go:298 [n1,split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180925 02:17:32.252185 153289 storage/split_queue.go:170 [n1,split,s1,r11/1:/Table/1{4-5}] split done
I180925 02:17:32.252243 153289 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254494 153043 server/server.go:1605 [n1] done ensuring all necessary migrations have run
I180925 02:17:32.254516 153043 server/server.go:1608 [n1] serving sql connections
I180925 02:17:32.254802 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254840 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254870 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/{Table/15-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.254894 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254922 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254952 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.254986 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255020 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255048 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255073 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255082 153043 server/testserver.go:452 had 12 ranges at startup, expected 22
I180925 02:17:32.255102 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.255693 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.286356 153450 sql/event_log.go:126 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:34643} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-beta.20180924-149-ge256966 StartedAt:1537841852046324373 LocalityAddress:[]} ClusterID:05f9932a-e9df-4384-b665-efa7c2833d8d StartedAt:1537841852046324373 LastUp:1537841852046324373}
I180925 02:17:32.286787 153178 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.286816 153178 storage/split_queue.go:116 [n1,split,s1,r13/1:/{Table/16-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.286981 153448 server/server_update.go:67 [n1] no need to upgrade, cluster already at the newest version
I180925 02:17:32.287047 153396 storage/split_queue.go:170 [n1,split,s1,r12/1:/Table/1{5-6}] split done
I180925 02:17:32.287072 153396 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.287193 153258 storage/replica_command.go:298 [n1,split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180925 02:17:32.287466 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.287957 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288060 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288225 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288386 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288509 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288624 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288737 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288836 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.288922 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289044 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289150 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289268 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289376 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.289554 153043 server/testserver.go:452 had 13 ranges at startup, expected 22
I180925 02:17:32.291396 153188 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.291648 153188 storage/split_queue.go:116 [n1,split,s1,r14/1:/{Table/17-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.291847 153452 storage/replica_command.go:298 [n1,split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180925 02:17:32.292377 153258 storage/split_queue.go:170 [n1,split,s1,r13/1:/Table/1{6-7}] split done
I180925 02:17:32.292680 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.292946 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.293310 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.293956 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.294708 153043 server/testserver.go:452 had 14 ranges at startup, expected 22
I180925 02:17:32.297231 153197 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.297257 153197 storage/split_queue.go:116 [n1,split,s1,r15/1:/{Table/18-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.297568 153452 storage/split_queue.go:170 [n1,split,s1,r14/1:/Table/1{7-8}] split done
I180925 02:17:32.297718 153404 storage/replica_command.go:298 [n1,split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180925 02:17:32.297839 153043 server/testserver.go:452 had 15 ranges at startup, expected 22
I180925 02:17:32.300318 153043 server/testserver.go:452 had 15 ranges at startup, expected 22
I180925 02:17:32.304867 153043 server/testserver.go:452 had 15 ranges at startup, expected 22
I180925 02:17:32.313633 153043 server/testserver.go:452 had 15 ranges at startup, expected 22
I180925 02:17:32.322044 153208 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.322091 153208 storage/split_queue.go:116 [n1,split,s1,r16/1:/{Table/19-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.322645 153475 storage/replica_command.go:298 [n1,split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180925 02:17:32.325949 153404 storage/split_queue.go:170 [n1,split,s1,r15/1:/Table/1{8-9}] split done
I180925 02:17:32.329975 153161 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.330009 153161 storage/split_queue.go:116 [n1,split,s1,r17/1:/{Table/20-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.330508 153386 storage/replica_command.go:298 [n1,split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180925 02:17:32.331594 153475 storage/split_queue.go:170 [n1,split,s1,r16/1:/Table/{19-20}] split done
I180925 02:17:32.332788 153043 server/testserver.go:452 had 17 ranges at startup, expected 22
I180925 02:17:32.337761 153154 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.337937 153154 storage/split_queue.go:116 [n1,split,s1,r18/1:/{Table/21-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.338355 153506 storage/replica_command.go:298 [n1,split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180925 02:17:32.338460 153386 storage/split_queue.go:170 [n1,split,s1,r17/1:/Table/2{0-1}] split done
I180925 02:17:32.346641 153170 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.346831 153170 storage/split_queue.go:116 [n1,split,s1,r19/1:/{Table/22-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.347002 153501 storage/replica_command.go:298 [n1,split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180925 02:17:32.347586 153506 storage/split_queue.go:170 [n1,split,s1,r18/1:/Table/2{1-2}] split done
I180925 02:17:32.353508 153178 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.353705 153178 storage/split_queue.go:116 [n1,split,s1,r20/1:/{Table/23-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.353883 153503 storage/replica_command.go:298 [n1,split,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180925 02:17:32.354393 153501 storage/split_queue.go:170 [n1,split,s1,r19/1:/Table/2{2-3}] split done
I180925 02:17:32.360821 153187 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.360899 153187 storage/split_queue.go:116 [n1,split,s1,r21/1:/{Table/50-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:32.361392 153526 storage/replica_command.go:298 [n1,split,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
I180925 02:17:32.362852 153503 storage/split_queue.go:170 [n1,split,s1,r20/1:/Table/{23-50}] split done
I180925 02:17:32.366591 153043 server/testserver.go:452 had 21 ranges at startup, expected 22
I180925 02:17:32.372579 153195 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.372606 153195 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:32.373047 153526 storage/split_queue.go:170 [n1,split,s1,r21/1:/Table/5{0-1}] split done
W180925 02:17:32.450934 153043 server/status/runtime.go:310 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180925 02:17:32.539594 153043 server/server.go:848 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180925 02:17:32.539763 153043 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.539780 153043 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.542206 153043 server/config.go:493 [n?] 1 storage engine initialized
I180925 02:17:32.542229 153043 server/config.go:496 [n?] RocksDB cache size: 128 MiB
I180925 02:17:32.542238 153043 server/config.go:496 [n?] store 0: in-memory, size 0 B
W180925 02:17:32.542305 153043 gossip/gossip.go:1516 [n?] no incoming or outgoing connections
I180925 02:17:32.542334 153043 server/server.go:1414 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180925 02:17:32.568540 153618 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:34643
I180925 02:17:32.568936 153626 gossip/server.go:226 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46095}
I180925 02:17:32.569679 153043 server/node.go:700 [n?] connecting to gossip network to verify cluster ID...
I180925 02:17:32.569833 153043 server/node.go:725 [n?] node connected via gossip and verified as part of cluster "05f9932a-e9df-4384-b665-efa7c2833d8d"
I180925 02:17:32.570038 153472 gossip/gossip.go:940 [n?] gossip connectivity
n1 [sentinel];
I180925 02:17:32.572918 153043 server/node.go:429 [n?] new node allocated ID 2
I180925 02:17:32.572990 153043 gossip/gossip.go:408 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:46095" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-beta.20180924-149-ge256966" started_at:1537841852572932937
I180925 02:17:32.573047 153043 storage/stores.go:242 [n2] read 0 node addresses from persistent storage
I180925 02:17:32.573081 153043 storage/stores.go:261 [n2] wrote 1 node addresses to persistent storage
I180925 02:17:32.573241 153472 gossip/gossip.go:940 [n2] gossip connectivity
n1 [sentinel];
n2 -> n1;
I180925 02:17:32.573990 153046 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
I180925 02:17:32.574039 153083 gossip/gossip.go:940 [n1] gossip connectivity
n1 [sentinel];
n2 -> n1;
I180925 02:17:32.574861 153043 server/node.go:675 [n2] bootstrapped store [n2,s2]
I180925 02:17:32.574967 153043 server/node.go:549 [n2] node=2: started with [n2=<in-mem>] engine(s) and attributes []
I180925 02:17:32.575178 153043 server/status/recorder.go:611 [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180925 02:17:32.575210 153043 server/server.go:1803 [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180925 02:17:32.575272 153043 server/server.go:1549 [n2] starting https server at 127.0.0.1:43591 (use: 127.0.0.1:43591)
I180925 02:17:32.575285 153043 server/server.go:1551 [n2] starting grpc/postgres server at 127.0.0.1:46095
I180925 02:17:32.575296 153043 server/server.go:1552 [n2] advertising CockroachDB node at 127.0.0.1:46095
I180925 02:17:32.611014 153043 server/server.go:1605 [n2] done ensuring all necessary migrations have run
I180925 02:17:32.611048 153043 server/server.go:1608 [n2] serving sql connections
I180925 02:17:32.642455 153764 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:46095} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-beta.20180924-149-ge256966 StartedAt:1537841852572932937 LocalityAddress:[]} ClusterID:05f9932a-e9df-4384-b665-efa7c2833d8d StartedAt:1537841852572932937 LastUp:1537841852572932937}
I180925 02:17:32.674386 153762 server/server_update.go:67 [n2] no need to upgrade, cluster already at the newest version
I180925 02:17:32.677685 153585 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n2 established
W180925 02:17:32.678685 153043 server/status/runtime.go:310 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180925 02:17:32.685866 153043 server/server.go:848 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180925 02:17:32.686169 153043 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.686187 153043 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.689791 153043 server/config.go:493 [n?] 1 storage engine initialized
I180925 02:17:32.689815 153043 server/config.go:496 [n?] RocksDB cache size: 128 MiB
I180925 02:17:32.689824 153043 server/config.go:496 [n?] store 0: in-memory, size 0 B
W180925 02:17:32.689890 153043 gossip/gossip.go:1516 [n?] no incoming or outgoing connections
I180925 02:17:32.689916 153043 server/server.go:1414 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180925 02:17:32.731758 153860 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:34643
I180925 02:17:32.732189 153825 gossip/server.go:226 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:37279}
I180925 02:17:32.732687 153043 server/node.go:700 [n?] connecting to gossip network to verify cluster ID...
I180925 02:17:32.732715 153043 server/node.go:725 [n?] node connected via gossip and verified as part of cluster "05f9932a-e9df-4384-b665-efa7c2833d8d"
I180925 02:17:32.733747 153858 gossip/gossip.go:940 [n?] gossip connectivity
n1 [sentinel];
n2 -> n1;
I180925 02:17:32.736130 153043 server/node.go:429 [n?] new node allocated ID 3
I180925 02:17:32.736208 153043 gossip/gossip.go:408 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:37279" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-beta.20180924-149-ge256966" started_at:1537841852736140846
I180925 02:17:32.736263 153043 storage/stores.go:242 [n3] read 0 node addresses from persistent storage
I180925 02:17:32.736299 153043 storage/stores.go:261 [n3] wrote 2 node addresses to persistent storage
I180925 02:17:32.736451 153858 gossip/gossip.go:940 [n3] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1;
I180925 02:17:32.737103 153046 storage/stores.go:261 [n1] wrote 2 node addresses to persistent storage
I180925 02:17:32.737220 153083 gossip/gossip.go:940 [n1] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1;
I180925 02:17:32.737464 153562 storage/stores.go:261 [n2] wrote 2 node addresses to persistent storage
I180925 02:17:32.737497 153472 gossip/gossip.go:940 [n2] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1;
I180925 02:17:32.738317 153043 server/node.go:675 [n3] bootstrapped store [n3,s3]
I180925 02:17:32.738455 153043 server/node.go:549 [n3] node=3: started with [n3=<in-mem>] engine(s) and attributes []
I180925 02:17:32.738686 153043 server/status/recorder.go:611 [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180925 02:17:32.738709 153043 server/server.go:1803 [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180925 02:17:32.738780 153043 server/server.go:1549 [n3] starting https server at 127.0.0.1:38813 (use: 127.0.0.1:38813)
I180925 02:17:32.738793 153043 server/server.go:1551 [n3] starting grpc/postgres server at 127.0.0.1:37279
I180925 02:17:32.738805 153043 server/server.go:1552 [n3] advertising CockroachDB node at 127.0.0.1:37279
I180925 02:17:32.775132 153043 server/server.go:1605 [n3] done ensuring all necessary migrations have run
I180925 02:17:32.775267 153043 server/server.go:1608 [n3] serving sql connections
I180925 02:17:32.777498 153987 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n1 established
I180925 02:17:32.793825 153869 sql/event_log.go:126 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:37279} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-beta.20180924-149-ge256966 StartedAt:1537841852736140846 LocalityAddress:[]} ClusterID:05f9932a-e9df-4384-b665-efa7c2833d8d StartedAt:1537841852736140846 LastUp:1537841852736140846}
I180925 02:17:32.810701 153867 server/server_update.go:67 [n3] no need to upgrade, cluster already at the newest version
W180925 02:17:32.818623 153043 server/status/runtime.go:310 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180925 02:17:32.861239 154055 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n3 established
I180925 02:17:32.871796 153043 server/server.go:848 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180925 02:17:32.871940 153043 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.871958 153043 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180925 02:17:32.874980 153043 server/config.go:493 [n?] 1 storage engine initialized
I180925 02:17:32.875008 153043 server/config.go:496 [n?] RocksDB cache size: 128 MiB
I180925 02:17:32.875017 153043 server/config.go:496 [n?] store 0: in-memory, size 0 B
W180925 02:17:32.875083 153043 gossip/gossip.go:1516 [n?] no incoming or outgoing connections
I180925 02:17:32.875121 153043 server/server.go:1414 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180925 02:17:32.888548 154012 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:34643
I180925 02:17:32.889099 154131 gossip/server.go:226 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46313}
I180925 02:17:32.889545 153043 server/node.go:700 [n?] connecting to gossip network to verify cluster ID...
I180925 02:17:32.889575 153043 server/node.go:725 [n?] node connected via gossip and verified as part of cluster "05f9932a-e9df-4384-b665-efa7c2833d8d"
I180925 02:17:32.890825 154095 gossip/gossip.go:940 [n?] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1;
I180925 02:17:32.892847 153043 server/node.go:429 [n?] new node allocated ID 4
I180925 02:17:32.892919 153043 gossip/gossip.go:408 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:46313" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:12 > build_tag:"v2.1.0-beta.20180924-149-ge256966" started_at:1537841852892861499
I180925 02:17:32.892985 153043 storage/stores.go:242 [n4] read 0 node addresses from persistent storage
I180925 02:17:32.893028 153043 storage/stores.go:261 [n4] wrote 3 node addresses to persistent storage
I180925 02:17:32.896237 154095 gossip/gossip.go:940 [n4] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1; n4 -> n1;
I180925 02:17:32.896695 153046 storage/stores.go:261 [n1] wrote 3 node addresses to persistent storage
I180925 02:17:32.896822 153083 gossip/gossip.go:940 [n1] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1; n4 -> n1;
I180925 02:17:32.897147 153794 storage/stores.go:261 [n3] wrote 3 node addresses to persistent storage
I180925 02:17:32.897187 153858 gossip/gossip.go:940 [n3] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1; n4 -> n1;
I180925 02:17:32.897261 153562 storage/stores.go:261 [n2] wrote 3 node addresses to persistent storage
I180925 02:17:32.897294 153472 gossip/gossip.go:940 [n2] gossip connectivity
n1 [sentinel];
n2 -> n1; n3 -> n1; n4 -> n1;
I180925 02:17:32.898980 153043 server/node.go:675 [n4] bootstrapped store [n4,s4]
I180925 02:17:32.899116 153043 server/node.go:549 [n4] node=4: started with [n4=<in-mem>] engine(s) and attributes []
I180925 02:17:32.899363 153043 server/status/recorder.go:611 [n4] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180925 02:17:32.899394 153043 server/server.go:1803 [n4] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180925 02:17:32.899471 153043 server/server.go:1549 [n4] starting https server at 127.0.0.1:35711 (use: 127.0.0.1:35711)
I180925 02:17:32.899492 153043 server/server.go:1551 [n4] starting grpc/postgres server at 127.0.0.1:46313
I180925 02:17:32.899509 153043 server/server.go:1552 [n4] advertising CockroachDB node at 127.0.0.1:46313
I180925 02:17:32.903920 153043 server/server.go:1605 [n4] done ensuring all necessary migrations have run
I180925 02:17:32.903951 153043 server/server.go:1608 [n4] serving sql connections
I180925 02:17:32.926798 154137 server/server_update.go:67 [n4] no need to upgrade, cluster already at the newest version
I180925 02:17:32.937592 154139 sql/event_log.go:126 [n4] Event: "node_join", target: 4, info: {Descriptor:{NodeID:4 Address:{NetworkField:tcp AddressField:127.0.0.1:46313} Attrs: Locality: ServerVersion:2.0-12 BuildTag:v2.1.0-beta.20180924-149-ge256966 StartedAt:1537841852892861499 LocalityAddress:[]} ClusterID:05f9932a-e9df-4384-b665-efa7c2833d8d StartedAt:1537841852892861499 LastUp:1537841852892861499}
I180925 02:17:32.944891 154286 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n1 established
I180925 02:17:32.953296 154288 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n4 established
I180925 02:17:32.973139 154287 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n2 established
I180925 02:17:33.001147 154329 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n3 established
I180925 02:17:33.004422 154330 rpc/nodedialer/nodedialer.go:92 [ct-client] connection to n4 established
I180925 02:17:33.010972 153220 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011008 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011027 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011046 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011068 153220 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011091 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011113 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011137 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011156 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011172 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011194 153220 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011210 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011232 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011254 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011278 153220 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011300 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011316 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011339 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011362 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011383 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011402 153220 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.011424 153220 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.012481 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "set_zone_config", target: 0, info: {Target:.default Config: Options:constraints = '[+n4]' User:root}
I180925 02:17:33.018796 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config: Options:constraints = '[+n4]' User:root}
I180925 02:17:33.019760 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019824 153220 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019862 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019898 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019928 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019965 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.019993 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020023 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020057 153220 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020091 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020125 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020158 153220 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020194 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020223 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020252 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020288 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020322 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020400 153220 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020541 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020581 153220 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020604 153220 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.020626 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.022220 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "create_database", target: 52, info: {DatabaseName:d Statement:CREATE DATABASE d User:root}
I180925 02:17:33.023531 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023562 153220 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023581 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023603 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023627 153220 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:33.023667 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023686 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023708 153220 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023730 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023750 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023766 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023788 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023809 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023832 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023854 153220 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023877 153220 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023893 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023915 153220 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023932 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023951 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023973 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.023996 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.024097 154393 storage/replica_command.go:298 [n1,split,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180925 02:17:33.028836 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "create_table", target: 53, info: {TableName:d.public.t1 Statement:CREATE TABLE d.public.t1 (a INT PRIMARY KEY) User:root}
I180925 02:17:33.030413 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030452 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030475 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030500 153220 storage/split_queue.go:116 [n1,split,s1,r14/1:/Table/1{7-8}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030519 153220 storage/split_queue.go:116 [n1,split,s1,r3/1:/System/NodeLiveness{-Max}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030535 153220 storage/split_queue.go:116 [n1,split,s1,r2/1:/System/{-NodeLive…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030557 153220 storage/split_queue.go:116 [n1,split,s1,r10/1:/Table/1{3-4}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030580 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030605 153220 storage/split_queue.go:116 [n1,split,s1,r21/1:/Table/5{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030626 153220 storage/split_queue.go:116 [n1,split,s1,r11/1:/Table/1{4-5}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030650 153220 storage/split_queue.go:116 [n1,split,s1,r12/1:/Table/1{5-6}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030690 153220 storage/split_queue.go:116 [n1,split,s1,r1/1:/{Min-System/}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030708 153220 storage/split_queue.go:116 [n1,split,s1,r6/1:/{System/tse-Table/System…}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030730 153220 storage/split_queue.go:116 [n1,split,s1,r20/1:/Table/{23-50}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030751 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030773 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030795 153220 storage/split_queue.go:116 [n1,split,s1,r17/1:/Table/2{0-1}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030811 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030834 153220 storage/split_queue.go:116 [n1,split,s1,r15/1:/Table/1{8-9}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030855 153220 storage/split_queue.go:116 [n1,split,s1,r22/1:/{Table/51-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:33.030878 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.030896 153220 storage/split_queue.go:116 [n1,split,s1,r5/1:/System/ts{d-e}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.032101 153212 storage/split_queue.go:116 [n1,split,s1,r22/1:/Table/5{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.032138 153212 storage/split_queue.go:116 [n1,split,s1,r23/1:/{Table/52-Max}] shouldQueue: shouldQ=true priority=1.0
I180925 02:17:33.033339 154255 storage/replica_command.go:298 [n1,split,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53 [r24]
I180925 02:17:33.034285 154393 storage/split_queue.go:170 [n1,split,s1,r22/1:/Table/5{1-2}] split done
I180925 02:17:33.034316 154393 storage/split_queue.go:116 [n1,split,s1,r22/1:/Table/5{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.036850 154317 sql/event_log.go:126 [n1,client=127.0.0.1:43318,user=root] Event: "create_table", target: 54, info: {TableName:d.public.t2 Statement:CREATE TABLE d.public.t2 (a INT PRIMARY KEY) User:root}
I180925 02:17:33.037769 153220 storage/split_queue.go:116 [n1,split,s1,r7/1:/Table/{SystemCon…-11}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037835 153220 storage/split_queue.go:116 [n1,split,s1,r19/1:/Table/2{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037867 153220 storage/split_queue.go:116 [n1,split,s1,r9/1:/Table/1{2-3}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037906 153220 storage/split_queue.go:116 [n1,split,s1,r4/1:/System/{NodeLive…-tsd}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037945 153220 storage/split_queue.go:116 [n1,split,s1,r8/1:/Table/1{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.037976 153220 storage/split_queue.go:116 [n1,split,s1,r18/1:/Table/2{1-2}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.038020 153220 storage/split_queue.go:116 [n1,split,s1,r16/1:/Table/{19-20}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.038059 153220 storage/split_queue.go:116 [n1,split,s1,r13/1:/Table/1{6-7}] shouldQueue: shouldQ=false priority=0.0
I180925 02:17:33.038084 153220 storage/split_queue.go:116 [n1,spli
```
Please assign, take a look and update the issue accordingly.
| non_defect | teamcity failed tests on release testrace testshowtracereplica test testshowtracereplica the following tests appear to have failed fail test testshowtracereplica stdout server status runtime go could not parse build timestamp parsing time as cannot parse as server server go monitoring forward clock jumps based on server clock forward jump check enabled base addr validation go server certificate addresses ip dns localhost local cn node base addr validation go web ui certificate addresses ip dns localhost local cn node server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b util stop stopper go quiescing tasks left closedts rangefeed subscriber server node go cluster has been created server server go add additional nodes by specifying join gossip gossip go nodedescriptor set to node id address attrs locality serverversion build tag beta started at storage store go failed initial metrics computation system config not yet available server node go initialized store disk capacity mib available mib used b logicalbytes kib ranges leases queries writes bytesperreplica pmax writesperreplica pmax storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id gossip gossip go gossip connectivity server node go node connected via gossip and verified as part of cluster server node go node started with engine s and attributes server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go could not start heap profiler worker due to directory to store profiles could not be determined server server go starting https server at use server server go starting grpc postgres server at server server go advertising cockroachdb node at storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key system sql event log go event 
set cluster setting target info settingname diagnostics reporting enabled value true user root storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key system nodeliveness storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage intent resolver go failed to push during intent resolution failed to push unnamed id key table systemconfigspan start rw true pri iso serializable stat pending epo ts orig max wto false rop false seq storage replica command go initiating a split of this range at key system nodelivenessmax storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage replica command go initiating a split of this range at key system tsd storage split queue go split done sql event log go event set cluster setting target info settingname version value user root storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key system tse storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a 
split of this range at key table systemconfigspan start storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go split done storage replica command go initiating a split of this range at key table storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority sql event log go event set cluster setting target info settingname trace debug enable value false user root storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go 
shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority sql event log go event set cluster setting target info settingname cluster secret value user root storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority sql event log go event create database target info databasename defaultdb statement create database if not exists defaultdb user root storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go 
shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority sql event log go event create database target info databasename postgres statement create database if not exists postgres user root storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority server server go done ensuring all necessary migrations have run server server go serving sql connections storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority server testserver go had ranges at startup expected 
storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag beta startedat localityaddress clusterid startedat lastup storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority server server update go no need to upgrade cluster already at the newest version storage split queue go split done storage split queue go shouldqueue shouldq false priority storage replica command go initiating a split of this range at key table server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected storage split queue go shouldqueue shouldq false priority 
storage split queue go shouldqueue shouldq true priority storage split queue go split done storage replica command go initiating a split of this range at key table server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected server testserver go had ranges at startup expected storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done server testserver go had ranges at startup expected storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done server testserver go had ranges at startup expected storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go split done server status runtime go could not parse 
build timestamp parsing time as cannot parse as server server go monitoring forward clock jumps based on server clock forward jump check enabled base addr validation go server certificate addresses ip dns localhost local cn node base addr validation go web ui certificate addresses ip dns localhost local cn node server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b gossip gossip go no incoming or outgoing connections server server go no stores bootstrapped and join flag specified awaiting init command gossip client go started gossip client to gossip server go received initial cluster verification connection from tcp server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster gossip gossip go gossip connectivity server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality serverversion build tag beta started at storage stores go read node addresses from persistent storage storage stores go wrote node addresses to persistent storage gossip gossip go gossip connectivity storage stores go wrote node addresses to persistent storage gossip gossip go gossip connectivity server node go bootstrapped store server node go node started with engine s and attributes server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go could not start heap profiler worker due to directory to store profiles could not be determined server server go starting https server at use server server go starting grpc postgres server at server server go advertising cockroachdb node at server server go done ensuring all necessary migrations have run server server go serving sql connections sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag beta startedat 
localityaddress clusterid startedat lastup server server update go no need to upgrade cluster already at the newest version rpc nodedialer nodedialer go connection to established server status runtime go could not parse build timestamp parsing time as cannot parse as server server go monitoring forward clock jumps based on server clock forward jump check enabled base addr validation go server certificate addresses ip dns localhost local cn node base addr validation go web ui certificate addresses ip dns localhost local cn node server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b gossip gossip go no incoming or outgoing connections server server go no stores bootstrapped and join flag specified awaiting init command gossip client go started gossip client to gossip server go received initial cluster verification connection from tcp server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster gossip gossip go gossip connectivity server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality serverversion build tag beta started at storage stores go read node addresses from persistent storage storage stores go wrote node addresses to persistent storage gossip gossip go gossip connectivity storage stores go wrote node addresses to persistent storage gossip gossip go gossip connectivity storage stores go wrote node addresses to persistent storage gossip gossip go gossip connectivity server node go bootstrapped store server node go node started with engine s and attributes server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go could not start heap profiler worker due to directory to store profiles could not be determined server server go starting https server at use server server go starting grpc postgres server at 
server server go advertising cockroachdb node at server server go done ensuring all necessary migrations have run server server go serving sql connections rpc nodedialer nodedialer go connection to established sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag beta startedat localityaddress clusterid startedat lastup server server update go no need to upgrade cluster already at the newest version server status runtime go could not parse build timestamp parsing time as cannot parse as rpc nodedialer nodedialer go connection to established server server go monitoring forward clock jumps based on server clock forward jump check enabled base addr validation go server certificate addresses ip dns localhost local cn node base addr validation go web ui certificate addresses ip dns localhost local cn node server config go storage engine initialized server config go rocksdb cache size mib server config go store in memory size b gossip gossip go no incoming or outgoing connections server server go no stores bootstrapped and join flag specified awaiting init command gossip client go started gossip client to gossip server go received initial cluster verification connection from tcp server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster gossip gossip go gossip connectivity server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality serverversion build tag beta started at storage stores go read node addresses from persistent storage storage stores go wrote node addresses to persistent storage gossip gossip go gossip connectivity storage stores go wrote node addresses to persistent storage gossip gossip go gossip connectivity storage stores go wrote node addresses to persistent storage gossip gossip go gossip connectivity storage stores go wrote node addresses to 
persistent storage gossip gossip go gossip connectivity server node go bootstrapped store server node go node started with engine s and attributes server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go could not start heap profiler worker due to directory to store profiles could not be determined server server go starting https server at use server server go starting grpc postgres server at server server go advertising cockroachdb node at server server go done ensuring all necessary migrations have run server server go serving sql connections server server update go no need to upgrade cluster already at the newest version sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag beta startedat localityaddress clusterid startedat lastup rpc nodedialer nodedialer go connection to established rpc nodedialer nodedialer go connection to established rpc nodedialer nodedialer go connection to established rpc nodedialer nodedialer go connection to established rpc nodedialer nodedialer go connection to established storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split 
queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority sql event log go event set zone config target info target default config options constraints user root sql event log go event set zone config target info target system config options constraints user root storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority sql event log go event create database target info databasename d 
statement create database d user root storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage replica command go initiating a split of this range at key table sql event log go event create table target info tablename d public statement create table d public a int primary key user root storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go 
shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq true priority storage replica command go initiating a split of this range at key table storage split queue go split done storage split queue go shouldqueue shouldq false priority sql event log go event create table target info tablename d public statement create table d public a int primary key user root storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go shouldqueue shouldq false priority storage split queue go spli please assign take a look and update the issue accordingly | 0 |
272,524 | 29,795,038,057 | IssuesEvent | 2023-06-16 01:06:03 | billmcchesney1/pacbot | https://api.github.com/repos/billmcchesney1/pacbot | closed | CVE-2020-14422 (Medium) detected in ipaddress-1.0.23-py2.py3-none-any.whl - autoclosed | Mend: dependency security vulnerability | ## CVE-2020-14422 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ipaddress-1.0.23-py2.py3-none-any.whl</b></p></summary>
<p>IPv4/IPv6 manipulation library</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c2/f8/49697181b1651d8347d24c095ce46c7346c37335ddc7d255833e7cde674d/ipaddress-1.0.23-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/c2/f8/49697181b1651d8347d24c095ce46c7346c37335ddc7d255833e7cde674d/ipaddress-1.0.23-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /installer/requirements.txt</p>
<p>Path to vulnerable library: /installer/requirements.txt</p>
<p>
Dependency Hierarchy:
- docker_py-1.10.6-py2.py3-none-any.whl (Root Library)
- :x: **ipaddress-1.0.23-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/pacbot/commit/acf9a0620c1a37cee4f2896d71e1c3731c5c7b06">acf9a0620c1a37cee4f2896d71e1c3731c5c7b06</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Lib/ipaddress.py in Python through 3.8.3 improperly computes hash values in the IPv4Interface and IPv6Interface classes, which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing IPv4Interface or IPv6Interface objects, and this attacker can cause many dictionary entries to be created. This is fixed in: v3.5.10, v3.5.10rc1; v3.6.12; v3.7.9; v3.8.4, v3.8.4rc1, v3.8.5, v3.8.6, v3.8.6rc1; v3.9.0, v3.9.0b4, v3.9.0b5, v3.9.0rc1, v3.9.0rc2.
<p>Publish Date: 2020-06-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-14422>CVE-2020-14422</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14422">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14422</a></p>
<p>Release Date: 2020-06-18</p>
<p>Fix Resolution: v3.5.10,v3.6.12,v3.7.9,v3.8.4,v3.9.0</p>
</p>
</details>
<p></p>
| True | CVE-2020-14422 (Medium) detected in ipaddress-1.0.23-py2.py3-none-any.whl - autoclosed - ## CVE-2020-14422 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ipaddress-1.0.23-py2.py3-none-any.whl</b></p></summary>
<p>IPv4/IPv6 manipulation library</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c2/f8/49697181b1651d8347d24c095ce46c7346c37335ddc7d255833e7cde674d/ipaddress-1.0.23-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/c2/f8/49697181b1651d8347d24c095ce46c7346c37335ddc7d255833e7cde674d/ipaddress-1.0.23-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /installer/requirements.txt</p>
<p>Path to vulnerable library: /installer/requirements.txt</p>
<p>
Dependency Hierarchy:
- docker_py-1.10.6-py2.py3-none-any.whl (Root Library)
- :x: **ipaddress-1.0.23-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/pacbot/commit/acf9a0620c1a37cee4f2896d71e1c3731c5c7b06">acf9a0620c1a37cee4f2896d71e1c3731c5c7b06</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Lib/ipaddress.py in Python through 3.8.3 improperly computes hash values in the IPv4Interface and IPv6Interface classes, which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing IPv4Interface or IPv6Interface objects, and this attacker can cause many dictionary entries to be created. This is fixed in: v3.5.10, v3.5.10rc1; v3.6.12; v3.7.9; v3.8.4, v3.8.4rc1, v3.8.5, v3.8.6, v3.8.6rc1; v3.9.0, v3.9.0b4, v3.9.0b5, v3.9.0rc1, v3.9.0rc2.
<p>Publish Date: 2020-06-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-14422>CVE-2020-14422</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14422">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14422</a></p>
<p>Release Date: 2020-06-18</p>
<p>Fix Resolution: v3.5.10,v3.6.12,v3.7.9,v3.8.4,v3.9.0</p>
</p>
</details>
<p></p>
| non_defect | cve medium detected in ipaddress none any whl autoclosed cve medium severity vulnerability vulnerable library ipaddress none any whl manipulation library library home page a href path to dependency file installer requirements txt path to vulnerable library installer requirements txt dependency hierarchy docker py none any whl root library x ipaddress none any whl vulnerable library found in head commit a href found in base branch master vulnerability details lib ipaddress py in python through improperly computes hash values in the and classes which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing or objects and this attacker can cause many dictionary entries to be created this is fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
96,218 | 19,915,908,180 | IssuesEvent | 2022-01-25 22:37:42 | withfig/fig | https://api.github.com/repos/withfig/fig | opened | Prompt Loading Slower | bug codebase:shell_integrations performance | After installing Fig, a user's prompt starts loading slower than before. | 1.0 | Prompt Loading Slower - After installing Fig, a user's prompt starts loading slower than before. | non_defect | prompt loading slower after installing fig a user s prompt starts loading slower than before | 0 |
304,956 | 26,348,953,368 | IssuesEvent | 2023-01-11 01:55:15 | trebol-ecommerce/spring-boot-backend | https://api.github.com/repos/trebol-ecommerce/spring-boot-backend | closed | Improve unit tests [DRAFT] | tests | These are changes I need to make.
- [ ] `src/test/java/org/trebol/jpa/services/conversion/BillingCompaniesConverterJpaServiceImplTest` - [In method `testApplyChangesToExistingEntity` there needs to be a better assertion](https://github.com/trebol-ecommerce/spring-boot-backend/pull/111#discussion_r988570605_). This also applies to these services
- [ ] `src/test/java/org/trebol/jpa/services/conversion/BillingTypesConverterJpaServiceImplTest`
- [ ] `src/test/java/org/trebol/jpa/services/conversion/SellStatusesConverterJpaServiceImplTest`
- [ ] `src/test/java/org/trebol/jpa/services/conversion/ShippersConverterJpaServiceImplTest`
- [ ] `src/test/java/org/trebol/jpa/services/conversion/UserRolesConverterJpaServiceImplTest`
- [ ] `src/test/java/org/trebol/jpa/services/GenericJpaCrudServiceTest:158` [chore: remove this BadInputException as it is no longer needed](https://github.com/trebol-ecommerce/spring-boot-backend/pull/129/files/f9956cfe223fe1dfe7ea90e3aa44910f6cfea415#r992773647) | 1.0 | Improve unit tests [DRAFT] - These are changes I need to make.
- [ ] `src/test/java/org/trebol/jpa/services/conversion/BillingCompaniesConverterJpaServiceImplTest` - [In method `testApplyChangesToExistingEntity` there needs to be a better assertion](https://github.com/trebol-ecommerce/spring-boot-backend/pull/111#discussion_r988570605_). This also applies to these services
- [ ] `src/test/java/org/trebol/jpa/services/conversion/BillingTypesConverterJpaServiceImplTest`
- [ ] `src/test/java/org/trebol/jpa/services/conversion/SellStatusesConverterJpaServiceImplTest`
- [ ] `src/test/java/org/trebol/jpa/services/conversion/ShippersConverterJpaServiceImplTest`
- [ ] `src/test/java/org/trebol/jpa/services/conversion/UserRolesConverterJpaServiceImplTest`
- [ ] `src/test/java/org/trebol/jpa/services/GenericJpaCrudServiceTest:158` [chore: remove this BadInputException as it is no longer needed](https://github.com/trebol-ecommerce/spring-boot-backend/pull/129/files/f9956cfe223fe1dfe7ea90e3aa44910f6cfea415#r992773647) | non_defect | improve unit tests these are changes i need to make src test java org trebol jpa services conversion billingcompaniesconverterjpaserviceimpltest this also applies to these services src test java org trebol jpa services conversion billingtypesconverterjpaserviceimpltest src test java org trebol jpa services conversion sellstatusesconverterjpaserviceimpltest src test java org trebol jpa services conversion shippersconverterjpaserviceimpltest src test java org trebol jpa services conversion userrolesconverterjpaserviceimpltest src test java org trebol jpa services genericjpacrudservicetest | 0 |
44,658 | 12,309,492,453 | IssuesEvent | 2020-05-12 09:02:49 | contao/contao | https://api.github.com/repos/contao/contao | closed | Incorrect database-update suggestions in the install tool | defect | After installing Contao 4.4.42 with the newsletter bundle and copying the database over from Contao 3.5.40, the install tool proposes, among others, the following two changes for the database update in the pre-selected "Alter columns" section:
````
ALTER TABLE tl_newsletter_channel CHANGE smtpuser sender VARCHAR(128) DEFAULT '' NOT NULL
ALTER TABLE tl_newsletter_channel CHANGE smtppass template VARCHAR(32) DEFAULT '' NOT NULL
````
Instead of creating the columns ```sender``` and ```template``` from scratch, the columns ```smtpuser``` and ```smtppass``` are renamed. | 1.0 | Incorrect database-update suggestions in the install tool - After installing Contao 4.4.42 with the newsletter bundle and copying the database over from Contao 3.5.40, the install tool proposes, among others, the following two changes for the database update in the pre-selected "Alter columns" section:
````
ALTER TABLE tl_newsletter_channel CHANGE smtpuser sender VARCHAR(128) DEFAULT '' NOT NULL
ALTER TABLE tl_newsletter_channel CHANGE smtppass template VARCHAR(32) DEFAULT '' NOT NULL
````
Instead of creating the columns ```sender``` and ```template``` from scratch, the columns ```smtpuser``` and ```smtppass``` are renamed. | defect | incorrect database update suggestions in the install tool after installing contao with the newsletter bundle and copying the database over from contao the install tool proposes among others the following two changes for the database update in the pre selected alter columns section alter table tl newsletter channel change smtpuser sender varchar default not null alter table tl newsletter channel change smtppass template varchar default not null instead of creating the columns sender and template from scratch the columns smtpuser and smtppass are renamed | 1 |
39,421 | 9,449,508,780 | IssuesEvent | 2019-04-16 02:13:54 | googlei18n/noto-cjk | https://api.github.com/repos/googlei18n/noto-cjk | closed | fullwidth brackets should be proportional in Korean | Type-Defect | Moved from googlei18n/noto-fonts#120
@roozbehp reported on 6 Aug 2014 at 8:07:
According to Denis (@moyogo), the fullwidth angle brackets in the CJK fonts
should be proportional for Korean, any recent Korean font (by Sandoll or Yoon
Design) would do that.
@kenlunde wrote on on 7 Aug 2014 at 2:11:
Given that an ideal Korean experience with Noto Sans CJK (and, of course, the
Adobe-branded Source Han Sans) requires support for the 'locl' GSUB feature,
along with proper language-tagging at the character, paragraph, or document
level, in order to access the Korean- or CJK-specific forms of
proportional-width Western punctuation (the glyphs are aligned to the em-box
rather than to Latin features, such as the x-height or cap-height), I would
lump this request in with that, specifically that the 'palt' (or 'vpal' for
vertical) GPOS feature should be invoked, which will make the glyphs for U+3008
and U+3009 immediately suitable for proportional use. The 'palt' GPOS feature
additionally handles other similar character pairs, in case they're used
instead of their ASCII (proportional) counterparts. I thus consider the
priority relatively low.
@jungshik wrote on on 15 Aug 2014 at 12:34:
Blink (and Webkit) have two contents rendering paths. By default, CJK is
rendered in a 'simple script' rendering path where most GSUB/GPOS features are
not invoked.
The majority of documents on the web in Korean will go through that simple
script path.
Even if those 'palt' and 'vpal' can be turned on by default in 'Noto Sans CJK
Korean' (or 'Noto Sans Korean'), I'm afraid that it might not work for the
above scenario.
So, it appears that we need to have separate glyphs for U+3008 ~ U+300B (and
potentially more). As we discussed at the meeting, we'll try to come up with a
list of characters whose advance widths are different between Noto CJK/Source
Han and Korean fonts by Sandoll/Yoon design.
Attached are two screenshots, one with NanumGothic and the other with Noto Sans
Korean. They have U+300A and U+300B. The text used is "《로스트》는
평론과 대중"
There's no space between U+300B '》' and '는' (the first character after
U+300B), but visually, it looks like there is if Noto Sans Korean is used.
With NanumGothic, there's no such problem.
As I mentioned during the meeting, we can open up glyph slots by removing
separate glyphs for Hangul Halfwidth Jamos (U+FFA0 - U+FFCx) and just mapping
them to the corresponding nominal glyphs for Hangul Jamos (U+11xx block) or to
the corresponding Hangul Compat Jamo (U+31xx block). That way, we can open up ~
50 glyph slots.


@kenlunde wrote on 17 Aug 2014 at 1:21:
This issue is definitely being deferred for the first update, and I'd prefer to
defer it indefinitely because this doesn't seem to be the right way to address
the issue, especially for the long term.
Referencing what other fonts do may seem like the right approach, but that is
not always a good idea. For those who are interested in this issue, I strongly
encourage going through KLREQ: http://www.w3.org/TR/klreq/
What should happen is interaction between the layout engine and fonts, with
awareness of the language. Japanese layout engines have matured, and the same
thing is happening for Korean. Some characters are best left full-width, such
as those in U+30xx, which will allow the layout engine to deal with them
consistently.
The problem with what is being proposed is where to draw the line. Instead,
the line should be drawn between what the font specifies and what is expected
on the layout engine. That happened for Japanese, and it needs to happen for
Korean.
Also, mapping the nominal glyphs for the U+11xx block from the half-width jamo
block (U+FFxx) would be a disaster, because combining jamo takes place at the
GID level, not character code level.
Again, please read KLREQ carefully. It sets the stage for better Korean layout.
What the referenced fonts are doing is ad hoc at best.
@jungshik wrote on 25 Aug 2014 at 10:46:
> Also, mapping the nominal glyphs for the U+11xx block from the half-with
> jamo block (U+FFxx) would be a disaster, because combining jamo takes place
> at the GID level, not character code level.
I did realize that after writing my comment. However, there'd be no problem if
we just remap U+313x glyphs for U+FFxx half-width jamo block. Nobody would care
whether or not U+FFxx block uses the same glyphs as U+313x block.
@jungshik wrote on 25 Aug 2014 at 10:51:
> Instead, the line should be drawn between what the font specifies and what is
> expected on the layout engine. That happened for Japanese, and
> it needs to happen for Korean.
Where has it happened for Japanese? InDesign?
@kenlunde wrote on 25 Aug 2014 at 11:02:
Any application that claims Japanese support for line layout should be able to
handle many of these basic adjustment tasks. Adobe InDesign was one of the first
desktop apps to do so, and serves as a benchmark. The fact that JLREQ is
emanating from W3C suggests that some web browsers include such support.
What the fonts you have referenced have done is equivalent to jerry-rigging the
glyph set, which is something that should be avoided, mainly because it gives
birth to legacy issues and concerns.
@kenlunde wrote on 26 Aug 2014 at 2:31:
About mapping the half-width jamo (U+FFxx) to the glyphs for compatibility jamo
(U+31xx), while you may or may not care, I am guessing that a non-zero number of
users will care, which is the reason why I am reluctant to jettison those
glyphs. In any case, we'll be discussing this issue in more depth in October.
@moyogo wrote on 28 Aug 2014 at 6:14:
KLREQ is still a draft and does not clearly address spacing of punctuation.
There are already some issues with KLREQ that might need to be dealt with to
clarify this:
http://www.w3.org/International/track/issues/269 Inconsistent spacing
http://www.w3.org/International/track/issues/271 Punctuation
@kenlunde wrote on 28 Aug 2014 at 12:16:
Precisely, which is exactly why we shouldn't rush into adding such glyphs to
the fonts in case doing so creates a nasty legacy condition.
@behdad wrote on 26 Oct 2014 at 1:50:
Action item for Jungshik to test U+30xx and U+FFxx (proportional versions of
ASCII brackets) against a bunch of (15 / 20?) high-quality Korean fonts for
comparison.
| 1.0 | fullwidth brackets should be proportional in Korean - Moved from googlei18n/noto-fonts#120
@roozbehp reported on 6 Aug 2014 at 8:07:
According to Denis (@moyogo), the fullwidth angle brackets in the CJK fonts
should be proportional for Korean, any recent Korean font (by Sandoll or Yoon
Design) would do that.
@kenlunde wrote on on 7 Aug 2014 at 2:11:
Given that an ideal Korean experience with Noto Sans CJK (and, of course, the
Adobe-branded Source Han Sans) requires support for the 'locl' GSUB feature,
along with proper language-tagging at the character, paragraph, or document
level, in order to access the Korean- or CJK-specific forms of
proportional-width Western punctuation (the glyphs are aligned to the em-box
rather than to Latin features, such as the x-height or cap-height), I would
lump this request in with that, specifically that the 'palt' (or 'vpal' for
vertical) GPOS feature should be invoked, which will make the glyphs for U+3008
and U+3009 immediately suitable for proportional use. The 'palt' GPOS feature
additionally handles other similar character pairs, in case they're used
instead of their ASCII (proportional) counterparts. I thus consider the
priority relatively low.
@jungshik wrote on on 15 Aug 2014 at 12:34:
Blink (and Webkit) have two contents rendering paths. By default, CJK is
rendered in a 'simple script' rendering path where most GSUB/GPOS features are
not invoked.
The majority of documents on the web in Korean will go through that simple
script path.
Even if those 'palt' and 'vpal' can be turned on by default in 'Noto Sans CJK
Korean' (or 'Noto Sans Korean'), I'm afraid that it might not work for the
above scenario.
So, it appears that we need to have separate glyphs for U+3008 ~ U+300B (and
potentially more). As we discussed at the meeting, we'll try to come up with a
list of characters whose advance widths are different between Noto CJK/Source
Han and Korean fonts by Sandoll/Yoon design.
Attached are two screenshots, one with NanumGothic and the other with Noto Sans
Korean. They have U+300A and U+300B. The text used is "《로스트》는
평론과 대중"
There's no space between U+300B '》' and '는' (the first character after
U+300B), but visually, it looks like there is if Noto Sans Korean is used.
With NanumGothic, there's no such problem.
As I mentioned during the meeting, we can open up glyph slots by removing
separate glyphs for Hangul Halfwidth Jamos (U+FFA0 - U+FFCx) and just mapping
them to the corresponding nominal glyphs for Hangul Jamos (U+11xx block) or to
the corresponding Hangul Compat Jamo (U+31xx block). That way, we can open up ~
50 glyph slots.


@kenlunde wrote on 17 Aug 2014 at 1:21:
This issue is definitely being deferred for the first update, and I'd prefer to
defer it indefinitely because this doesn't seem to be the right way to address
the issue, especially for the long term.
Referencing what other fonts do may seem like the right approach, but that is
not always a good idea. For those who are interested in this issue, I strongly
encourage going through KLREQ: http://www.w3.org/TR/klreq/
What should happen is interaction between the layout engine and fonts, with
awareness of the language. Japanese layout engines have matured, and the same
thing is happening for Korean. Some characters are best left full-width, such
as those in U+30xx, which will allow the layout engine to deal with them
consistently.
The problem with what is being proposed is where to draw the line. Instead,
the line should be drawn between what the font specifies and what is expected
on the layout engine. That happened for Japanese, and it needs to happen for
Korean.
Also, mapping the nominal glyphs for the U+11xx block from the half-width jamo
block (U+FFxx) would be a disaster, because combining jamo takes place at the
GID level, not character code level.
Again, please read KLREQ carefully. It sets the stage for better Korean layout.
What the referenced fonts are doing is ad hoc at best.
@jungshik wrote on 25 Aug 2014 at 10:46:
> Also, mapping the nominal glyphs for the U+11xx block from the half-with
> jamo block (U+FFxx) would be a disaster, because combining jamo takes place
> at the GID level, not character code level.
I did realize that after writing my comment. However, there'd be no problem if
we just remap U+313x glyphs for U+FFxx half-width jamo block. Nobody would care
whether or not U+FFxx block uses the same glyphs as U+313x block.
@jungshik wrote on 25 Aug 2014 at 10:51:
> Instead, the line should be drawn between what the font specifies and what is
> expected on the layout engine. That happened for Japanese, and
> it needs to happen for Korean.
Where has it happened for Japanese? InDesign?
@kenlunde wrote on 25 Aug 2014 at 11:02:
Any application that claims Japanese support for line layout should be able to
handle many of these basic adjustment tasks. Adobe InDesign was one of the first
desktop apps to do so, and serves as a benchmark. The fact that JLREQ is
emanating from W3C suggests that some web browsers include such support.
What the fonts you have referenced have done is equivalent to jerry-rigging the
glyph set, which is something that should be avoided, mainly because it gives
birth to legacy issues and concerns.
@kenlunde wrote on 26 Aug 2014 at 2:31:
About mapping the half-width jamo (U+FFxx) to the glyphs for compatibility jamo
(U+31xx), while you may or may not care, I am guessing that a non-zero number of
users will care, which is the reason why I am reluctant to jettison those
glyphs. In any case, we'll be discussing this issue in more depth in October.
@moyogo wrote on 28 Aug 2014 at 6:14:
KLREQ is still a draft and does not clearly address spacing of punctuation.
There are already some issues with KLREQ that might need to be dealt with to
clarify this:
http://www.w3.org/International/track/issues/269 Inconsistent spacing
http://www.w3.org/International/track/issues/271 Punctuation
@kenlunde wrote on 28 Aug 2014 at 12:16:
Precisely, which is exactly why we shouldn't rush into adding such glyphs to
the fonts in case doing so creates a nasty legacy condition.
@behdad wrote on 26 Oct 2014 at 1:50:
Action item for Jungshik to test U+30xx and U+FFxx (proportional versions of
ASCII brackets) against a bunch of (15 / 20?) high-quality Korean fonts for
comparison.
| defect | fullwidth brackets should be proportional in korean moved from moved from noto fonts roozbehp reported on aug at according to denis moyogo the fullwidth angle brackets in the cjk fonts should be proportional for korean any recent korean font by sandoll or yoon design would do that kenlunde wrote on on aug at given that an ideal korean experience with noto sans cjk and of course the adobe branded source han sans requires support for the locl gsub feature along with proper language tagging at the character paragraph or document level in order to access the korean or cjk specific forms of proportional width western punctuation the glyphs are aligned to the em box rather than to latin features such as the x height or cap height i would lump this request in with that specifically that the palt or vpal for vertical gpos feature should be invoked which will make the glyphs for u and u immediately suitable for proportional use the palt gpos feature additionally handles other similar character pairs in case they re used instead of their ascii proportional counterparts i thus consider the priority relatively low jungshik wrote on on aug at blink and webkit have two contents rendering paths by default cjk is rendered in a simple script rendering path where most gsub gpos features are not invoked the majority of documents on the web in korean will go through that simple script path even if those palt and vpal can be turned on by default in noto sans cjk korean or noto sans korean i m afraid that it might not work for the above scenario so it appears that we need to have separate glyphs for u u and potentially more as we discussed at the meeting we ll try to come up with a list of characters whose advance widths are different between noto cjk source han and korean fonts by sandoll yoon design attached are two screenshots one with nanumgothic and the other with noto sans korean they have u and u the text used is 《로스트》는 평론과 대중 there s no space between u 》 and 는 the 
first character after u but visually it looks like there is if noto sans korean is used with nanumgothic there s no such problem as i mentioned during the meeting we can open up glyph slots by removing separate glyphs for hangul halfwidth jamos u u ffcx and just mapping them to the corresponding nominal glyphs for hangul jamos u block or to the corresponding hangul compat jamo u block that way we can open up glyph slots kenlunde wrote on aug at this issue is definitely being deferred for the first update and i d prefer to defer it indefinitely because this doesn t seem to be the right way to address the issue especially for the long term referencing what other fonts do may seem like the right approach but that is not always a good idea for those who are interested in this issue i strongly encourage going through klreq what should happen is interaction between the layout engine and fonts with awareness of the language japanese layout engines have matured and the same thing is happening for korean some characters are best left full width such as those in u which will allow the layout engine to deal with them consistently the problem with the what is being proposed is where to draw the line instead the line should be drawn between what the font specifies and what is expected on the layout engine that happened for japanese and it needs to happen for korean also mapping the nominal glyphs for the u block from the half with jamo block u ffxx would be a disaster because combining jamo takes place at the gid level not character code level again please read klreq carefully it sets the stage for better korean layout what the referenced fonts are doing is ad hoc at best jungshik wrote on aug at also mapping the nominal glyphs for the u block from the half with jamo block u ffxx would be a disaster because combining jamo takes place at the gid level not character code level i did realize that after writing my comment however there d be no problem if we just remap u glyphs for 
u ffxx half width jamo block nobody would care whether or not u ffxx block uses the same glyphs as u block jungshik wrote on aug at instead the line should be drawn between what the font specifies and what is expected on the layout engine that happened for japanese and it needs to happen for korean where has it happened for japanese indesign kenlunde wrote on aug at any application that claims japanese support for line layout should be able to many of these basis adjustment tasks adobe indesign was one of the first desktop apps to do so and serves as a benchmark the fact that jlreq is emanating from suggests that some web browsers include such support what the fonts you have referenced have done is equivalent to jerry rigging the glyph set which is something that should be avoided mainy because it gives birth to legacy issues and concerns kenlunde wrote on aug at about mapping the half width jamo u ffxx to the glyphs for compatibility jamo u while you may may not care i am guessing that a non zero number of users will care which is the reason why i am reluctant to jettison those glyphs in any case we ll be discussing this issue in more depth in october moyogo wrote on aug at klreq is still a draft and does not clearly address spacing of punctuation there are already some issues with klreq that might need to be dealt with to clarify this inconsistent spacing punctuation kenlunde wrote on aug at precisely which is exactly why we shouldn t rush into adding such glyphs to the fonts in case doing so creates a nasty legacy condition behdad wrote on oct at action item for jungshik to test u and u ffxx proportional versions of ascii brackets against a bunch of high quality korean fonts for comparison | 1 |
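As an editorial aside to the record above (not part of the dataset row): the width classes being debated, fullwidth CJK brackets versus halfwidth jamo, are recorded in Unicode's East_Asian_Width property, which the following sketch queries via Python's `unicodedata` module.

```python
import unicodedata

# Characters discussed in the thread: CJK angle brackets and a halfwidth jamo.
for ch in ("\u3008", "\u300A", "\uFFA1"):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}: "
          f"{unicodedata.east_asian_width(ch)}")
# U+3008 and U+300A report 'W' (wide); U+FFA1 reports 'H' (halfwidth).
```

Whether a renderer gives 'W' characters a proportional advance (as the Korean fonts cited do) or a fixed em advance (as the 'palt'/'vpal' discussion assumes) is exactly the font-versus-layout-engine question the participants are debating.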
14,626 | 2,831,387,359 | IssuesEvent | 2015-05-24 15:52:16 | nobodyguy/dslrdashboard | https://api.github.com/repos/nobodyguy/dslrdashboard | closed | Video Feature request | auto-migrated Priority-Medium Type-Defect | ```
Hi, and thanks a lot for the app.
I use it to get a bigger screen (galaxy s5) for video work with a Nikon D7100
and have a few feature requests :
- I would love to have a dedicated video mode in the app, with an option to get
2.35 (letterbox) framing marks and nothing on the screen beside the liveview
and basic parameters (aperture, shutter speed and ISO)
- Don't know if it is possible but focus peaking and zebras would be awesome
(it is available on the Atomos Ninja which only gets the HDMI output from the
camera)
Thanks !
```
Original issue reported on code.google.com by `mleje...@gmail.com` on 1 Sep 2014 at 8:45 | 1.0 | defect | 1
57,277 | 14,140,972,399 | IssuesEvent | 2020-11-10 12:00:33 | giantswarm/roadmap | https://api.github.com/repos/giantswarm/roadmap | closed | Configure instance metadata in AWS instances to use only v2 | area/kaas feature-request provider/aws team/firecracker topic/security | **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
As part of the [AWS Foundational Security Best Practices](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp-controls.html#ec2-8-remediation) there is a point to enforce the use of IMDSv2 in the EC2 instances. The IMDS provides access to temporary, frequently rotated credentials, to access the AWS metadata endpoint. The new version adds new protections to different types of vulnerabilities.
```
The control passes if HttpTokens is set to required for IMDSv2. The control fails if HttpTokens is set to optional.
```
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
Enable in Cloud Formation the enforcement to use IMDSv2 using `HttpsTokens` equals to `required` in the [launch templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-launchtemplate-launchtemplatedata-metadataoptions.html) of our instances. Verify it does no affect any systemd unit or other process that makes use of the AWS metadata endpoint.
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
Keep using IMDSv1 and accept the vulnerabilities exposure.
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| True | non_defect | 0
69,046 | 22,092,531,553 | IssuesEvent | 2022-06-01 07:23:55 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Suffix of words (plus next 5 chars) mistakenly being converted to emojis | T-Defect S-Major A-Composer A-Emoji O-Occasional | This is an issue reported by @Tezcatlipoca:matrix.org:
THIS AFFECTS BOTH WEB AND DESKTOP VERSIONS
**With "Automatically replace plain text Emoji" activated, try to write any word with a "xo" suffix, followed by at least 6 more characters. An example in Portuguese:**
"embaixo da casa"
It should just stay as it is while I'm typing (i.e. **"embaixo da casa"**), but instead Element converts it automatically to **"embai😵asa"**, not only converting the mapped string (in the example, "xo") - which it shouldn't be doing anyway -, but also "eating" the next 5 characters in the process.
This bug can be reproduced with any other mapped plaintext->emoji strings, like "xD" or "XD" for exemple. So if you write "WordXD whatever", the 6th character will trigger the bug. "xo" seems to be the more critical AFAIK, for there are words that are really common in my native language (Portuguese) which are directly affected by this bug.
In short, the suffixes of words should never be interpreted as candidates for conversion to emojis. | 1.0 | defect | 1
148,062 | 19,527,135,092 | IssuesEvent | 2021-12-30 10:00:08 | devopsadmin12/WebGoat-2 | https://api.github.com/repos/devopsadmin12/WebGoat-2 | opened | CVE-2019-12086 (High) detected in jackson-databind-2.0.4.jar | security vulnerability | ## CVE-2019-12086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /repository/com/fasterxml/jackson/core/jackson-databind/2.0.4/jackson-databind-2.0.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/devopsadmin12/WebGoat-2/commit/728e58b33a83761901854fda9ae6e636c615e592">728e58b33a83761901854fda9ae6e636c615e592</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.
<p>Publish Date: 2019-05-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086>CVE-2019-12086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086</a></p>
<p>Release Date: 2019-05-17</p>
<p>Fix Resolution: 2.9.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_defect | 0
59,356 | 11,958,954,829 | IssuesEvent | 2020-04-04 20:02:10 | SharePoint/sp-dev-fx-webparts | https://api.github.com/repos/SharePoint/sp-dev-fx-webparts | closed | Tradingview Script not running in react-script-editor | area:sample-code type:bug type:question | ## Category
- [x] Question
- [x] Bug
- [ ] Enhancement
I am trying to use the Modern Script Editor to run a widget from TradingView, the Market Overview Widget. When I add the code into the Modern Script Editor webpart, only the footer appears.

I have used the code that is on the TradingView page for the Market Overview Widget, without modifications.
If I use the Advanced Chart Widget code on the TradingView widget page, this widget will work in the Modern Script Editor - however, this is not the widget we want to use.
Any ideas on why the other widgets for TradingView are not running?
Thanks,.
| 1.0 | non_defect | 0
18,663 | 10,272,595,243 | IssuesEvent | 2019-08-23 16:52:39 | SparkDevNetwork/Rock | https://api.github.com/repos/SparkDevNetwork/Rock | closed | Enhanced security for Event Calendar and Event Items | Status: In Progress Topic: Security Type: Enhancement | ### Description
In 8.3, Staff and Staff Like Workers roles (in addition to others) have Edit permission to the Event Calendar and Event Calendar Items. Edit permission is necessary to Add an item, however it also gives those users access to Delete. There is not a way to restrict this from Admin Tools > Security > Entity Administration because the underlying code models only support View, Edit, Approve, and Administrate permissions. We've been tasked with multiple staff or volunteers deleting calendar items that they shouldn't have.
I've looked through the code and I'd like to open a PR with the following changes:
1) Add Delete as a supported action on the EventCalendar and CalendarItem models.
2) Add a migration to restrict Delete by default to Calendar Administration and Rock Administration.
3) Add a column and filter for CreatedBy to improve visibility of your own Calendar Items
Current:

New:

### Versions
* **Rock Version:** v8.3
* **Client Culture Setting:** en-us
| True | non_defect | 0
66,558 | 20,327,146,599 | IssuesEvent | 2022-02-18 07:06:34 | dhbw-ka-pm/mentalmodels-for-teams | https://api.github.com/repos/dhbw-ka-pm/mentalmodels-for-teams | closed | Where is *All Models are Wrong*? | defect | AFAICS there is nothing delivered for #40 in gh-pages branch, not visible on https://dhbw-ka-pm.github.io/mentalmodels-for-teams/. Not merged? | 1.0 | defect | 1
259,977 | 8,202,256,396 | IssuesEvent | 2018-09-02 06:40:34 | MrBlizzard/RCAdmins-Tracker | https://api.github.com/repos/MrBlizzard/RCAdmins-Tracker | opened | [VIP/VIP+] Make one time menu for perks | enhancement priority:normal | Can put one time free legendary in here. Maybe even have it on a cd of 3-6 months. Would be better to have it in a menu then given directly via the webstore.
Other cool long time cooldown things could go in here. | 1.0 | non_defect | 0
23,897 | 3,865,368,337 | IssuesEvent | 2016-04-08 17:08:30 | Esri/military-features-data | https://api.github.com/repos/Esri/military-features-data | opened | Moving a Point Feature Shows the Selected Feature In an Unexpected Place | defect external to repo | _From @abouffard on April 8, 2016 7:13_
This is probably not an issue for the editor but was observed when editing features while testing, so is being entered here.
When selecting an air feature to move it, the yellow filled circle that appears is not centered on the air frame itself. It seems to be off-center, left of where I'd expect it to be.
When moving a sea surface feature, the filled yellow circle representing the feature appears centered on the sea surface circle frame, as expected.


_Copied from original issue: ArcGIS/military-symbol-editor-addin-wpf#25_ | 1.0 | defect | 1
14,864 | 2,831,390,037 | IssuesEvent | 2015-05-24 15:54:55 | nobodyguy/dslrdashboard | https://api.github.com/repos/nobodyguy/dslrdashboard | closed | Feature Request: Review image after capture - for compression setting "RAW" | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Start DSLR dashboard app
2. connect Nikon D7100 with USB cable
3. tab on the liveview symbol
4. set "compression setting" to RAW (RAW only)
5. Capture a picture
What is the expected output?
The app should show the captured immage
What do you see instead?
The immage is blank
What version of the product are you using? On what operating system?
0.30.27
Acer Iconia Tab with Android 4.1.1
Nikon D7100 (Firmware 1.01) connected with USB cable
Please provide any additional information below.
```
Original issue reported on code.google.com by `svenimta...@gmail.com` on 23 Aug 2013 at 2:02 | 1.0 | defect | 1
1,841 | 2,603,972,281 | IssuesEvent | 2015-02-24 19:00:34 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳激光治疗疱疹 | auto-migrated Priority-Medium Type-Defect | ```
沈阳激光治疗疱疹〓沈陽軍區政治部醫院性病〓TEL:024-3102330
8〓成立于1946年,68年專注于性傳播疾病的研究和治療。位于�
��陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷�
��悠久、設備精良、技術權威、專家云集,是預防、保健、醫
療、科研康復為一體的綜合性醫院。是國家首批公立甲等部��
�醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南�
��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后
勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等��
�。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:38 | 1.0 | defect | 1
112,722 | 17,098,644,483 | IssuesEvent | 2021-07-09 08:06:33 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [Security Solution] Threat intel List is shuffling automatically if we set the future time | QA:Validated Team: SecuritySolution Team:Detections and Resp bug fixed impact:low v7.14.0 | **Description**
Threat intel List is shuffling automatically if we set the future time
**Build Details:**
```
VERSION: 8.0.0 Master
BUILD: 9007199254740991
ARTIFACT: https://artifacts-api.elastic.co/v1/search/8.0.0-SNAPSHOT
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in.
2. Filebeat should be installed
**Steps to Reproduce:**
1. Navigate to Overview tab of security.
2. Set the future time range E.g "Next 7 Hours" on overview page.
3. Scroll down and go to threat intel card.
4. Observe that Threat intel List is shuffling automatically if we set the future time
**Impacted Test case:**
N/A
**Actual Result:**
Threat intel List is shuffling automatically if we set the future time
**Expected Result:**
Threat intel List should be stable if we set the future time
**What's working:**
N/A
**What's not working:**
N/A
**Screenshot:**
https://user-images.githubusercontent.com/61860752/123790513-0c05d880-d8fc-11eb-9da7-2df43364bb2b.mp4
| True | non_defect | 0
52,492 | 13,752,356,314 | IssuesEvent | 2020-10-06 14:25:29 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | opened | Vulnerability roundup 93: catdoc-0.95: 1 advisory [7.8] | 1.severity: security | [search](https://search.nix.gsc.io/?q=catdoc&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=catdoc+in%3Apath&type=Code)
* [ ] [CVE-2017-11110](https://nvd.nist.gov/vuln/detail/CVE-2017-11110) CVSSv3=7.8 (nixos-20.03, nixos-20.09, nixos-unstable)
Scanned versions: nixos-20.03: 0d0660fde3b; nixos-20.09: ad3a5d5092e; nixos-unstable: 84d74ae9c9c.
| True | Vulnerability roundup 93: catdoc-0.95: 1 advisory [7.8] - [search](https://search.nix.gsc.io/?q=catdoc&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=catdoc+in%3Apath&type=Code)
* [ ] [CVE-2017-11110](https://nvd.nist.gov/vuln/detail/CVE-2017-11110) CVSSv3=7.8 (nixos-20.03, nixos-20.09, nixos-unstable)
Scanned versions: nixos-20.03: 0d0660fde3b; nixos-20.09: ad3a5d5092e; nixos-unstable: 84d74ae9c9c.
| non_defect | vulnerability roundup catdoc advisory nixos nixos nixos unstable scanned versions nixos nixos nixos unstable | 0 |
14,191 | 9,212,839,862 | IssuesEvent | 2019-03-10 05:40:59 | golang/go | https://api.github.com/repos/golang/go | closed | runtime: Windows DLL preloading attack possible for winmm.dll | NeedsInvestigation OS-Windows Security | Go 1.11 seems vulnerable to dll preloading on windows with `winmm.dll`. It looks like #14959 mostly fixed this, `kernel32.dll` etc are protected, but `winmm.dll` still seems to be affected. It seems to be loaded implicitly by the go runtime — https://github.com/golang/go/blob/6174b5e21e73714c63061e66efdbe180e1c5491d/src/pkg/runtime/thread_windows.c#L31 — but I notice is not listed with the other safely loaded DLLs — https://github.com/golang/go/blob/6174b5e21e73714c63061e66efdbe180e1c5491d/src/pkg/syscall/zsyscall_windows_amd64.go#L9-L19
### What version of Go are you using (`go version`)?
<pre>
$ docker run --rm golang:1.11 go version
go version go1.11.2 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ docker run --rm golang:1.11 go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build782466443=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
1. Create a `main.go` with:
```
package main
import "fmt"
func main() {
fmt.Println("Hello world")
}
```
1. Cross-compile for windows:
```
docker run -v $PWD:/go -e GOARCH=amd64 -e GOOS=windows --rm golang:1.11 go build -o test.exe main.go
```
1. Copy `test.exe` to a windows vm
1. Add a `winmm.dll` beside `test.exe` with contents `not a dll`
1. Double click `test.exe`
### What did you expect to see?
"Hello world"
### What did you see instead?
<img width="979" alt="screen shot 2018-11-28 at 11 48 54 am" src="https://user-images.githubusercontent.com/14028/49121163-9d0b1200-f303-11e8-8426-9430301ff1f3.png"> | True | runtime: Windows DLL preloading attack possible for winmm.dll - Go 1.11 seems vulnerable to dll preloading on windows with `winmm.dll`. It looks like #14959 mostly fixed this, `kernel32.dll` etc are protected, but `winmm.dll` still seems to be affected. It seems to be loaded implicitly by the go runtime — https://github.com/golang/go/blob/6174b5e21e73714c63061e66efdbe180e1c5491d/src/pkg/runtime/thread_windows.c#L31 — but I notice is not listed with the other safely loaded DLLs — https://github.com/golang/go/blob/6174b5e21e73714c63061e66efdbe180e1c5491d/src/pkg/syscall/zsyscall_windows_amd64.go#L9-L19
### What version of Go are you using (`go version`)?
<pre>
$ docker run --rm golang:1.11 go version
go version go1.11.2 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ docker run --rm golang:1.11 go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build782466443=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
1. Create a `main.go` with:
```
package main
import "fmt"
func main() {
fmt.Println("Hello world")
}
```
1. Cross-compile for windows:
```
docker run -v $PWD:/go -e GOARCH=amd64 -e GOOS=windows --rm golang:1.11 go build -o test.exe main.go
```
1. Copy `test.exe` to a windows vm
1. Add a `winmm.dll` beside `test.exe` with contents `not a dll`
1. Double click `test.exe`
### What did you expect to see?
"Hello world"
### What did you see instead?
<img width="979" alt="screen shot 2018-11-28 at 11 48 54 am" src="https://user-images.githubusercontent.com/14028/49121163-9d0b1200-f303-11e8-8426-9430301ff1f3.png"> | non_defect | runtime windows dll preloading attack possible for winmm dll go seems vulnerable to dll preloading on windows with winmm dll it looks like mostly fixed this dll etc are protected but winmm dll still seems to be affected it seems to be loaded implicitly by the go runtime — — but i notice is not listed with the other safely loaded dlls — what version of go are you using go version docker run rm golang go version go version linux does this issue reproduce with the latest release yes what operating system and processor architecture are you using go env go env output docker run rm golang go env goarch gobin gocache root cache go build goexe goflags gohostarch gohostos linux goos linux gopath go goproxy gorace goroot usr local go gotmpdir gotooldir usr local go pkg tool linux gccgo gccgo cc gcc cxx g cgo enabled gomod cgo cflags g cgo cppflags cgo cxxflags g cgo fflags g cgo ldflags g pkg config pkg config gogccflags fpic pthread fmessage length fdebug prefix map tmp go tmp go build gno record gcc switches what did you do create a main go with package main import fmt func main fmt println hello world cross compile for windows docker run v pwd go e goarch e goos windows rm golang go build o test exe main go copy test exe to a windows vm add a winmm dll beside test exe with contents not a dll double click test exe what did you expect to see hello world what did you see instead img width alt screen shot at am src | 0 |
45,857 | 24,247,158,806 | IssuesEvent | 2022-09-27 11:32:49 | hzi-braunschweig/SORMAS-Project | https://api.github.com/repos/hzi-braunschweig/SORMAS-Project | opened | [Performance] Filtering out the cases in the Merging Duplicate Cases Directory shouldn't take that long | bug cases backend performance | <!--
Please read the Contributing guidelines (https://github.com/hzi-braunschweig/SORMAS-Project/blob/development/docs/CONTRIBUTING.md) before submitting an issue. You don't have to remove this comment or any other comment from this issue as they will automatically be hidden.
-->
### Bug Description
The problem was described as an alternative scenario in ticket #10306 , but it wasn't in scope there, so a separate issue needed to be opened.
The issue was observed during the automated tests.
### Steps to Reproduce
<!-- Optional; please add more steps if necessary -->
1. Go to test-auto
2. Log in as a automation admin
3. Navigate to Case directory
4. Select an option to merge duplicates
5. Filter out f.e. cases with Covid as a disease on top of default set filter to include only cases created throughout last month
Waiting time for filtering out the results is longer than it was on 1.75 - 4 min 15 seconds
### Expected Behavior
Waiting time for filtering the results should not take this long.
### Screenshots
<!-- Optional -->
### System Details
<!-- Mandatory; you only have to specify the Server URL if the error appeared on a publicly available test server -->
* Device: Windows 10
* SORMAS version: Sormas 1.76.0-SNAPSHOT
* Android version/Browser: Chrome
* Server URL: test-auto.sormas.netzlink.com
* User Role: admin, national user
### Additional Information
<!-- Optional -->
| True | [Performance] Filtering out the cases in the Merging Duplicate Cases Directory shouldn't take that long - <!--
Please read the Contributing guidelines (https://github.com/hzi-braunschweig/SORMAS-Project/blob/development/docs/CONTRIBUTING.md) before submitting an issue. You don't have to remove this comment or any other comment from this issue as they will automatically be hidden.
-->
### Bug Description
The problem was described as an alternative scenario in ticket #10306 , but it wasn't in scope there, so a separate issue needed to be opened.
The issue was observed during the automated tests.
### Steps to Reproduce
<!-- Optional; please add more steps if necessary -->
1. Go to test-auto
2. Log in as a automation admin
3. Navigate to Case directory
4. Select an option to merge duplicates
5. Filter out f.e. cases with Covid as a disease on top of default set filter to include only cases created throughout last month
Waiting time for filtering out the results is longer than it was on 1.75 - 4 min 15 seconds
### Expected Behavior
Waiting time for filtering the results should not take this long.
### Screenshots
<!-- Optional -->
### System Details
<!-- Mandatory; you only have to specify the Server URL if the error appeared on a publicly available test server -->
* Device: Windows 10
* SORMAS version: Sormas 1.76.0-SNAPSHOT
* Android version/Browser: Chrome
* Server URL: test-auto.sormas.netzlink.com
* User Role: admin, national user
### Additional Information
<!-- Optional -->
| non_defect | filtering out the cases in the merging duplicate cases directory shouldn t take that long please read the contributing guidelines before submitting an issue you don t have to remove this comment or any other comment from this issue as they will automatically be hidden bug description the problem was described as an alternative scenario in ticket but it wasn t in scope there so a separate issue needed to be opened the issue was observed during the automated tests steps to reproduce go to test auto log in as a automation admin navigate to case directory select an option to merge duplicates filter out f e cases with covid as a disease on top of default set filter to include only cases created throughout last month waiting time for filtering out the results is longer than it was on min seconds expected behavior waiting time for filtering the results should not take this long screenshots system details device windows sormas version sormas snapshot android version browser chrome server url test auto sormas netzlink com user role admin national user additional information | 0 |
80,991 | 30,649,706,453 | IssuesEvent | 2023-07-25 08:04:19 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | closed | google captcha? | T-Defect | ### Steps to reproduce
1. Try to create a new account
### Outcome
#### What did you expect?
No captcha. No malware/spyware/harmful services used.
I'm shocked. Really, are you serious?...
A secure and private messaging software using a malware/spyware service?
This is a very two-faced behavior, don't you think so?
Anti-private and anti-secure, harms the users, and donates data to one of the most powerful cyber-criminal coprorations on the planet!
This messenger cannot be called private and secure, as long as it has this nonsense.
Don't know what else to say, this is smth I absolutely didn't expect.
(Bonus: the program is unusable for those who have a hardened setup with google-booble IPs blocked system-wide)
(PS: isn't email verification enough? it is enough)
#### What happened instead?
Captcha provided by a malware/spyware service made by one of the biggest privacy/security enemies (google).
### Your phone model
N/A
### Operating system version
N/A
### Application version and app store
1.6.3 from F-Droid
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No | 1.0 | google captcha? - ### Steps to reproduce
1. Try to create a new account
### Outcome
#### What did you expect?
No captcha. No malware/spyware/harmful services used.
I'm shocked. Really, are you serious?...
A secure and private messaging software using a malware/spyware service?
This is a very two-faced behavior, don't you think so?
Anti-private and anti-secure, harms the users, and donates data to one of the most powerful cyber-criminal coprorations on the planet!
This messenger cannot be called private and secure, as long as it has this nonsense.
Don't know what else to say, this is smth I absolutely didn't expect.
(Bonus: the program is unusable for those who have a hardened setup with google-booble IPs blocked system-wide)
(PS: isn't email verification enough? it is enough)
#### What happened instead?
Captcha provided by a malware/spyware service made by one of the biggest privacy/security enemies (google).
### Your phone model
N/A
### Operating system version
N/A
### Application version and app store
1.6.3 from F-Droid
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No | defect | google captcha steps to reproduce try to create a new account outcome what did you expect no captcha no malware spyware harmful services used i m shocked really are you serious a secure and private messaging software using a malware spyware service this is a very two faced behavior don t you think so anti private and anti secure harms the users and donates data to one of the most powerful cyber criminal coprorations on the planet this messenger cannot be called private and secure as long as it has this nonsense don t know what else to say this is smth i absolutely didn t expect bonus the program is unusable for those who have a hardened setup with google booble ips blocked system wide ps isn t email verification enough it is enough what happened instead captcha provided by a malware spyware service made by one of the biggest privacy security enemies google your phone model n a operating system version n a application version and app store from f droid homeserver no response will you send logs no are you willing to provide a pr no | 1 |