Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
64,546 | 3,212,647,381 | IssuesEvent | 2015-10-06 16:20:47 | ceylon/ceylon-compiler | https://api.github.com/repos/ceylon/ceylon-compiler | closed | Cannot use spread operator to invoke member class | BUG high priority | This code produces a compile time backend error:
```ceylon
shared class C() {
shared class D() {}
value ds = { C() }*.D(); // Cannot find symbol symbol: method D$new$()
print("done");
}
shared void run() {
C();
}
```
The same happens if `C` is an interface, and also if everything is contained within `run`. | 1.0 | Cannot use spread operator to invoke member class - This code produces a compile time backend error:
```ceylon
shared class C() {
shared class D() {}
value ds = { C() }*.D(); // Cannot find symbol symbol: method D$new$()
print("done");
}
shared void run() {
C();
}
```
The same happens if `C` is an interface, and also if everything is contained within `run`. | non_defect | cannot use spread operator to invoke member class this code produces a compile time backend error ceylon shared class c shared class d value ds c d cannot find symbol symbol method d new print done shared void run c similar happens if c is an interface and also if everything is contained within run | 0 |
366,227 | 25,573,703,525 | IssuesEvent | 2022-11-30 20:00:41 | carlonicora/obsidian-rpg-manager | https://api.github.com/repos/carlonicora/obsidian-rpg-manager | closed | [Wrap-Up] Documentation - Beginners Guide | Documentation | **Current Stage**: Wrap-Up
## Final Tasks
- [x] Finalizing Layouts, Generating Pictures (pictures take a while)
- [x] Navigation link up top and bottom due to Github
- [x] Add to Wiki
- [x] Check by x1101 and carlonicora
- [x] Revisions
- [x] Second Check by x1101 and carlonicora
- [x] Publish
| 1.0 | [Wrap-Up] Documentation - Beginners Guide - **Current Stage**: Wrap-Up
## Final Tasks
- [x] Finalizing Layouts, Generating Pictures (pictures take a while)
- [x] Navigation link up top and bottom due to Github
- [x] Add to Wiki
- [x] Check by x1101 and carlonicora
- [x] Revisions
- [x] Second Check by x1101 and carlonicora
- [x] Publish
| non_defect | documentation beginners guide current stage wrap up final tasks finalizing layouts generating pictures pictures take a while navigation link up top and bottom due to github add to wiki check by and carlonicora revisions second check by and carlonicora publish | 0 |
52,459 | 13,224,736,321 | IssuesEvent | 2020-08-17 19:44:33 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | [filterscripts] spe fit injector -> key error (Trac #2239) | Incomplete Migration Migrated from Trac analysis defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2239">https://code.icecube.wisc.edu/projects/icecube/ticket/2239</a>, reported by anna.obertacke and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-06-21T17:09:41",
"_ts": "1561136981975028",
"description": "When using the new SimulationFiltering_SPE and the --spe-file option the following error occurs: \n\n\nWARN (Python): Module iceprod.modules not found. Will not define IceProd Class (SimulationFiltering_SPE.py:403 in <module>)\nWARN (I3PhotoSplineTable): No nGroup in table, using default! This is probably not what you want (I3PhotoSplineTable.cxx:81 in bool I3PhotoSplineTable::SetupTable(co$\nWARN (I3PhotoSplineTable): No nGroup in table, using default! This is probably not what you want (I3PhotoSplineTable.cxx:81 in bool I3PhotoSplineTable::SetupTable(co$\nERROR (I3Tray): Exception thrown while configuring module \"fixspe\". (I3Tray.cxx:385 in void I3Tray::Configure())\nfixspe (icecube.phys_services.spe_fit_injector.I3SPEFitInjector)\n Filename\n Description : JSON (may bz2 compressed) file with SPE fit data\n Default : ''\n Configured : '/cvmfs/icecube.opensciencegrid.org/py2-v3.0.1/metaprojects/icerec/V05-02-02-RC2/filterscripts/resources/data/pass3/IC86.2016_923_NewWaveDeform_liorm_lite.json'\n\nTraceback (most recent call last):\n File \"/data/user/aobertacke/31_Filter2019/L1/scripts_newWavedeform//SimulationFiltering_SPE.py\", line 420, in <module>\n main(opts)\n File \"/data/user/aobertacke/31_Filter2019/L1/scripts_newWavedeform//SimulationFiltering_SPE.py\", line 377, in main\n tray.Execute()\n File \"/data/user/aobertacke/software/icerec_IC2019_V05_02_02-RC1/build/lib/I3Tray.py\", line 256, in Execute\n super(I3Tray, self).Execute()\n File \"/data/user/aobertacke/software/icerec_IC2019_V05_02_02-RC1/build/lib/icecube/phys_services/spe_fit_injector.py\", line 41, in Configure\n if bool(data['JOINT_fit']['valid']) == False and \\\nKeyError: 'JOINT_fit'\n\n\n\nThe TFT is currently requesting filter checks, thus this bug is an issue to hold the deadline set by TFT for ppl who have to simulate their own signal.",
"reporter": "anna.obertacke",
"cc": "",
"resolution": "fixed",
"time": "2019-02-13T11:55:25",
"component": "analysis",
"summary": "[filterscripts] spe fit injector -> key error",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [filterscripts] spe fit injector -> key error (Trac #2239) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2239">https://code.icecube.wisc.edu/projects/icecube/ticket/2239</a>, reported by anna.obertacke and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-06-21T17:09:41",
"_ts": "1561136981975028",
"description": "When using the new SimulationFiltering_SPE and the --spe-file option the following error occurs: \n\n\nWARN (Python): Module iceprod.modules not found. Will not define IceProd Class (SimulationFiltering_SPE.py:403 in <module>)\nWARN (I3PhotoSplineTable): No nGroup in table, using default! This is probably not what you want (I3PhotoSplineTable.cxx:81 in bool I3PhotoSplineTable::SetupTable(co$\nWARN (I3PhotoSplineTable): No nGroup in table, using default! This is probably not what you want (I3PhotoSplineTable.cxx:81 in bool I3PhotoSplineTable::SetupTable(co$\nERROR (I3Tray): Exception thrown while configuring module \"fixspe\". (I3Tray.cxx:385 in void I3Tray::Configure())\nfixspe (icecube.phys_services.spe_fit_injector.I3SPEFitInjector)\n Filename\n Description : JSON (may bz2 compressed) file with SPE fit data\n Default : ''\n Configured : '/cvmfs/icecube.opensciencegrid.org/py2-v3.0.1/metaprojects/icerec/V05-02-02-RC2/filterscripts/resources/data/pass3/IC86.2016_923_NewWaveDeform_liorm_lite.json'\n\nTraceback (most recent call last):\n File \"/data/user/aobertacke/31_Filter2019/L1/scripts_newWavedeform//SimulationFiltering_SPE.py\", line 420, in <module>\n main(opts)\n File \"/data/user/aobertacke/31_Filter2019/L1/scripts_newWavedeform//SimulationFiltering_SPE.py\", line 377, in main\n tray.Execute()\n File \"/data/user/aobertacke/software/icerec_IC2019_V05_02_02-RC1/build/lib/I3Tray.py\", line 256, in Execute\n super(I3Tray, self).Execute()\n File \"/data/user/aobertacke/software/icerec_IC2019_V05_02_02-RC1/build/lib/icecube/phys_services/spe_fit_injector.py\", line 41, in Configure\n if bool(data['JOINT_fit']['valid']) == False and \\\nKeyError: 'JOINT_fit'\n\n\n\nThe TFT is currently requesting filter checks, thus this bug is an issue to hold the deadline set by TFT for ppl who have to simulate their own signal.",
"reporter": "anna.obertacke",
"cc": "",
"resolution": "fixed",
"time": "2019-02-13T11:55:25",
"component": "analysis",
"summary": "[filterscripts] spe fit injector -> key error",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| defect | spe fit injector key error trac migrated from json status closed changetime ts description when using the new simulationfiltering spe and the spe file option the follwoing error occurs n n nwarn python module iceprod modules not found will not define iceprod class simulationfiltering spe py in nwarn no ngroup in table using default this is probably not what you want cxx in bool setuptable co nwarn no ngroup in table using default this is probably not what you want cxx in bool setuptable co nerror exception thrown while configuring module fixspe cxx in void configure nfixspe icecube phys services spe fit injector n filename n description json may compressed file with spe fit data n default n configured cvmfs icecube opensciencegrid org metaprojects icerec filterscripts resources data newwavedeform liorm lite json n ntraceback most recent call last n file data user aobertacke scripts newwavedeform simulationfiltering spe py line in n main opts n file data user aobertacke scripts newwavedeform simulationfiltering spe py line in main n tray execute n file data user aobertacke software icerec build lib py line in execute n super self execute n file data user aobertacke software icerec build lib icecube phys services spe fit injector py line in configure n if bool data false and nkeyerror joint fit n n n nthe tft is currently requesting filter checks thus this bug is an issue to hold the deadline set by tft for ppl who have to simulate their own signal reporter anna obertacke cc resolution fixed time component analysis summary spe fit injector key error priority major keywords milestone owner olivas type defect | 1 |
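The `KeyError: 'JOINT_fit'` in the traceback of the record above comes from indexing the parsed JSON with `data['JOINT_fit']` when that key may be absent from the fit file. As an editorial aside, a minimal sketch of a defensive lookup in Python — `raw` and the key names here are invented stand-ins, and this is not the actual `spe_fit_injector.py` code:

```python
import json

# Hypothetical SPE fit record that lacks a JOINT_fit entry,
# mimicking the situation described in the bug report above.
raw = '{"SLC_fit": {"valid": true}}'
data = json.loads(raw)

# The pattern in the traceback raises KeyError on a missing key:
#     if bool(data['JOINT_fit']['valid']) == False and ...
# A defensive variant treats a missing JOINT_fit as an invalid fit:
joint_valid = bool(data.get('JOINT_fit', {}).get('valid', False))
print(joint_valid)  # False when JOINT_fit is absent
```

The same `dict.get` chain also handles the case where `JOINT_fit` is present but its `valid` flag is false, so one expression covers both the crash and the logic the original line was trying to express.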
88,223 | 25,348,750,063 | IssuesEvent | 2022-11-19 14:11:38 | sile-typesetter/sile | https://api.github.com/repos/sile-typesetter/sile | opened | v0.14.5 release checklist | todo builds & releases | - [x] Shuffle any issues being put off to future milestones
- [x] Close all issues in current milestone (except this one)
- [x] Spring clean
- [x] Re-fetch tooling
- [x] Configure and build
- [x] Pass all tests
- [x] Remote CI
- [x] Local
- [x] Cut release
- [ ] Push release commit & tag to master
- Update website
- [ ] Copy and post manual, update 'latest' symlink and menu links
- [ ] Copy changelog and prefix with a summary as a blog post
- [ ] Tweak summary for gfm and edit into GitHub release notes
- Update downstream distro packages
- [ ] Arch Linux official: [community/sile](https://archlinux.org/packages/community/x86_64/sile/)
- [ ] Arch Linux AUR: [AUR/sile-luajit](https://aur.archlinux.org/packages/sile-luajit) <!-- – [pull request]() -->
- [ ] Homebrew: [Formula](https://github.com/Homebrew/homebrew-core/blob/master/Formula/sile.rb) <!-- – [pull request]() -->
- [ ] NixOS: [nixpkgs](https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/typesetting/sile/default.nix) <!-- – [pull request]() – [tracker](https://nixpk.gs/pr-tracker.html?pr=) -->
- [ ] Drop devel flag from any website examples that were using it
- [ ] Ubuntu: [ppa](https://launchpad.net/~sile-typesetter/+archive/ubuntu/sile)
- [ ] Docker Hub: [tags](https://hub.docker.com/repository/docker/siletypesetter/sile/tags)
- [ ] GHCR: [versions](https://github.com/orgs/sile-typesetter/packages/container/sile/versions)
- [ ] openSUSE: <!-- pinging @Pi-Cla re [specfile](https://build.opensuse.org/package/show/openSUSE%3AFactory/sile) -->
- [ ] NetBSD: <!-- pinging @jsonn re [pkgsrc](https://pkgsrc.se/print/sile) -->
- [ ] Void Linux: <!-- [pull request](https://github.com/void-linux/void-packages/pull/18306) -->
- [ ] OpenBSD: <!-- [mailing list thread](https://marc.info/?t=157907840200001) -->
- Bump downstream projects
- [ ] [FontProof](https://github.com/sile-typesetter/fontproof) (Docker image base plus CI matrix)
- [ ] [CaSILE](https://github.com/sile-typesetter/casile) (check, consider patch release)
- [ ] Eat cake
[](https://repology.org/project/sile/versions) | 1.0 | v0.14.5 release checklist - - [x] Shuffle any issues being put off to future milestones
- [x] Close all issues in current milestone (except this one)
- [x] Spring clean
- [x] Re-fetch tooling
- [x] Configure and build
- [x] Pass all tests
- [x] Remote CI
- [x] Local
- [x] Cut release
- [ ] Push release commit & tag to master
- Update website
- [ ] Copy and post manual, update 'latest' symlink and menu links
- [ ] Copy changelog and prefix with a summary as a blog post
- [ ] Tweak summary for gfm and edit into GitHub release notes
- Update downstream distro packages
- [ ] Arch Linux official: [community/sile](https://archlinux.org/packages/community/x86_64/sile/)
- [ ] Arch Linux AUR: [AUR/sile-luajit](https://aur.archlinux.org/packages/sile-luajit) <!-- – [pull request]() -->
- [ ] Homebrew: [Formula](https://github.com/Homebrew/homebrew-core/blob/master/Formula/sile.rb) <!-- – [pull request]() -->
- [ ] NixOS: [nixpkgs](https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/typesetting/sile/default.nix) <!-- – [pull request]() – [tracker](https://nixpk.gs/pr-tracker.html?pr=) -->
- [ ] Drop devel flag from any website examples that were using it
- [ ] Ubuntu: [ppa](https://launchpad.net/~sile-typesetter/+archive/ubuntu/sile)
- [ ] Docker Hub: [tags](https://hub.docker.com/repository/docker/siletypesetter/sile/tags)
- [ ] GHCR: [versions](https://github.com/orgs/sile-typesetter/packages/container/sile/versions)
- [ ] openSUSE: <!-- pinging @Pi-Cla re [specfile](https://build.opensuse.org/package/show/openSUSE%3AFactory/sile) -->
- [ ] NetBSD: <!-- pinging @jsonn re [pkgsrc](https://pkgsrc.se/print/sile) -->
- [ ] Void Linux: <!-- [pull request](https://github.com/void-linux/void-packages/pull/18306) -->
- [ ] OpenBSD: <!-- [mailing list thread](https://marc.info/?t=157907840200001) -->
- Bump downstream projects
- [ ] [FontProof](https://github.com/sile-typesetter/fontproof) (Docker image base plus CI matrix)
- [ ] [CaSILE](https://github.com/sile-typesetter/casile) (check, consider patch release)
- [ ] Eat cake
[](https://repology.org/project/sile/versions) | non_defect | release checklist shuffle any issues being put off to future milestones close all issues in current milestone except this one spring clean re fetch tooling configure and build pass all tests remote ci local cut release push release commit tag to master update website copy and post manual update latest symlink and menu links copy changelog and prefix with a summary as a blog post tweak summary for gfm and edit into github release notes update downstream distro packages arch linux official arch linux aur homebrew nixos drop devel flag from any website examples that were using it ubuntu docker hub ghcr opensuse netbsd void linux openbsd bump downstream projects docker image base plus ci matrix check consider patch release eat cake | 0 |
209,227 | 16,187,955,633 | IssuesEvent | 2021-05-04 01:42:58 | cabelitos/mentorship-team | https://api.github.com/repos/cabelitos/mentorship-team | closed | Missing Code of Conduct | documentation | The Node.JS Code of Conduct is missing and should be added to this repo. | 1.0 | Missing Code of Conduct - The Node.JS Code of Conduct is missing and should be added to this repo. | non_defect | missing code of conduct the node js code of conduct is missing and should be added to this repo | 0 |
47,275 | 13,056,094,379 | IssuesEvent | 2020-07-30 03:38:25 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | Fixed in revision 75773 with the inclusion of SimpleMajorityTrigger (Trac #264) | Migrated from Trac combo simulation defect | Generates more and shorter triggers.
Proposed solution in SMTrigger.cxx:
l.238 should be k = q +1
l.248 : j = k should be j = i + threshold -1
But might require also a test.
Migrated from https://code.icecube.wisc.edu/ticket/264
```json
{
"status": "closed",
"changetime": "2011-05-26T16:36:38",
"description": "Generates more and shorter triggers.\n\nProposed solution in SMTrigger.cxx: \nl.238 should be k = q +1\nl.248 : j = k should be j = i + threshold -1\n\nBut might require also a test.",
"reporter": "icecube",
"cc": "",
"resolution": "fixed",
"_ts": "1306427798000000",
"component": "combo simulation",
"summary": "Fixed in revision 75773 with the inclusion of SimpleMajorityTrigger",
"priority": "normal",
"keywords": "",
"time": "2011-05-17T15:59:32",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| 1.0 | Fixed in revision 75773 with the inclusion of SimpleMajorityTrigger (Trac #264) - Generates more and shorter triggers.
Proposed solution in SMTrigger.cxx:
l.238 should be k = q +1
l.248 : j = k should be j = i + threshold -1
But might require also a test.
Migrated from https://code.icecube.wisc.edu/ticket/264
```json
{
"status": "closed",
"changetime": "2011-05-26T16:36:38",
"description": "Generates more and shorter triggers.\n\nProposed solution in SMTrigger.cxx: \nl.238 should be k = q +1\nl.248 : j = k should be j = i + threshold -1\n\nBut might require also a test.",
"reporter": "icecube",
"cc": "",
"resolution": "fixed",
"_ts": "1306427798000000",
"component": "combo simulation",
"summary": "Fixed in revision 75773 with the inclusion of SimpleMajorityTrigger",
"priority": "normal",
"keywords": "",
"time": "2011-05-17T15:59:32",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| defect | fixed in revision with the inclusion of simplemajoritytrigger trac generates more and shorter triggers proposed solution in smtrigger cxx l should be k q l j k should be j i threshold but might require also a test migrated from json status closed changetime description generates more and shorter triggers n nproposed solution in smtrigger cxx nl should be k q nl j k should be j i threshold n nbut might require also a test reporter icecube cc resolution fixed ts component combo simulation summary fixed in revision with the inclusion of simplemajoritytrigger priority normal keywords time milestone owner olivas type defect | 1 |
9,583 | 2,615,163,040 | IssuesEvent | 2015-03-01 06:42:31 | chrsmith/reaver-wps | https://api.github.com/repos/chrsmith/reaver-wps | opened | WARNING: Receive timeout occurred | auto-migrated Priority-Triage Type-Defect | ```
Answer the following questions for every issue submitted:
0. What version of Reaver are you using? Reaver v1.4
1. What operating system are you using? Ubuntu 12.04
2. Is your wireless card in monitor mode (yes/no)? yes, airmon-ng start wlan0,
and I'm using mon0 for reaver
3. What is the signal strength of the Access Point you are trying to crack? -19
4. What is the manufacturer and model # of the device you are trying to crack?
Telus communications, Actiontec V1000H, Firmware Version: 31.30L.55
5. What is the entire command line string you are supplying to reaver?
reaver -i mon0 -b "my router's mac" -c 1 -A -vv
6. Please describe what you think the issue is. I have absolutely no idea,
feels like I've looked everywhere and tried everything at this point
7. Paste the output from Reaver below.
root@halo:~# reaver -i mon0 -b A8:39:44:XX:XX:XX -c 1 -A -vv
Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
[+] Switching mon0 to channel 1
[+] Waiting for beacon from A8:39:44:XX:XX:XX
[+] Associated with A8:39:44:XX:XX:XX (ESSID: H****2)
[+] Trying pin 55555678
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
So I've been all through the forums, google and youtube, tried many variations
in command line and this is the furthest I got.
This is my home router and it's only about 6FT away from my computer.
Also note that other users on here say that they have success with the same
network card as me: broadcom BCM4312 running b43-fwcutter version# 5.100.138
It is worth noting that if I do not use the -A command then all I get is a
bunch of FAILED TO ASSOCIATE messages:
root@halo:~# reaver -i mon0 -b A8:39:44:XX:XX:XX -c 1 -vv
Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
[+] Switching mon0 to channel 1
[?] Restore previous session for A8:39:44:XX:XX:XX? [n/Y] y
[+] Restored previous session
[+] Waiting for beacon from A8:39:44:XX:XX:XX
[!] WARNING: Failed to associate with A8:39:44:XX:XX:XX (ESSID: H****2)
[!] WARNING: Failed to associate with A8:39:44:XX:XX:XX (ESSID: H****2)
[!] WARNING: Failed to associate with A8:39:44:XX:XX:XX (ESSID: HomeT2)
[!] WARNING: Failed to associate with A8:39:44:XX:XX:XX (ESSID: H****2)
===================================================================
I ran wash and everything looks ok:
root@halo:~# wash -i mon0
Wash v1.4 WiFi Protected Setup Scan Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
BSSID Channel RSSI WPS Version WPS Locked
ESSID
--------------------------------------------------------------------------------
-------------------------------
A8:39:44:XX:X:XX 1 -19 1.0 No
H****2
==================================================================
pcap file is available for these attempts.
Please help me
```
Original issue reported on code.google.com by `aldoba...@gmail.com` on 21 Jun 2012 at 5:26 | 1.0 | WARNING: Receive timeout occurred - ```
Answer the following questions for every issue submitted:
0. What version of Reaver are you using? Reaver v1.4
1. What operating system are you using? Ubuntu 12.04
2. Is your wireless card in monitor mode (yes/no)? yes, airmon-ng start wlan0,
and I'm using mon0 for reaver
3. What is the signal strength of the Access Point you are trying to crack? -19
4. What is the manufacturer and model # of the device you are trying to crack?
Telus communications, Actiontec V1000H, Firmware Version: 31.30L.55
5. What is the entire command line string you are supplying to reaver?
reaver -i mon0 -b "my router's mac" -c 1 -A -vv
6. Please describe what you think the issue is. I have absolutely no idea,
feels like I've looked everywhere and tried everything at this point
7. Paste the output from Reaver below.
root@halo:~# reaver -i mon0 -b A8:39:44:XX:XX:XX -c 1 -A -vv
Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
[+] Switching mon0 to channel 1
[+] Waiting for beacon from A8:39:44:XX:XX:XX
[+] Associated with A8:39:44:XX:XX:XX (ESSID: H****2)
[+] Trying pin 55555678
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
So I've been all through the forums, google and youtube, tried many variations
in command line and this is the furthest I got.
This is my home router and it's only about 6FT away from my computer.
Also note that other users on here say that they have success with the same
network card as me: broadcom BCM4312 running b43-fwcutter version# 5.100.138
It is worth noting that if I do not use the -A command then all I get is a
bunch of FAILED TO ASSOCIATE messages:
root@halo:~# reaver -i mon0 -b A8:39:44:XX:XX:XX -c 1 -vv
Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
[+] Switching mon0 to channel 1
[?] Restore previous session for A8:39:44:XX:XX:XX? [n/Y] y
[+] Restored previous session
[+] Waiting for beacon from A8:39:44:XX:XX:XX
[!] WARNING: Failed to associate with A8:39:44:XX:XX:XX (ESSID: H****2)
[!] WARNING: Failed to associate with A8:39:44:XX:XX:XX (ESSID: H****2)
[!] WARNING: Failed to associate with A8:39:44:XX:XX:XX (ESSID: HomeT2)
[!] WARNING: Failed to associate with A8:39:44:XX:XX:XX (ESSID: H****2)
===================================================================
I ran wash and everything looks ok:
root@halo:~# wash -i mon0
Wash v1.4 WiFi Protected Setup Scan Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
BSSID Channel RSSI WPS Version WPS Locked
ESSID
--------------------------------------------------------------------------------
-------------------------------
A8:39:44:XX:X:XX 1 -19 1.0 No
H****2
==================================================================
pcap file is available for these attempts.
Please help me
```
Original issue reported on code.google.com by `aldoba...@gmail.com` on 21 Jun 2012 at 5:26 | defect | warning receive timeout occurred answer the following questions for every issue submitted what version of reaver are you using reaver what operating system are you using ubuntu is your wireless card in monitor mode yes no yes airmon ng start and i m using for reaver what is the signal strength of the access point you are trying to crack what is the manufacturer and model of the device you are trying to crack telus communications actiontec firmware version what is the entire command line string you are supplying to reaver reaver i b my router s mac c a vv please describe what you think the issue is i have absolutely no idea feels like i ve looked everywhere and tried everything at this point paste the output from reaver below root halo reaver i b xx xx xx c a vv reaver wifi protected setup attack tool copyright c tactical network solutions craig heffner switching to channel waiting for beacon from xx xx xx associated with xx xx xx essid h trying pin sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request so i ve been all through the forums google and youtube tried many variations in command line and this is the 
furthest i got this is my home router and it s only about away from my computer also note that other users on here say that they have success with the same network card as me broadcom running fwcutter version it is worth noting that if i do not use the a command then all i get is a bunch of failed to associate messages root halo reaver i b xx xx xx c vv reaver wifi protected setup attack tool copyright c tactical network solutions craig heffner switching to channel restore previous session for xx xx xx y restored previous session waiting for beacon from xx xx xx warning failed to associate with xx xx xx essid h warning failed to associate with xx xx xx essid h warning failed to associate with xx xx xx essid warning failed to associate with xx xx xx essid h i ran wash any everything looks ok root halo wash i wash wifi protected setup scan tool copyright c tactical network solutions craig heffner bssid channel rssi wps version wps locked essid xx x xx no h pcap file is available for these attempts please help me original issue reported on code google com by aldoba gmail com on jun at | 1 |
61,909 | 17,023,806,941 | IssuesEvent | 2021-07-03 03:57:39 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Map does not display on changeset browser | Component: website Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 11.47am, Saturday, 30th June 2012]**
For the last couple of days, the map on the right-hand side of the /browse/changesets history view is not appearing. JavaScript error:
Error: Event.observe is not a function
Source File: http://www.openstreetmap.org/browse/changesets
Line: 315
Fails in Firefox 13.0.1, IE 9.0 and Epiphany 3.4.1. Confirmed across multiple machines from multiple IP addresses. Last time I saw it working was probably no later than 27 June. Note that the individual changeset viewer /browse/changeset works as expected. The inline map script for the history page ends `Event.observe(window, "load", init);` while the individual changeset ends `window.onload = init;`. | 1.0 | Map does not display on changeset browser - **[Submitted to the original trac issue database at 11.47am, Saturday, 30th June 2012]**
For the last couple of days, the map on the right-hand side of the /browse/changesets history view is not appearing. JavaScript error:
Error: Event.observe is not a function
Source File: http://www.openstreetmap.org/browse/changesets
Line: 315
Fails in Firefox 13.0.1, IE 9.0 and Epiphany 3.4.1. Confirmed across multiple machines from multiple IP addresses. Last time I saw it working was probably no later than 27 June. Note that the individual changeset viewer /browse/changeset works as expected. The inline map script for the history page ends `Event.observe(window, "load", init);` while the individual changeset ends `window.onload = init;`. | defect | map does not display on changeset browser for the last couple of days the map on the right hand side of the browse changesets history view is not appearing javascript error error event observe is not a function source file line fails in firefox ie and epiphany confirmed across multiple machines from multiple ip addresses last time i saw it working was probably no later than june note that the individual changeset viewer browse changeset works as expected the inline map script for the history page ends event observe window load init while the individual changeset ends window onload init | 1 |
19,585 | 3,227,243,093 | IssuesEvent | 2015-10-11 01:10:00 | krashanoff/cleanrip | https://api.github.com/repos/krashanoff/cleanrip | closed | wii.dat needed on Gamecube | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
Load Cleanrip v2.0.0 with only gc.dat on Gamecube
What is the expected output? What do you see instead?
It should tell you that the Redump DAT files have been found.
Instead, it prompts you that the files have no been found and asks if you want
to download them. This is pointless seeing as the Gamecube cannot dump Wii
games, so the check for wii.dat should not be there.
Workaround:
Download a proper wii.dat
OR
What also worked for me was copying the gc.dat, renaming it wii.dat, and it
would not prompt me anymore. I tried using a blank wii.dat but it did not
work, so it needs to be some kind of valid .dat file. This was easier than
looking for a legit wii.dat.
```
Original issue reported on code.google.com by `steventy...@gmail.com` on 1 May 2014 at 4:37 | 1.0 | wii.dat needed on Gamecube - ```
What steps will reproduce the problem?
Load Cleanrip v2.0.0 with only gc.dat on Gamecube
What is the expected output? What do you see instead?
It should tell you that the Redump DAT files have been found.
Instead, it prompts you that the files have not been found and asks if you want
to download them. This is pointless seeing as the Gamecube cannot dump Wii
games, so the check for wii.dat should not be there.
Workaround:
Download a proper wii.dat
OR
What also worked for me was copying the gc.dat, renaming it wii.dat, and it
would not prompt me anymore. I tried using a blank wii.dat but it did not
work, so it needs to be some kind of valid .dat file. This was easier than
looking for a legit wii.dat.
```
Original issue reported on code.google.com by `steventy...@gmail.com` on 1 May 2014 at 4:37 | defect | wii dat needed on gamecube what steps will reproduce the problem load cleanrip with only gc dat on gamecube what is the expected output what do you see instead it should tell you that the redump dat files have been found instead it prompts you that the files have not been found and asks if you want to download them this is pointless seeing as the gamecube cannot dump wii games so the check for wii dat should not be there workaround download a proper wii dat or what also worked for me was copying the gc dat renaming it wii dat and it would not prompt me anymore i tried using a blank wii dat but it did not work so it needs to be some kind of valid dat file this was easier than looking for a legit wii dat original issue reported on code google com by steventy gmail com on may at | 1 |
92 | 2,534,816,257 | IssuesEvent | 2015-01-25 11:06:06 | chrisalexander/Learn-Chinese-app | https://api.github.com/repos/chrisalexander/Learn-Chinese-app | closed | CurrentState should be called CurrentStatus and return IEnumerable<string> | LongRunningProcess | Also update the UI to render accordingly, plus the tests | 1.0 | CurrentState should be called CurrentStatus and return IEnumerable<string> - Also update the UI to render accordingly, plus the tests | non_defect | currentstate should be called currentstatus and return ienumerable also update the ui to render accordingly plus the tests | 0 |
50,649 | 13,187,660,532 | IssuesEvent | 2020-08-13 04:08:50 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | cmake - I3_USE_ROOT macro set w/o root installed (Trac #1131) | Migrated from Trac cmake defect | if USE_ROOT is given to cmake, the C macro I3_USE_ROOT is set even if root isn't found.
see #1115
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1131">https://code.icecube.wisc.edu/ticket/1131</a>, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-01-11T23:59:15",
"description": "if USE_ROOT is given to cmake, the C macro I3_USE_ROOT is set even if root isn't found.\n\nsee #1121",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1547251155701656",
"component": "cmake",
"summary": "cmake - I3_USE_ROOT macro set w/o root installed",
"priority": "blocker",
"keywords": "",
"time": "2015-08-17T18:43:53",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | cmake - I3_USE_ROOT macro set w/o root installed (Trac #1131) - if USE_ROOT is given to cmake, the C macro I3_USE_ROOT is set even if root isn't found.
see #1115
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1131">https://code.icecube.wisc.edu/ticket/1131</a>, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-01-11T23:59:15",
"description": "if USE_ROOT is given to cmake, the C macro I3_USE_ROOT is set even if root isn't found.\n\nsee #1121",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1547251155701656",
"component": "cmake",
"summary": "cmake - I3_USE_ROOT macro set w/o root installed",
"priority": "blocker",
"keywords": "",
"time": "2015-08-17T18:43:53",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | cmake use root macro set w o root installed trac if use root is given to cmake the c macro use root is set even if root isn t found see migrated from json status closed changetime description if use root is given to cmake the c macro use root is set even if root isn t found n nsee reporter nega cc resolution fixed ts component cmake summary cmake use root macro set w o root installed priority blocker keywords time milestone owner nega type defect | 1 |
10,765 | 2,622,183,363 | IssuesEvent | 2015-03-04 00:19:48 | byzhang/leveldb | https://api.github.com/repos/byzhang/leveldb | closed | LevelDB 1.16 and 1.17 not available on downloads page | auto-migrated Priority-Medium Type-Defect | ```
https://code.google.com/p/leveldb/downloads/list
```
Original issue reported on code.google.com by `clemah...@gmail.com` on 18 Aug 2014 at 1:19
* Merged into: #240 | 1.0 | LevelDB 1.16 and 1.17 not available on downloads page - ```
https://code.google.com/p/leveldb/downloads/list
```
Original issue reported on code.google.com by `clemah...@gmail.com` on 18 Aug 2014 at 1:19
* Merged into: #240 | defect | leveldb and not available on downloads page original issue reported on code google com by clemah gmail com on aug at merged into | 1 |
4,222 | 2,610,089,512 | IssuesEvent | 2015-02-26 18:27:06 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳痘痘如何祛 | auto-migrated Priority-Medium Type-Defect | ```
深圳痘痘如何祛【深圳韩方科颜全国热线400-869-1818,24小时QQ4
008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方��
�—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科�
��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康
祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治��
�粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘�
��
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:36 | 1.0 | 深圳痘痘如何祛 - ```
深圳痘痘如何祛【深圳韩方科颜全国热线400-869-1818,24小时QQ4
008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方��
�—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科�
��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康
祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治��
�粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘�
��
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:36 | defect | 深圳痘痘如何祛 深圳痘痘如何祛【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方�� �—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科� ��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康 祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治�� �粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘� �� original issue reported on code google com by szft com on may at | 1 |
19,893 | 3,273,628,523 | IssuesEvent | 2015-10-26 04:24:43 | npgall/cqengine | https://api.github.com/repos/npgall/cqengine | closed | Query CONSISTENTLY slower with the addition of NavigableIndex | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Run this code, see below.
What is the expected output? What do you see instead?
I expected the queries against a NavigableIndex to be similar to or faster
instead of nearly 2x slower than without.
What version of the product are you using? On what operating system?
1.2.7 from Maven on Ubuntu 13.10
Running this code will spit out some times. If you run it with and without the
index line commented out you'll see that commenting it out is much faster.
Thanks for looking into it!
```
Original issue reported on code.google.com by `crlia...@gmail.com` on 30 May 2014 at 9:34
Attachments:
* [App.java](https://storage.googleapis.com/google-code-attachments/cqengine/issue-37/comment-0/App.java)
| 1.0 | Query CONSISTENTLY slower with the addition of NavigableIndex - ```
What steps will reproduce the problem?
1. Run this code, see below.
What is the expected output? What do you see instead?
I expected the queries against a NavigableIndex to be similar to or faster
instead of nearly 2x slower than without.
What version of the product are you using? On what operating system?
1.2.7 from Maven on Ubuntu 13.10
Running this code will spit out some times. If you run it with and without the
index line commented out you'll see that commenting it out is much faster.
Thanks for looking into it!
```
Original issue reported on code.google.com by `crlia...@gmail.com` on 30 May 2014 at 9:34
Attachments:
* [App.java](https://storage.googleapis.com/google-code-attachments/cqengine/issue-37/comment-0/App.java)
| defect | query consistently slower with the addition of navigableindex what steps will reproduce the problem run this code see below what is the expected output what do you see instead i expected the queries against a navigableindex to be similar to or faster instead of nearly slower than without what version of the product are you using on what operating system from maven on ubuntu running this code will spit out some times if you run it with and without the index line commented out you ll see that commenting it out is much faster thanks for looking into it original issue reported on code google com by crlia gmail com on may at attachments | 1 |
63,537 | 14,656,736,755 | IssuesEvent | 2020-12-28 14:05:07 | fu1771695yongxie/next.js | https://api.github.com/repos/fu1771695yongxie/next.js | opened | CVE-2020-11023 (Medium) detected in multiple libraries | security vulnerability | ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-2.2.0.min.js</b>, <b>jquery-3.2.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: next.js/node_modules/js-base64/test/index.html</p>
<p>Path to vulnerable library: next.js/node_modules/js-base64/test/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.2.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.0/jquery.min.js</a></p>
<p>Path to dependency file: next.js/node_modules/lost/docs/_includes/footer.html</p>
<p>Path to vulnerable library: next.js/node_modules/lost/docs/_includes/footer.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.2.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p>
<p>Path to dependency file: next.js/node_modules/superagent/docs/tail.html</p>
<p>Path to vulnerable library: next.js/node_modules/superagent/docs/tail.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/next.js/commit/7da96cb602f4b841f912ded99ee8ea2109a96f0e">7da96cb602f4b841f912ded99ee8ea2109a96f0e</a></p>
<p>Found in base branch: <b>canary</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11023 (Medium) detected in multiple libraries - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-2.2.0.min.js</b>, <b>jquery-3.2.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: next.js/node_modules/js-base64/test/index.html</p>
<p>Path to vulnerable library: next.js/node_modules/js-base64/test/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.2.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.0/jquery.min.js</a></p>
<p>Path to dependency file: next.js/node_modules/lost/docs/_includes/footer.html</p>
<p>Path to vulnerable library: next.js/node_modules/lost/docs/_includes/footer.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.2.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p>
<p>Path to dependency file: next.js/node_modules/superagent/docs/tail.html</p>
<p>Path to vulnerable library: next.js/node_modules/superagent/docs/tail.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/next.js/commit/7da96cb602f4b841f912ded99ee8ea2109a96f0e">7da96cb602f4b841f912ded99ee8ea2109a96f0e</a></p>
<p>Found in base branch: <b>canary</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file next js node modules js test index html path to vulnerable library next js node modules js test index html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file next js node modules lost docs includes footer html path to vulnerable library next js node modules lost docs includes footer html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file next js node modules superagent docs tail html path to vulnerable library next js node modules superagent docs tail html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch canary vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource | 0 |
68,900 | 21,945,989,945 | IssuesEvent | 2022-05-24 00:40:49 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | invalid homeserver/_matrix/client/r0/search for Dendrite | T-Defect | ### Steps to reproduce
1. Where are you starting? What can you see? search
2. What do you click? I found an invalid URL: homeserver/_matrix/client/r0/search
It seems that the old spec version (https://matrix.org/docs/spec/client_server/r0.6.1) has this API, but the new version doesn't have it; I am not sure
### Outcome
#### What did you expect?
can search for Dendrite
#### What happened instead?
invalid URL homeserver/_matrix/client/r0/search
### Operating system
Arch Linux
### Browser information
chromium 101.0.4951.64-1
### URL for webapp
element-web 1.10.12
### Application version
element-web 1.10.12
### Homeserver
dendrite 0.8.5
### Will you send logs?
No | 1.0 | invalid homeserver/_matrix/client/r0/search for Dendrite - ### Steps to reproduce
1. Where are you starting? What can you see? search
2. What do you click? I found an invalid URL: homeserver/_matrix/client/r0/search
It seems that the old spec version (https://matrix.org/docs/spec/client_server/r0.6.1) has this API, but the new version doesn't have it; I am not sure
### Outcome
#### What did you expect?
can search for Dendrite
#### What happened instead?
invalid URL homeserver/_matrix/client/r0/search
### Operating system
Arch Linux
### Browser information
chromium 101.0.4951.64-1
### URL for webapp
element-web 1.10.12
### Application version
element-web 1.10.12
### Homeserver
dendrite 0.8.5
### Will you send logs?
No | defect | invalid homeserver matrix client search for dendrite steps to reproduce where are you starting what can you see search what do you click and found invalid url homeserver matrix client search it seems that old version has this api but the new version don t have i am not sure outcome what did you expect can search for dendrite what happened instead invalid url homeserver matrix client search operating system arch linux browser information chromium url for webapp element web application version element web homeserver dendrite will you send logs no | 1 |
17,055 | 2,972,399,343 | IssuesEvent | 2015-07-14 13:36:42 | mabe02/lanterna | https://api.github.com/repos/mabe02/lanterna | closed | ActionListDialog should highlight entire selected item | auto-migrated Priority-Medium Type-Defect | ```
I was just experimenting with ListSelectDialog which uses ActionListDialog,
passing Strings as the Objects. The selected item is only indicated by the
cursor. It would be nicer if the entry row in the list was highlighted.
```
Original issue reported on code.google.com by `bem...@gmail.com` on 11 Sep 2012 at 9:22 | 1.0 | ActionListDialog should highlight entire selected item - ```
I was just experimenting with ListSelectDialog which uses ActionListDialog,
passing Strings as the Objects. The selected item is only indicated by the
cursor. It would be nicer if the entry row in the list was highlighted.
```
Original issue reported on code.google.com by `bem...@gmail.com` on 11 Sep 2012 at 9:22 | defect | actionlistdialog should highlight entire selected item i was just experimenting with listselectdialog which uses actionlistdialog passing strings as the objects the selected item is only indicated by the cursor it would be nicer if the entry row in the list was highlighted original issue reported on code google com by bem gmail com on sep at | 1 |
63,304 | 17,576,431,594 | IssuesEvent | 2021-08-15 17:51:11 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Microphone disappeared from settings after updating Element-Desktop | T-Defect X-Needs-Info X-Cannot-Reproduce A-VoIP A-Media | Please delete this - not an Element issue! | 1.0 | Microphone disappeared from settings after updating Element-Desktop - Please delete this - not an Element issue! | defect | microphone disappeared from settings after updating element desktop please delete this not an element issue | 1 |
288,112 | 21,685,845,527 | IssuesEvent | 2022-05-09 11:11:44 | mmastro31/talking-oscilloscope | https://api.github.com/repos/mmastro31/talking-oscilloscope | closed | Write PCB prototype section of report | documentation Medium | Complete the section of the final report regarding the PCB prototype. This can be done after the PCB prototype is completed. | 1.0 | Write PCB prototype section of report - Complete the section of the final report regarding the PCB prototype. This can be done after the PCB prototype is completed. | non_defect | write pcb prototype section of report complete the section of the final report regarding the pcb prototype this can be done after the pcb prototype is completed | 0 |
284,786 | 30,913,688,955 | IssuesEvent | 2023-08-05 02:37:15 | Nivaskumark/kernel_v4.19.72_old | https://api.github.com/repos/Nivaskumark/kernel_v4.19.72_old | reopened | CVE-2022-4379 (High) detected in linux-yoctov5.4.51 | Mend: dependency security vulnerability | ## CVE-2022-4379 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/nfsd/nfs4proc.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/nfsd/nfs4proc.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free vulnerability was found in __nfs42_ssc_open() in fs/nfs/nfs4file.c in the Linux kernel. This flaw allows an attacker to conduct a remote denial of service.
<p>Publish Date: 2023-01-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4379>CVE-2022-4379</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4379">https://www.linuxkernelcves.com/cves/CVE-2022-4379</a></p>
<p>Release Date: 2023-01-10</p>
<p>Fix Resolution: v6.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-4379 (High) detected in linux-yoctov5.4.51 - ## CVE-2022-4379 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/nfsd/nfs4proc.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/nfsd/nfs4proc.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free vulnerability was found in __nfs42_ssc_open() in fs/nfs/nfs4file.c in the Linux kernel. This flaw allows an attacker to conduct a remote denial of service.
<p>Publish Date: 2023-01-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4379>CVE-2022-4379</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4379">https://www.linuxkernelcves.com/cves/CVE-2022-4379</a></p>
<p>Release Date: 2023-01-10</p>
<p>Fix Resolution: v6.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files fs nfsd c fs nfsd c vulnerability details a use after free vulnerability was found in ssc open in fs nfs c in the linux kernel this flaw allows an attacker to conduct a remote denial publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
73,232 | 24,515,363,972 | IssuesEvent | 2022-10-11 04:11:43 | Optiboot/optiboot | https://api.github.com/repos/Optiboot/optiboot | opened | optiboot_x fails for >48k of flash (ie xTiny, AVRDx64, AVRDx128) | Type-Defect Priority-High Optiboot-X-specific | The current optiboot_x was written for ATmega4809, and assumes that all of flash memory is mapped into the RAM address space. Also, it doesn't handle any of the hacks for an extended address byte.
So optiboot_x will fail to work for more than the amount of flash that is directly mapped into the RAM address space by default. That's only 32k on the 64k and 128k devices.
https://github.com/avrdudes/avrdude/issues/1120#
@SpenceKonde @MCUdude @avrdudes
| 1.0 | optiboot_x fails for >48k of flash (ie xTiny, AVRDx64, AVRDx128) - The current optiboot_x was written for ATmega4809, and assumes that all of flash memory is mapped into the RAM address space. Also, it doesn't handle any of the hacks for an extended address byte.
So optiboot_x will fail to work for more than the amount of flash that is directly mapped into the RAM address space by default. That's only 32k on the 64k and 128k devices.
https://github.com/avrdudes/avrdude/issues/1120#
@SpenceKonde @MCUdude @avrdudes
| defect | optiboot x fails for of flash ie xtiny the current optiboot x was written for and assumes that all of flash memory is mapped into the ram address space also it doesn t handle any of the hacks for an extended address byte so optiboot x will fail to work for more than the amount of flash that is directly mapped into the ram address space by default that s only on the and devices spencekonde mcudude avrdudes | 1 |
255,108 | 8,108,831,037 | IssuesEvent | 2018-08-14 04:12:56 | Crizov/HealthClientApp | https://api.github.com/repos/Crizov/HealthClientApp | opened | US111 - Upload supplementary files from local storage | priority: H size: 3 | As a patient I want to upload supplementary files from local storage so that I can add local files to the data packet
**Acceptance Criteria:**
1. on the create packet screen, add a section that allows a user to upload a file.
2. When "select file" button is pressed, open the phone's local file explorer.
3. allow user to select a file, and get the selected file's ref.
4. when data packet is sent, save the file to firebase using the file ref | 1.0 | US111 - Upload supplementary files from local storage - As a patient I want to upload supplementary files from local storage so that I can add local files to the data packet
**Acceptance Criteria:**
1. on the create packet screen, add a section that allows a user to upload a file.
2. When "select file" button is pressed, open the phone's local file explorer.
3. allow user to select a file, and get the selected file's ref.
4. when data packet is sent, save the file to firebase using the file ref | non_defect | upload supplementary files from local storage as a patient i want to upload supplementary files from local storage so that i can add local files to the data packet acceptance criteria on the create packet screen add a section that allows a user to upload a file when select file button is pressed open the phone s local file explorer allow user to select a file and get the selected file s ref when data packet is sent save the file to firebase using the file ref | 0 |
99,300 | 11,138,187,608 | IssuesEvent | 2019-12-20 21:34:29 | sharyuwu/optimum-tilt-of-solar-panels | https://api.github.com/repos/sharyuwu/optimum-tilt-of-solar-panels | closed | VnV Review: InputBounds-id2 - input(2) and output (2) should be -90 | documentation | Hi Sharon,
There seems to be a typo with input (2), as it reads 90 but should be -90.
The same applies to output (2) - both highlighted below.

| 1.0 | VnV Review: InputBounds-id2 - input(2) and output (2) should be -90 - Hi Sharon,
There seems to be a typo with input (2), as it reads 90 but should be -90.
The same applies to output (2) - both highlighted below.

| non_defect | vnv review inputbounds input and output should be hi sharon there seems to be a typo with input as it reads but should be the same applies to output both highlighted below | 0 |
2,107 | 2,603,976,409 | IssuesEvent | 2015-02-24 19:01:37 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳hsv-igg阳性 | auto-migrated Priority-Medium Type-Defect | ```
沈阳hsv-igg阳性〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓�
��立于1946年,68年專注于性傳播疾病的研究和治療。位于沈陽
市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史悠
久、設備精良、技術權威、專家云集,是預防、保健、醫療��
�科研康復為一體的綜合性醫院。是國家首批公立甲等部隊醫�
��、全國首批醫療規范定點單位,是第四軍醫大學、東南大學
等知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤��
�衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:21 | 1.0 | 沈阳hsv-igg阳性 - ```
沈阳hsv-igg阳性〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓�
��立于1946年,68年專注于性傳播疾病的研究和治療。位于沈陽
市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史悠
久、設備精良、技術權威、專家云集,是預防、保健、醫療��
�科研康復為一體的綜合性醫院。是國家首批公立甲等部隊醫�
��、全國首批醫療規范定點單位,是第四軍醫大學、東南大學
等知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤��
�衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:21 | defect | 沈阳hsv igg阳性 沈阳hsv igg阳性〓沈陽軍區政治部醫院性病〓tel: 〓� �� , 。位于沈陽 。是一所與新中國同建立共輝煌的歷史悠 久、設備精良、技術權威、專家云集,是預防、保健、醫療�� �科研康復為一體的綜合性醫院。是國家首批公立甲等部隊醫� ��、全國首批醫療規范定點單位,是第四軍醫大學、東南大學 等知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤�� �衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。 original issue reported on code google com by gmail com on jun at | 1 |
2,267 | 2,603,992,064 | IssuesEvent | 2015-02-24 19:06:53 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳假尖锐疣有什么症状 | auto-migrated Priority-Medium Type-Defect | ```
沈阳假尖锐疣有什么症状〓沈陽軍區政治部醫院性病〓TEL:02
4-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療�
��位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝�
��的歷史悠久、設備精良、技術權威、專家云集,是預防、保
健、醫療、科研康復為一體的綜合性醫院。是國家首批公立��
�等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學�
��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍
空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集��
�二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:35 | 1.0 | 沈阳假尖锐疣有什么症状 - ```
沈阳假尖锐疣有什么症状〓沈陽軍區政治部醫院性病〓TEL:02
4-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療�
��位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝�
��的歷史悠久、設備精良、技術權威、專家云集,是預防、保
健、醫療、科研康復為一體的綜合性醫院。是國家首批公立��
�等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學�
��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍
空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集��
�二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:35 | defect | 沈阳假尖锐疣有什么症状 沈阳假尖锐疣有什么症状〓沈陽軍區政治部醫院性病〓tel: 〓 , � �� 。是一所與新中國同建立共輝� ��的歷史悠久、設備精良、技術權威、專家云集,是預防、保 健、醫療、科研康復為一體的綜合性醫院。是國家首批公立�� �等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學� ��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍 空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集�� �二等功。 original issue reported on code google com by gmail com on jun at | 1 |
809,105 | 30,174,630,571 | IssuesEvent | 2023-07-04 02:37:48 | TencentBlueKing/bk-cmdb | https://api.github.com/repos/TencentBlueKing/bk-cmdb | closed | 【CMDB+v3.10.27-feature-field-template-alpha1】模板编辑后点击提交同步模板信息时,同步文案与设计稿不一致 | priority: Normal | 问题描述
模板编辑后点击提交同步模板信息时,同步文案与设计稿不一致
一、前提条件
1.存在一个模板,并且绑定了模型
二 、重现步骤
1.点击模板,进入模板详情
2.点击 进入编辑
3.任意编写模板信息后,点击下一步
4.点击提交
预期结果
提交后,同步信息文案提示与设计稿一致
三 、实际结果
同步文案与设计稿不一致


| 1.0 | 【CMDB+v3.10.27-feature-field-template-alpha1】模板编辑后点击提交同步模板信息时,同步文案与设计稿不一致 - 问题描述
模板编辑后点击提交同步模板信息时,同步文案与设计稿不一致
一、前提条件
1.存在一个模板,并且绑定了模型
二 、重现步骤
1.点击模板,进入模板详情
2.点击 进入编辑
3.任意编写模板信息后,点击下一步
4.点击提交
预期结果
提交后,同步信息文案提示与设计稿一致
三 、实际结果
同步文案与设计稿不一致


| non_defect | 【cmdb feature field template 】模板编辑后点击提交同步模板信息时,同步文案与设计稿不一致 问题描述 模板编辑后点击提交同步模板信息时,同步文案与设计稿不一致 一、前提条件 存在一个模板,并且绑定了模型 二 、重现步骤 点击模板,进入模板详情 点击 进入编辑 任意编写模板信息后,点击下一步 点击提交 预期结果 提交后,同步信息文案提示与设计稿一致 三 、实际结果 同步文案与设计稿不一致 | 0 |
52,982 | 13,258,800,025 | IssuesEvent | 2020-08-20 15:51:23 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | 508-defect-2 [AXE-CORE]: Heading levels SHOULD increase by one | 508-defect-2 508-issue-headings 508/Accessibility vsa vsa-benefits | # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
The alert box heading, "Save time—and save your work in progress—by signing in before starting your application," is currently an h3 but there isn't an h2 between it and the h1.
If this page is in the CMS, it is a CMS sitewide issue.
If this page is built in React, there is a level prop that needs to be adjusted to "2", and a utility class will style it to look like an h3.
## Point of Contact
**VFS Point of Contact:** Jennifer
## Acceptance Criteria
As a screen reader user, I want to navigate the hierarchy of the page content using heading levels to save time and frustration.
## Environment
- Operating System: all
- Browser: all
- Screenreading device: any
- Server destination: staging & production
## Steps to Recreate
1. Enter [URL] in browser
2. Have developer tools open, and the axe browser extension loaded
3. Run an axe audit
4. Verify that [heading code] is called out as an error of "Heading levels should only increase by one"
5. This error is repeated throughout the form process
## Possible Fixes (optional)
### Sample
**Current code**
```html
<h3 class="usa-alert-heading">Save time—and save your work in progress—by signing in before starting your application</h3>
```
**Recommended code**
```html
<h2 class="usa-alert-heading">Save time—and save your work in progress—by signing in before starting your application</h2>
```
## WCAG or Vendor Guidance (optional)
* [axe-core 3.4 - Heading levels should only increase by one](https://dequeuniversity.com/rules/axe/3.4/heading-order)
## Screenshots


| 1.0 | 508-defect-2 [AXE-CORE]: Heading levels SHOULD increase by one - # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
The alert box heading, "Save time—and save your work in progress—by signing in before starting your application," is currently an h3 but there isn't an h2 between it and the h1.
If this page is in the CMS, it is a CMS sitewide issue.
If this page is built in React, there is a level prop that needs to be adjusted to "2", and a utility class will style it to look like an h3.
## Point of Contact
**VFS Point of Contact:** Jennifer
## Acceptance Criteria
As a screen reader user, I want to navigate the hierarchy of the page content using heading levels to save time and frustration.
## Environment
- Operating System: all
- Browser: all
- Screenreading device: any
- Server destination: staging & production
## Steps to Recreate
1. Enter [URL] in browser
2. Have developer tools open, and the axe browser extension loaded
3. Run an axe audit
4. Verify that [heading code] is called out as an error of "Heading levels should only increase by one"
5. This error is repeated throughout the form process
## Possible Fixes (optional)
### Sample
**Current code**
```html
<h3 class="usa-alert-heading">Save time—and save your work in progress—by signing in before starting your application</h3>
```
**Recommended code**
```html
<h2 class="usa-alert-heading">Save time—and save your work in progress—by signing in before starting your application</h2>
```
## WCAG or Vendor Guidance (optional)
* [axe-core 3.4 - Heading levels should only increase by one](https://dequeuniversity.com/rules/axe/3.4/heading-order)
## Screenshots


| defect | defect heading levels should increase by one feedback framework ❗️ must for if the feedback must be applied ⚠️should if the feedback is best practice ✔️ consider for suggestions enhancements description the alert box heading save time—and save your work in progress—by signing in before starting your application is currently an but there isn t an between it and the if this page is in the cms it is a cms sitewide issue if this page is built in react there is a level prop that needs to be adjusted to and a utility class will style it to look like an point of contact vfs point of contact jennifer acceptance criteria as a screen reader user i want to navigate the hierarchy of the page content using heading levels to save time and frustration environment operating system all browser all screenreading device any server destination staging production steps to recreate enter in browser have developer tools open and the axe browser extension loaded run an axe audit verify that is called out as an error of heading levels should only increase by one this error is repeated throughout the form process possible fixes optional sample current code html save time—and save your work in progress—by signing in before starting your application recommended code html save time—and save your work in progress—by signing in before starting your application wcag or vendor guidance optional screenshots | 1 |
37,182 | 8,287,757,752 | IssuesEvent | 2018-09-19 09:47:44 | hazelcast/hazelcast-nodejs-client | https://api.github.com/repos/hazelcast/hazelcast-nodejs-client | opened | ProxyManager proxies map should keep namespaces | Type: Defect | Currently, ProxyManager `proxies` map just uses object names as key. It should use namespace including service name too. | 1.0 | ProxyManager proxies map should keep namespaces - Currently, ProxyManager `proxies` map just uses object names as key. It should use namespace including service name too. | defect | proxymanager proxies map should keep namespaces currently proxymanager proxies map just uses object names as key it should use namespace including service name too | 1 |
126,326 | 17,875,614,678 | IssuesEvent | 2021-09-07 02:53:59 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [Security Solution]Applied Filter on hover in Host detail flyout under timeline not working | bug Team:Threat Hunting Team: SecuritySolution Theme: rac v7.15.0 | **Describe the bug**
Applied Filter on hover in Host detail flyout under timeline not working
**Build Details**
Version: 7.15.0-SNAPSHOT
Commit:00fcc2cd00d309f4c17db4ec7d552bc54fbd1b81
Build:43271
**Browsers**
all
**Steps to Reproduce**
1.Generate few of the Alert .
2.click on investigate in timeline.
3.Timeline will open with the alert id
4.Click on host.name field value.
5.Hover over the host.ip or any filed
6.Click Filter for or Filter out
7. Observed that neither the filter for and filter out not got applied over the timeline result and not in detection page.
**Whats Working**
- Investigate in timeline , copy to clipboard , show graph hover action are working under timeline
**Whats Not Working**
- Filter for and filter is not working
**Actual Result**
Applied Filter on hover in Host detail flyout under timeline not working
**Expected Result**
Applied Filter on hover in Host detail flyut under timeline should working
- timeline result should be filtered out
**Screen-Shoot**

**logs**
N/A | True | [Security Solution]Applied Filter on hover in Host detail flyout under timeline not working - **Describe the bug**
Applied Filter on hover in Host detail flyout under timeline not working
**Build Details**
Version: 7.15.0-SNAPSHOT
Commit:00fcc2cd00d309f4c17db4ec7d552bc54fbd1b81
Build:43271
**Browsers**
all
**Steps to Reproduce**
1.Generate few of the Alert .
2.click on investigate in timeline.
3.Timeline will open with the alert id
4.Click on host.name field value.
5.Hover over the host.ip or any filed
6.Click Filter for or Filter out
7. Observed that neither the filter for and filter out not got applied over the timeline result and not in detection page.
**Whats Working**
- Investigate in timeline , copy to clipboard , show graph hover action are working under timeline
**Whats Not Working**
- Filter for and filter is not working
**Actual Result**
Applied Filter on hover in Host detail flyout under timeline not working
**Expected Result**
Applied Filter on hover in Host detail flyut under timeline should working
- timeline result should be filtered out
**Screen-Shoot**

**logs**
N/A | non_defect | applied filter on hover in host detail flyout under timeline not working describe the bug applied filter on hover in host detail flyout under timeline not working build details version snapshot commit build browsers all steps to reproduce generate few of the alert click on investigate in timeline timeline will open with the alert id click on host name field value hover over the host ip or any filed click filter for or filter out observed that neither the filter for and filter out not got applied over the timeline result and not in detection page whats working investigate in timeline copy to clipboard show graph hover action are working under timeline whats not working filter for and filter is not working actual result applied filter on hover in host detail flyout under timeline not working expected result applied filter on hover in host detail flyut under timeline should working timeline result should be filtered out screen shoot logs n a | 0 |
54,085 | 13,386,213,087 | IssuesEvent | 2020-09-02 14:27:09 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | closed | Child sections on /section pages should be sorted by term weight | Content governance Defect Stretch goal | **Describe the defect**
Child sections are out of order on some /section pages
**To Reproduce**
Go to section/veterans-health-administration/visn-4 note Pittsburgh appears first.
Go to section/vamc-facilities and note VISN 4 appears first
**Expected behavior**
Sections should be listed in the same order as
/admin/structure/taxonomy/manage/administration/overview
**Screenshots**
If applicable, add screenshots to help explain your problem.

| 1.0 | Child sections on /section pages should be sorted by term weight - **Describe the defect**
Child sections are out of order on some /section pages
**To Reproduce**
Go to section/veterans-health-administration/visn-4 note Pittsburgh appears first.
Go to section/vamc-facilities and note VISN 4 appears first
**Expected behavior**
Sections should be listed in the same order as
/admin/structure/taxonomy/manage/administration/overview
**Screenshots**
If applicable, add screenshots to help explain your problem.

| defect | child sections on section pages should be sorted by term weight describe the defect child sections are out of order on some section pages to reproduce go to section veterans health administration visn note pittsburgh appears first go to section vamc facilities and note visn appears first expected behavior sections should be listed in the same order as admin structure taxonomy manage administration overview screenshots if applicable add screenshots to help explain your problem | 1 |
66,055 | 19,910,898,316 | IssuesEvent | 2022-01-25 17:03:18 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | closed | Unable to display locations shared from Element Web or fluffychat | T-Defect | ### Steps to reproduce
Element Android dbg version sent to me by @onurays .
1. In an room (encrypted or not)
2. On another client, Element Web or FluffyChat, share my location
3. See error message "Element dbg encountered an issue when rendering content of event..."
4. Press and hold the error message
5. Element dbg crashes. I uploaded debug info for the first message listed below.
I was testing with FluffyChat, so the first event I sent was from there:
```
{
"content": {
"body": "https://www.openstreetmap.org/?mlat=51.0000006666&mlon=-0.500000003333333#map=16/51.000000666666/-0.5475583333333333",
"geo_uri": "geo:51.0000000000666666,-0.50000033333333;u=13.899999618530273",
"msgtype": "m.location"
},
"origin_server_ts": 1642764786259,
"sender": "@andybalaam:one.ems.host",
"type": "m.room.message",
"unsigned": {},
"event_id": "$HGlFQrCJxeileFBCnvre4UYxg6Pnuhaxqx_4PJ6dJi4",
"room_id": "!wuFNecIlbapwgowjbe:matrix.org"
}
```
(Geo co-ordinates changed, but everything else left the same.)
Then I tried from Element Web, with similar results:
```
{
"content": {
"body": "Location geo:51.16330296995997,-4.652109146118164;u=10 at 2022-01-21T11:38:37.773Z",
"geo_uri": "geo:51.16330296995997,-4.652109146118164;u=10",
"msgtype": "m.location",
"org.matrix.msc1767.text": "Location geo:51.16330296995997,-4.652109146118164;u=10 at 2022-01-21T11:38:37.773Z",
"org.matrix.msc3488.asset": {
"type": "m.self"
},
"org.matrix.msc3488.location": {
"description": null,
"uri": "geo:51.16330296995997,-4.652109146118164;u=10"
},
"org.matrix.msc3488.ts": 1642765117773
},
"origin_server_ts": 1642765118786,
"sender": "@andybalaam-test1:matrix.org",
"type": "m.room.message",
"unsigned": {
"age": 439,
"transaction_id": "m1642765118582.64"
},
"event_id": "$oK9KNwVzSutbM1mjxkBGe_HIHb9XN8ltvVBuxm5v6jo",
"room_id": "!wuFNecIlbapwgowjbe:matrix.org"
}
```
### Outcome
#### What did you expect?
A map showing the location should be displayed
#### What happened instead?
An error message appeared, and the app crashed when I long-pressed on it
### Your phone model
Samsung Galaxy A12
### Operating system version
Android 11
### Application version and app store
1.3.13-dev [40103130] (G-7993ff39) feature/ons/static_location
### Homeserver
matrix.org
### Will you send logs?
Yes | 1.0 | Unable to display locations shared from Element Web or fluffychat - ### Steps to reproduce
Element Android dbg version sent to me by @onurays .
1. In an room (encrypted or not)
2. On another client, Element Web or FluffyChat, share my location
3. See error message "Element dbg encountered an issue when rendering content of event..."
4. Press and hold the error message
5. Element dbg crashes. I uploaded debug info for the first message listed below.
I was testing with FluffyChat, so the first event I sent was from there:
```
{
"content": {
"body": "https://www.openstreetmap.org/?mlat=51.0000006666&mlon=-0.500000003333333#map=16/51.000000666666/-0.5475583333333333",
"geo_uri": "geo:51.0000000000666666,-0.50000033333333;u=13.899999618530273",
"msgtype": "m.location"
},
"origin_server_ts": 1642764786259,
"sender": "@andybalaam:one.ems.host",
"type": "m.room.message",
"unsigned": {},
"event_id": "$HGlFQrCJxeileFBCnvre4UYxg6Pnuhaxqx_4PJ6dJi4",
"room_id": "!wuFNecIlbapwgowjbe:matrix.org"
}
```
(Geo co-ordinates changed, but everything else left the same.)
Then I tried from Element Web, with similar results:
```
{
"content": {
"body": "Location geo:51.16330296995997,-4.652109146118164;u=10 at 2022-01-21T11:38:37.773Z",
"geo_uri": "geo:51.16330296995997,-4.652109146118164;u=10",
"msgtype": "m.location",
"org.matrix.msc1767.text": "Location geo:51.16330296995997,-4.652109146118164;u=10 at 2022-01-21T11:38:37.773Z",
"org.matrix.msc3488.asset": {
"type": "m.self"
},
"org.matrix.msc3488.location": {
"description": null,
"uri": "geo:51.16330296995997,-4.652109146118164;u=10"
},
"org.matrix.msc3488.ts": 1642765117773
},
"origin_server_ts": 1642765118786,
"sender": "@andybalaam-test1:matrix.org",
"type": "m.room.message",
"unsigned": {
"age": 439,
"transaction_id": "m1642765118582.64"
},
"event_id": "$oK9KNwVzSutbM1mjxkBGe_HIHb9XN8ltvVBuxm5v6jo",
"room_id": "!wuFNecIlbapwgowjbe:matrix.org"
}
```
### Outcome
#### What did you expect?
A map showing the location should be displayed
#### What happened instead?
An error message appeared, and the app crashed when I long-pressed on it
### Your phone model
Samsung Galaxy A12
### Operating system version
Android 11
### Application version and app store
1.3.13-dev [40103130] (G-7993ff39) feature/ons/static_location
### Homeserver
matrix.org
### Will you send logs?
Yes | defect | unable to display locations shared from element web or fluffychat steps to reproduce element android dbg version sent to me by onurays in an room encrypted or not on another client element web or fluffychat share my location see error message element dbg encountered an issue when rendering content of event press and hold the error message element dbg crashes i uploaded debug info for the first message listed below i was testing with fluffychat so the first event i sent was from there content body geo uri geo u msgtype m location origin server ts sender andybalaam one ems host type m room message unsigned event id room id wufnecilbapwgowjbe matrix org geo co ordinates changed but everything else left the same then i tried from element web with similar results content body location geo u at geo uri geo u msgtype m location org matrix text location geo u at org matrix asset type m self org matrix location description null uri geo u org matrix ts origin server ts sender andybalaam matrix org type m room message unsigned age transaction id event id room id wufnecilbapwgowjbe matrix org outcome what did you expect a map showing the location should be displayed what happened instead an error message appeared and the app crashed when i long pressed on it your phone model samsung galaxy operating system version android application version and app store dev g feature ons static location homeserver matrix org will you send logs yes | 1 |
14,565 | 10,958,348,591 | IssuesEvent | 2019-11-27 09:13:47 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | Align missing libintl behavior of macOS and Alpine / musl-libc portable | area-Infrastructure-coreclr | <ins>libintl: the last unavoidable runtime dependency</ins>
For the operating systems which do not have libc providing gettext or libintl functionality out of the box, there is a UNIXTODO:
https://github.com/dotnet/coreclr/blob/ed5dc831b09a0bfed76ddad684008bebc86ab2f0/src/pal/src/locale/unicode.cpp#L592-L595
On macOS, regardless we have ran `brew install gettext` on the CoreCLR build machine, it gets skipped due to this condition:
https://github.com/dotnet/coreclr/blob/ed5dc831b09a0bfed76ddad684008bebc86ab2f0/src/pal/src/configure.cmake#L49-L51
However, it seems that the build machine dedicated for musl-libc portable, there is gettext installed, so consumers get this as a hard runtime dependency. On Alpine Linux, we need to install `libintl` package; on Void Linux (muscle edition), the smallest available (yet baggy) package `gettext` is required. A bit more unfortunate case is Single EXE / "bundle" apps, where this inobvious dependency is required as well.
For that matter, we have recently added a check in `dotnet-install` script: https://github.com/dotnet/cli/blob/ba194e4e6fe356af1e82abdca03e9cfbb2e3ca28/scripts/obtain/dotnet-install.sh#L258.
Proposal: Make `HAVE_LIBINTL_H` always-false for Alpine Linux just like macOS (for now™️).
PS - BTW, there is no issue tracking that UNIXTODO; was the original plan to implement the required functionality in CoreCLR?
/cc @janvorli, @jkotas | 1.0 | Align missing libintl behavior of macOS and Alpine / musl-libc portable - <ins>libintl: the last unavoidable runtime dependency</ins>
For the operating systems which do not have libc providing gettext or libintl functionality out of the box, there is a UNIXTODO:
https://github.com/dotnet/coreclr/blob/ed5dc831b09a0bfed76ddad684008bebc86ab2f0/src/pal/src/locale/unicode.cpp#L592-L595
On macOS, regardless we have ran `brew install gettext` on the CoreCLR build machine, it gets skipped due to this condition:
https://github.com/dotnet/coreclr/blob/ed5dc831b09a0bfed76ddad684008bebc86ab2f0/src/pal/src/configure.cmake#L49-L51
However, it seems that the build machine dedicated for musl-libc portable, there is gettext installed, so consumers get this as a hard runtime dependency. On Alpine Linux, we need to install `libintl` package; on Void Linux (muscle edition), the smallest available (yet baggy) package `gettext` is required. A bit more unfortunate case is Single EXE / "bundle" apps, where this inobvious dependency is required as well.
For that matter, we have recently added a check in `dotnet-install` script: https://github.com/dotnet/cli/blob/ba194e4e6fe356af1e82abdca03e9cfbb2e3ca28/scripts/obtain/dotnet-install.sh#L258.
Proposal: Make `HAVE_LIBINTL_H` always-false for Alpine Linux just like macOS (for now™️).
PS - BTW, there is no issue tracking that UNIXTODO; was the original plan to implement the required functionality in CoreCLR?
/cc @janvorli, @jkotas | non_defect | align missing libintl behavior of macos and alpine musl libc portable libintl the last unavoidable runtime dependency for the operating systems which do not have libc providing gettext or libintl functionality out of the box there is a unixtodo on macos regardless we have ran brew install gettext on the coreclr build machine it gets skipped due to this condition however it seems that the build machine dedicated for musl libc portable there is gettext installed so consumers get this as a hard runtime dependency on alpine linux we need to install libintl package on void linux muscle edition the smallest available yet baggy package gettext is required a bit more unfortunate case is single exe bundle apps where this inobvious dependency is required as well for that matter we have recently added a check in dotnet install script proposal make have libintl h always false for alpine linux just like macos for now™️ ps btw there is no issue tracking that unixtodo was the original plan to implement the required functionality in coreclr cc janvorli jkotas | 0 |
26,046 | 4,559,631,427 | IssuesEvent | 2016-09-14 03:26:51 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | closed | TypeError: this.$initialize is not a function with [ObjectLiteral] inheritance | defect | Reported by @ProductiveRage on the [forums](http://forums.bridge.net/forum/bridge-net-pro/bugs/2685-open-1819-breaking-change-with-objectliteral-class-inheriting-from-non-objectliteral-class).
### Expected
No JavaScript error
### Actual
JavaScript error
### Steps To Reproduce
http://deck.net/6ec7b41baa416bb4381d7aae683a82b7
```csharp
public class Program
{
public static void Main()
{
var x = new Attributes { Name = "test " };
}
}
[ObjectLiteral]
public class Attributes : AttributeBase
{
public string Name { get; set; }
}
public class AttributeBase { }
``` | 1.0 | TypeError: this.$initialize is not a function with [ObjectLiteral] inheritance - Reported by @ProductiveRage on the [forums](http://forums.bridge.net/forum/bridge-net-pro/bugs/2685-open-1819-breaking-change-with-objectliteral-class-inheriting-from-non-objectliteral-class).
### Expected
No JavaScript error
### Actual
JavaScript error
### Steps To Reproduce
http://deck.net/6ec7b41baa416bb4381d7aae683a82b7
```csharp
public class Program
{
public static void Main()
{
var x = new Attributes { Name = "test " };
}
}
[ObjectLiteral]
public class Attributes : AttributeBase
{
public string Name { get; set; }
}
public class AttributeBase { }
``` | defect | typeerror this initialize is not a function with inheritance reported by productiverage on the expected no javascript error actual javascript error steps to reproduce csharp public class program public static void main var x new attributes name test public class attributes attributebase public string name get set public class attributebase | 1 |
57,577 | 15,866,201,695 | IssuesEvent | 2021-04-08 15:29:56 | galasa-dev/projectmanagement | https://api.github.com/repos/galasa-dev/projectmanagement | closed | Grey line below header bar | defect webui | There seems to be a dark grey bottom-border to the header bar, which shouldn't be there. | 1.0 | Grey line below header bar - There seems to be a dark grey bottom-border to the header bar, which shouldn't be there. | defect | grey line below header bar there seems to be a dark grey bottom border to the header bar which shouldn t be there | 1 |
57,329 | 15,730,484,081 | IssuesEvent | 2021-03-29 15:57:59 | danmar/testissues | https://api.github.com/repos/danmar/testissues | opened | ".." in include will cause conflicting slashes in messages (Trac #318) | Incomplete Migration Migrated from Trac Other defect hyd_danmar | Migrated from https://trac.cppcheck.net/ticket/318
```json
{
"status": "closed",
"changetime": "2009-05-31T08:14:43",
"description": "If you scan this folder structure:\n\n{{{\ntemp\ntemp/src\ntemp/src/test.cpp\ntemp/test.h\n}}}\n\nAnd test.cpp will include test.h with\n\n{{{\n#include \"../test.h\"\n}}}\n\nmessages from test.h will look like this\n\n{{{\n[temp\\/test.h:4]: (error) Memory leak: p\n}}}\n\n\n\n",
"reporter": "kidkat",
"cc": "",
"resolution": "fixed",
"_ts": "1243757683000000",
"component": "Other",
"summary": "\"..\" in include will cause conflicting slashes in messages",
"priority": "",
"keywords": "",
"time": "2009-05-19T08:34:19",
"milestone": "1.33",
"owner": "hyd_danmar",
"type": "defect"
}
```
| 1.0 | ".." in include will cause conflicting slashes in messages (Trac #318) - Migrated from https://trac.cppcheck.net/ticket/318
```json
{
"status": "closed",
"changetime": "2009-05-31T08:14:43",
"description": "If you scan this folder structure:\n\n{{{\ntemp\ntemp/src\ntemp/src/test.cpp\ntemp/test.h\n}}}\n\nAnd test.cpp will include test.h with\n\n{{{\n#include \"../test.h\"\n}}}\n\nmessages from test.h will look like this\n\n{{{\n[temp\\/test.h:4]: (error) Memory leak: p\n}}}\n\n\n\n",
"reporter": "kidkat",
"cc": "",
"resolution": "fixed",
"_ts": "1243757683000000",
"component": "Other",
"summary": "\"..\" in include will cause conflicting slashes in messages",
"priority": "",
"keywords": "",
"time": "2009-05-19T08:34:19",
"milestone": "1.33",
"owner": "hyd_danmar",
"type": "defect"
}
```
| defect | in include will cause conflicting slashes in messages trac migrated from json status closed changetime description if you scan this folder structure n n ntemp ntemp src ntemp src test cpp ntemp test h n n nand test cpp will include test h with n n n include test h n n nmessages from test h will look like this n n n error memory leak p n n n n n reporter kidkat cc resolution fixed ts component other summary in include will cause conflicting slashes in messages priority keywords time milestone owner hyd danmar type defect | 1 |
58,639 | 16,670,138,132 | IssuesEvent | 2021-06-07 09:48:41 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | Don't use the ForkJoin's common-pool for actions protected by permission checks | Source: Internal Team: Core Type: Defect | We should make sure we don't use `commonPool` for permission-protected actions. They would not work with the SecurityManager enabled.
See https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html
> If a SecurityManager is present and no factory is specified, then the default pool uses a factory supplying threads that have no Permissions enabled.
I.e. If a code running in a thread from the common-pool executes a protected action where a permission is checked. Then such a code fails with throwing an `AccessControlException`.
Sample code:
```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.AllPermission;
import java.security.CodeSource;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.util.concurrent.ForkJoinPool;
public class App {
public static void main(String[] args) {
Policy.setPolicy(new Policy() {
public PermissionCollection getPermissions(CodeSource codesource) {
Permissions p = new Permissions();
p.add(new AllPermission());
return p;
}
});
SecurityManager sm = new SecurityManager();
System.setSecurityManager(sm);
// This one prints a line from the file
operation();
// and this fails and prints stack trace of an AccessControlException
ForkJoinPool.commonPool().submit(() -> operation()).join();
}
public static void operation() {
try {
System.out.println(Files.readAllLines(Paths.get("/etc/passwd")).get(0));
} catch (Exception e) {
e.printStackTrace();
}
}
}
```
Workaround could be using a custom `ForkJoinWorkerThreadFactory` by setting FQCN into system property `java.util.concurrent.ForkJoinPool.common.threadFactory` | 1.0 | Don't use the ForkJoin's common-pool for actions protected by permission checks - We should make sure we don't use `commonPool` for permission-protected actions. They would not work with the SecurityManager enabled.
See https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html
> If a SecurityManager is present and no factory is specified, then the default pool uses a factory supplying threads that have no Permissions enabled.
I.e. If a code running in a thread from the common-pool executes a protected action where a permission is checked. Then such a code fails with throwing an `AccessControlException`.
Sample code:
```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.AllPermission;
import java.security.CodeSource;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.util.concurrent.ForkJoinPool;
public class App {
public static void main(String[] args) {
Policy.setPolicy(new Policy() {
public PermissionCollection getPermissions(CodeSource codesource) {
Permissions p = new Permissions();
p.add(new AllPermission());
return p;
}
});
SecurityManager sm = new SecurityManager();
System.setSecurityManager(sm);
// This one prints a line from the file
operation();
// and this fails and prints stack trace of an AccessControlException
ForkJoinPool.commonPool().submit(() -> operation()).join();
}
public static void operation() {
try {
System.out.println(Files.readAllLines(Paths.get("/etc/passwd")).get(0));
} catch (Exception e) {
e.printStackTrace();
}
}
}
```
Workaround could be using a custom `ForkJoinWorkerThreadFactory` by setting FQCN into system property `java.util.concurrent.ForkJoinPool.common.threadFactory` | defect | don t use the forkjoin s common pool for actions protected by permission checks we should make sure we don t use commonpool for permission protected actions they would not work with the securitymanager enabled see if a securitymanager is present and no factory is specified then the default pool uses a factory supplying threads that have no permissions enabled i e if a code running in a thread from the common pool executes a protected action where a permission is checked then such a code fails with throwing an accesscontrolexception sample code java import java nio file files import java nio file paths import java security allpermission import java security codesource import java security permissioncollection import java security permissions import java security policy import java util concurrent forkjoinpool public class app public static void main string args policy setpolicy new policy public permissioncollection getpermissions codesource codesource permissions p new permissions p add new allpermission return p securitymanager sm new securitymanager system setsecuritymanager sm this one prints a line from the file operation and this fails and prints stack trace of an accesscontrolexception forkjoinpool commonpool submit operation join public static void operation try system out println files readalllines paths get etc passwd get catch exception e e printstacktrace workaround could be using a custom forkjoinworkerthreadfactory by setting fqcn into system property java util concurrent forkjoinpool common threadfactory | 1 |
54,478 | 13,731,514,821 | IssuesEvent | 2020-10-05 01:19:31 | naev/naev | https://api.github.com/repos/naev/naev | closed | Double-click on unknown system on starmap reveals name | Priority-Critical Type-Defect | f439eb2c729b3550bd90007ed4be7a9dbfdbe754
If you doubleclick on an unvisited star system on the starmap, you reveal its name and other info, like the star itself, etc. | 1.0 | Double-click on unknown system on starmap reveals name - f439eb2c729b3550bd90007ed4be7a9dbfdbe754
If you doubleclick on an unvisited star system on the starmap, you reveal its name and other info, like the star itself, etc. | defect | double click on unknown system on starmap reveals name if you doubleclick on an unvisited star system on the starmap you reveal its name and other info like the star itself etc | 1 |
787,211 | 27,710,334,330 | IssuesEvent | 2023-03-14 13:53:50 | AY2223S2-CS2113-T13-2/tp | https://api.github.com/repos/AY2223S2-CS2113-T13-2/tp | closed | Add expense with arbitrary currency | type.Story priority.Medium | As a user, i want to add expenses with arbitrary currency. | 1.0 | Add expense with arbitrary currency - As a user, i want to add expenses with arbitrary currency. | non_defect | add expense with arbitrary currency as a user i want to add expenses with arbitrary currency | 0 |
447,501 | 12,889,049,785 | IssuesEvent | 2020-07-13 13:58:01 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | opened | [SUGGESTION] | Greymane Bloodline | :grey_exclamation: priority low :question: suggestion :question: | <!--
DO NOT REMOVE PRE-EXISTING LINES
IF YOU WANT TO SUGGEST A FEW THINGS, OPEN A NEW ISSUE PER EVERY SUGGESTION
----------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
so just followin up with the suggestion i talked bout on the discord to get those two the love they deserve
Greymane Bloodline
Founder: Archibald Greymane
effects: +0.10 prestige ,+10 gilnean opinion, 5% bonus morale defense, +5 personal combat modifier
feel free to change the stats if ye feel it would be more fitting for him and his descendants but it makes sense lorewise to me | 1.0 | [SUGGESTION] | Greymane Bloodline - <!--
DO NOT REMOVE PRE-EXISTING LINES
IF YOU WANT TO SUGGEST A FEW THINGS, OPEN A NEW ISSUE PER EVERY SUGGESTION
----------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
so just followin up with the suggestion i talked bout on the discord to get those two the love they deserve
Greymane Bloodline
Founder: Archibald Greymane
effects: +0.10 prestige ,+10 gilnean opinion, 5% bonus morale defense, +5 personal combat modifier
feel free to change the stats if ye feel it would be more fitting for him and his descendants but it makes sense lorewise to me | non_defect | greymane bloodline do not remove pre existing lines if you want to suggest a few things open a new issue per every suggestion describe your suggestion in full detail below so just followin up with the suggestion i talked bout on the discord to get those two the love they deserve greymane bloodline founder archibald greymane effects prestige gilnean opinion bonus morale defense personal combat modifier feel free to change the stats if ye feel it would be more fitting for him and his descendants but it makes sense lorewise to me | 0 |
100,235 | 11,184,031,062 | IssuesEvent | 2019-12-31 16:05:42 | quantopian/zipline | https://api.github.com/repos/quantopian/zipline | closed | Adding a Zipline Binder? | Dev Experience Documentation | I was just looking through some JupyterLab repos (among others) and was reminded of [Binder](https://mybinder.org/). I think it'd be pretty cool to have a Binder in the README for people to play around with Zipline's beginner tutorial before they decide whether or not they'd like to install it.


Thoughts @richafrank @ssanderson @llllllllll? | 1.0 | Adding a Zipline Binder? - I was just looking through some JupyterLab repos (among others) and was reminded of [Binder](https://mybinder.org/). I think it'd be pretty cool to have a Binder in the README for people to play around with Zipline's beginner tutorial before they decide whether or not they'd like to install it.


Thoughts @richafrank @ssanderson @llllllllll? | non_defect | adding a zipline binder i was just looking through some jupyterlab repos among others and was reminded of i think it d be pretty cool to have a binder in the readme for people to play around with zipline s beginner tutorial before they decide whether or not they d like to install it thoughts richafrank ssanderson llllllllll | 0 |
29,106 | 5,535,819,810 | IssuesEvent | 2017-03-21 18:10:31 | STEllAR-GROUP/hpx | https://api.github.com/repos/STEllAR-GROUP/hpx | closed | run_guarded using bound function ignores reference | category: LCOs type: defect | It seems binding a variable to a function, then calling run_guarded on that binding won't honor a reference. Take a minimal example
``` cpp
#include <hpx/lcos/local/composable_guard.hpp>
#include <hpx/util/bind.hpp>
#include <hpx/hpx_init.hpp>
#include <iostream>
void incr(int &il1) {
// increment variable by 1, implicitly lock guard
il1 = il1 + 1;
// implicitly unlock guard
}
hpx::lcos::local::guard l1;
int hpx_main(boost::program_options::variables_map&)
{
int i1 = 0;
// run incr with i1 passed as a reference
run_guarded(l1, hpx::util::bind(incr, boost::ref(i1)));
std::cout << i1 << std::endl;
return hpx::finalize();
}
int main(int argc, char* argv[])
{
boost::program_options::options_description
desc_commandline("Usage: " HPX_APPLICATION_STRING " [options]");
return hpx::init(desc_commandline, argc, argv);
}
```
If you were to compile and run this
```
> ./a.out
0
```
It seem's `i1` is not updated, although i've bound a reference of it to incr and invoked it via run_guarded. Is this a bug or am I mistaken?
| 1.0 | run_guarded using bound function ignores reference - It seems binding a variable to a function, then calling run_guarded on that binding won't honor a reference. Take a minimal example
``` cpp
#include <hpx/lcos/local/composable_guard.hpp>
#include <hpx/util/bind.hpp>
#include <hpx/hpx_init.hpp>
#include <iostream>
void incr(int &il1) {
// increment variable by 1, implicitly lock guard
il1 = il1 + 1;
// implicitly unlock guard
}
hpx::lcos::local::guard l1;
int hpx_main(boost::program_options::variables_map&)
{
int i1 = 0;
// run incr with i1 passed as a reference
run_guarded(l1, hpx::util::bind(incr, boost::ref(i1)));
std::cout << i1 << std::endl;
return hpx::finalize();
}
int main(int argc, char* argv[])
{
boost::program_options::options_description
desc_commandline("Usage: " HPX_APPLICATION_STRING " [options]");
return hpx::init(desc_commandline, argc, argv);
}
```
If you were to compile and run this
```
> ./a.out
0
```
It seem's `i1` is not updated, although i've bound a reference of it to incr and invoked it via run_guarded. Is this a bug or am I mistaken?
| defect | run guarded using bound function ignores reference it seems binding a variable to a function then calling run guarded on that binding won t honor a reference take a minimal example cpp include include include include void incr int increment variable by implicitly lock guard implicitly unlock guard hpx lcos local guard int hpx main boost program options variables map int run incr with passed as a reference run guarded hpx util bind incr boost ref std cout std endl return hpx finalize int main int argc char argv boost program options options description desc commandline usage hpx application string return hpx init desc commandline argc argv if you were to compile and run this a out it seem s is not updated although i ve bound a reference of it to incr and invoked it via run guarded is this a bug or am i mistaken | 1 |
77,340 | 26,929,230,355 | IssuesEvent | 2023-02-07 15:44:08 | ontop/ontop | https://api.github.com/repos/ontop/ontop | closed | DISTINCT with ORDER BY (Spark SQL) | type: defect status: fixed w: db support | Hi,
as the title indicates, I do have a question regarding the behaviour of the SQL generator.
Given a SPARQL query
```
PREFIX : <http://example.org/>
SELECT DISTINCT ?s
WHERE
{ ?s :p4 ?o }
ORDER BY DESC(?s)
```
applied to a schema with a single table `t1 (s [String], o [String])` only
I'm getting the following SQL query
```
SELECT DISTINCT `v1`.`s` AS `s1m4`, `v1`.`s` AS `v0`
FROM `t1` `v1`
ORDER BY `v1`.`s` DESC NULLS LAST
```
The problem with this query is that - as far as I know - some databases might fail because of the extended sort key here. At least, Apache Spark SQL isn't smart enough to get that the alias `v0` is equal to `v1.s` - I know this is dumb but I also know that some databases do have those restrictions. ( a nice article regarding this is [here](https://blog.jooq.org/2018/07/13/how-sql-distinct-and-order-by-are-related/) )
So my question, is there any existing implementation that does project the sort key?
I'm also wondering why the generated SQL query does project the same column twice?
I'll provide the query reformulation here, maybe it's relevant:
```
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:93) - SPARQL query:
PREFIX : <http://example.org/>
SELECT DISTINCT ?s
WHERE
{ ?s :p4 ?o }
ORDER BY DESC(?s)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:95) - Parsed query converted into IQ (after normalization):
ans1(s)
DISTINCT
CONSTRUCT [s] []
ORDER BY [DESC(s)]
INTENSIONAL triple(s,<http://example.org/p4>,o)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:100) - Start the rewriting process...
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:104) - Rewritten IQ:
ans1(s)
DISTINCT
CONSTRUCT [s] []
ORDER BY [DESC(s)]
INTENSIONAL triple(s,<http://example.org/p4>,o)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:106) - Start the unfolding...
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:117) - Unfolded query:
ans1(s)
DISTINCT
CONSTRUCT [s] []
ORDER BY [DESC(s)]
CONSTRUCT [s, o] [s/RDF(VARCHARToSTRING(s1m4),IRI), o/RDF(VARCHARToSTRING(o1m4),xsd:string)]
DISTINCT
EXTENSIONAL "t1"(0:s1m4,1:o1m4)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:122) - Planned query:
ans1(s)
CONSTRUCT [s] [s/RDF(VARCHARToSTRING(s1m4),IRI)]
DISTINCT
ORDER BY [DESC(VARCHARToSTRING(s1m4))]
EXTENSIONAL "t1"(0:s1m4)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:152) - Producing the native query string...
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:157) - Resulting native query:
ans1(s)
CONSTRUCT [s] [s/RDF(VARCHARToSTRING(s1m4),IRI)]
NATIVE [s1m4, v0]
SELECT DISTINCT `v1`.`s` AS `s1m4`, `v1`.`s` AS `v0`
FROM `t1` `v1`
ORDER BY `v1`.`s` DESC NULLS LAST
```
Cheers
| 1.0 | DISTINCT with ORDER BY (Spark SQL) - Hi,
as the title indicates, I do have a question regarding the behaviour of the SQL generator.
Given a SPARQL query
```
PREFIX : <http://example.org/>
SELECT DISTINCT ?s
WHERE
{ ?s :p4 ?o }
ORDER BY DESC(?s)
```
applied to a schema with a single table `t1 (s [String], o [String])` only
I'm getting the following SQL query
```
SELECT DISTINCT `v1`.`s` AS `s1m4`, `v1`.`s` AS `v0`
FROM `t1` `v1`
ORDER BY `v1`.`s` DESC NULLS LAST
```
The problem with this query is that - as far as I know - some databases might fail because of the extended sort key here. At least, Apache Spark SQL isn't smart enough to get that the alias `v0` is equal to `v1.s` - I know this is dumb but I also know that some databases do have those restrictions. ( a nice article regarding this is [here](https://blog.jooq.org/2018/07/13/how-sql-distinct-and-order-by-are-related/) )
So my question, is there any existing implementation that does project the sort key?
I'm also wondering why the generated SQL query does project the same column twice?
I'll provide the query reformulation here, maybe it's relevant:
```
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:93) - SPARQL query:
PREFIX : <http://example.org/>
SELECT DISTINCT ?s
WHERE
{ ?s :p4 ?o }
ORDER BY DESC(?s)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:95) - Parsed query converted into IQ (after normalization):
ans1(s)
DISTINCT
CONSTRUCT [s] []
ORDER BY [DESC(s)]
INTENSIONAL triple(s,<http://example.org/p4>,o)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:100) - Start the rewriting process...
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:104) - Rewritten IQ:
ans1(s)
DISTINCT
CONSTRUCT [s] []
ORDER BY [DESC(s)]
INTENSIONAL triple(s,<http://example.org/p4>,o)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:106) - Start the unfolding...
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:117) - Unfolded query:
ans1(s)
DISTINCT
CONSTRUCT [s] []
ORDER BY [DESC(s)]
CONSTRUCT [s, o] [s/RDF(VARCHARToSTRING(s1m4),IRI), o/RDF(VARCHARToSTRING(o1m4),xsd:string)]
DISTINCT
EXTENSIONAL "t1"(0:s1m4,1:o1m4)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:122) - Planned query:
ans1(s)
CONSTRUCT [s] [s/RDF(VARCHARToSTRING(s1m4),IRI)]
DISTINCT
ORDER BY [DESC(VARCHARToSTRING(s1m4))]
EXTENSIONAL "t1"(0:s1m4)
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:152) - Producing the native query string...
DEBUG [ScalaTest-run-running-QuotationTests] (QuestQueryProcessor.java:157) - Resulting native query:
ans1(s)
CONSTRUCT [s] [s/RDF(VARCHARToSTRING(s1m4),IRI)]
NATIVE [s1m4, v0]
SELECT DISTINCT `v1`.`s` AS `s1m4`, `v1`.`s` AS `v0`
FROM `t1` `v1`
ORDER BY `v1`.`s` DESC NULLS LAST
```
Cheers
| defect | distinct with order by spark sql hi as the title indicates i do have a question regarding the behaviour of the sql generator given a sparql query prefix select distinct s where s o order by desc s applied to a schema with a single table s o only i m getting the following sql query select distinct s as s as from order by s desc nulls last the problem with this query is that as far as i know some databases might fail because of the extended sort key here at least apache spark sql isn t smart enough to get that the alias is equal to s i know this is dumb but i also know that some databases do have those restrictions a nice article regarding this is so my question is there any existing implementation that does project the sort key i m also wondering why the generated sql query does project the same column twice i ll provide the query reformulation here maybe it s relevant debug questqueryprocessor java sparql query prefix select distinct s where s o order by desc s debug questqueryprocessor java parsed query converted into iq after normalization s distinct construct order by intensional triple s debug questqueryprocessor java start the rewriting process debug questqueryprocessor java rewritten iq s distinct construct order by intensional triple s debug questqueryprocessor java start the unfolding debug questqueryprocessor java unfolded query s distinct construct order by construct distinct extensional debug questqueryprocessor java planned query s construct distinct order by extensional debug questqueryprocessor java producing the native query string debug questqueryprocessor java resulting native query s construct native select distinct s as s as from order by s desc nulls last cheers | 1 |
81,386 | 30,828,067,752 | IssuesEvent | 2023-08-01 21:57:46 | dotCMS/core | https://api.github.com/repos/dotCMS/core | closed | UI: Workflow action takeover requires alignment adjustment | Type : Defect Team : Lunik Triage | ### Parent Issue
[Pages]
### Problem Statement
The workflow action takeover appears to be misaligned.
### Steps to Reproduce
1. Go to https://localhost:8443/dotAdmin/#/pages
2. Observe that the alignment of the takeover elements is off
### Acceptance Criteria
The workflow action takeover should be aligned correctly.
### dotCMS Version
23.07
### Proposed Objective
User Experience
### Proposed Priority
Priority 3 - Average
### External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
_No response_
### Assumptions & Initiation Needs
_No response_
### Quality Assurance Notes & Workarounds

### Sub-Tasks & Estimates
_No response_ | 1.0 | UI: Workflow action takeover requires alignment adjustment - ### Parent Issue
[Pages]
### Problem Statement
The workflow action takeover appears to be misaligned.
### Steps to Reproduce
1. Go to https://localhost:8443/dotAdmin/#/pages
2. Observe that the alignment of the takeover elements is off
### Acceptance Criteria
The workflow action takeover should be aligned correctly.
### dotCMS Version
23.07
### Proposed Objective
User Experience
### Proposed Priority
Priority 3 - Average
### External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
_No response_
### Assumptions & Initiation Needs
_No response_
### Quality Assurance Notes & Workarounds

### Sub-Tasks & Estimates
_No response_ | defect | ui workflow action takeover requires alignment adjustment parent issue problem statement the workflow action takeover appears to be misaligned steps to reproduce go to observe that the alignment of the takeover elements is off acceptance criteria the workflow action takeover should be aligned correctly dotcms version proposed objective user experience proposed priority priority average external links slack conversations support tickets figma designs etc no response assumptions initiation needs no response quality assurance notes workarounds sub tasks estimates no response | 1 |
16,205 | 2,878,405,739 | IssuesEvent | 2015-06-10 00:38:47 | googlei18n/noto-fonts | https://api.github.com/repos/googlei18n/noto-fonts | closed | language missing | auto-migrated Priority-Medium Type-Defect | ```
on the download page, users can select language based on region. however, the
webpage listed every language in Chinese language-family except for Cantonese,
which is widely used in China and Hong Kong.
```
Original issue reported on code.google.com by `LQY....@gmail.com` on 8 Oct 2014 at 2:40 | 1.0 | language missing - ```
on the download page, users can select language based on region. however, the
webpage listed every language in Chinese language-family except for Cantonese,
which is widely used in China and Hong Kong.
```
Original issue reported on code.google.com by `LQY....@gmail.com` on 8 Oct 2014 at 2:40 | defect | language missing on the download page users can select language based on region however the webpage listed every language in chinese language family except for cantonese which is widely used in china and hong kong original issue reported on code google com by lqy gmail com on oct at | 1 |
38,183 | 8,686,538,639 | IssuesEvent | 2018-12-03 11:05:30 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Stop using the term "database" colloquially as database system in documentation | C: Documentation E: All Editions P: Low T: Defect | In a more strict understanding of RDBMS related terms, the name "database" clearly identifies a database in terms of: "schema", "model", etc., whereas what we often call "database" is really a "database system".
The manual and Javadoc should more clearly distinguish between these two terms. | 1.0 | Stop using the term "database" colloquially as database system in documentation - In a more strict understanding of RDBMS related terms, the name "database" clearly identifies a database in terms of: "schema", "model", etc., whereas what we often call "database" is really a "database system".
The manual and Javadoc should more clearly distinguish between these two terms. | defect | stop using the term database colloquially as database system in documentation in a more strict understanding of rdbms related terms the name database clearly identifies a database in terms of schema model etc whereas what we often call database is really a database system the manual and javadoc should more clearly distinguish between these two terms | 1 |
76,681 | 26,554,558,732 | IssuesEvent | 2023-01-20 10:50:34 | nats-io/nats.java | https://api.github.com/repos/nats-io/nats.java | closed | ArrayIndexOutOfBoundsException on attempt to reconnect | 🐞 defect | ## Defect
We observed one of our Nats clients in our production enviroment flooded log with the following exceptions.
1. First 40 minutes it was throwing the following exception during publish() call (messages consumption was healthy):
```
java.lang.IllegalStateException: Output queue is full 5000
at io.nats.client.impl.MessageQueue.push(MessageQueue.java:124)
at io.nats.client.impl.MessageQueue.push(MessageQueue.java:110)
at io.nats.client.impl.NatsConnectionWriter.queue(NatsConnectionWriter.java:209)
at io.nats.client.impl.NatsConnection.queueOutgoing(NatsConnection.java:1353)
at io.nats.client.impl.NatsConnection.publishInternal(NatsConnection.java:765)
at io.nats.client.impl.NatsConnection.publish(NatsConnection.java:733)
```
2. Then both publishing and consumption didn't work for another 40 minutes till service was terminated. At this time log was flooded with the following exceptions (each time with the same last destination index and byte index):
```
java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: arraycopy: last destination index 175759 out of bounds for byte[131555]
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at io.nats.client.impl.NatsConnection.tryToConnect(NatsConnection.java:380)
at io.nats.client.impl.NatsConnection.reconnect(NatsConnection.java:254)
at io.nats.client.impl.NatsConnection.closeSocket(NatsConnection.java:583)
at io.nats.client.impl.NatsConnection.lambda$handleCommunicationIssue$3(NatsConnection.java:541)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ArrayIndexOutOfBoundsException: arraycopy: last destination index 175759 out of bounds for byte[131555]
at java.base/java.lang.System.arraycopy(Native Method)
at io.nats.client.impl.NatsConnectionWriter.sendMessageBatch(NatsConnectionWriter.java:147)
at io.nats.client.impl.NatsConnectionWriter.run(NatsConnectionWriter.java:188)
... 5 common frames omitted
```
Our client configuration:
```
Options options = new Options.Builder()
.servers(natsBootstrapServers.addresses)
.errorListener(errorListener)
.connectionTimeout(Duration.ofSeconds(5))
.maxReconnects(-1) // -1 = Infinite
.connectionName(getHostName())
.pingInterval(Duration.ofSeconds(2))
.build();
```
We don't have a reproduction scenario yet.
#### Versions of `io.nats:jnats` and `nats-server`:
nats server v 2.7.4
jnats v 2.14.1
#### OS/Container environment:
adoptopenjdk/openjdk11:jdk-11.0.8_10-alpine
#### Steps or code to reproduce the issue:
#### Expected result:
#### Actual result:
| 1.0 | ArrayIndexOutOfBoundsException on attempt to reconnect - ## Defect
We observed one of our Nats clients in our production enviroment flooded log with the following exceptions.
1. First 40 minutes it was throwing the following exception during publish() call (messages consumption was healthy):
```
java.lang.IllegalStateException: Output queue is full 5000
at io.nats.client.impl.MessageQueue.push(MessageQueue.java:124)
at io.nats.client.impl.MessageQueue.push(MessageQueue.java:110)
at io.nats.client.impl.NatsConnectionWriter.queue(NatsConnectionWriter.java:209)
at io.nats.client.impl.NatsConnection.queueOutgoing(NatsConnection.java:1353)
at io.nats.client.impl.NatsConnection.publishInternal(NatsConnection.java:765)
at io.nats.client.impl.NatsConnection.publish(NatsConnection.java:733)
```
2. Then both publishing and consumption didn't work for another 40 minutes till service was terminated. At this time log was flooded with the following exceptions (each time with the same last destination index and byte index):
```
java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: arraycopy: last destination index 175759 out of bounds for byte[131555]
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at io.nats.client.impl.NatsConnection.tryToConnect(NatsConnection.java:380)
at io.nats.client.impl.NatsConnection.reconnect(NatsConnection.java:254)
at io.nats.client.impl.NatsConnection.closeSocket(NatsConnection.java:583)
at io.nats.client.impl.NatsConnection.lambda$handleCommunicationIssue$3(NatsConnection.java:541)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ArrayIndexOutOfBoundsException: arraycopy: last destination index 175759 out of bounds for byte[131555]
at java.base/java.lang.System.arraycopy(Native Method)
at io.nats.client.impl.NatsConnectionWriter.sendMessageBatch(NatsConnectionWriter.java:147)
at io.nats.client.impl.NatsConnectionWriter.run(NatsConnectionWriter.java:188)
... 5 common frames omitted
```
Our client configuration:
```
Options options = new Options.Builder()
.servers(natsBootstrapServers.addresses)
.errorListener(errorListener)
.connectionTimeout(Duration.ofSeconds(5))
.maxReconnects(-1) // -1 = Infinite
.connectionName(getHostName())
.pingInterval(Duration.ofSeconds(2))
.build();
```
We don't have a reproduction scenario yet.
#### Versions of `io.nats:jnats` and `nats-server`:
nats server v 2.7.4
jnats v 2.14.1
#### OS/Container environment:
adoptopenjdk/openjdk11:jdk-11.0.8_10-alpine
#### Steps or code to reproduce the issue:
#### Expected result:
#### Actual result:
| defect | arrayindexoutofboundsexception on attempt to reconnect - defect we observed one of our nats clients in our production enviroment flooded log with the following exceptions first minutes it was throwing the following exception during publish call messages consumption was healthy java lang illegalstateexception output queue is full at io nats client impl messagequeue push messagequeue java at io nats client impl messagequeue push messagequeue java at io nats client impl natsconnectionwriter queue natsconnectionwriter java at io nats client impl natsconnection queueoutgoing natsconnection java at io nats client impl natsconnection publishinternal natsconnection java at io nats client impl natsconnection publish natsconnection java then both publishing and consumption didn t work for another minutes till service was terminated at this time log was flooded with the following exceptions each time with the same last destination index and byte index java util concurrent executionexception java lang arrayindexoutofboundsexception arraycopy last destination index out of bounds for byte at java base java util concurrent futuretask report futuretask java at java base java util concurrent futuretask get futuretask java at io nats client impl natsconnection trytoconnect natsconnection java at io nats client impl natsconnection reconnect natsconnection java at io nats client impl natsconnection closesocket natsconnection java at io nats client impl natsconnection lambda handlecommunicationissue natsconnection java at java base java util concurrent executors runnableadapter call executors java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by java lang arrayindexoutofboundsexception arraycopy last destination index out of bounds for byte at java base java lang system arraycopy native method at io nats client impl natsconnectionwriter sendmessagebatch natsconnectionwriter java at io nats client impl natsconnectionwriter run natsconnectionwriter java common frames omitted our client configuration options options new options builder servers natsbootstrapservers addresses errorlistener errorlistener connectiontimeout duration ofseconds maxreconnects infinite connectionname gethostname pinginterval duration ofseconds build we don t have a reproduction scenario yet versions of io nats jnats and nats server nats server v jnats v os container environment adoptopenjdk jdk alpine steps or code to reproduce the issue expected result actual result | 1
138,546 | 5,343,559,171 | IssuesEvent | 2017-02-17 11:45:14 | RobinStephenson/jbt-2 | https://api.github.com/repos/RobinStephenson/jbt-2 | opened | Start of event message in Gui | enhancement GUI high-priority | Display the event title and description when an event is successfully started | 1.0 | Start of event message in Gui - Display the event title and description when an event is successfully started | non_defect | start of event message in gui display the event title and description when an event is successfully started | 0 |
8,828 | 2,612,904,937 | IssuesEvent | 2015-02-27 17:25:30 | chrsmith/windows-package-manager | https://api.github.com/repos/chrsmith/windows-package-manager | closed | VLC not installable | auto-migrated Milestone-End_Of_Month Type-Defect | ```
I get
Error: Install/Uninstall: Error 12040: A redirect request will change a secure
to a non-secure connection
when trying to install VLC, both with npackd 1.5 and 1.6 series. And indeed,
when visiting the download URL manually, I get the file, but via http (without
tls).
If one could provide the setup file manually, one could work around this issue,
but I don't know where to put it / if that's possible. BTW, an option to keep
downloaded setups would be appreciated :)
```
Original issue reported on code.google.com by `dtra...@gmail.com` on 28 Nov 2011 at 11:02 | 1.0 | VLC not installable - ```
I get
Error: Install/Uninstall: Error 12040: A redirect request will change a secure
to a non-secure connection
when trying to install VLC, both with npackd 1.5 and 1.6 series. And indeed,
when visiting the download URL manually, I get the file, but via http (without
tls).
If one could provide the setup file manually, one could work around this issue,
but I don't know where to put it / if that's possible. BTW, an option to keep
downloaded setups for would be appreciated :)
```
Original issue reported on code.google.com by `dtra...@gmail.com` on 28 Nov 2011 at 11:02 | defect | vlc not installable i get error install uninstall error a redirect request will change a secure to a non secure connection when trying to install vlc both with npackd and series and indeed when visiting the download url manually i get the file but via http without tls if one could provide the setup file manually one could work around this issue but i don t know where to put it if that s possible btw an option to keep downloaded setups for would be appreciated original issue reported on code google com by dtra gmail com on nov at | 1 |
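The error the VLC reporter hits (wininet 12040) fires when a redirect chain hops from https to plain http, exactly as they observed on the download URL. As an illustration only (the function name and sample URLs below are mine, not from the issue), a redirect chain can be screened for such a downgrade before a strict client refuses it:

```python
from urllib.parse import urlparse

def downgrade_hop(chain):
    """Return the (src, dst) pair of the first https -> http hop
    in a redirect chain, or None if every hop stays secure."""
    for src, dst in zip(chain, chain[1:]):
        if urlparse(src).scheme == "https" and urlparse(dst).scheme == "http":
            return (src, dst)
    return None

# A chain like the reported download: starts on https, lands on plain http,
# which is the condition that triggers wininet error 12040.
chain = [
    "https://example.org/vlc/setup.exe",
    "http://mirror.example.net/vlc/setup.exe",
]
```

A package manager could log the offending hop instead of failing with a bare error code.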
6,665 | 2,610,258,730 | IssuesEvent | 2015-02-26 19:22:30 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳激光治疗粉刺效果好不好 | auto-migrated Priority-Medium Type-Defect | ```
Is Shenzhen laser treatment effective for acne? [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. The clinic relies on a Korean secret formula — Hanfang Keyan, a state-licensed therapeutic acne-removal product — and, combining its professional "no-rebound" healthy acne-removal technique with an advanced "deluxe color-light" instrument, pioneered signed-contract guaranteed treatment of comedones and acne in China, successfully clearing the pimples on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:53 | 1.0 | 深圳激光治疗粉刺效果好不好 - ```
Is Shenzhen laser treatment effective for acne? [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. The clinic relies on a Korean secret formula — Hanfang Keyan, a state-licensed therapeutic acne-removal product — and, combining its professional "no-rebound" healthy acne-removal technique with an advanced "deluxe color-light" instrument, pioneered signed-contract guaranteed treatment of comedones and acne in China, successfully clearing the pimples on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:53 | defect | 深圳激光治疗粉刺效果好不好 深圳激光治疗粉刺效果好不好【 , 】深圳韩方科颜专业祛痘连锁机构,机�� �以韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘� ��品,韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“ 不反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开�� �国内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾� ��脸上的痘痘。 original issue reported on code google com by szft com on may at | 1 |
77,242 | 26,874,363,173 | IssuesEvent | 2023-02-04 21:26:34 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Emojis are not properly represented in message input field | T-Defect | ### Steps to reproduce
1. Open any chat
2. Choose/paste some uncommon emojis, for example 🇸 🇱 (without space in between)
3. Send them.
### Outcome
#### What did you expect?
I expect my message to look the same way as in input field.
Here, what GitHub does:
https://user-images.githubusercontent.com/52239427/216790134-2d2d772c-47c1-409e-8897-a359e15f1fd7.mp4
#### What happened instead?
🇸 🇱 turned into 🇸🇱
https://user-images.githubusercontent.com/52239427/216790156-fb2b1f87-23a8-45e0-a5a8-50780c952d1b.mp4
### Operating system
nixOS
### Browser information
Mozilla Firefox 109.0.1 (64-bit)
### URL for webapp
app.element.io
### Application version
Element version: 1.11.21 Olm version: 3.2.12
### Homeserver
sleroq.link, Dendrite v0.11.0
### Will you send logs?
Yes | 1.0 | Emojis are not properly represented in message input field - ### Steps to reproduce
1. Open any chat
2. Choose/paste some uncommon emojis, for example 🇸 🇱 (without space in between)
3. Send them.
### Outcome
#### What did you expect?
I expect my message to look the same way as in input field.
Here, what GitHub does:
https://user-images.githubusercontent.com/52239427/216790134-2d2d772c-47c1-409e-8897-a359e15f1fd7.mp4
#### What happened instead?
🇸 🇱 turned into 🇸🇱
https://user-images.githubusercontent.com/52239427/216790156-fb2b1f87-23a8-45e0-a5a8-50780c952d1b.mp4
### Operating system
nixOS
### Browser information
Mozilla Firefox 109.0.1 (64-bit)
### URL for webapp
app.element.io
### Application version
Element version: 1.11.21 Olm version: 3.2.12
### Homeserver
sleroq.link, Dendrite v0.11.0
### Will you send logs?
Yes | defect | emojis are not properly represented in message input field steps to reproduce open any chat choose paste some uncommon emojis for example 🇸 🇱 without space in between send them outcome what did you expect i expect my message to look the same way as in input field here what github does what happened instead 🇸 🇱 turned into 🇸🇱 operating system nixos browser information mozilla firefox bit url for webapp app element io application version element version olm version homeserver sleroq link dendrite will you send logs yes | 1 |
2,615 | 2,607,932,059 | IssuesEvent | 2015-02-26 00:27:09 | chrsmithdemos/minify | https://api.github.com/repos/chrsmithdemos/minify | closed | must-revalidate currently breaks caching in webkit | auto-migrated Priority-Medium Release-2.1.2 Type-Defect | ```
See: http://mrclay.org/index.php/2009/02/24/safari-4-beta-cache-
controlmust-revalidate-bug/
For now, best to remove must-revalidate from ConditionalGet. It was
originally added to force Opera to revalidate its cache (which it wasn't
even with max-age=0).
```
-----
Original issue reported on code.google.com by `mrclay....@gmail.com` on 30 Jun 2009 at 5:36 | 1.0 | must-revalidate currently breaks caching in webkit - ```
See: http://mrclay.org/index.php/2009/02/24/safari-4-beta-cache-
controlmust-revalidate-bug/
For now, best to remove must-revalidate from ConditionalGet. It was
originally added to force Opera to revalidate its cache (which it wasn't
even with max-age=0).
```
-----
Original issue reported on code.google.com by `mrclay....@gmail.com` on 30 Jun 2009 at 5:36 | defect | must revalidate currently breaks caching in webkit see controlmust revalidate bug for now best to remove must revalidate from conditionalget it was originally added to force opera to revalidate its cache which it wasn t even with max age original issue reported on code google com by mrclay gmail com on jun at | 1 |
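For context on why the directive was in ConditionalGet at all: `must-revalidate` tells caches they may not serve a stale copy without checking back with the origin, which is what `max-age=0` alone failed to force in Opera. A rough sketch (directive parsing only, not the actual Minify code) of how a cache reads such a header:

```python
def parse_cache_control(header):
    """Split a Cache-Control header into a {directive: value-or-None} dict."""
    out = {}
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        out[name.lower()] = value or None
    return out

def is_fresh(directives, age_seconds):
    """True if a cached copy of the given age may still be served as-is."""
    max_age = int(directives.get("max-age") or 0)
    return age_seconds <= max_age
```

With `max-age=0, must-revalidate`, any cached copy older than zero seconds is stale and must be revalidated — the behavior the buggy Safari 4 beta got wrong, prompting the removal described above.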
174,508 | 21,300,169,878 | IssuesEvent | 2022-04-15 01:16:54 | mgh3326/nuber-eats-backend | https://api.github.com/repos/mgh3326/nuber-eats-backend | opened | CVE-2021-43138 (High) detected in async-0.2.10.tgz | security vulnerability | ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-0.2.10.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-0.2.10.tgz">https://registry.npmjs.org/async/-/async-0.2.10.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- psql-0.0.1.tgz (Root Library)
- winston-0.7.3.tgz
- :x: **async-0.2.10.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution: async - v3.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-43138 (High) detected in async-0.2.10.tgz - ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-0.2.10.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-0.2.10.tgz">https://registry.npmjs.org/async/-/async-0.2.10.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- psql-0.0.1.tgz (Root Library)
- winston-0.7.3.tgz
- :x: **async-0.2.10.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution: async - v3.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in async tgz cve high severity vulnerability vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules async package json dependency hierarchy psql tgz root library winston tgz x async tgz vulnerable library found in base branch master vulnerability details a vulnerability exists in async through fixed in which could let a malicious user obtain privileges via the mapvalues method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async step up your open source security game with whitesource | 0 |
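The fix metadata in the CVE record boils down to a version floor: any async release below 3.2.2 is in range. As a sketch (function names are mine; real tooling should use a proper semver library rather than naive splitting), a dotted-version comparison that flags vulnerable releases:

```python
def version_tuple(v):
    """Turn a dotted version string like '3.2.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed_in="3.2.2"):
    """True if the installed async version predates the fixed release."""
    return version_tuple(installed) < version_tuple(fixed_in)
```

This is why the ancient transitive async-0.2.10 pulled in via winston is flagged: (0, 2, 10) sorts far below (3, 2, 2).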
40,470 | 5,293,644,919 | IssuesEvent | 2017-02-09 08:18:51 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | github.com/cockroachdb/cockroach/pkg/sql: TestParallelCreateTables failed under stress | Robot test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/298907ef2a3d33cd87e255b663d76c8eb74a62df
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=false
TAGS=
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=144530&tab=buildLog
```
W170209 08:16:46.355017 14244 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170209 08:16:46.355952 14244 server/config.go:482 1 storage engine initialized
I170209 08:16:46.356159 14244 server/node.go:442 [n?] store [n0,s0] not bootstrapped
I170209 08:16:46.357662 14244 server/node.go:371 [n?] **** cluster 6b903678-546b-4d5b-bce0-fa58a5b7d153 has been created
I170209 08:16:46.357684 14244 server/node.go:372 [n?] **** add additional nodes by specifying --join=127.0.0.1:34680
I170209 08:16:46.358118 14244 storage/store.go:1261 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I170209 08:16:46.358148 14244 server/node.go:455 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:0}
I170209 08:16:46.358170 14244 server/node.go:340 [n1] node ID 1 initialized
I170209 08:16:46.358213 14244 gossip/gossip.go:293 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34680" > attrs:<> locality:<>
I170209 08:16:46.358319 14244 storage/stores.go:296 [n1] read 0 node addresses from persistent storage
I170209 08:16:46.358358 14244 server/node.go:587 [n1] connecting to gossip network to verify cluster ID...
I170209 08:16:46.359134 14244 server/node.go:611 [n1] node connected via gossip and verified as part of cluster "6b903678-546b-4d5b-bce0-fa58a5b7d153"
I170209 08:16:46.359163 14244 server/node.go:390 [n1] node=1: started with [[]=] engine(s) and attributes []
I170209 08:16:46.359192 14244 sql/executor.go:332 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:34680}
I170209 08:16:46.360973 14244 server/server.go:632 [n1] starting https server at 127.0.0.1:60316
I170209 08:16:46.360993 14244 server/server.go:633 [n1] starting grpc/postgres server at 127.0.0.1:34680
I170209 08:16:46.361006 14244 server/server.go:634 [n1] advertising CockroachDB node at 127.0.0.1:34680
I170209 08:16:46.361585 14339 storage/split_queue.go:98 [split,n1,s1,r1/1:/M{in-ax},@c421a74a80] splitting at key /Table/0/0
I170209 08:16:46.362963 14339 storage/replica_command.go:2397 [split,n1,s1,r1/1:/M{in-ax},@c421a74a80] initiating a split of this range at key /Table/0 [r2]
I170209 08:16:46.366741 14349 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:34680} Attrs: Locality:} ClusterID:6b903678-546b-4d5b-bce0-fa58a5b7d153 StartedAt:1486628206359149785}
I170209 08:16:46.368918 14244 sql/event_log.go:95 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN uniqueID SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
I170209 08:16:46.374138 14339 storage/split_queue.go:98 [split,n1,s1,r2/1:/{Table/0-Max},@c420838e00] splitting at key /Table/11/0
I170209 08:16:46.374179 14339 storage/replica_command.go:2397 [split,n1,s1,r2/1:/{Table/0-Max},@c420838e00] initiating a split of this range at key /Table/11 [r3]
I170209 08:16:46.381356 14244 server/server.go:689 [n1] done ensuring all necessary migrations have run
I170209 08:16:46.381373 14244 server/server.go:691 [n1] serving sql connections
I170209 08:16:46.388871 14339 storage/split_queue.go:98 [split,n1,s1,r3/1:/{Table/11-Max},@c421a44e00] splitting at key /Table/12/0
I170209 08:16:46.388903 14339 storage/replica_command.go:2397 [split,n1,s1,r3/1:/{Table/11-Max},@c421a44e00] initiating a split of this range at key /Table/12 [r4]
I170209 08:16:46.404533 14339 storage/split_queue.go:98 [split,n1,s1,r4/1:/{Table/12-Max},@c421af4700] splitting at key /Table/13/0
I170209 08:16:46.404568 14339 storage/replica_command.go:2397 [split,n1,s1,r4/1:/{Table/12-Max},@c421af4700] initiating a split of this range at key /Table/13 [r5]
I170209 08:16:46.413676 14339 storage/split_queue.go:98 [split,n1,s1,r5/1:/{Table/13-Max},@c421a45500] splitting at key /Table/14/0
I170209 08:16:46.413709 14339 storage/replica_command.go:2397 [split,n1,s1,r5/1:/{Table/13-Max},@c421a45500] initiating a split of this range at key /Table/14 [r6]
W170209 08:16:46.452185 14244 gossip/gossip.go:1178 [n?] no incoming or outgoing connections
W170209 08:16:46.458708 14244 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170209 08:16:46.471345 14244 server/config.go:482 1 storage engine initialized
I170209 08:16:46.471545 14244 server/node.go:442 [n?] store [n0,s0] not bootstrapped
I170209 08:16:46.471563 14244 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170209 08:16:46.471581 14244 server/node.go:587 [n?] connecting to gossip network to verify cluster ID...
I170209 08:16:46.473401 14545 gossip/client.go:131 [n?] started gossip client to 127.0.0.1:34680
I170209 08:16:46.474547 14579 gossip/server.go:214 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:42865}
I170209 08:16:46.474933 14244 server/node.go:611 [n?] node connected via gossip and verified as part of cluster "6b903678-546b-4d5b-bce0-fa58a5b7d153"
I170209 08:16:46.475834 14569 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170209 08:16:46.476773 14244 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170209 08:16:46.477251 14244 server/node.go:333 [n?] new node allocated ID 2
I170209 08:16:46.477295 14244 gossip/gossip.go:293 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:42865" > attrs:<> locality:<>
I170209 08:16:46.477337 14244 server/node.go:390 [n2] node=2: started with [[]=] engine(s) and attributes []
I170209 08:16:46.477360 14244 sql/executor.go:332 [n2] creating distSQLPlanner with address {tcp 127.0.0.1:42865}
I170209 08:16:46.479379 14244 server/server.go:632 [n2] starting https server at 127.0.0.1:59021
I170209 08:16:46.479397 14244 server/server.go:633 [n2] starting grpc/postgres server at 127.0.0.1:42865
I170209 08:16:46.479409 14244 server/server.go:634 [n2] advertising CockroachDB node at 127.0.0.1:42865
I170209 08:16:46.493412 14415 storage/stores.go:312 [n1] wrote 1 node addresses to persistent storage
I170209 08:16:46.493896 14244 server/server.go:689 [n2] done ensuring all necessary migrations have run
I170209 08:16:46.493915 14244 server/server.go:691 [n2] serving sql connections
I170209 08:16:46.497212 14595 server/node.go:568 [n2] bootstrapped store [n2,s2]
I170209 08:16:46.498955 14419 storage/replica_raftstorage.go:414 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] generated preemptive snapshot da848636 at index 16
W170209 08:16:46.499512 14244 gossip/gossip.go:1178 [n?] no incoming or outgoing connections
W170209 08:16:46.500127 14244 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170209 08:16:46.501033 14244 server/config.go:482 1 storage engine initialized
I170209 08:16:46.501255 14244 server/node.go:442 [n?] store [n0,s0] not bootstrapped
I170209 08:16:46.501268 14244 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170209 08:16:46.501282 14244 server/node.go:587 [n?] connecting to gossip network to verify cluster ID...
I170209 08:16:46.503307 14598 sql/event_log.go:95 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:42865} Attrs: Locality:} ClusterID:6b903678-546b-4d5b-bce0-fa58a5b7d153 StartedAt:1486628206477326275}
I170209 08:16:46.509215 14419 storage/store.go:3278 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] streamed snapshot: kv pairs: 40, log entries: 6, 1ms
I170209 08:16:46.509460 14558 storage/replica_raftstorage.go:596 [n2,s2,r5/?:{-},@c4216b0a80] applying preemptive snapshot at index 16 (id=da848636, encoded size=5616, 1 rocksdb batches, 6 log entries)
I170209 08:16:46.509654 14558 storage/replica_raftstorage.go:604 [n2,s2,r5/?:/Table/1{3-4},@c4216b0a80] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.510281 14738 gossip/client.go:131 [n?] started gossip client to 127.0.0.1:34680
I170209 08:16:46.510593 14771 gossip/server.go:214 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:47786}
I170209 08:16:46.510696 14244 server/node.go:611 [n?] node connected via gossip and verified as part of cluster "6b903678-546b-4d5b-bce0-fa58a5b7d153"
I170209 08:16:46.510891 14759 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170209 08:16:46.510923 14759 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I170209 08:16:46.511284 14419 storage/replica_command.go:3253 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] change replicas (remove {2 2 2}): read existing descriptor range_id:5 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.514079 14244 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170209 08:16:46.514137 14786 storage/replica.go:2476 [n1,s1,r5/1:/Table/1{3-4},@c421a45500] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170209 08:16:46.514636 14244 server/node.go:333 [n?] new node allocated ID 3
I170209 08:16:46.514678 14244 gossip/gossip.go:293 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:47786" > attrs:<> locality:<>
I170209 08:16:46.514711 14419 storage/replica_raftstorage.go:414 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] generated preemptive snapshot 6eeb3705 at index 22
I170209 08:16:46.514723 14244 server/node.go:390 [n3] node=3: started with [[]=] engine(s) and attributes []
I170209 08:16:46.514739 14244 sql/executor.go:332 [n3] creating distSQLPlanner with address {tcp 127.0.0.1:47786}
I170209 08:16:46.515230 14419 storage/store.go:3278 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] streamed snapshot: kv pairs: 10, log entries: 12, 0ms
I170209 08:16:46.515516 14781 storage/raft_transport.go:437 [n2] raft transport stream to node 1 established
I170209 08:16:46.515751 14790 storage/replica_raftstorage.go:596 [n2,s2,r3/?:{-},@c42266e000] applying preemptive snapshot at index 22 (id=6eeb3705, encoded size=6038, 1 rocksdb batches, 12 log entries)
I170209 08:16:46.515886 14708 storage/stores.go:312 [n1] wrote 2 node addresses to persistent storage
I170209 08:16:46.515946 14790 storage/replica_raftstorage.go:604 [n2,s2,r3/?:/Table/1{1-2},@c42266e000] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.516076 14709 storage/stores.go:312 [n2] wrote 2 node addresses to persistent storage
I170209 08:16:46.516330 14419 storage/replica_command.go:3253 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] change replicas (remove {2 2 2}): read existing descriptor range_id:3 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.516690 14244 server/server.go:632 [n3] starting https server at 127.0.0.1:57920
I170209 08:16:46.516708 14244 server/server.go:633 [n3] starting grpc/postgres server at 127.0.0.1:47786
I170209 08:16:46.516716 14244 server/server.go:634 [n3] advertising CockroachDB node at 127.0.0.1:47786
I170209 08:16:46.518304 14244 server/server.go:689 [n3] done ensuring all necessary migrations have run
I170209 08:16:46.518323 14244 server/server.go:691 [n3] serving sql connections
I170209 08:16:46.532019 14683 server/node.go:568 [n3] bootstrapped store [n3,s3]
I170209 08:16:46.532422 14811 storage/replica.go:2476 [n1,s1,r3/1:/Table/1{1-2},@c421a44e00] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170209 08:16:46.533496 14419 storage/replica_raftstorage.go:414 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] generated preemptive snapshot a43c23ed at index 20
I170209 08:16:46.534143 14817 storage/replica_raftstorage.go:596 [n2,s2,r4/?:{-},@c4216b0e00] applying preemptive snapshot at index 20 (id=a43c23ed, encoded size=8479, 1 rocksdb batches, 10 log entries)
I170209 08:16:46.534345 14817 storage/replica_raftstorage.go:604 [n2,s2,r4/?:/Table/1{2-3},@c4216b0e00] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.534632 14686 sql/event_log.go:95 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:47786} Attrs: Locality:} ClusterID:6b903678-546b-4d5b-bce0-fa58a5b7d153 StartedAt:1486628206514708621}
I170209 08:16:46.535134 14419 storage/store.go:3278 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] streamed snapshot: kv pairs: 36, log entries: 10, 0ms
I170209 08:16:46.537904 14419 storage/replica_command.go:3253 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] change replicas (remove {2 2 2}): read existing descriptor range_id:4 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.540752 14923 storage/replica.go:2476 [n1,s1,r4/1:/Table/1{2-3},@c421af4700] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170209 08:16:46.541276 14419 storage/replica_raftstorage.go:414 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] generated preemptive snapshot 82e14f8b at index 24
I170209 08:16:46.552915 14419 storage/store.go:3278 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] streamed snapshot: kv pairs: 33, log entries: 14, 2ms
I170209 08:16:46.553187 14950 storage/replica_raftstorage.go:596 [n3,s3,r2/?:{-},@c42094ee00] applying preemptive snapshot at index 24 (id=82e14f8b, encoded size=12461, 1 rocksdb batches, 14 log entries)
I170209 08:16:46.553486 14950 storage/replica_raftstorage.go:604 [n3,s3,r2/?:/Table/{0-11},@c42094ee00] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.555021 14419 storage/replica_command.go:3253 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] change replicas (remove {3 3 2}): read existing descriptor range_id:2 start_key:"\210" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.558573 14969 storage/replica.go:2476 [n1,s1,r2/1:/Table/{0-11},@c420838e00] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170209 08:16:46.559128 14419 storage/replica_raftstorage.go:414 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] generated preemptive snapshot 56ac0ade at index 67
I170209 08:16:46.563259 14419 storage/store.go:3278 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] streamed snapshot: kv pairs: 783, log entries: 57, 4ms
I170209 08:16:46.563503 14954 storage/replica_raftstorage.go:596 [n3,s3,r1/?:{-},@c42027ca80] applying preemptive snapshot at index 67 (id=56ac0ade, encoded size=255981, 1 rocksdb batches, 57 log entries)
I170209 08:16:46.563997 14978 storage/raft_transport.go:437 [n3] raft transport stream to node 1 established
I170209 08:16:46.586698 14954 storage/replica_raftstorage.go:604 [n3,s3,r1/?:/{Min-Table/0},@c42027ca80] applied preemptive snapshot in 23ms [clear=0ms batch=0ms entries=3ms commit=3ms]
I170209 08:16:46.587593 14419 storage/replica_command.go:3253 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] change replicas (remove {3 3 2}): read existing descriptor range_id:1 start_key:"" end_key:"\210" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.591791 14897 storage/replica.go:2476 [n1,s1,r1/1:/{Min-Table/0},@c421a74a80] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170209 08:16:46.592820 14419 storage/replica_raftstorage.go:414 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] generated preemptive snapshot 0cdf3695 at index 11
I170209 08:16:46.594325 14938 storage/replica_raftstorage.go:596 [n3,s3,r6/?:{-},@c42219bc00] applying preemptive snapshot at index 11 (id=0cdf3695, encoded size=504, 1 rocksdb batches, 1 log entries)
I170209 08:16:46.594489 14938 storage/replica_raftstorage.go:604 [n3,s3,r6/?:/{Table/14-Max},@c42219bc00] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.594968 14419 storage/store.go:3278 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] streamed snapshot: kv pairs: 9, log entries: 1, 2ms
I170209 08:16:46.599280 14419 storage/replica_command.go:3253 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] change replicas (remove {3 3 2}): read existing descriptor range_id:6 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.633671 15008 storage/replica.go:2476 [n1,s1,r6/1:/{Table/14-Max},@c420839500] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170209 08:16:46.634074 14419 storage/queue.go:693 [n1,replicate] purgatory is now empty
I170209 08:16:46.634302 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] generated preemptive snapshot 9d5eb423 at index 29
I170209 08:16:46.635797 15012 storage/replica_raftstorage.go:596 [n3,s3,r5/?:{-},@c421ae6e00] applying preemptive snapshot at index 29 (id=9d5eb423, encoded size=17194, 1 rocksdb batches, 19 log entries)
I170209 08:16:46.636066 15012 storage/replica_raftstorage.go:604 [n3,s3,r5/?:/Table/1{3-4},@c421ae6e00] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.636183 14340 storage/store.go:3278 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] streamed snapshot: kv pairs: 76, log entries: 19, 1ms
I170209 08:16:46.636549 14340 storage/replica_command.go:3253 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] change replicas (remove {3 3 3}): read existing descriptor range_id:5 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170209 08:16:46.640482 15029 storage/replica.go:2476 [n1,s1,r5/1:/Table/1{3-4},@c421a45500] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170209 08:16:46.645015 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] generated preemptive snapshot 8c42d8eb at index 78
I170209 08:16:46.651586 14340 storage/store.go:3278 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] streamed snapshot: kv pairs: 789, log entries: 68, 6ms
I170209 08:16:46.651923 15032 storage/replica_raftstorage.go:596 [n2,s2,r1/?:{-},@c421f29880] applying preemptive snapshot at index 78 (id=8c42d8eb, encoded size=259557, 1 rocksdb batches, 68 log entries)
I170209 08:16:46.672082 15032 storage/replica_raftstorage.go:604 [n2,s2,r1/?:/{Min-Table/0},@c421f29880] applied preemptive snapshot in 20ms [clear=0ms batch=0ms entries=19ms commit=1ms]
I170209 08:16:46.673649 14340 storage/replica_command.go:3253 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] change replicas (remove {2 2 3}): read existing descriptor range_id:1 start_key:"" end_key:"\210" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170209 08:16:46.677903 14985 storage/replica.go:2476 [n1,s1,r1/1:/{Min-Table/0},@c421a74a80] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170209 08:16:46.684866 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] generated preemptive snapshot c036d85d at index 14
I170209 08:16:46.686135 14340 storage/store.go:3278 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] streamed snapshot: kv pairs: 10, log entries: 4, 1ms
I170209 08:16:46.688064 15090 storage/replica_raftstorage.go:596 [n2,s2,r6/?:{-},@c421a74000] applying preemptive snapshot at index 14 (id=c036d85d, encoded size=1694, 1 rocksdb batches, 4 log entries)
I170209 08:16:46.688238 15090 storage/replica_raftstorage.go:604 [n2,s2,r6/?:/{Table/14-Max},@c421a74000] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.688717 14340 storage/replica_command.go:3253 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] change replicas (remove {2 2 3}): read existing descriptor range_id:6 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170209 08:16:46.691982 14876 storage/replica.go:2476 [n1,s1,r6/1:/{Table/14-Max},@c420839500] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170209 08:16:46.695452 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] generated preemptive snapshot 8e6b3f22 at index 24
I170209 08:16:46.695957 14340 storage/store.go:3278 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] streamed snapshot: kv pairs: 31, log entries: 14, 0ms
I170209 08:16:46.696213 15124 storage/replica_raftstorage.go:596 [n3,s3,r4/?:{-},@c421ef7880] applying preemptive snapshot at index 24 (id=8e6b3f22, encoded size=9271, 1 rocksdb batches, 14 log entries)
I170209 08:16:46.696441 15124 storage/replica_raftstorage.go:604 [n3,s3,r4/?:/Table/1{2-3},@c421ef7880] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.696953 14340 storage/replica_command.go:3253 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] change replicas (remove {3 3 3}): read existing descriptor range_id:4 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170209 08:16:46.702057 15081 storage/replica.go:2476 [n1,s1,r4/1:/Table/1{2-3},@c421af4700] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170209 08:16:46.703074 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] generated preemptive snapshot 46928d4d at index 27
I170209 08:16:46.707596 14340 storage/store.go:3278 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] streamed snapshot: kv pairs: 34, log entries: 17, 4ms
I170209 08:16:46.707861 14988 storage/replica_raftstorage.go:596 [n2,s2,r2/?:{-},@c421ae6700] applying preemptive snapshot at index 27 (id=46928d4d, encoded size=13647, 1 rocksdb batches, 17 log entries)
I170209 08:16:46.708101 14988 storage/replica_raftstorage.go:604 [n2,s2,r2/?:/Table/{0-11},@c421ae6700] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.708958 14340 storage/replica_command.go:3253 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] change replicas (remove {2 2 3}): read existing descriptor range_id:2 start_key:"\210" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170209 08:16:46.714573 15056 storage/replica.go:2476 [n1,s1,r2/1:/Table/{0-11},@c420838e00] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170209 08:16:46.718768 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] generated preemptive snapshot b582edcc at index 26
I170209 08:16:46.719275 14340 storage/store.go:3278 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] streamed snapshot: kv pairs: 11, log entries: 16, 0ms
I170209 08:16:46.719536 15120 storage/replica_raftstorage.go:596 [n3,s3,r3/?:{-},@c42219a700] applying preemptive snapshot at index 26 (id=b582edcc, encoded size=7530, 1 rocksdb batches, 16 log entries)
I170209 08:16:46.719794 15120 storage/replica_raftstorage.go:604 [n3,s3,r3/?:/Table/1{1-2},@c42219a700] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.720161 14340 storage/replica_command.go:3253 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] change replicas (remove {3 3 3}): read existing descriptor range_id:3 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170209 08:16:46.724337 15156 storage/replica.go:2476 [n1,s1,r3/1:/Table/1{1-2},@c421a44e00] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170209 08:16:46.895958 15088 sql/event_log.go:95 [client=127.0.0.1:37587,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:test Statement:CREATE DATABASE test User:root}
I170209 08:16:46.897681 14339 storage/split_queue.go:98 [split,n1,s1,r6/1:/{Table/14-Max},@c420839500] splitting at key /Table/50/0
I170209 08:16:46.897717 14339 storage/replica_command.go:2397 [split,n1,s1,r6/1:/{Table/14-Max},@c420839500] initiating a split of this range at key /Table/50 [r7]
I170209 08:16:47.041622 15152 sql/event_log.go:95 [client=127.0.0.1:44896,user=root,n3] Event: "create_table", target: 51, info: {TableName:test.table_5 Statement:CREATE TABLE IF NOT EXISTS test.table_5 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.042215 14339 storage/split_queue.go:98 [split,n1,s1,r7/1:/{Table/50-Max},@c423ae1c00] splitting at key /Table/51/0
I170209 08:16:47.042259 14339 storage/replica_command.go:2397 [split,n1,s1,r7/1:/{Table/50-Max},@c423ae1c00] initiating a split of this range at key /Table/51 [r8]
I170209 08:16:47.059773 15192 sql/event_log.go:95 [client=127.0.0.1:37590,user=root,n1] Event: "create_table", target: 52, info: {TableName:test.table_6 Statement:CREATE TABLE IF NOT EXISTS test.table_6 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.065610 14339 storage/split_queue.go:98 [split,n1,s1,r8/1:/{Table/51-Max},@c42193ea80] splitting at key /Table/52/0
I170209 08:16:47.065654 14339 storage/replica_command.go:2397 [split,n1,s1,r8/1:/{Table/51-Max},@c42193ea80] initiating a split of this range at key /Table/52 [r9]
I170209 08:16:47.081936 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r8/1:/Table/5{1-2},@c42193ea80] generated Raft snapshot 0aa1a1ef at index 15
E170209 08:16:47.082956 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r8/1:/Table/5{1-2},@c42193ea80] [n1,s1,r8/1:/Table/5{1-2}]: change replicas aborted due to failed preemptive snapshot: r8: remote couldn't accept snapshot with error: [n3,s3],r8: cannot apply snapshot: snapshot intersects existing range [n3,s3,r7/2:/{Table/50-Max}]
I170209 08:16:47.280077 14339 storage/split_queue.go:98 [split,n1,s1,r9/1:/{Table/52-Max},@c421e97c00] splitting at key /Table/53/0
I170209 08:16:47.280114 14339 storage/replica_command.go:2397 [split,n1,s1,r9/1:/{Table/52-Max},@c421e97c00] initiating a split of this range at key /Table/53 [r10]
I170209 08:16:47.280948 15151 sql/event_log.go:95 [client=127.0.0.1:44899,user=root,n3] Event: "create_table", target: 53, info: {TableName:test.table_2 Statement:CREATE TABLE IF NOT EXISTS test.table_2 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.283646 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r9/1:/{Table/52-Max},@c421e97c00] generated Raft snapshot 7fca501f at index 11
E170209 08:16:47.285031 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r9/1:/{Table/52-Max},@c421e97c00] [n1,s1,r9/1:/{Table/52-Max}]: change replicas aborted due to failed preemptive snapshot: r9: remote couldn't accept snapshot with error: [n3,s3],r9: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.333284 15235 sql/event_log.go:95 [client=127.0.0.1:43018,user=root,n2] Event: "create_table", target: 54, info: {TableName:test.table_7 Statement:CREATE TABLE IF NOT EXISTS test.table_7 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.333613 14339 storage/split_queue.go:98 [split,n1,s1,r10/1:/{Table/53-Max},@c420f74e00] splitting at key /Table/54/0
I170209 08:16:47.333649 14339 storage/replica_command.go:2397 [split,n1,s1,r10/1:/{Table/53-Max},@c420f74e00] initiating a split of this range at key /Table/54 [r11]
I170209 08:16:47.336538 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r10/1:/{Table/53-Max},@c420f74e00] generated Raft snapshot f13e88fc at index 11
E170209 08:16:47.337626 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r10/1:/{Table/53-Max},@c420f74e00] [n1,s1,r10/1:/{Table/53-Max}]: change replicas aborted due to failed preemptive snapshot: r10: remote couldn't accept snapshot with error: [n3,s3],r10: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.379569 14339 storage/split_queue.go:98 [split,n1,s1,r11/1:/{Table/54-Max},@c422781880] splitting at key /Table/55/0
I170209 08:16:47.379608 14339 storage/replica_command.go:2397 [split,n1,s1,r11/1:/{Table/54-Max},@c422781880] initiating a split of this range at key /Table/55 [r12]
I170209 08:16:47.380670 15234 sql/event_log.go:95 [client=127.0.0.1:43021,user=root,n2] Event: "create_table", target: 55, info: {TableName:test.table_4 Statement:CREATE TABLE IF NOT EXISTS test.table_4 (id INT PRIMARY KEY, val INT) User:root}
E170209 08:16:47.402437 14823 storage/replica_proposal.go:444 [n3,s3,r2/2:/Table/{0-11},@c42094ee00] could not load SystemConfig span: must retry later due to intent on SystemConfigSpan
I170209 08:16:47.403312 16062 storage/raft_transport.go:437 [n3] raft transport stream to node 2 established
I170209 08:16:47.409093 16030 storage/raft_transport.go:437 [n2] raft transport stream to node 3 established
I170209 08:16:47.418189 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r11/1:/{Table/54-Max},@c422781880] generated Raft snapshot 34272ffc at index 11
E170209 08:16:47.418971 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r11/1:/{Table/54-Max},@c422781880] [n1,s1,r11/1:/{Table/54-Max}]: change replicas aborted due to failed preemptive snapshot: r11: remote couldn't accept snapshot with error: [n3,s3],r11: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
E170209 08:16:47.466800 14822 storage/replica_proposal.go:720 [n3,s3,r2/2:/Table/{0-11},@c42094ee00] could not load SystemConfig span: must retry later due to intent on SystemConfigSpan
I170209 08:16:47.467217 15088 sql/event_log.go:95 [client=127.0.0.1:37587,user=root,n1] Event: "create_table", target: 56, info: {TableName:test.table_9 Statement:CREATE TABLE IF NOT EXISTS test.table_9 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.485729 14339 storage/split_queue.go:98 [split,n1,s1,r12/1:/{Table/55-Max},@c421dfc000] splitting at key /Table/56/0
I170209 08:16:47.485764 14339 storage/replica_command.go:2397 [split,n1,s1,r12/1:/{Table/55-Max},@c421dfc000] initiating a split of this range at key /Table/56 [r13]
I170209 08:16:47.486036 15216 sql/event_log.go:95 [client=127.0.0.1:43024,user=root,n2] Event: "create_table", target: 57, info: {TableName:test.table_1 Statement:CREATE TABLE IF NOT EXISTS test.table_1 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.488221 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r12/1:/{Table/55-Max},@c421dfc000] generated Raft snapshot 1f053283 at index 10
E170209 08:16:47.488574 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r12/1:/{Table/55-Max},@c421dfc000] [n1,s1,r12/1:/{Table/55-Max}]: change replicas aborted due to failed preemptive snapshot: r12: remote couldn't accept snapshot with error: [n3,s3],r12: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.494491 14339 storage/split_queue.go:98 [split,n1,s1,r13/1:/{Table/56-Max},@c422766700] splitting at key /Table/57/0
I170209 08:16:47.494529 14339 storage/replica_command.go:2397 [split,n1,s1,r13/1:/{Table/56-Max},@c422766700] initiating a split of this range at key /Table/57 [r14]
I170209 08:16:47.497271 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r13/1:/{Table/56-Max},@c422766700] generated Raft snapshot f9291688 at index 11
E170209 08:16:47.497980 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r13/1:/{Table/56-Max},@c422766700] [n1,s1,r13/1:/{Table/56-Max}]: change replicas aborted due to failed preemptive snapshot: r13: remote couldn't accept snapshot with error: [n3,s3],r13: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.561670 15193 sql/event_log.go:95 [client=127.0.0.1:37593,user=root,n1] Event: "create_table", target: 58, info: {TableName:test.table_3 Statement:CREATE TABLE IF NOT EXISTS test.table_3 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.562224 14339 storage/split_queue.go:98 [split,n1,s1,r14/1:/{Table/57-Max},@c4223ca380] splitting at key /Table/58/0
I170209 08:16:47.562261 14339 storage/replica_command.go:2397 [split,n1,s1,r14/1:/{Table/57-Max},@c4223ca380] initiating a split of this range at key /Table/58 [r15]
I170209 08:16:47.564690 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r14/1:/{Table/57-Max},@c4223ca380] generated Raft snapshot 040b7d56 at index 11
E170209 08:16:47.565983 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r14/1:/{Table/57-Max},@c4223ca380] [n1,s1,r14/1:/{Table/57-Max}]: change replicas aborted due to failed preemptive snapshot: r14: remote couldn't accept snapshot with error: [n3,s3],r14: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.693628 15217 sql/event_log.go:95 [client=127.0.0.1:44893,user=root,n3] Event: "create_table", target: 59, info: {TableName:test.table_8 Statement:CREATE TABLE IF NOT EXISTS test.table_8 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.707314 14339 storage/split_queue.go:98 [split,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] splitting at key /Table/59/0
I170209 08:16:47.707349 14339 storage/replica_command.go:2397 [split,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] initiating a split of this range at key /Table/59 [r16]
I170209 08:16:47.715310 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] generated Raft snapshot 5950096e at index 10
E170209 08:16:47.717366 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] [n1,s1,r15/1:/{Table/58-Max}]: change replicas aborted due to failed preemptive snapshot: r15: remote couldn't accept snapshot with error: [n3,s3],r15: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.723309 16477 util/stop/stopper.go:493 quiescing; tasks left:
1 storage/queue.go:523
1 kv/txn_coord_sender.go:924
W170209 08:16:47.723402 14932 storage/raft_transport.go:443 [n1] raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
W170209 08:16:47.723482 16030 storage/raft_transport.go:443 [n2] raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
W170209 08:16:47.723610 14781 storage/raft_transport.go:443 [n2] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
W170209 08:16:47.723636 16062 storage/raft_transport.go:443 [n3] raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
W170209 08:16:47.723720 14978 storage/raft_transport.go:443 [n3] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
I170209 08:16:47.723794 16477 util/stop/stopper.go:493 quiescing; tasks left:
1 storage/queue.go:523
W170209 08:16:47.723863 14778 storage/raft_transport.go:443 [n1] raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
E170209 08:16:47.723954 14339 internal/client/txn.go:341 [split,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] failure aborting transaction: writing transaction timed out or ran on multiple coordinators; abort caused by: node unavailable; try another peer
E170209 08:16:47.724022 14339 storage/queue.go:628 [split,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] unable to split [n1,s1,r15/1:/{Table/58-Max}] at key "/Table/59/0": storage/replica_command.go:2476: split at key /Table/59 failed: node unavailable; try another peer
create_test.go:191: table 0: could not be created: pq: unexpected value: raw_bytes:"\364U\032\350\003\n\237\001\n\007table_5\0303 2(\0010\000:\004\010\000\020\000B\022\n\002id\020\001\032\006\010\001\020\000\030\000 \0000\000B\023\n\003val\020\002\032\006\010\001\020\000\030\000 \0010\000H\003R!\n\007primary\020\001\030\001\"\002id0\001@\000J\010\010\000\020\000\032\000 \000Z\000`\002j\n\n\010\n\004root\020\002\200\001\001\210\001\003\230\001\000\262\001\032\n\007primary\020\000\032\002id\032\003val \001 \002(\002\270\001\001\302\001\000" timestamp:<wall_time:1486628206925976227 logical:0 >
create_test.go:230: expected 10 tables created, only got 9
```
```
I170209 08:16:46.555021 14419 storage/replica_command.go:3253 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] change replicas (remove {3 3 2}): read existing descriptor range_id:2 start_key:"\210" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.558573 14969 storage/replica.go:2476 [n1,s1,r2/1:/Table/{0-11},@c420838e00] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170209 08:16:46.559128 14419 storage/replica_raftstorage.go:414 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] generated preemptive snapshot 56ac0ade at index 67
I170209 08:16:46.563259 14419 storage/store.go:3278 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] streamed snapshot: kv pairs: 783, log entries: 57, 4ms
I170209 08:16:46.563503 14954 storage/replica_raftstorage.go:596 [n3,s3,r1/?:{-},@c42027ca80] applying preemptive snapshot at index 67 (id=56ac0ade, encoded size=255981, 1 rocksdb batches, 57 log entries)
I170209 08:16:46.563997 14978 storage/raft_transport.go:437 [n3] raft transport stream to node 1 established
I170209 08:16:46.586698 14954 storage/replica_raftstorage.go:604 [n3,s3,r1/?:/{Min-Table/0},@c42027ca80] applied preemptive snapshot in 23ms [clear=0ms batch=0ms entries=3ms commit=3ms]
I170209 08:16:46.587593 14419 storage/replica_command.go:3253 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] change replicas (remove {3 3 2}): read existing descriptor range_id:1 start_key:"" end_key:"\210" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.591791 14897 storage/replica.go:2476 [n1,s1,r1/1:/{Min-Table/0},@c421a74a80] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170209 08:16:46.592820 14419 storage/replica_raftstorage.go:414 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] generated preemptive snapshot 0cdf3695 at index 11
I170209 08:16:46.594325 14938 storage/replica_raftstorage.go:596 [n3,s3,r6/?:{-},@c42219bc00] applying preemptive snapshot at index 11 (id=0cdf3695, encoded size=504, 1 rocksdb batches, 1 log entries)
I170209 08:16:46.594489 14938 storage/replica_raftstorage.go:604 [n3,s3,r6/?:/{Table/14-Max},@c42219bc00] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.594968 14419 storage/store.go:3278 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] streamed snapshot: kv pairs: 9, log entries: 1, 2ms
I170209 08:16:46.599280 14419 storage/replica_command.go:3253 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] change replicas (remove {3 3 2}): read existing descriptor range_id:6 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170209 08:16:46.633671 15008 storage/replica.go:2476 [n1,s1,r6/1:/{Table/14-Max},@c420839500] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170209 08:16:46.634074 14419 storage/queue.go:693 [n1,replicate] purgatory is now empty
I170209 08:16:46.634302 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] generated preemptive snapshot 9d5eb423 at index 29
I170209 08:16:46.635797 15012 storage/replica_raftstorage.go:596 [n3,s3,r5/?:{-},@c421ae6e00] applying preemptive snapshot at index 29 (id=9d5eb423, encoded size=17194, 1 rocksdb batches, 19 log entries)
I170209 08:16:46.636066 15012 storage/replica_raftstorage.go:604 [n3,s3,r5/?:/Table/1{3-4},@c421ae6e00] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.636183 14340 storage/store.go:3278 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] streamed snapshot: kv pairs: 76, log entries: 19, 1ms
I170209 08:16:46.636549 14340 storage/replica_command.go:3253 [replicate,n1,s1,r5/1:/Table/1{3-4},@c421a45500] change replicas (remove {3 3 3}): read existing descriptor range_id:5 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170209 08:16:46.640482 15029 storage/replica.go:2476 [n1,s1,r5/1:/Table/1{3-4},@c421a45500] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170209 08:16:46.645015 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] generated preemptive snapshot 8c42d8eb at index 78
I170209 08:16:46.651586 14340 storage/store.go:3278 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] streamed snapshot: kv pairs: 789, log entries: 68, 6ms
I170209 08:16:46.651923 15032 storage/replica_raftstorage.go:596 [n2,s2,r1/?:{-},@c421f29880] applying preemptive snapshot at index 78 (id=8c42d8eb, encoded size=259557, 1 rocksdb batches, 68 log entries)
I170209 08:16:46.672082 15032 storage/replica_raftstorage.go:604 [n2,s2,r1/?:/{Min-Table/0},@c421f29880] applied preemptive snapshot in 20ms [clear=0ms batch=0ms entries=19ms commit=1ms]
I170209 08:16:46.673649 14340 storage/replica_command.go:3253 [replicate,n1,s1,r1/1:/{Min-Table/0},@c421a74a80] change replicas (remove {2 2 3}): read existing descriptor range_id:1 start_key:"" end_key:"\210" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170209 08:16:46.677903 14985 storage/replica.go:2476 [n1,s1,r1/1:/{Min-Table/0},@c421a74a80] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170209 08:16:46.684866 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] generated preemptive snapshot c036d85d at index 14
I170209 08:16:46.686135 14340 storage/store.go:3278 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] streamed snapshot: kv pairs: 10, log entries: 4, 1ms
I170209 08:16:46.688064 15090 storage/replica_raftstorage.go:596 [n2,s2,r6/?:{-},@c421a74000] applying preemptive snapshot at index 14 (id=c036d85d, encoded size=1694, 1 rocksdb batches, 4 log entries)
I170209 08:16:46.688238 15090 storage/replica_raftstorage.go:604 [n2,s2,r6/?:/{Table/14-Max},@c421a74000] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.688717 14340 storage/replica_command.go:3253 [replicate,n1,s1,r6/1:/{Table/14-Max},@c420839500] change replicas (remove {2 2 3}): read existing descriptor range_id:6 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170209 08:16:46.691982 14876 storage/replica.go:2476 [n1,s1,r6/1:/{Table/14-Max},@c420839500] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170209 08:16:46.695452 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] generated preemptive snapshot 8e6b3f22 at index 24
I170209 08:16:46.695957 14340 storage/store.go:3278 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] streamed snapshot: kv pairs: 31, log entries: 14, 0ms
I170209 08:16:46.696213 15124 storage/replica_raftstorage.go:596 [n3,s3,r4/?:{-},@c421ef7880] applying preemptive snapshot at index 24 (id=8e6b3f22, encoded size=9271, 1 rocksdb batches, 14 log entries)
I170209 08:16:46.696441 15124 storage/replica_raftstorage.go:604 [n3,s3,r4/?:/Table/1{2-3},@c421ef7880] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.696953 14340 storage/replica_command.go:3253 [replicate,n1,s1,r4/1:/Table/1{2-3},@c421af4700] change replicas (remove {3 3 3}): read existing descriptor range_id:4 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170209 08:16:46.702057 15081 storage/replica.go:2476 [n1,s1,r4/1:/Table/1{2-3},@c421af4700] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170209 08:16:46.703074 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] generated preemptive snapshot 46928d4d at index 27
I170209 08:16:46.707596 14340 storage/store.go:3278 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] streamed snapshot: kv pairs: 34, log entries: 17, 4ms
I170209 08:16:46.707861 14988 storage/replica_raftstorage.go:596 [n2,s2,r2/?:{-},@c421ae6700] applying preemptive snapshot at index 27 (id=46928d4d, encoded size=13647, 1 rocksdb batches, 17 log entries)
I170209 08:16:46.708101 14988 storage/replica_raftstorage.go:604 [n2,s2,r2/?:/Table/{0-11},@c421ae6700] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.708958 14340 storage/replica_command.go:3253 [replicate,n1,s1,r2/1:/Table/{0-11},@c420838e00] change replicas (remove {2 2 3}): read existing descriptor range_id:2 start_key:"\210" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170209 08:16:46.714573 15056 storage/replica.go:2476 [n1,s1,r2/1:/Table/{0-11},@c420838e00] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170209 08:16:46.718768 14340 storage/replica_raftstorage.go:414 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] generated preemptive snapshot b582edcc at index 26
I170209 08:16:46.719275 14340 storage/store.go:3278 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] streamed snapshot: kv pairs: 11, log entries: 16, 0ms
I170209 08:16:46.719536 15120 storage/replica_raftstorage.go:596 [n3,s3,r3/?:{-},@c42219a700] applying preemptive snapshot at index 26 (id=b582edcc, encoded size=7530, 1 rocksdb batches, 16 log entries)
I170209 08:16:46.719794 15120 storage/replica_raftstorage.go:604 [n3,s3,r3/?:/Table/1{1-2},@c42219a700] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170209 08:16:46.720161 14340 storage/replica_command.go:3253 [replicate,n1,s1,r3/1:/Table/1{1-2},@c421a44e00] change replicas (remove {3 3 3}): read existing descriptor range_id:3 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170209 08:16:46.724337 15156 storage/replica.go:2476 [n1,s1,r3/1:/Table/1{1-2},@c421a44e00] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170209 08:16:46.895958 15088 sql/event_log.go:95 [client=127.0.0.1:37587,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:test Statement:CREATE DATABASE test User:root}
I170209 08:16:46.897681 14339 storage/split_queue.go:98 [split,n1,s1,r6/1:/{Table/14-Max},@c420839500] splitting at key /Table/50/0
I170209 08:16:46.897717 14339 storage/replica_command.go:2397 [split,n1,s1,r6/1:/{Table/14-Max},@c420839500] initiating a split of this range at key /Table/50 [r7]
I170209 08:16:47.041622 15152 sql/event_log.go:95 [client=127.0.0.1:44896,user=root,n3] Event: "create_table", target: 51, info: {TableName:test.table_5 Statement:CREATE TABLE IF NOT EXISTS test.table_5 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.042215 14339 storage/split_queue.go:98 [split,n1,s1,r7/1:/{Table/50-Max},@c423ae1c00] splitting at key /Table/51/0
I170209 08:16:47.042259 14339 storage/replica_command.go:2397 [split,n1,s1,r7/1:/{Table/50-Max},@c423ae1c00] initiating a split of this range at key /Table/51 [r8]
I170209 08:16:47.059773 15192 sql/event_log.go:95 [client=127.0.0.1:37590,user=root,n1] Event: "create_table", target: 52, info: {TableName:test.table_6 Statement:CREATE TABLE IF NOT EXISTS test.table_6 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.065610 14339 storage/split_queue.go:98 [split,n1,s1,r8/1:/{Table/51-Max},@c42193ea80] splitting at key /Table/52/0
I170209 08:16:47.065654 14339 storage/replica_command.go:2397 [split,n1,s1,r8/1:/{Table/51-Max},@c42193ea80] initiating a split of this range at key /Table/52 [r9]
I170209 08:16:47.081936 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r8/1:/Table/5{1-2},@c42193ea80] generated Raft snapshot 0aa1a1ef at index 15
E170209 08:16:47.082956 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r8/1:/Table/5{1-2},@c42193ea80] [n1,s1,r8/1:/Table/5{1-2}]: change replicas aborted due to failed preemptive snapshot: r8: remote couldn't accept snapshot with error: [n3,s3],r8: cannot apply snapshot: snapshot intersects existing range [n3,s3,r7/2:/{Table/50-Max}]
I170209 08:16:47.280077 14339 storage/split_queue.go:98 [split,n1,s1,r9/1:/{Table/52-Max},@c421e97c00] splitting at key /Table/53/0
I170209 08:16:47.280114 14339 storage/replica_command.go:2397 [split,n1,s1,r9/1:/{Table/52-Max},@c421e97c00] initiating a split of this range at key /Table/53 [r10]
I170209 08:16:47.280948 15151 sql/event_log.go:95 [client=127.0.0.1:44899,user=root,n3] Event: "create_table", target: 53, info: {TableName:test.table_2 Statement:CREATE TABLE IF NOT EXISTS test.table_2 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.283646 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r9/1:/{Table/52-Max},@c421e97c00] generated Raft snapshot 7fca501f at index 11
E170209 08:16:47.285031 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r9/1:/{Table/52-Max},@c421e97c00] [n1,s1,r9/1:/{Table/52-Max}]: change replicas aborted due to failed preemptive snapshot: r9: remote couldn't accept snapshot with error: [n3,s3],r9: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.333284 15235 sql/event_log.go:95 [client=127.0.0.1:43018,user=root,n2] Event: "create_table", target: 54, info: {TableName:test.table_7 Statement:CREATE TABLE IF NOT EXISTS test.table_7 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.333613 14339 storage/split_queue.go:98 [split,n1,s1,r10/1:/{Table/53-Max},@c420f74e00] splitting at key /Table/54/0
I170209 08:16:47.333649 14339 storage/replica_command.go:2397 [split,n1,s1,r10/1:/{Table/53-Max},@c420f74e00] initiating a split of this range at key /Table/54 [r11]
I170209 08:16:47.336538 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r10/1:/{Table/53-Max},@c420f74e00] generated Raft snapshot f13e88fc at index 11
E170209 08:16:47.337626 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r10/1:/{Table/53-Max},@c420f74e00] [n1,s1,r10/1:/{Table/53-Max}]: change replicas aborted due to failed preemptive snapshot: r10: remote couldn't accept snapshot with error: [n3,s3],r10: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.379569 14339 storage/split_queue.go:98 [split,n1,s1,r11/1:/{Table/54-Max},@c422781880] splitting at key /Table/55/0
I170209 08:16:47.379608 14339 storage/replica_command.go:2397 [split,n1,s1,r11/1:/{Table/54-Max},@c422781880] initiating a split of this range at key /Table/55 [r12]
I170209 08:16:47.380670 15234 sql/event_log.go:95 [client=127.0.0.1:43021,user=root,n2] Event: "create_table", target: 55, info: {TableName:test.table_4 Statement:CREATE TABLE IF NOT EXISTS test.table_4 (id INT PRIMARY KEY, val INT) User:root}
E170209 08:16:47.402437 14823 storage/replica_proposal.go:444 [n3,s3,r2/2:/Table/{0-11},@c42094ee00] could not load SystemConfig span: must retry later due to intent on SystemConfigSpan
I170209 08:16:47.403312 16062 storage/raft_transport.go:437 [n3] raft transport stream to node 2 established
I170209 08:16:47.409093 16030 storage/raft_transport.go:437 [n2] raft transport stream to node 3 established
I170209 08:16:47.418189 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r11/1:/{Table/54-Max},@c422781880] generated Raft snapshot 34272ffc at index 11
E170209 08:16:47.418971 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r11/1:/{Table/54-Max},@c422781880] [n1,s1,r11/1:/{Table/54-Max}]: change replicas aborted due to failed preemptive snapshot: r11: remote couldn't accept snapshot with error: [n3,s3],r11: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
E170209 08:16:47.466800 14822 storage/replica_proposal.go:720 [n3,s3,r2/2:/Table/{0-11},@c42094ee00] could not load SystemConfig span: must retry later due to intent on SystemConfigSpan
I170209 08:16:47.467217 15088 sql/event_log.go:95 [client=127.0.0.1:37587,user=root,n1] Event: "create_table", target: 56, info: {TableName:test.table_9 Statement:CREATE TABLE IF NOT EXISTS test.table_9 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.485729 14339 storage/split_queue.go:98 [split,n1,s1,r12/1:/{Table/55-Max},@c421dfc000] splitting at key /Table/56/0
I170209 08:16:47.485764 14339 storage/replica_command.go:2397 [split,n1,s1,r12/1:/{Table/55-Max},@c421dfc000] initiating a split of this range at key /Table/56 [r13]
I170209 08:16:47.486036 15216 sql/event_log.go:95 [client=127.0.0.1:43024,user=root,n2] Event: "create_table", target: 57, info: {TableName:test.table_1 Statement:CREATE TABLE IF NOT EXISTS test.table_1 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.488221 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r12/1:/{Table/55-Max},@c421dfc000] generated Raft snapshot 1f053283 at index 10
E170209 08:16:47.488574 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r12/1:/{Table/55-Max},@c421dfc000] [n1,s1,r12/1:/{Table/55-Max}]: change replicas aborted due to failed preemptive snapshot: r12: remote couldn't accept snapshot with error: [n3,s3],r12: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.494491 14339 storage/split_queue.go:98 [split,n1,s1,r13/1:/{Table/56-Max},@c422766700] splitting at key /Table/57/0
I170209 08:16:47.494529 14339 storage/replica_command.go:2397 [split,n1,s1,r13/1:/{Table/56-Max},@c422766700] initiating a split of this range at key /Table/57 [r14]
I170209 08:16:47.497271 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r13/1:/{Table/56-Max},@c422766700] generated Raft snapshot f9291688 at index 11
E170209 08:16:47.497980 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r13/1:/{Table/56-Max},@c422766700] [n1,s1,r13/1:/{Table/56-Max}]: change replicas aborted due to failed preemptive snapshot: r13: remote couldn't accept snapshot with error: [n3,s3],r13: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.561670 15193 sql/event_log.go:95 [client=127.0.0.1:37593,user=root,n1] Event: "create_table", target: 58, info: {TableName:test.table_3 Statement:CREATE TABLE IF NOT EXISTS test.table_3 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.562224 14339 storage/split_queue.go:98 [split,n1,s1,r14/1:/{Table/57-Max},@c4223ca380] splitting at key /Table/58/0
I170209 08:16:47.562261 14339 storage/replica_command.go:2397 [split,n1,s1,r14/1:/{Table/57-Max},@c4223ca380] initiating a split of this range at key /Table/58 [r15]
I170209 08:16:47.564690 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r14/1:/{Table/57-Max},@c4223ca380] generated Raft snapshot 040b7d56 at index 11
E170209 08:16:47.565983 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r14/1:/{Table/57-Max},@c4223ca380] [n1,s1,r14/1:/{Table/57-Max}]: change replicas aborted due to failed preemptive snapshot: r14: remote couldn't accept snapshot with error: [n3,s3],r14: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.693628 15217 sql/event_log.go:95 [client=127.0.0.1:44893,user=root,n3] Event: "create_table", target: 59, info: {TableName:test.table_8 Statement:CREATE TABLE IF NOT EXISTS test.table_8 (id INT PRIMARY KEY, val INT) User:root}
I170209 08:16:47.707314 14339 storage/split_queue.go:98 [split,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] splitting at key /Table/59/0
I170209 08:16:47.707349 14339 storage/replica_command.go:2397 [split,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] initiating a split of this range at key /Table/59 [r16]
I170209 08:16:47.715310 14343 storage/replica_raftstorage.go:414 [raftsnapshot,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] generated Raft snapshot 5950096e at index 10
E170209 08:16:47.717366 14343 storage/queue.go:628 [raftsnapshot,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] [n1,s1,r15/1:/{Table/58-Max}]: change replicas aborted due to failed preemptive snapshot: r15: remote couldn't accept snapshot with error: [n3,s3],r15: cannot apply snapshot: snapshot intersects existing range; initiated GC: [n3,s3,r8/2:/{Table/51-Max}]
I170209 08:16:47.723309 16477 util/stop/stopper.go:493 quiescing; tasks left:
1 storage/queue.go:523
1 kv/txn_coord_sender.go:924
W170209 08:16:47.723402 14932 storage/raft_transport.go:443 [n1] raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
W170209 08:16:47.723482 16030 storage/raft_transport.go:443 [n2] raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
W170209 08:16:47.723610 14781 storage/raft_transport.go:443 [n2] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
W170209 08:16:47.723636 16062 storage/raft_transport.go:443 [n3] raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
W170209 08:16:47.723720 14978 storage/raft_transport.go:443 [n3] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
I170209 08:16:47.723794 16477 util/stop/stopper.go:493 quiescing; tasks left:
1 storage/queue.go:523
W170209 08:16:47.723863 14778 storage/raft_transport.go:443 [n1] raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
E170209 08:16:47.723954 14339 internal/client/txn.go:341 [split,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] failure aborting transaction: writing transaction timed out or ran on multiple coordinators; abort caused by: node unavailable; try another peer
E170209 08:16:47.724022 14339 storage/queue.go:628 [split,n1,s1,r15/1:/{Table/58-Max},@c421ea7180] unable to split [n1,s1,r15/1:/{Table/58-Max}] at key "/Table/59/0": storage/replica_command.go:2476: split at key /Table/59 failed: node unavailable; try another peer
create_test.go:191: table 0: could not be created: pq: unexpected value: raw_bytes:"\364U\032\350\003\n\237\001\n\007table_5\0303 2(\0010\000:\004\010\000\020\000B\022\n\002id\020\001\032\006\010\001\020\000\030\000 \0000\000B\023\n\003val\020\002\032\006\010\001\020\000\030\000 \0010\000H\003R!\n\007primary\020\001\030\001\"\002id0\001@\000J\010\010\000\020\000\032\000 \000Z\000`\002j\n\n\010\n\004root\020\002\200\001\001\210\001\003\230\001\000\262\001\032\n\007primary\020\000\032\002id\032\003val \001 \002(\002\270\001\001\302\001\000" timestamp:<wall_time:1486628206925976227 logical:0 >
create_test.go:230: expected 10 tables created, only got 9
```
int user root storage split queue go splitting at key table storage replica command go initiating a split of this range at key table storage replica raftstorage go generated raft snapshot at index storage queue go change replicas aborted due to failed preemptive snapshot remote couldn t accept snapshot with error cannot apply snapshot snapshot intersects existing range storage split queue go splitting at key table storage replica command go initiating a split of this range at key table sql event log go event create table target info tablename test table statement create table if not exists test table id int primary key val int user root storage replica raftstorage go generated raft snapshot at index storage queue go change replicas aborted due to failed preemptive snapshot remote couldn t accept snapshot with error cannot apply snapshot snapshot intersects existing range initiated gc sql event log go event create table target info tablename test table statement create table if not exists test table id int primary key val int user root storage split queue go splitting at key table storage replica command go initiating a split of this range at key table storage replica raftstorage go generated raft snapshot at index storage queue go change replicas aborted due to failed preemptive snapshot remote couldn t accept snapshot with error cannot apply snapshot snapshot intersects existing range initiated gc storage split queue go splitting at key table storage replica command go initiating a split of this range at key table sql event log go event create table target info tablename test table statement create table if not exists test table id int primary key val int user root storage replica proposal go could not load systemconfig span must retry later due to intent on systemconfigspan storage raft transport go raft transport stream to node established storage raft transport go raft transport stream to node established storage replica raftstorage go generated raft snapshot 
at index storage queue go change replicas aborted due to failed preemptive snapshot remote couldn t accept snapshot with error cannot apply snapshot snapshot intersects existing range initiated gc storage replica proposal go could not load systemconfig span must retry later due to intent on systemconfigspan sql event log go event create table target info tablename test table statement create table if not exists test table id int primary key val int user root storage split queue go splitting at key table storage replica command go initiating a split of this range at key table sql event log go event create table target info tablename test table statement create table if not exists test table id int primary key val int user root storage replica raftstorage go generated raft snapshot at index storage queue go change replicas aborted due to failed preemptive snapshot remote couldn t accept snapshot with error cannot apply snapshot snapshot intersects existing range initiated gc storage split queue go splitting at key table storage replica command go initiating a split of this range at key table storage replica raftstorage go generated raft snapshot at index storage queue go change replicas aborted due to failed preemptive snapshot remote couldn t accept snapshot with error cannot apply snapshot snapshot intersects existing range initiated gc sql event log go event create table target info tablename test table statement create table if not exists test table id int primary key val int user root storage split queue go splitting at key table storage replica command go initiating a split of this range at key table storage replica raftstorage go generated raft snapshot at index storage queue go change replicas aborted due to failed preemptive snapshot remote couldn t accept snapshot with error cannot apply snapshot snapshot intersects existing range initiated gc sql event log go event create table target info tablename test table statement create table if not exists test 
table id int primary key val int user root storage split queue go splitting at key table storage replica command go initiating a split of this range at key table storage replica raftstorage go generated raft snapshot at index storage queue go change replicas aborted due to failed preemptive snapshot remote couldn t accept snapshot with error cannot apply snapshot snapshot intersects existing range initiated gc util stop stopper go quiescing tasks left storage queue go kv txn coord sender go storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing util stop stopper go quiescing tasks left storage queue go storage raft transport go raft transport stream to node failed rpc error code desc transport is closing internal client txn go failure aborting transaction writing transaction timed out or ran on multiple coordinators abort caused by node unavailable try another peer storage queue go unable to split at key table storage replica command go split at key table failed node unavailable try another peer create test go table could not be created pq unexpected value raw bytes n n n n n n n n n timestamp create test go expected tables created only got | 0 |
40,410 | 9,983,328,086 | IssuesEvent | 2019-07-10 12:09:29 | STEllAR-GROUP/hpx | https://api.github.com/repos/STEllAR-GROUP/hpx | opened | Static linking fails during CMake configuration | category: CMake type: defect | When setting `-DHPX_WITH_STATIC_LINKING=ON` CMake configuration fails at the end with
```
-- Configuring done
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_assertion" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_cache" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_collectives" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_config" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_hardware" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_preprocessor" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "accumulator_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "template_accumulator_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "template_function_accumulator_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "cancelable_action_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "jacobi_component_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "nqueen_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "sine_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "random_mem_access_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "startup_shutdown_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "throttle_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "simple_central_tuplespace_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "parcel_coalescing" which requires target "hpx_internal_flags" that is not in the export set.
-- Generating done
```
This is not very high priority for me so if someone needs it fixed let us know. Curiously this is exactly the error we had before making sure HPX modules are `PRIVATE` dependencies. I haven't yet figured out if this is a CMake bug or if there's something we can do about it. | 1.0 | Static linking fails during CMake configuration - When setting `-DHPX_WITH_STATIC_LINKING=ON` CMake configuration fails at the end with
```
-- Configuring done
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_assertion" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_cache" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_collectives" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_config" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_hardware" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "hpx" which requires target "hpx_preprocessor" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "accumulator_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "template_accumulator_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "template_function_accumulator_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "cancelable_action_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "jacobi_component_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "nqueen_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "sine_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "random_mem_access_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "startup_shutdown_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "throttle_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "simple_central_tuplespace_component" which requires target "hpx_internal_flags" that is not in the export set.
CMake Error: install(EXPORT "HPXTargets" ...) includes target "parcel_coalescing" which requires target "hpx_internal_flags" that is not in the export set.
-- Generating done
```
This is not very high priority for me so if someone needs it fixed let us know. Curiously this is exactly the error we had before making sure HPX modules are `PRIVATE` dependencies. I haven't yet figured out if this is a CMake bug or if there's something we can do about it. | defect | static linking fails during cmake configuration when setting dhpx with static linking on cmake configuration fails at the end with configuring done cmake error install export hpxtargets includes target hpx which requires target hpx assertion that is not in the export set cmake error install export hpxtargets includes target hpx which requires target hpx cache that is not in the export set cmake error install export hpxtargets includes target hpx which requires target hpx collectives that is not in the export set cmake error install export hpxtargets includes target hpx which requires target hpx config that is not in the export set cmake error install export hpxtargets includes target hpx which requires target hpx hardware that is not in the export set cmake error install export hpxtargets includes target hpx which requires target hpx preprocessor that is not in the export set cmake error install export hpxtargets includes target accumulator component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target template accumulator component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target template function accumulator component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target cancelable action component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target jacobi component component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes 
target nqueen component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target sine component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target random mem access component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target startup shutdown component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target throttle component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target simple central tuplespace component which requires target hpx internal flags that is not in the export set cmake error install export hpxtargets includes target parcel coalescing which requires target hpx internal flags that is not in the export set generating done this is not very high priority for me so if someone needs it fixed let us know curiously this is exactly the error we had before making sure hpx modules are private dependencies i haven t yet figured out if this is a cmake bug or if there s something we can do about it | 1 |
105,612 | 23,081,841,806 | IssuesEvent | 2022-07-26 07:57:53 | yiisoft/validator | https://api.github.com/repos/yiisoft/validator | closed | Each, GroupRule, Nested rules options do not include their own options | type:enhancement status:code review | For example - `message`, `incorrectInputMessage` in `Each` rule.
https://github.com/yiisoft/validator/blob/c65b1fbf8e01cfa123fa0ca73e72da4eb04c45c3/tests/Rule/EachTest.php#L34-L61 | 1.0 | Each, GroupRule, Nested rules options do not include their own options - For example - `message`, `incorrectInputMessage` in `Each` rule.
https://github.com/yiisoft/validator/blob/c65b1fbf8e01cfa123fa0ca73e72da4eb04c45c3/tests/Rule/EachTest.php#L34-L61 | non_defect | each grouprule nested rules options do not include their own options for example message incorrectinputmessage in each rule | 0 |
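The yiisoft/validator record above describes composite rules like `Each` whose serialized options drop the wrapper's own fields (`message`, `incorrectInputMessage`). That library is PHP; purely as a language-neutral sketch of the expected behavior (class and field names here are hypothetical, not the real yiisoft API), a composite rule's options should merge its own fields with the nested rule's:

```python
# Hypothetical model of a composite validation rule whose serialized
# options should include BOTH its own fields and the nested rule's options.
# (Names are illustrative only; this is not the yiisoft/validator API.)

class Number:
    def __init__(self, max_value):
        self.max_value = max_value

    def options(self):
        return {"max": self.max_value}


class Each:
    def __init__(self, rule, message="Value is invalid.",
                 incorrect_input_message="Value must be iterable."):
        self.rule = rule
        self.message = message
        self.incorrect_input_message = incorrect_input_message

    def options(self):
        # The bug described in the issue: returning only the nested rule's
        # options would silently drop `message`/`incorrectInputMessage`.
        return {
            "message": self.message,
            "incorrectInputMessage": self.incorrect_input_message,
            "rules": [self.rule.options()],
        }


print(Each(Number(13)).options())
```

Serializing only the nested rule's options would reproduce the reported bug: the wrapper's own messages disappear from the exported rule set.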
58,294 | 11,861,820,554 | IssuesEvent | 2020-03-25 16:55:54 | pywbem/pywbemtools | https://api.github.com/repos/pywbem/pywbemtools | closed | Should use-pull be persistent and part of a particular server | area: code resolution: fixed type: enhancement | Should this general option be part of a particular server, or treated as a "transient" so that it's really more part of an interactive session? This came up because one user asked why they could not permanently set use-pull to no for a particular server in the server file, and also because, when I was tracking issue #530, I found that to be a logical concept.
 | 1.0 | Should use-pull be persistent and part of a particular server - Should this general option be part of a particular server, or treated as a "transient" so that it's really more part of an interactive session? This came up because one user asked why they could not permanently set use-pull to no for a particular server in the server file, and also because, when I was tracking issue #530, I found that to be a logical concept.
| non_defect | should use pull be persistent and part of a particular server should this general option be part of a particular server or treated as a transient so that its really more part of an interactive session this came up because one user asked why they could not make a particular server in the server file as use pull no permanently and also when i was tracking issue i found that to be a logical concept | 0 |
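The pywbemtools record above debates whether `use-pull` should be persisted per server or kept as a transient, session-level setting. A generic way to get both (a sketch under my own naming, not pywbemtools code) is to layer transient session overrides on top of the persisted server file:

```python
# Sketch of layering a persisted per-server option under a transient
# per-session override (illustrative only; not pywbemtools code).

def effective_options(server_config, session_overrides):
    """Session values win; anything unset falls back to the server file."""
    merged = dict(server_config)
    merged.update({k: v for k, v in session_overrides.items() if v is not None})
    return merged


server_config = {"use_pull": "no", "timeout": 30}   # persisted in server file
session = {"use_pull": "yes", "timeout": None}      # transient, interactive

print(effective_options(server_config, session))
# -> {'use_pull': 'yes', 'timeout': 30}
```

With this layering, marking use-pull as no permanently in the server file (the user request mentioned in the issue) still works, while an interactive session can override it without touching persisted state.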
34,978 | 6,398,147,772 | IssuesEvent | 2017-08-04 19:50:24 | wizeline/wizelink-back | https://api.github.com/repos/wizeline/wizelink-back | closed | Running unit tests is slow | documentation infrastructure p1 | I know `tox` does more than just run the tests and that's likely to be the reason, but for daily development (and particularly for TDD development), this makes the feedback cycle longer than it should be.
We should have unit tests that run in a fraction of a second. Can we add instructions to the README on how to do this (i.e. bypass `tox` for daily development, and use it only when we're ready to commit changes as a validation step)? | 1.0 | Running unit tests is slow - I know `tox` does more than just run the tests and that's likely to be the reason, but for daily development (and particularly for TDD development), this makes the feedback cycle longer than it should be.
We should have unit tests that run in a fraction of a second. Can we add instructions to the README on how to do this (i.e. bypass `tox` for daily development, and use it only when we're ready to commit changes as a validation step)? | non_defect | running unit tests is slow i know tox does more than just run the tests and that s likely to be the reason but for daily development and particularly for tdd development this makes the feedback cycle longer than it should be we should have unit tests that run in a fraction of a second can we add instructions to the readme on how to do this i e bypass tox for daily development and use it only when we re ready to commit changes as a validation step | 0 |
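The wizeline record above asks for a sub-second feedback loop by bypassing `tox` (which rebuilds environments and runs extra validation) during day-to-day TDD, keeping `tox` as the pre-commit step. As a minimal, generic illustration of the fast path (not the wizelink-back suite), the standard library can load and run a test case in-process:

```python
import unittest

class TestFastFeedback(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Running the suite in-process (what `python -m unittest` does) skips the
# environment setup tox performs, so the feedback loop stays sub-second.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFastFeedback)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```

In practice this is just `python -m unittest` (or `pytest`) against the tests directory, with `tox` reserved for the moment before committing: exactly the split the issue proposes documenting in the README.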
49,136 | 13,441,800,803 | IssuesEvent | 2020-09-08 05:15:55 | srivatsamarichi/spring-petclinic | https://api.github.com/repos/srivatsamarichi/spring-petclinic | closed | CVE-2020-10683 (High) detected in dom4j-2.1.1.jar | bug security vulnerability | ## CVE-2020-10683 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dom4j-2.1.1.jar</b></p></summary>
<p>flexible XML framework for Java</p>
<p>Library home page: <a href="http://dom4j.github.io/">http://dom4j.github.io/</a></p>
<p>Path to dependency file: /tmp/ws-scm/spring-petclinic/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/dom4j/dom4j/2.1.1/dom4j-2.1.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-jpa-2.2.5.RELEASE.jar (Root Library)
- hibernate-core-5.4.12.Final.jar
- :x: **dom4j-2.1.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/spring-petclinic/commit/533ac32a0edc74b6a5204143571aa191849cdb9f">533ac32a0edc74b6a5204143571aa191849cdb9f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
dom4j before 2.1.3 allows external DTDs and External Entities by default, which might enable XXE attacks. However, there is popular external documentation from OWASP showing how to enable the safe, non-default behavior in any application that uses dom4j.
<p>Publish Date: 2020-05-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10683>CVE-2020-10683</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/dom4j/dom4j/tree/version-2.1.3,https://github.com/dom4j/dom4j/tree/version-2.0.3">https://github.com/dom4j/dom4j/tree/version-2.1.3,https://github.com/dom4j/dom4j/tree/version-2.0.3</a></p>
<p>Release Date: 2020-05-01</p>
<p>Fix Resolution: org.dom4j:dom4j:2.1.3,org.dom4j:dom4j:2.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-10683 (High) detected in dom4j-2.1.1.jar - ## CVE-2020-10683 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dom4j-2.1.1.jar</b></p></summary>
<p>flexible XML framework for Java</p>
<p>Library home page: <a href="http://dom4j.github.io/">http://dom4j.github.io/</a></p>
<p>Path to dependency file: /tmp/ws-scm/spring-petclinic/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/dom4j/dom4j/2.1.1/dom4j-2.1.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-jpa-2.2.5.RELEASE.jar (Root Library)
- hibernate-core-5.4.12.Final.jar
- :x: **dom4j-2.1.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/spring-petclinic/commit/533ac32a0edc74b6a5204143571aa191849cdb9f">533ac32a0edc74b6a5204143571aa191849cdb9f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
dom4j before 2.1.3 allows external DTDs and External Entities by default, which might enable XXE attacks. However, there is popular external documentation from OWASP showing how to enable the safe, non-default behavior in any application that uses dom4j.
<p>Publish Date: 2020-05-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10683>CVE-2020-10683</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/dom4j/dom4j/tree/version-2.1.3,https://github.com/dom4j/dom4j/tree/version-2.0.3">https://github.com/dom4j/dom4j/tree/version-2.1.3,https://github.com/dom4j/dom4j/tree/version-2.0.3</a></p>
<p>Release Date: 2020-05-01</p>
<p>Fix Resolution: org.dom4j:dom4j:2.1.3,org.dom4j:dom4j:2.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in jar cve high severity vulnerability vulnerable library jar flexible xml framework for java library home page a href path to dependency file tmp ws scm spring petclinic pom xml path to vulnerable library home wss scanner repository org jar dependency hierarchy spring boot starter data jpa release jar root library hibernate core final jar x jar vulnerable library found in head commit a href vulnerability details before allows external dtds and external entities by default which might enable xxe attacks however there is popular external documentation from owasp showing how to enable the safe non default behavior in any application that uses publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org org step up your open source security game with whitesource | 0 |
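The CVE record above concerns dom4j (Java) resolving external DTDs by default, enabling XXE. The attack class is language-agnostic: any parser that expands external entities can be coaxed into reading local files. As a cross-language illustration, Python's `xml.etree.ElementTree` ships with the equivalent safe behavior; the external entity reference fails to expand and parsing raises instead:

```python
import xml.etree.ElementTree as ET

# A classic XXE probe: the DTD declares an external entity pointing at a
# local file, and the document body tries to expand it.
payload = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE r [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
    '<r>&xxe;</r>'
)

try:
    ET.fromstring(payload)
except ET.ParseError as exc:
    # With external entity resolution disabled (the safe default here),
    # the reference cannot be expanded and parsing fails instead.
    print("blocked:", exc)
```

The dom4j 2.1.3 / 2.0.3 upgrade named in the suggested fix moves that library to this same safe-by-default posture.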
42,068 | 10,779,113,098 | IssuesEvent | 2019-11-04 09:52:32 | zotonic/zotonic | https://api.github.com/repos/zotonic/zotonic | closed | I can not get into the admin panel | admin-ui defect | `09:15:24.460 [info] UI event: {"type":"error","message":"Uncaught ReferenceError: z_transport_delegate_register is not defined","file":"https://devskl.anriya.ru:8443/admin","line":575,"col":5,"stack":"ReferenceError: z_transport_delegate_register is not defined\n at window.zotonicPageInit (https://devskl.anriya.ru:8443/admin:575:5)\n at z_jquery_init (https://devskl.anriya.ru:8443/lib/js/modules/jstz.min~/cotonic/zotonic-wired-bundle~/js/apps/zotonic-wired~z.widgetmanager~/js/modules/z.notice~z.imageviewer~z.dialog~z.clickable~livevalidation-1.3~jquery.loadmask~/bootstrap/js/bootstrap.min~/js/modules/responsive~1108108132.js:8591:16)\n at z_jquery_init_await (https://devskl.anriya.ru:8443/lib/js/modules/jstz.min~/cotonic/zotonic-wired-bundle~/js/apps/zotonic-wired~z.widgetmanager~/js/modules/z.notice~z.imageviewer~z.dialog~z.clickable~livevalidation-1.3~jquery.loadmask~/bootstrap/js/bootstrap.min~/js/modules/responsive~1108108132.js:8583:9)\n at https://devskl.anriya.ru:8443/lib/js/modules/jstz.min~/cotonic/zotonic-wired-bundle~/js/apps/zotonic-wired~z.widgetmanager~/js/modules/z.notice~z.imageviewer~z.dialog~z.clickable~livevalidation-1.3~jquery.loadmask~/bootstrap/js/bootstrap.min~/js/modules/responsive~1108108132.js:8578:5","user_agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36","url":"https://devskl.anriya.ru:8443/admin"}` | 1.0 | I can not get into the admin panel - `09:15:24.460 [info] UI event: {"type":"error","message":"Uncaught ReferenceError: z_transport_delegate_register is not defined","file":"https://devskl.anriya.ru:8443/admin","line":575,"col":5,"stack":"ReferenceError: z_transport_delegate_register is not defined\n at window.zotonicPageInit (https://devskl.anriya.ru:8443/admin:575:5)\n at 
z_jquery_init (https://devskl.anriya.ru:8443/lib/js/modules/jstz.min~/cotonic/zotonic-wired-bundle~/js/apps/zotonic-wired~z.widgetmanager~/js/modules/z.notice~z.imageviewer~z.dialog~z.clickable~livevalidation-1.3~jquery.loadmask~/bootstrap/js/bootstrap.min~/js/modules/responsive~1108108132.js:8591:16)\n at z_jquery_init_await (https://devskl.anriya.ru:8443/lib/js/modules/jstz.min~/cotonic/zotonic-wired-bundle~/js/apps/zotonic-wired~z.widgetmanager~/js/modules/z.notice~z.imageviewer~z.dialog~z.clickable~livevalidation-1.3~jquery.loadmask~/bootstrap/js/bootstrap.min~/js/modules/responsive~1108108132.js:8583:9)\n at https://devskl.anriya.ru:8443/lib/js/modules/jstz.min~/cotonic/zotonic-wired-bundle~/js/apps/zotonic-wired~z.widgetmanager~/js/modules/z.notice~z.imageviewer~z.dialog~z.clickable~livevalidation-1.3~jquery.loadmask~/bootstrap/js/bootstrap.min~/js/modules/responsive~1108108132.js:8578:5","user_agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36","url":"https://devskl.anriya.ru:8443/admin"}` | defect | i can not get into the admin panel ui event type error message uncaught referenceerror z transport delegate register is not defined file z transport delegate register is not defined n at window zotonicpageinit at z jquery init at z jquery init await at linux applewebkit khtml like gecko chrome safari url | 1 |
204,862 | 15,953,139,789 | IssuesEvent | 2021-04-15 12:04:34 | gatsbyjs/gatsby | https://api.github.com/repos/gatsbyjs/gatsby | closed | Unknown Runtime Error when following tutorial part 1 on Safari | status: needs reproduction type: bug type: documentation | <!--
Please fill out each section below, otherwise, your issue will be closed. This info allows Gatsby maintainers to diagnose (and fix!) your issue as quickly as possible.
Useful Links:
- Documentation: https://www.gatsbyjs.com/docs/
- How to File an Issue: https://www.gatsbyjs.com/contributing/how-to-file-an-issue/
Before opening a new issue, please search existing issues: https://github.com/gatsbyjs/gatsby/issues
-->
## Description
In the [Using the `<Link />` component](https://www.gatsbyjs.com/docs/tutorial/part-one/#-using-the-link--component) section of the Gatsby tutorial part 1, after `gatsby develop` succeeded I encountered an Unknown Runtime Error when browsing the local site at localhost:8000: `We couldn't find the correct component chunk with the name "component---src-pages-contact-js"`.
This is despite a contact.js file present in /src/pages/, with a proper React component Contact defined. Just to make sure there was no human error on my end, I overwrote contact.js and index.js with the content of your code samples—didn't work. `gatsby clean` followed by a hard refresh had no effect. However, no error is displayed if I instead use `http://127.0.0.1:8000/`.
I am running Safari 14.0.2 on macOS 10.15.7. I am working on a fresh Gatsby install, installed via Homebrew.
### Steps to reproduce
1. Follow part one of the Gatsby tutorial (copy+pasting code samples) until you create and fill contact.js.
2. Run `gatsby develop`.
3. Open http://localhost:8000 on Safari.
### Expected result
Functional website w/ no errors.
### Actual result
A few seconds later (it varies), I see the following error:
<img width="1782" alt="Screen Shot 2021-04-06 at 5 14 06 PM" src="https://user-images.githubusercontent.com/68053732/113734337-8bb84880-96fb-11eb-9693-5324eb09b172.png">
After closing the error, /contact/ becomes inaccessible ("Preparing requested page" endless loop).
### Environment
System:
OS: macOS 10.15.7
CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 5.7.1 - /bin/zsh
Binaries:
Node: 15.13.0 - /usr/local/bin/node
Yarn: 1.22.10 - /usr/local/bin/yarn
npm: 7.7.6 - /usr/local/bin/npm
Languages:
Python: 2.7.16 - /usr/bin/python
Browsers:
Chrome: 89.0.4389.114
Safari: 14.0.2
npmPackages:
gatsby: ^3.1.2 => 3.1.2
npmGlobalPackages:
gatsby-cli: 3.2.0
| 1.0 | Unknown Runtime Error when following tutorial part 1 on Safari - <!--
Please fill out each section below, otherwise, your issue will be closed. This info allows Gatsby maintainers to diagnose (and fix!) your issue as quickly as possible.
Useful Links:
- Documentation: https://www.gatsbyjs.com/docs/
- How to File an Issue: https://www.gatsbyjs.com/contributing/how-to-file-an-issue/
Before opening a new issue, please search existing issues: https://github.com/gatsbyjs/gatsby/issues
-->
## Description
In the [Using the `<Link />` component](https://www.gatsbyjs.com/docs/tutorial/part-one/#-using-the-link--component) section of the Gatsby tutorial part 1, after `gatsby develop` succeeded I encountered an Unknown Runtime Error when browsing the local site at localhost:8000: `We couldn't find the correct component chunk with the name "component---src-pages-contact-js"`.
This is despite a contact.js file present in /src/pages/, with a proper React component Contact defined. Just to make sure there was no human error on my end, I overwrote contact.js and index.js with the content of your code samples—didn't work. `gatsby clean` followed by a hard refresh had no effect. However, no error is displayed if I instead use `http://127.0.0.1:8000/`.
I am running Safari 14.0.2 on macOS 10.15.7. I am working on a fresh Gatsby install, installed via Homebrew.
### Steps to reproduce
1. Follow part one of the Gatsby tutorial (copy+pasting code samples) until you create and fill contact.js.
2. Run `gatsby develop`.
3. Open http://localhost:8000 on Safari.
### Expected result
Functional website w/ no errors.
### Actual result
A few seconds later (it varies), I see the following error:
<img width="1782" alt="Screen Shot 2021-04-06 at 5 14 06 PM" src="https://user-images.githubusercontent.com/68053732/113734337-8bb84880-96fb-11eb-9693-5324eb09b172.png">
After closing the error, /contact/ becomes inaccessible ("Preparing requested page" endless loop).
### Environment
System:
OS: macOS 10.15.7
CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 5.7.1 - /bin/zsh
Binaries:
Node: 15.13.0 - /usr/local/bin/node
Yarn: 1.22.10 - /usr/local/bin/yarn
npm: 7.7.6 - /usr/local/bin/npm
Languages:
Python: 2.7.16 - /usr/bin/python
Browsers:
Chrome: 89.0.4389.114
Safari: 14.0.2
npmPackages:
gatsby: ^3.1.2 => 3.1.2
npmGlobalPackages:
gatsby-cli: 3.2.0
| non_defect | unknown runtime error when following tutorial part on safari please fill out each section below otherwise your issue will be closed this info allows gatsby maintainers to diagnose and fix your issue as quickly as possible useful links documentation how to file an issue before opening a new issue please search existing issues description in the section of the gatsby tutorial part after gatsby develop succeeded i encountered an unknown runtime error when browsing the local site at localhost we couldn t find the correct component chunk with the name component src pages contact js this is despite a contact js file present in src pages with a proper react component contact defined just to make sure there was no human error on my end i overwrote contact js and index js with the content of your code samples—didn t work gatsby clean followed by a hard refresh had no effect however no error is displayed if i instead use i am running safari on macos i am working on a fresh gatsby install installed via homebrew steps to reproduce follow part one of the gatsby tutorial copy pasting code samples until you create and fill contact js run gatsby develop open on safari expected result functional website w no errors actual result a few seconds later it varies i see the following error img width alt screen shot at pm src after closing the error contact becomes inaccessible preparing requested page endless loop environment system os macos cpu intel r core tm cpu shell bin zsh binaries node usr local bin node yarn usr local bin yarn npm usr local bin npm languages python usr bin python browsers chrome safari npmpackages gatsby npmglobalpackages gatsby cli | 0 |
68,892 | 17,466,281,518 | IssuesEvent | 2021-08-06 17:20:09 | aws-amplify/aws-sdk-ios | https://api.github.com/repos/aws-amplify/aws-sdk-ios | closed | There is a conflict between amplify and awsiot Library | bug build | There is a conflict between amplify and awsiot Library;
My podfile is as follows:
```
# Uncomment the next line to define a global platform for your project
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '13.0'
use_frameworks!
target 'Todo' do
# Comment the next line if you don't want to use dynamic frameworks
# Pods for Todo
pod 'Amplify'
pod 'AmplifyPlugins/AWSCognitoAuthPlugin'
pod 'AmplifyPlugins/AWSDataStorePlugin'
pod 'AmplifyPlugins/AWSAPIPlugin'
pod 'AmplifyPlugins/AWSS3StoragePlugin'
pod 'AmplifyPlugins/AWSPinpointAnalyticsPlugin'
pod 'AWSPredictionsPlugin'
pod 'CoreMLPredictionsPlugin'
pod 'AWSMobileClient'
pod 'AWSIoT'
# pod 'AWSPinpoint'
pod 'AFNetworking', '4.0.1'
pod 'CocoaAsyncSocket', '7.4.2'
end
```
Compilation error:
In file included from /Users/lgh/Desktop/awsProject/Todo/Pods/AWSTranscribeStreaming/AWSTranscribeStreaming/Internal/AWSSRWebSocketDelegateAdaptor.m:17:
/Users/lgh/Desktop/awsProject/Todo/Pods/AWSTranscribeStreaming/AWSTranscribeStreaming/Internal/AWSSRWebSocketDelegateAdaptor.h:17:9: fatal error: 'AWSSRWebSocket.h' file not found
#import "AWSSRWebSocket.h"
^~~~~~~~~~~~~~~~~~
1 error generated.
In file included from /Users/lgh/Desktop/awsProject/Todo/Pods/AWSTranscribeStreaming/AWSTranscribeStreaming/Internal/AWSSRWebSocketAdaptor.m:17:
/Users/lgh/Desktop/awsProject/Todo/Pods/AWSTranscribeStreaming/AWSTranscribeStreaming/Internal/AWSSRWebSocketAdaptor.h:17:9: fatal error: 'AWSSRWebSocket.h' file not found
#import "AWSSRWebSocket.h"
^~~~~~~~~~~~~~~~~~
1 error generated.
Please help me to solve the problem | 1.0 | There is a conflict between amplify and awsiot Library - There is a conflict between amplify and awsiot Library;
My podfile is as follows:
```
# Uncomment the next line to define a global platform for your project
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '13.0'
use_frameworks!
target 'Todo' do
# Comment the next line if you don't want to use dynamic frameworks
# Pods for Todo
pod 'Amplify'
pod 'AmplifyPlugins/AWSCognitoAuthPlugin'
pod 'AmplifyPlugins/AWSDataStorePlugin'
pod 'AmplifyPlugins/AWSAPIPlugin'
pod 'AmplifyPlugins/AWSS3StoragePlugin'
pod 'AmplifyPlugins/AWSPinpointAnalyticsPlugin'
pod 'AWSPredictionsPlugin'
pod 'CoreMLPredictionsPlugin'
pod 'AWSMobileClient'
pod 'AWSIoT'
# pod 'AWSPinpoint'
pod 'AFNetworking', '4.0.1'
pod 'CocoaAsyncSocket', '7.4.2'
end
```
Compilation error:
In file included from /Users/lgh/Desktop/awsProject/Todo/Pods/AWSTranscribeStreaming/AWSTranscribeStreaming/Internal/AWSSRWebSocketDelegateAdaptor.m:17:
/Users/lgh/Desktop/awsProject/Todo/Pods/AWSTranscribeStreaming/AWSTranscribeStreaming/Internal/AWSSRWebSocketDelegateAdaptor.h:17:9: fatal error: 'AWSSRWebSocket.h' file not found
#import "AWSSRWebSocket.h"
^~~~~~~~~~~~~~~~~~
1 error generated.
In file included from /Users/lgh/Desktop/awsProject/Todo/Pods/AWSTranscribeStreaming/AWSTranscribeStreaming/Internal/AWSSRWebSocketAdaptor.m:17:
/Users/lgh/Desktop/awsProject/Todo/Pods/AWSTranscribeStreaming/AWSTranscribeStreaming/Internal/AWSSRWebSocketAdaptor.h:17:9: fatal error: 'AWSSRWebSocket.h' file not found
#import "AWSSRWebSocket.h"
^~~~~~~~~~~~~~~~~~
1 error generated.
Please help me to solve the problem | non_defect | there is a conflict between amplify and awsiot library there is a conflict between amplify and awsiot library; my podfile is as follows uncomment the next line to define a global platform for your project source platform ios use frameworks target todo do comment the next line if you don t want to use dynamic frameworks pods for todo pod amplify pod amplifyplugins awscognitoauthplugin pod amplifyplugins awsdatastoreplugin pod amplifyplugins awsapiplugin pod amplifyplugins pod amplifyplugins awspinpointanalyticsplugin pod awspredictionsplugin pod coremlpredictionsplugin pod awsmobileclient pod awsiot pod awspinpoint pod afnetworking pod cocoaasyncsocket end compilation error in file included from users lgh desktop awsproject todo pods awstranscribestreaming awstranscribestreaming internal awssrwebsocketdelegateadaptor m users lgh desktop awsproject todo pods awstranscribestreaming awstranscribestreaming internal awssrwebsocketdelegateadaptor h fatal error awssrwebsocket h file not found import awssrwebsocket h error generated in file included from users lgh desktop awsproject todo pods awstranscribestreaming awstranscribestreaming internal awssrwebsocketadaptor m users lgh desktop awsproject todo pods awstranscribestreaming awstranscribestreaming internal awssrwebsocketadaptor h fatal error awssrwebsocket h file not found import awssrwebsocket h error generated please help me to solve the problem | 0 |
98,234 | 16,361,468,717 | IssuesEvent | 2021-05-14 10:07:44 | Galaxy-Software-Service/Maven_Pom_Demo | https://api.github.com/repos/Galaxy-Software-Service/Maven_Pom_Demo | opened | CVE-2020-25649 (High) detected in jackson-databind-2.0.6.jar | security vulnerability | ## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: Maven_Pom_Demo/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.0.6/jackson-databind-2.0.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Galaxy-Software-Service/Maven_Pom_Demo/commit/69cce4bac0c1b37088c48547695b174bd6149c5c">69cce4bac0c1b37088c48547695b174bd6149c5c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.0.6","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.0.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-25649","vulnerabilityDetails":"A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-25649 (High) detected in jackson-databind-2.0.6.jar - ## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: Maven_Pom_Demo/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.0.6/jackson-databind-2.0.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Galaxy-Software-Service/Maven_Pom_Demo/commit/69cce4bac0c1b37088c48547695b174bd6149c5c">69cce4bac0c1b37088c48547695b174bd6149c5c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.0.6","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.0.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-25649","vulnerabilityDetails":"A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file maven pom demo pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details a flaw was found in fasterxml jackson databind where it did not have entity expansion secured properly this flaw allows vulnerability to xml external entity xxe attacks the highest threat from this vulnerability is data integrity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user 
interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in fasterxml jackson databind where it did not have entity expansion secured properly this flaw allows vulnerability to xml external entity xxe attacks the highest threat from this vulnerability is data integrity vulnerabilityurl | 0 |
69,877 | 22,706,718,903 | IssuesEvent | 2022-07-05 15:13:24 | matrix-org/synapse | https://api.github.com/repos/matrix-org/synapse | closed | Faster joins: fix race where we can fail to persist an incoming event with partial state after `_sync_partial_state_room` clears the partial state flag for a room | A-Federated-Join T-Defect | https://github.com/matrix-org/synapse/blob/e3163e2e11cf8bffa4cb3e58ac0b86a83eca314c/synapse/handlers/federation.py#L1553-L1569
When we are processing an incoming event while `_sync_partial_state_room` is running, `_sync_partial_state_room` may clear the partial state flag for the room before we try to persist the event with a partial state flag. This leads to a foreign key constraint failure because there's no longer a `partial_state_room` entry for the room.
See https://github.com/matrix-org/synapse/pull/12394#discussion_r872012615 for an example. | 1.0 | Faster joins: fix race where we can fail to persist an incoming event with partial state after `_sync_partial_state_room` clears the partial state flag for a room - https://github.com/matrix-org/synapse/blob/e3163e2e11cf8bffa4cb3e58ac0b86a83eca314c/synapse/handlers/federation.py#L1553-L1569
When we are processing an incoming event while `_sync_partial_state_room` is running, `_sync_partial_state_room` may clear the partial state flag for the room before we try to persist the event with a partial state flag. This leads to a foreign key constraint failure because there's no longer a `partial_state_room` entry for the room.
See https://github.com/matrix-org/synapse/pull/12394#discussion_r872012615 for an example. | defect | faster joins fix race where we can fail to persist an incoming event with partial state after sync partial state room clears the partial state flag for a room when we are processing an incoming event while sync partial state room is running sync partial state room may clear the partial state flag for the room before we try to persist the event with a partial state flag this leads to a foreign key constraint failure because there s no longer a partial state room entry for the room see for an example | 1 |
16,031 | 2,870,252,460 | IssuesEvent | 2015-06-07 00:37:50 | pdelia/away3d | https://api.github.com/repos/pdelia/away3d | opened | HoverCamera3D has hard-coded termination value which should be a property | auto-migrated Priority-Medium Type-Defect | #85 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:40Z
```
In away3d.cameras.HoverCamera3D, there is a hard-coded constant of 0.01
which is used to decide when hovering has completed; when tiltangle and
panangle have gotten within 0.01 degrees of targettiltangle and
targetpanangle, hovering is done. Unfortunately, on many projects this is
far too precise, and the camera continues to converge for a while after
being visually "there". If the application does something when hovering
finishes, or CPU power is at a premium, this will be a problem.
I suggest adding a new property, "closeenough", which defaults to 0.01, and
is used instead of the current hard-coded "0.01". Having too-large a value
of closeenough means that there's a jerk at the last moment, but in my
current project I've found that a value of 0.1 works well and avoids a
delay at the end of the hover.
```
Original issue reported on code.google.com by `dtgriscom@gmail.com` on 27 Nov 2009 at 8:25 | 1.0 | HoverCamera3D has hard-coded termination value which should be a property - #85 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:40Z
```
In away3d.cameras.HoverCamera3D, there is a hard-coded constant of 0.01
which is used to decide when hovering has completed; when tiltangle and
panangle have gotten within 0.01 degrees of targettiltangle and
targetpanangle, hovering is done. Unfortunately, on many projects this is
far too precise, and the camera continues to converge for a while after
being visually "there". If the application does something when hovering
finishes, or CPU power is at a premium, this will be a problem.
I suggest adding a new property, "closeenough", which defaults to 0.01, and
is used instead of the current hard-coded "0.01". Having too-large a value
of closeenough means that there's a jerk at the last moment, but in my
current project I've found that a value of 0.1 works well and avoids a
delay at the end of the hover.
```
Original issue reported on code.google.com by `dtgriscom@gmail.com` on 27 Nov 2009 at 8:25 | defect | has hard coded termination value which should be a property issue by googlecodeexporter created on in cameras there is a hard coded constant of which is used to decide when hovering has completed when tiltangle and panangle have gotten within degrees of targettiltangle and targetpanangle hovering is done unfortunately on many projects this is far too precise and the camera continues to converge for a while after being visually there if the application does something when hovering finishes or cpu power is at a premium this will be a problem i suggest adding a new property closeenough which defaults to and is used instead of the current hard coded having too large a value of closeenough means that there s a jerk at the last moment but in my current project i ve found that a value of works well and avoids a delay at the end of the hover original issue reported on code google com by dtgriscom gmail com on nov at | 1 |
118,671 | 4,751,666,267 | IssuesEvent | 2016-10-23 01:10:47 | pathwaysmedical/frasernw | https://api.github.com/repos/pathwaysmedical/frasernw | closed | .docx referral forms display very poorly | Low Priority | The code that displays the type of document fails miserably on .docx referral forms. | 1.0 | .docx referral forms display very poorly - The code that displays the type of document fails miserably on .docx referral forms. | non_defect | docx referral forms display very poorly the code that displays the type of document fails miserably on docx referral forms | 0 |
24,073 | 3,907,162,460 | IssuesEvent | 2016-04-19 11:44:15 | contao/core | https://api.github.com/repos/contao/core | closed | Allowed download file types: upper/lower case makes a difference | defect | Hi everyone,
I have the following problem with Contao 3.5.4:
==== Summary =====
"Erlaubte Download-Dateitypen" / "Download file types" are case-sensitive. UPPER-case file extensions are treated differently from lower-case file extensions.
==== Detailed description ======
File manager:
there are 2 text files in one folder (extensions txt and TXT)

In the system settings, "TXT" (upper case) is specified as the download file extension:

However, I cannot find the two files in the file picker (e.g. as an attachment in news items):

However, if I change the download extension in the system settings from "TXT" to "txt":

then the two text files can suddenly be found again (e.g. as an attachment in news items):

Unfortunately, the behaviour cannot be reproduced in the demo, since changes to the system settings are not possible
| 1.0 | Allowed download file types: upper/lower case makes a difference - Hi everyone,
I have the following problem with Contao 3.5.4:
==== Summary =====
"Erlaubte Download-Dateitypen" / "Download file types" are case-sensitive. UPPER-case file extensions are treated differently from lower-case file extensions.
==== Detailed description ======
File manager:
there are 2 text files in one folder (extensions txt and TXT)

In the system settings, "TXT" (upper case) is specified as the download file extension:

However, I cannot find the two files in the file picker (e.g. as an attachment in news items):

However, if I change the download extension in the system settings from "TXT" to "txt":

then the two text files can suddenly be found again (e.g. as an attachment in news items):

Unfortunately, the behaviour cannot be reproduced in the demo, since changes to the system settings are not possible
| defect | allowed download file types upper lower case makes a difference hi everyone i have the following problem with contao summary erlaubte download dateitypen download file types are case sensitive upper case file extensions are treated differently from lower case file extensions detailed description file manager there are text files in one folder extensions txt and txt in the system settings txt upper case is specified as the download file extension however i cannot find the two files in the file picker e g as an attachment in news items however if i change the download extension in the system settings from txt to txt then the two text files can suddenly be found again e g as an attachment in news items unfortunately the behaviour cannot be reproduced in the demo since changes to the system settings are not possible | 1
82,515 | 15,952,719,737 | IssuesEvent | 2021-04-15 11:31:04 | TiBiBa/hedy | https://api.github.com/repos/TiBiBa/hedy | opened | Changing scope and initialization of variables within JavaScript code | code improvement | Currently a lot of variables within the Javascript code are "houtje touwtje" to make it to work. A lot have a global scope or are assigned at the start of the file while this isn't necessary. An evaluation should be made to improve the code quality and make the implementation more suitable for expansion later on. | 1.0 | Changing scope and initialization of variables within JavaScript code - Currently a lot of variables within the Javascript code are "houtje touwtje" to make it to work. A lot have a global scope or are assigned at the start of the file while this isn't necessary. An evaluation should be made to improve the code quality and make the implementation more suitable for expansion later on. | non_defect | changing scope and initialization of variables within javascript code currently a lot of variables within the javascript code are houtje touwtje to make it to work a lot have a global scope or are assigned at the start of the file while this isn t necessary an evaluation should be made to improve the code quality and make the implementation more suitable for expansion later on | 0 |
519,426 | 15,051,004,152 | IssuesEvent | 2021-02-03 13:36:25 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | uart.h: Clarification required on uart_irq_tx_ready uart_irq_rx_ready | area: API area: UART bug priority: low | This point relates to discussion that happened around https://github.com/zephyrproject-rtos/zephyr/pull/31192.
It appears that there are some uart API clients (in tree: h4.c and more, @jfischer-no may complete the list) that rely on a specific behavior from `uart_irq_tx_ready` and `uart_irq_rx_ready`.
The behavior variant, not made explicit in uart.h, is that these functions return false if the matching IRQ is disabled.
It is implemented in the MCUX and NRF uart drivers (and now STM32, following the merge of #31192, maybe a few others), but not in other uart drivers (nuvoton, sam, sam0, ..).
So question is:
- Should this behavior be explicit in API and updated in drivers that don't behave this way ?
- Should it be reverted and clients fixed ?
- Should it be kept optional but made explicit in the API ?
| 1.0 | uart.h: Clarification required on uart_irq_tx_ready uart_irq_rx_ready - This point relates to discussion that happened around https://github.com/zephyrproject-rtos/zephyr/pull/31192.
It appears that there are some uart API clients (in tree: h4.c and more, @jfischer-no may complete the list) that rely on a specific behavior from `uart_irq_tx_ready` and `uart_irq_rx_ready`.
The behavior variant, not made explicit in uart.h, is that these functions return false if the matching IRQ is disabled.
It is implemented in the MCUX and NRF uart drivers (and now STM32, following the merge of #31192, maybe a few others), but not in other uart drivers (nuvoton, sam, sam0, ..).
So question is:
- Should this behavior be explicit in API and updated in drivers that don't behave this way ?
- Should it be reverted and clients fixed ?
- Should it be kept optional but made explicit in the API ?
| non_defect | uart h clarification required on uart irq tx ready uart irq rx ready this point relates to discussion that happened around it appears that there are some uart api clients in tree c and more jfischer no may complete the list that rely on a specific behavior from uart irq tx ready and uart irq rx ready the behavior variant not explicited in uart h is that functions return false if matching irq are disabled it is implemented in mcux and nrf uart drivers and now following merge of maybe few others but not in other uart drivers nuovoton sam so question is should this behavior be explicit in api and updated in drivers that don t behave this way should it be reverted and clients fixed should it be kept optional but made explicit in the api | 0 |
815,937 | 30,579,063,432 | IssuesEvent | 2023-07-21 08:16:58 | quarkus-qe/quarkus-test-suite | https://api.github.com/repos/quarkus-qe/quarkus-test-suite | closed | Exclude QuarkusTest from surefire executions | bug good first issue priority/low | All Quarkus test should ends with *Test and should be excluded from Openshift/surefire executions | 1.0 | Exclude QuarkusTest from surefire executions - All Quarkus test should ends with *Test and should be excluded from Openshift/surefire executions | non_defect | exclude quarkustest from surefire executions all quarkus test should ends with test and should be excluded from openshift surefire executions | 0 |
134,802 | 19,347,236,381 | IssuesEvent | 2021-12-15 12:09:43 | Disfactory/SpotDiffFrontend | https://api.github.com/repos/Disfactory/SpotDiffFrontend | opened | 很容易沒看到有擴建那題下面的舊照片 | bug design | 覺得問有沒有擴建那一題,使用者很容易沒看到下面那張照片。可能需要設計的幫忙? indicate 要比較的舊照片在下面,現在手機和電腦版畫面都很容易切在點點塊狀分隔上。也許設計已經做了,但我沒找到?
抱歉回饋有點亂,今天小聚的時候我會再整理一下 | 1.0 | 很容易沒看到有擴建那題下面的舊照片 - 覺得問有沒有擴建那一題,使用者很容易沒看到下面那張照片。可能需要設計的幫忙? indicate 要比較的舊照片在下面,現在手機和電腦版畫面都很容易切在點點塊狀分隔上。也許設計已經做了,但我沒找到?
抱歉回饋有點亂,今天小聚的時候我會再整理一下 | non_defect | 很容易沒看到有擴建那題下面的舊照片 覺得問有沒有擴建那一題,使用者很容易沒看到下面那張照片。可能需要設計的幫忙? indicate 要比較的舊照片在下面,現在手機和電腦版畫面都很容易切在點點塊狀分隔上。也許設計已經做了,但我沒找到? 抱歉回饋有點亂,今天小聚的時候我會再整理一下 | 0 |
39,843 | 9,674,526,107 | IssuesEvent | 2019-05-22 10:04:08 | netty/netty | https://api.github.com/repos/netty/netty | closed | KQueueEventLoop might unregister active channels due to domain socket file descriptor reuse | defect | ### Expected behavior
The current `KQueueEventLoop` implementation does not unregister active channels when domain socket file descriptors are reused.
### Actual behavior
The current `KQueueEventLoop` implementation does not process concurrent domain socket channel registrations/unregistrations in the order they actually happen, since unregistration is delayed by event loop task scheduling. When a domain socket is closed, its file descriptor might be reused quickly and therefore trigger a new channel registration using the same descriptor.
Consequently the `KQueueEventLoop#add(AbstractKQueueChannel)` method will overwrite the current inactive channels having the same descriptor and the delayed `KQueueEventLoop#remove(AbstractKQueueChannel)` will remove the active channel that replaced the inactive one.
As active channels are registered, events for this file descriptor won't be processed anymore and the channels will never be closed. The `KQueueEventLoop` will also log such events with `WARN io.netty.channel.kqueue.KQueueEventLoop - events[2]=[120, -1] had no channel!`
### Steps to reproduce
Connect multiple channels doing a simple request/response and let the client close the channels when it receive the response. When there are enough channels, this will happen.
### Minimal yet complete reproducer code (or URL to code)
I will provide a PR with a test and a tentative fix.
### Netty version
4.1.34.Final and above
### JVM version (e.g. `java -version`)
Not related
### OS version (e.g. `uname -a`)
Darwin MacBook-Pro-de-julien.local 17.7.0 Darwin Kernel Version 17.7.0: Fri Nov 2 20:43:16 PDT 2018; root:xnu-4570.71.17~1/RELEASE_X86_64 x86_64
| 1.0 | KQueueEventLoop might unregister active channels due to domain socket file descriptor reuse - ### Expected behavior
The current `KQueueEventLoop` implementation does not unregister active channels when domain socket file descriptors are reused.
### Actual behavior
The current `KQueueEventLoop` implementation does not process concurrent domain socket channel registrations/unregistrations in the order they actually happen, since unregistration is delayed by event loop task scheduling. When a domain socket is closed, its file descriptor might be reused quickly and therefore trigger a new channel registration using the same descriptor.
Consequently the `KQueueEventLoop#add(AbstractKQueueChannel)` method will overwrite the current inactive channels having the same descriptor and the delayed `KQueueEventLoop#remove(AbstractKQueueChannel)` will remove the active channel that replaced the inactive one.
As active channels are registered, events for this file descriptor won't be processed anymore and the channels will never be closed. The `KQueueEventLoop` will also log such events with `WARN io.netty.channel.kqueue.KQueueEventLoop - events[2]=[120, -1] had no channel!`
### Steps to reproduce
Connect multiple channels doing a simple request/response and let the client close the channels when it receive the response. When there are enough channels, this will happen.
### Minimal yet complete reproducer code (or URL to code)
I will provide a PR with a test and a tentative fix.
### Netty version
4.1.34.Final and above
### JVM version (e.g. `java -version`)
Not related
### OS version (e.g. `uname -a`)
Darwin MacBook-Pro-de-julien.local 17.7.0 Darwin Kernel Version 17.7.0: Fri Nov 2 20:43:16 PDT 2018; root:xnu-4570.71.17~1/RELEASE_X86_64 x86_64
| defect | kqueueeventloop might unregister active channels due to domain socket file descriptor reuse expected behavior the current kqueueeventloop implementation does not unregister active channels when domain socket file descriptor are reused actual behavior the current kqueueeventloop implementation does not process concurrent domain socket channel registration unregistration in the order they actual happen since unregistration are delated by an event loop task scheduling when a domain socket is closed it s file descriptor might be reused quickly and therefore trigger a new channel registration using the same descriptor consequently the kqueueeventloop add abstractkqueuechannel method will overwrite the current inactive channels having the same descriptor and the delayed kqueueeventloop remove abstractkqueuechannel will remove the active channel that replaced the inactive one as active channels are registered events for this file descriptor won t be processed anymore and the channels will never be closed the kqueueeventloop will also log such events with warn io netty channel kqueue kqueueeventloop events had no channel steps to reproduce connect multiple channels doing a simple request response and let the client close the channels when it receive the response when there are enough channels this will happen minimal yet complete reproducer code or url to code i will provide a pr with a test and a tentative fix netty version final and above jvm version e g java version not related os version e g uname a darwin macbook pro de julien local darwin kernel version fri nov pdt root xnu release | 1 |
200,979 | 15,801,865,214 | IssuesEvent | 2021-04-03 06:55:23 | lirc572/ped | https://api.github.com/repos/lirc572/ped | opened | clear command's behavior differs from the UG | severity.Low type.DocumentationBug | 
The `clear` command actually empties the pool list as well. This is perhaps the intended behaviour so I put it as a documentation bug.
<!--session: 1617429637314-8b96ce30-7b28-40f3-bccd-90f37ab94878--> | 1.0 | clear command's behavior differs from the UG - 
The `clear` command actually empties the pool list as well. This is perhaps the intended behaviour so I put it as a documentation bug.
<!--session: 1617429637314-8b96ce30-7b28-40f3-bccd-90f37ab94878--> | non_defect | clear command s behavior differs from the ug the clear command actually empties the pool list as well this is perhaps the intended behaviour so i put it as a documentation bug | 0 |
75,700 | 26,003,376,973 | IssuesEvent | 2022-12-20 17:03:12 | hyperledger/iroha | https://api.github.com/repos/hyperledger/iroha | closed | [BUG] When one of four peers wiped and restarted, Then it can't embed back into the network. | Bug iroha2 LTS Pre-alpha defect QA-confirmed | ### OS and Environment
MacOS, DockerHub
### GIT commit hash
52dc18cd
### Minimum working example / Steps to reproduce
1.
2.
### Actual result
```json
{
"peers": 3,
"blocks": 6,
"txs_accepted": 14,
"txs_rejected": 0,
"uptime": {
"secs": 505373,
"nanos": 71000000
},
"view_changes": 0
}
```
```json
{
"peers": 3,
"blocks": 7,
"txs_accepted": 15,
"txs_rejected": 0,
"uptime": {
"secs": 506044,
"nanos": 552000000
},
"view_changes": 0
}
```
Peer can't embed back into the network
### Expected result
Peer successfully embeds back into the network.
### Logs
<details>
<summary>Log contents</summary>
```json
Replace this text with a JSON log,
so it doesn't grow too large and has highlighting.
```
</details>
### Who can help to reproduce?
@astrokov7
### Notes
_No response_ | 1.0 | [BUG] When one of four peers wiped and restarted, Then it can't embed back into the network. - ### OS and Environment
MacOS, DockerHub
### GIT commit hash
52dc18cd
### Minimum working example / Steps to reproduce
1.
2.
### Actual result
```json
{
"peers": 3,
"blocks": 6,
"txs_accepted": 14,
"txs_rejected": 0,
"uptime": {
"secs": 505373,
"nanos": 71000000
},
"view_changes": 0
}
```
```json
{
"peers": 3,
"blocks": 7,
"txs_accepted": 15,
"txs_rejected": 0,
"uptime": {
"secs": 506044,
"nanos": 552000000
},
"view_changes": 0
}
```
Peer can't embed back into the network
### Expected result
Peer successfully embeds back into the network.
### Logs
<details>
<summary>Log contents</summary>
```json
Replace this text with a JSON log,
so it doesn't grow too large and has highlighting.
```
</details>
### Who can help to reproduce?
@astrokov7
### Notes
_No response_ | defect | when one of four peers wiped and restarted then it can t embed back into the network os and environment macos dockerhub git commit hash minimum working example steps to reproduce actual result json peers blocks txs accepted txs rejected uptime secs nanos view changes json peers blocks txs accepted txs rejected uptime secs nanos view changes peer can t embed back into the network expected result peer successfully embed back into the network logs log contents json replace this text with a json log so it doesn t grow too large and has highlighting who can help to reproduce notes no response | 1 |
73,507 | 24,664,914,019 | IssuesEvent | 2022-10-18 09:33:23 | matrix-org/synapse | https://api.github.com/repos/matrix-org/synapse | closed | completely broken room after purge_room/rejoin | T-Defect | when you purge a room (via the [delete room api](https://matrix-org.github.io/synapse/develop/admin_api/rooms.html#delete-room-api)), we do not clear the in-memory caches. Then, when you rejoin, we have a bunch of the events in the cache, so do not bother to re-persist them.
This leads to very bad brokenness, like `state_groups_state` referring to events which do not exist.
(A workaround is of course to restart synapse after purging and before rejoining) | 1.0 | completely broken room after purge_room/rejoin - when you purge a room (via the [delete room api](https://matrix-org.github.io/synapse/develop/admin_api/rooms.html#delete-room-api)), we do not clear the in-memory caches. Then, when you rejoin, we have a bunch of the events in the cache, so do not bother to re-persist them.
This leads to very bad brokenness, like `state_groups_state` referring to events which do not exist.
(A workaround is of course to restart synapse after purging and before rejoining) | defect | completely broken room after purge room rejoin when you purge a room via the we do not clear the in memory caches then when you rejoin we have a bunch of the events in the cache so do not bother to re persist them this leads to very bad brokenness like state groups state referring to events which do not exist a workaround is of course to restart synapse after purging and before rejoining | 1 |
474,062 | 13,651,981,847 | IssuesEvent | 2020-09-27 04:42:07 | Journaly/journaly | https://api.github.com/repos/Journaly/journaly | closed | FEATURE: Enable users to "Follow" other users | backend database enhancement frontend medium priority | #### Perceived Problem
- As our user base grows, it's going to become more and more difficult to know when your favorite Journalers have written a post you might want to read.
#### Ideas / Proposed Solution(s)
- Enable the ability to "follow" another user!
#### Iterations
1. A simple email notification
2. Show posts from users you follow first (?)
3. Look at ways to mix the results in a way that is weighted towards people you follow, but still shows posts that need feedback and that are a good match
4. Use ML to make this more sophisticated
| 1.0 | FEATURE: Enable users to "Follow" other users - #### Perceived Problem
- As our user base grows, it's going to become more and more difficult to know when your favorite Journalers have written a post you might want to read.
#### Ideas / Proposed Solution(s)
- Enable the ability to "follow" another user!
#### Iterations
1. A simple email notification
2. Show posts from users you follow first (?)
3. Look at ways to mix the results in a way that is weighted towards people you follow, but still shows posts that need feedback and that are a good match
4. Use ML to make this more sophisticated
| non_defect | feature enable users to follow other users perceived problem as our user base grows it s going to become more and more difficult to know when your favorite journalers have written a post you might want to read ideas proposed solution s enable the ability to follow another user iterations a simple email notification show posts from users you follow first look at ways to mix the results in a way that is weighted towards people you follow but still shows posts that need feedback and that are a good match use ml to make this more sophisticated | 0 |
590,636 | 17,783,382,593 | IssuesEvent | 2021-08-31 08:11:24 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Sample Points randomly disappearing | priority:0 area:editor | <!--
IMPORTANT: Your issue may already be reported.
Please check:
- Pinned issues, at the top of https://github.com/ppy/osu/issues
- Current priority 0 issues at https://github.com/ppy/osu/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3Apriority%3A0
- Search for your issue. If you find that it already exists, please respond with a reaction or add any further information that may be helpful.
-->
**Describe the bug:**
Sometimes, after exiting the editor, sample points will just... vanish. (Sometimes, sample points also have no effect).
**Screenshots or videos showing encountered issue:**
https://user-images.githubusercontent.com/25379179/116793405-96fd5a80-aa94-11eb-81f6-78e18cdbf501.mp4
**osu!lazer version:**
2021.424.0
**Logs:**
[database.log](https://github.com/ppy/osu/files/6410193/database.log)
[network.log](https://github.com/ppy/osu/files/6410194/network.log)
[performance.log](https://github.com/ppy/osu/files/6410195/performance.log)
[performance-audio.log](https://github.com/ppy/osu/files/6410196/performance-audio.log)
[performance-draw.log](https://github.com/ppy/osu/files/6410197/performance-draw.log)
[performance-update.log](https://github.com/ppy/osu/files/6410198/performance-update.log)
[runtime.log](https://github.com/ppy/osu/files/6410199/runtime.log) | 1.0 | Sample Points randomly disappearing - <!--
IMPORTANT: Your issue may already be reported.
Please check:
- Pinned issues, at the top of https://github.com/ppy/osu/issues
- Current priority 0 issues at https://github.com/ppy/osu/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3Apriority%3A0
- Search for your issue. If you find that it already exists, please respond with a reaction or add any further information that may be helpful.
-->
**Describe the bug:**
Sometimes, after exiting the editor, sample points will just... vanish. (Sometimes, sample points also have no effect).
**Screenshots or videos showing encountered issue:**
https://user-images.githubusercontent.com/25379179/116793405-96fd5a80-aa94-11eb-81f6-78e18cdbf501.mp4
**osu!lazer version:**
2021.424.0
**Logs:**
[database.log](https://github.com/ppy/osu/files/6410193/database.log)
[network.log](https://github.com/ppy/osu/files/6410194/network.log)
[performance.log](https://github.com/ppy/osu/files/6410195/performance.log)
[performance-audio.log](https://github.com/ppy/osu/files/6410196/performance-audio.log)
[performance-draw.log](https://github.com/ppy/osu/files/6410197/performance-draw.log)
[performance-update.log](https://github.com/ppy/osu/files/6410198/performance-update.log)
[runtime.log](https://github.com/ppy/osu/files/6410199/runtime.log) | non_defect | sample points randomly disappearing important your issue may already be reported please check pinned issues at the top of current priority issues at search for your issue if you find that it already exists please respond with a reaction or add any further information that may be helpful describe the bug sometimes after exiting the editor sample points will just vanish sometimes sample points also have no effect screenshots or videos showing encountered issue osu lazer version logs | 0 |
1,920 | 2,603,973,474 | IssuesEvent | 2015-02-24 19:00:55 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳乳头病毒治疗 | auto-migrated Priority-Medium Type-Defect | ```
沈阳乳头病毒治疗〓沈陽軍區政治部醫院性病〓TEL:024-3102330
8〓成立于1946年,68年專注于性傳播疾病的研究和治療。位于�
��陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷�
��悠久、設備精良、技術權威、專家云集,是預防、保健、醫
療、科研康復為一體的綜合性醫院。是國家首批公立甲等部��
�醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南�
��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后
勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等��
�。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:06 | 1.0 | 沈阳乳头病毒治疗 - ```
沈阳乳头病毒治疗〓沈陽軍區政治部醫院性病〓TEL:024-3102330
8〓成立于1946年,68年專注于性傳播疾病的研究和治療。位于�
��陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷�
��悠久、設備精良、技術權威、專家云集,是預防、保健、醫
療、科研康復為一體的綜合性醫院。是國家首批公立甲等部��
�醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南�
��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后
勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等��
�。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:06 | defect | 沈阳乳头病毒治疗 沈阳乳头病毒治疗〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位于� �� 。是一所與新中國同建立共輝煌的歷� ��悠久、設備精良、技術權威、專家云集,是預防、保健、醫 療、科研康復為一體的綜合性醫院。是國家首批公立甲等部�� �醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南� ��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后 勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等�� �。 original issue reported on code google com by gmail com on jun at | 1 |
11,650 | 2,660,021,532 | IssuesEvent | 2015-03-19 01:40:26 | perfsonar/project | https://api.github.com/repos/perfsonar/project | closed | owamp packet loss view at different zoom levels | Milestone-Release3.6 Priority-Medium Type-Defect wontfix | Original [issue 1038](https://code.google.com/p/perfsonar-ps/issues/detail?id=1038) created by arlake228 on 2015-01-07T14:05:59.000Z:
[This is related to # 1032]
<b>What steps will reproduce the problem?</b>
1. Set up owamp graphs, and have a packet loss event
2. View graphs at scale of 1d, 3d, 1w, 1m
3. See widely diverging values of packet loss
<b>What is the expected output? What do you see instead?</b>
I was expecting to see the same level of packet loss. What I guess is happening is some sort of summarisation is taking place and I'm seeing an *average* (mean? median?) packet loss over different (and unspecified) time windows.
<b>What version of the product are you using? On what operating system?</b>
perfsonar-ps 3.4.1
<b>Please provide any additional information below.</b>
Suggestion 1: Arguably it would be less misleading to use the *peak* packet loss rather than the *average* as the chosen summary.
Or is this considered unduly pessimistic when looking at large timescales such as 1 month?
Suggestion 2: As per # 1032, identify in the legend when summarisation is taking place
------- Loss(*)
- - - - - Reverse Loss(*)
(*) Mean over 3600 seconds
This makes it clear what you're looking at.
Suggestion 3: Allow the user to select the peak or the average to view. Given that esmond stores a whole bunch of other summaries like quartiles and 95th percentile, you could allow the user to choose any of these.
| 1.0 | owamp packet loss view at different zoom levels - Original [issue 1038](https://code.google.com/p/perfsonar-ps/issues/detail?id=1038) created by arlake228 on 2015-01-07T14:05:59.000Z:
[This is related to # 1032]
<b>What steps will reproduce the problem?</b>
1. Set up owamp graphs, and have a packet loss event
2. View graphs at scale of 1d, 3d, 1w, 1m
3. See widely diverging values of packet loss
<b>What is the expected output? What do you see instead?</b>
I was expecting to see the same level of packet loss. What I guess is happening is some sort of summarisation is taking place and I'm seeing an *average* (mean? median?) packet loss over different (and unspecified) time windows.
<b>What version of the product are you using? On what operating system?</b>
perfsonar-ps 3.4.1
<b>Please provide any additional information below.</b>
Suggestion 1: Arguably it would be less misleading to use the *peak* packet loss rather than the *average* as the chosen summary.
Or is this considered unduly pessimistic when looking at large timescales such as 1 month?
Suggestion 2: As per # 1032, identify in the legend when summarisation is taking place
------- Loss(*)
- - - - - Reverse Loss(*)
(*) Mean over 3600 seconds
This makes it clear what you're looking at.
Suggestion 3: Allow the user to select the peak or the average to view. Given that esmond stores a whole bunch of other summaries like quartiles and 95th percentile, you could allow the user to choose any of these.
| defect | owamp packet loss view at different zoom levels original created by on what steps will reproduce the problem set up owamp graphs and have a packet loss event view graphs at scale of see widely diverging values of packet loss what is the expected output what do you see instead i was expecting to see the same level of packet loss what i guess is happening is some sort of summarisation is taking place and i m seeing an average mean median packet loss over different and unspecified time windows what version of the product are you using on what operating system perfsonar ps please provide any additional information below suggestion arguably it would be less misleading to use the peak packet loss rather than the average as the chosen summary or is this considered unduly pessimistic when looking at large timescales such as month suggestion as per nbsp identify in the legend when summarisation is taking place loss reverse loss mean over seconds this makes it clear what you re looking at suggestion allow the user to select the peak or the average to view given that esmond stores a whole bunch of other summaries like quartiles and percentile you could allow the user to choose any of these | 1 |
71,301 | 23,529,978,469 | IssuesEvent | 2022-08-19 14:26:57 | vector-im/element-call | https://api.github.com/repos/vector-im/element-call | opened | Volume slider doesn't drag in Safari | T-Defect S-Minor O-Uncommon Z-Platform-Specific | ### Steps to reproduce
1. Enter a call
2. Click on the volume icon of a stream to open the volume adjustment dialog
3. Try to drag the handle of the volume slider
### Outcome
#### What did you expect?
Handle drags up & down the slider
#### What happened instead?
Handle doesn't move (but does move to where you click on the slider)
### Operating system
macOS 12.5
### Browser information
Safari 15.6
### URL for webapp
_No response_
### Will you send logs?
No | 1.0 | Volume slider doesn't drag in Safari - ### Steps to reproduce
1. Enter a call
2. Click on the volume icon of a stream to open the volume adjustment dialog
3. Try to drag the handle of the volume slider
### Outcome
#### What did you expect?
Handle drags up & down the slider
#### What happened instead?
Handle doesn't move (but does move to where you click on the slider)
### Operating system
macOS 12.5
### Browser information
Safari 15.6
### URL for webapp
_No response_
### Will you send logs?
No | defect | volume slider doesn t drag in safari steps to reproduce enter a call click on the volume icon of a stream to open the volume adjustment dialog try to drag the handle of the volume slider outcome what did you expect handle drags up down the slider what happened instead handle doesn t move but does move to where you click on the slider operating system macos browser information safari url for webapp no response will you send logs no | 1 |
1,066 | 3,534,936,891 | IssuesEvent | 2016-01-16 04:05:24 | sidorares/node-mysql2 | https://api.github.com/repos/sidorares/node-mysql2 | closed | Pool query does not return query reference | feligxe-mysql-incompatibilities | `mysql` returns the query object in the pool, `mysql2` does not.
https://github.com/felixge/node-mysql/blob/master/lib/Pool.js#L209 | True | Pool query does not return query reference - `mysql` returns the query object in the pool, `mysql2` does not.
https://github.com/felixge/node-mysql/blob/master/lib/Pool.js#L209 | non_defect | pool query does not return query reference mysql returns the query object in the pool does not | 0 |
318,865 | 9,703,098,823 | IssuesEvent | 2019-05-27 10:25:05 | cekit/cekit | https://api.github.com/repos/cekit/cekit | closed | Requesting ODCS composes should be done after fetching artifacts | complexity/low priority/low type/enhancement | This will speed up the debug process of modules and artifacts since there will be no time wasted on waiting for the compose. | 1.0 | Requesting ODCS composes should be done after fetching artifacts - This will speed up the debug process of modules and artifacts since there will be no time wasted on waiting for the compose. | non_defect | requesting odcs composes should be done after fetching artifacts this will speed up the debug process of modules and artifacts since there will be no time wasted on waiting for the compose | 0 |
172,434 | 14,360,596,379 | IssuesEvent | 2020-11-30 17:03:42 | OnionIoT/tau-lidar-server | https://api.github.com/repos/OnionIoT/tau-lidar-server | closed | Review setup instructions | documentation | Setup instructions for the Tau Camera are here: https://github.com/OnionIoT/tau-lidar-server/blob/master/GET-STARTED.md
All 3 modules are live on PIP, so please try out the instructions as if you were a new user.
Feel free to edit, expand, adjust as you see fit! | 1.0 | Review setup instructions - Setup instructions for the Tau Camera are here: https://github.com/OnionIoT/tau-lidar-server/blob/master/GET-STARTED.md
All 3 modules are live on PIP, so please try out the instructions as if you were a new user.
Feel free to edit, expand, adjust as you see fit! | non_defect | review setup instructions setup instructions for the tau camera are here all modules are live on pip so please try out the instructions as if you were a new user feel free to edit expand adjust as you see fit | 0 |
742,646 | 25,864,519,048 | IssuesEvent | 2022-12-13 19:38:55 | jerichosy/CSSWENG-Team-3 | https://api.github.com/repos/jerichosy/CSSWENG-Team-3 | closed | Notes should be optional | bug stack - frontend high priority | **Summary**
Inputs for Notes should be optional, not required when creating a new expense record
**Steps to Produce:**
1. Go to Expense tab
2. Click on Add
3. Set the Category to Grocery
4. Set the Item Name to Flour
5. Set the Amount to ₱1500.00
6. Click Add
**Expected Results:**
It should have created the new expense record and generated in the expense database
**Actual Results:**
There was an input validation message stating that input is required in the Notes field.

See 07-CSHAddExpense, Iteration 1-4 for more details | 1.0 | Notes should be optional - **Summary**
Inputs for Notes should be optional, not required when creating a new expense record
**Steps to Produce:**
1. Go to Expense tab
2. Click on Add
3. Set the Category to Grocery
4. Set the Item Name to Flour
5. Set the Amount to ₱1500.00
6. Click Add
**Expected Results:**
It should have created the new expense record and generated in the expense database
**Actual Results:**
There was an input validation message stating that input is required in the Notes field.

See 07-CSHAddExpense, Iteration 1-4 for more details | non_defect | notes should be optional summary inputs for notes should be optional not required when creating a new expense record steps to produce go to expense tab click on add set the category to grocery set the item name to flour set the amount to ₱ click add expected results it should have created the new expense record and generated in the expense database actual results there was an input validation stating to input in the notes field see cshaddexpense iteration for more details | 0 |
41,677 | 10,563,921,110 | IssuesEvent | 2019-10-04 22:37:36 | SublimeText/PackageDev | https://api.github.com/repos/SublimeText/PackageDev | closed | builtin color completions in .sublime-color-scheme files | defect | should builtin colors be suggested in color scheme files
 ? | 1.0 | builtin color completions in .sublime-color-scheme files - should builtin colors be suggested in color scheme files
 ? | defect | builtin color completions in sublime color scheme files should builtin colors be suggested in color scheme files | 1 |
58,175 | 16,389,238,225 | IssuesEvent | 2021-05-17 14:15:17 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | ResultQuery.fetchStream() fetches entire resultset at once | C: Functionality E: All Editions P: Medium T: Defect | Consider this code:
```java
Configuration c = create().configuration();
c.set(ExecuteListener.onRecordEnd(ctx -> System.out.println(ctx.record().get(TBook_ID()))));
try (Stream<B> s = c.dsl().fetchStream(TBook())) {
Iterator<B> it = s.iterator();
if (it.hasNext())
it.next();
}
```
It fetches the entire result set into memory, printing:
```
1
2
3
4
```
When in fact, given that there's only one call to `it.next()`, it should fetch only one row. This seems to be a regression introduced with https://github.com/jOOQ/jOOQ/issues/4934 "Delay query execution until a Stream terminal op is called". The `fetchStream()` implementation does:
```java
@Override
default Stream<R> fetchStream() {
return Stream.of(1).flatMap(i -> fetchLazy().stream());
}
```
But flatmap just consumes the entire flatmapped stream at once | 1.0 | defect | 1
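The eager behaviour described in this report can be reproduced without jOOQ; a minimal sketch using only plain `java.util.stream` (the class and method names here are illustrative, not jOOQ code). Pulling a single element through the iterator of a flatMapped stream still materialises the inner stream, because the stream-to-iterator adapter buffers everything the inner stream pushes during one advance:

```java
import java.util.Iterator;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class FlatMapEager {
    // Returns how many inner elements were materialised even though the
    // caller pulls only a single element through the iterator.
    static int innerFetchesForOnePull() {
        AtomicInteger fetched = new AtomicInteger();
        Stream<Integer> s = Stream.of(1)
                .flatMap(i -> Stream.of(1, 2, 3, 4)
                        // peek() stands in for the per-row ExecuteListener
                        .peek(n -> fetched.incrementAndGet()));
        Iterator<Integer> it = s.iterator();
        if (it.hasNext()) {
            it.next(); // a single pull...
        }
        return fetched.get(); // ...yet more than one inner element was fetched
    }

    public static void main(String[] args) {
        System.out.println("inner elements fetched: " + innerFetchesForOnePull());
    }
}
```

This mirrors the report: the lazy win of delaying execution until a terminal op is lost once `flatMap` hands the whole inner stream to the buffering adapter.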
61,123 | 17,023,611,067 | IssuesEvent | 2021-07-03 02:54:54 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Nominatim does not seem to display the name attribute | Component: nominatim Priority: major Resolution: worksforme Type: defect | **[Submitted to the original trac issue database at 7.18am, Monday, 28th June 2010]**
Upon searching for "Europe" Nominatim appears to return the name:eo result of "Eropo" when the results from node 25871341 are displayed.
I have my browsers set to use EN and EN-US as my languages. Node 25871341 does not have an entry for name:en (at this time).
Since there is no name:en tag, I seem to get the next result in alphabetical order of ISO tag (usually name:eo, sometimes name:es).
To clarify: Nominatim cannot find name:en to match my language preferences. Nominatim appears to skip name = Europe and moves on to the next alphabetically available option name:eo = Eropo.
Please feel free to contact me if you need further information. | 1.0 | defect | 1
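The selection behaviour the reporter describes can be sketched abstractly. This is an illustration of the *reported* behaviour, not Nominatim's actual code; the tag map and both helper methods are hypothetical. With no `name:en` present, the buggy path lets the alphabetically first `name:*` tag win over the plain `name` tag:

```java
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class NameFallback {
    // Reported (buggy) selection: when no requested-language tag exists,
    // the first name:* tag in alphabetical order wins over plain "name".
    static String buggyPick(SortedMap<String, String> tags, List<String> langs) {
        for (String lang : langs) {
            String v = tags.get("name:" + lang);
            if (v != null) return v;
        }
        for (Map.Entry<String, String> e : tags.entrySet()) {
            if (e.getKey().startsWith("name:")) return e.getValue();
        }
        return tags.get("name");
    }

    // What the reporter expects: fall back to the untagged name instead.
    static String expectedPick(SortedMap<String, String> tags, List<String> langs) {
        for (String lang : langs) {
            String v = tags.get("name:" + lang);
            if (v != null) return v;
        }
        return tags.get("name");
    }

    public static void main(String[] args) {
        SortedMap<String, String> tags = new TreeMap<>();
        tags.put("name", "Europe");
        tags.put("name:eo", "Eropo");
        tags.put("name:es", "Europa");
        List<String> langs = List.of("en", "en-US");
        System.out.println(buggyPick(tags, langs));    // Eropo (as reported)
        System.out.println(expectedPick(tags, langs)); // Europe
    }
}
```

With node 25871341's tags (`name=Europe`, `name:eo=Eropo`, …) and browser languages EN/EN-US, the buggy path yields "Eropo" exactly as observed.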
59,659 | 14,439,364,384 | IssuesEvent | 2020-12-07 14:17:58 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | opened | Vulnerability roundup 97: libslirp-4.3.1: 2 advisories [4.3] | 1.severity: security | [search](https://search.nix.gsc.io/?q=libslirp&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=libslirp+in%3Apath&type=Code)
* [ ] [CVE-2020-29129](https://nvd.nist.gov/vuln/detail/CVE-2020-29129) CVSSv3=4.3 (nixos-20.09)
* [ ] [CVE-2020-29130](https://nvd.nist.gov/vuln/detail/CVE-2020-29130) CVSSv3=4.3 (nixos-20.09)
Scanned versions: nixos-20.09: 50c8d1e0deb.
Cc @orivej
| True | non_defect | 0
67,934 | 21,328,914,068 | IssuesEvent | 2022-04-18 05:06:59 | klubcoin/lcn-mobile | https://api.github.com/repos/klubcoin/lcn-mobile | closed | [Navigation][Partners][Weblinks] Fix must be able to access web links when user accessed partners' web pages and returned to Klubcoin App via switching app from Browser to Klubcoin. | Defect Should Have Minor Navigation / Drawer Services | ### **Description:**
Must be able to access web links when user accessed partners' web pages and returned to Klubcoin App via switching app from Browser to Klubcoin.
**Build Environment:** Prod Candidate Environment
**Affects Version:** 1.0.0.prod.3
**Device Platform:** Android
**Device OS:** 11
**Test Device:** OnePlus 7T Pro
### **Pre-condition:**
1. User successfully installed Klubcoin App
2. User already has an existing Wallet Account
3. User is currently at Klubcoin Dashboard
### **Steps to Reproduce:**
1. Tap Hamburger Button
2. Tap Partners
3. Tap specific partner
4. Tap Go to website button
5. Switch app from Browser to Klubcoin
6. Tap Hamburger Button
7. Tap Help / Settings>About
8. Tap all weblinks
### **Expected Result:**
Display respective web pages for each web links
### **Actual Result:**
Doing nothing
### **Attachment/s:**
https://user-images.githubusercontent.com/100281200/163546351-e11f6f43-2e17-4ba3-ab20-535ab1bc928e.mp4 | 1.0 | defect | 1
70,856 | 9,457,396,873 | IssuesEvent | 2019-04-17 00:14:51 | Microsoft/MixedRealityToolkit-Unity | https://api.github.com/repos/Microsoft/MixedRealityToolkit-Unity | closed | A walkthrough of the UX components of the MRTK SDK - new guide | Documentation UX Controls v2 Documentation | # Overview
Users need a clear and concise guide for how to utilise the UX / UI components provided by the MRTK SDK
# Requirements
Users need to be able to have good guidance for how to utilize
* Interactables
* Lines
* Collections / Utilities
This might possibly need to be broken up in to multiple guides
# Acceptance Criteria
- [ ] As a user, I need to be able to add UX interactivity in my scene
- [ ] As a user, I need to be able to coordinate the relationships between different objects in my project
- [ ] As a user, I need to be able to organise UX components in my scene relative to the player
| 2.0 | non_defect | 0
31,195 | 6,443,913,474 | IssuesEvent | 2017-08-12 02:31:19 | opendatakit/opendatakit | https://api.github.com/repos/opendatakit/opendatakit | closed | Data of a repeat group inside a repeat group does not export to Fusion Table | Aggregate Type-Defect | If you have a repeat group B inside a repeat group A, and you submit that data to Aggregate, the submission works fine. If you publish that data to Fusion Tables, the tables are created, but the sub repeat group (B)'s tables will be empty.
To replicate this issue, I used this [fusion-test.xlsx form](https://github.com/opendatakit/opendatakit/files/675712/fusion-test.xlsx) to generate a sample submission.
```
<?xml version='1.0' ?>
<fusion-test id="fusion-test">
<repeat_a>
<text_a>a1</text_a>
<repeat_b>
<text_b>a1b1</text_b>
</repeat_b>
<repeat_b>
<text_b>a1b2</text_b>
</repeat_b>
<repeat_b>
<text_b>a1b3</text_b>
</repeat_b>
</repeat_a>
<repeat_a>
<text_a>a2</text_a>
<repeat_b>
<text_b>a2b1</text_b>
</repeat_b>
<repeat_b>
<text_b>a2b2</text_b>
</repeat_b>
<repeat_b>
<text_b>a2b3</text_b>
</repeat_b>
<repeat_b>
<text_b>a2b4</text_b>
</repeat_b>
</repeat_a>
<repeat_a>
<text_a>a3</text_a>
</repeat_a>
<meta>
<instanceID>uuid:43a8623f-1a00-4174-8bb2-712d9e048b43</instanceID>
</meta>
</fusion-test>
```
When this submission is sent to Aggregate, the right things happen.
*Aggregate, Repeat A*

*Aggregate, Repeat A1B*

*Aggregate, Repeat A2B*

When this submission is sent to Fusion Tables, the sub repeat (B) shows up as empty.
*Fusion Table, Repeat A*

*Fusion Table, Repeat B*

| 1.0 | defect | 1
12,986 | 2,732,626,072 | IssuesEvent | 2015-04-17 08:03:17 | creativo/softmodii | https://github.com/creativo/softmodii | opened | A 4.3E that gets stuck during the HBC installation. From there nothing loads | Prioridad-Media Tipo-Defecto Version-Ejecutable | **What steps reproduce the problem?**
1\. I managed to install HBC with letterbomb
2\. I prepare the SD with CSD and everything OK
3\. HBC does not load anything. Therefore I cannot change cIOS or anything.
**What is the expected response? What happens instead?**
The tutorial says that for this bug we can wait 10 min on the Wii Menu and 3 min in HBC and it may end up resolving itself. We have tried everything but it has not worked since yesterday.
**What version of the program are you using? On what operating system?**
4.2 (v4206) win XP sp3
**Include any additional information that may help solve it:**
It is the first console that fails on me. Although it is also the first black 4.3E I have handled. It had never asked me for the Wii's MAC for the letterbomb on the other consoles.
I have tried 3 different SD cards.
Thanks in advance. | 1.0 | defect | 1
45,564 | 12,877,711,111 | IssuesEvent | 2020-07-11 12:43:28 | hikaya-io/activity | https://api.github.com/repos/hikaya-io/activity | closed | When a target field is cleared validation errors are displayed on all target frequencies fields | defect good first issue | **Current behavior**
When a target field is cleared validation errors are displayed on all target frequencies' fields
**To Reproduce**
Steps to reproduce the behavior:
1. Go to indicator list
2. Click on more button on one of the indicators and then Target Periods
3. Clear one of the targets on the frequency's targets
4. Validation error "The target value field must be numeric and may contain decimal points." on all target fields.
**Expected behavior**
Clearing a target field should only display a validation error for that field
**Screenshots**
<img width="642" alt="image" src="https://user-images.githubusercontent.com/16039248/82844397-f77d3280-9ee8-11ea-8d8f-16ca4064b65d.png">

| 1.0 | defect | 1
311,814 | 23,405,715,292 | IssuesEvent | 2022-08-12 12:38:06 | FusionAuth/fusionauth-issues | https://api.github.com/repos/FusionAuth/fusionauth-issues | closed | Java client retrieveRefreshTokenById builds a wrong request URL | bug documentation client-library | ## Java client retrieveRefreshTokenById builds a wrong request URL
### Description
`FusionAuthClient.retrieveRefreshTokenById(userId)` builds a wrong request not matching the docs.
Currently:
```Java
return start(RefreshTokenResponse.class, Errors.class)
.uri("/api/jwt/refresh")
.urlSegment(userId)
.get()
.go();
```
should be `GET /api/jwt/refresh?userId={userId}`:
```Java
return start(RefreshTokenResponse.class, Errors.class)
.uri("/api/jwt/refresh")
.urlParameter("userId", userId)
.get()
.go();
```
### Affects versions
Library version `io.fusionauth:fusionauth-java-client:1.36.0`
### Steps to reproduce
Steps to reproduce the behavior:
1. Call `FusionAuthClient.retrieveRefreshTokenById(userId)`
2. See 404
### Expected behavior
200 response with a list of refresh tokens.
| 1.0 | non_defect | 0
204,870 | 15,560,485,757 | IssuesEvent | 2021-03-16 12:46:45 | microsoft/azure-pipelines-tasks | https://api.github.com/repos/microsoft/azure-pipelines-tasks | closed | Inconsistent results of Azure Pipelines Test Service comments for similar coverage | Area: Test Area: TestManagement bug | ## Required Information
Entering this information will route you directly to the right team and expedite traction.
**Question, Bug, or Feature?**
*Type*: Bug
**Enter Task Name**: VsTestV2
## Environment
- Server - Azure Pipelines
Account name: Office
- Agent - Private:
Agent OS: Windows
Agent version : 2.179.0
## Issue Description
We have configured the Visual Studio Test task in our build pipeline and enabled the code coverage option to collect the coverage data for the code.
As a result we are getting the comments on the Pull Requests for the diff coverage as expected.
But we could observe some inconsistency in the results of the comments from the Azure Pipelines Test Service for the scenarios where either code coverage data was not found or no executable changes were present in the pull request.
Refer below two examples -


Can you please look into this?
Expected -
If there are no executable changes found or the code coverage data is not found, the Diff coverage check should succeed and should be consistent.
| 2.0 | non_defect | 0
616,496 | 19,303,926,498 | IssuesEvent | 2021-12-13 09:30:11 | inrae/diades.atlas | https://api.github.com/repos/inrae/diades.atlas | closed | [Translation] Define translation mechanism | priority: high | Once the PRs #56 #50 are merged, there need to be the following updates :
+ [x] Translation Guide
+ [x] build_language_json | 1.0 | non_defect | 0
65,971 | 6,980,386,231 | IssuesEvent | 2017-12-13 01:24:56 | omegaup/omegaup | https://api.github.com/repos/omegaup/omegaup | closed | Autocomplete does not work in the problem list | 5 Bug omegaUp For Contests P0 ready | When I start typing the name of a problem in the search bar, it does not show me matches for the word.
## Expected Behavior
The autocomplete option had already been added previously, but it stopped working.
## Current Behavior
When inspecting the JavaScript console, I see it is crashing because of an element that does not exist in the DOM
## Possible Solution
Reviewing the history, I can indeed see that Pull request [#1530](https://github.com/omegaup/omegaup/pull/1530/files#diff-696daa8612c245c86a0ab65854e1566aL53) was removed.
We need to check whether it is really no longer needed and remove all of its dependencies.
| 1.0 | non_defect | 0
818,180 | 30,677,027,621 | IssuesEvent | 2023-07-26 06:23:23 | SlimeVR/SlimeVR-Server | https://api.github.com/repos/SlimeVR/SlimeVR-Server | closed | The VRServer is not gracefully shutdown | Type: Bug Priority: Normal Area: Server | How to reproduce:
1. add a `logManager.info("hello")` in https://github.com/SlimeVR/SlimeVR-Server/blob/main/server/src/main/java/dev/slimevr/VRServer.java#L179
2. ./gradlew shadowJar
3. put the `slimevr.jar` from `SlimeVR-Server\server\build\libs` into your installation folder
4. The log does not contain "hello".
My proposal is that:
1. Tauri should send sigterm to the server
2. Then server should start the clean up process
Obstacle:
1. now the server cannot handle SIGINT, need to investigate how to do so
Already tried to put a shutdown hook in the main function, but It still cannot work. Not sure it related to gradle
| 1.0 | The VRServer is not gracefully shutdown - How to reproduce:
1. add a `logManager.info("hello") in https://github.com/SlimeVR/SlimeVR-Server/blob/main/server/src/main/java/dev/slimevr/VRServer.java#L179
2. ./gradlew shadowJar
3. put the `slimevr.jar` from `SlimeVR-Server\server\build\libs` to the your installation file
4. The log doest has hello.
My proposal is that:
1. Tauri should send sigterm to the server
2. Then server should start the clean up process
Obstacle:
1. now the server cannot handle SIGINT, need to investigate how to do so
Already tried to put a shutdown hook in the main function, but It still cannot work. Not sure it related to gradle
| non_defect | the vrserver is not gracefully shutdown how to reproduce add a logmanager info hello in gradlew shadowjar put the slimevr jar from slimevr server server build libs to the your installation file the log doest has hello my proposal is that tauri should send sigterm to the server then server should start the clean up process obstacle now the server cannot handle sigint need to investigate how to do so already tried to put a shutdown hook in the main function but it still cannot work not sure it related to gradle | 0 |
49,517 | 13,187,224,855 | IssuesEvent | 2020-08-13 02:44:40 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | dst-extractor (Trac #1579) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1579">https://code.icecube.wisc.edu/ticket/1579</a>, reported by nega and owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:22",
"description": "Since we're no longer building `libdst-extractor.so` all of these things fail:\n\n{{{\n~/i3/combo/build 14h 59m 59s\n\u276f ag 'load.*dst-extractor' ../src/\n../src/dst-extractor/python/__init__.py\n4:icetray.load('dst-extractor', False)\n\n../src/dst-extractor/resources/scripts/dstread09.py\n15:load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dstread08.py\n13:load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dst11_process.py\n82: load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dstread11.py\n15:load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dstread10.py\n13:load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dstread07.py\n12:load(\"libdst-extractor\")\n\n../src/filterscripts/resources/scripts/MinBiasHunter.py\n21:I3Tray.load(\"dst-extractor\")\n\n~/i3/combo/build 46s\n\u276f \n}}}\n\nThis also prevents the building of the documentation.\n",
"reporter": "nega",
"cc": "olivas, hdembinski",
"resolution": "fixed",
"_ts": "1550067082284240",
"component": "combo reconstruction",
"summary": "dst-extractor",
"priority": "blocker",
"keywords": "dst dst-extractor pybindings documentatoin",
"time": "2016-03-07T19:10:52",
"milestone": "",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
| 1.0 | dst-extractor (Trac #1579) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1579">https://code.icecube.wisc.edu/ticket/1579</a>, reported by nega and owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:22",
"description": "Since we're no longer building `libdst-extractor.so` all of these things fail:\n\n{{{\n~/i3/combo/build 14h 59m 59s\n\u276f ag 'load.*dst-extractor' ../src/\n../src/dst-extractor/python/__init__.py\n4:icetray.load('dst-extractor', False)\n\n../src/dst-extractor/resources/scripts/dstread09.py\n15:load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dstread08.py\n13:load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dst11_process.py\n82: load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dstread11.py\n15:load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dstread10.py\n13:load(\"libdst-extractor\")\n\n../src/dst-extractor/resources/scripts/dstread07.py\n12:load(\"libdst-extractor\")\n\n../src/filterscripts/resources/scripts/MinBiasHunter.py\n21:I3Tray.load(\"dst-extractor\")\n\n~/i3/combo/build 46s\n\u276f \n}}}\n\nThis also prevents the building of the documentation.\n",
"reporter": "nega",
"cc": "olivas, hdembinski",
"resolution": "fixed",
"_ts": "1550067082284240",
"component": "combo reconstruction",
"summary": "dst-extractor",
"priority": "blocker",
"keywords": "dst dst-extractor pybindings documentatoin",
"time": "2016-03-07T19:10:52",
"milestone": "",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
| defect | dst extractor trac migrated from json status closed changetime description since we re no longer building libdst extractor so all of these things fail n n n combo build n ag load dst extractor src n src dst extractor python init py icetray load dst extractor false n n src dst extractor resources scripts py load libdst extractor n n src dst extractor resources scripts py load libdst extractor n n src dst extractor resources scripts process py load libdst extractor n n src dst extractor resources scripts py load libdst extractor n n src dst extractor resources scripts py load libdst extractor n n src dst extractor resources scripts py load libdst extractor n n src filterscripts resources scripts minbiashunter py load dst extractor n n combo build n n n nthis also prevents the building of the documentation n reporter nega cc olivas hdembinski resolution fixed ts component combo reconstruction summary dst extractor priority blocker keywords dst dst extractor pybindings documentatoin time milestone owner juancarlos type defect | 1 |
50,820 | 13,187,872,262 | IssuesEvent | 2020-08-13 04:52:54 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | [mue] no sphinx documentation (Trac #1446) | Migrated from Trac combo reconstruction defect | Good documentation is now deemed essential.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1446">https://code.icecube.wisc.edu/ticket/1446</a>, reported by david.schultz and owned by dima</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "Good documentation is now deemed essential.",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[mue] no sphinx documentation",
"priority": "major",
"keywords": "",
"time": "2015-11-24T23:42:27",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [mue] no sphinx documentation (Trac #1446) - Good documentation is now deemed essential.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1446">https://code.icecube.wisc.edu/ticket/1446</a>, reported by david.schultz and owned by dima</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "Good documentation is now deemed essential.",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[mue] no sphinx documentation",
"priority": "major",
"keywords": "",
"time": "2015-11-24T23:42:27",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
| defect | no sphinx documentation trac good documentation is now deemed essential migrated from json status closed changetime description good documentation is now deemed essential reporter david schultz cc olivas resolution fixed ts component combo reconstruction summary no sphinx documentation priority major keywords time milestone owner dima type defect | 1 |
7,640 | 2,610,408,284 | IssuesEvent | 2015-02-26 20:12:40 | chrsmith/republic-at-war | https://api.github.com/repos/chrsmith/republic-at-war | opened | Bossk Space | auto-migrated Priority-Medium Type-Defect | ```
Bossk's fighter doesn't fire its weapons.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 3 Jul 2012 at 1:40 | 1.0 | Bossk Space - ```
Bossk's fighter doesn't fire its weapons.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 3 Jul 2012 at 1:40 | defect | bossk space bossk s fighter doesn t fire its weapons original issue reported on code google com by killerhurdz netscape net on jul at | 1 |