A preview of a GitHub-issue priority-classification dataset. Column summary (string columns show observed length ranges or number of distinct classes):

| Column | Dtype | Observed range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 5 – 112 |
| repo_url | string | length 34 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 1k |
| labels | string | length 4 – 1.38k |
| body | string | length 1 – 262k |
| index | string | 16 classes |
| text_combine | string | length 96 – 262k |
| label | string | 2 classes |
| text | string | length 96 – 252k |
| binary_label | int64 | 0 or 1 |

In each row, `text_combine` concatenates `title` and `body`, and `text` is a lowercased, punctuation-stripped copy of `text_combine`.
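The schema can be exercised with a small sketch in plain Python (no dataset loader appears in this dump). The two records are transcribed from the row previews below with only a subset of columns, and the rule deriving `binary_label` from `label` is an inference from those rows, not documented behavior:

```python
# Records transcribed from the row previews in this dump (subset of columns).
# Deriving binary_label from label ("priority" -> 1, "non_priority" -> 0)
# is an inference from the previewed rows, not documented behavior.
rows = [
    {"id": 24_990_076_576, "type": "IssuesEvent",
     "created_at": "2022-11-02 17:58:40", "repo": "cmurphy/gke-operator",
     "action": "closed", "label": "priority"},
    {"id": 13_669_399_737, "type": "IssuesEvent",
     "created_at": "2020-09-29 01:48:35", "repo": "iptmt/blog",
     "action": "opened", "label": "non_priority"},
]

for row in rows:
    row["binary_label"] = 1 if row["label"] == "priority" else 0
```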
**Row 726,155** (id 24,990,076,576, IssuesEvent, 2022-11-02 17:58:40)
repo: cmurphy/gke-operator (https://api.github.com/repos/cmurphy/gke-operator)
action: closed · labels: enhancement low-priority · index: 1.0 · label: priority · binary_label: 1
title: Document CRD attributes
body: With #8 it should be possible to add documentation to CRD attributes, which would make the `kubectl explain` command more useful. Kubebuilder lets you do this via godoc comments. Wrangler doesn't seem to support this.
**Row 176,920** (id 13,669,399,737, IssuesEvent, 2020-09-29 01:48:35)
repo: iptmt/blog (https://api.github.com/repos/iptmt/blog)
action: opened · labels: /blog/test-page/ Gitalk
title: 测试blog中的各种属性 ("Testing the blog's various attributes")
body: https://iptmt.github.io/blog/test-page/
latex formulation
\[\begin{align*}
y = y(x,t) &= A e^{i\theta} \\
&= A (\cos \theta + i \sin \theta) \\
&= A (\cos(kx - \omega t) + i \sin(kx - \omega t)) \\
&= A\cos(kx - \omega t) + i A\sin(kx - \omega t) \\
&= A\cos \Big(\frac{2\pi}{\lambda}x - \frac{2\pi v}{\lambda} t \Big) + i A\sin \Big(\frac{2\pi}{\lambda}x - \frac{2\pi v}{\lambda} t \Big) \\
&= A\cos \frac{2\pi}{\lambda} (x - v t) + i A\sin \frac{2\pi}{\lambda} (x - v t)
\end{align*}\]
index: 1.0 · label: non_priority · binary_label: 0
**Row 171,199** (id 20,955,472,002, IssuesEvent, 2022-03-27 03:17:15)
repo: samq-wsdemo/cve-2019-16278 (https://api.github.com/repos/samq-wsdemo/cve-2019-16278)
action: opened · labels: security vulnerability
title: CVE-2020-25575 (High) detected in failure-0.1.7.crate
body:
## CVE-2020-25575 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>failure-0.1.7.crate</b></p></summary>
<p>Experimental error handling abstraction.</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/failure/0.1.7/download">https://crates.io/api/v1/crates/failure/0.1.7/download</a></p>
<p>
Dependency Hierarchy:
- :x: **failure-0.1.7.crate** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** UNSUPPORTED WHEN ASSIGNED ** An issue was discovered in the failure crate through 0.1.5 for Rust. It may introduce "compatibility hazards" in some applications, and has a type confusion flaw when downcasting. NOTE: This vulnerability only affects products that are no longer supported by the maintainer. NOTE: This may overlap CVE-2019-25010.
<p>Publish Date: 2020-09-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25575>CVE-2020-25575</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Crate","packageName":"failure","packageVersion":"0.1.7","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"failure:0.1.7","isMinimumFixVersionAvailable":false,"isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-25575","vulnerabilityDetails":"** UNSUPPORTED WHEN ASSIGNED ** An issue was discovered in the failure crate through 0.1.5 for Rust. It may introduce \"compatibility hazards\" in some applications, and has a type confusion flaw when downcasting. NOTE: This vulnerability only affects products that are no longer supported by the maintainer. NOTE: This may overlap CVE-2019-25010.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25575","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
index: True
label: non_priority · binary_label: 0
**Row 555,391** (id 16,453,554,731, IssuesEvent, 2021-05-21 09:21:32)
repo: webcompat/web-bugs (https://api.github.com/repos/webcompat/web-bugs)
action: closed · labels: browser-fenix engine-gecko priority-critical
title: news.google.com - see bug description
body:
<!-- @browser: Firefox Mobile 90.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:90.0) Gecko/90.0 Firefox/90.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/74129 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://news.google.com/articles/CBMid2h0dHBzOi8vdGhlaGlsbC5jb20vcG9saWN5L2ludGVybmF0aW9uYWwvYXNpYS1wYWNpZmljLzU1Mzc3NC1hdXRob3JpdGllcy1pbi1pbmRpYS1kaXNjb3Zlci1odW5kcmVkcy1vZi1ib2RpZXMtYnVyaWVkLWlu0gF7aHR0cHM6Ly90aGVoaWxsLmNvbS9wb2xpY3kvaW50ZXJuYXRpb25hbC9hc2lhLXBhY2lmaWMvNTUzNzc0LWF1dGhvcml0aWVzLWluLWluZGlhLWRpc2NvdmVyLWh1bmRyZWRzLW9mLWJvZGllcy1idXJpZWQtaW4_YW1w?hl=en-US&gl=US&ceid=US%3Aen
**Browser / Version**: Firefox Mobile 90.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: Audio and video blocked in settings but still play automatically on web pages!
**Steps to Reproduce**:
Audio and video play automatically
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/5/6965bc71-97a7-4435-9296-2e30d3d8b9ae.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210516091748</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/5/acc26b15-1450-4422-b639-54e07ae18602)
_From [webcompat.com](https://webcompat.com/) with ❤️_
index: 1.0
label: priority · binary_label: 1
**Row 195,521** (id 14,738,601,433, IssuesEvent, 2021-01-07 05:13:42)
repo: QubesOS/updates-status (https://api.github.com/repos/QubesOS/updates-status)
action: closed · labels: r4.1-archlinux-cur-test r4.1-bullseye-cur-test r4.1-buster-cur-test r4.1-centos8-cur-test r4.1-fc31-cur-test r4.1-fc32-cur-test r4.1-fc33-cur-test
title: core-agent-linux v4.1.19 (r4.1)
body:
Update of core-agent-linux to v4.1.19 for Qubes r4.1, see comments below for details.
Built from: https://github.com/QubesOS/qubes-core-agent-linux/commit/932727b3dfab56ea931c652dec0c1b06ecc0e247
[Changes since previous version](https://github.com/QubesOS/qubes-core-agent-linux/compare/v4.1.18...v4.1.19):
QubesOS/qubes-core-agent-linux@932727b version 4.1.19
QubesOS/qubes-core-agent-linux@e71edb8 Merge branch 'network-wait-fix'
QubesOS/qubes-core-agent-linux@e1ebbf2 archlinux: checkupdates output is not checked anymore, ignore it
QubesOS/qubes-core-agent-linux@f95f08e Merge remote-tracking branch 'origin/pr/267'
QubesOS/qubes-core-agent-linux@d28ada9 Merge remote-tracking branch 'origin/pr/269'
QubesOS/qubes-core-agent-linux@c2f4e02 Merge remote-tracking branch 'origin/pr/272'
QubesOS/qubes-core-agent-linux@ce9f6b2 Increase upgrades-status-notify verbosity
QubesOS/qubes-core-agent-linux@90ae037 Merge remote-tracking branch 'origin/pr/280'
QubesOS/qubes-core-agent-linux@e8f2f64 Merge remote-tracking branch 'origin/pr/281'
QubesOS/qubes-core-agent-linux@79bb5a8 Merge remote-tracking branch 'origin/pr/283'
QubesOS/qubes-core-agent-linux@882059d Merge remote-tracking branch 'origin/pr/282'
QubesOS/qubes-core-agent-linux@ff86bf9 archlinux: add missing python-setuptools makedepends
QubesOS/qubes-core-agent-linux@ed33374 Handle UnicodeError in firewall when resolving hostname
QubesOS/qubes-core-agent-linux@c25513f Fix comments in default qubes-firewall-user-script
QubesOS/qubes-core-agent-linux@48b9d5c Avoid deprecated /var/run directory
QubesOS/qubes-core-agent-linux@3f5bb37 Ignore more options of qubes-dom0-update
QubesOS/qubes-core-agent-linux@d602da4 network: fix waiting for VM network uplink
QubesOS/qubes-core-agent-linux@ba4e7f8 Actually install unit files into /usr/lib/systemd/system
QubesOS/qubes-core-agent-linux@9943585 Merge remote-tracking branch 'origin/pr/279'
QubesOS/qubes-core-agent-linux@a9e98cc Merge remote-tracking branch 'origin/pr/278'
QubesOS/qubes-core-agent-linux@46df6fc Merge remote-tracking branch 'origin/pr/274'
QubesOS/qubes-core-agent-linux@cba3f59 Merge remote-tracking branch 'origin/pr/268'
QubesOS/qubes-core-agent-linux@3bcc1c3 “sudo” must remove SELinux restrictions
QubesOS/qubes-core-agent-linux@16f48b6 Only give the “qubes” group full Polkit access
QubesOS/qubes-core-agent-linux@951b25e Use 022 instead of 002 as sudo umask
QubesOS/qubes-core-agent-linux@6adad25 Avoid spawning a Zenity progress meter
QubesOS/qubes-core-agent-linux@274df33 Harden shell scripts against metacharacters
QubesOS/qubes-core-agent-linux@a42b380 Metadata is now signed
QubesOS/qubes-core-agent-linux@1ea361b Always pass ‘-y’ to dnf
QubesOS/qubes-core-agent-linux@9bcfc5d Allow SELinux to stay enabled
QubesOS/qubes-core-agent-linux@e5b56b9 Don’t rely on an arbitrary length limit
QubesOS/qubes-core-agent-linux@c09909c Don’t assume dom0 will never have a network connection
QubesOS/qubes-core-agent-linux@bf443ef Merge commit 'b15ff53bc6dee36cecf28413554fb7c856ae0517' into usr-lib-merge
QubesOS/qubes-core-agent-linux@95022f9 Merge commit 'b15ff53bc6dee36cecf28413554fb7c856ae0517' into no-tabs-please
QubesOS/qubes-core-agent-linux@220adca Merge commit 'b15ff53bc6dee36cecf28413554fb7c856ae0517' into conntrack-purge
QubesOS/qubes-core-agent-linux@6565fac Add conntrack-tools dependency to qubes-core-agent-networking
QubesOS/qubes-core-agent-linux@20a6a94 Replace tabs with spaces
QubesOS/qubes-core-agent-linux@b15ff53 debian: update compat
QubesOS/qubes-core-agent-linux@edde0d5 debian: update control
QubesOS/qubes-core-agent-linux@ae48c7e Merge commit '66b3e628f2bf0ec8f23b0b42484d014e5cad23bf' into conntrack-purge
QubesOS/qubes-core-agent-linux@44b3c12 Keep shellcheck from complaining
QubesOS/qubes-core-agent-linux@d960f7a Stop disabling checksum offload
QubesOS/qubes-core-agent-linux@70253ed Remove spurious line continuation; add quotes.
QubesOS/qubes-core-agent-linux@9840953 vif-route-qubes: Check that the -e flag is set
QubesOS/qubes-core-agent-linux@a8588c4 Purge stale connection tracking entries
QubesOS/qubes-core-agent-linux@66b3e62 Order NetworkManager after qubes-network-uplink.service
QubesOS/qubes-core-agent-linux@519e82b init/functions: do not guess 'eth0' as Qubes-managed interface
QubesOS/qubes-core-agent-linux@8a3cd3d Make init/functions suitable for running with 'set -u'
QubesOS/qubes-core-agent-linux@6aa2b89 Cleanup setup-ip script a bit
QubesOS/qubes-core-agent-linux@dd8de79 Move network uplink setup to a separate service
QubesOS/qubes-core-agent-linux@e344dcc Order qubes-early-vm-config.service before networking
QubesOS/qubes-core-agent-linux@0caa7fc network: stop IP forwarding before disabling firewall
QubesOS/qubes-core-agent-linux@f66a494 Allow DHCPv6 replies on uplink interface, if ipv6 is enabled
QubesOS/qubes-core-agent-linux@57b30d3 Use /usr/lib instead of /lib
QubesOS/qubes-core-agent-linux@bba78d2 fix for ArchLinux: notify dom0 about installed updates The launch of the qubes-update-check service failed on ArchLinux, because the qubes-rpc uses the `service` command which isn't available for this OS.
QubesOS/qubes-core-agent-linux@5ddc118 Merge remote-tracking branch 'origin/pr/266'
QubesOS/qubes-core-agent-linux@6da7f77 Merge remote-tracking branch 'origin/pr/265'
QubesOS/qubes-core-agent-linux@4543d4f Merge remote-tracking branch 'origin/pr/232'
QubesOS/qubes-core-agent-linux@7faa707 fix archlinux detection of available upgrades note: checkupdates return 2 when no updates are available (source: man page and source code)
QubesOS/qubes-core-agent-linux@1841ba7 upgrades-installed-check requires pacman-contrib for checkupdates
QubesOS/qubes-core-agent-linux@06d84b5 Only allow known-safe characters in socket paths
QubesOS/qubes-core-agent-linux@489fde7 Replace custom script reloading with sourcing /etc/profile in qubes.GetAppmenus
QubesOS/qubes-core-agent-linux@c3761ac Merge remote-tracking branch 'origin/pr/264'
QubesOS/qubes-core-agent-linux@5e0d1cd qubes.ShowInTerminal requires socat
QubesOS/qubes-core-agent-linux@156e181 gitlab-ci: install test dependencies
QubesOS/qubes-core-agent-linux@3b6a878 gitlab-ci: include codecov
QubesOS/qubes-core-agent-linux@7c42fb6 gitlab-ci: move tests earlier, rename job
QubesOS/qubes-core-agent-linux@0580fe5 Use netvm_gw_ip instead of netvm_ip
QubesOS/qubes-core-agent-linux@9d10ecc Remove commented-out code
QubesOS/qubes-core-agent-linux@e4eeb2e Add NetVM-facing neighbor entry in NAT namespace
QubesOS/qubes-core-agent-linux@097342b Optimization: use `ip -n` over `ip netns exec`
QubesOS/qubes-core-agent-linux@6517cca NAT network namespaces need neighbor entries
QubesOS/qubes-core-agent-linux@b28f8a2 Add .gitlab-ci.yml
QubesOS/qubes-core-agent-linux@791b08c vif-route-qubes: better input validation
QubesOS/qubes-core-agent-linux@9646acb Don’t use onlink flag for nexthop
QubesOS/qubes-core-agent-linux@3e75528 Fix running under -euo pipefail
QubesOS/qubes-core-agent-linux@377add4 Don’t hardcode MAC addresses
QubesOS/qubes-core-agent-linux@0a32295 Add gateway IP+MAC, not VM’s own
QubesOS/qubes-core-agent-linux@aa71677 Add permanent neighbor entries
QubesOS/qubes-core-agent-linux@74f5fb5 network: prevent IP spoofing on upstream (eth0) interface
QubesOS/qubes-core-agent-linux@68b61c2 network: setup anti-spoofing firewall rules before enabling the interface
QubesOS/qubes-core-agent-linux@05a213a Relax private.img condition for mkfs even further
QubesOS/qubes-core-agent-linux@2d7a10a Drop systemd re-exec during boot
QubesOS/qubes-core-agent-linux@7f15690 Add a service to enable swap early - before fsck of the root filesystem
QubesOS/qubes-core-agent-linux@aa50b2f grub: override GRUB_DEVICE with /dev/mapper/dmroot
Referenced issues:
QubesOS/qubes-issues#5570
QubesOS/qubes-issues#5576
QubesOS/qubes-issues#5992
QubesOS/qubes-issues#3758
QubesOS/qubes-issues#6290
QubesOS/qubes-issues#6291
QubesOS/qubes-issues#6163
QubesOS/qubes-issues#6174
QubesOS/qubes-issues#5886
QubesOS/qubes-issues#5599
If you're release manager, you can issue GPG-inline signed command:
* `Upload core-agent-linux 932727b3dfab56ea931c652dec0c1b06ecc0e247 r4.1 current repo` (available 7 days from now)
* `Upload core-agent-linux 932727b3dfab56ea931c652dec0c1b06ecc0e247 r4.1 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload core-agent-linux 932727b3dfab56ea931c652dec0c1b06ecc0e247 r4.1 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
index: 7.0
label: non_priority
agent linux harden shell scripts against metacharacters qubesos qubes core agent linux metadata is now signed qubesos qubes core agent linux always pass ‘ y’ to dnf qubesos qubes core agent linux allow selinux to stay enabled qubesos qubes core agent linux don’t rely on an arbitrary length limit qubesos qubes core agent linux don’t assume will never have a network connection qubesos qubes core agent linux merge commit into usr lib merge qubesos qubes core agent linux merge commit into no tabs please qubesos qubes core agent linux merge commit into conntrack purge qubesos qubes core agent linux add conntrack tools dependency to qubes core agent networking qubesos qubes core agent linux replace tabs with spaces qubesos qubes core agent linux debian update compat qubesos qubes core agent linux debian update control qubesos qubes core agent linux merge commit into conntrack purge qubesos qubes core agent linux keep shellcheck from complaining qubesos qubes core agent linux stop disabling checksum offload qubesos qubes core agent linux remove spurious line continuation add quotes qubesos qubes core agent linux vif route qubes check that the e flag is set qubesos qubes core agent linux purge stale connection tracking entries qubesos qubes core agent linux order networkmanager after qubes network uplink service qubesos qubes core agent linux init functions do not guess as qubes managed interface qubesos qubes core agent linux make init functions suitable for running with set u qubesos qubes core agent linux cleanup setup ip script a bit qubesos qubes core agent linux move network uplink setup to a separate service qubesos qubes core agent linux order qubes early vm config service before networking qubesos qubes core agent linux network stop ip forwarding before disabling firewall qubesos qubes core agent linux allow replies on uplink interface if is enabled qubesos qubes core agent linux use usr lib instead of lib qubesos qubes core agent linux fix for archlinux notify 
about installed updates the launch of the qubes update check service failed on archlinux because the qubes rpc uses the service command which isn t available for this os qubesos qubes core agent linux merge remote tracking branch origin pr qubesos qubes core agent linux merge remote tracking branch origin pr qubesos qubes core agent linux merge remote tracking branch origin pr qubesos qubes core agent linux fix archlinux detection of available upgrades note checkupdates return when no updates are available source man page and source code qubesos qubes core agent linux upgrades installed check requires pacman contrib for checkupdates qubesos qubes core agent linux only allow known safe characters in socket paths qubesos qubes core agent linux replace custom script reloading with sourcing etc profile in qubes getappmenus qubesos qubes core agent linux merge remote tracking branch origin pr qubesos qubes core agent linux qubes showinterminal requires socat qubesos qubes core agent linux gitlab ci install test dependencies qubesos qubes core agent linux gitlab ci include codecov qubesos qubes core agent linux gitlab ci move tests earlier rename job qubesos qubes core agent linux use netvm gw ip instead of netvm ip qubesos qubes core agent linux remove commented out code qubesos qubes core agent linux add netvm facing neighbor entry in nat namespace qubesos qubes core agent linux optimization use ip n over ip netns exec qubesos qubes core agent linux nat network namespaces need neighbor entries qubesos qubes core agent linux add gitlab ci yml qubesos qubes core agent linux vif route qubes better input validation qubesos qubes core agent linux don’t use onlink flag for nexthop qubesos qubes core agent linux fix running under euo pipefail qubesos qubes core agent linux don’t hardcode mac addresses qubesos qubes core agent linux add gateway ip mac not vm’s own qubesos qubes core agent linux add permanent neighbor entries qubesos qubes core agent linux network prevent ip 
spoofing on upstream interface qubesos qubes core agent linux network setup anti spoofing firewall rules before enabling the interface qubesos qubes core agent linux relax private img condition for mkfs even further qubesos qubes core agent linux drop systemd re exec during boot qubesos qubes core agent linux add a service to enable swap early before fsck of the root filesystem qubesos qubes core agent linux grub override grub device with dev mapper dmroot referenced issues qubesos qubes issues qubesos qubes issues qubesos qubes issues qubesos qubes issues qubesos qubes issues qubesos qubes issues qubesos qubes issues qubesos qubes issues qubesos qubes issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload core agent linux current repo available days from now upload core agent linux current dists repo you can choose subset of distributions like vm vm available days from now upload core agent linux security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it | 0 |
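The signed-command grammar shown in the row above (`Upload <component> <commit> <release> <repo> repo`) is regular enough to validate mechanically. The sketch below is an illustrative assumption only — the `(dists)` variant and the real qubes-builder command parser are intentionally out of scope, and the regex is not taken from any Qubes source:

```python
import re

# Hypothetical pattern for the basic "Upload <component> <commit> <release>
# <repo> repo" form quoted above; not the actual qubes-builder parser.
UPLOAD_RE = re.compile(
    r"^Upload (?P<component>[A-Za-z0-9-]+) (?P<commit>[0-9a-f]{40}) "
    r"(?P<release>r[0-9.]+) (?P<repo>current|current-testing|security-testing) repo$"
)

def parse_upload(command):
    """Return the command's fields as a dict, or None if it doesn't match."""
    m = UPLOAD_RE.match(command)
    return m.groupdict() if m else None
```

Anchoring with `^`/`$` matters here: a signed command should match exactly or be rejected outright.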
511,783 | 14,881,456,271 | IssuesEvent | 2021-01-20 10:31:16 | bedita/bedita | https://api.github.com/repos/bedita/bedita | opened | Add sortable feature to schema properties | Priority - Normal Topic - API | A `sortable` feature is needed in `/model/schema/{{objectTypeName}}` response in order to know if a property can be used in `sort` query string.
We may assume `sortable` to be `true` as default and add it only when `false`
| 1.0 | Add sortable feature to schema properties - A `sortable` feature is needed in `/model/schema/{{objectTypeName}}` response in order to know if a property can be used in `sort` query string.
We may assume `sortable` to be `true` as default and add it only when `false`
| priority | add sortable feature to schema properties a sortable feature is needed in model schema objecttypename response in order to know if a property can be used in sort query string we may assume sortable to be true as default and add it only when false | 1 |
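The defaulting rule proposed in the row above — `sortable` is `true` unless the schema response explicitly says `false` — reduces to a one-line filter. A minimal sketch, with made-up property names and no claim about BEdita's actual implementation:

```python
def sortable_fields(schema_properties):
    """Names usable in a `sort` query string under the proposed rule:
    `sortable` defaults to True and only appears in the schema
    response when it is False."""
    return sorted(
        name for name, spec in schema_properties.items()
        if spec.get("sortable", True)
    )
```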
99,076 | 4,045,557,737 | IssuesEvent | 2016-05-22 03:30:38 | facelessuser/TabsExtra | https://api.github.com/repos/facelessuser/TabsExtra | closed | Do not show dialog if all tabs saved | Accepted Enhancement Priority - Low | Hello! Thank you for your plugin. It is really useful.
Is it possible to not show the dialog "Are you sure you want to dismiss all targeted unsaved buffers?" if all my tabs are saved at https://github.com/facelessuser/TabsExtra/blob/5451f6d6b54c18fb96fe8ad4226d9d879183f022/tabs_extra.py#L521?
I bind `TabsExtra` command to default close tab shortcut (so I no longer need to use mouse 👍 ):
```json
{ "keys": ["super+w"],
"command": "tabs_extra_close_menu",
"args": {
"mode": "dismiss_unsaved",
"close_type": "single"
}
}
```
But if I try to close my saved tab it asks me about unsaved buffers. | 1.0 | Do not show dialog if all tabs saved - Hello! Thank you for your plugin. It is really useful.
Is it possible to not show the dialog "Are you sure you want to dismiss all targeted unsaved buffers?" if all my tabs are saved at https://github.com/facelessuser/TabsExtra/blob/5451f6d6b54c18fb96fe8ad4226d9d879183f022/tabs_extra.py#L521?
I bind `TabsExtra` command to default close tab shortcut (so I no longer need to use mouse 👍 ):
```json
{ "keys": ["super+w"],
"command": "tabs_extra_close_menu",
"args": {
"mode": "dismiss_unsaved",
"close_type": "single"
}
}
```
But If I try to close my saved tab it asks me about unsaved buffers. | priority | do not show dialog if all tabs saved hello thank you for your plugin it is really useful is it possible to do not show dialog are you sure you want to dismiss all targeted unsaved buffers if all my tabs are saved at i bind tabsextra command to default close tab shortcut so i no longer need to use mouse 👍 json keys command tabs extra close menu args mode dismiss unsaved close type single but if i try to close my saved tab it asks me about unsaved buffers | 1 |
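The behaviour requested in the row above — only show the confirmation when at least one targeted buffer actually has unsaved changes — reduces to a single predicate over the targeted views. This is an illustrative sketch, not the plugin's actual Sublime Text API (which exposes dirtiness via `View.is_dirty()`):

```python
def needs_unsaved_prompt(targeted_views):
    """`targeted_views` is a list of (name, is_dirty) pairs; prompt
    only when some targeted view has unsaved changes, so closing a
    fully saved tab skips the dialog entirely."""
    return any(is_dirty for _name, is_dirty in targeted_views)
```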
56,595 | 32,069,245,458 | IssuesEvent | 2023-09-25 06:47:47 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | reopened | Phase 2 of large dataset performance optimisation: skip klona operations for evaluation updates | Performance Pod Release Blocker Performance | Has been observed that klona(deepClone) operations are performed on the entire dataTree for every evaluation. This computation increases with larger datasets, skipping this computation improves evalThread computation. It has been observed that eval scripting falls by almost 40% for collections for operations on datasets of size about 40000 records.
| True | Phase 2 of large dataset performance optimisation: skip klona operations for evaluation updates - Has been observed that klona(deepClone) operations are performed on the entire dataTree for every evaluation. This computation increases with larger datasets, skipping this computation improves evalThread computation. It has been observed that eval scripting falls by almost 40% for collections for operations on datasets of size about 40000 records.
| non_priority | phase of large dataset performance optimisation skip klona operations for evaluation updates has been observed that klona deepclone operations are performed on the entire datatree for every evaluation this computation increases with larger datasets skipping this computation improves evalthread computation it has been observed that eval scripting falls by almost for collections for operations on datasets of size about records | 0 |
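One standard way to avoid deep-cloning the whole tree on every evaluation, as the row above describes for klona, is a copy-on-write update that shallow-copies only the nodes along the changed path and shares every untouched subtree. This is a generic sketch of that idea, not Appsmith's actual evalThread code:

```python
def cow_set(tree, path, value):
    """Copy-on-write update: one shallow dict copy per level on `path`,
    sharing all untouched subtrees with the original, instead of a
    full deep clone of the entire tree per evaluation."""
    if not path:
        return value
    head, rest = path[0], path[1:]
    new_node = dict(tree)  # shallow copy of this level only
    new_node[head] = cow_set(tree.get(head, {}), rest, value)
    return new_node
```

The cost becomes proportional to the depth of the change rather than the size of the tree, which is the point of skipping the per-evaluation clone.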
784,386 | 27,568,930,927 | IssuesEvent | 2023-03-08 07:31:32 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | Allow remote write tuning for the awsprometheusremotewriteexporter | enhancement Stale priority:p3 release:after-ga | This is a tracking bug for https://github.com/open-telemetry/opentelemetry-collector/issues/2259 changes to be propagated to the awsprometheusremotewriteexporter. | 1.0 | Allow remote write tuning for the awsprometheusremotewriteexporter - This is a tracking bug for https://github.com/open-telemetry/opentelemetry-collector/issues/2259 changes to be propagated to the awsprometheusremotewriteexporter. | priority | allow remote write tuning for the awsprometheusremotewriteexporter this is a tracking bug for changes to be propagated to the awsprometheusremotewriteexporter | 1 |
301,805 | 9,231,050,340 | IssuesEvent | 2019-03-13 00:32:51 | codeforboston/communityconnect | https://api.github.com/repos/codeforboston/communityconnect | closed | remove or improve Saved Resource for Admin page on a phone | low priority | Either remove the `+` on the cards & the Saved resources widget, or make it look decent on the phone. Making it look better is preferred but might be tricky.
Note this is marked as low priority, so while it's nice to have, it's not critical for our March 1 deadline | 1.0 | remove or improve Saved Resource for Admin page on a phone - Either remove the `+` on the cards & the Saved resources widget, or make it look decent on the phone. Making it look better is preferred but might be tricky.
Note this is marked as low priority, so while it's nice to have, it's not critical for our March 1 deadline | priority | remove or improve saved resource for admin page on a phone either remove the on the cards the saved resources widget or make it look decent on the phone making it look better is preferred but might be tricky note this marked as low priority so while its nice to have its not critical for out deadline | 1 |
393,657 | 11,623,016,475 | IssuesEvent | 2020-02-27 08:02:42 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Bluetooth: Mesh: Proxy servers only resends segments to proxy | area: Bluetooth Mesh bug priority: medium | The proxy server node first checks whether any connected proxy clients need the packet, and if they do, it won't resend the packet on the advertiser interface. For group dst packets, this is incorrect behavior, as those should go on both interfaces.
Introducing an additional check for unicast address before blocking the adv TX fixes the issue, see #23103. | 1.0 | Bluetooth: Mesh: Proxy servers only resends segments to proxy - The proxy server node first checks whether any connected proxy clients need the packet, and if they do, it won't resend the packet on the advertiser interface. For group dst packets, this is incorrect behavior, as those should go on both interfaces.
Introducing an additional check for unicast address before blocking the adv TX fixes the issue, see #23103. | priority | bluetooth mesh proxy servers only resends segments to proxy the proxy server node first checks whether any connected proxy clients need the packet and if they do it won t resend the packet on the advertiser interface for group dst packets this is incorrect behavior as those should go on both interfaces introducing an additional check for unicast address before blocking the adv tx fixes the issue see | 1 |
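The fix described in the row above reduces to one predicate: a packet may skip the advertising bearer only when its destination is unicast *and* a connected proxy client claims it; group destinations always go out on both interfaces. A behavioural sketch only, not the Zephyr C code from the referenced PR:

```python
def resend_on_advertiser(dst_is_unicast, proxy_client_has_dst):
    """Group/virtual destinations must be forwarded on both interfaces;
    only a unicast packet already claimed by a connected proxy client
    may skip the advertising bearer."""
    return not (dst_is_unicast and proxy_client_has_dst)
```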
105,268 | 13,173,330,033 | IssuesEvent | 2020-08-11 20:06:34 | wikimedia/WikiContrib | https://api.github.com/repos/wikimedia/WikiContrib | closed | Auto Scroll to display activity when tile is clicked | Design | When a day with no contributions is clicked, the message 'username has no activity on this day' is shown. One needs to scroll down to be able to see this message, and the font is lighter than the other text on the page, making it easy to miss.

Similarly, when a day with contributions is clicked, the contributions are listed below and there is no marker indicating that the activity has loaded, making it easy to miss. We can add automatic scroll when the data is loaded.
| 1.0 | Auto Scroll to display activity when tile is clicked - When a day with no contributions is clicked, the message 'username has no activity on this day' is shown. One needs to scroll down to be able to see this message, and the font is lighter than the other text on the page, making it easy to miss.

Similarly, when a day with contributions is clicked, the contributions are listed below and there is no marker indicating that the activity has loaded, making it easy to miss. We can add automatic scroll when the data is loaded.
| non_priority | auto scroll to display activity when tile is clicked when a day with no contributions is clicked the message username has no activity on this day one needs to scroll down to be able to see this message and the font is lighter than the other texts on the page making it easy to miss similarly when a day with contributions is clicked the contributions are listed below and there is no marker indicating that the activity has loaded making it easy to miss we can add automatic scroll when the data is loaded | 0 |
117,468 | 4,716,593,639 | IssuesEvent | 2016-10-16 04:55:00 | CS2103AUG2016-F11-C1/main | https://api.github.com/repos/CS2103AUG2016-F11-C1/main | opened | Support event with venue | priority.low type.enhancement | We should include a field: venue for events and display it accordingly if provided by the user.
| 1.0 | Support event with venue - We should include a field: venue for events and display it accordingly if provided by the user.
| priority | support event with venue we should include a field venue for event and display it accordingly if provided by user | 1 |
25,819 | 12,749,213,189 | IssuesEvent | 2020-06-26 22:05:27 | Rdatatable/data.table | https://api.github.com/repos/Rdatatable/data.table | closed | groupby with dogroups (R expression) performance regression | High dev performance regression | There is a performance regression (AFAIU) when doing by group computation where we run R's C eval by each group (q7 and q8 in db-benchmark).
```
in_rows question_group question 20181206_da98fb2 20190913_35b0de3 20191115_92abb70 20191205_eba8704 20191209_6808d2c 20191212_e0140ea 20191229_d52b0d8 20200124_c005296
1: 1e9 basic sum v1 by id1 21.144 10.026 8.375 9.362 9.060 9.082 9.366 9.271
2: 1e9 basic sum v1 by id1:id2 38.914 11.746 9.243 9.327 9.331 9.978 10.813 9.220
3: 1e9 basic sum v1 mean v3 by id3 99.517 14.487 12.044 12.291 13.496 14.325 13.191 13.169
4: 1e9 basic mean v1:v3 by id4 26.593 17.357 15.135 15.157 15.278 16.761 16.724 16.754
5: 1e9 basic sum v1:v3 by id6 122.214 14.569 13.454 14.035 14.046 14.842 14.400 15.019
6: 1e9 advanced median v3 sd v3 by id4 id5 NA NA 121.742 110.925 106.340 113.837 111.984 123.411
7: 1e9 advanced max v1 - min v2 by id3 NA NA 98.680 91.596 87.005 93.749 91.294 299.863
8: 1e9 advanced largest two v3 by id6 NA NA 234.926 215.574 213.824 216.152 211.295 411.241
9: 1e9 advanced regression v1 v2 by id2 id4 NA 72.466 81.297 76.121 75.311 74.769 72.571 41.157
10: 1e9 advanced sum v3 count by id1:id6 NA 180.257 196.403 187.816 177.257 187.702 190.282 187.403
```
It is worth noting that at the same time q9 `x[, .(r2=cor(v1, v2)^2), by=.(id2, id4)]` got a nice speedup | True | groupby with dogroups (R expression) performance regression - There is a performance regression (AFAIU) when doing by group computation where we run R's C eval by each group (q7 and q8 in db-benchmark).
```
in_rows question_group question 20181206_da98fb2 20190913_35b0de3 20191115_92abb70 20191205_eba8704 20191209_6808d2c 20191212_e0140ea 20191229_d52b0d8 20200124_c005296
1: 1e9 basic sum v1 by id1 21.144 10.026 8.375 9.362 9.060 9.082 9.366 9.271
2: 1e9 basic sum v1 by id1:id2 38.914 11.746 9.243 9.327 9.331 9.978 10.813 9.220
3: 1e9 basic sum v1 mean v3 by id3 99.517 14.487 12.044 12.291 13.496 14.325 13.191 13.169
4: 1e9 basic mean v1:v3 by id4 26.593 17.357 15.135 15.157 15.278 16.761 16.724 16.754
5: 1e9 basic sum v1:v3 by id6 122.214 14.569 13.454 14.035 14.046 14.842 14.400 15.019
6: 1e9 advanced median v3 sd v3 by id4 id5 NA NA 121.742 110.925 106.340 113.837 111.984 123.411
7: 1e9 advanced max v1 - min v2 by id3 NA NA 98.680 91.596 87.005 93.749 91.294 299.863
8: 1e9 advanced largest two v3 by id6 NA NA 234.926 215.574 213.824 216.152 211.295 411.241
9: 1e9 advanced regression v1 v2 by id2 id4 NA 72.466 81.297 76.121 75.311 74.769 72.571 41.157
10: 1e9 advanced sum v3 count by id1:id6 NA 180.257 196.403 187.816 177.257 187.702 190.282 187.403
```
worth to note that at the same time q9 `x[, .(r2=cor(v1, v2)^2), by=.(id2, id4)]` got nice speed up | non_priority | groupby with dogroups r expression performance regression there is a performance regression afaiu when doing by group computation where we run r s c eval by each group and in db benchmark in rows question group question basic sum by basic sum by basic sum mean by basic mean by basic sum by advanced median sd by na na advanced max min by na na advanced largest two by na na advanced regression by na advanced sum count by na worth to note that at the same time x got nice speed up | 0 |
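For reference, benchmark query q7 from the row above (`max(v1) - min(v2)` per `id3` group, one of the regressed queries) is easy to state outside R. This pure-Python sketch only illustrates the aggregation being timed — it says nothing about data.table's grouping machinery or the dogroups eval path:

```python
from collections import defaultdict

def q7(rows):
    """max(v1) - min(v2) by id3, over (id3, v1, v2) triples."""
    acc = defaultdict(lambda: (float("-inf"), float("inf")))
    for id3, v1, v2 in rows:
        hi, lo = acc[id3]
        acc[id3] = (max(hi, v1), min(lo, v2))
    return {k: hi - lo for k, (hi, lo) in acc.items()}
```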
126,607 | 17,947,251,087 | IssuesEvent | 2021-09-12 02:52:57 | corbantjoyce/website | https://api.github.com/repos/corbantjoyce/website | closed | CVE-2021-28092 (High) detected in is-svg-3.0.0.tgz - autoclosed | security vulnerability | ## CVE-2021-28092 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-svg-3.0.0.tgz</b></p></summary>
<p>Check if a string or buffer is SVG</p>
<p>Library home page: <a href="https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz">https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz</a></p>
<p>Path to dependency file: website/package.json</p>
<p>Path to vulnerable library: website/node_modules/is-svg/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.3.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-svgo-4.0.2.tgz
- :x: **is-svg-3.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/corbantjoyce/website/commit/2d41f06ec8faa6317e843654af85f7dacef9b46e">2d41f06ec8faa6317e843654af85f7dacef9b46e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The is-svg package 2.1.0 through 4.2.1 for Node.js uses a regular expression that is vulnerable to Regular Expression Denial of Service (ReDoS). If an attacker provides a malicious string, is-svg will get stuck processing the input for a very long time.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28092>CVE-2021-28092</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28092</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution: v4.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-28092 (High) detected in is-svg-3.0.0.tgz - autoclosed - ## CVE-2021-28092 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-svg-3.0.0.tgz</b></p></summary>
<p>Check if a string or buffer is SVG</p>
<p>Library home page: <a href="https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz">https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz</a></p>
<p>Path to dependency file: website/package.json</p>
<p>Path to vulnerable library: website/node_modules/is-svg/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.3.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-svgo-4.0.2.tgz
- :x: **is-svg-3.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/corbantjoyce/website/commit/2d41f06ec8faa6317e843654af85f7dacef9b46e">2d41f06ec8faa6317e843654af85f7dacef9b46e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The is-svg package 2.1.0 through 4.2.1 for Node.js uses a regular expression that is vulnerable to Regular Expression Denial of Service (ReDoS). If an attacker provides a malicious string, is-svg will get stuck processing the input for a very long time.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28092>CVE-2021-28092</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28092</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution: v4.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in is svg tgz autoclosed cve high severity vulnerability vulnerable library is svg tgz check if a string or buffer is svg library home page a href path to dependency file website package json path to vulnerable library website node modules is svg package json dependency hierarchy react scripts tgz root library optimize css assets webpack plugin tgz cssnano tgz cssnano preset default tgz postcss svgo tgz x is svg tgz vulnerable library found in head commit a href found in base branch master vulnerability details the is svg package through for node js uses a regular expression that is vulnerable to regular expression denial of service redos if an attacker provides a malicious string is svg will get stuck processing the input for a very long time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
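The usual remediation pattern for this class of ReDoS bug is to replace the backtracking regex with a bounded, linear-time scan. The sketch below is a loose illustration of that idea only — it is not the patched is-svg 4.2.2 code, and the prolog/comment skipping is deliberately crude:

```python
def is_probably_svg(text, limit=64_000):
    """Bounded, backtracking-free sniff for an <svg> root element.
    Prologs, doctypes and comments are skipped by jumping to the
    next '>', which is loose but enough to show the approach."""
    head = text[:limit].lstrip()
    while head[:4] in ("<?xm", "<!DO", "<!--"):
        end = head.find(">")
        if end == -1:
            return False
        head = head[end + 1:].lstrip()
    return head[:4].lower() == "<svg"
```

Because every step either consumes input or returns, worst-case time is linear in `limit`, so no crafted input can make it "stuck processing for a very long time".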
757,229 | 26,501,460,610 | IssuesEvent | 2023-01-18 10:35:17 | oceanprotocol/market | https://api.github.com/repos/oceanprotocol/market | closed | getInitialPaymentCollector failure | Type: Bug Priority: High | Go to any asset and look into the browser console. All of them show:
> `[MetaFull: getInitialPaymentCollector] TypeError: undefined is not an object (evaluating 'new this.web3.eth')`
This was introduced in https://github.com/oceanprotocol/market/pull/1786, which assumes a web3 instance is always available AND which fires before any web3 instance is available even with wallet connected, leading to error. | 1.0 | getInitialPaymentCollector failure - Go to any asset and look into the browser console. All of them show:
> `[MetaFull: getInitialPaymentCollector] TypeError: undefined is not an object (evaluating 'new this.web3.eth')`
This was introduced in https://github.com/oceanprotocol/market/pull/1786, which assumes a web3 instance is always available AND which fires before any web3 instance is available even with wallet connected, leading to error. | priority | getinitialpaymentcollector failure go to any asset and look into the browser console all of them show typeerror undefined is not an object evaluating new this eth this was introduced in which assumes a instance is always available and which fires before any instance is available even with wallet connected leading to error | 1 |
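The crash pattern above — dereferencing `this.web3.eth` before any provider exists — maps to a simple guard-and-defer rule. A language-neutral sketch in Python; `FakeClient` and `client.call` are made-up stand-ins for a web3 instance and a contract read, not the market app's real code:

```python
class FakeClient:
    """Stand-in for a connected web3-style client (illustrative only)."""
    eth = object()

    def call(self, method, address):
        return (method, address)

def get_initial_payment_collector(client, datatoken_address):
    """Return None instead of raising when no client is connected yet;
    callers can retry once a wallet/provider becomes available."""
    if client is None or getattr(client, "eth", None) is None:
        return None
    return client.call("getPaymentCollector", datatoken_address)
```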
661,716 | 22,066,387,895 | IssuesEvent | 2022-05-31 04:04:18 | cse112-sp22-group13/cse112-sp22-group13 | https://api.github.com/repos/cse112-sp22-group13/cse112-sp22-group13 | closed | Incorporate Firebase as the backend | enhancement Priority: HIGH Complexity: L | We want to use a more robust backend than localStorage, so that users may create accounts and their sessions may be saved beyond the current device/platform. To achieve this, we wish to use Firebase for our backend. | 1.0 | Incorporate Firebase as the backend - We want to use a more robust backend than localStorage, so that users may create accounts and their sessions may be saved beyond the current device/platform. To achieve this, we wish to use Firebase for our backend. | priority | incorporate firebase as the backend we want to use a more robust backend than localstorage so that users may create accounts and their sessions may be saved beyond the current device platform to achieve this we wish to use firebase for our backend | 1 |
212,772 | 16,481,455,542 | IssuesEvent | 2021-05-24 12:16:37 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | com.hazelcast.internal.networking.nio.AdvancedNetworkIntegrationTest.testConnectionToWrongPort | Team: Core Type: Test-Failure | _master_ (commit c3c641fc6af0924ccc140f24d98e37f7f8dd66b1)
Failed on IBM JDK 8 nightly run: http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-IbmJDK8-nightly/1/testReport/com.hazelcast.internal.networking.nio/AdvancedNetworkIntegrationTest/testConnectionToWrongPort/
Stacktrace:
```
java.lang.AssertionError: Expected test to throw (an instance of java.lang.IllegalStateException and exception with message a string containing "Node failed to start!")
at org.junit.Assert.fail(Assert.java:89)
at org.junit.rules.ExpectedException.failDueToMissingException(ExpectedException.java:278)
at org.junit.rules.ExpectedException.access$100(ExpectedException.java:111)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:264)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at com.hazelcast.test.HazelcastSerialClassRunner.runChild(HazelcastSerialClassRunner.java:50)
at com.hazelcast.test.HazelcastSerialClassRunner.runChild(HazelcastSerialClassRunner.java:29)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at com.hazelcast.test.AbstractHazelcastClassRunner$1.evaluate(AbstractHazelcastClassRunner.java:306)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
```
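The assertion that failed can be illustrated in isolation. Below is a hypothetical minimal sketch of the check the test performs: it expects member startup against a wrong port to throw `IllegalStateException` with a message containing "Node failed to start!". The `startNodeAgainstWrongPort` helper is an assumption standing in for the real Hazelcast member startup; the nightly failure means the real startup never threw, so the member kept retrying instead of failing fast.

```java
// Hypothetical, self-contained sketch of the test's expectation
// (plain Java, no JUnit dependency).
public class WrongPortCheck {

    // Stand-in for Hazelcast member startup against a wrong port
    // (assumption: the real code path should fail like this).
    static void startNodeAgainstWrongPort() {
        throw new IllegalStateException("Node failed to start!");
    }

    public static void main(String[] args) {
        boolean expectedThrown = false;
        try {
            startNodeAgainstWrongPort();
        } catch (IllegalStateException e) {
            // Same two conditions the ExpectedException rule verifies:
            // exception type and a message containing the expected string.
            expectedThrown = e.getMessage().contains("Node failed to start!");
        }
        // In the failing nightly run this condition was NOT met:
        // no exception reached the test, so the rule reported
        // "Expected test to throw (...)".
        System.out.println(expectedThrown
                ? "expected exception seen"
                : "test would fail");
    }
}
```

If startup swallows the failure and retries (as the repeated blacklist/reconnect log lines below suggest), control never leaves the try block with an exception, and the rule fails exactly as shown in the stacktrace above.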
Standard output:
```
02:57:40,633 INFO |testConnectionToWrongPort| - [MetricsConfigHelper] testConnectionToWrongPort - [LOCAL] [dev] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
02:57:40,636 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT]
o o O o---o o--o o o-o O o-o o-O-o o--o o O o-O-o o--o o-o o--o o o
| | / \ / | | / / \ | | | | | / \ | | o o | | |\ /|
O--O o---o -O- O-o | O o---o o-o | O--o | o---o | O-o | | O-Oo | O |
| | | | / | | \ | | | | | | | | | | o o | \ | |
o o o o o---o o--o O---o o-o o o o--o o o O---o o o o o o-o o o o o
02:57:40,636 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
02:57:40,636 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210524 - c3c641f) starting at [127.0.0.1]:6000
02:57:40,636 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Cluster name: dev
02:57:40,638 INFO |testConnectionToWrongPort| - [MetricsConfigHelper] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
02:57:40,646 INFO |testConnectionToWrongPort| - [Node] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Using TCP/IP discovery
02:57:40,646 WARN |testConnectionToWrongPort| - [CPSubsystem] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
02:57:40,653 INFO |testConnectionToWrongPort| - [JetService] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
02:57:40,656 INFO |testConnectionToWrongPort| - [Diagnostics] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
02:57:40,656 INFO |testConnectionToWrongPort| - [LifecycleService] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:6000 is STARTING
02:57:40,659 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.festive_tesla.cached.thread-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is added to the blacklist.
02:57:40,759 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.festive_tesla.cached.thread-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is added to the blacklist.
02:57:40,937 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-3 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Could not connect to member 8ee538cb-8138-4ace-8c43-4b038bb3d60f, reason com.hazelcast.core.HazelcastException: java.io.IOException: Connection refused to address /127.0.0.1:5702
02:57:40,959 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.festive_tesla.cached.thread-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is added to the blacklist.
02:57:41,221 INFO |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Trying to connect to Member [127.0.0.1]:5702 - 8ee538cb-8138-4ace-8c43-4b038bb3d60f
02:57:41,221 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Exception during initial connection to Member [127.0.0.1]:5702 - 8ee538cb-8138-4ace-8c43-4b038bb3d60f: com.hazelcast.core.HazelcastException: java.io.IOException: Connection refused to address /127.0.0.1:5702
02:57:41,221 INFO |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Trying to connect to [127.0.0.1]:5701
02:57:41,221 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Exception during initial connection to [127.0.0.1]:5701: com.hazelcast.core.HazelcastException: java.io.IOException: Connection refused to address /127.0.0.1:5701
02:57:41,221 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Unable to get live cluster connection, retry in 30000 ms, attempt: 180, cluster connect timeout: INFINITE, max backoff: 30000 ms
02:57:41,259 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is added to the blacklist.
02:57:41,659 INFO |testConnectionToWrongPort| - [JetExtension] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
02:57:41,659 INFO |testConnectionToWrongPort| - [ClusterService] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:6000 - b9b7e404-6088-4673-910f-7622f3a42b27 this
]
02:57:41,659 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,659 INFO |testConnectionToWrongPort| - [JetExtension] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Jet extension is enabled
02:57:41,659 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,660 INFO |testConnectionToWrongPort| - [LifecycleService] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:6000 is STARTED
02:57:41,661 INFO |testConnectionToWrongPort| - [MetricsConfigHelper] testConnectionToWrongPort - [LOCAL] [dev] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
02:57:41,663 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT]
o o O o---o o--o o o-o O o-o o-O-o o--o o O o-O-o o--o o-o o--o o o
| | / \ / | | / / \ | | | | | / \ | | o o | | |\ /|
O--O o---o -O- O-o | O o---o o-o | O--o | o---o | O-o | | O-Oo | O |
| | | | / | | \ | | | | | | | | | | o o | \ | |
o o o o o---o o--o O---o o-o o o o--o o o O---o o o o o o-o o o o o
02:57:41,663 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
02:57:41,663 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210524 - c3c641f) starting at [127.0.0.1]:8000
02:57:41,663 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Cluster name: dev
02:57:41,665 INFO |testConnectionToWrongPort| - [MetricsConfigHelper] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
02:57:41,670 INFO |testConnectionToWrongPort| - [Node] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Using TCP/IP discovery
02:57:41,670 WARN |testConnectionToWrongPort| - [CPSubsystem] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
02:57:41,676 DEBUG |testConnectionToWrongPort| - [JetService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Jet exceptions are not registered to the ClientExceptionFactory since the ClientExceptionFactory is not accessible.
02:57:41,676 INFO |testConnectionToWrongPort| - [JetService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
02:57:41,679 INFO |testConnectionToWrongPort| - [Diagnostics] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
02:57:41,679 INFO |testConnectionToWrongPort| - [LifecycleService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is STARTING
02:57:41,682 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:40116 and /127.0.0.1:7000
02:57:41,682 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=1, /127.0.0.1:7000->/127.0.0.1:40116, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=1, /127.0.0.1:7000->/127.0.0.1:40116, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-1
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,682 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-in-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=1, /127.0.0.1:40116->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=1, /127.0.0.1:40116->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-in-0
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:51) ~[?:1.8.0]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:235) ~[?:1.8.0]
at sun.nio.ch.IOUtil.read(IOUtil.java:209) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:394) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:119) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,684 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-in-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:41,759 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,759 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,782 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.cached.thread-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is removed from the blacklist.
02:57:41,783 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:42179 and /127.0.0.1:7000
02:57:41,783 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-2 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=2, /127.0.0.1:7000->/127.0.0.1:42179, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=2, /127.0.0.1:7000->/127.0.0.1:42179, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-2
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,783 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-in-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=2, /127.0.0.1:42179->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=2, /127.0.0.1:42179->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-in-1
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:51) ~[?:1.8.0]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:235) ~[?:1.8.0]
at sun.nio.ch.IOUtil.read(IOUtil.java:209) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:394) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:119) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,783 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-in-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:41,859 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,859 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,937 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-4 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Could not connect to member 8ee538cb-8138-4ace-8c43-4b038bb3d60f, reason com.hazelcast.core.HazelcastException: java.io.IOException: Connection refused to address /127.0.0.1:5702
02:57:41,959 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,959 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,982 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.cached.thread-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is removed from the blacklist.
02:57:41,983 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-2 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:37788 and /127.0.0.1:7000
02:57:41,983 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=3, /127.0.0.1:7000->/127.0.0.1:37788, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=3, /127.0.0.1:7000->/127.0.0.1:37788, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-3
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,983 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-2 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=3, /127.0.0.1:37788->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=3, /127.0.0.1:37788->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-out-2
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:59) ~[?:1.8.0]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:105) ~[?:1.8.0]
at sun.nio.ch.IOUtil.write(IOUtil.java:77) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:485) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioOutboundPipeline.flushToSocket(NioOutboundPipeline.java:428) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioOutboundPipeline.process(NioOutboundPipeline.java:313) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,983 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-out-2 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:42,060 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,060 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,160 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,160 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,260 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,260 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,283 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:41622 and /127.0.0.1:7000
02:57:42,283 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.cached.thread-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is removed from the blacklist.
02:57:42,283 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-0 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=4, /127.0.0.1:7000->/127.0.0.1:41622, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=4, /127.0.0.1:7000->/127.0.0.1:41622, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-0
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:42,283 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-in-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=4, /127.0.0.1:41622->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=4, /127.0.0.1:41622->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-in-3
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:51) ~[?:1.8.0]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:235) ~[?:1.8.0]
at sun.nio.ch.IOUtil.read(IOUtil.java:209) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:394) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:119) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:42,283 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-in-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:42,360 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,360 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,460 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,460 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,560 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,560 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,661 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,661 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,682 INFO |testConnectionToWrongPort| - [JetExtension] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
02:57:42,682 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.quirky_tesla.cached.thread-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,682 INFO |testConnectionToWrongPort| - [ClusterService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:8000 - 5e1cc163-9af4-4443-a55d-246e8a071241 this
]
02:57:42,682 INFO |testConnectionToWrongPort| - [JetExtension] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Jet extension is enabled
02:57:42,682 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.quirky_tesla.cached.thread-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,682 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:47971 and /127.0.0.1:7000
02:57:42,682 INFO |testConnectionToWrongPort| - [LifecycleService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is STARTED
02:57:42,683 INFO |testConnectionToWrongPort| - [LifecycleService] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is SHUTTING_DOWN
02:57:42,683 WARN |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Terminating forcefully...
02:57:42,683 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Shutting down connection manager...
02:57:42,683 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=5, /127.0.0.1:7000->/127.0.0.1:47971, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=5, /127.0.0.1:7000->/127.0.0.1:47971, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-1
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:42,683 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-in-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=5, /127.0.0.1:47971->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=5, /127.0.0.1:47971->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-in-0
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:51) ~[?:1.8.0]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:235) ~[?:1.8.0]
at sun.nio.ch.IOUtil.read(IOUtil.java:209) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:394) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:119) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:42,683 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-in-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:42,683 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Shutting down node engine...
02:57:42,685 INFO |testConnectionToWrongPort| - [NodeExtension] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension.
02:57:42,685 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 2 ms.
02:57:42,685 INFO |testConnectionToWrongPort| - [LifecycleService] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is SHUTDOWN
02:57:42,685 INFO |testConnectionToWrongPort| - [LifecycleService] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:6000 is SHUTTING_DOWN
02:57:42,685 WARN |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Terminating forcefully...
02:57:42,685 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Shutting down connection manager...
02:57:42,685 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Shutting down node engine...
02:57:42,687 INFO |testConnectionToWrongPort| - [NodeExtension] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension.
02:57:42,687 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 2 ms.
02:57:42,687 INFO |testConnectionToWrongPort| - [LifecycleService] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:6000 is SHUTDOWN
```
com.hazelcast.internal.networking.nio.AdvancedNetworkIntegrationTest.testConnectionToWrongPort - _master_ (commit c3c641fc6af0924ccc140f24d98e37f7f8dd66b1)
Failed on IBM JDK 8 nightly run: http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-IbmJDK8-nightly/1/testReport/com.hazelcast.internal.networking.nio/AdvancedNetworkIntegrationTest/testConnectionToWrongPort/
Stacktrace:
```
java.lang.AssertionError: Expected test to throw (an instance of java.lang.IllegalStateException and exception with message a string containing "Node failed to start!")
at org.junit.Assert.fail(Assert.java:89)
at org.junit.rules.ExpectedException.failDueToMissingException(ExpectedException.java:278)
at org.junit.rules.ExpectedException.access$100(ExpectedException.java:111)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:264)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at com.hazelcast.test.HazelcastSerialClassRunner.runChild(HazelcastSerialClassRunner.java:50)
at com.hazelcast.test.HazelcastSerialClassRunner.runChild(HazelcastSerialClassRunner.java:29)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at com.hazelcast.test.AbstractHazelcastClassRunner$1.evaluate(AbstractHazelcastClassRunner.java:306)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
```
Standard output:
```
02:57:40,633 INFO |testConnectionToWrongPort| - [MetricsConfigHelper] testConnectionToWrongPort - [LOCAL] [dev] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
02:57:40,636 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT]
o o O o---o o--o o o-o O o-o o-O-o o--o o O o-O-o o--o o-o o--o o o
| | / \ / | | / / \ | | | | | / \ | | o o | | |\ /|
O--O o---o -O- O-o | O o---o o-o | O--o | o---o | O-o | | O-Oo | O |
| | | | / | | \ | | | | | | | | | | o o | \ | |
o o o o o---o o--o O---o o-o o o o--o o o O---o o o o o o-o o o o o
02:57:40,636 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
02:57:40,636 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210524 - c3c641f) starting at [127.0.0.1]:6000
02:57:40,636 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Cluster name: dev
02:57:40,638 INFO |testConnectionToWrongPort| - [MetricsConfigHelper] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
02:57:40,646 INFO |testConnectionToWrongPort| - [Node] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Using TCP/IP discovery
02:57:40,646 WARN |testConnectionToWrongPort| - [CPSubsystem] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
02:57:40,653 INFO |testConnectionToWrongPort| - [JetService] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
02:57:40,656 INFO |testConnectionToWrongPort| - [Diagnostics] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
02:57:40,656 INFO |testConnectionToWrongPort| - [LifecycleService] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:6000 is STARTING
02:57:40,659 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.festive_tesla.cached.thread-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is added to the blacklist.
02:57:40,759 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.festive_tesla.cached.thread-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is added to the blacklist.
02:57:40,937 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-3 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Could not connect to member 8ee538cb-8138-4ace-8c43-4b038bb3d60f, reason com.hazelcast.core.HazelcastException: java.io.IOException: Connection refused to address /127.0.0.1:5702
02:57:40,959 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.festive_tesla.cached.thread-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is added to the blacklist.
02:57:41,221 INFO |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Trying to connect to Member [127.0.0.1]:5702 - 8ee538cb-8138-4ace-8c43-4b038bb3d60f
02:57:41,221 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Exception during initial connection to Member [127.0.0.1]:5702 - 8ee538cb-8138-4ace-8c43-4b038bb3d60f: com.hazelcast.core.HazelcastException: java.io.IOException: Connection refused to address /127.0.0.1:5702
02:57:41,221 INFO |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Trying to connect to [127.0.0.1]:5701
02:57:41,221 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Exception during initial connection to [127.0.0.1]:5701: com.hazelcast.core.HazelcastException: java.io.IOException: Connection refused to address /127.0.0.1:5701
02:57:41,221 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-6 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Unable to get live cluster connection, retry in 30000 ms, attempt: 180, cluster connect timeout: INFINITE, max backoff: 30000 ms
02:57:41,259 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is added to the blacklist.
02:57:41,659 INFO |testConnectionToWrongPort| - [JetExtension] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
02:57:41,659 INFO |testConnectionToWrongPort| - [ClusterService] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:6000 - b9b7e404-6088-4673-910f-7622f3a42b27 this
]
02:57:41,659 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,659 INFO |testConnectionToWrongPort| - [JetExtension] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Jet extension is enabled
02:57:41,659 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,660 INFO |testConnectionToWrongPort| - [LifecycleService] testConnectionToWrongPort - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:6000 is STARTED
02:57:41,661 INFO |testConnectionToWrongPort| - [MetricsConfigHelper] testConnectionToWrongPort - [LOCAL] [dev] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
02:57:41,663 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT]
o o O o---o o--o o o-o O o-o o-O-o o--o o O o-O-o o--o o-o o--o o o
| | / \ / | | / / \ | | | | | / \ | | o o | | |\ /|
O--O o---o -O- O-o | O o---o o-o | O--o | o---o | O-o | | O-Oo | O |
| | | | / | | \ | | | | | | | | | | o o | \ | |
o o o o o---o o--o O---o o-o o o o--o o o O---o o o o o o-o o o o o
02:57:41,663 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
02:57:41,663 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210524 - c3c641f) starting at [127.0.0.1]:8000
02:57:41,663 INFO |testConnectionToWrongPort| - [system] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Cluster name: dev
02:57:41,665 INFO |testConnectionToWrongPort| - [MetricsConfigHelper] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
02:57:41,670 INFO |testConnectionToWrongPort| - [Node] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Using TCP/IP discovery
02:57:41,670 WARN |testConnectionToWrongPort| - [CPSubsystem] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
02:57:41,676 DEBUG |testConnectionToWrongPort| - [JetService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Jet exceptions are not registered to the ClientExceptionFactory since the ClientExceptionFactory is not accessible.
02:57:41,676 INFO |testConnectionToWrongPort| - [JetService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 2
02:57:41,679 INFO |testConnectionToWrongPort| - [Diagnostics] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
02:57:41,679 INFO |testConnectionToWrongPort| - [LifecycleService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is STARTING
02:57:41,682 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:40116 and /127.0.0.1:7000
02:57:41,682 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=1, /127.0.0.1:7000->/127.0.0.1:40116, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=1, /127.0.0.1:7000->/127.0.0.1:40116, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-1
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,682 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-in-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=1, /127.0.0.1:40116->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=1, /127.0.0.1:40116->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-in-0
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:51) ~[?:1.8.0]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:235) ~[?:1.8.0]
at sun.nio.ch.IOUtil.read(IOUtil.java:209) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:394) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:119) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,684 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-in-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:41,759 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,759 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,782 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.cached.thread-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is removed from the blacklist.
02:57:41,783 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:42179 and /127.0.0.1:7000
02:57:41,783 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-2 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=2, /127.0.0.1:7000->/127.0.0.1:42179, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=2, /127.0.0.1:7000->/127.0.0.1:42179, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-2
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,783 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-in-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=2, /127.0.0.1:42179->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=2, /127.0.0.1:42179->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-in-1
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:51) ~[?:1.8.0]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:235) ~[?:1.8.0]
at sun.nio.ch.IOUtil.read(IOUtil.java:209) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:394) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:119) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,783 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-in-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:41,859 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,859 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,937 WARN |when_jetClientCreated_then_connectsToJetCluster| - [ClientConnectionManager] hz.client_64.internal-4 - hz.client_64 [fa763ea0-647d-48d6-b9e3-70c8428e6f5a] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Could not connect to member 8ee538cb-8138-4ace-8c43-4b038bb3d60f, reason com.hazelcast.core.HazelcastException: java.io.IOException: Connection refused to address /127.0.0.1:5702
02:57:41,959 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,959 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:41,982 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.cached.thread-1 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is removed from the blacklist.
02:57:41,983 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-2 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:37788 and /127.0.0.1:7000
02:57:41,983 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=3, /127.0.0.1:7000->/127.0.0.1:37788, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=3, /127.0.0.1:7000->/127.0.0.1:37788, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-3
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,983 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-2 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=3, /127.0.0.1:37788->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=3, /127.0.0.1:37788->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-out-2
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:59) ~[?:1.8.0]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:105) ~[?:1.8.0]
at sun.nio.ch.IOUtil.write(IOUtil.java:77) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:485) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioOutboundPipeline.flushToSocket(NioOutboundPipeline.java:428) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioOutboundPipeline.process(NioOutboundPipeline.java:313) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:41,983 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-out-2 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:42,060 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,060 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,160 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,160 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,260 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,260 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,283 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:41622 and /127.0.0.1:7000
02:57:42,283 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.cached.thread-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is removed from the blacklist.
02:57:42,283 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-0 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=4, /127.0.0.1:7000->/127.0.0.1:41622, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=4, /127.0.0.1:7000->/127.0.0.1:41622, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-0
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:42,283 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-in-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=4, /127.0.0.1:41622->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=4, /127.0.0.1:41622->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-in-3
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:51) ~[?:1.8.0]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:235) ~[?:1.8.0]
at sun.nio.ch.IOUtil.read(IOUtil.java:209) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:394) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:119) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:42,283 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-in-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:42,360 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,360 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,460 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,460 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,560 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,560 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-5 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,661 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-3 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,661 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.festive_tesla.cached.thread-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,682 INFO |testConnectionToWrongPort| - [JetExtension] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
02:57:42,682 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.quirky_tesla.cached.thread-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,682 INFO |testConnectionToWrongPort| - [ClusterService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:8000 - 5e1cc163-9af4-4443-a55d-246e8a071241 this
]
02:57:42,682 INFO |testConnectionToWrongPort| - [JetExtension] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Jet extension is enabled
02:57:42,682 DEBUG |testConnectionToWrongPort| - [JobCoordinationService] hz.quirky_tesla.cached.thread-3 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
02:57:42,682 INFO |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-out-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Initialized new cluster connection between /127.0.0.1:47971 and /127.0.0.1:7000
02:57:42,682 INFO |testConnectionToWrongPort| - [LifecycleService] testConnectionToWrongPort - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is STARTED
02:57:42,683 INFO |testConnectionToWrongPort| - [LifecycleService] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is SHUTTING_DOWN
02:57:42,683 WARN |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Terminating forcefully...
02:57:42,683 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Shutting down connection manager...
02:57:42,683 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.festive_tesla.IO.thread-in-1 - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Connection[id=5, /127.0.0.1:7000->/127.0.0.1:47971, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=5, /127.0.0.1:7000->/127.0.0.1:47971, qualifier=EndpointQualifier{type='CLIENT'}, endpoint=null, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.festive_tesla.IO.thread-in-1
java.lang.IllegalStateException: Unsupported protocol exchange detected, expected protocol: CLIENT
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.verifyProtocol(SingleProtocolDecoder.java:101) ~[classes/:?]
at com.hazelcast.internal.server.tcp.SingleProtocolDecoder.onRead(SingleProtocolDecoder.java:78) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline.lambda$start$0(NioPipeline.java:127) [classes/:?]
at com.hazelcast.internal.networking.nio.NioPipeline$$Lambda$1741/00000000F4028750.run(Unknown Source) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processTaskQueue(NioThread.java:355) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:290) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:42,683 WARN |testConnectionToWrongPort| - [TcpServerConnection] hz.quirky_tesla.IO.thread-in-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Connection[id=5, /127.0.0.1:47971->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=false, connectionType=MEMBER, planeIndex=-1] closed. Reason: Exception in Connection[id=5, /127.0.0.1:47971->/127.0.0.1:7000, qualifier=EndpointQualifier{type='MEMBER'}, endpoint=[127.0.0.1]:7000, alive=true, connectionType=MEMBER, planeIndex=-1], thread=hz.quirky_tesla.IO.thread-in-0
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:51) ~[?:1.8.0]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:235) ~[?:1.8.0]
at sun.nio.ch.IOUtil.read(IOUtil.java:209) ~[?:1.8.0]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:394) ~[?:1.8.0]
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:119) ~[classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294) [classes/:?]
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
02:57:42,683 INFO |testConnectionToWrongPort| - [TcpIpJoiner] hz.quirky_tesla.IO.thread-in-0 - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:7000 is added to the blacklist.
02:57:42,683 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Shutting down node engine...
02:57:42,685 INFO |testConnectionToWrongPort| - [NodeExtension] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension.
02:57:42,685 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 2 ms.
02:57:42,685 INFO |testConnectionToWrongPort| - [LifecycleService] main - [127.0.0.1]:8000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:8000 is SHUTDOWN
02:57:42,685 INFO |testConnectionToWrongPort| - [LifecycleService] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:6000 is SHUTTING_DOWN
02:57:42,685 WARN |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Terminating forcefully...
02:57:42,685 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Shutting down connection manager...
02:57:42,685 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Shutting down node engine...
02:57:42,687 INFO |testConnectionToWrongPort| - [NodeExtension] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension.
02:57:42,687 INFO |testConnectionToWrongPort| - [Node] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 2 ms.
02:57:42,687 INFO |testConnectionToWrongPort| - [LifecycleService] main - [127.0.0.1]:6000 [dev] [5.0-SNAPSHOT] [127.0.0.1]:6000 is SHUTDOWN
```
40,386 | 12,792,896,157 | IssuesEvent | 2020-07-02 02:34:24 | WSMathias/docker-base-images | https://api.github.com/repos/WSMathias/docker-base-images | closed | High risk vulnerabilities are found | anchore-scan security |
| Image source | Package | Version | Fix | Vulnerability | Risk |
| ------------ | ------- | ------- | --- | ------------- | ---- | | True | High risk vulnerabilities are found -
| Image source | Package | Version | Fix | Vulnerability | Risk |
| ------------ | ------- | ------- | --- | ------------- | ---- | | non_priority | high risk vulnerabilities are found image source package version fix vulnerability risk | 0 |
322,758 | 23,921,717,201 | IssuesEvent | 2022-09-09 17:35:37 | manastone/manakit | https://api.github.com/repos/manastone/manakit | closed | Label | Prototyping T: Documentation T: Feature | ## Feature
Module name: Label
Module extend: Text / Icon
### Description
Text offers the possibility of displaying text in a way that remains uniform throughout your project; it ensures that ManaKit classes are used automatically and as quickly as possible.
### Requirement Dev
Adapt content rendering with props
### Props ( non-exhaustive list )
- title (for text placement )
- icon ( for icon placement )
- titleOnly ( display none icon )
- iconOnly ( display none text )
- size (apply for all content icon and text) | 1.0 | Label - ## Feature
Module name: Label
Module extend: Text / Icon
### Description
Text offers the possibility of displaying text in a way that remains uniform throughout your project; it ensures that ManaKit classes are used automatically and as quickly as possible.
### Requirement Dev
Adapt content rendering with props
### Props ( non-exhaustive list )
- title (for text placement )
- icon ( for icon placement )
- titleOnly ( display none icon )
- iconOnly ( display none text )
- size (apply for all content icon and text) | non_priority | label feature module name label module extend text icon description text offers the possibility of displaying text in a way that remains uniform throughout your project it ensures that manakit classes are used automatically and as quickly as possible requirement dev adapte content render with props props non exhaustive list title for text placement icon for icon placement titleonly display none icon icononly display none text size apply for all content icon and text | 0 |
724,847 | 24,943,416,398 | IssuesEvent | 2022-10-31 21:02:57 | bounswe/bounswe2022group6 | https://api.github.com/repos/bounswe/bounswe2022group6 | closed | Resolving Final Issues Regarding Deployment For Milestone I | Priority: High State: In Progress Type: Development | Final deployment for the Milestone I demo should be done. The deployment will be to the AWS EC2 instance that we use as a team for the course.
The issues that came up or may come up should be resolved in the meeting for this task.
**Tasks**
- [x] Change URLs in the frontend code from localhost to the actual URL.
- [x] Allow all CORS headers for the server to respond to outside requests.
- [x] Deploy the application to the AWS EC2 instance after the changes above
- [x] Test the register, login, and logout functionalities on the deployed application.
**Deadline:** 31.10.2022 - 23.59 | 1.0 | Resolving Final Issues Regarding Deployment For Milestone I - Final deployment for the Milestone I demo should be done. The deployment will be to the AWS EC2 instance that we use as a team for the course.
The issues that came up or may come up should be resolved in the meeting for this task.
**Tasks**
- [x] Change URLs in the frontend code from localhost to the actual URL.
- [x] Allow all CORS headers for the server to respond to outside requests.
- [x] Deploy the application to the AWS EC2 instance after the changes above
- [x] Test the register, login, and logout functionalities on the deployed application.
**Deadline:** 31.10.2022 - 23.59 | priority | resolving final issues regarding deployment for milestone i final deployment for the milestone i demo should be done the deployment will be to the aws instance that we use as a team for the course the issues that came up or may come up should be resolved in the meeting for this task tasks change urls in the fronted code from localhost to the actual url allow all cors headers for the server to respond to outside requests deploy the application to the aws instance after the changes above test the register login and logout functionalities on the deployed application deadline | 1 |
350,678 | 31,931,929,611 | IssuesEvent | 2023-09-19 08:01:56 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix math.test_tensorflow_divide_no_nan | TensorFlow Frontend Sub Task Failing Test | | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6039650802"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6039650802"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6039650802"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6039650802"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6195778261"><img src=https://img.shields.io/badge/-failure-red></a>
| 1.0 | Fix math.test_tensorflow_divide_no_nan - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6039650802"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6039650802"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6039650802"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6039650802"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6195778261"><img src=https://img.shields.io/badge/-failure-red></a>
| non_priority | fix math test tensorflow divide no nan numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src | 0 |
435,238 | 12,533,328,297 | IssuesEvent | 2020-06-04 17:24:34 | docker/classicswarm | https://api.github.com/repos/docker/classicswarm | closed | Etcd Connectivity Issue with proxy mode | area/discovery kind/bug kind/help wanted priority/P2 | Docker Swarm seems to have intermittent difficulty connecting to etcd even though it is available and works the majority of the time. It will not fail and retry either; it just continues to error out indefinitely.
```
Jun 18 06:25:47 localhost systemd[1]: Started Swarm Agent.
Jun 18 06:25:47 localhost dockerd[740]: time="2015-06-18T06:25:47Z" level=info msg="Registering on the discovery service every 20s..." addr="192.168.1.71:2375" discovery="etcd://127.0.0.1:4001"
Jun 18 06:25:47 localhost dockerd[740]: time="2015-06-18T06:25:47Z" level=error msg="502: (unhandled http status [Service Unavailable] with body [{\"message\":\"proxy: zero endpoints currently available\"}]) [0]"
Jun 18 06:26:07 localhost dockerd[740]: time="2015-06-18T06:26:07Z" level=info msg="Registering on the discovery service every 20s..." addr="192.168.1.71:2375" discovery="etcd://127.0.0.1:4001"
Jun 18 06:26:08 localhost dockerd[740]: time="2015-06-18T06:26:08Z" level=error msg="501: All the given peers are not reachable (failed to propose on members [{\"message\":\"proxy: zero endpoints currently available\"}] twice [last error: Put %7B%22message%22:%22proxy:%20zero%20endpoints%20currently%20available%22%7D/v2/keys/docker/swarm/nodes/192.168.1.71:2375: unsupported protocol scheme \"\"]) [0]"
Jun 18 06:26:28 localhost dockerd[740]: time="2015-06-18T06:26:28Z" level=info msg="Registering on the discovery service every 20s..." addr="192.168.1.71:2375" discovery="etcd://127.0.0.1:4001"
Jun 18 06:26:28 localhost dockerd[740]: time="2015-06-18T06:26:28Z" level=error msg="501: All the given peers are not reachable (failed to propose on members [{\"message\":\"proxy: zero endpoints currently available\"}] twice [last error: Put %7B%22message%22:%22proxy:%20zero%20endpoints%20currently%20available%22%7D/v2/keys/docker/swarm/nodes/192.168.1.71:2375: unsupported protocol scheme \"\"]) [0]"
...
```
```
#cloud-config
...
- name: docker.service
command: start
content: |
[Unit]
Description=Docker Daemon
After=docker.socket flanneld.service
Requires=docker.socket flanneld.service
[Service]
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
EnvironmentFile=-/run/flannel_docker_opts.env
EnvironmentFile=-/etc/environment
Environment=TMPDIR=/var/tmp
ExecStartPre=/opt/bin/GET_IP
ExecStart=/usr/lib/coreos/dockerd -D -d -s overlay -H fd:// -H tcp://0.0.0.0:2375 $DOCKER_OPTS $DOCKER_OPT_DNS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
ExecStartPost=/usr/lib/coreos/dockerd info
Restart=always
[Install]
WantedBy=multi-user.target
- name: swarm.service
command: start
content: |
[Unit]
Description=Swarm Agent
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/etc/environment
Environment="VER=0.3.0-rc3"
ExecStartPre=-/usr/lib/coreos/dockerd kill swarm-agent
ExecStopPost=-/usr/lib/coreos/dockerd rm swarm-agent
ExecStartPre=/usr/lib/coreos/dockerd pull swarm
ExecStart=/usr/lib/coreos/dockerd run --name swarm-agent --net host swarm:${VER} join --addr ${SWARM_IP}:2375 etcd://127.0.0.1:2379
ExecStop=/usr/lib/coreos/dockerd stop swarm-agent
Restart=always
[Install]
WantedBy=multi-user.target
...
```
| 1.0 | Etcd Connectivity Issue with proxy mode - Docker Swarm seems to have intermittent difficulty connecting to etcd even though it is available and works the majority of the time. It will not fail and retry either; it just continues to error out indefinitely.
```
Jun 18 06:25:47 localhost systemd[1]: Started Swarm Agent.
Jun 18 06:25:47 localhost dockerd[740]: time="2015-06-18T06:25:47Z" level=info msg="Registering on the discovery service every 20s..." addr="192.168.1.71:2375" discovery="etcd://127.0.0.1:4001"
Jun 18 06:25:47 localhost dockerd[740]: time="2015-06-18T06:25:47Z" level=error msg="502: (unhandled http status [Service Unavailable] with body [{\"message\":\"proxy: zero endpoints currently available\"}]) [0]"
Jun 18 06:26:07 localhost dockerd[740]: time="2015-06-18T06:26:07Z" level=info msg="Registering on the discovery service every 20s..." addr="192.168.1.71:2375" discovery="etcd://127.0.0.1:4001"
Jun 18 06:26:08 localhost dockerd[740]: time="2015-06-18T06:26:08Z" level=error msg="501: All the given peers are not reachable (failed to propose on members [{\"message\":\"proxy: zero endpoints currently available\"}] twice [last error: Put %7B%22message%22:%22proxy:%20zero%20endpoints%20currently%20available%22%7D/v2/keys/docker/swarm/nodes/192.168.1.71:2375: unsupported protocol scheme \"\"]) [0]"
Jun 18 06:26:28 localhost dockerd[740]: time="2015-06-18T06:26:28Z" level=info msg="Registering on the discovery service every 20s..." addr="192.168.1.71:2375" discovery="etcd://127.0.0.1:4001"
Jun 18 06:26:28 localhost dockerd[740]: time="2015-06-18T06:26:28Z" level=error msg="501: All the given peers are not reachable (failed to propose on members [{\"message\":\"proxy: zero endpoints currently available\"}] twice [last error: Put %7B%22message%22:%22proxy:%20zero%20endpoints%20currently%20available%22%7D/v2/keys/docker/swarm/nodes/192.168.1.71:2375: unsupported protocol scheme \"\"]) [0]"
...
```
```
#cloud-config
...
- name: docker.service
command: start
content: |
[Unit]
Description=Docker Daemon
After=docker.socket flanneld.service
Requires=docker.socket flanneld.service
[Service]
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
EnvironmentFile=-/run/flannel_docker_opts.env
EnvironmentFile=-/etc/environment
Environment=TMPDIR=/var/tmp
ExecStartPre=/opt/bin/GET_IP
ExecStart=/usr/lib/coreos/dockerd -D -d -s overlay -H fd:// -H tcp://0.0.0.0:2375 $DOCKER_OPTS $DOCKER_OPT_DNS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
ExecStartPost=/usr/lib/coreos/dockerd info
Restart=always
[Install]
WantedBy=multi-user.target
- name: swarm.service
command: start
content: |
[Unit]
Description=Swarm Agent
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/etc/environment
Environment="VER=0.3.0-rc3"
ExecStartPre=-/usr/lib/coreos/dockerd kill swarm-agent
ExecStopPost=-/usr/lib/coreos/dockerd rm swarm-agent
ExecStartPre=/usr/lib/coreos/dockerd pull swarm
ExecStart=/usr/lib/coreos/dockerd run --name swarm-agent --net host swarm:${VER} join --addr ${SWARM_IP}:2375 etcd://127.0.0.1:2379
ExecStop=/usr/lib/coreos/dockerd stop swarm-agent
Restart=always
[Install]
WantedBy=multi-user.target
...
```
| priority | etcd connectivity issue with proxy mode docker swarm seems to have intermittent difficulty connecting to etcd even though it is available and works the majority of the time it will not fail and retry either just continue to error out indefinitely jun localhost systemd started swarm agent jun localhost dockerd time level info msg registering on the discovery service every addr discovery etcd jun localhost dockerd time level error msg unhandled http status with body jun localhost dockerd time level info msg registering on the discovery service every addr discovery etcd jun localhost dockerd time level error msg all the given peers are not reachable failed to propose on members twice jun localhost dockerd time level info msg registering on the discovery service every addr discovery etcd jun localhost dockerd time level error msg all the given peers are not reachable failed to propose on members twice cloud config name docker service command start content description docker daemon after docker socket flanneld service requires docker socket flanneld service mountflags slave limitnofile limitnproc environmentfile run flannel docker opts env environmentfile etc environment environment tmpdir var tmp execstartpre opt bin get ip execstart usr lib coreos dockerd d d s overlay h fd h tcp docker opts docker opt dns docker opt bip docker opt mtu docker opt ipmasq execstartpost usr lib coreos dockerd info restart always wantedby multi user target name swarm service command start content description swarm agent after docker service requires docker service environmentfile etc environment environment ver execstartpre usr lib coreos dockerd kill swarm agent execstoppost usr lib coreos dockerd rm swarm agent execstartpre usr lib coreos dockerd pull swarm execstart usr lib coreos dockerd run name swarm agent net host swarm ver join addr swarm ip etcd execstop usr lib coreos dockerd stop swarm agent restart always wantedby multi user target | 1 |
327,023 | 24,113,967,704 | IssuesEvent | 2022-09-20 13:30:25 | kubernetes/sig-release | https://api.github.com/repos/kubernetes/sig-release | closed | Cut v1.25.1 release | priority/important-soon sig/release kind/documentation area/release-eng | ## Scheduled to happen: Wednesday, September 14, 2022
## Release Blocking Issues
<!--
Make a list of anything preventing the release to start
(failing tests, pending image bumps, etc) and link them
to the relevant GitHub issues:
- [ ] Issue 1
- [ ] Issue 2
-->
<!--
Release Process Steps:
======================
Create a thread on #release-management on Slack to notify updates
about the release. For example,
- https://kubernetes.slack.com/archives/CJH2GBF7Y/p1635868822040300
- https://kubernetes.slack.com/archives/CJH2GBF7Y/p1631606375087500
- Add/Remove items of the checklist as you see fit
- Post bumps or issues encountered along the way
Hints and pointers to docs for each step of the release process:
Screenshot Testgrid Boards:
Use `krel testgridshot` to automatically create the screenshots
http://bit.ly/relmanagers-handbook#testgrid-screenshots
Stage and Release (mock and nomock):
Use `krel stage` && `krel release` see the handbook for more:
http://bit.ly/relmanagers-handbook#releases-management
Image promotion:
Use `kpromo pr` to create a pull request
https://sigs.k8s.io/promo-tools/docs/promotion-pull-requests.md
Notify #release-management on Slack:
Announce the release in a message in the Channel and paste the link
Direct link to slack: https://kubernetes.slack.com/messages/CJH2GBF7Y
Build & publish packages: ← Skip for prereleases
Coordinate with @google-build-admin before starting. Once the
NoMock Release is done and **before sending the announcement**
notify @google-build-admin to start building the packages.
Send notification:
Use `krel announce` using your Sendgrid token
http://bit.ly/relmanagers-handbook#sending-mail
Collect Metrics:
Run krel history --branch release-1.mm --date-from 2021-mm-dd
http://bit.ly/relmanagers-handbook#adding-data-about-the-cloud-build-jobs
Finish post-release branch creation tasks: ← Only for rc.0 release
See the Branch Creation section of the handbook for more details:
http://bit.ly/relmanagers-handbook#branch-creation
Help? Ring @release-managers on slack!
-->
## Release Steps
- [ ] Create a thread on #release-management: <!-- Paste link to slack -->
- [ ] Screenshot unhealthy release branch testgrid boards
- Mock Run
- [ ] Stage
- [ ] Release
- NoMock Run
- [ ] Stage
- [ ] Image Promotion: <!-- Paste Pull Request URL here -->
- [ ] Release
- [ ] Build & publish packages (debs & rpms) <!-- REMOVE THIS STEP FOR PRE-RELEASES -->
- [ ] Notify #release-management: <!-- Paste link to slack -->
- [ ] Send notification: <!-- Paste link to kubernetes-dev email -->
- [ ] Collect metrics and add them to the `Release steps` table below
<!-- ONLY FOR RC.0 RELEASE - [ ] Finish post-release branch creation tasks -->
## Release Tools Version
<!-- Replace with output of `krel version` -->
```
GitVersion: vM.m.p
GitCommit: 191ddd0b0b49af1adb04a98e45cebdd36cae9307
GitTreeState: clean
BuildDate: YYYY-MM-DDTHH:mm:ssZ
GoVersion: go1.16.3
Compiler: gc
Platform: linux/amd64
```
## Release Jobs History
<!-- The following table can be automatically generated using krel --history -->
| Step | Command | Link | Start | Duration | Succeeded? |
| --- | --- | --- | --- | --- | --- |
| Mock stage | `krel stage [arguments]` | | | | |
| Mock release | `krel release [arguments]` | | | | |
| Stage | `krel stage [arguments]` | | | | |
| Release | `krel release [arguments]` | | | | |
## Action Items
<!--
During the release, you may find a few things that require updates
(process changes, documentation updates, fixes to release tooling).
Please list them here.
It will be your responsibility to open issues/PRs to resolve these
issues/improvements. Keep this issue open until these action items
are complete.
- [ ] Item 1
- [ ] Item 2
- [ ] Item 3
-->
## Open Questions
<!--
During the release, you may have a few questions that you can't
answer yourself or may require group discussion.
Please list them here.
Follow up with Branch Managers/Patch Release Team/Release Engineering
subproject owners to get these questions answered.
- [ ] Item 1
- [ ] Item 2
- [ ] Item 3
-->
/milestone v1.25
/assign
/cc @kubernetes/release-managers
/priority important-soon
/kind documentation
| 1.0 | Cut v1.25.1 release - ## Scheduled to happen: Wednesday, September 14, 2022
## Release Blocking Issues
<!--
Make a list of anything preventing the release to start
(failing tests, pending image bumps, etc) and link them
to the relevant GitHub issues:
- [ ] Issue 1
- [ ] Issue 2
-->
<!--
Release Process Steps:
======================
Create a thread on #release-management on Slack to notify updates
about the release. For example,
- https://kubernetes.slack.com/archives/CJH2GBF7Y/p1635868822040300
- https://kubernetes.slack.com/archives/CJH2GBF7Y/p1631606375087500
- Add/Remove items of the checklist as you see fit
- Post bumps or issues encountered along the way
Hints and pointers to docs for each step of the release process:
Screenshot Testgrid Boards:
Use `krel testgridshot` to automatically create the screenshots
http://bit.ly/relmanagers-handbook#testgrid-screenshots
Stage and Release (mock and nomock):
Use `krel stage` && `krel release` see the handbook for more:
http://bit.ly/relmanagers-handbook#releases-management
Image promotion:
Use `kpromo pr` to create a pull request
https://sigs.k8s.io/promo-tools/docs/promotion-pull-requests.md
Notify #release-management on Slack:
Announce the release in a message in the Channel and paste the link
Direct link to slack: https://kubernetes.slack.com/messages/CJH2GBF7Y
Build & publish packages: ← Skip for prereleases
Coordinate with @google-build-admin before starting. Once the
NoMock Release is done and **before sending the announcement**
notify @google-build-admin to start building the packages.
Send notification:
Use `krel announce` using your Sendgrid token
http://bit.ly/relmanagers-handbook#sending-mail
Collect Metrics:
Run krel history --branch release-1.mm --date-from 2021-mm-dd
http://bit.ly/relmanagers-handbook#adding-data-about-the-cloud-build-jobs
Finish post-release branch creation tasks: ← Only for rc.0 release
See the Branch Creation section of the handbook for more details:
http://bit.ly/relmanagers-handbook#branch-creation
Help? Ring @release-managers on slack!
-->
## Release Steps
- [ ] Create a thread on #release-management: <!-- Paste link to slack -->
- [ ] Screenshot unhealthy release branch testgrid boards
- Mock Run
- [ ] Stage
- [ ] Release
- NoMock Run
- [ ] Stage
- [ ] Image Promotion: <!-- Paste Pull Request URL here -->
- [ ] Release
- [ ] Build & publish packages (debs & rpms) <!-- REMOVE THIS STEP FOR PRE-RELEASES -->
- [ ] Notify #release-management: <!-- Paste link to slack -->
- [ ] Send notification: <!-- Paste link to kubernetes-dev email -->
- [ ] Collect metrics and add them to the `Release steps` table below
<!-- ONLY FOR RC.0 RELEASE - [ ] Finish post-release branch creation tasks -->
## Release Tools Version
<!-- Replace with output of `krel version` -->
```
GitVersion: vM.m.p
GitCommit: 191ddd0b0b49af1adb04a98e45cebdd36cae9307
GitTreeState: clean
BuildDate: YYYY-MM-DDTHH:mm:ssZ
GoVersion: go1.16.3
Compiler: gc
Platform: linux/amd64
```
## Release Jobs History
<!-- The following table can be automatically generated using krel --history -->
| Step | Command | Link | Start | Duration | Succeeded? |
| --- | --- | --- | --- | --- | --- |
| Mock stage | `krel stage [arguments]` | | | | |
| Mock release | `krel release [arguments]` | | | | |
| Stage | `krel stage [arguments]` | | | | |
| Release | `krel release [arguments]` | | | | |
## Action Items
<!--
During the release, you may find a few things that require updates
(process changes, documentation updates, fixes to release tooling).
Please list them here.
It will be your responsibility to open issues/PRs to resolve these
issues/improvements. Keep this issue open until these action items
are complete.
- [ ] Item 1
- [ ] Item 2
- [ ] Item 3
-->
## Open Questions
<!--
During the release, you may have a few questions that you can't
answer yourself or may require group discussion.
Please list them here.
Follow up with Branch Managers/Patch Release Team/Release Engineering
subproject owners to get these questions answered.
- [ ] Item 1
- [ ] Item 2
- [ ] Item 3
-->
/milestone v1.25
/assign
/cc @kubernetes/release-managers
/priority important-soon
/kind documentation
| non_priority | cut release scheduled to happen wednesday september release blocking issues make a list of anything preventing the release to start failing tests pending image bumps etc and link them to the relevant github issues issue issue release process steps create a thread on release management on slack to notify updates about the release for example add remove items of the checklist as you see fit post bumps or issues encountered along the way hints and pointers to docs for each step of the release process screenshot testgrid boards use krel testgridshot to automatically create the screenshots stage and release mock and nomock use krel stage krel release see the handbook for more image promotion use kpromo pr to create a pull request notify release management on slack announce the release in a message in the channel and paste the link direct link to slack build publish packages ← skip for prereleases coordinate with google build admin before starting once the nomock release is done and before sending the announcement notify google build admin to start building the packages send notification use krel announce using your sendgrid token collect metrics run krel history branch release mm date from mm dd finish post release branch creation tasks ← only for rc release see the branch creation section of the handbook for more details help ring release managers on slack release steps create a thread on release management screenshot unhealthy release branch testgrid boards mock run stage release nomock run stage image promotion release build publish packages debs rpms notify release management send notification collect metrics and add them to the release steps table below release tools version gitversion vm m p gitcommit gittreestate clean builddate yyyy mm ddthh mm ssz goversion compiler gc platform linux release jobs history step command link start duration succeeded mock stage krel stage mock release krel release stage krel stage release krel release action items 
during the release you may find a few things that require updates process changes documentation updates fixes to release tooling please list them here it will be your responsibility to open issues prs to resolve these issues improvements keep this issue open until these action items are complete item item item open questions during the release you may have a few questions that you can t answer yourself or may require group discussion please list them here follow up with branch managers patch release team release engineering subproject owners to get these questions answered item item item milestone assign cc kubernetes release managers priority important soon kind documentation | 0 |
92,410 | 10,742,539,270 | IssuesEvent | 2019-10-29 22:53:26 | pulibrary/plantain | https://api.github.com/repos/pulibrary/plantain | closed | Add to the README the instructions outlining how one retrieves EADs using SVN | documentation | These need to be added in order for one to actually load the data into one's local development environment. | 1.0 | Add to the README the instructions outlining how one retrieves EADs using SVN - These need to be added in order for one to actually load the data into one's local development environment. | non_priority | add to the readme the instructions outlining how one retrieves eads using svn these need to be added in order for one to actually load the data into one s local development environment | 0 |
526,260 | 15,284,653,562 | IssuesEvent | 2021-02-23 12:32:24 | fli-iam/shanoir-ng | https://api.github.com/repos/fli-iam/shanoir-ng | closed | [Challenge/Feature] Challenge users subscription | backend feature user priority | For the next challenge, we want users to be able to subscribe and retrieve data using Shanoir.
- Users subscribe using the Shanoir interface
- He is automatically part of a study where the data is already loaded
- He can download all data with one click ?
- At the end of the challenge, the study is not accessible anymore => How to do this?
Update Study object to have a boolean value "isChallenge"
This option is only visible for admin users
=> Check on front side
=> Check on back side
If the study is a challenge, the data is accessible only between startDate and endDate
This is quite fast to do (0.5 day?)
Propose an alternate subscription button => subscribe to a challenge
Then the list of open challenges is displayed (otherwise, an error occurs stating that no challenges are currently active)
=> Select study name where isChallenge is true and endDate > now()
The subscription screen:

AccountRequestInfo is updated with a challenge field (ID of study or name ?)
Then, on the user microservice, if the user subscribed to a challenge (check the challenge field), send a RabbitMQ message to the study MS to subscribe this user to the challenge study. (can download data only? => to be tested)
When the user is accepted then created, he directly has access to this study and only this one, with the data.
This is quite fast to do too (1 or 2 days maybe because of screen definition AND out of identification SQL request)
| 1.0 | [Challenge/Feature] Challenge users subscription - For the next challenge, we want users to be able to subscribe and retrieve data using Shanoir.
- Users subscribe using the Shanoir interface
- He is automatically part of a study where the data is already loaded
- He can download all data with one click ?
- At the end of the challenge, the study is not accessible anymore => How to do this?
Update Study object to have a boolean value "isChallenge"
This option is only visible for admin users
=> Check on front side
=> Check on back side
If the study is a challenge, the data is accessible only between startDate and endDate
This is quite fast to do (0.5 day?)
Propose an alternate subscription button => subscribe to a challenge
Then the list of open challenges is displayed (otherwise, an error occurs stating that no challenges are currently active)
=> Select study name where isChallenge is true and endDate > now()
The subscription screen:

AccountRequestInfo is updated with a challenge field (ID of study or name ?)
Then, on the user microservice, if the user subscribed to a challenge (check the challenge field), send a RabbitMQ message to the study MS to subscribe this user to the challenge study. (can download data only? => to be tested)
When the user is accepted then created, he directly has access to this study and only this one, with the data.
This is quite fast to do too (1 or 2 days maybe because of screen definition AND out of identification SQL request)
| priority | challenge users subscription for the next challenge we want user to be able to subscribe and retrieve data using shanoir user subscribe using shanoir interface he is automatically part of a study where the data is already loaded he can download all data with one click at the end of the challenge the study is not accessible anymore how to do this update study object to have a boolean value ischallenge this option is only visible for admin users check on front side check on back side if the study is a challenge the data is accessible only between startdate and enddate this is quite fast to do day propose an alternate subscription button subscribe to a challenge then the list of open challenges is displayed otherwise an error occurs stating that no challenge are currently active select study name where ischallenge is true and enddate now the subscription screen accountrequestinfo is updated with a challenge field id of study or name then on user microservice if the user subscribed to a challenge check challenge field send a rabbitmqmessage to subscribe the study ms to subscribe this user to the challenge study can download data only to be tested when the user is accepted then created he directly has access to this study and only this one with the data this is quite fast to do too or days maybe because of screen definition and out of identification sql request | 1 |
398,385 | 11,740,487,845 | IssuesEvent | 2020-03-11 19:42:47 | syntax-prosody-ot/main | https://api.github.com/repos/syntax-prosody-ot/main | closed | Constraint hierarchy on interface | high priority | We've decided that we want to include all the constraints that we ever want to use on the interface, but by default hide the ones that a) aren't used in the literature and b) we don't think should be adopted. So we'd like to change the constraint layout to have the hierarchy shown in this doc:
[https://docs.google.com/document/d/1W7CMcGXB6pEFLZI9zmNLH_9w8DR99txjv24dVNp4RCs/edit](https://docs.google.com/document/d/1W7CMcGXB6pEFLZI9zmNLH_9w8DR99txjv24dVNp4RCs/edit)
"Show more..." should be a link that reveals the constraints that are shown under it in the doc outline. | 1.0 | Constraint hierarchy on interface - We've decided that we want to include all the constraints that we ever want to use on the interface, but by default hide the ones that a) aren't used in the literature and b) we don't think should be adopted. So we'd like to change the constraint layout to have the hierarchy shown in this doc:
[https://docs.google.com/document/d/1W7CMcGXB6pEFLZI9zmNLH_9w8DR99txjv24dVNp4RCs/edit](https://docs.google.com/document/d/1W7CMcGXB6pEFLZI9zmNLH_9w8DR99txjv24dVNp4RCs/edit)
"Show more..." should be a link that reveals the constraints that are shown under it in the doc outline. | priority | constraint hierarchy on interface we ve decided that we want to include all the constraints that we ever want to use on the interface but by default hide the ones that a aren t used in the literature and b we don t think should be adopted so we d like to change the constraint layout to have the hierarchy shown in this doc show more should be a link that reveals the constraints that are shown under it in the doc outline | 1 |
281,465 | 8,695,701,341 | IssuesEvent | 2018-12-04 15:45:49 | ExchangeUnion/xud | https://api.github.com/repos/ExchangeUnion/xud | closed | not connected to peer after disconnect | in progress p2p top priority | My node was up, connected to 3 peers, and had peer orders.
WiFi went down for 3 minutes, then came back up.
peers are missing (no reconnect). | 1.0 | priority | 1 |
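Restoring peers after a network drop usually needs an explicit retry loop; a minimal sketch of reconnection with exponential backoff, assuming a hypothetical `connect()` callable — this is not xud's actual P2P code:

```python
import time

def reconnect(connect, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call connect() until it succeeds, backing off 1s, 2s, 4s, ... between tries."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            sleep(base_delay * (2 ** attempt))
    raise ConnectionError(f"gave up after {max_attempts} attempts")

# Demo: a connection that fails twice before recovering (sleep is stubbed out).
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("network still down")
    return "connected"

print(reconnect(flaky, sleep=lambda _: None))  # → connected
```

The same loop can be triggered from a network-change event instead of a failed call, which is closer to what a daemon that survives a WiFi drop would do.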
269,669 | 28,960,244,646 | IssuesEvent | 2023-05-10 01:26:21 | dpteam/RK3188_TABLET | https://api.github.com/repos/dpteam/RK3188_TABLET | reopened | CVE-2017-7895 (High) detected in linux-yocto-4.12v3.1.10, linux-yocto-4.12v3.0.66 | Mend: dependency security vulnerability | ## CVE-2017-7895 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yocto-4.12v3.1.10</b>, <b>linux-yocto-4.12v3.0.66</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The NFSv2 and NFSv3 server implementations in the Linux kernel through 4.10.13 lack certain checks for the end of a buffer, which allows remote attackers to trigger pointer-arithmetic errors or possibly have unspecified other impact via crafted requests, related to fs/nfsd/nfs3xdr.c and fs/nfsd/nfsxdr.c.
<p>Publish Date: 2017-04-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-7895>CVE-2017-7895</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7895">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7895</a></p>
<p>Release Date: 2017-04-28</p>
<p>Fix Resolution: v4.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
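The flaw described in this CVE is the classic pattern of decoding a buffer without checking that each read stays inside it; an illustrative Python sketch of the defensive version (not the kernel's actual XDR code):

```python
def read_u32(buf: bytes, offset: int):
    """Read a big-endian 32-bit integer, refusing to read past the buffer end."""
    if offset < 0 or offset + 4 > len(buf):
        raise ValueError(f"truncated buffer: need 4 bytes at offset {offset}")
    return int.from_bytes(buf[offset:offset + 4], "big"), offset + 4

data = (258).to_bytes(4, "big") + b"\x00\x00"   # 6 bytes total
value, next_offset = read_u32(data, 0)
print(value, next_offset)  # → 258 4

# A crafted request that claims more data than it carries is rejected:
try:
    read_u32(data, 4)   # only 2 bytes remain past offset 4
except ValueError as exc:
    print("rejected:", exc)
```

Checking the bound before every read (rather than trusting a length field from the request) is what the fixed kernel code does as well.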
786,756 | 27,663,861,238 | IssuesEvent | 2023-03-12 20:37:42 | neuland-ingolstadt/neuland.app | https://api.github.com/repos/neuland-ingolstadt/neuland.app | opened | Users previously logged in as guests are not seeing all functions | bug high priority | If you log in as guest and dismiss the install prompt, the current dashboard configuration is written to storage. If you then log out and log in using your account, you will only see the functionality that is available to guests. The additional functionality can only be restored by going into the dashboard settings and resetting the dashboard. | 1.0 | priority | 1 |
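One way to fix this class of bug is to derive the dashboard from the current role on every login, keeping only the saved entries that role allows; a hedged sketch with hypothetical entry names (not neuland.app's actual settings code):

```python
GUEST_ENTRIES = ["timetable", "food"]
USER_ENTRIES = GUEST_ENTRIES + ["exams", "library"]

def effective_dashboard(saved, role):
    """Merge a saved layout with whatever the current role is allowed to see."""
    allowed = USER_ENTRIES if role == "user" else GUEST_ENTRIES
    kept = [e for e in saved if e in allowed]        # drop entries the role lost
    gained = [e for e in allowed if e not in kept]   # restore entries the role gained
    return kept + gained

# A layout saved while browsing as guest, reused after a real login:
print(effective_dashboard(["food", "timetable"], "user"))
```

With this merge, a guest-saved layout no longer hides the account-only entries after a real login, and a user-saved layout degrades gracefully back to the guest set.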
751,004 | 26,227,716,815 | IssuesEvent | 2023-01-04 20:23:24 | Ore-Design/Ore-3D-Reports-Changelog | https://api.github.com/repos/Ore-Design/Ore-3D-Reports-Changelog | closed | Bug: Crash when Child is Added while Parent Tile is Selected [1.6.1] | bug in progress high priority | Add Fuse>Delete Placeholder Child>Add child (allows you to actually select the shape code)>CRASH | 1.0 | priority | 1 |
87,804 | 3,758,305,181 | IssuesEvent | 2016-03-14 08:11:16 | atifaziz/NCrontab | https://api.github.com/repos/atifaziz/NCrontab | closed | Strong-name assembly | enhancement Priority-Medium | Please sign the ncrontab assembly in nuget! I'm currently signing and maintaining my own version because I need to reference it in a signed assembly. Thanks :)
---
Originally reported on Google Code with ID 8
Reported by @phrosty on 2013-06-27 21:54:38
| 1.0 | priority | 1 |
425,172 | 12,336,676,981 | IssuesEvent | 2020-05-14 13:56:11 | TheCodeXTeam/Kandahar-IHS | https://api.github.com/repos/TheCodeXTeam/Kandahar-IHS | closed | Buggs in database queries | For: Database Priority: Low Status: Pending Type: Bug | - `Staff's Experience` should be changed to an attachment.
- A field for `Maktoob` should be added to `Staff`.
- A `KankorID` field should be added to `Student`.
- An `ActiveDegree` field should be added to Staff degrees.
- `Second Chance` should be added.
- `ApplicationLetter` for marks should be added.
| 1.0 | priority | 1 |
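The field additions requested above can be written as plain `ALTER TABLE` migrations; a sketch using SQLite from Python, where only the table and column names come from the bullets — the TEXT types are guesses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Staff (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE Student (id INTEGER PRIMARY KEY)")

# Column names taken from the issue; TEXT is an assumption about the types.
migrations = [
    "ALTER TABLE Staff ADD COLUMN Maktoob TEXT",
    "ALTER TABLE Staff ADD COLUMN ActiveDegree TEXT",
    "ALTER TABLE Student ADD COLUMN KankorID TEXT",
]
for stmt in migrations:
    conn.execute(stmt)

staff_cols = [row[1] for row in conn.execute("PRAGMA table_info(Staff)")]
print(staff_cols)  # → ['id', 'Maktoob', 'ActiveDegree']
```

Running each statement once, in order, is enough for a one-off schema change; a real migration tool would also record which statements have already been applied.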
662,583 | 22,144,900,926 | IssuesEvent | 2022-06-03 10:54:42 | StatisticsNZ/simplevis | https://api.github.com/repos/StatisticsNZ/simplevis | closed | pointrange & hhpointrange: rename xmiddle_var as x_var | high priority 6.3.0 breaking | Too long and makes it difficult to read | 1.0 | priority | 1 |
1,232 | 3,088,553,870 | IssuesEvent | 2015-08-25 17:07:29 | servo/servo | https://api.github.com/repos/servo/servo | closed | mach build-cef could not compile embedding | A-infrastructure | I imagine this may just be a temporary problem but I can build servo itself but not ./mach build-cef.
This is on Gentoo amd64. I don't imagine that matters much, as servo uses a snapshot of rust and not my natively installed version, as far as I am aware.
```
./mach env
export PATH=/root/servo/.servo/rust/7b7fc67dd453c470a48dbdcf64693a93293c9ab0/rustc-1.4.0-dev-x86_64-unknown-linux-gnu/rustc/bin:/root/servo/.servo/rust/7b7fc67dd453c470a48dbdcf64693a93293c9ab0/rustc-1.4.0-dev-x86_64-unknown-linux-gnu/bin:/root/servo/.servo/cargo/2015-08-20/cargo/bin:/root/servo/.servo/cargo/2015-08-20/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin:/usr/x86_64-pc-linux-gnu/gcc-bin/4.9.2
export LD_LIBRARY_PATH=/root/servo/.servo/rust/7b7fc67dd453c470a48dbdcf64693a93293c9ab0/rustc-1.4.0-dev-x86_64-unknown-linux-gnu/rustc/lib:/root/servo/.servo/rust/7b7fc67dd453c470a48dbdcf64693a93293c9ab0/rustc-1.4.0-dev-x86_64-unknown-linux-gnu/lib:
```
```
Compiling embedding v0.0.1 (file:///root/servo/ports/cef)
browser.rs:230:9: 244:10 error: the trait `core::iter::Iterator` is not implemented for the type `&core::cell::Ref<'_, collections::vec::Vec<interfaces::cef_browser::CefBrowser>>` [E0277]
browser.rs:230 for browser in &browsers.borrow() {
browser.rs:231 if browser.downcast().callback_executed.get() == false {
browser.rs:232 browser_callback_after_created(browser.clone());
browser.rs:233 }
browser.rs:234 let mut events = match browser.downcast().window {
browser.rs:235 Some(ref win) => win.wait_events(),
...
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:230:9: 244:10 help: run `rustc --explain E0277` to see a detailed explanation
browser.rs:230:9: 244:10 note: `&core::cell::Ref<'_, collections::vec::Vec<interfaces::cef_browser::CefBrowser>>` is not an iterator; maybe try calling `.iter()` or a similar method
browser.rs:230 for browser in &browsers.borrow() {
browser.rs:231 if browser.downcast().callback_executed.get() == false {
browser.rs:232 browser_callback_after_created(browser.clone());
browser.rs:233 }
browser.rs:234 let mut events = match browser.downcast().window {
browser.rs:235 Some(ref win) => win.wait_events(),
...
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:230:9: 244:10 error: the trait `core::iter::Iterator` is not implemented for the type `&core::cell::Ref<'_, collections::vec::Vec<interfaces::cef_browser::CefBrowser>>` [E0277]
browser.rs:230 for browser in &browsers.borrow() {
browser.rs:231 if browser.downcast().callback_executed.get() == false {
browser.rs:232 browser_callback_after_created(browser.clone());
browser.rs:233 }
browser.rs:234 let mut events = match browser.downcast().window {
browser.rs:235 Some(ref win) => win.wait_events(),
...
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:230:9: 244:10 help: run `rustc --explain E0277` to see a detailed explanation
browser.rs:230:9: 244:10 note: `&core::cell::Ref<'_, collections::vec::Vec<interfaces::cef_browser::CefBrowser>>` is not an iterator; maybe try calling `.iter()` or a similar method
browser.rs:230 for browser in &browsers.borrow() {
browser.rs:231 if browser.downcast().callback_executed.get() == false {
browser.rs:232 browser_callback_after_created(browser.clone());
browser.rs:233 }
browser.rs:234 let mut events = match browser.downcast().window {
browser.rs:235 Some(ref win) => win.wait_events(),
...
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:230:9: 244:10 error: the trait `core::iter::Iterator` is not implemented for the type `&core::cell::Ref<'_, collections::vec::Vec<interfaces::cef_browser::CefBrowser>>` [E0277]
browser.rs:230 for browser in &browsers.borrow() {
browser.rs:231 if browser.downcast().callback_executed.get() == false {
browser.rs:232 browser_callback_after_created(browser.clone());
browser.rs:233 }
browser.rs:234 let mut events = match browser.downcast().window {
browser.rs:235 Some(ref win) => win.wait_events(),
...
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:230:9: 244:10 help: run `rustc --explain E0277` to see a detailed explanation
browser.rs:230:9: 244:10 note: `&core::cell::Ref<'_, collections::vec::Vec<interfaces::cef_browser::CefBrowser>>` is not an iterator; maybe try calling `.iter()` or a similar method
browser.rs:230 for browser in &browsers.borrow() {
browser.rs:231 if browser.downcast().callback_executed.get() == false {
browser.rs:232 browser_callback_after_created(browser.clone());
browser.rs:233 }
browser.rs:234 let mut events = match browser.downcast().window {
browser.rs:235 Some(ref win) => win.wait_events(),
...
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:230:9: 244:10 error: the trait `core::iter::Iterator` is not implemented for the type `&core::cell::Ref<'_, collections::vec::Vec<interfaces::cef_browser::CefBrowser>>` [E0277]
browser.rs:230 for browser in &browsers.borrow() {
browser.rs:231 if browser.downcast().callback_executed.get() == false {
browser.rs:232 browser_callback_after_created(browser.clone());
browser.rs:233 }
browser.rs:234 let mut events = match browser.downcast().window {
browser.rs:235 Some(ref win) => win.wait_events(),
...
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:230:9: 244:10 help: run `rustc --explain E0277` to see a detailed explanation
browser.rs:230:9: 244:10 note: `&core::cell::Ref<'_, collections::vec::Vec<interfaces::cef_browser::CefBrowser>>` is not an iterator; maybe try calling `.iter()` or a similar method
browser.rs:230 for browser in &browsers.borrow() {
browser.rs:231 if browser.downcast().callback_executed.get() == false {
browser.rs:232 browser_callback_after_created(browser.clone());
browser.rs:233 }
browser.rs:234 let mut events = match browser.downcast().window {
browser.rs:235 Some(ref win) => win.wait_events(),
...
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:235:38: 235:51 error: the type of this value must be known in this context
browser.rs:235 Some(ref win) => win.wait_events(),
^~~~~~~~~~~~~
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
browser.rs:239:22: 239:34 error: the type of this value must be known in this context
browser.rs:239 match events.pop() {
^~~~~~~~~~~~
note: in expansion of for loop expansion
browser.rs:230:9: 244:10 note: expansion site
note: in expansion of closure expansion
browser.rs:229:19: 245:6 note: expansion site
error: aborting due to 6 previous errors
Could not compile `embedding`.
To learn more, run the command again with --verbose.
CEF build completed in 5.75s
```
| 1.0 | non_priority | 0 |
84,506 | 3,667,507,253 | IssuesEvent | 2016-02-20 01:25:47 | docker/docker | https://api.github.com/repos/docker/docker | closed | Panic on log rotation when running `docker logs -f xxxx` | area/logging kind/bug priority/P1 | Reproduce:
```
docker run -d --name=test --log-opt max-size=500 --log-opt max-file=5 busybox sh -c "sleep 10;yes X|head -c 200"
docker logs -f test
```
Daemon log:
```
DEBU[23526] GET /v1.22/containers/logs/logs?follow=1&stderr=1&stdout=1&tail=all
DEBU[23526] logs: begin stream
DEBU[23526] waiting for events logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
ERRO[23527] error watching log file for modifications: stat /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log: no such file or directory
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
ERRO[23527] Error streaming logs: stat /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log: no such file or directory
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
ERRO[23527] error watching log file for modifications: open /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log: no such file or directory
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
ERRO[23527] error watching log file for modifications: open /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log: no such file or directory
DEBU[23527] watch for /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log closed
panic: send on closed channel
goroutine 980 [running]:
github.com/docker/docker/pkg/filenotify.(*filePoller).sendEvent(0xc82145a000, 0xc82179652a, 0x49, 0x2, 0xc8215780c0, 0x0, 0x0)
/usr/src/docker/.gopath/src/github.com/docker/docker/pkg/filenotify/poller.go:128 +0x164
github.com/docker/docker/pkg/filenotify.(*filePoller).watch(0xc82145a000, 0xc8207f54a0, 0x7fd26a65b648, 0xc8214d0000, 0xc8215780c0)
/usr/src/docker/.gopath/src/github.com/docker/docker/pkg/filenotify/poller.go:198 +0x676
created by github.com/docker/docker/pkg/filenotify.(*filePoller).Add
/usr/src/docker/.gopath/src/github.com/docker/docker/pkg/filenotify/poller.go:69 +0x37e
goroutine 1 [chan receive, 392 minutes]:
main.(*DaemonCli).CmdDaemon(0xc8203ad760, 0xc82000a0c0, 0x8, 0x8, 0x0, 0x0)
/usr/src/docker/docker/daemon.go:305 +0x20a7
reflect.callMethod(0xc8206e0de0, 0xc82093fc78)
/usr/local/go/src/reflect/value.go:628 +0x1fc
reflect.methodValueCall(0xc82000a0c0, 0x8, 0x8, 0x1, 0xc8206e0de0, 0x0, 0x0, 0xc8206e0de0, 0x0, 0x479f34, ...)
/usr/local/go/src/reflect/asm_amd64.s:29 +0x36
github.com/docker/docker/cli.(*Cli).Run(0xc8206e0d50, 0xc82000a0b0, 0x9, 0x9, 0x0, 0x0)
/usr/src/docker/.gopath/src/github.com/docker/docker/cli/cli.go:89 +0x383
main.main()
/usr/src/docker/docker/docker.go:63 +0x43c
goroutine 17 [syscall, 392 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1721 +0x1
```
docker version:
```
Client:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:27:08 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:27:08 2016
OS/Arch: linux/amd64
```
docker info:
```
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 232
Server Version: 1.10.1
Storage Driver: aufs
Root Dir: /data1/docker/aufs
Backing Filesystem: extfs
Dirs: 242
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.13.0-76-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.703 GiB
Name: sky
ID: HJRN:G4UP:MRFS:DPGA:UETY:4533:2KPV:EIXV:R5T7:N5IU:YQWU:TKK2
Debug mode (server): true
File Descriptors: 23
Goroutines: 59
System Time: 2016-02-15T19:32:01.93920522+08:00
EventsListeners: 0
Init SHA1: e1042dbb0bcf49bb9da188176d9a5063cdb92a01
Init Path: /usr/lib/docker/dockerinit
Docker Root Dir: /data1/docker
WARNING: No swap limit support
```
| 1.0 | Panic on log rotation when running `docker logs -f xxxx` - Reproduce:
```
docker run -d --name=test --log-opt max-size=500 --log-opt max-file=5 busybox sh -c "sleep 10;yes X|head -c 200"
docker logs -f test
```
Daemon log:
```
DEBU[23526] GET /v1.22/containers/logs/logs?follow=1&stderr=1&stdout=1&tail=all
DEBU[23526] logs: begin stream
DEBU[23526] waiting for events logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
ERRO[23527] error watching log file for modifications: stat /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log: no such file or directory
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
ERRO[23527] Error streaming logs: stat /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log: no such file or directory
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
ERRO[23527] error watching log file for modifications: open /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log: no such file or directory
DEBU[23527] waiting for events logger=json-file
WARN[23527] falling back to file poller logger=json-file
ERRO[23527] error watching log file for modifications: open /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log: no such file or directory
DEBU[23527] watch for /data1/docker/containers/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646/f09b6bb01578d11ee96393cf9d99fb5ba32d54a941abf1520c1afea953980646-json.log closed
panic: send on closed channel
goroutine 980 [running]:
github.com/docker/docker/pkg/filenotify.(*filePoller).sendEvent(0xc82145a000, 0xc82179652a, 0x49, 0x2, 0xc8215780c0, 0x0, 0x0)
/usr/src/docker/.gopath/src/github.com/docker/docker/pkg/filenotify/poller.go:128 +0x164
github.com/docker/docker/pkg/filenotify.(*filePoller).watch(0xc82145a000, 0xc8207f54a0, 0x7fd26a65b648, 0xc8214d0000, 0xc8215780c0)
/usr/src/docker/.gopath/src/github.com/docker/docker/pkg/filenotify/poller.go:198 +0x676
created by github.com/docker/docker/pkg/filenotify.(*filePoller).Add
/usr/src/docker/.gopath/src/github.com/docker/docker/pkg/filenotify/poller.go:69 +0x37e
goroutine 1 [chan receive, 392 minutes]:
main.(*DaemonCli).CmdDaemon(0xc8203ad760, 0xc82000a0c0, 0x8, 0x8, 0x0, 0x0)
/usr/src/docker/docker/daemon.go:305 +0x20a7
reflect.callMethod(0xc8206e0de0, 0xc82093fc78)
/usr/local/go/src/reflect/value.go:628 +0x1fc
reflect.methodValueCall(0xc82000a0c0, 0x8, 0x8, 0x1, 0xc8206e0de0, 0x0, 0x0, 0xc8206e0de0, 0x0, 0x479f34, ...)
/usr/local/go/src/reflect/asm_amd64.s:29 +0x36
github.com/docker/docker/cli.(*Cli).Run(0xc8206e0d50, 0xc82000a0b0, 0x9, 0x9, 0x0, 0x0)
/usr/src/docker/.gopath/src/github.com/docker/docker/cli/cli.go:89 +0x383
main.main()
/usr/src/docker/docker/docker.go:63 +0x43c
goroutine 17 [syscall, 392 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1721 +0x1
```
docker version:
```
Client:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:27:08 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:27:08 2016
OS/Arch: linux/amd64
```
docker info:
```
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 232
Server Version: 1.10.1
Storage Driver: aufs
Root Dir: /data1/docker/aufs
Backing Filesystem: extfs
Dirs: 242
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.13.0-76-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.703 GiB
Name: sky
ID: HJRN:G4UP:MRFS:DPGA:UETY:4533:2KPV:EIXV:R5T7:N5IU:YQWU:TKK2
Debug mode (server): true
File Descriptors: 23
Goroutines: 59
System Time: 2016-02-15T19:32:01.93920522+08:00
EventsListeners: 0
Init SHA1: e1042dbb0bcf49bb9da188176d9a5063cdb92a01
Init Path: /usr/lib/docker/dockerinit
Docker Root Dir: /data1/docker
WARNING: No swap limit support
```
| priority | panic on log rotation when running docker logs f xxxx reproduce docker run d name test log opt max size log opt max file busybox sh c sleep yes x head c docker logs f test daemon log debu get containers logs logs follow stderr stdout tail all debu logs begin stream debu waiting for events logger json file debu waiting for events logger json file warn falling back to file poller logger json file erro error watching log file for modifications stat docker containers json log no such file or directory debu waiting for events logger json file warn falling back to file poller logger json file debu waiting for events logger json file warn falling back to file poller logger json file debu waiting for events logger json file erro error streaming logs stat docker containers json log no such file or directory warn falling back to file poller logger json file debu waiting for events logger json file warn falling back to file poller logger json file debu waiting for events logger json file warn falling back to file poller logger json file debu waiting for events logger json file warn falling back to file poller logger json file debu waiting for events logger json file warn falling back to file poller logger json file debu waiting for events logger json file warn falling back to file poller logger json file erro error watching log file for modifications open docker containers json log no such file or directory debu waiting for events logger json file warn falling back to file poller logger json file erro error watching log file for modifications open docker containers json log no such file or directory debu watch for docker containers json log closed panic send on closed channel goroutine github com docker docker pkg filenotify filepoller sendevent usr src docker gopath src github com docker docker pkg filenotify poller go github com docker docker pkg filenotify filepoller watch usr src docker gopath src github com docker docker pkg filenotify poller go created by 
github com docker docker pkg filenotify filepoller add usr src docker gopath src github com docker docker pkg filenotify poller go goroutine main daemoncli cmddaemon usr src docker docker daemon go reflect callmethod usr local go src reflect value go reflect methodvaluecall usr local go src reflect asm s github com docker docker cli cli run usr src docker gopath src github com docker docker cli cli go main main usr src docker docker docker go goroutine runtime goexit usr local go src runtime asm s docker version client version api version go version git commit built thu feb os arch linux server version api version go version git commit built thu feb os arch linux docker info containers running paused stopped images server version storage driver aufs root dir docker aufs backing filesystem extfs dirs supported false execution driver native logging driver json file plugins volume local network null host bridge kernel version generic operating system ubuntu lts ostype linux architecture cpus total memory gib name sky id hjrn mrfs dpga uety eixv yqwu debug mode server true file descriptors goroutines system time eventslisteners init init path usr lib docker dockerinit docker root dir docker warning no swap limit support | 1 |
155,111 | 5,949,236,550 | IssuesEvent | 2017-05-26 13:48:01 | tendermint/ethermint | https://api.github.com/repos/tendermint/ethermint | closed | Unlock/Password Flag doesn't work | Difficulty: Medium Priority: High Status: Available Type: Bug | Ethermint version: `0.1.0-unstable-b5127d3f Ethereum/1.5.9-stable`
I want to unlock some accounts on start:
`ethermint --datadir /tmp/nnetwork --rpc --rpcapi eth,net,web3,personal,admin --unlock 9e8b5ba8a61edc2e48da13d7bdd712d31e13901e --password ./passwords`
This doesn't unlock accounts. | 1.0 | Unlock/Password Flag doesn't work - Ethermint version: `0.1.0-unstable-b5127d3f Ethereum/1.5.9-stable`
I want to unlock some accounts on start:
`ethermint --datadir /tmp/nnetwork --rpc --rpcapi eth,net,web3,personal,admin --unlock 9e8b5ba8a61edc2e48da13d7bdd712d31e13901e --password ./passwords`
This doesn't unlock accounts. | priority | unlock password flag doesn t work ethermint version unstable ethereum stable i want to unlock some accounts on start ethermint datadir tmp nnetwork rpc rpcapi eth net personal admin unlock password passwords this doesn t unlock accounts | 1 |
798,036 | 28,213,766,867 | IssuesEvent | 2023-04-05 07:23:14 | kdt-final-3/salarying-be | https://api.github.com/repos/kdt-final-3/salarying-be | closed | feat: applicant registration api | For: API Priority: High Status: Available Type: Feature | ## Description ( todo details )
applicant registration api,
also adds an email duplicate-check api
## Task ( todo )
- [x] add required enums (MilitaryEnum, Keywords)
- [x] ApplicantController
- [x] ApplicantService
- [x] dto design | 1.0 | feat: applicant registration api - ## Description ( todo details )
applicant registration api,
also adds an email duplicate-check api
## Task ( todo )
- [x] add required enums (MilitaryEnum, Keywords)
- [x] ApplicantController
- [x] ApplicantService
- [x] dto design | priority | feat applicant registration api description todo details applicant registration api also adds an email duplicate check api task todo add required enums militaryenum keywords applicantcontroller applicantservice dto design | 1 |
14,752 | 5,774,588,149 | IssuesEvent | 2017-04-28 07:40:15 | buildbot/buildbot | https://api.github.com/repos/buildbot/buildbot | closed | Build fails with 'signal 13' error | buildbot_worker py3 | When I run the same build command directly on the worker, there is no error.
Buildbot version: 0.9.6
Python: 3.5
This is the output in the web interface
```
process killed by signal 13
program finished with exit code -1
elapsedTime=74.348216
```
These are the corresponding logs of the worker:
```
2017-04-26 15:11:47+0000 [-] Unhandled Error
Traceback (most recent call last):
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/python/log.py", line 103, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/python/log.py", line 86, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
--- <exception caught here> ---
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/posixbase.py", line 597, in _doReadOrWrite
why = selectable.doRead()
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/process.py", line 291, in doRead
return fdesc.readFromFD(self.fd, self.dataReceived)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/fdesc.py", line 94, in readFromFD
callback(output)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/process.py", line 295, in dataReceived
self.proc.childDataReceived(self.name, data)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/process.py", line 961, in childDataReceived
self.proto.childDataReceived(name, data)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/protocol.py", line 604, in childDataReceived
self.outReceived(data)
File "/home/buildbot/venv/lib/python3.5/site-packages/buildbot_worker/runprocess.py", line 214, in outReceived
data, self.command.builder.unicode_encoding)
File "/home/buildbot/venv/lib/python3.5/site-packages/buildbot_worker/compat.py", line 55, in bytes2NativeString
return x.decode(encoding)
builtins.UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 69: ordinal not in range(128)
2017-04-26 15:11:47+0000 [-] command finished with signal 13, exit code None, elapsedTime: 74.348216
```
| 1.0 | Build fails with 'signal 13' error - When I run the same build command directly on the worker, there is no error.
Buildbot version: 0.9.6
Python: 3.5
This is the output in the web interface
```
process killed by signal 13
program finished with exit code -1
elapsedTime=74.348216
```
These are the corresponding logs of the worker:
```
2017-04-26 15:11:47+0000 [-] Unhandled Error
Traceback (most recent call last):
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/python/log.py", line 103, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/python/log.py", line 86, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
--- <exception caught here> ---
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/posixbase.py", line 597, in _doReadOrWrite
why = selectable.doRead()
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/process.py", line 291, in doRead
return fdesc.readFromFD(self.fd, self.dataReceived)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/fdesc.py", line 94, in readFromFD
callback(output)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/process.py", line 295, in dataReceived
self.proc.childDataReceived(self.name, data)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/process.py", line 961, in childDataReceived
self.proto.childDataReceived(name, data)
File "/home/buildbot/venv/lib/python3.5/site-packages/twisted/internet/protocol.py", line 604, in childDataReceived
self.outReceived(data)
File "/home/buildbot/venv/lib/python3.5/site-packages/buildbot_worker/runprocess.py", line 214, in outReceived
data, self.command.builder.unicode_encoding)
File "/home/buildbot/venv/lib/python3.5/site-packages/buildbot_worker/compat.py", line 55, in bytes2NativeString
return x.decode(encoding)
builtins.UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 69: ordinal not in range(128)
2017-04-26 15:11:47+0000 [-] command finished with signal 13, exit code None, elapsedTime: 74.348216
```
| non_priority | build fails with signal error when i run the same build command directly on the worker there is no error buildbot version python this is the output in the web interface process killed by signal program finished with exit code elapsedtime these are the corresponding logs of the worker unhandled error traceback most recent call last file home buildbot venv lib site packages twisted python log py line in callwithlogger return callwithcontext system lp func args kw file home buildbot venv lib site packages twisted python log py line in callwithcontext return context call ilogcontext newctx func args kw file home buildbot venv lib site packages twisted python context py line in callwithcontext return self currentcontext callwithcontext ctx func args kw file home buildbot venv lib site packages twisted python context py line in callwithcontext return func args kw file home buildbot venv lib site packages twisted internet posixbase py line in doreadorwrite why selectable doread file home buildbot venv lib site packages twisted internet process py line in doread return fdesc readfromfd self fd self datareceived file home buildbot venv lib site packages twisted internet fdesc py line in readfromfd callback output file home buildbot venv lib site packages twisted internet process py line in datareceived self proc childdatareceived self name data file home buildbot venv lib site packages twisted internet process py line in childdatareceived self proto childdatareceived name data file home buildbot venv lib site packages twisted internet protocol py line in childdatareceived self outreceived data file home buildbot venv lib site packages buildbot worker runprocess py line in outreceived data self command builder unicode encoding file home buildbot venv lib site packages buildbot worker compat py line in return x decode encoding builtins unicodedecodeerror ascii codec can t decode byte in position ordinal not in range command finished with signal exit code none 
elapsedtime | 0 |
5,187 | 7,965,607,655 | IssuesEvent | 2018-07-14 10:54:20 | exercism/cli | https://api.github.com/repos/exercism/cli | closed | Distribute completion scripts along with executable | release-process | The completion scripts currently live in the CLI website repository, which—as @QuLogic mentions [here](https://github.com/exercism/cli-www/issues/34)—doesn't make much sense.
We should update the `bin/build-all` build script to produce a directory containing:
- the executable
- the completion scripts
- a README file that explains what to do with the executable (make sure it's in your path) and the completion scripts (pick the right one, source it in your shell config)
Then tar/zip as before. | 1.0 | Distribute completion scripts along with executable - The completion scripts currently live in the CLI website repository, which—as @QuLogic mentions [here](https://github.com/exercism/cli-www/issues/34)—doesn't make much sense.
We should update the `bin/build-all` build script to produce a directory containing:
- the executable
- the completion scripts
- a README file that explains what to do with the executable (make sure it's in your path) and the completion scripts (pick the right one, source it in your shell config)
Then tar/zip as before. | non_priority | distribute completion scripts along with executable the completion scripts currently live in the cli website repository which—as qulogic mentions make much sense we should update the bin build all build script to produce a directory containing the executable the completion scripts a readme file that explains what to do with the executable make sure it s in your path and the completion scripts pick the right one source it in your shell config then tar zip as before | 0 |
59,444 | 14,396,424,626 | IssuesEvent | 2020-12-03 06:15:53 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [Security Solution] If long text name is given to the endpoint, it does not truncate and cross overs the UI | Feature:SecurityAdmin Team: SecuritySolution Team:Onboarding and Lifecycle Mgt bug impact:high v7.10.1 | **Describe the bug**
If long text name is given to the endpoint, it does not truncate and cross overs the UI
**Build Details:**
```
Elastic-Cloud : 7.10.0 BC3
Commit: ed66f41a8a60ad03426beff65ed270a743c46ac4
Build: 35817
Artifacts: https://staging.elastic.co/7.10.0-aea04452/summary-7.10.0.html
```
**Browser Details**
All
**Preconditions**
1. Elastic Cloud 7.10.0 environment should be deployed
2. Endpoint should be installed.
**Steps to Reproduce**
1. Navigate to Administration tab under Security.
2. Click on Windows host name.
3. Side bar will be open and observe that UI is broken on Administration tab.
**Impacted Test case(s)**
N/A
**Actual Result**
If long text name is given to the endpoint, it does not truncate and cross overs the UI
**Expected Result**
The long text should not be expanded, It should be truncate.
**What's Working**
Long text is correctly truncated if the user provides the hyphen.

**What's not Working**
N/A
**Screenshot:**

| True | [Security Solution] If long text name is given to the endpoint, it does not truncate and cross overs the UI - **Describe the bug**
If long text name is given to the endpoint, it does not truncate and cross overs the UI
**Build Details:**
```
Elastic-Cloud : 7.10.0 BC3
Commit: ed66f41a8a60ad03426beff65ed270a743c46ac4
Build: 35817
Artifacts: https://staging.elastic.co/7.10.0-aea04452/summary-7.10.0.html
```
**Browser Details**
All
**Preconditions**
1. Elastic Cloud 7.10.0 environment should be deployed
2. Endpoint should be installed.
**Steps to Reproduce**
1. Navigate to Administration tab under Security.
2. Click on Windows host name.
3. Side bar will be open and observe that UI is broken on Administration tab.
**Impacted Test case(s)**
N/A
**Actual Result**
If long text name is given to the endpoint, it does not truncate and cross overs the UI
**Expected Result**
The long text should not be expanded, It should be truncate.
**What's Working**
Long text is correctly truncated if the user provides the hyphen.

**What's not Working**
N/A
**Screenshot:**

| non_priority | if long text name is given to the endpoint it does not truncate and cross overs the ui describe the bug if long text name is given to the endpoint it does not truncate and cross overs the ui build details elastic cloud commit build artifacts browser details all preconditions elastic cloud environment should be deployed endpoint should be installed steps to reproduce navigate to administration tab under security click on windows host name side bar will be open and observe that ui is broken on administration tab impacted test case s n a actual result if long text name is given to the endpoint it does not truncate and cross overs the ui expected result the long text should not be expanded it should be truncate what s working long text is correctly truncated if the user provides the hyphen what s not working n a screenshot | 0 |
207,704 | 15,832,502,324 | IssuesEvent | 2021-04-06 14:41:25 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | DISABLED test_rref_context_debug_info (__main__.TensorPipeRpcTestWithSpawn) | module: flaky-tests module: rpc triaged | https://app.circleci.com/pipelines/github/pytorch/pytorch/296644/workflows/4b8369cf-a402-4ed0-9107-0a9692173dd0/jobs/12146109/steps
```
Apr 06 10:13:15 ======================================================================
Apr 06 10:13:15 ERROR [2.751s]: test_rref_context_debug_info (__main__.TensorPipeRpcTestWithSpawn)
Apr 06 10:13:15 ----------------------------------------------------------------------
Apr 06 10:13:15 Traceback (most recent call last):
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 321, in wrapper
Apr 06 10:13:15 self._join_processes(fn)
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 514, in _join_processes
Apr 06 10:13:15 self._check_return_codes(elapsed_time)
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 557, in _check_return_codes
Apr 06 10:13:15 raise RuntimeError(error)
Apr 06 10:13:15 RuntimeError: Process 2 exited with error code 10 and exception:
Apr 06 10:13:15 Traceback (most recent call last):
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:13:15 getattr(self, test_name)()
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:13:15 fn()
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:13:15 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2726, in test_rref_context_debug_info
Apr 06 10:13:15 self.assertEqual(2, int(info["num_owner_rrefs"]))
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1320, in assertEqual
Apr 06 10:13:15 super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/unittest/case.py", line 682, in assertTrue
Apr 06 10:13:15 raise self.failureException(msg)
Apr 06 10:13:15 AssertionError: False is not true : Scalars failed to compare as equal! Comparing 2 and 3 gives a difference of 1, but the allowed difference with rtol=0 and atol=0 is only 0!
Apr 06 10:13:15
Apr 06 10:13:15
```
```
Apr 06 10:09:05 test_rref_context_debug_info (__main__.TensorPipeRpcTestWithSpawn) ... WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Apr 06 10:09:05 WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Apr 06 10:09:05 WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Apr 06 10:09:05 WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Apr 06 10:09:06 ERROR:torch.testing._internal.common_distributed:Caught exception:
Apr 06 10:09:06 Traceback (most recent call last):
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:09:06 getattr(self, test_name)()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:09:06 fn()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:09:06 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2726, in test_rref_context_debug_info
Apr 06 10:09:06 self.assertEqual(2, int(info["num_owner_rrefs"]))
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1320, in assertEqual
Apr 06 10:09:06 super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/unittest/case.py", line 682, in assertTrue
Apr 06 10:09:06 raise self.failureException(msg)
Apr 06 10:09:06 AssertionError: False is not true : Scalars failed to compare as equal! Comparing 2 and 3 gives a difference of 1, but the allowed difference with rtol=0 and atol=0 is only 0!
Apr 06 10:09:06 exiting process with exit code: {MultiProcessTestCase.TEST_ERROR_EXIT_CODE}
Apr 06 10:09:06 ERROR:torch.testing._internal.common_distributed:Caught exception:
Apr 06 10:09:06 Traceback (most recent call last):
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:09:06 getattr(self, test_name)()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:09:06 fn()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:09:06 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2731, in test_rref_context_debug_info
Apr 06 10:09:06 dist.barrier()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2477, in barrier
Apr 06 10:09:06 work.wait()
Apr 06 10:09:06 RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/pair.cc:598] Connection closed by peer [172.17.0.2]:51084
Apr 06 10:09:06 exiting process with exit code: {MultiProcessTestCase.TEST_ERROR_EXIT_CODE}
Apr 06 10:09:06 ERROR:torch.testing._internal.common_distributed:Caught exception:
Apr 06 10:09:06 Traceback (most recent call last):
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:09:06 getattr(self, test_name)()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:09:06 fn()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:09:06 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2731, in test_rref_context_debug_info
Apr 06 10:09:06 dist.barrier()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2477, in barrier
Apr 06 10:09:06 work.wait()
Apr 06 10:09:06 RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/pair.cc:598] Connection closed by peer [172.17.0.2]:51344
Apr 06 10:09:06 exiting process with exit code: {MultiProcessTestCase.TEST_ERROR_EXIT_CODE}
Apr 06 10:09:06 ERROR:torch.testing._internal.common_distributed:Caught exception:
Apr 06 10:09:06 Traceback (most recent call last):
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:09:06 getattr(self, test_name)()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:09:06 fn()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:09:06 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2731, in test_rref_context_debug_info
Apr 06 10:09:06 dist.barrier()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2477, in barrier
Apr 06 10:09:06 work.wait()
Apr 06 10:09:06 RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/pair.cc:598] Connection closed by peer [172.17.0.2]:29621
Apr 06 10:09:06 exiting process with exit code: {MultiProcessTestCase.TEST_ERROR_EXIT_CODE}
Apr 06 10:09:06 Process 2 terminated with exit code 10, terminating remaining processes.
Apr 06 10:09:06 ERROR (2.751s)
``` | 1.0 | DISABLED test_rref_context_debug_info (__main__.TensorPipeRpcTestWithSpawn) - https://app.circleci.com/pipelines/github/pytorch/pytorch/296644/workflows/4b8369cf-a402-4ed0-9107-0a9692173dd0/jobs/12146109/steps
```
Apr 06 10:13:15 ======================================================================
Apr 06 10:13:15 ERROR [2.751s]: test_rref_context_debug_info (__main__.TensorPipeRpcTestWithSpawn)
Apr 06 10:13:15 ----------------------------------------------------------------------
Apr 06 10:13:15 Traceback (most recent call last):
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 321, in wrapper
Apr 06 10:13:15 self._join_processes(fn)
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 514, in _join_processes
Apr 06 10:13:15 self._check_return_codes(elapsed_time)
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 557, in _check_return_codes
Apr 06 10:13:15 raise RuntimeError(error)
Apr 06 10:13:15 RuntimeError: Process 2 exited with error code 10 and exception:
Apr 06 10:13:15 Traceback (most recent call last):
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:13:15 getattr(self, test_name)()
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:13:15 fn()
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:13:15 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2726, in test_rref_context_debug_info
Apr 06 10:13:15 self.assertEqual(2, int(info["num_owner_rrefs"]))
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1320, in assertEqual
Apr 06 10:13:15 super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Apr 06 10:13:15 File "/opt/conda/lib/python3.6/unittest/case.py", line 682, in assertTrue
Apr 06 10:13:15 raise self.failureException(msg)
Apr 06 10:13:15 AssertionError: False is not true : Scalars failed to compare as equal! Comparing 2 and 3 gives a difference of 1, but the allowed difference with rtol=0 and atol=0 is only 0!
Apr 06 10:13:15
Apr 06 10:13:15
```
```
Apr 06 10:09:05 test_rref_context_debug_info (__main__.TensorPipeRpcTestWithSpawn) ... WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Apr 06 10:09:05 WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Apr 06 10:09:05 WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Apr 06 10:09:05 WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Apr 06 10:09:06 ERROR:torch.testing._internal.common_distributed:Caught exception:
Apr 06 10:09:06 Traceback (most recent call last):
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:09:06 getattr(self, test_name)()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:09:06 fn()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:09:06 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2726, in test_rref_context_debug_info
Apr 06 10:09:06 self.assertEqual(2, int(info["num_owner_rrefs"]))
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1320, in assertEqual
Apr 06 10:09:06 super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/unittest/case.py", line 682, in assertTrue
Apr 06 10:09:06 raise self.failureException(msg)
Apr 06 10:09:06 AssertionError: False is not true : Scalars failed to compare as equal! Comparing 2 and 3 gives a difference of 1, but the allowed difference with rtol=0 and atol=0 is only 0!
Apr 06 10:09:06 exiting process with exit code: {MultiProcessTestCase.TEST_ERROR_EXIT_CODE}
Apr 06 10:09:06 ERROR:torch.testing._internal.common_distributed:Caught exception:
Apr 06 10:09:06 Traceback (most recent call last):
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:09:06 getattr(self, test_name)()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:09:06 fn()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:09:06 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2731, in test_rref_context_debug_info
Apr 06 10:09:06 dist.barrier()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2477, in barrier
Apr 06 10:09:06 work.wait()
Apr 06 10:09:06 RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/pair.cc:598] Connection closed by peer [172.17.0.2]:51084
Apr 06 10:09:06 exiting process with exit code: {MultiProcessTestCase.TEST_ERROR_EXIT_CODE}
Apr 06 10:09:06 ERROR:torch.testing._internal.common_distributed:Caught exception:
Apr 06 10:09:06 Traceback (most recent call last):
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:09:06 getattr(self, test_name)()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:09:06 fn()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:09:06 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2731, in test_rref_context_debug_info
Apr 06 10:09:06 dist.barrier()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2477, in barrier
Apr 06 10:09:06 work.wait()
Apr 06 10:09:06 RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/pair.cc:598] Connection closed by peer [172.17.0.2]:51344
Apr 06 10:09:06 exiting process with exit code: {MultiProcessTestCase.TEST_ERROR_EXIT_CODE}
Apr 06 10:09:06 ERROR:torch.testing._internal.common_distributed:Caught exception:
Apr 06 10:09:06 Traceback (most recent call last):
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 440, in run_test
Apr 06 10:09:06 getattr(self, test_name)()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 323, in wrapper
Apr 06 10:09:06 fn()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/dist_utils.py", line 91, in new_test_method
Apr 06 10:09:06 return_value = old_test_method(self, *arg, **kwargs)
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/distributed/rpc/rpc_test.py", line 2731, in test_rref_context_debug_info
Apr 06 10:09:06 dist.barrier()
Apr 06 10:09:06 File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 2477, in barrier
Apr 06 10:09:06 work.wait()
Apr 06 10:09:06 RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/pair.cc:598] Connection closed by peer [172.17.0.2]:29621
Apr 06 10:09:06 exiting process with exit code: {MultiProcessTestCase.TEST_ERROR_EXIT_CODE}
Apr 06 10:09:06 Process 2 terminated with exit code 10, terminating remaining processes.
Apr 06 10:09:06 ERROR (2.751s)
``` | non_priority | disabled test rref context debug info main tensorpiperpctestwithspawn apr apr error test rref context debug info main tensorpiperpctestwithspawn apr apr traceback most recent call last apr file opt conda lib site packages torch testing internal common distributed py line in wrapper apr self join processes fn apr file opt conda lib site packages torch testing internal common distributed py line in join processes apr self check return codes elapsed time apr file opt conda lib site packages torch testing internal common distributed py line in check return codes apr raise runtimeerror error apr runtimeerror process exited with error code and exception apr traceback most recent call last apr file opt conda lib site packages torch testing internal common distributed py line in run test apr getattr self test name apr file opt conda lib site packages torch testing internal common distributed py line in wrapper apr fn apr file opt conda lib site packages torch testing internal dist utils py line in new test method apr return value old test method self arg kwargs apr file opt conda lib site packages torch testing internal distributed rpc rpc test py line in test rref context debug info apr self assertequal int info apr file opt conda lib site packages torch testing internal common utils py line in assertequal apr super asserttrue result msg self get assert msg msg debug msg debug msg apr file opt conda lib unittest case py line in asserttrue apr raise self failureexception msg apr assertionerror false is not true scalars failed to compare as equal comparing and gives a difference of but the allowed difference with rtol and atol is only apr apr apr test rref context debug info main tensorpiperpctestwithspawn warning root this python run failed to load cuda module no module named python state gpu and amd hip module no module named python state hip will run in cpu only mode apr warning root this python run failed to load cuda module no module named python 
state gpu and amd hip module no module named python state hip will run in cpu only mode apr warning root this python run failed to load cuda module no module named python state gpu and amd hip module no module named python state hip will run in cpu only mode apr warning root this python run failed to load cuda module no module named python state gpu and amd hip module no module named python state hip will run in cpu only mode apr error torch testing internal common distributed caught exception apr traceback most recent call last apr file opt conda lib site packages torch testing internal common distributed py line in run test apr getattr self test name apr file opt conda lib site packages torch testing internal common distributed py line in wrapper apr fn apr file opt conda lib site packages torch testing internal dist utils py line in new test method apr return value old test method self arg kwargs apr file opt conda lib site packages torch testing internal distributed rpc rpc test py line in test rref context debug info apr self assertequal int info apr file opt conda lib site packages torch testing internal common utils py line in assertequal apr super asserttrue result msg self get assert msg msg debug msg debug msg apr file opt conda lib unittest case py line in asserttrue apr raise self failureexception msg apr assertionerror false is not true scalars failed to compare as equal comparing and gives a difference of but the allowed difference with rtol and atol is only apr exiting process with exit code multiprocesstestcase test error exit code apr error torch testing internal common distributed caught exception apr traceback most recent call last apr file opt conda lib site packages torch testing internal common distributed py line in run test apr getattr self test name apr file opt conda lib site packages torch testing internal common distributed py line in wrapper apr fn apr file opt conda lib site packages torch testing internal dist utils py line in new 
test method apr return value old test method self arg kwargs apr file opt conda lib site packages torch testing internal distributed rpc rpc test py line in test rref context debug info apr dist barrier apr file opt conda lib site packages torch distributed distributed py line in barrier apr work wait apr runtimeerror connection closed by peer apr exiting process with exit code multiprocesstestcase test error exit code apr error torch testing internal common distributed caught exception apr traceback most recent call last apr file opt conda lib site packages torch testing internal common distributed py line in run test apr getattr self test name apr file opt conda lib site packages torch testing internal common distributed py line in wrapper apr fn apr file opt conda lib site packages torch testing internal dist utils py line in new test method apr return value old test method self arg kwargs apr file opt conda lib site packages torch testing internal distributed rpc rpc test py line in test rref context debug info apr dist barrier apr file opt conda lib site packages torch distributed distributed py line in barrier apr work wait apr runtimeerror connection closed by peer apr exiting process with exit code multiprocesstestcase test error exit code apr error torch testing internal common distributed caught exception apr traceback most recent call last apr file opt conda lib site packages torch testing internal common distributed py line in run test apr getattr self test name apr file opt conda lib site packages torch testing internal common distributed py line in wrapper apr fn apr file opt conda lib site packages torch testing internal dist utils py line in new test method apr return value old test method self arg kwargs apr file opt conda lib site packages torch testing internal distributed rpc rpc test py line in test rref context debug info apr dist barrier apr file opt conda lib site packages torch distributed distributed py line in barrier apr work wait apr 
runtimeerror connection closed by peer apr exiting process with exit code multiprocesstestcase test error exit code apr process terminated with exit code terminating remaining processes apr error | 0 |
231,977 | 25,558,070,934 | IssuesEvent | 2022-11-30 08:38:35 | openmls/openmls | https://api.github.com/repos/openmls/openmls | opened | Improve Forward Secrecy | security | Ensure that all secrets are dropped at the earliest possible.
- [ ] `leaf_secret` in the `OpenMlsLeafNode` | True | Improve Forward Secrecy - Ensure that all secrets are dropped at the earliest possible.
- [ ] `leaf_secret` in the `OpenMlsLeafNode` | non_priority | improve forward secrecy ensure that all secrets are dropped at the earliest possible leaf secret in the openmlsleafnode | 0 |
241,993 | 18,507,644,971 | IssuesEvent | 2021-10-19 20:43:38 | vtt0001/NewPhone | https://api.github.com/repos/vtt0001/NewPhone | closed | Documentar la comprobación de sintaxis de ModeloTel | documentation | Como desarrollador necesito documentar las comprobaciones pertinentes, en este caso la correcta compilación. Este issue mejora la issue #7 , haciendo avanzar la HU #5 | 1.0 | Documentar la comprobación de sintaxis de ModeloTel - Como desarrollador necesito documentar las comprobaciones pertinentes, en este caso la correcta compilación. Este issue mejora la issue #7 , haciendo avanzar la HU #5 | non_priority | documentar la comprobación de sintaxis de modelotel como desarrollador necesito documentar las comprobaciones pertinentes en este caso la correcta compilación este issue mejora la issue haciendo avanzar la hu | 0 |
312,918 | 23,447,885,439 | IssuesEvent | 2022-08-15 21:44:05 | UnBArqDsw2022-1/2022.1_G4_FluxoAgil | https://api.github.com/repos/UnBArqDsw2022-1/2022.1_G4_FluxoAgil | closed | GoF estrutural Decorator | documentation | ## Descrição da Issue
Fazer documento de GoF estrutural Decorator
## Tasks:
- [x] Criar documento
- [x] Link para o aparecimento na wiki
## Critérios de Aceitação:
- [x] arquivo md
- [x] ortografia correta
| 1.0 | GoF estrutural Decorator - ## Descrição da Issue
Fazer documento de GoF estrutural Decorator
## Tasks:
- [x] Criar documento
- [x] Link para o aparecimento na wiki
## Critérios de Aceitação:
- [x] arquivo md
- [x] ortografia correta
| non_priority | gof estrutural decorator descrição da issue fazer documento de gof estrutural decorator tasks criar documento link para o aparecimento na wiki critérios de aceitação arquivo md ortografia correta | 0 |
685,987 | 23,473,039,773 | IssuesEvent | 2022-08-17 01:08:02 | googleapis/release-please | https://api.github.com/repos/googleapis/release-please | opened | Multiple paths for package in manifest release | type: question priority: p3 | Is it possible to have multiple paths trigger a release for a package in the `release-please-config.json`?
I'm using Turborepo, which has `apps` and `packages` directories which contain apps and libraries, respectively. I would like it if updates to a specific `package` triggered a release for an `app`.
Can this be achieved, currently?
| 1.0 | Multiple paths for package in manifest release - Is it possible to have multiple paths trigger a release for a package in the `release-please-config.json`?
I'm using Turborepo, which has `apps` and `packages` directories which contain apps and libraries, respectively. I would like it if updates to a specific `package` triggered a release for an `app`.
Can this be achieved, currently?
| priority | multiple paths for package in manifest release is it possible to have multiple paths trigger a release for a package in the release please config json i m using turborepo which has apps and packages directories which contain apps and libraries respectively i would like it if updates to a specific package triggered a release for an app can this be achieved currently | 1 |
600,599 | 18,346,186,976 | IssuesEvent | 2021-10-08 06:41:21 | AY2122S1-CS2103T-F11-2/tp | https://api.github.com/repos/AY2122S1-CS2103T-F11-2/tp | opened | Inputs for Prefixes given for `find` commands are not being validated | bug priority.Medium | ### Problem
Giving the input below will display an empty list.
```
find /s -200
```
### Expected
"-200" should be considered an invalid input for salary and an error message should be shown to let the user know that they provided an invalid input.
| 1.0 | Inputs for Prefixes given for `find` commands are not being validated - ### Problem
Giving the input below will display an empty list.
```
find /s -200
```
### Expected
"-200" should be considered an invalid input for salary and an error message should be shown to let the user know that they provided an invalid input.
| priority | inputs for prefixes given for find commands are not being validated problem giving the input below will display an empty list find s expected should be considered an invalid input for salary and an error message should be shown to let the user know that they provided an invalid input | 1 |
28,199 | 2,700,420,815 | IssuesEvent | 2015-04-04 04:28:24 | Avicus/Issues | https://api.github.com/repos/Avicus/Issues | closed | When the game starts it does not tp you to where you are supposed to spawn | bug invalid priority | - Can't join a team after it auto joins.
- Compass points to nearest player, including teammates.
- When the game starts it does not teleport you to where you are supposed to spawn, it just puts you in survival, I died because I was flying around when the game started.
Kits-
-Sonic is speed 2, too OP.
-Stomper still needs to be removed or ONLY take no fall damage
-There is nothing to work for anymore. Everyone has every kit. No point in playing. | 1.0 | When the game starts it does not tp you to where you are supposed to spawn - - Can't join a team after it auto joins.
- Compass points to nearest player, including teammates.
- When the game starts it does not teleport you to where you are supposed to spawn, it just puts you in survival, I died because I was flying around when the game started.
Kits-
-Sonic is speed 2, too OP.
-Stomper still needs to be removed or ONLY take no fall damage
-There is nothing to work for anymore. Everyone has every kit. No point in playing. | priority | when the game starts it does not tp you to where you are supposed to spawn can t join a team after it auto joins compass points to nearest player including teammates when the game starts it does not teleport you to where you are supposed to spawn it just puts you in survival i died because i was flying around when the game started kits sonic is speed too op stomper still needs to be removed or only take no fall damage there is nothing to work for anymore everyone has every kit no point in playing | 1 |
483,181 | 13,920,264,255 | IssuesEvent | 2020-10-21 10:11:12 | sButtons/sbuttons | https://api.github.com/repos/sButtons/sbuttons | closed | [BUTTON IDEA]:Social media sliding button | Priority: Low button-idea stale-issue | **1. Name of button**: Social media sliding button
**2. Description**: Social media sliding icon hover effect, a slider slides and shows the name of social media.
**3. Button Type (Animated, Special, etc...)**: Icon
**4. Will you work on it?**:Yes
| 1.0 | [BUTTON IDEA]:Social media sliding button - **1. Name of button**: Social media sliding button
**2. Description**: Social media sliding icon hover effect, a slider slides and shows the name of social media.
**3. Button Type (Animated, Special, etc...)**: Icon
**4. Will you work on it?**:Yes
| priority | social media sliding button name of button social media sliding button description social media sliding icon hover effect a slider slides and shows the name of social media button type animated special etc icon will you work on it yes | 1 |
677,253 | 23,156,218,681 | IssuesEvent | 2022-07-29 13:16:15 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.youtube.com - video or audio doesn't play | browser-firefox priority-critical os-mac engine-gecko | <!-- @browser: Firefox 103.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:103.0) Gecko/20100101 Firefox/103.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/108053 -->
**URL**: https://www.youtube.com/watch?v=q2VZdkxlJIY
**Browser / Version**: Firefox 103.0
**Operating System**: Mac OS X 10.15
**Tested Another Browser**: Yes Chrome
**Problem type**: Video or audio doesn't play
**Description**: The video or audio does not play
**Steps to Reproduce**:
Some videos do not play in Firefox, whereas they play perfectly well in Chrome (same time, same video)
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/7/8dcd2617-7ecb-4015-aa5e-99938cdaf977.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.youtube.com - video or audio doesn't play - <!-- @browser: Firefox 103.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:103.0) Gecko/20100101 Firefox/103.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/108053 -->
**URL**: https://www.youtube.com/watch?v=q2VZdkxlJIY
**Browser / Version**: Firefox 103.0
**Operating System**: Mac OS X 10.15
**Tested Another Browser**: Yes Chrome
**Problem type**: Video or audio doesn't play
**Description**: The video or audio does not play
**Steps to Reproduce**:
Some videos do not play in Firefox, whereas they play perfectly well in Chrome (same time, same video)
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/7/8dcd2617-7ecb-4015-aa5e-99938cdaf977.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | video or audio doesn t play url browser version firefox operating system mac os x tested another browser yes chrome problem type video or audio doesn t play description the video or audio does not play steps to reproduce some videos do not play in firefox whereas they play perfectly well in chrome same time same video view the screenshot img alt screenshot src browser configuration none from with ❤️ | 1 |
39,976 | 6,793,097,549 | IssuesEvent | 2017-11-01 05:12:57 | Microsoft/pxt | https://api.github.com/repos/Microsoft/pxt | opened | Reference docs structure | documentation | Reorganize all the reference docs under Reference:
Reference
• Blocks
• JavaScript
• Types
This will make it easier for users to find content, simplify the TOC, and have 1 central place to point people to find all the reference documentation. | 1.0 | Reference docs structure - Reorganize all the reference docs under Reference:
Reference
• Blocks
• JavaScript
• Types
This will make it easier for users to find content, simplify the TOC, and have 1 central place to point people to find all the reference documentation. | non_priority | reference docs structure reorganize all the reference docs under reference reference • blocks • javascript • types this will make it easier for users to find content simplify the toc and have central place to point people to find all the reference documentation | 0 |
394,035 | 11,628,518,657 | IssuesEvent | 2020-02-27 18:29:47 | wynn-rj/sword-and-bored-game | https://api.github.com/repos/wynn-rj/sword-and-bored-game | opened | Refactor Resource System | blocker enhancement high-priority strategy view | - Make resource system modular for other resource types
- Make resource(s) persistent throughout scenes and save functionality | 1.0 | Refactor Resource System - - Make resource system modular for other resource types
- Make resource(s) persistent throughout scenes and save functionality | priority | refactor resource system make resource system modular for other resource types make resource s persistent throughout scenes and save functionality | 1 |
557,459 | 16,509,693,857 | IssuesEvent | 2021-05-26 01:22:42 | eclipse-ee4j/glassfish | https://api.github.com/repos/eclipse-ee4j/glassfish | closed | ability to have asadmin emit output that can then be scripted back as create statements | Component: configuration ERR: Assignee Priority: Major Stale Type: New Feature | as much as asadmin's export-sync-bundle will snapshot an entire domain's configuration, and allow that entire config to be used elsewhere, this is not so good if you only need to move bits and pieces of a config, as might be the case when you want to update an existing production config with the new jdbc resources from the development platform.
it would be great if the list or export commands could be told what to list, and have them effectively generate the asadmin commands one would then use on the other system for those resources to create them. | 1.0 | ability to have asadmin emit output that can then be scripted back as create statements - as much as asadmin's export-sync-bundle will snapshot an entire domain's configuration, and allow that entire config to be used elsewhere, this is not so good if you only need to move bits and pieces of a config, as might be the case when you want to update an existing production config with the new jdbc resources from the development platform.
it would be great if the list or export commands could be told what to list, and have them effectively generate the asadmin commands one would then use on the other system for those resources to create them. | priority | ability to have asadmin emit output that can then be scripted back as create statements as much as asadmin s export sync bundle will snapshot an entire domain s configuration and allow that entire config to be used elsewhere this is not so good if you only need to move bits and pieces of a config as might be the case when you want to update an existing production config with the new jdbc resources from the development platform it would be great if the list or export commands could be told what to list and have them effectively generate the asadmin commands one would then use on the other system for those resources to create them | 1 |
30,950 | 11,860,425,944 | IssuesEvent | 2020-03-25 14:52:44 | TreyM-WSS/Struts2Example | https://api.github.com/repos/TreyM-WSS/Struts2Example | opened | CVE-2013-2251 (High) detected in struts2-core-2.3.1.2.jar | security vulnerability | ## CVE-2013-2251 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>struts2-core-2.3.1.2.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Path to dependency file: /tmp/ws-scm/Struts2Example/pom.xml</p>
<p>Path to vulnerable library: 20200325141938/downloadResource_d50070f4-1279-48e5-8366-a2fcdc735d7c/20200325145151/struts2-core-2.3.1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **struts2-core-2.3.1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TreyM-WSS/Struts2Example/commit/20d7b6df1b42979ec0033b63405d552a70d92b0c">20d7b6df1b42979ec0033b63405d552a70d92b0c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Struts 2.0.0 through 2.3.15 allows remote attackers to execute arbitrary OGNL expressions via a parameter with a crafted (1) action:, (2) redirect:, or (3) redirectAction: prefix.
<p>Publish Date: 2013-07-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-2251>CVE-2013-2251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>9.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2251">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2251</a></p>
<p>Release Date: 2013-07-20</p>
<p>Fix Resolution: 2.3.16</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.struts","packageName":"struts2-core","packageVersion":"2.3.1.2","isTransitiveDependency":false,"dependencyTree":"org.apache.struts:struts2-core:2.3.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.16"}],"vulnerabilityIdentifier":"CVE-2013-2251","vulnerabilityDetails":"Apache Struts 2.0.0 through 2.3.15 allows remote attackers to execute arbitrary OGNL expressions via a parameter with a crafted (1) action:, (2) redirect:, or (3) redirectAction: prefix.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-2251","cvss2Severity":"high","cvss2Score":"9.3","extraData":{}}</REMEDIATE> --> | True | CVE-2013-2251 (High) detected in struts2-core-2.3.1.2.jar - ## CVE-2013-2251 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>struts2-core-2.3.1.2.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Path to dependency file: /tmp/ws-scm/Struts2Example/pom.xml</p>
<p>Path to vulnerable library: 20200325141938/downloadResource_d50070f4-1279-48e5-8366-a2fcdc735d7c/20200325145151/struts2-core-2.3.1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **struts2-core-2.3.1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TreyM-WSS/Struts2Example/commit/20d7b6df1b42979ec0033b63405d552a70d92b0c">20d7b6df1b42979ec0033b63405d552a70d92b0c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Struts 2.0.0 through 2.3.15 allows remote attackers to execute arbitrary OGNL expressions via a parameter with a crafted (1) action:, (2) redirect:, or (3) redirectAction: prefix.
<p>Publish Date: 2013-07-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-2251>CVE-2013-2251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>9.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2251">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2251</a></p>
<p>Release Date: 2013-07-20</p>
<p>Fix Resolution: 2.3.16</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.struts","packageName":"struts2-core","packageVersion":"2.3.1.2","isTransitiveDependency":false,"dependencyTree":"org.apache.struts:struts2-core:2.3.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.16"}],"vulnerabilityIdentifier":"CVE-2013-2251","vulnerabilityDetails":"Apache Struts 2.0.0 through 2.3.15 allows remote attackers to execute arbitrary OGNL expressions via a parameter with a crafted (1) action:, (2) redirect:, or (3) redirectAction: prefix.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-2251","cvss2Severity":"high","cvss2Score":"9.3","extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in core jar cve high severity vulnerability vulnerable library core jar apache struts path to dependency file tmp ws scm pom xml path to vulnerable library downloadresource core jar dependency hierarchy x core jar vulnerable library found in head commit a href vulnerability details apache struts through allows remote attackers to execute arbitrary ognl expressions via a parameter with a crafted action redirect or redirectaction prefix publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails apache struts through allows remote attackers to execute arbitrary ognl expressions via a parameter with a crafted action redirect or redirectaction prefix vulnerabilityurl | 0 |
188,836 | 15,171,786,885 | IssuesEvent | 2021-02-13 05:16:28 | engineerjoe440/ElectricPy | https://api.github.com/repos/engineerjoe440/ElectricPy | closed | Incorrect formula for resistance/reactance in powerimpedance function in __init__.py . | bug documentation | According to [Wikipedia](https://en.wikipedia.org/wiki/AC_power#Calculations_and_equations), using R = (V ** 2 )/P is only valid when PF = 1 (purely resistive load). If you call the function powerimpedance(S=1111.11,PF=0.9,V=120), it'll return R=14.4 (120²/1000), instead of 11.66 (aprox).
I am currently learning about Circuits and searching for python libraries that could help me during my tests. I'll be very happy to lend a hand if possible in this project.
Best regards. | 1.0 | Incorrect formula for resistance/reactance in powerimpedance function in __init__.py . - According to [Wikipedia](https://en.wikipedia.org/wiki/AC_power#Calculations_and_equations), using R = (V ** 2 )/P is only valid when PF = 1 (purely resistive load). If you call the function powerimpedance(S=1111.11,PF=0.9,V=120), it'll return R=14.4 (120²/1000), instead of 11.66 (aprox).
I am currently learning about Circuits and searching for python libraries that could help me during my tests. I'll be very happy to lend a hand if possible in this project.
Best regards. | non_priority | incorrect formula for resistance reactance in powerimpedance function in init py according to using r v p is only valid when pf purely resistive load if you call the function powerimpedance s pf v it ll return r instead of aprox i am currently learning about circuits and searching for python libraries that could help me during my tests i ll be very happy to lend a hand if possible in this project best regards | 0 |
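As a side note on the ElectricPy row above (not part of the dataset itself): a minimal Python sketch of the arithmetic the reporter describes, assuming the constant-apparent-power model from the Wikipedia section they link (Z = V²/S, R = Z·PF). This is an illustration of the reporter's expected numbers, not ElectricPy's actual implementation.

```python
# Numbers from the issue: powerimpedance(S=1111.11, PF=0.9, V=120)
V, S, PF = 120.0, 1111.11, 0.9

Z = V**2 / S                  # impedance magnitude: ~12.96 ohm
R = Z * PF                    # resistance: ~11.66 ohm (the value the reporter expects)
X = Z * (1.0 - PF**2) ** 0.5  # reactance: ~5.65 ohm

P = S * PF                    # real power: ~1000 W
R_wrong = V**2 / P            # ~14.4 ohm; only equals R when PF == 1

print(round(Z, 2), round(R, 2), round(X, 2), round(R_wrong, 2))  # prints: 12.96 11.66 5.65 14.4
```

The R value of ~11.66 matches the "11.66 (aprox)" figure cited in the issue, while V²/P reproduces the 14.4 the reporter says the function wrongly returns.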
49,205 | 13,445,706,161 | IssuesEvent | 2020-09-08 11:51:22 | chaitanya00/aem-wknd | https://api.github.com/repos/chaitanya00/aem-wknd | opened | CVE-2015-9251 (Medium) detected in jquery-2.2.4.tgz, jquery-1.7.1.min.js | security vulnerability | ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.2.4.tgz</b>, <b>jquery-1.7.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.2.4.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-2.2.4.tgz">https://registry.npmjs.org/jquery/-/jquery-2.2.4.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/aem-wknd/package.json</p>
<p>Path to vulnerable library: /aem-wknd/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/aem-wknd/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /aem-wknd/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/chaitanya00/aem-wknd/commit/3f4c2902a45eb04bc7915c408df14545aa90511c">3f4c2902a45eb04bc7915c408df14545aa90511c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-9251 (Medium) detected in jquery-2.2.4.tgz, jquery-1.7.1.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.2.4.tgz</b>, <b>jquery-1.7.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.2.4.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-2.2.4.tgz">https://registry.npmjs.org/jquery/-/jquery-2.2.4.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/aem-wknd/package.json</p>
<p>Path to vulnerable library: /aem-wknd/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/aem-wknd/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /aem-wknd/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/chaitanya00/aem-wknd/commit/3f4c2902a45eb04bc7915c408df14545aa90511c">3f4c2902a45eb04bc7915c408df14545aa90511c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in jquery tgz jquery min js cve medium severity vulnerability vulnerable libraries jquery tgz jquery min js jquery tgz javascript library for dom operations library home page a href path to dependency file tmp ws scm aem wknd package json path to vulnerable library aem wknd node modules jquery package json dependency hierarchy x jquery tgz vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm aem wknd node modules vm browserify example run index html path to vulnerable library aem wknd node modules vm browserify example run index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource | 0 |
277,239 | 24,055,636,629 | IssuesEvent | 2022-09-16 16:34:21 | gradle/gradle | https://api.github.com/repos/gradle/gradle | closed | No test reports are written (even for already-completed tests!) if the build system interrupts the Gradle process | in:testing a:bug stale | <!--- Provide a brief summary of the issue in the title above -->
### Expected Behavior
Even if the Gradle process is interrupted, I still expect to see the reports for all the tests which were already completed before the interrupt. And ideally, also a report for the test which was running _during_ the interrupt, so that we can see which one might have been the culprit.
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
### Current Behavior
Currently (Gradle 4.7 & 4.8), when a test is cancelled, the entire process exits without writing any reports. The odd thing is that this is the case for all the previously-run tests too, even though you would expect these to have already been written to disk.
The lack of tests then causes knock-on issues, because having no test reports is not an expected condition.
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
### Context
Sometimes a test run might take a longer time to run than it's supposed to. It is typical to configure the build system (in our case, Jenkins) to cancel tasks if they take longer than a given timeout, and this is typically done by sending signals to the process.
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
### Steps to Reproduce (for bugs)
Demo project: https://github.com/trejkaz/gradle_test_interrupt
Usage:
* Start the build
* Witness that some tests pass/fail (if no tests run before the test with the delay in it, try a new project, it could just be bad RNG.)
* When the tests pause for a while, interrupt the Gradle process
<!--- Provide a self-contained example project (as an attached archive or a Github project). -->
<!--- In the rare cases where this is infeasible, we will also accept a detailed set of instructions. -->
### Your Environment
This occurs for all platforms we currently test on.
I tried to produce a build scan, but it turns out `./gradlew build --scan` has the same problem - when I interrupt the process, the build scan is not produced. Oops!
<!--- Include as many relevant details about the environment you experienced the bug in -->
<!--- A build scan `https://scans.gradle.com/get-started` is ideal -->
| 1.0 | No test reports are written (even for already-completed tests!) if the build system interrupts the Gradle process - <!--- Provide a brief summary of the issue in the title above -->
### Expected Behavior
Even if the Gradle process is interrupted, I still expect to see the reports for all the tests which were already completed before the interrupt. And ideally, also a report for the test which was running _during_ the interrupt, so that we can see which one might have been the culprit.
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
### Current Behavior
Currently (Gradle 4.7 & 4.8), when a test is cancelled, the entire process exits without writing any reports. The odd thing is that this is the case for all the previously-run tests too, even though you would expect these to have already been written to disk.
The lack of tests then causes knock-on issues, because having no test reports is not an expected condition.
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
### Context
Sometimes a test run might take a longer time to run than it's supposed to. It is typical to configure the build system (in our case, Jenkins) to cancel tasks if they take longer than a given timeout, and this is typically done by sending signals to the process.
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
### Steps to Reproduce (for bugs)
Demo project: https://github.com/trejkaz/gradle_test_interrupt
Usage:
* Start the build
* Witness that some tests pass/fail (if no tests run before the test with the delay in it, try a new project, it could just be bad RNG.)
* When the tests pause for a while, interrupt the Gradle process
<!--- Provide a self-contained example project (as an attached archive or a Github project). -->
<!--- In the rare cases where this is infeasible, we will also accept a detailed set of instructions. -->
### Your Environment
This occurs for all platforms we currently test on.
I tried to produce a build scan, but it turns out `./gradlew build --scan` has the same problem - when I interrupt the process, the build scan is not produced. Oops!
<!--- Include as many relevant details about the environment you experienced the bug in -->
<!--- A build scan `https://scans.gradle.com/get-started` is ideal -->
| non_priority | no test reports are written even for already completed tests if the build system interrupts the gradle process expected behavior even if the gradle process is interrupted i still expect to see the reports for all the tests which were already completed before the interrupt and ideally also a report for the test which was running during the interrupt so that we can see which one might have been the culprit current behavior currently gradle when a test is cancelled the entire process exits without writing any reports the odd thing is that this is the case for all the previously run tests too even though you would expect these to have already been written to disk the lack of tests then causes kick on issues because having no test reports is not an expected condition context sometimes a test run might take a longer time to run than it s supposed to it is typical to configure the build system in our case jenkins to cancel tasks if they take longer than a given timeout and this is typically done by sending signals to the process steps to reproduce for bugs demo project usage start the build witness that some tests pass fail if no tests run before the test with the delay in it try a new project it could just be bad rng when the tests pause for a while interrupt the gradle process your environment this occurs for all platforms we currently test on i tried to produce a build scan but it turns out gradlew build scan has the same problem when i interrupt the process the build scan is not produced oops | 0 |
18,342 | 4,259,097,925 | IssuesEvent | 2016-07-11 09:45:12 | madebymany/sir-trevor-js | https://api.github.com/repos/madebymany/sir-trevor-js | closed | Icons broken? | Documentation | I tried running index.html in examples, but the icons seem broken for some reason (I temporarily replaced the paths to sir-trevor.js, sir-trevor.css and sir-trevor-icons as they seemed to be wrong).
But this is what I see:
<img width="625" alt="screen shot 2016-01-20 at 13 30 42" src="https://cloud.githubusercontent.com/assets/3842168/12447793/9c38f136-bf7a-11e5-83e3-0759672b642f.png">
There is no way to add new blocks, and there is no border when hovering over blocks. Chrome also gives a bunch of "Unsafe attempt to load URL" errors, for example:
```
Unsafe attempt to load URL file:///Users/daniel/web/sir-trevor-js/src/icons/sir-trevor-icons.svg#move from frame with URL file:///Users/daniel/web/sir-trevor-js/examples/index.html. 'file:' URLs are treated as unique security origins.
```
Any idea if I'm doing something wrong or what could cause this? | 1.0 | Icons broken? - I tried running index.html in examples, but the icons seem broken for some reason (I temporarily replaced the paths to sir-trevor.js, sir-trevor.css and sir-trevor-icons as they seemed to be wrong).
But this is what I see:
<img width="625" alt="screen shot 2016-01-20 at 13 30 42" src="https://cloud.githubusercontent.com/assets/3842168/12447793/9c38f136-bf7a-11e5-83e3-0759672b642f.png">
There is no way to add new blocks, and there is no border when hovering over blocks. Chrome also gives a bunch of "Unsafe attempt to load URL" errors, for example:
```
Unsafe attempt to load URL file:///Users/daniel/web/sir-trevor-js/src/icons/sir-trevor-icons.svg#move from frame with URL file:///Users/daniel/web/sir-trevor-js/examples/index.html. 'file:' URLs are treated as unique security origins.
```
Any idea if I'm doing something wrong or what could cause this? | non_priority | icons broken i tried running index html in examples but the icons seem broken for some reason i temporarily replaced the paths to sir trevor js sir trevor css and sir trevor icons as they seemed to be wrong but this is what i see img width alt screen shot at src there is no way to add new blocks and there is no border when hovering over blocks chrome also gives a bunch of unsafe attempt to load url errors for example unsafe attempt to load url file users daniel web sir trevor js src icons sir trevor icons svg move from frame with url file users daniel web sir trevor js examples index html file urls are treated as unique security origins any idea if i m doing something wrong or what could cause this | 0 |
14,538 | 3,863,429,414 | IssuesEvent | 2016-04-08 09:22:52 | brockuniera/cs373-idb | https://api.github.com/repos/brockuniera/cs373-idb | closed | Add Docker commands information to Hosting section of the Wiki | Documentation | Need to add specific commands that we are using to the hosting section of the wiki so other developers would be able to easily use them. | 1.0 | Add Docker commands information to Hosting section of the Wiki - Need to add specific commands that we are using to the hosting section of the wiki so other developers would be able to easily use them. | non_priority | add docker commands information to hosting section of the wiki need to add specific commands that we are using to the hosting section of the wiki so other developers would be able to easily use them | 0 |
268,110 | 20,258,070,387 | IssuesEvent | 2022-02-15 02:42:06 | UnBArqDsw2021-2/2021.2_G2_Ki-Limpinho | https://api.github.com/repos/UnBArqDsw2021-2/2021.2_G2_Ki-Limpinho | closed | Activity diagram | documentation Modelagem dinâmica sprint3 | ### Description:
Produce an artifact belonging to static modeling, following good practices from some bibliographic reference.
### Tasks:
- [ ] Preparation of the document with the flows of the platform's main features (there are at least 4)
### Acceptance criteria:
It must have classes, attributes, and methods. The review must be done by a pair responsible for a static or agile modeling artifact. | 1.0 | Activity diagram - ### Description:
Produce an artifact belonging to static modeling, following good practices from some bibliographic reference.
### Tasks:
- [ ] Preparation of the document with the flows of the platform's main features (there are at least 4)
### Acceptance criteria:
It must have classes, attributes, and methods. The review must be done by a pair responsible for a static or agile modeling artifact. | non_priority | activity diagram description produce an artifact belonging to static modeling following good practices from some bibliographic reference tasks preparation of the document with the flows of the platform s main features there are at least acceptance criteria it must have classes attributes and methods the review must be done by a pair responsible for a static or agile modeling artifact | 0 |
655,739 | 21,707,102,257 | IssuesEvent | 2022-05-10 10:36:55 | logseq/logseq | https://api.github.com/repos/logseq/logseq | closed | 批量copy blocks或批量copy block refs粘贴都会导致结果异常,并报错 | priority-A data-stability fixed-next-release ✅ editor:copy-paste | ### What happened?
批量copy blocks或批量copy block refs结果异常,并报错
### Reproduce the Bug
1. 批量copy blocks或批量copy block refs (至少2个或更多)
2.粘贴到别处的某个区块
3. 这些blocks或block refs会被分别加上“-”前缀,然后粘贴到同一个区块的本体里 (原来是每个block(或block ref)分别新建一个独立的block),并且报错。
### Expected Behavior
原来是为每个block(或block ref)分别新建一个独立的block
### Screenshots

### Desktop Platform Information
osx 12.3.1,0.66 nightly 220422
### Mobile Platform Information
_No response_
### Additional Context
_No response_ | 1.0 | 批量copy blocks或批量copy block refs粘贴都会导致结果异常,并报错 - ### What happened?
批量copy blocks或批量copy block refs结果异常,并报错
### Reproduce the Bug
1. 批量copy blocks或批量copy block refs (至少2个或更多)
2.粘贴到别处的某个区块
3. 这些blocks或block refs会被分别加上“-”前缀,然后粘贴到同一个区块的本体里 (原来是每个block(或block ref)分别新建一个独立的block),并且报错。
### Expected Behavior
原来是为每个block(或block ref)分别新建一个独立的block
### Screenshots

### Desktop Platform Information
osx 12.3.1,0.66 nightly 220422
### Mobile Platform Information
_No response_
### Additional Context
_No response_ | priority | 批量copy blocks或批量copy block refs粘贴都会导致结果异常,并报错 what happened 批量copy blocks或批量copy block refs结果异常,并报错 reproduce the bug 批量copy blocks或批量copy block refs ( ) 粘贴到别处的某个区块 这些blocks或block refs会被分别加上“ ”前缀,然后粘贴到同一个区块的本体里 原来是每个block(或block ref)分别新建一个独立的block ,并且报错。 expected behavior 原来是为每个block(或block ref)分别新建一个独立的block screenshots desktop platform information osx , nightly mobile platform information no response additional context no response | 1 |
114,740 | 11,855,279,036 | IssuesEvent | 2020-03-25 03:42:12 | acord-robotics/stellarios | https://api.github.com/repos/acord-robotics/stellarios | opened | Welcome to ACORD/Arduino | Stellarios | Epic branches documentation git jekyll portal website | t5_x8awv, ACORDRobotics
Welcome to ACORD/Arduino | Stellarios by LimoDroid
Link: http://acord.software/stellarios/hydejack/2019-08-26-arduino-repo/
t3_fois8d | 1.0 | Welcome to ACORD/Arduino | Stellarios - t5_x8awv, ACORDRobotics
Welcome to ACORD/Arduino | Stellarios by LimoDroid
Link: http://acord.software/stellarios/hydejack/2019-08-26-arduino-repo/
t3_fois8d | non_priority | welcome to acord arduino stellarios acordrobotics welcome to acord arduino stellarios by limodroid link | 0 |
223,756 | 17,138,498,989 | IssuesEvent | 2021-07-13 06:51:55 | Deltares/Wflow.jl | https://api.github.com/repos/Deltares/Wflow.jl | closed | document using multithreading for wflow_cli | documentation | The `JULIA_NUM_THREADS` environment variable needs to be set to the number of threads you want, otherwise it defaults to single-threaded. | 1.0 | document using multithreading for wflow_cli - The `JULIA_NUM_THREADS` environment variable needs to be set to the number of threads you want, otherwise it defaults to single-threaded. | non_priority | document using multithreading for wflow cli the julia num threads environment variable needs to be set to the number of threads you want otherwise it defaults to single threaded | 0 |
87,329 | 3,749,714,702 | IssuesEvent | 2016-03-11 01:26:05 | mozilla/MozDef | https://api.github.com/repos/mozilla/MozDef | opened | Couldtrail ES template is missing proper interpretation for apiVersion | category:bug priority:medium | Cloudtrail logs sent to ES occasionally fail complaining about apiVersion:
2016-03-10 09:07:46,885 /home/mozdef/envs/mozdef/cron/cloudtrail2mozdef.py ERROR Error handling log record {u'eventVersion': u'1.03', u'eventID': u'18df484f-a527-4f93-928f-58b305a44ff6', u'eventTime': u'2016-03-10T16:52:03Z', 'utctimestamp': '2016-03-10T16:52:03+00:00', u'requestParameters': None, u'eventType': u'AwsApiCall', u'responseElements': None, u'awsRegion': u'us-east-1', u'eventName': u'ListDistributions', u'userIdentity': {u'userName': u'CloudHealthUser', u'principalId': u'snip', u'accessKeyId': u'snip', u'type': u'IAMUser', u'arn': u'arn:aws:iam::236517346949:user/CloudHealthUser', u'accountId': u'236517346949'}, u'eventSource': u'cloudfront.amazonaws.com', u'requestID': u'6a38ac88-e6e0-11e5-a5ae-ef6a11084c32', u'apiVersion': u'2015_07_27', u'userAgent': u'aws-sdk-ruby2/2.1.36 jruby/1.9.3 java cloudhealth', u'sourceIPAddress': u'54.146.86.215', u'recipientAccountId': u'236517346949'} IllegalArgumentException[Invalid format: "2015_07
_27" is malformed at "_07_27"];
ES assumes it's a date. Auto mapping snippet:
"cloudtrail" : {
"dynamic_templates" : [ {
"string_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match" : "*",
"match_mapping_type" : "string"
}
} ],
"properties" : {
"apiVersion" : {
"type" : "date",
"format" : "dateOptionalTime"
}, ....
Need to craft a default template for cloudtrail that appropriately interprets the field. | 1.0 | Couldtrail ES template is missing proper interpretation for apiVersion - Cloudtrail logs sent to ES occasionally fail complaining about apiVersion:
2016-03-10 09:07:46,885 /home/mozdef/envs/mozdef/cron/cloudtrail2mozdef.py ERROR Error handling log record {u'eventVersion': u'1.03', u'eventID': u'18df484f-a527-4f93-928f-58b305a44ff6', u'eventTime': u'2016-03-10T16:52:03Z', 'utctimestamp': '2016-03-10T16:52:03+00:00', u'requestParameters': None, u'eventType': u'AwsApiCall', u'responseElements': None, u'awsRegion': u'us-east-1', u'eventName': u'ListDistributions', u'userIdentity': {u'userName': u'CloudHealthUser', u'principalId': u'snip', u'accessKeyId': u'snip', u'type': u'IAMUser', u'arn': u'arn:aws:iam::236517346949:user/CloudHealthUser', u'accountId': u'236517346949'}, u'eventSource': u'cloudfront.amazonaws.com', u'requestID': u'6a38ac88-e6e0-11e5-a5ae-ef6a11084c32', u'apiVersion': u'2015_07_27', u'userAgent': u'aws-sdk-ruby2/2.1.36 jruby/1.9.3 java cloudhealth', u'sourceIPAddress': u'54.146.86.215', u'recipientAccountId': u'236517346949'} IllegalArgumentException[Invalid format: "2015_07
_27" is malformed at "_07_27"];
ES assumes it's a date. Auto mapping snippet:
"cloudtrail" : {
"dynamic_templates" : [ {
"string_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match" : "*",
"match_mapping_type" : "string"
}
} ],
"properties" : {
"apiVersion" : {
"type" : "date",
"format" : "dateOptionalTime"
}, ....
Need to craft a default template for cloudtrail that appropriately interprets the field. | priority | couldtrail es template is missing proper interpretation for apiversion cloudtrail logs sent to es occasionally fail complaining about apiversion home mozdef envs mozdef cron py error error handling log record u eventversion u u eventid u u eventtime u utctimestamp u requestparameters none u eventtype u awsapicall u responseelements none u awsregion u us east u eventname u listdistributions u useridentity u username u cloudhealthuser u principalid u snip u accesskeyid u snip u type u iamuser u arn u arn aws iam user cloudhealthuser u accountid u u eventsource u cloudfront amazonaws com u requestid u u apiversion u u useragent u aws sdk jruby java cloudhealth u sourceipaddress u u recipientaccountid u illegalargumentexception invalid format is malformed at es assumes it s a date auto mapping snippet cloudtrail dynamic templates string fields mapping index not analyzed doc values true type string match match mapping type string properties apiversion type date format dateoptionaltime need to craft a default template for cloudtrail that appropriately interprets the field | 1 |
278,299 | 21,058,309,417 | IssuesEvent | 2022-04-01 06:59:59 | HanJiyao/ped | https://api.github.com/repos/HanJiyao/ped | opened | Lack of explaination for who is the target of add contact | type.DocumentationBug severity.Low | Need to state that the target patient’s emergency is the one matching NRIC
<!--session: 1648793230418-c1979e97-0dca-4c06-9d4a-95b111feaacb-->
<!--Version: Web v3.4.2--> | 1.0 | Lack of explaination for who is the target of add contact - Need to state that the target patient’s emergency is the one matching NRIC
<!--session: 1648793230418-c1979e97-0dca-4c06-9d4a-95b111feaacb-->
<!--Version: Web v3.4.2--> | non_priority | lack of explaination for who is the target of add contact need to state that the target patient’s emergency is the one matching nric | 0 |
12,904 | 5,266,023,643 | IssuesEvent | 2017-02-04 07:34:15 | mitchellh/packer | https://api.github.com/repos/mitchellh/packer | closed | Determine AMI ID during packer build | builder/amazon enhancement post-processor/atlas | Hi,
Is there some way to determine the AMI ID of a build while packer is still running?
I have a template that is packer push'd to Atlas. One of the builds in it creates an Amazon AMI (see below for the abbreviated template). At the place indicated, how can I get the newly-created AMI ID to add to the "description" field?
Thanks,
-Tennis
```
{
"variables": {
"atlas_name": "nextgxdx/centos7dev",
********* SNIP *******
},
"push": {
"name": "{{user `atlas_name` }}",
"vcs": false
},
"builders": [
{
"type": "amazon-ebs",
****** SNIP ****
}
],
"provisioners": [
{
**********SNIP*************
}
],
"post-processors": [
[
{
"type": "atlas",
"artifact": "{{ user `atlas_name` }}",
"artifact_type": "vagrant.box",
"only": [
"amazon-ebs"
],
"metadata": {
"provider": "aws",
"version": "{{ user `atlas_version` }}",
"description": "{{ user `atlas_description` }}" <----- HOW TO ADD AMI ID HERE???
}
}
]
]
}
```
| 1.0 | Determine AMI ID during packer build - Hi,
Is there some way to determine the AMI ID of a build while packer is still running?
I have a template that is packer push'd to Atlas. One of the builds in it creates an Amazon AMI (see below for the abbreviated template). At the place indicated, how can I get the newly-created AMI ID to add to the "description" field?
Thanks,
-Tennis
```
{
"variables": {
"atlas_name": "nextgxdx/centos7dev",
********* SNIP *******
},
"push": {
"name": "{{user `atlas_name` }}",
"vcs": false
},
"builders": [
{
"type": "amazon-ebs",
****** SNIP ****
}
],
"provisioners": [
{
**********SNIP*************
}
],
"post-processors": [
[
{
"type": "atlas",
"artifact": "{{ user `atlas_name` }}",
"artifact_type": "vagrant.box",
"only": [
"amazon-ebs"
],
"metadata": {
"provider": "aws",
"version": "{{ user `atlas_version` }}",
"description": "{{ user `atlas_description` }}" <----- HOW TO ADD AMI ID HERE???
}
}
]
]
}
```
| non_priority | determine ami id during packer build hi is there some way to determine the ami id of a build while packer is still running i have a template that is packer push d to atlas one of the builds in it creates an amazon ami see below for the abbreviated template at the place indicated how can i get the newly created ami id to add to the description field thanks tennis variables atlas name nextgxdx snip push name user atlas name vcs false builders type amazon ebs snip provisioners snip post processors type atlas artifact user atlas name artifact type vagrant box only amazon ebs metadata provider aws version user atlas version description user atlas description how to add ami id here | 0 |
499,116 | 14,440,695,814 | IssuesEvent | 2020-12-07 15:52:57 | zeebe-io/zeebe | https://api.github.com/repos/zeebe-io/zeebe | closed | Print zbctl status as json | Priority: Mid Scope: clients/go Status: Needs Review Type: Enhancement good first issue | **Is your feature request related to a problem? Please describe.**
Currently we print the topology of the cluster via `zbctl status` and it looks like this:
```
[zell scripts/ ns:zell-chaos]$ k exec -it zell-chaos-zeebe-gateway-5577d6958-r6bfp -- zbctl status --insecure
Cluster size: 3
Partitions count: 3
Replication factor: 3
Gateway version: 0.26.0-SNAPSHOT
Brokers:
Broker 0 - zell-chaos-zeebe-0.zell-chaos-zeebe.zell-chaos.svc.cluster.local:26501
Version: 0.26.0-SNAPSHOT
Partition 1 : Follower, Healthy
Partition 2 : Follower, Healthy
Partition 3 : Follower, Healthy
Broker 1 - zell-chaos-zeebe-1.zell-chaos-zeebe.zell-chaos.svc.cluster.local:26501
Version: 0.26.0-SNAPSHOT
Partition 1 : Follower, Healthy
Partition 2 : Follower, Healthy
Partition 3 : Follower, Healthy
Broker 2 - zell-chaos-zeebe-2.zell-chaos-zeebe.zell-chaos.svc.cluster.local:26501
Version: 0.26.0-SNAPSHOT
Partition 1 : Leader, Healthy
Partition 2 : Leader, Healthy
Partition 3 : Leader, Healthy
```
This is totally fine for a human beeing, but not if you what to process it.
**Describe the solution you'd like**
It would be cool if we had a flag or something where we can tell `zbctl` to print the topology as json.
**Describe alternatives you've considered**
idk - self parsing
**Additional context**
I need this quite often in building new chaos experiments where I want to know who is the leader for a certain partition or who is taking part of the partition. For our benchmarks it currently works kind of if the partitions are well distributed, like three nodes and three partitions, then you can do something like:
```sh
state=$1 # example Leader
partition=${2:-3}
# For cluster size 3 and replication factor 3
# we know the following partition matrix
# partition \ node 0 1 2
# 1 L F F
# 2 F L F
# 3 F F L
# etc.
# This means broker 1, 2 or 3 participates on partition 3
# BE AWARE the topology above is just an example and the leader can every node participating node.
index=$(($(echo "$topology" \
| grep "Partition $partition" \
| grep -n "$state" -m 1 \
| sed 's/\([0-9]*\).*/\1/') - 1))
pod=$(echo "$pod" | sed 's/\(.*\)\([0-9]\)$/\1/')
pod="$pod$index"
```
but this doesn't work if you have not well distributed partitions like 5 nodes and 3 partition, or 5 nodes 8 partition and replication 3 etc. | 1.0 | Print zbctl status as json - **Is your feature request related to a problem? Please describe.**
Currently we print the topology of the cluster via `zbctl status` and it looks like this:
```
[zell scripts/ ns:zell-chaos]$ k exec -it zell-chaos-zeebe-gateway-5577d6958-r6bfp -- zbctl status --insecure
Cluster size: 3
Partitions count: 3
Replication factor: 3
Gateway version: 0.26.0-SNAPSHOT
Brokers:
Broker 0 - zell-chaos-zeebe-0.zell-chaos-zeebe.zell-chaos.svc.cluster.local:26501
Version: 0.26.0-SNAPSHOT
Partition 1 : Follower, Healthy
Partition 2 : Follower, Healthy
Partition 3 : Follower, Healthy
Broker 1 - zell-chaos-zeebe-1.zell-chaos-zeebe.zell-chaos.svc.cluster.local:26501
Version: 0.26.0-SNAPSHOT
Partition 1 : Follower, Healthy
Partition 2 : Follower, Healthy
Partition 3 : Follower, Healthy
Broker 2 - zell-chaos-zeebe-2.zell-chaos-zeebe.zell-chaos.svc.cluster.local:26501
Version: 0.26.0-SNAPSHOT
Partition 1 : Leader, Healthy
Partition 2 : Leader, Healthy
Partition 3 : Leader, Healthy
```
This is totally fine for a human beeing, but not if you what to process it.
**Describe the solution you'd like**
It would be cool if we had a flag or something where we can tell `zbctl` to print the topology as json.
**Describe alternatives you've considered**
idk - self parsing
**Additional context**
I need this quite often in building new chaos experiments where I want to know who is the leader for a certain partition or who is taking part of the partition. For our benchmarks it currently works kind of if the partitions are well distributed, like three nodes and three partitions, then you can do something like:
```sh
state=$1 # example Leader
partition=${2:-3}
# For cluster size 3 and replication factor 3
# we know the following partition matrix
# partition \ node 0 1 2
# 1 L F F
# 2 F L F
# 3 F F L
# etc.
# This means broker 1, 2 or 3 participates on partition 3
# BE AWARE the topology above is just an example and the leader can every node participating node.
index=$(($(echo "$topology" \
| grep "Partition $partition" \
| grep -n "$state" -m 1 \
| sed 's/\([0-9]*\).*/\1/') - 1))
pod=$(echo "$pod" | sed 's/\(.*\)\([0-9]\)$/\1/')
pod="$pod$index"
```
but this doesn't work if you have not well distributed partitions like 5 nodes and 3 partition, or 5 nodes 8 partition and replication 3 etc. | priority | print zbctl status as json is your feature request related to a problem please describe currently we print the topology of the cluster via zbctl status and it looks like this k exec it zell chaos zeebe gateway zbctl status insecure cluster size partitions count replication factor gateway version snapshot brokers broker zell chaos zeebe zell chaos zeebe zell chaos svc cluster local version snapshot partition follower healthy partition follower healthy partition follower healthy broker zell chaos zeebe zell chaos zeebe zell chaos svc cluster local version snapshot partition follower healthy partition follower healthy partition follower healthy broker zell chaos zeebe zell chaos zeebe zell chaos svc cluster local version snapshot partition leader healthy partition leader healthy partition leader healthy this is totally fine for a human beeing but not if you what to process it describe the solution you d like it would be cool if we had a flag or something where we can tell zbctl to print the topology as json describe alternatives you ve considered idk self parsing additional context i need this quite often in building new chaos experiments where i want to know who is the leader for a certain partition or who is taking part of the partition for our benchmarks it currently works kind of if the partitions are well distributed like three nodes and three partitions then you can do something like sh state example leader partition for cluster size and replication factor we know the following partition matrix partition node l f f f l f f f l etc this means broker or participates on partition be aware the topology above is just an example and the leader can every node participating node index echo topology grep partition partition grep n state m sed s pod echo pod sed s pod pod index but this doesn t work if you have not well distributed partitions like nodes and partition or nodes partition and replication etc | 1 |
245,128 | 18,772,897,873 | IssuesEvent | 2021-11-07 06:08:38 | pda-mit/PDA-Website | https://api.github.com/repos/pda-mit/PDA-Website | closed | Create Issues & Pull request template | documentation | Since the repo is made, it's currently void of Issue templates and Pull Requests templates to use. Create one, so that new developers can coordinate better | 1.0 | Create Issues & Pull request template - Since the repo is made, it's currently void of Issue templates and Pull Requests templates to use. Create one, so that new developers can coordinate better | non_priority | create issues pull request template since the repo is made it s currently void of issue templates and pull requests templates to use create one so that new developers can coordinate better | 0 |
68,610 | 9,203,708,565 | IssuesEvent | 2019-03-08 03:45:45 | naren1991/blurb | https://api.github.com/repos/naren1991/blurb | opened | Provide R Help documentation | documentation | Add documentation for all functions and classes to be accessed through R help. | 1.0 | Provide R Help documentation - Add documentation for all functions and classes to be accessed through R help. | non_priority | provide r help documentation add documentation for all functions and classes to be accessed through r help | 0 |
164,774 | 26,022,905,432 | IssuesEvent | 2022-12-21 14:07:21 | iotaledger/firefly | https://api.github.com/repos/iotaledger/firefly | opened | [Task]: Add voting power to balance breakdown | context:governance type:ux:design type:ui | ### Task description
Storage Deposit Breakdown should be extended with voting power. Does it make sense to rename it to balance breakdown.
### Requirements
TBD
### Creation checklist
- [X] I have assigned this task to the correct people
- [X] I have added the most appropriate labels
- [X] I have linked the correct milestone and/or project | 1.0 | [Task]: Add voting power to balance breakdown - ### Task description
Storage Deposit Breakdown should be extended with voting power. Does it make sense to rename it to balance breakdown.
### Requirements
TBD
### Creation checklist
- [X] I have assigned this task to the correct people
- [X] I have added the most appropriate labels
- [X] I have linked the correct milestone and/or project | non_priority | add voting power to balance breakdown task description storage deposit breakdown should be extended with voting power does it make sense to rename it to balance breakdown requirements tbd creation checklist i have assigned this task to the correct people i have added the most appropriate labels i have linked the correct milestone and or project | 0 |
809,451 | 30,193,173,539 | IssuesEvent | 2023-07-04 17:25:03 | BuilderIO/qwik | https://api.github.com/repos/BuilderIO/qwik | closed | [🐞] aria-current not updating | bug triage Priority | ### Which component is affected?
Qwik Runtime
### Describe the bug
I'm building a `PageLink` component that set `aria-current` to page when the current page is the one described by the href.
It works well except that the first anchor is not updated on page change.
I suppose it has to do with the `useLocation`, but I'm not sure.
```jsx
import { component$, Slot, useComputed$ } from '@builder.io/qwik';
import { useLocation, Link } from '@builder.io/qwik-city';
import type { LinkProps } from '@builder.io/qwik-city';
const normalize = (pathname: string) => {
return pathname[pathname.length - 1] === '/'
? pathname.slice(0, -1)
: pathname;
};
export const PageLink = component$((props: LinkProps) => {
const { url } = useLocation();
if (!props.href) throw new Error('PageLink should have a href');
const href = props.href;
const current = useComputed$(() =>
normalize(href) === normalize(url.pathname) ? 'page' : undefined
);
return (
<Link {...props} aria-current={current.value}>
<Slot />
</Link>
);
});
```
### Reproduction
https://stackblitz.com/edit/qwik-starter-frgqvy?file=src/components/page-link.tsx
### Steps to reproduce
_No response_
### System Info
```shell
System:
OS: Windows 10 10.0.22623
CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Memory: 16.69 GB / 31.71 GB
Binaries:
Node: 18.16.0 - C:\Program Files\nodejs\node.EXE
Yarn: 3.3.1 - C:\Program Files\nodejs\yarn.CMD
npm: 9.5.1 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: Spartan (44.22621.1095.0), Chromium (112.0.1722.68)
Internet Explorer: 11.0.22621.1
npmPackages:
@builder.io/qwik: 1.0.0 => 1.0.0
@builder.io/qwik-city: 1.0.0 => 1.0.0
undici: 5.22.0 => 5.22.0
vite: 4.3.3 => 4.3.3
```
### Additional Information
_No response_ | 1.0 | [🐞] aria-current not updating - ### Which component is affected?
Qwik Runtime
### Describe the bug
I'm building a `PageLink` component that set `aria-current` to page when the current page is the one described by the href.
It works well except that the first anchor is not updated on page change.
I suppose it has to do with the `useLocation`, but I'm not sure.
```jsx
import { component$, Slot, useComputed$ } from '@builder.io/qwik';
import { useLocation, Link } from '@builder.io/qwik-city';
import type { LinkProps } from '@builder.io/qwik-city';
const normalize = (pathname: string) => {
return pathname[pathname.length - 1] === '/'
? pathname.slice(0, -1)
: pathname;
};
export const PageLink = component$((props: LinkProps) => {
const { url } = useLocation();
if (!props.href) throw new Error('PageLink should have a href');
const href = props.href;
const current = useComputed$(() =>
normalize(href) === normalize(url.pathname) ? 'page' : undefined
);
return (
<Link {...props} aria-current={current.value}>
<Slot />
</Link>
);
});
```
### Reproduction
https://stackblitz.com/edit/qwik-starter-frgqvy?file=src/components/page-link.tsx
### Steps to reproduce
_No response_
### System Info
```shell
System:
OS: Windows 10 10.0.22623
CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Memory: 16.69 GB / 31.71 GB
Binaries:
Node: 18.16.0 - C:\Program Files\nodejs\node.EXE
Yarn: 3.3.1 - C:\Program Files\nodejs\yarn.CMD
npm: 9.5.1 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: Spartan (44.22621.1095.0), Chromium (112.0.1722.68)
Internet Explorer: 11.0.22621.1
npmPackages:
@builder.io/qwik: 1.0.0 => 1.0.0
@builder.io/qwik-city: 1.0.0 => 1.0.0
undici: 5.22.0 => 5.22.0
vite: 4.3.3 => 4.3.3
```
### Additional Information
_No response_ | priority | aria current not updating which component is affected qwik runtime describe the bug i m building a pagelink component that set aria current to page when the current page is the one described by the href it works well except that the first anchor is not updated on page change i suppose it has to do with the uselocation but i m not sure jsx import component slot usecomputed from builder io qwik import uselocation link from builder io qwik city import type linkprops from builder io qwik city const normalize pathname string return pathname pathname slice pathname export const pagelink component props linkprops const url uselocation if props href throw new error pagelink should have a href const href props href const current usecomputed normalize href normalize url pathname page undefined return reproduction steps to reproduce no response system info shell system os windows cpu intel r core tm cpu memory gb gb binaries node c program files nodejs node exe yarn c program files nodejs yarn cmd npm c program files nodejs npm cmd browsers edge spartan chromium internet explorer npmpackages builder io qwik builder io qwik city undici vite additional information no response | 1 |
60,094 | 3,120,764,711 | IssuesEvent | 2015-09-05 01:40:20 | framingeinstein/issues-test | https://api.github.com/repos/framingeinstein/issues-test | closed | SPK-75: Provide Trade Portal URL | priority:normal resolution:fixed type:enhancement | We need the URL to the Trade Portal. We can then input it on the site. | 1.0 | SPK-75: Provide Trade Portal URL - We need the URL to the Trade Portal. We can then input it on the site. | priority | spk provide trade portal url we need the url to the trade portal we can then input it on the site | 1 |
306,844 | 23,174,307,545 | IssuesEvent | 2022-07-31 07:17:09 | maniac-tech/react-native-expo-read-sms | https://api.github.com/repos/maniac-tech/react-native-expo-read-sms | closed | Need documentation around the functions available in the library | documentation help wanted good first issue | Description:
As of now we don't have any documentation which explains how and what to pass to the functions like `startReadSMS` etc. | 1.0 | Need documentation around the functions available in the library - Description:
As of now we don't have any documentation which explains how and what to pass to the functions like `startReadSMS` etc. | non_priority | need documentation around the functions available in the library description as of now we don t have any documentation which explains how and what to pass to the functions like startreadsms etc | 0 |
67,071 | 16,814,806,422 | IssuesEvent | 2021-06-17 05:45:20 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | Unable to upload Study Thumbnail Image | Help needed Study builder | **Describe the bug**
When creating a new study I am unable to Upload an Image, even if I use [one of these images from the github repo](https://github.com/GoogleCloudPlatform/fda-mystudies/tree/master/study-builder).
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://studies.mystudies-demo.hcls.joonix.net/studybuilder/adminStudies/viewBasicInfo.do'
2. Click on 'Upload Image'
3. See error "Please upload image as per provided guidelines."
**Expected behavior**
I expect the photo to be uploaded.
**Screenshots**
Screenshots below.

**Desktop (please complete the following information):**
- OS: [e.g. iOS, Debian, etc] Linux
- Browser [e.g. chrome, safari] Chrome
- Version [e.g. 22] Not sure .. .
**Additional context**
This is my first time creating a study on the joonix account.
**Logs**
Any relevant logs that you can provide. Please remove any identifying information.
**Labels**
Please add a label that identifies what component(s) this issue applies to.
#Blocker #Feedback | 1.0 | Unable to upload Study Thumbnail Image - **Describe the bug**
When creating a new study I am unable to Upload an Image, even if I use [one of these images from the github repo](https://github.com/GoogleCloudPlatform/fda-mystudies/tree/master/study-builder).
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://studies.mystudies-demo.hcls.joonix.net/studybuilder/adminStudies/viewBasicInfo.do'
2. Click on 'Upload Image'
3. See error "Please upload image as per provided guidelines."
**Expected behavior**
I expect the photo to be uploaded.
**Screenshots**
Screenshots below.

**Desktop (please complete the following information):**
- OS: [e.g. iOS, Debian, etc] Linux
- Browser [e.g. chrome, safari] Chrome
- Version [e.g. 22] Not sure .. .
**Additional context**
This is my first time creating a study on the joonix account.
**Logs**
Any relevant logs that you can provide. Please remove any identifying information.
**Labels**
Please add a label that identifies what component(s) this issue applies to.
#Blocker #Feedback | non_priority | unable to upload study thumbnail image describe the bug when creating a new study i am unable to upload an image even if i use to reproduce steps to reproduce the behavior go to click on upload image see error please upload image as per provided guidelines expected behavior i expect the photo to be uploaded screenshots screenshots below desktop please complete the following information os linux browser chrome version not sure additional context this is my first time creating a study on the joonix account logs any relevant logs that you can provide please remove any identifying information labels please add a label that identifies what component s this issue applies to blocker feedback | 0 |
182,129 | 30,799,175,824 | IssuesEvent | 2023-07-31 22:56:15 | xparq/sfw | https://api.github.com/repos/xparq/sfw | closed | Strange ignored clicks on (certain?) widgets sometimes | bug appearance | UX | design | Like `OptionsBox` arrows. And `CheckBox`... And `Slider`... _(Upstream also has this problem!)_ See also #141!
----
- [x] 1. Hover events tend to get ignored/lost, when entering a container, and then a widget within that container! (The widget is clearly not in the `Hovered` state, according to the missing blue debug insights outline.) Slow mouse moves tend to avoid that.
- _OK, see fix in a [later comment](https://github.com/xparq/sfw/issues/45#issuecomment-1658581954)!_
_But what about that "priming" effect (hopefully described somewhere below)?!_
- [ ] 2. Even worse: sometimes the hover rect is there, and yet the clicks appear to not be propagated! :-o
_Couldn't reproduce it recently though (2023 July). I tend to believe that it was either a mistaken observation, or some transient glitch that is no more._
_Can this be _actually losing_ those events, due to the usual SFML inner polling loop to eat all pending ones in one frame?!
Because last time this was really noticeable under high load!_
-> Unfortunately, this seems to be the case without load, too. :-/ | 1.0 | non_priority | 0 |
20,078 | 3,295,315,196 | IssuesEvent | 2015-10-31 20:48:32 | chief-atx/bcmon | https://api.github.com/repos/chief-atx/bcmon | closed | Could created output.csv | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. When the file output.csv is created
2. airodump-ng -c 'channel#' --bssid 'MAC' -w output ath0
3. "Can`t create output.csv" file read-only. Where is file output created?
What is the expected output? What do you see instead?
Output.csv created
What version of the product are you using? On what operating system?
CM 11
Please provide any additional information below.
I need to know where the file is created so I can give the folder write permissions.
When i run besside-ng I get the same error
```
Original issue reported on code.google.com by `ariel50...@gmail.com` on 21 Feb 2015 at 3:35 | 1.0 | non_priority | 0 |
50,549 | 12,520,635,343 | IssuesEvent | 2020-06-03 16:10:01 | hashicorp/packer | https://api.github.com/repos/hashicorp/packer | closed | Packer Crash interface conversion: types.AnyType is nil, not types.ManagedObjectReference | bug builder/vsphere crash | Here is the crash log
```2020/04/01 15:03:36 [INFO] Packer version: 1.5.4 [go1.13.7 windows amd64]
2020/04/01 15:03:36 [DEBUG] Discovered plugin: vsphere-clone = C:\packer_1.5.4_windows_amd64\packer-builder-vsphere-clone.exe
2020/04/01 15:03:36 using external builders [vsphere-clone]
2020/04/01 15:03:36 [DEBUG] Discovered plugin: windows-update = C:\packer_1.5.4_windows_amd64\packer-provisioner-windows-update.exe
2020/04/01 15:03:36 using external provisioners [windows-update]
2020/04/01 15:03:36 Checking 'PACKER_CONFIG' for a config file path
2020/04/01 15:03:36 'PACKER_CONFIG' not set; checking the default config file path
2020/04/01 15:03:36 Attempting to open config file: C:\Users\user1\AppData\Roaming\packer.config
2020/04/01 15:03:36 [WARN] Config file doesn't exist: C:\Users\user1\AppData\Roaming\packer.config
2020/04/01 15:03:36 Setting cache directory: C:\packer_1.5.4_windows_amd64\setup\packer_cache
cannot determine if process is in background: Process background check error: not implemented yet
2020/04/01 15:03:36 Creating plugin client for path: C:\packer_1.5.4_windows_amd64\packer.exe
2020/04/01 15:03:36 Starting plugin: C:\packer_1.5.4_windows_amd64\packer.exe []string{"C:\\packer_1.5.4_windows_amd64\\packer.exe", "plugin", "packer-builder-vsphere-iso"}
2020/04/01 15:03:37 Waiting for RPC address for: C:\packer_1.5.4_windows_amd64\packer.exe
2020/04/01 15:03:37 packer.exe plugin: [INFO] Packer version: 1.5.4 [go1.13.7 windows amd64]
2020/04/01 15:03:37 packer.exe plugin: Checking 'PACKER_CONFIG' for a config file path
2020/04/01 15:03:37 packer.exe plugin: 'PACKER_CONFIG' not set; checking the default config file path
2020/04/01 15:03:37 packer.exe plugin: Attempting to open config file: C:\Users\user1\AppData\Roaming\packer.config
2020/04/01 15:03:37 packer.exe plugin: [WARN] Config file doesn't exist: C:\Users\user1\AppData\Roaming\packer.config
2020/04/01 15:03:37 packer.exe plugin: Setting cache directory: C:\packer_1.5.4_windows_amd64\setup\packer_cache
2020/04/01 15:03:37 packer.exe plugin: args: []string{"packer-builder-vsphere-iso"}
2020/04/01 15:03:37 packer.exe plugin: Plugin port range: [10000,25000]
2020/04/01 15:03:37 packer.exe plugin: Plugin address: tcp 127.0.0.1:10000
2020/04/01 15:03:37 packer.exe plugin: Waiting for connection...
2020/04/01 15:03:37 Received tcp RPC address for C:\packer_1.5.4_windows_amd64\packer.exe: addr is 127.0.0.1:10000
2020/04/01 15:03:37 packer.exe plugin: Serving a plugin connection...
2020/04/01 15:03:37 ui: vsphere-iso: output will be in this color.
2020/04/01 15:03:37 ui:
2020/04/01 15:03:37 Build debug mode: false
2020/04/01 15:03:37 Force build: false
2020/04/01 15:03:37 On error:
2020/04/01 15:03:37 Preparing build: vsphere-iso
2020/04/01 15:03:37 Waiting on builds to complete...
2020/04/01 15:03:37 Starting build run: vsphere-iso
2020/04/01 15:03:37 Running builder: vsphere-iso
2020/04/01 15:03:37 [INFO] (telemetry) Starting builder vsphere-iso
2020/04/01 15:03:38 ui: ==> vsphere-iso: Creating VM...
2020/04/01 15:03:40 packer.exe plugin: panic: interface conversion: types.AnyType is nil, not types.ManagedObjectReference
2020/04/01 15:03:40 packer.exe plugin:
2020/04/01 15:03:40 packer.exe plugin: goroutine 23 [running]:
2020/04/01 15:03:40 packer.exe plugin: github.com/hashicorp/packer/builder/vsphere/driver.(*Driver).CreateVM(0xc0001e42a0, 0xc00055f668, 0x0, 0x0, 0x1)
2020/04/01 15:03:40 packer.exe plugin: /Users/mmarsh/Projects/packer/builder/vsphere/driver/vm.go:165 +0xbca
2020/04/01 15:03:40 packer.exe plugin: github.com/hashicorp/packer/builder/vsphere/iso.(*StepCreateVM).Run(0xc000132560, 0x496a080, 0xc0003da040, 0x4947140, 0xc00044c810, 0x0)
2020/04/01 15:03:40 packer.exe plugin: /Users/mmarsh/Projects/packer/builder/vsphere/iso/step_create.go:116 +0x4f5
2020/04/01 15:03:40 packer.exe plugin: github.com/hashicorp/packer/helper/multistep.(*BasicRunner).Run(0xc00044c9c0, 0x496a080, 0xc0003da040, 0x4947140, 0xc00044c810)
2020/04/01 15:03:40 packer.exe plugin: /Users/mmarsh/Projects/packer/helper/multistep/basic_runner.go:67 +0x21e
2020/04/01 15:03:40 packer.exe plugin: github.com/hashicorp/packer/builder/vsphere/iso.(*Builder).Run(0xc000178600, 0x496a080, 0xc0003da040, 0x49849a0, 0xc00044c7b0, 0x48f41c0, 0xc000132540, 0x2030000, 0x20, 0xc000046088, ...)
2020/04/01 15:03:40 packer.exe plugin: /Users/mmarsh/Projects/packer/builder/vsphere/iso/builder.go:136 +0xc22
2020/04/01 15:03:40 packer.exe plugin: github.com/hashicorp/packer/packer/rpc.(*BuilderServer).Run(0xc0004385c0, 0x1, 0xc000046080, 0x0, 0x0)
2020/04/01 15:03:40 packer.exe plugin: /Users/mmarsh/Projects/packer/packer/rpc/builder.go:117 +0x283
2020/04/01 15:03:40 packer.exe plugin: reflect.Value.call(0xc000430660, 0xc0000060f8, 0x13, 0x42553e8, 0x4, 0xc0005a1f18, 0x3, 0x3, 0xc0005a1e78, 0x405753, ...)
2020/04/01 15:03:40 packer.exe plugin: /usr/local/go/src/reflect/value.go:460 +0x5fd
2020/04/01 15:03:40 packer.exe plugin: reflect.Value.Call(0xc000430660, 0xc0000060f8, 0x13, 0xc0005a1f18, 0x3, 0x3, 0xc0003ca180, 0xc0000e3500, 0xc0000e3588)
2020/04/01 15:03:40 packer.exe plugin: /usr/local/go/src/reflect/value.go:321 +0xbb
2020/04/01 15:03:40 packer.exe plugin: net/rpc.(*service).call(0xc000438600, 0xc00035e230, 0xc00045d208, 0xc00045d210, 0xc0000d6680, 0xc00045f0a0, 0x3605c00, 0xc00004607c, 0x18a, 0x3542420, ...)
2020/04/01 15:03:40 packer.exe plugin: /usr/local/go/src/net/rpc/server.go:377 +0x176
2020/04/01 15:03:40 packer.exe plugin: created by net/rpc.(*Server).ServeCodec
2020/04/01 15:03:40 packer.exe plugin: /usr/local/go/src/net/rpc/server.go:474 +0x432
2020/04/01 15:03:40 [INFO] (telemetry) ending vsphere-iso
2020/04/01 15:03:40 ui error: Build 'vsphere-iso' errored: unexpected EOF
2020/04/01 15:03:40 machine readable: error-count []string{"1"}
2020/04/01 15:03:40 ui error:
==> Some builds didn't complete successfully and had errors:
2020/04/01 15:03:40 machine readable: vsphere-iso,error []string{"unexpected EOF"}
2020/04/01 15:03:40 ui error: --> vsphere-iso: unexpected EOF
2020/04/01 15:03:40 ui:
==> Builds finished but no artifacts were created.
2020/04/01 15:03:40 Cancelling builder after context cancellation context canceled
2020/04/01 15:03:40 Error cancelling builder: connection is shut down
2020/04/01 15:03:40 [INFO] (telemetry) Finalizing.
2020/04/01 15:03:40 C:\packer_1.5.4_windows_amd64\packer.exe: plugin process exited
2020/04/01 15:03:40 waiting for all plugin processes to complete...
```
Here is my JSON
```
{
"builders": [
{
"type": "vsphere-iso",
"vcenter_server": "vcenter.testdomain.local",
"insecure_connection": "true",
"username": "packertest@vsphere.local",
"password": "testpw",
"cluster": "Compute01",
"host": "esx1.testdomain.local",
"vm_name": "packerimage",
"folder": "Templates",
"guest_os_type": "windows9_64Guest",
"communicator": "winrm",
"winrm_username": "jetbrains",
"winrm_password": "jetbrains",
"CPUs": 1,
"RAM": 4096,
"RAM_reserve_all": true,
"disk_controller_type": "pvscsi",
"disk_size": 32768,
"disk_thin_provisioned": false,
"datastore": "Content Library",
"network_card": "vmxnet3",
"iso_paths": [
"[Content LIbrary] ISO/Windows/Windows_Server_2016.ISO",
"[Content LIbrary] ISO/Windows/VMwareTools.iso"
],
"floppy_files": [
"setup/autounattend.xml",
"setup/setup.ps1",
"setup/vmtools.cmd"
]
}
]
}
``` | 1.0 | non_priority | 0 |
73,165 | 15,252,861,729 | IssuesEvent | 2021-02-20 04:59:07 | gate5/struts-2.3.20 | https://api.github.com/repos/gate5/struts-2.3.20 | closed | WS-2018-0021 Medium Severity Vulnerability detected by WhiteSource - autoclosed | security vulnerability | ## WS-2018-0021 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-2.1.1.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>path: /struts-2.3.20/apps/showcase/src/main/webapp/js/bootstrap.min.js</p>
<p>
<p>Library home page: <a href=https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/bootstrap.min.js>https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.1.1/bootstrap.min.js</a></p>
Dependency Hierarchy:
- :x: **bootstrap-2.1.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gate5/struts-2.3.20/commit/1d3a9da2b49a075b9122e05e19a483fc66b5aaf4">1d3a9da2b49a075b9122e05e19a483fc66b5aaf4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XSS in data-target in bootstrap (3.3.7 and before)
<p>Publish Date: 2017-06-27
<p>URL: <a href=https://github.com/twbs/bootstrap/issues/20184>WS-2018-0021</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/commit/d9be1da55bf0f94a81e8a2c9acf5574fb801306e">https://github.com/twbs/bootstrap/commit/d9be1da55bf0f94a81e8a2c9acf5574fb801306e</a></p>
<p>Release Date: 2017-08-25</p>
<p>Fix Resolution: Replace or update the following files: alert.js, carousel.js, collapse.js, dropdown.js, modal.js</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
66,089 | 12,709,453,895 | IssuesEvent | 2020-06-23 12:23:29 | nanopb/nanopb | https://api.github.com/repos/nanopb/nanopb | closed | Buffer overflow when encoding bytes with size set to 65535 | Component-Encoder FixedInGit Priority-High Type-Defect | On platforms where `size_t` equals `pb_size_t`, for example AVR where both
are 16-bit, or x86 and ARM when `PB_FIELD_32BIT` is defined, the buffer size
checks in `pb_write()` and `pb_enc_submessage` can overflow if a `bytes` field
has size close to maximum size value. This causes read and write out of bounds.
This issue can cause a security vulnerability if the `size` of a `bytes` field
in the structure given to `pb_encode()` is untrusted. Note that `pb_decode()`
has correct bounds checking and will reject too large values. | 1.0 | non_priority | 0 |
4,172 | 2,545,629,836 | IssuesEvent | 2015-01-29 18:21:39 | bireme/proethos | https://api.github.com/repos/bireme/proethos | closed | Main page in Spanish contains PAHO (OPS) credits in English. | enhancement priority 3 (low) severity 3 (normal/minor impact) | At the moment, under "descargo de responsabilidad" (disclaimer) and "terminos de uso" (terms of use), it reads: © Pan American Health Organization, 2013. All rights reserved.
It should instead read: © Organización Panamericana de la Salud, 2013. Todos los derechos reservados.
| 1.0 | priority | 1 |
170,805 | 6,471,942,667 | IssuesEvent | 2017-08-17 12:56:27 | minishift/minishift | https://api.github.com/repos/minishift/minishift | closed | Unknown error creating VM if disk-size is not valid | kind/bug priority/major | With virtualbox:
````
$ minishift config set disk-size 40960
No Minishift instance exists. New disk-size setting will be applied on next 'minishift start'
$ minishift start
Starting local OpenShift cluster using 'virtualbox' hypervisor...
E0627 13:46:56.160140 1462 start.go:275] Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: exit status 1. Retrying.
E0627 13:46:56.210722 1462 start.go:275] Error starting the VM: Error getting the state for host: machine does not exist. Retrying.
E0627 13:46:56.256466 1462 start.go:275] Error starting the VM: Error getting the state for host: machine does not exist. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: exit status 1
Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
$ minishift config set disk-size 40g
You currently have an existing Minishift instance. Changes to the disk-size setting are only applied when a new Minishift instance is created.
To let the configuration changes take effect, you must delete the current instance with 'minishift delete' and then start a new one with 'minishift start'.
$ minishift delete
Deleting the Minishift VM...
Minishift VM deleted.
$ minishift config view
- cpus : 2
- disk-size : 40g
- image-caching : true
- memory : 8192
- vm-driver : virtualbox
$ minishift start
Starting local OpenShift cluster using 'virtualbox' hypervisor...
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ...
Pulling image openshift/origin:v1.5.1
Pulled 0/3 layers, 3% complete
Pulled 1/3 layers, 87% complete
Pulled 2/3 layers, 94% complete
Pulled 3/3 layers, 100% complete
Extracting
Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ...
Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
....
```` | 1.0 | Unknown error creating VM if disk-size is not valid - With virtualbox:
````
$ minishift config set disk-size 40960
No Minishift instance exists. New disk-size setting will be applied on next 'minishift start'
$ minishift start
Starting local OpenShift cluster using 'virtualbox' hypervisor...
E0627 13:46:56.160140 1462 start.go:275] Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: exit status 1. Retrying.
E0627 13:46:56.210722 1462 start.go:275] Error starting the VM: Error getting the state for host: machine does not exist. Retrying.
E0627 13:46:56.256466 1462 start.go:275] Error starting the VM: Error getting the state for host: machine does not exist. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: exit status 1
Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
$ minishift config set disk-size 40g
You currently have an existing Minishift instance. Changes to the disk-size setting are only applied when a new Minishift instance is created.
To let the configuration changes take effect, you must delete the current instance with 'minishift delete' and then start a new one with 'minishift start'.
$ minishift delete
Deleting the Minishift VM...
Minishift VM deleted.
$ minishift config view
- cpus : 2
- disk-size : 40g
- image-caching : true
- memory : 8192
- vm-driver : virtualbox
$ minishift start
Starting local OpenShift cluster using 'virtualbox' hypervisor...
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ...
Pulling image openshift/origin:v1.5.1
Pulled 0/3 layers, 3% complete
Pulled 1/3 layers, 87% complete
Pulled 2/3 layers, 94% complete
Pulled 3/3 layers, 100% complete
Extracting
Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ...
Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
....
```` | priority | unknown error creating vm if disk size is not valid with virtualbox minishift config set disk size no minishift instance exists new disk size setting will be applied on next minishift start minishift start starting local openshift cluster using virtualbox hypervisor start go error starting the vm error creating the vm error creating machine error in driver during machine creation exit status retrying start go error starting the vm error getting the state for host machine does not exist retrying start go error starting the vm error getting the state for host machine does not exist retrying error starting the vm error creating the vm error creating machine error in driver during machine creation exit status error getting the state for host machine does not exist error getting the state for host machine does not exist minishift config set disk size you currently have an existing minishift instance changes to the disk size setting are only applied when a new minishift instance is created to let the configuration changes take effect you must delete the current instance with minishift delete and then start a new one with minishift start minishift delete deleting the minishift vm minishift vm deleted minishift config view cpus disk size image caching true memory vm driver virtualbox minishift start starting local openshift cluster using virtualbox hypervisor checking openshift client ok checking docker client ok checking docker version ok checking for existing openshift container ok checking for openshift origin image pulling image openshift origin pulled layers complete pulled layers complete pulled layers complete pulled layers complete extracting image pull complete checking docker daemon configuration ok checking for available ports ok checking type of volume mount using docker shared volumes for openshift volumes creating host directories ok | 1 |
293,389 | 25,289,160,654 | IssuesEvent | 2022-11-16 22:09:10 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest/sqlsmith: invalid tenant ID 0 panic | C-test-failure O-robot O-roachtest branch-master T-sql-queries | roachtest.sqlsmith/setup=tpcc/setting=no-mutations [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7512756?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7512756?buildTab=artifacts#/sqlsmith/setup=tpcc/setting=no-mutations) on master @ [47c7b3a1bc047fc3e481cf12166885b39519c022](https://github.com/cockroachdb/cockroach/commits/47c7b3a1bc047fc3e481cf12166885b39519c022):
```
test artifacts and logs in: /artifacts/sqlsmith/setup=tpcc/setting=no-mutations/run_1
(test_impl.go:297).Fatalf: ping node 2: driver: bad connection
HINT: node likely crashed, check logs in artifacts > logs/2.unredacted
previous sql:
SELECT
tab_418.s_dist_08 AS col_988,
1097984308323098197:::INT8 AS col_989,
NULL AS col_990,
8200287526605587757:::INT8 AS col_991,
NULL AS col_992,
crdb_internal.create_tenant(e'(M\x7f\x07':::STRING::STRING)::INT8 AS col_993,
'E':::STRING AS col_994
FROM
defaultdb.public.stock@[0] AS tab_418;ping node 2: driver: bad connection
HINT: node likely crashed, check logs in artifacts > logs/2.unredacted
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=tpcc/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-21473 | 2.0 | roachtest/sqlsmith: invalid tenant ID 0 panic - roachtest.sqlsmith/setup=tpcc/setting=no-mutations [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7512756?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7512756?buildTab=artifacts#/sqlsmith/setup=tpcc/setting=no-mutations) on master @ [47c7b3a1bc047fc3e481cf12166885b39519c022](https://github.com/cockroachdb/cockroach/commits/47c7b3a1bc047fc3e481cf12166885b39519c022):
```
test artifacts and logs in: /artifacts/sqlsmith/setup=tpcc/setting=no-mutations/run_1
(test_impl.go:297).Fatalf: ping node 2: driver: bad connection
HINT: node likely crashed, check logs in artifacts > logs/2.unredacted
previous sql:
SELECT
tab_418.s_dist_08 AS col_988,
1097984308323098197:::INT8 AS col_989,
NULL AS col_990,
8200287526605587757:::INT8 AS col_991,
NULL AS col_992,
crdb_internal.create_tenant(e'(M\x7f\x07':::STRING::STRING)::INT8 AS col_993,
'E':::STRING AS col_994
FROM
defaultdb.public.stock@[0] AS tab_418;ping node 2: driver: bad connection
HINT: node likely crashed, check logs in artifacts > logs/2.unredacted
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=tpcc/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-21473 | non_priority | roachtest sqlsmith invalid tenant id panic roachtest sqlsmith setup tpcc setting no mutations with on master test artifacts and logs in artifacts sqlsmith setup tpcc setting no mutations run test impl go fatalf ping node driver bad connection hint node likely crashed check logs in artifacts logs unredacted previous sql select tab s dist as col as col null as col as col null as col crdb internal create tenant e m string string as col e string as col from defaultdb public stock as tab ping node driver bad connection hint node likely crashed check logs in artifacts logs unredacted parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb sql queries jira issue crdb | 0 |
7,067 | 5,833,001,945 | IssuesEvent | 2017-05-08 23:41:26 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Caching EqualityComparer.Default<T> in ValueTuple brings significant performance increase | area-System.Runtime tenet-performance up-for-grabs | I have added caching for EqualityComparer instances inside ValueTuple. E.g.
internal static readonly EqualityComparer<T1> T1Comparer = EqualityComparer<T1>.Default;
internal static readonly EqualityComparer<T2> T2Comparer = EqualityComparer<T2>.Default;
internal static readonly EqualityComparer<T3> T3Comparer = EqualityComparer<T3>.Default;
internal static readonly EqualityComparer<T4> T4Comparer = EqualityComparer<T4>.Default;
public bool Equals(ValueTupleCached<T1, T2, T3, T4> other)
{
return T1Comparer.Equals(Item1, other.Item1)
&& T2Comparer.Equals(Item2, other.Item2)
&& T3Comparer.Equals(Item3, other.Item3)
&& T4Comparer.Equals(Item4, other.Item4);
}
And it seems improving performance for both GetHashCode and Equals. I can create pull request if this change supposed to be approved.
``` ini
BenchmarkDotNet=v0.10.3.0, OS=Microsoft Windows NT 6.2.9200.0
Processor=Intel(R) Core(TM) i7-4770 CPU 3.40GHz, ProcessorCount=8
Frequency=3318387 Hz, Resolution=301.3512 ns, Timer=TSC
[Host] : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1637.0
LegacyJitX64 : Clr 4.0.30319.42000, 64bit LegacyJIT/clrjit-v4.6.1637.0;compatjit-v4.6.1637.0
LegacyJitX86 : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1637.0
RyuJitX64 : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1637.0
Runtime=Clr
```
| Method | Job | Jit | Platform | Mean | StdErr | StdDev |
|------------------ |------------- |---------- |--------- |----------- |---------- |---------- |
| Equals | LegacyJitX64 | LegacyJit | X64 | 10.4266 ns | 0.0090 ns | 0.0338 ns |
| GetHashCode | LegacyJitX64 | LegacyJit | X64 | 47.9099 ns | 0.0912 ns | 0.3533 ns |
| EqualsCached | LegacyJitX64 | LegacyJit | X64 | 4.2168 ns | 0.0070 ns | 0.0260 ns |
| GetHashCodeCached | LegacyJitX64 | LegacyJit | X64 | 25.2044 ns | 0.0407 ns | 0.1576 ns |
| Equals | LegacyJitX86 | LegacyJit | X86 | 9.6312 ns | 0.0323 ns | 0.1252 ns |
| GetHashCode | LegacyJitX86 | LegacyJit | X86 | 46.9868 ns | 0.4833 ns | 2.1065 ns |
| EqualsCached | LegacyJitX86 | LegacyJit | X86 | 3.9719 ns | 0.0330 ns | 0.1279 ns |
| GetHashCodeCached | LegacyJitX86 | LegacyJit | X86 | 20.4734 ns | 0.0514 ns | 0.1925 ns |
| Equals | RyuJitX64 | RyuJit | X64 | 9.6321 ns | 0.0123 ns | 0.0476 ns |
| GetHashCode | RyuJitX64 | RyuJit | X64 | 40.7101 ns | 0.0525 ns | 0.2035 ns |
| EqualsCached | RyuJitX64 | RyuJit | X64 | 2.9862 ns | 0.0035 ns | 0.0130 ns |
| GetHashCodeCached | RyuJitX64 | RyuJit | X64 | 14.9397 ns | 0.1623 ns | 0.7073 ns |
You can check results in my test project
https://github.com/azhmur/TupleBenchmark2/
PS Originally I thought JIT inlining effectively eliminates need of such optimization, but it seems it is not. | True | Caching EqualityComparer.Default<T> in ValueTuple brings significant performance increase - I have added caching for EqualityComparer instances inside ValueTuple. E.g.
internal static readonly EqualityComparer<T1> T1Comparer = EqualityComparer<T1>.Default;
internal static readonly EqualityComparer<T2> T2Comparer = EqualityComparer<T2>.Default;
internal static readonly EqualityComparer<T3> T3Comparer = EqualityComparer<T3>.Default;
internal static readonly EqualityComparer<T4> T4Comparer = EqualityComparer<T4>.Default;
public bool Equals(ValueTupleCached<T1, T2, T3, T4> other)
{
return T1Comparer.Equals(Item1, other.Item1)
&& T2Comparer.Equals(Item2, other.Item2)
&& T3Comparer.Equals(Item3, other.Item3)
&& T4Comparer.Equals(Item4, other.Item4);
}
And it seems improving performance for both GetHashCode and Equals. I can create pull request if this change supposed to be approved.
``` ini
BenchmarkDotNet=v0.10.3.0, OS=Microsoft Windows NT 6.2.9200.0
Processor=Intel(R) Core(TM) i7-4770 CPU 3.40GHz, ProcessorCount=8
Frequency=3318387 Hz, Resolution=301.3512 ns, Timer=TSC
[Host] : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1637.0
LegacyJitX64 : Clr 4.0.30319.42000, 64bit LegacyJIT/clrjit-v4.6.1637.0;compatjit-v4.6.1637.0
LegacyJitX86 : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1637.0
RyuJitX64 : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1637.0
Runtime=Clr
```
| Method | Job | Jit | Platform | Mean | StdErr | StdDev |
|------------------ |------------- |---------- |--------- |----------- |---------- |---------- |
| Equals | LegacyJitX64 | LegacyJit | X64 | 10.4266 ns | 0.0090 ns | 0.0338 ns |
| GetHashCode | LegacyJitX64 | LegacyJit | X64 | 47.9099 ns | 0.0912 ns | 0.3533 ns |
| EqualsCached | LegacyJitX64 | LegacyJit | X64 | 4.2168 ns | 0.0070 ns | 0.0260 ns |
| GetHashCodeCached | LegacyJitX64 | LegacyJit | X64 | 25.2044 ns | 0.0407 ns | 0.1576 ns |
| Equals | LegacyJitX86 | LegacyJit | X86 | 9.6312 ns | 0.0323 ns | 0.1252 ns |
| GetHashCode | LegacyJitX86 | LegacyJit | X86 | 46.9868 ns | 0.4833 ns | 2.1065 ns |
| EqualsCached | LegacyJitX86 | LegacyJit | X86 | 3.9719 ns | 0.0330 ns | 0.1279 ns |
| GetHashCodeCached | LegacyJitX86 | LegacyJit | X86 | 20.4734 ns | 0.0514 ns | 0.1925 ns |
| Equals | RyuJitX64 | RyuJit | X64 | 9.6321 ns | 0.0123 ns | 0.0476 ns |
| GetHashCode | RyuJitX64 | RyuJit | X64 | 40.7101 ns | 0.0525 ns | 0.2035 ns |
| EqualsCached | RyuJitX64 | RyuJit | X64 | 2.9862 ns | 0.0035 ns | 0.0130 ns |
| GetHashCodeCached | RyuJitX64 | RyuJit | X64 | 14.9397 ns | 0.1623 ns | 0.7073 ns |
You can check results in my test project
https://github.com/azhmur/TupleBenchmark2/
PS Originally I thought JIT inlining effectively eliminates need of such optimization, but it seems it is not. | non_priority | caching equalitycomparer default in valuetuple brings significant performance increase i have added caching for equalitycomparer instances inside valuetuple e g internal static readonly equalitycomparer equalitycomparer default internal static readonly equalitycomparer equalitycomparer default internal static readonly equalitycomparer equalitycomparer default internal static readonly equalitycomparer equalitycomparer default public bool equals valuetuplecached other return equals other equals other equals other equals other and it seems improving performance for both gethashcode and equals i can create pull request if this change supposed to be approved ini benchmarkdotnet os microsoft windows nt processor intel r core tm cpu processorcount frequency hz resolution ns timer tsc clr legacyjit clr legacyjit clrjit compatjit clr legacyjit clr ryujit runtime clr method job jit platform mean stderr stddev equals legacyjit ns ns ns gethashcode legacyjit ns ns ns equalscached legacyjit ns ns ns gethashcodecached legacyjit ns ns ns equals legacyjit ns ns ns gethashcode legacyjit ns ns ns equalscached legacyjit ns ns ns gethashcodecached legacyjit ns ns ns equals ryujit ns ns ns gethashcode ryujit ns ns ns equalscached ryujit ns ns ns gethashcodecached ryujit ns ns ns you can check results in my test project ps originally i thought jit inlining effectively eliminates need of such optimization but it seems it is not | 0 |
83,612 | 10,417,283,404 | IssuesEvent | 2019-09-14 20:16:50 | jeffgolenski/roadmap | https://api.github.com/repos/jeffgolenski/roadmap | opened | Personal growth: year over year analysis of myself | DesignTactician.blog Portfolio Professional | write up a personal case study of growth - from a creative technologist and marketing designer to UX / growth. | 1.0 | Personal growth: year over year analysis of myself - write up a personal case study of growth - from a creative technologist and marketing designer to UX / growth. | non_priority | personal growth year over year analysis of myself write up a personal case study of growth from a creative technologist and marketing designer to ux growth | 0 |
9,718 | 8,125,779,063 | IssuesEvent | 2018-08-16 22:17:00 | interbit/interbit | https://api.github.com/repos/interbit/interbit | opened | Test coverage for interbit-ui-tools sagas | infrastructure | `interbit-ui-tools` test coverage has some significant gaps, especially for the sagas that work in conjunction with the middleware. | 1.0 | Test coverage for interbit-ui-tools sagas - `interbit-ui-tools` test coverage has some significant gaps, especially for the sagas that work in conjunction with the middleware. | non_priority | test coverage for interbit ui tools sagas interbit ui tools test coverage has some significant gaps especially for the sagas that work in conjunction with the middleware | 0 |
51,906 | 3,015,460,339 | IssuesEvent | 2015-07-29 19:44:50 | Organic-Beard-Supply/Organic-Beard-Supply | https://api.github.com/repos/Organic-Beard-Supply/Organic-Beard-Supply | closed | Implement SumoMe | Top Priority | Implement SumoMe apps that make sense for these pages. (newsletter, heatmap, etc.) | 1.0 | Implement SumoMe - Implement SumoMe apps that make sense for these pages. (newsletter, heatmap, etc.) | priority | implement sumome implement sumome apps that make sense for these pages newsletter heatmap etc | 1 |
481,677 | 13,889,929,684 | IssuesEvent | 2020-10-19 08:35:36 | jcr7467/UCLAbookstack | https://api.github.com/repos/jcr7467/UCLAbookstack | closed | Sticky Navbar Broken | Priority - Medium bug | **Describe the bug**
Sticky Navbar when Scrolling is broken on the home screen.
It also is not working on the books page, but this may or may not be an issue.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to home page
2. Scroll
3. Scroll down to '....'
4. See error
**Expected behavior**
The sticky navbar should be a condensed version of the normal navbar. Still holding all elements like search bar etc..
**Screenshots**
Current:

Expected:

| 1.0 | Sticky Navbar Broken - **Describe the bug**
Sticky Navbar when Scrolling is broken on the home screen.
It also is not working on the books page, but this may or may not be an issue.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to home page
2. Scroll
3. Scroll down to '....'
4. See error
**Expected behavior**
The sticky navbar should be a condensed version of the normal navbar. Still holding all elements like search bar etc..
**Screenshots**
Current:

Expected:

| priority | sticky navbar broken describe the bug sticky navbar when scrolling is broken on the home screen it also is not working on the books page but this may or may not be an issue to reproduce steps to reproduce the behavior go to home page scroll scroll down to see error expected behavior the sticky navbar should be a condensed version of the normal navbar still holding all elements like search bar etc screenshots current expected | 1 |
113,226 | 17,116,181,553 | IssuesEvent | 2021-07-11 12:00:22 | theHinneh/ken | https://api.github.com/repos/theHinneh/ken | closed | CVE-2019-19919 (High) detected in handlebars-2.0.0.min.js | security vulnerability | ## CVE-2019-19919 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-2.0.0.min.js</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/2.0.0/handlebars.min.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/2.0.0/handlebars.min.js</a></p>
<p>Path to dependency file: ken/node_modules/swagger-tools/middleware/swagger-ui/index.html</p>
<p>Path to vulnerable library: ken/node_modules/swagger-tools/middleware/swagger-ui/lib/handlebars-2.0.0.js</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-2.0.0.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/theHinneh/ken/commit/566bfcafc00c7780574cd4d73cab32719747338c">566bfcafc00c7780574cd4d73cab32719747338c</a></p>
<p>Found in base branch: <b>backend</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19919 (High) detected in handlebars-2.0.0.min.js - ## CVE-2019-19919 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-2.0.0.min.js</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/2.0.0/handlebars.min.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/2.0.0/handlebars.min.js</a></p>
<p>Path to dependency file: ken/node_modules/swagger-tools/middleware/swagger-ui/index.html</p>
<p>Path to vulnerable library: ken/node_modules/swagger-tools/middleware/swagger-ui/lib/handlebars-2.0.0.js</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-2.0.0.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/theHinneh/ken/commit/566bfcafc00c7780574cd4d73cab32719747338c">566bfcafc00c7780574cd4d73cab32719747338c</a></p>
<p>Found in base branch: <b>backend</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in handlebars min js cve high severity vulnerability vulnerable library handlebars min js handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file ken node modules swagger tools middleware swagger ui index html path to vulnerable library ken node modules swagger tools middleware swagger ui lib handlebars js dependency hierarchy x handlebars min js vulnerable library found in head commit a href found in base branch backend vulnerability details versions of handlebars prior to are vulnerable to prototype pollution leading to remote code execution templates may alter an object s proto and definegetter properties which may allow an attacker to execute arbitrary code through crafted payloads publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
37,458 | 8,299,962,988 | IssuesEvent | 2018-09-21 06:13:13 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | [VSCode] Better support for modifying Ballerina configuration in Settings | Component/VScode plugin Type/Improvement | **Description:**
See following screenshot. All Ballerina configuration are highligheted as "edit in settings.json" while other configuration shown with drop down box, text fields etc.

| 1.0 | [VSCode] Better support for modifying Ballerina configuration in Settings - **Description:**
See following screenshot. All Ballerina configuration are highligheted as "edit in settings.json" while other configuration shown with drop down box, text fields etc.

| non_priority | better support for modifying ballerina configuration in settings description see following screenshot all ballerina configuration are highligheted as edit in settings json while other configuration shown with drop down box text fields etc | 0 |
196,694 | 15,607,024,959 | IssuesEvent | 2021-03-19 08:51:32 | ViktorKatz/PsychoJigglypuff | https://api.github.com/repos/ViktorKatz/PsychoJigglypuff | opened | Uploadovati SSU dokumente u poseban folder | documentation task | Uploadovati SSU dokumente u poseban folder za to.
- [ ] Teodora
- [ ] Vera
- [ ] Viktor
Zatvoriti issue kad svi uploadujemo. | 1.0 | Uploadovati SSU dokumente u poseban folder - Uploadovati SSU dokumente u poseban folder za to.
- [ ] Teodora
- [ ] Vera
- [ ] Viktor
Zatvoriti issue kad svi uploadujemo. | non_priority | uploadovati ssu dokumente u poseban folder uploadovati ssu dokumente u poseban folder za to teodora vera viktor zatvoriti issue kad svi uploadujemo | 0 |
262,723 | 22,954,702,909 | IssuesEvent | 2022-07-19 10:28:20 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | closed | Release 4.3.6 - Revision 1 - Release Candidate RC1 - Footprint Metrics - ALL-EXCEPT-ACTIVE-RESPONSE (4h) | release test/4.3.6 | ## Footprint metrics information
| | |
|---------------------------------|--------------------------------------------|
| **Main release candidate issue #** | #14188 |
| **Main footprint metrics issue #** | #14274 |
| **Version** | 4.3.6 |
| **Release candidate #** | RC1 |
| **Tag** | https://github.com/wazuh/wazuh/tree/4.3.6-rc1 |
## Stress test documentation
### Packages used
- Repository: `packages-dev.wazuh.com`
- Package path: `pre-release`
- Package revision: `1`
- **Jenkins build**: https://ci.wazuh.info/job/Test_stress/3377/
---
<details><summary>Manager</summary>
+ <details><summary>Plots</summary>
















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3377_manager_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_manager_centos/logs/ossec_Test_stress_B3377_manager_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-manager-Test_stress_B3377_manager-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_manager_centos/data/monitor-manager-Test_stress_B3377_manager-pre-release.csv)
[Test_stress_B3377_manager_analysisd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_manager_centos/data/Test_stress_B3377_manager_analysisd_state.csv)
[Test_stress_B3377_manager_remoted_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_manager_centos/data/Test_stress_B3377_manager_remoted_state.csv)
</details>
</details>
<details><summary>Centos agent</summary>
+ <details><summary>Plots</summary>

















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3377_centos_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_centos/logs/ossec_Test_stress_B3377_centos_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3377_centos-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_centos/data/monitor-agent-Test_stress_B3377_centos-pre-release.csv)
[Test_stress_B3377_centos_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_centos/data/Test_stress_B3377_centos_agentd_state.csv)
</details>
</details>
<details><summary>Ubuntu agent</summary>
+ <details><summary>Plots</summary>

















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3377_ubuntu_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_ubuntu/logs/ossec_Test_stress_B3377_ubuntu_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3377_ubuntu-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_ubuntu/data/monitor-agent-Test_stress_B3377_ubuntu-pre-release.csv)
[Test_stress_B3377_ubuntu_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_ubuntu/data/Test_stress_B3377_ubuntu_agentd_state.csv)
</details>
</details>
<details><summary>Windows agent</summary>
+ <details><summary>Plots</summary>















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3377_windows_2022-07-18.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_windows/logs/ossec_Test_stress_B3377_windows_2022-07-18.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-winagent-Test_stress_B3377_windows-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_windows/data/monitor-winagent-Test_stress_B3377_windows-pre-release.csv)
[Test_stress_B3377_windows_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.6/B3377-240m/B3377_agent_windows/data/Test_stress_B3377_windows_agentd_state.csv)
</details>
</details>
<details><summary>macOS agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details>
<details><summary>Solaris agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details> | 1.0 | non_priority | 0 |
425,583 | 12,342,502,044 | IssuesEvent | 2020-05-15 01:00:20 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Java object read hangs reading file from persistent volume mounted through minikube | area/mount cause/go9p-limitation kind/bug lifecycle/rotten os/linux priority/awaiting-more-evidence r/2019q2 |
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one): BUG REPORT
Please provide the following details:
**Environment**:
- **Minikube version**: v0.30.0
- **OS** : Debian Stretch
- **VM Driver** : virtualbox
- **ISO version** : v0.30.0
- **Install tools**:
- **Others**: root@i2kcontext-796c856b5-rdpb8:/# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-2~deb9u1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
Container is based on openjdk:8.
**What happened**:
This is strange. I have a Java app that, as part of initialization, reads a large data structure from a file that was previously serialized using Java Object I/O. The image is running in a container with the file presented to it through a volume mount. Outside of the container, I have a process running minikube mount, and the appropriate file system is correctly mounted via PersistentVolume and PersistentVolumeClaim. I can see the file in the container's file system, and it has the correct privileges and checksum. However, my Java app hangs while reading the data structure from the file. Here's the stack from jstack. As far as I can tell, it never progresses past the read0 call, although I can't determine if it's the first read0 call or another one. The process doesn't appear to be consuming any significant CPU.
"main" #1 prio=5 os_prio=0 tid=0x00007fb7d000a800 nid=0x7 runnable [0x00007fb7da12c000]
   java.lang.Thread.State: RUNNABLE
        at java.io.FileInputStream.read0(Native Method)
        at java.io.FileInputStream.read(FileInputStream.java:207)
        at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2641)
        at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2948)
        at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2958)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1738)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
        at java.util.TreeMap.buildFromSorted(TreeMap.java:2568)
        at java.util.TreeMap.buildFromSorted(TreeMap.java:2508)
        at java.util.TreeMap.readObject(TreeMap.java:2454)
        at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
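The read path in that trace is the plain `java.io` serialization path. As a minimal sketch of the same kind of load (class and file names here are hypothetical, not the reporter's actual code or data), the pattern is:

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.TreeMap;

public class ObjectReadRepro {

    // Write a small TreeMap with Java object serialization, then read it
    // back through the same ObjectInputStream/TreeMap.readObject path that
    // shows up in the jstack trace.
    static TreeMap<String, Integer> roundTrip(Path file) throws Exception {
        TreeMap<String, Integer> map = new TreeMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(file))) {
            out.writeObject(map);
        }
        try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(file))) {
            @SuppressWarnings("unchecked")
            TreeMap<String, Integer> loaded = (TreeMap<String, Integer>) in.readObject();
            return loaded;
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("treemap", ".ser");
        try {
            System.out.println(roundTrip(tmp)); // completes instantly on a local filesystem
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```

Pointing the read side of this at a file on the minikube mount (with a large enough map) would be the natural starting point for a standalone repro.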
Now, if I shell into the container, copy the file to /tmp, and load it from there, it loads fine in the container. So a) it's copyable, and b) the problem seems to be related to reading in the manner that this Java code reads from the volume mount through the minikube virtualbox mount point. If I build the same file into the image via Dockerfile, that file reads in fine.
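The manual copy-to-/tmp workaround described above could be automated in the app itself; a sketch (class name and temp-file naming are hypothetical) would be:

```java
import java.io.BufferedInputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class LocalCopyLoad {

    // Copy the serialized file off the mounted volume onto local disk
    // (e.g. /tmp inside the container) and deserialize the local copy,
    // mirroring the manual workaround that succeeded.
    static Object loadViaLocalCopy(Path mounted) throws Exception {
        Path local = Files.createTempFile("model-copy", ".ser");
        try {
            Files.copy(mounted, local, StandardCopyOption.REPLACE_EXISTING);
            // Buffering keeps ObjectInputStream's small peek reads off the
            // underlying file.
            try (ObjectInputStream in = new ObjectInputStream(
                    new BufferedInputStream(Files.newInputStream(local)))) {
                return in.readObject();
            }
        } finally {
            Files.deleteIfExists(local);
        }
    }

    public static void main(String[] args) throws Exception {
        Path mounted = Files.createTempFile("model", ".ser"); // stand-in for the mounted file
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(mounted))) {
            out.writeObject("payload");
        }
        try {
            System.out.println(loadViaLocalCopy(mounted));
        } finally {
            Files.deleteIfExists(mounted);
        }
    }
}
```

Separately, simply wrapping the stream in a `BufferedInputStream` may be worth testing on its own: `ObjectInputStream$PeekInputStream.peek` issues single-byte `read()` calls (the `FileInputStream.read0` frames in the trace), and very small reads tend to be a weak spot of 9p-style mounts.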
**What you expected to happen**:
Input via Java ObjectInputStream would complete in a timely fashion.
**How to reproduce it** (as minimally and precisely as possible):
I think it would take some effort to package up the code to reproduce this. I'm not sure it could be made to happen easily with a simple input test, as I don't see any other cases of reading from mounted files hanging. However, I am happy to try to package up a repro if needed.
**Output of `minikube logs` (if applicable)**:
Oct 31 13:42:30 minikube kubelet[2858]: I1031 13:42:30.942484 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "service-account-creds" (UniqueName: "kubernetes.io/secret/d0514fd3-dd12-11e8-9f35-080027b74f1f-service-account-creds") pod "i2kworkers-54bcbb8647-xd4qp" (UID: "d0514fd3-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:31 minikube kubelet[2858]: I1031 13:42:31.042715 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "i2kcfg-pv" (UniqueName: "kubernetes.io/host-path/d0514fd3-dd12-11e8-9f35-080027b74f1f-i2kcfg-pv") pod "i2kworkers-54bcbb8647-xd4qp" (UID: "d0514fd3-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:31 minikube kubelet[2858]: I1031 13:42:31.042790 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-86gvw" (UniqueName: "kubernetes.io/secret/d0514fd3-dd12-11e8-9f35-080027b74f1f-default-token-86gvw") pod "i2kworkers-54bcbb8647-xd4qp" (UID: "d0514fd3-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:31 minikube kubelet[2858]: I1031 13:42:31.749672 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-86gvw" (UniqueName: "kubernetes.io/secret/d04e02fb-dd12-11e8-9f35-080027b74f1f-default-token-86gvw") pod "i2kweb-5cbccd559c-fd4zl" (UID: "d04e02fb-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:31 minikube kubelet[2858]: I1031 13:42:31.749826 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "i2kcfg-pv" (UniqueName: "kubernetes.io/host-path/d04e02fb-dd12-11e8-9f35-080027b74f1f-i2kcfg-pv") pod "i2kweb-5cbccd559c-fd4zl" (UID: "d04e02fb-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:32 minikube kubelet[2858]: W1031 13:42:32.632280 2858 pod_container_deletor.go:77] Container "ab92edf3fd5862842236d176750c8825009f642eaa8480d5368d8e7d06e03bb7" not found in pod's containers
Oct 31 13:42:32 minikube kubelet[2858]: I1031 13:42:32.861477 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-86gvw" (UniqueName: "kubernetes.io/secret/d047a2bd-dd12-11e8-9f35-080027b74f1f-default-token-86gvw") pod "i2ksource-776bd7d5f9-6x5sn" (UID: "d047a2bd-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:32 minikube kubelet[2858]: I1031 13:42:32.861513 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "i2kcfg-pv" (UniqueName: "kubernetes.io/host-path/d047a2bd-dd12-11e8-9f35-080027b74f1f-i2kcfg-pv") pod "i2ksource-776bd7d5f9-6x5sn" (UID: "d047a2bd-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:32 minikube kubelet[2858]: W1031 13:42:32.940001 2858 container.go:393] Failed to create summary reader for "/system.slice/run-rfe08f2a43dc6418b983852ca5058dd32.scope": none of the resources are being tracked.
Oct 31 13:42:32 minikube kubelet[2858]: W1031 13:42:32.940193 2858 container.go:393] Failed to create summary reader for "/system.slice/run-r95351b62bccb46ddb51632403d1a3aa6.scope": none of the resources are being tracked.
Oct 31 13:42:32 minikube kubelet[2858]: W1031 13:42:32.940278 2858 container.go:393] Failed to create summary reader for "/system.slice/run-r65f5a980289346a5a723503cbaf36af6.scope": none of the resources are being tracked.
Oct 31 13:42:32 minikube kubelet[2858]: W1031 13:42:32.940355 2858 container.go:393] Failed to create summary reader for "/system.slice/run-r1777c7fc4a00428795281434ff9ba71e.scope": none of the resources are being tracked.
Oct 31 13:42:33 minikube kubelet[2858]: I1031 13:42:33.868570 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-86gvw" (UniqueName: "kubernetes.io/secret/d049e46d-dd12-11e8-9f35-080027b74f1f-default-token-86gvw") pod "i2kconduit-db-76ffdd776c-wfstq" (UID: "d049e46d-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:33 minikube kubelet[2858]: I1031 13:42:33.868638 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "i2kcfg-pv" (UniqueName: "kubernetes.io/host-path/d049e46d-dd12-11e8-9f35-080027b74f1f-i2kcfg-pv") pod "i2kconduit-db-76ffdd776c-wfstq" (UID: "d049e46d-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:33 minikube kubelet[2858]: W1031 13:42:33.918736 2858 pod_container_deletor.go:77] Container "4a8037a08c45e89b2cdc0f6664394ed00edf3453a5bc64f47baddf3f727bc8e2" not found in pod's containers
Oct 31 13:42:34 minikube kubelet[2858]: I1031 13:42:34.878400 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-86gvw" (UniqueName: "kubernetes.io/secret/d05ccc32-dd12-11e8-9f35-080027b74f1f-default-token-86gvw") pod "i2kcontext-796c856b5-rdpb8" (UID: "d05ccc32-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:34 minikube kubelet[2858]: I1031 13:42:34.878424 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "i2kcfg-pv" (UniqueName: "kubernetes.io/host-path/d05ccc32-dd12-11e8-9f35-080027b74f1f-i2kcfg-pv") pod "i2kcontext-796c856b5-rdpb8" (UID: "d05ccc32-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:34 minikube kubelet[2858]: W1031 13:42:34.992057 2858 container.go:393] Failed to create summary reader for "/system.slice/run-re6f56c27c2de46498e7e5ce72a756bc1.scope": none of the resources are being tracked.
Oct 31 13:42:35 minikube kubelet[2858]: I1031 13:42:35.886929 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-d04134fc-dd12-11e8-9f35-080027b74f1f" (UniqueName: "kubernetes.io/host-path/d047582c-dd12-11e8-9f35-080027b74f1f-pvc-d04134fc-dd12-11e8-9f35-080027b74f1f") pod "i2kmariadb-56978b4c77-dmf8q" (UID: "d047582c-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:36 minikube kubelet[2858]: I1031 13:42:36.690085 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "i2kcfg-pv" (UniqueName: "kubernetes.io/host-path/d047582c-dd12-11e8-9f35-080027b74f1f-i2kcfg-pv") pod "i2kmariadb-56978b4c77-dmf8q" (UID: "d047582c-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:36 minikube kubelet[2858]: I1031 13:42:36.690170 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-86gvw" (UniqueName: "kubernetes.io/secret/d047582c-dd12-11e8-9f35-080027b74f1f-default-token-86gvw") pod "i2kmariadb-56978b4c77-dmf8q" (UID: "d047582c-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:39 minikube kubelet[2858]: I1031 13:42:39.101731 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-d042b186-dd12-11e8-9f35-080027b74f1f" (UniqueName: "kubernetes.io/host-path/d0475855-dd12-11e8-9f35-080027b74f1f-pvc-d042b186-dd12-11e8-9f35-080027b74f1f") pod "i2ksearch-849fcdcf78-rxzp8" (UID: "d0475855-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:39 minikube kubelet[2858]: I1031 13:42:39.904880 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-86gvw" (UniqueName: "kubernetes.io/secret/d0475855-dd12-11e8-9f35-080027b74f1f-default-token-86gvw") pod "i2ksearch-849fcdcf78-rxzp8" (UID: "d0475855-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:42:39 minikube kubelet[2858]: I1031 13:42:39.904915 2858 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "i2kcfg-pv" (UniqueName: "kubernetes.io/host-path/d0475855-dd12-11e8-9f35-080027b74f1f-i2kcfg-pv") pod "i2ksearch-849fcdcf78-rxzp8" (UID: "d0475855-dd12-11e8-9f35-080027b74f1f")
Oct 31 13:56:21 minikube kubelet[2858]: I1031 13:56:21.769608 2858 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-krks7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Oct 31 13:56:21 minikube kubelet[2858]: I1031 13:56:21.769882 2858 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-6f4cfc5d87-kj8fr_kube-system(b9c41328-dcbf-11e8-9f35-080027b74f1f)"
**Anything else do we need to know**:
| 1.0 | priority |
kubernetes io secret default token pod uid oct minikube kubelet reconciler go operationexecutor verifycontrollerattachedvolume started for volume pvc uniquename kubernetes io host path pvc pod uid oct minikube kubelet reconciler go operationexecutor verifycontrollerattachedvolume started for volume default token uniquename kubernetes io secret default token pod uid oct minikube kubelet reconciler go operationexecutor verifycontrollerattachedvolume started for volume pv uniquename kubernetes io host path pv pod uid oct minikube kubelet kuberuntime manager go container name kubernetes dashboard image gcr io kubernetes dashboard command args workingdir ports envfrom env resources limits map requests map volumemounts volumedevices livenessprobe probe handler handler exec nil httpget httpgetaction path port host scheme http httpheaders tcpsocket nil initialdelayseconds timeoutseconds periodseconds successthreshold failurethreshold readinessprobe nil lifecycle nil terminationmessagepath dev termination log terminationmessagepolicy file imagepullpolicy ifnotpresent securitycontext nil stdin false stdinonce false tty false is dead but restartpolicy says that we should restart it oct minikube kubelet kuberuntime manager go checking backoff for container kubernetes dashboard in pod kubernetes dashboard kube system dcbf anything else do we need to know | 1 |
223,336 | 7,452,393,248 | IssuesEvent | 2018-03-29 08:13:36 | anticto/steamroll | https://api.github.com/repos/anticto/steamroll | closed | Guts of steamball tunnels has some diffuse material | art bug medium-priority | It should not reflect any light to create depth effect. Or at least have a shape that matches the shape of the exterior

| 1.0 | Guts of steamball tunnels has some diffuse material - It should not reflect any light to create depth effect. Or at least have a shape that matches the shape of the exterior

| priority | guts of steamball tunnels has some diffuse material it should not reflect any light to create depth effect or at least have a shape that matches the shape of the exterior | 1 |
350,771 | 31,932,303,829 | IssuesEvent | 2023-09-19 08:15:56 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix jax_lax_operators.test_jax_argmax | JAX Frontend Sub Task Failing Test | | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6162488937"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6004180933"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6004180933"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6004180933"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6004180933"><img src=https://img.shields.io/badge/-failure-red></a>
| 1.0 | Fix jax_lax_operators.test_jax_argmax - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6162488937"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6004180933"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6004180933"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6004180933"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6004180933"><img src=https://img.shields.io/badge/-failure-red></a>
| non_priority | fix jax lax operators test jax argmax numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src | 0 |
740,266 | 25,740,200,423 | IssuesEvent | 2022-12-08 05:16:04 | googleapis/nodejs-ai-platform | https://api.github.com/repos/googleapis/nodejs-ai-platform | closed | AI platform create batch prediction job video classification: should create a video classification batch prediction job failed | type: bug priority: p1 flakybot: issue api: vertex-ai | Note: #380 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 04f7c858217f1a3ce7b1072c7bf8946d39947532
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/2c1b5aae-a46c-4a7a-a474-79da01e686eb), [Sponge](http://sponge2/2c1b5aae-a46c-4a7a-a474-79da01e686eb)
status: failed
<details><summary>Test output</summary><br><pre>Command failed: node ./create-batch-prediction-job-video-classification.js temp_create_batch_prediction_video_classification_testd8b7aa2f-bb08-449d-8ba5-f64a7f131afc 8596984660557299712 gs://ucaip-samples-test-output/inputs/vcn_40_batch_prediction_input.jsonl gs://ucaip-samples-test-output/ undefined us-central1
7 PERMISSION_DENIED: Permission denied: Consumer 'project:undefined' has been suspended.
Error: Command failed: node ./create-batch-prediction-job-video-classification.js temp_create_batch_prediction_video_classification_testd8b7aa2f-bb08-449d-8ba5-f64a7f131afc 8596984660557299712 gs://ucaip-samples-test-output/inputs/vcn_40_batch_prediction_input.jsonl gs://ucaip-samples-test-output/ undefined us-central1
7 PERMISSION_DENIED: Permission denied: Consumer 'project:undefined' has been suspended.
at checkExecSyncError (child_process.js:635:11)
at Object.execSync (child_process.js:671:15)
at execSync (test/create-batch-prediction-job-video-classification.test.js:24:28)
at Context.<anonymous> (test/create-batch-prediction-job-video-classification.test.js:46:20)
at processImmediate (internal/timers.js:461:21)</pre></details> | 1.0 | AI platform create batch prediction job video classification: should create a video classification batch prediction job failed - Note: #380 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 04f7c858217f1a3ce7b1072c7bf8946d39947532
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/2c1b5aae-a46c-4a7a-a474-79da01e686eb), [Sponge](http://sponge2/2c1b5aae-a46c-4a7a-a474-79da01e686eb)
status: failed
<details><summary>Test output</summary><br><pre>Command failed: node ./create-batch-prediction-job-video-classification.js temp_create_batch_prediction_video_classification_testd8b7aa2f-bb08-449d-8ba5-f64a7f131afc 8596984660557299712 gs://ucaip-samples-test-output/inputs/vcn_40_batch_prediction_input.jsonl gs://ucaip-samples-test-output/ undefined us-central1
7 PERMISSION_DENIED: Permission denied: Consumer 'project:undefined' has been suspended.
Error: Command failed: node ./create-batch-prediction-job-video-classification.js temp_create_batch_prediction_video_classification_testd8b7aa2f-bb08-449d-8ba5-f64a7f131afc 8596984660557299712 gs://ucaip-samples-test-output/inputs/vcn_40_batch_prediction_input.jsonl gs://ucaip-samples-test-output/ undefined us-central1
7 PERMISSION_DENIED: Permission denied: Consumer 'project:undefined' has been suspended.
at checkExecSyncError (child_process.js:635:11)
at Object.execSync (child_process.js:671:15)
at execSync (test/create-batch-prediction-job-video-classification.test.js:24:28)
at Context.<anonymous> (test/create-batch-prediction-job-video-classification.test.js:46:20)
at processImmediate (internal/timers.js:461:21)</pre></details> | priority | ai platform create batch prediction job video classification should create a video classification batch prediction job failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output command failed node create batch prediction job video classification js temp create batch prediction video classification gs ucaip samples test output inputs vcn batch prediction input jsonl gs ucaip samples test output undefined us permission denied permission denied consumer project undefined has been suspended error command failed node create batch prediction job video classification js temp create batch prediction video classification gs ucaip samples test output inputs vcn batch prediction input jsonl gs ucaip samples test output undefined us permission denied permission denied consumer project undefined has been suspended at checkexecsyncerror child process js at object execsync child process js at execsync test create batch prediction job video classification test js at context test create batch prediction job video classification test js at processimmediate internal timers js | 1 |
348,486 | 31,622,113,920 | IssuesEvent | 2023-09-06 00:28:42 | sayakongit/status-code-sangnet | https://api.github.com/repos/sayakongit/status-code-sangnet | opened | Unit test cases for backend | hacktoberfest unit-tests | ### Description
For the backend, we need to write unit test cases for each existing modules with the maximum code coverage. These may be related to API test or functional test. The test cases would be both positive and negative ensuring a better end user experience and robust usage. | 1.0 | Unit test cases for backend - ### Description
For the backend, we need to write unit test cases for each existing modules with the maximum code coverage. These may be related to API test or functional test. The test cases would be both positive and negative ensuring a better end user experience and robust usage. | non_priority | unit test cases for backend description for the backend we need to write unit test cases for each existing modules with the maximum code coverage these may be related to api test or functional test the test cases would be both positive and negative ensuring a better end user experience and robust usage | 0 |
339,094 | 10,241,896,598 | IssuesEvent | 2019-08-20 02:28:59 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.hager.de - "Webpage is slowing down your browser" banner displayed | browser-firefox engine-gecko priority-normal severity-important | <!-- @browser: Firefox 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0 -->
<!-- @reported_with: web -->
**URL**: https://www.hager.de/
**Browser / Version**: Firefox 70.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: hager.de shows yellow "Webpage is slowing down your browser" banner
**Steps to Reproduce**:
Important product sites on hager.de are almost unusable in practise.
SRT:
visit https://www.hager.de/
click (for example) on top menu "Produktkatalog"
click (for example) "Modulargeräte "
Result: long loading, yellow "webpage slowing" warning appears 2-3 times (clicking "wait")
Click another link, e.g. "Fehlerstromschutzschalter"
Result: Problem repeats
You get the same results with deeplinks, e.g.
https://www.hager.de/modulargeraete/leitungsschutzschalter/930276.htm
https://www.hager.de/modulargeraete/fehlerstrom-leitungsschutzschalter/930291.htm
Tested with Nightly 70, same Problem with Beta 69 and Fx 68.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.hager.de - "Webpage is slowing down your browser" banner displayed - <!-- @browser: Firefox 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0 -->
<!-- @reported_with: web -->
**URL**: https://www.hager.de/
**Browser / Version**: Firefox 70.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: hager.de shows yellow "Webpage is slowing down your browser" banner
**Steps to Reproduce**:
Important product sites on hager.de are almost unusable in practise.
SRT:
visit https://www.hager.de/
click (for example) on top menu "Produktkatalog"
click (for example) "Modulargeräte "
Result: long loading, yellow "webpage slowing" warning appears 2-3 times (clicking "wait")
Click another link, e.g. "Fehlerstromschutzschalter"
Result: Problem repeats
You get the same results with deeplinks, e.g.
https://www.hager.de/modulargeraete/leitungsschutzschalter/930276.htm
https://www.hager.de/modulargeraete/fehlerstrom-leitungsschutzschalter/930291.htm
Tested with Nightly 70, same Problem with Beta 69 and Fx 68.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | webpage is slowing down your browser banner displayed url browser version firefox operating system windows tested another browser yes problem type site is not usable description hager de shows yellow webpage is slowing down your browser banner steps to reproduce important product sites on hager de are almost unusable in practise srt visit click for example on top menu produktkatalog click for example modulargeräte result long loading yellow webpage slowing warning appears times clicking wait click another link e g fehlerstromschutzschalter result problem repeats you get the same results with deeplinks e g tested with nightly same problem with beta and fx browser configuration none from with ❤️ | 1 |
342,317 | 24,738,577,953 | IssuesEvent | 2022-10-21 01:37:01 | tlsrb100/my-first-github-repository | https://api.github.com/repos/tlsrb100/my-first-github-repository | reopened | Write the Todo App requirements specification | documentation | ### Feature to build
- todo app
### Workflow for implementing this feature
[]
[]
[]
### Estimated work time
ex) 3h | 1.0 | Write the Todo App requirements specification - ### Feature to build
- todo app
### Workflow for implementing this feature
[]
[]
[]
### Estimated work time
ex) 3h | non_priority | write the todo app requirements specification feature to build todo app workflow for implementing this feature estimated work time ex | 0 |
3,730 | 6,733,142,355 | IssuesEvent | 2017-10-18 13:58:37 | york-region-tpss/stp | https://api.github.com/repos/york-region-tpss/stp | closed | Contract preparation single item view - Process Data | process workflow | Flush the data in form and collection to the database. | 1.0 | Contract preparation single item view - Process Data - Flush the data in form and collection to the database. | non_priority | contract preparation single item view process data flush the data in form and collection to the database | 0 |
268,485 | 20,325,932,877 | IssuesEvent | 2022-02-18 05:42:52 | Cantera/cantera | https://api.github.com/repos/Cantera/cantera | opened | Update reactions.rst to reflect changes introduced in Cantera 2.6 | documentation | <!-- Please fill in the following information to report a problem with Cantera. If you have a question about using Cantera, please post it on our Google Users' Group (https://groups.google.com/forum/#!forum/cantera-users). Feature enhancements should be discussed in the dedicated Cantera enhancements repository (https://github.com/Cantera/enhancements/new/choose) -->
**Problem description**
<!-- A clear and concise description of what the bug is. -->
Changes introduced by Cantera/enhancements#87 need to be reflected in the documentation prior to the release of Cantera 2.6.
**System information**
- Cantera version: 2.6.0a4
- OS: N/A
- Python/MATLAB/other software versions: N/A
| 1.0 | Update reactions.rst to reflect changes introduced in Cantera 2.6 - <!-- Please fill in the following information to report a problem with Cantera. If you have a question about using Cantera, please post it on our Google Users' Group (https://groups.google.com/forum/#!forum/cantera-users). Feature enhancements should be discussed in the dedicated Cantera enhancements repository (https://github.com/Cantera/enhancements/new/choose) -->
**Problem description**
<!-- A clear and concise description of what the bug is. -->
Changes introduced by Cantera/enhancements#87 need to be reflected in the documentation prior to the release of Cantera 2.6.
**System information**
- Cantera version: 2.6.0a4
- OS: N/A
- Python/MATLAB/other software versions: N/A
| non_priority | update reactions rst to reflect changes introduced in cantera problem description changes introduced by cantera enhancements need to be reflected in the documentation prior to the release of cantera system information cantera version os n a python matlab other software versions n a | 0 |