Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7,964 | 3,642,016,856 | IssuesEvent | 2016-02-14 01:36:12 | agdsn/sipa | https://api.github.com/repos/agdsn/sipa | closed | Refactor logging configuration | code enhancement | The logging configuration is kind of weirdly patched. And mostly not used.
The actual intention is to replace the `.ini` config with a `dict` or similiar, and while doign that I can clean up stuff as well. | 1.0 | Refactor logging configuration - The logging configuration is kind of weirdly patched. And mostly not used.
The actual intention is to replace the `.ini` config with a `dict` or similiar, and while doign that I can clean up stuff as well. | non_main | refactor logging configuration the logging configuration is kind of weirdly patched and mostly not used the actual intention is to replace the ini config with a dict or similiar and while doign that i can clean up stuff as well | 0 |
5,481 | 27,377,606,515 | IssuesEvent | 2023-02-28 07:37:33 | Windham-High-School/CubeServer | https://api.github.com/repos/Windham-High-School/CubeServer | closed | Bug Fixes & Completing API | bug enhancement ui maintainability | CubeServer-api-python
- [x] Finish implementing necessary dataclasses
- [x] Add emails sent visibility to admin table
- [x] Allow browsing sent emails
- [x] Connection between sent emails collection and teams for API-sent mail
- [x] Remove (some) dead code in wrapper
- [x] Add server version to status
| True | Bug Fixes & Completing API - CubeServer-api-python
- [x] Finish implementing necessary dataclasses
- [x] Add emails sent visibility to admin table
- [x] Allow browsing sent emails
- [x] Connection between sent emails collection and teams for API-sent mail
- [x] Remove (some) dead code in wrapper
- [x] Add server version to status
| main | bug fixes completing api cubeserver api python finish implementing necessary dataclasses add emails sent visibility to admin table allow browsing sent emails connection between sent emails collection and teams for api sent mail remove some dead code in wrapper add server version to status | 1 |
38,972 | 10,272,244,731 | IssuesEvent | 2019-08-23 15:56:00 | coin-or-tools/BuildTools | https://api.github.com/repos/coin-or-tools/BuildTools | opened | add compiler flags to specify C/C++ standard | build system enhancement | Add a macro so a project can specify which C++ version it compiles flags are added to specify this version (e.g., `-std=gnu++11`). Similar for C. | 1.0 | add compiler flags to specify C/C++ standard - Add a macro so a project can specify which C++ version it compiles flags are added to specify this version (e.g., `-std=gnu++11`). Similar for C. | non_main | add compiler flags to specify c c standard add a macro so a project can specify which c version it compiles flags are added to specify this version e g std gnu similar for c | 0 |
160,714 | 20,117,721,146 | IssuesEvent | 2022-02-07 21:27:59 | ibm-skills-network/editor.md | https://api.github.com/repos/ibm-skills-network/editor.md | closed | CVE-2016-7103 (Medium) detected in jquery-ui-1.11.0.min.js - autoclosed | security vulnerability | ## CVE-2016-7103 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.11.0.min.js</b></p></summary>
<p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.0/jquery-ui.min.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.0/jquery-ui.min.js</a></p>
<p>Path to dependency file: /lib/codemirror/mode/slim/index.html</p>
<p>Path to vulnerable library: /lib/codemirror/mode/slim/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.11.0.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ibm-skills-network/editor.md/commit/3536c96518d940a17281ef2d14155d06cf61d37a">3536c96518d940a17281ef2d14155d06cf61d37a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site scripting (XSS) vulnerability in jQuery UI before 1.12.0 might allow remote attackers to inject arbitrary web script or HTML via the closeText parameter of the dialog function.
<p>Publish Date: 2017-03-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-7103>CVE-2016-7103</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-7103">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-7103</a></p>
<p>Release Date: 2017-03-15</p>
<p>Fix Resolution: jquery-ui - 1.12.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-7103 (Medium) detected in jquery-ui-1.11.0.min.js - autoclosed - ## CVE-2016-7103 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.11.0.min.js</b></p></summary>
<p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.0/jquery-ui.min.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.11.0/jquery-ui.min.js</a></p>
<p>Path to dependency file: /lib/codemirror/mode/slim/index.html</p>
<p>Path to vulnerable library: /lib/codemirror/mode/slim/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.11.0.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ibm-skills-network/editor.md/commit/3536c96518d940a17281ef2d14155d06cf61d37a">3536c96518d940a17281ef2d14155d06cf61d37a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site scripting (XSS) vulnerability in jQuery UI before 1.12.0 might allow remote attackers to inject arbitrary web script or HTML via the closeText parameter of the dialog function.
<p>Publish Date: 2017-03-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-7103>CVE-2016-7103</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-7103">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-7103</a></p>
<p>Release Date: 2017-03-15</p>
<p>Fix Resolution: jquery-ui - 1.12.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in jquery ui min js autoclosed cve medium severity vulnerability vulnerable library jquery ui min js a curated set of user interface interactions effects widgets and themes built on top of the jquery javascript library library home page a href path to dependency file lib codemirror mode slim index html path to vulnerable library lib codemirror mode slim index html dependency hierarchy x jquery ui min js vulnerable library found in head commit a href found in base branch master vulnerability details cross site scripting xss vulnerability in jquery ui before might allow remote attackers to inject arbitrary web script or html via the closetext parameter of the dialog function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery ui step up your open source security game with whitesource | 0 |
54,970 | 11,355,932,199 | IssuesEvent | 2020-01-24 21:17:37 | GSA/code-gov-front-end | https://api.github.com/repos/GSA/code-gov-front-end | closed | Fixes to Open Task `Types` | [issue-type] bug [issue-type] good first issue [skill-level] beginner code.gov help wanted | <!-- Issues should follow our Issue Guidelines, which are at https://github.com/GSA/code-gov-front-end/blob/master/CONTRIBUTING.md#issue-guidelines -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
<!-- Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
-->
1. View Open Tasks for GSA https://code.gov/open-tasks?&agencies=GSA&page=1&size=10
2. Take a look a the Type meta data for `Reduce Image Sizes` and several other tasks. The type is listed as `good`.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Type should be `good first issue`.
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
<img width="971" alt="Screen Shot 2019-05-07 at 5 34 16 PM" src="https://user-images.githubusercontent.com/2197515/57335070-8b6a3980-70ef-11e9-8ea6-73d09d0de356.png">
<img width="286" alt="Screen Shot 2019-05-07 at 5 34 24 PM" src="https://user-images.githubusercontent.com/2197515/57335071-8b6a3980-70ef-11e9-86e0-189d7dc29e1f.png">
<img width="334" alt="Screen Shot 2019-05-07 at 5 38 14 PM" src="https://user-images.githubusercontent.com/2197515/57335072-8b6a3980-70ef-11e9-8b7a-2b044db5127a.png">
**Additional context**
<!-- Add any other context about the problem here. -->
| 1.0 | Fixes to Open Task `Types` - <!-- Issues should follow our Issue Guidelines, which are at https://github.com/GSA/code-gov-front-end/blob/master/CONTRIBUTING.md#issue-guidelines -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
<!-- Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
-->
1. View Open Tasks for GSA https://code.gov/open-tasks?&agencies=GSA&page=1&size=10
2. Take a look a the Type meta data for `Reduce Image Sizes` and several other tasks. The type is listed as `good`.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Type should be `good first issue`.
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
<img width="971" alt="Screen Shot 2019-05-07 at 5 34 16 PM" src="https://user-images.githubusercontent.com/2197515/57335070-8b6a3980-70ef-11e9-8ea6-73d09d0de356.png">
<img width="286" alt="Screen Shot 2019-05-07 at 5 34 24 PM" src="https://user-images.githubusercontent.com/2197515/57335071-8b6a3980-70ef-11e9-86e0-189d7dc29e1f.png">
<img width="334" alt="Screen Shot 2019-05-07 at 5 38 14 PM" src="https://user-images.githubusercontent.com/2197515/57335072-8b6a3980-70ef-11e9-8b7a-2b044db5127a.png">
**Additional context**
<!-- Add any other context about the problem here. -->
| non_main | fixes to open task types describe the bug to reproduce steps to reproduce the behavior go to click on scroll down to see error view open tasks for gsa take a look a the type meta data for reduce image sizes and several other tasks the type is listed as good expected behavior type should be good first issue screenshots img width alt screen shot at pm src img width alt screen shot at pm src img width alt screen shot at pm src additional context | 0 |
754 | 4,351,916,477 | IssuesEvent | 2016-08-01 02:54:37 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Ansible Openstack module in OS_NOVA_FLAVOR | cloud feature_idea waiting_on_maintainer | ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
os_nova_flavor module
##### ANSIBLE VERSION
N/A
##### SUMMARY
There is no argument right now to pass extra spec to flavor. These are very important to for creation of NUMA, CPU PINNING , SRIOV etc .
Openstack support it but ansible does not support natively .
Regards
Arif Khan | True | Ansible Openstack module in OS_NOVA_FLAVOR - ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
os_nova_flavor module
##### ANSIBLE VERSION
N/A
##### SUMMARY
There is no argument right now to pass extra spec to flavor. These are very important to for creation of NUMA, CPU PINNING , SRIOV etc .
Openstack support it but ansible does not support natively .
Regards
Arif Khan | main | ansible openstack module in os nova flavor issue type feature idea component name os nova flavor module ansible version n a summary there is no argument right now to pass extra spec to flavor these are very important to for creation of numa cpu pinning sriov etc openstack support it but ansible does not support natively regards arif khan | 1 |
2,157 | 7,496,502,007 | IssuesEvent | 2018-04-08 10:05:20 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Test methods should not contain conditions | Area: analyzer Area: maintainability feature in progress | A test method should not contain following statements:
- if
- switch
- ??
- ?.
Reason:
Tests should test a very specific scenario. Therefore there is no need to have a condition because in that situation a test tests more than one scenario.
So a condition inside a test is a big code smell. | True | Test methods should not contain conditions - A test method should not contain following statements:
- if
- switch
- ??
- ?.
Reason:
Tests should test a very specific scenario. Therefore there is no need to have a condition because in that situation a test tests more than one scenario.
So a condition inside a test is a big code smell. | main | test methods should not contain conditions a test method should not contain following statements if switch reason tests should test a very specific scenario therefore there is no need to have a condition because in that situation a test tests more than one scenario so a condition inside a test is a big code smell | 1 |
4,850 | 24,976,332,890 | IssuesEvent | 2022-11-02 08:09:41 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | AssertionError while creating new record | type: bug work: backend status: ready restricted: maintainers | ## Description
I got this error while creating a new record. I could not find a reliable method to reproduce this, but once I got this error, I kept getting it consistently until I restarted our service.
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/db/v0/tables/159/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/records.py", line 139, in create
serializer.save()
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 206, in save
assert self.instance is not None, (
Exception Type: AssertionError at /api/db/v0/tables/159/records/
Exception Value: `create()` did not return an object instance.
```
cc @mathemancer | True | AssertionError while creating new record - ## Description
I got this error while creating a new record. I could not find a reliable method to reproduce this, but once I got this error, I kept getting it consistently until I restarted our service.
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/db/v0/tables/159/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/records.py", line 139, in create
serializer.save()
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 206, in save
assert self.instance is not None, (
Exception Type: AssertionError at /api/db/v0/tables/159/records/
Exception Value: `create()` did not return an object instance.
```
cc @mathemancer | main | assertionerror while creating new record description i got this error while creating a new record i could not find a reliable method to reproduce this but once i got this error i kept getting it consistently until i restarted our service environment request method post request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file code mathesar api db viewsets records py line in create serializer save file usr local lib site packages rest 
framework serializers py line in save assert self instance is not none exception type assertionerror at api db tables records exception value create did not return an object instance cc mathemancer | 1 |
545,855 | 15,964,705,762 | IssuesEvent | 2021-04-16 06:41:42 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | opened | Compiler crashes when bound method access is used with `self` of an isolated object | Crash Priority/Blocker Team/CompilerFE Type/Bug | **Description:**
$title.
**Steps to reproduce:**
```ballerina
public isolated class Foo {
public isolated function bar() {
isolated function () fn = self.baz;
}
isolated function baz() {
}
}
```
```log
[2021-04-16 12:08:45,574] SEVERE {b7a.log.crash} - null
java.lang.NullPointerException
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.isValidIsolatedObjectFieldAccessViaSelfOutsideLock(IsolationAnalyzer.java:2218)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:1174)
at org.wso2.ballerinalang.compiler.tree.expressions.BLangFieldBasedAccess.accept(BLangFieldBasedAccess.java:65)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:442)
at org.wso2.ballerinalang.compiler.tree.BLangSimpleVariable.accept(BLangSimpleVariable.java:53)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:489)
at org.wso2.ballerinalang.compiler.tree.statements.BLangSimpleVariableDef.accept(BLangSimpleVariableDef.java:46)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:376)
at org.wso2.ballerinalang.compiler.tree.BLangBlockFunctionBody.accept(BLangBlockFunctionBody.java:58)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:362)
at org.wso2.ballerinalang.compiler.tree.BLangFunction.accept(BLangFunction.java:73)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:1598)
at org.wso2.ballerinalang.compiler.tree.BLangClassDefinition.accept(BLangClassDefinition.java:106)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:311)
at org.wso2.ballerinalang.compiler.tree.BLangPackage.accept(BLangPackage.java:167)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyze(IsolationAnalyzer.java:288)
at io.ballerina.projects.internal.CompilerPhaseRunner.isolationAnalyze(CompilerPhaseRunner.java:233)
at io.ballerina.projects.internal.CompilerPhaseRunner.performTypeCheckPhases(CompilerPhaseRunner.java:130)
at io.ballerina.projects.ModuleContext.compileInternal(ModuleContext.java:383)
at io.ballerina.projects.ModuleCompilationState$1.compile(ModuleCompilationState.java:45)
at io.ballerina.projects.ModuleContext.compile(ModuleContext.java:329)
at io.ballerina.projects.PackageCompilation.compileModulesInternal(PackageCompilation.java:176)
at io.ballerina.projects.PackageCompilation.compileModules(PackageCompilation.java:168)
at io.ballerina.projects.PackageCompilation.from(PackageCompilation.java:97)
at io.ballerina.projects.PackageContext.getPackageCompilation(PackageContext.java:206)
at io.ballerina.projects.Package.getCompilation(Package.java:131)
at io.ballerina.cli.task.CompileTask.execute(CompileTask.java:68)
at io.ballerina.cli.TaskExecutor.executeTasks(TaskExecutor.java:40)
at io.ballerina.cli.cmd.RunCommand.execute(RunCommand.java:166)
at java.base/java.util.Optional.ifPresent(Optional.java:183)
at io.ballerina.cli.launcher.Main.main(Main.java:58)
```
**Affected Versions:**
slalpha4 | 1.0 | Compiler crashes when bound method access is used with `self` of an isolated object - **Description:**
$title.
**Steps to reproduce:**
```ballerina
public isolated class Foo {
public isolated function bar() {
isolated function () fn = self.baz;
}
isolated function baz() {
}
}
```
```log
[2021-04-16 12:08:45,574] SEVERE {b7a.log.crash} - null
java.lang.NullPointerException
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.isValidIsolatedObjectFieldAccessViaSelfOutsideLock(IsolationAnalyzer.java:2218)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:1174)
at org.wso2.ballerinalang.compiler.tree.expressions.BLangFieldBasedAccess.accept(BLangFieldBasedAccess.java:65)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:442)
at org.wso2.ballerinalang.compiler.tree.BLangSimpleVariable.accept(BLangSimpleVariable.java:53)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:489)
at org.wso2.ballerinalang.compiler.tree.statements.BLangSimpleVariableDef.accept(BLangSimpleVariableDef.java:46)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:376)
at org.wso2.ballerinalang.compiler.tree.BLangBlockFunctionBody.accept(BLangBlockFunctionBody.java:58)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:362)
at org.wso2.ballerinalang.compiler.tree.BLangFunction.accept(BLangFunction.java:73)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:1598)
at org.wso2.ballerinalang.compiler.tree.BLangClassDefinition.accept(BLangClassDefinition.java:106)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.visit(IsolationAnalyzer.java:311)
at org.wso2.ballerinalang.compiler.tree.BLangPackage.accept(BLangPackage.java:167)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyzeNode(IsolationAnalyzer.java:280)
at org.wso2.ballerinalang.compiler.semantics.analyzer.IsolationAnalyzer.analyze(IsolationAnalyzer.java:288)
at io.ballerina.projects.internal.CompilerPhaseRunner.isolationAnalyze(CompilerPhaseRunner.java:233)
at io.ballerina.projects.internal.CompilerPhaseRunner.performTypeCheckPhases(CompilerPhaseRunner.java:130)
at io.ballerina.projects.ModuleContext.compileInternal(ModuleContext.java:383)
at io.ballerina.projects.ModuleCompilationState$1.compile(ModuleCompilationState.java:45)
at io.ballerina.projects.ModuleContext.compile(ModuleContext.java:329)
at io.ballerina.projects.PackageCompilation.compileModulesInternal(PackageCompilation.java:176)
at io.ballerina.projects.PackageCompilation.compileModules(PackageCompilation.java:168)
at io.ballerina.projects.PackageCompilation.from(PackageCompilation.java:97)
at io.ballerina.projects.PackageContext.getPackageCompilation(PackageContext.java:206)
at io.ballerina.projects.Package.getCompilation(Package.java:131)
at io.ballerina.cli.task.CompileTask.execute(CompileTask.java:68)
at io.ballerina.cli.TaskExecutor.executeTasks(TaskExecutor.java:40)
at io.ballerina.cli.cmd.RunCommand.execute(RunCommand.java:166)
at java.base/java.util.Optional.ifPresent(Optional.java:183)
at io.ballerina.cli.launcher.Main.main(Main.java:58)
```
**Affected Versions:**
slalpha4
3,670 | 15,007,970,876 | IssuesEvent | 2021-01-31 07:49:27 | arkane-systems/genie | https://api.github.com/repos/arkane-systems/genie | closed | Degradation in 1.29: Support for CentOS7 | distro-maintainer-wanted help wanted | Usage of machinectl should be customizable.
In Debian everything is fine, but under CentOS 7 I get:
```console
[root@honeypot ~]# genie --version
1.28
[root@honeypot ~]# echo ${INSIDE_GENIE:-false}
false
[root@honeypot ~]# genie -s
runuser: invalid option -- 'w'
Usage:
runuser [options] -u <USER> COMMAND
runuser [options] [-] [USER [arg]...]
Run COMMAND with the effective <user> id and group id. If -u not
given, fallback to su(1) compatible semantic and shell is executed.
The options -l, -c, -f, -s are mutually exclusive to -u.
Options:
-u, --user <user> username
-m, -p, --preserve-environment do not reset environment variables
-g, --group <group> specify the primary group
-G, --supp-group <group> specify a supplemental group
-, -l, --login make the shell a login shell
-c, --command <command> pass a single command to the shell with -c
--session-command <command> pass a single command to the shell with -c
and do not create a new session
-f, --fast pass -f to the shell (for csh or tcsh)
-s, --shell <shell> run shell if /etc/shells allows it
-h, --help display this help and exit
-V, --version output version information and exit
For more details see runuser(1).
genie: starting shell failed; nsenter returned 1.
[root@honeypot ~]# genie -c bash
[root@honeypot-wsl ~]# echo $INSIDE_GENIE
true
```
It's almost OK (except for the unsupported `--whitelist-environment` option in `runuser`), but after upgrading:
```console
[root@honeypot ~]# genie --version
1.31
[root@honeypot ~]# genie -s
Unknown operation shell.
genie: starting shell failed; machinectl shell returned 1.
[root@honeypot ~]# genie -c bash
Unknown operation shell.
genie: running command failed; machinectl shell returned 1.
genie: running command failed; machinectl shell returned 1.
[root@honeypot ~]# genie -l
** (pkttyagent:828): WARNING **: 01:16:07.893: Unable to register authentication agent: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown:
The name org.freedesktop.PolicyKit1 was not provided by any .service files
Error registering authentication agent: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not prov
ided by any .service files (g-dbus-error-quark, 2)
Failed to get machine PTY: The name org.freedesktop.machine1 was not provided by any .service files
genie: starting login failed; machinectl login returned 1.
```
| True | main | 1
590,531 | 17,779,932,551 | IssuesEvent | 2021-08-31 02:07:23 | EddieHubCommunity/api | https://api.github.com/repos/EddieHubCommunity/api | closed | Namespace needs to be moved to environment variable | 🏁 status: ready for dev ✨ goal: improvement 🤖 aspect: dx 🔢 points: 2 🟥 priority: critical no-issue-activity | When deployed, we will use `prod` but when developing, we will use `dev` to avoid any confusion
For example...
```ts
AstraModule.forFeature({ namespace: 'eddiehub', collection: 'standup' }),
```
| 1.0 | non_main | 0
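One way to do this (the variable name `ASTRA_NAMESPACE` and the helper are assumptions for illustration, not the project's actual API) is a small resolver with a `dev` fallback:

```typescript
// Hypothetical helper: resolve the Astra namespace from the environment,
// defaulting to 'dev' so local development never writes into 'prod'.
function resolveNamespace(env: Record<string, string | undefined>): string {
  return env["ASTRA_NAMESPACE"] ?? "dev";
}

// Sketch of how the registration could then look:
// AstraModule.forFeature({ namespace: resolveNamespace(process.env), collection: 'standup' }),

console.log(resolveNamespace({ ASTRA_NAMESPACE: "prod" })); // prod
console.log(resolveNamespace({}));                          // dev
```

With this shape, deployments set the variable to `prod` while local runs simply omit it.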
16,050 | 2,870,253,984 | IssuesEvent | 2015-06-07 00:40:25 | pdelia/away3d | https://api.github.com/repos/pdelia/away3d | closed | [Away3D Lite] ColorMaterial uses random when trying set to black. | auto-migrated Priority-Medium Type-Defect | #105 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:50Z
The ColorMaterial will use a random color if you pass in 0 as the color. Change:

```
_color = Cast.color(color || "random");
```

to:

```
_color = color==null ? Cast.color("random") : color;
```
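The pitfall generalizes to any language where 0 is falsy. A minimal TypeScript sketch (illustrative only, not the Away3D source) showing why `||` drops black while an explicit null check keeps it:

```typescript
// Broken: `color || fallback` cannot distinguish black (0x000000)
// from "no color given", because 0 is falsy.
function pickColorBroken(color?: number): number | string {
  return color || "random";
}

// Fixed: only substitute the fallback when the color is actually absent.
function pickColorFixed(color?: number): number | string {
  return color == null ? "random" : color;
}

console.log(pickColorBroken(0x000000)); // "random" (black is silently lost)
console.log(pickColorFixed(0x000000));  // 0 (black is preserved)
```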
Original issue reported on code.google.com by `mirsw...@gmail.com` on 26 Mar 2010 at 10:00 | 1.0 | non_main | 0
330,848 | 10,056,392,658 | IssuesEvent | 2019-07-22 09:03:05 | openshift/odo | https://api.github.com/repos/openshift/odo | opened | weird odo preference behaviour | kind/bug priority/High | [kind/bug]
## What versions of software are you using?
- Operating System: Supported
- Output of `odo version`: master
## How did you run odo exactly?
**Step 1:**
```console
$ odo project create test
✓ New project created and now using project : test
Amits-MacBook-Pro:odo amit$ odo project create test1 -v4 -w
I0722 13:53:55.215196 2082 preference.go:116] The configFile is /Users/amit/.odo/preference.yaml
I0722 13:53:55.215543 2082 occlient.go:485] Trying to connect to server 192.168.64.2:8443
I0722 13:53:55.216147 2082 occlient.go:492] Server https://192.168.64.2:8443 is up
I0722 13:53:55.234835 2082 occlient.go:415] isLoggedIn err: <nil>
output: "developer"
[...]
set: flag accessed but not defined: component
• Waiting for project to come up ...
✓ Waiting for project to come up [322ms]
✓ Project 'test1' is ready for use
✓ New project created and now using project : test1
I0722 13:53:55.569290 2082 odo.go:70] Could not get the latest release information in time. Never mind, exiting gracefully :)
```
**Step 2:**
Go to the location of the preference file referenced by the `odo project create` log.
**Step 3:**
You won't find the **/Users/amit/.odo/preference.yaml** path.
## Actual behavior
No global preference file found
## Expected behavior
The preference file should be created. I believe the global preference file should be created under the hood by any odo command except `odo version`.
## Any logs, error output, etc?
| 1.0 | non_main | 0
4,350 | 21,961,317,621 | IssuesEvent | 2022-05-24 16:03:43 | libp2p/js-libp2p-mplex | https://api.github.com/repos/libp2p/js-libp2p-mplex | closed | Version 1.0.0 and 1.0.1 I think they have a bug | need/maintainers-input | Hi,
I am new to libp2p and was trying it for the first time. I started with the latest version of libp2p-mplex (`@libp2p/mplex`),
but I got this error: https://discuss.libp2p.io/t/trying-to-send-a-message-bettwen-two-nodes-but-got-exception-on-libp2p-mplex/1277
I got this error on both versions 1.0.0 and 1.0.1, but when I switched back to version v0.10.7 it worked.
I made a smaller ping version:
```js
import Libp2p from 'libp2p'
import { NOISE } from 'libp2p-noise'
import MPLEX from 'libp2p-mplex'
import TCP from 'libp2p-tcp'
import { multiaddr } from 'multiaddr'

async function main (bootstrap) {
  const node = await Libp2p.create({
    addresses: {
      listen: ['/ip4/127.0.0.1/tcp/0']
    },
    modules: {
      transport: [TCP],
      connEncryption: [NOISE],
      streamMuxer: [MPLEX]
    },
  });

  const address = bootstrap[0];

  // start libp2p
  await node.start()
  console.log('libp2p has started')

  // print out listening addresses
  console.log('listening on addresses:')
  node.multiaddrs.forEach(addr => {
    console.log(`${addr.toString()}/p2p/${node.peerId.toB58String()}`)
  })

  // ping peer if received multiaddr
  if (address) {
    const ma = multiaddr(address)
    console.log(`pinging remote peer at ${address}`)
    const latency = await node.ping(ma)
    console.log(`pinged ${address} in ${latency}ms`)
  } else {
    console.log('no remote peer address given, skipping ping')
  }

  const stop = async () => {
    // stop libp2p
    await node.stop()
    console.log('libp2p has stopped')
    process.exit(0)
  }

  process.on('SIGTERM', stop)
  process.on('SIGINT', stop)
}

console.log(process.argv);
const [n, p, ...bootstrap] = process.argv;
main(bootstrap);
```
The error is the same with 1.0.0 and 1.0.1
```
Connection established to: QmSY1XoXy6vH44RQziyTSNmf99dk5vyC9tQ6PU49EgqigU /ip4/127.0.0.1/tcp/35071/p2p/QmSY1XoXy6vH44RQziyTSNmf99dk5vyC9tQ6PU49EgqigU
Connection established to: QmSY1XoXy6vH44RQziyTSNmf99dk5vyC9tQ6PU49EgqigU /ip4/127.0.0.1/tcp/35071/p2p/QmSY1XoXy6vH44RQziyTSNmf99dk5vyC9tQ6PU49EgqigU
file:///media/fsvieira/Data/fsvieira/sandbox/libp2p-repo/test/node_modules/uint8arraylist/dist/src/index.js:25
length += buf.length;
^
TypeError: Cannot read properties of undefined (reading 'length')
at Uint8ArrayList.appendAll (file:///media/fsvieira/Data/fsvieira/sandbox/libp2p-repo/test/node_modules/uint8arraylist/dist/src/index.js:25:31)
at Uint8ArrayList.subarray (file:///media/fsvieira/Data/fsvieira/sandbox/libp2p-repo/test/node_modules/uint8arraylist/dist/src/index.js:75:14)
at Object.sink (file:///media/fsvieira/Data/fsvieira/sandbox/libp2p-repo/test/node_modules/libp2p-mplex/dist/src/stream.js:92:82)
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
code: 'ERR_UNSUPPORTED_PROTOCOL'
}
```
But it works great with v0.10.7
Thanks.
| True | main | 1
120,945 | 10,143,064,636 | IssuesEvent | 2019-08-04 08:32:12 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | [Quest] Rolling with my Homies - ID 14071 - (issue 2) Blocks goblin start chain - Kezan | Confirmed Fixed Confirmed Fixed in Dev Priority-High Regression Starting Zone Test in progress | https://www.wowhead.com/quest=14071/rolling-with-my-homies
I can't do the quest without the Hot Rod;
if this quest gets resolved by fixing the Hot Rod issue, just label it as invalid. | 1.0 | non_main | 0
5,757 | 30,514,017,814 | IssuesEvent | 2023-07-19 00:16:40 | cncf/glossary | https://api.github.com/repos/cncf/glossary | closed | hugo v0.115.2 emits warnings when building website | maintainers | After the PR #2222 gets merged, hugo-extended v0.115.2 will be used for building the website.
When I try building the website locally with the suggestions introduced in #2222, I see some warnings:
```
WARN config: languages.es.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.es.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.bn.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.bn.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.de.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.de.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.ur.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.ur.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.hi.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.hi.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.ko.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.ko.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.it.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.it.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.zh-cn.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.zh-cn.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.en.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.en.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.pt-br.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.pt-br.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
```
This is because, since Hugo 0.112.0, adding custom params to the top-level language config is deprecated; the guidance is to move all such params below `[params]`, like:
```
[languages]
[languages.sv]
title = "Min blogg"
languageCode = "sv"
[languages.sv.params]
color = "blue"
```
So the suggested solution is to move each `description` entry under `[languages.xx.params]`.
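A sketch of the change for a single language entry (the description value is a placeholder, not the site's actual string):

```toml
# Before: triggers the deprecation warning
[languages.en]
description = "An example site description"

# After: custom params live under [languages.en.params]
[languages.en]
[languages.en.params]
description = "An example site description"
```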
I will open a PR that resolves this. | True | hugo v0.115.2 emits warnings when building website - After the PR #2222 gets merged, hugo-extended v0.115.2 will be used for building the website.
When I try building the website locally with the suggestions introduced in #2222, I see some warnings:
```
WARN config: languages.es.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.es.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.bn.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.bn.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.de.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.de.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.ur.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.ur.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.hi.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.hi.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.ko.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.ko.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.it.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.it.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.zh-cn.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.zh-cn.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.en.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.en.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
WARN config: languages.pt-br.description: custom params on the language top level is deprecated and will be removed in a future release. Put the value below [languages.pt-br.params]. See https://gohugo.io/content-management/multilingual/#changes-in-hugo-01120
```
This is because, since Hugo 0.112.0, adding custom params to the top-level language config is deprecated, and the guidance is to move all of these below `[languages.xx.params]`, like:
```
[languages]
[languages.sv]
title = "Min blogg"
languageCode = "sv"
[languages.en.params]
color = "blue"
```
So the suggested solution is to move each `description` entity to under `[languages.xx.params]`.
I will open a PR that resolves this. | main | hugo emits warnings when building website after the pr gets merged hugo extended will be used for building the website when i try building the website locally with the suggestions introduced in i see some warnings warn config languages es description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages bn description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages de description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages ur description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages hi description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages ko description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages it description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages zh cn description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages en description custom params on the language top level is deprecated and will be removed in a future release put the value below see warn config languages pt br description custom params on the language top level is deprecated and will be removed in a future release put the value below see this is because since hugo adding custom params to the top level language config is deprecated and they guide to add all of these below like title min blogg languagecode sv color blue 
so the suggested solution is to move each description entity to under i will open a pr that resolves this | 1 |
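The migration the issue above suggests can be sketched as a TOML fragment like the following (the `sv` language and its `title`/`languageCode` values are the example from the issue; the `description` value is a placeholder):

```toml
[languages]
  [languages.sv]
    title = "Min blogg"
    languageCode = "sv"
    # Custom params such as `description` move below the language's params table:
    [languages.sv.params]
      description = "placeholder description"
```

With every custom param below `[languages.xx.params]`, Hugo 0.112.0+ builds without emitting the deprecation warnings quoted above.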
1,342 | 5,721,491,667 | IssuesEvent | 2017-04-20 06:48:17 | tomchentw/react-google-maps | https://api.github.com/repos/tomchentw/react-google-maps | closed | access to the map instance? | CALL_FOR_MAINTAINERS | I want access to the map instance, specifically to call setCenter and setZoom in response to an autocomplete input selection. Is there any way to access it?
I know there are props for zoom/center but I just want to set them once in response to the event and then not manage them.
| True | access to the map instance? - I want access to the map instance, specifically to call setCenter and setZoom in response to an autocomplete input selection. Is there any way to access it?
I know there are props for zoom/center but I just want to set them once in response to the event and then not manage them.
| main | access to the map instance i want access to the map instance specifically to call setcenter and setzoom in response to an autocomplete input selection is there any way to access it i know there are props for zoom center but i just want to set them once in response to the event and then not manage them | 1 |
174,653 | 21,300,326,499 | IssuesEvent | 2022-04-15 01:37:38 | LaudateCorpus1/vscode-main | https://api.github.com/repos/LaudateCorpus1/vscode-main | opened | CVE-2021-43138 (High) detected in async-2.6.3.tgz | security vulnerability | ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/async</p>
<p>
Dependency Hierarchy:
- gulp-shell-0.6.5.tgz (Root Library)
- :x: **async-2.6.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>dev1</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (gulp-shell): 0.7.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-43138 (High) detected in async-2.6.3.tgz - ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/async</p>
<p>
Dependency Hierarchy:
- gulp-shell-0.6.5.tgz (Root Library)
- :x: **async-2.6.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>dev1</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (gulp-shell): 0.7.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in async tgz cve high severity vulnerability vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules async dependency hierarchy gulp shell tgz root library x async tgz vulnerable library found in base branch vulnerability details a vulnerability exists in async through fixed in which could let a malicious user obtain privileges via the mapvalues method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async direct dependency fix resolution gulp shell step up your open source security game with whitesource | 0 |
2,492 | 8,650,852,474 | IssuesEvent | 2018-11-27 00:17:14 | Microsoft/DirectXTex | https://api.github.com/repos/Microsoft/DirectXTex | closed | Publish a NuGet package with DX12 support for Win32 desktop | maintainence | The NuGet package ``DirectXTex_Uwp`` includes DirectX 12 support side-by-side with DirectX 11, but the ``directxtex_desktop_2015`` only supports DirectX 11 for Windows 7 support.
I should publish a ``DirectXTex_desktop_win10`` package that includes the DirectX 12 support for desktop apps that require Windows 10. | True | Publish a NuGet package with DX12 support for Win32 desktop - The NuGet package ``DirectXTex_Uwp`` includes DirectX 12 support side-by-side with DirectX 11, but the ``directxtex_desktop_2015`` only supports DirectX 11 for Windows 7 support.
I should publish a ``DirectXTex_desktop_win10`` package that includes the DirectX 12 support for desktop apps that require Windows 10. | main | publish a nuget package with support for desktop the nuget package directxtex uwp includes directx support side by side with directx but the directxtex desktop only supports directx for windows support i should publish a directxtex desktop package that includes the directx support for desktop apps that require windows | 1 |
2,461 | 8,639,900,071 | IssuesEvent | 2018-11-23 22:31:11 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | Glitchy output on RaspPi 3 and 1 without `-c 1` | V1 related (not maintained) | When I do `sudo ./rpitx -m VFO -f 1000` is it right to expect basically a square wave at 1000 kHz?
Instead, I get really sporadic pulses, as shown in [this video](https://twitter.com/natevw/status/763233882841817088).
Is this some sort of Pulse-Density Modulation trick or similar, or is something wrong? Mostly just trying to wrap my head around this. I am able to hear the FM sample via a broadcast receiver but when I lower the frequencies to what my 100MHz scope is supposed to be able to pick up, I don't see what I would expect.
| True | Glitchy output on RaspPi 3 and 1 without `-c 1` - When I do `sudo ./rpitx -m VFO -f 1000` is it right to expect basically a square wave at 1000 kHz?
Instead, I get really sporadic pulses, as shown in [this video](https://twitter.com/natevw/status/763233882841817088).
Is this some sort of Pulse-Density Modulation trick or similar, or is something wrong? Mostly just trying to wrap my head around this. I am able to hear the FM sample via a broadcast receiver but when I lower the frequencies to what my 100MHz scope is supposed to be able to pick up, I don't see what I would expect.
| main | glitchy output on rasppi and without c when i do sudo rpitx m vfo f is it right to expect basically a square wave at khz instead i get really sporadic pulses as shown in is this some sort of pulse density modulation trick or similar or is something wrong mostly just trying to wrap my head around this i am able to hear the fm sample via a broadcast receiver but when i lower the frequencies to what my scope is supposed to be able to pick up i don t see what i would expect | 1 |
3,392 | 13,160,801,457 | IssuesEvent | 2020-08-10 18:17:45 | RapidField/solid-instruments | https://api.github.com/repos/RapidField/solid-instruments | closed | Refactor awaited APIs. | Category-Maintenance Source-Maintainer Stage-4-Complete Subcategory-Conventions Tag-AddReleaseNote Verdict-Released Version-1.0.26 WindowForDelivery-2021-Q1 | # Maintenance Request
This issue represents a request for documentation, testing, refactoring or other non-functional changes.
## Overview
Several **Solid Instruments** APIs use the `await` keyword unnecessarily. [Those APIs](https://github.com/RapidField/solid-instruments/search?q=await&unscoped_q=await) should be refactored so avoid use of `await`.
## Statement of work
The following list describes the work to be done.
1. Find and refactor the offending APIs.
## Revision control plan
**Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue.
- `master` is the pull request target for
- `release/v1.0.26-preview1`, which is the pull request target for
- `develop`, which is the pull request target for
- `maintenance/00288-refactor-awaits`, which is the pull request target for contributing user branches, which should be named using the pattern
- `user/{username}/00288-refactor-awaits` | True | Refactor awaited APIs. - # Maintenance Request
This issue represents a request for documentation, testing, refactoring or other non-functional changes.
## Overview
Several **Solid Instruments** APIs use the `await` keyword unnecessarily. [Those APIs](https://github.com/RapidField/solid-instruments/search?q=await&unscoped_q=await) should be refactored so avoid use of `await`.
## Statement of work
The following list describes the work to be done.
1. Find and refactor the offending APIs.
## Revision control plan
**Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue.
- `master` is the pull request target for
- `release/v1.0.26-preview1`, which is the pull request target for
- `develop`, which is the pull request target for
- `maintenance/00288-refactor-awaits`, which is the pull request target for contributing user branches, which should be named using the pattern
- `user/{username}/00288-refactor-awaits` | main | refactor awaited apis maintenance request this issue represents a request for documentation testing refactoring or other non functional changes overview several solid instruments apis use the await keyword unnecessarily should be refactored so avoid use of await statement of work the following list describes the work to be done find and refactor the offending apis revision control plan solid instruments uses the individual contributors should follow the branching plan below when working on this issue master is the pull request target for release which is the pull request target for develop which is the pull request target for maintenance refactor awaits which is the pull request target for contributing user branches which should be named using the pattern user username refactor awaits | 1 |
712,378 | 24,493,412,038 | IssuesEvent | 2022-10-10 06:09:43 | pingcap/ossinsight | https://api.github.com/repos/pingcap/ossinsight | opened | Remove some old tables | priority/p1 | Use tbl:collections instead:
* css_framework_repos
* osdb_repos
* programming_language_repos
* static_site_generator_repos
* web_framework_repos
| 1.0 | Remove some old tables - Use tbl:collections instead:
* css_framework_repos
* osdb_repos
* programming_language_repos
* static_site_generator_repos
* web_framework_repos
| non_main | remove some old tables use tbl collections instead css framework repos osdb repos programming language repos static site generator repos web framework repos | 0 |
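The removal asked for above can be sketched as follows — a hypothetical cleanup script that only generates the SQL; the table names are taken from the issue's list, and actually executing the statements against the database is deliberately left out:

```python
# Legacy per-category tables superseded by the collections mechanism
# (tbl:collections), per the issue above.
LEGACY_TABLES = [
    "css_framework_repos",
    "osdb_repos",
    "programming_language_repos",
    "static_site_generator_repos",
    "web_framework_repos",
]


def drop_statements(tables):
    """Build one idempotent DROP statement per legacy table."""
    return [f"DROP TABLE IF EXISTS {name};" for name in tables]


for stmt in drop_statements(LEGACY_TABLES):
    print(stmt)
```

Running the generated statements through the project's usual migration tooling, rather than ad hoc, would keep the change reviewable.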
204,612 | 15,937,666,358 | IssuesEvent | 2021-04-14 12:44:38 | cornellius-gp/gpytorch | https://api.github.com/repos/cornellius-gp/gpytorch | closed | [Docs] Settings defaults mismatched in source code and documentation | documentation | # 📚 Documentation/Examples
Separating out from #1526
The defaults shown in the settings don't consistently match the source code.
### For example:
settings.memory_efficient is [True in the documentation](https://docs.gpytorch.ai/en/stable/settings.html#gpytorch.settings.memory_efficient), but [False in the corresponding source code](https://docs.gpytorch.ai/en/stable/_modules/gpytorch/settings.html#memory_efficient).
| 1.0 | [Docs] Settings defaults mismatched in source code and documentation - # 📚 Documentation/Examples
Separating out from #1526
The defaults shown in the settings don't consistently match the source code.
### For example:
settings.memory_efficient is [True in the documentation](https://docs.gpytorch.ai/en/stable/settings.html#gpytorch.settings.memory_efficient), but [False in the corresponding source code](https://docs.gpytorch.ai/en/stable/_modules/gpytorch/settings.html#memory_efficient).
| non_main | settings defaults mismatched in source code and documentation 📚 documentation examples separating out from the defaults shown in the settings don t consistently match the source code for example settings memory efficient is but | 0 |
61,305 | 12,165,860,345 | IssuesEvent | 2020-04-27 08:17:49 | ST-Apps/CS-ParallelRoadTool | https://api.github.com/repos/ST-Apps/CS-ParallelRoadTool | opened | When anarchy and update mode are on, keeping mouse pressed will update segments multiple times until released | bug code dev | Video proof: https://www.youtube.com/watch?v=Ned-yMPm-uM
Possible solution is to intercept mouse events to toggle the tool on/off and force it to run just once per click. | 1.0 | When anarchy and update mode are on, keeping mouse pressed will update segments multiple times until released - Video proof: https://www.youtube.com/watch?v=Ned-yMPm-uM
Possible solution is to intercept mouse events to toggle the tool on/off and force it to run just once per click. | non_main | when anarchy and update mode are on keeping mouse pressed will update segments multiple times until released video proof possible solution is to intercept mouse events to toggle the tool on off and force it to run just once per click | 0 |
17,165 | 3,595,399,172 | IssuesEvent | 2016-02-02 06:17:22 | FedericoElles/KazokuNabi | https://api.github.com/repos/FedericoElles/KazokuNabi | closed | Add optional Email Field in contact form to enable reply | enhancement test | Save E-Mail once added locally.
Add check box "Antwort erwünscht" (German: "reply requested") | 1.0 | Add optional Email Field in contact form to enable reply - Save E-Mail once added locally.
Add check box "Antwort erwünscht" (German: "reply requested") | non_main | add optional email field in contact form to enable reply save e mail once added locally add check box antwort erwünscht | 0 |
5,364 | 26,987,563,646 | IssuesEvent | 2023-02-09 17:12:40 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | `sam local start-api` does not handle 403 responses properly | stage/needs-investigation maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description
Briefly describe the bug you are facing.
`sam local start-api` does not handle 403 responses properly. Function processing completes, but the function times out without returning a response. It works fine after deploying to Lambda.
### Steps to reproduce
``` go
package main
import (
"github.com/aws/aws-lambda-go/events"
"github.com/aws/aws-lambda-go/lambda"
)
func main() {
lambda.Start(Test403Error)
}
func Test403Error(e events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
return events.APIGatewayProxyResponse{
StatusCode: 403,
}, nil
}
```
### Observed result
```
Fetching lambci/lambda:go1.x Docker container image......
2019-06-19 13:11:06 Mounting /home/ayush/projects/go/src/myProject/auth as /var/task:ro,delegated inside runtime container
2019-06-19 13:11:07 http://localhost:None "POST /v1.35/containers/create HTTP/1.1" 201 201
2019-06-19 13:11:07 http://localhost:None "GET /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/json HTTP/1.1" 200 None
2019-06-19 13:11:07 http://localhost:None "GET /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/json HTTP/1.1" 200 None
2019-06-19 13:11:08 http://localhost:None "POST /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/start HTTP/1.1" 204 0
2019-06-19 13:11:08 Starting a timer for 20 seconds for function 'SendLoginOTPV1'
2019-06-19 13:11:09 http://localhost:None "GET /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/json HTTP/1.1" 200 None
2019-06-19 13:11:09 http://localhost:None "POST /containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/attach?stream=1&stdin=0&logs=1&stderr=1&stdout=1 HTTP/1.1" 101 0
START RequestId: ab360452-44d8-1629-ac61-4491964d4592 Version: $LATEST
END RequestId: ab360452-44d8-1629-ac61-4491964d4592
REPORT RequestId: ab360452-44d8-1629-ac61-4491964d4592 Duration: 1.07 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 7 MB
2019-06-19 13:11:28 Function 'SendLoginOTPV1' timed out after 20 seconds
2019-06-19 13:11:28 http://localhost:None "GET /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/json HTTP/1.1" 200 None
2019-06-19 13:11:28 http://localhost:None "DELETE /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8?force=True&link=False&v=False HTTP/1.1" 204 0
```
### Expected result
API should not time out and should properly return a response.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Ubuntu 16.04
2. `sam --version`: SAM CLI, version 0.17.0
`Add --debug flag to command you are running`:
sam local start-api --debug --docker-network="host"
**EDIT:** It works once in a while, but fails more often than not. | True | `sam local start-api` does not handle 403 responses properly - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description
Briefly describe the bug you are facing.
`sam local start-api` does not handle 403 responses properly. Function processing completes, but the function times out without returning a response. It works fine after deploying to Lambda.
### Steps to reproduce
``` go
package main
import (
"github.com/aws/aws-lambda-go/events"
"github.com/aws/aws-lambda-go/lambda"
)
func main() {
lambda.Start(Test403Error)
}
func Test403Error(e events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
return events.APIGatewayProxyResponse{
StatusCode: 403,
}, nil
}
```
### Observed result
```
Fetching lambci/lambda:go1.x Docker container image......
2019-06-19 13:11:06 Mounting /home/ayush/projects/go/src/myProject/auth as /var/task:ro,delegated inside runtime container
2019-06-19 13:11:07 http://localhost:None "POST /v1.35/containers/create HTTP/1.1" 201 201
2019-06-19 13:11:07 http://localhost:None "GET /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/json HTTP/1.1" 200 None
2019-06-19 13:11:07 http://localhost:None "GET /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/json HTTP/1.1" 200 None
2019-06-19 13:11:08 http://localhost:None "POST /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/start HTTP/1.1" 204 0
2019-06-19 13:11:08 Starting a timer for 20 seconds for function 'SendLoginOTPV1'
2019-06-19 13:11:09 http://localhost:None "GET /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/json HTTP/1.1" 200 None
2019-06-19 13:11:09 http://localhost:None "POST /containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/attach?stream=1&stdin=0&logs=1&stderr=1&stdout=1 HTTP/1.1" 101 0
START RequestId: ab360452-44d8-1629-ac61-4491964d4592 Version: $LATEST
END RequestId: ab360452-44d8-1629-ac61-4491964d4592
REPORT RequestId: ab360452-44d8-1629-ac61-4491964d4592 Duration: 1.07 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 7 MB
2019-06-19 13:11:28 Function 'SendLoginOTPV1' timed out after 20 seconds
2019-06-19 13:11:28 http://localhost:None "GET /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8/json HTTP/1.1" 200 None
2019-06-19 13:11:28 http://localhost:None "DELETE /v1.35/containers/8ca73a9be7c2481cf6a9c21477b62738a5d7aeb793bfe1957f4cc5d5b67e45d8?force=True&link=False&v=False HTTP/1.1" 204 0
```
### Expected result
API should not time out and should properly return a response.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Ubuntu 16.04
2. `sam --version`: SAM CLI, version 0.17.0
`Add --debug flag to command you are running`:
sam local start-api --debug --docker-network="host"
**EDIT:** It works once in a while, but fails more often than not. | main | sam local start api does not handle responses properly make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description briefly describe the bug you are facing sam local start api does not handle responses properly function processing is completed but function times out without returning a response works fine after deploying to lambda steps to reproduce go package main import github com aws aws lambda go events github com aws aws lambda go lambda func main lambda start func e events apigatewayproxyrequest events apigatewayproxyresponse error return events apigatewayproxyresponse statuscode nil observed result fetching lambci lambda x docker container image mounting home ayush projects go src myproject auth as var task ro delegated inside runtime container post containers create http get containers json http none get containers json http none post containers start http starting a timer for seconds for function get containers json http none post containers attach stream stdin logs stderr stdout http start requestid version latest end requestid report requestid duration ms billed duration ms memory size mb max memory used mb function timed out after seconds get containers json http none delete containers force true link false v false http expected result api should not time out and should properly return a response additional environment details ex windows mac amazon linux etc os ubuntu sam version sam cli version add debug flag to command you are running sam local start api debug docker network host edit it works once in a while but fails more often than not | 1 |
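For context on what the emulator has to handle here: a Go handler returning `events.APIGatewayProxyResponse{StatusCode: 403}` marshals its result back to the runtime as a small JSON document. The sketch below approximates that shape (field names follow the Lambda proxy-integration format; the exact serialization of the unset fields is an assumption):

```python
import json

# Approximate payload for APIGatewayProxyResponse{StatusCode: 403}:
# unset header maps serialize as null and the body as an empty string.
response = {
    "statusCode": 403,
    "headers": None,
    "multiValueHeaders": None,
    "body": "",
}

payload = json.dumps(response)
print(payload)

# The local API emulator must turn this back into an HTTP 403 reply
# instead of waiting for more output and hitting the function timeout.
decoded = json.loads(payload)
```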
3,656 | 14,938,541,591 | IssuesEvent | 2021-01-25 15:53:41 | rstudio/bslib | https://api.github.com/repos/rstudio/bslib | closed | Improvements to precompilation | maintainence | - [ ] Only use precompiled if LibSass version is same
- [x] Skip precompiled tests on CRAN
- [x] Figure out why `Rscript tools/yarn_install.R` isn't producing pre-compiled results (it does if you run interactively)
- [ ] Set up a workflow to auto push pre-compiled results back to repo
- [x] Devmode option to turn off precompilation | True | Improvements to precompilation - - [ ] Only use precompiled if LibSass version is same
- [x] Skip precompiled tests on CRAN
- [x] Figure out why `Rscript tools/yarn_install.R` isn't producing pre-compiled results (it does if you run interactively)
- [ ] Set up a workflow to auto push pre-compiled results back to repo
- [x] Devmode option to turn off precompilation | main | improvements to precompilation only use precompiled if libsass version is same skip precompiled tests on cran figure out why rscript tools yarn install r isn t producing pre compiled results it does if you run interactively set up a workflow to auto push pre compiled results back to repo devmode option to turn off precompilation | 1 |
719,566 | 24,764,147,135 | IssuesEvent | 2022-10-22 09:34:06 | bounswe/bounswe2022group8 | https://api.github.com/repos/bounswe/bounswe2022group8 | closed | BE-3: PostgreSQL Integration to the application | Effort: Medium Priority: High Status: completed coding | ### What's up?
In our previous Backend team meeting, we decided to use PostgreSQL for our application. As one of the initial configuration steps for the application, we will be doing research on PostgreSQL and finding a way to set up the integration to the application.
### To Do
- Do the initial configurations for the app, clone the repo to your local as mentioned in #179
- Research PostgreSQL
- Search for a way to integrate with the Django app
- If there isn't any straightforward integration tool, create and implement a DB class for more readable and writable code.
### Deadline
20.10.2022 @18.00
### Reviewers
_Please review until 21.10.2022_
@BElifb @dundarmete | 1.0 | BE-3: PostgreSQL Integration to the application - ### What's up?
In our previous Backend team meeting, we decided to use PostgreSQL for our application. As one of the initial configuration steps for the application, we will be doing research on PostgreSQL and finding a way to set up the integration to the application.
### To Do
- Do the initial configurations for the app, clone the repo to your local as mentioned in #179
- Research PostgreSQL
- Search for a way to integrate with the Django app
- If there isn't any straightforward integration tool, create and implement a DB class for more readable and writable code.
### Deadline
20.10.2022 @18.00
### Reviewers
_Please review until 21.10.2022_
@BElifb @dundarmete | non_main | be postgresql integration to the application what s up in our previous backend team meeting we decided to use postfresql for our application as one of the initial configuration steps for the application we will be doing research on postgresql and find a way to set up the integration to the application to do do the initial configurations for the app clone the repo to your local as mentioned in research postgresql search for a way to integrate with the django app if there isn t any straightforward integration tool create and implement a db class for more readable and writable code deadline reviewers please review until belifb dundarmete | 0 |
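For the integration step above, the conventional starting point in Django is a `DATABASES` entry pointing at the PostgreSQL backend. A hedged sketch — every name and credential below is a placeholder, not the team's real configuration:

```python
# Hypothetical Django settings fragment for PostgreSQL.
# Requires the psycopg2 (or psycopg2-binary) driver to be installed.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app_db",          # placeholder database name
        "USER": "app_user",        # placeholder role
        "PASSWORD": "change-me",   # placeholder secret
        "HOST": "localhost",
        "PORT": "5432",
    }
}

print(DATABASES["default"]["ENGINE"])
```

Whether to then wrap access in a dedicated DB class (as the issue considers) is a code-organization choice layered on top of Django's ORM, which already abstracts the engine.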
693 | 4,238,520,287 | IssuesEvent | 2016-07-06 04:21:04 | Particular/PlatformInstaller | https://api.github.com/repos/Particular/PlatformInstaller | opened | Platform Installer appears hung when installing the NServiceBus prerequisites. | Size: S Tag: Maintainer Prio Type: Bug | On a fresh Windows 10 machine I used the PI to install the NSB prerequisites.
After only selecting the prerequisites and pressing Install, the app locked up for the duration of the prerequisite install and showed no progress activity. This took between 1 and 2 minutes.
Repeated the test on a fresh Windows 2012 R2 VM on Azure. Same result.
When this first happened I was about to kill the task as I thought it was dead.
25,034 | 4,128,282,467 | IssuesEvent | 2016-06-10 05:05:28 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | header issues with mobile | area/mobile area/ui kind/bug status/resolved status/to-test | **Rancher Version:** master 5-24
- [x] Can't select environment - drop-down is behind drop-down menu


- [ ] .. menu is not keyboard accessible. Can't access the main menu header in mobile using keyboard.

- [x] .. menu is cut off. Used to have three dots and a caret.

- [x] User drop-down looks like a submenu of API

- [x] Hard to tell which menu item is actually selected because select & deselect colors are so close. In Firefox you can't tell at all.

- [x] When you hover over menu item, you can't read text you selected.

308,833 | 23,269,493,609 | IssuesEvent | 2022-08-04 21:02:54 | jupekett/valheim-server-discord-bot | https://api.github.com/repos/jupekett/valheim-server-discord-bot | closed | Improve "getting started" documentation | documentation good first issue | README isn't currently bulletproof. Check the steps on a fresh installation and augment README.
- How to create the bot application: link to tutorial
195,932 | 22,362,813,814 | IssuesEvent | 2022-06-15 22:39:51 | snowflakedb/snowflake-hive-metastore-connector | https://api.github.com/repos/snowflakedb/snowflake-hive-metastore-connector | closed | CVE-2020-14060 (High) detected in jackson-databind-2.6.5.jar | security vulnerability | ## CVE-2020-14060 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.5.jar</b></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.5/jackson-databind-2.6.5.jar</p>
<p>
Dependency Hierarchy:
- hive-metastore-2.3.5.jar (Root Library)
- hive-serde-2.3.5.jar
- hive-common-2.3.5.jar
- :x: **jackson-databind-2.6.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowflakedb/snowflake-hive-metastore-connector/commit/37f5b0ac91898ef82cc1bf4610b729970f6eed58">37f5b0ac91898ef82cc1bf4610b729970f6eed58</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill).
<p>Publish Date: 2020-06-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060>CVE-2020-14060</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060</a></p>
<p>Release Date: 2020-06-14</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.6.7.4</p>
<p>Direct dependency fix Resolution (org.apache.hive:hive-metastore): 2.3.6</p>
</p>
</details>
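For readers applying the direct-dependency fix by hand, the corresponding `pom.xml` change would look roughly like this (the coordinates and version come from the fix resolution above; the rest of the POM is omitted, and this is a sketch rather than the bot's generated remediation):

```xml
<!-- Sketch only: bump the direct hive-metastore dependency to the fixed release,
     which per the suggested fix above pulls a jackson-databind past CVE-2020-14060. -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-metastore</artifactId>
  <version>2.3.6</version>
</dependency>
```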
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.hive","packageName":"hive-metastore","packageVersion":"2.3.5","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.hive:hive-metastore:2.3.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.6","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14060","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-14060 (High) detected in jackson-databind-2.6.5.jar - ## CVE-2020-14060 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.5/jackson-databind-2.6.5.jar</p>
<p>
Dependency Hierarchy:
- hive-metastore-2.3.5.jar (Root Library)
- hive-serde-2.3.5.jar
- hive-common-2.3.5.jar
- :x: **jackson-databind-2.6.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowflakedb/snowflake-hive-metastore-connector/commit/37f5b0ac91898ef82cc1bf4610b729970f6eed58">37f5b0ac91898ef82cc1bf4610b729970f6eed58</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill).
<p>Publish Date: 2020-06-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060>CVE-2020-14060</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060</a></p>
<p>Release Date: 2020-06-14</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.6.7.4</p>
<p>Direct dependency fix Resolution (org.apache.hive:hive-metastore): 2.3.6</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.hive","packageName":"hive-metastore","packageVersion":"2.3.5","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.hive:hive-metastore:2.3.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.6","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14060","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy hive metastore jar root library hive serde jar hive common jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to oadd org apache xalan lib sql jndiconnectionpool aka apache drill publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact 
high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution org apache hive hive metastore rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache hive hive metastore isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to oadd org apache xalan lib sql jndiconnectionpool aka apache drill vulnerabilityurl | 0 |
52,308 | 6,609,227,778 | IssuesEvent | 2017-09-19 13:55:14 | owncloud/client | https://api.github.com/repos/owncloud/client | closed | [2.0RC1] Greyed "Add folder to Synchronize" button? | Design & UX Discussion | 
- The "Add Folder to Synchronize" button is never active. Is it not used any more? Before I was able to sync a folder from my account to a different place on my filesystem. I can still add my account twice. | 1.0 | [2.0RC1] Greyed "Add folder to Synchronize" button? - 
- The "Add Folder to Synchronize" button is never active. Is it not used any more? Before I was able to sync a folder from my account to a different place on my filesystem. I can still add my account twice. | non_main | greyed add folder to synchronize button the add folder to synchronize button is never active is it not used any more before i was able to sync a folder from my account to a different place on my filesystem i can still add my account twice | 0 |
3,336 | 12,947,835,787 | IssuesEvent | 2020-07-19 01:24:55 | Kashdeya/Tiny-Progressions | https://api.github.com/repos/Kashdeya/Tiny-Progressions | closed | Big Pouch Voids Contents 3.2.31 | Version not Maintainted | Playing Stone Block 2 mod pack on the FTB launcher, which has version 3.2.31 of Tiny Progressions. I have found that going into creative and then back to survival voids the contents of the big pouch if it is in your inventory at the time. I reproduced it with an empty one and an oak sapling, so I don't believe contents matter.
2,362 | 8,415,681,657 | IssuesEvent | 2018-10-13 17:09:31 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Bower module throws KeyError for packages specified via git endpoint | affects_2.5 bug module needs_maintainer support:community traceback | From @im-denisenko on 2015-08-04T15:13:36Z
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
bower module
##### ANSIBLE VERSION
devel
##### SUMMARY
Hello.
I have following dependency in my `bower.json`:
``` json
{
"dependencies": {
"css3pie": "git://github.com/PepijnSenders/css3pie.git"
}
}
```
When I try to install it, it always fails:
```
TASK: [fpm | install bower packages] ******************************************
<srv-01> REMOTE_MODULE bower path="/home/realty/master"
failed: [srv-01] => {"failed": true, "parsed": false}
BECOME-SUCCESS-ydpbtaykmrsparnntxwwkzrpvjjwamay
Traceback (most recent call last):
File "/tmp/ansible-tmp-1438699847.74-256940805164596/bower", line 1792, in <module>
main()
File "/tmp/ansible-tmp-1438699847.74-256940805164596/bower", line 168, in main
installed, missing, outdated = bower.list()
File "/tmp/ansible-tmp-1438699847.74-256940805164596/bower", line 121, in list
elif data['dependencies'][dep]['pkgMeta']['version'] != data['dependencies'][dep]['update']['latest']:
KeyError: 'version'
```
Output of `bower list --json` for this package:
``` json
"css3pie": {
"endpoint": {
"name": "css3pie",
"source": "git://github.com/PepijnSenders/css3pie.git",
"target": "*"
},
"canonicalDir": "/home/realty/master/www/bower_components/css3pie",
"pkgMeta": {
"name": "css3pie",
"homepage": "https://github.com/PepijnSenders/css3pie",
"authors": [
"Pepijn Senders <pepijn@bekokstooft.nl>"
],
"description": "Bower package for css3pie http://css3pie.com/",
"keywords": [
"pie",
"css3",
"PIE",
"css",
"css3pie"
],
"license": "MIT",
"ignore": [
"**/.*",
"node_modules",
"bower_components",
"test",
"tests"
],
"_release": "b5e68ce841",
"_resolution": {
"type": "branch",
"branch": "master",
"commit": "b5e68ce8414bc0d7b451922901e093b78df32b11"
},
"_source": "git://github.com/PepijnSenders/css3pie.git",
"_target": "*",
"_originalSource": "git://github.com/PepijnSenders/css3pie.git"
},
"dependencies": {},
"nrDependants": 1,
"versions": []
}
```
There are no `css3pie\pkgMeta\version` or `css3pie\update` fields.
I guess [line 121](https://github.com/ansible/ansible-modules-extras/blob/devel/packaging/language/bower.py#L121) should check presence of these keys before trying to access them.
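A sketch of the guarded lookup being suggested (the function name is mine, for illustration; `data` is the parsed `bower list --json` output shown above, and packages installed from a git endpoint may lack both `pkgMeta['version']` and the `update` entry):

```python
# Sketch of a guarded version of the comparison that raises KeyError above.
def is_outdated(data, dep):
    info = data['dependencies'][dep]
    version = info.get('pkgMeta', {}).get('version')  # absent for git-endpoint packages
    latest = info.get('update', {}).get('latest')     # absent when bower has no registry info
    if version is None or latest is None:
        return False  # nothing to compare, so don't report the package as outdated
    return version != latest
```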
Copied from original issue: ansible/ansible-modules-extras#809
286,523 | 24,759,114,239 | IssuesEvent | 2022-10-21 21:06:19 | verilator/verilator | https://api.github.com/repos/verilator/verilator | closed | Better alternative to vcddiff | area: tests resolution: no fix needed status: discussion type: maintenance area: tracing | vcddiff has a couple of annoying issues:
- Arbitrary limits on signal sizes due to use of static buffers
- Cannot cope with some non-trivially different but isomorphic VCD files. My fix to this is to feed both inputs through vcd2fst then fst2vcd, which then often confirms the files are indeed identical
Rather than trying to fix these, it should not be hard to write a Python alternative that is good enough for the limited size traces used in the test-suite. We need to assert that:
- The signal declarations (header) are isomorphic
- At each time point in the files, the values are identical
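A naive sketch of the second check (it handles only scalar and vector value changes plus `#` timestamps, assumes both files use the same identifier codes, and leaves header isomorphism checking out entirely):

```python
# Naive sketch: compare two small VCD dumps value-by-value at each timestep.
def parse_vcd_values(lines):
    """Map each timestep to the signal values in effect at that step."""
    time, values, snapshots = 0, {}, {}
    for raw in lines:
        line = raw.strip()
        if not line:
            continue
        if line.startswith('#'):            # timestep marker, e.g. "#10"
            snapshots[time] = dict(values)  # freeze the previous step's values
            time = int(line[1:])
        elif line[0] in '01xzXZ':           # scalar change, e.g. "1!"
            values[line[1:]] = line[0].lower()
        elif line[0] in 'bB':               # vector change, e.g. "b1010 !"
            value, ident = line.split()
            values[ident] = value[1:]
        # $scope/$var/$dumpvars header lines are ignored in this sketch
    snapshots[time] = dict(values)
    return snapshots

def vcds_match(a_lines, b_lines):
    """True when both dumps hold identical values at every timestep."""
    return parse_vcd_values(a_lines) == parse_vcd_values(b_lines)
```

Because each timestep is snapshotted into a dict, two dumps that order their value changes differently within a timestep still compare equal.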
None of these sound particularly hard.
65,691 | 27,195,507,494 | IssuesEvent | 2023-02-20 04:37:35 | oxstreet/oxstreet-status-page | https://api.github.com/repos/oxstreet/oxstreet-status-page | closed | 🛑 Product service is down | status product-service | In [`8f3e861`](https://github.com/oxstreet/oxstreet-status-page/commit/8f3e861332f967d758828d4d26439c599961c697
), Product service (https://api.oxstreet.com/products/v1/healthcheck) was **down**:
- HTTP code: 502
- Response time: 181 ms
3,634 | 14,686,155,096 | IssuesEvent | 2021-01-01 13:29:54 | coq-community/manifesto | https://api.github.com/repos/coq-community/manifesto | opened | Proposal to move projects Goedel and Pocklington to coq-community | coq-library maintainer-wanted move-project | **Project names:** Goedel and Pocklington
**Initial author(s):** Russell O'Connor (Goedel), Olga Caprotti and Martijn Oostdijk (Pocklington)
**Current URL:** https://github.com/coq-contribs/goedel https://github.com/coq-contribs/pocklington
**Kind:** pure Coq libraries
**License:** [CC-0/public domain](http://r6.ca/Goedel/goedel1.html) (Goedel) LGPL-2.1-or-later (Pocklington)
**Description:** A constructive proof of the Gödel-Rosser incompleteness theorem in Coq, and its supporting library of primality certification. As pointed out by @Casteran, the incompleteness proof may have pedagogical uses and contains a formalization of Peano arithmetic that may be useful elsewhere.
**Status:** unmaintained
**New maintainer:** looking for a volunteer
The last version of Coq known to work is 8.10:
- https://github.com/coq-contribs/pocklington/tree/v8.9
- https://github.com/coq-contribs/goedel/tree/v8.9
584,913 | 17,466,854,554 | IssuesEvent | 2021-08-06 18:10:12 | apcountryman/picolibrary-microchip-megaavr | https://api.github.com/repos/apcountryman/picolibrary-microchip-megaavr | closed | Undefine avr-libc macros that clash with library features | priority-normal status-in_development type-refactoring | Undefine avr-libc macros that clash with library features. Including `avr/io.h` will expose clashes.
- [x] include/picolibrary/microchip/megaavr.h
- [ ] include/picolibrary/microchip/megaavr/asynchronous_serial.h
- [ ] include/picolibrary/microchip/megaavr/gpio.h
- [ ] include/picolibrary/microchip/megaavr/i2c.h
- [ ] include/picolibrary/microchip/megaavr/multiplexed_signals.h
- [ ] include/picolibrary/microchip/megaavr/multiplexed_signals/atmega2560.h
- [ ] include/picolibrary/microchip/megaavr/multiplexed_signals/atmega328p.h
- [ ] include/picolibrary/microchip/megaavr/peripheral.h
- [ ] include/picolibrary/microchip/megaavr/peripheral/atmega2560.h
- [ ] include/picolibrary/microchip/megaavr/peripheral/atmega328p.h
- [ ] include/picolibrary/microchip/megaavr/peripheral/instance.h
- [ ] include/picolibrary/microchip/megaavr/peripheral/port.h
- [x] include/picolibrary/microchip/megaavr/peripheral/spi.h
- [ ] include/picolibrary/microchip/megaavr/peripheral/twi.h
- [ ] include/picolibrary/microchip/megaavr/peripheral/usart.h
- [ ] include/picolibrary/microchip/megaavr/register.h
- [ ] include/picolibrary/microchip/megaavr/spi.h
- [ ] include/picolibrary/microchip/megaavr/version.h
445,543 | 31,239,894,489 | IssuesEvent | 2023-08-20 18:33:40 | burmilla/os | https://api.github.com/repos/burmilla/os | closed | Add to releases a format supported by Digital Ocean | documentation wontfix | https://www.digitalocean.com/docs/images/custom-images/#image-requirements
> Image Requirements
>
>Images you upload to DigitalOcean must meet the following requirements:
>
> Operating system. Images must have a Unix-like OS.
>
> File format. Images must be in one of the following file formats:
> Raw (.img) with an MBR or GPT partition table
> qcow2
> VHDX
> VDI
> VMDK
>
> Size. Images must be 100 GB or less when uncompressed, including the filesystem.
>
> Filesystem. Images must support the ext3 or ext4 filesystems.
>
> cloud-init. Images must have cloud-init 0.7.7 or higher, cloudbase-init, coreos-cloudinit, ignition, or bsd-cloudinit installed and configured correctly. If your image's default cloud-init configuration lists the NoCloud datasource before the ConfigDrive datasource, Droplets created from your image will not function properly.
Click here to display detailed cloud-init instructions.
>
> SSH configuration. Images must have sshd installed and configured to run on boot. If your image does not have sshd set up, you will not have SSH access to Droplets created from that image unless you recover access using the Droplet console.
>
> You can also upload a custom image that meets the above criteria as a compressed gzip or bzip2 file. | 1.0 | non_main | 0 |
98,330 | 11,061,826,816 | IssuesEvent | 2019-12-11 08:13:09 | JohnCoene/waiter | https://api.github.com/repos/JohnCoene/waiter | opened | Hostess Additional Arguments | documentation enhancement | The hostess' underlying JavaScript library has numerous, very interesting parameters to customise the loading bar: these must be included and well documented. | 1.0 | non_main | 0 |
705,726 | 24,246,340,822 | IssuesEvent | 2022-09-27 10:52:37 | Aizistral-Studios/No-Chat-Reports | https://api.github.com/repos/Aizistral-Studios/No-Chat-Reports | closed | All of the UI pertaining to the new Chat Encryption feature are cut off on most common display resolutions, including 1920x1080 and 3456x2160. | bug confirmed priority: normal | ## Environment
**Modloader:** quilt-loader 0.17.4
**Minecraft Version:** 1.19.2, 1.19.1
**No Chat Reports Version:** 1.13.0
## Bug Description
All of the UI pertaining to the new Chat Encryption feature are cut off on most common display resolutions, including 1920x1080 and 3456x2160.
This includes both "About Encryption" _and_ "Encryption Settings".
It would probably be a good idea to decrease the amount of padding between each line, or something like that.
## Screenshots ("About Encryption")
### 1920x1017@4x (Windowed mode on Windows)

### 1920x1080@4x (Fullscreen)

### 3456x1974@8x (Windowed mode on macOS)

### 3456x2160@8x (Fullscreen)

## Screenshots ("Encryption Settings")
### 1920x1017@4x (Windowed mode on Windows)

### 1920x1080@4x (Fullscreen)

### 3456x1974@8x (Windowed mode on macOS)

### 3456x2160@8x (Fullscreen)

| 1.0 | All of the UI pertaining to the new Chat Encryption feature are cut off on most common display resolutions, including 1920x1080 and 3456x2160. - ## Environment
**Modloader:** quilt-loader 0.17.4
**Minecraft Version:** 1.19.2, 1.19.1
**No Chat Reports Version:** 1.13.0
## Bug Description
All of the UI pertaining to the new Chat Encryption feature are cut off on most common display resolutions, including 1920x1080 and 3456x2160.
This includes both "About Encryption" _and_ "Encryption Settings".
It would probably be a good idea to decrease the amount of padding between each line, or something like that.
## Screenshots ("About Encryption")
### 1920x1017@4x (Windowed mode on Windows)

### 1920x1080@4x (Fullscreen)

### 3456x1974@8x (Windowed mode on macOS)

### 3456x2160@8x (Fullscreen)

## Screenshots ("Encryption Settings")
### 1920x1017@4x (Windowed mode on Windows)

### 1920x1080@4x (Fullscreen)

### 3456x1974@8x (Windowed mode on macOS)

### 3456x2160@8x (Fullscreen)

| non_main | all of the ui pertaining to the new chat encryption feature are cut off on most common display resolutions including and environment modloader quilt loader minecraft version no chat reports version bug description all of the ui pertaining to the new chat encryption feature are cut off on most common display resolutions including and this includes both about encryption and encryption settings it would probably be a good idea to decrease the amount of padding between each line or something like that screenshots about encryption windowed mode on windows fullscreen windowed mode on macos fullscreen screenshots encryption settings windowed mode on windows fullscreen windowed mode on macos fullscreen | 0 |
51,796 | 13,648,272,451 | IssuesEvent | 2020-09-26 08:18:03 | srivatsamarichi/tailspin-spacegame | https://api.github.com/repos/srivatsamarichi/tailspin-spacegame | closed | CVE-2020-11023 (Medium) detected in jquery-2.1.4.min.js | bug security vulnerability | ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: tailspin-spacegame/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: tailspin-spacegame/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/tailspin-spacegame/commit/18bed90b3f61ffbe393dbb67ae624f4355632bcc">18bed90b3f61ffbe393dbb67ae624f4355632bcc</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_main | 0 |
133,029 | 28,488,671,169 | IssuesEvent | 2023-04-18 09:41:41 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | When treating, the NPC doctor does not look for medicines in the patient's pockets. | Feature request Code Design | ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
When treating, the NPC doctor does not look for medicines in the patient's pockets. The patient may carry medicines in his pockets, on his belt, or in the pockets of a doctor's uniform worn by the patient. It would be logical for the doctor to search the patient's pockets for medicines when necessary, since the patient carries medical supplies to keep himself healthy.
### Reproduction steps
_No response_
### Bug prevalence
Just once
### Version
0.21.6.0
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_ | 1.0 | non_main | 0 |
731,753 | 25,229,948,175 | IssuesEvent | 2022-11-14 18:56:43 | cloudflare/cloudflared | https://api.github.com/repos/cloudflare/cloudflared | opened | 🐛 Issue accessing Raritan IP-KVM over public hostname | Type: Bug Priority: Normal | **Describe the bug**
While attempting to configure cloudflared to access a Raritan Dominion KXIV-101 IP-KVM I am having some issues connecting to the KVM portion. I can connect to the device and view configuration options, login, etc. but when attempting to start the session to access the KVM it returns "Client has been disconnected from target." and generates some logs either in cloudflared or in an nginx reverse-proxy.
**To Reproduce**
Steps to reproduce the behavior:
Build tunnel using dashboard and point to the IP-KVM's internal IP address or an nginx-reverse proxy running on the cloudflared host.
Connect to device and attempt to start KVM session.
**Expected behavior**
Connection to KVM is established without errors.
**Environment and versions**
- OS: Ubuntu 20.04
- Architecture: amd64
- Version: 2022.10.3
**Logs and errors**
Cloudflared errors:
```
2022-11-14T17:23:49Z ERR error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: EOF" cfRay=76a16fe93e7e90f4-FRA ingressRule=1 originService=https://x.x.x.x:443
2022-11-14T17:23:49Z ERR Request failed error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: EOF" connIndex=1 dest=https://abc.xxx.com/rfb ip=198.41.192.77 type=ws
```
Nginx errors:
```
2022/11/14 12:55:14 [error] 3950327#3950327: *78 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: y.y.y.y, request: "GET /rfb HTTP/1.1", upstream: "https://x.x.x.x:443/rfb", host: "abc.xxx.com"
2022/11/14 12:55:14 [error] 3950327#3950327: *78 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: y.y.y.y, request: "GET /rfb HTTP/1.1", upstream: "https://x.x.x.x:443/rfb", host: "abc.xxx.com"
```
**Additional context**
The nginx reverse-proxy config works when connecting directly from a host on the same subnet or over a vpn as does a direct connection to the device. Also tried using a clean vm as the cloudflared host with no success. Nginx config is below
```
upstream raritan {
server x.x.x.x:443;
keepalive 32;
}
server {
listen 4080 http2 ssl;
listen [::]:4080 http2 ssl;
server_name y.y.y.y;
ssl_certificate /etc/ssl/certs/raritan-cf.crt;
ssl_certificate_key /etc/ssl/private/raritan-cf.key;
access_log /var/log/nginx/reverse-access.log;
error_log /var/log/nginx/reverse-error.log;
more_clear_headers 'Content-Length';
location / {
proxy_pass https://raritan;
proxy_http_version 1.1;
proxy_set_header Connection "upgrade";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
| 1.0 | non_main | 0 |
525,849 | 15,267,168,713 | IssuesEvent | 2021-02-22 09:45:03 | MichaelClerx/myokit | https://api.github.com/repos/MichaelClerx/myokit | closed | Module not found error on 1.31.0 on Win10 Python 3.8 | bug compatibility installation priority | ```
[11:10:25] Loading Myokit IDE
[11:10:25] Opened C:\Users\user\Downloads\model.mmt
[11:10:31] Running embedded script.
[11:10:36] An error has occurred
[11:10:36] Traceback (most recent call last):
File "c:\users\user\miniconda3\lib\site-packages\myokit\gui\ide.py", line 938, in action_run
myokit.run(
File "c:\users\user\miniconda3\lib\site-packages\myokit\_aux.py", line 1134, in run
r.run()
File "c:\users\user\miniconda3\lib\site-packages\myokit\_aux.py", line 1125, in run
myokit._exec(self.script, environment)
File "c:\users\user\miniconda3\lib\site-packages\myokit\_exec_new.py", line 15, in _exec
exec(script, globals, locals)
File "<string>", line 8, in <module>
File "c:\users\user\miniconda3\lib\site-packages\myokit\_sim\cvodesim.py", line 150, in __init__
self._sim = self._compile(module_name, fname, args, libs, libd, incd)
File "c:\users\user\miniconda3\lib\site-packages\myokit\_sim\__init__.py", line 210, in _compile
return load_module(name, d_build)
File "c:\users\user\miniconda3\lib\site-packages\myokit\_sim\__init__.py", line 54, in load_module
module = importlib.util.module_from_spec(spec)
File "<frozen importlib._bootstrap>", line 556, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 1101, in create_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
ImportError: DLL load failed while importing myokit_sim_1_8341692155779361965: The specified module could not be found.
``` | 1.0 | non_main | 0 |
445 | 3,591,894,123 | IssuesEvent | 2016-02-01 14:02:51 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | closed | Fix issue when building for PHP versions 5.3 and 5.4 | maintainability medium | When building in Travis, we are requiring the development dependencies (`--dev`) in order to install *satooshi/php-coveralls*. A recent change in the dependencies definition of this library makes composer install the latest 3.0 branch of the *symfony/** dependencies (composer always tries to install the latest version by default). This branch requires **at least** PHP 5.5.9. Therefore, when building on top of PHP 5.3 or 5.4, build fails.
The problem is skipped by modifying the build script and telling composer to install the lowest version available (`--prefer-lowest`), but this is a hack and should be fixed properly, either by *satooshi/php-coveralls* by specifying different branches supporting both 2.X and 3.0 branches of *symfony/** or by including dependencies ourselves on the original branches. | True | main | 1 |
178,601 | 29,930,366,630 | IssuesEvent | 2023-06-22 09:03:27 | geeksforsocialchange/teamwilder | https://api.github.com/repos/geeksforsocialchange/teamwilder | closed | [Bug]: Broken stories redirect to home page but with broken url remaining | bug e vv verified design change | ## Description
Visiting https://team-wilder-proto.pages.dev/story/broken-link takes you to the home page
It should probably either take you to a 'story does not exist' page like a broken guide does, or to the home page but without the extra url info, depending on what we want behaviour to be here
A dev conversation to decide which approach is best, given remaining time etc., needs to happen
## Acceptance criteria
Either
- [x] guides and stories have a 404 page that displays when you are on the wrong url | 1.0 | [Bug]: Broken stories redirect to home page but with broken url remaining - ## Description
Visiting https://team-wilder-proto.pages.dev/story/broken-link takes you to the home page
It should probably either take you to a 'story does not exist' page like a broken guide does, or to the home page but without the extra url info, depending on what we want behaviour to be here
A dev conversation to decide which approach is best, given remaining time etc., needs to happen
## Acceptance criteria
Either
- [x] guides and stories have a 404 page that displays when you are on the wrong url | non_main | broken stories redirect to home page but with broken url remaining description visiting takes you to the home page it should probably either take you to a story does not exist page like a broken guide does or to the home page but without the extra url info depending on what we want behaviour to be here a dev conversation to decided which approach is best given remaining time etc needs to happen acceptance criteria either guides and stories have a page that displays when you are on the wrong url | 0 |
542,966 | 15,875,181,879 | IssuesEvent | 2021-04-09 06:34:06 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Enabling out-of-band oauth client provisioning in devportal causes App keys view rendering error | API-M 4.0.0 Priority/High React-UI Resolution/Cannot Reproduce Severity/Critical Type/Bug | ### Description:
Enabling out-of-band oauth client provisioning in devportal[1], leads to errors when rendering keys of Applications that are created subsequently.

[1]https://apim.docs.wso2.com/en/latest/learn/api-security/oauth2/provisioning-out-of-band-oauth-clients/#provisioning-out-of-band-oauth2-clients
### Steps to reproduce:
1. Edit the deployment.toml to enable provisioning out-of-band oauth clients as follows
```
[apim.devportal]
enable_key_provisioning=true
```
2. Start API Manager and login to devportal.
3. Create a new Application and click on either production or sandbox keys to view the Apps credentials.
4. Page does not render and following error is seen in browser console.
```
react_devtools_backend.js:6 TypeError: Cannot read property 'keyType' of undefined
at TokenManager.jsx:630
at Array.map (<anonymous>)
at be.render (TokenManager.jsx:533)
at qa (react-dom.production.min.js:182)
at za (react-dom.production.min.js:181)
at ws (react-dom.production.min.js:263)
at Eu (react-dom.production.min.js:246)
at xu (react-dom.production.min.js:246)
at pu (react-dom.production.min.js:239)
at react-dom.production.min.js:123
r @ react_devtools_backend.js:6
os @ index.bundle.js:39
n.callback @ index.bundle.js:39
mo @ index.bundle.js:39
ls @ index.bundle.js:39
Tu @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Cu @ index.bundle.js:39
pu @ index.bundle.js:39
(anonymous) @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Ki @ index.bundle.js:39
Xi @ index.bundle.js:39
su @ index.bundle.js:39
enqueueSetState @ index.bundle.js:39
x.setState @ index.bundle.js:47
(anonymous) @ ApplicationDetails.bundle.js:1
Promise.then (async)
(anonymous) @ ApplicationDetails.bundle.js:1
componentDidMount @ ApplicationDetails.bundle.js:1
ls @ index.bundle.js:39
Tu @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Cu @ index.bundle.js:39
pu @ index.bundle.js:39
(anonymous) @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Ki @ index.bundle.js:39
Xi @ index.bundle.js:39
R @ index.bundle.js:39
Xe @ index.bundle.js:39
53ProtectedApp.jsx:177 Uncaught TypeError: Cannot read property 'contentWindow' of null
at ProtectedApp.jsx:177
(anonymous) @ ProtectedApp.bundle.js:1
setInterval (async)
checkSession @ ProtectedApp.bundle.js:1
componentDidMount @ ProtectedApp.bundle.js:1
ls @ index.bundle.js:39
Tu @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Cu @ index.bundle.js:39
pu @ index.bundle.js:39
(anonymous) @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Ki @ index.bundle.js:39
L @ index.bundle.js:55
A.port1.onmessage @ index.bundle.js:55
```
### Environment details (with versions):
- OS: Ubuntu 18.04
- Client: Chromium Version 83.0.4103.116
- Env (Docker/K8s):
| 1.0 | Enabling out-of-band oauth client provisioning in devportal causes App keys view rendering error - ### Description:
Enabling out-of-band oauth client provisioning in devportal[1], leads to errors when rendering keys of Applications that are created subsequently.

[1]https://apim.docs.wso2.com/en/latest/learn/api-security/oauth2/provisioning-out-of-band-oauth-clients/#provisioning-out-of-band-oauth2-clients
### Steps to reproduce:
1. Edit the deployment.toml to enable provisioning out-of-band oauth clients as follows
```
[apim.devportal]
enable_key_provisioning=true
```
2. Start API Manager and login to devportal.
3. Create a new Application and click on either production or sandbox keys to view the Apps credentials.
4. Page does not render and following error is seen in browser console.
```
react_devtools_backend.js:6 TypeError: Cannot read property 'keyType' of undefined
at TokenManager.jsx:630
at Array.map (<anonymous>)
at be.render (TokenManager.jsx:533)
at qa (react-dom.production.min.js:182)
at za (react-dom.production.min.js:181)
at ws (react-dom.production.min.js:263)
at Eu (react-dom.production.min.js:246)
at xu (react-dom.production.min.js:246)
at pu (react-dom.production.min.js:239)
at react-dom.production.min.js:123
r @ react_devtools_backend.js:6
os @ index.bundle.js:39
n.callback @ index.bundle.js:39
mo @ index.bundle.js:39
ls @ index.bundle.js:39
Tu @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Cu @ index.bundle.js:39
pu @ index.bundle.js:39
(anonymous) @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Ki @ index.bundle.js:39
Xi @ index.bundle.js:39
su @ index.bundle.js:39
enqueueSetState @ index.bundle.js:39
x.setState @ index.bundle.js:47
(anonymous) @ ApplicationDetails.bundle.js:1
Promise.then (async)
(anonymous) @ ApplicationDetails.bundle.js:1
componentDidMount @ ApplicationDetails.bundle.js:1
ls @ index.bundle.js:39
Tu @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Cu @ index.bundle.js:39
pu @ index.bundle.js:39
(anonymous) @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Ki @ index.bundle.js:39
Xi @ index.bundle.js:39
R @ index.bundle.js:39
Xe @ index.bundle.js:39
53ProtectedApp.jsx:177 Uncaught TypeError: Cannot read property 'contentWindow' of null
at ProtectedApp.jsx:177
(anonymous) @ ProtectedApp.bundle.js:1
setInterval (async)
checkSession @ ProtectedApp.bundle.js:1
componentDidMount @ ProtectedApp.bundle.js:1
ls @ index.bundle.js:39
Tu @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Cu @ index.bundle.js:39
pu @ index.bundle.js:39
(anonymous) @ index.bundle.js:39
e.unstable_runWithPriority @ index.bundle.js:55
$i @ index.bundle.js:39
Ki @ index.bundle.js:39
L @ index.bundle.js:55
A.port1.onmessage @ index.bundle.js:55
```
### Environment details (with versions):
- OS: Ubuntu 18.04
- Client: Chromium Version 83.0.4103.116
- Env (Docker/K8s):
| non_main | enabling out of band oauth client provisioning in devportal causes app keys veiw rendering error description enabling out of band oauth client provisioning in devportal leads to errors when rendering keys of applications that are created subsequently steps to reproduce edit the deployment toml to enable provisioning out of band oauth clients as follows enable key provisioning true start api manager and login to devportal create a new application and click on either production or sandbox keys to view the apps credentials page does not render and following error is seen in browser console react devtools backend js typeerror cannot read property keytype of undefined at tokenmanager jsx at array map at be render tokenmanager jsx at qa react dom production min js at za react dom production min js at ws react dom production min js at eu react dom production min js at xu react dom production min js at pu react dom production min js at react dom production min js r react devtools backend js os index bundle js n callback index bundle js mo index bundle js ls index bundle js tu index bundle js e unstable runwithpriority index bundle js i index bundle js cu index bundle js pu index bundle js anonymous index bundle js e unstable runwithpriority index bundle js i index bundle js ki index bundle js xi index bundle js su index bundle js enqueuesetstate index bundle js x setstate index bundle js anonymous applicationdetails bundle js promise then async anonymous applicationdetails bundle js componentdidmount applicationdetails bundle js ls index bundle js tu index bundle js e unstable runwithpriority index bundle js i index bundle js cu index bundle js pu index bundle js anonymous index bundle js e unstable runwithpriority index bundle js i index bundle js ki index bundle js xi index bundle js r index bundle js xe index bundle js jsx uncaught typeerror cannot read property contentwindow of null at protectedapp jsx anonymous protectedapp bundle js setinterval async 
checksession protectedapp bundle js componentdidmount protectedapp bundle js ls index bundle js tu index bundle js e unstable runwithpriority index bundle js i index bundle js cu index bundle js pu index bundle js anonymous index bundle js e unstable runwithpriority index bundle js i index bundle js ki index bundle js l index bundle js a onmessage index bundle js environment details with versions os ubuntu client chromium version env docker | 0 |
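The `Cannot read property 'keyType' of undefined` trace above is the usual symptom of render code assuming every key entry in the payload exists. As a hedged, language-neutral sketch (Python here; this is not the actual devportal/React fix, and all names are illustrative), the defensive pattern is to treat the field as optional:

```python
def key_type_label(key_entry):
    """Return a key type for display, tolerating missing data.

    With out-of-band key provisioning enabled, the keys payload for a
    freshly created application may be empty, so `key_entry` can be None
    or lack 'keyType'. All names here are illustrative assumptions, not
    real API Manager code.
    """
    if not key_entry:
        return "unknown"
    return key_entry.get("keyType", "unknown")
```

A renderer calling something like `key_type_label(keys.get("production"))` then degrades to a placeholder instead of crashing the whole page.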
4,526 | 23,532,372,077 | IssuesEvent | 2022-08-19 16:36:49 | ipfs/ipfs-docs | https://api.github.com/repos/ipfs/ipfs-docs | closed | Recent releases page is out of date | need/maintainers-input | There is a [recent releases page](https://docs.ipfs.tech/install/recent-releases/#go-ipfs-0-10) page in IPFS Docs > Install. It was last updated Oct 2021 with go-ipfs 0.10. We're now at kubo 0.14.
It should either be updated & a process identified for keeping it up-to-date, or removed entirely.
Thoughts @TMoMoreau @jennijuju @johnnymatthews @2color @aschmahmann @lidel @BigLep? | True | Recent releases page is out of date - There is a [recent releases page](https://docs.ipfs.tech/install/recent-releases/#go-ipfs-0-10) page in IPFS Docs > Install. It was last updated Oct 2021 with go-ipfs 0.10. We're now at kubo 0.14.
It should either be updated & a process identified for keeping it up-to-date, or removed entirely.
Thoughts @TMoMoreau @jennijuju @johnnymatthews @2color @aschmahmann @lidel @BigLep? | main | recent releases page is out of date there is a page in ipfs docs install it was last updated oct with go ipfs we re now at kubo it should either be updated a process identified for keeping it up to date or removed entirely thoughts tmomoreau jennijuju johnnymatthews aschmahmann lidel biglep | 1 |
57,637 | 14,175,959,436 | IssuesEvent | 2020-11-12 22:32:50 | golang/go | https://api.github.com/repos/golang/go | closed | math/big: panic during recursive division of very large numbers | Security | A number of math/big.Int methods (Div, Exp, DivMod, Quo, Rem, QuoRem, Mod, ModInverse, ModSqrt, Jacobi, and GCD) can panic when provided crafted large inputs. For the panic to happen, the divisor or modulo argument must be larger than 3168 bits (on 32-bit architectures) or 6336 bits (on 64-bit architectures). Multiple math/big.Rat methods are similarly affected.
crypto/rsa.VerifyPSS, crypto/rsa.VerifyPKCS1v15, and crypto/dsa.Verify may panic when provided crafted public keys and signatures. crypto/ecdsa and crypto/elliptic operations may only be affected if custom CurveParams with unusually large field sizes (several times larger than the largest supported curve, P-521) are in use. Using crypto/x509.Verify on a crafted X.509 certificate chain can lead to a panic, even if the certificates don’t chain to a trusted root. The chain can be delivered via a crypto/tls connection to a client, or to a server that accepts and verifies client certificates. net/http clients can be made to crash by an HTTPS server, while net/http servers that accept client certificates will recover the panic and are unaffected.
Moreover, an application might crash invoking crypto/x509.(*CertificateRequest).CheckSignature on an X.509 certificate request or during a golang.org/x/crypto/otr conversation. Parsing a golang.org/x/crypto/openpgp Entity or verifying a signature may crash. Finally, a golang.org/x/crypto/ssh client can panic due to a malformed host key, while a server could panic if either PublicKeyCallback accepts a malformed public key, or if IsUserAuthority accepts a certificate with a malformed public key.
Thanks to the Go Ethereum team and the OSS-Fuzz project for reporting this. Thanks to Rémy Oudompheng and Robert Griesemer for their help developing and validating the fix.
This issue is CVE-2020-28362. | True | math/big: panic during recursive division of very large numbers - A number of math/big.Int methods (Div, Exp, DivMod, Quo, Rem, QuoRem, Mod, ModInverse, ModSqrt, Jacobi, and GCD) can panic when provided crafted large inputs. For the panic to happen, the divisor or modulo argument must be larger than 3168 bits (on 32-bit architectures) or 6336 bits (on 64-bit architectures). Multiple math/big.Rat methods are similarly affected.
crypto/rsa.VerifyPSS, crypto/rsa.VerifyPKCS1v15, and crypto/dsa.Verify may panic when provided crafted public keys and signatures. crypto/ecdsa and crypto/elliptic operations may only be affected if custom CurveParams with unusually large field sizes (several times larger than the largest supported curve, P-521) are in use. Using crypto/x509.Verify on a crafted X.509 certificate chain can lead to a panic, even if the certificates don’t chain to a trusted root. The chain can be delivered via a crypto/tls connection to a client, or to a server that accepts and verifies client certificates. net/http clients can be made to crash by an HTTPS server, while net/http servers that accept client certificates will recover the panic and are unaffected.
Moreover, an application might crash invoking crypto/x509.(*CertificateRequest).CheckSignature on an X.509 certificate request or during a golang.org/x/crypto/otr conversation. Parsing a golang.org/x/crypto/openpgp Entity or verifying a signature may crash. Finally, a golang.org/x/crypto/ssh client can panic due to a malformed host key, while a server could panic if either PublicKeyCallback accepts a malformed public key, or if IsUserAuthority accepts a certificate with a malformed public key.
Thanks to the Go Ethereum team and the OSS-Fuzz project for reporting this. Thanks to Rémy Oudompheng and Robert Griesemer for their help developing and validating the fix.
This issue is CVE-2020-28362. | non_main | math big panic during recursive division of very large numbers a number of math big int methods div exp divmod quo rem quorem mod modinverse modsqrt jacobi and gcd can panic when provided crafted large inputs for the panic to happen the divisor or modulo argument must be larger than bits on bit architectures or bits on bit architectures multiple math big rat methods are similarly affected crypto rsa verifypss crypto rsa and crypto dsa verify may panic when provided crafted public keys and signatures crypto ecdsa and crypto elliptic operations may only be affected if custom curveparams with unusually large field sizes several times larger than the largest supported curve p are in use using crypto verify on a crafted x certificate chain can lead to a panic even if the certificates don’t chain to a trusted root the chain can be delivered via a crypto tls connection to a client or to a server that accepts and verifies client certificates net http clients can be made to crash by an https server while net http servers that accept client certificates will recover the panic and are unaffected moreover an application might crash invoking crypto certificaterequest checksignature on an x certificate request or during a golang org x crypto otr conversation parsing a golang org x crypto openpgp entity or verifying a signature may crash finally a golang org x crypto ssh client can panic due to a malformed host key while a server could panic if either publickeycallback accepts a malformed public key or if isuserauthority accepts a certificate with a malformed public key thanks to the go ethereum team and the oss fuzz project for reporting this thanks to rémy oudompheng and robert griesemer for their help developing and validating the fix this issue is cve | 0 |
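The fix for the report above landed inside Go's math/big, but the general mitigation pattern (bounding the size of attacker-controlled divisors or moduli before doing big-integer arithmetic) can be sketched in Python. The bit thresholds are the ones quoted in the report; everything else is an illustrative assumption, not the Go patch:

```python
# Thresholds quoted in the report: the panic needed a divisor/modulus larger
# than 3168 bits (32-bit builds) or 6336 bits (64-bit builds).
MAX_MOD_BITS = 6336

def checked_modexp(base, exp, mod, max_bits=MAX_MOD_BITS):
    """Modular exponentiation that rejects suspiciously large moduli.

    Illustrative guard only: the real fix corrected the recursive
    division in Go's math/big rather than rejecting inputs.
    """
    if mod <= 0:
        raise ValueError("modulus must be positive")
    if mod.bit_length() > max_bits:
        raise ValueError("modulus too large: %d bits" % mod.bit_length())
    return pow(base, exp, mod)
```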
4,589 | 23,817,644,890 | IssuesEvent | 2022-09-05 08:21:16 | tgbot-collection/ytdlbot | https://api.github.com/repos/tgbot-collection/ytdlbot | closed | sub download issue, channel not downloaded | bug not-maintained | ```
https://www.youtube.com/c/RusPiano
```
is added but never received a download | True | sub download issue, channel not downloaded - ```
https://www.youtube.com/c/RusPiano
```
is added but never received a download | main | sub download issue channel not downloaded is added but never received a download | 1 |
4,513 | 23,465,030,549 | IssuesEvent | 2022-08-16 15:59:47 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | closed | [MAINTAIN] any arch packages remove R alias or not | maintain |
<!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
[any_R.log](https://github.com/BioArchLinux/Packages/files/9351831/any_R.log)
</details>
**Packages (please complete the following information):**
- Package Name: R packages
**Description**
@hubutui thinks that it should be removed
@sukanka thinks that it shouldn't be removed
Could you give me a final answer to decide
| True | [MAINTAIN] any arch packages remove R alias or not -
<!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
[any_R.log](https://github.com/BioArchLinux/Packages/files/9351831/any_R.log)
</details>
**Packages (please complete the following information):**
- Package Name: R packages
**Description**
@hubutui thinks that it should be removed
@sukanka thinks that it shouldn't be removed
Could you give me a final answer to decide
| main | any arch packages remove r alias or not please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug packages please complete the following information package name r packages description hubutui thinks that it should be removed sukanka thinks that it shouldn t be removed could you give me a final answer to decide | 1 |
29,046 | 8,269,153,930 | IssuesEvent | 2018-09-15 02:01:19 | google/xi-editor | https://api.github.com/repos/google/xi-editor | opened | 'cargo bench' fails on master | build error | This looks like something that slipped through with #762. @mqzry would you like to take a look or should I? You probably have a better sense of what the best solution is.. | 1.0 | 'cargo bench' fails on master - This looks like something that slipped through with #762. @mqzry would you like to take a look or should I? You probably have a better sense of what the best solution is.. | non_main | cargo bench fails on master this looks like something that slipped through with mqzry would you like to take a look or should i you probably have a better sense of what the best solution is | 0 |
19,543 | 5,903,213,843 | IssuesEvent | 2017-05-19 05:40:38 | dickschoeller/gedbrowser | https://api.github.com/repos/dickschoeller/gedbrowser | closed | Menu should be a fragment | code smell in progress | :hankey:
Currently, each page implements its own identical toolbar. Because of this, we have the following problems:
- The renderers expose control methods that should be in a separate object. This is both a poor separation of concerns and requires an unnecessary facade.
- Each template has a block of code that is very similar but not identical. These have to be kept in sync.
- It is a non-standard way to do things when there is a perfectly good standard.
| 1.0 | Menu should be a fragment - :hankey:
Currently, each page implements its own identical toolbar. Because of this, we have the following problems:
- The renderers expose control methods that should be in a separate object. This is both a poor separation of concerns and requires an unnecessary facade.
- Each template has a block of code that is very similar but not identical. These have to be kept in sync.
- It is a non-standard way to do things when there is a perfectly good standard.
| non_main | menu should be a fragment hankey currently each page implements its own identical toolbar because of this we have the following problems the renderers expose control methods that should be a in separate object this is both a poor separation of concerns and requires an unnecessary facade each template has a block of code that id very similar but not identical these have to be kept in sync it is a non standard way to do things when there is a perfectly good standard | 0 |
191,695 | 15,301,537,665 | IssuesEvent | 2021-02-24 13:44:28 | crowdsecurity/crowdsec | https://api.github.com/repos/crowdsecurity/crowdsec | opened | Improvement/Documentation multiple goroutines | documentation enhancement | **Is your feature request related to a problem? Please describe.**
At the moment, users who want better performance can add goroutines for the parser, leakybucket and output stages, but this is not documented.
**Describe the solution you'd like**
Document this feature.
| 1.0 | Improvement/Documentation multiple goroutines - **Is your feature request related to a problem? Please describe.**
At the moment, users who want better performance can add goroutines for the parser, leakybucket and output stages, but this is not documented.
**Describe the solution you'd like**
Document this feature.
| non_main | improvement documentation multiple goroutines is your feature request related to a problem please describe at the moment if one wants better performance he can add goroutines for parser leakybucket and output stuff but this is not documented describe the solution you d like document this feature | 0 |
11,995 | 7,607,130,038 | IssuesEvent | 2018-04-30 15:29:50 | DemokratieInBewegung/abstimmungstool | https://api.github.com/repos/DemokratieInBewegung/abstimmungstool | opened | Comment Icon doesn't signify whether I commented | BUG Usability | We wanted to fill-in the comment icon when the user has commented on that argument (or it is their own), but we currently don't do that. | True | Comment Icon doesn't signify whether I commented - We wanted to fill-in the comment icon when the user has commented on that argument (or it is their own), but we currently don't do that. | non_main | comment icon doesn t signify whether i commented we wanted to fill in the comment icon when the user has commented on that argument or it is their own but we currently don t do that | 0 |
449,569 | 31,851,365,566 | IssuesEvent | 2023-09-15 02:07:58 | ksh93/ksh | https://api.github.com/repos/ksh93/ksh | closed | printf with %d: the man page should say that when ".." is used, the precision is 1 | documentation | For `printf` with `%d`, when a single dot is used without a precision, the precision is taken as 0. So the ksh93 specific case `..` to provide a base without giving a precision (e.g. `%..2d`) is ambiguous: does this mean that the precision is missing (like without any dot, thus defaulting to 1) or does this mean that the precision is 0 (like with a single dot)? This should be documented in the man page (`sh.1`).
A test shows that the precision is 1 in this case (which is much more useful for the user than 0):
```
$ printf ">%..2d<\n" 0
>0<
``` | 1.0 | printf with %d: the man page should say that when ".." is used, the precision is 1 - For `printf` with `%d`, when a single dot is used without a precision, the precision is taken as 0. So the ksh93 specific case `..` to provide a base without giving a precision (e.g. `%..2d`) is ambiguous: does this mean that the precision is missing (like without any dot, thus defaulting to 1) or does this mean that the precision is 0 (like with a single dot)? This should be documented in the man page (`sh.1`).
A test shows that the precision is 1 in this case (which is much more useful for the user than 0):
```
$ printf ">%..2d<\n" 0
>0<
``` | non_main | printf with d the man page should say that when is used the precision is for printf with d when a single dot is used without a precision the precision is taken as so the specific case to provide a base without giving a precision e g is ambiguous does this mean that the precision is missing like without any dot thus defaulting to or does this mean that the precision is like with a single dot this should be documented in the man page sh a test shows that the precision is in this case which is much more useful for the user than printf n | 0 |
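Python's %-formatting implements the same C printf precision convention for the plain cases, so the 0-versus-1 distinction discussed above can be checked outside ksh (the ksh93-only `..base` syntax, e.g. `%..2d`, has no Python analogue and is not reproduced here):

```python
# C printf precision rules for %d, shared by ksh and by Python's
# %-formatting for the cases below:
#   - precision is the *minimum number of digits* printed
#   - when no precision is given at all, it defaults to 1
padded = "%.5d" % 42    # precision 5 pads with leading zeros
default = "%d" % 0      # default precision 1: a zero value still prints "0"
# In C and ksh, an explicit precision of 0 (a bare dot, "%.d") makes a zero
# value print no characters at all, which is why "%..2d" is ambiguous: it
# could be read either as "precision missing" (i.e. 1) or as "precision 0".
```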
4,625 | 23,952,638,720 | IssuesEvent | 2022-09-12 12:47:57 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Hide selection background when only one cell is selected | type: enhancement work: frontend status: ready restricted: maintainers | ## Current behavior
- A blue background displays on all cells that are selected -- even when only one cell is selected.
- The background displays during edit mode too, which is particularly weird.
## Desired behavior
- The "selected" cell background is hidden when any of the following conditions are true:
- only one cell is selected
- The cell is in edit mode
(Note that a cell can be in edit mode even when multiple cells are selected, so it's important to use both the above criteria)
CC @rajatvijay
| True | Hide selection background when only one cell is selected - ## Current behavior
- A blue background displays on all cells that are selected -- even when only one cell is selected.
- The background displays during edit mode too, which is particularly weird.
## Desired behavior
- The "selected" cell background is hidden when any of the following conditions are true:
- only one cell is selected
- The cell is in edit mode
(Note that a cell can be in edit mode even when multiple cells are selected, so it's important to use both the above criteria)
CC @rajatvijay
| main | hide selection background when only one cell is selected current behavior a blue background displays on all cells that are selected even when only one cell is selected the background displays during edit mode too which is particularly weird desired behavior the selected cell background is hidden when any of the following conditions are true only one cell is selected the cell is in edit mode note that a cell can be in edit mode even when multiple cells are selected so it s important to use both the above criteria cc rajatvijay | 1 |
25 | 2,536,753,698 | IssuesEvent | 2015-01-26 16:09:08 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | opened | Speech formatting is handled shittily | Maintainability - Hinders improvements | it was okish before when there was only italics but with all the new shit added, Gia was right and I should code something to make this not suck. | True | Speech formatting is handled shittily - it was okish before when there was only italics but with all the new shit added, Gia was right and I should code something to make this not suck. | main | speech formatting is handled shittily it was okish before when there was only italics but with all the new shit added gia was right and i should code something to make this not suck | 1 |
4,980 | 25,568,625,426 | IssuesEvent | 2022-11-30 15:59:56 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [accordion] explore `start` side alignment | type: question ❓ role: visual 🎨 type: discussion 💬 component: accordion status: waiting for maintainer response 💬 | Since we are allowing `start` side alignment now we need to explore the design and layout implications of this.
- Do we add in padding to the right of the chevron when using the 'start' alignment?
- How do you achieve text alignment when the chevron in on the left?
- Where does the content inside the panel start?
- Other?
Design tasks:
- [ ] Explore variations and possibilities list above
- [ ] Create final design spec
- [ ] Open issues for implementation (code, kit, website)
Related issue: https://github.com/carbon-design-system/carbon-website/issues/2289 | True | [accordion] explore `start` side alignment - Since we are allowing `start` side alignment now we need to explore the design and layout implications of this.
- Do we add in padding to the right of the chevron when using the 'start' alignment?
- How do you achieve text alignment when the chevron in on the left?
- Where does the content inside the panel start?
- Other?
Design tasks:
- [ ] Explore variations and possibilities list above
- [ ] Create final design spec
- [ ] Open issues for implementation (code, kit, website)
Related issue: https://github.com/carbon-design-system/carbon-website/issues/2289 | main | explore start side alignment since we are allowing start side alignment now we need to explore the design and layout implications of this do we add in padding to the right of the chevron when using the start alignment how do you achieve text alignment when the chevron in on the left where does the content inside the panel start other design tasks explore variations and possibilities list above create final design spec open issues for implementation code kit website related issue | 1 |
10,795 | 3,141,115,372 | IssuesEvent | 2015-09-12 09:05:15 | FreeCodeCamp/FreeCodeCamp | https://api.github.com/repos/FreeCodeCamp/FreeCodeCamp | opened | First test of 'Create Bootstrap Wells' waypoint has slight phrasing error | Easy Test Improvement | 
`elements` (last word of the test) shouldn't be inside the code block. Also, it could be rephrased to say each of your `div` element with the class `col-xs-6`. | 1.0 | First test of 'Create Bootstrap Wells' waypoint has slight phrasing error - 
`elements` (last word of the test) shouldn't be inside the code block. Also, it could be rephrased to say each of your `div` element with the class `col-xs-6`. | non_main | first test of create bootstrap wells waypoint has slight phrasing error elements last word of the test shouldn t be inside the code block also it could be rephrased to say each of your div element with the class col xs | 0 |
5,489 | 27,415,336,658 | IssuesEvent | 2023-03-01 13:24:04 | Windham-High-School/CubeServer | https://api.github.com/repos/Windham-High-School/CubeServer | closed | API Wrapper Versioning | enhancement trivial api maintainability | The client_config blueprint that generates the prepackaged API wrapper libraries with client configuration built-in needs to manage compatibility with different versions and choose the latest compatible release of the API wrapper library to clone.
The alternative is to keep the API wrapper generally backwards-compatible. | True | API Wrapper Versioning - The client_config blueprint that generates the prepackaged API wrapper libraries with client configuration built-in need to manage compatibility with different versions choose the latest compatible release of the API wrapper library to clone.
The alternative is to keep the API wrapper generally backwards-compatible. | main | api wrapper versioning the client config blueprint that generates the prepackaged api wrapper libraries with client configuration built in need to manage compatibility with different versions choose the latest compatible release of the api wrapper library to clone the alternative is to keep the api wrapper generally backwards compatible | 1 |
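The versioning task above lends itself to a small sketch. Since the API wrapper (CubeServer-api-python) is a Python project, here is a hedged illustration of "choose the latest compatible release" under a SemVer-style assumption (same major version means compatible); the helper names and the release list are hypothetical, not part of the client_config blueprint:

```python
def parse_version(tag: str) -> tuple[int, ...]:
    """Turn a release tag like 'v1.4.2' into a comparable tuple of ints."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def latest_compatible(server_version: str, wrapper_releases: list[str]) -> str:
    """Pick the newest wrapper release sharing the server's major version."""
    major = parse_version(server_version)[0]
    candidates = [r for r in wrapper_releases if parse_version(r)[0] == major]
    if not candidates:
        raise LookupError(f"no wrapper release compatible with server {server_version}")
    return max(candidates, key=parse_version)
```

For example, a 1.2.0 server offered `["v0.9.1", "v1.0.0", "v1.3.1", "v2.0.0"]` would clone `"v1.3.1"`. Keeping the wrapper generally backwards-compatible, as the alternative suggests, would make this filter unnecessary.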
4,906 | 25,224,305,882 | IssuesEvent | 2022-11-14 14:58:19 | precice/precice | https://api.github.com/repos/precice/precice | closed | Turn EventTimings into a CMake submodule | maintainability good first issue | The [EventTimings project](https://github.com/precice/EventTimings) is a fully fledged CMake project.
We should consider refactoring it out of preCICE by placing it into a submodule. | True | Turn EventTimings into a CMake submodule - The [EventTimings project](https://github.com/precice/EventTimings) is a fully fledged CMake project.
We should consider refactoring it out of preCICE by placing it into a submodule. | main | turn eventtimings into a cmake submodule the is a fully fledged cmake project we should consider refactoring it out of precice by placing it into a submodule | 1 |
116,722 | 14,995,333,006 | IssuesEvent | 2021-01-29 14:10:56 | CIMDBORG/CIMMigrationProject | https://api.github.com/repos/CIMDBORG/CIMMigrationProject | opened | Add ER/RFC Field in the Edit/View Records and Review forms | Design enhancement | Add an ER/RFC # Field to the Edit/View Records form and the other forms we use for reviewing issues. This will allow us to attach a ER/RFC # that is related to the issue opened. This will aid us in research later when we need to find how a function works or how an item was implemented. | 1.0 | Add ER/RFC Field in the Edit/View Records and Review forms - Add an ER/RFC # Field to the Edit/View Records form and the other forms we use for reviewing issues. This will allow us to attach a ER/RFC # that is related to the issue opened. This will aid us in research later when we need to find how a function works or how an item was implemented. | non_main | add er rfc field in the edit view records and review forms add an er rfc field to the edit view records form and the other forms we use for reviewing issues this will allow us to attach a er rfc that is related to the issue opened this will aid us in research later when we need to find how a function works or how an item was implemented | 0 |
2,959 | 10,616,627,764 | IssuesEvent | 2019-10-12 13:14:18 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | closed | Update to Go 1.13 and latest dependency versions | context-workflow scope-compatibility scope-maintainability scope-performance scope-quality scope-security scope-stability type-task | [Go 1.13 has been released][blog] over a month ago, coming with some great features and a lot of stability, performance, and security improvements and bug fixes. The [new `os.UserConfigDir()` function][os] is a great addition for the handling of snowsaw's configuration files that will be implemented later on. See the [Go 1.13 official release notes][rln] for more details.
Since there are no breaking changes snowsaw will now require Go 1.13 as minimum version.
With the update to Go 1.13.x all outdated dependencies should also be updated to their latest versions to prevent possible module incompatibilities as well as including the latest improvements and bug fixes.
[blog]: https://blog.golang.org/go1.13
[os]: https://golang.org/pkg/os/#UserConfigDir
[rln]: https://golang.org/doc/go1.13
| True | Update to Go 1.13 and latest dependency versions - [Go 1.13 has been released][blog] over a month ago, coming with some great features and a lot of stability, performance, and security improvements and bug fixes. The [new `os.UserConfigDir()` function][os] is a great addition for the handling of snowsaw's configuration files that will be implemented later on. See the [Go 1.13 official release notes][rln] for more details.
Since there are no breaking changes snowsaw will now require Go 1.13 as minimum version.
With the update to Go 1.13.x all outdated dependencies should also be updated to their latest versions to prevent possible module incompatibilities as well as including the latest improvements and bug fixes.
[blog]: https://blog.golang.org/go1.13
[os]: https://golang.org/pkg/os/#UserConfigDir
[rln]: https://golang.org/doc/go1.13
| main | update to go and latest dependency versions over a month ago that comes with some great features and a lot stability performance and security improvements and bug fixes the is a great addition for the handling for snowsaw s configuration files that will be implemented late on see the for more details since there are no breaking changes snowsaw will now require go as minimum version with the update to go x all outdated dependencies should also be updated to their latest versions to prevent possible module incompatibilities as well as including the latest improvements and bug fixes | 1 |
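Go's `os.UserConfigDir()`, highlighted in the record above, resolves on Unix to `$XDG_CONFIG_HOME` when set and non-empty, falling back to `$HOME/.config`. A rough Python rendering of that lookup logic, just to make the behavior concrete (hypothetical helper, not snowsaw code):

```python
import os

def user_config_dir() -> str:
    """Mimic Go 1.13's os.UserConfigDir() on Unix-like systems:
    $XDG_CONFIG_HOME if set and non-empty, otherwise $HOME/.config."""
    xdg = os.environ.get("XDG_CONFIG_HOME", "")
    if xdg:
        return xdg
    home = os.environ.get("HOME", "")
    if not home:
        raise OSError("neither $XDG_CONFIG_HOME nor $HOME is set")
    return os.path.join(home, ".config")
```

Basing the configuration-file lookup on this convention keeps snowsaw-style tools aligned with the XDG base directory spec rather than hard-coding a dotfile path.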
1,823 | 6,577,330,146 | IssuesEvent | 2017-09-12 00:09:15 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ios_template missing backup | affects_2.1 bug_report networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
networking/ios_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /home/admin-0/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
using ios_template with backup: true, I intermittently only get half the switch config. This also seems to affect the --check and --diff features, as the module tries to insert missing parts when they are in fact already present on the switch.
I have only seen this behaviour with a large stack of 8 Cisco 3850s, running IOS XE. I have two smaller stacks (of 5 and 2 using the same hardware/software) that don't seem to exhibit this problem.
I wonder if some timeout or read buffer is being hit? The problem seems to be intermittent.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Add templated config to an interface numbered Gi3/0/9 or higher, of a stack where the last interface is Gi8/0/48.
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| True | ios_template missing backup - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
networking/ios_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /home/admin-0/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
using ios_template with backup: true, I intermittently only get half the switch config. This also seems to affect the --check and --diff features, as the module tries to insert missing parts when they are in fact already present on the switch.
I have only seen this behaviour with a large stack of 8 Cisco 3850s, running IOS XE. I have two smaller stacks (of 5 and 2 using the same hardware/software) that don't seem to exhibit this problem.
I wonder if some timeout or read buffer is being hit? The problem seems to be intermittent.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Add templated config to an interface numbered Gi3/0/9 or higher, of a stack where the last interface is Gi8/0/48.
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| main | ios template missing backup issue type bug report component name networking ios template ansible version ansible config file home admin ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary using ios template with backup true i intermittently only get half the switch config this also seems to affect the check and diff features as the module tries to insert missing parts when they are in fact already present on the switch i have only seen this behaviour with a large stack of cisco running ios xe i have two smaller stacks of and using the same hardware software that don t seem to exhibit this problem i wonder if some timeout or read buffer is being hit the problem seems to be intermittent steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used add templated config to an interface numbered or higher of a stack where the last interface is expected results actual results | 1 |
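One way to make the truncation in the report above visible, independent of the module itself: a complete Cisco IOS `show running-config` capture terminates with a lone `end` line, so a half-delivered backup can be flagged cheaply on the consumer side. A hedged sketch of such a sanity check (illustrative helper, not part of ios_template):

```python
def backup_looks_complete(config_text: str) -> bool:
    """Heuristic truncation check: a complete IOS running-config
    finishes with a lone 'end' line."""
    lines = [line.strip() for line in config_text.strip().splitlines()]
    return bool(lines) and lines[-1] == "end"
```

A check like this would at least distinguish "timeout/read-buffer truncation" from "full config captured" when the problem only appears intermittently on large stacks.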
302,511 | 26,149,033,138 | IssuesEvent | 2022-12-30 10:33:37 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | closed | Release 4.4.0 - Alpha 2 | release test/4.4.0 | The following issue will gather all the info regarding testing and fixing in order to validate this release stage.
The definition of done for this one is the validation from the product owner of each QA analysis and the acceptance of the fixes implemented; all the issues below must be closed in order to close this one.
## Stage info
|Project|Main issue|Version|Stage|Tag|Previous Stage issue|Next Stage issue|
|---|---|---|---|---|---|---|
|[v4.4.0](https://github.com/orgs/wazuh/projects/14)|#15504|4.4.0|Alpha 2|[v4.4.0-alpha2](https://github.com/wazuh/wazuh/tree/v4.4.0-alpha2)|#15505|-|
## QA testing issues
In order to move to a new stage or the GA version, all tests and metrics analyses below must be in Closed status.
| Name | Issue | Status |DRI|
|-----------------------------|-----------------------------------------------|-------------|---|
| C unit | - | ⚪ Skipped |@wazuh/core|
| Python unit | - | ⚪ Skipped |@wazuh/framework|
| Footprint metrics | https://github.com/wazuh/wazuh/issues/15752 | 🟣 Completed |@wazuh/cicd|
| Workload benchmarks metrics | - | ⚪ Skipped |@wazuh/framework|
| Integration | #15797 | 🟣 Completed |@wazuh/qa|
| API integration | - | ⚪ Skipped |@wazuh/framework|
| System | #15751 | 🟣 Completed | @wazuh/framework|
| External integrations modules | #15750 | 🟣 Completed | @wazuh/framework|
| Demo use cases | #15761 | 🟣 Completed |@wazuh/cicd|
| Packages | #15753 | 🟣 Completed |@wazuh/cicd|
| Coverity scan | #15769 | 🟣 Completed |@wazuh/core|
| Ruleset | - | ⚪ Skipped |@wazuh/threat-intel|
| Kibana UI regression | https://github.com/wazuh/wazuh-kibana-app/issues/5041 | 🔴 Completed with failures | @wazuh/frontend|
| Splunk UI regression | - | ⚪ Skipped | @wazuh/frontend|
| WPK Upgrade | - | ⚪ Skipped |@wazuh/core|
| E2E UX | https://github.com/wazuh/wazuh/issues/15749 | 🔴 Completed with failures |@wazuh|
⚫ _Not started: The tasks didn't start yet._
🟡 _In progress: The team is already working on it._
🟢 _Ready to review: The product owner must audit and validate the results._
⚪ _Skipped: The task has been deemed not necessary for this stage._
🟣 _Completed: Task finished. Nothing to do here._
🔴 _Completed with failures: Some issues were raised here._
## Auditors' validation
In order to close and proceed with the release or the next stage version, the following auditors must give the green light to this stage.
- [x] @davidjiglesias
| 1.0 | Release 4.4.0 - Alpha 2 - The following issue will gather all the info regarding testing and fixing in order to validate this release stage.
The definition of done for this one is the validation from the product owner of each QA analysis and the acceptance of the fixes implemented; all the issues below must be closed in order to close this one.
## Stage info
|Project|Main issue|Version|Stage|Tag|Previous Stage issue|Next Stage issue|
|---|---|---|---|---|---|---|
|[v4.4.0](https://github.com/orgs/wazuh/projects/14)|#15504|4.4.0|Alpha 2|[v4.4.0-alpha2](https://github.com/wazuh/wazuh/tree/v4.4.0-alpha2)|#15505|-|
## QA testing issues
In order to move to a new stage or the GA version, all tests and metrics analyses below must be in Closed status.
| Name | Issue | Status |DRI|
|-----------------------------|-----------------------------------------------|-------------|---|
| C unit | - | ⚪ Skipped |@wazuh/core|
| Python unit | - | ⚪ Skipped |@wazuh/framework|
| Footprint metrics | https://github.com/wazuh/wazuh/issues/15752 | 🟣 Completed |@wazuh/cicd|
| Workload benchmarks metrics | - | ⚪ Skipped |@wazuh/framework|
| Integration | #15797 | 🟣 Completed |@wazuh/qa|
| API integration | - | ⚪ Skipped |@wazuh/framework|
| System | #15751 | 🟣 Completed | @wazuh/framework|
| External integrations modules | #15750 | 🟣 Completed | @wazuh/framework|
| Demo use cases | #15761 | 🟣 Completed |@wazuh/cicd|
| Packages | #15753 | 🟣 Completed |@wazuh/cicd|
| Coverity scan | #15769 | 🟣 Completed |@wazuh/core|
| Ruleset | - | ⚪ Skipped |@wazuh/threat-intel|
| Kibana UI regression | https://github.com/wazuh/wazuh-kibana-app/issues/5041 | 🔴 Completed with failures | @wazuh/frontend|
| Splunk UI regression | - | ⚪ Skipped | @wazuh/frontend|
| WPK Upgrade | - | ⚪ Skipped |@wazuh/core|
| E2E UX | https://github.com/wazuh/wazuh/issues/15749 | 🔴 Completed with failures |@wazuh|
⚫ _Not started: The tasks didn't start yet._
🟡 _In progress: The team is already working on it._
🟢 _Ready to review: The product owner must audit and validate the results._
⚪ _Skipped: The task has been deemed not necessary for this stage._
🟣 _Completed: Task finished. Nothing to do here._
🔴 _Completed with failures: Some issues were raised here._
## Auditors' validation
In order to close and proceed with the release or the next stage version, the following auditors must give the green light to this stage.
- [x] @davidjiglesias
| non_main | release alpha the following issue will gather all the info regarding testing and fixing in order to validate this release stage the definition of done for this one is the validation from the product owner of each qa analysis and the acceptance of the implemented fixes implemented all the below issues must be closed in order to close this one stage info project main issue version stage tag previous stage issue next stage issue qa testing issues in order to move to a new stage or the ga version all tests and metrics analyses below must be in closed status name issue status dri c unit ⚪ skipped wazuh core python unit ⚪ skipped wazuh framework footprint metrics 🟣 completed wazuh cicd workload benchmarks metrics ⚪ skipped wazuh framework integration 🟣 completed wazuh qa api integration ⚪ skipped wazuh framework system 🟣 completed wazuh framework external integrations modules 🟣 completed wazuh framework demo uses cases 🟣 completed wazuh cicd packages 🟣 completed wazuh cicd coverity scan 🟣 completed wazuh core ruleset ⚪ skipped wazuh threat intel kibana ui regression 🔴 completed with failures wazuh frontend splunk ui regression ⚪ skipped wazuh frontend wpk upgrade ⚪ skipped wazuh core ux 🔴 completed with failures wazuh ⚫ not started the tasks didn t start yet 🟡 in progress the team is already working on it 🟢 ready to review the product owner must audit and validate the results ⚪ skipped the task has been deemed not necessary for this stage 🟣 completed task finished nothing to do here 🔴 completed with failures some issues were raised here auditors validation in order to close and proceed with the release or the next stage version the following auditors must give the green light to this stage davidjiglesias | 0 |
110,013 | 16,963,765,247 | IssuesEvent | 2021-06-29 08:24:26 | opfab/operatorfabric-core | https://api.github.com/repos/opfab/operatorfabric-core | closed | WS-2020-0163 (Medium) detected in marked-0.7.0.tgz | security vulnerability | ## WS-2020-0163 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.7.0.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.7.0.tgz">https://registry.npmjs.org/marked/-/marked-0.7.0.tgz</a></p>
<p>Path to dependency file: operatorfabric-core/ui/main/package.json</p>
<p>Path to vulnerable library: operatorfabric-core/ui/main/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- compodoc-1.1.11.tgz (Root Library)
- :x: **marked-0.7.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opfab/operatorfabric-core/commit/02e564fdff7e533435a5a00f051178a638cdb3d7">02e564fdff7e533435a5a00f051178a638cdb3d7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (REDoS). rules.js have multiple unused capture groups which can lead to a Denial of Service.
<p>Publish Date: 2020-07-02
<p>URL: <a href=https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0>WS-2020-0163</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/v1.1.1">https://github.com/markedjs/marked/releases/tag/v1.1.1</a></p>
<p>Release Date: 2020-07-02</p>
<p>Fix Resolution: marked - 1.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2020-0163 (Medium) detected in marked-0.7.0.tgz - ## WS-2020-0163 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.7.0.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.7.0.tgz">https://registry.npmjs.org/marked/-/marked-0.7.0.tgz</a></p>
<p>Path to dependency file: operatorfabric-core/ui/main/package.json</p>
<p>Path to vulnerable library: operatorfabric-core/ui/main/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- compodoc-1.1.11.tgz (Root Library)
- :x: **marked-0.7.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opfab/operatorfabric-core/commit/02e564fdff7e533435a5a00f051178a638cdb3d7">02e564fdff7e533435a5a00f051178a638cdb3d7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (REDoS). rules.js have multiple unused capture groups which can lead to a Denial of Service.
<p>Publish Date: 2020-07-02
<p>URL: <a href=https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0>WS-2020-0163</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/v1.1.1">https://github.com/markedjs/marked/releases/tag/v1.1.1</a></p>
<p>Release Date: 2020-07-02</p>
<p>Fix Resolution: marked - 1.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws medium detected in marked tgz ws medium severity vulnerability vulnerable library marked tgz a markdown parser built for speed library home page a href path to dependency file operatorfabric core ui main package json path to vulnerable library operatorfabric core ui main node modules marked package json dependency hierarchy compodoc tgz root library x marked tgz vulnerable library found in head commit a href found in base branch master vulnerability details marked before is vulnerable to regular expression denial of service redos rules js have multiple unused capture groups which can lead to a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution marked step up your open source security game with whitesource | 0 |
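The suggested fix in the record above reduces to a version gate: any `marked` release below 1.1.1 is in the affected range. A minimal sketch of that comparison (hypothetical helper, not WhiteSource tooling; assumes plain numeric version strings):

```python
FIXED_IN = (1, 1, 1)  # marked 1.1.1 ships the ReDoS fix

def parse_version(v: str) -> tuple[int, ...]:
    """Split a plain numeric version string into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def is_vulnerable(installed: str) -> bool:
    """True if the installed 'marked' predates the 1.1.1 fix."""
    return parse_version(installed) < FIXED_IN
```

Tuple comparison handles the ordering, so `0.7.0` (the pinned transitive dependency via compodoc) is correctly reported as affected.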
82,543 | 10,257,522,331 | IssuesEvent | 2019-08-21 20:20:21 | pegnet/pegnet | https://api.github.com/repos/pegnet/pegnet | closed | Evaluating PoW to avoid a 51% attack | design discussion pM1 consideration | We have an advantage because the selection of the record that matters is a distributed problem (over all entries submitted). But any miner with 51% of the hash power still has the chance of selecting the values actually used in a block. How do we protect ourselves from this?
An analysis of the final 50 OPRs in the current approach has no method to do much but calculate the agreement between the 50 OPRs. A miner with 26 Entries in the 50 can dictate that result. So how can we make it harder to get 26 Entries?
Change how we reduce the 100 to 200 entries down to 50.
This method works like this:
```
Collect the valid OPRs (all references to OPRs past this step assumes the list of valid OPRs)
Calculate the PoW for all the OPRs.
Take the difficulty of the last OPR submitted, and use it to create a salted hash for all OPRs
Sort by Salted Hash.
Then loop through all the OPRs by pairs
Keep the OPR of the pair that has the highest PoW
if all that is left + what you are to keep == 50, you are done
If at the end of the list (without a pair), and we still have more than 50 OPRs, repeat the loop
with the OPRs we Kept
```
What this does for any set of valid OPRs over 100 is ensure a party submitting 26 OPRs has a much reduced chance of being in the set of 50. A mining pool submitting multiple entries is likely to compete with itself prior to the selection of 50.
To have a good chance to have 26 entries out of 50, 51% is no longer enough. Many of your entries will end up competing with your own entries, ensuring one or the other no longer counts, no matter how high the hash power is for each.
The only entry with 100% certainty to win and go into the 50 is the highest hash power. But the second highest hash power might have been paired with the highest and eliminated. The impact of the algorithm is rather hard for me to calculate. Someone with some statistics might be able to figure it out. I need my stats book to do stats.
| 1.0 | Evaluating PoW to avoid a 51% attack - We have an advantage because the selection of the record that matters is a distributed problem (over all entries submitted). But any miner with 51% of the hash power still has the chance of selecting the values actually used in a block. How do we protect ourselves from this?
An analysis of the final 50 OPRs in the current approach has no method to do much but calculate the agreement between the 50 OPRs. A miner with 26 Entries in the 50 can dictate that result. So how can we make it harder to get 26 Entries?
Change how we reduce the 100 to 200 entries down to 50.
This method works like this:
```
Collect the valid OPRs (all references to OPRs past this step assumes the list of valid OPRs)
Calculate the PoW for all the OPRs.
Take the difficulty of the last OPR submitted, and use it to create a salted hash for all OPRs
Sort by Salted Hash.
Then loop through all the OPRs by pairs
Keep the OPR of the pair that has the highest PoW
if all that is left + what you are to keep == 50, you are done
If at the end of the list (without a pair), and we still have more than 50 OPRs, repeat the loop
with the OPRs we Kept
```
What this does for any set of valid OPRs over 100 is ensure a party submitting 26 OPRs has a much reduced chance of being in the set of 50. A mining pool submitting multiple entries is likely to compete with itself prior to the selection of 50.
To have a good chance to have 26 entries out of 50, 51% is no longer enough. Many of your entries will end up competing with your own entries, ensuring one or the other no longer counts, no matter how high the hash power is for each.
The only entry with 100% certainty to win and go into the 50 is the highest hash power. But the second highest hash power might have been paired with the highest and eliminated. The impact of the algorithm is rather hard for me to calculate. Someone with some statistics might be able to figure it out. I need my stats book to do stats.
| non_main | evaluating pow to avoid a attack we have an advantage because the selection of the record that matters is a distributed problem over all entries submitted but any miner with of the hash power still has the chance of selecting the values actually used in a block how do we protect ourselves from this an analysis of the final oprs in the current approach has no method to do much but calculate the agreement between the oprs a miner with entries in the can dictate that result so how can we make it harder to get entries change how we reduce to entries to this method works like this collect the valid oprs all references to oprs past this step assumes the list of valid oprs calculate the pow for all the oprs take the difficulty of the last opr submitted and use it to create a salted hash for all oprs sort by salted hash then loop through all the oprs by pairs keep the opr of the pair that has the highest pow if all that is left what you are to keep you are done if at the end of the list without a pair and we still have more than oprs repeat the loop with the oprs we kept what this does for any set of valid oprs over is ensure a party submitting oprs has a much reduced chance of being in set of a mining pool submitting multiple entries is likely to compete with themselves prior to the selection of to have a good chance to have entries out of is no longer enough many of your entries will end up competing with your own entries ensuring one or the other no longer counts no matter how high the hash power is for each the only entry with certainty to win and go into the is the highest hash power but the second highest hash power might have been paired with the highest and eliminated the impact of the algorithm is rather hard for me to calculate someone with some statistics might be able to figure it out i need my stats book to do stats | 0 |
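The pseudocode in the record above can be made runnable. A simplified sketch (PoW reduced to a plain integer difficulty, the salt standing in for the last OPR's difficulty, and field/function names chosen for illustration):

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class OPR:
    entry_hash: bytes
    difficulty: int  # proof-of-work difficulty; higher wins a pairing

def reduce_oprs(oprs: list[OPR], salt: bytes, target: int = 50) -> list[OPR]:
    """Pairwise elimination from the pseudocode: salted-hash sort, then each
    adjacent pair keeps only its higher-PoW member until `target` remain."""
    # Salting with data unknown in advance makes the pairings unpredictable.
    oprs = sorted(oprs, key=lambda o: hashlib.sha256(salt + o.entry_hash).digest())
    while len(oprs) > target:
        kept: list[OPR] = []
        i = 0
        done = False
        while i + 1 < len(oprs):
            a, b = oprs[i], oprs[i + 1]
            kept.append(a if a.difficulty >= b.difficulty else b)
            i += 2
            # "if all that is left + what you are to keep == 50, you are done"
            if len(kept) + (len(oprs) - i) == target:
                kept.extend(oprs[i:])
                done = True
                break
        if not done and i < len(oprs):  # odd OPR at the end had no pair
            kept.append(oprs[i])
        oprs = kept
    return oprs
```

Because the salted sort scrambles adjacency, a large miner's entries often pair against each other and knock one another out, so 51% of the hash power no longer guarantees 26 of the final 50; only the single highest-difficulty entry is certain to survive every pass.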
4,252 | 21,089,646,663 | IssuesEvent | 2022-04-04 02:22:52 | aws/aws-sam-cli-app-templates | https://api.github.com/repos/aws/aws-sam-cli-app-templates | closed | Please add a working Rust app template | help wanted maintainer/need-followup | Mirroring issue https://github.com/aws/aws-lambda-builders/issues/167#issuecomment-698100365 here.
The official works and general support are not exactly helpful at the time of writing this, unfortunately, so hopefully those pointers help for the time being? :/ | True | Please add a working Rust app template - Mirroring issue https://github.com/aws/aws-lambda-builders/issues/167#issuecomment-698100365 here.
The official works and general support are not exactly helpful at the time of writing this, unfortunately, so hopefully those pointers help for the time being? :/ | main | please add a working rust app template mirroring issue here the official works and general support are not exactly helpful at the time of writing this unfortunately so hopefully those pointers help for the time being | 1 |
471 | 3,703,125,150 | IssuesEvent | 2016-02-29 19:16:40 | pychess/pychess | https://api.github.com/repos/pychess/pychess | closed | pylint warnings/errors | Maintainability task | Original [issue 523](https://code.google.com/p/pychess/issues/detail?id=523) reported by [thomas.spura](https://code.google.com/u/110909423880295359716/) 2010-02-04
This should be a pylint work in progress issue:
When running 'pylint pychess', this file is rated as 7.69/10, which is
pretty good. There are only some Convention warnings.
'pylint Main.py' is a bit funny ;)
Your code has been rated at -23.17/10
There are 35 errors and 275 warnings and some other conventions and
refactor warnings.
I use pylint quite often and it detects some programming errors, so the
pylint output should be reduced, to better decide if pychess is error-free.
After just a few modifications in imports, this will become:
Your code has been rated at -8.37/10
At least a good starting point ;) | True | pylint warnings/errors - Original [issue 523](https://code.google.com/p/pychess/issues/detail?id=523) reported by [thomas.spura](https://code.google.com/u/110909423880295359716/) 2010-02-04
This should be a pylint work in progress issue:
When running 'pylint pychess', this file is rated as 7.69/10, which is
pretty good. There are only some Convention warnings.
'pylint Main.py' is a bit funny ;)
Your code has been rated at -23.17/10
There are 35 errors and 275 warnings and some other conventions and
refactor warnings.
I use pylint quite often and it detects some programmi errors, so the
pylint output should be reduce, to better decide, if pychess is error-free.
After just a few modifications in imports, this will become:
Your code has been rated at -8.37/10
At least a good starting point ;) | main | pylint warnings errors original reported by this should be a pylint work in progress issue when running pylint pychess this file is rated as which is pretty good there are only some convention warnings pylint main py is a bit funny your code has been rated at there are errors and warnings and some other conventions and refactor warnings i use pylint quite often and it detects some programmi errors so the pylint output should be reduce to better decide if pychess is error free after just a few modifications in imports this will become your code has been rated at at least a good starting point | 1 |
400,822 | 27,301,874,598 | IssuesEvent | 2023-02-24 03:09:54 | VIP-LES/EosPayload | https://api.github.com/repos/VIP-LES/EosPayload | closed | Remote OrchEOStrator Debug Technical Specification | documentation | Exact functionality
How it is implemented?
1/2 to 1 page | 1.0 | Remote OrchEOStrator Debug Technical Specification - Exact functionality
How it is implemented?
1/2 to 1 page | non_main | remote orcheostrator debug technical specification exact functionality how it is implemented to page | 0 |
547,750 | 16,046,239,302 | IssuesEvent | 2021-04-22 13:56:55 | gnosis/ido-ux | https://api.github.com/repos/gnosis/ido-ux | closed | Create a Dune Analytics list for orders | high priority | **Create a Dune Analytics list that shows all orders**
The first iteration should have the following columns:
| Price | Amount | Sum | My Size %
- Price -> determines the price level of the bids in one row
- Amount -> Shows the amount that is being bid at that given row
- Sum -> Shows the cumulative amount at that bid price, adding the amount of all higher bids form previous rows
- My Size % (maybe not possible for Dune Analytics) -> Shows the % of the `Amount` that the bid of the user has
**Possible button location**
Marked with the red rectangle:
<img width="1228" alt="Screenshot 2021-03-25 at 15 11 20" src="https://user-images.githubusercontent.com/26544214/112486911-99c1ad00-8d7c-11eb-9c5f-cff85548cef7.png">
**References to orderbook lists**
A few images for inspiration.
Binance:
<img width="269" alt="Screenshot 2021-03-25 at 14 55 33" src="https://user-images.githubusercontent.com/26544214/112487068-bc53c600-8d7c-11eb-9c95-e345952e62bf.png">
dYdX:
<img width="291" alt="Screenshot 2021-03-25 at 15 14 37" src="https://user-images.githubusercontent.com/26544214/112487240-db525800-8d7c-11eb-878f-62e4c86742dc.png">
@cmagan Let me know if you think this would be enough.
| 1.0 | Create a Dune Analytics list for orders - **Create a Dune Analytics list that shows all orders**
The first iteration should have the following columns:
| Price | Amount | Sum | My Size %
- Price -> determines the price level of the bids in one row
- Amount -> Shows the amount that is being bid at that given row
- Sum -> Shows the cumulative amount at that bid price, adding the amount of all higher bids form previous rows
- My Size % (maybe not possible for Dune Analytics) -> Shows the % of the `Amount` that the bid of the user has
**Possible button location**
Marked with the red rectangle:
<img width="1228" alt="Screenshot 2021-03-25 at 15 11 20" src="https://user-images.githubusercontent.com/26544214/112486911-99c1ad00-8d7c-11eb-9c5f-cff85548cef7.png">
**References to orderbook lists**
A few images for inspiration.
Binance:
<img width="269" alt="Screenshot 2021-03-25 at 14 55 33" src="https://user-images.githubusercontent.com/26544214/112487068-bc53c600-8d7c-11eb-9c95-e345952e62bf.png">
dYdX:
<img width="291" alt="Screenshot 2021-03-25 at 15 14 37" src="https://user-images.githubusercontent.com/26544214/112487240-db525800-8d7c-11eb-878f-62e4c86742dc.png">
@cmagan Let me know if you think this would be enough.
| non_main | create a dune analytics list for orders create a dune analytics list that shows all orders the first iteration should have the following columns price amount sum my size price determines the price level of the bids in one row amount shows the amount that is being bid at that given row sum shows the cumulative amount at that bid price adding the amount of all higher bids form previous rows my size maybe not possible for dune analytics shows the of the amount that the bid of the user has possible button location marked with the red rectangle img width alt screenshot at src references to orderbook lists a few images for inspiration binance img width alt screenshot at src dydx img width alt screenshot at src cmagan let me know if you think this would be enough | 0 |
2,424 | 8,607,482,535 | IssuesEvent | 2018-11-17 22:57:12 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | opened | Cleanup the multiple SLF4j bindings | maintainability maven task | **Describe the bug**
SLF4J API is designed to bind with one and only one underlying logging framework at a time. If more than one binding is present on the class path, SLF4J will emit a warning, listing the location of those bindings.
**To Reproduce**
Steps to reproduce the behavior:
1. Run ./refine
**Current Results**
```
22:40:43.060 [ refine_server] Creating new workspace directory /home/thad/.local/share/openrefine (383ms)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/thad/OpenRefine/server/target/lib/slf4j-log4j12-1.7.18.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/thad/OpenRefine/main/webapp/WEB-INF/lib/slf4j-log4j12-1.7.18.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
22:40:43.095 [ refine] Starting OpenRefine 3.1-beta [TRUNK]... (35ms)
```
**Expected behavior**
A clear and concise description of what you expected to happen or to show.
**Desktop (please complete the following information):**
- OS: Ubuntu 18.04
- Browser Version: Firefox latest
- JRE or JDK Version: OpenJDK 8
**OpenRefine (please complete the following information):**
- Version: Trunk (master)
**Additional Context**
https://www.slf4j.org/codes.html#multiple_bindings
| True | Cleanup the multiple SLF4j bindings - **Describe the bug**
SLF4J API is designed to bind with one and only one underlying logging framework at a time. If more than one binding is present on the class path, SLF4J will emit a warning, listing the location of those bindings.
**To Reproduce**
Steps to reproduce the behavior:
1. Run ./refine
**Current Results**
```
22:40:43.060 [ refine_server] Creating new workspace directory /home/thad/.local/share/openrefine (383ms)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/thad/OpenRefine/server/target/lib/slf4j-log4j12-1.7.18.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/thad/OpenRefine/main/webapp/WEB-INF/lib/slf4j-log4j12-1.7.18.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
22:40:43.095 [ refine] Starting OpenRefine 3.1-beta [TRUNK]... (35ms)
```
**Expected behavior**
A clear and concise description of what you expected to happen or to show.
**Desktop (please complete the following information):**
- OS: Ubuntu 18.04
- Browser Version: Firefox latest
- JRE or JDK Version: OpenJDK 8
**OpenRefine (please complete the following information):**
- Version: Trunk (master)
**Additional Context**
https://www.slf4j.org/codes.html#multiple_bindings
| main | cleanup the multiple bindings describe the bug api is designed to bind with one and only one underlying logging framework at a time if more than one binding is present on the class path will emit a warning listing the location of those bindings to reproduce steps to reproduce the behavior run refine current results creating new workspace directory home thad local share openrefine class path contains multiple bindings found binding in found binding in see for an explanation actual binding is of type starting openrefine beta expected behavior a clear and concise description of what you expected to happen or to show desktop please complete the following information os ubuntu browser version firefox latest jre or jdk version openjdk openrefine please complete the following information version trunk master additional context | 1 |
223,896 | 7,463,255,868 | IssuesEvent | 2018-04-01 02:19:32 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | cilium bpf policy get doesn't print entries whose labels are not resolved | area/cli help-wanted kind/bug kind/microtask priority/1.0-blocker | In issue #3314 , we observe that the regular `cilium bpf policy get <EPID>` output only prints entries whose labels can be resolved, which misleads users to believe that there are only a few entries in the table.
We should print empty labels or "cannot be resolved" for each identity that we cannot resolve the labels for (perhaps with the underlying identity). | 1.0 | cilium bpf policy get doesn't print entries whose labels are not resolved - In issue #3314 , we observe that the regular `cilium bpf policy get <EPID>` output only prints entries whose labels can be resolved, which misleads users to believe that there are only a few entries in the table.
We should print empty labels or "cannot be resolved" for each identity that we cannot resolve the labels for (perhaps with the underlying identity). | non_main | cilium bpf policy get doesn t print entries whose labels are not resolved in issue we observe that the regular cilium bpf policy get output only prints entries whose labels can be resolved which misleads users to believe that there are only a few entries in the table we should print empty labels or cannot be resolved for each identity that we cannot resolve the labels for perhaps with the underlying identity | 0 |
35,858 | 14,890,570,231 | IssuesEvent | 2021-01-20 23:20:41 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | WebApp:[webapp ssh] - does not work on WSL | Service Attention Web Apps | ## Describe the bug
**Command Name**
`az webapp ssh`
**Errors:**
```
webapp ssh is only supported on linux and mac
```
Here is the Ubuntu 20.04 (WSL) using the [Azure CLI installed on Linux with apt](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-apt):

## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az webapp ssh --resource-group {} --name {}`
## Expected Behavior
## Environment Summary
```
Ubuntu 20.04 (WSL on Windows-10-10.0.19041-SP0)
Python 3.6.8
Installer: `apt` package azure-cli 2.15.0 *
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
These are issues reporting the `webapp ssh` does not work on Windows:
- https://github.com/Azure/azure-cli/issues/15729
- https://github.com/Azure/azure-cli/issues/15072
- https://github.com/Azure/azure-cli/issues/12940
- https://github.com/Azure/azure-cli/issues/12844
but apparently it does not work on Linux either, so there must be something broken in general.
| 1.0 | WebApp:[webapp ssh] - does not work on WSL - ## Describe the bug
**Command Name**
`az webapp ssh`
**Errors:**
```
webapp ssh is only supported on linux and mac
```
Here is the Ubuntu 20.04 (WSL) using the [Azure CLI installed on Linux with apt](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-apt):

## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az webapp ssh --resource-group {} --name {}`
## Expected Behavior
## Environment Summary
```
Ubuntu 20.04 (WSL on Windows-10-10.0.19041-SP0)
Python 3.6.8
Installer: `apt` package azure-cli 2.15.0 *
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
These are issues reporting the `webapp ssh` does not work on Windows:
- https://github.com/Azure/azure-cli/issues/15729
- https://github.com/Azure/azure-cli/issues/15072
- https://github.com/Azure/azure-cli/issues/12940
- https://github.com/Azure/azure-cli/issues/12844
but apparently it does not work on Linux either, so there must be something broken in general.
| non_main | webapp does not work on wsl describe the bug command name az webapp ssh errors webapp ssh is only supported on linux and mac here is the ubuntu wsl using the to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az webapp ssh resource group name expected behavior environment summary ubuntu wsl on windows python installer apt package azure cli additional context these are issues reporting the webapp ssh does not work on windows but apparently it does not work on linux either so there must be something broken in general | 0 |
268,725 | 23,391,985,600 | IssuesEvent | 2022-08-11 18:47:25 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Test testplan | testplan-item | Refs: https://github.com/microsoft/vscode/issues/157940
- [ ] Windows
- [ ] macOS
- [ ] Linux
Complexity: 2
---
Blank test plan item to test issue bot | 1.0 | Test testplan - Refs: https://github.com/microsoft/vscode/issues/157940
- [ ] Windows
- [ ] macOS
- [ ] Linux
Complexity: 2
---
Blank test plan item to test issue bot | non_main | test testplan refs windows macos linux complexity blank test plan item to test issue bot | 0 |
2,410 | 8,561,255,772 | IssuesEvent | 2018-11-09 05:56:44 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | mysql_user support for FUNCTION and PROCEDURE privileges | affects_2.2 feature module needs_maintainer support:community | From @mklassen on 2016-11-07T19:10:33Z
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
ansible 2.2.0.0
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Currently only `TABLE` privileges can be manipulated with `mysql_user`
Granting execute privileges on a mysql `FUNCTION` requires an SQL statement of the form
```
GRANT EXECUTE ON FUNCTION dbname.function_name TO 'user';
```
Unfortunately if the `FUNCTION` keyword is included in `mysql_user` modules's `priv` parameter it is not recognized as a valid privilege level.
Object types of `FUNCTION` and `PROCEDURE` are supported by `mysql` (http://dev.mysql.com/doc/refman/5.7/en/grant.html) and it would be nice if the `priv` parameter supported specifying 'object_type', so that task like the following could be executed
```
- mysql_user:
user: db_user
priv: FUNCTION dbname.function_name:EXECUTE
state: present
```
Copied from original issue: ansible/ansible-modules-core#5518
| True | mysql_user support for FUNCTION and PROCEDURE privileges - From @mklassen on 2016-11-07T19:10:33Z
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
ansible 2.2.0.0
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Currently only `TABLE` privileges can be manipulated with `mysql_user`
Granting execute privileges on a mysql `FUNCTION` requires an SQL statement of the form
```
GRANT EXECUTE ON FUNCTION dbname.function_name TO 'user';
```
Unfortunately if the `FUNCTION` keyword is included in `mysql_user` modules's `priv` parameter it is not recognized as a valid privilege level.
Object types of `FUNCTION` and `PROCEDURE` are supported by `mysql` (http://dev.mysql.com/doc/refman/5.7/en/grant.html) and it would be nice if the `priv` parameter supported specifying 'object_type', so that task like the following could be executed
```
- mysql_user:
user: db_user
priv: FUNCTION dbname.function_name:EXECUTE
state: present
```
Copied from original issue: ansible/ansible-modules-core#5518
| main | mysql user support for function and procedure privileges from mklassen on issue type feature idea component name mysql user ansible version ansible configuration n a os environment n a summary currently only table privileges can be manipulated with mysql user granting execute privileges on a mysql function requires an sql statement of the form grant execute on function dbname function name to user unfortunately if the function keyword is included in mysql user modules s priv parameter it is not recognized as a valid privilege level object types of function and procedure are supported by mysql and it would be nice if the priv parameter supported specifying object type so that task like the following could be executed mysql user user db user priv function dbname function name execute state present copied from original issue ansible ansible modules core | 1 |
3,299 | 12,696,702,729 | IssuesEvent | 2020-06-22 10:29:44 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Terraform module backend config not working | affects_2.9 bug cloud collection collection:community.general module needs_collection_redirect needs_maintainer needs_triage support:community | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Terraform module backend config doesn't work
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
terraform module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = small
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = False
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_BECOME_METHOD(/etc/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['profile_tasks']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 80
DEFAULT_LIBVIRT_LXC_NOSECLABEL(/etc/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /home/nirr/.ansible/ansible.log
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/etc/ansible/ansible.cfg) = 23
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = devops
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/home/nirr/roles', '/etc/ansib>
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = smart
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 30
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
running on localhost (Fedora 32) terraform installed is Terraform v0.12.26
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
connection: local
become: no
tasks:
- name: Write the vars to tfvar file
copy:
content: |
{{ terraform_vars | to_nice_json }}
dest: /tmp/tfvars.json
vars:
terraform_vars:
clusterName: "{{ customer }}.k8s.local"
clusterVpc: "{{ vpc.id }}"
region: "{{ region }}"
subnets: |
{{ subnets }}
- name: Add extra routing using TF
terraform:
project_path: "extra_terraform/"
state: "present"
force_init: true
# plan_file: "./extra_tf"
backend_config:
region: "us-east-1"
bucket: "{{ bucket_name }}"
key: "{{ customer }}/terraform/state.tfstate"
variables_file: /tmp/tfvars.json
register: subnets_terraform_output
```
Module gets the config OK (see output in actual result).
...
...
...
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Running the playbook should put the state in s3 and shouldn't create local terraform.tfstate
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible is running terraform without backed config (local terraform.tfstate is created)
<!--- Paste verbatim command output between quotes -->
relevant part is:
```paste below
ok: [localhost] => {
"changed": false,
"command": "/usr/bin/terraform apply -no-color -input=false -auto-approve=true -lock=true /tmp/tmpri8wpq0a.tfplan",
"invocation": {
"module_args": {
"backend_config": {
"bucket": "BUCKET_NAME_IS_OK(REDUCTED)",
"key": "nirr-try/terraform/state.tfstate",
"region": "us-east-1"
},
"binary_path": null,
"force_init": true,
"lock": true,
"lock_timeout": null,
"plan_file": null,
"project_path": "extra_terraform/",
"purge_workspace": false,
"state": "present",
"state_file": null,
"targets": [],
"variables": null,
"variables_file": "/tmp/tfvars.json",
"workspace": "default"
}
},
"outputs": {},
"state": "present",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": [],
"workspace": "default"
}
```
| True | Terraform module backend config not working - <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Terraform module backend config doesn't work
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
terraform module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = small
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = False
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_BECOME_METHOD(/etc/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['profile_tasks']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 80
DEFAULT_LIBVIRT_LXC_NOSECLABEL(/etc/ansible/ansible.cfg) = True
DEFAULT_LOAD_CALLBACK_PLUGINS(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /home/nirr/.ansible/ansible.log
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/etc/ansible/ansible.cfg) = 23
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = devops
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/home/nirr/roles', '/etc/ansib>
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = smart
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 30
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
running on localhost (Fedora 32) terraform installed is Terraform v0.12.26
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
connection: local
become: no
tasks:
- name: Write the vars to tfvar file
copy:
content: |
{{ terraform_vars | to_nice_json }}
dest: /tmp/tfvars.json
vars:
terraform_vars:
clusterName: "{{ customer }}.k8s.local"
clusterVpc: "{{ vpc.id }}"
region: "{{ region }}"
subnets: |
{{ subnets }}
- name: Add extra routing using TF
terraform:
project_path: "extra_terraform/"
state: "present"
force_init: true
# plan_file: "./extra_tf"
backend_config:
region: "us-east-1"
bucket: "{{ bucket_name }}"
key: "{{ customer }}/terraform/state.tfstate"
variables_file: /tmp/tfvars.json
register: subnets_terraform_output
```
Module gets the config OK (see output in actual result).
...
...
...
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Running the playbook should put the state in s3 and shouldn't create local terraform.tfstate
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible is running terraform without backed config (local terraform.tfstate is created)
<!--- Paste verbatim command output between quotes -->
relevant part is:
```paste below
ok: [localhost] => {
"changed": false,
"command": "/usr/bin/terraform apply -no-color -input=false -auto-approve=true -lock=true /tmp/tmpri8wpq0a.tfplan",
"invocation": {
"module_args": {
"backend_config": {
"bucket": "BUCKET_NAME_IS_OK(REDUCTED)",
"key": "nirr-try/terraform/state.tfstate",
"region": "us-east-1"
},
"binary_path": null,
"force_init": true,
"lock": true,
"lock_timeout": null,
"plan_file": null,
"project_path": "extra_terraform/",
"purge_workspace": false,
"state": "present",
"state_file": null,
"targets": [],
"variables": null,
"variables_file": "/tmp/tfvars.json",
"workspace": "default"
}
},
"outputs": {},
"state": "present",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": [],
"workspace": "default"
}
```
 | main | terraform module backend config not working summary terraform module backend config doesn t work issue type bug report component name terraform module ansible version paste below ansible configuration paste below ansible cow selection etc ansible ansible cfg small ansible nocows etc ansible ansible cfg false ansible pipelining etc ansible ansible cfg true default become method etc ansible ansible cfg sudo default become user etc ansible ansible cfg root default callback whitelist etc ansible ansible cfg default forks etc ansible ansible cfg default libvirt lxc noseclabel etc ansible ansible cfg true default load callback plugins etc ansible ansible cfg true default log path etc ansible ansible cfg home nirr ansible ansible log default poll interval etc ansible ansible cfg default remote port etc ansible ansible cfg default remote user etc ansible ansible cfg devops default roles path etc ansible ansible cfg home nirr roles etc ansib default scp if ssh etc ansible ansible cfg smart default timeout etc ansible ansible cfg host key checking etc ansible ansible cfg false retry files enabled etc ansible ansible cfg false os environment running on localhost fedora terraform installed is terraform steps to reproduce yaml hosts localhost connection local become no tasks name write the vars to tfvar file copy content terraform vars to nice json dest tmp tfvars json vars terraform vars clustername customer local clustervpc vpc id region region subnets subnets name add extra routing using tf terraform project path extra terraform state present force init true plan file extra tf backend config region us east bucket bucket name key customer terraform state tfstate variables file tmp tfvars json register subnets terraform output module gets the config ok see output in actual result expected results running the playbook should put the state in and shouldn t create local terraform tfstate actual results ansible is running terraform without backed config local terraform tfstate is created relevant part is paste below ok changed false command usr bin terraform apply no color input false auto approve true lock true tmp tfplan invocation module args backend config bucket bucket name is ok reducted key nirr try terraform state tfstate region us east binary path null force init true lock true lock timeout null plan file null project path extra terraform purge workspace false state present state file null targets variables null variables file tmp tfvars json workspace default outputs state present stderr stderr lines stdout stdout lines workspace default | 1
21,657 | 10,676,150,210 | IssuesEvent | 2019-10-21 13:14:09 | repo-helper/badgeboard | https://api.github.com/repos/repo-helper/badgeboard | opened | CVE-2015-8857 (High) detected in uglify-js-2.2.5.tgz | security vulnerability | ## CVE-2015-8857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.2.5.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/badgeboard/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/badgeboard/node_modules/transformers/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- jade-1.11.0.tgz (Root Library)
- transformers-2.1.0.tgz
- :x: **uglify-js-2.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/repo-helper/badgeboard/commit/c3313c4fcd75653eeaf5e272b1a0741f10358953">c3313c4fcd75653eeaf5e272b1a0741f10358953</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8857>CVE-2015-8857</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2018-12-15</p>
<p>Fix Resolution: v2.4.24</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
34,418 | 4,918,458,058 | IssuesEvent | 2016-11-24 08:59:43 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/wal: TestOpenAtUncommittedIndex failed under stress | Robot test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/b54490b2cf70c155ec2b7af5133276ffe24dc02c
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=true
TAGS=stress
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=58266&tab=buildLog
```
wal_test.go:461: mkdir /tmp/waltest279610478.tmp: no space left on device
```
174,468 | 6,540,411,157 | IssuesEvent | 2017-09-01 15:20:02 | stevekrouse/WoofJS | https://api.github.com/repos/stevekrouse/WoofJS | closed | better sorting of search results in docs | good for beginners optimization priority woofjs.com | For example, if you search for "if", the block for if is low on the list:

Creating better tags on docs items and [tweaking the weights in the search code](https://github.com/stevekrouse/WoofJS/blob/master/docs/index.html#L1689) are two places to start on this one.
136,333 | 5,280,622,339 | IssuesEvent | 2017-02-07 14:40:54 | YaleSTC/vesta | https://api.github.com/repos/YaleSTC/vesta | opened | Add link for admins to edit drawless groups (or groups in general?) | complexity: 1 priority: 5 type: bug | There is currently no link 😞
167,995 | 14,135,430,232 | IssuesEvent | 2020-11-10 01:41:28 | jackdomleo7/Checka11y.css | https://api.github.com/repos/jackdomleo7/Checka11y.css | closed | Create a website | documentation project enhancement | <!-- 🧡 Thanks for your time to make Checka11y.css better with your feedbacks 🧡 -->
<!-- If this issue is to check for a new a11y feature or modify an existing a11y feature check, please label it as `a11y feature` -->
<!-- If this issue is to enhance anything else in the project (I.e. linting, dependencies, README, architecture, etc), please label it as `project enhancement` -->
### Describe the new a11y feature or project enhancement
<!-- A clear and concise description of what the problem is. E.g. A common a11y mistake is... -->
The `gh-pages` branch is reserved for the documentation/live demo website for Checka11y.css. We just need a website creating, with:
- Usage docs
- Links to:
- Product Hunt
- [GitHub repo](https://github.com/jackdomleo7/Checka11y.css)
- npm
- Yarn
- [CHANGELOG](https://github.com/jackdomleo7/Checka11y.css/releases)
- Example page triggering as many Checka11y errors and warnings as possible
- Badges
- GitHub stars
- Version
- JsDelivr hits
- npm downloads
### Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. Adding some code examples would be neat! -->
A website! 🔥
### Link(s)
<!-- Please provide any relevant links used in your investigation in raising this issue. -->
<!-- Try linking to trusted sites such as w3.org, developer.mozilla.org, a11yproject.com, inclusive-components.design, etc -->
- https://checka11y.jackdomleo.dev
412,137 | 12,035,708,988 | IssuesEvent | 2020-04-13 18:23:40 | minio/minio-go | https://api.github.com/repos/minio/minio-go | closed | NoSuchBucket Error on backblaze gateway | priority: low | <!--- Provide a general summary of the issue in the Title above -->
Hey, I don't know if this is expected behaviour or a bug. When putting an object using minioClient.PutObject with ContentLength set to -1 (unknown size), we get a NoSuchBucket error. This only happens with ContentLength -1.
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Putting Object with unknown File-size
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Error "NoSuchBucket"
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Set up Minio with Backblaze B2 as backend
2. Use minioClient.PutObject with ContentLength to -1
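For background on why ContentLength -1 is special (a general sketch, not the minio-go internals): with an unknown total size the client cannot issue a single PUT, so it has to read the stream in fixed-size parts and upload them as a multipart object, which the gateway must then translate to the B2 backend. A minimal, library-free Python illustration of that chunking loop (part size shrunk for readability; real S3 multipart parts are at least 5 MiB):

```python
import io

def split_unknown_length_stream(stream, part_size):
    """Read a stream of unknown total length in fixed-size parts,
    as a multipart upload path must when ContentLength is -1."""
    parts = []
    while True:
        chunk = stream.read(part_size)
        if not chunk:  # EOF is only discovered by reading
            break
        parts.append(chunk)
    return parts

parts = split_unknown_length_stream(io.BytesIO(b"hello world"), part_size=5)
print(parts)       # [b'hello', b' worl', b'd']
print(len(parts))  # 3
```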
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I am trying to put objects from different sources on stream with compression (size unknown)
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Server: MinIO/RELEASE.2020-03-19T21-49-00Z
* minio-go v6.0.14
154,022 | 19,710,762,851 | IssuesEvent | 2022-01-13 04:52:48 | ChoeMinji/react-17.0.2 | https://api.github.com/repos/ChoeMinji/react-17.0.2 | opened | CVE-2020-7693 (Medium) detected in sockjs-0.3.18.tgz, sockjs-0.3.19.tgz | security vulnerability | ## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>sockjs-0.3.18.tgz</b>, <b>sockjs-0.3.19.tgz</b></p></summary>
<p>
<details><summary><b>sockjs-0.3.18.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.18.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.18.tgz</a></p>
<p>Path to dependency file: /fixtures/fiber-debugger/package.json</p>
<p>Path to vulnerable library: /fixtures/fiber-debugger/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.4.tgz (Root Library)
- webpack-dev-server-2.9.4.tgz
- :x: **sockjs-0.3.18.tgz** (Vulnerable Library)
</details>
<details><summary><b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.2.1.tgz (Root Library)
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react-17.0.2/commit/4669645897ed4ebcd4ee037f4dabb509ed4754c7">4669645897ed4ebcd4ee037f4dabb509ed4754c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
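The fix for this CVE tightens how the Upgrade header is handled. As a hypothetical Python analogue (not sockjs's actual code), the defensive pattern looks like this: parse headers case-insensitively, tolerate missing ones, and reject an unexpected `Upgrade: websocket` instead of crashing.

```python
def is_websocket_upgrade(headers):
    """Decide whether a request is a well-formed WebSocket upgrade.

    Lookup is case-insensitive and missing headers are tolerated, so an
    unexpected `Upgrade: websocket` on a non-WebSocket endpoint can be
    rejected cleanly instead of taking the server down.
    """
    h = {k.lower(): v for k, v in headers.items()}
    upgrade = h.get("upgrade", "").strip().lower()
    connection = h.get("connection", "").lower()
    return upgrade == "websocket" and "upgrade" in connection

print(is_websocket_upgrade({"Connection": "Upgrade", "Upgrade": "websocket"}))  # True
print(is_websocket_upgrade({"Upgrade": "websocket"}))  # False
print(is_websocket_upgrade({}))                        # False, no crash
```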
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
150,804 | 11,985,379,676 | IssuesEvent | 2020-04-07 17:24:07 | calibra/cargo-guppy | https://api.github.com/repos/calibra/cargo-guppy | opened | Implement continuous verification of guppy properties | better-testing | We have [code to test various properties](https://github.com/calibra/cargo-guppy/blob/master/guppy/src/unit_tests/dep_helpers.rs) that we expect to be true for every dependency graph modeled by guppy. At the moment this code is buried in our test suite and only used for a set of known fixtures. It would be cool to write a tool that can continuously verify a repository or crate on crates.io. Here's how it might be done:
* Extract out the test code into a separate crate.
* Use that crate as the basis for a new CLI tool that verifies a workspace (similar to and extending #83).
* Figure out how to make this tool run continuously across many repos and over time.
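As a toy illustration of the sort of invariant such a continuous verifier might check (this is not one of guppy's actual tested properties), consider closure of the dependency graph: every dependency a package names should itself be present as a node.

```python
def missing_dependencies(graph):
    """Return (package, dep) pairs where dep is named but absent
    from the graph; an empty list means the closure property holds."""
    missing = []
    for pkg, deps in graph.items():
        for dep in deps:
            if dep not in graph:
                missing.append((pkg, dep))
    return missing

good = {"app": ["libA"], "libA": ["libB"], "libB": []}
bad = {"app": ["libA"], "libA": ["libC"]}
print(missing_dependencies(good))  # []
print(missing_dependencies(bad))   # [('libA', 'libC')]
```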
410,685 | 11,995,379,015 | IssuesEvent | 2020-04-08 15:07:18 | StudioTBA/CoronaIO | https://api.github.com/repos/StudioTBA/CoronaIO | closed | State machine for zombies | AI Character development Priority: High | **Is your feature request related to a problem? Please describe.**
For zombies to act independently, they should have a state machine.
**Describe the solution you would like**
Implement the FSM class in the project for the zombies.
1,626 | 6,572,656,147 | IssuesEvent | 2017-09-11 04:07:56 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Cask installs that require a password fail | affects_2.2 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
homebrew_cask
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
inventory = hosts/hosts
vault_password_file = ~/.ansible/vault/password.txt
retry_files_enabled = False
retry_files_save_path = ~/.ansible/retry
remote_user = root
nocows = 0
[ssh_connection]
pipelining = True
```
##### OS / ENVIRONMENT
macOS Sierra 10.12.1 (16B2555)
One mac (controller) controlling another mac (minion) using normal ssh connection with ssh key.
##### SUMMARY
Some homebrew casks ask for a password during the installation process. It appears the ansible module does not provide a way to collect a password and pass it to the homebrew cask installer.
Currently, these casks that require a password need to be manually installed using 'brew cask install <some item> at the command line. Ansible cannot install them.
Found two casks so far that failed to install with Ansible, but work fine when done manually: microsoft-office and wireshark
##### STEPS TO REPRODUCE
```
- homebrew_cask: name=wireshark state=present
```
```
- homebrew_cask: name=microsoft-office state=present
```
##### EXPECTED RESULTS
wireshark cask and microsoft-office casks are installed
##### ACTUAL RESULTS
```
failed: [10.0.1.119] (item=wireshark) => {"failed": true, "item": "wireshark", "msg": "Error: Command failed to execute!\n\n==> Failed command:\n/usr/bin/sudo -E -- /usr/sbin/installer -pkg #<Pathname:/usr/local/Caskroom/wireshark/2.2.1/Wireshark 2.2.1 Intel 64.pkg> -target /\n\n==> Standard Output of failed command:\n\n\n==> Standard Error of failed command:\nsudo: no tty present and no askpass program specified\n\n\n==> Exit status of failed command:\n#<Process::Status: pid 29235 exit 1>"}
```
```
failed: [10.0.1.119] (item=microsoft-office) => {"failed": true, "item": "microsoft-office", "msg": "Error: Command failed to execute!\n\n==> Failed command:\n/usr/bin/sudo -E -- /usr/sbin/installer -pkg #<Pathname:/usr/local/Caskroom/microsoft-office/15.27.0_161010/Microsoft_Office_2016_15.27.0_161010_Installer.pkg> -target /\n\n==> Standard Output of failed command:\n\n\n==> Standard Error of failed command:\nsudo: no tty present and no askpass program specified\n\n\n==> Exit status of failed command:\n#<Process::Status: pid 12059 exit 1>"}
```
```
1,680 | 6,574,141,460 | IssuesEvent | 2017-09-11 11:40:25 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | template module creates an acutal new line when reading (m?)\n | affects_2.0 bug_report waiting_on_maintainer |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
template
##### ANSIBLE VERSION
2.0 and higher
##### CONFIGURATION
[ssh_connection]
control_path = %(directory)s/%%C
##### OS / ENVIRONMENT
Mac OS X 10.11.6
Centos 6.x, 7.x
##### SUMMARY
In the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing "(?m)\n" . The output generated by the template module in versions 2.0 and later, treats the \n as actual line break. Where as versions up to 1.9.6 retains the literal "(?m)\n" without replacing the \n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x.
Any way we can work around this issue? Thank you for your help.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Our execution flow is probably not the nicest - we want to reengineer it soon. Basic steps:
1. Run a shell script with ansible-playbook command that pass in an env variable with (?m)\n literal.
2. Playbook calls a main yaml file and assigns shell environment var to a included task yaml file.
3. The task yaml file invokes the template module.
In the snippet below I stripped out other lines/vars for clarity.
<!--- Paste example playbooks or commands between quotes below -->
main shell
```
set GROK_PATTERN_GENERAL_ERROR_PG="%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"
ansible-playbook -i ../common/host.inventory \
-${VERBOSE} \
t.yml \
${CHECK_ONLY} \
--extra-vars "hosts='${HOST}'
xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}'
"
```
t.yml
```
---
- hosts: 127.0.0.1
connection: local
tasks:
- include_vars: ../common/defaults/main.yml
- name: generate logstash kafka logscan filter config file
include: tasks/t.yml
vars:
logstash_grok_general_error: "{{xlogstash_grok_general_error}}"
```
tasks/t.yml
```
---
- name: generate logstash kafka logscan filter config file
template: src=../common/templates/my.conf.j2
dest="./500-filter.conf"
```
my.conf.j2
```
grok {
break_on_match => "true"
match => [
"message", "{{logstash_grok_general_error}}"
]
}
```
<!--- You can also paste gist.github.com links for larger files -->
Note the (?m)\n are still on the same line.
##### EXPECTED RESULTS
```
grok {
break_on_match => "true"
match => [
"message", "%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"
]
}
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Note (?m)\n now has the \n as actual line break.
<!--- Paste verbatim command output between quotes below -->
```
grok {
break_on_match => "true"
match => [
"message", "%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)
%{USER:logerror}%{GREEDYDATA})"
]
}
```
| True | template module creates an acutal new line when reading (m?)\n -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
template
##### ANSIBLE VERSION
2.0 and higher
##### CONFIGURATION
[ssh_connection]
control_path = %(directory)s/%%C
##### OS / ENVIRONMENT
Mac OS X 10.11.6
Centos 6.x, 7.x
##### SUMMARY
In the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing "(?m)\n" . The output generated by the template module in versions 2.0 and later, treats the \n as actual line break. Where as versions up to 1.9.6 retains the literal "(?m)\n" without replacing the \n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x.
Any way we can work around this issue? Thank you for your help.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Our execution flow is probably not the nicest - we want to reengineer it soon. Basic steps:
1. Run a shell script with ansible-playbook command that pass in an env variable with (?m)\n literal.
2. Playbook calls a main yaml file and assigns shell environment var to a included task yaml file.
3. The task yaml file invokes the template module.
In the snippet below I stripped out other lines/vars for clarity.
<!--- Paste example playbooks or commands between quotes below -->
main shell
```
set GROK_PATTERN_GENERAL_ERROR_PG="%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"
ansible-playbook -i ../common/host.inventory \
-${VERBOSE} \
t.yml \
${CHECK_ONLY} \
--extra-vars "hosts='${HOST}'
xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}'
"
```
t.yml
```
---
- hosts: 127.0.0.1
connection: local
tasks:
- include_vars: ../common/defaults/main.yml
- name: generate logstash kafka logscan filter config file
include: tasks/t.yml
vars:
logstash_grok_general_error: "{{xlogstash_grok_general_error}}"
```
tasks/t.yml
```
---
- name: generate logstash kafka logscan filter config file
template: src=../common/templates/my.conf.j2
dest="./500-filter.conf"
```
my.conf.j2
```
grok {
break_on_match => "true"
match => [
"message", "{{logstash_grok_general_error}}"
]
}
```
<!--- You can also paste gist.github.com links for larger files -->
Note the (?m)\n are still on the same line.
##### EXPECTED RESULTS
```
grok {
break_on_match => "true"
match => [
"message", "%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"
]
}
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Note (?m)\n now has the \n as actual line break.
<!--- Paste verbatim command output between quotes below -->
```
grok {
break_on_match => "true"
match => [
"message", "%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)
%{USER:logerror}%{GREEDYDATA})"
]
}
```
| main | template module creates an acutal new line when reading m n issue type bug report component name template ansible version and higher configuration control path directory s c os environment mac os x centos x x summary in the input file we substitute a variable with an environment variable that has a line string that contains a grok expression containing m n the output generated by the template module in versions and later treats the n as actual line break where as versions up to retains the literal m n without replacing the n with an actual line break we see the line break after we upgraded the ansible version to x any way we can work around this issue thank you for your help steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used our execution flow is probably not the nicest we want to reengineer it soon basic steps run a shell script with ansible playbook command that pass in an env variable with m n literal playbook calls a main yaml file and assigns shell environment var to a included task yaml file the task yaml file invokes the template module in the snippet below i stripped out other lines vars for clarity main shell set grok pattern general error pg timestamp error user classname greedydata m n user logerror greedydata ansible playbook i common host inventory verbose t yml check only extra vars hosts host xlogstash grok general error grok pattern general error pg t yml hosts connection local tasks include vars common defaults main yml name generate logstash kafka logscan filter config file include tasks t yml vars logstash grok general error xlogstash grok general error tasks t yml name generate logstash kafka logscan filter config file template src common templates my conf dest filter conf my conf grok break on match true match message logstash grok general error note the m n are still on the same line expected results grok break on match true match message timestamp error user 
classname greedydata m n user logerror greedydata actual results note m n now has the n as actual line break grok break on match true match message timestamp error user classname greedydata m user logerror greedydata | 1 |
62,504 | 26,023,591,426 | IssuesEvent | 2022-12-21 14:37:17 | Narikakun-Network/status-page | https://api.github.com/repos/Narikakun-Network/status-page | closed | 複数サービスソフトウェアメンテナンスのお知らせ | maintenance ur0-cc n-tool-online narikakun-ddns-rental-service | <!--
start: 2022-12-21T23:00:00+09:00
end: 2022-12-22T03:00:00+09:00
expectedDown: ur0.cc, nTool.online, Narikakun DDNS Rental Service
--> | 1.0 | 複数サービスソフトウェアメンテナンスのお知らせ - <!--
start: 2022-12-21T23:00:00+09:00
end: 2022-12-22T03:00:00+09:00
expectedDown: ur0.cc, nTool.online, Narikakun DDNS Rental Service
--> | non_main | 複数サービスソフトウェアメンテナンスのお知らせ start end expecteddown cc ntool online narikakun ddns rental service | 0 |
1,792 | 6,575,891,417 | IssuesEvent | 2017-09-11 17:43:51 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | user module: Adding user with primary group keeps changed | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
user
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible --version
ansible 2.1.2.0 (stable-2.1 4c9ed1f4fb) last updated 2016/09/23 11:24:18 (GMT +200)
lib/ansible/modules/core: (detached HEAD af67009d38) last updated 2016/09/23 11:27:16 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 1bde4310bc) last updated 2016/09/23 11:27:16 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
I'm creating a user with primary group and some other groups.
When using the stable branch, the task from below keeps changed = true. I couldn't find out what is changed, though.
Running the same task with ansible and the modules from devel works correct. Maybe you can merge the changes in ansible-modules-core in stable. Unfortantly, I can't tell which commits.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: add oracle user
user:
name=oracle
group=oinstall
groups=oinstall,dba
password=foobar
update_password=on_create
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
On second run, the change should be false and state ok.
```
TASK [oracle-12c-preparation : add oracle user] ********************************
ok: [mysecret] => {"append": false, "changed": false, "comment": "", "group": 1001, "groups": "oinstall,dba", "home": "/home/oracle", "move_home": false, "name": "oracle", "password": "NOT_LOGGING_PASSWORD", "shell": "/bin/bash", "state": "present", "uid": 1002}
```
##### ACTUAL RESULTS
State keeps changed.
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
```
TASK [oracle-12c-preparation : add oracle user] ********************************
changed: [mysecret] => {"append": false, "changed": true, "comment": "", "group": 1001, "groups": "oinstall,dba", "home": "/home/oracle", "move_home": false, "name": "oracle", "password": "NOT_LOGGING_PASSWORD", "shell": "/bin/bash", "state": "present", "uid": 1002}
```
<!--- Paste verbatim command output between quotes below -->
```
```
| True | user module: Adding user with primary group keeps changed - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
user
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible --version
ansible 2.1.2.0 (stable-2.1 4c9ed1f4fb) last updated 2016/09/23 11:24:18 (GMT +200)
lib/ansible/modules/core: (detached HEAD af67009d38) last updated 2016/09/23 11:27:16 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 1bde4310bc) last updated 2016/09/23 11:27:16 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
I'm creating a user with primary group and some other groups.
When using the stable branch, the task from below keeps changed = true. I couldn't find out what is changed, though.
Running the same task with ansible and the modules from devel works correct. Maybe you can merge the changes in ansible-modules-core in stable. Unfortantly, I can't tell which commits.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: add oracle user
user:
name=oracle
group=oinstall
groups=oinstall,dba
password=foobar
update_password=on_create
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
On second run, the change should be false and state ok.
```
TASK [oracle-12c-preparation : add oracle user] ********************************
ok: [mysecret] => {"append": false, "changed": false, "comment": "", "group": 1001, "groups": "oinstall,dba", "home": "/home/oracle", "move_home": false, "name": "oracle", "password": "NOT_LOGGING_PASSWORD", "shell": "/bin/bash", "state": "present", "uid": 1002}
```
##### ACTUAL RESULTS
State keeps changed.
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
```
TASK [oracle-12c-preparation : add oracle user] ********************************
changed: [mysecret] => {"append": false, "changed": true, "comment": "", "group": 1001, "groups": "oinstall,dba", "home": "/home/oracle", "move_home": false, "name": "oracle", "password": "NOT_LOGGING_PASSWORD", "shell": "/bin/bash", "state": "present", "uid": 1002}
```
<!--- Paste verbatim command output between quotes below -->
```
```
| main | user module adding user with primary group keeps changed issue type bug report component name user ansible version ansible version ansible stable last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary i m creating a user with primary group and some other groups when using the stable branch the task from below keeps changed true i couldn t find out what is changed though running the same task with ansible and the modules from devel works correct maybe you can merge the changes in ansible modules core in stable unfortantly i can t tell which commits steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name add oracle user user name oracle group oinstall groups oinstall dba password foobar update password on create expected results on second run the change should be false and state ok task ok append false changed false comment group groups oinstall dba home home oracle move home false name oracle password not logging password shell bin bash state present uid actual results state keeps changed task changed append false changed true comment group groups oinstall dba home home oracle move home false name oracle password not logging password shell bin bash state present uid | 1 |
662,500 | 22,141,761,314 | IssuesEvent | 2022-06-03 07:41:56 | BAMWelDX/weldx | https://api.github.com/repos/BAMWelDX/weldx | opened | [setup.cfg] license_file parameter is deprecated, use license_files instead. | low priority | setuptools/config/setupcfg.py:459: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. | 1.0 | [setup.cfg] license_file parameter is deprecated, use license_files instead. - setuptools/config/setupcfg.py:459: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. | non_main | license file parameter is deprecated use license files instead setuptools config setupcfg py setuptoolsdeprecationwarning the license file parameter is deprecated use license files instead | 0 |
438,278 | 12,625,559,536 | IssuesEvent | 2020-06-14 12:31:28 | threefoldtech/jumpscaleX_threebot | https://api.github.com/repos/threefoldtech/jumpscaleX_threebot | closed | chatflow: we should validate the flist url | priority_minor type_feature | I did make a mistake in the flist url when trying to deploy a container. The reservation eventually failed but we could have validated the format of the flist upfront to avoid the mistake sooner.

| 1.0 | chatflow: we should validate the flist url - I did make a mistake in the flist url when trying to deploy a container. The reservation eventually failed but we could have validated the format of the flist upfront to avoid the mistake sooner.

| non_main | chatflow we should validate the flist url i did make a mistake in the flist url when trying to deploy a container the reservation eventually failed but we could have validated the format of the flist upfront to avoid the mistake sooner | 0 |
67,636 | 17,024,410,878 | IssuesEvent | 2021-07-03 07:06:40 | apache/shardingsphere | https://api.github.com/repos/apache/shardingsphere | closed | Calcite always can not download when mvn install in windows env | status: volunteer wanted type: build | Calcite lib always can not download when mvn install in windows env, please investigate the reason.
error log:
```
Error: Failed to execute goal on project shardingsphere-infra-optimize: Could not resolve dependencies for project org.apache.shardingsphere:shardingsphere-infra-optimize:jar:5.0.0-RC1-SNAPSHOT: Failed to collect dependencies at org.apache.calcite:calcite-core:jar:1.26.0: Failed to read artifact descriptor for org.apache.calcite:calcite-core:jar:1.26.0: Could not transfer artifact org.apache.calcite:calcite-core:pom:1.26.0 from/to central (https://repo.maven.apache.org/maven2): Transfer failed for https://repo.maven.apache.org/maven2/org/apache/calcite/calcite-core/1.26.0/calcite-core-1.26.0.pom: Connection reset -> [Help 1]
Error:
Error: To see the full stack trace of the errors, re-run Maven with the -e switch.
Error: Re-run Maven using the -X switch to enable full debug logging.
Error:
Error: For more information about the errors and possible solutions, please read the following articles:
Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
Error:
Error: After correcting the problems, you can resume the build with the command
Error: mvn <args> -rf :shardingsphere-infra-optimize
Error: Process completed with exit code 1.
``` | 1.0 | Calcite always can not download when mvn install in windows env - Calcite lib always can not download when mvn install in windows env, please investigate the reason.
error log:
```
Error: Failed to execute goal on project shardingsphere-infra-optimize: Could not resolve dependencies for project org.apache.shardingsphere:shardingsphere-infra-optimize:jar:5.0.0-RC1-SNAPSHOT: Failed to collect dependencies at org.apache.calcite:calcite-core:jar:1.26.0: Failed to read artifact descriptor for org.apache.calcite:calcite-core:jar:1.26.0: Could not transfer artifact org.apache.calcite:calcite-core:pom:1.26.0 from/to central (https://repo.maven.apache.org/maven2): Transfer failed for https://repo.maven.apache.org/maven2/org/apache/calcite/calcite-core/1.26.0/calcite-core-1.26.0.pom: Connection reset -> [Help 1]
Error:
Error: To see the full stack trace of the errors, re-run Maven with the -e switch.
Error: Re-run Maven using the -X switch to enable full debug logging.
Error:
Error: For more information about the errors and possible solutions, please read the following articles:
Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
Error:
Error: After correcting the problems, you can resume the build with the command
Error: mvn <args> -rf :shardingsphere-infra-optimize
Error: Process completed with exit code 1.
``` | non_main | calcite always can not download when mvn install in windows env calcite lib always can not download when mvn install in windows env please investigate the reason error log error failed to execute goal on project shardingsphere infra optimize could not resolve dependencies for project org apache shardingsphere shardingsphere infra optimize jar snapshot failed to collect dependencies at org apache calcite calcite core jar failed to read artifact descriptor for org apache calcite calcite core jar could not transfer artifact org apache calcite calcite core pom from to central transfer failed for connection reset error error to see the full stack trace of the errors re run maven with the e switch error re run maven using the x switch to enable full debug logging error error for more information about the errors and possible solutions please read the following articles error error error after correcting the problems you can resume the build with the command error mvn rf shardingsphere infra optimize error process completed with exit code | 0 |
1,922 | 6,587,381,486 | IssuesEvent | 2017-09-13 20:51:37 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | DaysBetween: Add trigger | Low-Hanging Fruit Maintainer Timeout Suggestion | Could add "days until April 9" as a trigger. Currently the closest thing is "days from today to april 9"
---
IA Page: http://duck.co/ia/view/days_between
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @JetFault
| True | DaysBetween: Add trigger - Could add "days until April 9" as a trigger. Currently the closest thing is "days from today to april 9"
---
IA Page: http://duck.co/ia/view/days_between
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @JetFault
| main | daysbetween add trigger could add days until april as a trigger currently the closest thing is days from today to april ia page jetfault | 1 |
302,332 | 9,256,926,373 | IssuesEvent | 2019-03-16 23:52:20 | readsoftware/ReadIssues | https://api.github.com/repos/readsoftware/ReadIssues | opened | Translation and Chāyā V - RTF output | Enhancement Priority 2 Spec | Minor issue with formatting of RTF output. When using sentence sequences the tranalation and chaya annotations are output with appropriate spacing,
Where one has developed ones own sequences, the ouput is concatenated with out any spacing. It seems reasonable that the default output would include a space between translation/shaya sequences. | 1.0 | Translation and Chāyā V - RTF output - Minor issue with formatting of RTF output. When using sentence sequences the tranalation and chaya annotations are output with appropriate spacing,
Where one has developed ones own sequences, the ouput is concatenated with out any spacing. It seems reasonable that the default output would include a space between translation/shaya sequences. | non_main | translation and chāyā v rtf output minor issue with formatting of rtf output when using sentence sequences the tranalation and chaya annotations are output with appropriate spacing where one has developed ones own sequences the ouput is concatenated with out any spacing it seems reasonable that the default output would include a space between translation shaya sequences | 0 |
6,120 | 2,583,298,415 | IssuesEvent | 2015-02-16 03:21:35 | OpenConceptLab/ocl_web | https://api.github.com/repos/OpenConceptLab/ocl_web | opened | Add "website" field to user object | enhancement medium-priority | This was an oversight that it was not in the spec -- assuming we won't get to this in the current scope, so just documenting it. | 1.0 | Add "website" field to user object - This was an oversight that it was not in the spec -- assuming we won't get to this in the current scope, so just documenting it. | non_main | add website field to user object this was an oversight that it was not in the spec assuming we won t get to this in the current scope so just documenting it | 0 |
130,211 | 12,425,192,639 | IssuesEvent | 2020-05-24 15:14:39 | SWI-Prolog/swipl-devel | https://api.github.com/repos/SWI-Prolog/swipl-devel | closed | broken link in doc of library(check) | Documentation bug | On [this](http://www.swi-prolog.org/pldoc/man?section=check) page, in the sentence --
> Run all consistency checks defined by checker/2
-- the link to checker/2 is broken.
| 1.0 | broken link in doc of library(check) - On [this](http://www.swi-prolog.org/pldoc/man?section=check) page, in the sentence --
> Run all consistency checks defined by checker/2
-- the link to checker/2 is broken.
| non_main | broken link in doc of library check on page in the sentence run all consistency checks defined by checker the link to checker is broken | 0 |