| Unnamed: 0 (int64, 1-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7-112) | repo_url (string, length 36-141) | action (string, 3 classes) | title (string, length 3-438) | labels (string, length 4-308) | body (string, length 7-254k) | index (string, 7 classes) | text_combine (string, length 96-254k) | label (string, 2 classes) | text (string, length 96-246k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
208,903 | 23,665,431,073 | IssuesEvent | 2022-08-26 20:18:17 | JohnDeere/work-tracker-examples | https://api.github.com/repos/JohnDeere/work-tracker-examples | closed | WS-2020-0293 (Medium) detected in spring-security-web-4.2.11.RELEASE.jar - autoclosed | security vulnerability | ## WS-2020-0293 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-4.2.11.RELEASE.jar</b></p></summary>
<p>spring-security-web</p>
<p>Library home page: <a href="http://spring.io/spring-security">http://spring.io/spring-security</a></p>
<p>Path to dependency file: /spring-boot-example/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/4.2.11.RELEASE/spring-security-web-4.2.11.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-security-1.5.19.RELEASE.jar (Root Library)
- :x: **spring-security-web-4.2.11.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/JohnDeere/work-tracker-examples/commit/7aa2fa9c80c3d14d7e62f0494ba7edaff8842068">7aa2fa9c80c3d14d7e62f0494ba7edaff8842068</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Security before 5.2.9, 5.3.7, and 5.4.3 vulnerable to side-channel attacks. Vulnerable versions of Spring Security don't use constant time comparisons for CSRF tokens.
<p>Publish Date: 2020-12-17
<p>URL: <a href=https://github.com/spring-projects/spring-security/commit/40e027c56d11b9b4c5071360bfc718165c937784>WS-2020-0293</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-12-17</p>
<p>Fix Resolution (org.springframework.security:spring-security-web): 5.2.9.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-security): 2.3.0.RELEASE</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2020-0293 (Medium) detected in spring-security-web-4.2.11.RELEASE.jar - autoclosed - ## WS-2020-0293 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-4.2.11.RELEASE.jar</b></p></summary>
<p>spring-security-web</p>
<p>Library home page: <a href="http://spring.io/spring-security">http://spring.io/spring-security</a></p>
<p>Path to dependency file: /spring-boot-example/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/4.2.11.RELEASE/spring-security-web-4.2.11.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-security-1.5.19.RELEASE.jar (Root Library)
- :x: **spring-security-web-4.2.11.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/JohnDeere/work-tracker-examples/commit/7aa2fa9c80c3d14d7e62f0494ba7edaff8842068">7aa2fa9c80c3d14d7e62f0494ba7edaff8842068</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Security before 5.2.9, 5.3.7, and 5.4.3 vulnerable to side-channel attacks. Vulnerable versions of Spring Security don't use constant time comparisons for CSRF tokens.
<p>Publish Date: 2020-12-17
<p>URL: <a href=https://github.com/spring-projects/spring-security/commit/40e027c56d11b9b4c5071360bfc718165c937784>WS-2020-0293</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-12-17</p>
<p>Fix Resolution (org.springframework.security:spring-security-web): 5.2.9.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-security): 2.3.0.RELEASE</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws medium detected in spring security web release jar autoclosed ws medium severity vulnerability vulnerable library spring security web release jar spring security web library home page a href path to dependency file spring boot example pom xml path to vulnerable library home wss scanner repository org springframework security spring security web release spring security web release jar dependency hierarchy spring boot starter security release jar root library x spring security web release jar vulnerable library found in head commit a href found in base branch master vulnerability details spring security before and vulnerable to side channel attacks vulnerable versions of spring security don t use constant time comparisons for csrf tokens publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution org springframework security spring security web release direct dependency fix resolution org springframework boot spring boot starter security release step up your open source security game with mend | 0 |
149,759 | 13,301,491,642 | IssuesEvent | 2020-08-25 13:02:06 | JJguri/bestiapop | https://api.github.com/repos/JJguri/bestiapop | closed | acknowledge the data providers | documentation | We need to update the documentation and provide an acknowledge for the data use (I am not entirely sure if this needs to be in the header of the file).
**SILO**
Under the CC Attribution licence, users are required to acknowledge the data provider, so we ask our clients to:
• cite the Jeffrey et al. 2001 paper in **technical documents**
• acknowledge SILO as the data source in **non-technical documents**, for example:
These data were obtained from the Queensland Government’s [SILO](https://www.longpaddock.qld.gov.au/silo/) climate database and are licensed under [CC BY 4.0.](https://creativecommons.org/licenses/by/4.0/)”
**NASAPOWER**
When POWER data products are used in a **publication**, we request the following acknowledgment be included: “These data were obtained from the NASA Langley Research Center POWER Project funded through the NASA Earth Science Directorate Applied Science Program.” | 1.0 | acknowledge the data providers - We need to update the documentation and provide an acknowledge for the data use (I am not entirely sure if this needs to be in the header of the file).
**SILO**
Under the CC Attribution licence, users are required to acknowledge the data provider, so we ask our clients to:
• cite the Jeffrey et al. 2001 paper in **technical documents**
• acknowledge SILO as the data source in **non-technical documents**, for example:
These data were obtained from the Queensland Government’s [SILO](https://www.longpaddock.qld.gov.au/silo/) climate database and are licensed under [CC BY 4.0.](https://creativecommons.org/licenses/by/4.0/)”
**NASAPOWER**
When POWER data products are used in a **publication**, we request the following acknowledgment be included: “These data were obtained from the NASA Langley Research Center POWER Project funded through the NASA Earth Science Directorate Applied Science Program.” | non_main | acknowledge the data providers we need to update the documentation and provide an acknowledge for the data use i am not entirely sure if this needs to be in the header of the file silo under the cc attribution licence users are required to acknowledge the data provider so we ask our clients to • cite the jeffrey et al paper in technical documents • acknowledge silo as the data source in non technical documents for example these data were obtained from the queensland government’s climate database and are licensed under nasapower when power data products are used in a publication we request the following acknowledgment be included “these data were obtained from the nasa langley research center power project funded through the nasa earth science directorate applied science program ” | 0 |
5,104 | 26,018,850,884 | IssuesEvent | 2022-12-21 10:48:31 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Hide transformation results in a 500 in /run request | type: bug work: backend status: ready restricted: maintainers | ## Description
Adding a hide transformation results in a 500 with the following error:
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/db/v0/queries/run/
Django Version: 3.1.14
Python Version: 3.9.15
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware',
'mathesar.middleware.PasswordChangeNeededMiddleware',
'django_userforeignkey.middleware.UserForeignKeyMiddleware',
'django_request_cache.middleware.RequestCacheMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 59, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/queries.py", line 123, in run
column_metadata = query.all_columns_description_map
File "/code/mathesar/models/query.py", line 248, in all_columns_description_map
return {
File "/code/mathesar/models/query.py", line 249, in <dictcomp>
alias: self._describe_query_column(sa_col)
File "/code/mathesar/models/query.py", line 217, in _describe_query_column
type=sa_col.db_type.id,
File "/code/db/columns/base.py", line 225, in db_type
return get_db_type_enum_from_class(self.type.__class__)
File "/code/db/types/operations/convert.py", line 37, in get_db_type_enum_from_class
raise UnknownDbTypeId
Exception Type: UnknownDbTypeId at /api/db/v0/queries/run/
Exception Value:
```
The request:
```json
{
"base_table":72,
"initial_columns":[
{
"id":225,
"alias":"Patrons_First Name"
},
{
"id":226,
"alias":"Patrons_Last Name"
},
{
"id":227,
"alias":"Patrons_Email"
}
],
"transformations":[
{
"type":"hide",
"spec":[]
}
],
``` | True | Hide transformation results in a 500 in /run request - ## Description
Adding a hide transformation results in a 500 with the following error:
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/db/v0/queries/run/
Django Version: 3.1.14
Python Version: 3.9.15
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware',
'mathesar.middleware.PasswordChangeNeededMiddleware',
'django_userforeignkey.middleware.UserForeignKeyMiddleware',
'django_request_cache.middleware.RequestCacheMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 59, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/queries.py", line 123, in run
column_metadata = query.all_columns_description_map
File "/code/mathesar/models/query.py", line 248, in all_columns_description_map
return {
File "/code/mathesar/models/query.py", line 249, in <dictcomp>
alias: self._describe_query_column(sa_col)
File "/code/mathesar/models/query.py", line 217, in _describe_query_column
type=sa_col.db_type.id,
File "/code/db/columns/base.py", line 225, in db_type
return get_db_type_enum_from_class(self.type.__class__)
File "/code/db/types/operations/convert.py", line 37, in get_db_type_enum_from_class
raise UnknownDbTypeId
Exception Type: UnknownDbTypeId at /api/db/v0/queries/run/
Exception Value:
```
The request:
```json
{
"base_table":72,
"initial_columns":[
{
"id":225,
"alias":"Patrons_First Name"
},
{
"id":226,
"alias":"Patrons_Last Name"
},
{
"id":227,
"alias":"Patrons_Email"
}
],
"transformations":[
{
"type":"hide",
"spec":[]
}
],
``` | main | hide transformation results in a in run request description adding a hide transformation results in a with the following error environment request method post request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware mathesar middleware cursorclosedhandlermiddleware mathesar middleware passwordchangeneededmiddleware django userforeignkey middleware userforeignkeymiddleware django request cache middleware requestcachemiddleware traceback most recent call last file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file code mathesar api db viewsets queries py line in run column metadata query all columns description map file code mathesar models query py line in all columns description map return file code mathesar models query py line in alias self describe query column sa col file code mathesar models query py line in describe query column type sa col db type id file code db columns base py line in db type return get db type enum from class self type class file code db types operations convert py line in get db type enum from class raise unknowndbtypeid exception type unknowndbtypeid at api db queries run exception value the request json base table initial columns id alias patrons first name id alias patrons last name id alias patrons email transformations type hide spec | 1 |
1,398 | 6,025,396,153 | IssuesEvent | 2017-06-08 08:35:42 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | win_acl error with having LDAP signing enabled | affects_2.2 bug_report waiting_on_maintainer windows | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_acl
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
none / default
##### OS / ENVIRONMENT
managing Windows7 from MacOS 10.11
##### SUMMARY
win_acl is failing if having LDAP signing enabled
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Give Domain Users the required access rights for folder
win_acl:
path: 'C:\Users\Public\Desktop\somefolder'
user: 'DOMAIN\Domain Users'
rights: 'ReadAndExecute,Write,ListDirectory,CreateDirectories,CreateFiles,DeleteSubdirectoriesAndFiles,Synchronize,Traverse'
type: 'allow'
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
ACL for that folder updated
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
FAILED! => {"changed": false, "failed": true, "msg": "exception calling \"FindOne\" with 0 argument(s): \"A more secure authentication method is required for this server.\r\n\""}
```
| True | win_acl error with having LDAP signing enabled - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_acl
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
none / default
##### OS / ENVIRONMENT
managing Windows7 from MacOS 10.11
##### SUMMARY
win_acl is failing if having LDAP signing enabled
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Give Domain Users the required access rights for folder
win_acl:
path: 'C:\Users\Public\Desktop\somefolder'
user: 'DOMAIN\Domain Users'
rights: 'ReadAndExecute,Write,ListDirectory,CreateDirectories,CreateFiles,DeleteSubdirectoriesAndFiles,Synchronize,Traverse'
type: 'allow'
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
ACL for that folder updated
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
FAILED! => {"changed": false, "failed": true, "msg": "exception calling \"FindOne\" with 0 argument(s): \"A more secure authentication method is required for this server.\r\n\""}
```
| main | win acl error with having ldap signing enabled issue type bug report component name win acl ansible version ansible configuration none default os environment managing from macos summary win acl is failing if having ldap signing enabled steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name give domain users the required access rights for folder win acl path c users public desktop somefolder user domain domain users rights readandexecute write listdirectory createdirectories createfiles deletesubdirectoriesandfiles synchronize traverse type allow expected results acl for that folder updated actual results failed changed false failed true msg exception calling findone with argument s a more secure authentication method is required for this server r n | 1 |
273,951 | 8,555,268,733 | IssuesEvent | 2018-11-08 09:33:33 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.flipkart.com - site is not usable | browser-firefox priority-important | <!-- @browser: Firefox 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.flipkart.com/?affid=galaksion&affExtParam1=54B902A0-E307-11E8-B292-9140F31CBED0&affExtParam2=21997
**Browser / Version**: Firefox 64.0
**Operating System**: Windows 10
**Tested Another Browser**: Unknown
**Problem type**: Site is not usable
**Description**: this site leads to crash of the connection
**Steps to Reproduce**:
this site is spam and not allowing me to use the net
[](https://webcompat.com/uploads/2018/11/430c72f5-a32a-43f4-91b7-452a762c59d2.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181022150107</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Content Security Policy: Directive child-src has been deprecated. Please use directive worker-src to control workers, or directive frame-src to control frames respectively."]', u'[console.log(ServiceWorker registration successful with scope: , https://www.flipkart.com/) https://www.flipkart.com/?affid=galaksion&affExtParam1=54B902A0-E307-11E8-B292-9140F31CBED0&affExtParam2=21997:159:4]', u"[console.log(Flipkart 's web) https://img1a.flixcart.com/www/linchpin/fk-cp-zion/js/raven.3.22.3.js:2:1243]"]
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.flipkart.com - site is not usable - <!-- @browser: Firefox 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.flipkart.com/?affid=galaksion&affExtParam1=54B902A0-E307-11E8-B292-9140F31CBED0&affExtParam2=21997
**Browser / Version**: Firefox 64.0
**Operating System**: Windows 10
**Tested Another Browser**: Unknown
**Problem type**: Site is not usable
**Description**: this site leads to crash of the connection
**Steps to Reproduce**:
this site is spam and not allowing me to use the net
[](https://webcompat.com/uploads/2018/11/430c72f5-a32a-43f4-91b7-452a762c59d2.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181022150107</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Content Security Policy: Directive child-src has been deprecated. Please use directive worker-src to control workers, or directive frame-src to control frames respectively."]', u'[console.log(ServiceWorker registration successful with scope: , https://www.flipkart.com/) https://www.flipkart.com/?affid=galaksion&affExtParam1=54B902A0-E307-11E8-B292-9140F31CBED0&affExtParam2=21997:159:4]', u"[console.log(Flipkart 's web) https://img1a.flixcart.com/www/linchpin/fk-cp-zion/js/raven.3.22.3.js:2:1243]"]
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | site is not usable url browser version firefox operating system windows tested another browser unknown problem type site is not usable description this site leads to crash of the connection steps to reproduce this site is spam and not allowing me to use the net browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages u u from with ❤️ | 0 |
1,032 | 4,827,588,341 | IssuesEvent | 2016-11-07 14:05:54 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | cloudformation module fails when state:absent and stack does not exist | affects_2.0 aws bug_report cloud waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### Plugin Name:
cloudformation
##### Ansible Version:
```
$ ansible --version
ansible 2.0.1.0
config file = /Users/dcarr/.ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
None
##### Environment:
N/A; Mac OS X 10.10.5
##### Summary:
I have a playbook that deletes a CloudFormation stack. If I run it when the stack is already absent, I expect it to succeed without error, noting that no changes were needed. What I actually see is that it fails with an error message:
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Stack with id STACKNAME does not exist"}
```
##### Steps To Reproduce:
```
---
- name: delete stack play
hosts: localhost
connection: local
gather_facts: false
tasks:
- name: delete stack task
cloudformation:
stack_name: "STACKNAME"
state: "absent"
region: "us-east-1"
```
##### Expected Results:
Success with no changes
##### Actual Results:
<!-- What actually happened? If possible run with high verbosity (-vvvv) -->
```
$ ansible-playbook bug.yaml -vvvv
Using /Users/dcarr/.ansible.cfg as config file
Loaded callback default of type stdout, v2.0
1 plays in bug.yaml
PLAY [delete stack play] *******************************************************
TASK [delete stack task] *******************************************************
task path: /private/var/folders/2c/qd7lcfcs5tsctw7tmvyd61v00000gn/T/bug.s12v4I2i/bug.yaml:7
ESTABLISH LOCAL CONNECTION FOR USER: dcarr
127.0.0.1 EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760 `" )'
127.0.0.1 PUT /var/folders/2c/qd7lcfcs5tsctw7tmvyd61v00000gn/T/tmp8h9eEU TO /Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/cloudformation
127.0.0.1 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/cloudformation; rm -rf "/Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/" > /dev/null 2>&1'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "disable_rollback": false, "ec2_url": null, "notification_arns": null, "profile": null, "region": "us-east-1", "security_token": null, "stack_name": "STACKNAME", "stack_policy": null, "state": "absent", "tags": null, "template": null, "template_format": "json", "template_parameters": {}, "template_url": null, "validate_certs": true}, "module_name": "cloudformation"}, "msg": "Stack with id STACKNAME does not exist"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @bug.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
| True | cloudformation module fails when state:absent and stack does not exist - ##### Issue Type:
- Bug Report
##### Plugin Name:
cloudformation
##### Ansible Version:
```
$ ansible --version
ansible 2.0.1.0
config file = /Users/dcarr/.ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
None
##### Environment:
N/A; Mac OS X 10.10.5
##### Summary:
I have a playbook that deletes a CloudFormation stack. If I run it when the stack is already absent, I expect it to succeed without error, noting that no changes were needed. What I actually see is that it fails with an error message:
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Stack with id STACKNAME does not exist"}
```
##### Steps To Reproduce:
```
---
- name: delete stack play
hosts: localhost
connection: local
gather_facts: false
tasks:
- name: delete stack task
cloudformation:
stack_name: "STACKNAME"
state: "absent"
region: "us-east-1"
```
##### Expected Results:
Success with no changes
##### Actual Results:
<!-- What actually happened? If possible run with high verbosity (-vvvv) -->
```
$ ansible-playbook bug.yaml -vvvv
Using /Users/dcarr/.ansible.cfg as config file
Loaded callback default of type stdout, v2.0
1 plays in bug.yaml
PLAY [delete stack play] *******************************************************
TASK [delete stack task] *******************************************************
task path: /private/var/folders/2c/qd7lcfcs5tsctw7tmvyd61v00000gn/T/bug.s12v4I2i/bug.yaml:7
ESTABLISH LOCAL CONNECTION FOR USER: dcarr
127.0.0.1 EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760 `" )'
127.0.0.1 PUT /var/folders/2c/qd7lcfcs5tsctw7tmvyd61v00000gn/T/tmp8h9eEU TO /Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/cloudformation
127.0.0.1 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/cloudformation; rm -rf "/Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/" > /dev/null 2>&1'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "disable_rollback": false, "ec2_url": null, "notification_arns": null, "profile": null, "region": "us-east-1", "security_token": null, "stack_name": "STACKNAME", "stack_policy": null, "state": "absent", "tags": null, "template": null, "template_format": "json", "template_parameters": {}, "template_url": null, "validate_certs": true}, "module_name": "cloudformation"}, "msg": "Stack with id STACKNAME does not exist"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @bug.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
| main | cloudformation module fails when state absent and stack does not exist issue type bug report plugin name cloudformation ansible version ansible version ansible config file users dcarr ansible cfg configured module search path default w o overrides ansible configuration none environment n a mac os x summary i have a playbook that deletes a cloudformation stack if i run it when the stack is already absent i expect it to succeed without error noting that no changes were needed what i actually see is that it fails with an error message fatal failed changed false failed true msg stack with id stackname does not exist steps to reproduce name delete stack play hosts localhost connection local gather facts false tasks name delete stack task cloudformation stack name stackname state absent region us east expected results success with no changes actual results ansible playbook bug yaml vvvv using users dcarr ansible cfg as config file loaded callback default of type stdout plays in bug yaml play task task path private var folders t bug bug yaml establish local connection for user dcarr exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put var folders t to users dcarr ansible tmp ansible tmp cloudformation exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users dcarr ansible tmp ansible tmp cloudformation rm rf users dcarr ansible tmp ansible tmp dev null fatal failed changed false failed true invocation module args aws access key null aws secret key null disable rollback false url null notification arns null profile null region us east security token null stack name stackname stack policy null state absent tags null template null template format json template parameters template url null validate certs true module name cloudformation msg stack with id stackname does not exist no more hosts left to retry use limit bug retry play recap localhost ok changed unreachable failed | 1 |
434,694 | 30,462,619,888 | IssuesEvent | 2023-07-17 08:08:05 | kubecub/go-project-layout | https://api.github.com/repos/kubecub/go-project-layout | closed | Bug reports for links in kubecub docs | kind/documentation triage/unresolved report lifecycle/stale | ## Summary
| Status | Count |
|---------------|-------|
| 🔍 Total | 171 |
| ✅ Successful | 164 |
| ⏳ Timeouts | 1 |
| 🔀 Redirected | 0 |
| 👻 Excluded | 0 |
| ❓ Unknown | 0 |
| 🚫 Errors | 6 |
## Errors per input
### Errors in CONTRIBUTING.md
* [TIMEOUT] [https://twitter.com/xxw3293172751](https://twitter.com/xxw3293172751) | Timeout
### Errors in .github/CODE_OF_CONDUCT.md
* [404] [https://github.com/kubecub/kubecub/tree/main/.github/ISSUE_TEMPLATE](https://github.com/kubecub/kubecub/tree/main/.github/ISSUE_TEMPLATE) | Failed: Network error: Not Found
* [ERR] [file:///home/runner/work/go-project-layout/go-project-layout/.github/nsddd.top](file:///home/runner/work/go-project-layout/go-project-layout/.github/nsddd.top) | Failed: Cannot find file
* [404] [https://github.com/kubecub/community/blob/main/DEVELOPGUIDE.md](https://github.com/kubecub/community/blob/main/DEVELOPGUIDE.md) | Failed: Network error: Not Found
* [ERR] [file:///home/runner/work/go-project-layout/go-project-layout/.github/google.com/search](file:///home/runner/work/go-project-layout/go-project-layout/.github/google.com/search) | Failed: Cannot find file
### Errors in README.md
* [404] [https://github.com/kubecub/go-project-layout/generate](https://github.com/kubecub/go-project-layout/generate) | Failed: Network error: Not Found
* [400] [https://github.com/issues?q=org%kubecub+is%3Aissue+label%3A%22good+first+issue%22+no%3Aassignee](https://github.com/issues?q=org%kubecub+is%3Aissue+label%3A%22good+first+issue%22+no%3Aassignee) | Failed: Network error: Bad Request
[Full Github Actions output](https://github.com/kubecub/go-project-layout/actions/runs/5235294087?check_suite_focus=true)
| 1.0 | Bug reports for links in kubecub docs - ## Summary
| Status | Count |
|---------------|-------|
| 🔍 Total | 171 |
| ✅ Successful | 164 |
| ⏳ Timeouts | 1 |
| 🔀 Redirected | 0 |
| 👻 Excluded | 0 |
| ❓ Unknown | 0 |
| 🚫 Errors | 6 |
## Errors per input
### Errors in CONTRIBUTING.md
* [TIMEOUT] [https://twitter.com/xxw3293172751](https://twitter.com/xxw3293172751) | Timeout
### Errors in .github/CODE_OF_CONDUCT.md
* [404] [https://github.com/kubecub/kubecub/tree/main/.github/ISSUE_TEMPLATE](https://github.com/kubecub/kubecub/tree/main/.github/ISSUE_TEMPLATE) | Failed: Network error: Not Found
* [ERR] [file:///home/runner/work/go-project-layout/go-project-layout/.github/nsddd.top](file:///home/runner/work/go-project-layout/go-project-layout/.github/nsddd.top) | Failed: Cannot find file
* [404] [https://github.com/kubecub/community/blob/main/DEVELOPGUIDE.md](https://github.com/kubecub/community/blob/main/DEVELOPGUIDE.md) | Failed: Network error: Not Found
* [ERR] [file:///home/runner/work/go-project-layout/go-project-layout/.github/google.com/search](file:///home/runner/work/go-project-layout/go-project-layout/.github/google.com/search) | Failed: Cannot find file
### Errors in README.md
* [404] [https://github.com/kubecub/go-project-layout/generate](https://github.com/kubecub/go-project-layout/generate) | Failed: Network error: Not Found
* [400] [https://github.com/issues?q=org%kubecub+is%3Aissue+label%3A%22good+first+issue%22+no%3Aassignee](https://github.com/issues?q=org%kubecub+is%3Aissue+label%3A%22good+first+issue%22+no%3Aassignee) | Failed: Network error: Bad Request
[Full Github Actions output](https://github.com/kubecub/go-project-layout/actions/runs/5235294087?check_suite_focus=true)
| non_main | bug reports for links in kubecub docs summary status count 🔍 total ✅ successful ⏳ timeouts 🔀 redirected 👻 excluded ❓ unknown 🚫 errors errors per input errors in contributing md timeout errors in github code of conduct md failed network error not found file home runner work go project layout go project layout github nsddd top failed cannot find file failed network error not found file home runner work go project layout go project layout github google com search failed cannot find file errors in readme md failed network error not found failed network error bad request | 0 |
393,485 | 11,616,550,872 | IssuesEvent | 2020-02-26 15:53:10 | hms-dbmi/cistrome-higlass-wrapper | https://api.github.com/repos/hms-dbmi/cistrome-higlass-wrapper | closed | Use updated higlass viewport-projection-horizontal track to enable multiple genome interval selections | enhancement high priority | When https://github.com/higlass/higlass/pull/864 is merged this can be completed | 1.0 | Use updated higlass viewport-projection-horizontal track to enable multiple genome interval selections - When https://github.com/higlass/higlass/pull/864 is merged this can be completed | non_main | use updated higlass viewport projection horizontal track to enable multiple genome interval selections when is merged this can be completed | 0 |
63,769 | 12,374,413,646 | IssuesEvent | 2020-05-19 01:30:19 | toebes/ciphers | https://api.github.com/repos/toebes/ciphers | opened | Baconian word generator needs a UI to show letters chosen | CodeBusters enhancement | When generating a word baconian, it needs to have a field for the HINT characters.
With the given Hint characters, it should show in the letter map which letters are covered by the hint.
For example with the sample plain text
SOMETHING
and a HINT of
SOME
With the text chosen as:
BY OUR ERNST ALERT AUDIO --- BE ITS EARTH A BOOK ABBEY
On the mapping, the letters **AB DE I L NO RSTU Y** should be bold or highlighted in a color as well as the A/B letter that they map to
**AB**C**DE**FGH**I**JK**L**M**NO**PQ**RSTU**VWX**Y**Z
Ideally the code should also check the question text to make sure that the hint occurs in the question (like the other generators do). Note that the hint field should only be present and checked for the word baconian. | 1.0 | Baconian word generator needs a UI to show letters chosen - When generating a word baconian, it needs to have a field for the HINT characters.
With the given Hint characters, it should show in the letter map which letters are covered by the hint.
For example with the sample plain text
SOMETHING
and a HINT of
SOME
With the text chosen as:
BY OUR ERNST ALERT AUDIO --- BE ITS EARTH A BOOK ABBEY
On the mapping, the letters **AB DE I L NO RSTU Y** should be bold or highlighted in a color as well as the A/B letter that they map to
**AB**C**DE**FGH**I**JK**L**M**NO**PQ**RSTU**VWX**Y**Z
Ideally the code should also check the question text to make sure that the hint occurs in the question (like the other generators do). Note that the hint field should only be present and checked for the word baconian. | non_main | baconian word generator needs a ui to show letters chosen when generating a word baconian it needs to have a field for the hint characters with the given hint characters it should show in the letter map which letters are covered by the hint for example with the sample plain text something and a hint of some with the text chosen as by our ernst alert audio be its earth a book abbey on the mapping the letters ab de i l no rstu y should be bold or highlighted in a color as well as the a b letter that they map to ab c de fgh i jk l m no pq rstu vwx y z ideally the code should also check the question text to make sure that the hint occurs in the question like the other generators do note that the hint field should only be present and checked for the word baconian | 0 |
1,728 | 6,574,824,683 | IssuesEvent | 2017-09-11 14:12:25 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Passphrase protected private-key require to enter passphrase several times on one task to one host | affects_2.1 docs_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report / Documentation Report
##### COMPONENT NAME
git
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
all default
##### OS / ENVIRONMENT
Use _Putty_ to _Centos 7.x_ via _Vagrant_ on _VM VirtualBox_ at _Windows10_
##### SUMMARY
I have passphrase-protected-ssh-private-key for access the private git repo. I copy this key to target host but every time i run ansible-git-task it's asked me passphrase six (!) times for every single host.
Yes i know that one ansible git command translate into several git commands. It was not so obviously but afer some investigation time i found it. So my next step was to use some of forwarding practices. And cannot do this at all. 8(
Not helped:
ssh-agent + ssh-add
ansible.cfg with ssh_args = -o ForwardAgent=true
run playbook w/ or w/o sudo
##### STEPS TO REPRODUCE
1. Phassphrase private ssh key and private git repo (for example on bitbucket)
2. Create user (not root!) on remote host with this protected private key
3. Run ansible playbook command from control machine
Ansible git task example:
```
- name: checkout repo
git: repo=ssh://git@altssh.bitbucket.org:443/user/repo.git version="{{ git_branch }}" dest="{{ dir_app }}" accept_hostkey="yes"
become: yes
become_user: "{{ user.login }}"
tags: ['app-update', 'sandbox']
```
Playbook command example
```
[vagrant@localhost ~]$ ansible-playbook /vagrant/provisioning/sandbox/dd-apps-sandboxes.yml -i /vagrant/provisioning/dd-hosts.txt --limit="brutto.dev" --tags="app-update"
PLAY [brutto.dev] **************************************************************
TASK [setup] *******************************************************************
ok: [brutto.dev]
TASK [../roles/app : checkout repo] ********************************************
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
ok: [brutto.dev]
PLAY RECAP *********************************************************************
brutto.dev : ok=2 changed=0 unreachable=0 failed=0
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I want enter passphrase one time or never if i use some forwarding
##### ACTUAL RESULTS
Every time passphrase prompted six times!
| True | Passphrase protected private-key require to enter passphrase several times on one task to one host - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report / Documentation Report
##### COMPONENT NAME
git
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
all default
##### OS / ENVIRONMENT
Use _Putty_ to _Centos 7.x_ via _Vagrant_ on _VM VirtualBox_ at _Windows10_
##### SUMMARY
I have passphrase-protected-ssh-private-key for access the private git repo. I copy this key to target host but every time i run ansible-git-task it's asked me passphrase six (!) times for every single host.
Yes i know that one ansible git command translate into several git commands. It was not so obviously but afer some investigation time i found it. So my next step was to use some of forwarding practices. And cannot do this at all. 8(
Not helped:
ssh-agent + ssh-add
ansible.cfg with ssh_args = -o ForwardAgent=true
run playbook w/ or w/o sudo
##### STEPS TO REPRODUCE
1. Phassphrase private ssh key and private git repo (for example on bitbucket)
2. Create user (not root!) on remote host with this protected private key
3. Run ansible playbook command from control machine
Ansible git task example:
```
- name: checkout repo
git: repo=ssh://git@altssh.bitbucket.org:443/user/repo.git version="{{ git_branch }}" dest="{{ dir_app }}" accept_hostkey="yes"
become: yes
become_user: "{{ user.login }}"
tags: ['app-update', 'sandbox']
```
Playbook command example
```
[vagrant@localhost ~]$ ansible-playbook /vagrant/provisioning/sandbox/dd-apps-sandboxes.yml -i /vagrant/provisioning/dd-hosts.txt --limit="brutto.dev" --tags="app-update"
PLAY [brutto.dev] **************************************************************
TASK [setup] *******************************************************************
ok: [brutto.dev]
TASK [../roles/app : checkout repo] ********************************************
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
Enter passphrase for key '/home/brutto/.ssh/id_rsa':
ok: [brutto.dev]
PLAY RECAP *********************************************************************
brutto.dev : ok=2 changed=0 unreachable=0 failed=0
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I want enter passphrase one time or never if i use some forwarding
##### ACTUAL RESULTS
Every time passphrase prompted six times!
| main | passphrase protected private key require to enter passphrase several times on one task to one host issue type bug report documentation report component name git ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration all default os environment use putty to centos x via vagrant on vm virtualbox at summary i have passphrase protected ssh private key for access the private git repo i copy this key to target host but every time i run ansible git task it s asked me passphrase six times for every single host yes i know that one ansible git command translate into several git commands it was not so obviously but afer some investigation time i found it so my next step was to use some of forwarding practices and cannot do this at all not helped ssh agent ssh add ansible cfg with ssh args o forwardagent true run playbook w or w o sudo steps to reproduce phassphrase private ssh key and private git repo for example on bitbucket create user not root on remote host with this protected private key run ansible playbook command from control machine ansible git task example name checkout repo git repo ssh git altssh bitbucket org user repo git version git branch dest dir app accept hostkey yes become yes become user user login tags playbook command example ansible playbook vagrant provisioning sandbox dd apps sandboxes yml i vagrant provisioning dd hosts txt limit brutto dev tags app update play task ok task enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa enter passphrase for key home brutto ssh id rsa ok play recap brutto dev ok changed unreachable failed expected results i want enter passphrase one time or never if i use some forwarding actual results every time passphrase prompted six times | 1 |
359 | 3,298,189,639 | IssuesEvent | 2015-11-02 13:21:53 | Homebrew/homebrew | https://api.github.com/repos/Homebrew/homebrew | opened | Possible way to handle sandbox issues for Postgres's plugins | help wanted maintainer feedback sandbox upstream issue | As we can seen in https://github.com/Homebrew/homebrew/pull/41962 and many others PR, all of Postgre's plugins are broken under sandbox. Moreover, this means all of them are broken during `upgrade/unlink/link/switch` etc.
Considering the amount of plugins for Postgres, vending all of them will soon become unscalable. However, until it's fixed/supported by upstream (See https://github.com/Homebrew/homebrew/issues/10247), Postgres is inherently hostile to Homebrew-style sandboxing where several components are symlinked into a common prefix.
Since there isn't any perfect solution, we may will just accept some hacking middle ground. AFAIK, NixOS handles this by copying all of binaries directly to common prefix, hence breaking its symlink sandbox as well. We may take some similar approach:
* Compile Postgres as usual.
* Copy all of binaries in `prefix/bin` to `prefix/libexec/bin-backup`.
* Hard link binaries `prefix/libexec/bin-backup` to `HOMEBREW_PREFIX/bin` during `post_install`.
Clearly, it's still breaking our symlink system. But at least, it can work under sandbox.
Any objection/suggestion/commments? OR should we just vendor all of them inside one mega formula?
cc @mikemcquaid @DomT4 | True | Possible way to handle sandbox issues for Postgres's plugins - As we can seen in https://github.com/Homebrew/homebrew/pull/41962 and many others PR, all of Postgre's plugins are broken under sandbox. Moreover, this means all of them are broken during `upgrade/unlink/link/switch` etc.
Considering the amount of plugins for Postgres, vending all of them will soon become unscalable. However, until it's fixed/supported by upstream (See https://github.com/Homebrew/homebrew/issues/10247), Postgres is inherently hostile to Homebrew-style sandboxing where several components are symlinked into a common prefix.
Since there isn't any perfect solution, we may will just accept some hacking middle ground. AFAIK, NixOS handles this by copying all of binaries directly to common prefix, hence breaking its symlink sandbox as well. We may take some similar approach:
* Compile Postgres as usual.
* Copy all of binaries in `prefix/bin` to `prefix/libexec/bin-backup`.
* Hard link binaries `prefix/libexec/bin-backup` to `HOMEBREW_PREFIX/bin` during `post_install`.
Clearly, it's still breaking our symlink system. But at least, it can work under sandbox.
Any objection/suggestion/commments? OR should we just vendor all of them inside one mega formula?
cc @mikemcquaid @DomT4 | main | possible way to handle sandbox issues for postgres s plugins as we can seen in and many others pr all of postgre s plugins are broken under sandbox moreover this means all of them are broken during upgrade unlink link switch etc considering the amount of plugins for postgres vending all of them will soon become unscalable however until it s fixed supported by upstream see postgres is inherently hostile to homebrew style sandboxing where several components are symlinked into a common prefix since there isn t any perfect solution we may will just accept some hacking middle ground afaik nixos handles this by copying all of binaries directly to common prefix hence breaking its symlink sandbox as well we may take some similar approach compile postgres as usual copy all of binaries in prefix bin to prefix libexec bin backup hard link binaries prefix libexec bin backup to homebrew prefix bin during post install clearly it s still breaking our symlink system but at least it can work under sandbox any objection suggestion commments or should we just vendor all of them inside one mega formula cc mikemcquaid | 1 |
220,163 | 17,153,381,185 | IssuesEvent | 2021-07-14 01:21:44 | kworkflow/kworkflow | https://api.github.com/repos/kworkflow/kworkflow | opened | vm_test fail if we have kworkflow.config in the kw main directory | bug tests | **Describe the bug**
When I was running `./run_test test vm_test`, I got the following error:
```
=========================================================
test_vm_mount
ASSERT:(1) - Expected 125 expected:<125> but was:<0>
test_vm_umount
Ran 2 tests.
FAILED (failures=1)
```
It looks like that `vm_test` is sensitive to an external file.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the main folder from kw (`cd kworkflow`);
2. Use `kw init`. You will see a file named `kworkflow.config`.
3. Run: `./run_tests.sh test vm_test`.
**Expected behavior**
Full pass.
**Desktop (please complete the following information):**
- OS: Ubuntu
- Version: Sid
- Bash Version: 5.0.17
| 1.0 | vm_test fail if we have kworkflow.config in the kw main directory - **Describe the bug**
When I was running `./run_test test vm_test`, I got the following error:
```
=========================================================
test_vm_mount
ASSERT:(1) - Expected 125 expected:<125> but was:<0>
test_vm_umount
Ran 2 tests.
FAILED (failures=1)
```
It looks like that `vm_test` is sensitive to an external file.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the main folder from kw (`cd kworkflow`);
2. Use `kw init`. You will see a file named `kworkflow.config`.
3. Run: `./run_tests.sh test vm_test`.
**Expected behavior**
Full pass.
**Desktop (please complete the following information):**
- OS: Ubuntu
- Version: Sid
- Bash Version: 5.0.17
| non_main | vm test fail if we have kworkflow config in the kw main directory describe the bug when i was running run test test vm test i got the following error test vm mount assert expected expected but was test vm umount ran tests failed failures it looks like that vm test is sensitive to an external file to reproduce steps to reproduce the behavior go to the main folder from kw cd kworkflow use kw init you will see a file named kworkflow config run run tests sh test vm test expected behavior full pass desktop please complete the following information os ubuntu version sid bash version | 0 |
4,703 | 24,270,821,562 | IssuesEvent | 2022-09-28 10:07:06 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | SEO | Duplicate title tags | engineering Maintain | Off the back of the Grassriots site audit, it has been identified that there are a number of issues with pages that have duplicate title tags.
Detail from Grassriots:
Pages in this set have duplicate / generalized title tags - in some cases English-language / non-translated. Duplicate <title> tags make it difficult for search engines to determine which of a website's pages is relevant for a specific search query, and which one should be prioritized in search results. Pages with duplicate titles have a lower chance of ranking well and are at risk of being banned. Moreover, identical <title> tags confuse users as to which webpage they should follow.
Link to the [audit](https://docs.google.com/spreadsheets/d/15HwgpxSYc4Zl809kcebAhLfLYXFuIk8ZP-Qvk3yVV8Q/edit#gid=627737737) | True | SEO | Duplicate title tags - Off the back of the Grassriots site audit, it has been identified that there are a number of issues with pages that have duplicate title tags.
Detail from Grassriots:
Pages in this set have duplicate / generalized title tags - in some cases English-language / non-translated. Duplicate <title> tags make it difficult for search engines to determine which of a website's pages is relevant for a specific search query, and which one should be prioritized in search results. Pages with duplicate titles have a lower chance of ranking well and are at risk of being banned. Moreover, identical <title> tags confuse users as to which webpage they should follow.
Link to the [audit](https://docs.google.com/spreadsheets/d/15HwgpxSYc4Zl809kcebAhLfLYXFuIk8ZP-Qvk3yVV8Q/edit#gid=627737737) | main | seo duplicate title tags off the back of the grassriots site audit it has been identified that there are a number of issues with pages that have duplicate title tags detail from grassriots pages in this set have duplicate generalized title tags in some case english language non translated duplicate tags make it difficult for search engines to determine which of a website s pages is relevant for a specific search query and which one should be prioritized in search results pages with duplicate titles have a lower chance of ranking well and are at risk of being banned moreover identical tags confuse users as to which webpage they should follow link to the | 1 |
549,887 | 16,101,522,798 | IssuesEvent | 2021-04-27 09:53:45 | googleapis/python-spanner | https://api.github.com/repos/googleapis/python-spanner | opened | Synthesis failed for python-spanner | autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate python-spanner. :broken_heart:
Please investigate and fix this issue within 5 business days. While it remains broken,
this library cannot be updated with changes to the python-spanner API, and the library grows
stale.
See https://github.com/googleapis/synthtool/blob/master/autosynth/TroubleShooting.md
for troubleshooting tips.
Here's the output from running `synth.py`:
```
l_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:77:1
DEBUG: Rule 'com_google_protoc_java_resource_names_plugin' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "4b714b35ee04ba90f560ee60e64c7357428efcb6b0f3a298f343f8ec2c6d4a5d"
DEBUG: Call stack for the definition of repository 'com_google_protoc_java_resource_names_plugin' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:234:1
DEBUG: Rule 'protoc_docs_plugin' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "33b387245455775e0de45869c7355cc5a9e98b396a6fc43b02812a63b75fee20"
DEBUG: Call stack for the definition of repository 'protoc_docs_plugin' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:258:1
DEBUG: Rule 'rules_python' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "48f7e716f4098b85296ad93f5a133baf712968c13fbc2fdf3a6136158fe86eac"
DEBUG: Call stack for the definition of repository 'rules_python' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:42:1
DEBUG: Rule 'gapic_generator_python' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "fe995def6873fcbdc2a8764ef4bce96eb971a9d1950fe9db9be442f3c64fb3b6"
DEBUG: Call stack for the definition of repository 'gapic_generator_python' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:278:1
DEBUG: Rule 'com_googleapis_gapic_generator_go' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "c0d0efba86429cee5e52baf838165b0ed7cafae1748d025abec109d25e006628"
DEBUG: Call stack for the definition of repository 'com_googleapis_gapic_generator_go' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:300:1
DEBUG: Rule 'gapic_generator_php' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "3dffc5c34a5f35666843df04b42d6ce1c545b992f9c093a777ec40833b548d86"
DEBUG: Call stack for the definition of repository 'gapic_generator_php' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:364:1
DEBUG: Rule 'gapic_generator_csharp' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "4db430cfb9293e4521ec8e8138f8095faf035d8e752cf332d227710d749939eb"
DEBUG: Call stack for the definition of repository 'gapic_generator_csharp' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:386:1
DEBUG: Rule 'gapic_generator_ruby' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "a14ec475388542f2ea70d16d75579065758acc4b99fdd6d59463d54e1a9e4499"
DEBUG: Call stack for the definition of repository 'gapic_generator_ruby' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:400:1
DEBUG: /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/rules_python/python/pip.bzl:61:5: DEPRECATED: the pip_repositories rule has been replaced with pip_install, please see rules_python 0.1 release notes
DEBUG: Rule 'bazel_skylib' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "1dde365491125a3db70731e25658dfdd3bc5dbdfd11b840b3e987ecf043c7ca0"
DEBUG: Call stack for the definition of repository 'bazel_skylib' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:35:1
Analyzing: target //google/spanner/v1:spanner-v1-py (1 packages loaded, 0 targets configured)
ERROR: /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/upb/bazel/upb_proto_library.bzl:257:29: aspect() got unexpected keyword argument 'incompatible_use_toolchain_transition'
ERROR: Analysis of target '//google/spanner/v1:spanner-v1-py' failed; build aborted: error loading package '@com_github_grpc_grpc//': in /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/com_github_grpc_grpc/bazel/grpc_build_system.bzl: Extension file 'bazel/upb_proto_library.bzl' has errors
INFO: Elapsed time: 0.252s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (2 packages loaded, 13 targets configured)
FAILED: Build did NOT complete successfully (2 packages loaded, 13 targets configured)
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/python-spanner/synth.py", line 30, in <module>
include_protos=True,
File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_bazel.py", line 52, in py_library
return self._generate_code(service, version, "python", False, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_bazel.py", line 204, in _generate_code
shell.run(bazel_run_args)
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['bazel', '--max_idle_secs=240', 'build', '//google/spanner/v1:spanner-v1-py']' returned non-zero exit status 1.
2021-04-27 02:53:43,740 autosynth [ERROR] > Synthesis failed
2021-04-27 02:53:43,740 autosynth [DEBUG] > Running: git reset --hard HEAD
HEAD is now at 7bddb81 chore(revert): revert preventing normalization (#318)
2021-04-27 02:53:43,746 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2021-04-27 02:53:43,751 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 356, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 191, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 336, in _inner_main
commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 68, in synthesize_loop
has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest)
File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch
synthesizer.synthesize(synth_log_path, self.environ)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/1d11e2cc-303c-4be4-a4bf-414e479e240b/targets/github%2Fsynthtool;config=default/tests;query=python-spanner;failed=false).
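For orientation, the failing call in `synth.py` (line 30 in the traceback above) follows the usual synthtool pattern; a minimal sketch is below — only `service`, `version`, and `include_protos=True` are evidenced by the log, so treat the rest as an assumption rather than code from this repository:
```python
# Minimal sketch of the generator invocation the traceback points at.
# The s.move() line is an assumption about the usual synthtool flow.
import synthtool as s
from synthtool import gcp

gapic = gcp.GAPICBazel()

library = gapic.py_library(
    service="spanner",    # bazel target //google/spanner/v1:spanner-v1-py
    version="v1",
    include_protos=True,  # matches synth.py:30 in the traceback
)

s.move(library)
```
The failure itself happens before any Python generation runs: the underlying `bazel build` of the target aborts while loading `@com_github_grpc_grpc`, as shown in the log above.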
| 1.0 | Synthesis failed for python-spanner - Hello! Autosynth couldn't regenerate python-spanner. :broken_heart:
Please investigate and fix this issue within 5 business days. While it remains broken,
this library cannot be updated with changes to the python-spanner API, and the library grows
stale.
See https://github.com/googleapis/synthtool/blob/master/autosynth/TroubleShooting.md
for troubleshooting tips.
Here's the output from running `synth.py`:
```
l_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:77:1
DEBUG: Rule 'com_google_protoc_java_resource_names_plugin' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "4b714b35ee04ba90f560ee60e64c7357428efcb6b0f3a298f343f8ec2c6d4a5d"
DEBUG: Call stack for the definition of repository 'com_google_protoc_java_resource_names_plugin' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:234:1
DEBUG: Rule 'protoc_docs_plugin' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "33b387245455775e0de45869c7355cc5a9e98b396a6fc43b02812a63b75fee20"
DEBUG: Call stack for the definition of repository 'protoc_docs_plugin' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:258:1
DEBUG: Rule 'rules_python' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "48f7e716f4098b85296ad93f5a133baf712968c13fbc2fdf3a6136158fe86eac"
DEBUG: Call stack for the definition of repository 'rules_python' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:42:1
DEBUG: Rule 'gapic_generator_python' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "fe995def6873fcbdc2a8764ef4bce96eb971a9d1950fe9db9be442f3c64fb3b6"
DEBUG: Call stack for the definition of repository 'gapic_generator_python' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:278:1
DEBUG: Rule 'com_googleapis_gapic_generator_go' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "c0d0efba86429cee5e52baf838165b0ed7cafae1748d025abec109d25e006628"
DEBUG: Call stack for the definition of repository 'com_googleapis_gapic_generator_go' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:300:1
DEBUG: Rule 'gapic_generator_php' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "3dffc5c34a5f35666843df04b42d6ce1c545b992f9c093a777ec40833b548d86"
DEBUG: Call stack for the definition of repository 'gapic_generator_php' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:364:1
DEBUG: Rule 'gapic_generator_csharp' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "4db430cfb9293e4521ec8e8138f8095faf035d8e752cf332d227710d749939eb"
DEBUG: Call stack for the definition of repository 'gapic_generator_csharp' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:386:1
DEBUG: Rule 'gapic_generator_ruby' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "a14ec475388542f2ea70d16d75579065758acc4b99fdd6d59463d54e1a9e4499"
DEBUG: Call stack for the definition of repository 'gapic_generator_ruby' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:400:1
DEBUG: /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/rules_python/python/pip.bzl:61:5: DEPRECATED: the pip_repositories rule has been replaced with pip_install, please see rules_python 0.1 release notes
DEBUG: Rule 'bazel_skylib' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "1dde365491125a3db70731e25658dfdd3bc5dbdfd11b840b3e987ecf043c7ca0"
DEBUG: Call stack for the definition of repository 'bazel_skylib' which is a http_archive (rule definition at /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
- <builtin>
- /home/kbuilder/.cache/synthtool/googleapis/WORKSPACE:35:1
Analyzing: target //google/spanner/v1:spanner-v1-py (1 packages loaded, 0 targets configured)
ERROR: /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/upb/bazel/upb_proto_library.bzl:257:29: aspect() got unexpected keyword argument 'incompatible_use_toolchain_transition'
ERROR: Analysis of target '//google/spanner/v1:spanner-v1-py' failed; build aborted: error loading package '@com_github_grpc_grpc//': in /home/kbuilder/.cache/bazel/_bazel_kbuilder/a732f932c2cbeb7e37e1543f189a2a73/external/com_github_grpc_grpc/bazel/grpc_build_system.bzl: Extension file 'bazel/upb_proto_library.bzl' has errors
INFO: Elapsed time: 0.252s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (2 packages loaded, 13 targets configured)
FAILED: Build did NOT complete successfully (2 packages loaded, 13 targets configured)
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/python-spanner/synth.py", line 30, in <module>
include_protos=True,
File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_bazel.py", line 52, in py_library
return self._generate_code(service, version, "python", False, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/gcp/gapic_bazel.py", line 204, in _generate_code
shell.run(bazel_run_args)
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['bazel', '--max_idle_secs=240', 'build', '//google/spanner/v1:spanner-v1-py']' returned non-zero exit status 1.
2021-04-27 02:53:43,740 autosynth [ERROR] > Synthesis failed
2021-04-27 02:53:43,740 autosynth [DEBUG] > Running: git reset --hard HEAD
HEAD is now at 7bddb81 chore(revert): revert preventing normalization (#318)
2021-04-27 02:53:43,746 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2021-04-27 02:53:43,751 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 356, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 191, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 336, in _inner_main
commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 68, in synthesize_loop
has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest)
File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch
synthesizer.synthesize(synth_log_path, self.environ)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/1d11e2cc-303c-4be4-a4bf-414e479e240b/targets/github%2Fsynthtool;config=default/tests;query=python-spanner;failed=false).
| non_main | synthesis failed for python spanner hello autosynth couldn t regenerate python spanner broken heart please investigate and fix this issue within business days while it remains broken this library cannot be updated with changes to the python spanner api and the library grows stale see for trouble shooting tips here s the output from running synth py l kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug rule com google protoc java resource names plugin indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository com google protoc java resource names plugin which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug rule protoc docs plugin indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository protoc docs plugin which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug rule rules python indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository rules python which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug rule gapic generator python indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository gapic generator python which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug rule com googleapis gapic generator go indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository com googleapis gapic generator go which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug rule gapic generator php indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository gapic generator php which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug rule gapic generator csharp indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository gapic generator csharp which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug rule gapic generator ruby indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository gapic generator ruby which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace debug home kbuilder 
cache bazel bazel kbuilder external rules python python pip bzl deprecated the pip repositories rule has been replaced with pip install please see rules python release notes debug rule bazel skylib indicated that a canonical reproducible form can be obtained by modifying arguments debug call stack for the definition of repository bazel skylib which is a http archive rule definition at home kbuilder cache bazel bazel kbuilder external bazel tools tools build defs repo http bzl home kbuilder cache synthtool googleapis workspace analyzing target google spanner spanner py packages loaded targets configured error home kbuilder cache bazel bazel kbuilder external upb bazel upb proto library bzl aspect got unexpected keyword argument incompatible use toolchain transition error analysis of target google spanner spanner py failed build aborted error loading package com github grpc grpc in home kbuilder cache bazel bazel kbuilder external com github grpc grpc bazel grpc build system bzl extension file bazel upb proto library bzl has errors info elapsed time info processes failed build did not complete successfully packages loaded targets configured failed build did not complete successfully packages loaded targets configured traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file home kbuilder cache synthtool python spanner synth py line in include protos true file tmpfs src github synthtool synthtool gcp gapic bazel py line in py library return self generate code service version python false kwargs file tmpfs src github synthtool synthtool gcp gapic bazel py line in generate code shell run bazel run args file tmpfs src github synthtool synthtool shell py line in run raise exc file tmpfs src github synthtool synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status autosynth synthesis failed autosynth running git reset hard head head is now at chore revert revert preventing normalization autosynth running git checkout autosynth switched to branch autosynth autosynth running git clean fdx removing pycache traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main commit count synthesize loop x multiple prs change pusher synthesizer file tmpfs src github synthtool 
autosynth synth py line in synthesize loop has changes toolbox synthesize version in new branch synthesizer youngest file tmpfs src github synthtool autosynth synth toolbox py line in synthesize version in new branch synthesizer synthesize synth log path self environ file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log | 0 |
346,973 | 10,422,361,666 | IssuesEvent | 2019-09-16 08:51:26 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.google.com - see bug description | browser-firefox-mobile engine-gecko priority-critical | <!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.google.com/search?q=xvideos&client=firefox-b-m&source=lnms&tbm=vid&sa=X&ved=0ahUKEwjhm_Lars7kAhVMxoUKHeqICrYQ_AUIBygC
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android 8.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: The video is not working
**Steps to Reproduce**:
I need help
[](https://webcompat.com/uploads/2019/9/5e22c587-7994-4274-b064-acefc8db06eb.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190909131947</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
Submitted in the name of `@Xvideos`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.google.com - see bug description - <!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.google.com/search?q=xvideos&client=firefox-b-m&source=lnms&tbm=vid&sa=X&ved=0ahUKEwjhm_Lars7kAhVMxoUKHeqICrYQ_AUIBygC
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android 8.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: The video is not working
**Steps to Reproduce**:
I need help
[](https://webcompat.com/uploads/2019/9/5e22c587-7994-4274-b064-acefc8db06eb.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190909131947</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
Submitted in the name of `@Xvideos`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description the video not working steps to reproduce i need help browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false submitted in the name of xvideos from with ❤️ | 0 |
551,178 | 16,164,507,705 | IssuesEvent | 2021-05-01 08:01:02 | containrrr/watchtower | https://api.github.com/repos/containrrr/watchtower | opened | 'Failed to send notification via shoutrrr' after recreating on another docker network bridge | Priority: Medium Status: Available Type: Bug | Hi, I had an issue back in Jan for a similar shoutrrr authentication error you helped me with (https://github.com/containrrr/watchtower/issues/754) and I've now got a similar but different error.
I recently moved my watchtower from one docker network to another, along with a number of other containers in the stack. Since doing so, I started getting the following error in the (portainer) log (I've replaced the sensitive data with the .env references):
`Failed to send notification via shoutrrr (url=smtp://root:[PASSWD]@[$SERVER]:587/?auth=Plain&encryption=None&fromaddress=[$FROM]&fromname=Watchtower&starttls=Yes&subject=Watchtower updates on b694eb3e1072&toaddresses=[$TO]&usehtml=No): error authenticating: 535 5.7.8 Authentication failed`
My docker-compose is as below:
```yaml
watchtower: #automatic container version monitoring and updating
  container_name: watchtower
  image: containrrr/watchtower:latest
  environment:
    - TZ=$TZ
    - DEBUG=true
    - WATCHTOWER_CLEANUP=true
    - WATCHTOWER_INCLUDE_RESTARTING=true
    - WATCHTOWER_INCLUDE_STOPPED=true
    - WATCHTOWER_NOTIFICATIONS=email
    - WATCHTOWER_NOTIFICATION_EMAIL_FROM=$FROM
    - WATCHTOWER_NOTIFICATION_EMAIL_TO=$TO
    - WATCHTOWER_NOTIFICATION_EMAIL_SERVER=$SERVER
    - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=$SERVER_PORT
    - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=$USER
    - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=$PASSWD
    - WATCHTOWER_NOTIFICATION_EMAIL_DELAY=2
    - WATCHTOWER_LABEL_ENABLE=true
  command: --interval 360
  restart: unless-stopped
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - Horus
```
Some things to note:
- I've not changed my compose file other than for networks (and it connects to the network no problem)
- I've not changed the smpt server details at all (email from/to, user, password, server, auth etc.)
- I've quadruple checked the .env
- I've made the variables explicit (rather than .env references) and throw up the same error
- It's putting `root:$PASSWD` into the first part of the URL rather than the actual email address - this is confusing to me
- When I first moved the container across, I got the first notification email that WT had started, but then nothing (container remained up) so I checked the logs and found the error, and confirmed it had updated a few containers without sending notification
I've gone over the previous thread to make sure I'm not missing something, as well as other support posts about authentication errors that I could find, but no joy so far.
Hoping you might have a suggestion. Thanks for any help!
| 1.0 | 'Failed to send notification via shoutrrr' after recreating on another docker network bridge - Hi, I had an issue back in Jan for a similar shoutrrr authentication error you helped me with (https://github.com/containrrr/watchtower/issues/754) and I've now got a similar but different error.
I recently moved my watchtower from one docker network to another, along with a number of other containers in the stack. Since doing so, I started getting the following error in the (portainer) log (I've replaced the sensitive data with the .env references):
`Failed to send notification via shoutrrr (url=smtp://root:[PASSWD]@[$SERVER]:587/?auth=Plain&encryption=None&fromaddress=[$FROM]&fromname=Watchtower&starttls=Yes&subject=Watchtower updates on b694eb3e1072&toaddresses=[$TO]&usehtml=No): error authenticating: 535 5.7.8 Authentication failed`
My docker-compose is as below:
```yaml
watchtower: #automatic container version monitoring and updating
  container_name: watchtower
  image: containrrr/watchtower:latest
  environment:
    - TZ=$TZ
    - DEBUG=true
    - WATCHTOWER_CLEANUP=true
    - WATCHTOWER_INCLUDE_RESTARTING=true
    - WATCHTOWER_INCLUDE_STOPPED=true
    - WATCHTOWER_NOTIFICATIONS=email
    - WATCHTOWER_NOTIFICATION_EMAIL_FROM=$FROM
    - WATCHTOWER_NOTIFICATION_EMAIL_TO=$TO
    - WATCHTOWER_NOTIFICATION_EMAIL_SERVER=$SERVER
    - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT=$SERVER_PORT
    - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=$USER
    - WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=$PASSWD
    - WATCHTOWER_NOTIFICATION_EMAIL_DELAY=2
    - WATCHTOWER_LABEL_ENABLE=true
  command: --interval 360
  restart: unless-stopped
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - Horus
```
Some things to note:
- I've not changed my compose file other than for networks (and it connects to the network no problem)
- I've not changed the smpt server details at all (email from/to, user, password, server, auth etc.)
- I've quadruple checked the .env
- I've made the variables explicit (rather than .env references) and throw up the same error
- It's putting `root:$PASSWD` into the first part of the URL rather than the actual email address - this is confusing to me
- When I first moved the container across, I got the first notification email that WT had started, but then nothing (container remained up) so I checked the logs and found the error, and confirmed it had updated a few containers without sending notification
I've gone over the previous thread to make sure I'm not missing something, as well as other support posts about authentication errors that I could find, but no joy so far.
Hoping you might have a suggestion. Thanks for any help!
| non_main | failed to send notification via shoutrrr after recreating on another docker network bridge hi i had an issue back in jan for a similar shoutrrr authentication error you helped me with and i ve now got a similar but different error i recently moved my watchtower from one docker network to another along with a number of other containers in the stack since doing so i started getting the following error in the portainer log i ve replaced the sensitive data with the env references failed to send notification via shoutrrr url smtp root auth plain encryption none fromaddress fromname watchtower starttls yes subject watchtower updates on toaddresses usehtml no error authenticating authentication failed my docker compose is as below watchtower automatic container version monitoring and updating container name watchtower image containrrr watchtower latest environment tz tz debug true watchtower cleanup true watchtower include restarting true watchtower include stopped true watchtower notifications email watchtower notification email from from watchtower notification email to to watchtower notification email server server watchtower notification email server port server port watchtower notification email server user user watchtower notification email server password passwd watchtower notification email delay watchtower label enable true command interval restart unless stopped volumes var run docker sock var run docker sock networks horus some things to note i ve not changed my compose file other than for networks and it connects to the network no problem i ve not changed the smpt server details at all email from to user password server auth etc i ve quadruple checked the env i ve made the variables explicit rather than env references and throw up the same error it s putting root passwd into the first part of the url rather than the actual email address this is confusing to me when i first moved the container across i got the first notification email that wt had started but then nothing container remained up so i checked the logs and found the error and confirmed it had updated a few containers without sending notification i ve gone over the previous thread to make sure i m not missing something plus other support posts with an authentication error that i could find but no joy so far hoping you might have a suggestion thanks for any help | 0 |
202,201 | 15,822,683,188 | IssuesEvent | 2021-04-05 22:50:31 | lenaschimmel/schnelltestrechner | https://api.github.com/repos/lenaschimmel/schnelltestrechner | opened | General ambiguity: infected vs. infectious | data documentation enhancement help wanted rapidtests-feedback ui | One ambiguity runs like a common thread through the site and through almost all available data: it is unclear whether the test is supposed to check whether someone is infected or infectious. The sensitivity differs for these two questions, and for some tests we even have the data available separately for both. In other cases the data are given separately for high and low Ct values, which means roughly the same thing.
Since the manufacturers' stated sensitivities are consistently much higher than practically all studies, we can assume by default that they refer only to "being infectious".
The site would have to make this distinction in many places in explanatory texts, UI, calculations, and data display. On the other hand, we currently still lack the data (or rather the structure in the data) to carry this distinction through consistently. | 1.0 | General ambiguity: infected vs. infectious - One ambiguity runs like a common thread through the site and through almost all available data: it is unclear whether the test is supposed to check whether someone is infected or infectious. The sensitivity differs for these two questions, and for some tests we even have the data available separately for both. In other cases the data are given separately for high and low Ct values, which means roughly the same thing.
Since the manufacturers' stated sensitivities are consistently much higher than practically all studies, we can assume by default that they refer only to "being infectious".
The site would have to make this distinction in many places in explanatory texts, UI, calculations, and data display. On the other hand, we currently still lack the data (or rather the structure in the data) to carry this distinction through consistently. | non_main | general ambiguity infected vs infectious one ambiguity runs like a common thread through the site and through almost all available data it is unclear whether the test is supposed to check whether someone is infected or infectious the sensitivity differs for these two questions and for some tests we even have the data available separately for both in other cases the data are given separately for high and low ct values which means roughly the same thing since the manufacturers stated sensitivities are consistently much higher than practically all studies we can assume by default that they refer only to being infectious the site would have to make this distinction in many places in explanatory texts ui calculations and data display on the other hand we currently still lack the data or rather the structure in the data to carry this distinction through consistently | 0
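To illustrate why the distinction discussed in the issue above matters for a calculator like this, here is a small sketch with purely invented numbers (the sensitivities, specificity, and prevalences are hypothetical, not taken from any test data):
```python
# Purely hypothetical numbers: the same rapid test evaluated against two
# different questions ("infected at all" vs. "currently infectious").
def negative_predictive_value(sensitivity, specificity, prevalence):
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

SPECIFICITY = 0.99

# Invented: lower sensitivity w.r.t. "infected", higher w.r.t. "infectious",
# and a smaller share of people who are currently infectious.
npv_infected = negative_predictive_value(0.60, SPECIFICITY, prevalence=0.01)
npv_infectious = negative_predictive_value(0.90, SPECIFICITY, prevalence=0.005)

print(f"NPV w.r.t. 'infected':   {npv_infected:.4f}")
print(f"NPV w.r.t. 'infectious': {npv_infectious:.4f}")
```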
49,687 | 7,527,566,897 | IssuesEvent | 2018-04-13 17:31:35 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | Passing an ARN to the value of a data_resource for creating an aws_cloudtrail generates exception | bug documentation service/cloudtrail | ### Terraform Version
Terraform v0.11.2
+ provider.archive v1.0.0
+ provider.aws v1.13.0
### Affected Resource(s)
- aws_cloudtrail
### Terraform Configuration Files
```hcl
data "aws_s3_bucket" "data_bucket" {
  bucket = "data-bucket"
}
resource "aws_cloudtrail" "trail" {
  name                          = "a-name"
  s3_bucket_name                = "a-bucket"
  s3_key_prefix                 = "dev"
  include_global_service_events = false
  event_selector {
    read_write_type           = "All"
    include_management_events = false
    data_resource {
      type   = "AWS::S3::Object"
      values = ["${data.aws_s3_bucket.data_bucket.arn}"]
    }
  }
}
```
### Expected Behavior
Should apply correctly and configure `data-bucket` for data events in CloudTrail
### Actual Behavior
```
* aws_cloudtrail.tral: Error set event selector on CloudTrail (secrets): InvalidEventSelectorsException: Value arn:aws:s3:::data-bucket for DataResources.Values is invalid.
status code: 400, request id: 3d4a1c48-2e26-4e4f-852e-573b902d18ed
```
If I add a trailing `/` to the bucket arn like this `values = ["${data.aws_s3_bucket.data_bucket.arn}/"]` it works fine
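The same trailing-slash requirement shows up outside Terraform as well; as a hedged illustration (not taken from the issue), the underlying `PutEventSelectors` call would look roughly like this with boto3, selecting the whole bucket via its ARN plus a trailing slash:
```python
# Hedged sketch: the equivalent CloudTrail API call via boto3. Trail and
# bucket names are taken from the issue; everything else is illustrative.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="a-name",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": False,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Note the trailing slash; "arn:aws:s3:::data-bucket"
                    # alone is rejected with InvalidEventSelectorsException.
                    "Values": ["arn:aws:s3:::data-bucket/"],
                }
            ],
        }
    ],
)
```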
### Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
1. `terraform plan -out tplan` (Plan is ok)
2. `terraform apply tplan` (fails here)
| 1.0 | Passing an ARN to the value of a data_resource for creating an aws_cloudtrail generates exception - ### Terraform Version
Terraform v0.11.2
+ provider.archive v1.0.0
+ provider.aws v1.13.0
### Affected Resource(s)
- aws_cloudtrail
### Terraform Configuration Files
```hcl
data "aws_s3_bucket" "data_bucket" {
  bucket = "data-bucket"
}
resource "aws_cloudtrail" "trail" {
  name                          = "a-name"
  s3_bucket_name                = "a-bucket"
  s3_key_prefix                 = "dev"
  include_global_service_events = false
  event_selector {
    read_write_type           = "All"
    include_management_events = false
    data_resource {
      type   = "AWS::S3::Object"
      values = ["${data.aws_s3_bucket.data_bucket.arn}"]
    }
  }
}
```
### Expected Behavior
Should apply correctly and configure `data-bucket` for data events in CloudTrail
### Actual Behavior
```
* aws_cloudtrail.tral: Error set event selector on CloudTrail (secrets): InvalidEventSelectorsException: Value arn:aws:s3:::data-bucket for DataResources.Values is invalid.
status code: 400, request id: 3d4a1c48-2e26-4e4f-852e-573b902d18ed
```
If I add a trailing `/` to the bucket arn like this `values = ["${data.aws_s3_bucket.data_bucket.arn}/"]` it works fine
### Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
1. `terraform plan -out tplan` (Plan is ok)
2. `terraform apply tplan` (fails here)
| non_main | passing an arn to the value of a data resource for creating an aws cloudtrail generates exception terraform version terraform provider archive provider aws affected resource s aws cloudtrail terraform configuration files hcl data aws bucket data bucket bucket data bucket resource aws cloudtrail trail name a name bucket name a bucket key prefix dev include global service events false event selector read write type all include management events false data resource type aws object values expected behavior should apply correctly and configure data bucket for data events in cloudtrail actual behavior aws cloudtrail tral error set event selector on cloudtrail secrets invalideventselectorsexception value arn aws data bucket for dataresources values is invalid status code request id if i add a trailing to the bucket arn like this values it works fine steps to reproduce please list the steps required to reproduce the issue for example terraform plan out tplan plan is ok terrafrom apply tplan fails here | 0 |
5,450 | 27,284,624,255 | IssuesEvent | 2023-02-23 12:38:12 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Demo user should not have the `admin` role, they should have `db manager` access. | type: bug work: backend status: ready restricted: maintainers | ## Description
* Our live demo has a `demo` user with `admin` role.
* This allows users to create new users, perform upgrades, etc.
* We should convert demo user to the standard role, and provide them with a database manager access. | True | Demo user should not have the `admin` role, they should have `db manager` access. - ## Description
* Our live demo has a `demo` user with `admin` role.
* This allows users to create new users, perform upgrades, etc.
* We should convert demo user to the standard role, and provide them with a database manager access. | main | demo user should not have the admin role they should have db manager access description our live demo has a demo user with admin role this allows users to be able to create new users and perform upgrades etc we should convert demo user to the standard role and provide them with a database manager access | 1 |
312,752 | 26,873,962,764 | IssuesEvent | 2023-02-04 20:21:36 | Makuna/NeoPixelBus | https://api.github.com/repos/Makuna/NeoPixelBus | reopened | Using NeoRgbwUcs8904Feature with NeoPixelBrightnessBus causes heap corruption | bug under test | **Describe the bug**
Reporting this on behalf of the [WLED](https://github.com/Aircoookie/WLED) project, where we want to add support for UCS890x chipsets. Even though WLED does not yet use 16-bit internal logic, we adjusted the output to be aware of the 16-bit UCS LED chip.
UCS8903 seems to work correctly (using `NeoRgbwUcs8903Feature`), but as soon as we create a `NeoPixelBrightnessBus` object with `NeoRgbwUcs8904Feature`, the heap gets corrupted and abort() is called, causing the ESP32 to reboot. Technically it (the abort()) can happen on another core, not within NeoPixelBus (or NeoPixelBrightnessBus).
**To Reproduce**
WLED allows dynamic selection of LED types so we use operator new to create appropriate bus.
The logic goes like so:
```c++
void *busptr = new NeoPixelBrightnessBus<NeoRgbwUcs8904Feature, NeoEsp32RmtNWs2812xMethod>(len, pin, channel);
(static_cast<NeoPixelBrightnessBus<NeoRgbwUcs8904Feature, NeoEsp32RmtNWs2812xMethod>*>(busptr))->Begin();
...
```
We use similar approach for all other features and methods and so far none caused heap corruption.
**Expected behavior**
No heap corruption.
**Development environment (please complete the following information):**
- OS: [macOS 10.14]
- Build Environment [MS VSC+PIO]
- Board target [Espressif ESP32]
- Library version [v2.6.9]
**Minimal Sketch that reproduced the problem:**
See above.
**Additional context**
There is no need to call any other NeoPixelBrightnessBus method as heap gets corrupted within Begin() method (or so it seems). I have not yet had the time to debug NeoPixelBus internals since it is beyond my expertise. | 1.0 | Using NeoRgbwUcs8904Feature with NeoPixelBrightnessBus causes heap corruption - **Describe the bug**
Reporting this on behalf of the [WLED](https://github.com/Aircoookie/WLED) project, where we want to add support for UCS890x chipsets. Even though WLED does not yet use 16-bit internal logic, we adjusted the output to be aware of the 16-bit UCS LED chip.
UCS8903 seems to work correctly (using `NeoRgbwUcs8903Feature`), but as soon as we create a `NeoPixelBrightnessBus` object with `NeoRgbwUcs8904Feature`, the heap gets corrupted and abort() is called, causing the ESP32 to reboot. Technically it (the abort()) can happen on another core, not within NeoPixelBus (or NeoPixelBrightnessBus).
**To Reproduce**
WLED allows dynamic selection of LED types so we use operator new to create appropriate bus.
The logic goes like so:
```c++
void *busptr = new NeoPixelBrightnessBus<NeoRgbwUcs8904Feature, NeoEsp32RmtNWs2812xMethod>(len, pin, channel);
(static_cast<NeoPixelBrightnessBus<NeoRgbwUcs8904Feature, NeoEsp32RmtNWs2812xMethod>*>(busptr))->Begin();
...
```
We use similar approach for all other features and methods and so far none caused heap corruption.
**Expected behavior**
No heap corruption.
**Development environment (please complete the following information):**
- OS: [macOS 10.14]
- Build Environment [MS VSC+PIO]
- Board target [Espressif ESP32]
- Library version [v2.6.9]
**Minimal Sketch that reproduced the problem:**
See above.
**Additional context**
There is no need to call any other NeoPixelBrightnessBus method as heap gets corrupted within Begin() method (or so it seems). I have not yet had the time to debug NeoPixelBus internals since it is beyond my expertise. | non_main | using with neopixelbrightnessbus causes heap corruption describe the bug reporting this on behalf of project where we want to add support for chipsets even though wled does not yet use bit internal logic we adjusted output to be aware of bit ucs led chip seems to work correctly using but as soon as we create a neopixelbrightnessbus object with heap gets corrupted and abort is called causing to reboot technically it abort can happen on another core not within neopixelbus or neopixelbrightnessbus to reproduce wled allows dynamic selection of led types so we use operator new to create appropriate bus the logic goes like so c void busptr new neopixelbrightnessbus len pin channel static cast busptr begin we use similar approach for all other features and methods and so far none caused heap corruption expected behavior no heap corruption development environment please complete the following information os build environment board target library version minimal sketch that reproduced the problem see above additional context there is no need to call any other neopixelbrightnessbus method as heap gets corrupted within begin method or so it seems i have not yet had the time to debug neopixelbus internals since it is beyond my expertise | 0 |
5,403 | 27,115,680,781 | IssuesEvent | 2023-02-15 18:22:30 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | closed | Disable run coding algorithms button when busy/unavailable | Type: Maintainance Language: Python Domain: Frontend Status: Inactive | **What is the expected state?**
I expect that the button to initiate a coding algorithm run is disabled if the supporting backend services are busy or unavailable for some reason. I expect that if the button is disabled, some message indicating why is shown at some point.
**What is the actual state?**
I am able to click the "Run coding algorithms" button regardless of backend/celery status
**Relevant context**
- `va_explorer/templates/home/index.html`
- `va_explorer/va_data_management/tasks.py`
- `va_explorer/va_data_management/views.py`
- `va_explorer/va_data_management/utils/coding.py` | True | Disable run coding algorithms button when busy/unavailable - **What is the expected state?**
I expect that the button to initiate a coding algorithm run is disabled if the supporting backend services are busy or unavailable for some reason. I expect that if the button is disabled, some message indicating why is shown at some point.
**What is the actual state?**
I am able to click the "Run coding algorithms" button regardless of backend/celery status
**Relevant context**
- `va_explorer/templates/home/index.html`
- `va_explorer/va_data_management/tasks.py`
- `va_explorer/va_data_management/views.py`
- `va_explorer/va_data_management/utils/coding.py` | main | disable run coding algorithms button when busy unavailable what is the expected state i expect that the button to initiate a coding algorithm run is disabled if the supporting backend services are busy of unavailable for some reason i expect if the button is disabled that some message indicating why is shown at some point what is the actual state i am able to click the run coding algorithms button regardless of backend celery status relevant context va explorer templates home index html va explorer va data management tasks py va explorer va data management views py va explorer va data management utils coding py | 1 |
233,116 | 17,851,629,234 | IssuesEvent | 2021-09-04 06:59:39 | ddradar/ddradar | https://api.github.com/repos/ddradar/ddradar | closed | Create manual for DDRadar | documentation:memo: enhancement:speech_balloon: | ## Tasks
- [ ] User Registration / ユーザー登録
- [ ] Input Score / スコア登録
- [ ] Import Scores / スコアインポート
- [ ] Groove Radar / グルーブレーダー | 1.0 | Create manual for DDRadar - ## Tasks
- [ ] User Registration / ユーザー登録
- [ ] Input Score / スコア登録
- [ ] Import Scores / スコアインポート
- [ ] Groove Radar / グルーブレーダー | non_main | create manual for ddradar tasks user registration ユーザー登録 input score スコア登録 import scores スコアインポート groove radar グルーブレーダー | 0 |
4,528 | 23,539,892,274 | IssuesEvent | 2022-08-20 08:03:21 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | closed | Error: expected an identifier or string, found '(' in kv.wit for make build-c example | 🐛 bug 🚧 maintainer issue | **Description of the bug**
I wanted to work from a [devcontainer](https://github.com/KaiWalter/spiderlightning/blob/main/.devcontainer/devcontainer.json) so that I have a clear baseline on the dependencies for this project and potentially can help with [Development Environment Setup](https://github.com/deislabs/spiderlightning/blob/eb2b867c0347bfae529990d2ac69446e9a6bee41/CONTRIBUTING.md#development-environment-setup).
After bringing up the container, when I run the block
```
$ make install-deps # installs the WASI-SDK
$ make build # builds SpiderLightning/Slight
$ make build-c # builds our c example
```
I get
```
make -C examples/multi_capability-demo-clang/ clean
make[1]: Entering directory '/workspaces/spiderlightning/examples/multi_capability-demo-clang'
rm -rf bindings/
mkdir bindings/
make[1]: Leaving directory '/workspaces/spiderlightning/examples/multi_capability-demo-clang'
make -C examples/multi_capability-demo-clang/ bindings
make[1]: Entering directory '/workspaces/spiderlightning/examples/multi_capability-demo-clang'
wit-bindgen c --import ../../wit/kv.wit --out-dir bindings/
Error: expected an identifier or string, found '('
--> ../../wit/kv.wit:7:23
|
7 | static open: function(name: string) -> expected<kv, error>
| ^
make[1]: *** [Makefile:19: bindings] Error 1
make[1]: Leaving directory '/workspaces/spiderlightning/examples/multi_capability-demo-clang'
make: *** [Makefile:115: build-c] Error 2
```
Maybe one of my dependencies in the devcontainer still is not correct.
**To Reproduce**
spin up environment e.g. in a GitHub Codespace from fork <https://github.com/KaiWalter/spiderlightning>
**Additional context**
cmake version 3.22.1
wit-bindgen-cli 0.2.0
`/etc/os-release`:
```
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
``` | True | Error: expected an identifier or string, found '(' in kv.wit for make build-c example - **Description of the bug**
I wanted to work from a [devcontainer](https://github.com/KaiWalter/spiderlightning/blob/main/.devcontainer/devcontainer.json) so that I have a clear baseline on the dependencies for this project and potentially can help with [Development Environment Setup](https://github.com/deislabs/spiderlightning/blob/eb2b867c0347bfae529990d2ac69446e9a6bee41/CONTRIBUTING.md#development-environment-setup).
After bringing up the container, when I run the block
```
$ make install-deps # installs the WASI-SDK
$ make build # builds SpiderLightning/Slight
$ make build-c # builds our c example
```
I get
```
make -C examples/multi_capability-demo-clang/ clean
make[1]: Entering directory '/workspaces/spiderlightning/examples/multi_capability-demo-clang'
rm -rf bindings/
mkdir bindings/
make[1]: Leaving directory '/workspaces/spiderlightning/examples/multi_capability-demo-clang'
make -C examples/multi_capability-demo-clang/ bindings
make[1]: Entering directory '/workspaces/spiderlightning/examples/multi_capability-demo-clang'
wit-bindgen c --import ../../wit/kv.wit --out-dir bindings/
Error: expected an identifier or string, found '('
--> ../../wit/kv.wit:7:23
|
7 | static open: function(name: string) -> expected<kv, error>
| ^
make[1]: *** [Makefile:19: bindings] Error 1
make[1]: Leaving directory '/workspaces/spiderlightning/examples/multi_capability-demo-clang'
make: *** [Makefile:115: build-c] Error 2
```
Maybe one of my dependencies in the devcontainer still is not correct.
**To Reproduce**
spin up environment e.g. in a GitHub Codespace from fork <https://github.com/KaiWalter/spiderlightning>
**Additional context**
cmake version 3.22.1
wit-bindgen-cli 0.2.0
`/etc/os-release`:
```
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
``` | main | error expected an identifier or string found in kv wit for make build c example description of the bug i wanted to work from a so that i have a clear baseline on the dependencies for this project and potentially can help with after bringing up the container when i run the block make install deps installs the wasi sdk make build builds spiderlightning slight make build c builds our c example i get make c examples multi capability demo clang clean make entering directory workspaces spiderlightning examples multi capability demo clang rm rf bindings mkdir bindings make leaving directory workspaces spiderlightning examples multi capability demo clang make c examples multi capability demo clang bindings make entering directory workspaces spiderlightning examples multi capability demo clang wit bindgen c import wit kv wit out dir bindings error expected an identifier or string found wit kv wit static open function name string expected make error make leaving directory workspaces spiderlightning examples multi capability demo clang make error maybe one of my dependencies in the devcontainer still is not correct to reproduce spin up environment e g in a github codespace from fork additional context cmake version wit bindgen cli etc os release pretty name ubuntu lts name ubuntu version id version lts jammy jellyfish version codename jammy id ubuntu id like debian | 1 |
1,639 | 6,572,661,956 | IssuesEvent | 2017-09-11 04:11:14 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | npm "fs" package installation is not idempotent | affects_2.2 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
npm
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
installing "fs" package with npm module is always "changed" status
##### STEPS TO REPRODUCE
Try to install "fs" twice.
In example below, express is already installed.
```
root@g25:~# ansible localhost -m npm -a "name=fs global=yes executable=/usr/bin/npm state=present"
localhost | SUCCESS => {
"changed": true
}
root@g25:~# ansible localhost -m npm -a "name=fs global=yes executable=/usr/bin/npm state=present"
localhost | SUCCESS => {
"changed": true
}
root@g25:~# ansible localhost -m npm -a "name=express global=yes executable=/usr/bin/npm state=present"
localhost | SUCCESS => {
"changed": false
}
```
| True | npm "fs" package installation is not idempotent - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
npm
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
installing "fs" package with npm module is always "changed" status
##### STEPS TO REPRODUCE
Try to install "fs" twice.
In example below, express is already installed.
```
root@g25:~# ansible localhost -m npm -a "name=fs global=yes executable=/usr/bin/npm state=present"
localhost | SUCCESS => {
"changed": true
}
root@g25:~# ansible localhost -m npm -a "name=fs global=yes executable=/usr/bin/npm state=present"
localhost | SUCCESS => {
"changed": true
}
root@g25:~# ansible localhost -m npm -a "name=express global=yes executable=/usr/bin/npm state=present"
localhost | SUCCESS => {
"changed": false
}
```
| main | npm fs package installation is not idempotent issue type bug report component name npm ansible version ansible config file configured module search path default w o overrides os environment n a summary installing fs package with npm module is always changed status steps to reproduce try to install fs twice in example below express is already installed root ansible localhost m npm a name fs global yes executable usr bin npm state present localhost success changed true root ansible localhost m npm a name fs global yes executable usr bin npm state present localhost success changed true root ansible localhost m npm a name express global yes executable usr bin npm state present localhost success changed false | 1 |
3,351 | 12,992,812,502 | IssuesEvent | 2020-07-23 07:41:23 | megabit-labs/pathways | https://api.github.com/repos/megabit-labs/pathways | closed | Follow the design pattern in the search page | maintainability | Any page with a given url pattern should be rendered as a component from the _screens_ directory (just like the `CreateEditPathway` screen). This makes it easier to organise and navigate code.
Moreover, any components that will not be re-used should be put inside their parent component's directory. For example, the `StepEditArea` component directory contains the the `StepContentEdit` and the `StepDataEdit` components, since these two components are not used anywhere else. Again, this makes the code easier to navigate.
Right now, the search page does not follow this structure, but it should. | True | Follow the design pattern in the search page - Any page with a given url pattern should be rendered as a component from the _screens_ directory (just like the `CreateEditPathway` screen). This makes it easier to organise and navigate code.
Moreover, any components that will not be re-used should be put inside their parent component's directory. For example, the `StepEditArea` component directory contains the the `StepContentEdit` and the `StepDataEdit` components, since these two components are not used anywhere else. Again, this makes the code easier to navigate.
Right now, the search page does not follow this structure, but it should. | main | follow the design pattern in the search page any page with a given url pattern should be rendered as a component from the screens directory just like the createeditpathway screen this makes it easier to organise and navigate code moreover any components that will not be re used should be put inside their parent component s directory for example the stepeditarea component directory contains the the stepcontentedit and the stepdataedit components since these two components are not used anywhere else again this makes the code easier to navigate right now the search page does not follow this structure but it should | 1 |
8,505 | 11,686,156,911 | IssuesEvent | 2020-03-05 10:22:26 | prisma/prisma2 | https://api.github.com/repos/prisma/prisma2 | opened | Put `@prisma/sdk` version in lockstep with `prisma2` | kind/improvement process/candidate | And at the same time lock down the version of the binaries delivered with that package, the same way we do it in `prisma2`. The reason is very simple: Tools that depend on `@prisma/sdk` right now need a complex setup to pin the binary version.
| 1.0 | Put `@prisma/sdk` version in lockstep with `prisma2` - And at the same time lock down the version of the binaries delivered with that package, the same way we do it in `prisma2`. The reason is very simple: Tools that depend on `@prisma/sdk` right now need a complex setup to pin the binary version.
| non_main | put prisma sdk version in lockstep with and at the same time lock down the version of the binaries delivered with that package the same way we do it in the reason is very simple tools that depend on prisma sdk right now need a complex setup to pin the binary version | 0 |
315,861 | 27,112,214,295 | IssuesEvent | 2023-02-15 16:02:43 | input-output-hk/anti-diffs | https://api.github.com/repos/input-output-hk/anti-diffs | opened | Add assertions to code for testing invariants. | enhancement testing diff-containers | We should add assertions for testing invariants (like `positivity` and `normality`) to the code in `diff-containers`. We can then enable these assertions in the test suite, which then warn us if definitions are breaking these invariants. Production code should disable these assertions, because they are probably expensive to compute. | 1.0 | Add assertions to code for testing invariants. - We should add assertions for testing invariants (like `positivity` and `normality`) to the code in `diff-containers`. We can then enable these assertions in the test suite, which then warn us if definitions are breaking these invariants. Production code should disable these assertions, because they are probably expensive to compute. | non_main | add assertions to code for testing invariants we should add assertions for testing invariants like positivity and normality to the code in diff containers we can then enable these assertions in the test suite which then warn us if definitions are breaking these invariants production code should disable these assertions because they are probably expensive to compute | 0 |
4,494 | 23,412,700,873 | IssuesEvent | 2022-08-12 19:27:06 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Passing CodeUri as a parameter to an application | area/package type/question maintainer/need-response | Hello,
My team has recently started trying to use SAM, specifically with Applications but we've hit a problem and I can't find anyone else talk about it anywhere else. There are a few issues with passing a CodeUri to a function as a !Ref, which is ok - however our issue is slightly different.
We are wondering if it's possible to pass a local CodeUri (`hello-world/`) as a parameter into a local application, which then uses that parameter inside a lambda created in that application?
We have tried so far to do exactly that, however when we run `sam package`, the CodeUri does not get updated to a valid S3 uri as it does with a function outside an application.
So basically running `sam package` will make the CodeUri inside the generated application template become `CodeUri` instead of the local path we passed in.
Is there any way to achieve this dynamic application lambda code idea without having to zip and upload our functions before running `sam package` ?
The code below shows what we are currently trying to do:
_template.yaml_
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
sam-app
Sample SAM Template for sam-app
Globals:
Function:
Timeout: 3
Resources:
hello-world:
Type: AWS::Serverless::Application
Properties:
Location: ./application.yaml
Parameters:
CodeUri: hello-world/
```
_application.yaml_
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
RC-sam-app
RC base SAM App
Globals:
Function:
Timeout: 3
Parameters:
EventName:
Type: String
Default: DefaultEvent
Handler:
Type: String
Default: app.lambdaHandler
CodeUri:
Type: String
BatchSize:
Type: Number
Default: 1
Resources:
hello-world:
Type: AWS::Serverless::Function
Properties:
CodeUri: !Ref CodeUri
Handler: !Ref Handler
Runtime: nodejs8.10
Role: <rolearn>
```
_packaged.yaml_ (generated after `sam package`)
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'sam-app2
Sample SAM Template for sam-app
'
Globals:
Function:
Timeout: 3
Resources:
hello-world:
Type: AWS::Serverless::Application
Properties:
Location: https://s3.amazonaws.com/<locationPath>.template
Parameters:
CodeUri: hello-world/
```
_application.template_ (pushed up to s3)
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'RC-sam-app
RC base SAM App
'
Globals:
Function:
Timeout: 3
Parameters:
EventName:
Type: String
Default: DefaultEvent
Handler:
Type: String
Default: app.lambdaHandler
CodeUri:
Type: String
BatchSize:
Type: Number
Default: 1
Resources:
hello-world:
Type: AWS::Serverless::Function
Properties:
CodeUri:
// So this cost hasn't been zipped and pushed to S3
Ref: CodeUri
Handler:
Ref: Handler
Runtime: nodejs8.10
Role: <roleArn>
Environment:
Variables:
eventName:
Ref: EventName
```
Thank you | True | Passing CodeUri as a parameter to an application - Hello,
My team has recently started trying to use SAM, specifically with Applications but we've hit a problem and I can't find anyone else talk about it anywhere else. There are a few issues with passing a CodeUri to a function as a !Ref, which is ok - however our issue is slightly different.
We are wondering if it's possible to pass a local CodeUri (`hello-world/`) as a parameter into a local application, which then uses that parameter inside a lambda created in that application?
We have tried so far to do exactly that, however when we run `sam package`, the CodeUri does not get updated to a valid S3 uri as it does with a function outside an application.
So basically running `sam package` will make the CodeUri inside the generated application template become `CodeUri` instead of the local path we passed in.
Is there any way to achieve this dynamic application lambda code idea without having to zip and upload our functions before running `sam package` ?
The code below shows what we are currently trying to do:
_template.yaml_
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
sam-app
Sample SAM Template for sam-app
Globals:
Function:
Timeout: 3
Resources:
hello-world:
Type: AWS::Serverless::Application
Properties:
Location: ./application.yaml
Parameters:
CodeUri: hello-world/
```
_application.yaml_
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
RC-sam-app
RC base SAM App
Globals:
Function:
Timeout: 3
Parameters:
EventName:
Type: String
Default: DefaultEvent
Handler:
Type: String
Default: app.lambdaHandler
CodeUri:
Type: String
BatchSize:
Type: Number
Default: 1
Resources:
hello-world:
Type: AWS::Serverless::Function
Properties:
CodeUri: !Ref CodeUri
Handler: !Ref Handler
Runtime: nodejs8.10
Role: <rolearn>
```
_packaged.yaml_ (generated after `sam package`)
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'sam-app2
Sample SAM Template for sam-app
'
Globals:
Function:
Timeout: 3
Resources:
hello-world:
Type: AWS::Serverless::Application
Properties:
Location: https://s3.amazonaws.com/<locationPath>.template
Parameters:
CodeUri: hello-world/
```
_application.template_ (pushed up to s3)
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'RC-sam-app
RC base SAM App
'
Globals:
Function:
Timeout: 3
Parameters:
EventName:
Type: String
Default: DefaultEvent
Handler:
Type: String
Default: app.lambdaHandler
CodeUri:
Type: String
BatchSize:
Type: Number
Default: 1
Resources:
hello-world:
Type: AWS::Serverless::Function
Properties:
CodeUri:
// So this cost hasn't been zipped and pushed to S3
Ref: CodeUri
Handler:
Ref: Handler
Runtime: nodejs8.10
Role: <roleArn>
Environment:
Variables:
eventName:
Ref: EventName
```
Thank you | main | passing codeuri as a parameter to an application hello my team has recently started trying to use sam specifically with applications but we ve hit a problem and i can t find anyone else talk about it anywhere else there are a few issues with passing a codeuri to a function as a ref which is ok however our issue is slightly different we are wondering if it s possible to pass a local codeuri hello world as a parameter into a local application which then uses that parameter inside a lambda created in that application we have tried so far to do exactly that however when we run sam package the codeuri does not get updated to a valid uri as it does with a function outside an application so basically running sam package will make the codeuri inside the generated application template become codeuri instead of the local path we passed in is there any way to achieve this dynamic application lambda code idea without having to zip and upload our functions before running sam package the code below shows what we are currently trying to do template yaml awstemplateformatversion transform aws serverless description sam app sample sam template for sam app globals function timeout resources hello world type aws serverless application properties location application yaml parameters codeuri hello world application yaml awstemplateformatversion transform aws serverless description rc sam app rc base sam app globals function timeout parameters eventname type string default defaultevent handler type string default app lambdahandler codeuri type string batchsize type number default resources hello world type aws serverless function properties codeuri ref codeuri handler ref handler runtime role packaged yaml generated after sam package awstemplateformatversion transform aws serverless description sam sample sam template for sam app globals function timeout resources hello world type aws serverless application properties location parameters codeuri hello world application template pushed up to awstemplateformatversion transform aws serverless description rc sam app rc base sam app globals function timeout parameters eventname type string default defaultevent handler type string default app lambdahandler codeuri type string batchsize type number default resources hello world type aws serverless function properties codeuri so this cost hasn t been zipped and pushed to ref codeuri handler ref handler runtime role environment variables eventname ref eventname thank you | 1 |
5,289 | 26,730,827,215 | IssuesEvent | 2023-01-30 04:14:07 | Windham-High-School/CubeServer | https://api.github.com/repos/Windham-High-School/CubeServer | closed | Make config.py in the repo root by default | maintainability | `config.py`, from CubeServer-common, should begin in the root of the repository with .env, and be copied to its place by a `configure` bash script. | True | Make config.py in the repo root by default - `config.py`, from CubeServer-common, should begin in the root of the repository with .env, and be copied to its place by a `configure` bash script. | main | make config py in the repo root by default config py from cubeserver common should begin in the root of the repository with env and be copied to its place by a configure bash script | 1 |
4,517 | 23,491,025,974 | IssuesEvent | 2022-08-17 18:45:10 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Pick up the IAM Role from Template | type/feature maintainer/need-response | Newish to all this, but my cloudformation templates typically build the roles for the lambda to execute under. The roles don't exist outside of this scope. Trying to use the sam local invoke, unfortunately it doesn't seem to reach out to that role. The steps therefore to test locally now seem more involved / cumbersome. What advise would anyone have in this regard? I would have thought its a common pattern to support and probably too much to ask / out of scope?
Having said that, I suppose if it were actually picking up my aws credentials instead of the IAM role of the server I'm running on I wouldn't have an issue!? Not sure what's going on now, and why its using the server IAM profile. | True | Pick up the IAM Role from Template - Newish to all this, but my cloudformation templates typically build the roles for the lambda to execute under. The roles don't exist outside of this scope. Trying to use the sam local invoke, unfortunately it doesn't seem to reach out to that role. The steps therefore to test locally now seem more involved / cumbersome. What advise would anyone have in this regard? I would have thought its a common pattern to support and probably too much to ask / out of scope?
Having said that, I suppose if it were actually picking up my aws credentials instead of the IAM role of the server I'm running on I wouldn't have an issue!? Not sure what's going on now, and why its using the server IAM profile. | main | pick up the iam role from template newish to all this but my cloudformation templates typically build the roles for the lambda to execute under the roles don t exist outside of this scope trying to use the sam local invoke unfortunately it doesn t seem to reach out to that role the steps therefore to test locally now seem more involved cumbersome what advise would anyone have in this regard i would have thought its a common pattern to support and probably too much to ask out of scope having said that i suppose if it were actually picking up my aws credentials instead of the iam role of the server i m running on i wouldn t have an issue not sure what s going on now and why its using the server iam profile | 1 |
2,860 | 10,270,778,543 | IssuesEvent | 2019-08-23 12:34:51 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | reopened | Do not use .Equals, ==, !=, <=, <, >=, > in assertions | Area: analyzer Area: maintainability feature | If assertions such as `Assert.That(...)` contain operators such as `==`, `!=`, `<=`, `<`, `>=`, `>` or the `Equals()` method, then those methods test for booleans.
Hence it is hard to understand when the test fails with an assertion that e.g. `true` was expected but `false` was received.
The test situation would be much easier to understand if the test would immediately state what was expected (e.g `5` was expected but `12` was received). | True | Do not use .Equals, ==, !=, <=, <, >=, > in assertions - If assertions such as `Assert.That(...)` contain operators such as `==`, `!=`, `<=`, `<`, `>=`, `>` or the `Equals()` method, then those methods test for booleans.
Hence it is hard to understand when the test fails with an assertion that e.g. `true` was expected but `false` was received.
The test situation would be much easier to understand if the test would immediately state what was expected (e.g `5` was expected but `12` was received). | main | do not use equals in assertions if assertions such as assert that contain operators such as or the equals method then those methods test for booleans hence it is hard to understand when the test fails with an assertion that e g true was expected but false was received the test situation would be much easier to understand if the test would immediately state what was expected e g was expected but was received | 1 |
80,533 | 15,443,529,605 | IssuesEvent | 2021-03-08 09:15:15 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | Outpost crew management screen crash | Bug Code Crash Medium Prio Need more info | When i play, i got a crash
A few seconds before the crash, some kind of Tyler when he shot at worms, he created a ladder from experience (I mean, he got it very quickly) I think it was something like an overload that caused the game to crash
Here the crashreport:
Barotrauma Client crash report (generated on 30.01.2021 18:32:18)
Barotrauma seems to have crashed. Sorry for the inconvenience!
47BDC83F092CDFE5B755C0308161F31E
Game version 0.11.0.10 (ReleaseWindows, branch release, revision dc7157029e)
Graphics mode: 1920x1200 (BorderlessWindowed)
VSync ON
Language: Russian
Selected content packages: Vanilla 0.9
Level seed: Novaya MoskvaPelorus Linea
Loaded submarine: Typhon (C7BC4FEFFC56C772A0A65064749C8CC1)
Selected screen: Barotrauma.GameScreen
SteamManager initialized
Client (Round had started)
System info:
Operating system: Microsoft Windows NT 10.0.18363.0 64 bit
GPU name: NVIDIA GeForce GTX 1060 3GB
Display mode: {Width:1920 Height:1080 Format:Color AspectRatio:1,7777778}
GPU status: Normal
Exception: Object reference not set to an instance of an object. (System.NullReferenceException)
Target site: Void CreateCharacterFrame(Barotrauma.CharacterInfo, Barotrauma.GUIListBox)
Stack trace:
at Barotrauma.CrewManagement.CreateCharacterFrame(CharacterInfo characterInfo, GUIListBox listBox) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\CrewManagement.cs:line 373
at Barotrauma.CrewManagement.UpdateCrew() in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\CrewManagement.cs:line 293
at Barotrauma.CrewManagement.UpdateLocationView(Location location, Boolean removePending, Location prevLocation) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\CrewManagement.cs:line 245
at Barotrauma.CrewManagement..ctor(CampaignUI campaignUI, GUIComponent parentComponent) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\CrewManagement.cs:line 45
at Barotrauma.CampaignUI.CreateUI(GUIComponent container) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignUI.cs:line 80
at Barotrauma.CampaignUI..ctor(CampaignMode campaign, GUIComponent container) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignUI.cs:line 51
at Barotrauma.MultiPlayerCampaign.InitCampaignUI() in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 169
at Barotrauma.MultiPlayerCampaign.<InitProjSpecific>b__39_0(GUIButton btn, Object userdata) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 138
at Barotrauma.GUIButton.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\GUIButton.cs:line 262
at Barotrauma.MultiPlayerCampaign.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 376
at Barotrauma.GameSession.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaShared\SharedSource\GameSession\GameSession.cs:line 545
at Barotrauma.GameScreen.Update(Double deltaTime) in <DEV>\Barotrauma\BarotraumaShared\SharedSource\Screens\GameScreen.cs:line 134
at Barotrauma.GameMain.Update(GameTime gameTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameMain.cs:line 924
at Microsoft.Xna.Framework.Game.DoUpdate(GameTime gameTime) in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\Game.cs:line 656
at Microsoft.Xna.Framework.Game.Tick() in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\Game.cs:line 500
at Microsoft.Xna.Framework.SdlGamePlatform.RunLoop() in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\SDL\SDLGamePlatform.cs:line 92
at Microsoft.Xna.Framework.Game.Run(GameRunBehavior runBehavior) in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\Game.cs:line 397
at Microsoft.Xna.Framework.Game.Run() in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\Game.cs:line 367
at Barotrauma.Program.Main(String[] args) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Program.cs:line 58 | 1.0 | Outpost crew management screen crash - When i play, i got a crash
A few seconds before the crash, some kind of Tyler when he shot at worms, he created a ladder from experience (I mean, he got it very quickly) I think it was something like an overload that caused the game to crash
Here the crashreport:
Barotrauma Client crash report (generated on 30.01.2021 18:32:18)
Barotrauma seems to have crashed. Sorry for the inconvenience!
47BDC83F092CDFE5B755C0308161F31E
Game version 0.11.0.10 (ReleaseWindows, branch release, revision dc7157029e)
Graphics mode: 1920x1200 (BorderlessWindowed)
VSync ON
Language: Russian
Selected content packages: Vanilla 0.9
Level seed: Novaya MoskvaPelorus Linea
Loaded submarine: Typhon (C7BC4FEFFC56C772A0A65064749C8CC1)
Selected screen: Barotrauma.GameScreen
SteamManager initialized
Client (Round had started)
System info:
Operating system: Microsoft Windows NT 10.0.18363.0 64 bit
GPU name: NVIDIA GeForce GTX 1060 3GB
Display mode: {Width:1920 Height:1080 Format:Color AspectRatio:1,7777778}
GPU status: Normal
Exception: Object reference not set to an instance of an object. (System.NullReferenceException)
Target site: Void CreateCharacterFrame(Barotrauma.CharacterInfo, Barotrauma.GUIListBox)
Stack trace:
at Barotrauma.CrewManagement.CreateCharacterFrame(CharacterInfo characterInfo, GUIListBox listBox) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\CrewManagement.cs:line 373
at Barotrauma.CrewManagement.UpdateCrew() in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\CrewManagement.cs:line 293
at Barotrauma.CrewManagement.UpdateLocationView(Location location, Boolean removePending, Location prevLocation) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\CrewManagement.cs:line 245
at Barotrauma.CrewManagement..ctor(CampaignUI campaignUI, GUIComponent parentComponent) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\CrewManagement.cs:line 45
at Barotrauma.CampaignUI.CreateUI(GUIComponent container) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignUI.cs:line 80
at Barotrauma.CampaignUI..ctor(CampaignMode campaign, GUIComponent container) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignUI.cs:line 51
at Barotrauma.MultiPlayerCampaign.InitCampaignUI() in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 169
at Barotrauma.MultiPlayerCampaign.<InitProjSpecific>b__39_0(GUIButton btn, Object userdata) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 138
at Barotrauma.GUIButton.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GUI\GUIButton.cs:line 262
at Barotrauma.MultiPlayerCampaign.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 376
at Barotrauma.GameSession.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaShared\SharedSource\GameSession\GameSession.cs:line 545
at Barotrauma.GameScreen.Update(Double deltaTime) in <DEV>\Barotrauma\BarotraumaShared\SharedSource\Screens\GameScreen.cs:line 134
at Barotrauma.GameMain.Update(GameTime gameTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameMain.cs:line 924
at Microsoft.Xna.Framework.Game.DoUpdate(GameTime gameTime) in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\Game.cs:line 656
at Microsoft.Xna.Framework.Game.Tick() in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\Game.cs:line 500
at Microsoft.Xna.Framework.SdlGamePlatform.RunLoop() in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\SDL\SDLGamePlatform.cs:line 92
at Microsoft.Xna.Framework.Game.Run(GameRunBehavior runBehavior) in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\Game.cs:line 397
at Microsoft.Xna.Framework.Game.Run() in <DEV>\Libraries\MonoGame.Framework\Src\MonoGame.Framework\Game.cs:line 367
at Barotrauma.Program.Main(String[] args) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Program.cs:line 58 | non_main | outpost crew management screen crash when i play i got a crash a few seconds before the crash some kind of tyler when he shot at worms he created a ladder from experience i mean he got it very quickly i think it was something like an overload that caused the game to crash here the crashreport barotrauma client crash report generated on barotrauma seems to have crashed sorry for the inconvenience game version releasewindows branch release revision graphics mode borderlesswindowed vsync on language russian selected content packages vanilla level seed novaya moskvapelorus linea loaded submarine typhon selected screen barotrauma gamescreen steammanager initialized client round had started system info operating system microsoft windows nt bit gpu name nvidia geforce gtx display mode width height format color aspectratio gpu status normal exception object reference not set to an instance of an object system nullreferenceexception target site void createcharacterframe barotrauma characterinfo barotrauma guilistbox stack trace at barotrauma crewmanagement createcharacterframe characterinfo characterinfo guilistbox listbox in barotrauma barotraumaclient clientsource gui crewmanagement cs line at barotrauma crewmanagement updatecrew in barotrauma barotraumaclient clientsource gui crewmanagement cs line at barotrauma crewmanagement updatelocationview location location boolean removepending location prevlocation in barotrauma barotraumaclient clientsource gui crewmanagement cs line at barotrauma crewmanagement ctor campaignui campaignui guicomponent parentcomponent in barotrauma barotraumaclient clientsource gui crewmanagement cs line at barotrauma campaignui createui guicomponent container in barotrauma barotraumaclient clientsource screens campaignui cs line at barotrauma campaignui ctor campaignmode campaign guicomponent container in barotrauma barotraumaclient clientsource screens campaignui cs line at barotrauma multiplayercampaign initcampaignui in barotrauma barotraumaclient clientsource gamesession gamemodes multiplayercampaign cs line at barotrauma multiplayercampaign b guibutton btn object userdata in barotrauma barotraumaclient clientsource gamesession gamemodes multiplayercampaign cs line at barotrauma guibutton update single deltatime in barotrauma barotraumaclient clientsource gui guibutton cs line at barotrauma multiplayercampaign update single deltatime in barotrauma barotraumaclient clientsource gamesession gamemodes multiplayercampaign cs line at barotrauma gamesession update single deltatime in barotrauma barotraumashared sharedsource gamesession gamesession cs line at barotrauma gamescreen update double deltatime in barotrauma barotraumashared sharedsource screens gamescreen cs line at barotrauma gamemain update gametime gametime in barotrauma barotraumaclient clientsource gamemain cs line at microsoft xna framework game doupdate gametime gametime in libraries monogame framework src monogame framework game cs line at microsoft xna framework game tick in libraries monogame framework src monogame framework game cs line at microsoft xna framework sdlgameplatform runloop in libraries monogame framework src monogame framework sdl sdlgameplatform cs line at microsoft xna framework game run gamerunbehavior runbehavior in libraries monogame framework src monogame framework game cs line at microsoft xna framework game run in libraries monogame framework src monogame 
framework game cs line at barotrauma program main string args in barotrauma barotraumaclient clientsource program cs line | 0 |
316,373 | 27,157,404,135 | IssuesEvent | 2023-02-17 09:05:06 | red-hat-storage/ocs-ci | https://api.github.com/repos/red-hat-storage/ocs-ci | closed | Test test_rwx_pvc_assign_pod_node failing in ODF 4.10 on MS | TestCase failing | tests.manage.pv_services.test_pvc_assign_pod_node.TestPvcAssignPodNode.test_rwx_pvc_assign_pod_node
```
> assert not (
error_msg in pod_log
), f"Logs should not contain the error message '{error_msg}'"
E AssertionError: Logs should not contain the error message 'Authorization: Bearer'
E assert not 'Authorization: Bearer' in 'I0213 11:02:34.183823 1 main.go:181] Valid token audiences: \nI0213 11:02:34.184099 1 main.go:289] Genera...-4038-92c3-6d1ac09669fd"]}},"audiences":["https://d3gt1gce2zmg3d.cloudfront.net/21rq6tah1ocs3fmuocbtres2gmq4ga9a"]}}\n'
tests/manage/pv_services/test_pvc_assign_pod_node.py:46: AssertionError
``` | 1.0 | Test test_rwx_pvc_assign_pod_node failing in ODF 4.10 on MS - tests.manage.pv_services.test_pvc_assign_pod_node.TestPvcAssignPodNode.test_rwx_pvc_assign_pod_node
```
> assert not (
error_msg in pod_log
), f"Logs should not contain the error message '{error_msg}'"
E AssertionError: Logs should not contain the error message 'Authorization: Bearer'
E assert not 'Authorization: Bearer' in 'I0213 11:02:34.183823 1 main.go:181] Valid token audiences: \nI0213 11:02:34.184099 1 main.go:289] Genera...-4038-92c3-6d1ac09669fd"]}},"audiences":["https://d3gt1gce2zmg3d.cloudfront.net/21rq6tah1ocs3fmuocbtres2gmq4ga9a"]}}\n'
tests/manage/pv_services/test_pvc_assign_pod_node.py:46: AssertionError
``` | non_main | test test rwx pvc assign pod node failing in odf on ms tests manage pv services test pvc assign pod node testpvcassignpodnode test rwx pvc assign pod node assert not error msg in pod log f logs should not contain the error message error msg e assertionerror logs should not contain the error message authorization bearer e assert not authorization bearer in main go valid token audiences main go genera audiences n tests manage pv services test pvc assign pod node py assertionerror | 0 |
85,104 | 24,512,467,919 | IssuesEvent | 2022-10-10 23:33:51 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | closed | In a non-Kerberos cluster, the latest libhdfs3 branch code causes a clickhouse crash. | alternative build comp-3rdparty-libs comp-hdfs potential bug | > You have to provide the following information whenever possible.
I want to compile the latest libhdfs3 version to solve some problems. However, when an HDFS engine table is created in a common cluster for access, clickhouse crash occurs and the following stack information is displayed:
clickhouse-client --host 192.168.10.110 --user cfy1 --password 123456
ClickHouse client version 22.9.1.1.
Connecting to 192.168.10.110:9000 as user cfy1.
Connected to ClickHouse server version 22.9.1 revision 54460.
node-master1mjpR :) select * from hdfs_engine_table;
SELECT *
FROM hdfs_engine_table
Query id: b79d2562-aa62-4d62-acb2-dcbe98af0196
[node-master1mjpR] 2022.09.08 16:07:40.368172 [ 950367 ] <Fatal> BaseDaemon: ########################################
[node-master1mjpR] 2022.09.08 16:07:40.368246 [ 950367 ] <Fatal> BaseDaemon: (version 22.9.1.1, build id: E92376CAC71C67762DCE0123A03329FD916A61FA) (from thread 947539) (query_id: b79d2562-aa62-4d62-acb2-dcbe98af0196) (query: select * from hdfs_engine_table;) Received signal Segmentation fault (11)
[node-master1mjpR] 2022.09.08 16:07:40.368285 [ 950367 ] <Fatal> BaseDaemon: Address: 0x12 Access: read. Address not mapped to object.
[node-master1mjpR] 2022.09.08 16:07:40.368316 [ 950367 ] <Fatal> BaseDaemon: Stack trace: 0xb6fa2e8 0xb8b614d 0x7f64093ba320
[node-master1mjpR] 2022.09.08 16:07:40.373622 [ 950367 ] <Fatal> BaseDaemon: 0.1. inlined from ./build/../src/Common/StackTrace.cpp:331: StackTrace::tryCapture()
[node-master1mjpR] 2022.09.08 16:07:40.373661 [ 950367 ] <Fatal> BaseDaemon: 0. ../src/Common/StackTrace.cpp:297: StackTrace::StackTrace(ucontext_t const&) in /usr/bin/clickhouse
[node-master1mjpR] 2022.09.08 16:07:40.385044 [ 950367 ] <Fatal> BaseDaemon: 1. ./build/../src/Daemon/BaseDaemon.cpp:0: signalHandler(int, siginfo_t*, void*) in /usr/bin/clickhouse
[node-master1mjpR] 2022.09.08 16:07:40.385158 [ 950367 ] <Fatal> BaseDaemon: 2. ? in /usr/lib64/libpthread-2.28.so
[node-master1mjpR] 2022.09.08 16:07:40.513404 [ 950367 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read. (calculated checksum: 8A925299ED10893750CEE068956B9E9C)
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from 192.168.10.110:9000. (ATTEMPT_TO_READ_AFTER_EOF)
Connecting to 192.168.10.110:9000 as user cfy1.
Code: 210. DB::NetException: Connection refused (192.168.10.110:9000). (NETWORK_ERROR)
**How to reproduce**
1. git clone master branch code;
2. cd /ClickHouse/contrib/libhdfs3 , git checkout cb9a82ac13950a0d3e428048e8f80da6b212041b

4. and build it;
* Which ClickHouse server version to use
22.9.1.1 master
* `CREATE TABLE` statements for all tables involved
CREATE TABLE default.hdfs_engine_table (`name` String, `value` UInt32) ENGINE = HDFS('hdfs://192.168.10.211:8020/tmp/cfy/cfy_secure_ck.txt', 'TSV')
and I want to fix by [#PR26](https://github.com/ClickHouse/libhdfs3/pull/26)
| 1.0 | In a non-Kerberos cluster, the latest libhdfs3 branch code causes a clickhouse crash. - > You have to provide the following information whenever possible.
I want to compile the latest libhdfs3 version to solve some problems. However, when an HDFS engine table is created in a common cluster for access, clickhouse crash occurs and the following stack information is displayed:
clickhouse-client --host 192.168.10.110 --user cfy1 --password 123456
ClickHouse client version 22.9.1.1.
Connecting to 192.168.10.110:9000 as user cfy1.
Connected to ClickHouse server version 22.9.1 revision 54460.
node-master1mjpR :) select * from hdfs_engine_table;
SELECT *
FROM hdfs_engine_table
Query id: b79d2562-aa62-4d62-acb2-dcbe98af0196
[node-master1mjpR] 2022.09.08 16:07:40.368172 [ 950367 ] <Fatal> BaseDaemon: ########################################
[node-master1mjpR] 2022.09.08 16:07:40.368246 [ 950367 ] <Fatal> BaseDaemon: (version 22.9.1.1, build id: E92376CAC71C67762DCE0123A03329FD916A61FA) (from thread 947539) (query_id: b79d2562-aa62-4d62-acb2-dcbe98af0196) (query: select * from hdfs_engine_table;) Received signal Segmentation fault (11)
[node-master1mjpR] 2022.09.08 16:07:40.368285 [ 950367 ] <Fatal> BaseDaemon: Address: 0x12 Access: read. Address not mapped to object.
[node-master1mjpR] 2022.09.08 16:07:40.368316 [ 950367 ] <Fatal> BaseDaemon: Stack trace: 0xb6fa2e8 0xb8b614d 0x7f64093ba320
[node-master1mjpR] 2022.09.08 16:07:40.373622 [ 950367 ] <Fatal> BaseDaemon: 0.1. inlined from ./build/../src/Common/StackTrace.cpp:331: StackTrace::tryCapture()
[node-master1mjpR] 2022.09.08 16:07:40.373661 [ 950367 ] <Fatal> BaseDaemon: 0. ../src/Common/StackTrace.cpp:297: StackTrace::StackTrace(ucontext_t const&) in /usr/bin/clickhouse
[node-master1mjpR] 2022.09.08 16:07:40.385044 [ 950367 ] <Fatal> BaseDaemon: 1. ./build/../src/Daemon/BaseDaemon.cpp:0: signalHandler(int, siginfo_t*, void*) in /usr/bin/clickhouse
[node-master1mjpR] 2022.09.08 16:07:40.385158 [ 950367 ] <Fatal> BaseDaemon: 2. ? in /usr/lib64/libpthread-2.28.so
[node-master1mjpR] 2022.09.08 16:07:40.513404 [ 950367 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read. (calculated checksum: 8A925299ED10893750CEE068956B9E9C)
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from 192.168.10.110:9000. (ATTEMPT_TO_READ_AFTER_EOF)
Connecting to 192.168.10.110:9000 as user cfy1.
Code: 210. DB::NetException: Connection refused (192.168.10.110:9000). (NETWORK_ERROR)
**How to reproduce**
1. git clone master branch code;
2. cd /ClickHouse/contrib/libhdfs3 , git checkout cb9a82ac13950a0d3e428048e8f80da6b212041b

4. and build it;
* Which ClickHouse server version to use
22.9.1.1 master
* `CREATE TABLE` statements for all tables involved
CREATE TABLE default.hdfs_engine_table (`name` String, `value` UInt32) ENGINE = HDFS('hdfs://192.168.10.211:8020/tmp/cfy/cfy_secure_ck.txt', 'TSV')
and I want to fix by [#PR26](https://github.com/ClickHouse/libhdfs3/pull/26)
| non_main | in a non kerberos cluster the latest branch code causes a clickhouse crash you have to provide the following information whenever possible i want to compile the latest version to solve some problems however when an hdfs engine table is created in a common cluster for access clickhouse crash occurs and the following stack information is displayed clickhouse client host user password clickhouse client version connecting to as user connected to clickhouse server version revision node select from hdfs engine table select from hdfs engine table query id basedaemon basedaemon version build id from thread query id query select from hdfs engine table received signal segmentation fault basedaemon address access read address not mapped to object basedaemon stack trace basedaemon inlined from build src common stacktrace cpp stacktrace trycapture basedaemon src common stacktrace cpp stacktrace stacktrace ucontext t const in usr bin clickhouse basedaemon build src daemon basedaemon cpp signalhandler int siginfo t void in usr bin clickhouse basedaemon in usr libpthread so basedaemon integrity check of the executable skipped because the reference checksum could not be read calculated checksum exception on client code db exception attempt to read after eof while receiving packet from attempt to read after eof connecting to as user code db netexception connection refused network error how to reproduce git clone master branch code cd clickhouse contrib git checkout and build it; which clickhouse server version to use master create table statements for all tables involved create table default hdfs engine table name string value engine hdfs hdfs tmp cfy cfy secure ck txt tsv and i want to fix by | 0 |
97,144 | 20,171,112,072 | IssuesEvent | 2022-02-10 10:29:34 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | [Unstable] Crash when shooting a Watcher | Bug Code Crash | - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
When a Watcher is shot, the game crashes to desktop.
**Steps To Reproduce**
1. Spawn a Watcher
2. Shoot it
**Version**
Windows Unstable v0.16.3.0
**Additional information**
Crash Report: [crashreport (14).log](https://github.com/Regalis11/Barotrauma/files/8038531/crashreport.14.log) | 1.0 | [Unstable] Crash when shooting a Watcher - - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
When a Watcher is shot, the game crashes to desktop.
**Steps To Reproduce**
1. Spawn a Watcher
2. Shoot it
**Version**
Windows Unstable v0.16.3.0
**Additional information**
Crash Report: [crashreport (14).log](https://github.com/Regalis11/Barotrauma/files/8038531/crashreport.14.log) | non_main | crash when shooting a watcher i have searched the issue tracker to check if the issue has already been reported description when a watcher is shot the game crashes to desktop steps to reproduce spawn a watcher shoot it version windows unstable additional information crash report | 0 |
184,407 | 14,289,346,455 | IssuesEvent | 2020-11-23 19:06:42 | github-vet/rangeclosure-findings | https://api.github.com/repos/github-vet/rangeclosure-findings | closed | gortc/dtls: handshake_client_test.go; 32 LoC | fresh small test |
Found a possible issue in [gortc/dtls](https://www.github.com/gortc/dtls) at [handshake_client_test.go](https://github.com/gortc/dtls/blob/cd7e09739df7dfcfb20a4ae8669d5ae74b08dd77/handshake_client_test.go#L1562-L1593)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/gortc/dtls/blob/cd7e09739df7dfcfb20a4ae8669d5ae74b08dd77/handshake_client_test.go#L1562-L1593)
<details>
<summary>Click here to show the 32 line(s) of Go which triggered the analyzer.</summary>
```go
for i, test := range tests {
c, s := localPipe(t)
done := make(chan error)
var clientCalled, serverCalled bool
go func() {
config := testConfig.Clone()
config.ServerName = "example.golang"
config.ClientAuth = RequireAndVerifyClientCert
config.ClientCAs = rootCAs
config.Time = now
config.MaxVersion = version
test.configureServer(config, &serverCalled)
err = Server(s, config).Handshake()
s.Close()
done <- err
}()
config := testConfig.Clone()
config.ServerName = "example.golang"
config.RootCAs = rootCAs
config.Time = now
config.MaxVersion = version
test.configureClient(config, &clientCalled)
clientErr := Client(c, config).Handshake()
c.Close()
serverErr := <-done
test.validate(t, i, clientCalled, serverCalled, clientErr, serverErr)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: cd7e09739df7dfcfb20a4ae8669d5ae74b08dd77
| 1.0 | gortc/dtls: handshake_client_test.go; 32 LoC -
Found a possible issue in [gortc/dtls](https://www.github.com/gortc/dtls) at [handshake_client_test.go](https://github.com/gortc/dtls/blob/cd7e09739df7dfcfb20a4ae8669d5ae74b08dd77/handshake_client_test.go#L1562-L1593)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/gortc/dtls/blob/cd7e09739df7dfcfb20a4ae8669d5ae74b08dd77/handshake_client_test.go#L1562-L1593)
<details>
<summary>Click here to show the 32 line(s) of Go which triggered the analyzer.</summary>
```go
for i, test := range tests {
c, s := localPipe(t)
done := make(chan error)
var clientCalled, serverCalled bool
go func() {
config := testConfig.Clone()
config.ServerName = "example.golang"
config.ClientAuth = RequireAndVerifyClientCert
config.ClientCAs = rootCAs
config.Time = now
config.MaxVersion = version
test.configureServer(config, &serverCalled)
err = Server(s, config).Handshake()
s.Close()
done <- err
}()
config := testConfig.Clone()
config.ServerName = "example.golang"
config.RootCAs = rootCAs
config.Time = now
config.MaxVersion = version
test.configureClient(config, &clientCalled)
clientErr := Client(c, config).Handshake()
c.Close()
serverErr := <-done
test.validate(t, i, clientCalled, serverCalled, clientErr, serverErr)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: cd7e09739df7dfcfb20a4ae8669d5ae74b08dd77
| non_main | gortc dtls handshake client test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for i test range tests c s localpipe t done make chan error var clientcalled servercalled bool go func config testconfig clone config servername example golang config clientauth requireandverifyclientcert config clientcas rootcas config time now config maxversion version test configureserver config servercalled err server s config handshake s close done err config testconfig clone config servername example golang config rootcas rootcas config time now config maxversion version test configureclient config clientcalled clienterr client c config handshake c close servererr done test validate t i clientcalled servercalled clienterr servererr leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
151,514 | 13,425,086,842 | IssuesEvent | 2020-09-06 08:39:59 | BuildForSDGCohort2/Team-069-Group-A-Backend | https://api.github.com/repos/BuildForSDGCohort2/Team-069-Group-A-Backend | closed | Diagram the class diagram and activity diagram | documentation enhancement | This will help get the general idea behind the activities and the data that is part of the system | 1.0 | Diagram the class diagram and activity diagram - This will help get the general idea behind the activities and the data that is part of the system | non_main | diagram the class diagram and activity diagram this will help get the general idea behind the activities and the data that is part of the system | 0 |
85,835 | 10,687,078,208 | IssuesEvent | 2019-10-22 15:29:04 | andrewfstratton/quando | https://api.github.com/repos/andrewfstratton/quando | closed | Add Inventor block for box | design enhancement tidy usability | Need name of box
- default ${box}
- also add default 'box' to box
- add () => { ${box} } | 1.0 | Add Inventor block for box - Need name of box
- default ${box}
- also add default 'box' to box
- add () => { ${box} } | non_main | add inventor block for box need name of box default box also add default box to box add box | 0 |
798,447 | 28,264,351,417 | IssuesEvent | 2023-04-07 04:41:01 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | mail.google.com - see bug description | browser-chrome priority-critical | <!-- @browser: Chrome 111.0.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 -->
<!-- @reported_with: unknown -->
**URL**: https://mail.google.com/mail/u/0/#inbox
**Browser / Version**: Chrome 111.0.0
**Operating System**: Linux
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: Extensions
**Steps to Reproduce**:
I'm hacked, I have extensions and parent restrictions, bugs
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | mail.google.com - see bug description - <!-- @browser: Chrome 111.0.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 -->
<!-- @reported_with: unknown -->
**URL**: https://mail.google.com/mail/u/0/#inbox
**Browser / Version**: Chrome 111.0.0
**Operating System**: Linux
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: Extensions
**Steps to Reproduce**:
I'm hacked, I have extensions and parent restrictions, bugs
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | mail google com see bug description url browser version chrome operating system linux tested another browser yes other problem type something else description extensions steps to reproduce i m hacked i hace extensions and oarent restricciones bugs browser configuration none from with ❤️ | 0 |
290,055 | 32,029,848,990 | IssuesEvent | 2023-09-22 11:31:36 | dreamboy9/mongo | https://api.github.com/repos/dreamboy9/mongo | closed | CVE-2019-6285 (Medium) detected in mongor5.0.0-rc5, mongor5.0.0-rc5 - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-6285 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mongor5.0.0-rc5</b>, <b>mongor5.0.0-rc5</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The SingleDocParser::HandleFlowSequence function in yaml-cpp (aka LibYaml-C++) 0.6.2 allows remote attackers to cause a denial of service (stack consumption and application crash) via a crafted YAML file.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-6285>CVE-2019-6285</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-6285 (Medium) detected in mongor5.0.0-rc5, mongor5.0.0-rc5 - autoclosed - ## CVE-2019-6285 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mongor5.0.0-rc5</b>, <b>mongor5.0.0-rc5</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The SingleDocParser::HandleFlowSequence function in yaml-cpp (aka LibYaml-C++) 0.6.2 allows remote attackers to cause a denial of service (stack consumption and application crash) via a crafted YAML file.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-6285>CVE-2019-6285</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in autoclosed cve medium severity vulnerability vulnerable libraries vulnerability details the singledocparser handleflowsequence function in yaml cpp aka libyaml c allows remote attackers to cause a denial of service stack consumption and application crash via a crafted yaml file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend | 0 |
5,070 | 3,899,914,936 | IssuesEvent | 2016-04-18 01:01:03 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 12922611: Should be able to have multiple distribution signing certs in the same keychain | classification:ui/usability reproducible:always status:open | #### Description
It is a royal annoyance that Xcode and Keychain Access cannot properly handle having multiple certificates in the same keychain at the same time that both have "3rd Party Mac Developer Installer" or "3rd Party Mac Developer Application". This is helpfully, but far too non-obviously called out in the documentation.
However, for anyone who is on multiple teams, or who has previously expired certificates for these entries, it means that Xcode throws errors their way with completely non-obvious instructions on how to fix the situation, such as the errors mentioned in <rdar://problem/12922600>
It's high time that Xcode got a more robust way to deal with the keychain instead of all this brittle prefix searching shenanigans. Please provide more robust team, certificate, and signing support and take the guessing games out of this.
-
Product Version:
Created: 2012-12-21T04:09:09.436748
Originated: 2016-04-18T00:00:00
Open Radar Link: http://www.openradar.me/12922611 | True | 12922611: Should be able to have multiple distribution signing certs in the same keychain - #### Description
It is a royal annoyance that Xcode and Keychain Access cannot properly handle having multiple certificates in the same keychain at the same time that both have "3rd Party Mac Developer Installer" or "3rd Party Mac Developer Application". This is helpfully, but far too non-obviously called out in the documentation.
However, for anyone who is on multiple teams, or who has previously expired certificates for these entries, it means that Xcode throws errors their way with completely non-obvious instructions on how to fix the situation, such as the errors mentioned in <rdar://problem/12922600>
It's high time that Xcode got a more robust way to deal with the keychain instead of all this brittle prefix searching shenanigans. Please provide more robust team, certificate, and signing support and take the guessing games out of this.
-
Product Version:
Created: 2012-12-21T04:09:09.436748
Originated: 2016-04-18T00:00:00
Open Radar Link: http://www.openradar.me/12922611 | non_main | should be able to have multiple distribution signing certs in the same keychain description it is a royal annoyance that xcode and keychain access cannot properly handle having multiple certificates in the same keychain at the same time that both have party mac developer installer or party mac developer application this is helpfully but far too non obviously called out in the documentation however for anyone who is only multiple teams or who has previously expired certificates for these entries it means that xcode throws errors their way with completely non obvious instructions on how to fix the situation such as the errors mentioned in it s high time that xcode got a more robust way to deal with the keychain instead of all this brittle prefix searching shenanigans please provide more robust team certificate and signing support and take the guessing games out of this product version created originated open radar link | 0 |
25,565 | 4,385,456,747 | IssuesEvent | 2016-08-08 09:01:58 | dukenuking/runuo | https://api.github.com/repos/dukenuking/runuo | closed | Issues with character startup and character equipping | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. RunUO startup
2. Client Login
3. 100% Repeat.
What is the expected output?
After creating a character, editing the clothing, hair, facial hair, and race.
Character is spawned into world as created. And equipment is equipped and/or
usable.
What do you see instead?
When spawned into world, character is completely naked, no facial hair, nor
head hair, and the race is human regardless what was chosen. Nothing can be
equipped, be it weapon or clothing.
What version of the product are you using? On what operating system?
RunUO server 2.3r987 full,
Classic 2d Client 7.0.24 from eamythic, converted .uop extensions to .mul using
LegacyMulConverter
OS Windows 8.1 64bit
Please provide any additional information below.
```
Original issue reported on code.google.com by `robert.a...@gmail.com` on 14 Dec 2014 at 1:56 | 1.0 | Issues with character startup and character equipping - ```
What steps will reproduce the problem?
1. RunUO startup
2. Client Login
3. 100% Repeat.
What is the expected output?
After creating a character, editing the clothing, hair, facial hair, and race.
Character is spawned into world as created. And equipment is equipped and/or
usable.
What do you see instead?
When spawned into world, character is completely naked, no facial hair, nor
head hair, and the race is human regardless what was chosen. Nothing can be
equipped, be it weapon or clothing.
What version of the product are you using? On what operating system?
RunUO server 2.3r987 full,
Classic 2d Client 7.0.24 from eamythic, converted .uop extensions to .mul using
LegacyMulConverter
OS Windows 8.1 64bit
Please provide any additional information below.
```
Original issue reported on code.google.com by `robert.a...@gmail.com` on 14 Dec 2014 at 1:56 | non_main | issues with character startup and character equipping what steps will reproduce the problem runuo startup client login repeat what is the expected output after creating a character editing the clothing hair facial hair and race character is spawned into world as created and equipment is equipped and or usable what do you see instead when spawned into world character is completely naked no facial hair nor head hair and the race is human regardless what was chosen nothing can be equipped be it weapon or clothing what version of the product are you using on what operating system runuo server full classic client from eamythic converted uop extensions to mul using legacymulconverter os windows please provide any additional information below original issue reported on code google com by robert a gmail com on dec at | 0 |
38,155 | 8,675,196,581 | IssuesEvent | 2018-11-30 10:09:57 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | InputMask: server-side validation of mask attribute skips required-validation | defect | ## 1) Environment
- PrimeFaces version: 6.3 SNAPSHOT as of 28.11.2018 09:30 UTC+1
- Application server + version: Wildfly 10.1, JSF Mojarra 2.2.13.SP1 20160303-1204
- Affected browsers: All
## 2) Expected behavior
Input mask with `required = true` should validate and set `validationFailed = true` when nothing or something invalid (not matching the mask) was inserted into the field.
## 3) Actual behavior
Submitted value of the input mask will be set to `null` and therefore the whole validation is skipped. UIInput#validate(FacesContext) contains the code
```
// Submitted value == null means "the component was not submitted
// at all".
Object submittedValue = getSubmittedValue();
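// Since #3234/#3237 the renderer submits null for an empty mask, so this
// early return fires and required-validation never runs.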
if (submittedValue == null) {
return;
}
```
This behavior was introduced by #3234 / #3237
## 4) Steps to reproduce
Add a <p:inputMask required="true" ... /> to your page and enter nothing / something invalid according to the mask. Submit the field. The input is accepted as valid and the whole JSF lifecycle is executed, including invoke-application, etc.
## 5) Sample XHTML
```
<h:form>
<p:inputMask value="#{myBean.maskedValue}" mask="99:99" required="true" />
<p:commandButton value="execute" process="@form" update="@form" actionListener="#{myBean.doProcessMaskedValue}" />
</h:form>
```
## 6) Sample bean
```
@ViewScoped
@Named
public class MyBean implements Serializable {
private String _maskedValue;
public void doProcessMaskedValue() {
System.out.println("Masked value should not be NULL, current value is = " + _maskedValue);
}
public String getMaskedValue() {
return _maskedValue;
}
public void setMaskedValue(String maskedValue) {
_maskedValue = maskedValue;
}
}
```
..
| 1.0 | InputMask: server-side validation of mask attribute skips required-validation - ## 1) Environment
- PrimeFaces version: 6.3 SNAPSHOT as of 28.11.2018 09:30 UTC+1
- Application server + version: Wildfly 10.1, JSF Mojarra 2.2.13.SP1 20160303-1204
- Affected browsers: All
## 2) Expected behavior
Input mask with `required = true` should validate and set `validationFailed = true` when nothing or something invalid (not matching the mask) was inserted into the field.
## 3) Actual behavior
Submitted value of the input mask will be set to `null` and therefore the whole validation is skipped. UIInput#validate(FacesContext) contains the code
```
// Submitted value == null means "the component was not submitted
// at all".
Object submittedValue = getSubmittedValue();
if (submittedValue == null) {
return;
}
```
This behavior was introduced by #3234 / #3237
## 4) Steps to reproduce
Add a <p:inputMask required="true" ... /> to your page and enter nothing / something invalid according to the mask. Submit the field. The input is accepted as valid and the whole JSF lifecycle is executed, including invoke-application, etc.
## 5) Sample XHTML
```
<h:form>
<p:inputMask value="#{myBean.maskedValue}" mask="99:99" required="true" />
<p:commandButton value="execute" process="@form" update="@form" actionListener="#{myBean.doProcessMaskedValue}" />
</h:form>
```
## 6) Sample bean
```
@ViewScoped
@Named
public class MyBean implements Serializable {
private String _maskedValue;
public void doProcessMaskedValue() {
System.out.println("Masked value should not be NULL, current value is = " + _maskedValue);
}
public String getMaskedValue() {
return _maskedValue;
}
public void setMaskedValue(String maskedValue) {
_maskedValue = maskedValue;
}
}
```
..
| non_main | inputmask server side validation of mask attribute skips required validation environment primefaces version snapshot as of utc application server version wildfly jsf mojarra affected browsers all expected behavior input mask with required true should validate and set validationfailed true when nothing or something invalid not matching the mask was inserted into the field actual behavior submitted value of the input mask will be set to null and therefore the whole validation is skipped uiinput validate facescontext contains the code submitted value null means the component was not submitted at all object submittedvalue getsubmittedvalue if submittedvalue null return this behavior was introduced by steps to reproduce add an to your page and insert nothing something invalid according to the mask submit the field the input is accepted as valid and the whole jsf lifecycle is executed including invoke application etc sample xhtml sample bean viewscoped named public class mybean implements serializable private string maskedvalue public void doprocessmaskedvalue system out prinln masked value should not be null current value is maskedvalue public string getmaskedvalue return maskedvalue public void setmaskedvalue string maskedvalue maskedvalue maskedvalue | 0 |
1,231 | 5,255,999,282 | IssuesEvent | 2017-02-02 16:50:19 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Proposal: options for issue prioritisation | awaiting maintainer feedback discussion | I had been thinking for the past few weeks about how inadequate github issues can be for us, and https://github.com/caskroom/homebrew-cask/issues/17323 is validation of that. It’s not the first time (and I’m including myself) a maintainer can’t recall if something is implemented, or is under the impression it is because it has been talked about so much.
I have also been thinking about how to solve it, but haven't really settled on a great solution. It is clear, however, that the goal of the system should be clarity (which is what's lacking in the _issues_ system), namely clarity in prioritisation (this doesn't necessarily mean features have to be implemented in that order, but it should rate them by importance), which also brings with it clarity in what needs to be tackled.
This could also encourage more users to contribute to the core, if they have a place they can reliably go to to see what needs to be done.
Some possibilities:
##### Master issue
- **Pro:** Uses the familiar issues system.
- **Con:** As more issues are opened and closed, it may become harder to keep track of where it is, and possible contributors may not even see it.
- **Con:** Can become messy when (almost inevitably) there is discussion in the issue itself.
##### Document on the repo
- **Pro:** Easy to keep track of where it is.
- **Pro:** Every addition/removal is clearly visible to maintainers/repo watchers (as long as it’s made as a PR).
- **Con:** Can be hard to find if you don’t know it exists.
##### External system (e.g. [Trello](https://trello.com/))
- **Pro:** Great at prioritisation and tracking progress.
- **Con:** Requires use of an external website/service.
---
My favourite option is the document in the repo. Its **con** can be mitigated in two ways: the second pro, since it’ll lend it visibility, and being featured in CONTRIBUTING, in the section about contributing to the core (this would also apply to the first **con** of the master issue option).
| True | Proposal: options for issue prioritisation - I had been thinking for the past few weeks about how inadequate github issues can be for us, and https://github.com/caskroom/homebrew-cask/issues/17323 is validation of that. It’s not the first time (and I’m including myself) a maintainer can’t recall if something is implemented, or is under the impression it is because it has been talked about so much.
I have also been thinking about how to solve it, but haven't really settled on a great solution. It is clear, however, that the goal of the system should be clarity (which is what's lacking in the _issues_ system), namely clarity in prioritisation (this doesn't necessarily mean features have to be implemented in that order, but it should rate them by importance), which also brings with it clarity in what needs to be tackled.
This could also encourage more users to contribute to the core, if they have a place they can reliably go to to see what needs to be done.
Some possibilities:
##### Master issue
- **Pro:** Uses the familiar issues system.
- **Con:** As more issues are opened and closed, it may become harder to keep track of where it is, and possible contributors may not even see it.
- **Con:** Can become messy when (almost inevitably) there is discussion in the issue itself.
##### Document on the repo
- **Pro:** Easy to keep track of where it is.
- **Pro:** Every addition/removal is clearly visible to maintainers/repo watchers (as long as it’s made as a PR).
- **Con:** Can be hard to find if you don’t know it exists.
##### External system (e.g. [Trello](https://trello.com/))
- **Pro:** Great at prioritisation and tracking progress.
- **Con:** Requires use of an external website/service.
---
My favourite option is the document in the repo. Its **con** can be mitigated in two ways: the second pro, since it’ll lend it visibility, and being featured in CONTRIBUTING, in the section about contributing to the core (this would also apply to the first **con** of the master issue option).
| main | proposal options for issue prioritisation i had been thinking for the past few weeks about how inadequate github issues can be for us and is validation of that it’s not the first time and i’m including myself a maintainer can’t recall if something is implemented or is under the impression it is because it has been talked about so much i have also been thinking about how to solve it but haven’t really settled on a great solution it is clear however the goal of the system should be clarity which is what’s lacking in the issues system namely clarity in prioritisation doesn’t necessarily mean features nave to be implemented in that order but it should rate them by importance which also brings with it clarity in what needs to be tackled this could also encourage more users to contribute to the core if they have a place they can reliably go to to see what needs to be done some possibilities master issue pro uses the familiar issues system con as more issues are opened and closed it may become harder to keep track of where it is and possible contributors may not even see it con can become messy when almost inevitably there is discussion it the issue itself document on the repo pro easy to keep track of where it is pro every addition removal is clearly visible to maintainers repo watchers as long as it’s made as a pr con can be hard to find if you don’t know it exists external system e g pro great at prioritisation and tracking progress con requires use of an external website service my favourite option is the document in the repo its con can be mitigated in two ways the second pro since it’ll lend it visibility and being featured in contributing in the section about contributing to the core this would also apply to the first con of the master issue option | 1 |
68,184 | 28,214,029,347 | IssuesEvent | 2023-04-05 07:35:41 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | Create SPIFFE identity when Cilium identity is created | kind/feature area/operator sig/agent feature/servicemesh | This issue covers building something that, when a new Cilium identity is created, will create a SPIFFE identity in the SPIRE server, so that the SPIRE server can create a keypair for that Cilium identity.
This should be in the operator, so there may be some trickiness to talking to the SPIRE server (as creating identities is a privileged operation). | 1.0 | Create SPIFFE identity when Cilium identity is created - This issue covers building something that, when a new Cilium identity is created, will create a SPIFFE identity in the SPIRE server, so that the SPIRE server can create a keypair for that Cilium identity.
This should be in the operator, so there may be some trickiness to talking to the SPIRE server (as creating identities is a privileged operation). | non_main | create spiffe identity when cilium identity is created this issue covers building something that when a new cilium identity is created will create a spiffe identity in the spire server so that the spire server can create a keypair for that cilium identity this should be in the operator so there may be some trickiness to talking to the spire server as creating identities is a privileged operation | 0 |
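A minimal sketch of one shape this could take — assuming the `kubernetes` Python client, a reachable `spire-server` binary, and an illustrative trust domain, parent ID, and selector scheme (none of these names come from Cilium itself):
```python
# Hypothetical sketch: register a SPIRE entry whenever a CiliumIdentity
# appears. Trust domain, parent ID and selector below are placeholders.
import subprocess
from kubernetes import client, config, watch

config.load_incluster_config()
api = client.CustomObjectsApi()

for event in watch.Watch().stream(
    api.list_cluster_custom_object,
    group="cilium.io", version="v2", plural="ciliumidentities",
):
    if event["type"] != "ADDED":
        continue
    name = event["object"]["metadata"]["name"]  # the numeric Cilium identity
    # Entry creation is privileged, so this assumes access to the
    # spire-server admin API -- the trickiness mentioned above.
    subprocess.run(
        ["spire-server", "entry", "create",
         "-spiffeID", f"spiffe://example.org/cilium/{name}",
         "-parentID", "spiffe://example.org/cilium-operator",
         "-selector", "k8s:ns:default"],  # real selector mapping is the open design question
        check=True,
    )
```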
1,014 | 4,794,289,876 | IssuesEvent | 2016-10-31 20:36:08 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Feature request: brew cask info should show remote file size | awaiting maintainer feedback | ### Description of feature/enhancement
brew cask info (maybe as a verbose variant of info) should show the file size
### Justification
I would like to see how big the app would be
### Example use case
```
$ brew cask info atom -verbose
atom: 1.11.2
https://atom.io/
Not installed
From: https://github.com/caskroom/homebrew-cask/blob/master/Casks/atom.rb
==> Name
Github Atom
==> Artifacts
Atom.app (app)
/Applications/Atom.app/Contents/Resources/app/apm/node_modules/.bin/apm (binary)
/Applications/Atom.app/Contents/Resources/app/atom.sh (binary)
==> Check Remote Binaries
https://github.com/atom/atom/releases/download/v1.11.2/atom-mac.zip
binary available: yes
File Size: 86.5 MB
```
| True | Feature request: brew cask info should show remote file size - ### Description of feature/enhancement
brew cask info (maybe as a verbose variant of info) should show the file size
### Justification
I would like to see how big the app would be
### Example use case
```
$ brew cask info atom -verbose
atom: 1.11.2
https://atom.io/
Not installed
From: https://github.com/caskroom/homebrew-cask/blob/master/Casks/atom.rb
==> Name
Github Atom
==> Artifacts
Atom.app (app)
/Applications/Atom.app/Contents/Resources/app/apm/node_modules/.bin/apm (binary)
/Applications/Atom.app/Contents/Resources/app/atom.sh (binary)
==> Check Remote Binaries
https://github.com/atom/atom/releases/download/v1.11.2/atom-mac.zip
binary available: yes
File Size: 86.5 MB
```
| main | feature request brew cask info should show remote file size description of feature enhancement brew cask info maybe a verbose info should show file size justification i would like to see how big would be the app example use case brew cask info atom verbose atom not installed from name github atom artifacts atom app app applications atom app contents resources app apm node modules bin apm binary applications atom app contents resources app atom sh binary check remote binaries binary available yes file size mb | 1 |
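For what it's worth, the size shown in the mock-up can be obtained without downloading the artifact; a rough sketch using a plain HTTP HEAD request against the URL from the example (GitHub's release redirects are followed automatically):
```python
# Rough sketch: read the remote file size from the Content-Length header.
import urllib.request

url = "https://github.com/atom/atom/releases/download/v1.11.2/atom-mac.zip"
req = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(req) as resp:
    size = int(resp.headers["Content-Length"])
print(f"File Size: {size / 1024 / 1024:.1f} MB")
```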
449,265 | 12,965,951,540 | IssuesEvent | 2020-07-20 23:33:26 | openshift/odo | https://api.github.com/repos/openshift/odo | closed | odo should be able to delete operator backed services | kind/user-story priority/High required-for-v2 | /kind user-story
## User Story
As a user I want to be able to delete operator backed services that I might have created earlier.
## Acceptance Criteria
- [ ] `odo service delete <service-name>` should work for operator backed services.
## Links
- Feature Request: #2613
/kind user-story
| 1.0 | odo should be able to delete operator backed services - /kind user-story
## User Story
As a user I want to be able to delete operator backed services that I might have created earlier.
## Acceptance Criteria
- [ ] `odo service delete <service-name>` should work for operator backed services.
## Links
- Feature Request: #2613
/kind user-story
| non_main | odo should be able to delete operator backed services kind user story user story as a user i want to be able to delete operator backed services that i might have created earlier acceptance criteria odo service delete should work for operator backed services links feature request kind user story | 0 |
3,330 | 12,941,882,045 | IssuesEvent | 2020-07-18 00:00:14 | PowerShell/PowerShell | https://api.github.com/repos/PowerShell/PowerShell | closed | InterpreterFrame indexes Data with a private type | Issue-Question Resolution-Answered Review - Maintainer | <!--
For Windows PowerShell 5.1 issues, suggestions, or feature requests please use the following link instead:
Windows PowerShell [UserVoice](https://windowsserver.uservoice.com/forums/301869-powershell)
This repository is **ONLY** for PowerShell Core 6 and PowerShell 7+ issues.
- Make sure you are able to repro it on the [latest released version](https://github.com/PowerShell/PowerShell/releases)
- Search the existing issues.
- Refer to the [FAQ](https://github.com/PowerShell/PowerShell/blob/master/docs/FAQ.md).
- Refer to the [known issues](https://docs.microsoft.com/powershell/scripting/whats-new/known-issues-ps6).
-->
## Steps to reproduce
```powershell
[INT] '?'
$ERROR | SELECT -FIRST 1 | % EXCEPTION | % InnerException | % Data | % Item([System.Management.Automation.Interpreter.InterpretedFrameInfo])
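# A possible workaround (untested sketch): resolve the non-public type via
# reflection instead of a type literal, then index Data with that key:
# $t = [psobject].Assembly.GetType('System.Management.Automation.Interpreter.InterpretedFrameInfo')
# $ERROR[0].Exception.InnerException.Data[$t]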
```
## Expected behavior
```none
MethodName DebugInfo
---------- ---------
<ScriptBlock>
```
## Actual behavior
```none
InvalidOperation: Unable to find type [System.Management.Automation.Interpreter.InterpretedFrameInfo]
```
## Environment data
```none
Name Value
---- -----
PSVersion 7.0.2
PSEdition Core
GitCommitId 7.0.2
OS Microsoft Windows 10.0.18363
Platform Win32NT
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
```
| True | InterpreterFrame indexes Data with a private type -
## Steps to reproduce
```powershell
[INT] '?'
$ERROR | SELECT -FIRST 1 | % EXCEPTION | % InnerException | % Data | % Item([System.Management.Automation.Interpreter.InterpretedFrameInfo])
```
## Expected behavior
```none
MethodName DebugInfo
---------- ---------
<ScriptBlock>
```
## Actual behavior
```none
InvalidOperation: Unable to find type [System.Management.Automation.Interpreter.InterpretedFrameInfo]
```
## Environment data
```none
Name Value
---- -----
PSVersion 7.0.2
PSEdition Core
GitCommitId 7.0.2
OS Microsoft Windows 10.0.18363
Platform Win32NT
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
```
| main | interpreterframe indexes data with a private type for windows powershell issues suggestions or feature requests please use the following link instead windows powershell this repository is only for powershell core and powershell issues make sure you are able to repro it on the search the existing issues refer to the refer to the steps to reproduce powershell error select first exception innerexception data item expected behavior none methodname debuginfo actual behavior none invalidoperation unable to find type environment data none name value psversion psedition core gitcommitid os microsoft windows platform pscompatibleversions … psremotingprotocolversion serializationversion wsmanstackversion | 1 |
162,592 | 20,235,376,985 | IssuesEvent | 2022-02-14 01:03:12 | artsking/linux-5.13.13 | https://api.github.com/repos/artsking/linux-5.13.13 | opened | CVE-2021-38300 (High) detected in linux-yoctov5.13.15 | security vulnerability | ## CVE-2021-38300 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.13.15</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/mips/net/bpf_jit.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/mips/net/bpf_jit.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
arch/mips/net/bpf_jit.c in the Linux kernel before 5.4.10 can generate undesirable machine code when transforming unprivileged cBPF programs, allowing execution of arbitrary code within the kernel context. This occurs because conditional branches can exceed the 128 KB limit of the MIPS architecture.
<p>Publish Date: 2021-09-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38300>CVE-2021-38300</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-38300">https://www.linuxkernelcves.com/cves/CVE-2021-38300</a></p>
<p>Release Date: 1970-01-01</p>
<p>Fix Resolution: v4.14.251,v4.19.211,v5.4.153,v5.10.71,v5.14.10,v5.15-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-38300 (High) detected in linux-yoctov5.13.15 - ## CVE-2021-38300 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.13.15</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/mips/net/bpf_jit.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/mips/net/bpf_jit.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
arch/mips/net/bpf_jit.c in the Linux kernel before 5.4.10 can generate undesirable machine code when transforming unprivileged cBPF programs, allowing execution of arbitrary code within the kernel context. This occurs because conditional branches can exceed the 128 KB limit of the MIPS architecture.
<p>Publish Date: 2021-09-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38300>CVE-2021-38300</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-38300">https://www.linuxkernelcves.com/cves/CVE-2021-38300</a></p>
<p>Release Date: 1970-01-01</p>
<p>Fix Resolution: v4.14.251,v4.19.211,v5.4.153,v5.10.71,v5.14.10,v5.15-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files arch mips net bpf jit c arch mips net bpf jit c vulnerability details arch mips net bpf jit c in the linux kernel before can generate undesirable machine code when transforming unprivileged cbpf programs allowing execution of arbitrary code within the kernel context this occurs because conditional branches can exceed the kb limit of the mips architecture publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
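For context on the 128 KB figure: a classic MIPS conditional branch encodes a 16-bit signed offset counted in 4-byte instruction words, so its reach is
```latex
\pm 2^{15} \times 4\,\text{bytes} = \pm 131072\,\text{bytes} = \pm 128\,\text{KB}
```
which is the limit the generated cBPF code exceeded.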
1,506 | 6,520,099,663 | IssuesEvent | 2017-08-28 15:14:22 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | JS Minifier - minify CSS as well | Improvement Maintainer Input Requested Programming Mission Skill: JavaScript Status: Work In Progress Topic: JavaScript | The library used for JS Minifier IA, [prettydiff](https://github.com/prettydiff/prettydiff) can handle CSS too, shouldn't be too hard to add that functionality.
//cc @moollaza
---
IA Page: https://duck.co/ia/view/js_minify
Maintainer: @sahildua2305 | True | JS Minifier - minify CSS as well - The library used for JS Minifier IA, [prettydiff](https://github.com/prettydiff/prettydiff) can handle CSS too, shouldn't be too hard to add that functionality.
//cc @moollaza
---
IA Page: https://duck.co/ia/view/js_minify
Maintainer: @sahildua2305 | main | js minifier minify css as well the library used for js minifier ia can handle css too shouldn t be too hard to add that functionality cc moollaza ia page maintainer | 1 |
4,661 | 24,097,706,369 | IssuesEvent | 2022-09-19 20:24:00 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Sam Package/Deploy --image-repository Behavior | type/feature maintainer/need-followup | Many of the process I put in place for both open source and in company deploy pipelines take advantage of SAM CLI and the AWS CLI using conventions like AWS_PROFILE. I've been very happy that SAM CLI has followed these patterns. Today when working with the new container features I was surprised by this odd behavior of [sam package](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-package.html) when using the `--image-repository` option. Here is an example of my usage where the new image repo was added to my process.
```shell
sam package \
--region ${AWS_DEFAULT_REGION} \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository "lambyc-starter" \
--s3-bucket "${CLOUDFORMATION_BUCKET}" \
--s3-prefix "lambyc-starter-${RAILS_ENV}"
```
These commands are run as either the default AWS_PROFILE or with specific ENV overrides. Given this was set and that the `--region` was set here, my expectation was this command was going to find and publish to the ECR repo within my AWS account. Instead, it tried to push to docker.io and failed with a user password. Digging into some guides and published SAM examples I can see what you expect folks to do is:
```shell
sam package \
--region ${AWS_DEFAULT_REGION} \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository "123456789.dkr.ecr.us-east-1.amazonaws.com/lambyc-starter" \
--s3-bucket "${CLOUDFORMATION_BUCKET}" \
--s3-prefix "lambyc-starter-${RAILS_ENV}"
```
This feels like the wrong interface to me and against the grain of how the CLI operates given all my previous experiences. I can work around this if y'all disagree by adding more `aws` CLI commands to find the account ID and use the `AWS_DEFAULT_REGION` env and/or look that up as well. But it would be cool if SAM did this. Thoughts? | True | Sam Package/Deploy --image-repository Behavior - Many of the process I put in place for both open source and in company deploy pipelines take advantage of SAM CLI and the AWS CLI using conventions like AWS_PROFILE. I've been very happy that SAM CLI has followed these patterns. Today when working with the new container features I was surprised by this odd behavior of [sam package](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-package.html) when using the `--image-repository` option. Here is an example of my usage where the new image repo was added to my process.
```shell
sam package \
--region ${AWS_DEFAULT_REGION} \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository "lambyc-starter" \
--s3-bucket "${CLOUDFORMATION_BUCKET}" \
--s3-prefix "lambyc-starter-${RAILS_ENV}"
```
These commands are run as either the default AWS_PROFILE or with specific ENV overrides. Given this was set and that the `--region` was set here, my expectation was this command was going to find and publish to the ECR repo within my AWS account. Instead, it tried to push to docker.io and failed with a user password. Digging into some guides and published SAM examples I can see what you expect folks to do is:
```shell
sam package \
--region ${AWS_DEFAULT_REGION} \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository "123456789.dkr.ecr.us-east-1.amazonaws.com/lambyc-starter" \
--s3-bucket "${CLOUDFORMATION_BUCKET}" \
--s3-prefix "lambyc-starter-${RAILS_ENV}"
```
This feels like the wrong interface to me and against the grain of how the CLI operates given all my previous experiences. I can work around this if y'all disagree by adding more `aws` CLI commands to find the account ID and use the `AWS_DEFAULT_REGION` env and/or look that up as well. But it would be cool if SAM did this. Thoughts? | main | sam package deploy image repository behavior many of the process i put in place for both open source and in company deploy pipelines take advantage of sam cli and the aws cli using conventions like aws profile i ve been very happy that sam cli has followed these patterns today when working with the new container features i was surprised by this odd behavior of when using the image repository option here is an example of my usage where the new image repo was added to my process shell sam package region aws default region template file aws sam build template yaml output template file aws sam build packaged yaml image repository lambyc starter bucket cloudformation bucket prefix lambyc starter rails env these commands are run as either the default aws profile or with specific env overrides given this was set and that the region was set here my expectation was this command was going to find and publish to the ecr repo within my aws account instead it tried to push to docker io and failed with a user password digging into some guides and published sam examples i can see what you expect folks to do is shell sam package region aws default region template file aws sam build template yaml output template file aws sam build packaged yaml image repository dkr ecr us east amazonaws com lambyc starter bucket cloudformation bucket prefix lambyc starter rails env this feels like the wrong interface to me and against the grain of how the cli operates given all my previous experiences i can work around this if y all disagree by adding more aws cli commands to find the account id and use the aws default region env and or look that up as well but it would cool if sam did this thoughts | 1
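A hedged sketch of the lookup described above — deriving the account-qualified ECR URI from the ambient credentials with boto3, the repository name taken from the example:
```python
# Sketch: build the full ECR repository URI from the ambient AWS identity.
import boto3

session = boto3.session.Session()  # honors AWS_PROFILE / AWS_DEFAULT_REGION
account = session.client("sts").get_caller_identity()["Account"]
region = session.region_name
repo = "lambyc-starter"            # repository name from the example above
print(f"{account}.dkr.ecr.{region}.amazonaws.com/{repo}")
```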
3,414 | 13,182,083,899 | IssuesEvent | 2020-08-12 15:14:33 | duo-labs/cloudmapper | https://api.github.com/repos/duo-labs/cloudmapper | closed | Lateral flows missing across VPC peers due to sg_to_instance_mapping | bug map unmaintained_functionality | Recognise a potential overlap with https://github.com/duo-labs/cloudmapper/issues/72
## Summary
In an environment with 2 or more VPCS (affects single account, single region), cross-vpc lateral movement flows that are permitted by Security Group rules are not represented as edges by _prepare.py_. The VPCs are correctly identified as peers by _collect.py_.
## Steps to reproduce
• Create 2x VPCs within an AWS account (using same region in this example)
• Create 2x private subnet with associated routing, 1x within each VPC
• Create 2x security groups, 1x within each VPC
* Set an ingress rule on Security Group (A) to allow traffic from source Security Group (B)
* Keep egress rules as default (allow all)
• Deploy 2x ec2 instance, 1x to each respective VPC subnet & security group
* Run cloudmapper collect, prepare and webserver commands
* Lateral movement flows were not displayed
## Root cause?
As part of the _get_connections_ function within _prepare.py_, the loop `for target in sg_to_instance_mapping.get(sg["GroupId"], {}):` appears to try to find a match on security group; however, the subsequent `for source in sg_to_instance_mapping.get(ingress_sg, {}):` finds no known instances within the dictionary.
To the best of my knowledge the following code is looking too narrowly:
```
# Get mapping of security group names to nodes that have that security group
sg_to_instance_mapping = {}
for instance in vpc.leaves:
for sg in instance.security_groups():
sg_to_instance_mapping.setdefault(sg, {})[instance] = True
```
## Solution?
If we expand the instance mapping to vpc peers, then we find the other matches, and display them in the graph
```
# Get mapping of security group names to nodes that have that security group
sg_to_instance_mapping = {}
for instance in vpc.leaves:
for sg in instance.security_groups():
sg_to_instance_mapping.setdefault(sg, {})[instance] = True
# Get mapping from VPC peers as well
for peer in vpc.peers:
for instance in peer.leaves:
for sg in instance.security_groups():
sg_to_instance_mapping.setdefault(sg, {})[instance] = True
```
I don't believe we'll get any unwanted collisions or extra edges, though I have not tested this exhaustively.
| True | Lateral flows missing across VPC peers due to sg_to_instance_mapping - Recognise a potential overlap with https://github.com/duo-labs/cloudmapper/issues/72
## Summary
In an environment with 2 or more VPCS (affects single account, single region), cross-vpc lateral movement flows that are permitted by Security Group rules are not represented as edges by _prepare.py_. The VPCs are correctly identified as peers by _collect.py_.
## Steps to reproduce
• Create 2x VPCs within an AWS account (using same region in this example)
• Create 2x private subnet with associated routing, 1x within each VPC
• Create 2x security groups, 1x within each VPC
* Set an ingress rule on Security Group (A) to allow traffic from source Security Group (B)
* Keep egress rules as default (allow all)
• Deploy 2x ec2 instance, 1x to each respective VPC subnet & security group
* Run cloudmapper collect, prepare and webserver commands
* Lateral movement flows were not displayed
## Root cause?
As part of the _get_connections_ function within _prepare.py_, the loop `for target in sg_to_instance_mapping.get(sg["GroupId"], {}):` appears to try to find a match on security group; however, the subsequent `for source in sg_to_instance_mapping.get(ingress_sg, {}):` finds no known instances within the dictionary.
To the best of my knowledge the following code is looking too narrowly:
```
# Get mapping of security group names to nodes that have that security group
sg_to_instance_mapping = {}
for instance in vpc.leaves:
for sg in instance.security_groups():
sg_to_instance_mapping.setdefault(sg, {})[instance] = True
```
## Solution?
If we expand the instance mapping to vpc peers, then we find the other matches, and display them in the graph
```
# Get mapping of security group names to nodes that have that security group
sg_to_instance_mapping = {}
for instance in vpc.leaves:
for sg in instance.security_groups():
sg_to_instance_mapping.setdefault(sg, {})[instance] = True
# Get mapping from VPC peers as well
for peer in vpc.peers:
for instance in peer.leaves:
for sg in instance.security_groups():
sg_to_instance_mapping.setdefault(sg, {})[instance] = True
```
I don't believe we'll get any unwanted collisions or extra edges, though I have not tested this exhaustively.
| main | lateral flows missing across vpc peers due to sg to instance mapping recognise a potential overlap with summary in an environment with or more vpcs affects single account single region cross vpc lateral movement flows that are permitted by security group rules are not represented as edges by prepare py the vpcs are correctly identified as peers by collect py steps to reproduce • create vpcs within an aws account using same region in this example • create private subnet with associated routing within each vpc • create security groups within each vpc set an ingress rule on security group a to allow traffic from source security group b keep egress rules as default allow all • deploy instance to each respective vpc subnet security group run cloudmapper collect prepare and webserver commands lateral movement flows were not displayed root cause as part of the get connections function within prepare py the loop for target in sg to instance mapping get sg appears to try and find a match on security group however the subsequent for source in sg to instance mapping get ingress sg finds no known instances within the dictionary to the best of my knowledge the following code is looking too narrowly get mapping of security group names to nodes that have that security group sg to instance mapping for instance in vpc leaves for sg in instance security groups sg to instance mapping setdefault sg true solution if we expand the instance mapping to vpc peers then we find the other matches and display them in the graph get mapping of security group names to nodes that have that security group sg to instance mapping for instance in vpc leaves for sg in instance security groups sg to instance mapping setdefault sg true get mapping from vpc peers as well for peer in vpc peers for instance in peer leaves for sg in instance security groups sg to instance mapping setdefault sg true i don t believe we ll get any unwanted collisions or extra edges but not tested exhaustively | 1 |
2,301 | 8,224,226,394 | IssuesEvent | 2018-09-06 13:09:40 | TabbycatDebate/tabbycat | https://api.github.com/repos/TabbycatDebate/tabbycat | closed | Make private URL distribution e-mails customisable | awaiting maintainer good first issue help wanted in progress wudc2019 | A nice-to-have, allow private URL distribution e-mails to be edited before sending.
- Should be considered an advanced feature; i.e., it shouldn't be part of the standard workflow.
- Should include default text (basically the string we currently use).
- Validation should check that all merge fields are present (to allow for typos).
- Should show a completed example, or even better allow you to switch between all of them like in mail merge (but the latter is probably a bit overkill). Actually, this part probably should be part of the standard workflow. | True | Make private URL distribution e-mails customisable - A nice-to-have, allow private URL distribution e-mails to be edited before sending.
- Should be considered an advanced feature; i.e., it shouldn't be part of the standard workflow.
- Should include default text (basically the string we currently use).
- Validation should check that all merge fields are present (to allow for typos).
- Should show a completed example, or even better allow you to switch between all of them like in mail merge (but the latter is probably a bit overkill). Actually, this part probably should be part of the standard workflow. | main | make private url distribution e mails customisable a nice to have allow private url distribution e mails to be edited before sending should be considered an advanced feature i e it shouldn t be part of the standard workflow should include default text basically the string we currently use validation should check that all merge fields are present to allow for typos should show a completed example or even better allow you to switch between all of them like in mail merge but the latter is probably a bit overkill actually this part probably should be part of the standard workflow | 1 |
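As a rough illustration of the validation bullet — checking that every expected merge field survives a user's edits (the field names here are placeholders, not Tabbycat's actual ones):
```python
# Sketch: report merge fields missing from an edited e-mail template.
import re

REQUIRED_FIELDS = {"NAME", "URL", "TOURN"}  # placeholder field names

def missing_fields(template: str) -> set:
    found = set(re.findall(r"\{\{\s*(\w+)\s*\}\}", template))
    return REQUIRED_FIELDS - found

edited = "Hi {{ NAME }}, your private link for {{ TOURN }} is {{ URL }}."
assert not missing_fields(edited)
```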
1,541 | 6,572,233,159 | IssuesEvent | 2017-09-11 00:23:20 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Haproxy module doesn't check if the service is present on the given host | affects_2.0 bug_report networking waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
haproxy
##### ANSIBLE VERSION
2.0.2.0
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
The haproxy module assumes that a given host is present in _all_ proxies, and if that is not true, an error occurs.
##### STEPS TO REPRODUCE
haproxy: host=a_backend_host socket=/run/haproxy/admin.sock state=enabled
If a service exists in haproxy, but the given host is not a server in that backend, the module fails.
##### EXPECTED RESULTS
It should skip the service instead.
##### ACTUAL RESULTS
It fails, with the message: 'unable to find server backend/a_backend_host'
| True | Haproxy module doesn't check if the service is present on the given host - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
haproxy
##### ANSIBLE VERSION
2.0.2.0
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
The haproxy module assumes that a given host is present in _all_ proxies, and if that is not true, an error occurs.
##### STEPS TO REPRODUCE
haproxy: host=a_backend_host socket=/run/haproxy/admin.sock state=enabled
If a service exists in haproxy, but the given host is not a server in that backend, the module fails.
##### EXPECTED RESULTS
It should skip the service instead.
##### ACTUAL RESULTS
It fails, with the message: 'unable to find server backend/a_backend_host'
| main | haproxy module doesn t check if the service is present on the given host issue type bug report component name haproxy ansible version configuration os environment summary the haproxy module assumes that a given host presents in all proxies and if it is not true an error occures steps to reproduce haproxy host a backend host socket run haproxy admin sock state enabled if a service exists in haproxy but there s no backend at the backend host the module fails expected results it should skip the service instead actual results it fails with the message unable to find server backend a backend host | 1 |
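The presence check the module could perform is a small query against the same admin socket — `show stat` returns CSV whose first two columns are the proxy and server names. A hedged sketch:
```python
# Sketch: ask the haproxy admin socket which backends know a given server.
import csv
import socket

def backends_with_server(sock_path: str, server: str) -> list:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(b"show stat\n")
    data = b""
    while chunk := s.recv(4096):
        data += chunk
    s.close()
    # The first line of "show stat" starts with "# pxname,svname,...".
    rows = csv.reader(data.decode().lstrip("# ").splitlines())
    return [row[0] for row in rows if len(row) > 1 and row[1] == server]

print(backends_with_server("/run/haproxy/admin.sock", "a_backend_host"))
```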
1,863 | 6,577,414,013 | IssuesEvent | 2017-09-12 00:44:36 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_lc support for "PlacementTenancy" option | affects_2.0 aws cloud feature_idea waiting_on_maintainer |
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_lc
##### ANSIBLE VERSION
```
ansible 2.0.1.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
<!--- Explain the problem briefly -->
The AWS CLI and API support "Placement Tenancy" in the launch config, allowing dedicated EC2 instances to be deployed. The ec2_lc module is missing this option.
| True | ec2_lc support for "PlacementTenancy" option - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
ec2_lc
##### ANSIBLE VERSION
```
ansible 2.0.1.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
The AWS CLI and API support "Placement Tenancy" in the launch config, allowing dedicated EC2 instances to be deployed. The ec2_lc module is missing this option.
| main | lc support for placementtenancy option issue type feature idea component name lc ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary aws cli and api supports placement tenancy in the launch config allowing to deploy dedicated instance the lc is missing this option | 1 |
5,220 | 26,483,147,150 | IssuesEvent | 2023-01-17 16:02:07 | coq-community/manifesto | https://api.github.com/repos/coq-community/manifesto | opened | Volunteer co-maintainer needed for Docker-Coq | maintainer-wanted | The Coq Team and Coq-community are looking for a volunteer co-maintainer of the [Docker-Coq](https://github.com/coq-community/docker-coq) project, which provides [Docker container](https://www.docker.com/resources/what-container/) images of many versions of Coq as a service to Coq users.
Docker-Coq is an open source project on GitHub under the BSD-3-Clause license. It maintains definitions of a set of Docker images that provide a basic Coq environment for continuous integration and local use. Thanks to the [docker-keeper](https://gitlab.com/erikmd/docker-keeper) software, images built from Docker-Coq definitions are continuously deployed to the [public Docker registry](https://hub.docker.com/r/coqorg/coq), where users can pull them without worry of rate limitations.
Maintainer core tasks:
- Create and deploy new Coq Docker definitions to the Docker registry after Coq (pre-)releases.
- Monitor the use of Coq Docker images for continuous integration on GitHub/GitLab and rebuild images when necessary.
- Work with the current Docker-Coq and Docker-Keeper [maintainer](https://github.com/erikmd) to further develop and automate the toolchain.
During their tenure, a maintainer will be considered part of the [Coq Team](https://coq.inria.fr/coq-team.html) and credited for their work in release notes for Coq releases, for example on [Zenodo](https://doi.org/10.5281/zenodo.1003420).
Please respond to this GitHub issue with your motivation, and a short summary of relevant experience, for becoming a Docker-Coq maintainer. The maintainer will be selected from the issue responders by the Coq Team and Coq-community owners. | True | Volunteer co-maintainer needed for Docker-Coq - The Coq Team and Coq-community are looking for a volunteer co-maintainer of the [Docker-Coq](https://github.com/coq-community/docker-coq) project, which provides [Docker container](https://www.docker.com/resources/what-container/) images of many versions of Coq as a service to Coq users.
Docker-Coq is an open source project on GitHub under the BSD-3-Clause license. It maintains definitions of a set of Docker images that provide a basic Coq environment for continuous integration and local use. Thanks to the [docker-keeper](https://gitlab.com/erikmd/docker-keeper) software, images built from Docker-Coq definitions are continuously deployed to the [public Docker registry](https://hub.docker.com/r/coqorg/coq), where users can pull them without worry of rate limitations.
Maintainer core tasks:
- Create and deploy new Coq Docker definitions to the Docker registry after Coq (pre-)releases.
- Monitor the use of Coq Docker images for continuous integration on GitHub/GitLab and rebuild images when necessary.
- Work with the current Docker-Coq and Docker-Keeper [maintainer](https://github.com/erikmd) to further develop and automate the toolchain.
During their tenure, a maintainer will be considered part of the [Coq Team](https://coq.inria.fr/coq-team.html) and credited for their work in release notes for Coq releases, for example on [Zenodo](https://doi.org/10.5281/zenodo.1003420).
Please respond to this GitHub issue with your motivation, and a short summary of relevant experience, for becoming a Docker-Coq maintainer. The maintainer will be selected from the issue responders by the Coq Team and Coq-community owners. | main | volunteer co maintainer needed for docker coq the coq team and coq community are looking for a volunteer co maintainer of the project which provides images of many versions of coq as a service to coq users docker coq is an open source project on github under the bsd clause license it maintains definitions of a set of docker images that provide a basic coq environment for continuous integration and local use thanks to the software images built from docker coq definitions are continuously deployed to the where users can pull them without worry of rate limitations maintainer core tasks create and deploy new coq docker definitions to the docker registry after coq pre releases monitor the use of coq docker images for continuous integration on github gitlab and rebuild images when necessary work with the current docker coq and docker keeper to further develop and automate the toolchain during their tenure a maintainer will be considered part of the and credited for their work in release notes for coq releases for example on please respond to this github issue with your motivation and a short summary of relevant experience for becoming a docker coq maintainer the maintainer will be selected from the issue responders by the coq team and coq community owners | 1 |
5,340 | 26,939,607,539 | IssuesEvent | 2023-02-08 00:24:46 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Setup linting for Django templates | engineering maintain | I ran across https://github.com/prettier/prettier/issues/5581#issuecomment-1169111071 and as I recall, there was no linting for the django code (yet), so some of the mentioned solutions might actually be good fits for the parts that are currently left unlinted. | True | Setup linting for Django templates - I ran across https://github.com/prettier/prettier/issues/5581#issuecomment-1169111071 and as I recall, there was no linting for the django code (yet), so some of the mentioned solutions might actually be good fits for the parts that are currently left unlinted. | main | setup linting for django templates i ran across and as i recall there was no linting for the django code yet so some of the mentioned solutions might actually be good fits for the parts that currently left unlinted | 1
462 | 3,671,084,298 | IssuesEvent | 2016-02-22 04:08:44 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | opened | `find_outdated_appcasts`: Add counter for number of Casks | awaiting maintainer feedback discussion | Envisioning a parameter where the number of Casks to find outdated appcasts for can be set (ie. number of issues to open).
Use case: I may have time to do maybe 10 a day, so right now I just run it and manually exit after I see issues + 10.
I know this doesn't exactly fit with the original plan of "someone run once a week", but with a greater frequency, the delta per run should also decrease. | True | `find_outdated_appcasts`: Add counter for number of Casks - Envisioning a parameter where the number of Casks to find outdated appcasts for can be set (ie. number of issues to open).
Use case: I may have time to do maybe 10 a day, so right now I just run it and manually exit after I see issues + 10.
I know this doesn't exactly fit with the original plan of "someone run once a week", but with a greater frequency, the delta per run should also decrease. | main | find outdated appcasts add counter for number of casks envisioning a parameter where the number of casks to find outdated appcasts for can be set ie number of issues to open use case i may have time to do maybe a day so right now i just run it and manually exit after i see issues i know this doesn t exactly fit with the original plan of someone run once a week but with a greater frequency the delta per run should also decrease | 1 |
543,609 | 15,883,913,512 | IssuesEvent | 2021-04-09 18:05:02 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | opened | Feature Request - Project enhancements | Display/Interface Enhancement Function-PublicationOrProject Priority-High | 1. Can we add individual records to a project (not entire accessions)? I'm creating a project to capture our partnership with the Denver Zoo and we have almost 200 in our legacy accessions.
2. When there is not an End Date, can we check a box so it will display "ongoing" on the detail page? Just having "1903-05-03 - " looks weird.
3. @mkoo has added a photo to this [Mexican Wolf Project ](https://arctos.database.museum/project/1000071) and I think that adds a lot if you're using projects for fundraising or just general Arctos promotion. Can we add something like this as an option on the 'edit' page? I see that the markdown is not that complicated, but people would be more likely to do it if there is a field to pick/create media. I think it should display at the top to the left of the Project Title rather than down in the Description area.
And finally,
4. If you are logged in and viewing another collections project, you can't see any of the 'Cataloged Records Used' or the 'Projects using contributed catalog records' (and probably other things). Similarly, if you are logged in and viewing one of your own projects, you can't see contributions from other collections. See https://arctos.database.museum/project/10002283 for example (256 objects when I'm logged in, 326 when I'm not; also 5 additional contributing projects). I assume this is a VPN issue, but is there some sort of work around? Maybe a summary table without active links?? | 1.0 | Feature Request - Project enhancements - 1. Can we add individual records to a project (not entire accessions)? I'm creating a project to capture our partnership with the Denver Zoo and we have almost 200 in our legacy accessions.
2. When there is not an End Date, can we check a box so it will display "ongoing" on the detail page? Just having "1903-05-03 - " looks weird.
3. @mkoo has added a photo to this [Mexican Wolf Project ](https://arctos.database.museum/project/1000071) and I think that adds a lot if you're using projects for fundraising or just general Arctos promotion. Can we add something like this as an option on the 'edit' page? I see that the markdown is not that complicated, but people would be more likely to do it if there is a field to pick/create media. I think it should display at the top to the left of the Project Title rather than down in the Description area.
And finally,
4. If you are logged in and viewing another collections project, you can't see any of the 'Cataloged Records Used' or the 'Projects using contributed catalog records' (and probably other things). Similarly, if you are logged in and viewing one of your own projects, you can't see contributions from other collections. See https://arctos.database.museum/project/10002283 for example (256 objects when I'm logged in, 326 when I'm not; also 5 additional contributing projects). I assume this is a VPN issue, but is there some sort of work around? Maybe a summary table without active links?? | non_main | feature request project enhancements can we add individual records to a project not entire accessions i m creating a project to capture our partnership with the denver zoo and we have almost in our legacy accessions when there is not an end date can we check a box so it will display ongoing on the detail page just having looks weird mkoo has added a photo to this and i think that adds a lot if you re using projects for fundraising or just general arctos promotion can we add something like this as an option on the edit page i see that the markdown is not that complicated but people would be more likely to do it if there is a field to pick create media i think it should display at the top to the left of the project title rather than down in the description area and finally if you are logged in and viewing another collections project you can t see any of the cataloged records used or the projects using contributed catalog records and probably other things similarly if you are logged in and viewing one of your own projects you can t see contributions from other collections see for example objects when i m logged in when i m not also additional contributing projects i assume this is a vpn issue but is there some sort of work around maybe a summary table without active links | 0 |
2,829 | 10,141,317,817 | IssuesEvent | 2019-08-03 13:07:05 | arcticicestudio/nord-sublime-text | https://api.github.com/repos/arcticicestudio/nord-sublime-text | opened | Transition to new JSON based syntax color scheme format | context-syntax scope-compatibility scope-maintainability scope-stability type-feature | As of Sublime Text 3 build 3149, a new color scheme format `.sublime-color-scheme` was introduced for easier editing, customization and addition of new features. The documentation for the new format is available at the main [Color Schemes documentation][cs-docs].
Nord will migrate to the new format (JSON) from the now deprecated/legacy `.tmTheme` format (XML).
All versions greater than or equal to 3.1 build 3120 **come with a builtin tool to convert legacy themes to the new format** through the command palette **only when the file is opened in the editor**: „Convert Color Scheme“
@kaine119 already submitted #20 that'll be used as base and will be extended to align with Nord's style guidelines, adding missing keys and using the _color palette_ feature that allows to define variables instead of using "hard-coded" HEX values.
[cs-docs]: https://www.sublimetext.com/docs/3/color_schemes.html | True | Transition to new JSON based syntax color scheme format - As of Sublime Text 3 build 3149, a new color scheme format `.sublime-color-scheme` was introduced for easier editing, customization and addition of new features. The documentation for the new format is available at the main [Color Schemes documentation][cs-docs].
Nord will migrate to the new format (JSON) from the now deprecated/legacy `.tmTheme` format (XML).
All versions greater than or equal to 3.1 build 3120 **come with a builtin tool to convert legacy themes to the new format** through the command palette **only when the file is opened in the editor**: „Convert Color Scheme“
@kaine119 already submitted #20 that'll be used as base and will be extended to align with Nord's style guidelines, adding missing keys and using the _color palette_ feature that allows to define variables instead of using "hard-coded" HEX values.
[cs-docs]: https://www.sublimetext.com/docs/3/color_schemes.html | main | transition to new json based syntax color scheme format as of sublime text build a new color scheme format sublime color scheme was introduced for easier editing customization and addition of new features the documentation for the new format is available at the main nord will migrate to the new format json from the now deprecated legacy tmtheme format xml all versions greater or equal to build comes with a builtin tool to convert legacy themes to the new format through the command palette only when the files is opened in the editor „convert color scheme“ already submitted that ll be used as base and will be extended to align with nord s style guidelines adding missing keys and using the color palette feature that allows to define variables instead of using hard coded hex values | 1 |
5,588 | 28,010,793,926 | IssuesEvent | 2023-03-27 18:28:11 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | opened | Verify if custom CSS fixes for Wagtail v3 are still needed for v4 | engineering wagtail frontend maintain Wagtail 4.1.3 | We added [custom-fix.css](https://github.com/MozillaFoundation/foundation.mozilla.org/blob/main/network-api/networkapi/wagtailcustomization/static/wagtailadmin/css/custom-fix.css) to address styling issues found on Wagtail v3. Once v4 is on prod, we should verify if these custom fixes are still needed and update the code/file accordingly.
| True | Verify if custom CSS fixes for Wagtail v3 are still needed for v4 - We added [custom-fix.css](https://github.com/MozillaFoundation/foundation.mozilla.org/blob/main/network-api/networkapi/wagtailcustomization/static/wagtailadmin/css/custom-fix.css) to address styling issues found on Wagtail v3. Once v4 is on prod, we should verify if these custom fixes are still needed and update the code/file accordingly.
| main | verify if custom css fixes for wagtail are still needed for we added to address styling issues found on wagtail once is on prod we should verify if these custom fixes are still needed and update the code file accordingly | 1 |
241,341 | 7,811,634,249 | IssuesEvent | 2018-06-12 10:46:17 | FubarDevelopment/QuickGraph | https://api.github.com/repos/FubarDevelopment/QuickGraph | closed | CP-12403: GraphML Deserialization for Missing <node>/<edge> Data Elements | priority:Low | *From unknown CodePlex user on Monday, 19 January 2009 22:07:32*
Consider the following graph.
```xml
<graph id="G" edgedefault="directed">
<node id="0">
<data key="isSpecial">true</data>
<data key="isSuperSpecial">true</data>
</node>
<node id="1">
<data key="isSuperSpecial">true</data>
</node>
<node id="2"/>
</graph>
```
The IsSpecial and IsSuperSpecial keys are properties that are decorated with an appropriate `XmlAttributeAttribute`. When I deserialize this graph, I am hoping that the missing property values for nodes 1,2 will be deserialized with a suitable default. However, none of the property values are deserialized, even the ones that are explicitly provided in node 0.
I'm not sure if this is intended, but the semantics appear correct for a GraphML document.
[GraphML.zip](https://github.com/FubarDevelopment/QuickGraph/files/2093137/GraphML.zip)
| 1.0 | CP-12403: GraphML Deserialization for Missing <node>/<edge> Data Elements - *From unknown CodePlex user on Monday, 19 January 2009 22:07:32*
Consider the following graph.
```xml
<graph id="G" edgedefault="directed">
<node id="0">
<data key="isSpecial">true</data>
<data key="isSuperSpecial">true</data>
</node>
<node id="1">
<data key="isSuperSpecial">true</data>
</node>
<node id="2"/>
</graph>
```
The IsSpecial and IsSuperSpecial keys are properties that are decorated with an appropriate `XmlAttributeAttribute`. When I deserialize this graph, I am hoping that the missing property values for nodes 1,2 will be deserialized with a suitable default. However, none of the property values are deserialized, even the ones that are explicitly provided in node 0.
I'm not sure if this is intended, but the semantics appear correct for a GraphML document.
[GraphML.zip](https://github.com/FubarDevelopment/QuickGraph/files/2093137/GraphML.zip)
| non_main | cp graphml deserialization for missing data elements from unknown codeplex user on monday january consider the following graph xml true true true the isspecial and issuperspecial keys are properties that are decorated with an appropriate xmlattribtueattribute when i deserialize this graph i am hoping that the missing property values for nodes will be deserialized with a suitable default however none of the property values are deserialized even the ones that are explicitly provided in node i m not sure if this is intended but the semantics appear correct for a graphml document | 0 |
536,941 | 15,716,871,386 | IssuesEvent | 2021-03-28 09:34:04 | sopra-fs21-group-03/Server | https://api.github.com/repos/sopra-fs21-group-03/Server | opened | Create a "Poker Instruction" Button in the game selection area, which redirects a user to the instructions page | low priority task | Time estimate: 1h
This task is part of user story #19 | 1.0 | Create a "Poker Instruction" Button in the game selection area, which redirects a user to the instructions page - Time estimate: 1h
This task is part of user story #19 | non_main | create a poker instruction button in the game selection area which redirects a user to the instructions page time estimate this task is part of user story | 0 |
2,307 | 8,269,270,555 | IssuesEvent | 2018-09-15 03:34:42 | react-navigation/react-navigation | https://api.github.com/repos/react-navigation/react-navigation | closed | cannot read property options on backbutton | needs action from maintainer needs more info | ### Current Behavior
When I navigate from a page and replace it via pop-then-navigate or replace, and then
hit the back button quickly before the page fully loads, the result is "cannot read property options of undefined" in the StackViewLayout.
I have tried putting the replace/pop-then-navigate in a componentHasMounted hook, and a conditional isMounted check, to no avail. I am guessing the pop is causing a race condition where, by the time the back button is hit, it triggers on the first screen and tries to navigate on a stack that has been removed.
`const ViewProfileScreen = createStackNavigator({
ViewProfile: {key:"ViewProfile", screen: ViewProfile,path:"ViewProfile/:id",
navigationOptions: ({ navigation }) => ({
headerMode:"float" ,
headerLeft: <HeaderBackButton onPress={() =>{navigation.goBack(null); }} />,
}),
},
});
const EditProfileScreen = createStackNavigator({
EditProfileScreen: { key:"EditProfile",screen: EditProfile,path:"editProfile/:id",
navigationOptions: ({ navigation }) => ({
headerMode:"float" ,
headerLeft: <HeaderBackButton onPress={() =>{navigation.goBack(null)}} />,
}),
},
});`

### Expected Behavior
Back button not to break
| True | cannot read property options on backbutton - ### Current Behavior
When I navigate from a page and replace it via pop-then-navigate or replace, and then
hit the back button quickly before the page fully loads, the result is "cannot read property options of undefined" in the StackViewLayout.
I have tried putting the replace/pop-then-navigate in a componentHasMounted hook, and a conditional isMounted check, to no avail. I am guessing the pop is causing a race condition where, by the time the back button is hit, it triggers on the first screen and tries to navigate on a stack that has been removed.
`const ViewProfileScreen = createStackNavigator({
ViewProfile: {key:"ViewProfile", screen: ViewProfile,path:"ViewProfile/:id",
navigationOptions: ({ navigation }) => ({
headerMode:"float" ,
headerLeft: <HeaderBackButton onPress={() =>{navigation.goBack(null); }} />,
}),
},
});
const EditProfileScreen = createStackNavigator({
EditProfileScreen: { key:"EditProfile",screen: EditProfile,path:"editProfile/:id",
navigationOptions: ({ navigation }) => ({
headerMode:"float" ,
headerLeft: <HeaderBackButton onPress={() =>{navigation.goBack(null)}} />,
}),
},
});`

### Expected Behavior
Back button not to break
| main | cannot read property options on backbutton current behavior when i navigate from a page and replace it by pop then navigate or replace and then hitting the back button quickly before the page fully loads results in cannot read property options of undefined in the stackviewlayout i have tried putting the replace pop then navigate in a componenthasmounted and conditional ismounted to no avail i am guessing the pop is causing a race condition to where the back button is hit it triggers on the first screen and tries to navigate on a stack that has been removed const viewprofilescreen createstacknavigator viewprofile key viewprofile screen viewprofile path viewprofile id navigationoptions navigation headermode float headerleft navigation goback null const editprofilescreen createstacknavigator editprofilescreen key editprofile screen editprofile path editprofile id navigationoptions navigation headermode float headerleft navigation goback null expected behavior back button not to break | 1 |
99,265 | 11,137,268,163 | IssuesEvent | 2019-12-20 18:51:58 | CoderLine/alphaTab | https://api.github.com/repos/CoderLine/alphaTab | closed | Configuration based Settings JSON Serialization | :bulb: type-feature-request area-documentation platform-javascript priority-high state-accepted type-improvement | # Description
As of today the settings handling in alphaTab for JSON is done manually. This is hard to maintain and prone to errors and inconsistencies.
# Possible Solutions
It would be better if each setting defines the serialization information by attributes just like most serialization systems also do it.
| 1.0 | Configuration based Settings JSON Serialization - # Description
As of today the settings handling in alphaTab for JSON is done manually. This is hard to maintain and prone to errors and inconsistencies.
# Possible Solutions
It would be better if each setting defines the serialization information by attributes just like most serialization systems also do it.
| non_main | configuration based settings json serialization description as of today the settings handling in alphatab for json is done manually this is hard to maintain and prone to errors and inconsistencies possible solutions it would be better if each setting defines the serialization information by attributes just like most serialization systems also do it | 0 |
2,023 | 6,757,636,597 | IssuesEvent | 2017-10-24 11:35:14 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Feature request] Multi-line comments | enhancement need-maintainer | ### 1. Request
It would be nice if multi-line comments were supported.
### 2. Rationale
Text split into paragraphs is easier and more pleasant to read than long walls of text.
### 3. Example
For example, I suggest separating lines with the `\n` character, as in the Python programming language.
Example question:
```markdown
Улан-Удэ, 1689—934@Верхнеудинск*-info-В 1666 было основано Удинское зимовье, в 1689 строится Верхнеудинская крепость, преобразованная в город в 1783./nУлан-Удэ — «красная Уда», Уда — название реки./nНижнеудинск на территории современной Иркутской области.*-proof-172—3
```
In the game, this construct would look as follows:
```markdown
[9:06:29 PM] <GREEN>
Вопрос №141 из 158:
—------------------------------------------------------
Улан-Удэ, 1689—934
—------------------------------------------------------
[9:06:42 PM] <орнитоптера_Королевы_Александры> Верхнеудинск
[9:06:42 PM] <GREEN> орнитоптера_Королевы_Александры - даёт правильный ответ
[9:06:42 PM] <GREEN> Правильный ответ: "Верхнеудинск"
[9:06:42 PM] <GREEN> Комментарии: В 1666 было основано Удинское зимовье, в 1689 строится Верхнеудинская крепость, преобразованная в город в 1783.
Улан-Удэ — «красная Уда», Уда — название реки.
Нижнеудинск на территории современной Иркутской области.
[9:06:42 PM] <GREEN> Источник: 172—3
```
Thanks. | True | [Feature request] Multi-line comments - ### 1. Request
It would be nice if multi-line comments were supported.
### 2. Rationale
Text split into paragraphs is easier and more pleasant to read than long walls of text.
### 3. Example
For example, I suggest separating lines with the `\n` character, as in the Python programming language.
Example question:
```markdown
Улан-Удэ, 1689—934@Верхнеудинск*-info-В 1666 было основано Удинское зимовье, в 1689 строится Верхнеудинская крепость, преобразованная в город в 1783./nУлан-Удэ — «красная Уда», Уда — название реки./nНижнеудинск на территории современной Иркутской области.*-proof-172—3
```
In the game, this construct would look as follows:
```markdown
[9:06:29 PM] <GREEN>
Вопрос №141 из 158:
—------------------------------------------------------
Улан-Удэ, 1689—934
—------------------------------------------------------
[9:06:42 PM] <орнитоптера_Королевы_Александры> Верхнеудинск
[9:06:42 PM] <GREEN> орнитоптера_Королевы_Александры - даёт правильный ответ
[9:06:42 PM] <GREEN> Правильный ответ: "Верхнеудинск"
[9:06:42 PM] <GREEN> Комментарии: В 1666 было основано Удинское зимовье, в 1689 строится Верхнеудинская крепость, преобразованная в город в 1783.
Улан-Удэ — «красная Уда», Уда — название реки.
Нижнеудинск на территории современной Иркутской области.
[9:06:42 PM] <GREEN> Источник: 172—3
```
Thanks. | main | multi-line comments request it would be nice if multi-line comments were supported rationale text split into paragraphs is easier and more pleasant to read than long walls of text example for example i suggest separating lines with the n character as in the python programming language example question markdown улан удэ — верхнеудинск info в было основано удинское зимовье в строится верхнеудинская крепость преобразованная в город в nулан удэ — «красная уда» уда — название реки nнижнеудинск на территории современной иркутской области proof — in the game this construct would look as follows markdown вопрос № из — улан удэ — — верхнеудинск орнитоптера королевы александры даёт правильный ответ правильный ответ верхнеудинск комментарии в было основано удинское зимовье в строится верхнеудинская крепость преобразованная в город в улан удэ — «красная уда» уда — название реки нижнеудинск на территории современной иркутской области источник — thanks | 1
3,791 | 16,110,197,279 | IssuesEvent | 2021-04-27 20:03:42 | svengreb/wand | https://api.github.com/repos/svengreb/wand | closed | Update to `tmpl-go` template repository version `0.8.0` | context-workflow scope-maintainability scope-quality type-improvement | Update to [`tmpl-go` version `0.8.0`][1] which [updates `golangci-lint` to version `1.39.0`][2] and [the `tmpl` repository version `0.9.0`][3].
[1]: https://github.com/svengreb/tmpl-go/releases/tag/v0.8.0
[2]: https://github.com/svengreb/tmpl-go/issues/56
[3]: https://github.com/svengreb/tmpl-go/issues/58 | True | Update to `tmpl-go` template repository version `0.8.0` - Update to [`tmpl-go` version `0.8.0`][1] which [updates `golangci-lint` to version `1.39.0`][2] and [the `tmpl` repository version `0.9.0`][3].
[1]: https://github.com/svengreb/tmpl-go/releases/tag/v0.8.0
[2]: https://github.com/svengreb/tmpl-go/issues/56
[3]: https://github.com/svengreb/tmpl-go/issues/58 | main | update to tmpl go template repository version update to which and | 1 |
83,704 | 7,880,114,860 | IssuesEvent | 2018-06-26 15:05:24 | BoostGSoC18/tensor | https://api.github.com/repos/BoostGSoC18/tensor | closed | Unit-test tensor-tensor multiplication | unit-test | Need to test `ublas::ttt` as well as the `ublas::prod` functions. | 1.0 | Unit-test tensor-tensor multiplication - Need to test `ublas::ttt` as well as the `ublas::prod` functions. | non_main | unit test tensor tensor multiplication need to test ublas ttt as well as the ublas prod functions | 0 |
3,404 | 13,181,830,893 | IssuesEvent | 2020-08-12 14:53:30 | duo-labs/cloudmapper | https://api.github.com/repos/duo-labs/cloudmapper | closed | Change VPC endpoint icons to reduce confusion | map unmaintained_functionality | As noted in #528, the VPC endpoint icon for the S3 bucket constantly confuses people into thinking that the network diagram is showing access to S3 buckets. The diagram is showing access to VPC endpoints that support security groups, and which may allow some access to an S3 bucket. This is confusing and one way of making this more clear would be to not use an S3 bucket icon for the VPC endpoint. | True | Change VPC endpoint icons to reduce confusion - As noted in #528, the VPC endpoint icon for the S3 bucket constantly confuses people into thinking that the network diagram is showing access to S3 buckets. The diagram is showing access to VPC endpoints that support security groups, and which may allow some access to an S3 bucket. This is confusing and one way of making this more clear would be to not use an S3 bucket icon for the VPC endpoint. | main | change vpc endpoint icons to reduce confusion as noted in the vpc endpoint icon for the bucket constantly confuses people into thinking that the network diagram is showing access to buckets the diagram is showing access to vpc endpoints that support security groups and which may allow some access to an bucket this is confusing and one way of making this more clear would be to not use an bucket icon for the vpc endpoint | 1 |
4,103 | 19,430,005,276 | IssuesEvent | 2021-12-21 10:48:43 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFM - solr | Status: Available For Maintainer(s) | ## Current Maintainer
<!-- If you are not confirmed as a known maintainer, you may be asked to take additional steps to confirm your user account -->
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
<!-- You may leave the URLs empty as long as the issue title matches the identifier of the package -->
Package URL: https://chocolatey.org/packages/solr
Package source URL: https://github.com/majkinetor/au-packages/tree/master/solr
Package became too big to be embedded. I don't want to maintain non-embeddable packages.
| True | RFM - solr - ## Current Maintainer
<!-- If you are not confirmed as a known maintainer, you may be asked to take additional steps to confirm your user account -->
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
<!-- You may leave the URLs empty as long as the issue title matches the identifier of the package -->
Package URL: https://chocolatey.org/packages/solr
Package source URL: https://github.com/majkinetor/au-packages/tree/master/solr
Package became too big to be embedded. I don't want to maintain non-embeddable packages.
| main | rfm solr current maintainer i am the maintainer of the package and wish to pass it to someone else checklist issue title starts with rfm existing package details package url package source url package became too big to be embedded i don t want to maintain non embeddable packages | 1 |
266,062 | 28,298,887,328 | IssuesEvent | 2023-04-10 02:51:57 | nidhi7598/linux-4.19.72 | https://api.github.com/repos/nidhi7598/linux-4.19.72 | closed | CVE-2020-25220 (High) detected in linuxlinux-4.19.254 - autoclosed | Mend: dependency security vulnerability | ## CVE-2020-25220 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.254</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72/commit/10a8c99e4f60044163c159867bc6f5452c1c36e5">10a8c99e4f60044163c159867bc6f5452c1c36e5</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/cgroup/cgroup.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel 4.9.x before 4.9.233, 4.14.x before 4.14.194, and 4.19.x before 4.19.140 has a use-after-free because skcd->no_refcnt was not considered during a backport of a CVE-2020-14356 patch. This is related to the cgroups feature.
<p>Publish Date: 2020-09-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-25220>CVE-2020-25220</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25220">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25220</a></p>
<p>Release Date: 2020-09-10</p>
<p>Fix Resolution: v4.9.223,v4.14.194,v4.19.140</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-25220 (High) detected in linuxlinux-4.19.254 - autoclosed - ## CVE-2020-25220 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.254</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72/commit/10a8c99e4f60044163c159867bc6f5452c1c36e5">10a8c99e4f60044163c159867bc6f5452c1c36e5</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/cgroup/cgroup.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel 4.9.x before 4.9.233, 4.14.x before 4.14.194, and 4.19.x before 4.19.140 has a use-after-free because skcd->no_refcnt was not considered during a backport of a CVE-2020-14356 patch. This is related to the cgroups feature.
<p>Publish Date: 2020-09-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-25220>CVE-2020-25220</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25220">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25220</a></p>
<p>Release Date: 2020-09-10</p>
<p>Fix Resolution: v4.9.223,v4.14.194,v4.19.140</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files kernel cgroup cgroup c vulnerability details the linux kernel x before x before and x before has a use after free because skcd no refcnt was not considered during a backport of a cve patch this is related to the cgroups feature publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
4,918 | 25,281,891,221 | IssuesEvent | 2022-11-16 16:20:06 | precice/precice | https://api.github.com/repos/precice/precice | closed | Simplification of Waveform class | enhancement maintainability | To implement #1171 the following changes to the Waveform class are useful:
* Remove feature to store and use data from past windows. There is currently no clear use-case and it makes the management of data more complicated.
* Allow to store several data values inside of a window. Always keep the value at the beginning of the window, when a new iteration begins and automatically delete all others.
These changes should not break any existing code, but will help us to implement higher-order interpolation when subcycling is used and multiple data values are available in a window. | True | Simplification of Waveform class - To implement #1171 the following changes to the Waveform class are useful:
* Remove feature to store and use data from past windows. There is currently no clear use-case and it makes the management of data more complicated.
* Allow to store several data values inside of a window. Always keep the value at the beginning of the window, when a new iteration begins and automatically delete all others.
These changes should not break any existing code, but will help us to implement higher-order interpolation when subcycling is used and multiple data values are available in a window. | main | simplification of waveform class to implement the following changes to the waveform class are useful remove feature to store and use data from past windows there is currently no clear use case and it makes the management of data more complicated allow to store several data values inside of a window always keep the value at the beginning of the window when a new iteration begins and automatically delete all others these changes should not break any existing code but will help us to implement higher order interpolation when subcycling is used and multiple data values are available in a window | 1 |
615,524 | 19,256,429,419 | IssuesEvent | 2021-12-09 11:48:47 | googleapis/java-bigqueryconnection | https://api.github.com/repos/googleapis/java-bigqueryconnection | closed | com.example.bigqueryconnection.CreateConnectionIT: testCreateConnection failed | type: bug priority: p1 api: bigqueryconnection flakybot: issue | Note: #566 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 7ea272c7154e3a47de71204c5b77dfc90569e363
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/24af25e8-7560-4c36-b9f5-b0e87118d224), [Sponge](http://sponge2/24af25e8-7560-4c36-b9f5-b0e87118d224)
status: failed
<details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.NotFoundException: io.grpc.StatusRuntimeException: NOT_FOUND: Not found: Connection CREATE_CONNECTION_TEST_2c77c67c
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:45)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.bigqueryconnection.v1.ConnectionServiceClient.deleteConnection(ConnectionServiceClient.java:704)
at com.example.bigqueryconnection.DeleteConnection.deleteConnection(DeleteConnection.java:42)
at com.example.bigqueryconnection.CreateConnectionIT.tearDown(CreateConnectionIT.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: io.grpc.StatusRuntimeException: NOT_FOUND: Not found: Connection CREATE_CONNECTION_TEST_2c77c67c
at io.grpc.Status.asRuntimeException(Status.java:535)
... 13 more
</pre></details> | 1.0 | com.example.bigqueryconnection.CreateConnectionIT: testCreateConnection failed - Note: #566 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 7ea272c7154e3a47de71204c5b77dfc90569e363
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/24af25e8-7560-4c36-b9f5-b0e87118d224), [Sponge](http://sponge2/24af25e8-7560-4c36-b9f5-b0e87118d224)
status: failed
<details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.NotFoundException: io.grpc.StatusRuntimeException: NOT_FOUND: Not found: Connection CREATE_CONNECTION_TEST_2c77c67c
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:45)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.bigqueryconnection.v1.ConnectionServiceClient.deleteConnection(ConnectionServiceClient.java:704)
at com.example.bigqueryconnection.DeleteConnection.deleteConnection(DeleteConnection.java:42)
at com.example.bigqueryconnection.CreateConnectionIT.tearDown(CreateConnectionIT.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: io.grpc.StatusRuntimeException: NOT_FOUND: Not found: Connection CREATE_CONNECTION_TEST_2c77c67c
at io.grpc.Status.asRuntimeException(Status.java:535)
... 13 more
</pre></details> | non_main | com example bigqueryconnection createconnectionit testcreateconnection failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output com google api gax rpc notfoundexception io grpc statusruntimeexception not found not found connection create connection test at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java suppressed com google api gax rpc asynctaskexception asynchronous task failed at com google api gax rpc apiexceptions callandtranslateapiexception apiexceptions java at com google api gax rpc unarycallable call unarycallable java at com google cloud bigqueryconnection connectionserviceclient deleteconnection connectionserviceclient java at com example bigqueryconnection deleteconnection deleteconnection deleteconnection java at com example bigqueryconnection createconnectionit teardown createconnectionit java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements runafters invokemethod runafters java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners 
parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by io grpc statusruntimeexception not found not found connection create connection test at io grpc status asruntimeexception status java more | 0 |
1,812 | 6,577,311,778 | IssuesEvent | 2017-09-12 00:01:53 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Enable the use of JSON for junos_command | affects_2.2 feature_idea networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
network/junos/junos_command.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
--- JUNOS 15.1R3.6 built 2016-03-24 18:40:35 UTC
##### SUMMARY
<!--- Explain the problem briefly -->
Enable the JSON format for output from junos devices running 14.2 or higher.
From 14.2 onward, junos supports JSON.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: show version json
junos_command:
host: "{{ inventory_hostname }}"
commands:
- "show version"
format: json
```
Results:
```
.......
"stdout": [{"multi-routing-engine-results": [{"multi-routing-engine-item": [{"re-name": [{"data": "fpc0"}], "software-information": [{"host-name": [{"data": "netlab-sw01-a"}], "junos-version": [{"data": "15.1R3.6"}], "package-information": [{"comment": [{"data": "JUNOS EX Software Suite [15.1R3.6]"}], "name": [{"data": "junos"}]}, {"comment": [{"data": "JUNOS FIPS mode utilities [15.1R3.6]"}], "name": [{"data": "fips-mode-powerpc"}]}, {"comment": [{"data": "JUNOS Online Documentation [15.1R3.6]"}], "name": [{"data": "jdocs-ex"}]}, {"comment": [{"data": "JUNOS EX 4200 Software Suite [15.1R3.6]"}], "name": [{"data": "junos-ex-4200"}]}, {"comment": [{"data": "JUNOS Web Management Platform Package [15.1R3.6]"}], "name": [{"data": "jweb-ex"}]}], "product-model": [{"data": "ex4200-24t"}], "product-name": [{"data": "ex4200-24t"}]}]}]}]}], "stdout_lines": [{"multi-routing-engine-results": [{"multi-routing-engine-item": [{"re-name": [{"data": "fpc0"}], "software-information": [{"host-name": [{"data": "netlab-sw01-a"}], "junos-version": [{"data": "15.1R3.6"}], "package-information": [{"comment": [{"data": "JUNOS EX Software Suite [15.1R3.6]"}], "name": [{"data": "junos"}]}, {"comment": [{"data": "JUNOS FIPS mode utilities [15.1R3.6]"}], "name": [{"data": "fips-mode-powerpc"}]}, {"comment": [{"data": "JUNOS Online Documentation [15.1R3.6]"}], "name": [{"data": "jdocs-ex"}]}, {"comment": [{"data": "JUNOS EX 4200 Software Suite [15.1R3.6]"}], "name": [{"data": "junos-ex-4200"}]}, {"comment": [{"data": "JUNOS Web Management Platform Package [15.1R3.6]"}], "name": [{"data": "jweb-ex"}]}], "product-model": [{"data": "ex4200-24t"}], "product-name": [{"data": "ex4200-24t"}]}]}]}]}]}
```
| True | Enable the use of JSON for junos_command - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
network/junos/junos_command.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
--- JUNOS 15.1R3.6 built 2016-03-24 18:40:35 UTC
##### SUMMARY
<!--- Explain the problem briefly -->
Enable the JSON format for output from junos devices running 14.2 or higher.
From 14.2 onward, junos supports JSON.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: show version json
junos_command:
host: "{{ inventory_hostname }}"
commands:
- "show version"
format: json
```
Results:
```
.......
"stdout": [{"multi-routing-engine-results": [{"multi-routing-engine-item": [{"re-name": [{"data": "fpc0"}], "software-information": [{"host-name": [{"data": "netlab-sw01-a"}], "junos-version": [{"data": "15.1R3.6"}], "package-information": [{"comment": [{"data": "JUNOS EX Software Suite [15.1R3.6]"}], "name": [{"data": "junos"}]}, {"comment": [{"data": "JUNOS FIPS mode utilities [15.1R3.6]"}], "name": [{"data": "fips-mode-powerpc"}]}, {"comment": [{"data": "JUNOS Online Documentation [15.1R3.6]"}], "name": [{"data": "jdocs-ex"}]}, {"comment": [{"data": "JUNOS EX 4200 Software Suite [15.1R3.6]"}], "name": [{"data": "junos-ex-4200"}]}, {"comment": [{"data": "JUNOS Web Management Platform Package [15.1R3.6]"}], "name": [{"data": "jweb-ex"}]}], "product-model": [{"data": "ex4200-24t"}], "product-name": [{"data": "ex4200-24t"}]}]}]}]}], "stdout_lines": [{"multi-routing-engine-results": [{"multi-routing-engine-item": [{"re-name": [{"data": "fpc0"}], "software-information": [{"host-name": [{"data": "netlab-sw01-a"}], "junos-version": [{"data": "15.1R3.6"}], "package-information": [{"comment": [{"data": "JUNOS EX Software Suite [15.1R3.6]"}], "name": [{"data": "junos"}]}, {"comment": [{"data": "JUNOS FIPS mode utilities [15.1R3.6]"}], "name": [{"data": "fips-mode-powerpc"}]}, {"comment": [{"data": "JUNOS Online Documentation [15.1R3.6]"}], "name": [{"data": "jdocs-ex"}]}, {"comment": [{"data": "JUNOS EX 4200 Software Suite [15.1R3.6]"}], "name": [{"data": "junos-ex-4200"}]}, {"comment": [{"data": "JUNOS Web Management Platform Package [15.1R3.6]"}], "name": [{"data": "jweb-ex"}]}], "product-model": [{"data": "ex4200-24t"}], "product-name": [{"data": "ex4200-24t"}]}]}]}]}]}
```
| main | enable the use of json for junos command issue type feature idea component name network junos junos command py ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific junos built utc summary enable format json for output from junos device running or higher from onward junos supports json steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name show version json junos command host inventory hostname commands show version format json results stdout software information junos version package information name comment name comment name comment name comment name product model product name stdout lines software information junos version package information name comment name comment name comment name comment name product model product name | 1 |
702,932 | 24,141,500,020 | IssuesEvent | 2022-09-21 15:09:10 | root-project/root | https://api.github.com/repos/root-project/root | closed | [RF] Buggy range overlap check in createNLL when SplitRange option is used | bug affects:master priority:high in:RooFit/RooStats affects:6.26 | - [x] Checked for duplicates
### Describe the bug
Since ROOT 6.26, there is a new range overlap check when calling the `createNLL` from a `RooAbsPdf` instance with a `Range` argument that contains multiple range names. However, it ignores the fact that when the `SplitRange` option is used, the name of the range to be checked should be appended by the appropriate category label from each category of the simultaneous pdf. This causes an exception to be falsely raised even if the ranges do not overlap. This is because all the named ranges will return the full observable range since it fetches the range from a range name that does not exist.
### To Reproduce
In PyROOT, the issue can be reproduced using
```Python
import ROOT
ws_cat1 = ROOT.RooWorkspace("ws_cat1")
ws_cat1.factory("Gaussian::pdf_cat1(x_cat1[0,10],mu_cat1[4,0,10],sigma_cat1[1.0,0.1,10.0])")
pdf_cat1 = ws_cat1.pdf("pdf_cat1")
x_cat1 = ws_cat1.var("x_cat1")
x_cat1.setRange("SideBandLo_cat1", 0, 2)
x_cat1.setRange("SideBandHi_cat1", 6, 10)
ds_cat1 = pdf_cat1.generate(ROOT.RooArgSet(x_cat1), 11000)
ws_cat2 = ROOT.RooWorkspace("ws_cat2")
ws_cat2.factory("Gaussian::pdf_cat2(x_cat2[0,10],mu_cat2[6,0,10],sigma_cat2[1.0,0.1,10.0])")
pdf_cat2 = ws_cat2.pdf("pdf_cat2")
x_cat2 = ws_cat2.var("x_cat2")
x_cat2.setRange("SideBandLo_cat2", 0, 4)
x_cat2.setRange("SideBandHi_cat2", 8, 10)
ds_cat2 = pdf_cat2.generate(ROOT.RooArgSet(x_cat2), 11000)
index_cat = ROOT.RooCategory("cat", "cat")
index_cat.defineType("cat1")
index_cat.defineType("cat2")
sim_pdf = ROOT.RooSimultaneous("sim_pdf", "", index_cat)
sim_pdf.addPdf(pdf_cat1, "cat1")
sim_pdf.addPdf(pdf_cat2, "cat2")
ROOT.gInterpreter.GenerateDictionary("std::map<std::string, RooDataSet*>", "map;string;RooDataSet.h")
ROOT.gInterpreter.GenerateDictionary("std::pair<std::string, RooDataSet*>", "map;string;RooDataSet.h")
dsmap = ROOT.std.map('string, RooDataSet*')()
dsmap.keepalive = list()
dsmap.keepalive.append(ds_cat1)
dsmap.keepalive.append(ds_cat2)
dsmap.insert(dsmap.begin(), ROOT.std.pair("const string, RooDataSet*")("cat1", ds_cat1))
dsmap.insert(dsmap.begin(), ROOT.std.pair("const string, RooDataSet*")("cat2", ds_cat2))
combData = ROOT.RooDataSet("combData", "", ROOT.RooArgSet(x_cat1, x_cat2),
ROOT.RooFit.Index(index_cat),
ROOT.RooFit.Import(dsmap))
nll = sim_pdf.createNLL(combData, ROOT.RooFit.Range("SideBandLo,SideBandHi"), ROOT.RooFit.SplitRange())
```
The last step raises the error "runtime_error: Error in RooAbsPdf::createNLL! The ranges SideBandLo,SideBandHi are overlapping!" when using ROOT 6.26+. | 1.0 | [RF] Buggy range overlap check in createNLL when SplitRange option is used - - [x] Checked for duplicates
### Describe the bug
Since ROOT 6.26, there is a new range overlap check when calling the `createNLL` from a `RooAbsPdf` instance with a `Range` argument that contains multiple range names. However, it ignores the fact that when the `SplitRange` option is used, the name of the range to be checked should be appended by the appropriate category label from each category of the simultaneous pdf. This causes an exception to be falsely raised even if the ranges do not overlap. This is because all the named ranges will return the full observable range since it fetches the range from a range name that does not exist.
### To Reproduce
In PyROOT, the issue can be reproduced using
```Python
import ROOT
ws_cat1 = ROOT.RooWorkspace("ws_cat1")
ws_cat1.factory("Gaussian::pdf_cat1(x_cat1[0,10],mu_cat1[4,0,10],sigma_cat1[1.0,0.1,10.0])")
pdf_cat1 = ws_cat1.pdf("pdf_cat1")
x_cat1 = ws_cat1.var("x_cat1")
x_cat1.setRange("SideBandLo_cat1", 0, 2)
x_cat1.setRange("SideBandHi_cat1", 6, 10)
ds_cat1 = pdf_cat1.generate(ROOT.RooArgSet(x_cat1), 11000)
ws_cat2 = ROOT.RooWorkspace("ws_cat2")
ws_cat2.factory("Gaussian::pdf_cat2(x_cat2[0,10],mu_cat2[6,0,10],sigma_cat2[1.0,0.1,10.0])")
pdf_cat2 = ws_cat2.pdf("pdf_cat2")
x_cat2 = ws_cat2.var("x_cat2")
x_cat2.setRange("SideBandLo_cat2", 0, 4)
x_cat2.setRange("SideBandHi_cat2", 8, 10)
ds_cat2 = pdf_cat2.generate(ROOT.RooArgSet(x_cat2), 11000)
index_cat = ROOT.RooCategory("cat", "cat")
index_cat.defineType("cat1")
index_cat.defineType("cat2")
sim_pdf = ROOT.RooSimultaneous("sim_pdf", "", index_cat)
sim_pdf.addPdf(pdf_cat1, "cat1")
sim_pdf.addPdf(pdf_cat2, "cat2")
ROOT.gInterpreter.GenerateDictionary("std::map<std::string, RooDataSet*>", "map;string;RooDataSet.h")
ROOT.gInterpreter.GenerateDictionary("std::pair<std::string, RooDataSet*>", "map;string;RooDataSet.h")
dsmap = ROOT.std.map('string, RooDataSet*')()
dsmap.keepalive = list()
dsmap.keepalive.append(ds_cat1)
dsmap.keepalive.append(ds_cat2)
dsmap.insert(dsmap.begin(), ROOT.std.pair("const string, RooDataSet*")("cat1", ds_cat1))
dsmap.insert(dsmap.begin(), ROOT.std.pair("const string, RooDataSet*")("cat2", ds_cat2))
combData = ROOT.RooDataSet("combData", "", ROOT.RooArgSet(x_cat1, x_cat2),
ROOT.RooFit.Index(index_cat),
ROOT.RooFit.Import(dsmap))
nll = sim_pdf.createNLL(combData, ROOT.RooFit.Range("SideBandLo,SideBandHi"), ROOT.RooFit.SplitRange())
```
The last step raises the error "runtime_error: Error in RooAbsPdf::createNLL! The ranges SideBandLo,SideBandHi are overlapping!" when using ROOT 6.26+. | non_main | buggy range overlap check in createnll when splitrange option is used checked for duplicates describe the bug since root there is a new range overlap check when calling the createnll from a rooabspdf instance with a range argument that contains multiple range names however it ignores the fact that when the splitrange option is used the name of the range to be checked should be appended by the appropriate category label from each category of the simultaneous pdf this causes an exception to be falsely raised even if the ranges do not overlap this is because all the named ranges will return the full observable range since it fetches the range from a range name that does not exist to reproduce in pyroot the issue can be reproduced using python import root ws root rooworkspace ws ws factory gaussian pdf x mu sigma pdf ws pdf pdf x ws var x x setrange sidebandlo x setrange sidebandhi ds pdf generate root rooargset x ws root rooworkspace ws ws factory gaussian pdf x mu sigma pdf ws pdf pdf x ws var x x setrange sidebandlo x setrange sidebandhi ds pdf generate root rooargset x index cat root roocategory cat cat index cat definetype index cat definetype sim pdf root roosimultaneous sim pdf index cat sim pdf addpdf pdf sim pdf addpdf pdf root ginterpreter generatedictionary std map map string roodataset h root ginterpreter generatedictionary std pair map string roodataset h dsmap root std map string roodataset dsmap keepalive list dsmap keepalive append ds dsmap keepalive append ds dsmap insert dsmap begin root std pair const string roodataset ds dsmap insert dsmap begin root std pair const string roodataset ds combdata root roodataset combdata root rooargset x x root roofit index index cat root roofit import dsmap nll sim pdf createnll combdata root roofit range sidebandlo sidebandhi root roofit splitrange the last step raises the error runtime error error in rooabspdf createnll the ranges sidebandlo sidebandhi are overlapping when using root | 0 |
810,091 | 30,224,659,502 | IssuesEvent | 2023-07-05 22:44:26 | RagnarokResearchLab/RagLite | https://api.github.com/repos/RagnarokResearchLab/RagLite | opened | Add support for diffuse textures to the WebGPU mesh rendering pipeline | Complexity: Moderate Priority: High Status: Accepted Type: New Feature Scope: Native Client | Goals:
- [ ] Texture can be uploaded into a GPU buffer
- [ ] The default (mesh) rendering pipeline supports textures and samplers
- [ ] The bound texture and sampler can be used to apply diffuse colors | 1.0 | Add support for diffuse textures to the WebGPU mesh rendering pipeline - Goals:
- [ ] Texture can be uploaded into a GPU buffer
- [ ] The default (mesh) rendering pipeline supports textures and samplers
- [ ] The bound texture and sampler can be used to apply diffuse colors | non_main | add support for diffuse textures to the webgpu mesh rendering pipeline goals texture can be uploaded into a gpu buffer the default mesh rendering pipeline supports textures and samplers the bound texture and sampler can be used to apply diffuse colors | 0
3,904 | 17,376,851,928 | IssuesEvent | 2021-07-30 23:28:21 | chorman0773/Clever-ISA | https://api.github.com/repos/chorman0773/Clever-ISA | closed | Include Glossary of symbols used in encoding sections | I-enhancement S-blocked-on-maintainer X-generic | There should be a Glossary of each letter used and the meaning within encoding groups | True | Include Glossary of symbols used in encoding sections - There should be a Glossary of each letter used and the meaning within encoding groups | main | include glossary of symbols used in encoding sections there should be a glossary of each letter used and the meaning within encoding groups | 1 |
76,582 | 9,465,556,122 | IssuesEvent | 2019-04-18 00:10:52 | influxdata/influxdb | https://api.github.com/repos/influxdata/influxdb | opened | Editing a variable name should warn user of implications | ui/needs-design | Similar to the warning when editing an org name.
queries, dashboards, templates, and other variables would be affected. | 1.0 | Editing a variable name should warn user of implications - Similar to the warning when editing an org name.
queries, dashboards, templates, and other variables would be affected. | non_main | editing a variable name should warn user of implications similar to the warning in when editing an org name queries dashboards templates and other variables would be affected | 0 |
5,359 | 26,979,354,677 | IssuesEvent | 2023-02-09 11:58:00 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Application to join: rszrama (Contributing my Backdrop theme "Starch") | Maintainer application | This is a Backdrop implementation of the Apex WordPress theme that I've been tweaking over time. It's still not "1.0.0" ready, but I'll begin using it on sites in its current state and will continue to clean out the cruft / update it for the latest developments in Backdrop core.
Repository: https://github.com/codewombat/starch
Demo site: http://ryanszrama.com/starch/
(That's a temporary demo site; will move it when I have time to fool with DNS.)
| True | Application to join: rszrama (Contributing my Backdrop theme "Starch") - This is a Backdrop implementation of the Apex WordPress theme that I've been tweaking over time. It's still not "1.0.0" ready, but I'll begin using it on sites in its current state and will continue to clean out the cruft / update it for the latest developments in Backdrop core.
Repository: https://github.com/codewombat/starch
Demo site: http://ryanszrama.com/starch/
(That's a temporary demo site; will move it when I have time to fool with DNS.)
| main | application to join rszrama contributing my backdrop theme starch this is a backdrop implementation of the apex wordpress theme that i ve been tweaking over time it s still not ready but i ll begin using it on sites in its current state and will continue to clean out the cruft update it for the latest developments in backdrop core repository demo site that s a temporary demo site will move it when i have time to fool with dns | 1 |
150,446 | 5,767,144,935 | IssuesEvent | 2017-04-27 09:14:07 | minishift/minishift | https://api.github.com/repos/minishift/minishift | closed | minishift ssh doesn't work after PHP template deployment failure | kind/bug priority/major status/needs-info | Steps to reproduce this
```
1. Download minishift v1.0.0-rc.1
1. set cpu, memory and vm driver by using minishift config set
minishift config set cpu 4
minishift config set memory 2094
minishift config set vm-driver virtualbox
2. Then execute the command "minishift start --iso-url https://github.com/minishift/minishift-centos-iso/releases/download/v1.0.0-rc.4/minishift-centos7.iso"
3. Open web console and deploy php template (php + mysql). Here it stuck after pushing 73% of image.
Pushing image 172.30.1.1:5000/php/cakephp-mysql-persistent:latest ...
Pushed 0/9 layers, 1% complete
Pushed 1/9 layers, 18% complete
Pushed 2/9 layers, 27% complete
Pushed 3/9 layers, 36% complete
Pushed 4/9 layers, 47% complete
Pushed 5/9 layers, 57% complete
Pushed 6/9 layers, 73% complete
4. Then go to the terminal and run "minishift ssh" and it throws the error
$ minishift ssh
E0410 14:34:32.721166 48701 ssh.go:38] Cannot establish SSH connection to the VM: exit status 255
$minishift status
Running
```
Environment
```
os : OS X
Minishift : MiniShift-1.0.0-rc.1
iso image : centos v1.0.0-rc.4
vm-driver : virtualbox
``` | 1.0 | minishift ssh doesn't work after PHP template deployment failure - Steps to reproduce this
```
1. Download minishift v1.0.0-rc.1
1. set cpu, memory and vm driver by using minishift config set
minishift config set cpu 4
minishift config set memory 2094
minishift config set vm-driver virtualbox
2. Then execute the command "minishift start --iso-url https://github.com/minishift/minishift-centos-iso/releases/download/v1.0.0-rc.4/minishift-centos7.iso"
3. Open web console and deploy php template (php + mysql). Here it stuck after pushing 73% of image.
Pushing image 172.30.1.1:5000/php/cakephp-mysql-persistent:latest ...
Pushed 0/9 layers, 1% complete
Pushed 1/9 layers, 18% complete
Pushed 2/9 layers, 27% complete
Pushed 3/9 layers, 36% complete
Pushed 4/9 layers, 47% complete
Pushed 5/9 layers, 57% complete
Pushed 6/9 layers, 73% complete
4. Then go to the terminal and run "minishift ssh" and it throws the error
$ minishift ssh
E0410 14:34:32.721166 48701 ssh.go:38] Cannot establish SSH connection to the VM: exit status 255
$minishift status
Running
```
Environment
```
os : OS X
Minishift : MiniShift-1.0.0-rc.1
iso image : centos v1.0.0-rc.4
vm-driver : virtualbox
``` | non_main | minishift ssh doesn t work after php template deployment failure steps to reproduce this download minishift rc set cpu memory and vm driver by using minishift config set minishift config set cpu minishift config set memory minishift config set vm driver virtualbox then execute the command minishift start iso url open web console and deploy php template php mysql here it stuck after pushing of image pushing image php cakephp mysql persistent latest pushed layers complete pushed layers complete pushed layers complete pushed layers complete pushed layers complete pushed layers complete pushed layers complete then go to the terminal and run minishift ssh and it throws the error minishift ssh ssh go cannot establish ssh connection to the vm exit status minishift status running environment os os x minishift minishift rc iso image centos rc vm driver virtualbox | 0 |
231,337 | 17,674,303,692 | IssuesEvent | 2021-08-23 10:18:53 | AlexanderThaller/format_serde_error | https://api.github.com/repos/AlexanderThaller/format_serde_error | opened | Documentation of always_color and never_color are switched | documentation good first issue | https://docs.rs/format_serde_error/0.3.0/format_serde_error/fn.always_color.html
Set coloring mode to never use color in the output (ColoringMode::NeverColor).
https://docs.rs/format_serde_error/0.3.0/format_serde_error/fn.never_color.html
Set coloring mode to always use color in the output (ColoringMode::AlwaysColor).
Should be switched. | 1.0 | Documentation of always_color and never_color are switched - https://docs.rs/format_serde_error/0.3.0/format_serde_error/fn.always_color.html
Set coloring mode to never use color in the output (ColoringMode::NeverColor).
https://docs.rs/format_serde_error/0.3.0/format_serde_error/fn.never_color.html
Set coloring mode to always use color in the output (ColoringMode::AlwaysColor).
Should be switched. | non_main | documentation of always color and never color are switched set coloring mode to never use color in the output coloringmode nevercolor set coloring mode to always use color in the output coloringmode alwayscolor should be switched | 0 |
364,211 | 25,484,695,302 | IssuesEvent | 2022-11-26 08:03:56 | iGomezP/BeduAgil01 | https://api.github.com/repos/iGomezP/BeduAgil01 | closed | Add evidence for Post-Work 1 | documentation | # First task
- [x] Prepare the post-work session 1 evidence locally
- [x] Make a commit
- [x] Open a pull request | 1.0 | Add evidence for Post-Work 1 - # First task
- [x] Prepare the post-work session 1 evidence locally
- [x] Make a commit
- [x] Open a pull request | non_main | add evidence for post work first task prepare the post work session evidence locally make a commit open a pull request | 0
3,513 | 13,725,900,524 | IssuesEvent | 2020-10-03 20:39:56 | NaluKit/nalu | https://api.github.com/repos/NaluKit/nalu | closed | Replace gwt-event source files with gwt-event from Maven Central | maintainance | Nalu is moving forward for the first release and to be deployed on Maven Central with version 1.0.0.
Because of the dependency on "org.gwtproject.event:gwt-event", there are problems using Nalu because of the vertispan repo - where gwt-event is currently hosted. Some enterprises do not allow the use of repositories that are not Maven Central or Maven Central SNAPSHOT.
To solve this problem, the gwt-event source files are (temporarily) added to Nalu.
Once gwt-event is deployed on Maven Central, the source files of gwt-event will be removed and the dependency to gwt-event will be added (again). | True | Replace gwt-event source files with gwt-event from Maven Central - Nalu is moving forward for the first release and to be deployed on Maven Central with version 1.0.0.
Because of the dependency on "org.gwtproject.event:gwt-event", there are problems using Nalu because of the vertispan repo - where gwt-event is currently hosted. Some enterprises do not allow the use of repositories that are not Maven Central or Maven Central SNAPSHOT.
To solve this problem, the gwt-event source files are (temporarily) added to Nalu.
Once gwt-event is deployed on Maven Central, the source files of gwt-event will be removed and the dependency to gwt-event will be added (again). | main | replace gwt event source files with gwt event from maven central nalu is moving forward for the first release and to be deployed on maven central with version because of the dependency to org gwtproject event gwt event there are problems using nalu because of the vertispan repo where gwt event is currently hosted some enterprises do not allow to use repo that are not maven central or maven central snapshot to solve this problem the gwt event source files are temporally added to nalu once gwt event is deployed on maven central the source files of gwt event will be removed and the dependency to gwt event will be added again | 1 |
1,744 | 6,574,917,778 | IssuesEvent | 2017-09-11 14:29:23 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_metric_alarm does not recognize provided credentials | affects_2.1 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ec2_metric_alarm
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
C8H10N4O2:ansible mkramer$ ansible --version
ansible 2.1.0.0
config file = /Users/mkramer/github/infrastructure/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
OSX/AWS
##### SUMMARY
<!--- Explain the problem briefly -->
Running the ec2_metric_alarm in a playbook results in the following error:
```
"msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"
```
It fails to recognize boto profiles, or exported environment vars.
With some googling, I found other issues similar to this that occurred in other modules, and one of the workarounds suggested was to use:
```
aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
```
explicitly in the play. That works.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
- Create a simple play based on the ec2_metric_alarm module example in the docs.
- Run with .aws/credentials profile or exported aws credentials.
<!--- Paste example playbooks or commands between quotes below -->
```
tasks:
- name: Gather facts
action: ec2_facts
- name: debug
debug: var=ansible_ec2_instance_id
- name: Create test alarm
ec2_metric_alarm:
#aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
#aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
state: present
region: '{{ default_region }}'
name: "cpu-low"
metric: "CPUUtilization"
namespace: "AWS/EC2"
statistic: Average
comparison: "<="
threshold: 5.0
period: 300
evaluation_periods: 3
unit: "Percent"
description: "This will alarm when a bamboo slave's cpu usage average is lower than 5% for 15 minutes "
dimensions: {'InstanceId':'{{ ansible_ec2_instance_id }}'}
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect the play to run when using AWS_PROFILE=<profilename>
or at least after exporting aws credentials to environment vars.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
It dun borked: `"msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"`
| True | ec2_metric_alarm does not recognize provided credentials - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ec2_metric_alarm
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
C8H10N4O2:ansible mkramer$ ansible --version
ansible 2.1.0.0
config file = /Users/mkramer/github/infrastructure/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
OSX/AWS
##### SUMMARY
<!--- Explain the problem briefly -->
Running the ec2_metric_alarm in a playbook results in the following error:
```
"msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"
```
It fails to recognize boto profiles, or exported environment vars.
With some googling, I found other issues similar to this that occurred in other modules, and one of the workarounds suggested was to use:
```
aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
```
explicitly in the play. That works.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
- Create a simple play based on the ec2_metric_alarm module example in the docs.
- Run with .aws/credentials profile or exported aws credentials.
<!--- Paste example playbooks or commands between quotes below -->
```
tasks:
- name: Gather facts
action: ec2_facts
- name: debug
debug: var=ansible_ec2_instance_id
- name: Create test alarm
ec2_metric_alarm:
#aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
#aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
state: present
region: '{{ default_region }}'
name: "cpu-low"
metric: "CPUUtilization"
namespace: "AWS/EC2"
statistic: Average
comparison: "<="
threshold: 5.0
period: 300
evaluation_periods: 3
unit: "Percent"
description: "This will alarm when a bamboo slave's cpu usage average is lower than 5% for 15 minutes "
dimensions: {'InstanceId':'{{ ansible_ec2_instance_id }}'}
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect the play to run when using AWS_PROFILE=<profilename>
or at least after exporting aws credentials to environment vars.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
It dun borked: `"msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"`
| main | metric alarm does not recognize provided credentials issue type bug report component name metric alarm ansible version ansible mkramer ansible version ansible config file users mkramer github infrastructure ansible ansible cfg configured module search path default w o overrides os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific osx aws summary running the metric alarm in a playbook results in the following error msg no handler was ready to authenticate handlers were checked check your credentials it fails to recognize boto profiles or exported environment vars with some googling i found other issues similar to this that occurred in other modules and one of the work arounds suggested was to use aws access key lookup env aws access key id aws secret key lookup env aws secret access key explicitly in the play that works steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create a simple play based on the metric alarm module example in the docs run with aws credentials profile or exported aws credentials tasks name gather facts action facts name debug debug var ansible instance id name create test alarm metric alarm aws access key lookup env aws access key id aws secret key lookup env aws secret access key state present region default region name cpu low metric cpuutilization namespace aws statistic average comparison threshold period evaluation periods unit percent description this will alarm when a bamboo slave s cpu usage average is lower than for minutes dimensions instanceid ansible instance id expected results i expect the play to run when using aws profile or at least after exporting aws credentials to environment vars actual results it dun borked msg no handler was ready to authenticate handlers were checked check your credentials | 1 |
4,942 | 25,401,963,679 | IssuesEvent | 2022-11-22 12:48:07 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Unable to use the API to reset the record summary template to default | type: bug work: backend status: ready restricted: maintainers | ## Steps to reproduce
1. Begin with a table that has never had its record summary template customized. The response from `/api/db/v0/tables/<table_id>/` looks like:
```json5
{
// ...
"settings": {
"id": 26,
"preview_settings": {
"customized": false,
"template": "{5247}"
}
},
}
```
Good.
1. Customize the record summary template with a request like:
```http
PATCH http://localhost:8000/api/db/v0/tables/1265/settings/26/
```
```json
{
"preview_settings": {
"customized": true,
"template": "{5247}"
}
}
```
This works. I get those same `preview_settings` back from the API.
1. Revert back to the default behavior wherein the server will generate the record summary template depending on the columns in the table.
```http
PATCH http://localhost:8000/api/db/v0/tables/1265/settings/24/
```
```json
{
"preview_settings": {
"customized": false
}
}
```
1. Expect the response to look like:
```json
{
"preview_settings": {
"customized": false,
"template": "{1287}"
}
}
```
Here, the `template` value should be computed by the server using the algorithm it was using before I ever customized the record summary template.
1. Instead, observe the response to be:
```json
{
"preview_settings": {
"customized": true,
"template": ""
}
}
```
The empty record summary means records can't be summarized.
CC @silentninja @mathemancer
| True | Unable to use the API to reset the record summary template to default - ## Steps to reproduce
1. Begin with a table that has never had its record summary template customized. The response from `/api/db/v0/tables/<table_id>/` looks like:
```json5
{
// ...
"settings": {
"id": 26,
"preview_settings": {
"customized": false,
"template": "{5247}"
}
},
}
```
Good.
1. Customize the record summary template with a request like:
```http
PATCH http://localhost:8000/api/db/v0/tables/1265/settings/26/
```
```json
{
"preview_settings": {
"customized": true,
"template": "{5247}"
}
}
```
This works. I get those same `preview_settings` back from the API.
1. Revert back to the default behavior wherein the server will generate the record summary template depending on the columns in the table.
```http
PATCH http://localhost:8000/api/db/v0/tables/1265/settings/24/
```
```json
{
"preview_settings": {
"customized": false
}
}
```
1. Expect the response to look like:
```json
{
"preview_settings": {
"customized": false,
"template": "{1287}"
}
}
```
Here, the `template` value should be computed by the server using the algorithm it was using before I ever customized the record summary template.
1. Instead, observe the response to be:
```json
{
"preview_settings": {
"customized": true,
"template": ""
}
}
```
The empty record summary means records can't be summarized.
CC @silentninja @mathemancer
| main | unable to use the api to reset the record summary template to default steps to reproduce begin with a table that has never had its record summary template customized the response from api db tables looks like settings id preview settings customized false template good customized the record summary template with a request like http patch json preview settings customized true template this works i get those same preview settings back from the api revert back to the default behavior wherein the server will generate the record summary template depending on the columns in the table http patch json preview settings customized false expect the response to look like json preview settings customized false template here the template value should be computed by the server using the algorithm it was using before i ever customized the record summary template instead observe the response to be json preview settings customized true template the empty record summary means records can t be summarized cc silentninja mathemancer | 1 |
162,226 | 25,503,933,813 | IssuesEvent | 2022-11-28 07:49:51 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | Military Outpost Listening Device Quest is quite broken | Bug Need more info Design | ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
When you get the "special announcement" from the security chief in a military outpost and accept the "listening device" quest from them, they tell you to install listening devices in the crew quarters. However, the dialogue option to install the devices always appears in the dining room/ central room of the outpost. Also, the quest doesn't have a fail condition. It doesn't matter if an outpost guard is sitting a metre away and staring right at you. You will always install the devices and get the money (As well as electrical skill if you're smart).
### Reproduction steps
1. Visit a military outpost
2. Get the special announcement from the security chief
3. Try to reach the crew quarters
4. Watch as the quest dialogue appears in the dining room
5. Finish the quest while surrounded by outpost NPCs
### Bug prevalence
Happens every time I play
### Version
0.20.8.0 (Unstable)
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_ | 1.0 | Military Outpost Listening Device Quest is quite broken - ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
When you get the "special announcement" from the security chief in a military outpost and accept the "listening device" quest from them, they tell you to install listening devices in the crew quarters. However, the dialogue option to install the devices always appears in the dining room/ central room of the outpost. Also, the quest doesn't have a fail condition. It doesn't matter if an outpost guard is sitting a metre away and staring right at you. You will always install the devices and get the money (As well as electrical skill if you're smart).
### Reproduction steps
1. Visit a military outpost
2. Get the special announcement from the security chief
3. Try to reach the crew quarters
4. Watch as the quest dialogue appears in the dining room
5. Finish the quest while surrounded by outpost NPCs
### Bug prevalence
Happens every time I play
### Version
0.20.8.0 (Unstable)
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_ | non_main | military outpost listening device quest is quite broken disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened when you get the special announcement from the security chief in a military outpost and accept the listening device quest from them they tell you to install listening devices in the crew quarters however the dialogue option to install the devices always appears in the dining room central room of the outpost also the quest doesn t have a fail condition it doesn t matter if an outpost guard is sitting a metre away and staring right at you you will always install the devices and get the money as well as electrical skill if you re smart reproduction steps visit a military outpost get the special announcement from the security chief try to reach the crew quarters watch as the quest dialogue appears in the dining room finish the quest while surrounded by outpost npcs bug prevalence happens every time i play version unstable no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response | 0 |
5,884 | 32,030,005,532 | IssuesEvent | 2023-09-22 11:38:38 | beyarkay/eskom-calendar | https://api.github.com/repos/beyarkay/eskom-calendar | opened | Missing area schedule hayfields 2 Pietermaritzburg Msunduzi KZN | waiting-on-maintainer missing-area-schedule | **What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
Please also give the province/municipality, our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot.
**Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
This really helps us figure out what's working!
**Any other information**
If you've got any other info you think might be helpful, feel free to leave it here
| True | Missing area schedule hayfields 2 Pietermaritzburg Msunduzi KZN - **What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
Please also give the province/municipality, our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot.
**Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
This really helps us figure out what's working!
**Any other information**
If you've got any other info you think might be helpful, feel free to leave it here
| main | missing area schedule hayfields pietermaritzburg msunduzi kzn what area s couldn t you find on please also give the province municipality our beautiful country has a surprising number of places that are named the same as each other if you know what your area is named on eskomsepush including that also helps a lot where did you hear about this really helps us figure out what s working any other information if you ve got any other info you think might be helpful feel free to leave it here | 1 |
2,382 | 8,484,212,428 | IssuesEvent | 2018-10-26 01:17:58 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | show feedback when using "ec2_asg" and doing a rolling restart with "replace_all_instances" | affects_2.8 aws cloud feature module needs_maintainer needs_triage support:community support:core | <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Is it possible to show feedback when using "ec2_asg" and doing a rolling restart with "replace_all_instances"? Currently, if there are a lot of instances, the play will run and not report anything until it's all finished. Can it list the instances as they are recycled?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
!ec2_asg
##### ADDITIONAL INFORMATION
The play just sits there and doesn't report as it's rebooting the instances. It would be nice to see which instances have been rebooted and the order in which they are replaced, in real time.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: change launch config and do a rolling restart of servers
ec2_asg:
name: "{{ asg_maybe.results[0].auto_scaling_group_name }}"
launch_config_name: "{{ asg_maybe.results[0].launch_config_name }}"
min_size: "{{ asg_maybe.results[0].min_size | int }}"
max_size: "{{ asg_maybe.results[0].max_size | int }}"
desired_capacity: "{{ asg_maybe.results[0].desired_capacity | int }}"
replace_all_instances: yes
target_group_arns: "{{ asg_maybe.results[0].target_group_arns | default([]) }}"
replace_batch_size: 1
health_check_type: ELB
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| True | show feedback when using "ec2_asg" and doing a rolling restart with "replace_all_instances" - <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Is it possible to show feedback when using "ec2_asg" and doing a rolling restart with "replace_all_instances"? Currently, if there are a lot of instances, the play will run and not report anything until it's all finished. Can it list the instances as they are recycled?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
!ec2_asg
##### ADDITIONAL INFORMATION
The play just sits there and doesn't report as it's rebooting the instances. It would be nice to see which instances have been rebooted and the order in which they are replaced, in real time.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: change launch config and do a rolling restart of servers
ec2_asg:
name: "{{ asg_maybe.results[0].auto_scaling_group_name }}"
launch_config_name: "{{ asg_maybe.results[0].launch_config_name }}"
min_size: "{{ asg_maybe.results[0].min_size | int }}"
max_size: "{{ asg_maybe.results[0].max_size | int }}"
desired_capacity: "{{ asg_maybe.results[0].desired_capacity | int }}"
replace_all_instances: yes
target_group_arns: "{{ asg_maybe.results[0].target_group_arns | default([]) }}"
replace_batch_size: 1
health_check_type: ELB
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| main | show feedback when using asg and doing a rolling restart with replace all instances summary is it possible to show feedback when using asg and doing a rolling restart with replace all instances currently if there are alot of instances the play will run and not report anything until its all finished can it list the instances as they are recycled issue type feature idea component name asg additional information the play just sits there and doesn t report as its rebooting the instances it would be nice to see which instances have been rebooted and the order they are replaced in real time yaml name change launch config and do a rolling restart of servers asg name asg maybe results auto scaling group name launch config name asg maybe results launch config name min size asg maybe results min size int max size asg maybe results max size int desired capacity asg maybe results desired capacity int replace all instances yes target group arns asg maybe results target group arns default replace batch size health check type elb | 1 |
4,642 | 24,031,834,751 | IssuesEvent | 2022-09-15 15:35:47 | ClaudiuCreanga/magento2-store-locator-stockists-extension | https://api.github.com/repos/ClaudiuCreanga/magento2-store-locator-stockists-extension | closed | Looking for maintainer | maintainer wanted | Looking for a maintainer to take control of this project and move it forward. There are new releases of magento2 and apparently some things stopped working, i.e. issue #29. As I no longer work with magento, I don't have time to debug the issue. Anyone interested, post your availability here. | True | Looking for maintainer - Looking for a maintainer to take control of this project and move it forward. There are new releases of magento2 and apparently some things stopped working, i.e. issue #29. As I no longer work with magento, I don't have time to debug the issue. Anyone interested, post your availability here. | main | looking for maintainer looking for a maintainer to take control of this project and move it forward there are new releases of and apparently some things stopped working i e issue as i no longer work with magento i don t have time to debug the issue anyone interested post your availability here | 1 |
1,957 | 6,678,592,645 | IssuesEvent | 2017-10-05 14:42:27 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | What3Words Geocoder: Should accept space-separated co-ordinates | Low-Hanging Fruit Maintainer Input Requested Suggestion Triggering | For example https://duckduckgo.com/?q=what3words++51.9985+-0.7439&ia=map
---
IA Page: http://duck.co/ia/view/what3words
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @moollaza
| True | What3Words Geocoder: Should accept space-separated co-ordinates - For example https://duckduckgo.com/?q=what3words++51.9985+-0.7439&ia=map
---
IA Page: http://duck.co/ia/view/what3words
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @moollaza
| main | geocoder should accept space separated co ordinates for example ia page moollaza | 1 |
1,147 | 5,005,652,621 | IssuesEvent | 2016-12-12 11:26:01 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | yum install failed but task outputs OK (with_items) | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
yum
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Running on Ubuntu 14.04
Managing CentOS 7.2
##### SUMMARY
The task outputs OK but actually failed. This happens only when using with_items.
Because of the wrong output, it's hard to find the error in big playbooks.
##### STEPS TO REPRODUCE
```
---
- name: Test yum install
hosts: somehost
tasks:
- name: install package
become: yes
yum:
state: present
name: "{{ item }}"
with_items:
- bash
- doesnotexist
```
##### EXPECTED RESULTS
```
TASK [install package] *********************************************************
fatal: [somehost]: FAILED! => {"changed": false, "failed": true, "msg": "No Package matching 'doesnotexisthere' found available, installed or updated", "rc": 0, "results": []}
```
##### ACTUAL RESULTS
```
TASK [install package] *********************************************************
ok: [somehost] => (item=[u'bash', u'doesnotexist'])
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/home/user/stuff/ansible/playbooks/test.retry
PLAY RECAP *********************************************************************
somehost : ok=0 changed=0 unreachable=0 failed=1
```
| True | yum install failed but task outputs OK (with_items) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
yum
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Running on Ubuntu 14.04
Managing CentOS 7.2
##### SUMMARY
The task outputs OK but actually failed. This happens only when using with_items.
Because of the wrong output, it's hard to find the error in big playbooks.
##### STEPS TO REPRODUCE
```
---
- name: Test yum install
hosts: somehost
tasks:
- name: install package
become: yes
yum:
state: present
name: "{{ item }}"
with_items:
- bash
- doesnotexist
```
##### EXPECTED RESULTS
```
TASK [install package] *********************************************************
fatal: [somehost]: FAILED! => {"changed": false, "failed": true, "msg": "No Package matching 'doesnotexisthere' found available, installed or updated", "rc": 0, "results": []}
```
##### ACTUAL RESULTS
```
TASK [install package] *********************************************************
ok: [somehost] => (item=[u'bash', u'doesnotexist'])
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/home/user/stuff/ansible/playbooks/test.retry
PLAY RECAP *********************************************************************
somehost : ok=0 changed=0 unreachable=0 failed=1
```
| main | yum install failed but task outputs ok with items issue type bug report component name yum ansible version ansible configuration os environment running on ubuntu managing centos summary the tasks outputs ok but actually failed this happens only when using with items because of the wrong output in big playbooks it s hard to find the error steps to reproduce name test yum install hosts somehost tasks name install package become yes yum state present name item with items bash doesnotexist expected results task fatal failed changed false failed true msg no package matching doesnotexisthere found available installed or updated rc results actual results task ok item no more hosts left to retry use limit home user stuff ansible playbooks test retry play recap somehost ok changed unreachable failed | 1 |
4,636 | 24,005,406,996 | IssuesEvent | 2022-09-14 14:28:37 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | Toggle and ToggleSmall aria-label is better placed on the checkbox, not the label element | severity: 2 type: a11y ♿ component: toggle status: waiting for maintainer response 💬 screen-reader: JAWS | ## Toggle and ToggleSmall aria-labels are misplaced - the attribute should go on the input element, not the label
## Environment
> MacOS
> Chrome
> VoiceOver
## Detailed description
VoiceOver on Chrome does not announce the aria-label of the Toggle element because the attribute is on the label element, not the input itself.
> What did you expect to happen?
I expected the text of the aria-label to be announced by VoiceOver upon focusing on the toggle element.
> What happened instead?
The text was not announced, only the "tick box" and its on/off state.
> What WCAG 2.1 checkpoint does the issue violate?
Looking at the "accessible name computation" specification, it seems to me that the current setup, having an aria-label on a label, is not in the specification. The aria-label attribute would be better suited on the input element itself.
## Steps to reproduce the issue
1. Turn on VoiceOver for Chrome
2. Go to [Storybook](https://react.carbondesignsystem.com/?path=/story/togglesmall--toggled)
3. Navigate to the Toggle element
## Additional information
- It's important to note that the label is announced in Safari, which is probably the most used browser with VoiceOver
- I was unable to test this with NVDA/JAWS at the moment
- But as per the specification, I think the aria attribute should go onto the input element anyway.
- I tested it by moving the attribute using the Developer Tools, and it worked both with Safari and Chrome
Let me know if you found my reasoning solid, and I'll go ahead and create a PR! | True | Toggle and ToggleSmall aria-label is better placed on the checkbox, not the label element - ## Toggle and ToggleSmall aria-labels are misplaced - the attribute should go on the input element, not the label
## Environment
> MacOS
> Chrome
> VoiceOver
## Detailed description
VoiceOver on Chrome does not announce the aria-label of the Toggle element because the attribute is on the label element, not the input itself.
> What did you expect to happen?
I expected the text of the aria-label to be announced by VoiceOver upon focusing on the toggle element.
> What happened instead?
The text was not announced, only the "tick box" and its on/off state.
> What WCAG 2.1 checkpoint does the issue violate?
Looking at the "accessible name computation" specification, it seems to me that the current setup, having an aria-label on a label, is not in the specification. The aria-label attribute would be better suited on the input element itself.
## Steps to reproduce the issue
1. Turn on VoiceOver for Chrome
2. Go to [Storybook](https://react.carbondesignsystem.com/?path=/story/togglesmall--toggled)
3. Navigate to the Toggle element
## Additional information
- It's important to note that the label is announced in Safari, which is probably the most used browser with VoiceOver
- I was unable to test this with NVDA/JAWS at the moment
- But as per the specification, I think the aria attribute should go onto the input element anyway.
- I tested it by moving the attribute using the Developer Tools, and it worked both with Safari and Chrome
Let me know if you found my reasoning solid, and I'll go ahead and create a PR! | main | toggle and togglesmall aria label is better placed on the checkbox not the label element toggle and togglesmall aria labels are misplaced the attribute should go on the input element not the label environment macos chrome voiceover detailed description voiceover on chrome does not announce the aria label of the toggle element because the attribute is on the label element not the input itself what did you expect to happen i expected the text of the aria label to be announced by voiceover upon focusing on the toggle element what happened instead the text was not announced only the tick box and its on off state what wcag checkpoint does the issue violate looking at the accessible name computation specification it seems to me that the current setup having an aria label on a label is not in the specification the aria label attribute would be better suited on the input element itself steps to reproduce the issue turn on voiceover for chrome go to navigate to the toggle element additional information it s important to note that the label is announced in safari which is probably the most used browser with voiceover i was unable to test this with nvda jaws at the moment but as per specification i thing the aria attribute should go onto the input element anyways i tested it by moving the attribute using the developer tools and it worked both with safari and chrome let me know if you found my reasoning solid and i ll go ahead and create a pr | 1 |
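To make the suggested fix concrete, here is a rough before/after of the rendered markup described in the record above. The id, class names, and label text are hypothetical placeholders for illustration, not Carbon's actual generated output.

```html
<!-- Reported behaviour: aria-label sits on the label element -->
<input id="toggle-1" type="checkbox" class="toggle-input" />
<label for="toggle-1" class="toggle-label" aria-label="Enable notifications">...</label>

<!-- Suggested placement: aria-label moves to the input itself -->
<input id="toggle-1" type="checkbox" class="toggle-input" aria-label="Enable notifications" />
<label for="toggle-1" class="toggle-label">...</label>
```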
80,298 | 15,381,054,541 | IssuesEvent | 2021-03-02 22:02:29 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | azure 2020 key | devops-code-git/tech devops/prod duplicate |
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3f8b4989-21f0-53c5-dfce-6d5f1a66b8b2
* Version Independent ID: 2f4419a7-408b-40cd-9190-985a9976d556
* Content: [Authenticate with your Git repos - Azure Repos](https://docs.microsoft.com/en-us/azure/devops/repos/git/auth-overview?view=azure-devops)
* Content Source: [docs/repos/git/auth-overview.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/repos/git/auth-overview.md)
* Product: **devops**
* Technology: **devops-code-git**
* GitHub Login: @vtbassmatt
* Microsoft Alias: **macoope** | 1.0 | azure 2020 key -
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3f8b4989-21f0-53c5-dfce-6d5f1a66b8b2
* Version Independent ID: 2f4419a7-408b-40cd-9190-985a9976d556
* Content: [Authenticate with your Git repos - Azure Repos](https://docs.microsoft.com/en-us/azure/devops/repos/git/auth-overview?view=azure-devops)
* Content Source: [docs/repos/git/auth-overview.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/repos/git/auth-overview.md)
* Product: **devops**
* Technology: **devops-code-git**
* GitHub Login: @vtbassmatt
* Microsoft Alias: **macoope** | non_main | azure key document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id dfce version independent id content content source product devops technology devops code git github login vtbassmatt microsoft alias macoope | 0 |
3,685 | 15,051,718,409 | IssuesEvent | 2021-02-03 14:25:52 | cloverhearts/quilljs-markdown | https://api.github.com/repos/cloverhearts/quilljs-markdown | closed | Allow change regex patterns and ignore tags with option param. | Saw with Maintainer | #25
@cloverhearts @borzecki
Allow change regex patterns and ignore tags with option param.
Example:
```
this.editor = new Quill('#editor', options);
const markdownOptions = {
ignoreTags: ['H1', 'H2'],
tags: {
bold: {
pattern: /(\*){1}(.+?)(?:\1){1}/g,
},
italic: {
pattern: /(\_){1}(.+?)(?:\1){1}/g,
},
},
};
const markdown = new QuillMarkdown(this.editor, markdownOptions);
``` | True | Allow change regex patterns and ignore tags with option param. - #25
@cloverhearts @borzecki
Allow change regex patterns and ignore tags with option param.
Example:
```
this.editor = new Quill('#editor', options);
const markdownOptions = {
ignoreTags: ['H1', 'H2'],
tags: {
bold: {
pattern: /(\*){1}(.+?)(?:\1){1}/g,
},
italic: {
pattern: /(\_){1}(.+?)(?:\1){1}/g,
},
},
};
const markdown = new QuillMarkdown(this.editor, markdownOptions);
``` | main | allow change regex patterns and ignore tags with option param cloverhearts borzecki allow change regex patterns and ignore tags with option param example this editor new quill editor options const markdownoptions ignoretags tags bold pattern g italic pattern g const markdown new quillmarkdown this editor markdownoptions | 1 |
5,560 | 27,815,639,279 | IssuesEvent | 2023-03-18 16:53:20 | microsoft/DirectXTK | https://api.github.com/repos/microsoft/DirectXTK | closed | Retire legacy Xbox One XDK support | maintainence | The only scenario that still uses VS 2017 is for the legacy Xbox One XDK. This task is drop support for this older Xbox development model and remove the following projects:
```
DirectXTK_XboxOneXDK_2017.sln
```
and all the associated XboxOneXDK project files in the test suite.
> The end-of-life release will be hosted on https://github.com/microsoft/Xbox-ATG-Samples. | True | Retire legacy Xbox One XDK support - The only scenario that still uses VS 2017 is for the legacy Xbox One XDK. This task is drop support for this older Xbox development model and remove the following projects:
```
DirectXTK_XboxOneXDK_2017.sln
```
and all the associated XboxOneXDK project files in the test suite.
> The end-of-life release will be hosted on https://github.com/microsoft/Xbox-ATG-Samples. | main | retire legacy xbox one xdk support the only scenario that still uses vs is for the legacy xbox one xdk this task is drop support for this older xbox development model and remove the following projects directxtk xboxonexdk sln and all the associated xboxonexdk project files in the test suite the end of life release will be hosted on | 1 |
372,602 | 11,017,550,167 | IssuesEvent | 2019-12-05 08:42:39 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.androidauthority.com - design is broken | browser-fenix engine-gecko priority-normal | <!-- @browser: Firefox Mobile 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:70.0) Gecko/70.0 Firefox/70.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.androidauthority.com/
**Browser / Version**: Firefox Mobile 70.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: Design malfunction
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.androidauthority.com - design is broken - <!-- @browser: Firefox Mobile 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:70.0) Gecko/70.0 Firefox/70.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.androidauthority.com/
**Browser / Version**: Firefox Mobile 70.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: Design malfunction
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | design is broken url browser version firefox mobile operating system android tested another browser yes problem type design is broken description design malfunction steps to reproduce browser configuration none from with ❤️ | 0 |
1,149 | 5,008,015,815 | IssuesEvent | 2016-12-12 18:18:52 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Validating configuration files that include files with relative paths | affects_2.2 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- template
- copy
##### ANSIBLE VERSION
```
ansible 2.2.0.0
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When you validate configuration files that include additional files with relative paths, the validation fails because those included files are not found.
##### STEPS TO REPRODUCE
For example, I have an nginx configuration file called `/etc/nginx/includes.d/php.conf` that includes a file called `fastcgi_params` which is inside the `/etc/nginx` path and nginx always executes from this path so everything works fine. However, when I try to validate the configuration, I get this:
```
TASK [nginx : Add nginx configuration] *****************************************
fatal: [test]: FAILED! => {"changed": true, "exit_status": 1, "failed": true, "msg": "failed to validate", "stderr": "nginx: [emerg] open() \"/root/.ansible/tmp/ansible-tmp-1480609035.98-258840759926753/fastcgi_params\" failed (2: No such file or directory) in /etc/nginx/includes.d/php.conf:19\nnginx: configuration file /root/.ansible/tmp/ansible-tmp-1480609035.98-258840759926753/source test failed\n", "stdout": "", "stdout_lines": []}
```
It would be great if you could provide an additional option for validation, like `validate_cwd` so the validation process would change to this directory before the test. In this case, setting `validate_cwd` to `/etc/nginx` should make the validation pass.
Of course, the workaround is to always use absolute paths, but I think this might be a handy option to have. | True | Validating configuration files that include files with relative paths - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- template
- copy
##### ANSIBLE VERSION
```
ansible 2.2.0.0
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When you validate configuration files that include additional files with relative paths, the validation fails because those included files are not found.
##### STEPS TO REPRODUCE
For example, I have an nginx configuration file called `/etc/nginx/includes.d/php.conf` that includes a file called `fastcgi_params` which is inside the `/etc/nginx` path and nginx always executes from this path so everything works fine. However, when I try to validate the configuration, I get this:
```
TASK [nginx : Add nginx configuration] *****************************************
fatal: [test]: FAILED! => {"changed": true, "exit_status": 1, "failed": true, "msg": "failed to validate", "stderr": "nginx: [emerg] open() \"/root/.ansible/tmp/ansible-tmp-1480609035.98-258840759926753/fastcgi_params\" failed (2: No such file or directory) in /etc/nginx/includes.d/php.conf:19\nnginx: configuration file /root/.ansible/tmp/ansible-tmp-1480609035.98-258840759926753/source test failed\n", "stdout": "", "stdout_lines": []}
```
It would be great if you could provide an additional option for validation, like `validate_cwd` so the validation process would change to this directory before the test. In this case, setting `validate_cwd` to `/etc/nginx` should make the validation pass.
Of course, the workaround is to always use absolute paths, but I think this might be a handy option to have. | main | validating configuration files that include files with relative paths issue type feature idea component name template copy ansible version ansible os environment n a summary when you validate configuration files that include additional files with relative paths the validation fails bacause those included files are not found steps to reproduce for example i have an nginx configuration file called etc nginx includes d php conf that includes a file called fastcgi params which is inside the etc nginx path and nginx always executes from this path so everything works fine however when i try to validate the configuration i get this task fatal failed changed true exit status failed true msg failed to validate stderr nginx open root ansible tmp ansible tmp fastcgi params failed no such file or directory in etc nginx includes d php conf nnginx configuration file root ansible tmp ansible tmp source test failed n stdout stdout lines it would be great if you could provide an additional option for validation like validate cwd so the validation process would change to this directory before the test in this case setting validate cwd to etc nginx should make the validation pass of course the workaround is to always use absolute paths but i think this might be a handy option to have | 1 |
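For context on the request above, a sketch of how the `validate` option is typically wired up in this scenario. The template filename is an illustrative placeholder, and `validate_cwd` is only the option proposed in the report — it did not exist in the module at the time, so the final commented line is aspirational rather than working syntax.

```yaml
- name: Add nginx configuration
  become: yes
  template:
    src: php.conf.j2            # illustrative source template name
    dest: /etc/nginx/includes.d/php.conf
    # nginx parses the staged copy in Ansible's temp directory, so relative
    # includes such as fastcgi_params are not found there (the reported error).
    validate: nginx -t -c %s
    # Proposed in the report, not implemented at the time:
    # validate_cwd: /etc/nginx
```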
87,712 | 8,114,995,360 | IssuesEvent | 2018-08-15 04:03:16 | Azure/azure-iot-sdk-csharp | https://api.github.com/repos/Azure/azure-iot-sdk-csharp | opened | Enable the AmqpTransportHandler_RejectAmqpSettingsChange test | test bug | Tracking re-enabling the `AmqpTransportHandler_RejectAmqpSettingsChange` unit-test. | 1.0 | Enable the AmqpTransportHandler_RejectAmqpSettingsChange test - Tracking re-enabling the `AmqpTransportHandler_RejectAmqpSettingsChange` unit-test. | non_main | enable the amqptransporthandler rejectamqpsettingschange test tracking re enabling the amqptransporthandler rejectamqpsettingschange unit test | 0 |
228,819 | 17,481,310,577 | IssuesEvent | 2021-08-09 03:10:20 | kurodenimu/PersonalNewsSiteSupportTool | https://api.github.com/repos/kurodenimu/PersonalNewsSiteSupportTool | closed | Organize development documentation | documentation | - [x] Rule-type items → record them in the Wiki
- [x] Notes and caveats for each class → write documentation comments
In particular, describe methods and the like in detail for common (shared) classes | 1.0 | Organize development documentation - - [x] Rule-type items → record them in the Wiki
- [x] Notes and caveats for each class → write documentation comments
In particular, describe methods and the like in detail for common (shared) classes | non_main | organize development documentation rule type items → record them in the wiki notes and caveats for each class → write documentation comments in particular describe methods and the like in detail for common shared classes | 0 |
32,835 | 4,791,811,253 | IssuesEvent | 2016-10-31 13:52:27 | khartec/waltz | https://api.github.com/repos/khartec/waltz | closed | Physical Flow view - not returning bookmarks | fixed (test & close) | Agreed with Dave that we should return Specification bookmarks for now until it becomes clear that physical flows are likely to require documentation.
| 1.0 | Physical Flow view - not returning bookmarks - Agreed with Dave that we should return Specification bookmarks for now until it becomes clear that physical flows are likely to require documentation.
| non_main | physical flow view not returning bookmarks agreed with dave that we should return specification bookmarks for now until it becomes clear that physical flows are likely to require documentation | 0 |