Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5,311 | 26,810,680,730 | IssuesEvent | 2023-02-01 22:06:16 | MozillaFoundation/donate-wagtail | https://api.github.com/repos/MozillaFoundation/donate-wagtail | closed | Braintree Python SDK Update | engineering maintain | It looks like we'll need to update the Braintree Python SDK one last time. From Braintree:
> This is a reminder that a new version of the SDK was released on October 14, 2022 with new security enhancements. Please upgrade to the Braintree Python SDK v.4.17.1 as soon as possible. Starting February 28, 2023, older versions of the SDK may no longer be supported. Please disregard if you've already updated your SDK following our initial outreach on January 10 but take action now if you haven’t yet done so.
>To make this update:
> If you're currently using v4 of the Python SDK, you need to upgrade the SDK to v4.17.1 or higher. Your integration won't require any other changes.
If you're currently using v3 or lower of the Braintree Python SDK, please upgrade to Braintree Python SDK v3.59.1. You also should plan to upgrade to v4.17.1 or higher in the near future. See our [migration guide](https://developer.paypal.com/braintree/docs/reference/general/server-sdk-migration-guide/python) for details on necessary integration changes. | True | Braintree Python SDK Update - It looks like we'll need to update the Braintree Python SDK one last time. From Braintree:
> This is a reminder that a new version of the SDK was released on October 14, 2022 with new security enhancements. Please upgrade to the Braintree Python SDK v.4.17.1 as soon as possible. Starting February 28, 2023, older versions of the SDK may no longer be supported. Please disregard if you've already updated your SDK following our initial outreach on January 10 but take action now if you haven’t yet done so.
>To make this update:
> If you're currently using v4 of the Python SDK, you need to upgrade the SDK to v4.17.1 or higher. Your integration won't require any other changes.
If you're currently using v3 or lower of the Braintree Python SDK, please upgrade to Braintree Python SDK v3.59.1. You also should plan to upgrade to v4.17.1 or higher in the near future. See our [migration guide](https://developer.paypal.com/braintree/docs/reference/general/server-sdk-migration-guide/python) for details on necessary integration changes. | main | braintree python sdk update it looks like we ll need to update the braintree python sdk one last time from braintree this is a reminder that a new version of the sdk was released on october with new security enhancements please upgrade to the braintree python sdk v as soon as possible starting february older versions of the sdk may no longer be supported please disregard if you ve already updated your sdk following our initial outreach on january but take action now if you haven’t yet done so to make this update if you re currently using of the python sdk you need to upgrade the sdk to or higher your integration won t require any other changes if you re currently using or lower of the braintree python sdk please upgrade to braintree python sdk you also should plan to upgrade to or higher in the near future see our for details on necessary integration changes | 1 |
155 | 2,700,437,339 | IssuesEvent | 2015-04-04 04:59:48 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | Large rsc files - TGservers have unsustainably high bandwidth usage. | Maintainability - Hinders improvements Not a bug Sound Sprites | http://ss13.eu/phpbb/viewtopic.php?f=3&t=229
Currently, our codebase has a .rsc filesize of 27.2Mb. This is extremely high.
Bandwidth usage for TGservers is almost 2000Gb a month.
This is something we've been saying is an issue for a long long time. It's not only an issue for hosts, but one for clients having to download almost 30Mb on a throttled byond connection.
As I see it we have some key savings we could make:
*Remove unused icon_states! There are loaaaaads.
*Make a MAPPING define, which would set area icons - this way, these icons would not be compiled into release versions where they are not used.
*Making use of the map-merge tools obligatory
*Reduce the colour pallettes of icon files
*Remove identical repeated frames (copypasted) in animated icons. Instead use the repeat frame option (could be particularly helpful on the titlescreen image)
*Remove/replace/reduce sounds which add nothing to the experience. Note, I'm not saying remove anything like sound effects. I am saying stuff like say, if "put a banging donk on it" was a 6Mb Ogg, possibly reducing that to a more acceptable size....as a hypothetical example
*Marking our more-stable revisions so servers do not have to update as often (not sure how you'd judge which ones are more stable)
*Look into hosting rsc files separately. Unfortunately, byond does not support https, so I am unsure how this would work in a practical sense. However, adding -support- for it should be fairly trivial, and would pretty much eliminate most of the bandwidth use whilst not affecting stability (it reverts to the old behaviour of downloading from the server if the rsc file cannot be fetched from the alternative source) | True | Large rsc files - TGservers have unsustainably high bandwidth usage. - http://ss13.eu/phpbb/viewtopic.php?f=3&t=229
Currently, our codebase has a .rsc filesize of 27.2Mb. This is extremely high.
Bandwidth usage for TGservers is almost 2000Gb a month.
This is something we've been saying is an issue for a long long time. It's not only an issue for hosts, but one for clients having to download almost 30Mb on a throttled byond connection.
As I see it we have some key savings we could make:
*Remove unused icon_states! There are loaaaaads.
*Make a MAPPING define, which would set area icons - this way, these icons would not be compiled into release versions where they are not used.
*Making use of the map-merge tools obligatory
*Reduce the colour pallettes of icon files
*Remove identical repeated frames (copypasted) in animated icons. Instead use the repeat frame option (could be particularly helpful on the titlescreen image)
*Remove/replace/reduce sounds which add nothing to the experience. Note, I'm not saying remove anything like sound effects. I am saying stuff like say, if "put a banging donk on it" was a 6Mb Ogg, possibly reducing that to a more acceptable size....as a hypothetical example
*Marking our more-stable revisions so servers do not have to update as often (not sure how you'd judge which ones are more stable)
*Look into hosting rsc files separately. Unfortunately, byond does not support https, so I am unsure how this would work in a practical sense. However, adding -support- for it should be fairly trivial, and would pretty much eliminate most of the bandwidth use whilst not affecting stability (it reverts to the old behaviour of downloading from the server if the rsc file cannot be fetched from the alternative source) | main | large rsc files tgservers have unsustainably high bandwidth usage currently our codebase has a rsc filesize of this is extremely high bandwidth usage for tgservers is almost a month this is something we ve been saying is an issue for a long long time it s not only an issue for hosts but one for clients having to download almost on a throttled byond connection as i see it we have some key savings we could make remove unused icon states there are loaaaaads make a mapping define which would set area icons this way these icons would not be compiled into release versions where they are not used making use of the map merge tools obligatory reduce the colour pallettes of icon files remove identical repeated frames copypasted in animated icons instead use the repeat frame option could be particularly helpful on the titlescreen image remove replace reduce sounds which add nothing to the experience note i m not saying remove anything like sound effects i am saying stuff like say if put a banging donk on it was a ogg possibly reducing that to a more acceptable size as a hypothetical example marking our more stable revisions so servers do not have to update as often not sure how you d judge which ones are more stable look into hosting rsc files separately unfortunately byond does not support https so i am unsure how this would work in a practical sense however adding support for it should be fairly trivial and would pretty much eliminate most of the bandwidth use whilst not affecting stability it reverts to the old behaviour of downloading from the server if the rsc file cannot be fetched from the alternative source | 1 |
286,342 | 8,786,885,683 | IssuesEvent | 2018-12-20 16:54:58 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Infinite loading with IE v.11 | Priority: Blocker Timeline bug geonode_integration invalid ready | ### Description
Opening a new map or a pre-saved map with IE v.11 you have a infinite loading and the map not opens, this is probally due to a syntax error in configuration, see below:

### In case of Bug
- [x] Internet Explorer
- [ ] Chrome
- [ ] Firefox
- [ ] Safari
*Browser Version Affected*
- version 11
*Steps to reproduce*
- open a new map or a presaved map
*Expected Result*
- you can open maps
*Current Result*
- you can't open maps | 1.0 | Infinite loading with IE v.11 - ### Description
Opening a new map or a pre-saved map with IE v.11 you have a infinite loading and the map not opens, this is probally due to a syntax error in configuration, see below:

### In case of Bug
- [x] Internet Explorer
- [ ] Chrome
- [ ] Firefox
- [ ] Safari
*Browser Version Affected*
- version 11
*Steps to reproduce*
- open a new map or a presaved map
*Expected Result*
- you can open maps
*Current Result*
- you can't open maps | non_main | infinite loading with ie v description opening a new map or a pre saved map with ie v you have a infinite loading and the map not opens this is probally due to a syntax error in configuration see below in case of bug internet explorer chrome firefox safari browser version affected version steps to reproduce open a new map or a presaved map expected result you can open maps current result you can t open maps | 0 |
233,731 | 25,765,820,930 | IssuesEvent | 2022-12-09 01:40:57 | aero-surge/Word_of_the_day_messager | https://api.github.com/repos/aero-surge/Word_of_the_day_messager | opened | CVE-2022-23491 (Medium) detected in certifi-2021.5.30-py2.py3-none-any.whl | security vulnerability | ## CVE-2022-23491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>certifi-2021.5.30-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- PyDictionary-2.0.1-py3-none-any.whl (Root Library)
- requests-2.26.0-py2.py3-none-any.whl
- :x: **certifi-2021.5.30-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi 2022.12.07 removes root certificates from "TrustCor" from the root store. These are in the process of being removed from Mozilla's trust store. TrustCor's root certificates are being removed pursuant to an investigation prompted by media reporting that TrustCor's ownership also operated a business that produced spyware. Conclusions of Mozilla's investigation can be found in the linked google group discussion.
<p>Publish Date: 2022-12-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23491>CVE-2022-23491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23491">https://www.cve.org/CVERecord?id=CVE-2022-23491</a></p>
<p>Release Date: 2022-12-07</p>
<p>Fix Resolution: certifi - 2022.12.07</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-23491 (Medium) detected in certifi-2021.5.30-py2.py3-none-any.whl - ## CVE-2022-23491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>certifi-2021.5.30-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- PyDictionary-2.0.1-py3-none-any.whl (Root Library)
- requests-2.26.0-py2.py3-none-any.whl
- :x: **certifi-2021.5.30-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi 2022.12.07 removes root certificates from "TrustCor" from the root store. These are in the process of being removed from Mozilla's trust store. TrustCor's root certificates are being removed pursuant to an investigation prompted by media reporting that TrustCor's ownership also operated a business that produced spyware. Conclusions of Mozilla's investigation can be found in the linked google group discussion.
<p>Publish Date: 2022-12-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23491>CVE-2022-23491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23491">https://www.cve.org/CVERecord?id=CVE-2022-23491</a></p>
<p>Release Date: 2022-12-07</p>
<p>Fix Resolution: certifi - 2022.12.07</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in certifi none any whl cve medium severity vulnerability vulnerable library certifi none any whl python package for providing mozilla s ca bundle library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy pydictionary none any whl root library requests none any whl x certifi none any whl vulnerable library found in base branch main vulnerability details certifi is a curated collection of root certificates for validating the trustworthiness of ssl certificates while verifying the identity of tls hosts certifi removes root certificates from trustcor from the root store these are in the process of being removed from mozilla s trust store trustcor s root certificates are being removed pursuant to an investigation prompted by media reporting that trustcor s ownership also operated a business that produced spyware conclusions of mozilla s investigation can be found in the linked google group discussion publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope changed impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution certifi step up your open source security game with mend | 0 |
5,725 | 30,270,983,782 | IssuesEvent | 2023-07-07 15:19:30 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | Discord invite link in `README` not working | status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | Hi 👋 Love the carbon system!
I wanted to join the Discord to see what's up and whether I could get some informal help with a TypeScript issue, but the one in the readme is not working for me. Maybe the link expired, I don't know?
I only see that the invitation cannot be accepted.
<img width="718" alt="Screenshot 2023-06-26 at 14 28 33" src="https://github.com/carbon-design-system/carbon/assets/2946344/df55d9c2-bb88-4e93-9b76-e0bf8a4ace09">
If this is a non-issue, of course feel free to close this without answering - I won't be offended 😊 | True | Discord invite link in `README` not working - Hi 👋 Love the carbon system!
I wanted to join the Discord to see what's up and whether I could get some informal help with a TypeScript issue, but the one in the readme is not working for me. Maybe the link expired, I don't know?
I only see that the invitation cannot be accepted.
<img width="718" alt="Screenshot 2023-06-26 at 14 28 33" src="https://github.com/carbon-design-system/carbon/assets/2946344/df55d9c2-bb88-4e93-9b76-e0bf8a4ace09">
If this is a non-issue, of course feel free to close this without answering - I won't be offended 😊 | main | discord invite link in readme not working hi 👋 love the carbon system i wanted to join the discord to see what s up and whether i could get some informal help with a typescript issue but the one in the readme is not working for me maybe the link expired i don t know i only see that the invitation cannot be accepted img width alt screenshot at src if this is a non issue of course feel free to close this without answering i won t be offended 😊 | 1 |
299,812 | 9,205,876,503 | IssuesEvent | 2019-03-08 11:57:50 | qissue-bot/QGIS | https://api.github.com/repos/qissue-bot/QGIS | closed | QGIS slows down PostgreSQL | Category: Data Provider Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report | ---
Author Name: **Paolo Cavallini** (Paolo Cavallini)
Original Redmine Issue: 1174, https://issues.qgis.org/issues/1174
Original Assignee: Jürgen Fischer
---
When qgis (0.10 on deb) queries the db (to load geometries) often postgres takes up a very large amount of CPU, for *many* minutes. I once had even to kill postgres (!!). Does this happens to
others? We never noticed it before. Tested on several db.
| 1.0 | QGIS slows down PostgreSQL - ---
Author Name: **Paolo Cavallini** (Paolo Cavallini)
Original Redmine Issue: 1174, https://issues.qgis.org/issues/1174
Original Assignee: Jürgen Fischer
---
When qgis (0.10 on deb) queries the db (to load geometries) often postgres takes up a very large amount of CPU, for *many* minutes. I once had even to kill postgres (!!). Does this happens to
others? We never noticed it before. Tested on several db.
| non_main | qgis slows down postgresql author name paolo cavallini paolo cavallini original redmine issue original assignee jürgen fischer when qgis on deb queries the db to load geometries often postgres takes up a very large amount of cpu for many minutes i once had even to kill postgres does this happens to others we never noticed it before tested on several db | 0 |
4,074 | 19,249,909,131 | IssuesEvent | 2021-12-09 03:05:47 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Apple M1: thread 'main' panicked at 'attempt to divide by zero' | blocked/more-info-needed area/local/start-api stage/needs-investigation area/cdk maintainer/need-followup platform/mac/arm | ### Description:
I experience this error: `thread 'main' panicked at 'attempt to divide by zero'` when I try to access an endpoint on the function.
### Steps to reproduce:
Run `sam-beta-cdk local start-api -p <PORT> --skip-pull-image --warm-containers EAGER --debug`
### Observed result:
```
2021-08-20 09:00:27,794 | Found one Lambda function with name '<FUNCTION_NAME>'
2021-08-20 09:00:27,794 | Invoking Container created from <FUNCTION_NAME>
2021-08-20 09:00:27,794 | Environment variables overrides data is standard format
2021-08-20 09:00:27,814 | Reuse the created warm container for Lambda function '<FUNCTION_NAME>'
2021-08-20 09:00:27,821 | Lambda function '<FUNCTION_NAME>' is already running
2021-08-20 09:00:27,822 | Starting a timer for 10 seconds for function '<FUNCTION_NAME>'
START RequestId: <REQUEST_ID> Version: $LATEST
thread 'meter_probe' panicked at 'Unable to read /proc for agent process: InternalError(bug at /local/p4clients/pkgbuild-QP4Aq/workspace/build/AWSLogsLambdaInsights/AWSLogsLambdaInsights-1.0.115.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cargo-home/registry/src/-c477fb05a7ac3d62/procfs-0.7.9/src/process/stat.rs:300 (please report this procfs bug)
Internal Unwrap Error: Internal error: bug at /local/p4clients/pkgbuild-QP4Aq/workspace/build/AWSLogsLambdaInsights/AWSLogsLambdaInsights-1.0.115.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cargo-home/registry/src/-c477fb05a7ac3d62/procfs-0.7.9/src/lib.rs:285 (please report this procfs bug)
Internal Unwrap Error: NoneError)', src/inputs/memory.rs:59:39
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'main' panicked at 'attempt to divide by zero', src/inputs/memory.rs:44:32
```
### Expected result:
`Return appropriate result from the endpoint`
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Big Sur v11.5.1
2. `sam --version`: SAM CLI, version 1.22.0.dev202107140310
3. AWS region: eu-central-1
`Add --debug flag to command you are running`
| True | Apple M1: thread 'main' panicked at 'attempt to divide by zero' - ### Description:
I experience this error: `thread 'main' panicked at 'attempt to divide by zero'` when I try to access an endpoint on the function.
### Steps to reproduce:
Run `sam-beta-cdk local start-api -p <PORT> --skip-pull-image --warm-containers EAGER --debug`
### Observed result:
```
2021-08-20 09:00:27,794 | Found one Lambda function with name '<FUNCTION_NAME>'
2021-08-20 09:00:27,794 | Invoking Container created from <FUNCTION_NAME>
2021-08-20 09:00:27,794 | Environment variables overrides data is standard format
2021-08-20 09:00:27,814 | Reuse the created warm container for Lambda function '<FUNCTION_NAME>'
2021-08-20 09:00:27,821 | Lambda function '<FUNCTION_NAME>' is already running
2021-08-20 09:00:27,822 | Starting a timer for 10 seconds for function '<FUNCTION_NAME>'
START RequestId: <REQUEST_ID> Version: $LATEST
thread 'meter_probe' panicked at 'Unable to read /proc for agent process: InternalError(bug at /local/p4clients/pkgbuild-QP4Aq/workspace/build/AWSLogsLambdaInsights/AWSLogsLambdaInsights-1.0.115.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cargo-home/registry/src/-c477fb05a7ac3d62/procfs-0.7.9/src/process/stat.rs:300 (please report this procfs bug)
Internal Unwrap Error: Internal error: bug at /local/p4clients/pkgbuild-QP4Aq/workspace/build/AWSLogsLambdaInsights/AWSLogsLambdaInsights-1.0.115.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cargo-home/registry/src/-c477fb05a7ac3d62/procfs-0.7.9/src/lib.rs:285 (please report this procfs bug)
Internal Unwrap Error: NoneError)', src/inputs/memory.rs:59:39
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'main' panicked at 'attempt to divide by zero', src/inputs/memory.rs:44:32
```
### Expected result:
`Return appropriate result from the endpoint`
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Big Sur v11.5.1
2. `sam --version`: SAM CLI, version 1.22.0.dev202107140310
3. AWS region: eu-central-1
`Add --debug flag to command you are running`
| main | apple thread main panicked at attempt to divide by zero description i experience this error thread main panicked at attempt to divide by zero when i try to access an endpoint on the function steps to reproduce run sam beta cdk local start api p skip pull image warm containers eager debug observed result found one lambda function with name invoking container created from environment variables overrides data is standard format reuse the created warm container for lambda function lambda function is already running starting a timer for seconds for function start requestid version latest thread meter probe panicked at unable to read proc for agent process internalerror bug at local pkgbuild workspace build awslogslambdainsights awslogslambdainsights dev std pthread build private cargo home registry src procfs src process stat rs please report this procfs bug internal unwrap error internal error bug at local pkgbuild workspace build awslogslambdainsights awslogslambdainsights dev std pthread build private cargo home registry src procfs src lib rs please report this procfs bug internal unwrap error noneerror src inputs memory rs note run with rust backtrace environment variable to display a backtrace thread main panicked at attempt to divide by zero src inputs memory rs expected result return appropriate result from the endpoint additional environment details ex windows mac amazon linux etc os big sur sam version sam cli version aws region eu central add debug flag to command you are running | 1 |
48,873 | 5,988,917,927 | IssuesEvent | 2017-06-02 06:56:13 | mautic/mautic | https://api.github.com/repos/mautic/mautic | closed | Segment filters doesn't remove contacts which does not fit the filters anymore | Bug Ready To Test | What type of report is this:
| Q | A
| ---| ---
| Bug report? | X
| Feature request? |
| Enhancement? |
## Description:
The more details the better...
## If a bug:
| Q | A
| --- | ---
| Mautic version | 2.8.1
| PHP version | 7
### Steps to reproduce:
1. I create a segment with a condition that corresponds to 3 contacts
2. Cron creates a segment of three contacts
3. Change the segment condition to suit other 4 contacts
4. Cron adds 4 other contacts, but does not delete the original
### Log errors:
_Please check for related errors in the latest log file in [mautic root]/app/log/ and/or the web server's logs and post them here. Be sure to remove sensitive information if applicable._
| 1.0 | Segment filters doesn't remove contacts which does not fit the filters anymore - What type of report is this:
| Q | A
| ---| ---
| Bug report? | X
| Feature request? |
| Enhancement? |
## Description:
The more details the better...
## If a bug:
| Q | A
| --- | ---
| Mautic version | 2.8.1
| PHP version | 7
### Steps to reproduce:
1. I create a segment with a condition that corresponds to 3 contacts
2. Cron creates a segment of three contacts
3. Change the segment condition to suit other 4 contacts
4. Cron adds 4 other contacts, but does not delete the original
### Log errors:
_Please check for related errors in the latest log file in [mautic root]/app/log/ and/or the web server's logs and post them here. Be sure to remove sensitive information if applicable._
| non_main | segment filters doesn t remove contacts which does not fit the filters anymore what type of report is this q a bug report x feature request enhancement description the more details the better if a bug q a mautic version php version steps to reproduce i create a segment with a condition that corresponds to contacts cron creates a segment of three contacts change the segment condition to suit other contacts cron adds other contacts but does not delete the original log errors please check for related errors in the latest log file in app log and or the web server s logs and post them here be sure to remove sensitive information if applicable | 0 |
219,839 | 24,539,336,352 | IssuesEvent | 2022-10-12 01:12:36 | pactflow/example-consumer-cypress | https://api.github.com/repos/pactflow/example-consumer-cypress | opened | CVE-2021-23382 (High) detected in postcss-7.0.21.tgz, postcss-7.0.27.tgz | security vulnerability | ## CVE-2021-23382 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.21.tgz</b>, <b>postcss-7.0.27.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.0.tgz (Root Library)
- resolve-url-loader-3.1.1.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.27.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.0.tgz (Root Library)
- postcss-safe-parser-4.0.1.tgz
- :x: **postcss-7.0.27.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/pactflow/example-consumer-cypress/commit/340e24f4064182631ba657b42580dbb0476e3b94">340e24f4064182631ba657b42580dbb0476e3b94</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution (postcss): 7.0.36</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p><p>Fix Resolution (postcss): 7.0.36</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | CVE-2021-23382 (High) detected in postcss-7.0.21.tgz, postcss-7.0.27.tgz - ## CVE-2021-23382 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.21.tgz</b>, <b>postcss-7.0.27.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.0.tgz (Root Library)
- resolve-url-loader-3.1.1.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.27.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.0.tgz (Root Library)
- postcss-safe-parser-4.0.1.tgz
- :x: **postcss-7.0.27.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/pactflow/example-consumer-cypress/commit/340e24f4064182631ba657b42580dbb0476e3b94">340e24f4064182631ba657b42580dbb0476e3b94</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution (postcss): 7.0.36</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p><p>Fix Resolution (postcss): 7.0.36</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| non_main | cve high detected in postcss tgz postcss tgz cve high severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules resolve url loader node modules postcss package json dependency hierarchy react scripts tgz root library resolve url loader tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules postcss package json dependency hierarchy react scripts tgz root library postcss safe parser tgz x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss direct dependency fix resolution react scripts fix resolution postcss direct dependency fix resolution react scripts check this box to open an automated fix pr | 0 |
375 | 3,385,752,053 | IssuesEvent | 2015-11-27 13:30:05 | Homebrew/homebrew | https://api.github.com/repos/Homebrew/homebrew | closed | Travis CI issues tracking | bug features help wanted maintainer feedback travis usability | I'd like us to be able to use Travis CI for all non-bottle builds in future (and maybe one day all bottle builds too). In the short-term this means fixing Travis so the only failures are legitimate timeouts and the only missing features are bottle uploads.
@Homebrew/owners please add any weird Travis issues you see for **any new builds started after now** as comments in this issue and I'll try and triage them into specific issues and then fix them.
Once these are all fixed I'll switch our `master` build over to Travis CI and then move Jenkins so it requires a `@BrewTestBot test this please` to actually run jobs on Jenkins. This means Jenkins will be far less loaded and all builds that don't rely on a bottle being uploaded (e.g. audit failures, pulls of formulae without bottles) can be done without getting Jenkins involved. It'll also mean, due to Travis's parallel builds, that we can get much quicker feedback to users.
Thanks! | True | Travis CI issues tracking - I'd like us to be able to use Travis CI for all non-bottle builds in future (and maybe one day all bottle builds too). In the short-term this means fixing Travis so the only failures are legitimate timeouts and the only missing features are bottle uploads.
@Homebrew/owners please add any weird Travis issues you see for **any new builds started after now** as comments in this issue and I'll try and triage them into specific issues and then fix them.
Once these are all fixed I'll switch our `master` build over to Travis CI and then move Jenkins so it requires a `@BrewTestBot test this please` to actually run jobs on Jenkins. This means Jenkins will be far less loaded and all builds that don't rely on a bottle being uploaded (e.g. audit failures, pulls of formulae without bottles) can be done without getting Jenkins involved. It'll also mean, due to Travis's parallel builds, that we can get much quicker feedback to users.
Thanks! | main | travis ci issues tracking i d like us to be able to use travis ci for all non bottle builds in future and maybe one day all bottle builds too in the short term this means fixing travis so the only failures are legitimate timeouts and the only missing features are bottle uploads homebrew owners please add any weird travis issues you see for any new builds started after now as comments in this issue and i ll try and triage them into specific issues and then fix them once these are all fixed i ll switch our master build over to travis ci and then move jenkins so it requires a brewtestbot test this please to actually run jobs on jenkins this means jenkins will be far less loaded and all builds that don t rely on a bottle being uploaded e g audit failures pulls of formulae without bottles can be done without getting jenkins involved it ll also mean due to travis s parallel builds that we can get much quicker feedback to users thanks | 1 |
63,156 | 17,397,557,302 | IssuesEvent | 2021-08-02 15:09:29 | snowplow/snowplow-javascript-tracker | https://api.github.com/repos/snowplow/snowplow-javascript-tracker | closed | Check stateStorageStrategy before testing for localStorage | type:defect | **Describe the bug**
When using `stateStorageStrategy: 'none'`, the below line means we check for localStorage before checking if we are allowed to use local storage, we should check `useLocalStorage` before calling `localStorageAccessible()`. Seems pointless to check it if we're not permitted by configuration to use it.
https://github.com/snowplow/snowplow-javascript-tracker/blob/268f6cef26edb87b51fda39126fd7ffbc41dff5b/libraries/browser-tracker-core/src/tracker/out_queue.ts#L109
| 1.0 | Check stateStorageStrategy before testing for localStorage - **Describe the bug**
When using `stateStorageStrategy: 'none'`, the below line means we check for localStorage before checking if we are allowed to use local storage, we should check `useLocalStorage` before calling `localStorageAccessible()`. Seems pointless to check it if we're not permitted by configuration to use it.
https://github.com/snowplow/snowplow-javascript-tracker/blob/268f6cef26edb87b51fda39126fd7ffbc41dff5b/libraries/browser-tracker-core/src/tracker/out_queue.ts#L109
| non_main | check statestoragestrategy before testing for localstorage describe the bug when using statestoragestrategy none the below line means we check for localstorage before checking if we are allowed to use local storage we should check uselocalstorage before calling localstorageaccessible seems pointless to check it if we re not permitted by configuration to use it | 0 |
435,378 | 30,496,977,139 | IssuesEvent | 2023-07-18 11:32:54 | josura/c2c-sepia | https://api.github.com/repos/josura/c2c-sepia | opened | Validation of the model | documentation | Since the model simulates the behavior of the cell and the perturbation of the cell itself, there needs to be some kind of validation (maybe in another branch) to describe and compare the simulation itself to real data (maybe differential expression at two different times). Also to compare the use of a drug and see if the drug itself changes the convergence of the system | 1.0 | Validation of the model - Since the model simulates the behavior of the cell and the perturbation of the cell itself, there needs to be some kind of validation (maybe in another branch) to describe and compare the simulation itself to real data (maybe differential expression at two different times). Also to compare the use of a drug and see if the drug itself changes the convergence of the system | non_main | validation of the model since the model simulates the behavior of the cell and the perturbation of the cell itself there needs to be some kind of validation maybe in another branch to describe and compare the simulation itself to real data maybe differential expression at two different times also to compare the use of a drug and see if the drug itself changes the convergence of the system | 0 |
3,612 | 14,611,240,183 | IssuesEvent | 2020-12-22 02:43:51 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | gfortran cask duplicates the gcc formula in homebrew-core | awaiting maintainer feedback | I discovered today that there is a `gfortran` cask, which is distributing the gfortran installers that I build and make available. It's been happening for 3 years and I never knew 😄
I don't think that it fits the homebrew-cask rules, though:
- despite their name, these installers are a full GCC install, not just gfortran
- gfortran is fully part of GCC and cannot be separated, anyway
- this is purely command-line software
- that is already available as the `gcc` formula in homebrew-core
I could understand if the two distributions were different, but as I'm basically maintaining both, I don't think it should be kept that way. I suggest the cask be removed and users redirected to the gcc formula. | True | gfortran cask duplicates the gcc formula in homebrew-core - I discovered today that there is a `gfortran` cask, which is distributing the gfortran installers that I build and make available. It's been happening for 3 years and I never knew 😄
I don't think that it fits the homebrew-cask rules, though:
- despite their name, these installers are a full GCC install, not just gfortran
- gfortran is fully part of GCC and cannot be separated, anyway
- this is purely command-line software
- that is already available as the `gcc` formula in homebrew-core
I could understand if the two distributions were different, but as I'm basically maintaining both, I don't think it should be kept that way. I suggest the cask be removed and users redirected to the gcc formula. | main | gfortran cask duplicates the gcc formula in homebrew core i discovered today that there is a gfortran cask which is distributing the gfortran installers that i build and make available it s been happening for years and i never knew 😄 i don t think that it fits the homebrew cask rules though despite their name these installers are a full gcc install not just gfortran gfortran is fully part of gcc and cannot be separated anyway this is purely command line software that is already available as the gcc formula in homebrew core i could understand if the two distributions were different but as i m basically maintaining both i don t think it should be kept that way i suggest the cask be removed and users redirected to the gcc formula | 1 |
75,999 | 14,546,578,441 | IssuesEvent | 2020-12-15 21:27:52 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | superpmi: problem using arm collections | area-CodeGen-coreclr | I generated asm diffs on Windows x86 using Linux arm collection and clrjit_unix_arm_x86.dll cross-compiler JIT:
```
py -3 C:\gh\runtime\src\coreclr\scripts\superpmi.py asmdiffs -arch x86 -target_arch arm -filter libraries -jit_name clrjit_unix_arm_x86.dll --gcinfo -target_os Linux
```
This fails to replay every MC due to what appears to be an issue with sign extension of pointer types.
The JIT calls `getMethodClass()` with, in my example, 0xe8b8303c (from some previous SPMI call).
This calls the SuperPMI function:
```
CORINFO_CLASS_HANDLE MethodContext::repGetMethodClass(CORINFO_METHOD_HANDLE methodHandle)
```
which calls:
```
int index = GetMethodClass->GetIndex((DWORDLONG)methodHandle);
```
It casts a `CORINFO_METHOD_HANDLE`, which is a (32-bit) pointer, to a `DWORDLONG`, which is `unsigned __int64`, and in doing so sign extends it to 0xffffffffe8b8303c. It looks up in the GetMethodClass map, which includes a non-sign-extended value, and fails to find it.
Is there a difference in behavior between the C++ compiler behavior on Linux and Windows w.r.t. casting 32-bit pointer to 64-bit unsigned int? Does clang not sign extend? I would expect if it does sign extend, we would see the sign extended values stored in the method context.
We might need to change SuperPMI to specifically cast pointers (and thus handles) to same-sized unsigned ints before extending to larger unsigned ints.
category:eng-sys
theme:super-pmi
skill-level:intermediate
cost:medium
| 1.0 | superpmi: problem using arm collections - I generated asm diffs on Windows x86 using Linux arm collection and clrjit_unix_arm_x86.dll cross-compiler JIT:
```
py -3 C:\gh\runtime\src\coreclr\scripts\superpmi.py asmdiffs -arch x86 -target_arch arm -filter libraries -jit_name clrjit_unix_arm_x86.dll --gcinfo -target_os Linux
```
This fails to replay every MC due to what appears to be an issue with sign extension of pointer types.
The JIT calls `getMethodClass()` with, in my example, 0xe8b8303c (from some previous SPMI call).
This calls the SuperPMI function:
```
CORINFO_CLASS_HANDLE MethodContext::repGetMethodClass(CORINFO_METHOD_HANDLE methodHandle)
```
which calls:
```
int index = GetMethodClass->GetIndex((DWORDLONG)methodHandle);
```
It casts a `CORINFO_METHOD_HANDLE`, which is a (32-bit) pointer, to a `DWORDLONG`, which is `unsigned __int64`, and in doing so sign extends it to 0xffffffffe8b8303c. It looks up in the GetMethodClass map, which includes a non-sign-extended value, and fails to find it.
Is there a difference in behavior between the C++ compiler behavior on Linux and Windows w.r.t. casting 32-bit pointer to 64-bit unsigned int? Does clang not sign extend? I would expect if it does sign extend, we would see the sign extended values stored in the method context.
We might need to change SuperPMI to specifically cast pointers (and thus handles) to same-sized unsigned ints before extending to larger unsigned ints.
category:eng-sys
theme:super-pmi
skill-level:intermediate
cost:medium
| non_main | superpmi problem using arm collections i generated asm diffs on windows using linux arm collection and clrjit unix arm dll cross compiler jit py c gh runtime src coreclr scripts superpmi py asmdiffs arch target arch arm filter libraries jit name clrjit unix arm dll gcinfo target os linux this fails to replay every mc due to what appears to be an issue with sign extension of pointer types the jit calls getmethodclass with in my example from some previous spmi call this calls the superpmi function corinfo class handle methodcontext repgetmethodclass corinfo method handle methodhandle which calls int index getmethodclass getindex dwordlong methodhandle it casts a corinfo method handle which is a bit pointer to a dwordlong which is unsigned and in doing so sign extends it to it looks up in the getmethodclass map which includes a non sign extended value and fails to find it is there a difference in behavior between the c compiler behavior on linux and windows w r t casting bit pointer to bit unsigned int does clang not sign extend i would expect if it does sign extend we would see the sign extended values stored in the method context we might need to change superpmi to specifically cast pointers and thus handles to same sized unsigned ints before extending to larger unsigned ints category eng sys theme super pmi skill level intermediate cost medium | 0 |
4,393 | 22,536,669,960 | IssuesEvent | 2022-06-25 10:22:24 | wkentaro/gdown | https://api.github.com/repos/wkentaro/gdown | closed | --folder raises FileNotFoundError if files on Google Drive have slashes in their names | bug status: wip-by-maintainer | My folder has a file named `January/February 2020.csv` (and several others with the same naming pattern). Running `gdown --folder <url> -O /tmp/my-local-dir/` results in `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/my-local-dir/January'` (treating `January/` as a local filesystem subdirectory).
I have fixed this locally by patching `file_name = file.name.replace(osp.sep, '_')` here:
https://github.com/wkentaro/gdown/blob/main/gdown/download_folder.py#L239-L250
But I am not sure if this is the only and the right place to fix this, so decided to first create an issue and not a PR. I may create a PR if you tell me which places should this affect too. | True | --folder raises FileNotFoundError if files on Google Drive have slashes in their names - My folder has a file named `January/February 2020.csv` (and several others with the same naming pattern). Running `gdown --folder <url> -O /tmp/my-local-dir/` results in `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/my-local-dir/January'` (treating `January/` as a local filesystem subdirectory).
I have fixed this locally by patching `file_name = file.name.replace(osp.sep, '_')` here:
https://github.com/wkentaro/gdown/blob/main/gdown/download_folder.py#L239-L250
But I am not sure if this is the only and the right place to fix this, so decided to first create an issue and not a PR. I may create a PR if you tell me which places should this affect too. | main | folder raises filenotfounderror if files on google drive have slashes in their names my folder has a file named january february csv and several others with the same naming pattern running gdown folder o tmp my local dir results in filenotfounderror no such file or directory tmp my local dir january treating january as a local filesystem subdirectory i have fixed this locally by patching file name file name replace osp sep here but i am not sure if this is the only and the right place to fix this so decided to first create an issue and not a pr i may create a pr if you tell me which places should this affect too | 1 |
1,649 | 6,572,678,727 | IssuesEvent | 2017-09-11 04:20:36 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | IPA: can't set password for ipa_user module | affects_2.3 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ipa
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux Mint 18
##### SUMMARY
<!--- Explain the problem briefly -->
Can't add password for ipa user through ipa_user module - password is always empty in IPA
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Run ipa_user module with all required fields and password field filled.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Ensure user is present
ipa_user:
name: "{{ item.0.login }}"
state: present
givenname: "{{ item.1.first_name }}"
sn: "{{ item.1.last_name }}"
mail: "{{ item.1.mail }}"
password: 123321
telephonenumber: "{{ item.1.telnum }}"
title: "{{ item.1.jobtitle }}"
ipa_host: "{{ global_host }}"
ipa_user: "{{ global_user }}"
ipa_pass: "{{ global_pass }}"
validate_certs: no
with_subelements:
- "{{ users_to_add }}"
- personal_data
ignore_errors: true
users_to_add:
- username: Harley Quinn
login: 90987264
password: "adasdk212masd"
cluster_zone: Default
group: mininform
group_desc: "Some random data for description"
personal_data:
- first_name: Harley
last_name: Quinn
mail: harley@gmail.com
telnum: +79788880132
jobtitle: Minister
- username: Vasya Pupkin
login: 77777777
password: "adasdk212masd"
cluster_zone: Default
group: mininform
group_desc: "Some random data for description"
personal_data:
- first_name: Vasya
last_name: Pupkin
mail: vasya@gmail.com
telnum: +7970000805
jobtitle: Vice minister
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
User creation with password expected.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
User created has no password set. And module does not change user credentials (password) if you change it in playbook.
<!--- Paste verbatim command output between quotes below -->
```
ok: [ipa111.krtech.loc] => (item=({u'username': u'Harley Quinn', u'group': u'mininform', u'cluster_zone': u'Default', u'group_desc': u'Some rando
m data for description', u'login': 90987264, u'password': u'adasdk212masd'}, {u'mail': u'harley@gmail.com', u'first_name': u'Harley', u'last_name
': u'Quinn', u'jobtitle': u'Minister', u'telnum': 79788880132}))
ok: [ipa111.krtech.loc] => (item=({u'username': u'Vasya Pupkin', u'group': u'mininform', u'cluster_zone': u'Default', u'group_desc': u'Some rando
m data for description', u'login': 77777777, u'password': u'adasdk212masd'}, {u'mail': u'vasya@gmail.com', u'first_name': u'Vasya', u'last_name':
u'Pupkin', u'jobtitle': u'Vice minister', u'telnum': 7970000805}))
```
| True | IPA: can't set password for ipa_user module - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ipa
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux Mint 18
##### SUMMARY
<!--- Explain the problem briefly -->
Can't add password for ipa user through ipa_user module - password is always empty in IPA
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Run ipa_user module with all required fields and password field filled.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Ensure user is present
ipa_user:
name: "{{ item.0.login }}"
state: present
givenname: "{{ item.1.first_name }}"
sn: "{{ item.1.last_name }}"
mail: "{{ item.1.mail }}"
password: 123321
telephonenumber: "{{ item.1.telnum }}"
title: "{{ item.1.jobtitle }}"
ipa_host: "{{ global_host }}"
ipa_user: "{{ global_user }}"
ipa_pass: "{{ global_pass }}"
validate_certs: no
with_subelements:
- "{{ users_to_add }}"
- personal_data
ignore_errors: true
users_to_add:
- username: Harley Quinn
login: 90987264
password: "adasdk212masd"
cluster_zone: Default
group: mininform
group_desc: "Some random data for description"
personal_data:
- first_name: Harley
last_name: Quinn
mail: harley@gmail.com
telnum: +79788880132
jobtitle: Minister
- username: Vasya Pupkin
login: 77777777
password: "adasdk212masd"
cluster_zone: Default
group: mininform
group_desc: "Some random data for description"
personal_data:
- first_name: Vasya
last_name: Pupkin
mail: vasya@gmail.com
telnum: +7970000805
jobtitle: Vice minister
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
User creation with password expected.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
User created has no password set. And module does not change user credentials (password) if you change it in playbook.
<!--- Paste verbatim command output between quotes below -->
```
ok: [ipa111.krtech.loc] => (item=({u'username': u'Harley Quinn', u'group': u'mininform', u'cluster_zone': u'Default', u'group_desc': u'Some rando
m data for description', u'login': 90987264, u'password': u'adasdk212masd'}, {u'mail': u'harley@gmail.com', u'first_name': u'Harley', u'last_name
': u'Quinn', u'jobtitle': u'Minister', u'telnum': 79788880132}))
ok: [ipa111.krtech.loc] => (item=({u'username': u'Vasya Pupkin', u'group': u'mininform', u'cluster_zone': u'Default', u'group_desc': u'Some rando
m data for description', u'login': 77777777, u'password': u'adasdk212masd'}, {u'mail': u'vasya@gmail.com', u'first_name': u'Vasya', u'last_name':
u'Pupkin', u'jobtitle': u'Vice minister', u'telnum': 7970000805}))
```
| main | ipa can t set password for ipa user module issue type bug report component name ipa ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux mint summary can t add password for ipa user through ipa user module password is always empty in ipa steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run ipa user module with all required fields and password field filled name ensure user is present ipa user name item login state present givenname item first name sn item last name mail item mail password telephonenumber item telnum title item jobtitle ipa host global host ipa user global user ipa pass global pass validate certs no with subelements users to add personal data ignore errors true users to add username harley quinn login password cluster zone default group mininform group desc some random data for description personal data first name harley last name quinn mail harley gmail com telnum jobtitle minister username vasya pupkin login password cluster zone default group mininform group desc some random data for description personal data first name vasya last name pupkin mail vasya gmail com telnum jobtitle vice minister expected results user creation with password expected actual results user created has no password set and module does not change user credentials password if you change it in playbook ok item u username u harley quinn u group u mininform u cluster zone u default u group desc u some rando m data for description u login u password u u mail u harley gmail com u first name u harley u last name u quinn u jobtitle u minister u telnum ok item u username u vasya pupkin u group u mininform u cluster zone u default u group desc u some rando m data for description u login u password u u mail u vasya gmail com u first name u vasya u last name u pupkin u jobtitle u vice minister u telnum | 1 |
144,124 | 11,595,731,180 | IssuesEvent | 2020-02-24 17:33:05 | terraform-providers/terraform-provider-google | https://api.github.com/repos/terraform-providers/terraform-provider-google | opened | Fix TestAccAppEngineServiceSplitTraffic_appEngineServiceSplitTrafficExample test | test failure | Missing a mutex I think. | 1.0 | Fix TestAccAppEngineServiceSplitTraffic_appEngineServiceSplitTrafficExample test - Missing a mutex I think. | non_main | fix testaccappengineservicesplittraffic appengineservicesplittrafficexample test missing a mutex i think | 0 |
1,010 | 4,787,260,955 | IssuesEvent | 2016-10-29 22:08:53 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Return codes for command-module whould be configurable | affects_2.2 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
command-module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0 (detached HEAD bce9bfce51) last updated 2016/10/24 14:13:42 (GMT +000)
```
##### SUMMARY
Allowed return codes to be successful should be configurable in commands module
e.g. the command `grep` has three possible return codes: 0, 1, 2
but on only 2 signals an error.
So it should be possible, to configure 0 AND 1 as "good" return codes. | True | Return codes for command-module whould be configurable - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
command-module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0 (detached HEAD bce9bfce51) last updated 2016/10/24 14:13:42 (GMT +000)
```
##### SUMMARY
Allowed return codes to be successful should be configurable in commands module
e.g. the command `grep` has three possible return codes: 0, 1, 2
but on only 2 signals an error.
So it should be possible, to configure 0 AND 1 as "good" return codes. | main | return codes for command module whould be configurable issue type feature idea component name command module ansible version ansible detached head last updated gmt summary allowed return codes to be successful should be configurable in commands module e g the command grep has three possible return codes but on only signals an error so it should be possible to configure and as good return codes | 1 |
44,334 | 23,587,251,250 | IssuesEvent | 2022-08-23 12:39:38 | python/cpython | https://api.github.com/repos/python/cpython | closed | Patch for thread-support in md5module.c | performance extension-modules pending | BPO | [4818](https://bugs.python.org/issue4818)
--- | :---
Nosy | @loewis, @gpshead, @jcea, @tiran
Files | <li>[md5module_small_locks-2.diff](https://bugs.python.org/file12568/md5module_small_locks-2.diff "Uploaded as text/plain at 2009-01-03.14:44:19 by ebfe")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2009-01-03.11:23:59.564>
labels = ['extension-modules', 'performance']
title = 'Patch for thread-support in md5module.c'
updated_at = <Date 2012-10-06.23:44:01.385>
user = 'https://bugs.python.org/ebfe'
```
bugs.python.org fields:
```python
activity = <Date 2012-10-06.23:44:01.385>
actor = 'christian.heimes'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Extension Modules']
creation = <Date 2009-01-03.11:23:59.564>
creator = 'ebfe'
dependencies = []
files = ['12568']
hgrepos = []
issue_num = 4818
keywords = ['patch']
message_count = 5.0
messages = ['78947', '78950', '78954', '78963', '81727']
nosy_count = 5.0
nosy_names = ['loewis', 'gregory.p.smith', 'jcea', 'christian.heimes', 'ebfe']
pr_nums = []
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'performance'
url = 'https://bugs.python.org/issue4818'
versions = ['Python 3.4']
```
</p></details>
| True | Patch for thread-support in md5module.c - BPO | [4818](https://bugs.python.org/issue4818)
--- | :---
Nosy | @loewis, @gpshead, @jcea, @tiran
Files | <li>[md5module_small_locks-2.diff](https://bugs.python.org/file12568/md5module_small_locks-2.diff "Uploaded as text/plain at 2009-01-03.14:44:19 by ebfe")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2009-01-03.11:23:59.564>
labels = ['extension-modules', 'performance']
title = 'Patch for thread-support in md5module.c'
updated_at = <Date 2012-10-06.23:44:01.385>
user = 'https://bugs.python.org/ebfe'
```
bugs.python.org fields:
```python
activity = <Date 2012-10-06.23:44:01.385>
actor = 'christian.heimes'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Extension Modules']
creation = <Date 2009-01-03.11:23:59.564>
creator = 'ebfe'
dependencies = []
files = ['12568']
hgrepos = []
issue_num = 4818
keywords = ['patch']
message_count = 5.0
messages = ['78947', '78950', '78954', '78963', '81727']
nosy_count = 5.0
nosy_names = ['loewis', 'gregory.p.smith', 'jcea', 'christian.heimes', 'ebfe']
pr_nums = []
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'performance'
url = 'https://bugs.python.org/issue4818'
versions = ['Python 3.4']
```
</p></details>
| non_main | patch for thread support in c bpo nosy loewis gpshead jcea tiran files uploaded as text plain at by ebfe note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title patch for thread support in c updated at user bugs python org fields python activity actor christian heimes assignee none closed false closed date none closer none components creation creator ebfe dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage patch review status open superseder none type performance url versions | 0 |
1,934 | 6,609,879,948 | IssuesEvent | 2017-09-19 15:50:32 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Feature request] Регулируемое время на чтение комментариев | need-maintainer | ### 1. Запрос
#### 1. Желательно
Неплохо было бы, если б пользователь комнаты имел возможность регулировать время, которое ему нужно на прочтение комментария. Убрать время на просмотр источника.
#### 2. Альтернатива
Установить 3 секунды на просмотр комментария. Убрать время на просмотр источника.
### 2. Аргументация
В настоящее время даётся 3 секунды на просмотр комментария и 3 секунды на просмотр источника.
1. Время на просмотр источника — лишнее. Изучение источников — процесс не быстрый, во время интенсивной игры проанализировать их толком не получится.
1. Секунды простоя на каждом вопросе перетекают в минуты. Каждая тренировка пакета затягивается на несколько минут из-за простоев.
Спасибо. | True | [Feature request] Регулируемое время на чтение комментариев - ### 1. Запрос
#### 1. Желательно
Неплохо было бы, если б пользователь комнаты имел возможность регулировать время, которое ему нужно на прочтение комментария. Убрать время на просмотр источника.
#### 2. Альтернатива
Установить 3 секунды на просмотр комментария. Убрать время на просмотр источника.
### 2. Аргументация
В настоящее время даётся 3 секунды на просмотр комментария и 3 секунды на просмотр источника.
1. Время на просмотр источника — лишнее. Изучение источников — процесс не быстрый, во время интенсивной игры проанализировать их толком не получится.
1. Секунды простоя на каждом вопросе перетекают в минуты. Каждая тренировка пакета затягивается на несколько минут из-за простоев.
Спасибо. | main | регулируемое время на чтение комментариев запрос желательно неплохо было бы если б пользователь комнаты имел возможность регулировать время которое ему нужно на прочтение комментария убрать время на просмотр источника альтернатива установить секунды на просмотр комментария убрать время на просмотр источника аргументация в настоящее время даётся секунды на просмотр комментария и секунды на просмотр источника время на просмотр источника — лишнее изучение источников — процесс не быстрый во время интенсивной игры проанализировать их толком не получится секунды простоя на каждом вопросе перетекают в минуты каждая тренировка пакета затягивается на несколько минут из за простоев спасибо | 1 |
116,599 | 17,379,814,269 | IssuesEvent | 2021-07-31 13:13:17 | sap-labs-france/ev-server | https://api.github.com/repos/sap-labs-france/ev-server | closed | Logs > Security: Ensure that all user's actions are logged using Logging.logSecurityInfo/Warning/Error | security | To ensure tracability of the user's actions from the UI.
These logs will be kept one year.
Request a meeting for this issue. | True | Logs > Security: Ensure that all user's actions are logged using Logging.logSecurityInfo/Warning/Error - To ensure tracability of the user's actions from the UI.
These logs will be kept one year.
Request a meeting for this issue. | non_main | logs security ensure that all user s actions are logged using logging logsecurityinfo warning error to ensure tracability of the user s actions from the ui these logs will be kept one year request a meeting for this issue | 0 |
4,158 | 19,957,807,616 | IssuesEvent | 2022-01-28 02:46:54 | microsoft/DirectXTK | https://api.github.com/repos/microsoft/DirectXTK | opened | Retire VS 2017 support | maintainence | Visual Studio 2017 reaches it's [mainstream end-of-life]() on **April 2022**. I should retire these projects that time:
* DirectXTK_Desktop_2017.vcxproj
* DirectXTK_Desktop_2017_Win10.vcxproj
* DirectXTK_Windows10_2017.vcxproj
> I am not sure when I'll be retiring Xbox One XDK support which is not supported for VS 2019 or later. That means I'm not sure if I'll delete ``DirectXTK_XboxOneXDK_2017.vcxproj`` or not with this change.
| True | Retire VS 2017 support - Visual Studio 2017 reaches it's [mainstream end-of-life]() on **April 2022**. I should retire these projects that time:
* DirectXTK_Desktop_2017.vcxproj
* DirectXTK_Desktop_2017_Win10.vcxproj
* DirectXTK_Windows10_2017.vcxproj
> I am not sure when I'll be retiring Xbox One XDK support which is not supported for VS 2019 or later. That means I'm not sure if I'll delete ``DirectXTK_XboxOneXDK_2017.vcxproj`` or not with this change.
| main | retire vs support visual studio reaches it s on april i should retire these projects that time directxtk desktop vcxproj directxtk desktop vcxproj directxtk vcxproj i am not sure when i ll be retiring xbox one xdk support which is not supported for vs or later that means i m not sure if i ll delete directxtk xboxonexdk vcxproj or not with this change | 1 |
1,372 | 5,949,202,693 | IssuesEvent | 2017-05-26 13:42:50 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | closed | Inconsistent behavior of align functions | Component-Analysis maintainability question refactoring usability | In #714 the rmsd functions default behavior is changed and I looked if this would affect potential uses of it in `align.py`. I found that the functions/classes in there to alignment all slightly differently.
1. **mass_weighted**
`rotation_matrix` and `rmsd` use weights relative to the mean weight for calculations. In eg `alignto` it
means we use the center_of_mass for translational centering AND the mass as additional fit weights.
2. **rms_fit_trj**
This function is only using the center of mass for centering. Weights are optionally given to the
alignment function `CalcRMSDRotationalMatrix`.
3. **CalcRMSDRotationalMatrix**
This is used directly in functions that work on AtomGroups in `align.py` and `rms.py` instead of
the wrapper functions `rmsd` and `rotation_matrix`. Only Exception `alignto`.
All of this can be fixed when we port the alignment to the new `BaseAnalysis` class. The questions is what unified behavior should they have (similar/same keywords should have the same effect in different functions).
| True | Inconsistent behavior of align functions - In #714 the rmsd functions default behavior is changed and I looked if this would affect potential uses of it in `align.py`. I found that the functions/classes in there to alignment all slightly differently.
1. **mass_weighted**
`rotation_matrix` and `rmsd` use weights relative to the mean weight for calculations. In eg `alignto` it
means we use the center_of_mass for translational centering AND the mass as additional fit weights.
2. **rms_fit_trj**
This function is only using the center of mass for centering. Weights are optionally given to the
alignment function `CalcRMSDRotationalMatrix`.
3. **CalcRMSDRotationalMatrix**
This is used directly in functions that work on AtomGroups in `align.py` and `rms.py` instead of
the wrapper functions `rmsd` and `rotation_matrix`. Only Exception `alignto`.
All of this can be fixed when we port the alignment to the new `BaseAnalysis` class. The questions is what unified behavior should they have (similar/same keywords should have the same effect in different functions).
| main | inconsistent behavior of align functions in the rmsd functions default behavior is changed and i looked if this would affect potential uses of it in align py i found that the functions classes in there to alignment all slightly differently mass weighted rotation matrix and rmsd use weights relative to the mean weight for calculations in eg alignto it means we use the center of mass for translational centering and the mass as additional fit weights rms fit trj this function is only using the center of mass for centering weights are optionally given to the alignment function calcrmsdrotationalmatrix calcrmsdrotationalmatrix this is used directly in functions that work on atomgroups in align py and rms py instead of the wrapper functions rmsd and rotation matrix only exception alignto all of this can be fixed when we port the alignment to the new baseanalysis class the questions is what unified behavior should they have similar same keywords should have the same effect in different functions | 1 |
3,289 | 12,577,893,989 | IssuesEvent | 2020-06-09 10:17:14 | LightForm-group/matflow | https://api.github.com/repos/LightForm-group/matflow | closed | Move initialisation and validation of task schemas and extensions | maintainability | These are currently in the root `__init__.py`, and so run whenever a `matflow` command is invoked. However, it is not always necessary to initialise and validate task schemas and extensions. E.g. when loading a workflow into the viewer. | True | Move initialisation and validation of task schemas and extensions - These are currently in the root `__init__.py`, and so run whenever a `matflow` command is invoked. However, it is not always necessary to initialise and validate task schemas and extensions. E.g. when loading a workflow into the viewer. | main | move initialisation and validation of task schemas and extensions these are currently in the root init py and so run whenever a matflow command is invoked however it is not always necessary to initialise and validate task schemas and extensions e g when loading a workflow into the viewer | 1 |
73,872 | 7,360,739,372 | IssuesEvent | 2018-03-10 21:41:59 | magneticstain/Inquisition | https://api.github.com/repos/magneticstain/Inquisition | opened | Review Code | enhancement testing | Now that the first submodule has been completed (Alerts), we should comb through the come to review it and optimize/improve any deficiencies identified. | 1.0 | Review Code - Now that the first submodule has been completed (Alerts), we should comb through the come to review it and optimize/improve any deficiencies identified. | non_main | review code now that the first submodule has been completed alerts we should comb through the come to review it and optimize improve any deficiencies identified | 0 |
4,492 | 23,391,995,205 | IssuesEvent | 2022-08-11 18:48:02 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | opened | change all examples to use `configs.azapp` for their secret store | 💫 refactor 🚧 maintainer issue | **Describe the solution you'd like**
n/a
**Additional context**
n/a | True | change all examples to use `configs.azapp` for their secret store - **Describe the solution you'd like**
n/a
**Additional context**
n/a | main | change all examples to use configs azapp for their secret store describe the solution you d like n a additional context n a | 1 |
31,564 | 13,559,864,771 | IssuesEvent | 2020-09-18 00:08:09 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | Support for customer_owned_ipv4_pool for Outpost ALBs | enhancement good first issue service/elbv2 | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
This feature request is to add the ability to use **customer_owned_ipv4_pool** when creating an ALB in Outpost.

```
aws elbv2 create-load-balancer --name my-load-balancer --subnets <SUBNET> --scheme internal --customer-owned-ipv4-pool <COIP_POOL_ID>
```
https://docs.aws.amazon.com/cli/latest/reference/elbv2/create-load-balancer.html
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* aws_lb
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_lb" "test" {
name = "test-lb-tf"
internal = true
load_balancer_type = "application"
security_groups = [aws_security_group.lb_sg.id]
subnets = [aws_subnet.public.*.id]
customer_owned_ipv4_pool = <COIP_POOL_ID>
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-amazon-ec2-fleet/
--->
I couldn't find any issues relating to this. Apologies if my searching was lacking and there is.
| 1.0 | Support for customer_owned_ipv4_pool for Outpost ALBs - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
This feature request is to add the ability to use **customer_owned_ipv4_pool** when creating an ALB in Outpost.

```
aws elbv2 create-load-balancer --name my-load-balancer --subnets <SUBNET> --scheme internal --customer-owned-ipv4-pool <COIP_POOL_ID>
```
https://docs.aws.amazon.com/cli/latest/reference/elbv2/create-load-balancer.html
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* aws_lb
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_lb" "test" {
name = "test-lb-tf"
internal = true
load_balancer_type = "application"
security_groups = [aws_security_group.lb_sg.id]
subnets = [aws_subnet.public.*.id]
customer_owned_ipv4_pool = <COIP_POOL_ID>
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-amazon-ec2-fleet/
--->
I couldn't find any issues relating to this. Apologies if my searching was lacking and there is.
| non_main | support for customer owned pool for outpost albs community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description this feature request is to add the ability to use customer owned pool when creating an alb in outpost aws create load balancer name my load balancer subnets scheme internal customer owned pool new or affected resource s aws lb potential terraform configuration hcl resource aws lb test name test lb tf internal true load balancer type application security groups subnets customer owned pool references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example i couldn t find any issues relating to this apologies if my searching was lacking and there is | 0 |
86,872 | 10,519,734,073 | IssuesEvent | 2019-09-29 20:02:34 | capstone-coal/pycoal | https://api.github.com/repos/capstone-coal/pycoal | opened | Provide More Detailed Algorithm Descriptions | documentation | It would be beneficial to have more thorough descriptions of what each part of the program does internally, even if it's at a high level. This would especially make sense for the examples portion so that people can know what exactly it is that they're looking at. Maybe we can compile our knowledge here and then format it nicely in the examples README? We can also include non-essential / lower-level information in separate documentation files.
Information we have so far:
### Mineral
This algorithm classifies each pixel as a certain kind of mineral. Once it finishes running, you can launch QGIS (download/install it [here](https://qgis.org/en/site/forusers/download.html)) and view it by clicking Layer > Add Layer > Add Raster Layer... Then select the .img file (something like <flight name>_img_class.img). You can expand the layer, double click on one of the minerals, open the Symbology tab, and then change the colors to visualize it however you see fit.
### Mining
Mining runs in a similar fashion to mineral. It takes the output of mineral classification and classifies mines.
### Correlation
The environment correlation part runs gdal_rasterize to match hydrography features with the image that the mining part produced. It does this using the National Hydrography Dataset, which contains information about water drainage paths in America (the flow of rivers, lakes, dams, etc...). In the example we currently use (example_environment.py), the code runs gdal_rasterize.py to match the mining image with NHDFlowine.shp. This basically tries to show which possible directions water flows in (and superimpose it onto the mining image?). Read [this](https://nhd.usgs.gov/userGuide/Robohelpfiles/NHD_User_Guide/Feature_Catalog/Hydrography_Dataset/NHDFlowline/NHDFlowline.htm) for more info about NHDFlowline. Read [this](https://www.usgs.gov/core-science-systems/ngp/national-hydrography/national-hydrography-dataset?qt-science_support_page_related_con=0#qt-science_support_page_related_con) for more information about the NHD in general. There's also a nice visualizer of United States hydrography data [here](https://viewer.nationalmap.gov/basic/?basemap=b1&category=nhd&title=NHD%20View). It includes NHD and NHDPlus, but I'm not sure if this project supports NHDPlus or how the file format has changed for NHDPlus other than making it higher resolution. To use the correlation algorithm in a meaningful way, you have to download a shape file from this link that is in the same area as the spectral data you're using (it will be in the format NHDFlowline.shp).
@capstone-coal/19-usc-capstone-team | 1.0 | Provide More Detailed Algorithm Descriptions - It would be beneficial to have more thorough descriptions of what each part of the program does internally, even if it's at a high level. This would especially make sense for the examples portion so that people can know what exactly it is that they're looking at. Maybe we can compile our knowledge here and then format it nicely in the examples README? We can also include non-essential / lower-level information in separate documentation files.
Information we have so far:
### Mineral
This algorithm classifies each pixel as a certain kind of mineral. Once it finishes running, you can launch QGIS (download/install it [here](https://qgis.org/en/site/forusers/download.html)) and view it by clicking Layer > Add Layer > Add Raster Layer... Then select the .img file (something like <flight name>_img_class.img). You can expand the layer, double click on one of the minerals, open the Symbology tab, and then change the colors to visualize it however you see fit.
### Mining
Mining runs in a similar fashion to mineral. It takes the output of mineral classification and classifies mines.
### Correlation
The environment correlation part runs gdal_rasterize to match hydrography features with the image that the mining part produced. It does this using the National Hydrography Dataset, which contains information about water drainage paths in America (the flow of rivers, lakes, dams, etc...). In the example we currently use (example_environment.py), the code runs gdal_rasterize.py to match the mining image with NHDFlowine.shp. This basically tries to show which possible directions water flows in (and superimpose it onto the mining image?). Read [this](https://nhd.usgs.gov/userGuide/Robohelpfiles/NHD_User_Guide/Feature_Catalog/Hydrography_Dataset/NHDFlowline/NHDFlowline.htm) for more info about NHDFlowline. Read [this](https://www.usgs.gov/core-science-systems/ngp/national-hydrography/national-hydrography-dataset?qt-science_support_page_related_con=0#qt-science_support_page_related_con) for more information about the NHD in general. There's also a nice visualizer of United States hydrography data [here](https://viewer.nationalmap.gov/basic/?basemap=b1&category=nhd&title=NHD%20View). It includes NHD and NHDPlus, but I'm not sure if this project supports NHDPlus or how the file format has changed for NHDPlus other than making it higher resolution. To use the correlation algorithm in a meaningful way, you have to download a shape file from this link that is in the same area as the spectral data you're using (it will be in the format NHDFlowline.shp).
@capstone-coal/19-usc-capstone-team | non_main | provide more detailed algorithm descriptions it would be beneficial to have more thorough descriptions of what each part of the program does internally even if it s at a high level this would especially make sense for the examples portion so that people can know what exactly it is that they re looking at maybe we can compile our knowledge here and then format it nicely in the examples readme we can also include non essential lower level information in separate documentation files information we have so far mineral this algorithm classifies each pixel as a certain kind of mineral once it finishes running you can launch qgis download install it and view it by clicking layer add layer add raster layer then select the img file something like img class img you can expand the layer double click on one of the minerals open the symbology tab and then change the colors to visualize it however you see fit mining mining runs in a similar fashion to mineral it takes the output of mineral classification and classifies mines correlation the environment correlation part runs gdal rasterize to match hydrography features with the image that the mining part produced it does this using the national hydrography dataset which contains information about water drainage paths in america the flow of rivers lakes dams etc in the example we currently use example environment py the code runs gdal rasterize py to match the mining image with nhdflowine shp this basically tries to show which possible directions water flows in and superimpose it onto the mining image read for more info about nhdflowline read for more information about the nhd in general there s also a nice visualizer of united states hydrography data it includes nhd and nhdplus but i m not sure if this project supports nhdplus or how the file format has changed for nhdplus other than making it higher resolution to use the correlation algorithm in a meaningful way you have to download a shape file from this link that is in the same area as the spectral data you re using it will be in the format nhdflowline shp capstone coal usc capstone team | 0 |
689,817 | 23,635,384,795 | IssuesEvent | 2022-08-25 12:55:44 | PastVu/pastvu | https://api.github.com/repos/PastVu/pastvu | closed | Subscriptions bug / баг с подпиской | Bug Priority: Low Subscriptions good first issue | https://pastvu.com/news/115?hl=comment-1178636
Не сработала автоподписка при первом комментировании
Такое бывает, если комментируемое фото открыто кликом по точке на карте со станицы, на которую подписки нет /news/123?hl=comment-764978. Также случается, если впервые комменитруемое фото открыто из ленты пользователя выше карты, или ленты Ближайшие фотографии ниже карты, или из галереи. У меня такие случаи происходят постоянно, уже перестал про них писать
| 1.0 | Subscriptions bug / баг с подпиской - https://pastvu.com/news/115?hl=comment-1178636
Не сработала автоподписка при первом комментировании
Такое бывает, если комментируемое фото открыто кликом по точке на карте со станицы, на которую подписки нет /news/123?hl=comment-764978. Также случается, если впервые комменитруемое фото открыто из ленты пользователя выше карты, или ленты Ближайшие фотографии ниже карты, или из галереи. У меня такие случаи происходят постоянно, уже перестал про них писать
| non_main | subscriptions bug баг с подпиской не сработала автоподписка при первом комментировании такое бывает если комментируемое фото открыто кликом по точке на карте со станицы на которую подписки нет news hl comment также случается если впервые комменитруемое фото открыто из ленты пользователя выше карты или ленты ближайшие фотографии ниже карты или из галереи у меня такие случаи происходят постоянно уже перестал про них писать | 0 |
1,248 | 5,308,981,158 | IssuesEvent | 2017-02-12 04:05:40 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | vmware_guest.py implement reconfigure options | affects_2.2 cloud feature_idea vmware waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_guest.py
##### ANSIBLE VERSION
```
ansible 2.2.0
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Like the module vsphere_guest.py, this module should support reconfigure in order to add or remove resources from the VM.
For instance:
Add a new disk. Remove is not required.
Add/Remove memory.
Add/Remove vcpus.
Add/Remove network interfaces.
Additionally would allow to specify minimal resource settings for instance:
- Memory limits/shares/reservations.
- Disk IO limits/shares.
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
```
N/A
```
| True | vmware_guest.py implement reconfigure options - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_guest.py
##### ANSIBLE VERSION
```
ansible 2.2.0
```
##### CONFIGURATION
Default configuration
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Like the module vsphere_guest.py, this module should support reconfigure in order to add or remove resources from the VM.
For instance:
Add a new disk. Remove is not required.
Add/Remove memory.
Add/Remove vcpus.
Add/Remove network interfaces.
Additionally would allow to specify minimal resource settings for instance:
- Memory limits/shares/reservations.
- Disk IO limits/shares.
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
```
N/A
```
| main | vmware guest py implement reconfigure options issue type feature idea component name vmware guest py ansible version ansible configuration default configuration os environment n a summary like the module vsphere guest py this module should support reconfigure in order to add or remove resources from the vm for instance add a new disk remove is not required add remove memory add remove vcpus add remove network interfaces additionally would allow to specify minimal resource settings for instance memory limits shares reservations disk io limits shares steps to reproduce n a expected results n a actual results n a | 1 |
2,006 | 6,718,164,834 | IssuesEvent | 2017-10-15 09:04:16 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Feature request] Тао-блоки | enhancement need-maintainer | ### 1. Запрос
Было бы неплохо, если бы в комнатах была возможность дополнительно отыгрывать наиболее сложные места пакетов — т. н. «тао-блоки».
### 2. Аргументация
Некоторые понятия в пакетах из-за тао запоминаются труднее остальных. Чтобы они лучше запомнились, нелишней была бы дополнительная их прокачка.
Да, я в курсе, что формируется пакет `неотвеченные`. Но тао-термины желательно запоминать именно «блоками», где они расположены рядом. Это помогает лучше отделить тао-термины друг от друга в памяти, иначе говоря, игрок будет меньше в них путаться.
Что такое «тао», в подробностях см. здесь — **<http://kristinita.ru/Gingerinas/%D0%A2%D0%B0%D0%BE>**.
### 3. Пример реализации
#### 3.1. Дополнительные строки к пакету
[**Пакет «Термины_генетики»**](https://github.com/Kristinita/Eric-s-Green-Room/blob/master/Термины_генетики.txt). Например, в конец пакета добавляются строки:
```markdown
<tao>
63, 66, 79, 97, 118
85, 88, 105
```
Числа означают номера строк пакета. я путаюсь в определениях репликации, трансляции, транскрипции, транслокации и экспрессии; а также пенетрантности, плейотропии и плюрипотентности. Такие наборы понятий, в которых путаюсь, есть едва ли не в каждом присланном пакете.
#### 3.2. Игровой процесс
Если в опциях указано `Отыгрывать Тао-блоки? — ДА`, тогда по окончании отыгрыша пакета опять появляются вопросы со строками 63, 66, 79 и так далее.
Неплохо было бы реализовать так (на примере выше): произошёл отыгрыш последнего вопроса пакета → появляется сообщение `Тао 1` → отыгрываются вопросы строк 63, 66, 79, 97, 118 → появляется сообщение `Тао 2` → отыгрываются вопросы 85, 88, 105 → появляется сообщение `Вопросы закончились. Спасибо за игру.`.
Спасибо. | True | [Feature request] Тао-блоки - ### 1. Запрос
Было бы неплохо, если бы в комнатах была возможность дополнительно отыгрывать наиболее сложные места пакетов — т. н. «тао-блоки».
### 2. Аргументация
Некоторые понятия в пакетах из-за тао запоминаются труднее остальных. Чтобы они лучше запомнились, нелишней была бы дополнительная их прокачка.
Да, я в курсе, что формируется пакет `неотвеченные`. Но тао-термины желательно запоминать именно «блоками», где они расположены рядом. Это помогает лучше отделить тао-термины друг от друга в памяти, иначе говоря, игрок будет меньше в них путаться.
Что такое «тао», в подробностях см. здесь — **<http://kristinita.ru/Gingerinas/%D0%A2%D0%B0%D0%BE>**.
### 3. Пример реализации
#### 3.1. Дополнительные строки к пакету
[**Пакет «Термины_генетики»**](https://github.com/Kristinita/Eric-s-Green-Room/blob/master/Термины_генетики.txt). Например, в конец пакета добавляются строки:
```markdown
<tao>
63, 66, 79, 97, 118
85, 88, 105
```
Числа означают номера строк пакета. я путаюсь в определениях репликации, трансляции, транскрипции, транслокации и экспрессии; а также пенетрантности, плейотропии и плюрипотентности. Такие наборы понятий, в которых путаюсь, есть едва ли не в каждом присланном пакете.
#### 3.2. Игровой процесс
Если в опциях указано `Отыгрывать Тао-блоки? — ДА`, тогда по окончании отыгрыша пакета опять появляются вопросы со строками 63, 66, 79 и так далее.
Неплохо было бы реализовать так (на примере выше): произошёл отыгрыш последнего вопроса пакета → появляется сообщение `Тао 1` → отыгрываются вопросы строк 63, 66, 79, 97, 118 → появляется сообщение `Тао 2` → отыгрываются вопросы 85, 88, 105 → появляется сообщение `Вопросы закончились. Спасибо за игру.`.
Спасибо. | main | тао блоки запрос было бы неплохо если бы в комнатах была возможность дополнительно отыгрывать наиболее сложные места пакетов — т н «тао блоки» аргументация некоторые понятия в пакетах из за тао запоминаются труднее остальных чтобы они лучше запомнились нелишней была бы дополнительная их прокачка да я в курсе что формируется пакет неотвеченные но тао термины желательно запоминать именно «блоками» где они расположены рядом это помогает лучше отделить тао термины друг от друга в памяти иначе говоря игрок будет меньше в них путаться что такое «тао» в подробностях см здесь — пример реализации дополнительные строки к пакету например в конец пакета добавляются строки markdown числа означают номера строк пакета я путаюсь в определениях репликации трансляции транскрипции транслокации и экспрессии а также пенетрантности плейотропии и плюрипотентности такие наборы понятий в которых путаюсь есть едва ли не в каждом присланном пакете игровой процесс если в опциях указано отыгрывать тао блоки — да тогда по окончании отыгрыша пакета опять появляются вопросы со строками и так далее неплохо было бы реализовать так на примере выше произошёл отыгрыш последнего вопроса пакета → появляется сообщение тао → отыгрываются вопросы строк → появляется сообщение тао → отыгрываются вопросы → появляется сообщение вопросы закончились спасибо за игру спасибо | 1 |
825,109 | 31,273,356,623 | IssuesEvent | 2023-08-22 03:08:26 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | System.InvalidOperationException: Cannot call InternalChild unless there's exactly one Drawable in Children (currently 0)! | priority:0 osu!framework issue type:input | This was probably inadvertently exposed by https://github.com/ppy/osu/pull/24121. I'm not sure how to reproduce this exactly, but this seems potentially related to (and may potentially be fixed by) https://github.com/ppy/osu-framework/pull/5557, i.e. `KeyBindingContainer` event handlers firing via disposed drawables.
Sentry Issue: [OSU-ME9](https://sentry.ppy.sh/organizations/ppy/issues/22717/?referrer=github_integration)
```
System.InvalidOperationException: Cannot call InternalChild unless there's exactly one Drawable in Children (currently 0)!
?, in T Container<T>.get_Child()
?, in void DrawableHoldNote.OnReleased(KeyBindingReleaseEvent<ManiaAction> e)
?, in bool KeyBindingContainer<T>.triggerKeyBindingEvent(IDrawable drawable, KeyBindingEvent<T> e)
?, in void KeyBindingContainer<T>.PropagateReleased(IEnumerable<Drawable> drawables, InputState state, T released)
?, in void KeyBindingContainer<T>.handleNewReleased(InputState state, InputKey releasedKey)
...
(24 additional frame(s) were not displayed)
An unhandled error has occurred.
``` | 1.0 | System.InvalidOperationException: Cannot call InternalChild unless there's exactly one Drawable in Children (currently 0)! - This was probably inadvertently exposed by https://github.com/ppy/osu/pull/24121. I'm not sure how to reproduce this exactly, but this seems potentially related to (and may potentially be fixed by) https://github.com/ppy/osu-framework/pull/5557, i.e. `KeyBindingContainer` event handlers firing via disposed drawables.
Sentry Issue: [OSU-ME9](https://sentry.ppy.sh/organizations/ppy/issues/22717/?referrer=github_integration)
```
System.InvalidOperationException: Cannot call InternalChild unless there's exactly one Drawable in Children (currently 0)!
?, in T Container<T>.get_Child()
?, in void DrawableHoldNote.OnReleased(KeyBindingReleaseEvent<ManiaAction> e)
?, in bool KeyBindingContainer<T>.triggerKeyBindingEvent(IDrawable drawable, KeyBindingEvent<T> e)
?, in void KeyBindingContainer<T>.PropagateReleased(IEnumerable<Drawable> drawables, InputState state, T released)
?, in void KeyBindingContainer<T>.handleNewReleased(InputState state, InputKey releasedKey)
...
(24 additional frame(s) were not displayed)
An unhandled error has occurred.
``` | non_main | system invalidoperationexception cannot call internalchild unless there s exactly one drawable in children currently this was probably inadvertently exposed by i m not sure how to reproduce this exactly but this seems potentially related to and may potentially be fixed by i e keybindingcontainer event handlers firing via disposed drawables sentry issue system invalidoperationexception cannot call internalchild unless there s exactly one drawable in children currently in t container get child in void drawableholdnote onreleased keybindingreleaseevent e in bool keybindingcontainer triggerkeybindingevent idrawable drawable keybindingevent e in void keybindingcontainer propagatereleased ienumerable drawables inputstate state t released in void keybindingcontainer handlenewreleased inputstate state inputkey releasedkey additional frame s were not displayed an unhandled error has occurred | 0 |
3,991 | 18,449,940,147 | IssuesEvent | 2021-10-15 09:15:25 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Drowsiness Needs to be "automatically" capped | Maintainability/Hinders improvements Good First Issue | ANYTIME you are having to touch drowsiness, it requires a clamp so usually the code is
`Mob.drowsiness = max(mob.drowsiness + effect, 0)`
This is because negative values will cause semi-permanent drowsiness, and has resulted in a non-zero amount of bugs (#61396 for example).
What i would like is to have a proc that handles this on the mob similar to other values such as sleepiness and damage.
The code then will likely look like
`Mob.AdjustDrowsiness(effect)`
This will need to be changed for any part of the code that touches drowsiness. | True | Drowsiness Needs to be "automatically" capped - ANYTIME you are having to touch drowsiness, it requires a clamp so usually the code is
`Mob.drowsiness = max(mob.drowsiness + effect, 0)`
This is because negative values will cause semi-permanent drowsiness, and has resulted in a non-zero amount of bugs (#61396 for example).
What i would like is to have a proc that handles this on the mob similar to other values such as sleepiness and damage.
The code then will likely look like
`Mob.AdjustDrowsiness(effect)`
This will need to be changed for any part of the code that touches drowsiness. | main | drowsiness needs to be automatically capped anytime you are having to touch drowsiness it requires a clamp so usually the code is mob drowsiness max mob drowsiness effect this is because negative values will cause semi permanent drowsiness and has resulted in a non zero amount of bugs for example what i would like is to have a proc that handles this on the mob similar to other values such as sleepiness and damage the code then will likely look like mob adjustdrowsiness effect this will need to be changed for any part of the code that touches drowsiness | 1 |
5,668 | 29,494,544,953 | IssuesEvent | 2023-06-02 15:49:42 | ipfs/js-ipfs | https://api.github.com/repos/ipfs/js-ipfs | closed | A way to sign things using IPFS keys | exp/novice kind/feature need/maintainer-input kind/maybe-in-helia | So each peer has its own key (and optional additional keys), but it seems there is no API which would allow one to sign messages using those keys (except by storing them inside IPNS). | True | A way to sign things using IPFS keys - So each peer has its own key (and optional additional keys), but it seems there is no API which would allow one to sign messages using those keys (except by storing them inside IPNS). | main | a way to sign things using ipfs keys so each peer has its own key and optional additional keys but it seems there is no api which would allow one to sign messages using those keys except by storing them inside ipns | 1 |
1,026 | 4,821,438,729 | IssuesEvent | 2016-11-05 10:17:50 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Jira modules failes with "dictionary update sequence element #0 has length 1; 2 is required" in Ansible 2.0.2.0 | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
Jira
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
Using the jira module with Ansible 2.1.0.0 results in the following error message:
```
dictionary update sequence element #0 has length 1; 2 is required
```
Playbooks worked with Ansible 2.0.2.0
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: xxxx
tasks:
- name: (JIRA) Sample ansible issue
jira: description=something issuetype=Bug operation=create password=XXXX project=xxx summary=test uri=https://hostname.com username=XXX
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| True | Jira modules failes with "dictionary update sequence element #0 has length 1; 2 is required" in Ansible 2.0.2.0 - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
Jira
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
Using the jira module with Ansible 2.1.0.0 results in the following error message:
```
dictionary update sequence element #0 has length 1; 2 is required
```
Playbooks worked with Ansible 2.0.2.0
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: xxxx
tasks:
- name: (JIRA) Sample ansible issue
jira: description=something issuetype=Bug operation=create password=XXXX project=xxx summary=test uri=https://hostname.com username=XXX
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| main | jira modules failes with dictionary update sequence element has length is required in ansible issue type bug report component name jira ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary using the jira module with ansible results in the following error message dictionary update sequence element has length is required playbooks worked with ansible steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts xxxx tasks name jira sample ansible issue jira description something issuetype bug operation create password xxxx project xxx summary test uri username xxx expected results actual results | 1 |
22,633 | 15,342,369,823 | IssuesEvent | 2021-02-27 15:57:25 | pythonitalia/pycon | https://api.github.com/repos/pythonitalia/pycon | closed | Investigate how and where to store our DB code | infrastructure | It's a DB shared between all resources, does it make sense to put it into the global terraform? | 1.0 | Investigate how and where to store our DB code - It's a DB shared between all resources, does it make sense to put it into the global terraform? | non_main | investigate how and where to store our db code it s a db shared between all resources does it make sense to put it into the global terraform | 0 |
5,177 | 26,347,684,227 | IssuesEvent | 2023-01-11 00:12:27 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Refactor `network-api/networkapi/tests/tests.py` | engineering maintain | This module currently contains a huge list of unrelated tests. The tests should be moved into a structure that matches the source structure. If the units (e.g. classes or functions) that are being tested are in different modules, so should their tests be. That prevent test modules from getting unwieldy and the tests hard to find. | True | Refactor `network-api/networkapi/tests/tests.py` - This module currently contains a huge list of unrelated tests. The tests should be moved into a structure that matches the source structure. If the units (e.g. classes or functions) that are being tested are in different modules, so should their tests be. That prevent test modules from getting unwieldy and the tests hard to find. | main | refactor network api networkapi tests tests py this module currently contains a huge list of unrelated tests the tests should be moved into a structure that matches the source structure if the units e g classes or functions that are being tested are in different modules so should their tests be that prevent test modules from getting unwieldy and the tests hard to find | 1 |
758 | 4,351,996,319 | IssuesEvent | 2016-08-01 03:35:41 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Bug: Check in unarchive module whether 'dest' is writable should be removed. | bug_report feature_idea waiting_on_maintainer | ##### Issue Type:
Bug in ```unarchive``` module.
##### Ansible Version:
```ansible 1.9.0.1```
##### Ansible Configuration:
n/a
##### Environment:
n/a
##### Summary:
The ```unarchive``` module checks whether the ```dest``` directory is writable before unpacking the archive:
```
if not os.access(dest, os.W_OK):
module.fail_json(msg="Destination '%s' not writable" % dest)
```
While this is certainly well intended it prevents archives from being unpacked that don't actually create files in ```dest``` but only in (writable!) sub-directories. For instance an archive like this
```
# tar tvf myarchive.tar
/tmp/file1
/tmp/file2
```
will trigger this error if the ```unarchive``` module detects that the ```/``` directory is not writable even if ```/tmp``` was.
##### Steps To Reproduce:
n/a
##### Expected Results:
Being able to use ```unarchive``` with archives similar to the one as described in the Summary. I believe the check whether ```dest``` is writable should be removed.
##### Actual Results:
Currently the ```unarchive``` module reports a
```
msg: Destination '/' not writable
```
message.
| True | Bug: Check in unarchive module whether 'dest' is writable should be removed. - ##### Issue Type:
Bug in ```unarchive``` module.
##### Ansible Version:
```ansible 1.9.0.1```
##### Ansible Configuration:
n/a
##### Environment:
n/a
##### Summary:
The ```unarchive``` module checks whether the ```dest``` directory is writable before unpacking the archive:
```
if not os.access(dest, os.W_OK):
module.fail_json(msg="Destination '%s' not writable" % dest)
```
While this is certainly well intended it prevents archives from being unpacked that don't actually create files in ```dest``` but only in (writable!) sub-directories. For instance an archive like this
```
# tar tvf myarchive.tar
/tmp/file1
/tmp/file2
```
will trigger this error if the ```unarchive``` module detects that the ```/``` directory is not writable even if ```/tmp``` was.
##### Steps To Reproduce:
n/a
##### Expected Results:
Being able to use ```unarchive``` with archives similar to the one as described in the Summary. I believe the check whether ```dest``` is writable should be removed.
##### Actual Results:
Currently the ```unarchive``` module reports a
```
msg: Destination '/' not writable
```
message.
| main | bug check in unarchive module whether dest is writable should be removed issue type bug in unarchive module ansible version ansible ansible configuration n a environment n a summary the unarchive module checks whether the dest directory is writable before unpacking the archive if not os access dest os w ok module fail json msg destination s not writable dest while this is certainly well intended it prevents archives from being unpacked that don t actually create files in dest but only in writable sub directories for instance an archive like this tar tvf myarchive tar tmp tmp will trigger this error if the unarchive module detects that the directory is not writable even if tmp was steps to reproduce n a expected results being able to use unarchive with archives similar to the one as described in the summary i believe the check whether dest is writable should be removed actual results currently the unarchive module reports a msg destination not writable message | 1 |
5,809 | 30,760,677,922 | IssuesEvent | 2023-07-29 17:00:07 | garret1317/yt-dlp-rajiko | https://api.github.com/repos/garret1317/yt-dlp-rajiko | closed | auth fails on newer yt-dlp! | bug maintainance live timefree | ```
[RadikoTimeFree] Authenticating: step 1
[debug] [RadikoTimeFree] please send a part of key
ERROR: Response.info() is deprecated, use Response.headers; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[RadikoTimeFree] JP22: Authenticating: step 2
ERROR: [RadikoTimeFree] 20230729000000: Unable to download webpage: HTTP Error 401: Unauthorized (caused by <HTTPError 401: Unauthorized>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 715, in extract
ie_result = self._real_extract(url)
File "/home/g/.config/yt-dlp/plugins/yt-dlp-rajiko/yt_dlp_plugins/extractor/radiko.py", line 809, in _real_extract
auth_data = self._auth(region)
File "/home/g/.config/yt-dlp/plugins/yt-dlp-rajiko/yt_dlp_plugins/extractor/radiko.py", line 492, in _auth
token = self._negotiate_token(station_region)
File "/home/g/.config/yt-dlp/plugins/yt-dlp-rajiko/yt_dlp_plugins/extractor/radiko.py", line 463, in _negotiate_token
auth2 = self._download_webpage("https://radiko.jp/v2/api/auth2", station_region,
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1118, in _download_webpage
return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1069, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 903, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 860, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_urllib.py", line 437, in _send
res = opener.open(urllib_req, timeout=float(request.extensions.get('timeout') or self.timeout))
File "/usr/lib/python3.10/urllib/request.py", line 525, in open
response = meth(req, response)
File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
response = self.parent.error(
File "/usr/lib/python3.10/urllib/request.py", line 563, in error
return self._call_chain(*args)
File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 401: Unauthorized
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4060, in urlopen
return self._request_director.send(req)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/common.py", line 90, in send
response = handler.send(request)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_helper.py", line 203, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/common.py", line 301, in send
return self._send(request)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_urllib.py", line 442, in _send
raise HTTPError(UrllibResponseAdapter(e.fp), redirect_loop='redirect error' in str(e)) from e
yt_dlp.networking.exceptions.HTTPError: HTTP Error 401: Unauthorized
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 847, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4079, in urlopen
raise _CompatHTTPError(e) from e
yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 401: Unauthorized
``` | True | auth fails on newer yt-dlp! - ```
[RadikoTimeFree] Authenticating: step 1
[debug] [RadikoTimeFree] please send a part of key
ERROR: Response.info() is deprecated, use Response.headers; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[RadikoTimeFree] JP22: Authenticating: step 2
ERROR: [RadikoTimeFree] 20230729000000: Unable to download webpage: HTTP Error 401: Unauthorized (caused by <HTTPError 401: Unauthorized>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 715, in extract
ie_result = self._real_extract(url)
File "/home/g/.config/yt-dlp/plugins/yt-dlp-rajiko/yt_dlp_plugins/extractor/radiko.py", line 809, in _real_extract
auth_data = self._auth(region)
File "/home/g/.config/yt-dlp/plugins/yt-dlp-rajiko/yt_dlp_plugins/extractor/radiko.py", line 492, in _auth
token = self._negotiate_token(station_region)
File "/home/g/.config/yt-dlp/plugins/yt-dlp-rajiko/yt_dlp_plugins/extractor/radiko.py", line 463, in _negotiate_token
auth2 = self._download_webpage("https://radiko.jp/v2/api/auth2", station_region,
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1118, in _download_webpage
return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1069, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 903, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 860, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_urllib.py", line 437, in _send
res = opener.open(urllib_req, timeout=float(request.extensions.get('timeout') or self.timeout))
File "/usr/lib/python3.10/urllib/request.py", line 525, in open
response = meth(req, response)
File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
response = self.parent.error(
File "/usr/lib/python3.10/urllib/request.py", line 563, in error
return self._call_chain(*args)
File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 401: Unauthorized
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4060, in urlopen
return self._request_director.send(req)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/common.py", line 90, in send
response = handler.send(request)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_helper.py", line 203, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/common.py", line 301, in send
return self._send(request)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_urllib.py", line 442, in _send
raise HTTPError(UrllibResponseAdapter(e.fp), redirect_loop='redirect error' in str(e)) from e
yt_dlp.networking.exceptions.HTTPError: HTTP Error 401: Unauthorized
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 847, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4079, in urlopen
raise _CompatHTTPError(e) from e
yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 401: Unauthorized
``` | main | auth fails on newer yt dlp authenticating step please send a part of key error response info is deprecated use response headers please report this issue on filling out the appropriate issue template confirm you are on the latest version using yt dlp u authenticating step error unable to download webpage http error unauthorized caused by please report this issue on filling out the appropriate issue template confirm you are on the latest version using yt dlp u file usr local bin yt dlp yt dlp extractor common py line in extract ie result self real extract url file home g config yt dlp plugins yt dlp rajiko yt dlp plugins extractor radiko py line in real extract auth data self auth region file home g config yt dlp plugins yt dlp rajiko yt dlp plugins extractor radiko py line in auth token self negotiate token station region file home g config yt dlp plugins yt dlp rajiko yt dlp plugins extractor radiko py line in negotiate token self download webpage station region file usr local bin yt dlp yt dlp extractor common py line in download webpage return self download webpage url or request video id note errnote none fatal args kwargs file usr local bin yt dlp yt dlp extractor common py line in download content res getattr self download handle name url or request video id kwargs file usr local bin yt dlp yt dlp extractor common py line in download webpage handle urlh self request webpage url or request video id note errnote fatal data data headers headers query query expected status expected status file usr local bin yt dlp yt dlp extractor common py line in request webpage raise extractorerror errmsg cause err file usr local bin yt dlp yt dlp networking urllib py line in send res opener open urllib req timeout float request extensions get timeout or self timeout file usr lib urllib request py line in open response meth req response file usr lib urllib request py line in http response response self parent error file usr lib urllib request py line in error return self call chain args file usr lib urllib request py line in call chain result func args file usr lib urllib request py line in http error default raise httperror req full url code msg hdrs fp urllib error httperror http error unauthorized the above exception was the direct cause of the following exception traceback most recent call last file usr local bin yt dlp yt dlp youtubedl py line in urlopen return self request director send req file usr local bin yt dlp yt dlp networking common py line in send response handler send request file usr local bin yt dlp yt dlp networking helper py line in wrapper return func self args kwargs file usr local bin yt dlp yt dlp networking common py line in send return self send request file usr local bin yt dlp yt dlp networking urllib py line in send raise httperror urllibresponseadapter e fp redirect loop redirect error in str e from e yt dlp networking exceptions httperror http error unauthorized the above exception was the direct cause of the following exception traceback most recent call last file usr local bin yt dlp yt dlp extractor common py line in request webpage return self downloader urlopen self create request url or request data headers query file usr local bin yt dlp yt dlp youtubedl py line in urlopen raise compathttperror e from e yt dlp networking exceptions compathttperror http error unauthorized | 1 |
28,946 | 7,046,707,133 | IssuesEvent | 2018-01-02 09:35:57 | manu-chroma/username-availability-checker | https://api.github.com/repos/manu-chroma/username-availability-checker | closed | Use websites.yml to determine supported websites list | gci googlecodein | Make changes in:
- ``status.html``
- ``cli.py`` (``cli.py`` is broken, make a separate commit to fix it)
This will ensure, that a generic website can be added by just updating the data file (in case it is not a special case like facebook etc.) | 1.0 | Use websites.yml to determine supported websites list - Make changes in:
- ``status.html``
- ``cli.py`` (``cli.py`` is broken, make a separate commit to fix it)
This will ensure, that a generic website can be added by just updating the data file (in case it is not a special case like facebook etc.) | non_main | use websites yml to determine supported websites list make changes in status html cli py cli py is broken make a separate commit to fix it this will ensure that a generic website can be added by just updating the data file in case it is not a special case like facebook etc | 0 |
234,933 | 7,732,678,253 | IssuesEvent | 2018-05-26 00:53:27 | ODIQueensland/data-curator | https://api.github.com/repos/ODIQueensland/data-curator | closed | [EPIC] Integration with CKAN | est:Major f:Feature-request fn:Platform-Integration priority:High | A question I was asked (and a feature that exists in [Comma Chameleon](http://comma-chameleon.io))...
Why doesn't the [planned features](https://github.com/ODIQueensland/data-curator#planned-features) include publishing?
Publishing a data package to an open data portal is the next step in the data packaging workflow. Surely you could add a button to send the datapackage.zip to a your CKAN open data portal, [Datahub.io](https://datahub.io) or other portal.
When speaking with potential users of Data Curator, they expressed concerns about accidentally publishing open data before approval has been granted by the Data Custodian. They said they would present sample data, the schema and provenance information as part of the approval process.
So it's on the backlog for now.
Notes:
- Can the CKAN API accept Data Packages, like it [accepts them through the user interface](http://okfnlabs.org/blog/2016/07/25/publish-data-packages-to-datahub-ckan.html). Perhaps [some other tool](https://github.com/datahq/datahub-cli) would be helpful.
- The [Octopub API does not currently accept Data Packages](https://github.com/theodi/octopub/issues/477). | 1.0 | [EPIC] Integration with CKAN - A question I was asked (and a feature that exists in [Comma Chameleon](http://comma-chameleon.io))...
Why doesn't the [planned features](https://github.com/ODIQueensland/data-curator#planned-features) include publishing?
Publishing a data package to an open data portal is the next step in the data packaging workflow. Surely you could add a button to send the datapackage.zip to a your CKAN open data portal, [Datahub.io](https://datahub.io) or other portal.
When speaking with potential users of Data Curator, they expressed concerns about accidentally publishing open data before approval has been granted by the Data Custodian. They said they would present sample data, the schema and provenance information as part of the approval process.
So it's on the backlog for now.
Notes:
- Can the CKAN API accept Data Packages, like it [accepts them through the user interface](http://okfnlabs.org/blog/2016/07/25/publish-data-packages-to-datahub-ckan.html). Perhaps [some other tool](https://github.com/datahq/datahub-cli) would be helpful.
- The [Octopub API does not currently accept Data Packages](https://github.com/theodi/octopub/issues/477). | non_main | integration with ckan a question i was asked and a feature that exists in why doesn t the include publishing publishing a data package to an open data portal is the next step in the data packaging workflow surely you could add a button to send the datapackage zip to a your ckan open data portal or other portal when speaking with potential users of data curator they expressed concerns about accidentally publishing open data before approval has been granted by the data custodian they said they would present sample data the schema and provenance information as part of the approval process so it s on the backlog for now notes can the ckan api accept data packages like it perhaps would be helpful the | 0 |
17,164 | 23,680,238,190 | IssuesEvent | 2022-08-28 17:38:57 | akuker/RASCSI | https://api.github.com/repos/akuker/RASCSI | closed | RASCSI Zip drive emulation issue: Roland SP808EX | compatibility | # Info
- Which version of Pi are you using: Pi2
- Which github revision of software: 22.2.1
- Which board version: 2.3
- Which computer is the RaSCSI connected to: Roland SP808EX
# Describe the issue
Hi, The only SCSI drives supported by the SP808EX are Zip drives. I have attepted operation with both the Zip 250 and 100mb images without succsess . The sp808ex detects the scsi port, however when attempting to format I get a "drive not ready" indicating the sp808ex belives the disk is not inserted. reseach has provided the sp808ex requires the ATAPI POI MODE 4 protocol for removable drives to read and write zip disks. Is this emulated in the current build of RASCSI? I have succsessfully tested RASCSI with the Roland DJ-70 MKII using the same zip images. the DJ-70 formatted both images correctly. | True | RASCSI Zip drive emulation issue: Roland SP808EX - # Info
- Which version of Pi are you using: Pi2
- Which github revision of software: 22.2.1
- Which board version: 2.3
- Which computer is the RaSCSI connected to: Roland SP808EX
# Describe the issue
Hi, The only SCSI drives supported by the SP808EX are Zip drives. I have attepted operation with both the Zip 250 and 100mb images without succsess . The sp808ex detects the scsi port, however when attempting to format I get a "drive not ready" indicating the sp808ex belives the disk is not inserted. reseach has provided the sp808ex requires the ATAPI POI MODE 4 protocol for removable drives to read and write zip disks. Is this emulated in the current build of RASCSI? I have succsessfully tested RASCSI with the Roland DJ-70 MKII using the same zip images. the DJ-70 formatted both images correctly. | non_main | rascsi zip drive emulation issue roland info which version of pi are you using which github revision of software which board version which computer is the rascsi connected to roland describe the issue hi the only scsi drives supported by the are zip drives i have attepted operation with both the zip and images without succsess the detects the scsi port however when attempting to format i get a drive not ready indicating the belives the disk is not inserted reseach has provided the requires the atapi poi mode protocol for removable drives to read and write zip disks is this emulated in the current build of rascsi i have succsessfully tested rascsi with the roland dj mkii using the same zip images the dj formatted both images correctly | 0 |
5,347 | 26,962,818,571 | IssuesEvent | 2023-02-08 19:36:59 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | [Sam Local start api] Error 500 on _X_AMZN_TRACE_ID missing since upgrading to 1.0 | type/bug stage/needs-investigation maintainer/need-response | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description
Since upgrading to Sam 1.0.0 my local start api command is failing each NodeJS lambda request that's using X-ray, this was not a problem before
### Steps to reproduce
running sam local start-api -p 3000 -t template.yaml -s packages/website/build
```javascript
const AWSXRay = require('aws-xray-sdk-core');
const aws = AWSXRay.captureAWS(require('aws-sdk'));
const s3 = new aws.S3();
```
### Observed result
Expected _X_AMZN_TRACE_ID to be set.\n at Object.contextMissingRuntimeError [as contextMissing] (/var/task/node_modules/aws-xray-sdk-core/lib/context_utils.js:21:15)\n at Segment.resolveLambdaTraceData (/var/task/node_modules/aws-xray-sdk-core/lib/env/aws_lambda.js:93:43)\n at Object.getSegment (/var/task/node_modules/aws-xray-sdk-core/lib/context_utils.js:94:17)\n at Object.resolveSegment (/var/task/node_modules/aws-xray-sdk-core/lib/context_utils.js:73:19)\n at features.constructor.captureAWSRequest [as customRequestHandler] (/var/task/node_modules/aws-xray-sdk-core/lib/patchers/aws_p.js:66:29)\n at features.constructor.addAllRequestListeners (/var/task/node_modules/aws-sdk/lib/service.js:283:12)\n at features.constructor.makeRequest (/var/task/node_modules/aws-sdk/lib/service.js:203:10)\n at features.constructor.svc.<computed> [as getObject] (/var/task/node_modules/aws-sdk/lib/service.js:677:23)
### Expected result
Works as in the past
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: OS X Catalina 10.15.6 (19G73)
2. `sam --version`: 1.0.0
Could be mitigated by adding _X_AMZN_TRACE_ID: 1234 to my template.yaml Api Env Var section
| True | [Sam Local start api] Error 500 on _X_AMZN_TRACE_ID missing since upgrading to 1.0 - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description
Since upgrading to Sam 1.0.0 my local start api command is failing each NodeJS lambda request that's using X-ray, this was not a problem before
### Steps to reproduce
running sam local start-api -p 3000 -t template.yaml -s packages/website/build
```javascript
const AWSXRay = require('aws-xray-sdk-core');
const aws = AWSXRay.captureAWS(require('aws-sdk'));
const s3 = new aws.S3();
```
### Observed result
Expected _X_AMZN_TRACE_ID to be set.\n at Object.contextMissingRuntimeError [as contextMissing] (/var/task/node_modules/aws-xray-sdk-core/lib/context_utils.js:21:15)\n at Segment.resolveLambdaTraceData (/var/task/node_modules/aws-xray-sdk-core/lib/env/aws_lambda.js:93:43)\n at Object.getSegment (/var/task/node_modules/aws-xray-sdk-core/lib/context_utils.js:94:17)\n at Object.resolveSegment (/var/task/node_modules/aws-xray-sdk-core/lib/context_utils.js:73:19)\n at features.constructor.captureAWSRequest [as customRequestHandler] (/var/task/node_modules/aws-xray-sdk-core/lib/patchers/aws_p.js:66:29)\n at features.constructor.addAllRequestListeners (/var/task/node_modules/aws-sdk/lib/service.js:283:12)\n at features.constructor.makeRequest (/var/task/node_modules/aws-sdk/lib/service.js:203:10)\n at features.constructor.svc.<computed> [as getObject] (/var/task/node_modules/aws-sdk/lib/service.js:677:23)
### Expected result
Works as in the past
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: OS X Catalina 10.15.6 (19G73)
2. `sam --version`: 1.0.0
Could be mitigated by adding _X_AMZN_TRACE_ID: 1234 to my template.yaml Api Env Var section
| main | error on x amzn trace id missing since upgrading to make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description since upgrading to sam my local start api command is failing each nodejs lambda request that s using x ray this was not a problem before steps to reproduce running sam local start api p t template yaml s packages website build javascript const awsxray require aws xray sdk core const aws awsxray captureaws require aws sdk const new aws observed result expected x amzn trace id to be set n at object contextmissingruntimeerror var task node modules aws xray sdk core lib context utils js n at segment resolvelambdatracedata var task node modules aws xray sdk core lib env aws lambda js n at object getsegment var task node modules aws xray sdk core lib context utils js n at object resolvesegment var task node modules aws xray sdk core lib context utils js n at features constructor captureawsrequest var task node modules aws xray sdk core lib patchers aws p js n at features constructor addallrequestlisteners var task node modules aws sdk lib service js n at features constructor makerequest var task node modules aws sdk lib service js n at features constructor svc var task node modules aws sdk lib service js expected result works as in the past additional environment details ex windows mac amazon linux etc os os x catalina sam version could be mitigated by adding x amzn trace id to my template yaml api env var section | 1 |
771 | 4,381,194,617 | IssuesEvent | 2016-08-06 03:15:56 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Bug Report / (Feature Idea?): ec2_vpc_route_table does not support nat gateway | aws bug_report cloud feature_idea waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### Ansible Version:
```
ansible 2.1.0 (devel 5144ee226e) last updated 2016/01/19 13:32:46 (GMT -500)
lib/ansible/modules/core: (detached HEAD ffea58ee86) last updated 2016/01/19 13:32:50 (GMT -500)
lib/ansible/modules/extras: (detached HEAD e9450df878) last updated 2016/01/19 13:32:54 (GMT -500)
config file = /Users/clucas/work/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
default config
##### Environment:
```
uname -a
Darwin leeroy 15.2.0 Darwin Kernel Version 15.2.0: Fri Nov 13 19:56:56 PST 2015; root:xnu-3248.20.55~2/RELEASE_X86_64 x86_64
```
##### Summary:
ec2_vpc_route_table fails with "The route identified by 0.0.0.0/0 already exists." when trying to use a nat gateway as a default route. As the AWS nat gateway is a relatively new feature, I expect that the problem is actually a combination of ec2_vpc_route_table module and it's use of an older boto that does not support nat gateway.
##### Steps To Reproduce:
```
- name: set up private subnet route tables
ec2_vpc_route_table:
region: "{{ region }}"
vpc_id: "{{ vpc.vpc.id }}"
tags:
Name: "rt_{{ region }}_{{ environ }}_nat"
Creator: "{{ creator }}"
Environment: "{{ environ }}"
subnets:
- "subnet-16640561"
routes:
- dest: 0.0.0.0/0
gateway_id: "nat-0b97eaa4f820cfe03"
```
##### Expected Results:
I expect the nat gateway to be used in the route table.
##### Actual Results:
ec2_vpc_route_table fails with "The route identified by 0.0.0.0/0 already exists."
| True | Bug Report / (Feature Idea?): ec2_vpc_route_table does not support nat gateway - ##### Issue Type:
- Bug Report
##### Ansible Version:
```
ansible 2.1.0 (devel 5144ee226e) last updated 2016/01/19 13:32:46 (GMT -500)
lib/ansible/modules/core: (detached HEAD ffea58ee86) last updated 2016/01/19 13:32:50 (GMT -500)
lib/ansible/modules/extras: (detached HEAD e9450df878) last updated 2016/01/19 13:32:54 (GMT -500)
config file = /Users/clucas/work/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
default config
##### Environment:
```
uname -a
Darwin leeroy 15.2.0 Darwin Kernel Version 15.2.0: Fri Nov 13 19:56:56 PST 2015; root:xnu-3248.20.55~2/RELEASE_X86_64 x86_64
```
##### Summary:
ec2_vpc_route_table fails with "The route identified by 0.0.0.0/0 already exists." when trying to use a nat gateway as a default route. As the AWS nat gateway is a relatively new feature, I expect that the problem is actually a combination of ec2_vpc_route_table module and it's use of an older boto that does not support nat gateway.
##### Steps To Reproduce:
```
- name: set up private subnet route tables
ec2_vpc_route_table:
region: "{{ region }}"
vpc_id: "{{ vpc.vpc.id }}"
tags:
Name: "rt_{{ region }}_{{ environ }}_nat"
Creator: "{{ creator }}"
Environment: "{{ environ }}"
subnets:
- "subnet-16640561"
routes:
- dest: 0.0.0.0/0
gateway_id: "nat-0b97eaa4f820cfe03"
```
##### Expected Results:
I expect the nat gateway to be used in the route table.
##### Actual Results:
ec2_vpc_route_table fails with "The route identified by 0.0.0.0/0 already exists."
| main | bug report feature idea vpc route table does not support nat gateway issue type bug report ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file users clucas work ansible ansible cfg configured module search path default w o overrides ansible configuration default config environment uname a darwin leeroy darwin kernel version fri nov pst root xnu release summary vpc route table fails with the route identified by already exists when trying to use a nat gateway as a default route as the aws nat gateway is a relatively new feature i expect that the problem is actually a combination of vpc route table module and it s use of an older boto that does not support nat gateway steps to reproduce name set up private subnet route tables vpc route table region region vpc id vpc vpc id tags name rt region environ nat creator creator environment environ subnets subnet routes dest gateway id nat expected results i expect the nat gateway to be used in the route table actual results vpc route table fails with the route identified by already exists | 1 |
476,941 | 13,752,675,100 | IssuesEvent | 2020-10-06 14:47:14 | input-output-hk/ouroboros-network | https://api.github.com/repos/input-output-hk/ouroboros-network | opened | Use real era tags for Allegra and Mary | consensus priority high transition2 | In #2666 we add(ed) the Allegra and Mary eras to `CardanoBlock`. These are identical to the Shelley era (as if Shelley hard-forks to Shelley to Shelley):
```
type AllegraEra c = Era.Shelley c
type MaryEra c = Era.Shelley c
```
This means we don't have to do any translations or conversions yet between Shelley-based eras.
When `cardano-ledger-specs` exports the right `AllegraEra` and `MaryEra` tags (see https://jira.iohk.io/browse/CAD-1999), we should start using them as the real era tags.
This means we'll have to do some translations (`LedgerView`, `LedgerState`, `ShelleyGenesis`, ...) for which we can use https://github.com/input-output-hk/cardano-ledger-specs/pull/1893. Note that all instances of the `TranslateEra` class that can be defined in `cardano-ledger-specs`, should be defined there. We will also have some consensus-specific ones.
We will also have to do cross-era chain selection using preferably [`SelectSameProtocol`](https://github.com/input-output-hk/ouroboros-network/blob/53a6ccbe3998764fc1a2cf42ef6fcafc60f817ec/ouroboros-consensus/src/Ouroboros/Consensus/HardFork/Combinator/Protocol/ChainSel.hs#L49). Currently the protocol is parameterised over the era too, so `BlockProtocol (ShelleyBlock (ShelleyEra c) ~ BlockProtocol (ShelleyBlock (AllegraEra c))` is not true because `TPraos (ShelleyEra c) ~ TPraos (AllegraEra c)` is not true. We can use [`CustomChainSel`](https://github.com/input-output-hk/ouroboros-network/blob/53a6ccbe3998764fc1a2cf42ef6fcafc60f817ec/ouroboros-consensus/src/Ouroboros/Consensus/HardFork/Combinator/Protocol/ChainSel.hs#L57) with some translations to work around it until this is fixed in the ledger itself, by not parameterising the protocol-specific types over the era, but just over the crypto (see https://jira.iohk.io/browse/CAD-2000).
At this point we should also start thinking about what we should do with the golden tests. | 1.0 | Use real era tags for Allegra and Mary - In #2666 we add(ed) the Allegra and Mary eras to `CardanoBlock`. These are identical to the Shelley era (as if Shelley hard-forks to Shelley to Shelley):
```
type AllegraEra c = Era.Shelley c
type MaryEra c = Era.Shelley c
```
This means we don't have to do any translations or conversions yet between Shelley-based eras.
When `cardano-ledger-specs` exports the right `AllegraEra` and `MaryEra` tags (see https://jira.iohk.io/browse/CAD-1999), we should start using them as the real era tags.
This means we'll have to do some translations (`LedgerView`, `LedgerState`, `ShelleyGenesis`, ...) for which we can use https://github.com/input-output-hk/cardano-ledger-specs/pull/1893. Note that all instances of the `TranslateEra` class that can be defined in `cardano-ledger-specs`, should be defined there. We will also have some consensus-specific ones.
We will also have to do cross-era chain selection using preferably [`SelectSameProtocol`](https://github.com/input-output-hk/ouroboros-network/blob/53a6ccbe3998764fc1a2cf42ef6fcafc60f817ec/ouroboros-consensus/src/Ouroboros/Consensus/HardFork/Combinator/Protocol/ChainSel.hs#L49). Currently the protocol is parameterised over the era too, so `BlockProtocol (ShelleyBlock (ShelleyEra c) ~ BlockProtocol (ShelleyBlock (AllegraEra c))` is not true because `TPraos (ShelleyEra c) ~ TPraos (AllegraEra c)` is not true. We can use [`CustomChainSel`](https://github.com/input-output-hk/ouroboros-network/blob/53a6ccbe3998764fc1a2cf42ef6fcafc60f817ec/ouroboros-consensus/src/Ouroboros/Consensus/HardFork/Combinator/Protocol/ChainSel.hs#L57) with some translations to work around it until this is fixed in the ledger itself, by not parameterising the protocol-specific types over the era, but just over the crypto (see https://jira.iohk.io/browse/CAD-2000).
At this point we should also start thinking about what we should do with the golden tests. | non_main | use real era tags for allegra and mary in we add ed the allegra and mary eras to cardanoblock these are identical to the shelley era as if shelley hard forks to shelley to shelley type allegraera c era shelley c type maryera c era shelley c this means we don t have to do any translations or conversions yet between shelley based eras when cardano ledger specs exports the right allegraera and maryera tags see we should start using them as the real era tags this means we ll have to do some translations ledgerview ledgerstate shelleygenesis for which we can use note that all instances of the translateera class that can be defined in cardano ledger specs should be defined there we will also have some consensus specific ones we will also have to do cross era chain selection using preferably currently the protocol is parameterised over the era too so blockprotocol shelleyblock shelleyera c blockprotocol shelleyblock allegraera c is not true because tpraos shelleyera c tpraos allegraera c is not true we can use with some translations to work around it until this is fixed in the ledger itself by not parameterising the protocol specific types over the era but just over the crypto see at this point we should also start thinking about what we should do with the golden tests | 0 |
395,085 | 11,671,436,327 | IssuesEvent | 2020-03-04 03:10:59 | Uvic-Robotics-Club/Scheduler | https://api.github.com/repos/Uvic-Robotics-Club/Scheduler | reopened | Implement way to identify and communicate with microcontrollers | high priority | ***What***: The rover's Raspberry Pi ("brain") must be able to identify what USB port a microcontroller is plugged into, and what the microcontroller's purpose is (eg. arm vs left drivetrain vs right drivetrain).
***Why***: To prevent having to hard-code COM ports (which can often change), and to allow the brain to send messages only to the intended microcontroller.
***Where***: Create a folder called "Arduino" in the Scheduler's top directory. Work in the [arduino_connection](https://github.com/Uvic-Robotics-Club/Scheduler/tree/arduino_connection) branch.
***Subtasks***:
- [x] Learn how to interface with Arduino
- [x] Learn how USB ports work on the Raspberry Pi
- [x] Write program that identifies the microcontroller
- [x] Test that program works | 1.0 | Implement way to identify and communicate with microcontrollers - ***What***: The rover's Raspberry Pi ("brain") must be able to identify what USB port a microcontroller is plugged into, and what the microcontroller's purpose is (eg. arm vs left drivetrain vs right drivetrain).
***Why***: To prevent having to hard-code COM ports (which can often change), and to allow the brain to send messages only to the intended microcontroller.
***Where***: Create a folder called "Arduino" in the Scheduler's top directory. Work in the [arduino_connection](https://github.com/Uvic-Robotics-Club/Scheduler/tree/arduino_connection) branch.
***Subtasks***:
- [x] Learn how to interface with Arduino
- [x] Learn how USB ports work on the Raspberry Pi
- [x] Write program that identifies the microcontroller
- [x] Test that program works | non_main | implement way to identify and communicate with microcontrollers what the rover s raspberry pi brain must be able to identify what usb port a microcontroller is plugged into and what the microcontroller s purpose is eg arm vs left drivetrain vs right drivetrain why to prevent having to hard code com ports which can often change and to allow the brain to send messages only to the intended microcontroller where create a folder called arduino in the scheduler s top directory work in the branch subtasks learn how to interface with arduino learn how usb ports work on the raspberry pi write program that identifies the microcontroller test that program works | 0 |
313,985 | 9,583,041,334 | IssuesEvent | 2019-05-08 03:29:00 | mit-cml/appinventor-sources | https://api.github.com/repos/mit-cml/appinventor-sources | closed | German translation for AI | priority: medium | Create properties file for german translation using existing completed project on Pootle (http://www.google.com/url?q=http%3A%2F%2Fpootle.appinventor.mit.edu%2F&sa=D&sntz=1&usg=AFQjCNGq4eLVh7Wh8j3wISC7fnvAq3cbxQ). That's an outdated project, so we'll have to see how well it matches existing strings. | 1.0 | German translation for AI - Create properties file for german translation using existing completed project on Pootle (http://www.google.com/url?q=http%3A%2F%2Fpootle.appinventor.mit.edu%2F&sa=D&sntz=1&usg=AFQjCNGq4eLVh7Wh8j3wISC7fnvAq3cbxQ). That's an outdated project, so we'll have to see how well it matches existing strings. | non_main | german translation for ai create properties file for german translation using existing completed project on pootle that s an outdated project so we ll have to see how well it matches existing strings | 0 |
4,572 | 23,751,168,783 | IssuesEvent | 2022-08-31 20:46:35 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Sam local invoke (go) not honoring timeout in context | area/docker type/bug area/local/invoke maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
When running a golang lambda function with `sam local invoke`, and making a request (in the function) using the context results in `RequestCanceled: request context canceled caused by: context deadline exceeded`.
When using ListBuckets (`without WithContext`) results in a successful result.
### Steps to reproduce:
<!-- Provide steps to replicate.-->
hello.go
```go
type Response events.APIGatewayProxyResponse
func Handler(ctx context.Context) (Response, error) {
fmt.Printf("%+v\n", ctx)
var buf bytes.Buffer
sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
svc := s3.New(sess)
// Using the WithContext version of ListBuckets makes it apparent that the context isn't set to honor the function timeout
b, err := svc.ListBucketsWithContext(ctx, &s3.ListBucketsInput{})
if err != nil {
fmt.Println(err)
}
fmt.Println(b)
body, err := json.Marshal(map[string]interface{}{
"message": "Go Serverless v1.0! Your function executed successfully!",
})
if err != nil {
return Response{StatusCode: 404}, err
}
json.HTMLEscape(&buf, body)
resp := Response{
StatusCode: 200,
IsBase64Encoded: false,
Body: buf.String(),
Headers: map[string]string{
"Content-Type": "application/json",
"X-MyCompany-Func-Reply": "hello-handler",
},
}
return resp, nil
}
func main() {
lambda.Start(Handler)
}
```
template.yml
```yml
AWSTemplateFormatVersion: "2010-09-09"
Transform: "AWS::Serverless-2016-10-31"
Description: "Demo SAM Template"
Resources:
Demo:
Type: "AWS::Serverless::Function"
Properties:
Handler: bin/hello
Runtime: go1.x
MemorySize: 128
Timeout: 30
Environment:
Variables:
AWS_ACCESS_KEY_ID: myaccesskey
AWS_SECRET_ACCESS_KEY: mysecretkey
AWS_SESSION_TOKEN: mytoken
Policies:
- Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- "s3:*"
Resource: "*"
```
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
`RequestCanceled: request context canceled caused by: context deadline exceeded`
### Expected result:
<!-- Describe what you expected.-->
```
{
Buckets: [
{
CreationDate: 2020-04-21 00:00:00 +0000 UTC,
Name: "bucket-names"
}
],
Owner: {
DisplayName: "...",
ID: "..."
}
}
```
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: macOS 10.15.7
2. `sam --version`: SAM CLI, version 1.15.
`Add --debug flag to command you are running`
| True | Sam local invoke (go) not honoring timeout in context - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
When running a golang lambda function with `sam local invoke`, and making a request (in the function) using the context results in `RequestCanceled: request context canceled caused by: context deadline exceeded`.
When using ListBuckets (`without WithContext`) results in a successful result.
### Steps to reproduce:
<!-- Provide steps to replicate.-->
hello.go
```go
type Response events.APIGatewayProxyResponse
func Handler(ctx context.Context) (Response, error) {
fmt.Printf("%+v\n", ctx)
var buf bytes.Buffer
sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
svc := s3.New(sess)
// Using the WithContext version of ListBuckets makes it apparent that the context isn't set to honor the function timeout
b, err := svc.ListBucketsWithContext(ctx, &s3.ListBucketsInput{})
if err != nil {
fmt.Println(err)
}
fmt.Println(b)
body, err := json.Marshal(map[string]interface{}{
"message": "Go Serverless v1.0! Your function executed successfully!",
})
if err != nil {
return Response{StatusCode: 404}, err
}
json.HTMLEscape(&buf, body)
resp := Response{
StatusCode: 200,
IsBase64Encoded: false,
Body: buf.String(),
Headers: map[string]string{
"Content-Type": "application/json",
"X-MyCompany-Func-Reply": "hello-handler",
},
}
return resp, nil
}
func main() {
lambda.Start(Handler)
}
```
template.yml
```yml
AWSTemplateFormatVersion: "2010-09-09"
Transform: "AWS::Serverless-2016-10-31"
Description: "Demo SAM Template"
Resources:
Demo:
Type: "AWS::Serverless::Function"
Properties:
Handler: bin/hello
Runtime: go1.x
MemorySize: 128
Timeout: 30
Environment:
Variables:
AWS_ACCESS_KEY_ID: myaccesskey
AWS_SECRET_ACCESS_KEY: mysecretkey
AWS_SESSION_TOKEN: mytoken
Policies:
- Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- "s3:*"
Resource: "*"
```
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
`RequestCanceled: request context canceled caused by: context deadline exceeded`
### Expected result:
<!-- Describe what you expected.-->
```
{
Buckets: [
{
CreationDate: 2020-04-21 00:00:00 +0000 UTC,
Name: "bucket-names"
}
],
Owner: {
DisplayName: "...",
ID: "..."
}
}
```
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: macOS 10.15.7
2. `sam --version`: SAM CLI, version 1.15.
`Add --debug flag to command you are running`
| main | sam local invoke go not honoring timeout in context make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description when running a golang lambda function with sam local invoke and making a request in the function using the context results in requestcanceled request context canceled caused by context deadline exceeded when using listbuckets without withcontext results in a successful result steps to reproduce hello go go type response events apigatewayproxyresponse func handler ctx context context response error fmt printf v n ctx var buf bytes buffer sess session must session newsession aws config region aws string us west svc new sess using the withcontext version of listbuckets makes it apparent that the context isn t set to honor the function timeout b err svc listbucketswithcontext ctx listbucketsinput if err nil fmt println err fmt println b body err json marshal map interface message go serverless your function executed successfully if err nil return response statuscode err json htmlescape buf body resp response statuscode false body buf string headers map string content type application json x mycompany func reply hello handler return resp nil func main lambda start handler template yml yml awstemplateformatversion transform aws serverless description demo sam template resources demo type aws serverless function properties handler bin hello runtime x memorysize timeout environment variables aws access key id myaccesskey aws secret access key mysecretkey aws session token mytoken policies version statement effect allow action resource observed result requestcanceled request context canceled caused by context deadline exceeded expected result buckets creationdate utc name bucket names owner displayname id additional environment details ex windows mac amazon linux etc os macos sam version sam cli version add debug flag to command you are running | 1 |
176 | 2,770,773,878 | IssuesEvent | 2015-05-01 16:57:51 | acl2/acl2 | https://api.github.com/repos/acl2/acl2 | opened | Cleanup the support directory of rtl/rel11 | Difficulty: Hard Maintainability | Russinoff and Nadezhin have reached consensus. I will document it here. I anticipate coordination between Russinoff and Nadezhin will occur via email, but this will be a way to keep the rest of the community in the loop.
Here are some of the terms to the cleanup:
-- `lib` books must be unchanged and certifiable
-- If someone wants to add theorems/books to the `lib` directory, put them in a parallel lib directory, perhaps called `lib-plus`
-- Russinoff's documentation that's on his website must be preserved
I plan on incorporating changes related to this issue after the 7.1 release of ACL2. | True | Cleanup the support directory of rtl/rel11 - Russinoff and Nadezhin have reached consensus. I will document it here. I anticipate coordination between Russinoff and Nadezhin will occur via email, but this will be a way to keep the rest of the community in the loop.
Here are some of the terms to the cleanup:
-- `lib` books must be unchanged and certifiable
-- If someone wants to add theorems/books to the `lib` directory, put them in a parallel lib directory, perhaps called `lib-plus`
-- Russinoff's documentation that's on his website must be preserved
I plan on incorporating changes related to this issue after the 7.1 release of ACL2. | main | cleanup the support directory of rtl russinoff and nadezhin have reached consensus i will document it here i anticipate coordination between russinoff and nadezhin will occur via email but this will be a way to keep the rest of the community in the loop here are some of the terms to the cleanup lib books must be unchanged and certifiable if someone wants to add theorems books to the lib directory put them in a parallel lib directory perhaps called lib plus russinoff s documentation that s on his website must be preserved i plan on incorporating changes related to this issue after the release of | 1 |
112,852 | 14,293,128,745 | IssuesEvent | 2020-11-24 02:51:39 | proustibat/lbc-messages | https://api.github.com/repos/proustibat/lbc-messages | closed | [Storybook] - Set up and add a CI/CD job | Design | - Setup local development
- Add needed addons
- Run it on CI/CD
- build
- deploy
| 1.0 | [Storybook] - Set up and add a CI/CD job - - Setup local development
- Add needed addons
- Run it on CI/CD
- build
- deploy
| non_main | set up and add a ci cd job setup local development add needed addons run it on ci cd build deploy | 0 |
871 | 4,537,659,185 | IssuesEvent | 2016-09-09 01:39:50 | Microsoft/DirectXTex | https://api.github.com/repos/Microsoft/DirectXTex | opened | Code cleanup | maintainence | DirectXTex was one of my first C++11 libraries, and I started it back in the Windows XP / VS 2010 days. As such, it's got a few lingering issues compared to my current coding style and usage:
* Replace ``LPVOID``, ``LPCVOID``, ``LPCWSTR`` with standard types
* Should use ``=delete``
* Use anonymous namespaces instead of static
* Use VS standard 'smart-indent' formatting (otherwise I spend a lot of time fighting it to put back in the spaces)
* Leading ``_`` in identifiers is reserved by the language. should minimize/remove them in case it causes problems with future compiler versions | True | Code cleanup - DirectXTex was one of my first C++11 libraries, and I started it back in the Windows XP / VS 2010 days. As such, it's got a few lingering issues compared to my current coding style and usage:
* Replace ``LPVOID``, ``LPCVOID``, ``LPCWSTR`` with standard types
* Should use ``=delete``
* Use anonymous namespaces instead of static
* Use VS standard 'smart-indent' formatting (otherwise I spend a lot of time fighting it to put back in the spaces)
* Leading ``_`` in identifiers is reserved by the language. should minimize/remove them in case it causes problems with future compiler versions | main | code cleanup directxtex was one of my first c libraries and i started it back in the windows xp vs days as such it s got a few lingering issues compared to my current coding style and usage replace lpvoid lpcvoid lpcwstr with standard types should use delete use anonymous namespaces instead of static use vs standard smart indent formatting otherwise i spend a lot of time fighting it to put back in the spaces leading in identifiers is reserved by the language should minimize remove them in case it causes problems with future compiler versions | 1 |
5,351 | 26,963,605,860 | IssuesEvent | 2023-02-08 20:15:52 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | [WINDOWS]Unable to upload artifactsshpk-conv) | area/package maintainer/need-followup | ### Description
`sam package` command not working
### Steps to reproduce
sam init
npm install request
`sam package --s3-bucket mybucket --output-template-file packaged.yaml`
### Observed result
Unable to upload artifact mystuff referenced by CodeUri parameter of mystuff resource.
[WinError 123] 'C:\\projects\\myproj\\.aws-sam\\build\\mystuff\\node_modules\\.bin\\sshpk-conv'
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: windows 10
2. Dependency: "request": "^2.88.0"

| True | [WINDOWS]Unable to upload artifactsshpk-conv) - ### Description
`sam package` command not working
### Steps to reproduce
sam init
npm install request
`sam package --s3-bucket mybucket --output-template-file packaged.yaml`
### Observed result
Unable to upload artifact mystuff referenced by CodeUri parameter of mystuff resource.
[WinError 123] 'C:\\projects\\myproj\\.aws-sam\\build\\mystuff\\node_modules\\.bin\\sshpk-conv'
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: windows 10
2. Dependency: "request": "^2.88.0"

| main | unable to upload artifactsshpk conv description sam package command not working steps to reproduce sam init npm install request sam package bucket mybucket output template file packaged yaml observed result unable to upload artifact mystuff referenced by codeuri parameter of mystuff resource c projects myproj aws sam build mystuff node modules bin sshpk conv additional environment details ex windows mac amazon linux etc os windows dependency request | 1 |
4,923 | 25,306,867,987 | IssuesEvent | 2022-11-17 14:45:50 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | valid_target_types AttributeError from columns endpoint | type: bug work: backend status: ready restricted: maintainers | I got this API error from the columns endpoint totally randomly. I had not performed any DDL or DML mutations in quite a while and had been previously getting successful responses from the same endpoint. I was not making lots of requests in parallel. I was just doing local testing of the record selector, which only ever performs GET requests. After this one error response, I continued to get successful responses.
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/5/columns/?limit=500
Django Version: 3.1.14
Python Version: 3.9.9
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/code/mathesar/models/base.py", line 685, in __getattribute__
return super().__getattribute__(name)
During handling of the above exception ('Column' object has no attribute 'valid_target_types'), another exception occurred:
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 826, in __getattr__
return getattr(self.comparator, key)
The above exception ('Comparator' object has no attribute 'valid_target_types') was the direct cause of the following exception:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 43, in list
return self.get_paginated_response(serializer.data)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 745, in data
ret = super().data
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 246, in data
self._data = self.to_representation(self.instance)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 663, in to_representation
return [
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 664, in <listcomp>
self.child.to_representation(item) for item in iterable
File "/code/mathesar/api/serializers/columns.py", line 84, in to_representation
representation = super().to_representation(instance)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 515, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "/usr/local/lib/python3.9/site-packages/rest_framework/fields.py", line 1882, in to_representation
return method(value)
File "/code/mathesar/api/serializers/columns.py", line 204, in get_valid_target_types
valid_target_types = column.valid_target_types
File "/code/mathesar/models/base.py", line 691, in __getattribute__
return getattr(self._sa_column, name)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 828, in __getattr__
util.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
Exception Type: AttributeError at /api/db/v0/tables/5/columns/
Exception Value: Neither 'Column' object nor 'Comparator' object has an attribute 'valid_target_types'
```
</details>
| True | valid_target_types AttributeError from columns endpoint - I got this API error from the columns endpoint totally randomly. I had not performed any DDL or DML mutations in quite a while and had been previously getting successful responses from the same endpoint. I was not making lots of requests in parallel. I was just doing local testing of the record selector, which only ever performs GET requests. After this one error response, I continued to get successful responses.
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/5/columns/?limit=500
Django Version: 3.1.14
Python Version: 3.9.9
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/code/mathesar/models/base.py", line 685, in __getattribute__
return super().__getattribute__(name)
During handling of the above exception ('Column' object has no attribute 'valid_target_types'), another exception occurred:
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 826, in __getattr__
return getattr(self.comparator, key)
The above exception ('Comparator' object has no attribute 'valid_target_types') was the direct cause of the following exception:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 43, in list
return self.get_paginated_response(serializer.data)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 745, in data
ret = super().data
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 246, in data
self._data = self.to_representation(self.instance)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 663, in to_representation
return [
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 664, in <listcomp>
self.child.to_representation(item) for item in iterable
File "/code/mathesar/api/serializers/columns.py", line 84, in to_representation
representation = super().to_representation(instance)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 515, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "/usr/local/lib/python3.9/site-packages/rest_framework/fields.py", line 1882, in to_representation
return method(value)
File "/code/mathesar/api/serializers/columns.py", line 204, in get_valid_target_types
valid_target_types = column.valid_target_types
File "/code/mathesar/models/base.py", line 691, in __getattribute__
return getattr(self._sa_column, name)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 828, in __getattr__
util.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
Exception Type: AttributeError at /api/db/v0/tables/5/columns/
Exception Value: Neither 'Column' object nor 'Comparator' object has an attribute 'valid_target_types'
```
</details>
| main | valid target types attributeerror from columns endpoint i got this api error from the columns endpoint totally randomly i had not performed any ddl or dml mutations in quite a while and had been previously getting successful responses from the same endpoint i was not making lots of requests in parallel i was just doing local testing of the record selector which only ever performs get requests after this one error response i continued to get successful responses traceback environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file code mathesar models base py line in getattribute return super getattribute name during handling of the above exception column object has no attribute valid target types another exception occurred file usr local lib site packages sqlalchemy sql elements py line in getattr return getattr self comparator key the above exception comparator object has no attribute valid target types was the direct cause of the following exception file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file usr local lib site packages rest framework mixins py line in list return self get paginated response serializer data file usr local lib site packages rest framework serializers py line in data ret super data file usr local lib site packages rest framework serializers py line in data self data self to representation self instance file usr local lib site packages rest framework serializers py line in to representation return file usr local lib site packages rest framework serializers py line in self child to representation item for item in iterable file code mathesar api serializers columns py line in to representation representation super to representation instance file usr local lib site packages rest framework serializers py line in to representation ret field to representation attribute file usr local lib site packages rest framework fields py line in to representation return method value file code mathesar api serializers columns py line in get valid target types valid target types column valid target types file code mathesar models base py line in getattribute return getattr self sa column name file usr local lib site packages sqlalchemy sql elements py line in getattr util raise file usr local lib site packages sqlalchemy util compat py line in raise raise exception exception type attributeerror at api db tables columns exception value neither column object nor comparator object has an attribute valid target types | 1 |
5,022 | 25,781,171,780 | IssuesEvent | 2022-12-09 16:03:34 | gorilla/websocket | https://api.github.com/repos/gorilla/websocket | closed | ⚠️ New maintainers needed | help wanted waiting on new maintainer | I am stepping down as the maintainer of the gorilla/WebSocket project.
I am looking for a new maintainer for the project. The new maintainer should have a track record of successfully maintaining an open-source project and implementing RFCs.
Potential maintainers can gain the required experience by contributing to this project. I need help triaging issues, reviewing PRs, and fixing issues labeled "help wanted." If you are interested, jump in and start contributing.
If you rely on the quality and ongoing maintenance of this package, please get involved by helping to maintain this package or finding people to help maintain the project. | True | ⚠️ New maintainers needed - I am stepping down as the maintainer of the gorilla/WebSocket project.
I am looking for a new maintainer for the project. The new maintainer should have a track record of successfully maintaining an open-source project and implementing RFCs.
Potential maintainers can gain the required experience by contributing to this project. I need help triaging issues, reviewing PRs, and fixing issues labeled "help wanted." If you are interested, jump in and start contributing.
If you rely on the quality and ongoing maintenance of this package, please get involved by helping to maintain this package or finding people to help maintain the project. | main | ⚠️ new maintainers needed i am stepping down as the maintainer of the gorilla websocket project i am looking for a new maintainer for the project the new maintainer should have a track record of successfully maintaining an open source project and implementing rfcs potential maintainers can gain the required experience by contributing to this project i need help triaging issues reviewing prs and fixing issues labeled help wanted if you are interested jump in and start contributing if you rely on the quality and ongoing maintenance of this package please get involved by helping to maintain this package or finding people to help maintain the project | 1 |
174,574 | 14,490,702,010 | IssuesEvent | 2020-12-11 02:53:52 | keep-network/keep-core | https://api.github.com/repos/keep-network/keep-core | closed | Update Gitbook staking documentation with screenshots of latest version of token dashboard | :book: documentation :old_key: token dashboard | Updating staking.keep.network with latest screenshots of dashboard; update any outdated text describing staking with the new screens. | 1.0 | Update Gitbook staking documentation with screenshots of latest version of token dashboard - Updating staking.keep.network with latest screenshots of dashboard; update any outdated text describing staking with the new screens. | non_main | update gitbook staking documentation with screenshots of latest version of token dashboard updating staking keep network with latest screenshots of dashboard update any outdated text describing staking with the new screens | 0 |
104,244 | 8,969,164,578 | IssuesEvent | 2019-01-29 10:05:40 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | pkg/kubeapiserver/options TestValidate failed | kind/test-flake priority/P0 sig/master | Seen: https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/21883/pull-ci-openshift-origin-master-unit/2303/
```
=== RUN TestValidate
--- FAIL: TestValidate (0.00s)
admission_test.go:52: Unexpected err: [plugins [TaintNodesByCondition] in RecommendedPluginOrder are not registered]
``` | 1.0 | pkg/kubeapiserver/options TestValidate failed - Seen: https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/21883/pull-ci-openshift-origin-master-unit/2303/
```
=== RUN TestValidate
--- FAIL: TestValidate (0.00s)
admission_test.go:52: Unexpected err: [plugins [TaintNodesByCondition] in RecommendedPluginOrder are not registered]
``` | non_main | pkg kubeapiserver options testvalidate failed seen run testvalidate fail testvalidate admission test go unexpected err in recommendedpluginorder are not registered | 0 |
11,309 | 2,649,049,489 | IssuesEvent | 2015-03-14 15:00:04 | policeman-tools/forbidden-apis | https://api.github.com/repos/policeman-tools/forbidden-apis | closed | Forbidden @java.lang.Deprecated is not always detected | auto-migrated Priority-Medium Type-Defect | ```
If you put java.lang.Deprecated on the forbidden apis list, it is not always
correctly detected. The reason for this is, that the Java compiler translates
it into the deprecated code attribute and may not always put it as a real
annotation.
As discussed on issue #44, we should "emulate" a java.lang.Deprecated ASM
annotation visitor event, if the attribute is found (and filter duplicates). By
that also code not making explicit use of @Deprecated annotation (just uses the
@deprecated javadoc tag), will be detected correctly as having the attribute.
```
Original issue reported on code.google.com by `uwe.h.schindler` on 24 Dec 2014 at 12:24 | 1.0 | Forbidden @java.lang.Deprecated is not always detected - ```
If you put java.lang.Deprecated on the forbidden apis list, it is not always
correctly detected. The reason for this is, that the Java compiler translates
it into the deprecated code attribute and may not always put it as a real
annotation.
As discussed on issue #44, we should "emulate" a java.lang.Deprecated ASM
annotation visitor event, if the attribute is found (and filter duplicates). By
that also code not making explicit use of @Deprecated annotation (just uses the
@deprecated javadoc tag), will be detected correctly as having the attribute.
```
Original issue reported on code.google.com by `uwe.h.schindler` on 24 Dec 2014 at 12:24 | non_main | forbidden java lang deprecated is not always detected if you put java lang deprecated on the forbidden apis list it is not always correctly detected the reason for this is that the java compiler translates it into the deprecated code attribute and may not always put it as a real annotation as discussed on issue we should emulate a java lang deprecated asm annotation visitor event if the attribute is found and filter duplicates by that also code not making explicit use of deprecated annotation just uses the deprecated javadoc tag will be detected correctly as having the attribute original issue reported on code google com by uwe h schindler on dec at | 0 |
49,006 | 7,469,279,309 | IssuesEvent | 2018-04-02 22:07:41 | 18F/calc | https://api.github.com/repos/18F/calc | closed | Update component.yaml and ssp.yml after 3-Year ATO is signed | component: ATO component: documentation | To keep our local yaml files in sync with the executed ATO and its documentation, we should update our files once the official docs are signed. | 1.0 | Update component.yaml and ssp.yml after 3-Year ATO is signed - To keep our local yaml files in sync with the executed ATO and its documentation, we should update our files once the official docs are signed. | non_main | update component yaml and ssp yml after year ato is signed to keep our local yaml files in sync with the executed ato and its documentation we should update our files once the official docs are signed | 0 |
1,111 | 4,988,627,149 | IssuesEvent | 2016-12-08 09:04:59 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Service module no longer works with async | affects_2.2 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
service module
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/vagrant/source/GHE/ansible-playground/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Ubuntu 14.04 (but probably many others)
##### SUMMARY
On Ansible 2.1 , async could be specified on tasks using the `service` module. This was extremely useful to avoid playbooks from hanging if a service start did not return in a reasonable amount of time.
At Ansible 2.2, this fails with `async mode is not supported with the service module`
##### STEPS TO REPRODUCE
Create a dummy service that is guaranteed to take a certain amount of time to start.
For this reproduce, create file `/etc/init/testservice.conf` , as root, with the following contents:
```
pre-start script
#!/bin/bash
i=0
while [ "$i" -lt 10 ]
do
echo "Attempt $i"
sleep 2
i=$((i+1))
done
exit 0
end script
script
echo "Started"
end script
```
This service is guaranteed to take 20 seconds to start.
Run the following playbook against localhost:
```
---
- hosts: all
become: yes
become_user: root
become_method: sudo
tasks:
- name: upstart restart
service: "name=testservice state=restarted sleep=1"
async: 10
poll: 5
ignore_errors: yes
register: restart_status
- name: fail deploy if upstart restart failed
fail: msg="The upstart restart step failed."
when: restart_status | failed
```
##### EXPECTED RESULTS
At Ansible 2.1.2 restart timed out : `async task did not complete within the requested time`
```
PLAYBOOK: testservices.yml *****************************************************
1 plays in testservices.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554 `" && echo ansible-tmp-1478188242.21-246358132149554="` echo $HOME/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpUugRdk TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gpflpphtwddftzrfkoriswnuymaymkrl; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/" > /dev/null 2>&1'"'"' && sleep 0'
ok: [127.0.0.1]
TASK [upstart restart] *********************************************************
task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988 `" && echo ansible-tmp-1478188244.41-125126621993988="` echo $HOME/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpUByg0S TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service
<127.0.0.1> PUT /tmp/tmpw0Z12E TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-yxjgpmoqcwfowpsyvbxridnwklvypoha; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper 164650715721 10 /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/ > /dev/null 2>&1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608 `" && echo ansible-tmp-1478188250.68-192180370745608="` echo $HOME/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmprydg9E TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gepyxugizyhafzmhsvczcvexlulqhfgw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/" > /dev/null 2>&1'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246 `" && echo ansible-tmp-1478188255.79-182646281999246="` echo $HOME/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpiswcWa TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ynrcmnmeylrhanlnoauwffgvwlzuwbgq; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/" > /dev/null 2>&1'"'"' && sleep 0'
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "async task did not complete within the requested time"}
...ignoring
TASK [fail deploy if upstart restart failed] ***********************************
task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:13
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"msg": "The upstart restart step failed."}, "module_name": "fail"}, "msg": "The upstart restart step failed."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/home/vagrant/source/GHE/ansible-playground/testservices.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=1
```
##### ACTUAL RESULTS
At Ansible 2.2 : `async mode is not supported with the service module`
```
PLAYBOOK: testservices.yml *****************************************************
1 plays in testservices.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696 `" && echo ansible-tmp-1478188095.18-113784857458696="` echo $HOME/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmphWWy6b TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-bulczgwqvdnbjvnyovwlypoyymqngdvk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/" > /dev/null 2>&1'"'"' && sleep 0'
ok: [127.0.0.1]
TASK [upstart restart] *********************************************************
task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:7
fatal: [127.0.0.1]: FAILED! => {
"failed": true,
"msg": "async mode is not supported with the service module"
}
...ignoring
TASK [fail deploy if upstart restart failed] ***********************************
task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:13
fatal: [127.0.0.1]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"msg": "The upstart restart step failed."
},
"module_name": "fail"
},
"msg": "The upstart restart step failed."
}
to retry, use: --limit @/home/vagrant/source/GHE/ansible-playground/testservices.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=1
```
| True | Service module no longer works with async - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
service module
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/vagrant/source/GHE/ansible-playground/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Ubuntu 14.04 (but probably many others)
##### SUMMARY
On Ansible 2.1 , async could be specified on tasks using the `service` module. This was extremely useful to avoid playbooks from hanging if a service start did not return in a reasonable amount of time.
At Ansible 2.2, this fails with `async mode is not supported with the service module`
##### STEPS TO REPRODUCE
Create a dummy service that is guaranteed to take a certain amount of time to start.
For this reproduce, create file `/etc/init/testservice.conf` , as root, with the following contents:
```
pre-start script
#!/bin/bash
i=0
while [ "$i" -lt 10 ]
do
echo "Attempt $i"
sleep 2
i=$((i+1))
done
exit 0
end script
script
echo "Started"
end script
```
This service is guaranteed to take 20 seconds to start.
Run the following playbook against localhost:
```
---
- hosts: all
become: yes
become_user: root
become_method: sudo
tasks:
- name: upstart restart
service: "name=testservice state=restarted sleep=1"
async: 10
poll: 5
ignore_errors: yes
register: restart_status
- name: fail deploy if upstart restart failed
fail: msg="The upstart restart step failed."
when: restart_status | failed
```
##### EXPECTED RESULTS
At Ansible 2.1.2 restart timed out : `async task did not complete within the requested time`
```
PLAYBOOK: testservices.yml *****************************************************
1 plays in testservices.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554 `" && echo ansible-tmp-1478188242.21-246358132149554="` echo $HOME/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpUugRdk TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gpflpphtwddftzrfkoriswnuymaymkrl; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/setup; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478188242.21-246358132149554/" > /dev/null 2>&1'"'"' && sleep 0'
ok: [127.0.0.1]
TASK [upstart restart] *********************************************************
task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988 `" && echo ansible-tmp-1478188244.41-125126621993988="` echo $HOME/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpUByg0S TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service
<127.0.0.1> PUT /tmp/tmpw0Z12E TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-yxjgpmoqcwfowpsyvbxridnwklvypoha; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/async_wrapper 164650715721 10 /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/service'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1478188244.41-125126621993988/ > /dev/null 2>&1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608 `" && echo ansible-tmp-1478188250.68-192180370745608="` echo $HOME/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmprydg9E TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-gepyxugizyhafzmhsvczcvexlulqhfgw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/async_status; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478188250.68-192180370745608/" > /dev/null 2>&1'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246 `" && echo ansible-tmp-1478188255.79-182646281999246="` echo $HOME/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpiswcWa TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ynrcmnmeylrhanlnoauwffgvwlzuwbgq; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/async_status; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478188255.79-182646281999246/" > /dev/null 2>&1'"'"' && sleep 0'
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "async task did not complete within the requested time"}
...ignoring
TASK [fail deploy if upstart restart failed] ***********************************
task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:13
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"msg": "The upstart restart step failed."}, "module_name": "fail"}, "msg": "The upstart restart step failed."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/home/vagrant/source/GHE/ansible-playground/testservices.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=1
```
##### ACTUAL RESULTS
At Ansible 2.2 : `async mode is not supported with the service module`
```
PLAYBOOK: testservices.yml *****************************************************
1 plays in testservices.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696 `" && echo ansible-tmp-1478188095.18-113784857458696="` echo $HOME/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmphWWy6b TO /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/ /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-bulczgwqvdnbjvnyovwlypoyymqngdvk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478188095.18-113784857458696/" > /dev/null 2>&1'"'"' && sleep 0'
ok: [127.0.0.1]
TASK [upstart restart] *********************************************************
task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:7
fatal: [127.0.0.1]: FAILED! => {
"failed": true,
"msg": "async mode is not supported with the service module"
}
...ignoring
TASK [fail deploy if upstart restart failed] ***********************************
task path: /home/vagrant/source/GHE/ansible-playground/testservices.yml:13
fatal: [127.0.0.1]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"msg": "The upstart restart step failed."
},
"module_name": "fail"
},
"msg": "The upstart restart step failed."
}
to retry, use: --limit @/home/vagrant/source/GHE/ansible-playground/testservices.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=1
```
| main | service module no longer works with async issue type bug report component name service module ansible version ansible config file home vagrant source ghe ansible playground ansible cfg configured module search path default w o overrides configuration n a os environment ubuntu but probably many others summary on ansible async could be specified on tasks using the service module this was extremely useful to avoid playbooks from hanging if a service start did not return in a reasonable amount of time at ansible this fails with async mode is not supported with the service module steps to reproduce create a dummy service that is guaranteed to take a certain amount of time to start for this reproduce create file etc init testservice conf as root with the following contents pre start script bin bash i while do echo attempt i sleep i i done exit end script script echo started end script this service is guaranteed to take seconds to start run the following playbook against localhost hosts all become yes become user root become method sudo tasks name upstart restart service name testservice state restarted sleep async poll ignore errors yes register restart status name fail deploy if upstart restart failed fail msg the upstart restart step failed when restart status failed expected results at ansible restart timed out async task did not complete within the requested time playbook testservices yml plays in testservices yml play task establish local connection for user vagrant exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpuugrdk to home vagrant ansible tmp ansible tmp setup exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp setup sleep exec bin sh c sudo h s n u root bin sh c echo become success gpflpphtwddftzrfkoriswnuymaymkrl lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp setup rm rf home vagrant ansible tmp ansible tmp dev null sleep ok task task path home vagrant source ghe ansible playground testservices yml establish local connection for user vagrant exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp service put tmp to home vagrant ansible tmp ansible tmp async wrapper exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp service home vagrant ansible tmp ansible tmp async wrapper sleep exec bin sh c sudo h s n u root bin sh c echo become success yxjgpmoqcwfowpsyvbxridnwklvypoha lang en us utf lc all en us utf lc messages en us utf home vagrant ansible tmp ansible tmp async wrapper home vagrant ansible tmp ansible tmp service sleep exec bin sh c rm f r home vagrant ansible tmp ansible tmp dev null sleep exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp async status exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp async status sleep exec bin sh c sudo h s n u root bin sh c echo become success gepyxugizyhafzmhsvczcvexlulqhfgw lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp async status rm rf home vagrant ansible tmp ansible tmp dev null sleep exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiswcwa to home vagrant ansible tmp ansible tmp async status exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp async status sleep exec bin sh c sudo h s n u root bin sh c echo become success ynrcmnmeylrhanlnoauwffgvwlzuwbgq lang en us utf lc all en us utf lc messages en us utf usr bin python home vagrant ansible tmp ansible tmp async status rm rf home vagrant ansible tmp ansible tmp dev null sleep fatal failed changed false failed true msg async task did not complete within the requested time ignoring task task path home vagrant source ghe ansible playground testservices yml fatal failed changed false failed true invocation module args msg the upstart restart step failed module name fail msg the upstart restart step failed no more hosts left to retry use limit home vagrant source ghe ansible playground testservices retry play recap ok changed unreachable failed actual results at ansible async mode is not supported with the service module playbook testservices yml plays in testservices yml play task using module file usr local lib dist packages ansible modules core system setup py establish local connection for user vagrant exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp setup py exec bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp setup py sleep exec bin sh c sudo h s n u root bin sh c echo become success bulczgwqvdnbjvnyovwlypoyymqngdvk usr bin python home vagrant ansible tmp ansible tmp setup py rm rf home vagrant ansible tmp ansible tmp dev null sleep ok task task path home vagrant source ghe ansible playground testservices yml fatal failed failed true msg async mode is not supported with the service module ignoring task task path home vagrant source ghe ansible playground testservices yml fatal failed changed false failed true invocation module args msg the upstart restart step failed module name fail msg the upstart restart step failed to retry use limit home vagrant source ghe ansible playground testservices retry play recap ok changed unreachable failed | 1 |
563,120 | 16,676,305,260 | IssuesEvent | 2021-06-07 16:36:50 | epam/Indigo | https://api.github.com/repos/epam/Indigo | closed | core: investigate possibility and replace SharedPtr with std::shared_ptr | Core Enhancement High priority | Currently we have our own implementation of shared-owner smart pointer called SharedPtr (`core/indigo-core/common/base_cpp/shared_ptr.h`).
We need to:
- [ ] investigate possibility of replacing it with `std::shared_ptr`
- [ ] replace it's usages with `std::shared_ptr` with using `std::make_shared` where possible
- [ ] remove `SharedPtr` from the codebase | 1.0 | core: investigate possibility and replace SharedPtr with std::shared_ptr - Currently we have our own implementation of shared-owner smart pointer called SharedPtr (`core/indigo-core/common/base_cpp/shared_ptr.h`).
We need to:
- [ ] investigate possibility of replacing it with `std::shared_ptr`
- [ ] replace it's usages with `std::shared_ptr` with using `std::make_shared` where possible
- [ ] remove `SharedPtr` from the codebase | non_main | core investigate possibility and replace sharedptr with std shared ptr currently we have our own implementation of shared owner smart pointer called sharedptr core indigo core common base cpp shared ptr h we need to investigate possibility of replacing it with std shared ptr replace it s usages with std shared ptr with using std make shared where possible remove sharedptr from the codebase | 0 |
3,291 | 12,625,506,188 | IssuesEvent | 2020-06-14 12:15:17 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | CI Issues | awaiting maintainer feedback bug | CI is crapping itself again. Making this issue so we can track the problems.
- [x] Error: Calling brew pull is deprecated! Use hub checkout instead. ([example](https://github.com/Homebrew/homebrew-cask/pull/84018/checks))
- [ ] appcast at URL '{{SOMETHING}}' offline or looping ([example](https://github.com/Homebrew/homebrew-cask/pull/84018/checks)).
- [ ] Error: undefined method `any_version_installed? ([example](https://github.com/Homebrew/homebrew-cask/pull/83730/checks)) | True | CI Issues - CI is crapping itself again. Making this issue so we can track the problems.
- [x] Error: Calling brew pull is deprecated! Use hub checkout instead. ([example](https://github.com/Homebrew/homebrew-cask/pull/84018/checks))
- [ ] appcast at URL '{{SOMETHING}}' offline or looping ([example](https://github.com/Homebrew/homebrew-cask/pull/84018/checks)).
- [ ] Error: undefined method `any_version_installed? ([example](https://github.com/Homebrew/homebrew-cask/pull/83730/checks)) | main | ci issues ci is crapping itself again making this issue so we can track the problems error calling brew pull is deprecated use hub checkout instead appcast at url something offline or looping error undefined method any version installed | 1 |
281,914 | 8,700,795,567 | IssuesEvent | 2018-12-05 09:45:38 | omni-compiler/xcodeml-tools | https://api.github.com/repos/omni-compiler/xcodeml-tools | opened | Wrong error triggered: integer constant required | Kind: Bug Module: F_Front Priority: Low | The following code is valid Fortran but `F_Front` trigger an error as show below.
```fortran
module mod1
type type1
integer(kind=8) :: int1
end type
contains
subroutine sub1(t, char1)
type(type1) :: t
character char1*(*), char2*(t%int1)
end subroutine
end module mod1
```
```bash
"dummy.f90:mod1", line 11: integer constant is required
``` | 1.0 | Wrong error triggered: integer constant required - The following code is valid Fortran but `F_Front` trigger an error as show below.
```fortran
module mod1
type type1
integer(kind=8) :: int1
end type
contains
subroutine sub1(t, char1)
type(type1) :: t
character char1*(*), char2*(t%int1)
end subroutine
end module mod1
```
```bash
"dummy.f90:mod1", line 11: integer constant is required
``` | non_main | wrong error triggered integer constant required the following code is valid fortran but f front trigger an error as show below fortran module type integer kind end type contains subroutine t type t character t end subroutine end module bash dummy line integer constant is required | 0 |
2,334 | 8,357,622,596 | IssuesEvent | 2018-10-02 22:16:09 | OpenLightingProject/ola | https://api.github.com/repos/OpenLightingProject/ola | closed | Example UART in uartdmx plugin readme is wrong? | Component-Plugin Maintainability | Hi @richardash1981
Looking at some of the discussion in :
https://groups.google.com/forum/#!topic/open-lighting/myWi-K570Rc
It seems the example device in the uartdmx plugin may be wrong, as it's /dev/ttyACM0, whereas it appears it should be /dev/ttyAMA0.
It looks like this appeared with the plugin in this PR:
https://github.com/OpenLightingProject/ola/pull/385
After my (rather terse comment) in this commit:
https://github.com/OpenLightingProject/ola/pull/385/commits/39b4bf5296dded813057cb768e099a10cb6feb9a#r12026667
This also contradicts your own instructions:
http://eastertrail.blogspot.nl/2014/04/command-and-control-ii.html
Have I missed something, or should this really be /dev/ttyAMA0? | True | Example UART in uartdmx plugin readme is wrong? - Hi @richardash1981
Looking at some of the discussion in :
https://groups.google.com/forum/#!topic/open-lighting/myWi-K570Rc
It seems the example device in the uartdmx plugin may be wrong, as it's /dev/ttyACM0, whereas it appears it should be /dev/ttyAMA0.
It looks like this appeared with the plugin in this PR:
https://github.com/OpenLightingProject/ola/pull/385
After my (rather terse comment) in this commit:
https://github.com/OpenLightingProject/ola/pull/385/commits/39b4bf5296dded813057cb768e099a10cb6feb9a#r12026667
This also contradicts your own instructions:
http://eastertrail.blogspot.nl/2014/04/command-and-control-ii.html
Have I missed something, or should this really be /dev/ttyAMA0? | main | example uart in uartdmx plugin readme is wrong hi looking at some of the discussion in it seems the example device in the uartdmx plugin may be wrong as it s dev whereas it appears it should be dev it looks like this appeared with the plugin in this pr after my rather terse comment in this commit this also contradicts your own instructions have i missed something or should this really be dev | 1 |
580,697 | 17,264,486,259 | IssuesEvent | 2021-07-22 12:12:34 | argosp/trialdash | https://api.github.com/repos/argosp/trialdash | closed | Changing placing type on map keeps points | Priority Medium bug | For example, changing from rectencle to points leaves the black points on screen.
There is no way to remove them


| 1.0 | Changing placing type on map keeps points - For example, changing from rectencle to points leaves the black points on screen.
There is no way to remove them


| non_main | changing placing type on map keeps points for example changing from rectencle to points leaves the black points on screen there is no way to remove them | 0 |
348,283 | 24,910,484,576 | IssuesEvent | 2022-10-29 20:00:03 | bounswe/bounswe2022group7 | https://api.github.com/repos/bounswe/bounswe2022group7 | opened | Preparing Deliverables for Milestone | Type: Documentation Status: Not Yet Started Difficulty: Easy | We are required to put our deliverables in a folder for the first CMPE451 milestone. Here is the related section from the milestone description:
> In addition to the code that you develop you must create a folder titled CMPE451_Customer_Presentation_Milestone_1 inside deliverables folder in your repository. Commit all the deliverables to this directory (with appropriate comments)
I suggest we create a branch out of develop and add the files there, then merge it back into develop branch.
### List of Deliverables
The deliverables will include the following:
- Software Requirements Specification
- Software Design (UML)
- Scenarios and Mockups
- Project Plan
- Individual Contribution Reports
These deliverables are ready except for sequence diagrams, project plan and individual contribution reports. Here is how we can handle the deliverables:
- [ ] @canatakan adds use case and class diagrams.
- [ ] @CahidArda adds rest of the deliverables excluding sequence diagrams, project plan and individual contribution reports.
- [ ] We add sequence diagrams and project plan on monday 19:00
- [ ] @CahidArda creates individual contribution report template. Template will include the names of the team members in alphabetic order.
Deadline 29.10.2022, 23:59
Reviewer: @demet47 | 1.0 | Preparing Deliverables for Milestone - We are required to put our deliverables in a folder for the first CMPE451 milestone. Here is the related section from the milestone description:
> In addition to the code that you develop you must create a folder titled CMPE451_Customer_Presentation_Milestone_1 inside deliverables folder in your repository. Commit all the deliverables to this directory (with appropriate comments)
I suggest we create a branch out of develop and add the files there, then merge it back into develop branch.
### List of Deliverables
The deliverables will include the following:
- Software Requirements Specification
- Software Design (UML)
- Scenarios and Mockups
- Project Plan
- Individual Contribution Reports
These deliverables are ready except for sequence diagrams, project plan and individual contribution reports. Here is how we can handle the deliverables:
- [ ] @canatakan adds use case and class diagrams.
- [ ] @CahidArda adds rest of the deliverables excluding sequence diagrams, project plan and individual contribution reports.
- [ ] We add sequence diagrams and project plan on monday 19:00
- [ ] @CahidArda creates individual contribution report template. Template will include the names of the team members in alphabetic order.
Deadline 29.10.2022, 23:59
Reviewer: @demet47 | non_main | preparing deliverables for milestone we are required to put our deliverables in a folder for the first milestone here is the related section from the milestone description in addition to the code that you develop you must create a folder titled customer presentation milestone inside deliverables folder in your repository commit all the deliverables to this directory with appropriate comments i suggest we create a branch out of develop and add the files there then merge it back into develop branch list of deliverables the deliverables will include the following software requirements specification software design uml scenarios and mockups project plan individual contribution reports these deliverables are ready except for sequence diagrams project plan and individual contribution reports here is how we can handle the deliverables canatakan adds use case and class diagrams cahidarda adds rest of the deliverables excluding sequence diagrams project plan and individual contribution reports we add sequence diagrams and project plan on monday cahidarda creates individual contribution report template template will include the names of the team members in alphabetic order deadline reviewer | 0 |
5,432 | 27,240,291,110 | IssuesEvent | 2023-02-21 19:48:00 | cosmos/ibc-rs | https://api.github.com/repos/cosmos/ibc-rs | closed | Remove legacy `Reader`s and `Keeper`s | O: maintainability | In favor of `ValidationContext` and `ExecutionContext`. Each handler's `process()` should be implemented as suggested [here](https://github.com/cosmos/ibc-rs/issues/271#issuecomment-1331347210).
Follow-up to #257. | True | Remove legacy `Reader`s and `Keeper`s - In favor of `ValidationContext` and `ExecutionContext`. Each handler's `process()` should be implemented as suggested [here](https://github.com/cosmos/ibc-rs/issues/271#issuecomment-1331347210).
Follow-up to #257. | main | remove legacy reader s and keeper s in favor of validationcontext and executioncontext each handler s process should be implemented as suggested follow up to | 1 |
163,627 | 20,363,970,483 | IssuesEvent | 2022-02-21 01:52:00 | UNO-NULLify/eCTF20 | https://api.github.com/repos/UNO-NULLify/eCTF20 | opened | CVE-2022-0512 (High) detected in url-parse-1.4.7.tgz | security vulnerability | ## CVE-2022-0512 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- vuepress-1.4.1.tgz (Root Library)
- core-1.4.1.tgz
- webpack-dev-server-3.10.3.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.6.
<p>Publish Date: 2022-02-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0512>CVE-2022-0512</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0512">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0512</a></p>
<p>Release Date: 2022-02-14</p>
<p>Fix Resolution: url-parse - 1.5.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0512 (High) detected in url-parse-1.4.7.tgz - ## CVE-2022-0512 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- vuepress-1.4.1.tgz (Root Library)
- core-1.4.1.tgz
- webpack-dev-server-3.10.3.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.6.
<p>Publish Date: 2022-02-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0512>CVE-2022-0512</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0512">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0512</a></p>
<p>Release Date: 2022-02-14</p>
<p>Fix Resolution: url-parse - 1.5.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in url parse tgz cve high severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file package json path to vulnerable library node modules url parse package json dependency hierarchy vuepress tgz root library core tgz webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse step up your open source security game with whitesource | 0 |
700 | 4,272,107,822 | IssuesEvent | 2016-07-13 13:35:50 | Particular/NServiceBus.Persistence.AzureStorage | https://api.github.com/repos/Particular/NServiceBus.Persistence.AzureStorage | closed | Document saga data types allowed with Azure Storage Persistence | Impact: M Size: S State: In Progress - Maintainer Prio Tag: Maintainer Prio | Copied from the original issue:
https://github.com/Particular/docs.particular.net/issues/1666 | True | Document saga data types allowed with Azure Storage Persistence - Copied from the original issue:
https://github.com/Particular/docs.particular.net/issues/1666 | main | document saga data types allowed with azure storage persistence copied from the original issue | 1 |
532 | 3,931,538,930 | IssuesEvent | 2016-04-25 12:54:38 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Foundation 6 Cheat Sheet: Maintainer request | Maintainer Input Requested | I would like to become the maintainer for this!
Thanks!
IA Page: http://duck.co/ia/view/foundation_cheat_sheet | True | Foundation 6 Cheat Sheet: Maintainer request - I would like to become the maintainer for this!
Thanks!
IA Page: http://duck.co/ia/view/foundation_cheat_sheet | main | foundation cheat sheet maintainer request i would like to become the maintainer for this thanks ia page | 1 |
4,949 | 25,455,552,525 | IssuesEvent | 2022-11-24 13:55:26 | pace/bricks | https://api.github.com/repos/pace/bricks | closed | Do not allow empty user id in OAuth2 middleware | T::Maintainance | ### Problem
The OAuth2 middleware works with empty user id.
### Want
The OAuth2 middleware errors if the user id is empty. The same goes for the client id.
### Resources
Implement here: https://github.com/pace/bricks/blob/2cdb677ba04f0f70af432585e587b85e429dec90/http/oauth2/oauth2.go#L87 | True | Do not allow empty user id in OAuth2 middleware - ### Problem
The OAuth2 middleware works with empty user id.
### Want
The OAuth2 middleware errors if the user id is empty. The same goes for the client id.
### Resources
Implement here: https://github.com/pace/bricks/blob/2cdb677ba04f0f70af432585e587b85e429dec90/http/oauth2/oauth2.go#L87 | main | do not allow empty user id in middleware problem the middleware works with empty user id want the middleware errors if the user id is empty the same goes for the client id resources implement here | 1 |
1,808 | 6,575,944,250 | IssuesEvent | 2017-09-11 17:55:52 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | lineinfile documentation should clarify whether state=present will add a line if regexp has no match | affects_2.1 docs_report waiting_on_maintainer | ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
lineinfile module
##### ANSIBLE VERSION
2.1.1.0
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
For lineinfile module, when I run a task like this:
```
- lineinfile: line="example" regexp="exampl.*" state=present dest=somefile.txt
```
when file somefile.txt does not contain the regex:
```
other
text
```
The documentation doesn't make it clear whether `state=present` will ensure that the line is added even if the regex does not exist (adding it to the end) or if it will only be added if there is a match. It seems to me that ansible _used to_ add it regardless, but as of ansible 2.1.1.0 (if not prior), it is not doing that. I can't tell from the docs whether a bug was fixed or introduced.
##### STEPS TO REPRODUCE
Read "regexp" and "state" sections in doc:
http://docs.ansible.com/ansible/lineinfile_module.html
| True | lineinfile documentation should clarify whether state=present will add a line if regexp has no match - ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
lineinfile module
##### ANSIBLE VERSION
2.1.1.0
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
For lineinfile module, when I run a task like this:
```
- lineinfile: line="example" regexp="exampl.*" state=present dest=somefile.txt
```
when file somefile.txt does not contain the regex:
```
other
text
```
The documentation doesn't make it clear whether `state=present` will ensure that the line is added even if the regex does not exist (adding it to the end) or if it will only be added if there is a match. It seems to me that ansible _used to_ add it regardless, but as of ansible 2.1.1.0 (if not prior), it is not doing that. I can't tell from the docs whether a bug was fixed or introduced.
##### STEPS TO REPRODUCE
Read "regexp" and "state" sections in doc:
http://docs.ansible.com/ansible/lineinfile_module.html
| main | lineinfile documentation should clarify whether state present will add a line if regexp has no match issue type documentation report component name lineinfile module ansible version configuration n a os environment n a summary for lineinfile module when i run a task like this lineinfile line example regexp exampl state present dest somefile txt when file somefile txt does not contain the regex other text the documentation doesn t make it clear whether state present will ensure that the line is added even if the regex does not exist adding it to the end or if it will only be added if there is a match it seems to me that ansible used to add it regardless but as of ansible if not prior it is not doing that i can t tell from the docs whether a bug was fixed or introduced steps to reproduce read regexp and state sections in doc | 1 |
1,707 | 18,927,323,280 | IssuesEvent | 2021-11-17 10:54:54 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | opened | [EPIC] Verify DB System tables and tables where user do not have privileges are not accessible to the user at Airbyte UI | area/connectors Epic area/reliability | ## Tell us about the problem you're trying to solve
The issue #5172 was found by the customer. Sometimes users have access to SYSTEM tables and tables where users do not have SELECT privileges at Airbyte UI. As a result, we get a "Permission denied" runtime error, and the sync job crushes.
This issue is created to verify we do not have such issues at other potential source connectors. | True | [EPIC] Verify DB System tables and tables where user do not have privileges are not accessible to the user at Airbyte UI - ## Tell us about the problem you're trying to solve
The issue #5172 was found by the customer. Sometimes users have access to SYSTEM tables and tables where users do not have SELECT privileges at Airbyte UI. As a result, we get a "Permission denied" runtime error, and the sync job crushes.
This issue is created to verify we do not have such issues at other potential source connectors. | non_main | verify db system tables and tables where user do not have privileges are not accessible to the user at airbyte ui tell us about the problem you re trying to solve the issue was found by the customer sometimes users have access to system tables and tables where users do not have select privileges at airbyte ui as a result we get a permission denied runtime error and the sync job crushes this issue is created to verify we do not have such issues at other potential source connectors | 0 |
387,611 | 11,463,550,521 | IssuesEvent | 2020-02-07 16:13:08 | storybookjs/storybook | https://api.github.com/repos/storybookjs/storybook | closed | Link to brandImage not working | bug has workaround high priority theming ui | **Describe the bug**
During upgrade from 5.2.8 to 5.3.3 my brandImage stopped working.
I have a custom theme, using a static assets folder for the image
```
./storybook
- ./public
- logo.svg
```
Previously I had me theme defined like this:
```
import { create } from '@storybook/theming';
import logo from './public/logo.svg';
export default create({
base: 'light',
brandImage: logo,
brandTitle: 'Custom - Storybook'
});
```
After updating to 5.3.3 I've moved my theming to manager.js, like so
```
import { addons } from '@storybook/addons';
import { create } from '@storybook/theming/create';
import logo from './public/logo.svg';
const theme = create({
base: 'light',
brandImage: `/${logo}`,
brandTitle: 'Custom - Storybook'
});
addons.setConfig({
panelPosition: 'bottom',
theme
});
```
But the logo.svg does not show up when I start storybook using `start-storybook -p 6006 -s ./.storybook/public`.
If I however do a static build via `build-storybook -s ./.storybook/public`, the logo shows up correctly.
Webserver fetches the logo from `/media/static/logo.svg` in both cases. But it seems the local webserver started when starting storybook locally does not correctly allow fetching images from this folder.
**System:**
Environment Info:
System:
OS: macOS 10.15.2
CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Binaries:
Node: 13.6.0 - ~/.nvm/versions/node/v13.6.0/bin/node
Yarn: 1.19.1 - /usr/local/bin/yarn
npm: 6.13.4 - ~/.nvm/versions/node/v13.6.0/bin/npm
Browsers:
Chrome: 79.0.3945.117
Safari: 13.0.4
npmPackages:
@storybook/addon-a11y: ^5.3.3 => 5.3.3
@storybook/addon-actions: ^5.3.3 => 5.3.3
@storybook/addon-docs: ^5.3.3 => 5.3.3
@storybook/addon-knobs: ^5.3.3 => 5.3.3
@storybook/addon-links: ^5.3.3 => 5.3.3
@storybook/addon-notes: ^5.3.3 => 5.3.3
@storybook/addon-viewport: ^5.3.3 => 5.3.3
@storybook/addons: ^5.3.3 => 5.3.3
@storybook/angular: ^5.3.3 => 5.3.3
| 1.0 | Link to brandImage not working - **Describe the bug**
During upgrade from 5.2.8 to 5.3.3 my brandImage stopped working.
I have a custom theme, using a static assets folder for the image
```
./storybook
- ./public
- logo.svg
```
Previously I had me theme defined like this:
```
import { create } from '@storybook/theming';
import logo from './public/logo.svg';
export default create({
base: 'light',
brandImage: logo,
brandTitle: 'Custom - Storybook'
});
```
After updating to 5.3.3 I've moved my theming to manager.js, like so
```
import { addons } from '@storybook/addons';
import { create } from '@storybook/theming/create';
import logo from './public/logo.svg';
const theme = create({
base: 'light',
brandImage: `/${logo}`,
brandTitle: 'Custom - Storybook'
});
addons.setConfig({
panelPosition: 'bottom',
theme
});
```
But the logo.svg does not show up when I start storybook using `start-storybook -p 6006 -s ./.storybook/public`.
If I however do a static build via `build-storybook -s ./.storybook/public`, the logo shows up correctly.
Webserver fetches the logo from `/media/static/logo.svg` in both cases. But it seems the local webserver started when starting storybook locally does not correctly allow fetching images from this folder.
**System:**
Environment Info:
System:
OS: macOS 10.15.2
CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Binaries:
Node: 13.6.0 - ~/.nvm/versions/node/v13.6.0/bin/node
Yarn: 1.19.1 - /usr/local/bin/yarn
npm: 6.13.4 - ~/.nvm/versions/node/v13.6.0/bin/npm
Browsers:
Chrome: 79.0.3945.117
Safari: 13.0.4
npmPackages:
@storybook/addon-a11y: ^5.3.3 => 5.3.3
@storybook/addon-actions: ^5.3.3 => 5.3.3
@storybook/addon-docs: ^5.3.3 => 5.3.3
@storybook/addon-knobs: ^5.3.3 => 5.3.3
@storybook/addon-links: ^5.3.3 => 5.3.3
@storybook/addon-notes: ^5.3.3 => 5.3.3
@storybook/addon-viewport: ^5.3.3 => 5.3.3
@storybook/addons: ^5.3.3 => 5.3.3
@storybook/angular: ^5.3.3 => 5.3.3
| non_main | link to brandimage not working describe the bug during upgrade from to my brandimage stopped working i have a custom theme using a static assets folder for the image storybook public logo svg previously i had me theme defined like this import create from storybook theming import logo from public logo svg export default create base light brandimage logo brandtitle custom storybook after updating to i ve moved my theming to manager js like so import addons from storybook addons import create from storybook theming create import logo from public logo svg const theme create base light brandimage logo brandtitle custom storybook addons setconfig panelposition bottom theme but the logo svg does not show up when i start storybook using start storybook p s storybook public if i however do a static build via build storybook s storybook public the logo shows up correctly webserver fetches the logo from media static logo svg in both cases but it seems the local webserver started when starting storybook locally does not correctly allow fetching images from this folder system environment info system os macos cpu intel r core tm cpu binaries node nvm versions node bin node yarn usr local bin yarn npm nvm versions node bin npm browsers chrome safari npmpackages storybook addon storybook addon actions storybook addon docs storybook addon knobs storybook addon links storybook addon notes storybook addon viewport storybook addons storybook angular | 0 |
5,228 | 26,512,622,633 | IssuesEvent | 2023-01-18 18:19:03 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Breadcrumbs do not update when coming back from edit exploration to saved exploration | type: bug work: frontend status: ready restricted: maintainers | ### Steps to reproduce
1. Go to any saved exploration and notice the breadcrumbs.
2. Click on the edit button on the top right and notice the breadcrumbs.
3. Click on the browser's back button. Notice that the page has changed(routing is successful) but the breadcrumbs do not change. | True | Breadcrumbs do not update when coming back from edit exploration to saved exploration - ### Steps to reproduce
1. Go to any saved exploration and notice the breadcrumbs.
2. Click on the edit button on the top right and notice the breadcrumbs.
3. Click on the browser's back button. Notice that the page has changed(routing is successful) but the breadcrumbs do not change. | main | breadcrumbs do not update when coming back from edit exploration to saved exploration steps to reproduce go to any saved exploration and notice the breadcrumbs click on the edit button on the top right and notice the breadcrumbs click on the browser s back button notice that the page has changed routing is successful but the breadcrumbs do not change | 1 |
4,921 | 25,285,054,866 | IssuesEvent | 2022-11-16 18:34:43 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | sam-beta-cdk support for typescript lambda functions - aws-lambda-nodejs | type/feature maintainer/need-followup | **The feature:**
Would it be possible to add support for sam-cdk-beta to bundle typescript lambda functions using @aws-cdk/aws-lambda-nodejs (NodejsFunction) ?
**What I've tried:**
I have tried to use the @aws-cdk/aws-lambda-nodejs library to compile typescript functions as opposed to the @aws-cdk/aws-lambda library. However, what I notice is that when I run:
`sam-beta-cdk build`
the assets for a @aws-cdk/aws-lambda-nodejs (NodejsFunction) function only produce an index.js asset in the .aws-sam/cdk-out/asset.xxxxxxxxxxxxx/ directory but package.json also needs to be in there for the function to be build properly. If I manually add package.json from the function in question, then I can run
`cdk deploy -a .aws-sam/build`
and the stack will deploy correctly.
I'm not entirely sure if the solution would be as simple as adding package.json to the assets directory for a particular stack, but It seemed to work. Perhaps this is a feature I should request in the main aws-cdk repository. Please let me know if there's something I'm missing in my approach to creating typescript lambda functions.
| True | sam-beta-cdk support for typescript lambda functions - aws-lambda-nodejs - **The feature:**
Would it be possible to add support for sam-cdk-beta to bundle typescript lambda functions using @aws-cdk/aws-lambda-nodejs (NodejsFunction) ?
**What I've tried:**
I have tried to use the @aws-cdk/aws-lambda-nodejs library to compile typescript functions as opposed to the @aws-cdk/aws-lambda library. However, what I notice is that when I run:
`sam-beta-cdk build`
the assets for a @aws-cdk/aws-lambda-nodejs (NodejsFunction) function only produce an index.js asset in the .aws-sam/cdk-out/asset.xxxxxxxxxxxxx/ directory but package.json also needs to be in there for the function to be build properly. If I manually add package.json from the function in question, then I can run
`cdk deploy -a .aws-sam/build`
and the stack will deploy correctly.
I'm not entirely sure if the solution would be as simple as adding package.json to the assets directory for a particular stack, but It seemed to work. Perhaps this is a feature I should request in the main aws-cdk repository. Please let me know if there's something I'm missing in my approach to creating typescript lambda functions.
| main | sam beta cdk support for typescript lambda functions aws lambda nodejs the feature would it be possible to add support for sam cdk beta to bundle typescript lambda functions using aws cdk aws lambda nodejs nodejsfunction what i ve tried i have tried to use the aws cdk aws lambda nodejs library to compile typescript functions as opposed to the aws cdk aws lambda library however what i notice is that when i run sam beta cdk build the assets for a aws cdk aws lambda nodejs nodejsfunction function only produce an index js asset in the aws sam cdk out asset xxxxxxxxxxxxx directory but package json also needs to be in there for the function to be build properly if i manually add package json from the function in question then i can run cdk deploy a aws sam build and the stack will deploy correctly i m not entirely sure if the solution would be as simple as adding package json to the assets directory for a particular stack but it seemed to work perhaps this is a feature i should request in the main aws cdk repository please let me know if there s something i m missing in my approach to creating typescript lambda functions | 1 |
690 | 4,236,124,219 | IssuesEvent | 2016-07-05 17:22:19 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | AptGet Cheat Sheet: | Maintainer Input Requested PR Received | Can't we write in the description to use sudo for certain debian platforms.
Eg.
If you are getting this error Unable to lock the administration directory , are you root?
Then add sudo in prefix to all the listed commands.
OR
Note : All these commands except the search commands must be run as root or with superuser privileges.
------
IA Page: http://duck.co/ia/view/apt_get_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @abadojack | True | AptGet Cheat Sheet: - Can't we write in the description to use sudo for certain debian platforms.
Eg.
If you are getting this error Unable to lock the administration directory , are you root?
Then add sudo in prefix to all the listed commands.
OR
Note : All these commands except the search commands must be run as root or with superuser privileges.
------
IA Page: http://duck.co/ia/view/apt_get_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @abadojack | main | aptget cheat sheet can t we write in the description to use sudo for certain debian platforms eg if you are getting this error unable to lock the administration directory are you root then add sudo in prefix to all the listed commands or note all these commands except the search commands must be run as root or with superuser privileges ia page abadojack | 1 |
314,641 | 9,600,872,692 | IssuesEvent | 2019-05-10 10:30:05 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | Cataclysm Zone Mob XP | Fixed Confirmed Fixed in Dev Priority-High Regression zone 80-85 Cata | Here are the mob XP formulas from wiki
XP = (Char Level * 5) + 45, where Char Level = Mob Level, for mobs in Azeroth
XP = (Char Level * 5) + 235, where Char Level = Mob Level, for mobs in Outland
XP = (Char Level * 5) + 580, where Char Level = Mob Level, for mobs in Northrend
**XP = (Char Level * 5) + 1878, where Char Level = Mob Level, for mobs in Cataclysm**
With that calculation, a level 80 player killing a level 80 mob in a cataclysm zone would receive 2278 exp on x1 rates,, 9,1k with x2 + double xp weekend and 27k+ with x12 VIP.
Currently players experience that with the weekend x4 exp they get about 2-3k, and even the vip guys with x12 only get 7-10k depending on zone and char level.
The cataclysm formula must apply in cata zones, I think currently it uses one of the lesser formulas. | 1.0 | Cataclysm Zone Mob XP - Here are the mob XP formulas from wiki
XP = (Char Level * 5) + 45, where Char Level = Mob Level, for mobs in Azeroth
XP = (Char Level * 5) + 235, where Char Level = Mob Level, for mobs in Outland
XP = (Char Level * 5) + 580, where Char Level = Mob Level, for mobs in Northrend
**XP = (Char Level * 5) + 1878, where Char Level = Mob Level, for mobs in Cataclysm**
With that calculation, a level 80 player killing a level 80 mob in a cataclysm zone would receive 2278 exp on x1 rates,, 9,1k with x2 + double xp weekend and 27k+ with x12 VIP.
Currently players experience that with the weekend x4 exp they get about 2-3k, and even the vip guys with x12 only get 7-10k depending on zone and char level.
The cataclysm formula must apply in cata zones, I think currently it uses one of the lesser formulas. | non_main | cataclysm zone mob xp here are the mob xp formulas from wiki xp char level where char level mob level for mobs in azeroth xp char level where char level mob level for mobs in outland xp char level where char level mob level for mobs in northrend xp char level where char level mob level for mobs in cataclysm with that calculation a level player killing a level mob in a cataclysm zone would receive exp on rates with double xp weekend and with vip currently players experience that with the weekend exp they get about and even the vip guys with only get depending on zone and char level the cataclysm formula must apply in cata zones i think currently it uses one of the lesser formulas | 0 |
4,105 | 19,477,469,699 | IssuesEvent | 2021-12-24 15:53:00 | ortuman/jackal | https://api.github.com/repos/ortuman/jackal | closed | New maintainer(s) needed | help wanted waiting on new maintainer | I am stepping down as maintainer of the jackal project.
I am looking for one or more people with a track record of successfully maintaining an open source project. Potential maintainers can gain this experience by contributing to this project.
If you rely on the quality and ongoing maintenance of this project, then please get involved by helping to maintain this project or finding people to help maintain the project. | True | New maintainer(s) needed - I am stepping down as maintainer of the jackal project.
I am looking for one or more people with a track record of successfully maintaining an open source project. Potential maintainers can gain this experience by contributing to this project.
If you rely on the quality and ongoing maintenance of this project, then please get involved by helping to maintain this project or finding people to help maintain the project. | main | new maintainer s needed i am stepping down as maintainer of the jackal project i am looking for one or more people with a track record of successfully maintaining an open source project potential maintainers can gain this experience by contributing to this project if you rely on the quality and ongoing maintenance of this project then please get involved by helping to maintain this project or finding people to help maintain the project | 1 |
262,516 | 8,271,897,239 | IssuesEvent | 2018-09-16 14:29:55 | richelbilderbeek/BrainWeaver | https://api.github.com/repos/richelbilderbeek/BrainWeaver | closed | 2. (optional) Allow pruning | enhancement low-priority | > 2. Indien haalbaar binnen de gestelde randvoorwaarden: Mogelijkheid ontwikkelen tot pruning (cq. snoeien) om de Concept Map te kunnen vereenvoudigen. Kanttekening daarbij is dat de pruning-informatie niet verloren mag gaan. | 1.0 | 2. (optional) Allow pruning - > 2. Indien haalbaar binnen de gestelde randvoorwaarden: Mogelijkheid ontwikkelen tot pruning (cq. snoeien) om de Concept Map te kunnen vereenvoudigen. Kanttekening daarbij is dat de pruning-informatie niet verloren mag gaan. | non_main | optional allow pruning indien haalbaar binnen de gestelde randvoorwaarden mogelijkheid ontwikkelen tot pruning cq snoeien om de concept map te kunnen vereenvoudigen kanttekening daarbij is dat de pruning informatie niet verloren mag gaan | 0 |
5,477 | 27,364,302,527 | IssuesEvent | 2023-02-27 17:57:51 | microsoft/mu_feature_mm_supv | https://api.github.com/repos/microsoft/mu_feature_mm_supv | closed | [Bug]: Fragmented runtime buffer due to MM PEI initialization | type:bug state:needs-maintainer-feedback state:needs-triage urgency:medium | ### Is there an existing issue for this?
- [X] I have searched existing issues
### Current Behavior
The PEI initialization would require communicate buffer to be allocated during PEI phase and unblocked by supervisor. This is causing the system to have fragmented memory map on the runtime memory type.
### Expected Behavior
The runtime memory should hold consistent and not fragmented.
### Steps To Reproduce
Run the test point toolkit, it will report error.
### Build Environment
```markdown
- OS(s):
- Tool Chain(s):
- Targets Impacted:
```
### Version Information
```text
7.002 or lower.
```
### Urgency
Medium
### Are you going to fix this?
I will fix it
### Do you need maintainer feedback?
Maintainer feedback requested
### Anything else?
_No response_ | True | [Bug]: Fragmented runtime buffer due to MM PEI initialization - ### Is there an existing issue for this?
- [X] I have searched existing issues
### Current Behavior
The PEI initialization would require communicate buffer to be allocated during PEI phase and unblocked by supervisor. This is causing the system to have fragmented memory map on the runtime memory type.
### Expected Behavior
The runtime memory should hold consistent and not fragmented.
### Steps To Reproduce
Run the test point toolkit, it will report error.
### Build Environment
```markdown
- OS(s):
- Tool Chain(s):
- Targets Impacted:
```
### Version Information
```text
7.002 or lower.
```
### Urgency
Medium
### Are you going to fix this?
I will fix it
### Do you need maintainer feedback?
Maintainer feedback requested
### Anything else?
_No response_ | main | fragmented runtime buffer due to mm pei initialization is there an existing issue for this i have searched existing issues current behavior the pei initialization would require communicate buffer to be allocated during pei phase and unblocked by supervisor this is causing the system to have fragmented memory map on the runtime memory type expected behavior the runtime memory should hold consistent and not fragmented steps to reproduce run the test point toolkit it will report error build environment markdown os s tool chain s targets impacted version information text or lower urgency medium are you going to fix this i will fix it do you need maintainer feedback maintainer feedback requested anything else no response | 1 |
4,256 | 21,102,810,495 | IssuesEvent | 2022-04-04 15:51:21 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | carbon-components.min.css and carbon-components.min.js not found | status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
carbon-components
### Browser
_No response_
### Operating System
_No response_
### Package version
11.0.0
### React version
_No response_
### Automated testing tool and ruleset
not needed
### Assistive technology
_No response_
### Description
https://unpkg.com/carbon-components/scripts/carbon-components.min.js
https://unpkg.com/carbon-components/css/carbon-components.min.css
### WCAG 2.1 Violation
_No response_
### CodeSandbox example
not needed
### Steps to reproduce
open unpkg.com links, files not found
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | carbon-components.min.css and carbon-components.min.js not found - ### Package
carbon-components
### Browser
_No response_
### Operating System
_No response_
### Package version
11.0.0
### React version
_No response_
### Automated testing tool and ruleset
not needed
### Assistive technology
_No response_
### Description
https://unpkg.com/carbon-components/scripts/carbon-components.min.js
https://unpkg.com/carbon-components/css/carbon-components.min.css
### WCAG 2.1 Violation
_No response_
### CodeSandbox example
not needed
### Steps to reproduce
open unpkg.com links, files not found
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | carbon components min css and carbon components min js not found package carbon components browser no response operating system no response package version react version no response automated testing tool and ruleset not needed assistive technology no response description wcag violation no response codesandbox example not needed steps to reproduce open unpkg com links files not found code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
68,549 | 13,153,887,307 | IssuesEvent | 2020-08-10 05:10:39 | numbersprotocol/capture-lite | https://api.github.com/repos/numbersprotocol/capture-lite | opened | Poor Performance on Mobile Device | code priority:high uiux | Extremely poor performance on mobile devices. Need time to investigate the root cause.
- It is highly possible that the `Filesystem` on the Android device is not fast enough to rapidly query multiple times.
- Possible fix: merge the information tuples into a large json file.
- Possible fix: use sqlite instead.
- Media data use base64 encoding to pass around instead of file object with source URI. | 1.0 | Poor Performance on Mobile Device - Extremely poor performance on mobile devices. Need time to investigate the root cause.
- It is highly possible that the `Filesystem` on the Android device is not fast enough to rapidly query multiple times.
- Possible fix: merge the information tuples into a large json file.
- Possible fix: use sqlite instead.
- Media data use base64 encoding to pass around instead of file object with source URI. | non_main | poor performance on mobile device extremely poor performance on mobile devices need time to investigate the root cause it is highly possible that the filesystem on the android device is not fast enough to rapidly query multiple times possible fix merge the information tuples into a large json file possible fix use sqlite instead media data use encoding to pass around instead of file object with source uri | 0 |
3,146 | 12,067,238,639 | IssuesEvent | 2020-04-16 13:05:59 | ipfs-shipyard/ipfs-companion | https://api.github.com/repos/ipfs-shipyard/ipfs-companion | opened | Support for mozilla-mobile/fenix | P2 area/android area/firefox dif/easy effort/hours help wanted kind/maintenance need/analysis need/maintainer-input | Mozilla ships multiple products on mobile and it is very confusing to understand what is what, and where IPFS Companion can run.
Below are mu notes to save everyone some time + next steps.
## The saga of Android browsers from Mozilla
### :snowflake: Fennec
In the past, there was "Firefox for Android" and "Firefox for Android Beta", both based on "Fennec" codebase, for which active development stopped around Firefox 68.
Cool thing about "Fennec" browser was that is support nearly all WebExtension APIs and could run the same extensions as Desktop Firefox, including IPFS Companion.
IPFS Companion runs fine in "Fennec" runtime. User could run local IPFS node (eg. unofficial [SweetIPFS](https://play.google.com/store/apps/details?id=fr.rhaz.ipfs.sweet&hl=en)) or switch Companion to use embedded js-ipfs, which did not provide gateway, but worked fine for quick uploads on mobile(!).
### :fire: Fenix
"Fenix" is an early version of an experimental Firefox browser for Android. Available on the Android Play store under "Firefox Preview" name, built on [GeckoView](https://mozilla.github.io/geckoview/), aims to replace "Fennec" on Android.
Key issue with "Fenix" was that is did not support WebExtensions at all. Part of users was really vocal about that shortcoming and a plan to bring them back on mobile can be tracked in [mozilla-mobile/fenix#5315](https://github.com/mozilla-mobile/fenix/issues/5315).
As I am writing this memo, Fenix still does not support installing arbitrary WebExtensions. Only a minimal set of APIs was whitelisted to enable extensions such as uBlock Origin to function.
### :anger: Fennec → Fenix = No IPFS Companion on Android.. for now
**TL;DR** from [FAQ for extension support in new Firefox for Android](https://blog.mozilla.org/addons/2020/02/11/faq-for-extension-support-in-new-firefox-for-android/):
- Mozilla got serious about deprecating "Fennec" runtime and started the process by switching "Firefox for Android Beta" from "Fennec" to "Fenix".
- Only a few extensions are whitelisted, everything else is disabled for now
- There is no ETA when it will be possible to install IPFS Companion on "Fenix"
- ..but there are instructions on [how you test webextension against org.mozilla.fenix.nightly](https://github.com/mozilla-mobile/fenix/issues/5315#issuecomment-592082109) to smoke-test and tell if our extension is ready.
### :memo: Next steps
This is not a priority, but we should eventually:
- [ ] try Companion with org.mozilla.fenix.nightly on Android ([following these steps](https://github.com/mozilla-mobile/fenix/issues/5315#issuecomment-592082109))
(this is something anyone with spare time can do)
- see what breaks due to missing APIs, comment below
- if nothing breaks, even better, comment below
- [ ] if everything works, reach out to Mozilla ask if there is anything else we need to do for whitelisting of IPFS Companion on Mobile. | True | Support for mozilla-mobile/fenix - Mozilla ships multiple products on mobile and it is very confusing to understand what is what, and where IPFS Companion can run.
Below are mu notes to save everyone some time + next steps.
## The saga of Android browsers from Mozilla
### :snowflake: Fennec
In the past, there was "Firefox for Android" and "Firefox for Android Beta", both based on "Fennec" codebase, for which active development stopped around Firefox 68.
Cool thing about "Fennec" browser was that is support nearly all WebExtension APIs and could run the same extensions as Desktop Firefox, including IPFS Companion.
IPFS Companion runs fine in "Fennec" runtime. User could run local IPFS node (eg. unofficial [SweetIPFS](https://play.google.com/store/apps/details?id=fr.rhaz.ipfs.sweet&hl=en)) or switch Companion to use embedded js-ipfs, which did not provide gateway, but worked fine for quick uploads on mobile(!).
### :fire: Fenix
"Fenix" is an early version of an experimental Firefox browser for Android. Available on the Android Play store under "Firefox Preview" name, built on [GeckoView](https://mozilla.github.io/geckoview/), aims to replace "Fennec" on Android.
Key issue with "Fenix" was that is did not support WebExtensions at all. Part of users was really vocal about that shortcoming and a plan to bring them back on mobile can be tracked in [mozilla-mobile/fenix#5315](https://github.com/mozilla-mobile/fenix/issues/5315).
As I am writing this memo, Fenix still does not support installing arbitrary WebExtensions. Only a minimal set of APIs was whitelisted to enable extensions such as uBlock Origin to function.
### :anger: Fennec → Fenix = No IPFS Companion on Android.. for now
**TL;DR** from [FAQ for extension support in new Firefox for Android](https://blog.mozilla.org/addons/2020/02/11/faq-for-extension-support-in-new-firefox-for-android/):
- Mozilla got serious about deprecating "Fennec" runtime and started the process by switching "Firefox for Android Beta" from "Fennec" to "Fenix".
- Only a few extensions are whitelisted, everything else is disabled for now
- There is no ETA when it will be possible to install IPFS Companion on "Fenix"
- ..but there are instructions on [how you test webextension against org.mozilla.fenix.nightly](https://github.com/mozilla-mobile/fenix/issues/5315#issuecomment-592082109) to smoke-test and tell if our extension is ready.
### :memo: Next steps
This is not a priority, but we should eventually:
- [ ] try Companion with org.mozilla.fenix.nightly on Android ([following these steps](https://github.com/mozilla-mobile/fenix/issues/5315#issuecomment-592082109))
(this is something anyone with spare time can do)
- see what breaks due to missing APIs, comment below
- if nothing breaks, even better, comment below
- [ ] if everything works, reach out to Mozilla ask if there is anything else we need to do for whitelisting of IPFS Companion on Mobile. | main | support for mozilla mobile fenix mozilla ships multiple products on mobile and it is very confusing to understand what is what and where ipfs companion can run below are mu notes to save everyone some time next steps the saga of android browsers from mozilla snowflake fennec in the past there was firefox for android and firefox for android beta both based on fennec codebase for which active development stopped around firefox cool thing about fennec browser was that is support nearly all webextension apis and could run the same extensions as desktop firefox including ipfs companion ipfs companion runs fine in fennec runtime user could run local ipfs node eg unofficial or switch companion to use embedded js ipfs which did not provide gateway but worked fine for quick uploads on mobile fire fenix fenix is an early version of an experimental firefox browser for android available on the android play store under firefox preview name built on aims to replace fennec on android key issue with fenix was that is did not support webextensions at all part of users was really vocal about that shortcoming and a plan to bring them back on mobile can be tracked in as i am writing this memo fenix still does not support installing arbitrary webextensions only a minimal set of apis was whitelisted to enable extensions such as ublock origin to function anger fennec → fenix no ipfs companion on android for now tl dr from mozilla got serious about deprecating fennec runtime and started the process by switching firefox for android beta from fennec to fenix only a few extensions are whitelisted everything else is disabled for now there is no eta when it will be possible to install ipfs companion on fenix but there are instructions on to smoke test and tell if our extension is ready memo next steps this is not a priority but we should eventually try companion with org mozilla fenix nightly on android this is something anyone with spare time can do see what breaks due to missing apis comment below if nothing breaks even better comment below if everything works reach out to mozilla ask if there is anything else we need to do for whitelisting of ipfs companion on mobile | 1 |
172,877 | 27,345,673,735 | IssuesEvent | 2023-02-27 04:35:42 | AlphaWallet/alpha-wallet-ios | https://api.github.com/repos/AlphaWallet/alpha-wallet-ios | closed | Remove mutually exclusive mainnet and testnet modes so testnets can be selected once they are enabled | Design Phase 2: To consider | **Zeplin:**
https://zpl.io/g8oZleA
**Changes:**
- Change the headline to "Select Active Networks"
- Update the plus icon https://zpl.io/beqJEqq
- Update the network icons https://zpl.io/agE1LeZ
- When you toggle Testnet on, you will see a warning about the Monopoly money
- When you hit OK on the warning screen, the list of networks expands by testnet networks
- We cancel Testnet mode where you only see Mainnet or Testnet at a time. They can be both displayed, so we are coming back to the same flow as previously
- At the bottom of the list you have a Browse More button, it takes you to the same screen as plus button at the top - browse all other networks
- Add placeholder network icons for all remaining networks - https://zpl.io/jZE1A5m
| 1.0 | Remove mutually exclusive mainnet and testnet modes so testnets can be selected once they are enabled - **Zeplin:**
https://zpl.io/g8oZleA
**Changes:**
- Change the headline to "Select Active Networks"
- Update the plus icon https://zpl.io/beqJEqq
- Update the network icons https://zpl.io/agE1LeZ
- When you toggle Testnet on, you will see a warning about the Monopoly money
- When you hit OK on the warning screen, the list of networks expands by testnet networks
- We cancel Testnet mode where you only see Mainnet or Testnet at a time. They can be both displayed, so we are coming back to the same flow as previously
- At the bottom of the list you have a Browse More button, it takes you to the same screen as plus button at the top - browse all other networks
- Add placeholder network icons for all remaining networks - https://zpl.io/jZE1A5m
| non_main | remove mutually exclusive mainnet and testnet modes so testnets can be selected once they are enabled zeplin changes change the headline to select active networks update the plus icon update the network icons when you toggle testnet on you will see a warning about the monopoly money when you hit ok on the warning screen the list of networks expands by testnet networks we cancel testnet mode where you only see mainnet or testnet at a time they can be both displayed so we are coming back to the same flow as previously at the bottom of the list you have a browse more button it takes you to the same screen as plus button at the top browse all other networks add placeholder network icons for all remaining networks | 0 |
518,745 | 15,033,978,062 | IssuesEvent | 2021-02-02 12:15:19 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | littlefs: Too small heap for file cache. | area: File System bug priority: low | **Describe the bug**
fs_open for CONFIG_FS_LITTLEFS_NUM_FILESth file returns -ENOMEM.
I.e. CONFIG_FS_LITTLEFS_NUM_FILES=3 and when fs_open is caled for 3rd file it returns -ENOMEM.
After incrementing CONFIG_FS_LITTLEFS_NUM_FILES and opening only CONFIG_FS_LITTLEFS_NUM_FILES-1 files problem is not present.
After increasing file_cache_pool (in littlefs_fs.c) even by 128 bytes problem is not present (CONFIG_FS_LITTLEFS_NUM_FILES=3).
**To Reproduce**
Try to open at the same time CONFIG_FS_LITTLEFS_NUM_FILES number of files on littleFS.
**Expected behavior**
No error from fs_open.
**Impact**
annoyance
**Logs and console output**
None
**Environment (please complete the following information):**
- OS: Linux
- ToolchainZephyr SDK 0.12.1
- Commit SHA c31ce55c583a3f68a6d4b815b1693d83098e2ab3
**Additional context**
Maybe commit [fcd392f](https://github.com/zephyrproject-rtos/zephyr/commit/fcd392f6ceff5d552d52b3337f373ed8ed14a30e#diff-c288df0c39e91d11f900c71b4978331560d72d79faf18fef58c65531f420a914) caused this problem because heap size available to user != declared size of heap.
| 1.0 | littlefs: Too small heap for file cache. - **Describe the bug**
fs_open for CONFIG_FS_LITTLEFS_NUM_FILESth file returns -ENOMEM.
I.e. CONFIG_FS_LITTLEFS_NUM_FILES=3 and when fs_open is caled for 3rd file it returns -ENOMEM.
After incrementing CONFIG_FS_LITTLEFS_NUM_FILES and opening only CONFIG_FS_LITTLEFS_NUM_FILES-1 files problem is not present.
After increasing file_cache_pool (in littlefs_fs.c) even by 128 bytes problem is not present (CONFIG_FS_LITTLEFS_NUM_FILES=3).
**To Reproduce**
Try to open at the same time CONFIG_FS_LITTLEFS_NUM_FILES number of files on littleFS.
**Expected behavior**
No error from fs_open.
**Impact**
annoyance
**Logs and console output**
None
**Environment (please complete the following information):**
- OS: Linux
- ToolchainZephyr SDK 0.12.1
- Commit SHA c31ce55c583a3f68a6d4b815b1693d83098e2ab3
**Additional context**
Maybe commit [fcd392f](https://github.com/zephyrproject-rtos/zephyr/commit/fcd392f6ceff5d552d52b3337f373ed8ed14a30e#diff-c288df0c39e91d11f900c71b4978331560d72d79faf18fef58c65531f420a914) caused this problem because heap size available to user != declared size of heap.
| non_main | littlefs too small heap for file cache describe the bug fs open for config fs littlefs num filesth file returns enomem i e config fs littlefs num files and when fs open is caled for file it returns enomem after incrementing config fs littlefs num files and opening only config fs littlefs num files files problem is not present after increasing file cache pool in littlefs fs c even by bytes problem is not present config fs littlefs num files to reproduce try to open at the same time config fs littlefs num files number of files on littlefs expected behavior no error from fs open impact annoyance logs and console output none environment please complete the following information os linux toolchainzephyr sdk commit sha additional context maybe commit caused this problem because heap size available to user declared size of heap | 0 |
5,624 | 28,138,319,447 | IssuesEvent | 2023-04-01 16:49:31 | scott-ainsworth/dotnet-eithers | https://api.github.com/repos/scott-ainsworth/dotnet-eithers | closed | Refactoring and cleanup | quality maintainability | *Issue*: Running [SonarLint](https://docs.sonarcloud.io/improving/sonarlint/) identified minor problems and some potential refactoring.
*Requirements*: Refactor and remove unnecessary code as needed.
- Remove redundant function overrides (particularly `object.GetHashCode()`) and suppress resulting false positive SonarLint diagnostics.
- Remove unnecessary `#nullable enable`s.
- Simplify types.
- Change all line endings to NL.
- Fix misspellings.
- Readability tweaks. | True | Refactoring and cleanup - *Issue*: Running [SonarLint](https://docs.sonarcloud.io/improving/sonarlint/) identified minor problems and some potential refactoring.
*Requirements*: Refactor and remove unnecessary code as needed.
- Remove redundant function overrides (particularly `object.GetHashCode()`) and suppress resulting false positive SonarLint diagnostics.
- Remove unnecessary `#nullable enable`s.
- Simplify types.
- Change all line endings to NL.
- Fix misspellings.
- Readability tweaks. | main | refactoring and cleanup issue running identified minor problems and some potential refactoring requirements refactor and remove unnecessary code as needed remove redundant function overrides particularly object gethashcode and suppress resulting false positive sonarlint diagnostics remove unnecessary nullable enable s simplify types change all line endings to nl fix misspellings readability tweaks | 1 |
4,804 | 24,746,605,299 | IssuesEvent | 2022-10-21 10:13:00 | toolbx-images/images | https://api.github.com/repos/toolbx-images/images | opened | Add distribution: Ubuntu 22.10 | new-image-request maintainers-wanted | ### Distribution name and versions requested
Ubuntu 22.10
### Where are the official container images from the distribution published?
N/A
### Will you be interested in maintaining this image?
No | True | Add distribution: Ubuntu 22.10 - ### Distribution name and versions requested
Ubuntu 22.10
### Where are the official container images from the distribution published?
N/A
### Will you be interested in maintaining this image?
No | main | add distribution ubuntu distribution name and versions requested ubuntu where are the official container images from the distribution published n a will you be interested in maintaining this image no | 1 |
1,925 | 6,588,340,653 | IssuesEvent | 2017-09-14 02:28:33 | tomchentw/react-google-maps | https://api.github.com/repos/tomchentw/react-google-maps | closed | Use React Storybook to demonstrate components | CALL_FOR_MAINTAINERS | Hi @tomchentw ,
We can use [react-storybook](https://github.com/kadirahq/react-storybook/) to demonstrate some use cases for each component. I can work on this issue if you're interested.
| True | Use React Storybook to demonstrate components - Hi @tomchentw ,
We can use [react-storybook](https://github.com/kadirahq/react-storybook/) to demonstrate some use cases for each component. I can work on this issue if you're interested.
| main | use react storybook to demonstrate components hi tomchentw we can use to demonstrate some use cases for each component i can work on this issue if you re interested | 1 |
438 | 3,561,251,161 | IssuesEvent | 2016-01-23 17:35:12 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Proposal: get rid of `license` | awaiting maintainer feedback core discussion | The `license` stanza was [thought out in the context of fonts](https://github.com/caskroom/homebrew-cask/pull/1860#issuecomment-30534010) and is nothing but [a curiosity for apps](https://github.com/caskroom/homebrew-cask/issues/8169#issuecomment-67259483) (especially we not being about discoverability, and all).
`license` is (at the time of this writing) missing from just 284 casks in the main repo (from a total of 2937).
Rationally, though, there’s no context in which we currently use its information, nor is any real context really planned, nor is anyone eager to do it. `license` is just there, it exists. However, I argue the `license` stanza is worse than useless, it’s actively harmful. For one, it is required, which makes absolutely no sense for user taps (which I believe we should encourage and have many times been vocal in thinking it is one of the best features of homebrew), but it also delays submissions, as users not filling it up/filling it up wrongly it’s a frequent point of friction for contributions.
Pinging @caskroom/maintainers, and I’ll specifically ping @victorpopkov and @Amorymeltzer as they both did a bunch of these. | True | Proposal: get rid of `license` - The `license` stanza was [thought out in the context of fonts](https://github.com/caskroom/homebrew-cask/pull/1860#issuecomment-30534010) and is nothing but [a curiosity for apps](https://github.com/caskroom/homebrew-cask/issues/8169#issuecomment-67259483) (especially we not being about discoverability, and all).
`license` is (at the time of this writing) missing from just 284 casks in the main repo (from a total of 2937).
Rationally, though, there’s no context in which we currently use its information, nor is any real context really planned, nor is anyone eager to do it. `license` is just there, it exists. However, I argue the `license` stanza is worse than useless, it’s actively harmful. For one, it is required, which makes absolutely no sense for user taps (which I believe we should encourage and have many times been vocal in thinking it is one of the best features of homebrew), but it also delays submissions, as users not filling it up/filling it up wrongly it’s a frequent point of friction for contributions.
Pinging @caskroom/maintainers, and I’ll specifically ping @victorpopkov and @Amorymeltzer as they both did a bunch of these. | main | proposal get rid of license the license stanza was and is nothing but especially we not being about discoverability and all license is at the time of this writing missing from just casks in the main repo from a total of rationally though there’s no context in which we currently use its information nor is any real context really planned nor is anyone eager to do it license is just there it exists however i argue the license stanza is worse than useless it’s actively harmful for one it is required which makes absolutely no sense for user taps which i believe we should encourage and have many times been vocal in thinking it is one of the best features of homebrew but it also delays submissions as users not filling it up filling it up wrongly it’s a frequent point of friction for contributions pinging caskroom maintainers and i’ll specifically ping victorpopkov and amorymeltzer as they both did a bunch of these | 1 |
5,632 | 28,292,047,443 | IssuesEvent | 2023-04-09 10:31:25 | arcticicestudio/nord-docs | https://api.github.com/repos/arcticicestudio/nord-docs | closed | `nordtheme` organization migration | scope-maintainability context-workflow | <p align="center">
<a href="https://www.nordtheme.com" target="_blank">
<picture>
<source srcset="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/dark/frostic/nord3/spaced.svg?sanitize=true" width="40%" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" />
<source srcset="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/light/frostic/nord6/spaced.svg?sanitize=true" width="40%" media="(prefers-color-scheme: dark)" />
<img src="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/dark/frostic/nord3/spaced.svg?sanitize=true" width="40%" />
</picture>
</a>
</p>
As part of the [“Northern Post — The state and roadmap of Nord“][1] announcement, this repository will be migrated to [the `nordtheme` GitHub organization][2].
This issue is a task of nordtheme/nord#185 epic ([_tasklist_][3]).
[1]: https://github.com/orgs/nordtheme/discussions/183
[2]: https://github.com/nordtheme
[3]: https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/about-task-lists | True | `nordtheme` organization migration - <p align="center">
<a href="https://www.nordtheme.com" target="_blank">
<picture>
<source srcset="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/dark/frostic/nord3/spaced.svg?sanitize=true" width="40%" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" />
<source srcset="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/light/frostic/nord6/spaced.svg?sanitize=true" width="40%" media="(prefers-color-scheme: dark)" />
<img src="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/dark/frostic/nord3/spaced.svg?sanitize=true" width="40%" />
</picture>
</a>
</p>
As part of the [“Northern Post — The state and roadmap of Nord“][1] announcement, this repository will be migrated to [the `nordtheme` GitHub organization][2].
This issue is a task of nordtheme/nord#185 epic ([_tasklist_][3]).
[1]: https://github.com/orgs/nordtheme/discussions/183
[2]: https://github.com/nordtheme
[3]: https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/about-task-lists | main | nordtheme organization migration as part of the announcement this repository will be migrated to this issue is a task of nordtheme nord epic | 1 |
35,293 | 7,687,929,213 | IssuesEvent | 2018-05-17 07:50:09 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | SelectOneMenu: after refreshing the widget in an AJAX request the popup opens at wrong position | defect | If an AJAX request updates a SelectOneMenu widget and user opens the popup of that SelectOneMenu the popup opens at wrong position: top left corner of the body instead of aligned to the widget.
Reason:
In PF 6.1 SelectOneMenu had following function, called from init():
```
appendPanel: function() {
var container = this.cfg.appendTo ? PrimeFaces.expressions.SearchExpressionFacade.resolveComponentsAsSelector(this.cfg.appendTo): $(document.body);
if(!container.is(this.jq)) {
**container.children(this.panelId).remove();**
this.panel.appendTo(container);
}
}
```
This removed the old popup panel ('..._panel') before appending the new panel.
In PF 6.2 this method was removed and instead init() contains following code:
`PrimeFaces.utils.registerDynamicOverlay(this, this.panel, this.id + '_panel');`
The passed argument "this.panel" is a two element list, because `this.panel = $(this.panelId)` finds the old and the new popup panel. `PrimeFaces.utils.registerDynamicOverlay` doesn't work correctly with such a two element list: it calls `PrimeFaces.utils.appendDynamicOverlay` which does nothing because if `(!elementParent.is(appendTo) && !appendTo.is(overlay))` returns false for the two element `overlay` var.
## 1) Environment
- PrimeFaces version: 6.2.2
- Does it work on the newest released PrimeFaces version? No
- Does it work on the newest sources in GitHub? No
- Application server + version: Tomcat 8 (arbitrary)
- Affected browsers: IE, FF, Chrome (arbitrary)
## 2) Expected behavior
the popup panel opens aligned to the widget (as normal for a select one dropdown widget)
## 3) Actual behavior
the popup panel opens at the top left corner of the HTML body
## 4) Steps to reproduce
perform an AJAX call which updates the SelectOneMenu widget in it's response. Then open the SelectOneMenu's popup panel via clicking the widget.
## 5) Sample XHTML
```
<p:selectOneMenu id="mySom" value="#{bean.mySom}" effect="none">
<f:selectItems value="#{bean.selectItemsMySom}" />
</p:selectOneMenu>
<p:commandButton id="updateMySom" ajax="true" value="AJAX Action" action="#{bean.updateMySom}" process="@all" update="mySom" />
```
## 6) Sample bean
straight forward
| 1.0 | SelectOneMenu: after refreshing the widget in an AJAX request the popup opens at wrong position - If an AJAX request updates a SelectOneMenu widget and user opens the popup of that SelectOneMenu the popup opens at wrong position: top left corner of the body instead of aligned to the widget.
Reason:
In PF 6.1 SelectOneMenu had following function, called from init():
```
appendPanel: function() {
var container = this.cfg.appendTo ? PrimeFaces.expressions.SearchExpressionFacade.resolveComponentsAsSelector(this.cfg.appendTo): $(document.body);
if(!container.is(this.jq)) {
**container.children(this.panelId).remove();**
this.panel.appendTo(container);
}
}
```
This removed the old popup panel ('..._panel') before appending the new panel.
In PF 6.2 this method was removed and instead init() contains following code:
`PrimeFaces.utils.registerDynamicOverlay(this, this.panel, this.id + '_panel');`
The passed argument "this.panel" is a two element list, because `this.panel = $(this.panelId)` finds the old and the new popup panel. `PrimeFaces.utils.registerDynamicOverlay` doesn't work correctly with such a two element list: it calls `PrimeFaces.utils.appendDynamicOverlay` which does nothing because if `(!elementParent.is(appendTo) && !appendTo.is(overlay))` returns false for the two element `overlay` var.
## 1) Environment
- PrimeFaces version: 6.2.2
- Does it work on the newest released PrimeFaces version? No
- Does it work on the newest sources in GitHub? No
- Application server + version: Tomcat 8 (arbitrary)
- Affected browsers: IE, FF, Chrome (arbitrary)
## 2) Expected behavior
the popup panel opens aligned to the widget (as normal for a select one dropdown widget)
## 3) Actual behavior
the popup panel opens at the top left corner of the HTML body
## 4) Steps to reproduce
perform an AJAX call which updates the SelectOneMenu widget in it's response. Then open the SelectOneMenu's popup panel via clicking the widget.
## 5) Sample XHTML
```
<p:selectOneMenu id="mySom" value="#{bean.mySom}" effect="none">
<f:selectItems value="#{bean.selectItemsMySom}" />
</p:selectOneMenu>
<p:commandButton id="updateMySom" ajax="true" value="AJAX Action" action="#{bean.updateMySom}" process="@all" update="mySom" />
```
## 6) Sample bean
straight forward
| non_main | selectonemenu after refreshing the widget in an ajax request the popup opens at wrong position if an ajax request updates a selectonemenu widget and user opens the popup of that selectonemenu the popup opens at wrong position top left corner of the body instead of aligned to the widget reason in pf selectonemenu had following function called from init appendpanel function var container this cfg appendto primefaces expressions searchexpressionfacade resolvecomponentsasselector this cfg appendto document body if container is this jq container children this panelid remove this panel appendto container this removed the old popup panel panel before appending the new panel in pf this method was removed and instead init contains following code primefaces utils registerdynamicoverlay this this panel this id panel the passed argument this panel is a two element list because this panel this panelid finds the old and the new popup panel primefaces utils registerdynamicoverlay doesn t work correctly with such a two element list it calls primefaces utils appenddynamicoverlay which does nothing because if elementparent is appendto appendto is overlay returns false for the two element overlay var environment primefaces version does it work on the newest released primefaces version no does it work on the newest sources in github no application server version tomcat arbitrary affected browsers ie ff chrome arbitrary expected behavior the popup panel opens aligned to the widget as normal for a select one dropdown widget actual behavior the popup panel opens at the top left corner of the html body steps to reproduce perform an ajax call which updates the selectonemenu widget in it s response then open the selectonemenu s popup panel via clicking the widget sample xhtml sample bean straight forward | 0 |
4,828 | 24,892,390,202 | IssuesEvent | 2022-10-28 13:09:13 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Improve dependency management for the frontend | enhancement maintainability | **Problem**
We currently ship frontend javascript libraries in the repository itself. These libraries have not been updated for a while and they sometimes get patched in various ways. It would be cleaner to use a proper dependency management system for this, just like we have moved from Ant to Maven for the backend.
**Possibilities**
* npm is the standard javascript package manager. We could try to use this. This would mean that dependencies are configured in a `package.json` and are pulled in at compile time using npm. The PR #2418 demonstrates some of that. This means that in addition to requiring Maven for build, we would also need npm. We should look into downloading npm on the fly in the `refine(.bat)` wrappers, which currently do this for Maven itself.
* webjars offer a way to use Maven's existing dependency management system to fetch Javascript libraries. This would let us use a single dependency management system, but the selection and freshness of packages is a bit more restricted.
* anything else? | True | Improve dependency management for the frontend - **Problem**
We currently ship frontend javascript libraries in the repository itself. These libraries have not been updated for a while and they sometimes get patched in various ways. It would be cleaner to use a proper dependency management system for this, just like we have moved from Ant to Maven for the backend.
**Possibilities**
* npm is the standard javascript package manager. We could try to use this. This would mean that dependencies are configured in a `package.json` and are pulled in at compile time using npm. The PR #2418 demonstrates some of that. This means that in addition to requiring Maven for build, we would also need npm. We should look into downloading npm on the fly in the `refine(.bat)` wrappers, which currently do this for Maven itself.
* webjars offer a way to use Maven's existing dependency management system to fetch Javascript libraries. This would let us use a single dependency management system, but the selection and freshness of packages is a bit more restricted.
* anything else? | main | improve dependency management for the frontend problem we currently ship frontend javascript libraries in the repository itself these libraries have not been updated for a while and they sometimes get patched in various ways it would be cleaner to use a proper dependency management system for this just like we have moved from ant to maven for the backend possibilities npm is the standard javascript package manager we could try to use this this would mean that dependencies are configured in a package json and are pulled in at compile time using npm the pr demonstrates some of that this means that in addition to requiring maven for build we would also need npm we should look into downloading npm on the fly in the refine bat wrappers which currently do this for maven itself webjars offer a way to use maven s existing dependency management system to fetch javascript libraries this would let us use a single dependency management system but the selection and freshness of packages is a bit more restricted anything else | 1 |
5,659 | 29,181,513,068 | IssuesEvent | 2023-05-19 12:18:39 | jupyter-naas/awesome-notebooks | https://api.github.com/repos/jupyter-naas/awesome-notebooks | closed | Gmail - Create draft email | templates maintainer | This notebook will show how to create a draft email using the Gmail API. It is usefull for organizations that need to automate the creation of emails.
| True | Gmail - Create draft email - This notebook will show how to create a draft email using the Gmail API. It is usefull for organizations that need to automate the creation of emails.
| main | gmail create draft email this notebook will show how to create a draft email using the gmail api it is usefull for organizations that need to automate the creation of emails | 1 |
185,241 | 14,346,406,904 | IssuesEvent | 2020-11-29 00:19:27 | fga-eps-mds/2020-1-DoctorS-Bot | https://api.github.com/repos/fga-eps-mds/2020-1-DoctorS-Bot | closed | Testes de integração | Teste | ## Descrição
Escrever os testes de integração para as funcionalidades disponíveis do bot.
## Tarefas
- [x] Teste de cadastro
- [x] Teste de login
- [x] Teste de logout
- [x] Teste de ajuda
- [x] Teste de dicas
- [x] Teste de edição de dados
- [x] Teste de visualização dos dados
## Critério de aceitação
- [x] Testes implementados e validados | 1.0 | Testes de integração - ## Descrição
Escrever os testes de integração para as funcionalidades disponíveis do bot.
## Tarefas
- [x] Teste de cadastro
- [x] Teste de login
- [x] Teste de logout
- [x] Teste de ajuda
- [x] Teste de dicas
- [x] Teste de edição de dados
- [x] Teste de visualização dos dados
## Critério de aceitação
- [x] Testes implementados e validados | non_main | testes de integração descrição escrever os testes de integração para as funcionalidades disponíveis do bot tarefas teste de cadastro teste de login teste de logout teste de ajuda teste de dicas teste de edição de dados teste de visualização dos dados critério de aceitação testes implementados e validados | 0 |
235,479 | 25,944,100,321 | IssuesEvent | 2022-12-16 21:50:46 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | closed | Consider disabling DeviceMotionEvent and DeviceOrientationEvent events by default | feature request 🌟 b:web-content Feature:Privacy&Security | ### Why/User Benefit/User Problem
Problem: The Tor browser considers the DeviceMotionEvent and DeviceOrientationEvent events a possible fingerprinting vector:
https://trac.torproject.org/projects/tor/ticket/21609
W3C discussion about requesting permission to receive device motion / orientation events:
https://github.com/w3c/deviceorientation/issues/57
### What / Requirements
Safari 12.1 on iOS added Motion & Orientation settings to enable the DeviceMotionEvent and DeviceOrientationEvent events. This setting is **disabled by default**.
https://developer.apple.com/documentation/safari_release_notes/safari_12_1_release_notes
https://twitter.com/rmondello/status/1091073298409160705
### Acceptance Criteria (how do I know when I’m done?)
DeviceMotionEvent and DeviceOrientationEvent are disabled by default. They can be enabled by a user setting like Safari 12.1.
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-12238)
| True | Consider disabling DeviceMotionEvent and DeviceOrientationEvent events by default - ### Why/User Benefit/User Problem
Problem: The Tor browser considers the DeviceMotionEvent and DeviceOrientationEvent events a possible fingerprinting vector:
https://trac.torproject.org/projects/tor/ticket/21609
W3C discussion about requesting permission to receive device motion / orientation events:
https://github.com/w3c/deviceorientation/issues/57
### What / Requirements
Safari 12.1 on iOS added Motion & Orientation settings to enable the DeviceMotionEvent and DeviceOrientationEvent events. This setting is **disabled by default**.
https://developer.apple.com/documentation/safari_release_notes/safari_12_1_release_notes
https://twitter.com/rmondello/status/1091073298409160705
### Acceptance Criteria (how do I know when I’m done?)
DeviceMotionEvent and DeviceOrientationEvent are disabled by default. They can be enabled by a user setting like Safari 12.1.
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-12238)
| non_main | consider disabling devicemotionevent and deviceorientationevent events by default why user benefit user problem problem the tor browser considers the devicemotionevent and deviceorientationevent events a possible fingerprinting vector discussion about requesting permission to receive device motion orientation events what requirements safari on ios added motion orientation settings to enable the devicemotionevent and deviceorientationevent events this setting is disabled by default acceptance criteria how do i know when i’m done devicemotionevent and deviceorientationevent are disabled by default they can be enabled by a user setting like safari ┆issue is synchronized with this | 0 |
325,128 | 27,849,619,513 | IssuesEvent | 2023-03-20 17:44:24 | eclipse-openj9/openj9 | https://api.github.com/repos/eclipse-openj9/openj9 | opened | JDK20 cmdLineTester_oracleSecurityTest_0_FAILED Case 5: C_M.class = Modified C.class, ACC_SUPER is removed, invokeSpecial to A.foo() | comp:vm test failure jdk20 | Failure link
------------
From [an internal build](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk20_j9_extended.functional_s390x_linux_Personal/1/consoleFull)(`fyrlx11l`):
```
09:02:09 openjdk version "20-internal" 2023-03-21
09:02:09 OpenJDK Runtime Environment (build 20-internal-adhoc.jenkins.BuildJDK20s390xlinuxPersonal)
09:02:09 Eclipse OpenJ9 VM (build master-8cc902bc75f, JRE 20 Linux s390x-64-Bit Compressed References 20230316_13 (JIT enabled, AOT enabled)
09:02:09 OpenJ9 - 8cc902bc75f
09:02:09 OMR - 035ec68ec0c
09:02:09 JCL - 10a35739269 based on jdk-20+36)
```
[Rerun in Grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?SDK_RESOURCE=customized&TARGET=testList+TESTLIST=cmdLineTester_oracleSecurityTest_0&TEST_FLAG=&UPSTREAM_TEST_JOB_NAME=&DOCKER_REQUIRED=false&ACTIVE_NODE_TIMEOUT=&VENDOR_TEST_DIRS=functional&EXTRA_DOCKER_ARGS=&TKG_OWNER_BRANCH=adoptium%3Amaster&OPENJ9_SYSTEMTEST_OWNER_BRANCH=eclipse%3Amaster&PLATFORM=s390x_linux&GENERATE_JOBS=true&KEEP_REPORTDIR=false&PERSONAL_BUILD=false&DOCKER_REGISTRY_DIR=&ADOPTOPENJDK_REPO=https%3A%2F%2Fgithub.com%2Fadoptium%2Faqa-tests.git&RERUN_ITERATIONS=3&SETUP_JCK_RUN=false&DOCKER_REGISTRY_URL_CREDENTIAL_ID=&LABEL=&EXTRA_OPTIONS=&BUILD_IDENTIFIER=fengj%40ca.ibm.com&CUSTOMIZED_SDK_URL=https%3A%2F%2Fna-public.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2FBuild_JDK20_s390x_linux_Personal%2F13%2FOpenJ9-JDK20-s390x_linux-20230316-051333.tar.gz+https%3A%2F%2Fna-public.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2FBuild_JDK20_s390x_linux_Personal%2F13%2Ftest-images.tar.gz&JENKINS_KEY=Jenkins+4096+key&ADOPTOPENJDK_BRANCH=master&LIGHT_WEIGHT_CHECKOUT=true&USE_JRE=false&ARTIFACTORY_SERVER=na-public.artifactory.swg-devops&KEEP_WORKSPACE=false&USER_CREDENTIALS_ID=83181e25-eea4-4f55-8b3e-e79615733226&JDK_VERSION=20&DOCKER_REGISTRY_URL=&ITERATIONS=1&VENDOR_TEST_REPOS=git%40github.ibm.com%3Aruntimes%2Ftest.git&JDK_REPO=git%40github.com%3Aibmruntimes%2Fopenj9-openjdk-jdk20.git&JCK_GIT_BRANCH=main&OPENJ9_BRANCH=master&OPENJ9_SHA=8cc902bc75ffa8e24f4beef2a1f8726a383771eb&JCK_GIT_REPO=&VENDOR_TEST_BRANCHES=master&OPENJ9_REPO=git%40github.com%3Aeclipse%2Fopenj9.git&UPSTREAM_JOB_NAME=&CLOUD_PROVIDER=&CUSTOM_TARGET=&VENDOR_TEST_SHAS=034d4edd4350276a2d01aade526b242fb70f4eee&JDK_BRANCH=openj9&LABEL_ADDITION=ci.project.openj9&ARTIFACTORY_REPO=sys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com&ARTIFACTORY_ROOT_DIR=&UPSTREAM_TEST_JOB_NUMBER=&DOCKERIMAGE_TAG=&JDK_IMPL=openj9&TEST_TIME=&SSH_AGENT_CREDENTIAL=83181e25-eea4-4f55-8b3e-e79615733226&AUTO_DETECT=true&SLACK_CHANNEL=%23rt-jenkins&DYNAMIC_COMPILE=true&RELATED_NODES=&ADOPTOPENJDK_SYSTEMTEST_OWNER_BRANCH=adoptium%3Amaster&APPLICATION_OPTIONS=&CUSTOMIZED_SDK_URL_CREDENTIAL_ID=7c1c2c28-650f-49e0-afd1-ca6b60479546&ARCHIVE_TEST_RESULTS=false&NUM_MACHINES=2&OPENJDK_SHA=&TRSS_URL=http%3A%2F%2Ftrss1.fyre.ibm.com&USE_TESTENV_PROPERTIES=false&BUILD_LIST=functional&UPSTREAM_JOB_NUMBER=&STF_OWNER_BRANCH=adoptium%3Amaster&TIME_LIMIT=20&JVM_OPTIONS=&PARALLEL=Dynamic) - Change TARGET to run only the failed test targets.
Optional info
-------------
Failure output (captured from console output)
---------------------------------------------
```
variation: NoOptions
JVM_OPTIONS:
Testing: Case 5: C_M.class = Modified C.class, ACC_SUPER is removed, invokeSpecial to A.foo()
Test start time: 2023/03/16 14:55:41 Coordinated Universal Time
Running command: "/home/jenkins/workspace/Test_openjdk20_j9_extended.functional_s390x_linux_Personal_testList_1/openjdkbinary/j2sdk-image/bin/java" -cp "/home/jenkins/workspace/Test_openjdk20_j9_extended.functional_s390x_linux_Personal_testList_1/aqa-tests/TKG/../../jvmtest/functional/IBM_Internal/cmdLineTests_IBM_Internal/oracleSecurityTest/oracleSecurityTest.jar" -XX:+AllowNonVirtualCalls C_M
Time spent starting: 2 milliseconds
Time spent executing: 65 milliseconds
Test result: FAILED
Output from test:
[OUT] B.foo()
>> Success condition was found: [Return code: 0]
>> Success condition was not found: [Output match: A.foo()]
>> Failure condition was found: [Output match: B.foo()]
>> Failure condition was not found: [Output match: Unhandled Exception]
>> Failure condition was not found: [Output match: Exception:]
>> Failure condition was not found: [Output match: Processing dump event]
---TEST RESULTS---
Number of PASSED tests: 8 out of 9
Number of FAILED tests: 1 out of 9
---SUMMARY OF FAILED TESTS---
Case 5: C_M.class = Modified C.class, ACC_SUPER is removed, invokeSpecial to A.foo()
-----------------------------
-----------------------------------
cmdLineTester_oracleSecurityTest_0_FAILED
```
This failure is across platforms such as [x86-64_linux](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk20_j9_extended.functional_x86-64_linux_Personal/1/tapResults/) [ppc64le_linux](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk20_j9_extended.functional_ppc64le_linux_Personal/1/tapResults/) | 1.0 | JDK20 cmdLineTester_oracleSecurityTest_0_FAILED Case 5: C_M.class = Modified C.class, ACC_SUPER is removed, invokeSpecial to A.foo() - Failure link
------------
From [an internal build](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk20_j9_extended.functional_s390x_linux_Personal/1/consoleFull)(`fyrlx11l`):
```
09:02:09 openjdk version "20-internal" 2023-03-21
09:02:09 OpenJDK Runtime Environment (build 20-internal-adhoc.jenkins.BuildJDK20s390xlinuxPersonal)
09:02:09 Eclipse OpenJ9 VM (build master-8cc902bc75f, JRE 20 Linux s390x-64-Bit Compressed References 20230316_13 (JIT enabled, AOT enabled)
09:02:09 OpenJ9 - 8cc902bc75f
09:02:09 OMR - 035ec68ec0c
09:02:09 JCL - 10a35739269 based on jdk-20+36)
```
[Rerun in Grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?SDK_RESOURCE=customized&TARGET=testList+TESTLIST=cmdLineTester_oracleSecurityTest_0&TEST_FLAG=&UPSTREAM_TEST_JOB_NAME=&DOCKER_REQUIRED=false&ACTIVE_NODE_TIMEOUT=&VENDOR_TEST_DIRS=functional&EXTRA_DOCKER_ARGS=&TKG_OWNER_BRANCH=adoptium%3Amaster&OPENJ9_SYSTEMTEST_OWNER_BRANCH=eclipse%3Amaster&PLATFORM=s390x_linux&GENERATE_JOBS=true&KEEP_REPORTDIR=false&PERSONAL_BUILD=false&DOCKER_REGISTRY_DIR=&ADOPTOPENJDK_REPO=https%3A%2F%2Fgithub.com%2Fadoptium%2Faqa-tests.git&RERUN_ITERATIONS=3&SETUP_JCK_RUN=false&DOCKER_REGISTRY_URL_CREDENTIAL_ID=&LABEL=&EXTRA_OPTIONS=&BUILD_IDENTIFIER=fengj%40ca.ibm.com&CUSTOMIZED_SDK_URL=https%3A%2F%2Fna-public.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2FBuild_JDK20_s390x_linux_Personal%2F13%2FOpenJ9-JDK20-s390x_linux-20230316-051333.tar.gz+https%3A%2F%2Fna-public.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2FBuild_JDK20_s390x_linux_Personal%2F13%2Ftest-images.tar.gz&JENKINS_KEY=Jenkins+4096+key&ADOPTOPENJDK_BRANCH=master&LIGHT_WEIGHT_CHECKOUT=true&USE_JRE=false&ARTIFACTORY_SERVER=na-public.artifactory.swg-devops&KEEP_WORKSPACE=false&USER_CREDENTIALS_ID=83181e25-eea4-4f55-8b3e-e79615733226&JDK_VERSION=20&DOCKER_REGISTRY_URL=&ITERATIONS=1&VENDOR_TEST_REPOS=git%40github.ibm.com%3Aruntimes%2Ftest.git&JDK_REPO=git%40github.com%3Aibmruntimes%2Fopenj9-openjdk-jdk20.git&JCK_GIT_BRANCH=main&OPENJ9_BRANCH=master&OPENJ9_SHA=8cc902bc75ffa8e24f4beef2a1f8726a383771eb&JCK_GIT_REPO=&VENDOR_TEST_BRANCHES=master&OPENJ9_REPO=git%40github.com%3Aeclipse%2Fopenj9.git&UPSTREAM_JOB_NAME=&CLOUD_PROVIDER=&CUSTOM_TARGET=&VENDOR_TEST_SHAS=034d4edd4350276a2d01aade526b242fb70f4eee&JDK_BRANCH=openj9&LABEL_ADDITION=ci.project.openj9&ARTIFACTORY_REPO=sys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com&ARTIFACTORY_ROOT_DIR=&UPSTREAM_TEST_JOB_NUMBER=&DOCKERIMAGE_TAG=&JDK_IMPL=openj9&TEST_TIME=&SSH_AGENT_CREDENTIAL=83181e25-eea4-4f55-8b3e-e79615733226&AUTO_DETECT=true&SLACK_CHANNEL=%23rt-jenkins&DYNAMIC_COMPILE=true&RELATED_NODES=&ADOPTOPENJDK_SYSTEMTEST_OWNER_BRANCH=adoptium%3Amaster&APPLICATION_OPTIONS=&CUSTOMIZED_SDK_URL_CREDENTIAL_ID=7c1c2c28-650f-49e0-afd1-ca6b60479546&ARCHIVE_TEST_RESULTS=false&NUM_MACHINES=2&OPENJDK_SHA=&TRSS_URL=http%3A%2F%2Ftrss1.fyre.ibm.com&USE_TESTENV_PROPERTIES=false&BUILD_LIST=functional&UPSTREAM_JOB_NUMBER=&STF_OWNER_BRANCH=adoptium%3Amaster&TIME_LIMIT=20&JVM_OPTIONS=&PARALLEL=Dynamic) - Change TARGET to run only the failed test targets.
Optional info
-------------
Failure output (captured from console output)
---------------------------------------------
```
variation: NoOptions
JVM_OPTIONS:
Testing: Case 5: C_M.class = Modified C.class, ACC_SUPER is removed, invokeSpecial to A.foo()
Test start time: 2023/03/16 14:55:41 Coordinated Universal Time
Running command: "/home/jenkins/workspace/Test_openjdk20_j9_extended.functional_s390x_linux_Personal_testList_1/openjdkbinary/j2sdk-image/bin/java" -cp "/home/jenkins/workspace/Test_openjdk20_j9_extended.functional_s390x_linux_Personal_testList_1/aqa-tests/TKG/../../jvmtest/functional/IBM_Internal/cmdLineTests_IBM_Internal/oracleSecurityTest/oracleSecurityTest.jar" -XX:+AllowNonVirtualCalls C_M
Time spent starting: 2 milliseconds
Time spent executing: 65 milliseconds
Test result: FAILED
Output from test:
[OUT] B.foo()
>> Success condition was found: [Return code: 0]
>> Success condition was not found: [Output match: A.foo()]
>> Failure condition was found: [Output match: B.foo()]
>> Failure condition was not found: [Output match: Unhandled Exception]
>> Failure condition was not found: [Output match: Exception:]
>> Failure condition was not found: [Output match: Processing dump event]
---TEST RESULTS---
Number of PASSED tests: 8 out of 9
Number of FAILED tests: 1 out of 9
---SUMMARY OF FAILED TESTS---
Case 5: C_M.class = Modified C.class, ACC_SUPER is removed, invokeSpecial to A.foo()
-----------------------------
-----------------------------------
cmdLineTester_oracleSecurityTest_0_FAILED
```
This failure is across platforms such as [x86-64_linux](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk20_j9_extended.functional_x86-64_linux_Personal/1/tapResults/) [ppc64le_linux](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk20_j9_extended.functional_ppc64le_linux_Personal/1/tapResults/) | non_main | cmdlinetester oraclesecuritytest failed case c m class modified c class acc super is removed invokespecial to a foo failure link from openjdk version internal openjdk runtime environment build internal adhoc jenkins eclipse vm build master jre linux bit compressed references jit enabled aot enabled omr jcl based on jdk change target to run only the failed test targets optional info failure output captured from console output variation nooptions jvm options testing case c m class modified c class acc super is removed invokespecial to a foo test start time coordinated universal time running command home jenkins workspace test extended functional linux personal testlist openjdkbinary image bin java cp home jenkins workspace test extended functional linux personal testlist aqa tests tkg jvmtest functional ibm internal cmdlinetests ibm internal oraclesecuritytest oraclesecuritytest jar xx allownonvirtualcalls c m time spent starting milliseconds time spent executing milliseconds test result failed output from test b foo success condition was found success condition was not found failure condition was found failure condition was not found failure condition was not found failure condition was not found test results number of passed tests out of number of failed tests out of summary of failed tests case c m class modified c class acc super is removed invokespecial to a foo cmdlinetester oraclesecuritytest failed this failure is across platforms such as | 0 |
642,947 | 20,918,714,100 | IssuesEvent | 2022-03-24 15:28:57 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | opened | Error while updating grant types for an Application under Auth 0 | Type/Bug Priority/Normal | ### Description:
There is a bad request error while trying to update the grant types for an application registered in Auth 0. The grant types are not intitially updated for the application when trying to update the Audience and hence we need to re update the application to include the grant types and the below error is observed
[2022-03-24 20:39:20,831] ERROR - GlobalThrowableMapper An unknown exception has been captured by the global exception mapper.
feign.FeignException$BadRequest: [400 ] during [PATCH] to [https://dev-6sjoea18.us.auth0.com/api/v2/clients/yHks8s9CEUE4G5P3Ljj3AWROFE45Iaw6] [Auth0DCRClient#updateApplication(String,Auth0ClientInfo)]: [{"statusCode":400,"error":"Bad Request","message":"Payload validation error: 'Array items are not unique (indexes 1 and 2)' on property grant_types (A set of grant types that the client is authorized to use).","errorCode":"invalid_body"}]
at feign.FeignException.clientErrorStatus(FeignException.java:195) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.FeignException.errorStatus(FeignException.java:177) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.FeignException.errorStatus(FeignException.java:169) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.codec.ErrorDecoder$Default.decode(ErrorDecoder.java:92) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.AsyncResponseHandler.handleResponse(AsyncResponseHandler.java:96) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:138) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:89) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at com.sun.proxy.$Proxy505.updateApplication(Unknown Source) ~[?:?]
at org.wso2.auth0.client.Auth0OAuthClient.updateApplication(Auth0OAuthClient.java:223) ~[auth0.key.manager-1.0.3.jar:?]
at org.wso2.carbon.apimgt.impl.APIConsumerImpl.updateAuthClient_aroundBody188(APIConsumerImpl.java:4853) ~[org.wso2.carbon.apimgt.impl_6.7.206.262.jar:?]
at org.wso2.carbon.apimgt.impl.APIConsumerImpl.updateAuthClient(APIConsumerImpl.java:4760) ~[org.wso2.carbon.apimgt.impl_6.7.206.262.jar:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.impl.ApplicationsApiServiceImpl.applicationsApplicationIdOauthKeysKeyMappingIdPut(ApplicationsApiServiceImpl.java:1042) ~[classes/:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.ApplicationsApi.applicationsApplicationIdOauthKeysKeyMappingIdPut(ApplicationsApi.java:373) ~[classes/:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:201) [cxf-rt-frontend-jaxrs-3.4.4.jar:3.4.4]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:104) [cxf-rt-frontend-jaxrs-3.4.4.jar:3.4.4]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) [cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) [cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) [cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) [cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:298) [cxf-rt-transports-http
### Steps to reproduce:
### Affected Product Version:
<!-- Members can use Affected/*** labels -->
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members--> | 1.0 | Error while updating grant types for an Application under Auth 0 - ### Description:
There is a bad request error while trying to update the grant types for an application registered in Auth 0. The grant types are not intitially updated for the application when trying to update the Audience and hence we need to re update the application to include the grant types and the below error is observed
[2022-03-24 20:39:20,831] ERROR - GlobalThrowableMapper An unknown exception has been captured by the global exception mapper.
feign.FeignException$BadRequest: [400 ] during [PATCH] to [https://dev-6sjoea18.us.auth0.com/api/v2/clients/yHks8s9CEUE4G5P3Ljj3AWROFE45Iaw6] [Auth0DCRClient#updateApplication(String,Auth0ClientInfo)]: [{"statusCode":400,"error":"Bad Request","message":"Payload validation error: 'Array items are not unique (indexes 1 and 2)' on property grant_types (A set of grant types that the client is authorized to use).","errorCode":"invalid_body"}]
at feign.FeignException.clientErrorStatus(FeignException.java:195) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.FeignException.errorStatus(FeignException.java:177) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.FeignException.errorStatus(FeignException.java:169) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.codec.ErrorDecoder$Default.decode(ErrorDecoder.java:92) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.AsyncResponseHandler.handleResponse(AsyncResponseHandler.java:96) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:138) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:89) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100) ~[io.github.openfeign.feign-core_11.0.0.jar:?]
at com.sun.proxy.$Proxy505.updateApplication(Unknown Source) ~[?:?]
at org.wso2.auth0.client.Auth0OAuthClient.updateApplication(Auth0OAuthClient.java:223) ~[auth0.key.manager-1.0.3.jar:?]
at org.wso2.carbon.apimgt.impl.APIConsumerImpl.updateAuthClient_aroundBody188(APIConsumerImpl.java:4853) ~[org.wso2.carbon.apimgt.impl_6.7.206.262.jar:?]
at org.wso2.carbon.apimgt.impl.APIConsumerImpl.updateAuthClient(APIConsumerImpl.java:4760) ~[org.wso2.carbon.apimgt.impl_6.7.206.262.jar:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.impl.ApplicationsApiServiceImpl.applicationsApplicationIdOauthKeysKeyMappingIdPut(ApplicationsApiServiceImpl.java:1042) ~[classes/:?]
at org.wso2.carbon.apimgt.rest.api.store.v1.ApplicationsApi.applicationsApplicationIdOauthKeysKeyMappingIdPut(ApplicationsApi.java:373) ~[classes/:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:201) [cxf-rt-frontend-jaxrs-3.4.4.jar:3.4.4]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:104) [cxf-rt-frontend-jaxrs-3.4.4.jar:3.4.4]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) [cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) [cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) [cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) [cxf-core-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) [cxf-rt-transports-http-3.4.4.jar:3.4.4]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:298) [cxf-rt-transports-http
### Steps to reproduce:
### Affected Product Version:
<!-- Members can use Affected/*** labels -->
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members--> | non_main | error while updating grant types for an application under auth description there is a bad request error while trying to update the grant types for an application registered in auth the grant types are not intitially updated for the application when trying to update the audience and hence we need to re update the application to include the grant types and the below error is observed error globalthrowablemapper an unknown exception has been captured by the global exception mapper feign feignexception badrequest during to at feign feignexception clienterrorstatus feignexception java at feign feignexception errorstatus feignexception java at feign feignexception errorstatus feignexception java at feign codec errordecoder default decode errordecoder java at feign asyncresponsehandler handleresponse asyncresponsehandler java at feign synchronousmethodhandler executeanddecode synchronousmethodhandler java at feign synchronousmethodhandler invoke synchronousmethodhandler java at feign reflectivefeign feigninvocationhandler invoke reflectivefeign java at com sun proxy updateapplication unknown source at org client updateapplication java at org carbon apimgt impl apiconsumerimpl updateauthclient apiconsumerimpl java at org carbon apimgt impl apiconsumerimpl updateauthclient apiconsumerimpl java at org carbon apimgt rest api store impl applicationsapiserviceimpl applicationsapplicationidoauthkeyskeymappingidput applicationsapiserviceimpl java at org carbon apimgt rest api store applicationsapi applicationsapplicationidoauthkeyskeymappingidput applicationsapi java at jdk internal reflect nativemethodaccessorimpl native method at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache cxf service invoker abstractinvoker performinvocation abstractinvoker java at org apache cxf service invoker abstractinvoker invoke abstractinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf interceptor serviceinvokerinterceptor run serviceinvokerinterceptor java at org apache cxf interceptor serviceinvokerinterceptor handlemessage serviceinvokerinterceptor java at org apache cxf phase phaseinterceptorchain dointercept phaseinterceptorchain java at org apache cxf transport chaininitiationobserver onmessage chaininitiationobserver java at org apache cxf transport http abstracthttpdestination invoke abstracthttpdestination java at org apache cxf transport servlet servletcontroller invokedestination servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet cxfnonspringservlet invoke cxfnonspringservlet java at org apache cxf transport servlet abstracthttpservlet handlerequest abstracthttpservlet java cxf rt transports http steps to reproduce affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees | 0 |
1,378 | 5,966,286,279 | IssuesEvent | 2017-05-30 13:42:44 | enterprisemediawiki/meza | https://api.github.com/repos/enterprisemediawiki/meza | closed | Only rebuild search index and SMW data when required | critical: bug difficulty: hard important: maintainability v1.0 | Currently need to add `--skip-tags smw-data,search-index` to avoid doing these, which is pretty counter-intuitive. | True | Only rebuild search index and SMW data when required - Currently need to add `--skip-tags smw-data,search-index` to avoid doing these, which is pretty counter-intuitive. | main | only rebuild search index and smw data when required currently need to add skip tags smw data search index to avoid doing these which is pretty counter intuitive | 1 |
208 | 2,851,735,769 | IssuesEvent | 2015-06-01 09:05:11 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | closed | Cleanup the SimpleSAML_Utilities class | enhancement maintainability started | The following must be done:
* [x] Remove the `validateCA()` method.
* [x] Remove the `generateRandomBytesMTrand()` method.
* [x] Remove the `validateXML()` and `validateXMLDocument()` methods. Use a standalone composer module instead.
* [x] Modify the `generateRandomBytes()` method to remove the fallback parameter. The code not calling `openssl_random_pseudo_bytes` can then be removed.
* [x] Refactor the rest of it to group methods by their functionality in dedicated classes under `lib/SimpleSAML/Utils/`. | True | Cleanup the SimpleSAML_Utilities class - The following must be done:
* [x] Remove the `validateCA()` method.
* [x] Remove the `generateRandomBytesMTrand()` method.
* [x] Remove the `validateXML()` and `validateXMLDocument()` methods. Use a standalone composer module instead.
* [x] Modify the `generateRandomBytes()` method to remove the fallback parameter. The code not calling `openssl_random_pseudo_bytes` can then be removed.
* [x] Refactor the rest of it to group methods by their functionality in dedicated classes under `lib/SimpleSAML/Utils/`. | main | cleanup the simplesaml utilities class the following must be done remove the validateca method remove the generaterandombytesmtrand method remove the validatexml and validatexmldocument methods use a standalone composer module instead modify the generaterandombytes method to remove the fallback parameter the code not calling openssl random pseudo bytes can then be removed refactor the rest of it to group methods by their functionality in dedicated classes under lib simplesaml utils | 1 |
1,707 | 6,574,435,760 | IssuesEvent | 2017-09-11 12:53:32 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ansible 2.2: docker_service 'Error starting project - ' | affects_2.2 bug_report cloud docker waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_service
##### ANSIBLE VERSION
ansible 2.2.0.0
config file = /home/jhoeve-a/GitCollections/authdns-ansible-code/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
In version 2.2 I use:
```
- name: starting bind9 in docker
docker_service:
pull: yes
project_name: bind9
timeout: 120
definition:
version: '2'
services:
bind9:
restart: always
logging:
driver: syslog
options:
syslog-facility: local6
tag: bind9
image: docker.solvinity.net/bind9
network_mode: host
volumes:
- /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data
- /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc
register: output
tags:
- docker
```
In version 2.1.2 I used (due to lack of pull statement):
```
- name: pull image bind9
docker_image:
name: docker.solvinity.net/bind9
pull: yes
force: yes
tags:
- docker
- name: starting bind9 in docker
docker_service:
# pull: yes
project_name: bind9
timeout: 120
definition:
version: '2'
services:
bind9:
restart: always
logging:
driver: syslog
options:
syslog-facility: local6
tag: bind9
image: docker.solvinity.net/bind9
network_mode: host
volumes:
- /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data
- /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc
register: output
tags:
- docker
```
##### OS / ENVIRONMENT
Ubuntu 16.04 / Ansible 2.2 from ppa
##### SUMMARY
It used to work in 2.1.2 but now fails.
##### STEPS TO REPRODUCE
Simply update to Ansible 2.2 and run playbook with snippet above.
##### EXPECTED RESULTS
Upgrade / Start docker container
##### ACTUAL RESULTS
```
fatal: [lnx2346vm.internal.asp4all.nl]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"api_version": null,
"build": false,
"cacert_path": null,
"cert_path": null,
"debug": false,
"definition": {
"services": {
"bind9": {
"image": "docker.solvinity.net/bind9",
"logging": {
"driver": "syslog",
"options": {
"syslog-facility": "local6",
"tag": "bind9"
}
},
"network_mode": "host",
"restart": "always",
"volumes": [
"/export/bind/chroot/usr/local/bind/data:/usr/local/bind/data",
"/export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc"
]
}
},
"version": "2"
},
"dependencies": true,
"docker_host": null,
"files": null,
"filter_logger": false,
"hostname_check": false,
"key_path": null,
"nocache": false,
"project_name": "bind9",
"project_src": null,
"pull": true,
"recreate": "smart",
"remove_images": null,
"remove_orphans": false,
"remove_volumes": false,
"restarted": false,
"scale": null,
"services": null,
"ssl_version": null,
"state": "present",
"stopped": false,
"timeout": 120,
"tls": null,
"tls_hostname": null,
"tls_verify": null
},
"module_name": "docker_service"
},
"msg": "Error starting project - "
}
``` | True | ansible 2.2: docker_service 'Error starting project - ' - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_service
##### ANSIBLE VERSION
ansible 2.2.0.0
config file = /home/jhoeve-a/GitCollections/authdns-ansible-code/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
In version 2.2 I use:
```
- name: starting bind9 in docker
docker_service:
pull: yes
project_name: bind9
timeout: 120
definition:
version: '2'
services:
bind9:
restart: always
logging:
driver: syslog
options:
syslog-facility: local6
tag: bind9
image: docker.solvinity.net/bind9
network_mode: host
volumes:
- /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data
- /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc
register: output
tags:
- docker
```
In version 2.1.2 I used (due to lack of pull statement):
```
- name: pull image bind9
docker_image:
name: docker.solvinity.net/bind9
pull: yes
force: yes
tags:
- docker
- name: starting bind9 in docker
docker_service:
# pull: yes
project_name: bind9
timeout: 120
definition:
version: '2'
services:
bind9:
restart: always
logging:
driver: syslog
options:
syslog-facility: local6
tag: bind9
image: docker.solvinity.net/bind9
network_mode: host
volumes:
- /export/bind/chroot/usr/local/bind/data:/usr/local/bind/data
- /export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc
register: output
tags:
- docker
```
##### OS / ENVIRONMENT
Ubuntu 16.04 / Ansible 2.2 from ppa
##### SUMMARY
It used to work in 2.1.2 but now fails.
##### STEPS TO REPRODUCE
Simply update to Ansible 2.2 and run playbook with snippet above.
##### EXPECTED RESULTS
Upgrade / Start docker container
##### ACTUAL RESULTS
```
fatal: [lnx2346vm.internal.asp4all.nl]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"api_version": null,
"build": false,
"cacert_path": null,
"cert_path": null,
"debug": false,
"definition": {
"services": {
"bind9": {
"image": "docker.solvinity.net/bind9",
"logging": {
"driver": "syslog",
"options": {
"syslog-facility": "local6",
"tag": "bind9"
}
},
"network_mode": "host",
"restart": "always",
"volumes": [
"/export/bind/chroot/usr/local/bind/data:/usr/local/bind/data",
"/export/bind/chroot/usr/local/bind/etc:/usr/local/bind/etc"
]
}
},
"version": "2"
},
"dependencies": true,
"docker_host": null,
"files": null,
"filter_logger": false,
"hostname_check": false,
"key_path": null,
"nocache": false,
"project_name": "bind9",
"project_src": null,
"pull": true,
"recreate": "smart",
"remove_images": null,
"remove_orphans": false,
"remove_volumes": false,
"restarted": false,
"scale": null,
"services": null,
"ssl_version": null,
"state": "present",
"stopped": false,
"timeout": 120,
"tls": null,
"tls_hostname": null,
"tls_verify": null
},
"module_name": "docker_service"
},
"msg": "Error starting project - "
}
``` | main | ansible docker service error starting project issue type bug report component name docker service ansible version ansible config file home jhoeve a gitcollections authdns ansible code ansible cfg configured module search path default w o overrides configuration in version i use name starting in docker docker service pull yes project name timeout definition version services restart always logging driver syslog options syslog facility tag image docker solvinity net network mode host volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc register output tags docker in version i used due to lack of pull statement name pull image docker image name docker solvinity net pull yes force yes tags docker name starting in docker docker service pull yes project name timeout definition version services restart always logging driver syslog options syslog facility tag image docker solvinity net network mode host volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc register output tags docker os environment ubuntu ansible from ppa summary it used to work in but now fails steps to reproduce simply update to ansible and run playbook with snippet above expected results upgrade start docker container actual results fatal failed changed false failed true invocation module args api version null build false cacert path null cert path null debug false definition services image docker solvinity net logging driver syslog options syslog facility tag network mode host restart always volumes export bind chroot usr local bind data usr local bind data export bind chroot usr local bind etc usr local bind etc version dependencies true docker host null files null filter logger false hostname check false key path null nocache false project name project src null pull true recreate smart remove images null remove orphans false remove volumes false restarted false scale null services null ssl version null state present stopped false timeout tls null tls hostname null tls verify null module name docker service msg error starting project | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.