| Unnamed: 0 (int64, 3-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, len 19) | repo (string, len 5-112) | repo_url (string, len 34-141) | action (string, 3 classes) | title (string, len 2-430) | labels (string, len 4-347) | body (string, len 5-237k) | index (string, 7 classes) | text_combine (string, len 96-237k) | label (string, 2 classes) | text (string, len 96-219k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
67,766 | 28,044,934,660 | IssuesEvent | 2023-03-28 21:48:32 | dockstore/dockstore | https://api.github.com/repos/dockstore/dockstore | closed | Determine Open data for CWL workflows | bug web-service review | **Describe the bug**
Open data calculations are not correct; see #5339. In PR #5347, we've improved the check for open data for WDL workflows. This ticket is to extend the logic for CWL workflows.
**Expected behavior**
Open data calculations for CWL should correctly figure out open data test parameter files.
**Notes**
This looks more complicated than the WDL work; we already had logic in WDLHandler that determined file inputs. For CWL, it looks like that logic is only in the CLI. Ideally move the CLI code to the common module in the web service to avoid duplication.
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-2331)
┆Attachments: <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10938">Screenshot from 2023-03-28 17-45-58.png</a> | <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10939">Screenshot from 2023-03-28 17-47-44.png</a>
┆Fix Versions: Dockstore 1.14
┆Issue Number: DOCK-2331
┆Sprint: 107 - Eridanos
┆Issue Type: Story
| 1.0 | Determine Open data for CWL workflows - **Describe the bug**
Open data calculations are not correct; see #5339. In PR #5347, we've improved the check for open data for WDL workflows. This ticket is to extend the logic for CWL workflows.
**Expected behavior**
Open data calculations for CWL should correctly figure out open data test parameter files.
**Notes**
This looks more complicated than the WDL work; we already had logic in WDLHandler that determined file inputs. For CWL, it looks like that logic is only in the CLI. Ideally move the CLI code to the common module in the web service to avoid duplication.
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-2331)
┆Attachments: <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10938">Screenshot from 2023-03-28 17-45-58.png</a> | <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10939">Screenshot from 2023-03-28 17-47-44.png</a>
┆Fix Versions: Dockstore 1.14
┆Issue Number: DOCK-2331
┆Sprint: 107 - Eridanos
┆Issue Type: Story
| non_comp | determine open data for cwl workflows describe the bug open data calculations are not correct see in pr we ve improved the check for open data for wdl workflows this ticket is to extend the logic for cwl workflows expected behavior open data calculations for cwl should correctly figure out open data test parameter files notes this looks more complicated than the wdl work we already had logic in wdlhandler that determined file inputs for cwl it looks like that logic is only in the cli ideally move the cli code to the common module in the web service to avoid duplication ┆issue is synchronized with this ┆attachments ┆fix versions dockstore ┆issue number dock ┆sprint eridanos ┆issue type story | 0 |
19,494 | 27,074,926,982 | IssuesEvent | 2023-02-14 09:55:24 | planetteamspeak/ts3phpframework | https://api.github.com/repos/planetteamspeak/ts3phpframework | closed | PHP 8.0 - Deprecated required parameters following optional parameter | compatibility | I am currently testing the PHP 8.0 RC 1 with a custom script.
The framework seems to be working fine with it. But I am using only a few functions of it.
One thing, which I noticed is the new check on functions. Is there a required parameter following an optional parameter, it returns an info message. It's no big deal. I only want to report it. ([more information](https://php.watch/versions/8.0/deprecate-required-param-after-optional))
Concerned functions:
https://github.com/planetteamspeak/ts3phpframework/blob/d629e767acf9d21056b3588e909443177d2bac47/libraries/TeamSpeak3/Node/Server.php#L2021
https://github.com/planetteamspeak/ts3phpframework/blob/d629e767acf9d21056b3588e909443177d2bac47/libraries/TeamSpeak3/Node/Server.php#L2036
Example (custom) error output:
```
2020-10-09 09:50:41.805911 INFO 8192: Required parameter $id1 follows optional parameter $type on line 2021 in /var/www/tsn/ranksystem_dev/libs/ts3_lib/Node/Server.php
2020-10-09 09:50:41.805817 INFO 8192: Required parameter $id1 follows optional parameter $type on line 2036 in /var/www/tsn/ranksystem_dev/libs/ts3_lib/Node/Server.php
```
An easy fix would be to set the $id1 as optional parameter like
` public function privilegeKeyCreate($type = TeamSpeak3::TOKEN_SERVERGROUP, $id1 = 0, $id2 = 0, $description = null, $customset = null)`
This will run into an TS3 error, when the $id1 is not given by the third party script, since it is the groupID of the TS3 query function `tokenadd`.
As a second possibility I see the option to change the order of the parameters. It's your decision. ;-) | True | PHP 8.0 - Deprecated required parameters following optional parameter - I am currently testing the PHP 8.0 RC 1 with a custom script.
The framework seems to be working fine with it. But I am using only a few functions of it.
One thing, which I noticed is the new check on functions. Is there a required parameter following an optional parameter, it returns an info message. It's no big deal. I only want to report it. ([more information](https://php.watch/versions/8.0/deprecate-required-param-after-optional))
Concerned functions:
https://github.com/planetteamspeak/ts3phpframework/blob/d629e767acf9d21056b3588e909443177d2bac47/libraries/TeamSpeak3/Node/Server.php#L2021
https://github.com/planetteamspeak/ts3phpframework/blob/d629e767acf9d21056b3588e909443177d2bac47/libraries/TeamSpeak3/Node/Server.php#L2036
Example (custom) error output:
```
2020-10-09 09:50:41.805911 INFO 8192: Required parameter $id1 follows optional parameter $type on line 2021 in /var/www/tsn/ranksystem_dev/libs/ts3_lib/Node/Server.php
2020-10-09 09:50:41.805817 INFO 8192: Required parameter $id1 follows optional parameter $type on line 2036 in /var/www/tsn/ranksystem_dev/libs/ts3_lib/Node/Server.php
```
An easy fix would be to set the $id1 as optional parameter like
` public function privilegeKeyCreate($type = TeamSpeak3::TOKEN_SERVERGROUP, $id1 = 0, $id2 = 0, $description = null, $customset = null)`
This will run into an TS3 error, when the $id1 is not given by the third party script, since it is the groupID of the TS3 query function `tokenadd`.
As a second possibility I see the option to change the order of the parameters. It's your decision. ;-) | comp | php deprecated required parameters following optional parameter i am currently testing the php rc with a custom script the framework seems to be working fine with it but i am using only a few functions of it one thing which i noticed is the new check on functions is there a required parameter following an optional parameter it returns an info message it s no big deal i only want to report it concerned functions example custom error output info required parameter follows optional parameter type on line in var www tsn ranksystem dev libs lib node server php info required parameter follows optional parameter type on line in var www tsn ranksystem dev libs lib node server php an easy fix would be to set the as optional parameter like public function privilegekeycreate type token servergroup description null customset null this will run into an error when the is not given by the third party script since it is the groupid of the query function tokenadd as a second possibility i see the option to change the order of the parameters it s your decision | 1 |
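The deprecation described in this row has a stricter analogue in Python, where a required parameter after an optional one is rejected at parse time. The sketch below (hypothetical names, not the actual TS3 PHP Framework API) mirrors the fix proposed in the issue: give the parameter a sentinel default so the signature stays legal, then fail loudly when a caller omits it rather than forwarding a bogus group ID.

```python
TOKEN_SERVERGROUP = 0  # placeholder constant, not the real framework value


def privilege_key_create(type_=TOKEN_SERVERGROUP, id1=None, id2=0,
                         description=None, customset=None):
    # Sentinel-default workaround: id1 is still logically required,
    # so reject a missing value explicitly instead of passing a bad
    # group ID on to the underlying query (tokenadd in the issue).
    if id1 is None:
        raise ValueError("id1 (the group ID) is required")
    return {"type": type_, "id1": id1, "id2": id2,
            "description": description, "customset": customset}
```

This keeps backward compatibility for callers that already pass every argument positionally, which is the same trade-off the reporter weighs against reordering the parameters.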
14,003 | 16,777,599,818 | IssuesEvent | 2021-06-15 00:34:36 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | closed | After upgrading ClickHouse from 20.4 to 21.3 PARTITION key stops working because it's wrongly counted as float | backward compatibility bug v21.3-affected | I have the following table in ClickHouse 20.4:
```
CREATE TABLE FileRecord
(
fileId UInt64,
sourceRecord String
) ENGINE = MergeTree()
Partition by (ceil(fileId / 1000) % 1000)
settings index_granularity = 4096
```
and it worked just fine, until i'be upgraded to 21.3, where we have following PR-feature:
https://github.com/ClickHouse/ClickHouse/pull/18464
which disallows to use floating point keys as partition key. No problem, but if i run
`toTypeName((ceil(fileId / 1000) % 1000))`
it returns `Int16`, which seems like a bug in typeName or in the newly introduced feature with `allow_floating_point_partition_key`. I do not show my stacktrace, since it's obvious and simply shows error
`Donot support float point as partition key: (ceil(fileId / 1000) % 1000)`
| True | After upgrading ClickHouse from 20.4 to 21.3 PARTITION key stops working because it's wrongly counted as float - I have the following table in ClickHouse 20.4:
```
CREATE TABLE FileRecord
(
fileId UInt64,
sourceRecord String
) ENGINE = MergeTree()
Partition by (ceil(fileId / 1000) % 1000)
settings index_granularity = 4096
```
and it worked just fine, until i'be upgraded to 21.3, where we have following PR-feature:
https://github.com/ClickHouse/ClickHouse/pull/18464
which disallows to use floating point keys as partition key. No problem, but if i run
`toTypeName((ceil(fileId / 1000) % 1000))`
it returns `Int16`, which seems like a bug in typeName or in the newly introduced feature with `allow_floating_point_partition_key`. I do not show my stacktrace, since it's obvious and simply shows error
`Donot support float point as partition key: (ceil(fileId / 1000) % 1000)`
| comp | after upgrading clickhouse from to partition key stops working because it s wrongly counted as float i have the following table in clickhouse create table filerecord fileid sourcerecord string engine mergetree partition by ceil fileid settings index granularity and it worked just fine until i be upgraded to where we have following pr feature which disallows to use floating point keys as partition key no problem but if i run totypename ceil fileid it returns which seems like a bug in typename or in the newly introduced feature with allow floating point partition key i do not show my stacktrace since it s obvious and simply shows error donot support float point as partition key ceil fileid | 1 |
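The crux of this row is that `ceil(fileId / 1000) % 1000` passes through a floating-point intermediate (the division and `ceil`) even though the final reported type is integral. A hedged Python illustration (not ClickHouse SQL) of the float-free rewrite, using ceiling division on integers:

```python
import math


def partition_float(file_id: int) -> int:
    # Mirrors the original key: ceil(fileId / 1000) % 1000,
    # which goes through a floating-point intermediate value.
    return math.ceil(file_id / 1000) % 1000


def partition_int(file_id: int) -> int:
    # Float-free equivalent: -(-n // d) is ceiling division done
    # entirely in integer arithmetic, so no floating-point type
    # ever appears in the expression.
    return -(-file_id // 1000) % 1000


for fid in (0, 1, 999, 1000, 1001, 1_000_000, 123_456_789):
    assert partition_float(fid) == partition_int(fid)
```

In ClickHouse itself, an integer-only expression along the lines of `intDiv(fileId + 999, 1000) % 1000` (valid for non-negative `fileId`) would be the analogous rewrite that sidesteps the floating-point partition-key restriction; treat that as a suggestion, not the project's documented fix.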
7,210 | 6,823,451,223 | IssuesEvent | 2017-11-08 00:01:53 | brave/browser-laptop | https://api.github.com/repos/brave/browser-laptop | opened | Linux cibuild script fails on Jenkins | Infrastructure | <!--
Have you searched for similar issues? We have received a lot of feedback and bug reports that we have closed as duplicates. Before submitting this issue, please visit our community site for common ones: https://community.brave.com/c/common-issues
-->
### Description
When doing builds (release channel, beta channel, etc), the Linux job is always failing
I believe the root cause was unintentionally introduced with https://github.com/brave/browser-laptop/pull/10274
The upload script is recursively searching for all matching binaries and finding the RPM under the `suse` folder.
- A proper fix would exclude this directory
- A quick work-around would be to delete the folder before starting the upload
### Steps to Reproduce
<!--
Please add a series of steps to reproduce the problem. See https://stackoverflow.com/help/mcve for in depth information on how to create a minimal, complete, and verifiable example.
-->
1. Go to https://jenkins.brave.com/view/laptop%20builds%20(child%20jobs)/
2. Pick the "browser-laptop-build-linux" type and queue a build
**Actual result:**
It will eventually fail with an error like so:
```
{
"documentation_url": "https://developer.github.com/v3",
"message": "Validation Failed",
"errors": [
{
"field": "name",
"code": "already_exists",
"resource": "ReleaseAsset"
}
],
"request_id": "924F:1690:30E530:34464A:5A015C2A"
}
```
**Expected result:**
the cibuild should not fail
**Reproduces how often:**
100%
### Brave Version
**about:brave info:**
<!--
Please open about:brave, copy the version information, and paste it.
-->
**Reproducible on current live release:**
<!--
Is this a problem with the live build? It matters for triage reasons.
-->
### Additional Information
<!--
Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue.
-->
| 1.0 | Linux cibuild script fails on Jenkins - <!--
Have you searched for similar issues? We have received a lot of feedback and bug reports that we have closed as duplicates. Before submitting this issue, please visit our community site for common ones: https://community.brave.com/c/common-issues
-->
### Description
When doing builds (release channel, beta channel, etc), the Linux job is always failing
I believe the root cause was unintentionally introduced with https://github.com/brave/browser-laptop/pull/10274
The upload script is recursively searching for all matching binaries and finding the RPM under the `suse` folder.
- A proper fix would exclude this directory
- A quick work-around would be to delete the folder before starting the upload
### Steps to Reproduce
<!--
Please add a series of steps to reproduce the problem. See https://stackoverflow.com/help/mcve for in depth information on how to create a minimal, complete, and verifiable example.
-->
1. Go to https://jenkins.brave.com/view/laptop%20builds%20(child%20jobs)/
2. Pick the "browser-laptop-build-linux" type and queue a build
**Actual result:**
It will eventually fail with an error like so:
```
{
"documentation_url": "https://developer.github.com/v3",
"message": "Validation Failed",
"errors": [
{
"field": "name",
"code": "already_exists",
"resource": "ReleaseAsset"
}
],
"request_id": "924F:1690:30E530:34464A:5A015C2A"
}
```
**Expected result:**
the cibuild should not fail
**Reproduces how often:**
100%
### Brave Version
**about:brave info:**
<!--
Please open about:brave, copy the version information, and paste it.
-->
**Reproducible on current live release:**
<!--
Is this a problem with the live build? It matters for triage reasons.
-->
### Additional Information
<!--
Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue.
-->
| non_comp | linux cibuild script fails on jenkins have you searched for similar issues we have received a lot of feedback and bug reports that we have closed as duplicates before submitting this issue please visit our community site for common ones description when doing builds release channel beta channel etc the linux job is always failing i believe the root cause was unintentionally introduced with the upload script is recursively searching for all matching binaries and finding the rpm under the suse folder a proper fix would exclude this directory a quick work around would be to delete the folder before starting the upload steps to reproduce please add a series of steps to reproduce the problem see for in depth information on how to create a minimal complete and verifiable example go to pick the browser laptop build linux type and queue a build actual result it will eventually fail with an error like so documentation url message validation failed errors field name code already exists resource releaseasset request id expected result the cibuild should not fail reproduces how often brave version about brave info please open about brave copy the version information and paste it reproducible on current live release is this a problem with the live build it matters for triage reasons additional information any additional information related issues extra qa steps configuration or data that might be necessary to reproduce the issue | 0 |
175,915 | 14,545,040,898 | IssuesEvent | 2020-12-15 19:03:29 | roboticslab-uc3m/follow-me | https://api.github.com/repos/roboticslab-uc3m/follow-me | closed | Update follow-me-app.graphml | documentation | I believe the PNG generated and shown in the main page is not the same as seen in the .graphml file.
Related: roboticslab-uc3m/yarp-devices#99.
| 1.0 | Update follow-me-app.graphml - I believe the PNG generated and shown in the main page is not the same as seen in the .graphml file.
Related: roboticslab-uc3m/yarp-devices#99.
| non_comp | update follow me app graphml i believe the png generated and shown in the main page is not the same as seen in the graphml file related roboticslab yarp devices | 0 |
75,092 | 7,459,787,256 | IssuesEvent | 2018-03-30 16:50:16 | swentel/indieweb | https://api.github.com/repos/swentel/indieweb | opened | test that deleting a node, deletes the syndication link and webmention | tests webmention | syndications are already deleted, webmentions are not | 1.0 | test that deleting a node, deletes the syndication link and webmention - syndications are already deleted, webmentions are not | non_comp | test that deleting a node deletes the syndication link and webmention syndications are already deleted webmentions are not | 0 |
3,910 | 6,760,108,246 | IssuesEvent | 2017-10-24 19:26:18 | mlibrary/cozy-sun-bear | https://api.github.com/repos/mlibrary/cozy-sun-bear | opened | Improvements for mobile screens | browser compatibility mobile UI | - [ ] Change "Get Citation" to "Cite"
- [ ] Decrease size of left and right arrows
- [ ] Decrease size of Close Button
- [ ] Decrease size of Book Title / Section Title
- [ ] Contents modal width should fill the screen
- [ ] Search results modal width should fill the screen
- [ ] Preferences modal width should fill the screen
- [ ] Ensure ellipsis displays when Book Title / Section Title overflow is hidden
| True | Improvements for mobile screens - - [ ] Change "Get Citation" to "Cite"
- [ ] Decrease size of left and right arrows
- [ ] Decrease size of Close Button
- [ ] Decrease size of Book Title / Section Title
- [ ] Contents modal width should fill the screen
- [ ] Search results modal width should fill the screen
- [ ] Preferences modal width should fill the screen
- [ ] Ensure ellipsis displays when Book Title / Section Title overflow is hidden
| comp | improvements for mobile screens change get citation to cite decrease size of left and right arrows decrease size of close button decrease size of book title section title contents modal width should fill the screen search results modal width should fill the screen preferences modal width should fill the screen ensure ellipsis displays when book title section title overflow is hidden | 1 |
95,613 | 19,722,218,406 | IssuesEvent | 2022-01-13 16:22:50 | Sheeves11/UnnamedFiefdomGame | https://api.github.com/repos/Sheeves11/UnnamedFiefdomGame | opened | Variable Naming Schema | code improvement | Would be good to maintain a rule for constant variables that can only be updated if you modify the code directly.
I have renamed a number of variables that I notice this with in THIS_STYLE.
If variables are subject to change, or if I'm not entirely sure, then I'll leave them as-is. (like GOLD_PER and defendersPer).
To push it a step further, it would be good to define global variables without camel back, like:
DefendersPer. This way, we know it is something that may get modified, but is also global.
I haven't done this yet, camel back is fine for now for everything besides constants. | 1.0 | Variable Naming Schema - Would be good to maintain a rule for constant variables that can only be updated if you modify the code directly.
I have renamed a number of variables that I notice this with in THIS_STYLE.
If variables are subject to change, or if I'm not entirely sure, then I'll leave them as-is. (like GOLD_PER and defendersPer).
To push it a step further, it would be good to define global variables without camel back, like:
DefendersPer. This way, we know it is something that may get modified, but is also global.
I haven't done this yet, camel back is fine for now for everything besides constants. | non_comp | variable naming schema would be good to maintain a rule for constant variables that can only be updated if you modify the code directly i have renamed a number of variables that i notice this with in this style if variables are subject to change or if i m not entirely sure then i ll leave them as is like gold per and defendersper to push it a step further it would be good to define global variables without camel back like defendersper this way we know it is something that may get modified but is also global i haven t done this yet camel back is fine for now for everything besides constants | 0 |
5,960 | 3,703,084,060 | IssuesEvent | 2016-02-29 19:07:36 | deis/deis | https://api.github.com/repos/deis/deis | closed | deis-builder 1.10.0 is spitting 'Failed handshake: EOF' once per ~5 seconds | builder easy-fix | Log as seen in papertrail:
```
Sep 08 15:07:11 52.22.0.199 logger: 2015-09-08T22:07:11UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc82046a2a0}})
Sep 08 15:07:11 52.22.0.199 logger: 2015-09-08T22:07:11UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:11 52.22.0.199 logger: 2015-09-08T22:07:11UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:17 52.22.0.199 logger: 2015-09-08T22:07:17UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc82017f7a0}})
Sep 08 15:07:21 52.22.0.199 logger: 2015-09-08T22:07:21UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:21 52.22.0.199 logger: 2015-09-08T22:07:21UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:24 52.22.0.199 logger: 2015-09-08T22:07:24UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc820436770}})
Sep 08 15:07:25 52.22.0.199 logger: 2015-09-08T22:07:25UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:25 52.22.0.199 logger: 2015-09-08T22:07:25UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:29 52.22.0.199 logger: 2015-09-08T22:07:29UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:29 52.22.0.199 logger: 2015-09-08T22:07:29UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:29 52.22.0.199 logger: 2015-09-08T22:07:29UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc820048fc0}})
Sep 08 15:07:32 52.22.0.199 logger: 2015-09-08T22:07:32UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc8202682a0}})
Sep 08 15:07:33 52.22.0.199 logger: 2015-09-08T22:07:33UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:33 52.22.0.199 logger: 2015-09-08T22:07:33UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:40 52.22.0.199 logger: 2015-09-08T22:07:40UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc8201bc7e0}})
``` | 1.0 | deis-builder 1.10.0 is spitting 'Failed handshake: EOF' once per ~5 seconds - Log as seen in papertrail:
```
Sep 08 15:07:11 52.22.0.199 logger: 2015-09-08T22:07:11UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc82046a2a0}})
Sep 08 15:07:11 52.22.0.199 logger: 2015-09-08T22:07:11UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:11 52.22.0.199 logger: 2015-09-08T22:07:11UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:17 52.22.0.199 logger: 2015-09-08T22:07:17UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc82017f7a0}})
Sep 08 15:07:21 52.22.0.199 logger: 2015-09-08T22:07:21UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:21 52.22.0.199 logger: 2015-09-08T22:07:21UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:24 52.22.0.199 logger: 2015-09-08T22:07:24UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc820436770}})
Sep 08 15:07:25 52.22.0.199 logger: 2015-09-08T22:07:25UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:25 52.22.0.199 logger: 2015-09-08T22:07:25UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:29 52.22.0.199 logger: 2015-09-08T22:07:29UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:29 52.22.0.199 logger: 2015-09-08T22:07:29UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:29 52.22.0.199 logger: 2015-09-08T22:07:29UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc820048fc0}})
Sep 08 15:07:32 52.22.0.199 logger: 2015-09-08T22:07:32UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc8202682a0}})
Sep 08 15:07:33 52.22.0.199 logger: 2015-09-08T22:07:33UTC deis-builder[1]: [info] Checking closer.
Sep 08 15:07:33 52.22.0.199 logger: 2015-09-08T22:07:33UTC deis-builder[1]: [info] Accepted connection.
Sep 08 15:07:40 52.22.0.199 logger: 2015-09-08T22:07:40UTC deis-builder[1]: [error] Failed handshake: EOF (&{{0xc8201bc7e0}})
``` | non_comp | deis builder is spitting failed handshake eof once per seconds log as seen in papertrail sep logger deis builder failed handshake eof sep logger deis builder accepted connection sep logger deis builder checking closer sep logger deis builder failed handshake eof sep logger deis builder accepted connection sep logger deis builder checking closer sep logger deis builder failed handshake eof sep logger deis builder accepted connection sep logger deis builder checking closer sep logger deis builder checking closer sep logger deis builder accepted connection sep logger deis builder failed handshake eof sep logger deis builder failed handshake eof sep logger deis builder checking closer sep logger deis builder accepted connection sep logger deis builder failed handshake eof | 0 |
12,746 | 15,024,584,980 | IssuesEvent | 2021-02-01 19:51:01 | AmProsius/gothic-1-community-patch | https://api.github.com/repos/AmProsius/gothic-1-community-patch | closed | Wolf's minecrawler plate dialog doesn't disappear | compatibility easy only for session | Wolf can be offered minecrawlers' armor plates even if the player already did so. | True | Wolf's minecrawler plate dialog doesn't disappear - Wolf can be offered minecrawlers' armor plates even if the player already did so. | comp | wolf s minecrawler plate dialog doesn t disappear wolf can be offered minecrawlers armor plates even if the player already did so | 1 |
9,977 | 11,963,272,453 | IssuesEvent | 2020-04-05 15:20:01 | sebastianbergmann/phpunit | https://api.github.com/repos/sebastianbergmann/phpunit | opened | "PHPUnit\TextUI\Configuration" vs autoloader | type/backward-compatibility type/bug | Hello,
I'll probably get some hate because I **don't** use Composer,
but this file:
https://github.com/sebastianbergmann/phpunit/blob/5a3b8a9d2a4b71ca251fc8b616cca642e4a46bc4/src/TextUI/Configuration/PHPUnit/ExtensionHandler.php#L10
Is under the folder "Configuration/PHPUnit",
yet the namespace is:
namespace PHPUnit\TextUI\Configuration;
(I see the same applies to other files under that folder)
Therefore, these classes become "non-automatically findable", by having my version of the autoloader find the file directly in "PHPUnit\TextUI\Configuration".
Is there a reason the namespace isn't "PHPUnit\TextUI\Configuration\PHPUnit" ?
Thank you. | True | "PHPUnit\TextUI\Configuration" vs autoloader - Hello,
I'll probably get some hate because I **don't** use Composer,
but this file:
https://github.com/sebastianbergmann/phpunit/blob/5a3b8a9d2a4b71ca251fc8b616cca642e4a46bc4/src/TextUI/Configuration/PHPUnit/ExtensionHandler.php#L10
Is under the folder "Configuration/PHPUnit",
yet the namespace is:
namespace PHPUnit\TextUI\Configuration;
(I see the same applies to other files under that folder)
Therefore, these classes become "non-automatically findable", by having my version of the autoloader find the file directly in "PHPUnit\TextUI\Configuration".
Is there a reason the namespace isn't "PHPUnit\TextUI\Configuration\PHPUnit" ?
Thank you. | comp | phpunit textui configuration vs autoloader hello i ll probably get some hate because i don t use composer but this file is under the folder configuration phpunit yet the namespace is namespace phpunit textui configuration i see the same applies to other files under that folder therefore these classes become non automatically findable by having my version of the autoloader find the file directly in phpunit textui configuration is there a reason the namespace isn t phpunit textui configuration phpunit thank you | 1 |
5,903 | 8,722,366,751 | IssuesEvent | 2018-12-09 11:47:28 | kerubistan/kerub | https://api.github.com/repos/kerubistan/kerub | opened | measure and track cpu temperature information | component:data processing enhancement priority: normal | probably lm-sensors, or proc filesystem, or whatever is available on the OS | 1.0 | measure and track cpu temperature information - probably lm-sensors, or proc filesystem, or whatever is available on the OS | non_comp | measure and track cpu temperature information probably lm sensors or proc filesystem or whatever is available on the os | 0 |
14,555 | 17,650,786,990 | IssuesEvent | 2021-08-20 12:57:17 | ModdingForBlockheads/CookingForBlockheads | https://api.github.com/repos/ModdingForBlockheads/CookingForBlockheads | opened | Compat with a few mods [datapack included] | compatibility | I made a datapack for compat with Farmers Delight, Builders Crafts and Additions, and Crayfish's More Furniture mod.
Here is the pack:
[Cooking for Blockheads compat.zip](https://github.com/ModdingForBlockheads/CookingForBlockheads/files/7021598/Cooking.for.Blockheads.compat.zip)
| True | Compat with a few mods [datapack included] - I made a datapack for compat with Farmers Delight, Builders Crafts and Additions, and Crayfish's More Furniture mod.
Here is the pack:
[Cooking for Blockheads compat.zip](https://github.com/ModdingForBlockheads/CookingForBlockheads/files/7021598/Cooking.for.Blockheads.compat.zip)
| comp | compat with a few mods i made a datapack for compat with farmers delight builders crafts and additions and crayfish s more furniture mod here is the pack | 1 |
19,695 | 27,338,186,432 | IssuesEvent | 2023-02-26 13:27:11 | TwelveIterationMods/InventoryEssentials | https://api.github.com/repos/TwelveIterationMods/InventoryEssentials | closed | Item duplication glitch with Quark Oddities backpack | bug compatibility pending release 1.18 pending lts | ### Minecraft Version
1.18.x
### Mod Loader
Forge
### Mod Loader Version
40.1.68
### Mod Version
4.0.2
### Describe the Issue
When ctrl-clicking on a stack in a backpack from [Quark Oddities](https://www.curseforge.com/minecraft/mc-mods/quark-oddities) to move a single item into my inventory, it clones the entire stack into the target inventory, though reduces the original stack's amount as expected.
### Before

### After

### Logs
_No response_
### Do you use any performance-enhancing mods (e.g. OptiFine)?
FerriteCore, Radium Reforged, Rubidium | True | Item duplication glitch with Quark Oddities backpack - ### Minecraft Version
1.18.x
### Mod Loader
Forge
### Mod Loader Version
40.1.68
### Mod Version
4.0.2
### Describe the Issue
When ctrl-clicking on a stack in a backpack from [Quark Oddities](https://www.curseforge.com/minecraft/mc-mods/quark-oddities) to move a single item into my inventory, it clones the entire stack into the target inventory, though reduces the original stack's amount as expected.
### Before

### After

### Logs
_No response_
### Do you use any performance-enhancing mods (e.g. OptiFine)?
FerriteCore, Radium Reforged, Rubidium | comp | item duplication glitch with quark oddities backpack minecraft version x mod loader forge mod loader version mod version describe the issue when ctrl clicking on a stack in a backpack from to move a single item into my inventory it clones the entire stack into the target inventory though reduces the original stack s amount as expected before after logs no response do you use any performance enhancing mods e g optifine ferritecore radium reforged rubidium | 1 |
118,367 | 17,581,204,266 | IssuesEvent | 2021-08-16 07:42:21 | AlexRogalskiy/charts | https://api.github.com/repos/AlexRogalskiy/charts | opened | CVE-2020-11023 (Medium) detected in jquery-1.8.1.min.js, jquery-1.9.1.js | security vulnerability | ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.8.1.min.js</b>, <b>jquery-1.9.1.js</b></p></summary>
<p>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: charts/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: charts/node_modules/tinygradient/bower_components/tinycolor/index.html</p>
<p>Path to vulnerable library: /node_modules/tinygradient/bower_components/tinycolor/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/89e847ca7dee5ccc2cec2459799650cbed48cea6">89e847ca7dee5ccc2cec2459799650cbed48cea6</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11023 (Medium) detected in jquery-1.8.1.min.js, jquery-1.9.1.js - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.8.1.min.js</b>, <b>jquery-1.9.1.js</b></p></summary>
<p>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: charts/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: charts/node_modules/tinygradient/bower_components/tinycolor/index.html</p>
<p>Path to vulnerable library: /node_modules/tinygradient/bower_components/tinycolor/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/89e847ca7dee5ccc2cec2459799650cbed48cea6">89e847ca7dee5ccc2cec2459799650cbed48cea6</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_comp | cve medium detected in jquery min js jquery js cve medium severity vulnerability vulnerable libraries jquery min js jquery js jquery min js javascript library for dom operations library home page a href path to dependency file charts node modules redeyed examples browser index html path to vulnerable library node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file charts node modules tinygradient bower components tinycolor index html path to vulnerable library node modules tinygradient bower components tinycolor demo jquery js dependency hierarchy x jquery js vulnerable library found in head commit a href vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails step up your open source security game with whitesource | 0 |
287,938 | 31,856,518,669 | IssuesEvent | 2023-09-15 07:52:27 | Trinadh465/linux-4.1.15_CVE-2023-26607 | https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-26607 | opened | CVE-2022-2196 (High) detected in linuxlinux-4.6 | Mend: dependency security vulnerability | ## CVE-2022-2196 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-26607/commit/6fca0e3f2f14e1e851258fd815766531370084b0">6fca0e3f2f14e1e851258fd815766531370084b0</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A regression exists in the Linux Kernel within KVM: nVMX that allowed for speculative execution attacks. L2 can carry out Spectre v2 attacks on L1 due to L1 thinking it doesn't need retpolines or IBPB after running L2 due to KVM (L0) advertising eIBRS support to L1. An attacker at L2 with code execution can execute code on an indirect branch on the host machine. We recommend upgrading to Kernel 6.2 or past commit 2e7eab81425a
<p>Publish Date: 2023-01-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2196>CVE-2022-2196</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-2196">https://www.linuxkernelcves.com/cves/CVE-2022-2196</a></p>
<p>Release Date: 2023-01-09</p>
<p>Fix Resolution: v5.10.170,v5.15.96,v6.1.14</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-2196 (High) detected in linuxlinux-4.6 - ## CVE-2022-2196 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-26607/commit/6fca0e3f2f14e1e851258fd815766531370084b0">6fca0e3f2f14e1e851258fd815766531370084b0</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A regression exists in the Linux Kernel within KVM: nVMX that allowed for speculative execution attacks. L2 can carry out Spectre v2 attacks on L1 due to L1 thinking it doesn't need retpolines or IBPB after running L2 due to KVM (L0) advertising eIBRS support to L1. An attacker at L2 with code execution can execute code on an indirect branch on the host machine. We recommend upgrading to Kernel 6.2 or past commit 2e7eab81425a
<p>Publish Date: 2023-01-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2196>CVE-2022-2196</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-2196">https://www.linuxkernelcves.com/cves/CVE-2022-2196</a></p>
<p>Release Date: 2023-01-09</p>
<p>Fix Resolution: v5.10.170,v5.15.96,v6.1.14</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_comp | cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files vulnerability details a regression exists in the linux kernel within kvm nvmx that allowed for speculative execution attacks can carry out spectre attacks on due to thinking it doesn t need retpolines or ibpb after running due to kvm advertising eibrs support to an attacker at with code execution can execute code on an indirect branch on the host machine we recommend upgrading to kernel or past commit publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
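The two CVSS 3 breakdowns quoted in the vulnerability records above (6.1 for CVE-2020-11023, 8.8 for CVE-2022-2196) can be reproduced from the published CVSS v3.0 base-score equations. The sketch below is a minimal, illustrative calculator — not part of any scanner shown here — using the metric weights from the FIRST CVSS v3.0 specification:

```python
import math

# Metric weights from the FIRST CVSS v3.0 specification (base metrics only).
AV  = {"Network": 0.85, "Adjacent": 0.62, "Local": 0.55, "Physical": 0.2}
AC  = {"Low": 0.77, "High": 0.44}
UI  = {"None": 0.85, "Required": 0.62}
CIA = {"None": 0.0, "Low": 0.22, "High": 0.56}

def pr_weight(pr, scope_changed):
    # Privileges Required weighs more when Scope is Changed.
    table = {"None": (0.85, 0.85), "Low": (0.62, 0.68), "High": (0.27, 0.50)}
    return table[pr][1] if scope_changed else table[pr][0]

def roundup(x):
    # CVSS round-up: smallest value with one decimal place that is >= x.
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * pr_weight(pr, scope_changed) * UI[ui]
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if scope_changed else impact + exploitability
    return roundup(min(raw, 10))

# CVE-2020-11023: AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N
print(base_score("Network", "Low", "None", "Required", True, "Low", "Low", "None"))  # 6.1
# CVE-2022-2196: AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H
print(base_score("Local", "Low", "Low", "None", True, "High", "High", "High"))  # 8.8
```

Both listed scores fall out of the same formula; the 1.08 multiplier and the higher Privileges Required weight apply because both vectors have Scope: Changed.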
483,227 | 13,920,986,903 | IssuesEvent | 2020-10-21 11:15:27 | MontbitTech/WebApp-e-EdPort-LMS | https://api.github.com/repos/MontbitTech/WebApp-e-EdPort-LMS | closed | Assignment attach file issue | Critical High Priority bug | I noticed that whenever we are clicking on the attach file option in the new assignment tab. It is directly creating the exam.
Expected: A proper space for the attachment of the file should be given in the new assignment tab itself, or, after clicking on "create and submit assignment", a pop-up should open for the submission of the file.

| 1.0 | Assignment attach file issue - I noticed that whenever we are clicking on the attach file option in the new assignment tab. It is directly creating the exam.
Expected: A proper space for the attachment of the file should be given in the new assignment tab itself, or, after clicking on "create and submit assignment", a pop-up should open for the submission of the file.

| non_comp | assignment attach file issue i noticed that whenever we are clicking on the attach file option in the new assignment tab it is directly creating the exam expected a proper space for the attachment of the file should be given in the new assignment tab itself or after clicking on the create and submit assignment a pop up should open for the submission of file | 0 |
5,052 | 7,643,730,565 | IssuesEvent | 2018-05-08 13:36:27 | Yoast/wordpress-seo | https://api.github.com/repos/Yoast/wordpress-seo | closed | WooCommerce Category Pages Display Incorrect Open Graph Meta Description | compatibility support woocommerce | ### What did you expect to happen?
When checked in the Facebook Developers Tool, I expected my custom Facebook meta description to be displayed.
### What happened instead?
Facebook displayed the category description. When checked in source-code, Yoast SEO is outputting the category description as the Open Graph meta description. When WooCommerce SEO is disabled the Open Graph meta description tags get outputted.
### How can we reproduce this behavior?
1.Create a category
2.Fill in the social settings by writing a Facebook meta description

3.Check in Facebook Dev tool and in source code
**Source Code**

**Facebook Dev Tool**

4.Deactivate WooCommerce SEO and the OG tags appear
**Sourcecode**

**Facebook:**

Can you provide a link to a page which shows this issue?
http://www.anblik.com/review-category/best-online-website-builders-software-to-create-free-websites/
### Technical info
-WordPress version: 4.5.3
-Yoast SEO version: 3.2.5
-WooCommerce SEO: 3.2.1
Notes: Twitter is not affected.
It is also appearing on WordPress Demo Site
http://support.yoastdemo.com/en/product-category/clothing/
-WordPress version: 4.5.3
-Yoast SEO version: 3.2.4
-WooCommerce SEO: 3.1.1
| True | WooCommerce Category Pages Display Incorrect Open Graph Meta Description - ### What did you expect to happen?
When checked in the Facebook Developers Tool, I expected my custom Facebook meta description to be displayed.
### What happened instead?
Facebook displayed the category description. When checked in source-code, Yoast SEO is outputting the category description as the Open Graph meta description. When WooCommerce SEO is disabled the Open Graph meta description tags get outputted.
### How can we reproduce this behavior?
1.Create a category
2.Fill in the social settings by writing a Facebook meta description

3.Check in Facebook Dev tool and in source code
**Source Code**

**Facebook Dev Tool**

4.Deactivate WooCommerce SEO and the OG tags appear
**Sourcecode**

**Facebook:**

Can you provide a link to a page which shows this issue?
http://www.anblik.com/review-category/best-online-website-builders-software-to-create-free-websites/
### Technical info
-WordPress version: 4.5.3
-Yoast SEO version: 3.2.5
-WooCommerce SEO: 3.2.1
Notes: Twitter is not affected.
It is also appearing on WordPress Demo Site
http://support.yoastdemo.com/en/product-category/clothing/
-WordPress version: 4.5.3
-Yoast SEO version: 3.2.4
-WooCommerce SEO: 3.1.1
| comp | woocommerce category pages display incorrect open graph meta description what did you expect to happen when checked in the facebook developers tool i expected my custom facebook meta description to be displayed what happened instead facebook displayed the category description when checked in source code yoast seo is outputting the category description as the open graph meta description when woocommerce seo is disabled the open graph meta description tags get outputted how can we reproduce this behavior create a category fill in the social settings by writing a facebook meta description check in facebook dev tool and in source code source code facebook dev tool deactivate woocommerce seo and the og tags appear sourcecode facebook can you provide a link to a page which shows this issue technical info wordpress version yoast seo version woocommerce seo notes twitter is not affected it is also appearing on wordpress demo site wordpress version yoast seo version woocommerce seo | 1 |
6,334 | 8,679,438,723 | IssuesEvent | 2018-11-30 23:52:58 | cobalt-org/liquid-rust | https://api.github.com/repos/cobalt-org/liquid-rust | opened | `sort` does not support operating on a hash | enhancement shopify-compatibility | Example test:
```rust
assert_eq!(
v!([{ "a": 1, "b": 2 }]),
filters!(sort, v!({ "a": 1, "b": 2 }))
);
``` | True | `sort` does not support operating on a hash - Example test:
```rust
assert_eq!(
v!([{ "a": 1, "b": 2 }]),
filters!(sort, v!({ "a": 1, "b": 2 }))
);
``` | comp | sort does not support operating on a hash example test rust assert eq v filters sort v a b | 1 |
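The expected value in the test above shows the Shopify-compatible coercion being requested: a hash handed to `sort` should come back as a one-element array rather than raise an error. A rough Python sketch of that coercion (the `liquid_sort` helper is an illustrative stand-in, not liquid-rust or Shopify API):

```python
# Sketch of Liquid-style `sort` input coercion: a non-array value
# (here, a hash) is wrapped into a one-element array before sorting.
# `liquid_sort` is an illustrative name, not part of any real library.
def liquid_sort(value):
    items = value if isinstance(value, list) else [value]
    # Dict entries are not mutually comparable as a unit, so a
    # single-element wrap is returned unchanged; plain scalar
    # arrays sort normally.
    if all(not isinstance(x, dict) for x in items):
        return sorted(items)
    return items

print(liquid_sort({"a": 1, "b": 2}))  # [{'a': 1, 'b': 2}]
print(liquid_sort([3, 1, 2]))         # [1, 2, 3]
```

Wrapping non-array inputs into a one-element array is the behavior the quoted test expects; under that assumption the hash comes back unchanged.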
18,731 | 26,096,908,247 | IssuesEvent | 2022-12-26 21:45:15 | nonamecrackers2/crackers-wither-storm-mod | https://api.github.com/repos/nonamecrackers2/crackers-wither-storm-mod | closed | Dropz Mod Incompatibility | 1.16.5 compatibility 1.18.2 1.19 | **Minecraft version**
The version of Minecraft the mod is running on
1.18.2
**Mod version**
The version of the mod you are using
Latest version
**Describe the bug**
A clear and concise description of what the bug is.
Wither Storm won't pick up dropped items with the Dropz mod https://www.curseforge.com/minecraft/mc-mods/dropz
**To Reproduce**
Steps to reproduce the behavior:
1 Summon the Wither Storm with the Dropz mod installed,
2 Drop any item
3 The Wither Storm won't pick up the item
**Crash Reports / Logs**
Crash reports / logs hosted on pastebin, gist, etc.
**Screenshots**
If applicable, add screenshots to help explain your problem.
| True | Dropz Mod Incompatibility - **Minecraft version**
The version of Minecraft the mod is running on
1.18.2
**Mod version**
The version of the mod you are using
Latest version
**Describe the bug**
A clear and concise description of what the bug is.
Wither Storm won't pick up dropped items with the Dropz mod https://www.curseforge.com/minecraft/mc-mods/dropz
**To Reproduce**
Steps to reproduce the behavior:
1 Summon the Wither Storm with the Dropz mod installed,
2 Drop any item
3 The Wither Storm won't pick up the item
**Crash Reports / Logs**
Crash reports / logs hosted on pastebin, gist, etc.
**Screenshots**
If applicable, add screenshots to help explain your problem.
| comp | dropz mod incompatibility minecraft version the version of minecraft the mod is running on mod version the version of the mod you are using latest version describe the bug a clear and concise description of what the bug is wither storm won t pick up dropped items with the dropz mod to reproduce steps to reproduce the behavior summon the wither storm with the dropz mod installed drop any item the wither storm won t pick up the item crash reports logs crash reports logs hosted on pastebin gist etc screenshots if applicable add screenshots to help explain your problem | 1 |
1,601 | 4,161,960,130 | IssuesEvent | 2016-06-17 18:30:27 | LegendOfMCPE/EssentialsPE | https://api.github.com/repos/LegendOfMCPE/EssentialsPE | closed | Bug in PocketMine for MCPE 0.15.0.0 | Invalid Plugin compatibility issue | When server is starting...
`
[17:28:09] [Server thread/INFO]: Enabling SimpleWarp v2.1.0
[17:28:09] [Server thread/INFO]: [SimpleWarp] Enabling EssentialsPE support...
[17:28:09] [Server thread/CRITICAL]: Error: "Call to a member function setLabel() on null" (EXCEPTION) in "/plugins/SimpleWarp_v2.1.0.phar/src/falkirks/simplewarp/SimpleWarp" at line 101
[17:28:09] [Server thread/DEBUG]: #0 /src/pocketmine/plugin/PluginBase(86): falkirks\simplewarp\SimpleWarp->onEnable(boolean)
[17:28:09] [Server thread/DEBUG]: #1 /src/pocketmine/plugin/PharPluginLoader(123): pocketmine\plugin\PluginBase->setEnabled(boolean 1)
[17:28:09] [Server thread/DEBUG]: #2 /src/pocketmine/plugin/PluginManager(604): pocketmine\plugin\PharPluginLoader->enablePlugin(falkirks\simplewarp\SimpleWarp object)
[17:28:09] [Server thread/DEBUG]: #3 /src/pocketmine/Server(1849): pocketmine\plugin\PluginManager->enablePlugin(falkirks\simplewarp\SimpleWarp object)
[17:28:09] [Server thread/DEBUG]: #4 /src/pocketmine/Server(1835): pocketmine\Server->enablePlugin(falkirks\simplewarp\SimpleWarp object)
[17:28:09] [Server thread/DEBUG]: #5 /src/pocketmine/Server(1651): pocketmine\Server->enablePlugins(integer 1)
[17:28:09] [Server thread/DEBUG]: #6 /src/pocketmine/PocketMine(464): pocketmine\Server->__construct(pocketmine\CompatibleClassLoader object, pocketmine\utils\MainLogger object, string C:\PM\, string C:\PM\, string C:\PM\plugins\)
[17:28:09] [Server thread/INFO]: Disabling SimpleWarp v2.1.0
[17:28:09] [Server thread/INFO]: Enabling EssentialsPE v2.0.0
`
and when a player connects, it immediately disconnects... | True | Bug in PocketMine for MCPE 0.15.0.0 - When server is starting...
`
[17:28:09] [Server thread/INFO]: Enabling SimpleWarp v2.1.0
[17:28:09] [Server thread/INFO]: [SimpleWarp] Enabling EssentialsPE support...
[17:28:09] [Server thread/CRITICAL]: Error: "Call to a member function setLabel() on null" (EXCEPTION) in "/plugins/SimpleWarp_v2.1.0.phar/src/falkirks/simplewarp/SimpleWarp" at line 101
[17:28:09] [Server thread/DEBUG]: #0 /src/pocketmine/plugin/PluginBase(86): falkirks\simplewarp\SimpleWarp->onEnable(boolean)
[17:28:09] [Server thread/DEBUG]: #1 /src/pocketmine/plugin/PharPluginLoader(123): pocketmine\plugin\PluginBase->setEnabled(boolean 1)
[17:28:09] [Server thread/DEBUG]: #2 /src/pocketmine/plugin/PluginManager(604): pocketmine\plugin\PharPluginLoader->enablePlugin(falkirks\simplewarp\SimpleWarp object)
[17:28:09] [Server thread/DEBUG]: #3 /src/pocketmine/Server(1849): pocketmine\plugin\PluginManager->enablePlugin(falkirks\simplewarp\SimpleWarp object)
[17:28:09] [Server thread/DEBUG]: #4 /src/pocketmine/Server(1835): pocketmine\Server->enablePlugin(falkirks\simplewarp\SimpleWarp object)
[17:28:09] [Server thread/DEBUG]: #5 /src/pocketmine/Server(1651): pocketmine\Server->enablePlugins(integer 1)
[17:28:09] [Server thread/DEBUG]: #6 /src/pocketmine/PocketMine(464): pocketmine\Server->__construct(pocketmine\CompatibleClassLoader object, pocketmine\utils\MainLogger object, string C:\PM\, string C:\PM\, string C:\PM\plugins\)
[17:28:09] [Server thread/INFO]: Disabling SimpleWarp v2.1.0
[17:28:09] [Server thread/INFO]: Enabling EssentialsPE v2.0.0
`
and when player is connecting, it immediately disconnects... | comp | bug in pocketmine for mcpe when server is starting увімкнення simplewarp enabling essentialspe support error call to a member function setlabel on null exception in plugins simplewarp phar src falkirks simplewarp simplewarp at line src pocketmine plugin pluginbase falkirks simplewarp simplewarp onenable boolean src pocketmine plugin pharpluginloader pocketmine plugin pluginbase setenabled boolean src pocketmine plugin pluginmanager pocketmine plugin pharpluginloader enableplugin falkirks simplewarp simplewarp object src pocketmine server pocketmine plugin pluginmanager enableplugin falkirks simplewarp simplewarp object src pocketmine server pocketmine server enableplugin falkirks simplewarp simplewarp object src pocketmine server pocketmine server enableplugins integer src pocketmine pocketmine pocketmine server construct pocketmine compatibleclassloader object pocketmine utils mainlogger object string c pm string c pm string c pm plugins вимкнення simplewarp enabling essentialspe and when player is connecting it immediately disconnects | 1 |
106,139 | 13,247,729,402 | IssuesEvent | 2020-08-19 17:44:54 | tokio-rs/tracing | https://api.github.com/repos/tokio-rs/tracing | closed | fmt::Subscribers don't want their output to be captured | crate/subscriber kind/feature needs/design | ## Bug Report
<!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
-->
### Version
```
"checksum tracing 0.1.13 (registry+https://github.com/rust-lang/crates.io-index)" = "1721cc8cf7d770cc4257872507180f35a4797272f5962f24c806af9e7faf52ab"
"checksum tracing-attributes 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)" = "7fbad39da2f9af1cae3016339ad7f2c7a9e870f12e8fd04c4fd7ef35b30c0d2b"
"checksum tracing-core 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)" = "0aa83a9a47081cd522c09c81b31aec2c9273424976f922ad61c053b58350b715"
"checksum tracing-log 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "5e0f8c7178e13481ff6765bd169b33e8d554c5d2bbede5e32c356194be02b9b9"
"checksum tracing-serde 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b6ccba2f8f16e0ed268fc765d9b7ff22e965e7185d32f8f1ec8294fe17d86e79"
"checksum tracing-subscriber 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "dedebcf5813b02261d6bab3a12c6a8ae702580c0405a2e8ec16c3713caf14c20"
```
### Platform
Linux localhost 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
### Crates
`tracing-subscriber`
### Description
I cannot get the output of a `fmt::Subscriber` to coöperate with settings that want to capture or redirect stdio. For example, the tracing messages all show through the test runner output of `cargo test`, even when the `--nocapture` flag is not set.
I have tried to even make my own `MakeWriter` that calls `eprintln!()` directly, but that only seems to hide some of the output. It would be nice to have an easy way to clean up test output short of disabling the output entirely, since it's nice to have the output there in case a test fails. | 1.0 | fmt::Subscribers don't want their output to be captured - ## Bug Report
<!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
-->
### Version
```
"checksum tracing 0.1.13 (registry+https://github.com/rust-lang/crates.io-index)" = "1721cc8cf7d770cc4257872507180f35a4797272f5962f24c806af9e7faf52ab"
"checksum tracing-attributes 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)" = "7fbad39da2f9af1cae3016339ad7f2c7a9e870f12e8fd04c4fd7ef35b30c0d2b"
"checksum tracing-core 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)" = "0aa83a9a47081cd522c09c81b31aec2c9273424976f922ad61c053b58350b715"
"checksum tracing-log 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "5e0f8c7178e13481ff6765bd169b33e8d554c5d2bbede5e32c356194be02b9b9"
"checksum tracing-serde 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b6ccba2f8f16e0ed268fc765d9b7ff22e965e7185d32f8f1ec8294fe17d86e79"
"checksum tracing-subscriber 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "dedebcf5813b02261d6bab3a12c6a8ae702580c0405a2e8ec16c3713caf14c20"
```
### Platform
Linux localhost 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
### Crates
`tracing-subscriber`
### Description
I cannot get the output of a `fmt::Subscriber` to coöperate with settings that want to capture or redirect stdio. For example, the tracing messages all show through the test runner output of `cargo test`, even when the `--nocapture` flag is not set.
I have tried to even make my own `MakeWriter` that calls `eprintln!()` directly, but that only seems to hide some of the output. It would be nice to have an easy way to clean up test output short of disabling the output entirely, since it's nice to have the output there in case a test fails. | non_comp | fmt subscribers don t want their output to be captured bug report thank you for reporting an issue please fill in as much of the template below as you re able version checksum tracing registry checksum tracing attributes registry checksum tracing core registry checksum tracing log registry checksum tracing serde registry checksum tracing subscriber registry platform linux localhost generic ubuntu smp tue feb utc gnu linux crates tracing subscriber description i cannot get the output of a fmt subscriber to coöperate with settings that want to capture or redirect stdio for example the tracing messages all show through the test runner output of cargo test even when the nocapture flag is not set i have tried to even make my own makewriter that calls eprintln directly but that only seems to hide some of the output it would be nice to have an easy way to clean up test output short of disabling the output entirely since it s nice to have the output there in case a test fails | 0 |
20,517 | 30,388,029,940 | IssuesEvent | 2023-07-13 03:39:22 | Apollounknowndev/tectonic | https://api.github.com/repos/Apollounknowndev/tectonic | closed | Compatible with Regions Unexplored? | works as intended compatibility | Every time I tried to locate one of your biomes, it would say "Could not find a biome of type "Tectonic's biome" within reasonable distance."
Is it because of another biome mod? I also used Regions Unexplored. | True | Compatible with Regions Unexplored? - Everytime I tried to locate one of your biome, it will say "Could not find a biome of type "Tectonic's biome" within reasonable distance."
Is it because of other biome mod? I also used Regions Unexplored. | comp | compatible with regions unexplored everytime i tried to locate one of your biome it will say could not find a biome of type tectonic s biome within reasonable distance is it because of other biome mod i also used regions unexplored | 1 |
7,932 | 10,131,019,355 | IssuesEvent | 2019-08-01 18:25:58 | arcticicestudio/nord-jetbrains | https://api.github.com/repos/arcticicestudio/nord-jetbrains | closed | Diagram Theme Support | context-plugin-support scope-compatibility scope-ux type-feature | [JetBrains core product version 2019.2 introduced theme support][idea-207533] for [_Diagrams_][d] in order to prevent unreadable output due to "hardcoded" color values not matching the currently active UI theme. Add the available theme keys to better match Nord's style.
[idea-207533]: https://youtrack.jetbrains.com/issue/IDEA-207533
[d]: https://www.jetbrains.com/help/idea/diagrams.html | True | Diagram Theme Support - [JetBrains core product version 2019.2 introduced theme support][idea-207533] for [_Diagrams_][d] in order to prevent unreadable output due to "hardcoded" color values not matching the currently active UI theme. Add the available theme keys to better match Nord's style.
[idea-207533]: https://youtrack.jetbrains.com/issue/IDEA-207533
[d]: https://www.jetbrains.com/help/idea/diagrams.html | comp | diagram theme support for in order to prevent unreadable output due to hardcoded color values not matching the currently active ui theme add the available theme keys to better match nord s style | 1 |
14,587 | 17,753,625,800 | IssuesEvent | 2021-08-28 09:50:34 | PowerNukkit/PowerNukkit | https://api.github.com/repos/PowerNukkit/PowerNukkit | closed | Plugin Fakeinventorys | Type: compatibility | # 🔌 Plugin compatibility issue
<!--
👉 This template is helpful, but you may erase everything if you can express the issue clearly
Feel free to ask questions or start related discussion
--> Since the latest update of PowerNukkit, no plugin that depends on Fakeinventorys works.
I have contacted the plugin owner, but he says it is a problem with PowerNukkit.
For example, I have tried the following plugins: Seeinv, Backpack.
Every time I try to use such a command, the server closes.
The error message is too long.
In 1.5.1.0 all these plugins worked great.
### 📸 Screenshots / Videos
<!-- ✍ If applicable, add screenshots or video recordings to help explain your problem -->
### ▶ Steps to Reproduce
<!--- ✍ Reliable steps which someone can use to reproduce the issue. -->
1. Install Fakeinventorys and Backpack
2. Use /bp
3. Server closed
4. See error
### ✔ Expected Behavior
<!-- ✍ What would you expect to happen -->
### ❌ Actual Behavior
<!-- ✍ What actually happened -->
### 📋 Debug information
<!-- Use the 'debugpaste upload' and 'timings paste' command in PowerNukkit -->
<!-- You can get the version from the file name, the 'about' or 'debugpaste' command outputs -->
* PowerNukkit version: git-cb4ac65
* Debug link: ✍
* Timings link (if relevant): ✍
### 💢 Crash Dump, Stack Trace and Other Files
<!-- ✍ Use https://hastebin.com for big logs or dumps -->
### 💬 Anything else we should know?
<!-- ✍ This is the perfect place to add any additional details -->
| True | Plugin Fakeinventorys - # 🔌 Plugin compatibility issue
<!--
👉 This template is helpful, but you may erase everything if you can express the issue clearly
Feel free to ask questions or start related discussion
--> Since the latest Update of Powernukkit, no Plugin works with Fakeinventorys Depend.
I have contact the Plugin Owner but he sais its a Problem of Powernukkit.
For Example i have tried following plugins: Seeinv, Backpack
Everytime i try to use this command, Server closed.
The Error Message is toooooo long.
In 1.5.1.0 all these Plugins works great.
### 📸 Screenshots / Videos
<!-- ✍ If applicable, add screenshots or video recordings to help explain your problem -->
### ▶ Steps to Reproduce
<!--- ✍ Reliable steps which someone can use to reproduce the issue. -->
1. Install Fakeinventorys and Backpack
2. Use /bp
3. Server closed
4. See error
### ✔ Expected Behavior
<!-- ✍ What would you expect to happen -->
### ❌ Actual Behavior
<!-- ✍ What actually happened -->
### 📋 Debug information
<!-- Use the 'debugpaste upload' and 'timings paste' command in PowerNukkit -->
<!-- You can get the version from the file name, the 'about' or 'debugpaste' command outputs -->
* PowerNukkit version: git-cb4ac65
* Debug link: ✍
* Timings link (if relevant): ✍
### 💢 Crash Dump, Stack Trace and Other Files
<!-- ✍ Use https://hastebin.com for big logs or dumps -->
### 💬 Anything else we should know?
<!-- ✍ This is the perfect place to add any additional details -->
| comp | plugin fakeinventorys 🔌 plugin compatibility issue 👉 this template is helpful but you may erase everything if you can express the issue clearly feel free to ask questions or start related discussion since the latest update of powernukkit no plugin works with fakeinventorys depend i have contact the plugin owner but he sais its a problem of powernukkit for example i have tried following plugins seeinv backpack everytime i try to use this command server closed the error message is toooooo long in all these plugins works great 📸 screenshots videos ▶ steps to reproduce install fakeinventorys and backpack use bp server closed see error ✔ expected behavior ❌ actual behavior 📋 debug information powernukkit version git debug link ✍ timings link if relevant ✍ 💢 crash dump stack trace and other files 💬 anything else we should know | 1 |
65 | 2,535,895,254 | IssuesEvent | 2015-01-26 08:58:46 | dranzd/Pike-Arms | https://api.github.com/repos/dranzd/Pike-Arms | opened | Catalog's edit account does not show the values of the forms when values are available | bug composer-compatible | Form is empty even when data is available.
This is caused by the tep_draw_*_field functions, which use a combination of _REQUEST and GLOBALS to populate the fields, while account_edit.php sets the data into the _POST var.
The tep_draw_*_field functions were modified before to not use GLOBALS, but might have been programmed incorrectly. | True | Catalog's edit account does not show the values of the forms when values are available - Form is empty even when data is available.
Caused by the tep_draw_*_field that uses combination of _REQUEST and GLOBALS to populate the fields but the account_edit.php sets the data into the _POST var.
tep_draw_*_field where modified before to not use GLOBALS but might have been programmed incorrectly. | comp | catalog s edit account does not show the values of the forms when values are available form is empty even when data is available caused by the tep draw field that uses combination of request and globals to populate the fields but the account edit php sets the data into the post var tep draw field where modified before to not use globals but might have been programmed incorrectly | 1 |
16,647 | 22,792,497,039 | IssuesEvent | 2022-07-10 08:00:29 | euratom-software/calcam | https://api.github.com/repos/euratom-software/calcam | closed | Exception when opening calibration info with PyQt6 | bug compatibility | When clicking the calibration "info" button, the attached error gets raised
[Calibration info button error.txt](https://github.com/euratom-software/calcam/files/8909164/Calibration.info.button.error.txt)
. | True | Exception when opening calibration info with PyQt6 - When clicking the calibration "info" button, the attached error gets raised
[Calibration info button error.txt](https://github.com/euratom-software/calcam/files/8909164/Calibration.info.button.error.txt)
. | comp | exception when opening calibration info with when clicking the calibration info button the attached error gets raised | 1 |
442,238 | 12,742,151,728 | IssuesEvent | 2020-06-26 07:51:44 | OpenMined/PyGridNetwork | https://api.github.com/repos/OpenMined/PyGridNetwork | closed | Rename project | Priority: 3 - Medium :unamused: Severity: 4 - Low :sunglasses: Status: Available :wave: Type: Improvement :chart_with_upwards_trend: | ## Description
We need to rename this project slightly to fit [our new project naming standards](https://github.com/OpenMined/.github/issues/11).
**The new name for this repo should be:** PyGridNetwork
**The new release name for this repo should be:** openmined.gridnetwork
I have gone ahead and renamed the repo, but I will need someone else to be in charge of changing the release name. **With that said, if this repo doesn't have an official release, then this issue can be closed.**
If you have any questions about this issue, please comment here or message **@cereallarceny** on Slack. | 1.0 | Rename project - ## Description
We need to rename this project slightly to fit [our new project naming standards](https://github.com/OpenMined/.github/issues/11).
**The new name for this repo should be:** PyGridNetwork
**The new release name for this repo should be:** openmined.gridnetwork
I have gone ahead and renamed the repo, but I will need someone else to be in charge of changing the release name. **With that said, if this repo doesn't have an official release, then this issue can be closed.**
If you have any questions about this issue, please comment here or message **@cereallarceny** on Slack. | non_comp | rename project description we need to rename this project slightly to fit the new name for this repo should be pygridnetwork the new release name for this repo should be openmined gridnetwork i have gone ahead and renamed the repo but i will need someone else to be in charge of changing the release name with that said if this repo doesn t have an official release then this issue can be closed if you have any questions about this issue please comment here or message cereallarceny on slack | 0 |
202,205 | 15,822,851,169 | IssuesEvent | 2021-04-05 23:14:25 | divanov11/Mumble | https://api.github.com/repos/divanov11/Mumble | closed | Typo in About section of GitHub. built for "develpers" | Status: Dennis Feedback Needed documentation | built for develpers. It should be developers
| 1.0 | Typo in About section of GitHub. built for "develpers" - built for develpers. It should be developers
| non_comp | typo in about section of github built for develpers built for develpers it should be developers | 0 |
123,848 | 10,291,654,929 | IssuesEvent | 2019-08-27 12:58:11 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | teamcity: failed test: _\N_with_trailing_char_direct=false | C-test-failure O-robot | The following tests appear to have failed on master (testrace): _\N_with_trailing_char_direct=false
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+_\N_with_trailing_char_direct=false).
[#1451439](https://teamcity.cockroachdb.com/viewLog.html?buildId=1451439):
```
_\N_with_trailing_char_direct=false
--- FAIL: testrace/TestImportData/MYSQLOUTFILE:_\N_with_trailing_char_direct=false (0.000s)
Test ended in panic.
------- Stdout: -------
I190823 06:53:15.654928 819 sql/event_log.go:130 [n1,client=127.0.0.1:33836,user=root] Event: "create_database", target: 89, info: {DatabaseName:d18 Statement:CREATE DATABASE d18 User:root}
I190823 06:53:15.894280 9572 storage/replica_command.go:284 [n1,s1,r59/1:/{Table/88/1-Max}] initiating a split of this range at key /Table/90/1 [r61] (manual)
I190823 06:53:15.944381 9571 ccl/importccl/read_import_proc.go:83 [n1,import-distsql-ingest] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
I190823 06:53:15.956536 9526 storage/replica_command.go:598 [n1,merge,s1,r37/1:/Table/64{-/1}] initiating a merge of r36:/Table/6{4/1-6} [(n1,s1):1, next=2, gen=12] into this range (lhs+rhs has (size=0 B+70 B qps=0.00+0.00 --> 0.00qps) below threshold (size=70 B, qps=0.00))
I190823 06:53:16.045844 9604 storage/replica_command.go:284 [n1,split,s1,r59/1:/{Table/88/1-Max}] initiating a split of this range at key /Table/90 [r62] (zone config)
I190823 06:53:16.050058 147 storage/store.go:2593 [n1,s1,r37/1:/Table/64{-/1}] removing replica r36/1
I190823 06:53:16.220486 819 sql/event_log.go:130 [n1,client=127.0.0.1:33836,user=root] Event: "drop_database", target: 89, info: {DatabaseName:d18 Statement:DROP DATABASE d18 User:root DroppedSchemaObjects:[]}
```
Please assign, take a look and update the issue accordingly.
| 1.0 | teamcity: failed test: _\N_with_trailing_char_direct=false - The following tests appear to have failed on master (testrace): _\N_with_trailing_char_direct=false
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+_\N_with_trailing_char_direct=false).
[#1451439](https://teamcity.cockroachdb.com/viewLog.html?buildId=1451439):
```
_\N_with_trailing_char_direct=false
--- FAIL: testrace/TestImportData/MYSQLOUTFILE:_\N_with_trailing_char_direct=false (0.000s)
Test ended in panic.
------- Stdout: -------
I190823 06:53:15.654928 819 sql/event_log.go:130 [n1,client=127.0.0.1:33836,user=root] Event: "create_database", target: 89, info: {DatabaseName:d18 Statement:CREATE DATABASE d18 User:root}
I190823 06:53:15.894280 9572 storage/replica_command.go:284 [n1,s1,r59/1:/{Table/88/1-Max}] initiating a split of this range at key /Table/90/1 [r61] (manual)
I190823 06:53:15.944381 9571 ccl/importccl/read_import_proc.go:83 [n1,import-distsql-ingest] could not fetch file size; falling back to per-file progress: bad ContentLength: -1
I190823 06:53:15.956536 9526 storage/replica_command.go:598 [n1,merge,s1,r37/1:/Table/64{-/1}] initiating a merge of r36:/Table/6{4/1-6} [(n1,s1):1, next=2, gen=12] into this range (lhs+rhs has (size=0 B+70 B qps=0.00+0.00 --> 0.00qps) below threshold (size=70 B, qps=0.00))
I190823 06:53:16.045844 9604 storage/replica_command.go:284 [n1,split,s1,r59/1:/{Table/88/1-Max}] initiating a split of this range at key /Table/90 [r62] (zone config)
I190823 06:53:16.050058 147 storage/store.go:2593 [n1,s1,r37/1:/Table/64{-/1}] removing replica r36/1
I190823 06:53:16.220486 819 sql/event_log.go:130 [n1,client=127.0.0.1:33836,user=root] Event: "drop_database", target: 89, info: {DatabaseName:d18 Statement:DROP DATABASE d18 User:root DroppedSchemaObjects:[]}
```
Please assign, take a look and update the issue accordingly.
| non_comp | teamcity failed test n with trailing char direct false the following tests appear to have failed on master testrace n with trailing char direct false you may want to check n with trailing char direct false fail testrace testimportdata mysqloutfile n with trailing char direct false test ended in panic stdout sql event log go event create database target info databasename statement create database user root storage replica command go initiating a split of this range at key table manual ccl importccl read import proc go could not fetch file size falling back to per file progress bad contentlength storage replica command go initiating a merge of table into this range lhs rhs has size b b qps below threshold size b qps storage replica command go initiating a split of this range at key table zone config storage store go removing replica sql event log go event drop database target info databasename statement drop database user root droppedschemaobjects please assign take a look and update the issue accordingly | 0 |
15,533 | 19,915,159,781 | IssuesEvent | 2022-01-25 21:39:34 | Tewr/BlazorWorker | https://api.github.com/repos/Tewr/BlazorWorker | closed | Only globalization-invariant mode is supported in the WorkerService | enhancement Compatibility | An attempt to get CultureInfo with System.Globalization.CultureInfo.GetCultureInfo(String name)
throws exception "Only the invariant culture is supported in globalization-invariant mode" in the WorkerService (net 6.0) | True | Only globalization-invariant mode is supported in the WorkerService - An attempt to get CultureInfo with System.Globalization.CultureInfo.GetCultureInfo(String name)
throws exception "Only the invariant culture is supported in globalization-invariant mode" in the WorkerService (net 6.0) | comp | only globalization invariant mode is supported in the workerservice an attempt to get cultureinfo with system globalization cultureinfo getcultureinfo string name throws exception only the invariant culture is supported in globalization invariant mode in the workerservice net | 1 |
15,393 | 19,652,962,797 | IssuesEvent | 2022-01-10 09:30:24 | AmProsius/gothic-1-community-patch | https://api.github.com/repos/AmProsius/gothic-1-community-patch | opened | Viran's subtitles don't match the audio pt. 2 (EN) | language dependend type: revert on save compatibility: difficult validation: required impl: change obj var | **Describe the bug**
Viran's voice (audio) says "bloodflies" while the subtitles (output unit) display "bugs" in the dialog when the player wrongfully claims all bloodflies are taken care of.
**Expected behavior**
Viran's subtitles now match the audio when wrongfully claiming to have dealt with the bloodflies.
**Steps to reproduce the issue**
1. Initiate and progress through the "The Swampweed Harvest" quest.
2. Talk to Viran about the bloodflies.
3. Talk to Viran again claiming they are gone (when they are in fact not).
**Additional context**
The dialog line in the scripts:
https://github.com/AmProsius/gothic-1-community-patch/blob/2892a60a3edf33340d7907fe7b3c137d079e8474/scriptbase/_work/Data/Scripts/Content/Story/Missions/DIA_NOV_1302_Viran.d#L166
Related to #251.
| True | Viran's subtitles don't match the audio pt. 2 (EN) - **Describe the bug**
Viran's voice (audio) says "bloodflies" while the subtitles (output unit) display "bugs" in the dialog when the player wrongfully claims all bloodflies are taken care of.
**Expected behavior**
Viran's subtitles now match the audio when wrongfully claiming to have dealt with the bloodflies.
**Steps to reproduce the issue**
1. Initiate and progress through the "The Swampweed Harvest" quest.
2. Talk the Viran about the bloodflies.
3. Talk to Viran again claiming they are gone (when they are in fact not).
**Additional context**
The dialog line in the scripts:
https://github.com/AmProsius/gothic-1-community-patch/blob/2892a60a3edf33340d7907fe7b3c137d079e8474/scriptbase/_work/Data/Scripts/Content/Story/Missions/DIA_NOV_1302_Viran.d#L166
Related to #251.
| comp | viran s subtitles don t match the audio pt en describe the bug viran s voice audio says bloodflies while the subtitles output unit display bugs in the dialog when the player wrongfully claims all bloodflies are taken care of expected behavior viran s subtitles now match the audio when wrongfully claiming to have dealt with the bloodflies steps to reproduce the issue initiate and progress through the the swampweed harvest quest talk the viran about the bloodflies talk to viran again claiming they are gone when they are in fact not additional context the dialog line in the scripts related to | 1 |
19,598 | 27,214,489,752 | IssuesEvent | 2023-02-20 20:00:55 | Alujjdnd/Ngrok-LAN | https://api.github.com/repos/Alujjdnd/Ngrok-LAN | opened | [INCOMPATIBILITY] Custom LAN | incompatibility | **Name of Mod**
Custom LAN
**Link to Mod Source**
https://github.com/DimiDimit/Custom-LAN
**Screenshots**
N/A
**Desktop (please complete the following information):**
- OS: Windows 11
- Minecraft Version 1.19.3
- Ngrok-LAN Version 1.4.4
**Additional context**
- Crash happens the moment you click "Open to LAN"
- Crash log: https://mclo.gs/cgXUeVy
| True | [INCOMPATIBILITY] Custom LAN - **Name of Mod**
Custom LAN
**Link to Mod Source**
https://github.com/DimiDimit/Custom-LAN
**Screenshots**
N/A
**Desktop (please complete the following information):**
- OS: Windows 11
- Minecraft Version 1.19.3
- Ngrok-LAN Version 1.4.4
**Additional context**
- Crash happens the moment you click "Open to LAN"
- Crash log: https://mclo.gs/cgXUeVy
| comp | custom lan name of mod custom lan link to mod source screenshots n a desktop please complete the following information os windows minecraft version ngrok lan version additional context crash happens the moment you click open to lan crash log | 1 |
113,944 | 9,669,159,959 | IssuesEvent | 2019-05-21 16:40:13 | bamhm182/FinTrinity | https://api.github.com/repos/bamhm182/FinTrinity | closed | macOS Support | awaiting test | It would be great if there was macOS support. To do this, I need to figure out where QCMA stores its configuration file on macOS. Still trying to figure out how to find it without access to a Mac. | 1.0 | macOS Support - It would be great if there was macOS support. To do this, I need to figure out where QCMA stores its configuration file on macOS. Still trying to figure out how to find it without access to a Mac. | non_comp | macos support it would be great if there was macos support to do this i need to figure out where qcma stores its configuration file on macos still trying to figure out how to find it without access to a mac | 0 |
4,688 | 7,308,332,112 | IssuesEvent | 2018-02-28 07:53:45 | SpongePowered/SpongeForge | https://api.github.com/repos/SpongePowered/SpongeForge | closed | Incorrect block state. | status: pr pending type: bug type: mod incompatibility version: 1.12 | **I am currently running**
<!-- If you don't use the latest version, please tell us why. -->
- SpongeForge version: spongeforge-1.12.2-2586-7.1.0-BETA-2887
- Forge version: forge-1.12.2-14.23.1.2586
- Java version: 1.8.0_131 x64
- Operating System: Ubuntu Server 16.0.4 or Windows 7/8.1 x64
- Plugins (27): Minecraft, Minecraft Coder Pack, SpongeAPI, SpongeForge, FastAsyncWorldEdit, BeanCore, CatClearLag, CmdCalendar, CommandSyncSponge, CommandUtils, EconomyLite, GriefPrevention, Holograms, JobsLite, LangSwitch, LuckPerms, Nucleus, PayDay, PlaceholderAPI, RedProtect, SkyClaims, TabManager, UltimateChat, UniversalMarket, VillagerShops, VirtualChest, WorldEdit
- Mods (16): Minecraft, Minecraft Coder Pack, Forge Mod Loader, Minecraft Forge, SpongeAPI, SpongeForge, Bed Patch, Ex Nihilo Creatio, Fishing Net Mod, FoamFix, FoamFixCore, Giacomo's Foundry, IndustrialCraft 2, Parachronology, Tree Chopper, Waterfree Farming
**Mod GiacomosFishingNet on SpongeForge does not work correctly. After placing the fishing net in the water, it cannot be broken except by a piston. When trying to break this block, it stays in place, but the item can fall out. The flow of water also stops. [I enclose a video made by my assistant and tester](https://www.youtube.com/watch?v=9YcyR7dRDw0).**
This issue does not cause errors in the console or other messages.
| True | Incorrect block state. - **I am currently running**
<!-- If you don't use the latest version, please tell us why. -->
- SpongeForge version: spongeforge-1.12.2-2586-7.1.0-BETA-2887
- Forge version: forge-1.12.2-14.23.1.2586
- Java version: 1.8.0_131 x64
- Operating System: Ubuntu Server 16.0.4 or Windows 7/8.1 x64
- Plugins (27): Minecraft, Minecraft Coder Pack, SpongeAPI, SpongeForge, FastAsyncWorldEdit, BeanCore, CatClearLag, CmdCalendar, CommandSyncSponge, CommandUtils, EconomyLite, GriefPrevention, Holograms, JobsLite, LangSwitch, LuckPerms, Nucleus, PayDay, PlaceholderAPI, RedProtect, SkyClaims, TabManager, UltimateChat, UniversalMarket, VillagerShops, VirtualChest, WorldEdit
- Mods (16): Minecraft, Minecraft Coder Pack, Forge Mod Loader, Minecraft Forge, SpongeAPI, SpongeForge, Bed Patch, Ex Nihilo Creatio, Fishing Net Mod, FoamFix, FoamFixCore, Giacomo's Foundry, IndustrialCraft 2, Parachronology, Tree Chopper, Waterfree Farming
**Mod GiacomosFishingNet on SpongeForge does not work correctly. After placing the fishing net in the water, it cannot be broken except by a piston. When trying to break this block, it stays in place, but the item can fall out. The flow of water also stops. [I enclose a video made by my assistant and tester](https://www.youtube.com/watch?v=9YcyR7dRDw0).**
This issue does not cause errors in the console or other messages.
| comp | incorrect block state i am currently running spongeforge version spongeforge beta forge version forge java version operating system ubuntu server or windows plugins minecraft minecraft coder pack spongeapi spongeforge fastasyncworldedit beancore catclearlag cmdcalendar commandsyncsponge commandutils economylite griefprevention holograms jobslite langswitch luckperms nucleus payday placeholderapi redprotect skyclaims tabmanager ultimatechat universalmarket villagershops virtualchest worldedit mods minecraft minecraft coder pack forge mod loader minecraft forge spongeapi spongeforge bed patch ex nihilo creatio fishing net mod foamfix foamfixcore giacomo s foundry industrialcraft parachronology tree chopper waterfree farming mod giacomosfishingnet on spongeforge does not work correctly after placing the fishing net in the water it can not be broken except by a piston when trying to break this block it stays in place but the item can fall out the flow of water also stops this issue does not cause errors in the console or other messages | 1 |
17,532 | 24,177,400,206 | IssuesEvent | 2022-09-23 04:28:31 | aesara-devs/aesara | https://api.github.com/repos/aesara-devs/aesara | closed | Bug in `JITLinker` when first output of inner `FunctionGraph` is an input variable | bug JAX backend compatibility Numba | An error is raised when a function with `mode in ("NUMBA", "JAX")` returns a `SharedVariable` that has updates.
```python
import aesara
import aesara.tensor as at
x = aesara.shared(0, name="x")
f = aesara.function([], [x], updates={x: x+1}, mode="NUMBA")
```
```python
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/compile/function/__init__.py:317, in function(inputs, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input)
311 fn = orig_function(
312 inputs, outputs, mode=mode, accept_inplace=accept_inplace, name=name
313 )
314 else:
315 # note: pfunc will also call orig_function -- orig_function is
316 # a choke point that all compilation must pass through
--> 317 fn = pfunc(
318 params=inputs,
319 outputs=outputs,
320 mode=mode,
321 updates=updates,
322 givens=givens,
323 no_default_updates=no_default_updates,
324 accept_inplace=accept_inplace,
325 name=name,
326 rebuild_strict=rebuild_strict,
327 allow_input_downcast=allow_input_downcast,
328 on_unused_input=on_unused_input,
329 profile=profile,
330 output_keys=output_keys,
331 )
332 return fn
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/compile/function/pfunc.py:363, in pfunc(params, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input, output_keys)
350 profile = ProfileStats(message=profile)
352 inputs, cloned_outputs = construct_pfunc_ins_and_outs(
353 params,
354 outputs,
(...)
360 allow_input_downcast,
361 )
--> 363 return orig_function(
364 inputs,
365 cloned_outputs,
366 mode,
367 accept_inplace=accept_inplace,
368 name=name,
369 profile=profile,
370 on_unused_input=on_unused_input,
371 output_keys=output_keys,
372 )
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/compile/function/types.py:1738, in orig_function(inputs, outputs, mode, accept_inplace, name, profile, on_unused_input, output_keys)
1727 m = Maker(
1728 inputs,
1729 outputs,
(...)
1735 name=name,
1736 )
1737 with config.change_flags(compute_test_value="off"):
-> 1738 fn = m.create(defaults)
1739 finally:
1740 t2 = time.time()
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/compile/function/types.py:1633, in FunctionMaker.create(self, input_storage, trustme, storage_map)
1630 start_import_time = aesara.link.c.cmodule.import_time
1632 with config.change_flags(traceback__limit=config.traceback__compile_limit):
-> 1633 _fn, _i, _o = self.linker.make_thunk(
1634 input_storage=input_storage_lists, storage_map=storage_map
1635 )
1637 end_linker = time.time()
1639 linker_time = end_linker - start_linker
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/link/basic.py:254, in LocalLinker.make_thunk(self, input_storage, output_storage, storage_map, **kwargs)
247 def make_thunk(
248 self,
249 input_storage: Optional["InputStorageType"] = None,
(...)
252 **kwargs,
253 ) -> Tuple["BasicThunkType", "InputStorageType", "OutputStorageType"]:
--> 254 return self.make_all(
255 input_storage=input_storage,
256 output_storage=output_storage,
257 storage_map=storage_map,
258 )[:3]
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/link/basic.py:702, in JITLinker.make_all(self, input_storage, output_storage, storage_map)
696 compute_map[k] = [k.owner is None]
698 thunks, nodes, jit_fn = self.create_jitable_thunk(
699 compute_map, nodes, input_storage, output_storage, storage_map
700 )
--> 702 computed, last_user = gc_helper(nodes)
704 if self.allow_gc:
705 post_thunk_old_storage = []
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/link/utils.py:263, in gc_helper(node_list)
261 computed = set()
262 for node in node_list:
--> 263 for input in node.inputs:
264 last_user[input] = node
265 for output in node.outputs:
AttributeError: 'NoneType' object has no attribute 'inputs'
``` | True | Bug in `JITLinker` when first output of inner `FunctionGraph` is an input variable - An error is raised when a function with `mode in ("NUMBA", "JAX")` returns a `SharedVariable` that has updates.
```python
import aesara
import aesara.tensor as at
x = aesara.shared(0, name="x")
f = aesara.function([], [x], updates={x: x+1}, mode="NUMBA")
```
```python
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/compile/function/__init__.py:317, in function(inputs, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input)
311 fn = orig_function(
312 inputs, outputs, mode=mode, accept_inplace=accept_inplace, name=name
313 )
314 else:
315 # note: pfunc will also call orig_function -- orig_function is
316 # a choke point that all compilation must pass through
--> 317 fn = pfunc(
318 params=inputs,
319 outputs=outputs,
320 mode=mode,
321 updates=updates,
322 givens=givens,
323 no_default_updates=no_default_updates,
324 accept_inplace=accept_inplace,
325 name=name,
326 rebuild_strict=rebuild_strict,
327 allow_input_downcast=allow_input_downcast,
328 on_unused_input=on_unused_input,
329 profile=profile,
330 output_keys=output_keys,
331 )
332 return fn
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/compile/function/pfunc.py:363, in pfunc(params, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input, output_keys)
350 profile = ProfileStats(message=profile)
352 inputs, cloned_outputs = construct_pfunc_ins_and_outs(
353 params,
354 outputs,
(...)
360 allow_input_downcast,
361 )
--> 363 return orig_function(
364 inputs,
365 cloned_outputs,
366 mode,
367 accept_inplace=accept_inplace,
368 name=name,
369 profile=profile,
370 on_unused_input=on_unused_input,
371 output_keys=output_keys,
372 )
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/compile/function/types.py:1738, in orig_function(inputs, outputs, mode, accept_inplace, name, profile, on_unused_input, output_keys)
1727 m = Maker(
1728 inputs,
1729 outputs,
(...)
1735 name=name,
1736 )
1737 with config.change_flags(compute_test_value="off"):
-> 1738 fn = m.create(defaults)
1739 finally:
1740 t2 = time.time()
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/compile/function/types.py:1633, in FunctionMaker.create(self, input_storage, trustme, storage_map)
1630 start_import_time = aesara.link.c.cmodule.import_time
1632 with config.change_flags(traceback__limit=config.traceback__compile_limit):
-> 1633 _fn, _i, _o = self.linker.make_thunk(
1634 input_storage=input_storage_lists, storage_map=storage_map
1635 )
1637 end_linker = time.time()
1639 linker_time = end_linker - start_linker
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/link/basic.py:254, in LocalLinker.make_thunk(self, input_storage, output_storage, storage_map, **kwargs)
247 def make_thunk(
248 self,
249 input_storage: Optional["InputStorageType"] = None,
(...)
252 **kwargs,
253 ) -> Tuple["BasicThunkType", "InputStorageType", "OutputStorageType"]:
--> 254 return self.make_all(
255 input_storage=input_storage,
256 output_storage=output_storage,
257 storage_map=storage_map,
258 )[:3]
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/link/basic.py:702, in JITLinker.make_all(self, input_storage, output_storage, storage_map)
696 compute_map[k] = [k.owner is None]
698 thunks, nodes, jit_fn = self.create_jitable_thunk(
699 compute_map, nodes, input_storage, output_storage, storage_map
700 )
--> 702 computed, last_user = gc_helper(nodes)
704 if self.allow_gc:
705 post_thunk_old_storage = []
File ~/miniconda3/envs/aesara/lib/python3.10/site-packages/aesara/link/utils.py:263, in gc_helper(node_list)
261 computed = set()
262 for node in node_list:
--> 263 for input in node.inputs:
264 last_user[input] = node
265 for output in node.outputs:
AttributeError: 'NoneType' object has no attribute 'inputs'
``` | comp | bug in jitlinker when first output of inner functiongraph is an input variable an error is raised when a function with mode in numba jax returns a sharedvariable that has updates python import aesara import aesara tensor as at x aesara shared name x f aesara function updates x x mode numba python file envs aesara lib site packages aesara compile function init py in function inputs outputs mode updates givens no default updates accept inplace name rebuild strict allow input downcast profile on unused input fn orig function inputs outputs mode mode accept inplace accept inplace name name else note pfunc will also call orig function orig function is a choke point that all compilation must pass through fn pfunc params inputs outputs outputs mode mode updates updates givens givens no default updates no default updates accept inplace accept inplace name name rebuild strict rebuild strict allow input downcast allow input downcast on unused input on unused input profile profile output keys output keys return fn file envs aesara lib site packages aesara compile function pfunc py in pfunc params outputs mode updates givens no default updates accept inplace name rebuild strict allow input downcast profile on unused input output keys profile profilestats message profile inputs cloned outputs construct pfunc ins and outs params outputs allow input downcast return orig function inputs cloned outputs mode accept inplace accept inplace name name profile profile on unused input on unused input output keys output keys file envs aesara lib site packages aesara compile function types py in orig function inputs outputs mode accept inplace name profile on unused input output keys m maker inputs outputs name name with config change flags compute test value off fn m create defaults finally time time file envs aesara lib site packages aesara compile function types py in functionmaker create self input storage trustme storage map start import time aesara link c cmodule import 
time with config change flags traceback limit config traceback compile limit fn i o self linker make thunk input storage input storage lists storage map storage map end linker time time linker time end linker start linker file envs aesara lib site packages aesara link basic py in locallinker make thunk self input storage output storage storage map kwargs def make thunk self input storage optional none kwargs tuple return self make all input storage input storage output storage output storage storage map storage map file envs aesara lib site packages aesara link basic py in jitlinker make all self input storage output storage storage map compute map thunks nodes jit fn self create jitable thunk compute map nodes input storage output storage storage map computed last user gc helper nodes if self allow gc post thunk old storage file envs aesara lib site packages aesara link utils py in gc helper node list computed set for node in node list for input in node inputs last user node for output in node outputs attributeerror nonetype object has no attribute inputs | 1 |
1,301 | 3,799,842,522 | IssuesEvent | 2016-03-23 17:08:48 | zalando/dress-code | https://api.github.com/repos/zalando/dress-code | closed | docs: add supported browser section | compatibilty(browser) docs on review | Proposal:
IE 10+
Safari latest 2 stable versions
Firefox latest 2 stable versions
Chrome | True | docs: add supported browser section - Proposal:
IE 10+
Safari latest 2 stable versions
Firefox latest 2 stable versions
Chrome | comp | docs add supported browser section proposal ie safari latest stable versions firefox latest stable versions chrome | 1 |
727,314 | 25,031,138,674 | IssuesEvent | 2022-11-04 12:29:33 | wso2/api-manager | https://api.github.com/repos/wso2/api-manager | closed | Incompatible Parameters Error in data service call mediator | Type/Bug Priority/Normal U2/WUM Affected/MI-4.1.0 | ### Description
When there are multiple data service call mediators inside an iterate mediator flow and the iteration count is high, incorrect query params are passed to the data service operations.
<img width="1430" alt="Screenshot 2022-10-17 at 17 43 42" src="https://user-images.githubusercontent.com/45654571/196868964-8f522740-9d2c-4bb8-b220-a12d4841f1d9.png">
### Steps to Reproduce
1. Create a mysql database and tables using the “Script.sql” file
2. Copy the “mysql-connector-java-8.0.28.jar” file into the “<MI_HOME>/lib” directory
3. Copy the “RESTDataService-1.0.0.dbs” file to the “<MI_HOME>/repository/deployment/server/dataservices” directory (Please change the database URL, Username and Password accordingly)
4. Copy the “HelloworldCompositeExporter_1.0.0-SNAPSHOT.car” file to the “<MI_HOME>/repository/deployment/server/carbonapps” directory
5. Invoke the deployed API via the provided postman collection (MI.postman_collection.json)
We can observe the above error in the carbon log after several iterations.
Link to the artifacts:
https://drive.google.com/file/d/1DAKMhaPIh8yEg4DYaVM7El402MeI3PMa/view?usp=sharing
### Affected Component
MI
### Version
4.1.0.3
### Environment Details (with versions)
_No response_
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_ | 1.0 | Incompatible Parameters Error in data service call mediator - ### Description
When there are multiple data service call mediators inside an iterate mediator flow and the iteration count is high, incorrect query params are passed to the data service operations.
<img width="1430" alt="Screenshot 2022-10-17 at 17 43 42" src="https://user-images.githubusercontent.com/45654571/196868964-8f522740-9d2c-4bb8-b220-a12d4841f1d9.png">
### Steps to Reproduce
1. Create a mysql database and tables using the “Script.sql” file
2. Copy the “mysql-connector-java-8.0.28.jar” file into the “<MI_HOME>/lib” directory
3. Copy the “RESTDataService-1.0.0.dbs” file to the “<MI_HOME>/repository/deployment/server/dataservices” directory (Please change the database URL, Username and Password accordingly)
4. Copy the “HelloworldCompositeExporter_1.0.0-SNAPSHOT.car” file to the “<MI_HOME>/repository/deployment/server/carbonapps” directory
5. Invoke the deployed API via the provided postman collection (MI.postman_collection.json)
We can observe the above error in the carbon log after several iterations.
Link to the artifacts:
https://drive.google.com/file/d/1DAKMhaPIh8yEg4DYaVM7El402MeI3PMa/view?usp=sharing
### Affected Component
MI
### Version
4.1.0.3
### Environment Details (with versions)
_No response_
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_ | non_comp | incompatible parameters error in data service call mediator description when there are multiple data service call mediators inside a iterate mediator flow and the iterate count is high incorrect query params are passed to the data service operations img width alt screenshot at src steps to reproduce create a mysql database and tables using the “script sql” file copy the “mysql connector java jar” file into the “ lib” directory copy the “restdataservice dbs” file to the “ repository deployment server dataservices” directory please change the database url username and password accordingly copy the “helloworldcompositeexporter snapshot car” file to the “ repository deployment server carbonapps” directory invoke the deployed api via the provided postman collection mi postman collection json we can observe the above error in the carbon log after several iterations link to the artifacts affected component mi version environment details with versions no response relevant log output no response related issues no response suggested labels no response | 0 |
10,065 | 12,061,345,034 | IssuesEvent | 2020-04-15 23:30:16 | dreamer/boxtron | https://api.github.com/repos/dreamer/boxtron | closed | 302334 - Tex Murphy: Mean Streets Linux | compatibility report | ## Compatibility Report
- Link to Steam store: https://store.steampowered.com/app/302330/Tex_Murphy_Mean_Streets/
- Link to GOG store: <!-- only for GOG games -->
- Link to ProtonDB: <!-- OPTIONAL, e.g. https://www.protondb.com/app/2280 -->
### System Information
- Linux distribution: Ubuntu 18.04 LTS (LinuxMint 19.3)
- Kernel: 4.15.0-88-generic
- Desktop: Mate 1.24
- DOSBox: 0.74-5-2 SVN
### Report
- The game runs out of the box: Yes
### Configuration changes
<!-- Describe additional changes to dosbox .conf files -->
<!-- e.g. "Changed `cpu.cycles` to `1337` -->
### Report
<!-- Describe any problems that you have or just that game runs perfectly ;) -->
Boxtron
| True | 302334 - Tex Murphy: Mean Streets Linux - ## Compatibility Report
- Link to Steam store: https://store.steampowered.com/app/302330/Tex_Murphy_Mean_Streets/
- Link to GOG store: <!-- only for GOG games -->
- Link to ProtonDB: <!-- OPTIONAL, e.g. https://www.protondb.com/app/2280 -->
### System Information
- Linux distribution: Ubuntu 18.04 LTS (LinuxMint 19.3)
- Kernel: 4.15.0-88-generic
- Desktop: Mate 1.24
- DOSBox: 0.74-5-2 SVN
### Report
- The game runs out of the box: Yes
### Configuration changes
<!-- Describe additional changes to dosbox .conf files -->
<!-- e.g. "Changed `cpu.cycles` to `1337` -->
### Report
<!-- Describe any problems that you have or just that game runs perfectly ;) -->
Boxtron
| comp | tex murphy mean streets linux compatibility report link to steam store link to gog store link to protondb system information linux distribution ubuntu lts linuxmint kernel generic desktop mate dosbox svn report the game runs out of the box yes configuration changes report boxtron | 1 |
18,471 | 25,546,827,788 | IssuesEvent | 2022-11-29 19:34:28 | dotnet/docs | https://api.github.com/repos/dotnet/docs | closed | [Breaking change]: BrotliStream ctor no longer allows values not defined in CompressionLevel enum | doc-idea breaking-change Pri1 binary incompatible :checkered_flag: Release: .NET 7 in-pr | ### Description
BrotliStream ctor no longer allows values not defined in CompressionLevel enum, an ArgumentException is thrown if that's the case.
### Version
.NET 7
### Previous behavior
BrotliStream allowed to pass in the level to the ctor by casting the desired level directly to CompressionLevel, like this:
```cs
BrotliStream brotli = new BrotliStream(baseStream,
CompressionMode.Compress,
leaveOpen,
bufferSize,
(CompressionLevel)5); // Use level 5
```
### New behavior
BrotliStream only allows the values defined in CompressionLevel; passing an undefined value will result in an `ArgumentException`.
### Type of breaking change
- [X] **Binary incompatible**: Existing binaries may encounter a breaking change in behavior, such as failure to load/execute or different run-time behavior.
- [ ] **Source incompatible**: Source code may encounter a breaking change in behavior when targeting the new runtime/component/SDK, such as compile errors or different run-time behavior.
### Reason for change
The purpose of the CompressionLevel enumeration is to let folks use compression algorithms without needing to understand the meaning of their tuning parameters.
If an arbitrary level was provided, that was passed through as-is to the underlying library, resulting in inconsistent and potentially unexpected behavior. With this change the behavior is aligned with other compression streams e.g: `DeflateStream`.
With the tuning of the CompressionLevel values made in https://github.com/dotnet/runtime/pull/72266 and with the addition of `CompressionLevel.SmallestSize` in https://github.com/dotnet/runtime/pull/41960, it is now possible to have a variety of trade-offs in the compression algorithms, and users can keep relying on CompressionLevel as an abstraction of that trade-off.
### Recommended action
If you were relying on passing undefined values as the CompressionLevel, be advised that you need to revisit your use case and decide which [documented value](https://learn.microsoft.com/dotnet/api/system.io.compression.compressionlevel?view=net-7.0) is most appropriate for it.
### Feature area
Core .NET libraries
### Affected APIs
```cs
public BrotliStream (System.IO.Stream stream, System.IO.Compression.CompressionLevel compressionLevel);
public BrotliStream (System.IO.Stream stream, System.IO.Compression.CompressionMode mode);
public BrotliStream (System.IO.Stream stream, System.IO.Compression.CompressionLevel compressionLevel, bool leaveOpen);
public BrotliStream (System.IO.Stream stream, System.IO.Compression.CompressionMode mode, bool leaveOpen);
``` | True | [Breaking change]: BrotliStream ctor no longer allows values not defined in CompressionLevel enum - ### Description
BrotliStream ctor no longer allows values not defined in CompressionLevel enum, an ArgumentException is thrown if that's the case.
### Version
.NET 7
### Previous behavior
BrotliStream allowed to pass in the level to the ctor by casting the desired level directly to CompressionLevel, like this:
```cs
BrotliStream brotli = new BrotliStream(baseStream,
CompressionMode.Compress,
leaveOpen,
bufferSize,
(CompressionLevel)5); // Use level 5
```
### New behavior
BrotliStream only allows the values defined in CompressionLevel; passing an undefined value will result in an `ArgumentException`.
### Type of breaking change
- [X] **Binary incompatible**: Existing binaries may encounter a breaking change in behavior, such as failure to load/execute or different run-time behavior.
- [ ] **Source incompatible**: Source code may encounter a breaking change in behavior when targeting the new runtime/component/SDK, such as compile errors or different run-time behavior.
### Reason for change
The purpose of the CompressionLevel enumeration is to let folks use compression algorithms without needing to understand the meaning of their tuning parameters.
If an arbitrary level was provided, that was passed through as-is to the underlying library, resulting in inconsistent and potentially unexpected behavior. With this change the behavior is aligned with other compression streams e.g: `DeflateStream`.
With the tuning of the CompressionLevel values made in https://github.com/dotnet/runtime/pull/72266 and with the addition of `CompressionLevel.SmallestSize` in https://github.com/dotnet/runtime/pull/41960, it is now possible to have a variety of trade-offs in the compression algorithms, and users can keep relying on CompressionLevel as an abstraction of that trade-off.
### Recommended action
If you were relying on passing undefined values as the CompressionLevel, be advised that you need to revisit your use case and decide which [documented value](https://learn.microsoft.com/dotnet/api/system.io.compression.compressionlevel?view=net-7.0) is most appropriate for it.
### Feature area
Core .NET libraries
### Affected APIs
```cs
public BrotliStream (System.IO.Stream stream, System.IO.Compression.CompressionLevel compressionLevel);
public BrotliStream (System.IO.Stream stream, System.IO.Compression.CompressionMode mode);
public BrotliStream (System.IO.Stream stream, System.IO.Compression.CompressionLevel compressionLevel, bool leaveOpen);
public BrotliStream (System.IO.Stream stream, System.IO.Compression.CompressionMode mode, bool leaveOpen);
``` | comp | brotlistream ctor no longer allows values not defined in compressionlevel enum description brotlistream ctor no longer allows values not defined in compressionlevel enum an argumentexception is thrown if that s the case version net previous behavior brotlistream allowed to pass in the level to the ctor by casting the desired level directly to compressionlevel like this cs brotlistream brotli new brotlistream basestream compressionmode compress leaveopen buffersize compressionlevel use level new behavior brotlistream only allows the values defined in compressionlevel passing an undefined defined will result in an argumentexception type of breaking change binary incompatible existing binaries may encounter a breaking change in behavior such as failure to load execute or different run time behavior source incompatible source code may encounter a breaking change in behavior when targeting the new runtime component sdk such as compile errors or different run time behavior reason for change the purpose of the compressionlevel enumeration is to let folks use compression algorithms without needing to understand the meaning of their tuning parameters if an arbitrary level was provided that was passed through as is to the underlying library resulting in inconsistent and potentially unexpected behavior with this change the behavior is aligned with other compression streams e g deflatestream with the tuning of the compressionlevel values made in and with the addition of compressionlevel smallestsize it is now possible to have a variety of trade offs in the compression algorithms and users can keep relying in compressionlevel as being an abstraction of such trade off recommended action if you were relying in passing undefined values as the compressionlevel be advised that you need to re visit your use case and decide which is the most optimal for it feature area core net libraries affected apis cs public brotlistream system io stream stream system io compression 
compressionlevel compressionlevel public brotlistream system io stream stream system io compression compressionmode mode public brotlistream system io stream stream system io compression compressionlevel compressionlevel bool leaveopen public brotlistream system io stream stream system io compression compressionmode mode bool leaveopen | 1 |
13,184 | 15,531,771,461 | IssuesEvent | 2021-03-14 01:32:04 | blay09/CookingForBlockheads | https://api.github.com/repos/blay09/CookingForBlockheads | closed | Add inspirations:book item tag to Cooking for Blockheads Original Edition | compatibility fixed in next version | Inspirations has a feature where any item named 'book' or tagged as such can fit in its shelves.
The CfB books I and II (internally named Recipe Book and Crafting Book) can then be placed on those shelves, but the CFB black edition doesn't. This is my humble request to add at least an item tag to allow it to be found.
Inspirations specifically uses the `inspirations:book` tag for the detection, but they also recommended `forge:books`. | True | Add inspirations:book item tag to Cooking for Blockheads Original Edition - Inspirations has a feature where any item named 'book' or tagged as such can fit in its shelves.
The CfB books I and II (internally named Recipe Book and Crafting Book) can then be placed on those shelves, but the CFB black edition doesn't. This is my humble request to add at least an item tag to allow it to be found.
Inspirations specifically uses the `inspirations:book` tag for the detection, but they also recommended `forge:books`. | comp | add inspirations book item tag to cooking for blockheads original edition inspirations has a feature where any item named book or tagged as such can fit in its shelves the cfb books i and ii internally named recipe book and crafting book then can be placed on those shelves but cfb black edition doesnt this is my humble request to add at least an item tag to allow it to be found inspirations specifically uses the inspirations book tag for the detection but they also recommended forge books | 1 |
18,159 | 25,084,650,994 | IssuesEvent | 2022-11-07 22:28:29 | Shirajuki/anki-redesign | https://api.github.com/repos/Shirajuki/anki-redesign | closed | NDFS compatibility configuration problem | bug addon compatibility hopefully fixed | Hi,
Thanks for making and maintaining Anki Redesign! It greatly improves the experience of using Anki ^_^
I see there is a new fix for NDFS compatibility. If I check the relevant checkbox on the Anki Redesign configuration window and click *Save*, it returns to an unchecked state when I revisit the configuration window, and doesn’t have any effect on the interface (even after restarting Anki).
It does seem to work if I bypass the configuration by changing
https://github.com/Shirajuki/anki-redesign/blob/68483dbb01f794e8245ce8e2ddf823a0109fc007/config.py#L8
into
```python
config['addon_no_distractions_full_screen'] = True# if config.get('addon_no_distractions_full_screen', "false").lower() == "true" else False
``` | True | NDFS compatibility configuration problem - Hi,
Thanks for making and maintaining Anki Redesign! It greatly improves the experience of using Anki ^_^
I see there is a new fix for NDFS compatibility. If I check the relevant checkbox on the Anki Redesign configuration window and click *Save*, it returns to an unchecked state when I revisit the configuration window, and doesn’t have any effect on the interface (even after restarting Anki).
It does seem to work if I bypass the configuration by changing
https://github.com/Shirajuki/anki-redesign/blob/68483dbb01f794e8245ce8e2ddf823a0109fc007/config.py#L8
into
```python
config['addon_no_distractions_full_screen'] = True# if config.get('addon_no_distractions_full_screen', "false").lower() == "true" else False
``` | comp | ndfs compatibility configuration problem hi thanks for making and maintaining anki redesign it greatly improves the experience of using anki i see there is a new fix for ndfs compatibility if i check the relevant checkbox on the anki redesign configuration window and click save it returns to an unchecked state when i revisit the configuration window and doesn’t have any effect on the interface even after restarting anki it does seem to work if i bypass the configuration by changing into python config true if config get addon no distractions full screen false lower true else false | 1 |
17,162 | 23,678,144,357 | IssuesEvent | 2022-08-28 11:59:50 | 3arthqu4ke/3arthh4ck | https://api.github.com/repos/3arthqu4ke/3arthh4ck | closed | [BUG] cosmos rotation compatibility | bug compatibility | **Describe the bug**
A clear and concise description of what the bug is.
Phobos conflicting with cosmos ca
**To Reproduce**
Steps to reproduce the behavior:
1. use phobos and turn on cosmos ca with rotate on lol
2. it no work :o
**Expected behavior**
A clear and concise description of what you expected to happen. If your bug is a crash or something you don't need to fill this in.
Rotations conflicting just like with future, Add cosmos support to "**compablity**" module
**Crashlog**
If you crashed and you don't provide a crashlog I will ignore you!
**Additional context**
Add any other context about the problem here.
Fix plz :pray:
**Checklist**
- [x] I know how to properly use check boxes.
- [ ] I have included logs, screenshots, exceptions and / or steps to reproduce the issue.
- [x] I followed the issue template.
| True | [BUG] cosmos rotation compatibility - **Describe the bug**
A clear and concise description of what the bug is.
Phobos conflicting with cosmos ca
**To Reproduce**
Steps to reproduce the behavior:
1. use phobos and turn on cosmos ca with rotate on lol
2. it no work :o
**Expected behavior**
A clear and concise description of what you expected to happen. If your bug is a crash or something you don't need to fill this in.
Rotations conflicting just like with future, Add cosmos support to "**compablity**" module
**Crashlog**
If you crashed and you don't provide a crashlog I will ignore you!
**Additional context**
Add any other context about the problem here.
Fix plz :pray:
**Checklist**
- [x] I know how to properly use check boxes.
- [ ] I have included logs, screenshots, exceptions and / or steps to reproduce the issue.
- [x] I followed the issue template.
| comp | cosmos rotation compatibility describe the bug a clear and concise description of what the bug is phobos conflicting with cosmos ca to reproduce steps to reproduce the behavior use phobos and turn on cosmos ca with rotate on lol it no work o expected behavior a clear and concise description of what you expected to happen if your bug is a crash or something you don t need to fill this in rotations conflicting just like with future add cosmos support to compablity module crashlog if you crashed and you don t provide a crashlog i will ignore you additional context add any other context about the problem here fix plz pray checklist i know how to properly use check boxes i have included logs screenshots exceptions and or steps to reproduce the issue i followed the issue template | 1 |
100,932 | 11,208,321,035 | IssuesEvent | 2020-01-06 07:27:06 | milvus-io/docs | https://api.github.com/repos/milvus-io/docs | closed | [Suggestion] Change Prometheus config file locations to version-specific URLs | documentation | > Note: This repository is ONLY used to solve issues related to DOCS.
> For other issues, please move to [other repositories](https://github.com/milvus-io/).
**Is there anything that's missing or inappropriate in the docs? Please describe.**
https://www.milvus.io/docs/v0.6.0/guides/monitor.md
**Describe your suggestion**
Change Prometheus config file locations to version-specific URLs.
| 1.0 | [Suggestion] Change Prometheus config file locations to version-specific URLs - > Note: This repository is ONLY used to solve issues related to DOCS.
> For other issues, please move to [other repositories](https://github.com/milvus-io/).
**Is there anything that's missing or inappropriate in the docs? Please describe.**
https://www.milvus.io/docs/v0.6.0/guides/monitor.md
**Describe your suggestion**
Change Prometheus config file locations to version-specific URLs.
| non_comp | change prometheus config file locations to version specific urls note this repository is only used to solve issues related to docs for other issues please move to is there anything that s missing or inappropriate in the docs please describe describe your suggestion change prometheus config file locations to version specific urls | 0 |
1,339 | 3,869,918,998 | IssuesEvent | 2016-04-10 21:41:20 | MJRLegends/Space-Astronomy-Feedback- | https://api.github.com/repos/MJRLegends/Space-Astronomy-Feedback- | closed | "entity.soul.name soul aspect" drops from cows | being looked at compatibility issue mod bug | I'm not familiar with the Practicalities mod so I'm not sure how this is dropping. | True | "entity.soul.name soul aspect" drops from cows - I'm not familiar with the Practicalities mod so I'm not sure how this is dropping. | comp | entity soul name soul aspect drops from cows i m not familiar with the practicalities mod so i m not sure how this is dropping | 1 |
16,561 | 22,574,458,714 | IssuesEvent | 2022-06-28 05:42:08 | CleanroomMC/Fluidlogged-API | https://api.github.com/repos/CleanroomMC/Fluidlogged-API | closed | Crash with Cubic Chunks | bug incompatibility fixed in dev | The game hangs then crashes if a world is loaded while cubic chunks and fluidloggedapi are both loaded.
| True | Crash with Cubic Chunks - The game hangs then crashes if a world is loaded while cubic chunks and fluidloggedapi are both loaded.
| comp | crash with cubic chunks the game hangs then crashes if a world is loaded while cubic chunks and fluidloggedapi are both loaded | 1 |
651,778 | 21,510,103,347 | IssuesEvent | 2022-04-28 02:52:03 | JACorley1/UWG-SE2-Spring22-Team1 | https://api.github.com/repos/JACorley1/UWG-SE2-Spring22-Team1 | closed | The client removes, then adds a new habit when updating a habit. | low-priority | There is functionality on the server to update a habit without having to remove it. However, the `ServerServerCommunicator` doesn't have a method for sending that request.
As it stands, this method works well enough, so this issue should be considered low priority. | 1.0 | The client removes, then adds a new habit when updating a habit. - There is functionality on the server to update a habit without having to remove it. However, the `ServerServerCommunicator` doesn't have a method for sending that request.
As it stands, this method works well enough, so this issue should be considered low priority. | non_comp | the client removes then adds a new habit when updating a habit there is functionality on the server to update a habit without having to remove it however the serverservercommunicator doesn t have a method for sending that request as it stands this method works well enough so this issue should be considered low priority | 0 |
52,253 | 13,211,412,652 | IssuesEvent | 2020-08-15 22:57:54 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | TimeShifted Interaction height (Trac #1926) | Incomplete Migration Migrated from Trac combo simulation defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1926">https://code.icecube.wisc.edu/projects/icecube/ticket/1926</a>, reported by juancarlosand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2017-01-12T18:29:02",
"_ts": "1484245742556670",
"description": "From Dennis Soldin:\n\nThe problem seems to be in line 185 of I3TimeShifter.cxx (see http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/trigger-sim/releases/V07-06-06/private/trigger-sim/modules/I3TimeShifter.cxx#L185) which is called in icecube.simprod.detectors.IC86, in tray 4 of the scripts you sent me. I think the problem is that I3TimeShifter assumes by default every I3Double being a time-like value. Therefore it is shifted. This might be a potential problem for any not time-like I3Double stored to the frame that people forgot about... Just in case any other strange values appear in the future (or past?)...",
"reporter": "juancarlos",
"cc": "dsoldin",
"resolution": "fixed",
"time": "2016-12-20T16:03:49",
"component": "combo simulation",
"summary": "TimeShifted Interaction height",
"priority": "normal",
"keywords": "simprod-scripts",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| 1.0 | TimeShifted Interaction height (Trac #1926) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1926">https://code.icecube.wisc.edu/projects/icecube/ticket/1926</a>, reported by juancarlosand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2017-01-12T18:29:02",
"_ts": "1484245742556670",
"description": "From Dennis Soldin:\n\nThe problem seems to be in line 185 of I3TimeShifter.cxx (see http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/trigger-sim/releases/V07-06-06/private/trigger-sim/modules/I3TimeShifter.cxx#L185) which is called in icecube.simprod.detectors.IC86, in tray 4 of the scripts you sent me. I think the problem is that I3TimeShifter assumes by default every I3Double being a time-like value. Therefore it is shifted. This might be a potential problem for any not time-like I3Double stored to the frame that people forgot about... Just in case any other strange values appear in the future (or past?)...",
"reporter": "juancarlos",
"cc": "dsoldin",
"resolution": "fixed",
"time": "2016-12-20T16:03:49",
"component": "combo simulation",
"summary": "TimeShifted Interaction height",
"priority": "normal",
"keywords": "simprod-scripts",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| non_comp | timeshifted interaction height trac migrated from json status closed changetime ts description from dennis soldin n nthe problem seems to be in line of cxx see which is called in icecube simprod detectors in tray of the scripts you sent me i think the problem is that assumes by default every being a time like value therefore it is shifted this might be a potential problem for any not time like stored to the frame that people forgot about just in case any other strange values appear in the future or past reporter juancarlos cc dsoldin resolution fixed time component combo simulation summary timeshifted interaction height priority normal keywords simprod scripts milestone owner olivas type defect | 0 |
67,028 | 7,033,147,832 | IssuesEvent | 2017-12-27 09:09:26 | vedmack/yadcf | https://api.github.com/repos/vedmack/yadcf | closed | Pagination state is not being saved when using yadcf with language option | waiting for response / test page link | I appreciate your awesome works.
I am working on DT 1.10.16 with yadcf
and everything works great other than using language option..
stateSave: true,
language: {
url: '/static/datatables/datatable-korean.json',
},
If i remove url, pagination state is being saved... otherwise it does not work
Thanks | 1.0 | Pagination state is not being saved when using yadcf with language option - I appreciate your awesome works.
I am working on DT 1.10.16 with yadcf
and everything works great other than using language option..
stateSave: true,
language: {
url: '/static/datatables/datatable-korean.json',
},
If i remove url, pagination state is being saved... otherwise it does not work
Thanks | non_comp | pagination state is not being saved when using yadcf with language option i appreciate your awesome works i am working on dt with yadcf and everything works great other than using language option statesave true language url static datatables datatable korean json if i remove url pagination state is being saved otherwise it does not work thanks | 0 |
18,825 | 26,177,997,953 | IssuesEvent | 2023-01-02 12:13:49 | spring-projects-experimental/spring-native | https://api.github.com/repos/spring-projects-experimental/spring-native | closed | Add support for spring-cloud-starter-stream-rabbit | type: compatibility for: external-project | Hello, I'm trying to migrate a spring microsservice to spring native, but some things are going wrong.
I'm using org.springframework.cloud:spring-cloud-starter-bus-amqp dependency with rabbitMQ, and
when my application is starting, I get this error:
```
***************************
APPLICATION FAILED TO START
***************************
Description:
Native reflection configuration for org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration is missing.
```
then I created the follow configuration:
````
@TypeHint(
typeNames = [
"org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration",
"org.springframework.cloud.stream.binder.rabbit.config.RabbitHealthIndicatorConfiguration",
"org.springframework.cloud.stream.binder.rabbit.config.NoCloudProfile",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile\$CloudConnectors",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile\$CloudConnectors\$UseCloudConnectors",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile\$CloudConnectors\$OverrideCloudConnectors",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile\$NoCloudConnectors",
],
access = AccessBits.ALL
)
@Configuration
class RabbitServiceAutoConfigurationNativeConfiguration
````
but, the build failed with the message:
```
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':generateAot'.
> ERROR: in 'org.springframework.cloud.stream.binder.rabbit.config.RabbitMessageChannelBinderConfiguration' these methods are directly invoking methods marked @Bean: [rabbitMessageChannelBinder, rabbitMessageChannelBinder, rabbitMessageChannelBinder] - due to the enforced proxyBeanMethods=false for components in a native-image, please consider refactoring to use instance injection. If you are confident this is not going to affect your application, you may turn this check off using -Dspring.native.verify=false.
```
then I placed the task:
````
springAot{
verify.set(false)
}
````
but now Spring failed to instantiate RabbitServiceAutoConfiguration, which is an abstract class
````
main] o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'rabbitServiceAutoConfiguration': Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration]: Is it an abstract class?; nested exception is java.lang.InstantiationException: Cannot instantiate class org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration
````
Am I missing something? Is Spring Native compatible with spring-cloud-starter-bus-amqp? I'm stuck at this stage and I appreciate any help.
+--- org.springframework.cloud:spring-cloud-starter-bus-amqp -> 3.0.3
| +--- org.springframework.cloud:spring-cloud-starter-stream-rabbit:3.1.3
| | \--- org.springframework.cloud:spring-cloud-stream-binder-rabbit:3.1.3
| | | \--- org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration.class
Spring Version: "2.5.2"
io.spring.dependency-management: "1.0.11.RELEASE"
experimental.aot: "0.10.2"
buildtools.native: "0.9.1"
kotlin-version: "1.5.21" | True | Add support for spring-cloud-starter-stream-rabbit - Hello, I'm trying to migrate a spring microsservice to spring native, but some things are going wrong.
I'm using org.springframework.cloud:spring-cloud-starter-bus-amqp dependency with rabbitMQ, and
when my application is starting, I get this error:
```
***************************
APPLICATION FAILED TO START
***************************
Description:
Native reflection configuration for org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration is missing.
```
then I created the follow configuration:
````
@TypeHint(
typeNames = [
"org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration",
"org.springframework.cloud.stream.binder.rabbit.config.RabbitHealthIndicatorConfiguration",
"org.springframework.cloud.stream.binder.rabbit.config.NoCloudProfile",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile\$CloudConnectors",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile\$CloudConnectors\$UseCloudConnectors",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile\$CloudConnectors\$OverrideCloudConnectors",
"org.springframework.cloud.stream.binder.rabbit.config.CloudProfile\$NoCloudConnectors",
],
access = AccessBits.ALL
)
@Configuration
class RabbitServiceAutoConfigurationNativeConfiguration
````
but, the build failed with the message:
```
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':generateAot'.
> ERROR: in 'org.springframework.cloud.stream.binder.rabbit.config.RabbitMessageChannelBinderConfiguration' these methods are directly invoking methods marked @Bean: [rabbitMessageChannelBinder, rabbitMessageChannelBinder, rabbitMessageChannelBinder] - due to the enforced proxyBeanMethods=false for components in a native-image, please consider refactoring to use instance injection. If you are confident this is not going to affect your application, you may turn this check off using -Dspring.native.verify=false.
```
then I placed the task:
````
springAot{
verify.set(false)
}
````
but now Spring failed to instantiate RabbitServiceAutoConfiguration, which is an abstract class
````
main] o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'rabbitServiceAutoConfiguration': Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration]: Is it an abstract class?; nested exception is java.lang.InstantiationException: Cannot instantiate class org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration
````
Am I missing something? Is Spring Native compatible with spring-cloud-starter-bus-amqp? I'm stuck at this stage and I appreciate any help.
+--- org.springframework.cloud:spring-cloud-starter-bus-amqp -> 3.0.3
| +--- org.springframework.cloud:spring-cloud-starter-stream-rabbit:3.1.3
| | \--- org.springframework.cloud:spring-cloud-stream-binder-rabbit:3.1.3
| | | \--- org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration.class
Spring Version: "2.5.2"
io.spring.dependency-management: "1.0.11.RELEASE"
experimental.aot: "0.10.2"
buildtools.native: "0.9.1"
kotlin-version: "1.5.21" | comp | add support for spring cloud starter stream rabbit hello i m trying to migrate a spring microsservice to spring native but some things are going wrong i m using org springframework cloud spring cloud starter bus amqp dependency with rabbitmq and when my application is starting i get this error application failed to start description native reflection configuration for org springframework cloud stream binder rabbit config rabbitserviceautoconfiguration is missing then i created the follow configuration typehint typenames org springframework cloud stream binder rabbit config rabbitserviceautoconfiguration org springframework cloud stream binder rabbit config rabbithealthindicatorconfiguration org springframework cloud stream binder rabbit config nocloudprofile org springframework cloud stream binder rabbit config cloudprofile org springframework cloud stream binder rabbit config cloudprofile cloudconnectors org springframework cloud stream binder rabbit config cloudprofile cloudconnectors usecloudconnectors org springframework cloud stream binder rabbit config cloudprofile cloudconnectors overridecloudconnectors org springframework cloud stream binder rabbit config cloudprofile nocloudconnectors access accessbits all configuration class rabbitserviceautoconfigurationnativeconfiguration but the build failed with the message failure build failed with an exception what went wrong execution failed for task generateaot error in org springframework cloud stream binder rabbit config rabbitmessagechannelbinderconfiguration these methods are directly invoking methods marked bean due to the enforced proxybeanmethods false for components in a native image please consider refactoring to use instance injection if you are confident this is not going to affect your application you may turn this check off using dspring native verify false then i placed the taks springaot verify set false but now spring failed to instantiate 
rabbitserviceautoconfiguration which is an abstract class main o s boot springapplication application run failed org springframework context applicationcontextexception failed to start bean inputbindinglifecycle nested exception is org springframework beans factory beancreationexception error creating bean with name rabbitserviceautoconfiguration instantiation of bean failed nested exception is org springframework beans beaninstantiationexception failed to instantiate is it an abstract class nested exception is java lang instantiationexception cannot instantiate class org springframework cloud stream binder rabbit config rabbitserviceautoconfiguration am i missing something is spring native compatible with spring cloud starter bus amqp i m stuck at this stage and i appreciate any help org springframework cloud spring cloud starter bus amqp org springframework cloud spring cloud starter stream rabbit org springframework cloud spring cloud stream binder rabbit org springframework cloud stream binder rabbit config rabbitserviceautoconfiguration class spring version io spring dependency management release experimental aot buildtools native kotlin version | 1 |
5,444 | 7,904,337,246 | IssuesEvent | 2018-07-02 03:46:25 | asamuzaK/sidebarTabs | https://api.github.com/repos/asamuzaK/sidebarTabs | opened | Consider the "active tab" as part of the multi-selection by default | compatibility | See [1468443 - Consider the "active tab" as part of the multi-selection by default](https://bugzilla.mozilla.org/show_bug.cgi?id=1468443 "1468443 - Consider the "active tab" as part of the multi-selection by default")
Related #33 | True | Consider the "active tab" as part of the multi-selection by default - See [1468443 - Consider the "active tab" as part of the multi-selection by default](https://bugzilla.mozilla.org/show_bug.cgi?id=1468443 "1468443 - Consider the "active tab" as part of the multi-selection by default")
Related #33 | comp | consider the active tab as part of the multi selection by default see consider the quot active tab quot as part of the multi selection by default related | 1 |
284 | 2,725,585,509 | IssuesEvent | 2015-04-15 01:49:10 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | MethodBase.GetCurrentMethod().DeclaringType returns wrong type when used in Lambda | Area-Compilers Compatibility | The following code returns ConsoleApplication6.Program in VS 2013 and ConsoleApplication6.Program+<>c__DisplayClass0 in VS 2015 CTP
```
class Program
{
static Lazy<string> typeName = new Lazy<string>(() => MethodBase.GetCurrentMethod().DeclaringType.FullName);
static void Main(string[] args)
{
Console.WriteLine(typeName.Value);
}
}
``` | True | MethodBase.GetCurrentMethod().DeclaringType returns wrong type when used in Lambda - The following code returns ConsoleApplication6.Program in VS 2013 and ConsoleApplication6.Program+<>c__DisplayClass0 in VS 2015 CTP
```
class Program
{
static Lazy<string> typeName = new Lazy<string>(() => MethodBase.GetCurrentMethod().DeclaringType.FullName);
static void Main(string[] args)
{
Console.WriteLine(typeName.Value);
}
}
``` | comp | methodbase getcurrentmethod declaringtype returns wrong type when used in lambda the following code returns program in vs and program c in vs ctp class program static lazy typename new lazy methodbase getcurrentmethod declaringtype fullname static void main string args console writeline typename value | 1 |
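The Roslyn record above hinges on where the compiler places a lambda: C# lifts it into a synthesized display class (`<>c__DisplayClass0`), so `MethodBase.GetCurrentMethod().DeclaringType` evaluated inside the lambda reports that class instead of `Program`. A loosely analogous effect can be sketched in Python (illustrative only — CPython does not generate display classes): a lambda's `__qualname__` records the synthesized enclosing scope it was compiled into, not the function it appears to "belong" to.

```python
def make():
    # The lambda closes over its own name; its __qualname__ shows the
    # synthesized "<locals>" scope it was defined in, not plain "make".
    f = lambda: f.__qualname__
    return f

print(make()())  # -> 'make.<locals>.<lambda>'
```

As with the C# case, introspection from inside the lambda reflects the compiler's bookkeeping scope rather than the surrounding type's name.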
1,632 | 4,184,431,406 | IssuesEvent | 2016-06-23 07:01:44 | jagregory/cfval | https://api.github.com/repos/jagregory/cfval | opened | AWS::Directory::Service::* resource support | aws-compatibility | - [ ] AWS::DirectoryService::MicrosoftAD
- [ ] AWS::DirectoryService::SimpleAD | True | AWS::Directory::Service::* resource support - - [ ] AWS::DirectoryService::MicrosoftAD
- [ ] AWS::DirectoryService::SimpleAD | comp | aws directory service resource support aws directoryservice microsoftad aws directoryservice simplead | 1 |
12,590 | 14,900,325,838 | IssuesEvent | 2021-01-21 15:17:54 | empla/coredi | https://api.github.com/repos/empla/coredi | closed | Change forking mode | enhancement incompatible | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| True | Change forking mode - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| comp | change forking mode is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here | 1 |
16,765 | 23,112,336,565 | IssuesEvent | 2022-07-27 13:59:32 | woocommerce/woocommerce-google-analytics-integration | https://api.github.com/repos/woocommerce/woocommerce-google-analytics-integration | opened | Custom Order Table compatibility | type: compatibility | ### Describe the issue:
To ensure we remain compatible with custom order tables, we should not access order data through post functions, instead we should use the WC_Order functions.
The order meta field `_ga_tracked` uses `get_post_meta` and `update_post_meta`. This should be switched to `WC_Order->get_meta` and `WC_Order->update_meta`.
There are also various places in the code where compatibility code is in place for accessing direct order properties. The minimum WooCommerce version has [already been bumped to 3.2](https://github.com/woocommerce/woocommerce-google-analytics-integration/commit/69e706ce5da6ada920453a6159480b3ad4054f08). So we can remove any compatibility code for WC < 3.0 when we access any order data. | True | Custom Order Table compatibility - ### Describe the issue:
To ensure we remain compatible with custom order tables, we should not access order data through post functions, instead we should use the WC_Order functions.
The order meta field `_ga_tracked` uses `get_post_meta` and `update_post_meta`. This should be switched to `WC_Order->get_meta` and `WC_Order->update_meta`.
There are also various places in the code where compatibility code is in place for accessing direct order properties. The minimum WooCommerce version has [already been bumped to 3.2](https://github.com/woocommerce/woocommerce-google-analytics-integration/commit/69e706ce5da6ada920453a6159480b3ad4054f08). So we can remove any compatibility code for WC < 3.0 when we access any order data. | comp | custom order table compatibility describe the issue to ensure we remain compatible with custom order tables we should not access order data through post functions instead we should use the wc order functions the order meta field ga tracked uses get post meta and update post meta this should be switched to wc order get meta and wc order update meta there are also various places in the code where compatibility code is in place for accessing direct order properties the minimum woocommerce version has so we can remove any compatibility code for wc when we access any order data | 1 |
210,468 | 7,190,151,462 | IssuesEvent | 2018-02-02 16:19:01 | kubernetes/federation | https://api.github.com/repos/kubernetes/federation | closed | federation: alpha APIs should be disabled by default | area/federation priority/important-soon sig/multicluster | <a href="https://github.com/nikhiljindal"><img src="https://avatars0.githubusercontent.com/u/10199099?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [nikhiljindal](https://github.com/nikhiljindal)**
_Monday Apr 10, 2017 at 19:01 GMT_
_Originally opened as https://github.com/kubernetes/kubernetes/issues/44291_
----
Forked from https://github.com/kubernetes/kubernetes/issues/38593.
Now that we have flags to disable specific APIs (https://github.com/kubernetes/kubernetes/issues/38593), we should use them to disable alpha APIs by default.
cc @kubernetes/sig-federation-bugs
| 1.0 | federation: alpha APIs should be disabled by default - <a href="https://github.com/nikhiljindal"><img src="https://avatars0.githubusercontent.com/u/10199099?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [nikhiljindal](https://github.com/nikhiljindal)**
_Monday Apr 10, 2017 at 19:01 GMT_
_Originally opened as https://github.com/kubernetes/kubernetes/issues/44291_
----
Forked from https://github.com/kubernetes/kubernetes/issues/38593.
Now that we have flags to disable specific APIs (https://github.com/kubernetes/kubernetes/issues/38593), we should use them to disable alpha APIs by default.
cc @kubernetes/sig-federation-bugs
| non_comp | federation alpha apis should be disabled by default issue by monday apr at gmt originally opened as forked from now that we have flags to disable specific apis we should use them to disable alpha apis by default cc kubernetes sig federation bugs | 0 |
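The federation record above asks that alpha APIs default to disabled while per-API flags can still enable them. A minimal sketch of that gating logic (all names are hypothetical — this is not Kubernetes' actual flag machinery):

```python
def resolve_api_enablement(apis, overrides):
    """Return {api_name: enabled}.

    Alpha APIs default to disabled and stable APIs to enabled;
    an explicit per-API override wins in either direction.
    """
    resolved = {}
    for name, meta in apis.items():
        default = not meta.get("alpha", False)  # alpha -> off by default
        resolved[name] = overrides.get(name, default)
    return resolved

apis = {
    "core/v1": {"alpha": False},
    "scheduling/v1alpha1": {"alpha": True},
}
print(resolve_api_enablement(apis, {}))
# -> {'core/v1': True, 'scheduling/v1alpha1': False}
print(resolve_api_enablement(apis, {"scheduling/v1alpha1": True}))
# -> {'core/v1': True, 'scheduling/v1alpha1': True}
```

The point of the design is that forgetting to pass any flag yields the safe default (alpha off), matching the issue's request.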
8,382 | 10,416,941,927 | IssuesEvent | 2019-09-14 17:32:30 | daniele-salvagni/color-goggles | https://api.github.com/repos/daniele-salvagni/color-goggles | closed | Does not seem to be working :/ | compatibility | csgo.exe and the needed dll are there but it does not seem to be changing any saturation or vibrancy | True | Does not seem to be working :/ - csgo.exe and the needed dll are there but it does not seem to be changing any saturation or vibrancy | comp | does not seem to be working csgo exe and the needed dll are there but it does not seem to be changing any saturation or vibrancy | 1 |
14,249 | 17,118,098,752 | IssuesEvent | 2021-07-11 19:32:39 | Sloeber/arduino-eclipse-plugin | https://api.github.com/repos/Sloeber/arduino-eclipse-plugin | closed | Nested variables in platform.txt do not resolve | Hot issue OS: all domain: other hardware importance: board specific importance: improvement request status: Arduino IDE incompatibility status: fixed in nightly | **Describe the bug**
I am adding support for ESP32S2 to ESP32 Arduino and in order to include the proper flags/libs etc, platform.txt has some variables, like `compiler.path={runtime.tools.xtensa-{build.mcu}-elf-gcc.path}/bin/` that resolve fine in Arduino Builder, but seem to result in just `/bin/` in sloeber.
**To Reproduce**
Steps to reproduce the behavior:
1. Follow the instructions on how to install the development environment on your particular OS here: https://github.com/espressif/arduino-esp32/tree/esp32s2
2. Make sure that you clone the `esp32s2` branch.
3. Compile an empty sketch
**Expected behavior**
Sketch should compile fine, in fact I replaced all those nested {build.mcu} with esp32 and esp32s2 and it worked ok.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: MacOS 10.15.4
- Browser does not matter
- Version 4.3.1
**Additional context**
https://github.com/espressif/arduino-esp32/blob/esp32s2/platform.txt
| True | Nested variables in platform.txt do not resolve - **Describe the bug**
I am adding support for ESP32S2 to ESP32 Arduino and in order to include the proper flags/libs etc, platform.txt has some variables, like `compiler.path={runtime.tools.xtensa-{build.mcu}-elf-gcc.path}/bin/` that resolve fine in Arduino Builder, but seem to result in just `/bin/` in sloeber.
**To Reproduce**
Steps to reproduce the behavior:
1. Follow the instructions on how to install the development environment on your particular OS here: https://github.com/espressif/arduino-esp32/tree/esp32s2
2. Make sure that you clone the `esp32s2` branch.
3. Compile an empty sketch
**Expected behavior**
Sketch should compile fine, in fact I replaced all those nested {build.mcu} with esp32 and esp32s2 and it worked ok.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: MacOS 10.15.4
- Browser does not matter
- Version 4.3.1
**Additional context**
https://github.com/espressif/arduino-esp32/blob/esp32s2/platform.txt
| comp | nested variables in platform txt do not resolve describe the bug i am adding support for to arduino and in order to include the proper flags libs etc platform txt has some variables like compiler path runtime tools xtensa build mcu elf gcc path bin that resolve fine in arduino builder but seem to result in just bin in sloeber to reproduce steps to reproduce the behavior follow the instructions on how to install the development environment on your particular os here make sure that you clone the branch compile an empty sketch expected behavior sketch should compile fine in fact i replaced all those nested build mcu with and and it worked ok screenshots if applicable add screenshots to help explain your problem desktop please complete the following information os macos browser does not matter version additional context | 1 |
26,569 | 2,684,876,311 | IssuesEvent | 2015-03-29 13:26:33 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Crash when opening main menu | 2–5 stars bug imported Priority-Medium | _From [yaskevic...@gmail.com](https://code.google.com/u/113208971525563967575/) on June 10, 2013 00:58:32_
Required information! OS version: Win7 SP1 x64 ConEmu version: 130523 x86
VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:59:02)
MS-Windows 32-bit GUI version with OLE support
Included patches: 1-46
Compiled by Bram@KIBAALE *Bug description* ConEmu crashes every time I try to open the main menu (in the left upper corner of the window) having attached GVIM opened. *Steps to reproduction* 1. Attach GVIM to the console.
2. Select attached GVIM tab.
3. Open main menu.
**Attachment:** [Capture.PNG](http://code.google.com/p/conemu-maximus5/issues/detail?id=1091)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=1091_ | 1.0 | Crash when opening main menu - _From [yaskevic...@gmail.com](https://code.google.com/u/113208971525563967575/) on June 10, 2013 00:58:32_
Required information! OS version: Win7 SP1 x64 ConEmu version: 130523 x86
VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:59:02)
MS-Windows 32-bit GUI version with OLE support
Included patches: 1-46
Compiled by Bram@KIBAALE *Bug description* ConEmu crashes every time I try to open the main menu (in the left upper corner of the window) having attached GVIM opened. *Steps to reproduction* 1. Attach GVIM to the console.
2. Select attached GVIM tab.
3. Open main menu.
**Attachment:** [Capture.PNG](http://code.google.com/p/conemu-maximus5/issues/detail?id=1091)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=1091_ | non_comp | crash when opening main menu from on june required information os version conemu version vim vi improved aug compiled oct ms windows bit gui version with ole support included patches compiled by bram kibaale bug description conemu crashes every time i try to open the main menu in the left upper corner of the window having attached gvim opened steps to reproduction attach gvim to the console select attached gvim tab open main menu attachment original issue | 0 |
58,678 | 8,300,692,404 | IssuesEvent | 2018-09-21 08:56:24 | tox-dev/tox | https://api.github.com/repos/tox-dev/tox | closed | Document PY_COLORS to enforce color output | area:documentation pr-available | - Bitbucket: https://bitbucket.org/hpk42/tox/issue/163
- Originally reported by: @hashar
- Originally created at: 2014-03-21T18:30:40.458
While integrating a tox based job in Jenkins (which has a dumb terminal) I was looking for a way to force color output.
Looking at _py/terminalwriter.py , it looks up the environment variable PY_COLORS. A value of 1 would enforce color, a value of 0 would disable color. The global override the TERM value.
It would be nice to have PY_COLORS documented in the tox documentation.
py-trunk doc update request is: https://bitbucket.org/hpk42/py-trunk/issue/138/py_colors-env-is-undocumented
| 1.0 | Document PY_COLORS to enforce color output - - Bitbucket: https://bitbucket.org/hpk42/tox/issue/163
- Originally reported by: @hashar
- Originally created at: 2014-03-21T18:30:40.458
While integrating a tox based job in Jenkins (which has a dumb terminal) I was looking for a way to force color output.
Looking at _py/terminalwriter.py , it looks up the environment variable PY_COLORS. A value of 1 would enforce color, a value of 0 would disable color. The global override the TERM value.
It would be nice to have PY_COLORS documented in the tox documentation.
py-trunk doc update request is: https://bitbucket.org/hpk42/py-trunk/issue/138/py_colors-env-is-undocumented
| non_comp | document py colors to enforce color output bitbucket originally reported by hashar originally created at while integrating a tox based job in jenkins which has a dumb terminal i was looking for a way to force color output looking at py terminalwriter py it looks up the environment variable py colors a value of would enforce color a value of would disable color the global override the term value it would be nice to have py colors documented in the tox documentation py trunk doc update request is | 0 |
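The tox record above documents `py`'s `PY_COLORS` convention: `PY_COLORS=1` forces color output on, `PY_COLORS=0` forces it off, and otherwise the terminal (TTY detection) decides. A small sketch of that lookup — simplified from the behavior described for `_py/terminalwriter.py`, not the library's exact code:

```python
import os

def colors_enabled(isatty, environ=os.environ):
    """Honor PY_COLORS: '1' forces color on, '0' forces it off,
    anything else falls back to whether the stream is a TTY."""
    override = environ.get("PY_COLORS")
    if override == "1":
        return True
    if override == "0":
        return False
    return isatty

print(colors_enabled(False, {"PY_COLORS": "1"}))  # -> True
```

This is the pattern a CI system like Jenkins relies on: with a dumb terminal (`isatty` false), exporting `PY_COLORS=1` still produces colored logs.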
195,477 | 14,730,800,170 | IssuesEvent | 2021-01-06 13:47:39 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Flaky Test [Build&Test / libbeat-build / TestAutodiscoverWithMutlipleEntries – autodiscover] | Team:Integrations Team:Platforms bug ci-reported flaky-test | ## Flaky Test
* **Test Name:** Build&Test / libbeat-build / TestAutodiscoverWithMutlipleEntries – autodiscover
* **Link:** https://github.com/elastic/beats/blob/feb6cbffa41250cf6f0e715e9446897c484885d4/libbeat/autodiscover/autodiscover_test.go#L380
* **Branch:** Seen in https://github.com/elastic/beats/pull/23313, reproduced in master.
* **Notes:**
* May be related to #22668.
* According to the comments in the test, this can be an actual bug, because it seems that events are being received in wrong order (or multiple times?).
* It can be reproduced locally with `go test -count 10 -run TestAutodiscoverWithMutlipleEntries ./libbeat/autodiscover/`
### Stack Trace
```
autodiscover_test.go:447:
Error Trace: autodiscover_test.go:447
Error: Not equal:
expected: &common.Config{ctx:ucfg.context{parent:ucfg.value(nil), field:""}, metadata:(*ucfg.Meta)(nil), fields:(*ucfg.fields)(0xc00009bc00)}
actual : &common.Config{ctx:ucfg.context{parent:ucfg.value(nil), field:""}, metadata:(*ucfg.Meta)(nil), fields:(*ucfg.fields)(0xc00009bd00)}
Diff:
--- Expected
+++ Actual
@@ -8,3 +8,3 @@
d: (map[string]ucfg.value) (len=1) {
- (string) (len=1) "a": (*ucfg.cfgString)({
+ (string) (len=1) "x": (*ucfg.cfgString)({
cfgPrimitive: (ucfg.cfgPrimitive) {
@@ -14,3 +14,3 @@
},
- field: (string) (len=1) "a"
+ field: (string) (len=1) "x"
},
@@ -18,3 +18,3 @@
},
- s: (string) (len=1) "b"
+ s: (string) (len=1) "y"
})
Test: TestAutodiscoverWithMutlipleEntries
autodiscover_test.go:469:
Error Trace: autodiscover_test.go:469
Error: Not equal:
expected: &common.Config{ctx:ucfg.context{parent:ucfg.value(nil), field:""}, metadata:(*ucfg.Meta)(nil), fields:(*ucfg.fields)(0xc00009bba0)}
actual : &common.Config{ctx:ucfg.context{parent:ucfg.value(nil), field:""}, metadata:(*ucfg.Meta)(nil), fields:(*ucfg.fields)(0xc0005f8c00)}
Diff:
--- Expected
+++ Actual
@@ -8,3 +8,3 @@
d: (map[string]ucfg.value) (len=1) {
- (string) (len=1) "x": (*ucfg.cfgString)({
+ (string) (len=1) "a": (*ucfg.cfgString)({
cfgPrimitive: (ucfg.cfgPrimitive) {
@@ -14,3 +14,3 @@
},
- field: (string) (len=1) "x"
+ field: (string) (len=1) "a"
},
@@ -18,3 +18,3 @@
},
- s: (string) (len=1) "y"
+ s: (string) (len=1) "b"
})
Test: TestAutodiscoverWithMutlipleEntries
autodiscover_test.go:473:
Error Trace: autodiscover_test.go:473
Error: Should be false
Test: TestAutodiscoverWithMutlipleEntries
autodiscover_test.go:515: Waiting for condition
```
| 1.0 | Flaky Test [Build&Test / libbeat-build / TestAutodiscoverWithMutlipleEntries – autodiscover] - ## Flaky Test
* **Test Name:** Build&Test / libbeat-build / TestAutodiscoverWithMutlipleEntries – autodiscover
* **Link:** https://github.com/elastic/beats/blob/feb6cbffa41250cf6f0e715e9446897c484885d4/libbeat/autodiscover/autodiscover_test.go#L380
* **Branch:** Seen in https://github.com/elastic/beats/pull/23313, reproduced in master.
* **Notes:**
* May be related to #22668.
* According to the comments in the test, this can be an actual bug, because it seems that events are being received in wrong order (or multiple times?).
* It can be reproduced locally with `go test -count 10 -run TestAutodiscoverWithMutlipleEntries ./libbeat/autodiscover/`
### Stack Trace
```
autodiscover_test.go:447:
Error Trace: autodiscover_test.go:447
Error: Not equal:
expected: &common.Config{ctx:ucfg.context{parent:ucfg.value(nil), field:""}, metadata:(*ucfg.Meta)(nil), fields:(*ucfg.fields)(0xc00009bc00)}
actual : &common.Config{ctx:ucfg.context{parent:ucfg.value(nil), field:""}, metadata:(*ucfg.Meta)(nil), fields:(*ucfg.fields)(0xc00009bd00)}
Diff:
--- Expected
+++ Actual
@@ -8,3 +8,3 @@
d: (map[string]ucfg.value) (len=1) {
- (string) (len=1) "a": (*ucfg.cfgString)({
+ (string) (len=1) "x": (*ucfg.cfgString)({
cfgPrimitive: (ucfg.cfgPrimitive) {
@@ -14,3 +14,3 @@
},
- field: (string) (len=1) "a"
+ field: (string) (len=1) "x"
},
@@ -18,3 +18,3 @@
},
- s: (string) (len=1) "b"
+ s: (string) (len=1) "y"
})
Test: TestAutodiscoverWithMutlipleEntries
autodiscover_test.go:469:
Error Trace: autodiscover_test.go:469
Error: Not equal:
expected: &common.Config{ctx:ucfg.context{parent:ucfg.value(nil), field:""}, metadata:(*ucfg.Meta)(nil), fields:(*ucfg.fields)(0xc00009bba0)}
actual : &common.Config{ctx:ucfg.context{parent:ucfg.value(nil), field:""}, metadata:(*ucfg.Meta)(nil), fields:(*ucfg.fields)(0xc0005f8c00)}
Diff:
--- Expected
+++ Actual
@@ -8,3 +8,3 @@
d: (map[string]ucfg.value) (len=1) {
- (string) (len=1) "x": (*ucfg.cfgString)({
+ (string) (len=1) "a": (*ucfg.cfgString)({
cfgPrimitive: (ucfg.cfgPrimitive) {
@@ -14,3 +14,3 @@
},
- field: (string) (len=1) "x"
+ field: (string) (len=1) "a"
},
@@ -18,3 +18,3 @@
},
- s: (string) (len=1) "y"
+ s: (string) (len=1) "b"
})
Test: TestAutodiscoverWithMutlipleEntries
autodiscover_test.go:473:
Error Trace: autodiscover_test.go:473
Error: Should be false
Test: TestAutodiscoverWithMutlipleEntries
autodiscover_test.go:515: Waiting for condition
```
| non_comp | flaky test flaky test test name build test libbeat build testautodiscoverwithmutlipleentries – autodiscover link branch seen in reproduced in master notes may be related to according to the comments in the test this can be an actual bug because it seems that events are being received in wrong order or multiple times it can be reproduced locally with go test count run testautodiscoverwithmutlipleentries libbeat autodiscover stack trace autodiscover test go error trace autodiscover test go error not equal expected common config ctx ucfg context parent ucfg value nil field metadata ucfg meta nil fields ucfg fields actual common config ctx ucfg context parent ucfg value nil field metadata ucfg meta nil fields ucfg fields diff expected actual d map ucfg value len string len a ucfg cfgstring string len x ucfg cfgstring cfgprimitive ucfg cfgprimitive field string len a field string len x s string len b s string len y test testautodiscoverwithmutlipleentries autodiscover test go error trace autodiscover test go error not equal expected common config ctx ucfg context parent ucfg value nil field metadata ucfg meta nil fields ucfg fields actual common config ctx ucfg context parent ucfg value nil field metadata ucfg meta nil fields ucfg fields diff expected actual d map ucfg value len string len x ucfg cfgstring string len a ucfg cfgstring cfgprimitive ucfg cfgprimitive field string len x field string len a s string len y s string len b test testautodiscoverwithmutlipleentries autodiscover test go error trace autodiscover test go error should be false test testautodiscoverwithmutlipleentries autodiscover test go waiting for condition | 0 |
91,765 | 8,318,115,735 | IssuesEvent | 2018-09-25 13:56:17 | rixed/ramen | https://api.github.com/repos/rixed/ramen | closed | `ramen test` should accept immediate programs instead of reference to binaries | tests | Would come handy to add test functions over the output on programs under tests | 1.0 | `ramen test` should accept immediate programs instead of reference to binaries - Would come handy to add test functions over the output on programs under tests | non_comp | ramen test should accept immediate programs instead of reference to binaries would come handy to add test functions over the output on programs under tests | 0 |
726,774 | 25,011,324,635 | IssuesEvent | 2022-11-03 15:26:51 | AY2223S1-CS2103T-T12-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-T12-4/tp | closed | Improve code quality in `Undo/Redo` commands | type.Task priority.High severity.Medium | The previous working iteration of the `undo` and `redo` commands adopts the control flow that after every command is executed, if the command is undoable the model will make a snapshot of `UninurseBook`.
The current working iteration adopts the control flow that a snapshot is only made after a change is invoked in `Model` through the methods `addPerson()`, `deletePerson()`, `setPerson()` and `clearPersons()`.
The proposed implementation is to shift the logic to be handled by either one of `Model` or `PersistentUninurseBook` to reduce any potential coupling. | 1.0 | Improve code quality in `Undo/Redo` commands - The previous working iteration of the `undo` and `redo` commands adopts the control flow that after every command is executed, if the command is undoable the model will make a snapshot of `UninurseBook`.
The current working iteration adopts the control flow that a snapshot is only made after a change is invoked in `Model` through the methods `addPerson()`, `deletePerson()`, `setPerson()` and `clearPersons()`.
The proposed implementation is to shift the logic to be handled by either one of `Model` or `PersistentUninurseBook` to reduce any potential coupling. | non_comp | improve code quality in undo redo commands the previous working iteration of the undo and redo commands adopts the control flow that after every command is executed if the command is undoable the model will make a snapshot of uninursebook the current working iteration adopts the control flow that a snapshot is only made after a change is invoked in model through the methods addperson deleteperson setperson and clearpersons the proposed implementation is to shift the logic to be handled by either one of model or persistentuninursebook to reduce any potential coupling | 0 |
18,017 | 24,886,347,887 | IssuesEvent | 2022-10-28 08:07:08 | GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator | https://api.github.com/repos/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator | closed | Build failure on iOS | next release compatibility | On iOS when I build this library, I get this error:
```
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:6446:10: error: call to member function 'WriteNumber' is ambiguous
json.WriteNumber(allocationCount);
~~~~~^~~~~~~~~~~
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5686:21: note: candidate function
void VmaJsonWriter::WriteNumber(uint32_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5693:21: note: candidate function
void VmaJsonWriter::WriteNumber(uint64_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:6449:10: error: call to member function 'WriteNumber' is ambiguous
json.WriteNumber(unusedRangeCount);
~~~~~^~~~~~~~~~~
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5686:21: note: candidate function
void VmaJsonWriter::WriteNumber(uint32_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5693:21: note: candidate function
void VmaJsonWriter::WriteNumber(uint64_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:15967:34: error: call to member function 'ContinueString' is ambiguous
json.ContinueString(index++);
~~~~~^~~~~~~~~~~~~~
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5657:21: note: candidate function
void VmaJsonWriter::ContinueString(uint32_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5663:21: note: candidate function
void VmaJsonWriter::ContinueString(uint64_t n)
```
It seems like it's because size_t is not typedeff'd to either uint64_t or uint32_t. I'm not exactly sure what it is typdeff'd to, but I imagine an explicit cast would be a reasonable solution.
If this seems fine, let me know and I'm happy to open a PR.
There is a workaround, which is defining
`#define VMA_STATS_STRING_ENABLED 0 `
| True | Build failure on iOS - On iOS when I build this library, I get this error:
```
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:6446:10: error: call to member function 'WriteNumber' is ambiguous
json.WriteNumber(allocationCount);
~~~~~^~~~~~~~~~~
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5686:21: note: candidate function
void VmaJsonWriter::WriteNumber(uint32_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5693:21: note: candidate function
void VmaJsonWriter::WriteNumber(uint64_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:6449:10: error: call to member function 'WriteNumber' is ambiguous
json.WriteNumber(unusedRangeCount);
~~~~~^~~~~~~~~~~
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5686:21: note: candidate function
void VmaJsonWriter::WriteNumber(uint32_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5693:21: note: candidate function
void VmaJsonWriter::WriteNumber(uint64_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:15967:34: error: call to member function 'ContinueString' is ambiguous
json.ContinueString(index++);
~~~~~^~~~~~~~~~~~~~
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5657:21: note: candidate function
void VmaJsonWriter::ContinueString(uint32_t n)
^
/Users/russellgreene/ars/ext/vcpkg_installed_ios/arm64-ios/include/vk_mem_alloc.h:5663:21: note: candidate function
void VmaJsonWriter::ContinueString(uint64_t n)
```
It seems like it's because size_t is not typedeff'd to either uint64_t or uint32_t. I'm not exactly sure what it is typdeff'd to, but I imagine an explicit cast would be a reasonable solution.
If this seems fine, let me know and I'm happy to open a PR.
There is a workaround, which is defining
`#define VMA_STATS_STRING_ENABLED 0 `
| comp | build failure on ios on ios when i build this library i get this error users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h error call to member function writenumber is ambiguous json writenumber allocationcount users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h note candidate function void vmajsonwriter writenumber t n users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h note candidate function void vmajsonwriter writenumber t n users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h error call to member function writenumber is ambiguous json writenumber unusedrangecount users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h note candidate function void vmajsonwriter writenumber t n users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h note candidate function void vmajsonwriter writenumber t n users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h error call to member function continuestring is ambiguous json continuestring index users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h note candidate function void vmajsonwriter continuestring t n users russellgreene ars ext vcpkg installed ios ios include vk mem alloc h note candidate function void vmajsonwriter continuestring t n it seems like it s because size t is not typedeff d to either t or t i m not exactly sure what it is typdeff d to but i imagine an explicit cast would be a reasonable solution if this seems fine let me know and i m happy to open a pr there is a workaround which is defining define vma stats string enabled | 1 |
9,991 | 11,982,059,218 | IssuesEvent | 2020-04-07 12:18:30 | Snailclimb/JavaGuide | https://api.github.com/repos/Snailclimb/JavaGuide | closed | 同学,学成了就来阿里C2M技术部试试吧,2021届实习优先~ | incompatible | 作为阿里巴巴电商板块新兴的核心业务(C2M),我们的主要工作是为全行业供应商提供面向国内外线上&线下一站式、全链路的采供销服务,国内市场上建立消费分层浪潮下的全平台产地供货中心,国际市场上支撑阿里货通全球战略,基于千亿量级的业务数据,通过线上化、数字化、智能化手段帮助几百万供应商服务全球十亿以上的消费者。
主要方向包括:
1、分布式服务端程序的系统设计,针对复杂的业务场景,抽象核心业务模式与关键能力,打造平台产品体系,为客户提供多样化、个性化的服务解决方案;
2、结合数据、算法,提供各种产品技术创新能力,为客户提供数智化的商业能力。
我们部门目前面临各种业务、技术挑战,产品技术发展空间非常大,特别难得的是在这里你能看到商业生态全景、接触到最一线的商业解决方案,以及最一线的产品技术思维,希望大家能够加入我们,一起学习、成长。
求简历:chongbing.wch@alibaba-inc.com | True | 同学,学成了就来阿里C2M技术部试试吧,2021届实习优先~ - 作为阿里巴巴电商板块新兴的核心业务(C2M),我们的主要工作是为全行业供应商提供面向国内外线上&线下一站式、全链路的采供销服务,国内市场上建立消费分层浪潮下的全平台产地供货中心,国际市场上支撑阿里货通全球战略,基于千亿量级的业务数据,通过线上化、数字化、智能化手段帮助几百万供应商服务全球十亿以上的消费者。
主要方向包括:
1、分布式服务端程序的系统设计,针对复杂的业务场景,抽象核心业务模式与关键能力,打造平台产品体系,为客户提供多样化、个性化的服务解决方案;
2、结合数据、算法,提供各种产品技术创新能力,为客户提供数智化的商业能力。
我们部门目前面临各种业务、技术挑战,产品技术发展空间非常大,特别难得的是在这里你能看到商业生态全景、接触到最一线的商业解决方案,以及最一线的产品技术思维,希望大家能够加入我们,一起学习、成长。
求简历:chongbing.wch@alibaba-inc.com | comp | 同学, , 作为阿里巴巴电商板块新兴的核心业务( ),我们的主要工作是为全行业供应商提供面向国内外线上 线下一站式、全链路的采供销服务,国内市场上建立消费分层浪潮下的全平台产地供货中心,国际市场上支撑阿里货通全球战略,基于千亿量级的业务数据,通过线上化、数字化、智能化手段帮助几百万供应商服务全球十亿以上的消费者。 主要方向包括: 、分布式服务端程序的系统设计,针对复杂的业务场景,抽象核心业务模式与关键能力,打造平台产品体系,为客户提供多样化、个性化的服务解决方案; 、结合数据、算法,提供各种产品技术创新能力,为客户提供数智化的商业能力。 我们部门目前面临各种业务、技术挑战,产品技术发展空间非常大,特别难得的是在这里你能看到商业生态全景、接触到最一线的商业解决方案,以及最一线的产品技术思维,希望大家能够加入我们,一起学习、成长。 求简历:chongbing wch alibaba inc com | 1 |
155,430 | 19,802,861,629 | IssuesEvent | 2022-01-19 01:06:02 | snowdensb/SecurityShepherd | https://api.github.com/repos/snowdensb/SecurityShepherd | opened | CVE-2020-9493 (High) detected in log4j-1.2.7.jar | security vulnerability | ## CVE-2020-9493 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.7.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/log4j/log4j/1.2.7/log4j-1.2.7.jar,/SecurityShepherd/target/owaspSecurityShepherd/WEB-INF/lib/log4j-1.2.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-1.2.7.jar** (Vulnerable Library)
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution.
<p>Publish Date: 2021-06-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493>CVE-2020-9493</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.7","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"log4j:log4j:1.2.7","isMinimumFixVersionAvailable":false,"isBinary":false}],"baseBranches":["dev"],"vulnerabilityIdentifier":"CVE-2020-9493","vulnerabilityDetails":"A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-9493 (High) detected in log4j-1.2.7.jar - ## CVE-2020-9493 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.7.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/log4j/log4j/1.2.7/log4j-1.2.7.jar,/SecurityShepherd/target/owaspSecurityShepherd/WEB-INF/lib/log4j-1.2.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-1.2.7.jar** (Vulnerable Library)
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution.
<p>Publish Date: 2021-06-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493>CVE-2020-9493</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.7","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"log4j:log4j:1.2.7","isMinimumFixVersionAvailable":false,"isBinary":false}],"baseBranches":["dev"],"vulnerabilityIdentifier":"CVE-2020-9493","vulnerabilityDetails":"A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_comp | cve high detected in jar cve high severity vulnerability vulnerable library jar path to dependency file pom xml path to vulnerable library canner repository jar securityshepherd target owaspsecurityshepherd web inf lib jar dependency hierarchy x jar vulnerable library found in base branch dev vulnerability details a deserialization flaw was found in apache chainsaw versions prior to which could lead to malicious code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree isminimumfixversionavailable false isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails a deserialization flaw was found in apache chainsaw versions prior to which could lead to malicious code execution vulnerabilityurl | 0 |
9,635 | 11,707,244,379 | IssuesEvent | 2020-03-08 04:43:27 | SchrodingerZhu/snmalloc-rs | https://api.github.com/repos/SchrodingerZhu/snmalloc-rs | closed | Use libc++ on Mac OS X | bug compatibility | Could we change to using libc++ on Mac OS X. The current crate does not work on Catalina.
I think we just need to add another option here:
https://github.com/SchrodingerZhu/snmalloc-rs/blob/a358c33f2fa4715d4899949f1bd879db8c820b80/snmalloc-sys/build.rs#L40
I guess this would change to
```C
println!("cargo:rustc-link-lib=dylib=c++");
```
on Mac OS X.
| True | Use libc++ on Mac OS X - Could we change to using libc++ on Mac OS X. The current crate does not work on Catalina.
I think we just need to add another option here:
https://github.com/SchrodingerZhu/snmalloc-rs/blob/a358c33f2fa4715d4899949f1bd879db8c820b80/snmalloc-sys/build.rs#L40
I guess this would change to
```C
println!("cargo:rustc-link-lib=dylib=c++");
```
on Mac OS X.
| comp | use libc on mac os x could we change to using libc on mac os x the current crate does not work on catalina i think we just need to add another option here i guess this would change to c println cargo rustc link lib dylib c on mac os x | 1 |
2,644 | 5,389,025,549 | IssuesEvent | 2017-02-25 00:14:21 | Polymer/polymer | https://api.github.com/repos/Polymer/polymer | closed | Update `importHref` in 2.x-preview branch to match implementation in master | 1.x-2.x compatibility 2.x | The implementation in the 2.x-preview branch is outdated. This merge should be done after https://github.com/Polymer/polymer/pull/4114 gets merged to 1.x. | True | Update `importHref` in 2.x-preview branch to match implementation in master - The implementation in the 2.x-preview branch is outdated. This merge should be done after https://github.com/Polymer/polymer/pull/4114 gets merged to 1.x. | comp | update importhref in x preview branch to match implementation in master the implementation in the x preview branch is outdated this merge should be done after gets merged to x | 1 |
417,263 | 28,110,270,602 | IssuesEvent | 2023-03-31 06:30:40 | shenchenzizoe/ped | https://api.github.com/repos/shenchenzizoe/ped | opened | unmatchable mark command | type.DocumentationBug severity.High | Mark command in the UG does not meet with the one provided by the app


<!--session: 1680242659928-7cdd008f-296b-4da9-8478-b05f1db25bc6-->
<!--Version: Web v3.4.7--> | 1.0 | unmatchable mark command - Mark command in the UG does not meet with the one provided by the app


<!--session: 1680242659928-7cdd008f-296b-4da9-8478-b05f1db25bc6-->
<!--Version: Web v3.4.7--> | non_comp | unmatchable mark command mark command in the ug does not meet with the one provided by the app | 0 |
50,173 | 7,571,259,741 | IssuesEvent | 2018-04-23 11:39:59 | mAAdhaTTah/wordpress-github-sync | https://api.github.com/repos/mAAdhaTTah/wordpress-github-sync | closed | Unable to backup Private Posts? | documentation enhancement help wanted | Some posts I'm working on is in a private status, but seems these private posts unable to sync via this plugin? | 1.0 | Unable to backup Private Posts? - Some posts I'm working on is in a private status, but seems these private posts unable to sync via this plugin? | non_comp | unable to backup private posts some posts i m working on is in a private status but seems these private posts unable to sync via this plugin | 0 |
126,424 | 17,892,188,268 | IssuesEvent | 2021-09-08 02:10:55 | matteobaccan/Web3jClient | https://api.github.com/repos/matteobaccan/Web3jClient | opened | CVE-2016-1000340 (High) detected in bcprov-jdk15on-1.54.jar | security vulnerability | ## CVE-2016-1000340 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.54.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: Web3jClient/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.54/bcprov-jdk15on-1.54.jar</p>
<p>
Dependency Hierarchy:
- core-0.5.2.jar (Root Library)
- :x: **bcprov-jdk15on-1.54.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/matteobaccan/Web3jClient/commit/fa4e59f4e4e6a1a132c43e14cbbd748d6a32fe48">fa4e59f4e4e6a1a132c43e14cbbd748d6a32fe48</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Bouncy Castle JCE Provider versions 1.51 to 1.55, a carry propagation bug was introduced in the implementation of squaring for several raw math classes have been fixed (org.bouncycastle.math.raw.Nat???). These classes are used by our custom elliptic curve implementations (org.bouncycastle.math.ec.custom.**), so there was the possibility of rare (in general usage) spurious calculations for elliptic curve scalar multiplications. Such errors would have been detected with high probability by the output validation for our scalar multipliers.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000340>CVE-2016-1000340</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000340">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000340</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: org.bouncycastle:bcprov-debug-jdk15on:1.56,org.bouncycastle:bcprov-debug-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk15on:1.56,org.bouncycastle:bcprov-jdk14:1.56,org.bouncycastle:bcprov-jdk15on:1.56,org.bouncycastle:bcprov-ext-debug-jdk15on:1.56</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-1000340 (High) detected in bcprov-jdk15on-1.54.jar - ## CVE-2016-1000340 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.54.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: Web3jClient/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.54/bcprov-jdk15on-1.54.jar</p>
<p>
Dependency Hierarchy:
- core-0.5.2.jar (Root Library)
- :x: **bcprov-jdk15on-1.54.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/matteobaccan/Web3jClient/commit/fa4e59f4e4e6a1a132c43e14cbbd748d6a32fe48">fa4e59f4e4e6a1a132c43e14cbbd748d6a32fe48</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Bouncy Castle JCE Provider versions 1.51 to 1.55, a carry propagation bug was introduced in the implementation of squaring for several raw math classes have been fixed (org.bouncycastle.math.raw.Nat???). These classes are used by our custom elliptic curve implementations (org.bouncycastle.math.ec.custom.**), so there was the possibility of rare (in general usage) spurious calculations for elliptic curve scalar multiplications. Such errors would have been detected with high probability by the output validation for our scalar multipliers.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000340>CVE-2016-1000340</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000340">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000340</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: org.bouncycastle:bcprov-debug-jdk15on:1.56,org.bouncycastle:bcprov-debug-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk15on:1.56,org.bouncycastle:bcprov-jdk14:1.56,org.bouncycastle:bcprov-jdk15on:1.56,org.bouncycastle:bcprov-ext-debug-jdk15on:1.56</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_comp | cve high detected in bcprov jar cve high severity vulnerability vulnerable library bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk to jdk library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org bouncycastle bcprov bcprov jar dependency hierarchy core jar root library x bcprov jar vulnerable library found in head commit a href found in base branch master vulnerability details in the bouncy castle jce provider versions to a carry propagation bug was introduced in the implementation of squaring for several raw math classes have been fixed org bouncycastle math raw nat these classes are used by our custom elliptic curve implementations org bouncycastle math ec custom so there was the possibility of rare in general usage spurious calculations for elliptic curve scalar multiplications such errors would have been detected with high probability by the output validation for our scalar multipliers publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org bouncycastle bcprov debug org bouncycastle bcprov debug org bouncycastle bcprov ext org bouncycastle bcprov org bouncycastle bcprov org bouncycastle bcprov ext debug step up your open source security game with whitesource | 0 |
5,758 | 8,210,552,183 | IssuesEvent | 2018-09-04 11:07:51 | kasemir/org.csstudio.display.builder | https://api.github.com/repos/kasemir/org.csstudio.display.builder | opened | Problems opening BOY screens from Display Builder | BOY-compatibility blocking | At ESS we have started the conversion of BOY screens coming from in-kind partners to Display Builder.
If a single OPI has to be converted there is no problem, but if this (converted) OPI opens another one (still in BOY format) it will be opened with Display Builder and more often than not this is badly displayed, with missing widgets and different script behaviour.
It would be nice to be able to open these BOY files with BOY instead of Display Builder. I'm thinking of
- having a DB preference governing how BOY files are opened (from inside a Display Builder screen),
- or having a flag for the Action Button (and maybe for the scripting utility function) where this behaviour can be set,
- or both.
This is a blocking problem because often the groups in charge of their OPIs (and their conversion) are different, with different speeds and policies about the conversion. | True | Problems opening BOY screens from Display Builder - At ESS we have started the conversion of BOY screens coming from in-kind partners to Display Builder.
If a single OPI has to be converted there is no problem, but if this (converted) OPI opens another one (still in BOY format) it will be opened with Display Builder and more often than not this is badly displayed, with missing widgets and different script behaviour.
It would be nice to be able to open these BOY files with BOY instead of Display Builder. I'm thinking of
- having a DB preference governing how BOY files are opened (from inside a Display Builder screen),
- or having a flag for the Action Button (and maybe for the scripting utility function) where this behaviour can be set,
- or both.
This is a blocking problem because often the groups in charge of their OPIs (and their conversion) are different with different speed and policies about the conversion. | comp | problems opening boy screens from display builder at ess we have started the conversion of boy screens coming from in kind partners to display builder if a single opi has to be converted there is no problem but if this converted opi opens another one still in boy format it will be opened with display builder and more often than not this is badly displayed with missing widget and different script behaviour it would be nice to be able to open this boy files with boy instead of display builder i m thinking of having a db preference governing how boy files are opened from inside a display builder screen or having a flag for the action button and maybe for the scripting utility function where this behaviour can be set or both this is a blocking problem because often the groups in charge of their opis and their conversion are different with different speed and policies about the conversion | 1 |
309,752 | 23,304,566,025 | IssuesEvent | 2022-08-07 20:38:56 | matrix-org/matrix-appservice-bridge | https://api.github.com/repos/matrix-org/matrix-appservice-bridge | opened | HOWTO.md: Replace "node:querystring" with URLSearchParams API | T-Documentation Z-Good-First-Issue | The querystring API is considered Legacy. New code should use the URLSearchParams API instead.
https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams
This is fully supported since Node.js 10. | 1.0 | HOWTO.md: Replace "node:querystring" with URLSearchParams API - The querystring API is considered Legacy. New code should use the URLSearchParams API instead.
https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams
This is fully supported since Node.js 10. | non_comp | howto md replace node querystring with urlsearchparams api the querystring api is considered legacy new code should use the urlsearchparams api instead this is fully supported since node js | 0 |
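A minimal sketch of the migration the issue above asks for, assuming Node.js 10 or later (where `URLSearchParams` is available as a global). The key names (`room`, `limit`, `access_token`, `user_id`) are illustrative only, not taken from the bridge's actual code:

```javascript
// Legacy style (discouraged):
//   const qs = require('node:querystring');
//   qs.parse('room=!abc:example.org&limit=10');
//
// Modern style: the WHATWG URLSearchParams API, a global since Node.js 10.

// Parsing a query string
const params = new URLSearchParams('room=!abc:example.org&limit=10');
console.log(params.get('room'));          // '!abc:example.org'
console.log(Number(params.get('limit'))); // 10

// Building a query string; percent-encoding is handled automatically
const out = new URLSearchParams();
out.set('access_token', 'hunter2');
out.set('user_id', '@bridge:example.org');
console.log(out.toString()); // 'access_token=hunter2&user_id=%40bridge%3Aexample.org'
```

One behavioral difference to keep in mind: `URLSearchParams.toString()` serializes using the `application/x-www-form-urlencoded` rules (for example, spaces become `+`), which can differ from `querystring.stringify` output (spaces become `%20`).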
116,145 | 24,866,680,267 | IssuesEvent | 2022-10-27 12:30:28 | eclipse/che | https://api.github.com/repos/eclipse/che | opened | Add the Happy Path PR check for Che-Code repository | kind/task severity/P1 team/editors area/editor/che-code | ### Is your task related to a problem? Please describe
Currently, there are PR checks in the Che-Code repository that run the upstream tests, but there is no PR check that runs the Happy Path tests for Che-specific functionality. So the developer needs to check by hand that they haven't broken anything Che-specific.
### Describe the solution you'd like
Need to add a PR check that would run the Happy Path Tests against the opened PR. It should be a required check, to prevent merging code that introduces a regression.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 1.0 | Add the Happy Path PR check for Che-Code repository - ### Is your task related to a problem? Please describe
Currently, there are PR checks in the Che-Code repository that run the upstream tests, but there is no PR check that runs the Happy Path tests for Che-specific functionality. So the developer needs to check by hand that they haven't broken anything Che-specific.
### Describe the solution you'd like
Need to add a PR check that would run the Happy Path Tests against the opened PR. It should be a required check, to prevent merging code that introduces a regression.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | non_comp | add the happy path pr check for che code repository is your task related to a problem please describe currently there re pr checks in the che code repository that run the upstream tests but there s no pr check that would run the happy path tests for testing che specific functionality so the developer needs to check by hand if they didn t break anything che specific describe the solution you d like need to add a pr check that would run happy path tests against the opened pr it should be a required check to prevent merging the code introduces some regression describe alternatives you ve considered no response additional context no response | 0 |
7,605 | 2,915,464,154 | IssuesEvent | 2015-06-23 12:43:02 | dotnet/wcf | https://api.github.com/repos/dotnet/wcf | opened | Self-host service should share service interface definitions with tests | enhancement test bug | Currently the self-host server contains declarations for all its service interfaces. These same interfaces are repeated in the common test projects. We should have a shared file or shared library that the self-host server and the tests use. We shouldn't maintain service interfaces in 2 places. | 1.0 | Self-host service should share service interface definitions with tests - Currently the self-host server contains declarations for all its service interfaces. These same interfaces are repeated in the common test projects. We should have a shared file or shared library that the self-host server and the tests use. We shouldn't maintain service interfaces in 2 places. | non_comp | self host service should share service interface definitions with tests currently the self host server contains declarations for all its service interfaces these same interfaces are repeated in the common test projects we should have a shared file or shared library that the self host server and the tests use we shouldn t maintain service interfaces in places | 0 |
20,010 | 27,853,108,737 | IssuesEvent | 2023-03-20 20:20:41 | A248/LibertyBans | https://api.github.com/repos/A248/LibertyBans | closed | Shared library 'Caffeine' with EcoSkills plugin | invalid incompatibility | ### LibertyBans Version
1.0.2
### I have confirmed that ...
- [X] LibertyBans is up to date
- [X] No similar issue has been reported
### Platform
Spigot/Paper
### Description
Hi A248 and others, I hope you are all well!
I wish to forward my conversation on Auxilor's Discord server which provides support for his EcoSkills plugin.
Note: this seems to be an issue with their `eco` library, not `EcoSkills` specifically (which depends on `eco`).
> **lokka30**
```
[LibertyBans] *******************************************
Plugin 'eco 6.38.0' has a critical bug. That plugin has shaded the library 'Caffeine' but did not relocate it, which will pose problems.
LibertyBans is not guaranteed to function if you do not fix this bug in Plugin 'eco 6.38.0'
Contact the author of this plugin and tell them to relocate their dependencies. Unrelocated class detected was com.github.benmanes.caffeine.cache.Caffeine
Note for advanced users: Understanding the consequences, you can disable this check by setting the system property libertybans.relocationbug.disablecheck to 'true'
*******************************************
```
> **lokka30**
> LibertyBans and eco seem to both not relocate Caffeine. I'm unsure how this could be resolved without unofficial patches.
> On another note, wouldn't Gradle allow for the eco suite (and other plugins in this boat) to relocate these dependencies?
> Could two separate plugins which do not relocate Caffeine possibly cause issues with each other?
> **_OfTeN_**
eco can't relocate kotlin and caffeine because it's a library and eco plugins use those libs from eco
LibertyBans sounds like a standalone plugin so I don't see why can't they relocate caffeine
> **Seitan**
lokka30 what did lite and devs said to you? Because we can like often said, why couldn’t they? | True | Shared library 'Caffeine' with EcoSkills plugin - ### LibertyBans Version
1.0.2
### I have confirmed that ...
- [X] LibertyBans is up to date
- [X] No similar issue has been reported
### Platform
Spigot/Paper
### Description
Hi A248 and others, I hope you are all well!
I wish to forward my conversation on Auxilor's Discord server which provides support for his EcoSkills plugin.
Note: this seems to be an issue with their `eco` library, not `EcoSkills` specifically (which depends on `eco`).
> **lokka30**
```
[LibertyBans] *******************************************
Plugin 'eco 6.38.0' has a critical bug. That plugin has shaded the library 'Caffeine' but did not relocate it, which will pose problems.
LibertyBans is not guaranteed to function if you do not fix this bug in Plugin 'eco 6.38.0'
Contact the author of this plugin and tell them to relocate their dependencies. Unrelocated class detected was com.github.benmanes.caffeine.cache.Caffeine
Note for advanced users: Understanding the consequences, you can disable this check by setting the system property libertybans.relocationbug.disablecheck to 'true'
*******************************************
```
> **lokka30**
> LibertyBans and eco seem to both not relocate Caffeine. I'm unsure how this could be resolved without unofficial patches.
> On another note, wouldn't Gradle allow for the eco suite (and other plugins in this boat) to relocate these dependencies?
> Could two separate plugins which do not relocate Caffeine possibly cause issues with each other?
> **_OfTeN_**
eco can't relocate kotlin and caffeine because it's a library and eco plugins use those libs from eco
LibertyBans sounds like a standalone plugin so I don't see why can't they relocate caffeine
> **Seitan**
lokka30 what did lite and devs said to you? Because we can like often said, why couldn’t they? | comp | shared library caffeine with ecoskills plugin libertybans version i have confirmed that libertybans is up to date no similar issue has been reported platform spigot paper description hi and others i hope you are all well i wish to forward my conversation on auxilor s discord server which provides support for his ecoskills plugin note this seems to be an issue with their eco library not ecoskills specifically which depends on eco plugin eco has a critical bug that plugin has shaded the library caffeine but did not relocate it which will pose problems libertybans is not guaranteed to function if you do not fix this bug in plugin eco contact the author of this plugin and tell them to relocate their dependencies unrelocated class detected was com github benmanes caffeine cache caffeine note for advanced users understanding the consequences you can disable this check by setting the system property libertybans relocationbug disablecheck to true libertybans and eco seem to both not relocate caffeiene i m unsure how this could be resolved without unofficial patches on another note wouldn t gradle allow for the eco suite and other plugins in this boat to relocate these dependencies could two separate plugins which do not relocate caffeine possibly cause issues with each other often eco can t relocate kotlin and caffeine because it s a library and eco plugins use those libs from eco libertybans sounds like a standalone plugin so i don t see why can t they relocate caffeine seitan what did lite and devs said to you because we can like often said why couldn’t they | 1 |
4,778 | 7,393,930,128 | IssuesEvent | 2018-03-17 03:40:47 | MightyPirates/OpenComputers | https://api.github.com/repos/MightyPirates/OpenComputers | closed | Mod interaction - robot.swing() and Draconic Evolution blocks | incompatibility | Robots will not mine Draconium Blocks, Charged Draconium Blocks, and Awakened Draconium Blocks.
Tested on versions:
MC 1.7.10, OC 1.5.21.41, DE 1.0.2-snapshot_9 (in Resonant Rise 3.3.0.1)
MC 1.7.10, OC 1.5.22.46, DE 1.0.2
MC 1.7.10, OC 1.6.0.1-beta, DE 1.0.2
Tested with equipped Draconic Staff of Power, Draconic Pickaxe and tinkers' construct pickaxes of mining level cobalt and above. Players can mine these blocks with these tools and the tools will mine other blocks when equipped to robots.
Hope that's enough information. Never posted an issue before.
| True | Mod interaction - robot.swing() and Draconic Evolution blocks - Robots will not mine Draconium Blocks, Charged Draconium Blocks, and Awakened Draconium Blocks.
Tested on versions:
MC 1.7.10, OC 1.5.21.41, DE 1.0.2-snapshot_9 (in Resonant Rise 3.3.0.1)
MC 1.7.10, OC 1.5.22.46, DE 1.0.2
MC 1.7.10, OC 1.6.0.1-beta, DE 1.0.2
Tested with equipped Draconic Staff of Power, Draconic Pickaxe and tinkers' construct pickaxes of mining level cobalt and above. Players can mine these blocks with these tools and the tools will mine other blocks when equipped to robots.
Hope that's enough information. Never posted an issue before.
| comp | mod interaction robot swing and draconic evolution blocks robots will not mine draconium blocks charged draconium blocks and awakened draconium blocks tested on versions mc oc de snapshot in resonant rise mc oc de mc oc beta de tested with equipped draconic staff of power draconic pickaxe and tinkers construct pickaxes of mining level cobalt and above players can mine these blocks with these tools and the tools will mine other blocks when equipped to robots hope that s enough information never posted an issue before | 1 |
15,114 | 18,981,476,887 | IssuesEvent | 2021-11-21 00:23:38 | KiwiHawk/SeaBlock | https://api.github.com/repos/KiwiHawk/SeaBlock | closed | Bob's Vehicle Equipment compatibility | mod compatibility bobs | Bob's Vehicle Equipment techs still use alien science packs.
If mod is enabled, update techs to use regular science packs. | True | Bob's Vehicle Equipment compatibility - Bob's Vehicle Equipment techs still use alien science packs.
If mod is enabled, update techs to use regular science packs. | comp | bob s vehicle equipment compatibility bob s vehicle equipment techs still use alien science packs if mod is enabled update techs to use regular science packs | 1 |
13,147 | 15,437,138,732 | IssuesEvent | 2021-03-07 15:38:13 | ORelio/Minecraft-Console-Client | https://api.github.com/repos/ORelio/Minecraft-Console-Client | closed | Support for snapshots? | a:enhancement in:protocol-compatibility rejected:wontfix | **Prerequisites**
- [x] I have read and understood the [user manual](https://github.com/ORelio/Minecraft-Console-Client/tree/master/MinecraftClient/config)
- [X] I made sure I am running the latest [development build](https://ci.appveyor.com/project/ORelio/minecraft-console-client/build/artifacts)
- [X] I tried to [look for similar feature requests](https://github.com/ORelio/Minecraft-Console-Client/issues?q=is%3Aissue) before opening a new one
Add support for snapshot servers?
As you can see, it's not joining servers.

| True | Support for snapshots? - **Prerequisites**
- [x] I have read and understood the [user manual](https://github.com/ORelio/Minecraft-Console-Client/tree/master/MinecraftClient/config)
- [X] I made sure I am running the latest [development build](https://ci.appveyor.com/project/ORelio/minecraft-console-client/build/artifacts)
- [X] I tried to [look for similar feature requests](https://github.com/ORelio/Minecraft-Console-Client/issues?q=is%3Aissue) before opening a new one
Add support for snapshot servers?
As you can see, it's not joining servers.

| comp | support for snapshots prerequisites i have read and understood the i made sure i am running the latest i tried to before opening a new one add support for snapshot servers as you can see its not joining servers | 1 |
96,039 | 27,730,133,158 | IssuesEvent | 2023-03-15 07:17:33 | samynk/GameDevEngine2 | https://api.github.com/repos/samynk/GameDevEngine2 | opened | cmake problem when building | Windows build | The 4 SDL libraries give errors when initializing the cmake build system on windows. What typically happens is that the first the build is initiated, the main SDL library is not found and the cmake build system is not fully initialized. The second attempt then finds the sdl main library but not the second sdl library. After repeating this process enough times the build finally works.
Generating the build again and again works but is not user friendly.
| 1.0 | cmake problem when building - The 4 SDL libraries give errors when initializing the cmake build system on windows. What typically happens is that the first the build is initiated, the main SDL library is not found and the cmake build system is not fully initialized. The second attempt then finds the sdl main library but not the second sdl library. After repeating this process enough times the build finally works.
Generating the build again and again works but is not user friendly.
| non_comp | cmake problem when building the sdl libraries give errors when initializing the cmake build system on windows what typically happens is that the first the build is initiated the main sdl library is not found and the cmake build system is not fully initialized the second attempt then finds the sdl main library but not the second sdl library after repeating this process enough times the build finally works generating the build again and again works but is not user friendly | 0 |
12,955 | 15,211,866,021 | IssuesEvent | 2021-02-17 09:39:47 | the1812/Bilibili-Evolved | https://api.github.com/repos/the1812/Bilibili-Evolved | closed | 1.11.5 whole page is blank, content only visible after scrolling down | compatibility | <!-- By posting, you are assumed to have read the posting guidelines -->
<!-- https://github.com/the1812/Bilibili-Evolved/blob/preview/issue-rules.md -->
<!-- Before posting, remember to check the pinned issues (if any) -->
<!-- https://github.com/the1812/Bilibili-Evolved/issues -->
**Which feature is this about**
Page display
**Problem description**
<!-- How to reproduce the problem and on which page it appears; for video-related issues you can provide the av number -->
Home page, video playback page, etc.
**Script version**
<!-- e.g. stable 1.10.0 -->
1.11.5
**Browser version**
<!-- e.g. Chrome 80 -->
360 Extreme Browser 11.0
**Error messages**
<!-- **Please fill this in if at all possible; it is very important for determining the cause of the problem** -->
<!-- Error messages reported directly by the script, or the output of the Console tab in the browser developer tools (opened with F12 or Ctrl+Shift+I); see https://github.com/the1812/Bilibili-Evolved/wiki/查看浏览器输出的信息 -->

**Additional screenshots**

| True | 1.11.5整页白屏,向下滚动才能看到内容 - <!-- 发布后默认您已阅读发贴须知 -->
<!-- https://github.com/the1812/Bilibili-Evolved/blob/preview/issue-rules.md -->
<!-- 发之前记得看下置顶问题 (Pinned issues)(如果有的话) -->
<!-- https://github.com/the1812/Bilibili-Evolved/issues -->
**关于哪一项功能**
页面显示
**问题描述**
<!-- 如何重现此问题, 在哪个页面里出现这个问题, 比如视频相关的可以提供一下av号 -->
首页,视频播放页等
**脚本版本**
<!-- 例如正式版1.10.0 -->
1.11.5
**浏览器版本**
<!-- 例如Chrome 80 -->
360极速11.0
**错误信息**
<!-- **请尽量填写, 这对于确定问题原因非常重要** -->
<!-- 脚本直接报告的错误信息, 或者浏览器开发者工具(F12 或 Ctrl+Shift+I 召唤)里Console一栏的输出, 详见 https://github.com/the1812/Bilibili-Evolved/wiki/查看浏览器输出的信息 -->

**附加截图**

| comp | content only visible after scrolling down which feature is this about page display problem description home page video playback page etc script version browser version error messages additional screenshots | 1 |
66,658 | 12,810,604,975 | IssuesEvent | 2020-07-03 19:16:27 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [ 4.0.0 b2-dev]Install from Web - Joomla version compatibility. | J4 Issue No Code Attached Yet | ### Steps to reproduce the issue
### Expected result
Show compatibility for Joomla Extensions / only display compatible extensions.

### Actual result
Install from web not showing Joomla version compatibility for extensions.

### System information (as much as possible)
### Additional comments
| 1.0 | [ 4.0.0 b2-dev]Install from Web - Joomla version compatibility. - ### Steps to reproduce the issue
### Expected result
Show compatibility for Joomla Extensions / only display compatible extensions.

### Actual result
Install from web not showing Joomla version compatibility for extensions.

### System information (as much as possible)
### Additional comments
| non_comp | install from web joomla version compatibility steps to reproduce the issue expected result show compatibility for joomla extensions only display compatible extensions actual result install from web not showing joomla version compatibility for extensions system information as much as possible additional comments | 0 |
169,056 | 14,189,726,688 | IssuesEvent | 2020-11-14 02:09:22 | congo-basin-institute/silk | https://api.github.com/repos/congo-basin-institute/silk | closed | Relational history (Tom's and CBI's) | documentation |
> [Open in MURAL](https://app.mural.co/t/calculator1507/m/calculator1507/1599669157008?wid=0-1599671194694)
> SILK website Structure - Louder Than Ten
| 1.0 | Relational history (Tom's and CBI's) -
> [Open in MURAL](https://app.mural.co/t/calculator1507/m/calculator1507/1599669157008?wid=0-1599671194694)
> SILK website Structure - Louder Than Ten
| non_comp | relational history tom s and cbi s silk website structure louder than ten | 0 |
10,179 | 12,194,443,551 | IssuesEvent | 2020-04-29 15:48:46 | woocommerce/action-scheduler | https://api.github.com/repos/woocommerce/action-scheduler | closed | WP shutdown hook runs on fatal error | Docs compatibility custom-tables | Action Scheduler registers the autoloader in `plugins_loaded` but doesn't initialize the data stores until `init`. If a fatal error occurs between those two hooks, the WP `shutdown` hook still runs.
I found this while investigating #516 and triggered an error in the WooCommerce initialization. Both the WC error and the AS error were logged. #498 attempted to mitigate a conflict with WP Rocket where the Action data store appeared to not have been initialized.
This wouldn't have been a problem with the WP Post store as WP initializes the core database variables before loading any plugins. Data stores that rely on custom tables should not be used in the `shutdown` hook.
Scheduling the migration hook should be moved out of the `shutdown` hook and the documentation should be updated to reflect the limitation. | True | WP shutdown hook runs on fatal error - Action Scheduler registers the autoloader in `plugins_loaded` but doesn't initialize the data stores until `init`. If a fatal error occurs between those two hooks, the WP `shutdown` hook still runs.
I found this while investigating #516 and triggered an error in the WooCommerce initialization. Both the WC error and the AS error were logged. #498 attempted to mitigate a conflict with WP Rocket where the Action data store appeared to not have been initialized.
This wouldn't have been a problem with the WP Post store as WP initializes the core database variables before loading any plugins. Data stores that rely on custom tables should not be used in the `shutdown` hook.
Scheduling the migration hook should be moved out of the `shutdown` hook and the documentation should be updated to reflect the limitation. | comp | wp shutdown hook runs on fatal error action scheduler registers the autoloader in plugins loaded but doesn t initialize the data stores until init if a fatal error occurs between those two hooks the wp shutdown hook still runs i found this while investigating and triggered an error in the woocommerce initialization both the wc error and the as error were logged attempted to mitigate a conflict with wp rocket where the action data store appeared to not have been initialized this wouldn t have been a problem with the wp post store as wp initializes the core database variables before loading any plugins data stores that rely on custom tables should not be used in the shutdown hook scheduling the migration hook should be moved out of the shutdown hook and the documentation should be updated to reflect the limitation | 1 |
44,351 | 12,104,413,527 | IssuesEvent | 2020-04-20 20:10:06 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | site_alert: delete during deploy did not seem to run | Defect DevOps | When the deploy failed on 4/20 and I had to go remove the System update in progress message, I noticed that the warning message was still present. This leads me to believe that, before any site alert is added, there should be a call to site_alert: delete to get rid of any existing alerts.

| 1.0 | site_alert: delete during deploy did not seem to run - When the deploy failed on 4/20 and I had to go remove the System update in progress message, I noticed that the warning message was still present. This leads me to believe that, before any site alert is added, there should be a call to site_alert: delete to get rid of any existing alerts.

| non_comp | site alert delete during deploy did not seem to run when the deploy failed on and i had to go remove the system update in progress message i notice the the warning message was still present this leads me to believe that the before any site alert is added there should be a call to site alert delete to get rid os any existing alerts | 0 |
13,840 | 16,564,993,509 | IssuesEvent | 2021-05-29 07:58:38 | arcticicestudio/nord-vim | https://api.github.com/repos/arcticicestudio/nord-vim | closed | conceal color cannot be changed | context-plugin-support context-syntax scope-compatibility scope-quality type-feature | I'd like use the settings:
```
hi Conceal ctermfg=NONE
hi Conceal ctermbg=NONE
hi Conceal guifg=NONE
hi Conceal guibg=NONE
```
to make concealed chars and words similar to the background.
And it seems there's a bug for this fantastic nord theme:
**Nord:**

**OceanicNext:**

There's an old issue posted here, but seems not resolved:
https://github.com/arcticicestudio/nord-vim/issues/149
And I found some post here:
kaicataldo/material.vim#12
for the 'material` theme, as they add some defined colour to the scheme.
Could we also do similar things for Nord? | True | conceal color cannot be changed - I'd like use the settings:
```
hi Conceal ctermfg=NONE
hi Conceal ctermbg=NONE
hi Conceal guifg=NONE
hi Conceal guibg=NONE
```
to make concealed chars and words similar to the background.
And it seems there's a bug for this fantastic nord theme:
**Nord:**

**OceanicNext:**

There's an old issue posted here, but seems not resolved:
https://github.com/arcticicestudio/nord-vim/issues/149
And I found some post here:
kaicataldo/material.vim#12
for the 'material` theme, as they add some defined colour to the scheme.
Could we also do similar things for Nord? | comp | conceal color cannot be changed i d like use the settings hi conceal ctermfg none hi conceal ctermbg none hi conceal guifg none hi conceal guibg none to make concealed chars and words similar to the background and it seems there s a bug for this fantastic nord theme nord oceanicnext there s an old issue posted here but seems not resolved and i found some post here kaicataldo material vim for the material theme as they add some defined colour to the scheme could we also do similar things for nord | 1 |
89,364 | 17,868,211,803 | IssuesEvent | 2021-09-06 12:12:15 | informalsystems/ibc-rs | https://api.github.com/repos/informalsystems/ibc-rs | closed | Make more use of generic types | relayer code-hygiene | ## Summary
We should make more use of generic types instead of dynamic traits to abstract over the implementation. For example, a function like fn handler(chain1: Box<dyn ChainHandler>, chain2: Box<dyn ChainHandler>) could instead be written as fn handle<Chain1: ChainHandler, Chain2: ChainHandler>(chain1: Chain1, chain2: Chain2). The chain implementations could then have associated types such as Height, Connection, etc. This would make it less likely to accidentally mix up the values when forwarding them from one chain to another.
(copied from @soareschen's original proposal [here](https://github.com/informalsystems/ibc-rs/discussions/1055#discussioncomment-839782))
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate milestone (priority) applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
| 1.0 | Make more use of generic types - ## Summary
We should make more use of generic types instead of dynamic traits to abstract over the implementation. For example, a function like fn handler(chain1: Box<dyn ChainHandler>, chain2: Box<dyn ChainHandler>) could instead be written as fn handle<Chain1: ChainHandler, Chain2: ChainHandler>(chain1: Chain1, chain2: Chain2). The chain implementations could then have associated types such as Height, Connection, etc. This would make it less likely to accidentally mix up the values when forwarding them from one chain to another.
(copied from @soareschen's original proposal [here](https://github.com/informalsystems/ibc-rs/discussions/1055#discussioncomment-839782))
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate milestone (priority) applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
| non_comp | make more use of generic types summary we should make more using of generic types instead of dynamic traits to abstract over the implementation for example instead of a function like fn handler box box it could instead be written as fn handle the chain implementations could then have associated types such as height connection etc this would make it less likely to accidentally mix up the values when forwarding them from one chain to another copied from soareschen s original proposal for admin use not duplicate issue appropriate labels applied appropriate milestone priority applied appropriate contributors tagged contributor assigned self assigned | 0 |
134,937 | 30,216,384,051 | IssuesEvent | 2023-07-05 15:51:37 | h4sh5/pypi-auto-scanner | https://api.github.com/repos/h4sh5/pypi-auto-scanner | opened | wmagent 2.2.2rc4 has 1 GuardDog issues | guarddog code-execution | https://pypi.org/project/wmagent
https://inspector.pypi.io/project/wmagent
```
{
"dependency": "wmagent",
"version": "2.2.2rc4",
"result": {
"issues": 1,
"errors": {},
"results": {
"code-execution": [
{
"location": "wmagent-2.2.2rc4/setup.py:36",
"code": " os.system ('rm -rfv ./dist ./src/python/*.egg-info')",
"message": "This package is executing OS commands in the setup.py file"
}
]
},
"path": "/tmp/tmp4e3p5bj8/wmagent"
}
}
```
| 1.0 | wmagent 2.2.2rc4 has 1 GuardDog issues - https://pypi.org/project/wmagent
https://inspector.pypi.io/project/wmagent
```
{
"dependency": "wmagent",
"version": "2.2.2rc4",
"result": {
"issues": 1,
"errors": {},
"results": {
"code-execution": [
{
"location": "wmagent-2.2.2rc4/setup.py:36",
"code": " os.system ('rm -rfv ./dist ./src/python/*.egg-info')",
"message": "This package is executing OS commands in the setup.py file"
}
]
},
"path": "/tmp/tmp4e3p5bj8/wmagent"
}
}
```
| non_comp | wmagent has guarddog issues dependency wmagent version result issues errors results code execution location wmagent setup py code os system rm rfv dist src python egg info message this package is executing os commands in the setup py file path tmp wmagent | 0 |
4,457 | 7,126,300,576 | IssuesEvent | 2018-01-20 08:26:03 | brenoalvs/monk | https://api.github.com/repos/brenoalvs/monk | closed | "Parse error: syntax error, unexpected '[' in class-monk-admin.php:1226 | bug compatibility | I received this bug report, we need confirm and fix it.
Monk is throwing an error with WordPress 4.9.2 + ACF
```
"Parse error: syntax error, unexpected '[' in D:\Dados\Projetos\Vivian\publicado\wp-content\plugins\monk\admin\class-monk-admin.php on line 1226"
``` | True | "Parse error: syntax error, unexpected '[' in class-monk-admin.php:1226 - I received this bug report, we need confirm and fix it.
Monk is throwing an error with WordPress 4.9.2 + ACF
```
"Parse error: syntax error, unexpected '[' in D:\Dados\Projetos\Vivian\publicado\wp-content\plugins\monk\admin\class-monk-admin.php on line 1226"
``` | comp | parse error syntax error unexpected in class monk admin php i received this bug report we need confirm and fix it monk is throwing an error with wordpress acf parse error syntax error unexpected in d dados projetos vivian publicado wp content plugins monk admin class monk admin php on line | 1 |
189,856 | 14,525,723,356 | IssuesEvent | 2020-12-14 13:19:59 | fourMs/MGT-python | https://api.github.com/repos/fourMs/MGT-python | closed | when exporting motion data with multiple output formats the returned list missing dots before extension | bug testing | The files render correctly. Example:
```
# saving in multiple formats if data_format is a list
mg.motiondata(data_format=["txt", "csv"])
```
Output:

| 1.0 | when exporting motion data with multiple output formats the returned list missing dots before extension - The files render correctly. Example:
```
# saving in multiple formats if data_format is a list
mg.motiondata(data_format=["txt", "csv"])
```
Output:

| non_comp | when exporting motion data with multiple output formats the returned list missing dots before extension the files render correctly example saving in multiple formats if data format is a list mg motiondata data format output | 0 |
892 | 3,353,686,021 | IssuesEvent | 2015-11-18 08:02:06 | sekiguchi-nagisa/ydsh | https://api.github.com/repos/sekiguchi-nagisa/ydsh | opened | replace libedit to readline or linenoise | enhancement incompatible change | libedit dose not correctly handle utf8 character.
if replace to readline, change to GPL-v3.
if replace to linenoise, need utf8 support. | True | replace libedit to readline or linenoise - libedit dose not correctly handle utf8 character.
if replace to readline, change to GPL-v3.
if replace to linenoise, need utf8 support. | comp | replace libedit to readline or linenoise libedit dose not correctly handle character if replace to readline change to gpl if replace to linenoise need support | 1 |
20,907 | 31,551,663,100 | IssuesEvent | 2023-09-02 05:57:22 | sekiguchi-nagisa/ydsh | https://api.github.com/repos/sekiguchi-nagisa/ydsh | opened | change printf '%b' specifier behavior | incompatible change Builtin | `%b` specifier generate binary number (like another Cx2 printf and other format-string implementation)
add `%#s` specifier for replacing current `%b` specifier
see (https://lists.gnu.org/archive/html/bug-bash/2023-08/msg00112.html) | True | change printf '%b' specifier behavior - `%b` specifier generate binary number (like another Cx2 printf and other format-string implementation)
add `%#s` specifier for replacing current `%b` specifier
see (https://lists.gnu.org/archive/html/bug-bash/2023-08/msg00112.html) | comp | change printf b specifier behavior b specifier generate binary number like another printf and other format string implementation add s specifier for replacing current b specifier see | 1 |
583 | 3,010,068,476 | IssuesEvent | 2015-07-28 10:49:38 | CESNET/libnetconf | https://api.github.com/repos/CESNET/libnetconf | closed | Validation across all modules | auto-migrated Compatibility libnetconf2 | ```
Feature description:
RFC 6020 states that leaferf built-in type can reference nodes from any module
in the datastore. Now, libnetconf validates data connected with a specific YANG
module (including the augments) so when leafref references a node from some
other YANG data model, the validation fails.
```
Original issue reported on code.google.com by `rkre...@cesnet.cz` on 29 Apr 2015 at 8:06 | True | Validation across all modules - ```
Feature description:
RFC 6020 states that leaferf built-in type can reference nodes from any module
in the datastore. Now, libnetconf validates data connected with a specific YANG
module (including the augments) so when leafref references a node from some
other YANG data model, the validation fails.
```
Original issue reported on code.google.com by `rkre...@cesnet.cz` on 29 Apr 2015 at 8:06 | comp | validation across all modules feature description rfc states that leaferf built in type can reference nodes from any module in the datastore now libnetconf validates data connected with a specific yang module including the augments so when leafref references a node from some other yang data model the validation fails original issue reported on code google com by rkre cesnet cz on apr at | 1 |
114 | 2,579,221,797 | IssuesEvent | 2015-02-13 08:13:40 | tshemsedinov/impress | https://api.github.com/repos/tshemsedinov/impress | opened | WS and RPC conflict | bug compatibility | Websocket and Impress RPC initialization conflict detected in ```accept()``` step | True | WS and RPC conflict - Websocket and Impress RPC initialization conflict detected in ```accept()``` step | comp | ws and rpc conflict websocket and impress rpc initialization conflict detected in accept step | 1 |
210,227 | 16,092,082,926 | IssuesEvent | 2021-04-26 18:00:01 | eclipse/openj9 | https://api.github.com/repos/eclipse/openj9 | opened | jtreg jdk/internal/loader/ClassLoaderValue/ClassLoaderValueTest.java segfault vmState=0x00000000 | comp:vm segfault test failure | https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.openjdk_x86-64_linux_Nightly/350
jdk_lang_0
JVM_OPTIONS: -Xdump:system:none -Xdump:heap:none -Xdump:system:events=gpf+abort+traceassert+corruptcache -XX:-JITServerTechPreviewMessage -XX:+UseCompressedOops
jdk/internal/loader/ClassLoaderValue/ClassLoaderValueTest.java
https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Test/Test_openjdk11_j9_sanity.openjdk_x86-64_linux_Nightly/350/openjdk_test_output.tar.gz
```
23:59:16 stderr:
23:59:16 Unhandled exception
23:59:16 Type=Segmentation error vmState=0x00000000
23:59:16 Unhandled exception
23:59:16 Type=Segmentation error vmState=0x00000000
23:59:16 J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
23:59:16 Handler1=00007F3C719645B0 Handler2=00007F3C716C3A70 InaccessibleAddress=0000000000000010
23:59:16 J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
23:59:16 Handler1=00007F3C719645B0 Handler2=00007F3C716C3A70 InaccessibleAddress=0000000000000010
23:59:16 RDI=0000000000392800 RSI=000000000000000C RAX=00007F3C6C026FF0 RBX=0000000000000000
23:59:16 RCX=0000000000000000 RDX=000000000000000C R8=0000000000000000 R9=0000000000000498
23:59:16 R10=0000000000172000 R11=00007F3C19708A84 R12=00007F3C1970A92E R13=0000000000000000
23:59:16 R14=0000000000392800 R15=00007F3C486899D0
23:59:16 RDI=0000000000335500 RSI=000000000000000C RAX=00007F3C6C026FF0 RBX=0000000000000000
23:59:16 RCX=0000000000000000 RDX=000000000000000C R8=0000000000000000 R9=0000000000000498
23:59:16 R10=0000000000172000 R11=00007F3C719626D0 R12=00007F3C1970A92E R13=0000000000000000
23:59:16 R14=0000000000335500 R15=00007F3C4833F9D0
23:59:16 RIP=00007F3C719B5DA7 GS=0000 FS=0000 RSP=00007F3C48689650
23:59:16 EFlags=0000000000010246 CS=0033 RBP=00007F3C71AE9BA0 ERR=0000000000000004
23:59:16 TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=0000000000000010
23:59:16 RIP=00007F3C719B5DA7 GS=0000 FS=0000 RSP=00007F3C4833F650
23:59:16 EFlags=0000000000010246 CS=0033 RBP=00007F3C71AE9BA0 ERR=0000000000000004
23:59:16 TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=0000000000000010
23:59:16 xmm0 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm1 6f7268636e795364 (f: 1853444992.000000, d: 6.977146e+228)
23:59:16 xmm2 00007f3c48689a00 (f: 1214814720.000000, d: 6.911825e-310)
23:59:16 xmm3 00007f3c48689a00 (f: 1214814720.000000, d: 6.911825e-310)
23:59:16 xmm4 00007f3c48689a00 (f: 1214814720.000000, d: 6.911825e-310)
23:59:16 xmm5 000000003f82002d (f: 1065484352.000000, d: 5.264192e-315)
23:59:16 xmm6 00000000429dafa1 (f: 1117630336.000000, d: 5.521828e-315)
23:59:16 xmm7 00000000429dafa1 (f: 1117630336.000000, d: 5.521828e-315)
23:59:16 xmm8 74752f6176616a4c (f: 1986095744.000000, d: 9.707480e+252)
23:59:16 xmm9 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm10 00000000ffff0000 (f: 4294901760.000000, d: 2.121963e-314)
23:59:16 xmm11 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm12 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm13 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm14 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm15 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm0 00000000002930f0 (f: 2699504.000000, d: 1.333732e-317)
23:59:16 xmm1 00000000ffc3f2d0 (f: 4291031808.000000, d: 2.120051e-314)
23:59:16 xmm2 00007f3c4833fa00 (f: 1211365888.000000, d: 6.911825e-310)
23:59:16 xmm3 00007f3c4833fa00 (f: 1211365888.000000, d: 6.911825e-310)
23:59:16 xmm4 00007f3c4833fa00 (f: 1211365888.000000, d: 6.911825e-310)
23:59:16 xmm5 000000003fc49838 (f: 1069848640.000000, d: 5.285755e-315)
23:59:16 xmm6 00000000429dafa1 (f: 1117630336.000000, d: 5.521828e-315)
23:59:16 xmm7 00000000429dafa1 (f: 1117630336.000000, d: 5.521828e-315)
23:59:16 xmm8 74752f6176616a4c (f: 1986095744.000000, d: 9.707480e+252)
23:59:16 xmm9 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm10 00000000ffff0000 (f: 4294901760.000000, d: 2.121963e-314)
23:59:16 xmm11 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm12 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm13 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm14 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm15 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 Module=/home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/lib/default/libj9vm29.so
23:59:16 Module_base_address=00007F3C71926000
23:59:16 Target=2_90_20210425_687 (Linux 4.15.0-96-generic)
23:59:16 CPU=amd64 (4 logical CPUs) (0x5e2815000 RAM)
23:59:16 ----------- Stack Backtrace -----------
23:59:16 Module=/home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/lib/default/libj9vm29.so
23:59:16 Module_base_address=00007F3C71926000
23:59:16 Target=2_90_20210425_687 (Linux 4.15.0-96-generic)
23:59:16 CPU=amd64 (4 logical CPUs) (0x5e2815000 RAM)
23:59:16 ----------- Stack Backtrace -----------
23:59:16 (0x00007F3C719B5DA7 [libj9vm29.so+0x8fda7])
23:59:16 (0x00007F3C719B55C5 [libj9vm29.so+0x8f5c5])
23:59:16 (0x00007F3C71A60982 [libj9vm29.so+0x13a982])
23:59:16 ---------------------------------------
``` | 1.0 | jtreg jdk/internal/loader/ClassLoaderValue/ClassLoaderValueTest.java segfault vmState=0x00000000 - https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.openjdk_x86-64_linux_Nightly/350
jdk_lang_0
JVM_OPTIONS: -Xdump:system:none -Xdump:heap:none -Xdump:system:events=gpf+abort+traceassert+corruptcache -XX:-JITServerTechPreviewMessage -XX:+UseCompressedOops
jdk/internal/loader/ClassLoaderValue/ClassLoaderValueTest.java
https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Test/Test_openjdk11_j9_sanity.openjdk_x86-64_linux_Nightly/350/openjdk_test_output.tar.gz
```
23:59:16 stderr:
23:59:16 Unhandled exception
23:59:16 Type=Segmentation error vmState=0x00000000
23:59:16 Unhandled exception
23:59:16 Type=Segmentation error vmState=0x00000000
23:59:16 J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
23:59:16 Handler1=00007F3C719645B0 Handler2=00007F3C716C3A70 InaccessibleAddress=0000000000000010
23:59:16 J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
23:59:16 Handler1=00007F3C719645B0 Handler2=00007F3C716C3A70 InaccessibleAddress=0000000000000010
23:59:16 RDI=0000000000392800 RSI=000000000000000C RAX=00007F3C6C026FF0 RBX=0000000000000000
23:59:16 RCX=0000000000000000 RDX=000000000000000C R8=0000000000000000 R9=0000000000000498
23:59:16 R10=0000000000172000 R11=00007F3C19708A84 R12=00007F3C1970A92E R13=0000000000000000
23:59:16 R14=0000000000392800 R15=00007F3C486899D0
23:59:16 RDI=0000000000335500 RSI=000000000000000C RAX=00007F3C6C026FF0 RBX=0000000000000000
23:59:16 RCX=0000000000000000 RDX=000000000000000C R8=0000000000000000 R9=0000000000000498
23:59:16 R10=0000000000172000 R11=00007F3C719626D0 R12=00007F3C1970A92E R13=0000000000000000
23:59:16 R14=0000000000335500 R15=00007F3C4833F9D0
23:59:16 RIP=00007F3C719B5DA7 GS=0000 FS=0000 RSP=00007F3C48689650
23:59:16 EFlags=0000000000010246 CS=0033 RBP=00007F3C71AE9BA0 ERR=0000000000000004
23:59:16 TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=0000000000000010
23:59:16 RIP=00007F3C719B5DA7 GS=0000 FS=0000 RSP=00007F3C4833F650
23:59:16 EFlags=0000000000010246 CS=0033 RBP=00007F3C71AE9BA0 ERR=0000000000000004
23:59:16 TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=0000000000000010
23:59:16 xmm0 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm1 6f7268636e795364 (f: 1853444992.000000, d: 6.977146e+228)
23:59:16 xmm2 00007f3c48689a00 (f: 1214814720.000000, d: 6.911825e-310)
23:59:16 xmm3 00007f3c48689a00 (f: 1214814720.000000, d: 6.911825e-310)
23:59:16 xmm4 00007f3c48689a00 (f: 1214814720.000000, d: 6.911825e-310)
23:59:16 xmm5 000000003f82002d (f: 1065484352.000000, d: 5.264192e-315)
23:59:16 xmm6 00000000429dafa1 (f: 1117630336.000000, d: 5.521828e-315)
23:59:16 xmm7 00000000429dafa1 (f: 1117630336.000000, d: 5.521828e-315)
23:59:16 xmm8 74752f6176616a4c (f: 1986095744.000000, d: 9.707480e+252)
23:59:16 xmm9 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm10 00000000ffff0000 (f: 4294901760.000000, d: 2.121963e-314)
23:59:16 xmm11 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm12 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm13 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm14 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm15 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm0 00000000002930f0 (f: 2699504.000000, d: 1.333732e-317)
23:59:16 xmm1 00000000ffc3f2d0 (f: 4291031808.000000, d: 2.120051e-314)
23:59:16 xmm2 00007f3c4833fa00 (f: 1211365888.000000, d: 6.911825e-310)
23:59:16 xmm3 00007f3c4833fa00 (f: 1211365888.000000, d: 6.911825e-310)
23:59:16 xmm4 00007f3c4833fa00 (f: 1211365888.000000, d: 6.911825e-310)
23:59:16 xmm5 000000003fc49838 (f: 1069848640.000000, d: 5.285755e-315)
23:59:16 xmm6 00000000429dafa1 (f: 1117630336.000000, d: 5.521828e-315)
23:59:16 xmm7 00000000429dafa1 (f: 1117630336.000000, d: 5.521828e-315)
23:59:16 xmm8 74752f6176616a4c (f: 1986095744.000000, d: 9.707480e+252)
23:59:16 xmm9 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm10 00000000ffff0000 (f: 4294901760.000000, d: 2.121963e-314)
23:59:16 xmm11 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm12 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm13 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm14 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 xmm15 0000000000000000 (f: 0.000000, d: 0.000000e+00)
23:59:16 Module=/home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/lib/default/libj9vm29.so
23:59:16 Module_base_address=00007F3C71926000
23:59:16 Target=2_90_20210425_687 (Linux 4.15.0-96-generic)
23:59:16 CPU=amd64 (4 logical CPUs) (0x5e2815000 RAM)
23:59:16 ----------- Stack Backtrace -----------
23:59:16 Module=/home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/lib/default/libj9vm29.so
23:59:16 Module_base_address=00007F3C71926000
23:59:16 Target=2_90_20210425_687 (Linux 4.15.0-96-generic)
23:59:16 CPU=amd64 (4 logical CPUs) (0x5e2815000 RAM)
23:59:16 ----------- Stack Backtrace -----------
23:59:16 (0x00007F3C719B5DA7 [libj9vm29.so+0x8fda7])
23:59:16 (0x00007F3C719B55C5 [libj9vm29.so+0x8f5c5])
23:59:16 (0x00007F3C71A60982 [libj9vm29.so+0x13a982])
23:59:16 ---------------------------------------
``` | non_comp | jtreg jdk internal loader classloadervalue classloadervaluetest java segfault vmstate jdk lang jvm options xdump system none xdump heap none xdump system events gpf abort traceassert corruptcache xx jitservertechpreviewmessage xx usecompressedoops jdk internal loader classloadervalue classloadervaluetest java stderr unhandled exception type segmentation error vmstate unhandled exception type segmentation error vmstate signal number signal number error value signal code inaccessibleaddress signal number signal number error value signal code inaccessibleaddress rdi rsi rax rbx rcx rdx rdi rsi rax rbx rcx rdx rip gs fs rsp eflags cs rbp err trapno oldmask rip gs fs rsp eflags cs rbp err trapno oldmask f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d f d module home jenkins workspace test sanity openjdk linux nightly openjdkbinary image lib default so module base address target linux generic cpu logical cpus ram stack backtrace module home jenkins workspace test sanity openjdk linux nightly openjdkbinary image lib default so module base address target linux generic cpu logical cpus ram stack backtrace | 0 |
538,726 | 15,776,691,013 | IssuesEvent | 2021-04-01 05:09:07 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | JSON Schema validation for API calls not working as expected | API-M 4.0.0 Feature/RuntimeSchemaValidation Priority/Highest Type/Bug | Tried sending a request to an API with a request body containing an extra key-value pair and an invalid datatype for a key. The tryout menu returns '500: undocumented error' and the cURL command returns nothing. | 1.0 | JSON Schema validation for API calls not working as expected - Tried sending a request to an API with a request body containing an extra key-value pair and an invalid datatype for a key. The tryout menu returns '500: undocumented error' and the cURL command returns nothing. | non_comp | json schema validation for api calls not working as expected tried sending a request to an api with a request body containing an extra key value pair and an invalid datatype for a key the tryout menu returns undocumented error and the curl command returns nothing | 0 |
4,623 | 7,261,927,433 | IssuesEvent | 2018-02-19 01:38:08 | tex-xet/xepersian | https://api.github.com/repos/tex-xet/xepersian | closed | The problem in simultaneous use two packs xepersian and easylist | Compatibility Normal Question Wontfix | <!---
!! Please fill out all sections !!
-->
## Brief outline of the issue
The following file is executed without calling the `xepersian` package (or without the `\NewList` command) without error. But now it makes a error.
## Minimal example showing the issue
```tex
\documentclass{article}
\usepackage[at]{easylist}
\usepackage{xepersian}
\settextfont{Yas}
\begin{document}
\begin{easylist}\NewList
@
\end{easylist}
\end{document}
```
## Log and PDF files
[1.log](https://github.com/tex-xet/xepersian/files/1735222/1.log)
<!---
!! Use drag-and-drop !!
-->
| True | The problem in simultaneous use two packs xepersian and easylist - <!---
!! Please fill out all sections !!
-->
## Brief outline of the issue
The following file is executed without calling the `xepersian` package (or without the `\NewList` command) without error. But now it makes a error.
## Minimal example showing the issue
```tex
\documentclass{article}
\usepackage[at]{easylist}
\usepackage{xepersian}
\settextfont{Yas}
\begin{document}
\begin{easylist}\NewList
@
\end{easylist}
\end{document}
```
## Log and PDF files
[1.log](https://github.com/tex-xet/xepersian/files/1735222/1.log)
<!---
!! Use drag-and-drop !!
-->
| comp | the problem in simultaneous use two packs xepersian and easylist please fill out all sections brief outline of the issue the following file is executed without calling the xepersian package or without the newlist command without error but now it makes a error minimal example showing the issue tex documentclass article usepackage easylist usepackage xepersian settextfont yas begin document begin easylist newlist end easylist end document log and pdf files use drag and drop | 1 |
282,931 | 21,315,992,524 | IssuesEvent | 2022-04-16 09:29:28 | LDerpy/pe | https://api.github.com/repos/LDerpy/pe | opened | Missing description of return values in sequence diagrams | severity.Low type.DocumentationBug | 
An example seen above is that the user may not know what it being returned on lines 3 and 5. This issue can be seen in other parts of the DG.
<!--session: 1650096759619-d1a653dc-2f93-4fae-b0b1-29a7bd9483a9-->
<!--Version: Web v3.4.2--> | 1.0 | Missing description of return values in sequence diagrams - 
An example seen above is that the user may not know what it being returned on lines 3 and 5. This issue can be seen in other parts of the DG.
<!--session: 1650096759619-d1a653dc-2f93-4fae-b0b1-29a7bd9483a9-->
<!--Version: Web v3.4.2--> | non_comp | missing description of return values in sequence diagrams an example seen above is that the user may not know what it being returned on lines and this issue can be seen in other parts of the dg | 0 |
66,485 | 8,025,965,055 | IssuesEvent | 2018-07-27 00:54:40 | GiovineItalia/Gadfly.jl | https://api.github.com/repos/GiovineItalia/Gadfly.jl | closed | Use DocStringExtensions in Gadfly | design-discussion | [`DocStringExtensions.jl`](https://github.com/JuliaDocs/DocStringExtensions.jl) is a really nice self-contained package that allows us to have much more flexible and dynamic docstrings. I'm very wary of adding more dependencies to Gadfly, but this package would add no subdependencies (it only depends on `Compat` and `Base`). A couple examples of where we can leverage it:
1) By using the automatically generated function signatures in the docstrings (so they don't get out sync) with the actual functions
2) By unrolling the fields in a type (see https://github.com/GiovineItalia/Gadfly.jl/pull/1157/files#diff-9ec506bf78232ae17d082c22c2e66449R507). Currently, all those docs are hidden when you do
`?Gadfly.Stat.density`, but could be unrolled with `DocStringExtension.jl`
Thoughts? | 1.0 | Use DocStringExtensions in Gadfly - [`DocStringExtensions.jl`](https://github.com/JuliaDocs/DocStringExtensions.jl) is a really nice self-contained package that allows us to have much more flexible and dynamic docstrings. I'm very wary of adding more dependencies to Gadfly, but this package would add no subdependencies (it only depends on `Compat` and `Base`). A couple examples of where we can leverage it:
1) By using the automatically generated function signatures in the docstrings (so they don't get out sync) with the actual functions
2) By unrolling the fields in a type (see https://github.com/GiovineItalia/Gadfly.jl/pull/1157/files#diff-9ec506bf78232ae17d082c22c2e66449R507). Currently, all those docs are hidden when you do
`?Gadfly.Stat.density`, but could be unrolled with `DocStringExtension.jl`
Thoughts? | non_comp | use docstringextensions in gadfly is a really nice self contained package that allows us to have much more flexible and dynamic docstrings i m very wary of adding more dependencies to gadfly but this package would add no subdependencies it only depends on compat and base a couple examples of where we can leverage it by using the automatically generated function signatures in the docstrings so they don t get out sync with the actual functions by unrolling the fields in a type see currently all those docs are hidden when you do gadfly stat density but could be unrolled with docstringextension jl thoughts | 0 |
14,181 | 17,064,593,808 | IssuesEvent | 2021-07-07 05:04:54 | EwyBoy/OreTweaker | https://api.github.com/repos/EwyBoy/OreTweaker | opened | [Compatibility] Immersive Engineering | Compatibility Request | **Name of mod you want compatibility with?**
Mod name. Immersive Engineering
**Is your compatibility request related to a problem?**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
There form of Copper and Silver ore is not being removed from world generation when using this mods ability to remove said ore.
**Describe the solution you'd like**
Would like to be able to use your mods ability to remove Immersive Engineering ore generation from the world
**Additional context**


| True | [Compatibility] Immersive Engineering - **Name of mod you want compatibility with?**
Mod name. Immersive Engineering
**Is your compatibility request related to a problem?**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
There form of Copper and Silver ore is not being removed from world generation when using this mods ability to remove said ore.
**Describe the solution you'd like**
Would like to be able to use your mods ability to remove Immersive Engineering ore generation from the world
**Additional context**


| comp | immersive engineering name of mod you want compatibility with mod name immersive engineering is your compatibility request related to a problem a clear and concise description of what the problem is ex i m always frustrated when there form of copper and silver ore is not being removed from world generation when using this mods ability to remove said ore describe the solution you d like would like to be able to use your mods ability to remove immersive engineering ore generation from the world additional context | 1 |
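The rows above form a table of GitHub `IssuesEvent` records, each carrying a string label (`comp` / `non_comp`) and a matching `binary_label` (1 / 0), apparently for training a compatibility-issue classifier. As a minimal sketch of working with such rows — the file name and exact storage format are unknown, so this builds a tiny in-memory miniature with the same assumed columns instead of reading the real file, and copies its values from two of the rows shown:

```python
from collections import Counter

# Hypothetical miniature of the table above; the column set is assumed from
# the preview (id, repo, labels, label, binary_label) and the values are
# copied from two of the displayed rows.
rows = [
    {"id": 28044934660, "repo": "dockstore/dockstore",
     "labels": "bug web-service review", "label": "non_comp", "binary_label": 0},
    {"id": 2579221797, "repo": "tshemsedinov/impress",
     "labels": "bug compatibility", "label": "comp", "binary_label": 1},
]

# Sanity check: the string label and the binary label should always agree.
assert all((r["label"] == "comp") == (r["binary_label"] == 1) for r in rows)

# Class balance -- the first thing to inspect before training a classifier.
counts = Counter(r["binary_label"] for r in rows)
print(dict(counts))  # {0: 1, 1: 1}
```

The same label-consistency check scales to the full table and would catch any row where `label` and `binary_label` disagree before it reaches a training run.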