Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
202,979 | 15,863,614,458 | IssuesEvent | 2021-04-08 12:59:09 | matteobruni/tsparticles | https://api.github.com/repos/matteobruni/tsparticles | closed | Particles splitting on bounce | Core documentation enhancement feature_request good first issue help wanted no-issue-activity up-for-grabs | A nice effect to add could be particles splitting into smaller particles when bouncing (not by default, it should be an option).
Aside from enabling or disabling the effect, also the split size and number should be configurable something like:
```javascript
split: {
  enable: true, // or false
  size: 50,
  count: 10
}
``` | 1.0 | Particles splitting on bounce - A nice effect to add could be particles splitting into smaller particles when bouncing (not by default, it should be an option).
Aside from enabling or disabling the effect, also the split size and number should be configurable something like:
```javascript
split: {
  enable: true, // or false
  size: 50,
  count: 10
}
``` | non_defect | particles splitting on bounce a nice effect to add could be particles splitting into smaller particles when bouncing not by default it should be an option aside from enabling or disabling the effect also the split size and number should be configurable something like javascript split enable true or false size count | 0 |
423,554 | 28,633,942,092 | IssuesEvent | 2023-04-25 00:16:37 | envoyproxy/gateway | https://api.github.com/repos/envoyproxy/gateway | closed | Update Compatibility Matrix | documentation no stalebot | Update the compatibility matrix with the required release details. Instead of closing this issue, it should be carried over from release-to-release.
 | 1.0 | Update Compatibility Matrix - Update the compatibility matrix with the required release details. Instead of closing this issue, it should be carried over from release-to-release.
 | non_defect | update compatibility matrix update the compatibility matrix with the required release details instead of closing this issue it should be carried over from release to release | 0 |
302,657 | 26,158,745,586 | IssuesEvent | 2022-12-31 06:31:01 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: schemachange/mixed-versions failed | C-test-failure O-robot O-roachtest release-blocker branch-release-22.2 | roachtest.schemachange/mixed-versions [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147282?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147282?buildTab=artifacts#/schemachange/mixed-versions) on release-22.2 @ [07a53a36601e9ca5fcffcff55f69b43c6dfbf1c1](https://github.com/cockroachdb/cockroach/commits/07a53a36601e9ca5fcffcff55f69b43c6dfbf1c1):
```
test artifacts and logs in: /artifacts/schemachange/mixed-versions/run_1
(test_impl.go:286).Fatal: output in run_063035.259016239_n2_workload_run_schemachange: ./workload run schemachange --verbose=1 --max-ops 100 --concurrency 5 {pgurl:1-4} returned: COMMAND_PROBLEM: exit status 1
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #91594 roachtest: schemachange/mixed-versions-compat failed [C-test-failure O-roachtest O-robot T-sql-schema branch-master release-blocker]
- #91350 roachtest: schemachange/mixed-versions-compat failed [C-test-failure O-roachtest O-robot T-sql-schema]
</p>
</details>
/cc @cockroachdb/sql-schema
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*schemachange/mixed-versions.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: schemachange/mixed-versions failed - roachtest.schemachange/mixed-versions [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147282?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147282?buildTab=artifacts#/schemachange/mixed-versions) on release-22.2 @ [07a53a36601e9ca5fcffcff55f69b43c6dfbf1c1](https://github.com/cockroachdb/cockroach/commits/07a53a36601e9ca5fcffcff55f69b43c6dfbf1c1):
```
test artifacts and logs in: /artifacts/schemachange/mixed-versions/run_1
(test_impl.go:286).Fatal: output in run_063035.259016239_n2_workload_run_schemachange: ./workload run schemachange --verbose=1 --max-ops 100 --concurrency 5 {pgurl:1-4} returned: COMMAND_PROBLEM: exit status 1
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #91594 roachtest: schemachange/mixed-versions-compat failed [C-test-failure O-roachtest O-robot T-sql-schema branch-master release-blocker]
- #91350 roachtest: schemachange/mixed-versions-compat failed [C-test-failure O-roachtest O-robot T-sql-schema]
</p>
</details>
/cc @cockroachdb/sql-schema
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*schemachange/mixed-versions.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_defect | roachtest schemachange mixed versions failed roachtest schemachange mixed versions with on release test artifacts and logs in artifacts schemachange mixed versions run test impl go fatal output in run workload run schemachange workload run schemachange verbose max ops concurrency pgurl returned command problem exit status parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see same failure on other branches roachtest schemachange mixed versions compat failed roachtest schemachange mixed versions compat failed cc cockroachdb sql schema | 0 |
772,598 | 27,128,000,953 | IssuesEvent | 2023-02-16 07:30:47 | rpm-software-management/dnf5 | https://api.github.com/repos/rpm-software-management/dnf5 | reopened | Add a download method for Transaction class | Priority: MEDIUM | The new method should simplify the workflow of DNF5 when transaction is created.
Basically to replace following code
```
# Add the inbound packages (packages that are being installed on the system)
# to the downloader.
for tspkg in transaction.get_transaction_packages():
    if libdnf5.base.transaction.transaction_item_action_is_inbound(tspkg.get_action()):
        downloader.add(tspkg.get_package())
# Download the packages.
#
# The first argument is `fail_fast`, meaning the download will fail right away
# on a first package download failure. The second argument is `resume`, if
# `true`, the downloader will try to resume downloads of any partially
# downloaded RPMs.
downloader.download(True, True)
```
By
```
transaction.download()
```
- [x] add the method to Transaction Class
- [ ] Modify tutorials to use the method | 1.0 | Add a download method for Transaction class - The new method should simplify the workflow of DNF5 when transaction is created.
Basically to replace following code
```
# Add the inbound packages (packages that are being installed on the system)
# to the downloader.
for tspkg in transaction.get_transaction_packages():
    if libdnf5.base.transaction.transaction_item_action_is_inbound(tspkg.get_action()):
        downloader.add(tspkg.get_package())
# Download the packages.
#
# The first argument is `fail_fast`, meaning the download will fail right away
# on a first package download failure. The second argument is `resume`, if
# `true`, the downloader will try to resume downloads of any partially
# downloaded RPMs.
downloader.download(True, True)
```
By
```
transaction.download()
```
- [x] add the method to Transaction Class
- [ ] Modify tutorials to use the method | non_defect | add a download method for transaction class the new method should simplify the workflow of when transaction is created basically to replace following code add the inbound packages packages that are being installed on the system to the downloader for tspkg in transaction get transaction packages if base transaction transaction item action is inbound tspkg get action downloader add tspkg get package download the packages the first argument is fail fast meaning the download will fail right away on a first package download failure the second argument is resume if true the downloader will try to resume downloads of any partially downloaded rpms downloader download true true by transaction download add the method to transaction class modify tutorials to use the method | 0 |
76,670 | 26,545,639,152 | IssuesEvent | 2023-01-19 23:47:13 | google/google-id-token | https://api.github.com/repos/google/google-id-token | closed | Need ability to specify public certs URL for ID Token verification | Priority-Medium Type-Defect auto-migrated | ```
See:
https://code.google.com/p/google-id-token/source/browse/lib/google-id-token.rb#3
Need an ability to customize that URL.
```
Original issue reported on code.google.com by `yan...@google.com` on 24 Mar 2013 at 11:26
| 1.0 | Need ability to specify public certs URL for ID Token verification - ```
See:
https://code.google.com/p/google-id-token/source/browse/lib/google-id-token.rb#3
Need an ability to customize that URL.
```
Original issue reported on code.google.com by `yan...@google.com` on 24 Mar 2013 at 11:26
| defect | need ability to specify public certs url for id token verification see need an ability to customize that url original issue reported on code google com by yan google com on mar at | 1 |
29,504 | 5,705,983,587 | IssuesEvent | 2017-04-18 09:57:44 | contao/core | https://api.github.com/repos/contao/core | closed | inconsistent cache key generation | defect | There is an inconsistency with the cache key generation and usage. When a page is generated for the page cache, `FrontendTemplate::addToCache` _always_ uses the __current host__ for the cache key:
[system/modules/core/classes/FrontendTemplate.php#L216-L220](https://github.com/contao/core/blob/3.5.25/system/modules/core/classes/FrontendTemplate.php#L216-L220)
```php
$strCacheKey = \Environment::get('host') . '/empty.' . $objPage->language;
…
$strCacheKey = \Environment::get('host') . '/' . \Environment::get('request');
```
However, `Automator::generateConfigCache` _always_ uses either the _DNS setting_ from the page object or `*`:
[system/modules/core/library/Contao/Automator.php#L539](https://github.com/contao/core/blob/3.5.25/system/modules/core/library/Contao/Automator.php#L539)
```php
$strBase = ($objPages->dns ?: '*');
```
So when the internal cache is active, the resulting `system/cache/config/mapping.php` will look like this for example:
```php
<?php
return array (
'*/empty.fallback' => '*/empty.de',
'*/empty.de' => '*/empty.de',
);
```
But the cache file was generated with `example.org/empty.de` as its cache key.
This then leads to the start page _never_ being served from the page cache, if the internal cache is built and no DNS setting was present in the website root. | 1.0 | inconsistent cache key generation - There is an inconsistency with the cache key generation and usage. When a page is generated for the page cache, `FrontendTemplate::addToCache` _always_ uses the __current host__ for the cache key:
[system/modules/core/classes/FrontendTemplate.php#L216-L220](https://github.com/contao/core/blob/3.5.25/system/modules/core/classes/FrontendTemplate.php#L216-L220)
```php
$strCacheKey = \Environment::get('host') . '/empty.' . $objPage->language;
…
$strCacheKey = \Environment::get('host') . '/' . \Environment::get('request');
```
However, `Automator::generateConfigCache` _always_ uses either the _DNS setting_ from the page object or `*`:
[system/modules/core/library/Contao/Automator.php#L539](https://github.com/contao/core/blob/3.5.25/system/modules/core/library/Contao/Automator.php#L539)
```php
$strBase = ($objPages->dns ?: '*');
```
So when the internal cache is active, the resulting `system/cache/config/mapping.php` will look like this for example:
```php
<?php
return array (
'*/empty.fallback' => '*/empty.de',
'*/empty.de' => '*/empty.de',
);
```
But the cache file was generated with `example.org/empty.de` as its cache key.
This then leads to the start page _never_ being served from the page cache, if the internal cache is built and no DNS setting was present in the website root. | defect | inconsistent cache key generation there is an inconsistency with the cache key generation and usage when a page is generated for the page cache frontendtemplate addtocache always uses the current host for the cache key php strcachekey environment get host empty objpage language … strcachekey environment get host environment get request however automator generateconfigcache always uses either the dns setting from the page object or php strbase objpages dns so when the internal cache is active the resulting system cache config mapping php will look like this for example php php return array empty fallback empty de empty de empty de but the cache file was generated with example org empty de as its cache key this then leads to the start page never being served from the page cache if the internal cache is built and no dns setting was present in the website root | 1 |
53,706 | 13,262,122,803 | IssuesEvent | 2020-08-20 21:08:50 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [cmake] python3 and the shebang (Trac #1912) | Migrated from Trac cmake defect | If you build cmake with a python binary that isn't called `python`, such as `python3`, then `icetray-inspect` fails. Likely `dataio-pyshovel` and other python scripts fail too.
I guess the solution is to modify the shebang of all python executables as a cmake step? Something like:
https://github.com/ros/catkin/pull/574/files
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1912">https://code.icecube.wisc.edu/projects/icecube/ticket/1912</a>, reported by david.schultzand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-09-18T05:50:20",
"_ts": "1568785820929050",
"description": "If you build cmake with a python binary that isn't called `python`, such as `python3`, then `icetray-inspect` fails. Likely `dataio-pyshovel` and other python scripts fail too.\n\nI guess the solution is to modify the shebang of all python executables as a cmake step? Something like:\nhttps://github.com/ros/catkin/pull/574/files",
"reporter": "david.schultz",
"cc": "kjmeagher",
"resolution": "worksforme",
"time": "2016-11-17T22:54:14",
"component": "cmake",
"summary": "[cmake] python3 and the shebang",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [cmake] python3 and the shebang (Trac #1912) - If you build cmake with a python binary that isn't called `python`, such as `python3`, then `icetray-inspect` fails. Likely `dataio-pyshovel` and other python scripts fail too.
I guess the solution is to modify the shebang of all python executables as a cmake step? Something like:
https://github.com/ros/catkin/pull/574/files
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1912">https://code.icecube.wisc.edu/projects/icecube/ticket/1912</a>, reported by david.schultzand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-09-18T05:50:20",
"_ts": "1568785820929050",
"description": "If you build cmake with a python binary that isn't called `python`, such as `python3`, then `icetray-inspect` fails. Likely `dataio-pyshovel` and other python scripts fail too.\n\nI guess the solution is to modify the shebang of all python executables as a cmake step? Something like:\nhttps://github.com/ros/catkin/pull/574/files",
"reporter": "david.schultz",
"cc": "kjmeagher",
"resolution": "worksforme",
"time": "2016-11-17T22:54:14",
"component": "cmake",
"summary": "[cmake] python3 and the shebang",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | and the shebang trac if you build cmake with a python binary that isn t called python such as then icetray inspect fails likely dataio pyshovel and other python scripts fail too i guess the solution is to modify the shebang of all python executables as a cmake step something like migrated from json status closed changetime ts description if you build cmake with a python binary that isn t called python such as then icetray inspect fails likely dataio pyshovel and other python scripts fail too n ni guess the solution is to modify the shebang of all python executables as a cmake step something like n reporter david schultz cc kjmeagher resolution worksforme time component cmake summary and the shebang priority normal keywords milestone long term future owner nega type defect | 1 |
206,057 | 7,108,254,812 | IssuesEvent | 2018-01-16 23:08:25 | TylerConlee/slab | https://api.github.com/repos/TylerConlee/slab | opened | State or diagnostic command | enhancement priority:high | Add @slab state, which DMs the user a list of the Sent notifications slice | 1.0 | State or diagnostic command - Add @slab state, which DMs the user a list of the Sent notifications slice | non_defect | state or diagnostic command add slab state which dms the user a list of the sent notifications slice | 0 |
40,824 | 10,583,043,295 | IssuesEvent | 2019-10-08 12:57:07 | ocaml/opam | https://api.github.com/repos/ocaml/opam | closed | Solaris 10 patch command doesn't get file to patch | AREA: BUILD AREA: PORTABILITY | After editing
opam-full-1.2.2-rc2/src_ext/Makefile
to remove suppression of recipe echoing:
...
if [ -d patches/cmdliner ]; then \
  cd cmdliner && \
  for p in ../patches/cmdliner/*.patch; do \
    patch -p1 < $p; \
  done; \
fi
Looks like a unified context diff.
File to patch:
That is, the patch command prompts the user.
opam-full-1.2.2-rc2/src_ext/patches/cmdliner/backport_pre_4_00_0.patch
diff -Naur cmdliner-0.9.7/src/cmdliner.ml cmdliner-0.9.7.patched/src/cmdliner.ml
--- cmdliner-0.9.7/src/cmdliner.ml 2015-02-06 11:33:44.000000000 +0100
+++ cmdliner-0.9.7.patched/src/cmdliner.ml 2015-02-18 23:04:04.000000000 +0100
...
See the man page for the Solaris 10 patch command.
http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9n6/index.html
In particular, we are interested in the "File Name Determination" section of that document.
If no file operand is specified, patch performs the following steps to obtain a path name:
If the patch contains the strings *** and ---, patch strips components from the beginning of each path name (depending on the presence or value of the -p option), then tests for the existence of both files in the current directory ...
src/cmdliner.ml
src/cmdliner.ml
"Both" files exist.
If both files exist, patch assumes that no path name can be obtained from this step ...
If no path name can be obtained by applying the previous steps, ... patch will write a prompt to standard output and request a file name interactively from standard input.
One possible solution is for the makefile to read the patch file, extracting the path name using the Linux patch command algorithm. Then feed that path name to the patch command explicitly.
Alan Feldstein
Cosmic Horizon
http://www.alanfeldstein.com
| 1.0 | Solaris 10 patch command doesn't get file to patch - After editing
opam-full-1.2.2-rc2/src_ext/Makefile
to remove suppression of recipe echoing:
...
if [ -d patches/cmdliner ]; then \
  cd cmdliner && \
  for p in ../patches/cmdliner/*.patch; do \
    patch -p1 < $p; \
  done; \
fi
Looks like a unified context diff.
File to patch:
That is, the patch command prompts the user.
opam-full-1.2.2-rc2/src_ext/patches/cmdliner/backport_pre_4_00_0.patch
diff -Naur cmdliner-0.9.7/src/cmdliner.ml cmdliner-0.9.7.patched/src/cmdliner.ml
--- cmdliner-0.9.7/src/cmdliner.ml 2015-02-06 11:33:44.000000000 +0100
+++ cmdliner-0.9.7.patched/src/cmdliner.ml 2015-02-18 23:04:04.000000000 +0100
...
See the man page for the Solaris 10 patch command.
http://docs.oracle.com/cd/E19253-01/816-5165/6mbb0m9n6/index.html
In particular, we are interested in the "File Name Determination" section of that document.
If no file operand is specified, patch performs the following steps to obtain a path name:
If the patch contains the strings *** and ---, patch strips components from the beginning of each path name (depending on the presence or value of the -p option), then tests for the existence of both files in the current directory ...
src/cmdliner.ml
src/cmdliner.ml
"Both" files exist.
If both files exist, patch assumes that no path name can be obtained from this step ...
If no path name can be obtained by applying the previous steps, ... patch will write a prompt to standard output and request a file name interactively from standard input.
One possible solution is for the makefile to read the patch file, extracting the path name using the Linux patch command algorithm. Then feed that path name to the patch command explicitly.
Alan Feldstein
Cosmic Horizon
http://www.alanfeldstein.com
| non_defect | solaris patch command doesn t get file to patch after editing opam full src ext makefile to remove suppression of recipe echoing if then cd cmdliner for p in patches cmdliner patch do patch p done fi looks like a unified context diff file to patch that is the patch command prompts the user opam full src ext patches cmdliner backport pre patch diff naur cmdliner src cmdliner ml cmdliner patched src cmdliner ml cmdliner src cmdliner ml cmdliner patched src cmdliner ml see the man page for the solaris patch command in particular we are interested in the file name determination section of that document if no file operand is specified patch performs the following steps to obtain a path name if the patch contains the strings and patch strips components from the beginning of each path name depending on the presence or value of the p option then tests for the existence of both files in the current directory src cmdliner ml src cmdliner ml both files exist if both files exist patch assumes that no path name can be obtained from this step if no path name can be obtained by applying the previous steps patch will write a prompt to standard output and request a file name interactively from standard input one possible solution is for the makefile to read the patch file extracting the path name using the linux patch command algorithm then feed that path name to the patch command explicitly alan feldstein cosmic horizon | 0 |
230,966 | 25,482,845,296 | IssuesEvent | 2022-11-26 01:42:56 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | reopened | CVE-2016-3955 (High) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2016-3955 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/usbip/usbip_common.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/usbip/usbip_common.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The usbip_recv_xbuff function in drivers/usb/usbip/usbip_common.c in the Linux kernel before 4.5.3 allows remote attackers to cause a denial of service (out-of-bounds write) or possibly have unspecified other impact via a crafted length value in a USB/IP packet.
<p>Publish Date: 2016-07-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-3955>CVE-2016-3955</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-3955">https://nvd.nist.gov/vuln/detail/CVE-2016-3955</a></p>
<p>Release Date: 2016-07-03</p>
<p>Fix Resolution: 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-3955 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2016-3955 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/usbip/usbip_common.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/usbip/usbip_common.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The usbip_recv_xbuff function in drivers/usb/usbip/usbip_common.c in the Linux kernel before 4.5.3 allows remote attackers to cause a denial of service (out-of-bounds write) or possibly have unspecified other impact via a crafted length value in a USB/IP packet.
<p>Publish Date: 2016-07-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-3955>CVE-2016-3955</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-3955">https://nvd.nist.gov/vuln/detail/CVE-2016-3955</a></p>
<p>Release Date: 2016-07-03</p>
<p>Fix Resolution: 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers usb usbip usbip common c drivers usb usbip usbip common c vulnerability details the usbip recv xbuff function in drivers usb usbip usbip common c in the linux kernel before allows remote attackers to cause a denial of service out of bounds write or possibly have unspecified other impact via a crafted length value in a usb ip packet publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
33,322 | 2,763,952,425 | IssuesEvent | 2015-04-29 13:04:25 | kromkrom/wordcontrol | https://api.github.com/repos/kromkrom/wordcontrol | closed | Filter syntcats to only available in a language | 0_task priority:+1 state:work_in_progress task:bug | При настройке парсинга csv-файла при сопоставлении частей речи нужно проверять, существует ли данная часть речи в языке | 1.0 | Filter syntcats to only available in a language - При настройке парсинга csv-файла при сопоставлении частей речи нужно проверять, существует ли данная часть речи в языке | non_defect | filter syntcats to only available in a language при настройке парсинга csv файла при сопоставлении частей речи нужно проверять существует ли данная часть речи в языке | 0 |
55,084 | 14,177,549,732 | IssuesEvent | 2020-11-13 02:25:35 | networkx/networkx | https://api.github.com/repos/networkx/networkx | closed | Update plot_antigraph.py example to remove `_iter` in method name. | Defect | `def adjacency_iter(self)` should be `def adjacency(self)`
There may be other places (especially in the examples) where we've missed an ```_iter``` update. | 1.0 | Update plot_antigraph.py example to remove `_iter` in method name. - `def adjacency_iter(self)` should be `def adjacency(self)`
There may be other places (especially in the examples) where we've missed an ```_iter``` update. | defect | update plot antigraph py example to remove iter in method name def adjacency iter self should be def adjacency self there may be other places especially in the examples where we ve missed an iter update | 1 |
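The `_iter` rename family mentioned in this record (NetworkX 2.x dropped the suffix, e.g. `adjacency_iter` becoming `adjacency`) is commonly handled with a deprecation shim so stale call sites still run while warning. A minimal pure-Python sketch of that pattern (illustrative only, not NetworkX's actual implementation):

```python
import warnings

class Graph:
    """Toy stand-in for a class whose iterator methods dropped the `_iter` suffix."""

    def __init__(self, adj):
        self._adj = adj  # mapping: node -> dict of neighbors

    def adjacency(self):
        # New-style name: an iterator of (node, neighbor-dict) pairs.
        return iter(self._adj.items())

    def adjacency_iter(self):
        # Old-style name kept as a warning shim that delegates to the new one.
        warnings.warn("adjacency_iter is deprecated; use adjacency()",
                      DeprecationWarning, stacklevel=2)
        return self.adjacency()

g = Graph({"a": {"b": {}}, "b": {"a": {}}})
new_style = dict(g.adjacency())
```

Old call sites keep working but surface a `DeprecationWarning`, which makes missed `_iter` updates like the ones the issue worries about easy to find by running with `-W error::DeprecationWarning`.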
74,003 | 24,899,964,089 | IssuesEvent | 2022-10-28 19:43:23 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | opened | [🐛 Bug]: Server Timeout? | I-defect needs-triaging | ### What happened?
Browser opens successfully, program execution does not continue, eventually browser closes and program crashes.
The browser instance works fine during manual operation but the application cannot connect.
### How can we reproduce the issue?
```shell
Unsure, I have ran into the issue on 2 separate machines, running ubuntu 20.04 and 22.04.
Both with clean dotnet6 templates. I reinstalled chromium through snap and apt, no effect.
```
### Relevant log output
```shell
/home/spy/RiderProjects/ConsoleApp1/ConsoleApp1/bin/Debug/net6.0/ConsoleApp1
Starting ChromeDriver 107.0.5304.62 (1eec40d3a5764881c92085aaee66d25075c159aa-refs/branch-heads/5304@{#942}) on port 33077
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
Unhandled exception. OpenQA.Selenium.WebDriverException: The HTTP request to the remote WebDriver server for URL http://localhost:33077/session timed out after 60 seconds.
---> System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 60 seconds elapsing.
---> System.TimeoutException: The operation was canceled.
---> System.Threading.Tasks.TaskCanceledException: The operation was canceled.
---> System.IO.IOException: Unable to read data from the transport connection: Operation canceled.
---> System.Net.Sockets.SocketException (125): Operation canceled
--- End of inner exception stack trace ---
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource<System.Int32>.GetResult(Int16 token)
at System.Net.Http.HttpConnection.InitialFillAsync(Boolean async)
at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at System.Net.Http.HttpClient.HandleFailure(Exception e, Boolean telemetryStarted, HttpResponseMessage response, CancellationTokenSource cts, CancellationToken cancellationToken, CancellationTokenSource pendingRequestsCts)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at OpenQA.Selenium.Remote.HttpCommandExecutor.MakeHttpRequest(HttpRequestInfo requestInfo)
at OpenQA.Selenium.Remote.HttpCommandExecutor.Execute(Command commandToExecute)
--- End of inner exception stack trace ---
at OpenQA.Selenium.Remote.HttpCommandExecutor.Execute(Command commandToExecute)
at OpenQA.Selenium.Remote.DriverServiceCommandExecutor.Execute(Command commandToExecute)
at OpenQA.Selenium.WebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters)
at OpenQA.Selenium.WebDriver.StartSession(ICapabilities desiredCapabilities)
at OpenQA.Selenium.WebDriver..ctor(ICommandExecutor executor, ICapabilities capabilities)
at OpenQA.Selenium.Chromium.ChromiumDriver..ctor(ChromiumDriverService service, ChromiumOptions options, TimeSpan commandTimeout)
at OpenQA.Selenium.Chrome.ChromeDriver..ctor(ChromeDriverService service, ChromeOptions options, TimeSpan commandTimeout)
at OpenQA.Selenium.Chrome.ChromeDriver..ctor(ChromeOptions options)
at OpenQA.Selenium.Chrome.ChromeDriver..ctor()
at Program.<Main>$(String[] args) in /home/spy/RiderProjects/ConsoleApp1/ConsoleApp1/Program.cs:line 2
```
### Operating System
Ubuntu 22.04
### Selenium version
C# 4.5.1
### What are the browser(s) and version(s) where you see this issue?
Chromium 106
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 106.0.5249.119
### Are you using Selenium Grid?
_No response_ | 1.0 | [🐛 Bug]: Server Timeout? - ### What happened?
Browser opens successfully, program execution does not continue, eventually browser closes and program crashes.
The browser instance works fine during manual operation but the application cannot connect.
### How can we reproduce the issue?
```shell
Unsure, I have ran into the issue on 2 separate machines, running ubuntu 20.04 and 22.04.
Both with clean dotnet6 templates. I reinstalled chromium through snap and apt, no effect.
```
### Relevant log output
```shell
/home/spy/RiderProjects/ConsoleApp1/ConsoleApp1/bin/Debug/net6.0/ConsoleApp1
Starting ChromeDriver 107.0.5304.62 (1eec40d3a5764881c92085aaee66d25075c159aa-refs/branch-heads/5304@{#942}) on port 33077
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
Unhandled exception. OpenQA.Selenium.WebDriverException: The HTTP request to the remote WebDriver server for URL http://localhost:33077/session timed out after 60 seconds.
---> System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 60 seconds elapsing.
---> System.TimeoutException: The operation was canceled.
---> System.Threading.Tasks.TaskCanceledException: The operation was canceled.
---> System.IO.IOException: Unable to read data from the transport connection: Operation canceled.
---> System.Net.Sockets.SocketException (125): Operation canceled
--- End of inner exception stack trace ---
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource<System.Int32>.GetResult(Int16 token)
at System.Net.Http.HttpConnection.InitialFillAsync(Boolean async)
at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at System.Net.Http.HttpClient.HandleFailure(Exception e, Boolean telemetryStarted, HttpResponseMessage response, CancellationTokenSource cts, CancellationToken cancellationToken, CancellationTokenSource pendingRequestsCts)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at OpenQA.Selenium.Remote.HttpCommandExecutor.MakeHttpRequest(HttpRequestInfo requestInfo)
at OpenQA.Selenium.Remote.HttpCommandExecutor.Execute(Command commandToExecute)
--- End of inner exception stack trace ---
at OpenQA.Selenium.Remote.HttpCommandExecutor.Execute(Command commandToExecute)
at OpenQA.Selenium.Remote.DriverServiceCommandExecutor.Execute(Command commandToExecute)
at OpenQA.Selenium.WebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters)
at OpenQA.Selenium.WebDriver.StartSession(ICapabilities desiredCapabilities)
at OpenQA.Selenium.WebDriver..ctor(ICommandExecutor executor, ICapabilities capabilities)
at OpenQA.Selenium.Chromium.ChromiumDriver..ctor(ChromiumDriverService service, ChromiumOptions options, TimeSpan commandTimeout)
at OpenQA.Selenium.Chrome.ChromeDriver..ctor(ChromeDriverService service, ChromeOptions options, TimeSpan commandTimeout)
at OpenQA.Selenium.Chrome.ChromeDriver..ctor(ChromeOptions options)
at OpenQA.Selenium.Chrome.ChromeDriver..ctor()
at Program.<Main>$(String[] args) in /home/spy/RiderProjects/ConsoleApp1/ConsoleApp1/Program.cs:line 2
```
### Operating System
Ubuntu 22.04
### Selenium version
C# 4.5.1
### What are the browser(s) and version(s) where you see this issue?
Chromium 106
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 106.0.5249.119
### Are you using Selenium Grid?
_No response_ | defect | server timeout what happened browser opens successfully program execution does not continue eventually browser closes and program crashes the browser instance works fine during manual operation but the application cannot connect how can we reproduce the issue shell unsure i have ran into the issue on separate machines running ubuntu and both with clean templates i reinstalled chromium through snap and apt no effect relevant log output shell home spy riderprojects bin debug starting chromedriver refs branch heads on port only local connections are allowed please see for suggestions on keeping chromedriver safe chromedriver was started successfully unhandled exception openqa selenium webdriverexception the http request to the remote webdriver server for url timed out after seconds system threading tasks taskcanceledexception the request was canceled due to the configured httpclient timeout of seconds elapsing system timeoutexception the operation was canceled system threading tasks taskcanceledexception the operation was canceled system io ioexception unable to read data from the transport connection operation canceled system net sockets socketexception operation canceled end of inner exception stack trace at system net sockets socket awaitablesocketasynceventargs throwexception socketerror error cancellationtoken cancellationtoken at system net sockets socket awaitablesocketasynceventargs system threading tasks sources ivaluetasksource getresult token at system net http httpconnection initialfillasync boolean async at system net http httpconnection sendasynccore httprequestmessage request boolean async cancellationtoken cancellationtoken end of inner exception stack trace at system net http httpconnection sendasynccore httprequestmessage request boolean async cancellationtoken cancellationtoken at system net http httpconnectionpool sendwithversiondetectionandretryasync httprequestmessage request boolean async boolean dorequestauth 
cancellationtoken cancellationtoken at system net http redirecthandler sendasync httprequestmessage request boolean async cancellationtoken cancellationtoken at system net http httpclient g core httprequestmessage request httpcompletionoption completionoption cancellationtokensource cts boolean disposects cancellationtokensource pendingrequestscts cancellationtoken originalcancellationtoken end of inner exception stack trace end of inner exception stack trace at system net http httpclient handlefailure exception e boolean telemetrystarted httpresponsemessage response cancellationtokensource cts cancellationtoken cancellationtoken cancellationtokensource pendingrequestscts at system net http httpclient g core httprequestmessage request httpcompletionoption completionoption cancellationtokensource cts boolean disposects cancellationtokensource pendingrequestscts cancellationtoken originalcancellationtoken at openqa selenium remote httpcommandexecutor makehttprequest httprequestinfo requestinfo at openqa selenium remote httpcommandexecutor execute command commandtoexecute end of inner exception stack trace at openqa selenium remote httpcommandexecutor execute command commandtoexecute at openqa selenium remote driverservicecommandexecutor execute command commandtoexecute at openqa selenium webdriver execute string drivercommandtoexecute dictionary parameters at openqa selenium webdriver startsession icapabilities desiredcapabilities at openqa selenium webdriver ctor icommandexecutor executor icapabilities capabilities at openqa selenium chromium chromiumdriver ctor chromiumdriverservice service chromiumoptions options timespan commandtimeout at openqa selenium chrome chromedriver ctor chromedriverservice service chromeoptions options timespan commandtimeout at openqa selenium chrome chromedriver ctor chromeoptions options at openqa selenium chrome chromedriver ctor at program string args in home spy riderprojects program cs line operating system ubuntu selenium version 
c what are the browser s and version s where you see this issue chromium what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no response | 1 |
335,278 | 30,020,865,294 | IssuesEvent | 2023-06-26 23:12:15 | ray-project/ray | https://api.github.com/repos/ray-project/ray | closed | Release test air_benchmark_xgboost_cpu_10.aws failed | bug P1 release-test jailed-test | Release test air_benchmark_xgboost_cpu_10.aws failed.
See https://buildkite.com/ray-project/release-tests-branch/builds/1774#01889f07-2fb3-4bea-a864-9a783bc864a4 for more details.
cc @ml
-- created by ray-test-bot | 2.0 | Release test air_benchmark_xgboost_cpu_10.aws failed - Release test air_benchmark_xgboost_cpu_10.aws failed.
See https://buildkite.com/ray-project/release-tests-branch/builds/1774#01889f07-2fb3-4bea-a864-9a783bc864a4 for more details.
cc @ml
-- created by ray-test-bot | non_defect | release test air benchmark xgboost cpu aws failed release test air benchmark xgboost cpu aws failed see for more details cc ml created by ray test bot | 0 |
63,398 | 12,311,979,855 | IssuesEvent | 2020-05-12 13:17:16 | dotnet/roslyn-analyzers | https://api.github.com/repos/dotnet/roslyn-analyzers | closed | CA1810 should not warn when assigning field in event handler. | Area-Microsoft.CodeQuality.Analyzers Bug help wanted | ```cs
class C
{
private static string? s;
// ↓ CA1810 should not warn here
static C()
{
Console.CancelKeyPress += (o, e) => s = string.Empty;
}
}
``` | 1.0 | CA1810 should not warn when assigning field in event handler. - ```cs
class C
{
private static string? s;
// ↓ CA1810 should not warn here
static C()
{
Console.CancelKeyPress += (o, e) => s = string.Empty;
}
}
``` | non_defect | should not warn when assigning field in event handler cs class c private static string s ↓ should not warn here static c console cancelkeypress o e s string empty | 0 |
68,774 | 17,398,403,634 | IssuesEvent | 2021-08-02 16:06:34 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | Prevent unnecessary renders of ActionCreator,EntityName,CodeEditor and more.. | UI Building UI Building Pod UI Performance | The following components re-render every-time something changes in the redux store
- ActionCreator,
- EntityName,
- CodeEditor and
- ActionEntityContextMenu
Optimize the selectors to fix this issue
| 2.0 | Prevent unnecessary renders of ActionCreator,EntityName,CodeEditor and more.. - The following components re-render every-time something changes in the redux store
- ActionCreator,
- EntityName,
- CodeEditor and
- ActionEntityContextMenu
Optimize the selectors to fix this issue
| non_defect | prevent unnecessary renders of actioncreator entityname codeeditor and more the following components re render every time something changes in the redux store actioncreator entityname codeeditor and actionentitycontextmenu optimize the selectors to fix this issue | 0 |
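The fix this issue asks for, selectors that only recompute when their inputs change so unrelated store updates stop re-rendering `ActionCreator` and friends, is the memoized-selector pattern popularized by reselect. A language-agnostic sketch of the idea in Python (illustrative only, not Appsmith's code):

```python
def create_selector(input_fns, compute):
    """Memoized selector: rerun `compute` only when some input
    function returns a value unequal to its previous result."""
    last_inputs, last_result = None, None

    def selector(state):
        nonlocal last_inputs, last_result
        inputs = [fn(state) for fn in input_fns]
        if last_inputs is None or inputs != last_inputs:
            last_inputs, last_result = inputs, compute(*inputs)
        return last_result

    return selector

compute_calls = []

def upper_names(names):
    compute_calls.append(1)  # track how often the expensive part runs
    return [n.upper() for n in names]

select_upper = create_selector([lambda s: s["entities"]["names"]], upper_names)

state_a = {"entities": {"names": ["fetch", "save"]}, "ui": {"theme": "dark"}}
state_b = {"entities": {"names": ["fetch", "save"]}, "ui": {"theme": "light"}}
select_upper(state_a)
select_upper(state_b)  # only "ui" changed: the memoized result is reused
```

With equality-checked inputs, a theme toggle no longer invalidates the selector, which is exactly the render churn on every store change that the issue describes.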
62,430 | 17,023,921,745 | IssuesEvent | 2021-07-03 04:34:13 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Street name not rendered when specified only on the "associatedStreet" relation | Component: mapnik Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 9.47pm, Sunday, 24th May 2015]**
The relation
name=chemin du Radier
type=associatedStreet
contains 3 highways (role=street), but the name of the street is not rendered on the map (NB : none of the 3 highways duplicate the relation's name)
see
https://www.openstreetmap.org/relation/5155566 | 1.0 | Street name not rendered when specified only on the "associatedStreet" relation - **[Submitted to the original trac issue database at 9.47pm, Sunday, 24th May 2015]**
The relation
name=chemin du Radier
type=associatedStreet
contains 3 highways (role=street), but the name of the street is not rendered on the map (NB : none of the 3 highways duplicate the relation's name)
see
https://www.openstreetmap.org/relation/5155566 | defect | street name not rendered when specified only on the associatedstreet relation the relation name chemin du radier type associatedstreet contains highways role street but the name of the street is not rendered on the map nb none of the highways duplicate the relation s name see | 1 |
37,771 | 5,142,777,825 | IssuesEvent | 2017-01-12 14:20:02 | sakaiproject/sakai | https://api.github.com/repos/sakaiproject/sakai | closed | Student view: the Course Grade overlaps the blue box below | bug GradebookNG ready to test | <img width="930" alt="gbng" src="https://cloud.githubusercontent.com/assets/20171201/21547539/bd73d278-cdb4-11e6-9ca3-a0d2a4848c99.png">
Viewing Gradebook as a student, the Course Grade section overlaps the blue box showing no Gradebook items exist | 1.0 | Student view: the Course Grade overlaps the blue box below - <img width="930" alt="gbng" src="https://cloud.githubusercontent.com/assets/20171201/21547539/bd73d278-cdb4-11e6-9ca3-a0d2a4848c99.png">
Viewing Gradebook as a student, the Course Grade section overlaps the blue box showing no Gradebook items exist | non_defect | student view the course grade overlaps the blue box below img width alt gbng src viewing gradebook as a student the course grade section overlaps the blue box showing no gradebook items exist | 0 |
62,013 | 17,023,832,001 | IssuesEvent | 2021-07-03 04:04:45 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | http://osm.org Transport Map is missing a map key | Component: website Priority: major Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 9.06pm, Tuesday, 16th October 2012]**
There's no description for the meaning of any map line or symbol.
Also see Ticket #4590 | 1.0 | http://osm.org Transport Map is missing a map key - **[Submitted to the original trac issue database at 9.06pm, Tuesday, 16th October 2012]**
There's no description for the meaning of any map line or symbol.
Also see Ticket #4590 | defect | transport map is missing a map key there s no description for the meaning of any map line or symbol also see ticket | 1 |
152,417 | 13,454,280,477 | IssuesEvent | 2020-09-09 03:17:05 | fga-eps-mds/2020-1-Ziguen | https://api.github.com/repos/fga-eps-mds/2020-1-Ziguen | opened | Organizing and Creating Documentation Files | documentation | | Title | Subject | Label | Assignees | Version |
|------|------|------|-----|----|
| Documentation | Organizing and creating | Documentation | @francisco1code | - |
### Description
Some folders and files were created to organize the documentation mechanism
* Folders created
* Diagrams
* Images
* .MD files created
* Use case diagram
* Sequence diagram
* Methodology
* Product backlog
### Acceptance criteria
Complete the files
- [ ] Use case diagram
- [ ] Sequence diagram
- [ ] Methodology
- [ ] Product backlog
 | 1.0 | Organizing and Creating Documentation Files - | Title | Subject | Label | Assignees | Version |
|------|------|------|-----|----|
| Documentation | Organizing and creating | Documentation | @francisco1code | - |
### Description
Some folders and files were created to organize the documentation mechanism
* Folders created
* Diagrams
* Images
* .MD files created
* Use case diagram
* Sequence diagram
* Methodology
* Product backlog
### Acceptance criteria
Complete the files
- [ ] Use case diagram
- [ ] Sequence diagram
- [ ] Methodology
- [ ] Product backlog
 | non_defect | organizing and creating documentation files title subject label assignees version documentation organizing and creating documentation description some folders and files were created to organize the documentation mechanism folders created diagrams images md files created use case diagram sequence diagram methodology product backlog acceptance criteria complete the files use case diagram sequence diagram methodology product backlog | 0
43,991 | 11,893,705,496 | IssuesEvent | 2020-03-29 12:50:17 | kward/shunit2 | https://api.github.com/repos/kward/shunit2 | closed | assertTrue does not work with "set -e" | Priority-Medium Type-Defect auto-migrated | ```
The below script crashes but would work fine if you remove '-e'
#! /bin/sh -e
testTrue()
{
assertTrue 0
}
. shunit2
I'm using shunit2 v2.1.6-1 from ubuntu raring
```
Original issue reported on code.google.com by `ert...@gmail.com` on 21 Jul 2013 at 9:28
| 1.0 | assertTrue does not work with "set -e" - ```
The below script crashes but would work fine if you remove '-e'
#! /bin/sh -e
testTrue()
{
assertTrue 0
}
. shunit2
I'm using shunit2 v2.1.6-1 from ubuntu raring
```
Original issue reported on code.google.com by `ert...@gmail.com` on 21 Jul 2013 at 9:28
| defect | asserttrue does not work with set e the below script crashes but would works fine if you remove e bin sh e testtrue asserttrue i m using from ubuntu raring original issue reported on code google com by ert gmail com on jul at | 1 |
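The crash in that report is a consequence of POSIX `set -e` (errexit): any unchecked command that returns non-zero aborts the whole script, and shUnit2's assertions rely on non-zero return statuses internally. This stdlib-only Python sketch (assuming a POSIX `sh` on PATH; no shUnit2 required) reproduces the underlying shell behaviour:

```python
import subprocess

# A function that "fails" (returns 1), followed by a command that should run next.
SCRIPT = "check() { return 1; }\ncheck\necho reached"

# Without errexit, execution continues past the failing call...
plain = subprocess.run(["sh", "-c", SCRIPT],
                       capture_output=True, text=True)

# ...but under `sh -e`, the non-zero status aborts before the echo,
# the same mechanism that kills `assertTrue` under `#!/bin/sh -e`.
errexit = subprocess.run(["sh", "-e", "-c", SCRIPT],
                         capture_output=True, text=True)
```

Dropping `-e` from the shebang is the usual fix: shUnit2 tracks assertion failures itself, so the script does not need the shell to abort on a non-zero status.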
37,840 | 8,531,133,479 | IssuesEvent | 2018-11-04 08:36:47 | contao/manager-bundle | https://api.github.com/repos/contao/manager-bundle | closed | If the .htaccess already exist contao is not placing its own | defect | Some Webhoster are adding a `.htaccess` with an `AddHandler` in the web-folder if you change the PHP version.
For example: `AddHandler application/x-httpd-php72 .php`
Is it possible to merge the AddHandler during the creation of the `.htaccess` and overwrite the old version.
While testing the new Contao Manager everything went well, expect updating the database (domain.com/contao/install), while the needed lines in the .htaccess were missing. | 1.0 | If the .htaccess already exist contao is not placing its own - Some Webhoster are adding a `.htaccess` with an `AddHandler` in the web-folder if you change the PHP version.
For example: `AddHandler application/x-httpd-php72 .php`
Is it possible to merge the AddHandler during the creation of the `.htaccess` and overwrite the old version.
While testing the new Contao Manager everything went well, expect updating the database (domain.com/contao/install), while the needed lines in the .htaccess were missing. | defect | if the htaccess already exist contao is not placing its own some webhoster are adding a htaccess with an addhandler in the web folder if you change the php version for example addhandler application x httpd php is it possible to merge the addhandler during the creation of the htaccess and overwrite the old version while testing the new contao manager everything went well expect updating the database domain com contao install while the needed lines in the htaccess were missing | 1 |
50,878 | 3,007,677,225 | IssuesEvent | 2015-07-27 17:17:19 | CenterForOpenScience/osf.io | https://api.github.com/repos/CenterForOpenScience/osf.io | closed | [feature request] Filter for Search | 2 - ready Core: Search feature priority - medium ui | I have been exploring the search feature and I've found it to be less than intuitive when it comes to filtering out results. It would be useful to filter out specific keywords, like "posters" specifically.
| 1.0 | [feature request] Filter for Search - I have been exploring the search feature and I've found it to be less than intuitive when it comes to filtering out results. It would be useful to filter out specific keywords, like "posters" specifically.
| non_defect | filter for search i have been exploring the search feature and i ve found it to be less than intuitive when it comes to filtering out results it would be useful to filter out specific keywords like posters specifically | 0 |
9,495 | 7,735,443,548 | IssuesEvent | 2018-05-27 15:11:29 | unitystation/unitystation | https://api.github.com/repos/unitystation/unitystation | opened | Discussion: Easy to use networked Tabs/Windows | Security feature help wanted | # Preliminary information
This does not concern client-based tabs/windows like Alt-click tabs
## Category
Feature Request
# Report
## Current Behaviour
At the moment everything is handled pretty low-level and scattered throughout a lot of places, therefore:
- It's easy to forget to add some checks, lots of ways to accidentally compromise security,
- Tons of boilerplate code,
- Hard to understand for new devs,
- ControlTabs is hardcoded and will soon turn into a mess
We need to make a robust and easy to use solution for networked windows/tabs before we have too much content in.
## Expected/Wanted/Requested Behaviour
Here's what I've come up with:
Requirements:
- Dynamic content support (for vendors that are hackable etc)
- Built-in implicit interaction range and ability to use checks (server-side)
- Tab opening is initiated on server
- Player must not know the contents of dynamic content vendors on the distance
- Multiple instances of the same window
- Server must know if window is opened at the moment. (useful for sending regular updates to the window) Client could send a msg if he initiates close.
- Window elements should be easily bindable (single- or bidirectional) to server methods. Server-to-client updates should be implicit. This should also have dynamic element support (like Vend buttons in item list for vendors)
Nice to have:
- Tab templates stored somewhere outside Managers.prefab
- Multi-user support: n players have the shuttle control window open and both receive same button/radar updates
Any ideas? | True | Discussion: Easy to use networked Tabs/Windows - # Preliminary information
This does not concern client-based tabs/windows like Alt-click tabs
## Category
Feature Request
# Report
## Current Behaviour
At the moment everything is handled pretty low-level and scattered throughout a lot of places, therefore:
- It's easy to forget to add some checks, lots of ways to accidentally compromise security,
- Tons of boilerplate code,
- Hard to understand for new devs,
- ControlTabs is hardcoded and will soon turn into a mess
We need to make a robust and easy to use solution for networked windows/tabs before we have too much content in.
## Expected/Wanted/Requested Behaviour
Here's what I've come up with:
Requirements:
- Dynamic content support (for vendors that are hackable etc)
- Built-in implicit interaction range and ability to use checks (server-side)
- Tab opening is initiated on server
- Player must not know the contents of dynamic content vendors on the distance
- Multiple instances of the same window
- Server must know if window is opened at the moment. (useful for sending regular updates to the window) Client could send a msg if he initiates close.
- Window elements should be easily bindable (single- or bidirectional) to server methods. Server-to-client updates should be implicit. This should also have dynamic element support (like Vend buttons in item list for vendors)
Nice to have:
- Tab templates stored somewhere outside Managers.prefab
- Multi-user support: n players have the shuttle control window open and both receive same button/radar updates
Any ideas? | non_defect | discussion easy to use networked tabs windows preliminary information this does not concern client based tabs windows like alt click tabs category feature request report current behaviour at the moment everything is handled pretty low level and scattered throughout a lot of places therefore it s easy to forget to add some checks lots of ways to accidentally compromise security tons of boilerplate code hard to understand for new devs controltabs is hardcoded and will soon turn into a mess we need to make a robust and easy to use solution for networked windows tabs before we have too much content in expected wanted requested behaviour here s what i ve come up with requirements dynamic content support for vendors that are hackable etc built in implicit interaction range and ability to use checks server side tab opening is initiated on server player must not know the contents of dynamic content vendors on the distance multiple instances of the same window server must know if window is opened at the moment useful for sending regular updates to the window client could send a msg if he initiates close window elements should be easily bindable single or bidirectional to server methods server to client updates should be implicit this should also have dynamic element support like vend buttons in item list for vendors nice to have tab templates stored somewhere outside managers prefab multi user support n players have the shuttle control window open and both receive same button radar updates any ideas | 0 |
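The "easily bindable elements with implicit server-to-client updates" requirement listed in that discussion is essentially the observer/data-binding pattern. A tiny language-agnostic sketch in Python (illustrative only; this is not Unity or Mirror code):

```python
class Observable:
    """One-way binding: assigning .value notifies every bound callback,
    so window elements update implicitly when the server-side value changes."""

    def __init__(self, value=None):
        self._value = value
        self._subscribers = []

    def bind(self, callback):
        self._subscribers.append(callback)
        callback(self._value)  # push the current state immediately on bind

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        if new_value != self._value:  # skip no-op updates
            self._value = new_value
            for cb in self._subscribers:
                cb(new_value)

updates = []
status_label = Observable("IDLE")
status_label.bind(updates.append)  # a client-side "window element"
status_label.value = "VENDING"     # server-side change propagates
status_label.value = "VENDING"     # unchanged value is not re-sent
```

In a networked setting the callback would serialize the change into a message for every client that currently has the tab open, which also covers the multi-user shuttle-window case on the wish list.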
618,368 | 19,433,360,483 | IssuesEvent | 2021-12-21 14:28:28 | gardener/gardener | https://api.github.com/repos/gardener/gardener | closed | resource-manager excessively updates `shoot-access-cloud-config-downloader` Secret on creation | kind/bug area/robustness area/scalability priority/2 | **How to categorize this issue?**
<!--
Please select area, kind, and priority for this issue. This helps the community categorizing it.
Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion.
If multiple identifiers make sense you can also state the commands multiple times, e.g.
/area control-plane
/area auto-scaling
...
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
-->
/area robustness scalability
/kind bug
**What happened**:
During shoot creation, gardener-resource-manager excessively updated the secret `shoot-access-cloud-config-downloader` (more than 1000 times).
```
$ k get secret -w -ojson --field-selector=metadata.name=shoot-access-cloud-config-downloader
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"serviceaccount.resources.gardener.cloud/name": "cloud-config-downloader",
"serviceaccount.resources.gardener.cloud/namespace": "kube-system",
"serviceaccount.resources.gardener.cloud/token-expiration-duration": "2160h",
"token-requestor.resources.gardener.cloud/target-secret-name": "cloud-config-downloader",
"token-requestor.resources.gardener.cloud/target-secret-namespace": "kube-system"
},
"creationTimestamp": "2021-12-20T11:48:20Z",
"labels": {
"resources.gardener.cloud/purpose": "token-requestor"
},
"name": "shoot-access-cloud-config-downloader",
"namespace": "shoot--d067603--local-p2tjv",
"resourceVersion": "301712061",
"uid": "8aadb29e-c389-41c7-b26e-c9f92d430344"
},
"type": "Opaque"
}
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"serviceaccount.resources.gardener.cloud/name": "cloud-config-downloader",
"serviceaccount.resources.gardener.cloud/namespace": "kube-system",
"serviceaccount.resources.gardener.cloud/token-expiration-duration": "2160h",
"serviceaccount.resources.gardener.cloud/token-renew-timestamp": "2021-12-21T07:57:54Z",
"token-requestor.resources.gardener.cloud/target-secret-name": "cloud-config-downloader",
"token-requestor.resources.gardener.cloud/target-secret-namespace": "kube-system"
},
"creationTimestamp": "2021-12-20T11:48:20Z",
"labels": {
"resources.gardener.cloud/purpose": "token-requestor"
},
"name": "shoot-access-cloud-config-downloader",
"namespace": "shoot--d067603--local-p2tjv",
"resourceVersion": "301715893",
"uid": "8aadb29e-c389-41c7-b26e-c9f92d430344"
},
"type": "Opaque"
}
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"serviceaccount.resources.gardener.cloud/name": "cloud-config-downloader",
"serviceaccount.resources.gardener.cloud/namespace": "kube-system",
"serviceaccount.resources.gardener.cloud/token-expiration-duration": "2160h",
"serviceaccount.resources.gardener.cloud/token-renew-timestamp": "2021-12-21T07:36:18Z",
"token-requestor.resources.gardener.cloud/target-secret-name": "cloud-config-downloader",
"token-requestor.resources.gardener.cloud/target-secret-namespace": "kube-system"
},
"creationTimestamp": "2021-12-20T11:48:20Z",
"labels": {
"resources.gardener.cloud/purpose": "token-requestor"
},
"name": "shoot-access-cloud-config-downloader",
"namespace": "shoot--d067603--local-p2tjv",
"resourceVersion": "301715904",
"uid": "8aadb29e-c389-41c7-b26e-c9f92d430344"
},
"type": "Opaque"
}
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"serviceaccount.resources.gardener.cloud/name": "cloud-config-downloader",
"serviceaccount.resources.gardener.cloud/namespace": "kube-system",
"serviceaccount.resources.gardener.cloud/token-expiration-duration": "2160h",
"serviceaccount.resources.gardener.cloud/token-renew-timestamp": "2021-12-21T07:53:46Z",
"token-requestor.resources.gardener.cloud/target-secret-name": "cloud-config-downloader",
"token-requestor.resources.gardener.cloud/target-secret-namespace": "kube-system"
},
"creationTimestamp": "2021-12-20T11:48:20Z",
"labels": {
"resources.gardener.cloud/purpose": "token-requestor"
},
"name": "shoot-access-cloud-config-downloader",
"namespace": "shoot--d067603--local-p2tjv",
"resourceVersion": "301715908",
"uid": "8aadb29e-c389-41c7-b26e-c9f92d430344"
},
"type": "Opaque"
}
...
```
**What you expected to happen**:
The secret to be updated only if necessary? Well, at least not more than 1000 times.
**How to reproduce it (as minimally and precisely as possible)**:
1. Create a shoot
2. Observe the excessive updates with `k get secret -w -ojson --field-selector=metadata.name=shoot-access-cloud-config-downloader`
**Anything else we need to know?**:
This is critical because it will effectively DDoS Seeds (kube-apiserver / etcd) where many new shoots are created in parallel.
/priority 2
**Environment**:
- Gardener version: current master (131fadc0875b7f35ceb7cc3b116bf1023b7304d0)
- Kubernetes version (use `kubectl version`): 1.20.11
- Cloud provider or hardware configuration: AWS, GCP
| 1.0 | resource-manager excessively updates `shoot-access-cloud-config-downloader` Secret on creation - **How to categorize this issue?**
<!--
Please select area, kind, and priority for this issue. This helps the community categorizing it.
Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion.
If multiple identifiers make sense you can also state the commands multiple times, e.g.
/area control-plane
/area auto-scaling
...
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
-->
/area robustness scalability
/kind bug
**What happened**:
During shoot creation, gardener-resource-manager excessively updated the secret `shoot-access-cloud-config-downloader` (more than 1000 times).
```
$ k get secret -w -ojson --field-selector=metadata.name=shoot-access-cloud-config-downloader
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"serviceaccount.resources.gardener.cloud/name": "cloud-config-downloader",
"serviceaccount.resources.gardener.cloud/namespace": "kube-system",
"serviceaccount.resources.gardener.cloud/token-expiration-duration": "2160h",
"token-requestor.resources.gardener.cloud/target-secret-name": "cloud-config-downloader",
"token-requestor.resources.gardener.cloud/target-secret-namespace": "kube-system"
},
"creationTimestamp": "2021-12-20T11:48:20Z",
"labels": {
"resources.gardener.cloud/purpose": "token-requestor"
},
"name": "shoot-access-cloud-config-downloader",
"namespace": "shoot--d067603--local-p2tjv",
"resourceVersion": "301712061",
"uid": "8aadb29e-c389-41c7-b26e-c9f92d430344"
},
"type": "Opaque"
}
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"serviceaccount.resources.gardener.cloud/name": "cloud-config-downloader",
"serviceaccount.resources.gardener.cloud/namespace": "kube-system",
"serviceaccount.resources.gardener.cloud/token-expiration-duration": "2160h",
"serviceaccount.resources.gardener.cloud/token-renew-timestamp": "2021-12-21T07:57:54Z",
"token-requestor.resources.gardener.cloud/target-secret-name": "cloud-config-downloader",
"token-requestor.resources.gardener.cloud/target-secret-namespace": "kube-system"
},
"creationTimestamp": "2021-12-20T11:48:20Z",
"labels": {
"resources.gardener.cloud/purpose": "token-requestor"
},
"name": "shoot-access-cloud-config-downloader",
"namespace": "shoot--d067603--local-p2tjv",
"resourceVersion": "301715893",
"uid": "8aadb29e-c389-41c7-b26e-c9f92d430344"
},
"type": "Opaque"
}
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"serviceaccount.resources.gardener.cloud/name": "cloud-config-downloader",
"serviceaccount.resources.gardener.cloud/namespace": "kube-system",
"serviceaccount.resources.gardener.cloud/token-expiration-duration": "2160h",
"serviceaccount.resources.gardener.cloud/token-renew-timestamp": "2021-12-21T07:36:18Z",
"token-requestor.resources.gardener.cloud/target-secret-name": "cloud-config-downloader",
"token-requestor.resources.gardener.cloud/target-secret-namespace": "kube-system"
},
"creationTimestamp": "2021-12-20T11:48:20Z",
"labels": {
"resources.gardener.cloud/purpose": "token-requestor"
},
"name": "shoot-access-cloud-config-downloader",
"namespace": "shoot--d067603--local-p2tjv",
"resourceVersion": "301715904",
"uid": "8aadb29e-c389-41c7-b26e-c9f92d430344"
},
"type": "Opaque"
}
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"annotations": {
"serviceaccount.resources.gardener.cloud/name": "cloud-config-downloader",
"serviceaccount.resources.gardener.cloud/namespace": "kube-system",
"serviceaccount.resources.gardener.cloud/token-expiration-duration": "2160h",
"serviceaccount.resources.gardener.cloud/token-renew-timestamp": "2021-12-21T07:53:46Z",
"token-requestor.resources.gardener.cloud/target-secret-name": "cloud-config-downloader",
"token-requestor.resources.gardener.cloud/target-secret-namespace": "kube-system"
},
"creationTimestamp": "2021-12-20T11:48:20Z",
"labels": {
"resources.gardener.cloud/purpose": "token-requestor"
},
"name": "shoot-access-cloud-config-downloader",
"namespace": "shoot--d067603--local-p2tjv",
"resourceVersion": "301715908",
"uid": "8aadb29e-c389-41c7-b26e-c9f92d430344"
},
"type": "Opaque"
}
...
```
**What you expected to happen**:
The secret to be updated only if necessary? Well, at least not more than 1000 times.
**How to reproduce it (as minimally and precisely as possible)**:
1. Create a shoot
2. Observe the excessive updates with `k get secret -w -ojson --field-selector=metadata.name=shoot-access-cloud-config-downloader`
**Anything else we need to know?**:
This is critical because it will effectively DDoS Seeds (kube-apiserver / etcd) where many new shoots are created in parallel.
/priority 2
**Environment**:
- Gardener version: current master (131fadc0875b7f35ceb7cc3b116bf1023b7304d0)
- Kubernetes version (use `kubectl version`): 1.20.11
- Cloud provider or hardware configuration: AWS, GCP
| non_defect | resource manager excessively updates shoot access cloud config downloader secret on creation how to categorize this issue please select area kind and priority for this issue this helps the community categorizing it replace below todos or exchange the existing identifiers with those that fit best in your opinion if multiple identifiers make sense you can also state the commands multiple times e g area control plane area auto scaling area identifiers audit logging auto scaling backup certification control plane migration control plane cost delivery dev productivity disaster recovery documentation high availability logging metering monitoring networking open source ops productivity os performance quality robustness scalability security storage testing usability user management kind identifiers api change bug cleanup discussion enhancement epic impediment poc post mortem question regression task technical debt test area robustness scalability kind bug what happened during shoot creation gardener resource manager excessively updated the secret shoot access cloud config downloader more than times k get secret w ojson field selector metadata name shoot access cloud config downloader apiversion kind secret metadata annotations serviceaccount resources gardener cloud name cloud config downloader serviceaccount resources gardener cloud namespace kube system serviceaccount resources gardener cloud token expiration duration token requestor resources gardener cloud target secret name cloud config downloader token requestor resources gardener cloud target secret namespace kube system creationtimestamp labels resources gardener cloud purpose token requestor name shoot access cloud config downloader namespace shoot local resourceversion uid type opaque apiversion kind secret metadata annotations serviceaccount resources gardener cloud name cloud config downloader serviceaccount resources gardener cloud namespace kube system serviceaccount resources gardener cloud 
token expiration duration serviceaccount resources gardener cloud token renew timestamp token requestor resources gardener cloud target secret name cloud config downloader token requestor resources gardener cloud target secret namespace kube system creationtimestamp labels resources gardener cloud purpose token requestor name shoot access cloud config downloader namespace shoot local resourceversion uid type opaque apiversion kind secret metadata annotations serviceaccount resources gardener cloud name cloud config downloader serviceaccount resources gardener cloud namespace kube system serviceaccount resources gardener cloud token expiration duration serviceaccount resources gardener cloud token renew timestamp token requestor resources gardener cloud target secret name cloud config downloader token requestor resources gardener cloud target secret namespace kube system creationtimestamp labels resources gardener cloud purpose token requestor name shoot access cloud config downloader namespace shoot local resourceversion uid type opaque apiversion kind secret metadata annotations serviceaccount resources gardener cloud name cloud config downloader serviceaccount resources gardener cloud namespace kube system serviceaccount resources gardener cloud token expiration duration serviceaccount resources gardener cloud token renew timestamp token requestor resources gardener cloud target secret name cloud config downloader token requestor resources gardener cloud target secret namespace kube system creationtimestamp labels resources gardener cloud purpose token requestor name shoot access cloud config downloader namespace shoot local resourceversion uid type opaque what you expected to happen the secret to be updated only if necessary well at least not more than times how to reproduce it as minimally and precisely as possible create a shoot observe the excessive updates with k get secret w ojson field selector metadata name shoot access cloud config downloader anything 
else we need to know this is critical because it will effectively ddos seeds kube apiserver etcd where many new shoots are created in parallel priority environment gardener version current master kubernetes version use kubectl version cloud provider or hardware configuration aws gcp | 0 |
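The gardener report above boils down to a controller issuing writes even when nothing material changed. A minimal sketch of the usual remedy — diff the desired object against the live one and skip the write when they already match — is shown below; the `store` dict stands in for the API server, and all names are illustrative rather than gardener's actual code:

```python
def update_if_changed(store, name, desired):
    """Write `desired` under `name` only when it differs from what is stored.

    Returns True when a write happened, False when the call was a no-op.
    A controller that reconciles in a loop then stays quiet once the object
    converges, instead of bumping the resourceVersion on every pass.
    """
    current = store.get(name)
    if current == desired:
        return False  # already converged: no API write
    store[name] = dict(desired)  # shallow copy so caller mutation can't alias the store
    return True


# Reconciling the same annotations twice only writes once.
store = {}
desired = {
    "token-expiration-duration": "2160h",
    "target-secret-name": "cloud-config-downloader",
}
first = update_if_changed(store, "shoot-access-cloud-config-downloader", desired)
second = update_if_changed(store, "shoot-access-cloud-config-downloader", desired)
```

With a guard like this, the thousand-plus updates observed in the report would collapse to one write per genuine change (for example, when the token-renew timestamp actually moves).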
61,092 | 7,439,057,186 | IssuesEvent | 2018-03-27 04:09:08 | EarlyWormGames/SkedaddleBugs | https://api.github.com/repos/EarlyWormGames/SkedaddleBugs | closed | Electricity not working in 2-9 | bug design/level design gameplay programming | **Location:** _2-9_
**Reproducibility:** _Always_
**Blocks gameplay:** _No_
**Version (if known):** _Editor-Current (27/03/18)_
**Priority:** _3(Medium)_
_Electricity not working in 2-9_

| 2.0 | Electricity not working in 2-9 - **Location:** _2-9_
**Reproducibility:** _Always_
**Blocks gameplay:** _No_
**Version (if known):** _Editor-Current (27/03/18)_
**Priority:** _3(Medium)_
_Electricity not working in 2-9_

| non_defect | electricity not working in location reproducibility always blocks gameplay no version if known editor current priority medium electricity not working in | 0 |
49,367 | 13,186,641,396 | IssuesEvent | 2020-08-13 00:50:34 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | weighting unit test (Trac #1239) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1239">https://code.icecube.wisc.edu/ticket/1239</a>, reported by kjmeagher and owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:03",
"description": "since weighting is pure python our code coverage report does not give info, but there are two python scripts in resources/test\n\ncompare_oneweight.py appears to be a useful unit test for nugen weighting but the assert is commented out \ncorsika_weight_calculator.py wants cli input and does not appear to be a unit test",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1458335643235016",
"component": "combo reconstruction",
"summary": "weighting unit test",
"priority": "normal",
"keywords": "",
"time": "2015-08-20T08:32:45",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
| 1.0 | weighting unit test (Trac #1239) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1239">https://code.icecube.wisc.edu/ticket/1239</a>, reported by kjmeagher and owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:03",
"description": "since weighting is pure python our code coverage report does not give info, but there are two python scripts in resources/test\n\ncompare_oneweight.py appears to be a useful unit test for nugen weighting but the assert is commented out \ncorsika_weight_calculator.py wants cli input and does not appear to be a unit test",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1458335643235016",
"component": "combo reconstruction",
"summary": "weighting unit test",
"priority": "normal",
"keywords": "",
"time": "2015-08-20T08:32:45",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
| defect | weighting unit test trac migrated from json status closed changetime description since weighting is pure python our code coverage report does not give info but there are two python scripts in resources test n ncompare oneweight py appears to be a useful unit test for nugen weighting but the assert is commented out ncorsika weight calculator py wants cli input and does not appear to be a unit test reporter kjmeagher cc resolution fixed ts component combo reconstruction summary weighting unit test priority normal keywords time milestone owner jvansanten type defect | 1 |
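The ticket above notes that `compare_oneweight.py` had its assert commented out, leaving the weighting code effectively untested. A hedged sketch of what re-enabling it could look like — comparing independently computed weights within a relative tolerance rather than demanding exact float equality (tolerance value and function names are illustrative, not the actual icecube script):

```python
import math

def weights_agree(reference, computed, rel_tol=1e-6):
    """True when every computed weight matches its reference within rel_tol."""
    if len(reference) != len(computed):
        return False
    return all(math.isclose(r, c, rel_tol=rel_tol) for r, c in zip(reference, computed))


# A unit test would assert agreement instead of printing and moving on.
ref = [1.0e-8, 2.5e-8, 3.0e-8]
ok = weights_agree(ref, [w * (1 + 1e-9) for w in ref])   # tiny float noise: accepted
bad = weights_agree(ref, [w * 1.01 for w in ref])        # 1% disagreement: rejected
```

Using a relative tolerance is the usual choice here because per-event weights span many orders of magnitude, so a fixed absolute tolerance would be either too loose for small weights or too strict for large ones.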
240,178 | 26,254,330,932 | IssuesEvent | 2023-01-05 22:33:20 | MValle21/ts-components | https://api.github.com/repos/MValle21/ts-components | opened | CVE-2021-23368 (Medium) detected in postcss-7.0.35.tgz | security vulnerability | ## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-6.1.15.tgz (Root Library)
- core-6.1.15.tgz
- autoprefixer-9.8.6.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/MValle21/ts-components/commit/7fad0c3ea819de1f1c13e58ea95c3582352491a9">7fad0c3ea819de1f1c13e58ea95c3582352491a9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss from 7.0.0 and before 8.2.10 are vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution (postcss): 7.0.36</p>
<p>Direct dependency fix Resolution (@storybook/react): 6.1.16</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2021-23368 (Medium) detected in postcss-7.0.35.tgz - ## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-6.1.15.tgz (Root Library)
- core-6.1.15.tgz
- autoprefixer-9.8.6.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/MValle21/ts-components/commit/7fad0c3ea819de1f1c13e58ea95c3582352491a9">7fad0c3ea819de1f1c13e58ea95c3582352491a9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss from 7.0.0 and before 8.2.10 are vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution (postcss): 7.0.36</p>
<p>Direct dependency fix Resolution (@storybook/react): 6.1.16</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_defect | cve medium detected in postcss tgz cve medium severity vulnerability vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules postcss package json dependency hierarchy react tgz root library core tgz autoprefixer tgz x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss from and before are vulnerable to regular expression denial of service redos during source map parsing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss direct dependency fix resolution storybook react rescue worker helmet automatic remediation is available for this issue | 0 |
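The advisory above is resolved by version floors (postcss ≥ 7.0.36, @storybook/react ≥ 6.1.16). A small sketch of the kind of check a dependency audit performs — numeric, component-wise version comparison rather than string comparison, which would wrongly rank "7.0.9" above "7.0.36":

```python
def parse_version(v):
    """Split a dotted version string into a tuple of ints, e.g. '7.0.36' -> (7, 0, 36)."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed_in):
    """True when `installed` is strictly below the first fixed release."""
    return parse_version(installed) < parse_version(fixed_in)


# The pinned version from the report, against the advisory's fix version.
postcss_vulnerable = is_vulnerable("7.0.35", "7.0.36")
storybook_vulnerable = is_vulnerable("6.1.16", "6.1.16")
# Lexicographic string comparison would get this one backwards: "7.0.9" > "7.0.36".
patch_nine_vulnerable = is_vulnerable("7.0.9", "7.0.36")
```

This toy parser only handles plain dotted numbers; real npm ranges (prereleases, `^`/`~` specifiers) need a full semver implementation.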
71,009 | 23,408,294,040 | IssuesEvent | 2022-08-12 14:51:11 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Incorrect presentation role on Tooltips in CMS | Needs refining ⭐️ Sitewide CMS 508/Accessibility 508-defect-2 | ## Description
Within the read only text sections of the CMS, there is a tooltip that when hovered over or focused on displays additional information. However, a screen reader does not read this tooltip due to the presentation role making it not available to screen reader users
## Screenshot

## Accessibility Standard
WCAG version 2.1 AA, [Criterion 1.4.13](https://www.w3.org/WAI/WCAG21/Understanding/content-on-hover-or-focus.html)
## Acceptance Criteria
- [ ] UX/IA review
- [ ] Interactive Design review and documentation in Design System
- [ ] Technical review
- [ ] Change management consulted
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
| 1.0 | Incorrect presentation role on Tooltips in CMS - ## Description
Within the read only text sections of the CMS, there is a tooltip that when hovered over or focused on displays additional information. However, a screen reader does not read this tooltip due to the presentation role making it not available to screen reader users
## Screenshot

## Accessibility Standard
WCAG version 2.1 AA, [Criterion 1.4.13](https://www.w3.org/WAI/WCAG21/Understanding/content-on-hover-or-focus.html)
## Acceptance Criteria
- [ ] UX/IA review
- [ ] Interactive Design review and documentation in Design System
- [ ] Technical review
- [ ] Change management consulted
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
| defect | incorrect presentation role on tooltips in cms description within the read only text sections of the cms there is a tooltip that when hovered over or focused on displays additional information however a screen reader does not read this tooltip due to the presentation role making it not available to screen reader users screenshot accessibility standard wcag version aa acceptance criteria ux ia review interactive design review and documentation in design system technical review change management consulted cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support | 1 |
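The report above describes a tooltip trigger that is keyboard-focusable yet carries a presentation role, which removes it from the accessibility tree. A rough sketch of an automated check for that pattern — flag any element that looks focusable (approximated here by a `tabindex` attribute) while its role is `presentation` or `none`. The markup and heuristic are illustrative, not the CMS's actual templates:

```python
from html.parser import HTMLParser

class PresentationRoleChecker(HTMLParser):
    """Collect tags that are focusable but hidden from assistive tech by their role."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        focusable = "tabindex" in a
        hidden_role = a.get("role") in ("presentation", "none")
        if focusable and hidden_role:
            self.violations.append((tag, a.get("class", "")))


checker = PresentationRoleChecker()
checker.feed(
    '<span class="tooltip" tabindex="0" role="presentation" title="More info">?</span>'
    '<span class="tooltip-ok" tabindex="0" role="tooltip">?</span>'
)
```

A check like this would catch the reported pattern in CI; the fix itself is to give the trigger a meaningful role (e.g. `tooltip` on the popup, with `aria-describedby` on the trigger) instead of `presentation`.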
303,675 | 23,035,189,713 | IssuesEvent | 2022-07-22 17:53:07 | typescript-eslint/typescript-eslint | https://api.github.com/repos/typescript-eslint/typescript-eslint | closed | Docs: Proofread Getting Started guide for clarity | documentation accepting prs | ### Before You File a Documentation Request Please Confirm You Have Done The Following...
- [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal.
- [X] I have [read the FAQ](https://typescript-eslint.io/docs/linting/troubleshooting) and my problem is not listed.
### Suggested Changes
Similar to #4861, but for the Getting Started guide. It hasn't been reworked in a little while and I'd like to take an editing pass on it.
### Affected URL(s)
- https://typescript-eslint.io/docs/linting
- https://typescript-eslint.io/docs/linting/* | 1.0 | Docs: Proofread Getting Started guide for clarity - ### Before You File a Documentation Request Please Confirm You Have Done The Following...
- [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal.
- [X] I have [read the FAQ](https://typescript-eslint.io/docs/linting/troubleshooting) and my problem is not listed.
### Suggested Changes
Similar to #4861, but for the Getting Started guide. It hasn't been reworked in a little while and I'd like to take an editing pass on it.
### Affected URL(s)
- https://typescript-eslint.io/docs/linting
- https://typescript-eslint.io/docs/linting/* | non_defect | docs proofread getting started guide for clarity before you file a documentation request please confirm you have done the following i have looked for existing that match my proposal i have and my problem is not listed suggested changes similar to but for the getting started guide it hasn t been reworked in a little while and i d like to take an editing pass on it affected url s | 0 |
389,558 | 11,503,841,678 | IssuesEvent | 2020-02-12 21:55:36 | cloudflare/wrangler | https://api.github.com/repos/cloudflare/wrangler | closed | include plain text bindings in upload form | category - feature priority - critical subject - secrets | when we wrangler publish a project with plain text bindings, we need to include the bindings in the upload form metadata.json. | 1.0 | include plain text bindings in upload form - when we wrangler publish a project with plain text bindings, we need to include the bindings in the upload form metadata.json. | non_defect | include plain text bindings in upload form when we wrangler publish a project with plain text bindings we need to include the bindings in the upload form metadata json | 0 |
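The wrangler issue above asks for plain text bindings to be carried into the upload form's `metadata.json`. A hedged sketch of assembling that metadata: the `plain_text` binding shape below follows Cloudflare's documented Workers upload format, but treat the exact field names as an assumption rather than wrangler's verified output:

```python
import json

def build_metadata(body_part, text_bindings):
    """Assemble upload metadata, mapping each name -> value pair to a plain_text binding."""
    return {
        "body_part": body_part,
        "bindings": [
            # Field names ("type", "name", "text") assumed from the public upload API.
            {"type": "plain_text", "name": name, "text": value}
            for name, value in sorted(text_bindings.items())
        ],
    }


metadata = build_metadata("script", {"ENVIRONMENT": "staging", "API_HOST": "example.invalid"})
serialized = json.dumps(metadata)  # this string is what would land in metadata.json
```

Sorting the bindings by name keeps the generated metadata deterministic, which makes the upload form easy to snapshot-test.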
31,340 | 7,345,972,924 | IssuesEvent | 2018-03-07 19:09:18 | ChurchCRM/CRM | https://api.github.com/repos/ChurchCRM/CRM | opened | Move Issue Reporter to new Util Class | Code Smell | Issue Reporter is currently in ```SystemService```. This should be in its own class | 1.0 | Move Issue Reporter to new Util Class - Issue Reporter is currently in ```SystemService```. This should be in its own class | non_defect | move issue reporter to new util class issue reporter is currently in systemservice this should be in its own class | 0
173,309 | 27,418,624,725 | IssuesEvent | 2023-03-01 15:19:01 | learningequality/kolibri | https://api.github.com/repos/learningequality/kolibri | closed | improvement on channel deletion | TAG: ux update TAG: new feature TAG: user strings APP: Device design | ## Observed behavior
As I am trying to delete a (large 50GB+) channel, the screen for deletion seems to idle for a good 8 minutes, and then starts showing progress in the green percentage bar, with the corresponding % being displayed.
Proposed solution: Create a message such as "Deletion is being prepared" until the actual % bar starts making progress.
## Expected behavior
Indication of some sort of progress would be great, even if it doesn't count towards the percentage bar. Also, logging (at level INFO) should indicate that the deletion has completed successfully. As it is, the log reads `INFO Deleting all channel metadata` and never indicates that the process finishes.
## User-facing consequences
It is not helpful to the user to be left wondering whether the deletion has started or is doing anything.
## Steps to reproduce
Import a large channel, delete it, and tail the log generated by the iceqube/task/job worker.
…
## Context
* Kolibri version 0.15.0b2
* Operating system Windows 11
* Browser Chrome
…
| 1.0 | improvement on channel deletion - ## Observed behavior
As I am trying to delete a (large 50GB+) channel, the screen for deletion seems to idle for a good 8 minutes, and then starts showing progress in the green percentage bar, with the corresponding % being displayed.
Proposed solution: Create a message such as "Deletion is being prepared" until the actual % bar starts making progress.
## Expected behavior
Indication of some sort of progress would be great, even if it doesn't count towards the percentage bar. Also, logging (at level INFO) should indicate that the deletion has completed successfully. As it is, the log reads `INFO Deleting all channel metadata` and never indicates that the process finishes.
## User-facing consequences
It is not helpful to the user to be left wondering whether the deletion has started or is doing anything.
## Steps to reproduce
Import a large channel, delete it, and tail the log generated by the iceqube/task/job worker.
…
## Context
* Kolibri version 0.15.0b2
* Operating system Windows 11
* Browser Chrome
…
| non_defect | improvement on channel deletion observed behavior as i am trying to delete a large channel the screen for deletion seems to idle for a good minutes and then starts showing progress in the green percentage bar with the corresponding being displayed proposed solution create a message such as deletion is being prepared until the actual bar starts making progress expected behavior indication of some sort of progress would be great even if it doesn t count towards the percentage bar also logging at level info should indicate that the deletion has completed successfully as it is the log reads info deleting all channel metadata and never indicates that the process finishes user facing consequences it is not helpful to the user to be left wondering whether the deletion has started or is doing anything steps to reproduce import a large channel delete it and tail the log generated by the iceqube task job worker … context kolibri version operating system windows browser chrome … | 0 |
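The report above asks for two things: a visible "preparing" state before the percentage bar moves, and a log line that confirms the deletion finished. A minimal, hypothetical sketch of such a status stream (illustrative only — not Kolibri's actual task API):

```python
def deletion_progress(total_chunks):
    """Yield (phase, percent) updates for a long-running delete."""
    # An explicit "preparing" phase covers the initial scan that the
    # report says looks like an 8-minute hang with no feedback.
    yield ("preparing", None)
    for done in range(1, total_chunks + 1):
        yield ("deleting", round(100 * done / total_chunks))
    # Explicit completion marker, so the log never ends on an
    # ambiguous "Deleting all channel metadata" line.
    yield ("complete", 100)
```

A UI consuming this stream would render the `preparing` phase as an indeterminate spinner or message and switch to the percentage bar once numeric updates arrive.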
24,839 | 4,110,750,939 | IssuesEvent | 2016-06-07 01:08:13 | prettydiff/prettydiff | https://api.github.com/repos/prettydiff/prettydiff | opened | SCSS: removed space in value | Defect Not started Parsing | margin: 0 {{ some_variable }}em;
Becomes
margin:0{{some_variable}}em;
The curly braces represent a variable reference, so the preprocessed value could look something like `0 20em` | 1.0 | SCSS: removed space in value - margin: 0 {{ some_variable }}em;
Becomes
margin:0{{some_variable}}em;
The curly braces represent a variable reference, so the preprocessed value could look something like `0 20em` | defect | scss removed space in value margin some variable em becomes margin some variable em the curly braces represent a variable reference so the preprocessed value could look something like | 1 |
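The defect above is a minifier joining separate value tokens: `0 {{ some_variable }}em` must keep the space between `0` and the interpolated token, or the preprocessed value (e.g. `0 20em`) changes meaning. A hypothetical sketch of the distinction — padding inside the braces is cosmetic, the separator between value tokens is not (this is not Pretty Diff's actual code):

```python
import re

def minify_value(value):
    """Tighten template braces without joining adjacent value tokens."""
    # Safe: whitespace just inside {{ }} is cosmetic padding.
    value = re.sub(r"\{\{\s*(.*?)\s*\}\}", r"{{\1}}", value)
    # Safe: collapse runs of whitespace to a single separator, which
    # preserves the token boundary that values like `0 20em` depend on.
    return re.sub(r"\s+", " ", value).strip()
```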
15,328 | 2,850,628,887 | IssuesEvent | 2015-05-31 18:50:46 | damonkohler/sl4a | https://api.github.com/repos/damonkohler/sl4a | opened | smsSend does not store the message in the sms application | auto-migrated Priority-Medium Type-Defect | _From @GoogleCodeExporter on May 31, 2015 11:29_
```
What device(s) are you experiencing the problem on?
HTC Tattoo (Click)
What steps will reproduce the problem?
`droid.smsSend(address, msg)`
What is the expected output? What do you see instead?
I would expect that the sent message appears in the SMS application.
What version of the product are you using? On what operating system?
Android 2.3.3 (CM 7.0.3)
```
Original issue reported on code.google.com by `marc.sch...@gmail.com` on 3 Jun 2011 at 11:28
_Copied from original issue: damonkohler/android-scripting#551_ | 1.0 | smsSend does not store the message in the sms application - _From @GoogleCodeExporter on May 31, 2015 11:29_
```
What device(s) are you experiencing the problem on?
HTC Tattoo (Click)
What steps will reproduce the problem?
`droid.smsSend(address, msg)`
What is the expected output? What do you see instead?
I would expect that the sent message appears in the SMS application.
What version of the product are you using? On what operating system?
Android 2.3.3 (CM 7.0.3)
```
Original issue reported on code.google.com by `marc.sch...@gmail.com` on 3 Jun 2011 at 11:28
_Copied from original issue: damonkohler/android-scripting#551_ | defect | smssend does not store the message in the sms application from googlecodeexporter on may what device s are you experiencing the problem on htc tattoo click what steps will reproduce the problem droid smssend address msg what is the expected output what do you see instead i would expect that the sent message appears in the sms application what version of the product are you using on what operating system android cm original issue reported on code google com by marc sch gmail com on jun at copied from original issue damonkohler android scripting | 1 |
53,613 | 13,185,180,752 | IssuesEvent | 2020-08-12 20:53:00 | GoogleContainerTools/skaffold | https://api.github.com/repos/GoogleContainerTools/skaffold | opened | Load images into minikube rather than replacing host's docker env | area/build kind/feature-request platform/minikube priority/p2 | currently in skaffold, if we detect a user has `minikube` as their active kubecontext, we run a `minikube docker-env` so that `docker` commands on the host machine build to minikube's docker daemon. this can cause issues if we're only trying to run a `skaffold build` while minikube is stopped - something that should be possible, but skaffold can't get around because it only knows to try and use minikube's docker daemon.
instead, we could use `minikube cache add` on images that are built locally, and do this **on push**, so builds will still work even when minikube's docker daemon is not accessible.
https://minikube.sigs.k8s.io/docs/handbook/pushing | 1.0 | Load images into minikube rather than replacing host's docker env - currently in skaffold, if we detect a user has `minikube` as their active kubecontext, we run a `minikube docker-env` so that `docker` commands on the host machine build to minikube's docker daemon. this can cause issues if we're only trying to run a `skaffold build` while minikube is stopped - something that should be possible, but skaffold can't get around because it only knows to try and use minikube's docker daemon.
instead, we could use `minikube cache add` on images that are built locally, and do this **on push**, so builds will still work even when minikube's docker daemon is not accessible.
https://minikube.sigs.k8s.io/docs/handbook/pushing | non_defect | load images into minikube rather than replacing host s docker env currently in skaffold if we detect a user has minikube as their active kubecontext we run a minikube docker env so that docker commands on the host machine build to minikube s docker daemon this can cause issues if we re only trying to run a skaffold build while minikube is stopped something that should be possible but skaffold can t get around because it only knows to try and use minikube s docker daemon instead we could use minikube cache add on images that are built locally and do this on push so builds will still work even when minikube s docker daemon is not accessible | 0 |
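The decision described above can be sketched as a tiny strategy chooser (hypothetical names, not Skaffold's actual API): only a minikube context is eligible for daemon-side builds, and when the daemon is unreachable the build still succeeds locally, with image loading deferred to push time.

```python
def choose_image_strategy(kube_context, daemon_reachable):
    """Pick how a freshly built image should reach the cluster."""
    if kube_context != "minikube":
        return "push-to-registry"          # remote clusters: normal push
    if daemon_reachable:
        return "build-in-minikube-daemon"  # cheapest: docker-env build
    return "build-local-load-on-push"      # works while minikube is stopped
```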
71,056 | 23,428,097,920 | IssuesEvent | 2022-08-14 17:51:06 | dkfans/keeperfx | https://api.github.com/repos/dkfans/keeperfx | closed | Flower petals drawn in zoombox | Type-Defect Component-UI | This is how the zoombox looks now:

This is how it used to look:

| 1.0 | Flower petals drawn in zoombox - This is how the zoombox looks now:

This is how it used to look:

| defect | flower petals drawn in zoombox this is how the zoombox looks now this is how it used to look | 1 |
351,250 | 10,514,575,551 | IssuesEvent | 2019-09-28 01:45:49 | AY1920S1-CS2113T-W17-4/main | https://api.github.com/repos/AY1920S1-CS2113T-W17-4/main | opened | As an Undergraduate Tutor, I can have two instances of calendar | priority.Medium type.Enhancement type.Story | such that I can separate my tutor tasks and personal tasks.
| 1.0 | As an Undergraduate Tutor, I can have two instances of calendar - such that I can separate my tutor tasks and personal tasks.
| non_defect | as an undergraduate tutor i can have two instances of calendar such that i can separate my tutor tasks and personal tasks | 0 |
23,412 | 3,813,881,420 | IssuesEvent | 2016-03-28 09:12:21 | night-ghost/minimosd-extra | https://api.github.com/repos/night-ghost/minimosd-extra | closed | Minim osd horizon jumbled text- firmware upload fail? | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.Clear eeprom
2.load firmware
3. turn on osd, look at footage
What is the expected output? What do you see instead?
Expected output is smooth. Instead it is a bunch of odd characters.
In mwosd, this is due to uploading fonts incorrectly.
What version of the product are you using? On what operating system?
2.3.2 pre 727, windows 7, 64 bit
Please provide any additional information below.
The osd text is jumbled, and while it can be made out, the horizon is
completely screwed up. This is what happens when firmware upload failed in mw
osd.
```
Original issue reported on code.google.com by `ryanfrie...@gmail.com` on 15 Feb 2015 at 10:32
Attachments:
* [minimbug.PNG](https://storage.googleapis.com/google-code-attachments/minimosd-extra/issue-119/comment-0/minimbug.PNG)
| 1.0 | Minim osd horizon jumbled text- firmware upload fail? - ```
What steps will reproduce the problem?
1.Clear eeprom
2.load firmware
3. turn on osd, look at footage
What is the expected output? What do you see instead?
Expected output is smooth. Instead it is a bunch of odd characters.
In mwosd, this is due to uploading fonts incorrectly.
What version of the product are you using? On what operating system?
2.3.2 pre 727, windows 7, 64 bit
Please provide any additional information below.
The osd text is jumbled, and while it can be made out, the horizon is
completely screwed up. This is what happens when firmware upload failed in mw
osd.
```
Original issue reported on code.google.com by `ryanfrie...@gmail.com` on 15 Feb 2015 at 10:32
Attachments:
* [minimbug.PNG](https://storage.googleapis.com/google-code-attachments/minimosd-extra/issue-119/comment-0/minimbug.PNG)
| defect | minim osd horizon jumbled text firmware upload fail what steps will reproduce the problem clear eeprom load firmware turn on osd look at footage what is the expected output what do you see instead expected output is smooth instead it is a bunch of odd characters in mwosd this is due to uploading fonts incorrectly what version of the product are you using on what operating system pre windows bit please provide any additional information below the osd text is jumbled and while it can be made out the horizon is completely screwed up this is what happens when firmware upload failed in mw osd original issue reported on code google com by ryanfrie gmail com on feb at attachments | 1 |
87,494 | 10,919,815,997 | IssuesEvent | 2019-11-21 19:51:12 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | [Design] Remove gating for Original Claims | 526 design vsa-benefits | ## User Story or Problem Statement
As a first time disability claimant, after logging in, I need to be notified of how I can proceed with completing my Disability Compensation Claim if I do not have the required VA identification so that I can start the form.
## Goal
_A Veteran who is not able to have a CORP ID created because they are missing a BIRLS ID needs to be notified (before ITF) of how they can proceed with completing their 526_
## Tasks
- [x] _Error handling of CORP ID, but no BIRLS ID
- [x] _Identify where the error should be displayed (ITF - Intent to File or SIP - Save in Progress)
- [x] _Identify what displays to user who is logged in and gets to gating but is missing BIRLS ID
- [ ] _(Technical) Add mock user
## Acceptance Criteria
- [x] Design of messaging identified and completed. | 1.0 | [Design] Remove gating for Original Claims - ## User Story or Problem Statement
As a first time disability claimant, after logging in, I need to be notified of how I can proceed with completing my Disability Compensation Claim if I do not have the required VA identification so that I can start the form.
## Goal
_A Veteran who is not able to have a CORP ID created because they are missing a BIRLS ID needs to be notified (before ITF) of how they can proceed with completing their 526_
## Tasks
- [x] _Error handling of CORP ID, but no BIRLS ID
- [x] _Identify where the error should be displayed (ITF - Intent to File or SIP - Save in Progress)
- [x] _Identify what displays to user who is logged in and gets to gating but is missing BIRLS ID
- [ ] _(Technical) Add mock user
## Acceptance Criteria
- [x] Design of messaging identified and completed. | non_defect | remove gating for original claims user story or problem statement as a first time disability claimant after logging in i need to be notified of how i can proceed with completing my disability compensation claim if i do not have the required va identification so that i can start the form goal a veteran who is not able to have a corp id created because they are missing a birls id needs to be notified before itf of how they can proceed with completing their tasks error handling of corp id but no birls id identify where the error should be displayed itf intent to file or sip save in progress identify what displays to user who is logged in and gets to gating but is missing birls id technical add mock user acceptance criteria design of messaging identified and completed | 0 |
229,914 | 18,455,298,102 | IssuesEvent | 2021-10-15 15:40:27 | ESMValGroup/ESMValTool | https://api.github.com/repos/ESMValGroup/ESMValTool | opened | [recipe testing strategy] Testing recipes on a weekly basis | enhancement is-enes testing | **Is your feature request related to a problem? Please describe.**
During the meeting on the recipe testing strategy #2259 on October 14, 2021, we discussed the possibility to test some recipes more frequently than shortly before a release. These tests would be done on a weekly basis for a subset of recipes. This testing approach would help to check if changes in the Core affect the recipes. Several options were discussed to this end:
- In order to limit the computational efforts required, we would need to find a sensible way to split recipes in subsets. Would it make sense to run recipes that are known to fail or problematic more often than others? Shall the recipes with a too long runtime be excluded from this testing approach?
- What kind of computational resources would be needed for that? Are the resources available with the ESMValBot enough?
- Is it possible and relevant to perform this testing without calling the diagnostic scripts?
- How do we handle the "missing data problem"? And do we need to test data changes as well? What about the availability of observational data?
- Would it help to have a "test version" of each/many recipes that could be computationally less demanding, possibly using less datasets and a narrower time range? That would imply twice more recipes to maintain.
- Could we develop synthetic data on which all recipes would run? Consensus was reached that developing such dataset would be difficult as ESMValTool runs on a great variety of data. These data may also not capture the trends and thresholds analyzed in recipes.
- Could the recipes be tested using a very low resolution only? Not clear if that would work for all recipes.
- Should we define a better way to record backwards incompatible changes such as a "live changelogs"?
Feel free to edit this summary or to comment in this issue. (This issue could be transferred to the Core if more appropriate.)
| 1.0 | [recipe testing strategy] Testing recipes on a weekly basis - **Is your feature request related to a problem? Please describe.**
During the meeting on the recipe testing strategy #2259 on October 14, 2021, we discussed the possibility to test some recipes more frequently than shortly before a release. These tests would be done on a weekly basis for a subset of recipes. This testing approach would help to check if changes in the Core affect the recipes. Several options were discussed to this end:
- In order to limit the computational efforts required, we would need to find a sensible way to split recipes in subsets. Would it make sense to run recipes that are known to fail or problematic more often than others? Shall the recipes with a too long runtime be excluded from this testing approach?
- What kind of computational resources would be needed for that? Are the resources available with the ESMValBot enough?
- Is it possible and relevant to perform this testing without calling the diagnostic scripts?
- How do we handle the "missing data problem"? And do we need to test data changes as well? What about the availability of observational data?
- Would it help to have a "test version" of each/many recipes that could be computationally less demanding, possibly using less datasets and a narrower time range? That would imply twice more recipes to maintain.
- Could we develop synthetic data on which all recipes would run? Consensus was reached that developing such dataset would be difficult as ESMValTool runs on a great variety of data. These data may also not capture the trends and thresholds analyzed in recipes.
- Could the recipes be tested using a very low resolution only? Not clear if that would work for all recipes.
- Should we define a better way to record backwards incompatible changes such as a "live changelogs"?
Feel free to edit this summary or to comment in this issue. (This issue could be transferred to the Core if more appropriate.)
| non_defect | testing recipes on a weekly basis is your feature request related to a problem please describe during the meeting on the recipe testing strategy on october we discussed the possibility to test some recipes more frequently than shortly before a release these tests would be done on a weekly basis for a subset of recipes this testing approach would help to check if changes in the core affect the recipes several options were discussed to this end in order to limit the computational efforts required we would need to find a sensible way to split recipes in subsets would it make sense to run recipes that are known to fail or problematic more often than others shall the recipes with a too long runtime be excluded from this testing approach what kind of computational resources would be needed for that are the resources available with the esmvalbot enough is it possible and relevant to perform this testing without calling the diagnostic scripts how do we handle the missing data problem and do we need to test data changes as well what about the availability of observational data would it help to have a test version of each many recipes that could be computationally less demanding possibly using less datasets and a narrower time range that would imply twice more recipes to maintain could we develop synthetic data on which all recipes would run consensus was reached that developing such dataset would be difficult as esmvaltool runs on a great variety of data these data may also not capture the trends and thresholds analyzed in recipes could the recipes be tested using a very low resolution only not clear if that would work for all recipes should we define a better way to record backwards incompatible changes such as a live changelogs feel free to edit this summary or to comment in this issue this issue could be transferred to the core if more appropriate | 0 |
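One concrete way to "split recipes in subsets" deterministically — so each recipe runs once per rotation without maintaining hand-curated lists — is to bucket by a stable hash. This is an illustration of the idea, not an agreed ESMValTool design:

```python
import hashlib

def weekly_subset(recipes, week_index, n_buckets=4):
    """Return the recipes scheduled for this week's test run.

    Buckets are stable across runs (sha256, not Python's randomized
    hash), so every recipe is exercised exactly once per n-week cycle.
    """
    def bucket(name):
        return int(hashlib.sha256(name.encode()).hexdigest(), 16) % n_buckets
    return sorted(r for r in recipes if bucket(r) == week_index % n_buckets)
```

Known-problematic recipes could simply be placed in more than one bucket to test them more often.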
711,907 | 24,479,338,488 | IssuesEvent | 2022-10-08 16:03:45 | Princeton-LSI-ResearchComputing/tracebase | https://api.github.com/repos/Princeton-LSI-ResearchComputing/tracebase | closed | Downloaded PeakGroup data is incomplete | type:bug priority:1-blocking component:2-templates | <!-- markdownlint-disable-next-line first-line-heading -->
## BUG DESCRIPTION
### Problem
Exporting (downloading) PeakGroup data from advanced search table, or from the "Download" button at the top of page does not include all rows.
### Steps to reproduce
downloading all peakgroups data
1. tracebase-dev.princeton.edu
2. click Download > all PeakGroups data
downloading subset of peakgroups data
1. tracebase-dev.princeton.edu
2. Advanced Search
3. change fields to "Study" "contains" "ob"
4. page indicates 16395 Rows
5. click blue "export data" button to download tsv
6. downloaded tsv only includes 1126 rows (when copy pasted from notepad to excel)
### Current behavior
Some downloads of PeakGroups do not include expected number of rows of data. For example, the ob/ob dataset is missing some rows (like compounds where labeled element is N). Downloading all peakgroups data completes after 33kb, when true dataset should be in mb or bigger.
### Expected behavior
Expected full download of all PeakGroups associated with filtered selection.
### Suggested Change
Maybe the download features need to be updated to match PeakGroup selections?
### Comment
This problem appears to be specific to PeakGroups. Fcirc and PeakData seem to be downloaded properly.
Downloading the full dataset from current version of main site tracebase.princeton.edu seems to work properly.
-----
## ISSUE OWNER SECTION
### Assumptions
- List of assumptions made WRT the code
- E.g. We will assume input is correct (explaining why there is no validation)
### Requirements
- [ ] 1. List of numbered conditions to be met for the feature
- [ ] 2. E.g. Every column/row must display a value, i.e. cannot be empty
- [ ] 3. Numbers for reference & checkboxes for progress tracking
### Limitations
- A list of things this work will specifically not do
- E.g. This feature will only handle the most frequent use case X
### Affected Components
A list of repository items, dependencies, etc, labeled with add, delete, or
change. One item per line. (Mostly, this will be a list of files.)
- change: File path or DB table ...
- add: Environment variable or server setting
- delete: External executable or cron job
### DESIGN
#### GUI Change description
Describe changes the user will see.
#### Code Change Description
Describe code changes planned for the feature. *(Pseudocode encouraged)*
#### Tests
- [ ] 1. A description of at least one test for each requirement above.
- [ ] 2. E.g. Test for req 2 that there's an exception when display value is ''
- [ ] 3. Numbers for reference & checkboxes for progress tracking
| 1.0 | Downloaded PeakGroup data is incomplete - <!-- markdownlint-disable-next-line first-line-heading -->
## BUG DESCRIPTION
### Problem
Exporting (downloading) PeakGroup data from advanced search table, or from the "Download" button at the top of page does not include all rows.
### Steps to reproduce
downloading all peakgroups data
1. tracebase-dev.princeton.edu
2. click Download > all PeakGroups data
downloading subset of peakgroups data
1. tracebase-dev.princeton.edu
2. Advanced Search
3. change fields to "Study" "contains" "ob"
4. page indicates 16395 Rows
5. click blue "export data" button to download tsv
6. downloaded tsv only includes 1126 rows (when copy pasted from notepad to excel)
### Current behavior
Some downloads of PeakGroups do not include expected number of rows of data. For example, the ob/ob dataset is missing some rows (like compounds where labeled element is N). Downloading all peakgroups data completes after 33kb, when true dataset should be in mb or bigger.
### Expected behavior
Expected full download of all PeakGroups associated with filtered selection.
### Suggested Change
Maybe the download features need to be updated to match PeakGroup selections?
### Comment
This problem appears to be specific to PeakGroups. Fcirc and PeakData seem to be downloaded properly.
Downloading the full dataset from current version of main site tracebase.princeton.edu seems to work properly.
-----
## ISSUE OWNER SECTION
### Assumptions
- List of assumptions made WRT the code
- E.g. We will assume input is correct (explaining why there is no validation)
### Requirements
- [ ] 1. List of numbered conditions to be met for the feature
- [ ] 2. E.g. Every column/row must display a value, i.e. cannot be empty
- [ ] 3. Numbers for reference & checkboxes for progress tracking
### Limitations
- A list of things this work will specifically not do
- E.g. This feature will only handle the most frequent use case X
### Affected Components
A list of repository items, dependencies, etc, labeled with add, delete, or
change. One item per line. (Mostly, this will be a list of files.)
- change: File path or DB table ...
- add: Environment variable or server setting
- delete: External executable or cron job
### DESIGN
#### GUI Change description
Describe changes the user will see.
#### Code Change Description
Describe code changes planned for the feature. *(Pseudocode encouraged)*
#### Tests
- [ ] 1. A description of at least one test for each requirement above.
- [ ] 2. E.g. Test for req 2 that there's an exception when display value is ''
- [ ] 3. Numbers for reference & checkboxes for progress tracking
| non_defect | downloaded peakgroup data is incomplete bug description problem exporting downloading peakgroup data from advanced search table or from the download button at the top of page does not include all rows steps to reproduce downloading all peakgroups data tracebase dev princeton edu click download all peakgroups data downloading subset of peakgroups data tracebase dev princeton edu advanced search change fields to study contains ob page indicates rows click blue export data button to download tsv downloaded tsv only includes rows when copy pasted from notepad to excel current behavior some downloads of peakgroups do not include expected number of rows of data for example the ob ob dataset is missing some rows like compounds where labeled element is n downloading all peakgroups data completes after when true dataset should be in mb or bigger expected behavior expected full download of all peakgroups associated with filtered selection suggested change maybe the download features need to be updated to match peakgroup selections comment this problem appears to be specific to peakgroups fcirc and peakdata seem to be downloaded properly downloading the full dataset from current version of main site tracebase princeton edu seems to work properly issue owner section assumptions list of assumptions made wrt the code e g we will assume input is correct explaining why there is no validation requirements list of numbered conditions to be met for the feature e g every column row must display a value i e cannot be empty numbers for reference checkboxes for progress tracking limitations a list of things this work will specifically not do e g this feature will only handle the most frequent use case x affected components a list of repository items dependencies etc labeled with add delete or change one item per line mostly this will be a list of files change file path or db table add environment variable or server setting delete external executable or cron job design gui change description describe changes the user will see code change description describe code changes planned for the feature pseudocode encouraged tests a description of at least one test for each requirement above e g test for req that there s an exception when display value is numbers for reference checkboxes for progress tracking | 0
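A common cause of the symptom above ("16395 rows shown, 1126 exported") is an exporter that serializes the paginated view instead of the full filtered result set. A schematic contrast of the two shapes — purely illustrative, not TraceBase's actual code:

```python
def export_page(filtered_rows, page, page_size=1126):
    """Buggy shape: only the rows currently rendered get exported."""
    start = page * page_size
    return filtered_rows[start:start + page_size]

def export_all(filtered_rows):
    """Fixed shape: the export walks the entire filtered set."""
    return list(filtered_rows)
```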
22,207 | 3,618,679,754 | IssuesEvent | 2016-02-08 12:53:49 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | Cakephp 3.0 Shell Progress bar offset incorrectly | console Defect On hold Windows | When I am using the cake3 progress bar in the shell, it 'works', but it is displaying it starting on the far right of my screen:

it seems to be either:
- indenting my progress bar by 116 px
- backspacing two spaces (causing it to appear on the previous line of the page to the far right), and then printing my progress bar
Thanks.
JM | 1.0 | Cakephp 3.0 Shell Progress bar offset incorrectly - When I am using the cake3 progress bar in the shell, it 'works', but it is displaying it starting on the far right of my screen:

it seems to be either:
- indenting my progress bar by 116 px
- backspacing two spaces (causing it to appear on the previous line of the page to the far right), and then printing my progress bar
Thanks.
JM | defect | cakephp shell progress bar offset incorrectly when i am using the progress bar in the shell it works but it is displaying it starting on the far right of my screen it seems to be either indenting my progress bar by px backspacing two spaces causing it to appear on the previous line of the page to the far right and then printing my progress bar thanks jm | 1 |
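The drift in the screenshot is consistent with redrawing by emitting backspaces, which lands wrong whenever the previous frame's length differs. A renderer that anchors every frame with a carriage return cannot drift, since `\r` always returns to column 0. A generic sketch of that approach (not CakePHP's actual helper code):

```python
def render_frame(fraction, width=20):
    """Render one progress-bar frame anchored at column 0."""
    filled = int(round(fraction * width))
    bar = "=" * filled + " " * (width - filled)
    # The leading \r rewinds to the line start before drawing, so the
    # bar never inherits an offset from whatever was printed before it.
    return "\r[{}] {:3.0f}%".format(bar, fraction * 100)
```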
14,989 | 2,834,414,134 | IssuesEvent | 2015-05-26 04:23:09 | ibus/ibus | https://api.github.com/repos/ibus/ibus | reopened | type "icka" does not produce "IC卡" | Component-ibus-pinyin Priority-Medium Type-Defect | ```
I am using ibus 1.3.7. Inside phrases.txt, there is a line stating icka=IC卡. However,
when I type icka, ibus does not display "IC卡" as a candidate. It seems to me that
for those phrases starting with "i", ibus refuses to display it in the candidate window
```
Original issue reported on code.google.com by `FreeToGo` on 2010-09-14 07:34:56 | 1.0 | type "icka" does not produce "IC卡" - ```
I am using ibus 1.3.7. Inside phrases.txt, there is a line stating icka=IC卡. However,
when I type icka, ibus does not display "IC卡" as a candidate. It seems to me that
for those phrases starting with "i", ibus refuses to display it in the candidate window
```
Original issue reported on code.google.com by `FreeToGo` on 2010-09-14 07:34:56 | defect | type icka does not produce ic卡 i am using ibus inside phrases txt there is a line stating icka ic卡 however when i type icka ibus does not display ic卡 as an candidate it seems to me that for those phrases starting with i ibus refuse to display it in the candidate window original issue reported on code google com by freetogo on | 1 |
82,072 | 31,926,774,993 | IssuesEvent | 2023-09-19 02:49:41 | snuailab/waffle_hub | https://api.github.com/repos/snuailab/waffle_hub | closed | A dependency issue has been identified between the numpy and torchmetric libraries. | defect | ### Search before asking
- [X] I have searched the [issues](https://github.com///issues) and found no similar bug report.
### Select Component
_No response_
### Bug
When the numpy version is 1.24 or higher, a dependency issue arises with torchmetric version 1.0.0 or higher.
Switching the numpy version to 1.23.5 resolves the issue and allows for normal operation.
reason:
np.float (deprecated) -> np.float64
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_ | 1.0 | A dependency issue has been identified between the numpy and torchmetric libraries. - ### Search before asking
- [X] I have searched the [issues](https://github.com///issues) and found no similar bug report.
### Select Component
_No response_
### Bug
When the numpy version is 1.24 or higher, a dependency issue arises with torchmetric version 1.0.0 or higher.
Switching the numpy version to 1.23.5 resolves the issue and allows for normal operation.
reason:
np.float (deprecated) -> np.float64
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_ | defect | a dependency issue has been identified between the numpy and torchmetric libraries search before asking i have searched the and found no similar bug report select component no response bug when the numpy version is or higher a dependency issue arises with torchmetric version or higher switching the numpy version to resolves the issue and allows for normal operation reason np float deprecated np environment no response minimal reproducible example no response additional no response | 1 |
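For context on the report above: numpy 1.24 removed the long-deprecated `np.float` alias, so code that still references it (as older torchmetrics releases did) raises `AttributeError`; moving to `np.float64` — or pinning numpy 1.23.5 — both avoid it. A dependency-free sketch of a defensive lookup, using stand-in namespaces instead of real numpy so it runs under any installed version:

```python
from types import SimpleNamespace

def resolve_float_dtype(np_like):
    """Return a usable float dtype whether or not the removed
    ``float`` alias still exists on the module."""
    return getattr(np_like, "float", None) or getattr(np_like, "float64", float)

# Stand-ins for numpy before and after the 1.24 removal.
numpy_pre_124 = SimpleNamespace(float=float, float64=float)
numpy_124 = SimpleNamespace(float64=float)  # no ``float`` attribute
```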
16,933 | 2,964,591,621 | IssuesEvent | 2015-07-10 17:33:05 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | Dart2dart does not rename malformed types | Area-Dart2Dart NotPlanned Priority-Unassigned Triaged Type-Defect | This program:
lib.dart:
class C {}
main.dart:
import "lib.dart" as lib;
void main() {
lib.F a = new lib.C();
}
Compiled with:
> dart2js main.dart --output-type=dart
Gives this output code:
class C{}void main(){lib.F a=new C();}
//# sourceMappingURL=out.dart.map
//@ sourceMappingURL=out.dart.map
Which fails to run because dart does not know the lib prefix:
> dart out.dart
'/out.dart': error: line 1 pos 31: semicolon expected
class C<T>{}void main(){lib.F a=new C();} | 1.0 | Dart2dart does not rename malformed types - This program:
lib.dart:
class C {}
main.dart:
import "lib.dart" as lib;
void main() {
lib.F a = new lib.C();
}
Compiled with:
> dart2js main.dart --output-type=dart
Gives this output code:
class C{}void main(){lib.F a=new C();}
//# sourceMappingURL=out.dart.map
//@ sourceMappingURL=out.dart.map
Which fails to run because dart does not know the lib prefix:
> dart out.dart
'/out.dart': error: line 1 pos 31: semicolon expected
class C<T>{}void main(){lib.F a=new C();} | defect | does not rename malformed types this program lib dart class c main dart import quot lib dart quot as lib void main nbsp nbsp lib f a new lib c compiled with gt main dart output type dart gives this output code class c void main lib f a new c sourcemappingurl out dart map sourcemappingurl out dart map which fails to run because dart does not know the lib prefix gt dart out dart out dart error line pos semicolon expected class c lt t gt void main lib f a new c | 1 |
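Editor's note: the defect in the record above is that resolved types lose their library prefix (`lib.C` becomes `C`) while the malformed `lib.F` keeps it, so the emitted program references an undefined `lib` prefix. A hypothetical Python sketch of the expected behaviour; this is an illustration only, not dart2js's actual renamer:

```python
def emit_type_reference(prefixed_name: str) -> str:
    """Drop the import prefix when emitting a type in Dart-to-Dart output.

    The prefix must be dropped even when the type does not resolve
    (a malformed type); otherwise the emitted program references an
    undefined prefix such as "lib" and fails to parse.
    """
    _prefix, dot, name = prefixed_name.rpartition(".")
    return name if dot else prefixed_name

print(emit_type_reference("lib.C"))  # -> C   (already handled by the compiler)
print(emit_type_reference("lib.F"))  # -> F   (the malformed case from the report)
```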
300,498 | 25,972,417,968 | IssuesEvent | 2022-12-19 12:21:00 | gear-tech/gear | https://api.github.com/repos/gear-tech/gear | closed | Tests in `gear-test` fail if the repo is cloned under another name | C0-bug D4-test A4-insubstantial | ### Problem
-
### Steps
1.
2.
3.
### Possible Solution
use `CARGO_MANIFEST_DIR`
### Notes
_No response_
### Relevant Log Output
<details><summary>Click to expand/collapse</summary>
<p>
```
thread 'js::tests::check_vec' panicked at 'Could not find file: "Gear root directory not found"', gear-test/src/js/mod.rs:283:41
```
</p>
</details>
| 1.0 | Tests in `gear-test` fail if the repo cloned with another name - ### Problem
-
### Steps
1.
2.
3.
### Possible Solution
use `CARGO_MANIFEST_DIR`
### Notes
_No response_
### Relevant Log Output
<details><summary>Click to expand/collapse</summary>
<p>
```
thread 'js::tests::check_vec' panicked at 'Could not find file: "Gear root directory not found"', gear-test/src/js/mod.rs:283:41
```
</p>
</details>
| non_defect | tests in gear test fail if the repo cloned with another name problem steps possible solution use cargo manifest dir notes no response relevant log output click to expand collapse thread js tests check vec panicked at could not find file gear root directory not found gear test src js mod rs | 0 |
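Editor's note: the record's suggested fix, `CARGO_MANIFEST_DIR`, avoids assuming the checkout is literally named `gear`. The same idea, searching upward for a marker file, sketched in Python; the marker name and the injectable `exists` hook are illustrative assumptions:

```python
from pathlib import Path

def find_manifest_dir(start: Path, marker: str = "Cargo.toml", exists=None) -> Path:
    """Walk upward from *start* to the first directory containing *marker*.

    Resolving fixture paths relative to the manifest, instead of a
    hard-coded repository directory name, works whatever the checkout
    is called. The *exists* hook is injectable purely so the sketch is
    testable without touching the filesystem.
    """
    exists = exists or (lambda p: p.exists())
    for candidate in [start, *start.parents]:
        if exists(candidate / marker):
            return candidate
    raise FileNotFoundError(f"{marker} not found above {start}")
```

In the Rust code itself, the compile-time `env!("CARGO_MANIFEST_DIR")` provides the same anchor directly.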
24,221 | 7,468,482,647 | IssuesEvent | 2018-04-02 19:08:37 | dart-lang/build | https://api.github.com/repos/dart-lang/build | closed | If there are 2 targets we only whitelist files which are included in any `includes` for a target. | package: build_runner | We need to treat any omitted `include` as an indication to fallback to the default whitelist - we currently only do that if there is exactly 1 target.
cc @nshahan | 1.0 | If there are 2 targets we only whitelist files which are included in any `includes` for a target. - We need to treat any omitted `include` as an indication to fallback to the default whitelist - we currently only do that if there is exactly 1 target.
cc @nshahan | non_defect | if there are targets we only whitelist files which are included in any includes for a target we need to treat any omitted include as an indication to fallback to the default whitelist we currently only do that if there is exactly target cc nshahan | 0 |
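Editor's note: the rule requested above (an omitted `include` always falls back to the default whitelist, regardless of how many targets exist) is small enough to state in code. A hypothetical Python sketch; the default list and target shape are assumptions, not build_runner's real configuration model:

```python
DEFAULT_SOURCES = ["lib/**", "web/**", "test/**"]  # assumed default whitelist

def effective_sources(target: dict) -> list:
    """Return a target's include list, falling back to the defaults.

    The fallback is decided per target, independent of how many
    targets the build configuration defines.
    """
    if "include" in target:
        return target["include"]
    return DEFAULT_SOURCES

print(effective_sources({}))                       # falls back to the defaults
print(effective_sources({"include": ["bin/**"]}))  # -> ['bin/**']
```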
60,421 | 17,023,421,534 | IssuesEvent | 2021-07-03 01:56:48 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | [PATCH] Missing bar when not logged in | Component: website Priority: trivial Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 9.34pm, Monday, 8th June 2009]**
The bar between the login and the sign up link is missing. Patch attached. | 1.0 | [PATCH] Missing bar when not logged in - **[Submitted to the original trac issue database at 9.34pm, Monday, 8th June 2009]**
The bar between the login and the sign up link is missing. Patch attached. | defect | missing bar when not logged in the bar between the login and the sign up link is missing patch attached | 1 |
629,040 | 20,021,754,680 | IssuesEvent | 2022-02-01 17:00:58 | Energy-Innovation/eps-us | https://api.github.com/repos/Energy-Innovation/eps-us | opened | Demand response capacity and generation should reduce peaker capacity factors and generation | low priority 3.3.2 | Currently demand response deployment in the model can reduce emissions by avoiding the need for new peaker plants that run at a fixed capacity factor and/or adding grid flexibility that avoids curtailment. But if there is no demand for new peaker plants and there is no curtailment, the addition of DR doesn't result in any emissions reductions.
In reality, adding DR to the system would result in decreased dispatch of peaker plants. We should account for this emissions reduction.
The EIA has data on capacity and energy saved from DR in form 861. We could use this to develop a capacity factor for demand response and multiply it by the total installed capacity in a given year to find the MWh of DR dispatched in given year. We would then need to reduce the capacity factors of peaker plants to account for the increased dispatch from demand response. We ought to reduce to electricity generation by the same amount, since DR _avoids_ generation.
Though it may not show up as a huge amount in the current US model, DR is being heavily deployed in policy scenarios and we would expect it to have some emissions reductions even if there's no need for new peakers. | 1.0 | Demand response capacity and generation should reduce peaker capacity factors and generation - Currently demand response deployment in the model can reduce emissions by avoiding the new for new peaker plants that run at a fixed capacity factor and/or adding grid flexibility that avoids curtailment. But if there is no demand for new peaker plants and there is not curtailment, the addition of DR doesn't result in any emissions reductions.
In reality, adding DR to the system would result in decreased dispatch of peaker plants. We should account for this emissions reduction.
The EIA has data on capacity and energy saved from DR in form 861. We could use this to develop a capacity factor for demand response and multiply it by the total installed capacity in a given year to find the MWh of DR dispatched in given year. We would then need to reduce the capacity factors of peaker plants to account for the increased dispatch from demand response. We ought to reduce to electricity generation by the same amount, since DR _avoids_ generation.
Though it may not show up as a huge amount in the current US model, DR is being heavily deployed in policy scenarios and we would expect it to have some emissions reductions even if there's no need for new peakers. | non_defect | demand response capacity and generation should reduce peaker capacity factors and generation currently demand response deployment in the model can reduce emissions by avoiding the new for new peaker plants that run at a fixed capacity factor and or adding grid flexibility that avoids curtailment but if there is no demand for new peaker plants and there is not curtailment the addition of dr doesn t result in any emissions reductions in reality adding dr to the system would result in decreased dispatch of peaker plants we should account for this emissions reduction the eia has data on capacity and energy saved from dr in form we could use this to develop a capacity factor for demand response and multiply it by the total installed capacity in a given year to find the mwh of dr dispatched in given year we would then need to reduce the capacity factors of peaker plants to account for the increased dispatch from demand response we ought to reduce to electricity generation by the same amount since dr avoids generation though it may not show up as a huge amount in the current us model dr is being heavily deployed in policy scenarios and we would expect it to have some emissions reductions even if there s no need for new peakers | 0 |
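Editor's note: the accounting proposed in the record (DR capacity times capacity factor gives MWh displaced, which then lowers peaker capacity factors and generation) can be written out directly. A sketch with placeholder numbers; the capacity factors are illustrative, not EIA Form 861 values:

```python
HOURS_PER_YEAR = 8760

def dr_generation_mwh(dr_capacity_mw: float, dr_capacity_factor: float) -> float:
    """Annual generation avoided by demand response."""
    return dr_capacity_mw * dr_capacity_factor * HOURS_PER_YEAR

def adjusted_peaker_cf(peaker_capacity_mw: float, peaker_cf: float, dr_mwh: float) -> float:
    """Peaker capacity factor after DR displaces part of its dispatch."""
    peaker_mwh = peaker_capacity_mw * peaker_cf * HOURS_PER_YEAR
    remaining_mwh = max(peaker_mwh - dr_mwh, 0.0)
    return remaining_mwh / (peaker_capacity_mw * HOURS_PER_YEAR)

dr_mwh = dr_generation_mwh(100, 0.05)  # 100 MW of DR at a 5% CF -> 43,800 MWh
print(dr_mwh)
print(adjusted_peaker_cf(1000, 0.10, dr_mwh))  # peaker CF falls below 0.10
```

Electricity generation would be reduced by the same `dr_mwh`, since DR avoids generation rather than supplying it.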
16,655 | 12,093,637,279 | IssuesEvent | 2020-04-19 20:27:03 | DigitalExcellence/dex-epics | https://api.github.com/repos/DigitalExcellence/dex-epics | opened | How can we facilitate the infrastructure? | epic infrastructure ongoing | In GitLab by @Brend-Smits on Mar 4, 2020, 20:36
* What hosting should be used? dex-backend#16
* How can we assure that it's easily maintainable for future groups?
* How can we assure that governance can easily 'sell' the platform to internal IT from Fontys?
* How can we easily scale the platform? dex-backend#25
* Set up a pipeline and release every sprint dex-backend#18 | 1.0 | How can we facilitate the infrastructure? - In GitLab by @Brend-Smits on Mar 4, 2020, 20:36
* What hosting should be used? dex-backend#16
* How can we assure that it's easily maintainable for future groups?
* How can we assure that governance can easily 'sell' the platform to internal IT from Fontys?
* How can we easily scale the platform? dex-backend#25
* Set up a pipeline and release every sprint dex-backend#18 | non_defect | how can we facilitate the infrastructure in gitlab by brend smits on mar what hosting should be used dex backend how can we assure that it s easily maintainable for future groups how can we assure that governance can easily sell the platform to internal it from fontys how can we easily scale the platform dex backend set up a pipeline and release every sprint dex backend | 0 |
397,843 | 27,179,760,124 | IssuesEvent | 2023-02-18 13:23:37 | pyo92/project-lottery | https://api.github.com/repos/pyo92/project-lottery | closed | Lotto 6/45 domain design change | documentation enhancement | While implementing and testing the scraping feature, it became clear that the domain model needed revision.
This is reflected by revising the previously implemented entity class and ERD. | 1.0 | Lotto 6/45 domain design change - While implementing and testing the scraping feature, it became clear that the domain model needed revision.
This is reflected by revising the previously implemented entity class and ERD. | non_defect | lotto 6/45 domain design change while implementing and testing the scraping feature it became clear that the domain model needed revision this is reflected by revising the previously implemented entity class and erd | 0
25,358 | 25,051,413,703 | IssuesEvent | 2022-11-05 23:28:23 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Can't select more than one anchor point | enhancement topic:editor confirmed usability | When editing a polygon - like Polygon2D or CollisionPolygon2D - and using the box selection tool, the only things that can be selected are whole objects; you can't select just a couple of anchor points to move them simultaneously.
It should be done in this way:
- drag a selection box to select anchors
- click on one of them and drag or use arrow keys to move them
like in Adobe Illustrator or any other vector graphics editor
| True | Can't select more then one anchor point - When editing some polygon - like Polygon2D or CollisionPolygon2D and using box selection tool, the only thing that can be selected are whole objects, but can't select just a couple of anchor points to move them simultaneously.
It should be done in this way:
- drag a selection box to select anchors
- click on one of them and drag or use arrow keys to move them
like in Adobe Illustrator or any other vector graphics editor
| non_defect | can t select more then one anchor point when editing some polygon like or and using box selection tool the only thing that can be selected are whole objects but can t select just a couple of anchor points to move them simultaneously it should be done in this way drag a selection box to select anchors click on one of them and drag or use arrow keys to move them like in adobe illustrator or any other vector graphics editor | 0 |
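Editor's note: the requested behaviour reduces to a point-in-rectangle test for the drag box plus a shared translation for the selected anchors. A small sketch; the function names are illustrative, not Godot's editor API:

```python
def anchors_in_box(points, corner_a, corner_b):
    """Return the polygon anchors inside the drag rectangle (inclusive)."""
    x0, x1 = sorted((corner_a[0], corner_b[0]))
    y0, y1 = sorted((corner_a[1], corner_b[1]))
    return [p for p in points if x0 <= p[0] <= x1 and y0 <= p[1] <= y1]

def move_anchors(points, selected, dx, dy):
    """Translate only the selected anchors by (dx, dy)."""
    chosen = set(selected)
    return [(x + dx, y + dy) if (x, y) in chosen else (x, y) for x, y in points]

poly = [(0, 0), (4, 0), (4, 4), (0, 4)]
sel = anchors_in_box(poly, (3, -1), (5, 5))  # grabs the right edge: (4,0), (4,4)
print(move_anchors(poly, sel, 1, 0))         # -> [(0, 0), (5, 0), (5, 4), (0, 4)]
```

Arrow-key movement is the same `move_anchors` call with a one-pixel (or grid-step) delta.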
175,637 | 21,316,813,640 | IssuesEvent | 2022-04-16 12:27:39 | serhii73/place2live.com | https://api.github.com/repos/serhii73/place2live.com | closed | CVE-2020-14343 (High) detected in PyYAML-5.3.1.tar.gz | wontfix security vulnerability | ## CVE-2020-14343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>PyYAML-5.3.1.tar.gz</b></p></summary>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz">https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz</a></p>
<p>Path to dependency file: place2live.com/requirements.txt</p>
<p>Path to vulnerable library: place2live.com/requirements.txt</p>
<p>
Dependency Hierarchy:
- pre_commit-1.20.0-py2.py3-none-any.whl (Root Library)
- aspy.yaml-1.3.0-py2.py3-none-any.whl
- :x: **PyYAML-5.3.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/serhii73/place2live.com/commit/a8bb441204660cfeb049cb08c4014e4cb66eed1c">a8bb441204660cfeb049cb08c4014e4cb66eed1c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was discovered in the PyYAML library in all versions, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. .load() defaults to using FullLoader and FullLoader is still vulnerable to RCE when run on untrusted input. Applications that use the library to process untrusted input may be vulnerable to this flaw. An attacker could use this flaw to execute arbitrary code on the system by abusing the python/object/new constructor.
The fix for CVE-2020-1747 was not enough to fix this issue.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343>CVE-2020-14343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-14343 (High) detected in PyYAML-5.3.1.tar.gz - ## CVE-2020-14343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>PyYAML-5.3.1.tar.gz</b></p></summary>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz">https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz</a></p>
<p>Path to dependency file: place2live.com/requirements.txt</p>
<p>Path to vulnerable library: place2live.com/requirements.txt</p>
<p>
Dependency Hierarchy:
- pre_commit-1.20.0-py2.py3-none-any.whl (Root Library)
- aspy.yaml-1.3.0-py2.py3-none-any.whl
- :x: **PyYAML-5.3.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/serhii73/place2live.com/commit/a8bb441204660cfeb049cb08c4014e4cb66eed1c">a8bb441204660cfeb049cb08c4014e4cb66eed1c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was discovered in the PyYAML library in all versions, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. .load() defaults to using FullLoader and FullLoader is still vulnerable to RCE when run on untrusted input. Applications that use the library to process untrusted input may be vulnerable to this flaw. An attacker could use this flaw to execute arbitrary code on the system by abusing the python/object/new constructor.
The fix for CVE-2020-1747 was not enough to fix this issue.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343>CVE-2020-14343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in pyyaml tar gz cve high severity vulnerability vulnerable library pyyaml tar gz yaml parser and emitter for python library home page a href path to dependency file com requirements txt path to vulnerable library com requirements txt dependency hierarchy pre commit none any whl root library aspy yaml none any whl x pyyaml tar gz vulnerable library found in head commit a href vulnerability details a vulnerability was discovered in the pyyaml library in all versions where it is susceptible to arbitrary code execution when it processes untrusted yaml files through the full load method or with the fullloader loader load defaults to using fullloader and fullloader is still vulnerable to rce when run on untrusted input applications that use the library to process untrusted input may be vulnerable to this flaw an attacker could use this flaw to execute arbitrary code on the system by abusing the python object new constructor the fix for cve was not enough to fix this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
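Editor's note: the advisory above is one instance of a general rule: deserializers that can construct arbitrary objects (`FullLoader`/`full_load` on a vulnerable PyYAML, or `pickle`) must never see untrusted input; `yaml.safe_load` is the safe PyYAML call. A stdlib-only analogue of the safe choice, sketched in Python so it carries no third-party dependency:

```python
import json

def parse_untrusted(payload: str):
    """Parse untrusted text with a data-only format.

    json.loads can only build dicts, lists, strings, numbers, booleans
    and None, the same guarantee yaml.safe_load gives for YAML, so no
    attacker-chosen constructor can run during parsing.
    """
    return json.loads(payload)

print(parse_untrusted('{"safe": true}'))  # -> {'safe': True}
```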
23,922 | 3,872,598,917 | IssuesEvent | 2016-04-11 14:24:36 | KytechN24/xbox360wirelesschatpad | https://api.github.com/repos/KytechN24/xbox360wirelesschatpad | closed | Controller out of control | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.the Controller out of control
2.always push ↘
3.Most of key are not be identified
What version of the product are you using? On what operating system?
Win7 64bit
Please provide any additional information below.
hi, this app is so cooooooool!~~ I so excited to use chatpad on PC, thx for
your work.
I use the mouse mode on PC, it's works very will. When I push hold down
LT+RT+back to turn off the mouse mode, the Controller out of control and always
push ↘. Most of key are not be identified.
PS, I still don't understand how to setting the FFXIV...
I'm sorry for my bad English. And your APP is so great! Thanks a lot!~~
```
Original issue reported on code.google.com by `shindoun...@gmail.com` on 17 May 2014 at 1:56 | 1.0 | Controller out of control - ```
What steps will reproduce the problem?
1.the Controller out of control
2.always push ↘
3.Most of key are not be identified
What version of the product are you using? On what operating system?
Win7 64bit
Please provide any additional information below.
hi, this app is so cooooooool!~~ I so excited to use chatpad on PC, thx for
your work.
I use the mouse mode on PC, it's works very will. When I push hold down
LT+RT+back to turn off the mouse mode, the Controller out of control and always
push ↘. Most of key are not be identified.
PS, I still don't understand how to setting the FFXIV...
I'm sorry for my bad English. And your APP is so great! Thanks a lot!~~
```
Original issue reported on code.google.com by `shindoun...@gmail.com` on 17 May 2014 at 1:56 | defect | controller out of control what steps will reproduce the problem the controller out of control always push ↘ most of key are not be identified what version of the product are you using on what operating system please provide any additional information below hi this app is so cooooooool i so excited to use chatpad on pc thx for your work i use the mouse mode on pc it s works very will when i push hold down lt rt back to turn off the mouse mode the controller out of control and always push ↘ most of key are not be identified ps i still don t understand how to setting the ffxiv i m sorry for my bad english and your app is so great thanks a lot original issue reported on code google com by shindoun gmail com on may at | 1 |
60,380 | 14,542,467,706 | IssuesEvent | 2020-12-15 15:44:58 | GooseWSS/kittydar | https://api.github.com/repos/GooseWSS/kittydar | opened | CVE-2020-11023 (Medium) detected in jquery-1.7.1.min.js | security vulnerability | ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: kittydar/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: kittydar/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/GooseWSS/kittydar/commit/d47cdc79e976369ea4b8d754bfbe5c6578421393">d47cdc79e976369ea4b8d754bfbe5c6578421393</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.7.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jquery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11023","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-11023 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: kittydar/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: kittydar/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/GooseWSS/kittydar/commit/d47cdc79e976369ea4b8d754bfbe5c6578421393">d47cdc79e976369ea4b8d754bfbe5c6578421393</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.7.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jquery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11023","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_defect | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file kittydar node modules vm browserify example run index html path to vulnerable library kittydar node modules vm browserify example run index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact 
low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery vulnerabilityurl | 0 |
49,167 | 6,015,547,126 | IssuesEvent | 2017-06-07 02:37:17 | ushahidi/platform | https://api.github.com/repos/ushahidi/platform | closed | Add "In progress" or "Waiting" bar to notify users of actions happening in the background. | Feature request P0 - Unbreak now! Stage: Testing | Feedback from user.
"When we click "Invia" (or SEND), there is no "message" like waiting or rounding wheel or similar... this means that people click on the button twice or thrice. It is possible to have a pop-up or a message for waiting?"
User complained of duplication of submissions because no feedback was being given to a user on success of e.g submission of a post, creation of a field | 1.0 | Add "In progress" or "Waiting" bar to notify users of actions happening in the background. - Feedback from user.
"When we click "Invia" (or SEND), there is no "message" like waiting or rounding wheel or similar... this means that people click on the button twice or thrice. It is possible to have a pop-up or a message for waiting?"
User complained of duplication of submissions because no feedback was being given to a user on success of e.g submission of a post, creation of a field | non_defect | add in progress or waiting bar to notify users of actions happening in the background feedback from user when we click invia or send there is no message like waiting or rounding wheel or similar this means that people click on the button twice or thrice it is possible to have a pop up or a message for waiting user complained of duplication of submissions because no feedback was being given to a user on success of e g submission of a post creation of a field | 0 |
50,797 | 13,187,750,049 | IssuesEvent | 2020-08-13 04:27:19 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | [dst] write header at end of every run (Trac #1383) | Migrated from Trac combo reconstruction defect | A rare edge case came up where the last subrun had less frames than the prescale, so was missing a dst header. The solution is to just write a header at the end of every run.
Implementation hints:
- probably need a buffer of 1 frame, so you can modify the last frame and push it on receiving a finish
Note: an alternative would be to change the MultiWriter so you could get a signal whenever switching to a new file, but this is unlikely to happen.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1383">https://code.icecube.wisc.edu/ticket/1383</a>, reported by david.schultz and owned by tschmidt</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:18",
"description": "A rare edge case came up where the last subrun had less frames than the prescale, so was missing a dst header. The solution is to just write a header at the end of every run.\n\nImplementation hints:\n- probably need a buffer of 1 frame, so you can modify the last frame and push it on receiving a finish\n\n\nNote: an alternative would be to change the MultiWriter so you could get a signal whenever switching to a new file, but this is unlikely to happen.",
"reporter": "david.schultz",
"cc": "blaufuss",
"resolution": "wontfix",
"_ts": "1550067318169976",
"component": "combo reconstruction",
"summary": "[dst] write header at end of every run",
"priority": "major",
"keywords": "",
"time": "2015-10-05T16:39:39",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [dst] write header at end of every run (Trac #1383) - A rare edge case came up where the last subrun had less frames than the prescale, so was missing a dst header. The solution is to just write a header at the end of every run.
Implementation hints:
- probably need a buffer of 1 frame, so you can modify the last frame and push it on receiving a finish
Note: an alternative would be to change the MultiWriter so you could get a signal whenever switching to a new file, but this is unlikely to happen.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1383">https://code.icecube.wisc.edu/ticket/1383</a>, reported by david.schultz and owned by tschmidt</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:18",
"description": "A rare edge case came up where the last subrun had less frames than the prescale, so was missing a dst header. The solution is to just write a header at the end of every run.\n\nImplementation hints:\n- probably need a buffer of 1 frame, so you can modify the last frame and push it on receiving a finish\n\n\nNote: an alternative would be to change the MultiWriter so you could get a signal whenever switching to a new file, but this is unlikely to happen.",
"reporter": "david.schultz",
"cc": "blaufuss",
"resolution": "wontfix",
"_ts": "1550067318169976",
"component": "combo reconstruction",
"summary": "[dst] write header at end of every run",
"priority": "major",
"keywords": "",
"time": "2015-10-05T16:39:39",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
| defect | write header at end of every run trac a rare edge case came up where the last subrun had less frames than the prescale so was missing a dst header the solution is to just write a header at the end of every run implementation hints probably need a buffer of frame so you can modify the last frame and push it on receiving a finish note an alternative would be to change the multiwriter so you could get a signal whenever switching to a new file but this is unlikely to happen migrated from json status closed changetime description a rare edge case came up where the last subrun had less frames than the prescale so was missing a dst header the solution is to just write a header at the end of every run n nimplementation hints n probably need a buffer of frame so you can modify the last frame and push it on receiving a finish n n nnote an alternative would be to change the multiwriter so you could get a signal whenever switching to a new file but this is unlikely to happen reporter david schultz cc blaufuss resolution wontfix ts component combo reconstruction summary write header at end of every run priority major keywords time milestone owner tschmidt type defect | 1 |
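The "buffer of 1 frame" hint in the report above can be sketched generically: hold each frame back until the next one arrives, so that when `finish` fires the run's last frame is still in hand and can be amended with the header. A Python sketch; the frame representation and header field are illustrative, not the IceTray API:

```python
class LastFrameBuffer:
    """Delay frames by one so the final frame can be amended on finish."""

    def __init__(self, push):
        self._push = push  # downstream writer
        self._held = None  # the single buffered frame

    def process(self, frame):
        if self._held is not None:
            self._push(self._held)  # release the previous frame untouched
        self._held = frame          # hold the newest frame back

    def finish(self):
        if self._held is not None:
            # amend the run's last frame, e.g. attach the DST header here
            self._held["dst_header"] = True
            self._push(self._held)
            self._held = None


out = []
buf = LastFrameBuffer(out.append)
for f in [{"id": 1}, {"id": 2}, {"id": 3}]:
    buf.process(f)
buf.finish()
assert out[-1] == {"id": 3, "dst_header": True}
```

Every frame except the last passes through unchanged; only the buffered final frame pays the cost of the extra hop.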
167,598 | 26,519,196,875 | IssuesEvent | 2023-01-19 00:10:30 | aws-controllers-k8s/community | https://api.github.com/repos/aws-controllers-k8s/community | closed | Semantics of destructive operations | kind/enhancement help wanted lifecycle/frozen kind/design | If an ACK user deletes a Kubernetes namespace that has, say, an S3 bucket custom resource in it, is the S3 bucket also deleted or not?
The answer to this question (and the respective UX) should follow the "no surprises" principle. IOW: opting into (forcing) destructive operations rather than silently cascading the delete from the Kubernetes cluster perimeter to AWS services. | 1.0 | Semantics of destructive operations - If an ACK user deletes a Kubernetes namespace that has, say, an S3 bucket custom resource in it, is the S3 bucket also deleted or not?
The answer to this question (and the respective UX) should follow the "no surprises" principle. IOW: opting into (forcing) destructive operations rather than silently cascading the delete from the Kubernetes cluster perimeter to AWS services. | non_defect | semantics of destructive operations if an ack user deletes a kubernetes namespace that has say an bucket custom resource in it is the bucket also deleted or not the answer to this question and the respective ux should follow the no surprises principle iow opting into forcing destructive operations rather than silently cascading the delete from the kubernetes cluster perimeter to aws services | 0 |
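The opt-in behaviour argued for above is usually expressed as an explicit deletion policy on the resource: nothing is destroyed on the AWS side unless the user asked for it. A hypothetical sketch; `deletion_policy` and the role of the default are illustrative, not ACK's actual API:

```python
from enum import Enum

class DeletionPolicy(Enum):
    RETAIN = "retain"  # default: deleting the CR leaves the AWS resource alone
    DELETE = "delete"  # user explicitly opted into cascading the delete

def on_custom_resource_deleted(resource, delete_aws_resource):
    """Follow the 'no surprises' principle: destructive calls are opt-in."""
    policy = resource.get("deletion_policy", DeletionPolicy.RETAIN)
    if policy is DeletionPolicy.DELETE:
        delete_aws_resource(resource["name"])
        return "deleted"
    return "retained"


calls = []
bucket = {"name": "my-bucket"}  # no policy set, so the safe default applies
assert on_custom_resource_deleted(bucket, calls.append) == "retained"
assert calls == []
```

Deleting the namespace then removes only the Kubernetes object by default; the S3 bucket survives unless `deletion_policy` was set to `DELETE`.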
57,899 | 16,134,869,411 | IssuesEvent | 2021-04-29 10:26:35 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Coerce should delegate all QueryPartInternal calls to the wrapped field | C: Functionality E: All Editions P: Medium T: Defect | When wrapping a `Field` with `Field.coerce(type)`, the resulting `org.jooq.impl.Coerce` implementation implements all `QueryPartInternal` flags using the default from `AbstractQueryPart`, when in fact things like `declaresFields()` should delegate to the wrapped field. | 1.0 | Coerce should delegate all QueryPartInternal calls to the wrapped field - When wrapping a `Field` with `Field.coerce(type)`, the resulting `org.jooq.impl.Coerce` implementation implements all `QueryPartInternal` flags using the default from `AbstractQueryPart`, when in fact things like `declaresFields()` should delegate to the wrapped field. | defect | coerce should delegate all querypartinternal calls to the wrapped field when wrapping a field with field coerce type the resulting org jooq impl coerce implementation implements all querypartinternal flags using the default from abstractquerypart when in fact things like declaresfields should delegate to the wrapped field | 1 |
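The fix described in the report above is the classic wrapper-delegation pattern: the coercing wrapper changes only the data type and forwards every other query-part question to the wrapped field. jOOQ itself is Java; this is a language-neutral Python sketch with illustrative names (`declares_fields` stands in for the `QueryPartInternal` flags):

```python
class Field:
    def __init__(self, name, type_):
        self.name, self.type = name, type_

    def declares_fields(self):
        return True  # stand-in for a QueryPartInternal-style flag

class Coerce:
    """Wraps a field, overriding only its data type."""

    def __init__(self, field, type_):
        self._field = field
        self.type = type_  # the coerced type replaces the original

    def __getattr__(self, attr):
        # everything else (flags, name, ...) is delegated to the wrapped field
        return getattr(self._field, attr)


f = Field("title", "varchar")
c = Coerce(f, "clob")
assert c.type == "clob"             # coercion changes the type...
assert c.declares_fields() is True  # ...but flags come from the wrapped field
assert c.name == "title"
```

Falling back to a generic base-class default (as `AbstractQueryPart` did) is exactly what the delegation in `__getattr__` avoids.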
387,899 | 26,743,733,109 | IssuesEvent | 2023-01-30 14:41:09 | ImperialCollegeLondon/paricia | https://api.github.com/repos/ImperialCollegeLondon/paricia | closed | System initialisation + docs | documentation | We have moved to a Dockerised container setup for running the system. We need to make sure everything is started up automatically and that there are simple instructions on how to initialise the system from scratch.
These tasks could be further broken down into individual issues.
- ~Load initial data into the database automatically when it is initialised (currently done by manually running `load_initial_data.py`)~ - Probably don't want this
- [x] Get initial roles with the right permissions loaded as an initial data migration
- [x] Make sure there is nothing in `docs/` that is still necessary, then remove those docs
- [x] Update docs (in README) on how to start the system and load initial data | 1.0 | System initialisation + docs - We have moved to a Dockerised container setup for running the system. We need to make sure everything is started up automatically and that there are simple instructions on how to initialise the system from scratch.
These tasks could be further broken down into individual issues.
- ~Load initial data into the database automatically when it is initialised (currently done by manually running `load_initial_data.py`)~ - Probably don't want this
- [x] Get initial roles with the right permissions loaded as an initial data migration
- [x] Make sure there is nothing in `docs/` that is still necessary, then remove those docs
- [x] Update docs (in README) on how to start the system and load initial data | non_defect | system initialisation docs we have moved to a dockerised container setup for running the system we need to make sure everything is started up automatically and that there are simple instructions on how to initialise the system from scratch these tasks could be further broken down into individual issues load initial data into the database automatically when it is initialised currently done my manually running load initial data py probably don t want this get initial roles with the right permissions loaded as an initial data migration make sure there is nothing in docs that is still necessary then remove those docs update docs in readme on how to start the system and load initial data | 0 |
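The "initial roles with the right permissions loaded as an initial data migration" item above boils down to idempotent seeding: running the migration twice must leave the same roles. A plain-Python model of that behaviour (role and permission names are illustrative, not Paricia's actual ones; the real version would be a Django data migration):

```python
def seed_roles(existing_roles):
    """Idempotently seed initial roles with their permission sets."""
    defaults = {
        "admin": {"add_station", "change_station", "delete_station"},
        "viewer": {"view_station"},
    }
    for name, perms in defaults.items():
        # create the role if missing, then make sure it has all default perms
        existing_roles.setdefault(name, set()).update(perms)
    return existing_roles


roles = seed_roles({})
assert roles["viewer"] == {"view_station"}
seed_roles(roles)  # running the migration a second time is safe
assert roles["admin"] == {"add_station", "change_station", "delete_station"}
```

In Django this logic would sit inside a `migrations.RunPython` operation so the roles exist on any fresh database without a manual step.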
13,972 | 2,789,816,241 | IssuesEvent | 2015-05-08 21:40:35 | google/google-visualization-api-issues | https://api.github.com/repos/google/google-visualization-api-issues | opened | ScatterChart logScaleX scales wrong/mis-labels x axis | Priority-Medium Type-Defect | Original [issue 195](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=195) created by orwant on 2010-02-10T19:15:46.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1.Create a ScatterChart
2.Supply values in the range (y axis) of 0.0 to 1.0
3.Supply values in the domain (x axis) of 1E-7 to 1E-11
4.View plot as it is, then enable logScaleX: true
5.The Automatic X scaling appears broken with all the data being compressed
to the left of the chart and the X axis labels are limited to 1 digit (0)
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
ScatterChart (logScaleX)
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
NO
<b>What operating system and browser are you using?</b>
Mac OS X, Firefox 3.5 + Safari
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| 1.0 | ScatterChart logScaleX scales wrong/mis-labels x axis - Original [issue 195](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=195) created by orwant on 2010-02-10T19:15:46.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1.Create a ScatterChart
2.Supply values in the range (y axis) of 0.0 to 1.0
3.Supply values in the domain (x axis) of 1E-7 to 1E-11
4.View plot as it is, then enable logScaleX: true
5.The Automatic X scaling appears broken with all the data being compressed
to the left of the chart and the X axis labels are limited to 1 digit (0)
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
ScatterChart (logScaleX)
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
NO
<b>What operating system and browser are you using?</b>
Mac OS X, Firefox 3.5 + Safari
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| defect | scatterchart logscalex scales wrong mis labels x axis original created by orwant on what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code create a scatterchart supply values in the range y axis of to supply values in the domain x axis of to view plot as it is then enable logscalex true the automatic x scaling appears broken with all the data being compressed to the left of the chart and the x axis labels are limited to digit what component is this issue related to piechart linechart datatable query etc scatterchart logscalex are you using the test environment version if you are not sure answer no no what operating system and browser are you using mac os x firefox safari for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved | 1 |
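The symptom in the report above (every x-axis label collapsing to the single digit "0") is what a naive fixed-decimal tick formatter produces for values around 1e-7 to 1e-11; a log-scale axis needs scientific or exponent-aware labels. An illustrative Python check, not the chart library's actual code:

```python
def fixed_label(value, digits=1):
    """Naive fixed-point label, as a broken axis formatter might produce."""
    return f"{round(value, digits):g}"

def log_label(value):
    """Log-scale-aware label in scientific notation."""
    return f"{value:.0e}"

ticks = [1e-7, 1e-8, 1e-9, 1e-10, 1e-11]
assert all(fixed_label(t) == "0" for t in ticks)  # every label collapses to "0"
assert log_label(1e-7) == "1e-07"                 # exponent formatting keeps ticks distinct
```

This matches the reported behaviour: the data is drawn, but the one-digit labels make the log axis unreadable.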
536,845 | 15,715,795,241 | IssuesEvent | 2021-03-28 03:26:44 | GC-spigot/AdvancedEnchantments | https://api.github.com/repos/GC-spigot/AdvancedEnchantments | closed | Feature for Custom Model Data Items [Request] | duplicate high-priority pending request |
Hello there! Before explaining why this feature would be very useful (in my opinion), I have to say sorry for my bad English, and I hope you understand what I am proposing.
So here is the idea: many servers have a texture pack and many custom looks for tools, items, etc., and it would be cool if the plugin supported custom-model-data items, so items like Orbs, White Scrolls, Black Scrolls, scrolls for books (which open with right click and give the different books) and all other item aspects of the plugin. This would add a lot more freedom for servers to create unique visual settings for the plugin.
The config option could look like " material: BONE or material: BONE#3
CustomModelData: 1 " (where the number is the CustomModelData value)
A lot of free and paid plugins have that option, and I think it would be a nice feature for this plugin. | 1.0 | Feature for Custom Model Data Items [Request] -
Hello there! Before to explain why this feature will be very usefull (on my opinion) I have to say sorry for my bad English and i hope you understand me what im proposing.
Soo here is the idea - Many servers have texture pack and many custom visions for tools/items and etc. and will be cool if the plugin supports custom model data items, so items like Orbs, White Scrolls, Black Scrolls, scrolls for books (which open with right click and gives the different books) and all other item aspects of the plugin. This will add a lot more freedom of the servers to create unique settings ( visual ) of that plugin.
The config feature can be like " material: BONE or material: BONE#3
CustomModelData: 1 " (where the number is the CMD)
A lot free plugins/paid has that option and i think this one can be a nice feature for one plugin. | non_defect | feature for custom model data items hello there before to explain why this feature will be very usefull on my opinion i have to say sorry for my bad english and i hope you understand me what im proposing soo here is the idea many servers have texture pack and many custom visions for tools items and etc and will be cool if the plugin supports custom model data items so items like orbs white scrolls black scrolls scrolls for books which open with right click and gives the different books and all other item aspects of the plugin this will add a lot more freedom of the servers to create unique settings visual of that plugin the config feature can be like material bone or material bone custommodeldata where the number is the cmd a lot free plugins paid has that option and i think this one can be a nice feature for one plugin | 0 |
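The `material: BONE#3` syntax proposed in the request above amounts to splitting an optional `#data` suffix off the material name and carrying the `CustomModelData` value alongside it. A hypothetical sketch of how a plugin could read that spec (function and field names are illustrative, not AdvancedEnchantments' actual config API):

```python
def parse_material(spec, custom_model_data=None):
    """Split 'BONE#3' into material plus data, and carry CustomModelData."""
    material, sep, data = spec.partition("#")
    return {
        "material": material,
        "data": int(data) if sep else 0,         # legacy durability/data value
        "custom_model_data": custom_model_data,  # the 1.14+ CustomModelData tag
    }


assert parse_material("BONE#3", custom_model_data=1) == {
    "material": "BONE", "data": 3, "custom_model_data": 1,
}
assert parse_material("BONE")["data"] == 0
```

Keeping `CustomModelData` as a separate key (rather than overloading the `#` suffix) mirrors how resource packs actually select custom models.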
136,657 | 30,569,444,115 | IssuesEvent | 2023-07-20 20:39:45 | openxla/iree | https://api.github.com/repos/openxla/iree | closed | [spirv] Missing integer type cast for i8 during conversion | bug 🐞 codegen/spirv | ### What happened?
I'm running into issues dealing with the `arith.extui` operation while compiling for a Vulkan target.
### Steps to reproduce your issue
IREE compile command: ```../iree-build/tools/iree-compile --iree-input-type=tm_tensor --iree-vm-bytecode-module-output-format=flatbuffer-binary --iree-hal-target-backends=vulkan --iree-llvmcpu-embedded-linker-path=/home/nod/Documents/iree-build/compiler/bindings/python/iree/compiler/tools/../_mlir_libs/iree-lld --mlir-print-debuginfo --mlir-print-op-on-diagnostic=false --iree-stream-resource-index-bits=64 --iree-vm-target-index-bits=64 --iree-vm-bytecode-module-strip-source-map=true --iree-vulkan-target-triple=adreno-a740-linux --iree-util-zero-fill-elided-attrs --iree-spirv-index-bits=32 --iree-hal-dump-executable-sources-to="$BENCHMARK_DIR" "$MLIR_PATH" -o "$VMFB_PATH"```
Error:
```
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:934:11: error: failed to materialize conversion for result #0 of operation 'memref.load' that remained live after conversion
%42 = linalg.generic {indexing_maps = [#map2, #map13, #map13, #map2], iterator_types = ["parallel", "parallel", "parallel"]} ins(%expanded_753, %cst_737, %cst_736 : tensor<4096x32x128xi8>, tensor<4096x32x1xf32>, tensor<4096x32x1xf32>) outs(%39 : tensor<4096x32x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:29:3: note: called from
func.func @forward(%arg0: tensor<1x?xi64>) -> (tensor<1x?x32000xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:936:15: note: see existing live user here: %144 = "spirv.UConvert"(%123) : (i8) -> i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":936:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%2150 = arith.extui %in : i8 to i32
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:934:11: error: failed to run translation of source executable to target executable for backend #hal.executable.target<"vulkan", "vulkan-spirv-fb", {spirv.target_env = #spirv.target_env<#spirv.vce<v1.6, [Shader, Float16, Int16, Int8, StorageBuffer16BitAccess, GroupNonUniform, GroupNonUniformVote, GroupNonUniformArithmetic, GroupNonUniformBallot, GroupNonUniformShuffle, GroupNonUniformShuffleRelative, GroupNonUniformQuad, VariablePointers, VariablePointersStorageBuffer], [SPV_KHR_16bit_storage, SPV_KHR_storage_buffer_storage_class, SPV_KHR_variable_pointers]>, api=Vulkan, Qualcomm:IntegratedGPU, #spirv.resource_limits<max_compute_shared_memory_size = 32768, max_compute_workgroup_invocations = 1024, max_compute_workgroup_size = [1024, 1024, 1024], subgroup_size = 64, cooperative_matrix_properties_nv = []>>}>
%42 = linalg.generic {indexing_maps = [#map2, #map13, #map13, #map2], iterator_types = ["parallel", "parallel", "parallel"]} ins(%expanded_753, %cst_737, %cst_736 : tensor<4096x32x128xi8>, tensor<4096x32x1xf32>, tensor<4096x32x1xf32>) outs(%39 : tensor<4096x32x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:29:3: note: called from
func.func @forward(%arg0: tensor<1x?xi64>) -> (tensor<1x?x32000xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:934:11: error: failed to serialize executables
%42 = linalg.generic {indexing_maps = [#map2, #map13, #map13, #map2], iterator_types = ["parallel", "parallel", "parallel"]} ins(%expanded_753, %cst_737, %cst_736 : tensor<4096x32x128xi8>, tensor<4096x32x1xf32>, tensor<4096x32x1xf32>) outs(%39 : tensor<4096x32x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:29:3: note: called from
func.func @forward(%arg0: tensor<1x?xi64>) -> (tensor<1x?x32000xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>) {
```
Minimal IR to Reproduce (extui works on scalar, fails on tensor mapping regardless of tensor rank):
```
func.func @spirv_extui_success_scalar(
%i1 : i8,
%o1 : i32
)
-> i32 {
%res = arith.extui %i1 : i8 to i32
return %res : i32
}
#mapping = affine_map<(d0) -> (d0)>
func.func @spirv_extui_failure_tensor(
%i1 : tensor<?xi8>,
%o1 : tensor<?xi32>
)
-> tensor<?xi32> {
%res = linalg.generic {
indexing_maps = [#mapping, #mapping], iterator_types = ["parallel"]
} ins(%i1: tensor<?xi8>) outs(%o1 : tensor<?xi32>) {
^bb0(%in: i8, %out: i32):
%2150 = arith.extui %in : i8 to i32
linalg.yield %2150 : i32
} -> tensor<?xi32>
return %res : tensor<?xi32>
}
```
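For context, `arith.extui` is a plain zero-extension: the i8 bit pattern is copied into the low bits of the wider integer and the upper bits are zero, with no sign extension. That is why the lowering must materialize an explicit conversion (the `spirv.UConvert` left live in the error) for the i8 element loaded from the tensor. An illustrative Python model of the semantics:

```python
def extui(value, from_bits=8, to_bits=32):
    """Zero-extend an unsigned from_bits integer into to_bits."""
    mask = (1 << from_bits) - 1
    result = value & mask  # keep only the low from_bits bits
    assert result < (1 << to_bits)
    return result

# the unsigned i8 bit pattern is preserved exactly, including values >= 128
assert extui(0x00) == 0
assert extui(0xFF) == 255  # not sign-extended to -1
assert extui(0x80) == 128
```

The scalar path above already emits this cast correctly; the bug is that the tensor (`linalg.generic` body) path fails to materialize it for the loaded i8 element.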
Full MLIR:
```
hal.executable public @forward_dispatch_2 {
hal.executable.variant public @vulkan_spirv_fb, target = <"vulkan", "vulkan-spirv-fb", {spirv.target_env = #spirv.target_env<#spirv.vce<v1.3, [Shader, Float16, Int16, Int8, StorageBuffer16BitAccess, GroupNonUniform, GroupNonUniformVote, GroupNonUniformArithmetic, GroupNonUniformBallot, GroupNonUniformShuffle, GroupNonUniformShuffleRelative, GroupNonUniformQuad, VariablePointers, VariablePointersStorageBuffer], [SPV_KHR_16bit_storage, SPV_KHR_storage_buffer_storage_class, SPV_KHR_variable_pointers]>, api=Vulkan, Qualcomm:IntegratedGPU, #spirv.resource_limits<max_compute_shared_memory_size = 32768, max_compute_workgroup_invocations = 1024, max_compute_workgroup_size = [1024, 1024, 1024], subgroup_size = 64, cooperative_matrix_properties_nv = []>>}> {
hal.executable.export public @forward_dispatch_2_generic_131072x128_i8xf32xf32xf32 ordinal(0) layout(#hal.pipeline.layout<push_constants = 6, sets = [<0, bindings = [<0, storage_buffer, ReadOnly>, <1, storage_buffer>]>]>) {
^bb0(%arg0: !hal.device loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))):
%x, %y, %z = flow.dispatch.workgroup_count_from_slice loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
hal.return %x, %y, %z : index, index, index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
builtin.module {
func.func @forward_dispatch_2_generic_131072x128_i8xf32xf32xf32() {
%c32_i64 = arith.constant 32 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%c0 = arith.constant 0 : index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%0 = hal.interface.constant.load[0] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%1 = hal.interface.constant.load[1] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%2 = hal.interface.constant.load[2] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%3 = hal.interface.constant.load[3] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%4 = hal.interface.constant.load[4] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%5 = hal.interface.constant.load[5] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%6 = arith.extui %0 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%7 = arith.extui %1 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%8 = arith.shli %7, %c32_i64 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%9 = arith.ori %6, %8 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%10 = arith.index_castui %9 {stream.alignment = 65536 : index, stream.values = [0 : index, 17825792 : index, 35651584 : index, 53477376 : index, 215023616 : index, 232849408 : index, 250675200 : index, 268500992 : index, 430047232 : index, 447873024 : index, 465698816 : index, 483524608 : index, 645070848 : index, 662896640 : index, 680722432 : index, 698548224 : index, 860094464 : index, 877920256 : index, 895746048 : index, 913571840 : index, 1075118080 : index, 1092943872 : index, 1110769664 : index, 1128595456 : index, 1290141696 : index, 1307967488 : index, 1325793280 : index, 1343619072 : index, 1505165312 : index, 1522991104 : index, 1540816896 : index, 1558642688 : index, 1720188928 : index, 1738014720 : index, 1755840512 : index, 1773666304 : index, 1935212544 : index, 1953038336 : index, 1970864128 : index, 1988689920 : index, 2150236160 : index, 2168061952 : index, 2185887744 : index, 2203713536 : index, 2365259776 : index, 2383085568 : index, 2400911360 : index, 2418737152 : index, 2580283392 : index, 2598109184 : index, 2615934976 : index, 2633760768 : index, 2795307008 : index, 2813132800 : index, 2830958592 : index, 2848784384 : index, 3010330624 : index, 3028156416 : index, 3045982208 : index, 3063808000 : index, 3225354240 : index, 3243180032 : index, 3261005824 : index, 3278831616 : index, 3440377856 : index, 3458203648 : index, 3476029440 : index, 3493855232 : index, 3655401472 : index, 3673227264 : index, 3691053056 : index, 3708878848 : index, 3870425088 : index, 3888250880 : index, 3906076672 : index, 3923902464 : index, 4085448704 : index, 4103274496 : index, 4121100288 : index, 4138926080 : index, 4300472320 : index, 4318298112 : index, 4336123904 : index, 4353949696 : index, 4515495936 : index, 4533321728 : index, 4551147520 : index, 4568973312 : index, 4730519552 : index, 4748345344 : index, 4766171136 : index, 4783996928 : index, 4945543168 : index, 4963368960 : index, 4981194752 : index, 4999020544 : index, 5160566784 : index, 5178392576 
: index, 5196218368 : index, 5214044160 : index, 5375590400 : index, 5393416192 : index, 5411241984 : index, 5429067776 : index, 5590614016 : index, 5608439808 : index, 5626265600 : index, 5644091392 : index, 5805637632 : index, 5823463424 : index, 5841289216 : index, 5859115008 : index, 6020661248 : index, 6038487040 : index, 6056312832 : index, 6074138624 : index, 6235684864 : index, 6253510656 : index, 6271336448 : index, 6289162240 : index, 6450708480 : index, 6468534272 : index, 6486360064 : index, 6504185856 : index, 6665732096 : index, 6683557888 : index, 6701383680 : index, 6719209472 : index]} : i64 to index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%11 = arith.extui %2 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%12 = arith.extui %3 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%13 = arith.shli %12, %c32_i64 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%14 = arith.ori %11, %13 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%15 = arith.index_castui %14 {stream.alignment = 65536 : index, stream.values = [16777216 : index, 34603008 : index, 52428800 : index, 70254592 : index, 231800832 : index, 249626624 : index, 267452416 : index, 285278208 : index, 446824448 : index, 464650240 : index, 482476032 : index, 500301824 : index, 661848064 : index, 679673856 : index, 697499648 : index, 715325440 : index, 876871680 : index, 894697472 : index, 912523264 : index, 930349056 : index, 1091895296 : index, 1109721088 : index, 1127546880 : index, 1145372672 : index, 1306918912 : index, 1324744704 : index, 1342570496 : index, 1360396288 : index, 1521942528 : index, 1539768320 : index, 1557594112 : index, 1575419904 : index, 1736966144 : index, 1754791936 : index, 1772617728 : index, 1790443520 : index, 1951989760 : index, 1969815552 : index, 1987641344 : index, 2005467136 : index, 2167013376 : index, 2184839168 : index, 2202664960 : index, 2220490752 : index, 2382036992 : index, 2399862784 : index, 2417688576 : index, 2435514368 : index, 2597060608 : index, 2614886400 : index, 2632712192 : index, 2650537984 : index, 2812084224 : index, 2829910016 : index, 2847735808 : index, 2865561600 : index, 3027107840 : index, 3044933632 : index, 3062759424 : index, 3080585216 : index, 3242131456 : index, 3259957248 : index, 3277783040 : index, 3295608832 : index, 3457155072 : index, 3474980864 : index, 3492806656 : index, 3510632448 : index, 3672178688 : index, 3690004480 : index, 3707830272 : index, 3725656064 : index, 3887202304 : index, 3905028096 : index, 3922853888 : index, 3940679680 : index, 4102225920 : index, 4120051712 : index, 4137877504 : index, 4155703296 : index, 4317249536 : index, 4335075328 : index, 4352901120 : index, 4370726912 : index, 4532273152 : index, 4550098944 : index, 4567924736 : index, 4585750528 : index, 4747296768 : index, 4765122560 : index, 4782948352 : index, 4800774144 : index, 4962320384 : index, 4980146176 : index, 4997971968 : index, 5015797760 : index, 5177344000 : index, 
5195169792 : index, 5212995584 : index, 5230821376 : index, 5392367616 : index, 5410193408 : index, 5428019200 : index, 5445844992 : index, 5607391232 : index, 5625217024 : index, 5643042816 : index, 5660868608 : index, 5822414848 : index, 5840240640 : index, 5858066432 : index, 5875892224 : index, 6037438464 : index, 6055264256 : index, 6073090048 : index, 6090915840 : index, 6252462080 : index, 6270287872 : index, 6288113664 : index, 6305939456 : index, 6467485696 : index, 6485311488 : index, 6503137280 : index, 6520963072 : index, 6682509312 : index, 6700335104 : index, 6718160896 : index, 6735986688 : index]} : i64 to index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%16 = arith.extui %4 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%17 = arith.extui %5 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%18 = arith.shli %17, %c32_i64 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%19 = arith.ori %16, %18 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%20 = arith.index_castui %19 {stream.alignment = 65536 : index, stream.values = [17301504 : index, 35127296 : index, 52953088 : index, 70778880 : index, 232325120 : index, 250150912 : index, 267976704 : index, 285802496 : index, 447348736 : index, 465174528 : index, 483000320 : index, 500826112 : index, 662372352 : index, 680198144 : index, 698023936 : index, 715849728 : index, 877395968 : index, 895221760 : index, 913047552 : index, 930873344 : index, 1092419584 : index, 1110245376 : index, 1128071168 : index, 1145896960 : index, 1307443200 : index, 1325268992 : index, 1343094784 : index, 1360920576 : index, 1522466816 : index, 1540292608 : index, 1558118400 : index, 1575944192 : index, 1737490432 : index, 1755316224 : index, 1773142016 : index, 1790967808 : index, 1952514048 : index, 1970339840 : index, 1988165632 : index, 2005991424 : index, 2167537664 : index, 2185363456 : index, 2203189248 : index, 2221015040 : index, 2382561280 : index, 2400387072 : index, 2418212864 : index, 2436038656 : index, 2597584896 : index, 2615410688 : index, 2633236480 : index, 2651062272 : index, 2812608512 : index, 2830434304 : index, 2848260096 : index, 2866085888 : index, 3027632128 : index, 3045457920 : index, 3063283712 : index, 3081109504 : index, 3242655744 : index, 3260481536 : index, 3278307328 : index, 3296133120 : index, 3457679360 : index, 3475505152 : index, 3493330944 : index, 3511156736 : index, 3672702976 : index, 3690528768 : index, 3708354560 : index, 3726180352 : index, 3887726592 : index, 3905552384 : index, 3923378176 : index, 3941203968 : index, 4102750208 : index, 4120576000 : index, 4138401792 : index, 4156227584 : index, 4317773824 : index, 4335599616 : index, 4353425408 : index, 4371251200 : index, 4532797440 : index, 4550623232 : index, 4568449024 : index, 4586274816 : index, 4747821056 : index, 4765646848 : index, 4783472640 : index, 4801298432 : index, 4962844672 : index, 4980670464 : index, 4998496256 : index, 5016322048 : index, 5177868288 : index, 
5195694080 : index, 5213519872 : index, 5231345664 : index, 5392891904 : index, 5410717696 : index, 5428543488 : index, 5446369280 : index, 5607915520 : index, 5625741312 : index, 5643567104 : index, 5661392896 : index, 5822939136 : index, 5840764928 : index, 5858590720 : index, 5876416512 : index, 6037962752 : index, 6055788544 : index, 6073614336 : index, 6091440128 : index, 6252986368 : index, 6270812160 : index, 6288637952 : index, 6306463744 : index, 6468009984 : index, 6485835776 : index, 6503661568 : index, 6521487360 : index, 6683033600 : index, 6700859392 : index, 6718685184 : index, 6736510976 : index]} : i64 to index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%21 = hal.interface.binding.subspan set(0) binding(0) type(storage_buffer) alignment(64) offset(%10) flags(ReadOnly) : !flow.dispatch.tensor<readonly:tensor<131072x128xi8>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%22 = hal.interface.binding.subspan set(0) binding(0) type(storage_buffer) alignment(64) offset(%15) flags(ReadOnly) : !flow.dispatch.tensor<readonly:tensor<131072xf32>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%23 = hal.interface.binding.subspan set(0) binding(0) type(storage_buffer) alignment(64) offset(%20) flags(ReadOnly) : !flow.dispatch.tensor<readonly:tensor<131072xf32>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%24 = hal.interface.binding.subspan set(0) binding(1) type(storage_buffer) alignment(64) offset(%c0) : !flow.dispatch.tensor<writeonly:tensor<131072x128xf32>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%25 = flow.dispatch.tensor.load %21, offsets = [0, 0], sizes = [131072, 128], strides = [1, 1] : !flow.dispatch.tensor<readonly:tensor<131072x128xi8>> -> tensor<131072x128xi8> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%26 = flow.dispatch.tensor.load %22, offsets = [0], sizes = [131072], strides = [1] : !flow.dispatch.tensor<readonly:tensor<131072xf32>> -> tensor<131072xf32> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%27 = flow.dispatch.tensor.load %23, offsets = [0], sizes = [131072], strides = [1] : !flow.dispatch.tensor<readonly:tensor<131072xf32>> -> tensor<131072xf32> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%28 = tensor.empty() : tensor<131072x128xf32> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%29 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0)>, affine_map<(d0, d1) -> (d0)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%25, %26, %27 : tensor<131072x128xi8>, tensor<131072xf32>, tensor<131072xf32>) outs(%28 : tensor<131072x128xf32>) {
^bb0(%in: i8 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3)), %in_0: f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3)), %in_1: f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3)), %out: f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))):
%30 = arith.extui %in : i8 to i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":936:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%31 = arith.uitofp %30 : i32 to f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":937:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%32 = arith.subf %31, %in_1 : f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":938:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%33 = arith.mulf %32, %in_0 : f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":939:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
linalg.yield %33 : f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":940:7 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} -> tensor<131072x128xf32> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
flow.dispatch.tensor.store %29, %24, offsets = [0, 0], sizes = [131072, 128], strides = [1, 1] : tensor<131072x128xf32> -> !flow.dispatch.tensor<writeonly:tensor<131072x128xf32>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
return loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
```
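For context, the generic region in this dispatch is an elementwise dequantization: zero-extend the `i8` value, convert it to `f32`, subtract the zero point, and multiply by the scale. A minimal Python sketch of the scalar math (function and parameter names are illustrative, not from IREE):

```python
def dequantize(q: int, scale: float, zero_point: float) -> float:
    """Scalar model of the linalg.generic body above."""
    extended = q & 0xFF                      # arith.extui  : i8 -> i32 (zero extension)
    as_float = float(extended)               # arith.uitofp : i32 -> f32
    return (as_float - zero_point) * scale   # arith.subf, then arith.mulf

print(dequantize(200, 0.5, 128.0))  # -> 36.0
```

The `arith.extui` on the first line of this region is exactly the op that fails to convert below.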
### What component(s) does this issue relate to?
MLIR, Compiler
### Version information
_No response_
### Additional context
_No response_ | 1.0 | [spirv] Missing integer type cast for i8 during conversion - ### What happened?
I'm running into an issue with the `arith.extui` operation while compiling for a Vulkan target.
### Steps to reproduce your issue
IREE compile command:

```
../iree-build/tools/iree-compile \
  --iree-input-type=tm_tensor \
  --iree-vm-bytecode-module-output-format=flatbuffer-binary \
  --iree-hal-target-backends=vulkan \
  --iree-llvmcpu-embedded-linker-path=/home/nod/Documents/iree-build/compiler/bindings/python/iree/compiler/tools/../_mlir_libs/iree-lld \
  --mlir-print-debuginfo \
  --mlir-print-op-on-diagnostic=false \
  --iree-stream-resource-index-bits=64 \
  --iree-vm-target-index-bits=64 \
  --iree-vm-bytecode-module-strip-source-map=true \
  --iree-vulkan-target-triple=adreno-a740-linux \
  --iree-util-zero-fill-elided-attrs \
  --iree-spirv-index-bits=32 \
  --iree-hal-dump-executable-sources-to="$BENCHMARK_DIR" \
  "$MLIR_PATH" -o "$VMFB_PATH"
```
Error:
```
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:934:11: error: failed to materialize conversion for result #0 of operation 'memref.load' that remained live after conversion
%42 = linalg.generic {indexing_maps = [#map2, #map13, #map13, #map2], iterator_types = ["parallel", "parallel", "parallel"]} ins(%expanded_753, %cst_737, %cst_736 : tensor<4096x32x128xi8>, tensor<4096x32x1xf32>, tensor<4096x32x1xf32>) outs(%39 : tensor<4096x32x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:29:3: note: called from
func.func @forward(%arg0: tensor<1x?xi64>) -> (tensor<1x?x32000xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:936:15: note: see existing live user here: %144 = "spirv.UConvert"(%123) : (i8) -> i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":936:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%2150 = arith.extui %in : i8 to i32
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:934:11: error: failed to run translation of source executable to target executable for backend #hal.executable.target<"vulkan", "vulkan-spirv-fb", {spirv.target_env = #spirv.target_env<#spirv.vce<v1.6, [Shader, Float16, Int16, Int8, StorageBuffer16BitAccess, GroupNonUniform, GroupNonUniformVote, GroupNonUniformArithmetic, GroupNonUniformBallot, GroupNonUniformShuffle, GroupNonUniformShuffleRelative, GroupNonUniformQuad, VariablePointers, VariablePointersStorageBuffer], [SPV_KHR_16bit_storage, SPV_KHR_storage_buffer_storage_class, SPV_KHR_variable_pointers]>, api=Vulkan, Qualcomm:IntegratedGPU, #spirv.resource_limits<max_compute_shared_memory_size = 32768, max_compute_workgroup_invocations = 1024, max_compute_workgroup_size = [1024, 1024, 1024], subgroup_size = 64, cooperative_matrix_properties_nv = []>>}>
%42 = linalg.generic {indexing_maps = [#map2, #map13, #map13, #map2], iterator_types = ["parallel", "parallel", "parallel"]} ins(%expanded_753, %cst_737, %cst_736 : tensor<4096x32x128xi8>, tensor<4096x32x1xf32>, tensor<4096x32x1xf32>) outs(%39 : tensor<4096x32x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:29:3: note: called from
func.func @forward(%arg0: tensor<1x?xi64>) -> (tensor<1x?x32000xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:934:11: error: failed to serialize executables
%42 = linalg.generic {indexing_maps = [#map2, #map13, #map13, #map2], iterator_types = ["parallel", "parallel", "parallel"]} ins(%expanded_753, %cst_737, %cst_736 : tensor<4096x32x128xi8>, tensor<4096x32x1xf32>, tensor<4096x32x1xf32>) outs(%39 : tensor<4096x32x128xf32>) {
^
/home/nod/Documents/SHARK/first_vicuna_int8.mlir:29:3: note: called from
func.func @forward(%arg0: tensor<1x?xi64>) -> (tensor<1x?x32000xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>, tensor<1x32x?x128xf32>) {
```
Minimal IR to reproduce (`arith.extui` succeeds on a scalar but fails inside a `linalg.generic` over a tensor, regardless of tensor rank):
```
func.func @spirv_extui_success_scalar(%i1 : i8, %o1 : i32) -> i32 {
  %res = arith.extui %i1 : i8 to i32
  return %res : i32
}

#mapping = affine_map<(d0) -> (d0)>
func.func @spirv_extui_failure_tensor(%i1 : tensor<?xi8>, %o1 : tensor<?xi32>) -> tensor<?xi32> {
  %res = linalg.generic {
    indexing_maps = [#mapping, #mapping], iterator_types = ["parallel"]
  } ins(%i1 : tensor<?xi8>) outs(%o1 : tensor<?xi32>) {
  ^bb0(%in: i8, %out: i32):
    %2150 = arith.extui %in : i8 to i32
    linalg.yield %2150 : i32
  } -> tensor<?xi32>
  return %res : tensor<?xi32>
}
```
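Note that `arith.extui` is zero extension, so the conversion must produce an unsigned widening: the `i8` bit pattern `0xFF` becomes the `i32` value `255`, never `-1` (sign extension would be `arith.extsi`). A rough Python model of the elementwise computation in `@spirv_extui_failure_tensor` (helper names are made up for illustration):

```python
def extui_i8_to_i32(x: int) -> int:
    # Zero-extend: keep the low 8 bits; the upper 24 bits become zero.
    return x & 0xFF

def extui_tensor(xs: list[int]) -> list[int]:
    # Elementwise map, like the "parallel" linalg.generic above.
    return [extui_i8_to_i32(x) for x in xs]

print(extui_tensor([0x00, 0x7F, 0xFF]))  # -> [0, 127, 255]
```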
Full MLIR:
```
hal.executable public @forward_dispatch_2 {
hal.executable.variant public @vulkan_spirv_fb, target = <"vulkan", "vulkan-spirv-fb", {spirv.target_env = #spirv.target_env<#spirv.vce<v1.3, [Shader, Float16, Int16, Int8, StorageBuffer16BitAccess, GroupNonUniform, GroupNonUniformVote, GroupNonUniformArithmetic, GroupNonUniformBallot, GroupNonUniformShuffle, GroupNonUniformShuffleRelative, GroupNonUniformQuad, VariablePointers, VariablePointersStorageBuffer], [SPV_KHR_16bit_storage, SPV_KHR_storage_buffer_storage_class, SPV_KHR_variable_pointers]>, api=Vulkan, Qualcomm:IntegratedGPU, #spirv.resource_limits<max_compute_shared_memory_size = 32768, max_compute_workgroup_invocations = 1024, max_compute_workgroup_size = [1024, 1024, 1024], subgroup_size = 64, cooperative_matrix_properties_nv = []>>}> {
hal.executable.export public @forward_dispatch_2_generic_131072x128_i8xf32xf32xf32 ordinal(0) layout(#hal.pipeline.layout<push_constants = 6, sets = [<0, bindings = [<0, storage_buffer, ReadOnly>, <1, storage_buffer>]>]>) {
^bb0(%arg0: !hal.device loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))):
%x, %y, %z = flow.dispatch.workgroup_count_from_slice loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
hal.return %x, %y, %z : index, index, index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
builtin.module {
func.func @forward_dispatch_2_generic_131072x128_i8xf32xf32xf32() {
%c32_i64 = arith.constant 32 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%c0 = arith.constant 0 : index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%0 = hal.interface.constant.load[0] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%1 = hal.interface.constant.load[1] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%2 = hal.interface.constant.load[2] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%3 = hal.interface.constant.load[3] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%4 = hal.interface.constant.load[4] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%5 = hal.interface.constant.load[5] : i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%6 = arith.extui %0 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%7 = arith.extui %1 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%8 = arith.shli %7, %c32_i64 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%9 = arith.ori %6, %8 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%10 = arith.index_castui %9 {stream.alignment = 65536 : index, stream.values = [0 : index, 17825792 : index, 35651584 : index, 53477376 : index, 215023616 : index, 232849408 : index, 250675200 : index, 268500992 : index, 430047232 : index, 447873024 : index, 465698816 : index, 483524608 : index, 645070848 : index, 662896640 : index, 680722432 : index, 698548224 : index, 860094464 : index, 877920256 : index, 895746048 : index, 913571840 : index, 1075118080 : index, 1092943872 : index, 1110769664 : index, 1128595456 : index, 1290141696 : index, 1307967488 : index, 1325793280 : index, 1343619072 : index, 1505165312 : index, 1522991104 : index, 1540816896 : index, 1558642688 : index, 1720188928 : index, 1738014720 : index, 1755840512 : index, 1773666304 : index, 1935212544 : index, 1953038336 : index, 1970864128 : index, 1988689920 : index, 2150236160 : index, 2168061952 : index, 2185887744 : index, 2203713536 : index, 2365259776 : index, 2383085568 : index, 2400911360 : index, 2418737152 : index, 2580283392 : index, 2598109184 : index, 2615934976 : index, 2633760768 : index, 2795307008 : index, 2813132800 : index, 2830958592 : index, 2848784384 : index, 3010330624 : index, 3028156416 : index, 3045982208 : index, 3063808000 : index, 3225354240 : index, 3243180032 : index, 3261005824 : index, 3278831616 : index, 3440377856 : index, 3458203648 : index, 3476029440 : index, 3493855232 : index, 3655401472 : index, 3673227264 : index, 3691053056 : index, 3708878848 : index, 3870425088 : index, 3888250880 : index, 3906076672 : index, 3923902464 : index, 4085448704 : index, 4103274496 : index, 4121100288 : index, 4138926080 : index, 4300472320 : index, 4318298112 : index, 4336123904 : index, 4353949696 : index, 4515495936 : index, 4533321728 : index, 4551147520 : index, 4568973312 : index, 4730519552 : index, 4748345344 : index, 4766171136 : index, 4783996928 : index, 4945543168 : index, 4963368960 : index, 4981194752 : index, 4999020544 : index, 5160566784 : index, 5178392576 
: index, 5196218368 : index, 5214044160 : index, 5375590400 : index, 5393416192 : index, 5411241984 : index, 5429067776 : index, 5590614016 : index, 5608439808 : index, 5626265600 : index, 5644091392 : index, 5805637632 : index, 5823463424 : index, 5841289216 : index, 5859115008 : index, 6020661248 : index, 6038487040 : index, 6056312832 : index, 6074138624 : index, 6235684864 : index, 6253510656 : index, 6271336448 : index, 6289162240 : index, 6450708480 : index, 6468534272 : index, 6486360064 : index, 6504185856 : index, 6665732096 : index, 6683557888 : index, 6701383680 : index, 6719209472 : index]} : i64 to index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%11 = arith.extui %2 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%12 = arith.extui %3 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%13 = arith.shli %12, %c32_i64 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%14 = arith.ori %11, %13 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%15 = arith.index_castui %14 {stream.alignment = 65536 : index, stream.values = [16777216 : index, 34603008 : index, 52428800 : index, 70254592 : index, 231800832 : index, 249626624 : index, 267452416 : index, 285278208 : index, 446824448 : index, 464650240 : index, 482476032 : index, 500301824 : index, 661848064 : index, 679673856 : index, 697499648 : index, 715325440 : index, 876871680 : index, 894697472 : index, 912523264 : index, 930349056 : index, 1091895296 : index, 1109721088 : index, 1127546880 : index, 1145372672 : index, 1306918912 : index, 1324744704 : index, 1342570496 : index, 1360396288 : index, 1521942528 : index, 1539768320 : index, 1557594112 : index, 1575419904 : index, 1736966144 : index, 1754791936 : index, 1772617728 : index, 1790443520 : index, 1951989760 : index, 1969815552 : index, 1987641344 : index, 2005467136 : index, 2167013376 : index, 2184839168 : index, 2202664960 : index, 2220490752 : index, 2382036992 : index, 2399862784 : index, 2417688576 : index, 2435514368 : index, 2597060608 : index, 2614886400 : index, 2632712192 : index, 2650537984 : index, 2812084224 : index, 2829910016 : index, 2847735808 : index, 2865561600 : index, 3027107840 : index, 3044933632 : index, 3062759424 : index, 3080585216 : index, 3242131456 : index, 3259957248 : index, 3277783040 : index, 3295608832 : index, 3457155072 : index, 3474980864 : index, 3492806656 : index, 3510632448 : index, 3672178688 : index, 3690004480 : index, 3707830272 : index, 3725656064 : index, 3887202304 : index, 3905028096 : index, 3922853888 : index, 3940679680 : index, 4102225920 : index, 4120051712 : index, 4137877504 : index, 4155703296 : index, 4317249536 : index, 4335075328 : index, 4352901120 : index, 4370726912 : index, 4532273152 : index, 4550098944 : index, 4567924736 : index, 4585750528 : index, 4747296768 : index, 4765122560 : index, 4782948352 : index, 4800774144 : index, 4962320384 : index, 4980146176 : index, 4997971968 : index, 5015797760 : index, 5177344000 : index, 
5195169792 : index, 5212995584 : index, 5230821376 : index, 5392367616 : index, 5410193408 : index, 5428019200 : index, 5445844992 : index, 5607391232 : index, 5625217024 : index, 5643042816 : index, 5660868608 : index, 5822414848 : index, 5840240640 : index, 5858066432 : index, 5875892224 : index, 6037438464 : index, 6055264256 : index, 6073090048 : index, 6090915840 : index, 6252462080 : index, 6270287872 : index, 6288113664 : index, 6305939456 : index, 6467485696 : index, 6485311488 : index, 6503137280 : index, 6520963072 : index, 6682509312 : index, 6700335104 : index, 6718160896 : index, 6735986688 : index]} : i64 to index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%16 = arith.extui %4 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%17 = arith.extui %5 : i32 to i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%18 = arith.shli %17, %c32_i64 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%19 = arith.ori %16, %18 : i64 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%20 = arith.index_castui %19 {stream.alignment = 65536 : index, stream.values = [17301504 : index, 35127296 : index, 52953088 : index, 70778880 : index, 232325120 : index, 250150912 : index, 267976704 : index, 285802496 : index, 447348736 : index, 465174528 : index, 483000320 : index, 500826112 : index, 662372352 : index, 680198144 : index, 698023936 : index, 715849728 : index, 877395968 : index, 895221760 : index, 913047552 : index, 930873344 : index, 1092419584 : index, 1110245376 : index, 1128071168 : index, 1145896960 : index, 1307443200 : index, 1325268992 : index, 1343094784 : index, 1360920576 : index, 1522466816 : index, 1540292608 : index, 1558118400 : index, 1575944192 : index, 1737490432 : index, 1755316224 : index, 1773142016 : index, 1790967808 : index, 1952514048 : index, 1970339840 : index, 1988165632 : index, 2005991424 : index, 2167537664 : index, 2185363456 : index, 2203189248 : index, 2221015040 : index, 2382561280 : index, 2400387072 : index, 2418212864 : index, 2436038656 : index, 2597584896 : index, 2615410688 : index, 2633236480 : index, 2651062272 : index, 2812608512 : index, 2830434304 : index, 2848260096 : index, 2866085888 : index, 3027632128 : index, 3045457920 : index, 3063283712 : index, 3081109504 : index, 3242655744 : index, 3260481536 : index, 3278307328 : index, 3296133120 : index, 3457679360 : index, 3475505152 : index, 3493330944 : index, 3511156736 : index, 3672702976 : index, 3690528768 : index, 3708354560 : index, 3726180352 : index, 3887726592 : index, 3905552384 : index, 3923378176 : index, 3941203968 : index, 4102750208 : index, 4120576000 : index, 4138401792 : index, 4156227584 : index, 4317773824 : index, 4335599616 : index, 4353425408 : index, 4371251200 : index, 4532797440 : index, 4550623232 : index, 4568449024 : index, 4586274816 : index, 4747821056 : index, 4765646848 : index, 4783472640 : index, 4801298432 : index, 4962844672 : index, 4980670464 : index, 4998496256 : index, 5016322048 : index, 5177868288 : index, 
5195694080 : index, 5213519872 : index, 5231345664 : index, 5392891904 : index, 5410717696 : index, 5428543488 : index, 5446369280 : index, 5607915520 : index, 5625741312 : index, 5643567104 : index, 5661392896 : index, 5822939136 : index, 5840764928 : index, 5858590720 : index, 5876416512 : index, 6037962752 : index, 6055788544 : index, 6073614336 : index, 6091440128 : index, 6252986368 : index, 6270812160 : index, 6288637952 : index, 6306463744 : index, 6468009984 : index, 6485835776 : index, 6503661568 : index, 6521487360 : index, 6683033600 : index, 6700859392 : index, 6718685184 : index, 6736510976 : index]} : i64 to index loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%21 = hal.interface.binding.subspan set(0) binding(0) type(storage_buffer) alignment(64) offset(%10) flags(ReadOnly) : !flow.dispatch.tensor<readonly:tensor<131072x128xi8>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%22 = hal.interface.binding.subspan set(0) binding(0) type(storage_buffer) alignment(64) offset(%15) flags(ReadOnly) : !flow.dispatch.tensor<readonly:tensor<131072xf32>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%23 = hal.interface.binding.subspan set(0) binding(0) type(storage_buffer) alignment(64) offset(%20) flags(ReadOnly) : !flow.dispatch.tensor<readonly:tensor<131072xf32>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%24 = hal.interface.binding.subspan set(0) binding(1) type(storage_buffer) alignment(64) offset(%c0) : !flow.dispatch.tensor<writeonly:tensor<131072x128xf32>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%25 = flow.dispatch.tensor.load %21, offsets = [0, 0], sizes = [131072, 128], strides = [1, 1] : !flow.dispatch.tensor<readonly:tensor<131072x128xi8>> -> tensor<131072x128xi8> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%26 = flow.dispatch.tensor.load %22, offsets = [0], sizes = [131072], strides = [1] : !flow.dispatch.tensor<readonly:tensor<131072xf32>> -> tensor<131072xf32> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%27 = flow.dispatch.tensor.load %23, offsets = [0], sizes = [131072], strides = [1] : !flow.dispatch.tensor<readonly:tensor<131072xf32>> -> tensor<131072xf32> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%28 = tensor.empty() : tensor<131072x128xf32> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%29 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0)>, affine_map<(d0, d1) -> (d0)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%25, %26, %27 : tensor<131072x128xi8>, tensor<131072xf32>, tensor<131072xf32>) outs(%28 : tensor<131072x128xf32>) {
^bb0(%in: i8 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3)), %in_0: f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3)), %in_1: f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3)), %out: f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))):
%30 = arith.extui %in : i8 to i32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":936:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%31 = arith.uitofp %30 : i32 to f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":937:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%32 = arith.subf %31, %in_1 : f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":938:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
%33 = arith.mulf %32, %in_0 : f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":939:15 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
linalg.yield %33 : f32 loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":940:7 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} -> tensor<131072x128xf32> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
flow.dispatch.tensor.store %29, %24, offsets = [0, 0], sizes = [131072, 128], strides = [1, 1] : tensor<131072x128xf32> -> !flow.dispatch.tensor<writeonly:tensor<131072x128xf32>> loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
return loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
} loc(callsite("/home/nod/Documents/SHARK/first_vicuna_int8.mlir":934:11 at "/home/nod/Documents/SHARK/first_vicuna_int8.mlir":29:3))
```
### What component(s) does this issue relate to?
MLIR, Compiler
### Version information
_No response_
### Additional context
_No response_ | non_defect | missing integer type cast for during conversion what happened i m running into issues dealing with arith extui operation while compiling for a vulkan target steps to reproduce your issue iree compile comand iree build tools iree compile iree input type tm tensor iree vm bytecode module output format flatbuffer binary iree hal target backends vulkan iree llvmcpu embedded linker path home nod documents iree build compiler bindings python iree compiler tools mlir libs iree lld mlir print debuginfo mlir print op on diagnostic false iree stream resource index bits iree vm target index bits iree vm bytecode module strip source map true iree vulkan target triple adreno linux iree util zero fill elided attrs iree spirv index bits iree hal dump executable sources to benchmark dir mlir path o vmfb path error home nod documents shark first vicuna mlir error failed to materialize conversion for result of operation memref load that remained live after conversion linalg generic indexing maps iterator types ins expanded cst cst tensor tensor tensor outs tensor home nod documents shark first vicuna mlir note called from func func forward tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor home nod documents shark first vicuna mlir note see existing live user here spirv uconvert loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith extui in to home nod documents shark first vicuna mlir error failed to run translation of source executable to target executable for backend hal executable target api 
vulkan qualcomm integratedgpu spirv resource limits linalg generic indexing maps iterator types ins expanded cst cst tensor tensor tensor outs tensor home nod documents shark first vicuna mlir note called from func func forward tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor home nod documents shark first vicuna mlir error failed to serialize executables linalg generic indexing maps iterator types ins expanded cst cst tensor tensor tensor outs tensor home nod documents shark first vicuna mlir note called from func func forward tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor minimal ir to reproduce extui works on scalar fails on tensor mapping regardless of tensor rank func func spirv extui success scalar res arith extui to return res mapping affine map func func spirv extui failure tensor tensor tensor tensor res linalg generic indexing maps iterator types ins tensor outs tensor in out arith extui in to linalg yield tensor return res tensor full mlir hal executable public forward dispatch hal executable variant public vulkan spirv fb target api vulkan qualcomm integratedgpu spirv resource limits hal executable export public forward dispatch generic 
ordinal layout hal pipeline layout hal device loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir x y z flow dispatch workgroup count from slice loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal return x y z index index index loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir builtin module func func forward dispatch generic arith constant loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith constant index loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface constant load loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface constant load loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface constant load loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface constant load loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface constant load loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface constant load loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith extui to loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith extui to loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith shli loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith ori loc callsite home nod documents 
shark first vicuna mlir at home nod documents shark first vicuna mlir arith index castui stream alignment index stream values to index loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith extui to loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith extui to loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith shli loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith ori loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith index castui stream alignment index stream values to index loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith extui to loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith extui to loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith shli loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith ori loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith index castui stream alignment index stream values to index loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface binding subspan set binding type storage buffer alignment offset flags readonly flow dispatch tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface binding subspan set binding type storage buffer alignment offset flags readonly flow dispatch tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface binding subspan set binding type storage buffer alignment 
offset flags readonly flow dispatch tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir hal interface binding subspan set binding type storage buffer alignment offset flow dispatch tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir flow dispatch tensor load offsets sizes strides flow dispatch tensor tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir flow dispatch tensor load offsets sizes strides flow dispatch tensor tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir flow dispatch tensor load offsets sizes strides flow dispatch tensor tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir tensor empty tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir linalg generic indexing maps iterator types ins tensor tensor tensor outs tensor in loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir in loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir in loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir out loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith extui in to loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith uitofp to loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith subf in loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir arith mulf in loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir linalg yield loc 
callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir flow dispatch tensor store offsets sizes strides tensor flow dispatch tensor loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir return loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir loc callsite home nod documents shark first vicuna mlir at home nod documents shark first vicuna mlir what component s does this issue relate to mlir compiler version information no response additional context no response | 0 |
557,497 | 16,509,697,836 | IssuesEvent | 2021-05-26 01:23:12 | eclipse-ee4j/glassfish | https://api.github.com/repos/eclipse-ee4j/glassfish | closed | 3.1.2.2 Deployment of applications to multi-cluster domain fails if custom resource adapter not deployed to all clusters in domain. | ERR: Assignee Priority: Major Stale Type: Bug | 1. Create a 2 cluster domain
2. Deploy a custom resource adapter to cluster1.
3. Deploy an application containing an EJB with a @Resource annotation to cluster2, and an NPE is thrown when evaluating the appInfo.getMetaData expression:
```
if (isRAConnectionFactory(type, appInfo.getMetaData(Application.class)))
```
from the following method:
```
public static boolean isRAConnectionFactory(Habitat habitat,
String type, Application thisApp) {
// first check if this is a connection factory defined in a resource
// adapter in this application
if (isRAConnectionFactory(type, thisApp)) {
return true;
}
// then check if this is a connection factory defined in a standalone
// resource adapter
Applications applications = habitat.getComponent(Applications.class);
if (applications != null) {
List<com.sun.enterprise.config.serverbeans.Application> raApps = applications.getApplicationsWithSnifferType(com.sun.enterprise.config.serverbeans.Application.CONNECTOR_SNIFFER_TYPE, true);
ApplicationRegistry appRegistry = habitat.getComponent(ApplicationRegistry.class);
for (com.sun.enterprise.config.serverbeans.Application raApp : raApps) {
ApplicationInfo appInfo = appRegistry.get(raApp.getName());
if (isRAConnectionFactory(type, appInfo.getMetaData(Application.class))) {
return true;
}
}
}
return false;
}
```
The code assumes that the appInfo is available on all clusters, but this is not the case for clusters where the custom RA is not deployed.
My workaround has been to deploy the custom RA to target all clusters in the domain so that the required info is available to prevent the NPE from arising. An alternative is to put a null-check in the condition, but I was not sure if this should be fixed further up the stack, or if there would be side-effects to doing so.
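A runnable sketch of that null-check alternative. The `Map` here stands in for `ApplicationRegistry`, and the string-based `isRAConnectionFactory` is a hypothetical stub for the real metadata check; only the guard itself mirrors the suggested fix.

```java
import java.util.HashMap;
import java.util.Map;

public class NullGuardSketch {
    // Stand-in for ApplicationRegistry: get() returns null for an app
    // that was never deployed to this cluster.
    static final Map<String, String> registry = new HashMap<>();

    // Hypothetical stub for isRAConnectionFactory(type, appInfo.getMetaData(...)).
    static boolean isRAConnectionFactory(String type, String metaData) {
        return metaData.contains(type);
    }

    static boolean anyRAConnectionFactory(String type, String... raApps) {
        for (String name : raApps) {
            String appInfo = registry.get(name); // may be null on this cluster
            // Null-check before dereferencing avoids the NPE described above.
            if (appInfo != null && isRAConnectionFactory(type, appInfo)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        registry.put("myRA", "javax.resource.cci.ConnectionFactory");
        // "otherRA" is deliberately absent, simulating an RA deployed only
        // to a different cluster; without the guard this would throw an NPE.
        System.out.println(anyRAConnectionFactory("ConnectionFactory", "otherRA", "myRA"));
    }
}
```

This prints `true`: the missing registry entry is skipped instead of triggering the NPE.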
#### Environment
GF 3.1.2.2 Multi-clustered domain. | 1.0 | 3.1.2.2 Deployment of applications to multi-cluster domain fails if custom resource adapter not deployed to all clusters in domain. - 1. Create a 2 cluster domain
2. Deploy a custom resource adapter to cluster1.
3. Deploy an application containing an EJB with a @Resource annotation to cluster2, and an NPE is thrown when evaluating the appInfo.getMetaData expression:
```
if (isRAConnectionFactory(type, appInfo.getMetaData(Application.class)))
```
from the following method:
```
public static boolean isRAConnectionFactory(Habitat habitat,
String type, Application thisApp) {
// first check if this is a connection factory defined in a resource
// adapter in this application
if (isRAConnectionFactory(type, thisApp)) {
return true;
}
// then check if this is a connection factory defined in a standalone
// resource adapter
Applications applications = habitat.getComponent(Applications.class);
if (applications != null) {
List<com.sun.enterprise.config.serverbeans.Application> raApps = applications.getApplicationsWithSnifferType(com.sun.enterprise.config.serverbeans.Application.CONNECTOR_SNIFFER_TYPE, true);
ApplicationRegistry appRegistry = habitat.getComponent(ApplicationRegistry.class);
for (com.sun.enterprise.config.serverbeans.Application raApp : raApps) {
ApplicationInfo appInfo = appRegistry.get(raApp.getName());
if (isRAConnectionFactory(type, appInfo.getMetaData(Application.class))) {
return true;
}
}
}
return false;
}
```
The code assumes that the appInfo is available on all clusters, but this is not the case for clusters where the custom RA is not deployed.
My workaround has been to deploy the custom RA to target all clusters in the domain so that the required info is available to prevent the NPE from arising. An alternative is to put a null-check in the condition, but I was not sure if this should be fixed further up the stack, or if there would be side-effects to doing so.
#### Environment
GF 3.1.2.2 Multi-clustered domain. | non_defect | deployment of applications to multi cluster domain fails if custom resource adapter not deployed to all clusters in domain create a cluster domain deploy a custom resource adapter to deploy an application containing an ejb with a resource annotation to the and an npe is thrown when evaluating the appinfo getmetadata expression if israconnectionfactory type appinfo getmetadata application class from the following method public static boolean israconnectionfactory habitat habitat string type application thisapp first check if this is a connection factory defined in a resource adapter in this application if israconnectionfactory type thisapp return true then check if this is a connection factory defined in a standalone resource adapter applications applications habitat getcomponent applications class if applications null list raapps applications getapplicationswithsniffertype com sun enterprise config serverbeans application connector sniffer type true applicationregistry appregistry habitat getcomponent applicationregistry class for com sun enterprise config serverbeans application raapp raapps applicationinfo appinfo appregistry get raapp getname if israconnectionfactory type appinfo getmetadata application class return true return false the code assumes that the appinfo is available on all clusters but this does not appear to be the case for cluster where the custom ra is deployed my work around has been to deploy the custom ra to target all clusters in the domain so that the required info is available to prevent the npe from arising an alternative is to put a null check is the condition but i was not sure if this should be fixed further up stack or if there would be side effects to doing so environment gf multi clustered domain | 0 |
30,779 | 14,673,941,337 | IssuesEvent | 2020-12-30 14:15:58 | gramps-project/gramps-webapi | https://api.github.com/repos/gramps-project/gramps-webapi | closed | Very slow relationship calculation | performance | Starting to work on a timeline view, I hit the issue that the person timeline endpoint is *very* slow on my tree. It takes more than 20 seconds to fetch my timeline (with default arguments), which makes it unusable in practice. I am pretty sure this is a problem within Gramps, not of our timeline code...
Most of the time is spent calculating the relationship between my 2 sisters and me, which each takes almost 10 seconds (the result being `Sister`). Something must be wrong there. Snakeviz shows that, for a single such relationship calculation, it instantiates more `Person` and `Family` objects than are present in the database.
Has anyone else encountered such problems? | True | Very slow relationship calculation - Starting to work on a timeline view, I hit the issue that the person timeline endpoint is *very* slow on my tree. It takes more than 20 seconds to fetch my timeline (with default arguments), which makes it unusable in practice. I am pretty sure this is a problem within Gramps, not of our timeline code...
Most of the time is spent calculating the relationship between my 2 sisters and me, which each takes almost 10 seconds (the result being `Sister`). Something must be wrong there. Snakeviz shows that, for a single such relationship calculation, it instantiates more `Person` and `Family` objects than are present in the database.
Has anyone else encountered such problems? | non_defect | very slow relationship calculation starting to work on a timeline view i hit the issue that the person timeline endpoint is very slow on my tree it takes more than seconds to fetch my timeline with default arguments which makes it unusable in practice i am pretty sure this is a problem within gramps not of our timeline code most of the time is spent calculating the relationship between my sisters and me which each takes almost seconds the result being sister something must be wrong there snakeviz shows that for a single such relationship calculation it instantiates more person and family objects than are present in the database has anyone else encountered such problems | 0 |
48,785 | 13,184,740,203 | IssuesEvent | 2020-08-12 20:00:28 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | I3Time does not correctly add and subtract across year boundries (Trac #204) | Incomplete Migration Migrated from Trac dataclasses defect | <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/204
, reported by kjmeagher and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2010-06-02T16:24:15",
"description": "MAX_DAQTIME in I3Time.h is incorrect and so adding and subtracting across year boundries does not work. Unit test \"year_transitions\" added to I3TimeTest.cxx.\n\n\nIn [2]: t1=dataclasses.I3Time()\n\nIn [3]: t2=dataclasses.I3Time()\n\nIn [4]: t1.SetUTCCalDate(2007,12,31,23,59,59,0)\n\nIn [5]: t2.SetUTCCalDate(2008,1,1,0,0,0,0)\n\nIn [6]: ( t1+icetray.I3Units.s ) == t2\n\nOut[6]: False\n\nIn [7]: t1\n\nOut[7]: I3Time(2007,315359990000000000)\n\nIn [8]: t2\n\nOut[8]: I3Time(2008,0)\n\nIn [9]: print t1\n\n2007-12-31 23:59:59.000,000,000,0 UTC\n\nIn [10]: print t2\n\n2008-01-01 00:00:00.000,000,000,0 UTC\n\nIn [11]: print t1+icetray.I3Units.s\n\n2008-01-01 00:00:00.000,000,000,0 UTC\n\nIn [12]: t1+icetray.I3Units.s\n\nOut[12]: I3Time(2007,315360000000000000)\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1275495855000000",
"component": "dataclasses",
"summary": "I3Time does not correctly add and subtract across year boundries",
"priority": "normal",
"keywords": "",
"time": "2010-04-14T15:13:51",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
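The expected arithmetic in the description can be illustrated with `java.time` (illustrative only; judging from the DAQ values above, I3Time itself appears to count time in tenths of nanoseconds since the start of the UTC year):

```java
import java.time.LocalDateTime;

public class YearBoundarySketch {
    public static void main(String[] args) {
        LocalDateTime t1 = LocalDateTime.of(2007, 12, 31, 23, 59, 59);
        LocalDateTime t2 = LocalDateTime.of(2008, 1, 1, 0, 0, 0);
        // Adding one second to the last second of 2007 must land exactly on
        // the first instant of 2008, the transition the ticket's new
        // "year_transitions" unit test covers.
        System.out.println(t1.plusSeconds(1).equals(t2));
    }
}
```

This prints `true`; the equivalent I3Time comparison in the report returned `False`.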
| 1.0 | I3Time does not correctly add and subtract across year boundries (Trac #204) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/204
, reported by kjmeagher and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2010-06-02T16:24:15",
"description": "MAX_DAQTIME in I3Time.h is incorrect and so adding and subtracting across year boundries does not work. Unit test \"year_transitions\" added to I3TimeTest.cxx.\n\n\nIn [2]: t1=dataclasses.I3Time()\n\nIn [3]: t2=dataclasses.I3Time()\n\nIn [4]: t1.SetUTCCalDate(2007,12,31,23,59,59,0)\n\nIn [5]: t2.SetUTCCalDate(2008,1,1,0,0,0,0)\n\nIn [6]: ( t1+icetray.I3Units.s ) == t2\n\nOut[6]: False\n\nIn [7]: t1\n\nOut[7]: I3Time(2007,315359990000000000)\n\nIn [8]: t2\n\nOut[8]: I3Time(2008,0)\n\nIn [9]: print t1\n\n2007-12-31 23:59:59.000,000,000,0 UTC\n\nIn [10]: print t2\n\n2008-01-01 00:00:00.000,000,000,0 UTC\n\nIn [11]: print t1+icetray.I3Units.s\n\n2008-01-01 00:00:00.000,000,000,0 UTC\n\nIn [12]: t1+icetray.I3Units.s\n\nOut[12]: I3Time(2007,315360000000000000)\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1275495855000000",
"component": "dataclasses",
"summary": "I3Time does not correctly add and subtract across year boundries",
"priority": "normal",
"keywords": "",
"time": "2010-04-14T15:13:51",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
| defect | does not correctly add and subtract across year boundries trac migrated from reported by kjmeagher and owned by blaufuss json status closed changetime description max daqtime in h is incorrect and so adding and subtracting across year boundries does not work unit test year transitions added to cxx n n nin dataclasses n nin dataclasses n nin setutccaldate n nin setutccaldate n nin icetray s n nout false n nin n nout n nin n nout n nin print n utc n nin print n utc n nin print icetray s n utc n nin icetray s n nout n reporter kjmeagher cc resolution fixed ts component dataclasses summary does not correctly add and subtract across year boundries priority normal keywords time milestone owner blaufuss type defect | 1 |
40,147 | 9,855,647,279 | IssuesEvent | 2019-06-19 19:57:25 | openanthem/nimbus-core | https://api.github.com/repos/openanthem/nimbus-core | reopened | Notes captured by users in the grid are wrapping and running down into multiple entries in the grid | Defect Open | Notes captured by users in the grid are wrapping and running down into multiple entries in the grid
# Issue Details
**Type of Issue** (check one with "X")
```
[X] Bug Report => Please search GitHub for a similar issue or PR before submitting
[ ] Feature Request => Please ensure feature is not already in progress
[ ] Support Request => Please do not submit support requests here, instead see: https://discourse.oss.antheminc.com/
```
## Current Behavior
When the users edit a line item in the grid and add a long note entry, the note gets wrapped and overlaps into the next line item in the grid, making it unreadable.
## Expected Behavior
The note entries should not be wrapped and yet be fully readable on hovering the mouse pointer over the note box.
## How to Reproduce the Issue
### Steps to Reproduce
1. Login into the application
2. Edit one of the line items in the grid and add a long note and Save it
3. Check the note entry getting wrapped and overlapping into the next line items seen down below.
# Environment Details
* **Nimbus Version:** **1.3.1.M1**
* **Browser:** : **Google Chrome**
<!--
Please list all browsers where this could be reproduced.
-->

| 1.0 | Notes captured by users in the grid are wrapping and running down into multiple entries in the grid - Notes captured by users in the grid are wrapping and running down into multiple entries in the grid
# Issue Details
**Type of Issue** (check one with "X")
```
[X] Bug Report => Please search GitHub for a similar issue or PR before submitting
[ ] Feature Request => Please ensure feature is not already in progress
[ ] Support Request => Please do not submit support requests here, instead see: https://discourse.oss.antheminc.com/
```
## Current Behavior
When the users edit a line item in the grid and add a long note entry, the note gets wrapped and overlaps into the next line item in the grid, making it unreadable.
## Expected Behavior
The note entries should not be wrapped and yet be fully readable on hovering the mouse pointer over the note box.
## How to Reproduce the Issue
### Steps to Reproduce
1. Login into the application
2. Edit one of the line items in the grid and add a long note and Save it
3. Check the note entry getting wrapped and overlapping into the next line items seen down below.
# Environment Details
* **Nimbus Version:** **1.3.1.M1**
* **Browser:** : **Google Chrome**
<!--
Please list all browsers where this could be reproduced.
-->

| defect | notes captured by users in the grid are wrapping and running down into multiple entries in the grid notes captured by users in the grid are wrapping and running down into multiple entries in the grid issue details type of issue check one with x bug report please search github for a similar issue or pr before submitting feature request please ensure feature is not already in progress support request please do not submit support requests here instead see current behavior when the users edit a line item in the grid and add a long note entry the note gets wrapped and overlaps into the next line item in the grid making it un readable expected behavior the note entries should not be wrapped and yet be fully readable on hovering the mouse pointer over the the note box how to reproduce the issue steps to reproduce login into the application edit one of the line items in the grid and add a long note and save it check the note entry getting wrapped and overlapping into the next line items seen down below environment details nimbus version browser google chrome please list all browsers where this could be reproduced | 1 |
392,201 | 11,584,795,037 | IssuesEvent | 2020-02-22 19:27:11 | anitab-org/mentorship-backend | https://api.github.com/repos/anitab-org/mentorship-backend | closed | Use Amazon Relational Database Service (Amazon RDS) for persistence. | Category: Coding Priority: HIGH | ## Description
As a developer
I need to use the Amazon Relational Database Service (Amazon RDS),
so that we can have DB persistence even after redeploying the app.
## Acceptance Criteria
### Update [Required]
- [ ] DB persistence even after redeploying the app.
- [ ] All environments have an Amazon RDS database.
## Definition of Done
- [ ] All of the required items are completed.
- [x] Approval by 1 mentor.
## Estimation
4 hours
| 1.0 | Use Amazon Relational Database Service (Amazon RDS) for persistence. - ## Description
As a developer
I need to use the Amazon Relational Database Service (Amazon RDS),
so that we can have DB persistence even after redeploying the app.
## Acceptance Criteria
### Update [Required]
- [ ] DB persistence even after redeploying the app.
- [ ] All environments have an Amazon RDS database.
## Definition of Done
- [ ] All of the required items are completed.
- [x] Approval by 1 mentor.
## Estimation
4 hours
| non_defect | use amazon relational database service amazon rds for persistence description as a developer i need to use the amazon relational database service amazon rds so that we can have db persistence even after redeploying the app acceptance criteria update db persistence even after redeploying the app all environments have an amazon rds database definition of done all of the required items are completed approval by mentor estimation hours | 0 |
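The persistence goal in the record above is usually met by pointing the app at an external database instead of one that lives inside the deployed container. A minimal sketch, purely illustrative — the environment-variable names and the `mysql+pymysql` driver are assumptions for this example, not the project's actual configuration — of building a SQLAlchemy-style database URI for an Amazon RDS instance:

```python
# Illustrative only: build a database URI for an external Amazon RDS instance
# from environment-style settings, so the data survives app redeploys.
# DB_USER/DB_PASSWORD/DB_HOST/DB_PORT/DB_NAME are assumed variable names.
def rds_uri(env):
    return "mysql+pymysql://{user}:{password}@{host}:{port}/{name}".format(
        user=env["DB_USER"],
        password=env["DB_PASSWORD"],
        host=env["DB_HOST"],  # e.g. <instance>.<id>.us-east-1.rds.amazonaws.com
        port=env.get("DB_PORT", "3306"),  # MySQL's default port if unset
        name=env["DB_NAME"],
    )

print(rds_uri({"DB_USER": "app", "DB_PASSWORD": "s3cret",
               "DB_HOST": "mydb.abc123.us-east-1.rds.amazonaws.com",
               "DB_NAME": "mentorship"}))
```

With such a URI configured per environment, redeploying the app leaves the data on the RDS instance untouched.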
70,887 | 23,359,490,843 | IssuesEvent | 2022-08-10 10:22:17 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | "Great, that'll help people know it's you" tooltip is shown all the time | T-Defect | ### Steps to reproduce
1. Have an avatar on your account
2. Go to the home screen (Ctrl+Alt+H)
### Outcome
#### What did you expect?
No tooltip, since any onboarding I did was years ago.
#### What happened instead?
A tooltip that only makes sense in the context of onboarding is shown:

### Operating system
NixOS unstable
### Browser information
Firefox 102.0.1
### URL for webapp
develop.element.io
### Application version
Element version: 822e262a932c-react-e63072e21fa7-js-3e37c7426420 Olm version: 3.2.12
### Homeserver
Synapse 1.63.1
### Will you send logs?
No | 1.0 | "Great, that'll help people know it's you" tooltip is shown all the time - ### Steps to reproduce
1. Have an avatar on your account
2. Go to the home screen (Ctrl+Alt+H)
### Outcome
#### What did you expect?
No tooltip, since any onboarding I did was years ago.
#### What happened instead?
A tooltip that only makes sense in the context of onboarding is shown:

### Operating system
NixOS unstable
### Browser information
Firefox 102.0.1
### URL for webapp
develop.element.io
### Application version
Element version: 822e262a932c-react-e63072e21fa7-js-3e37c7426420 Olm version: 3.2.12
### Homeserver
Synapse 1.63.1
### Will you send logs?
No | defect | great that ll help people know it s you tooltip is shown all the time steps to reproduce have an avatar on your account go to the home screen ctrl alt h outcome what did you expect no tooltip since any onboarding i did was years ago what happened instead a tooltip that only makes sense in the context of onboarding is shown operating system nixos unstable browser information firefox url for webapp develop element io application version element version react js olm version homeserver synapse will you send logs no | 1 |
308,500 | 23,251,336,757 | IssuesEvent | 2022-08-04 04:16:52 | odpf/compass | https://api.github.com/repos/odpf/compass | closed | Increase coverage to 87% | documentation | **Is your feature request related to a problem? Please describe.**
We have integrated compass with coverall and managed to reach the 80% coverage threshold. We could aim for higher coverage of 87% for compass.
**Describe the solution you'd like**
Increase coverage to 87%
| 1.0 | Increase coverage to 87% - **Is your feature request related to a problem? Please describe.**
We have integrated compass with coverall and managed to reach the 80% coverage threshold. We could aim for higher coverage of 87% for compass.
**Describe the solution you'd like**
Increase coverage to 87%
| non_defect | increase coverage to is your feature request related to a problem please describe we have integrated compass with coverall and managed to reach coverage threshold we could aim higher coverage to for compass describe the solution you d like increase coverage to | 0
51,092 | 13,188,106,935 | IssuesEvent | 2020-08-13 05:34:26 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | simprod-scripts MuonGun module not saving S-Frames. (Trac #1966) | Migrated from Trac combo simulation defect | IPModule only writes Q-Frames. S-Frames are needed to make MuonGun usable. In principle only one S-Frame is needed per sub-directory but they don't take up that much space so we can just include them with each file.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1966">https://code.icecube.wisc.edu/ticket/1966</a>, reported by juancarlos and owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2017-03-17T21:04:43",
"description": "IPModule only writes Q-Frames. S-Frames are needed to make MuonGun usable. In principle only one S-Frame is needed per sub-directory but they don't take up that much space so we can just include them with each file.",
"reporter": "juancarlos",
"cc": "",
"resolution": "fixed",
"_ts": "1489784683913282",
"component": "combo simulation",
"summary": "simprod-scripts MuonGun module not saving S-Frames.",
"priority": "major",
"keywords": "simprod-scripts",
"time": "2017-03-17T20:59:05",
"milestone": "",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
| 1.0 | simprod-scripts MuonGun module not saving S-Frames. (Trac #1966) - IPModule only writes Q-Frames. S-Frames are needed to make MuonGun usable. In principle only one S-Frame is needed per sub-directory but they don't take up that much space so we can just include them with each file.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1966">https://code.icecube.wisc.edu/ticket/1966</a>, reported by juancarlos and owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2017-03-17T21:04:43",
"description": "IPModule only writes Q-Frames. S-Frames are needed to make MuonGun usable. In principle only one S-Frame is needed per sub-directory but they don't take up that much space so we can just include them with each file.",
"reporter": "juancarlos",
"cc": "",
"resolution": "fixed",
"_ts": "1489784683913282",
"component": "combo simulation",
"summary": "simprod-scripts MuonGun module not saving S-Frames.",
"priority": "major",
"keywords": "simprod-scripts",
"time": "2017-03-17T20:59:05",
"milestone": "",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
| defect | simprod scripts muongun module not saving s frames trac ipmodule only writes q frames s frames are needed to make muongun usable in principle only one s frame is needed per sub directory but they don t take up that much space so we can just include them with each file migrated from json status closed changetime description ipmodule only writes q frames s frames are needed to make muongun usable in principle only one s frame is needed per sub directory but they don t take up that much space so we can just include them with each file reporter juancarlos cc resolution fixed ts component combo simulation summary simprod scripts muongun module not saving s frames priority major keywords simprod scripts time milestone owner juancarlos type defect | 1
141,074 | 5,428,890,519 | IssuesEvent | 2017-03-03 16:58:42 | SciSpike/yaktor-issues | https://api.github.com/repos/SciSpike/yaktor-issues | closed | Enhance mongo configuration & mongoose initializer to support replica sets & mongos | platform:nodejs priority:high status:reviewNeeded team:core type:enhancement | The current config options for mongo only allow the specification of a single host & port. In order to use mongo replica sets and `mongos` config servers, you need to be able to specify multiple `host:port` entries. | 1.0 | Enhance mongo configuration & mongoose initializer to support replica sets & mongos - The current config options for mongo only allow the specification of a single host & port. In order to use mongo replica sets and `mongos` config servers, you need to be able to specify multiple `host:port` entries. | non_defect | enhance mongo configuration mongoose initializer to support replica sets mongos the current config options for mongo only allow the specification of a single host port in order to use mongo replica sets and mongos config servers you need to be able to specify multiple host port entries | 0 |
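The enhancement in the record above amounts to joining several `host:port` pairs into a single connection URI, which is what replica sets and `mongos` routers require. A minimal sketch, in Python purely for illustration — the yaktor project itself is Node.js, and the host names, database name, and `rs0` replica-set name below are made-up examples, not the project's configuration:

```python
# Illustrative only: combine multiple (host, port) pairs into one MongoDB
# connection URI of the standard "mongodb://h1:p1,h2:p2,.../db" form, with an
# optional replicaSet option appended as a query parameter.
def build_mongo_uri(hosts, db, replica_set=None):
    seeds = ",".join(f"{host}:{port}" for host, port in hosts)
    uri = f"mongodb://{seeds}/{db}"
    if replica_set:
        uri += f"?replicaSet={replica_set}"
    return uri

print(build_mongo_uri([("db1", 27017), ("db2", 27017), ("db3", 27017)],
                      "yaktor", "rs0"))
```

A driver such as Mongoose accepts a multi-host URI like this directly, so exposing a list of `host:port` entries in the config is enough to support both replica sets and `mongos`.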
1,302 | 2,603,751,191 | IssuesEvent | 2015-02-24 17:44:10 | chrsmith/bwapi | https://api.github.com/repos/chrsmith/bwapi | opened | BWAPI fails to load AIModule or crashes | auto-migrated Type-Defect | ```
When I build the ExampleAIModule in Debug mode BWAPI fails to load it.
The LoadLibrary call in BWAPI Returns NULL and GetLastError says 998:
Invalid memory access. Either something is wrong with dynamic linking or
the BWAPI::Init() call in the DLL makes problems:
http://support.microsoft.com/kb/196069
Sometimes there also seems to be a problem with BWTA at readMap().
When I used the debug BWAPI.dll with the Release BWAPI.lib together with
BWTA and ExampleAIModule in Release everything crashed on game start (maybe
readMap()??) I used the VC++ debugger and it stopped at GameImpl.cpp line
1007 after the ASM call: popad (GameImpl::printEx
)
```
-----
Original issue reported on code.google.com by `tren...@gmail.com` on 19 Mar 2010 at 1:20 | 1.0 | BWAPI fails to load AIModule or crashes - ```
When I build the ExampleAIModule in Debug mode BWAPI fails to load it.
The LoadLibrary call in BWAPI Returns NULL and GetLastError says 998:
Invalid memory access. Either something is wrong with dynamic linking or
the BWAPI::Init() call in the DLL makes problems:
http://support.microsoft.com/kb/196069
Sometimes there also seems to be a problem with BWTA at readMap().
When I used the debug BWAPI.dll with the Release BWAPI.lib together with
BWTA and ExampleAIModule in Release everything crashed on game start (maybe
readMap()??) I used the VC++ debugger and it stopped at GameImpl.cpp line
1007 after the ASM call: popad (GameImpl::printEx
)
```
-----
Original issue reported on code.google.com by `tren...@gmail.com` on 19 Mar 2010 at 1:20 | defect | bwapi fails to load aimodule or crashes when i build the exampleaimodule in debug mode bwapi fails to load it the loadlibrary call in bwapi returns null and getlasterror says invalid memory access either something is wrong with dynamic linking or the bwapi init call in the dll makes problems sometimes there also seems to be a problem with bwta at readmap when i used the debug bwapi dll with the release bwapi lib together with bwta and exampleaimodule in release everything crashed on game start maybe readmap i used the vc debugger and it stopped at gameimpl cpp line after the asm call popad gameimpl printex original issue reported on code google com by tren gmail com on mar at | 1 |
22,151 | 3,604,347,791 | IssuesEvent | 2016-02-03 22:30:51 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | No operators in outline | analyzer-server area-analyzer Priority-Medium Type-Defect | Enter the following in a Dart file in IntelliJ:
class C {
int get getter => 0;
C operator +(C other) => null;
bool operator ==(C other) => false;
int method() => 0;
}
Note that 'getter' and 'method' both appear in the outline, but the operators do not. I think that they ought to. I don't know whether this is a server or IntelliJ issue, but I expect it's a server issue. | 1.0 | No operators in outline - Enter the following in a Dart file in IntelliJ:
class C {
int get getter => 0;
C operator +(C other) => null;
bool operator ==(C other) => false;
int method() => 0;
}
Note that 'getter' and 'method' both appear in the outline, but the operators do not. I think that they ought to. I don't know whether this is a server or IntelliJ issue, but I expect it's a server issue. | defect | no operators in outline enter the following in a dart file in intellij class c int get getter c operator c other null bool operator c other false int method note that getter and method both appear in the outline but the operators do not i think that they ought to i don t know whether this is a server or intellij issue but i expect it s a server issue | 1
66,644 | 20,387,729,630 | IssuesEvent | 2022-02-22 08:55:19 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Render process crash when scrolling up on Windows 7 | T-Defect Z-Platform-Specific X-Cannot-Reproduce S-Critical A-Timeline Z-Upstream O-Uncommon | <!-- A picture's worth a thousand words: PLEASE INCLUDE A SCREENSHOT :P -->
### Description
Screen becomes white when scrolling up
### Steps to reproduce
Open a room and scroll up for a long time. Then the screen will become white. (In test I used an encrypted room, other rooms not tested)
Just click the scroll bar and move up and don't release then soon it will be whiteboard.

Logs being sent: yes
### Version information
- **Platform**: web (in-browser) or desktop?
- desktop
For the web app:
- **Browser**: Chrome, Firefox, Safari, Edge? which version?
- **OS**: Windows, macOS, Ubuntu, Arch Linux, etc?
- **URL**: develop.element.io / app.element.io / somewhere else? If a private server, what version of Element Web?
For the desktop app:
- **OS**: Windows, macOS, Ubuntu, Arch Linux, etc?
- Win7 x64
- **Version**: 1.x.y
- 1.7.27
| 1.0 | Render process crash when scrolling up on Windows 7 - <!-- A picture's worth a thousand words: PLEASE INCLUDE A SCREENSHOT :P -->
### Description
Screen becomes white when scrolling up
### Steps to reproduce
Open a room and scroll up for a long time. Then the screen will become white. (In test I used an encrypted room, other rooms not tested)
Just click the scroll bar and move up and don't release then soon it will be whiteboard.

Logs being sent: yes
### Version information
- **Platform**: web (in-browser) or desktop?
- desktop
For the web app:
- **Browser**: Chrome, Firefox, Safari, Edge? which version?
- **OS**: Windows, macOS, Ubuntu, Arch Linux, etc?
- **URL**: develop.element.io / app.element.io / somewhere else? If a private server, what version of Element Web?
For the desktop app:
- **OS**: Windows, macOS, Ubuntu, Arch Linux, etc?
- Win7 x64
- **Version**: 1.x.y
- 1.7.27
| defect | render process crash when scrolling up on windows this is a bug report template by following the instructions below and filling out the sections with your information you will help the us to get all the necessary data to fix your issue you can also preview your report before submitting it you may remove sections that aren t relevant to your particular case text between marks will be invisible in the report description screen become white when scroll up steps to reproduce for bugs list the steps that reproduce the bug using hyphens as bullet points open a room and scroll up for a long time then the screen will become white in test i used a encrypted room other rooms not tested just click the scroll bar and move up and don t release then soon it will be whiteboard please send us logs for your bug report they re very important for bugs which are hard to reproduce to do this create this issue then go to your account settings and click submit debug logs from the help about tab logs being sent yes version information platform web in browser or desktop desktop for the web app browser chrome firefox safari edge which version os windows macos ubuntu arch linux etc url develop element io app element io somewhere else if a private server what version of element web for the desktop app os windows macos ubuntu arch linux etc version x y | 1 |
62,072 | 17,023,844,732 | IssuesEvent | 2021-07-03 04:08:25 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | [Trac] "/newticket?component=website" does not cause the proper component to be preselected | Component: admin Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 5.00pm, Monday, 10th December 2012]**
According to Trac documentation it should work:
http://trac.edgewall.org/wiki/TracTickets#PresetValuesforNewTickets
Not sure if this is a bug in Trac or trac.osm.org configuration.
The use case is that I wanted to create "add ticket" links on the Trac home page for the main components (website, mapnik, nominatim etc) and it does not work. | 1.0 | [Trac] "/newticket?component=website" does not cause the proper component to be preselected - **[Submitted to the original trac issue database at 5.00pm, Monday, 10th December 2012]**
According to Trac documentation it should work:
http://trac.edgewall.org/wiki/TracTickets#PresetValuesforNewTickets
Not sure if this is a bug in Trac or trac.osm.org configuration.
The use case is that I wanted to create "add ticket" links on the Trac home page for the main components (website, mapnik, nominatim etc) and it does not work. | defect | newticket component website does not cause the proper component to be preselected according to trac documentation it should work not sure if this is a bug in trac or trac osm org configuration the use case is that i wanted to create add ticket links on the trac home page for the main components website mapnik nominatim etc and it does not work | 1 |
24,858 | 24,390,911,350 | IssuesEvent | 2022-10-04 15:09:32 | forcedotcom/salesforcedx-vscode | https://api.github.com/repos/forcedotcom/salesforcedx-vscode | closed | Allow selection of multiple files/folders on Deploy/Retrieve source | type:feedback area:usability area:deploy/retrieve | ### Summary
If I select multiple files and right click>Deploy Source to org or Retrieve source from org, it only deploys the file that I right clicked on and not all files that were selected.
The command output shows only one file that it is trying to source:deploy, so I think it is possibly an extension issue.
### Steps To Reproduce:
1. Connect to sandbox
2. Select two files using control click or shift click
3. Right click and select "deploy source to org" or "retrieve source from org"
### Expected result
Both files are deployed/retrieved
### Actual result
Only the file that is right clicked on is retrieved
### Additional information
It seems like there is a multiple file command for at least the deploy part (sfdx.force.source.deploy.multiple.source.paths) but it isn't mapped to a right click action
**VS Code Version**: 1.32.3
**SFDX CLI Version**: 7.1.4-79f97a7df8
**OS and version**: Windows 10 1803 | True | Allow selection of multiple files/folders on Deploy/Retrieve source - ### Summary
If I select multiple files and right click>Deploy Source to org or Retrieve source from org, it only deploys the file that I right clicked on and not all files that were selected.
The command output shows only one file that it is trying to source:deploy, so I think it is possibly an extension issue.
### Steps To Reproduce:
1. Connect to sandbox
2. Select two files using control click or shift click
3. Right click and select "deploy source to org" or "retrieve source from org"
### Expected result
Both files are deployed/retrieved
### Actual result
Only the file that is right clicked on is retrieved
### Additional information
It seems like there is a multiple file command for at least the deploy part (sfdx.force.source.deploy.multiple.source.paths) but it isn't mapped to a right click action
**VS Code Version**: 1.32.3
**SFDX CLI Version**: 7.1.4-79f97a7df8
**OS and version**: Windows 10 1803 | non_defect | allow selection of multiple files folders on deploy retrieve source summary if i select multiple files and right click deploy source to org or retrieve source from org it only deploys the file that i right clicked on and not all files that were selected the command output shows only one file that it is trying to source deploy so i think it is possibly an extension issue steps to reproduce connect to sandbox select two files using control click or shift click right click and select deploy source to org or retrieve source from org expected result both files are deployed retrieved actual result only the file that is right clicked on is retrieved additional information it seems like there is a multiple file command for at least the deploy part sfdx force source deploy multiple source paths but it isn t mapped to a right click action vs code version sfdx cli version os and version windows | 0 |
135,845 | 18,722,141,899 | IssuesEvent | 2021-11-03 13:00:29 | KDWSS/dd-trace-java | https://api.github.com/repos/KDWSS/dd-trace-java | opened | CVE-2018-1273 (High) detected in spring-data-commons-1.13.9.RELEASE.jar, spring-data-commons-2.0.0.RELEASE.jar | security vulnerability | ## CVE-2018-1273 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-data-commons-1.13.9.RELEASE.jar</b>, <b>spring-data-commons-2.0.0.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-data-commons-1.13.9.RELEASE.jar</b></p></summary>
<p>Global parent pom.xml to be used by Spring Data modules</p>
<p>Library home page: <a href="http://www.spring.io/spring-data">http://www.spring.io/spring-data</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/appsec/weblog/weblog-spring-app/weblog-spring-app.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework.data/spring-data-commons/1.13.9.RELEASE/3910a598235d2e9c1ca56f34c5e62bb5ce23778/spring-data-commons-1.13.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-jpa-1.5.9.RELEASE.jar (Root Library)
- spring-data-jpa-1.11.9.RELEASE.jar
- :x: **spring-data-commons-1.13.9.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-data-commons-2.0.0.RELEASE.jar</b></p></summary>
<p>Global parent pom.xml to be used by Spring Data modules</p>
<p>Library home page: <a href="http://www.spring.io/spring-data">http://www.spring.io/spring-data</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/elasticsearch/transport-5.3/transport-5.3.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework.data/spring-data-commons/2.0.0.RELEASE/b97dad2d501c2baf49b497733bb19f7a05d52142/spring-data-commons-2.0.0.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-data-elasticsearch-3.0.0.RELEASE.jar (Root Library)
- :x: **spring-data-commons-2.0.0.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Data Commons, versions prior to 1.13 to 1.13.10, 2.0 to 2.0.5, and older unsupported versions, contain a property binder vulnerability caused by improper neutralization of special elements. An unauthenticated remote malicious user (or attacker) can supply specially crafted request parameters against Spring Data REST backed HTTP resources or using Spring Data's projection-based request payload binding that can lead to a remote code execution attack.
<p>Publish Date: 2018-04-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1273>CVE-2018-1273</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1273">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1273</a></p>
<p>Release Date: 2018-04-11</p>
<p>Fix Resolution: 1.13.11.RELEASE,2.0.6.RELEASE</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.data","packageName":"spring-data-commons","packageVersion":"1.13.9.RELEASE","packageFilePaths":["/dd-java-agent/appsec/weblog/weblog-spring-app/weblog-spring-app.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-data-jpa:1.5.9.RELEASE;org.springframework.data:spring-data-jpa:1.11.9.RELEASE;org.springframework.data:spring-data-commons:1.13.9.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.13.11.RELEASE,2.0.6.RELEASE"},{"packageType":"Java","groupId":"org.springframework.data","packageName":"spring-data-commons","packageVersion":"2.0.0.RELEASE","packageFilePaths":["/dd-java-agent/instrumentation/elasticsearch/transport-5.3/transport-5.3.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.data:spring-data-elasticsearch:3.0.0.RELEASE;org.springframework.data:spring-data-commons:2.0.0.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.13.11.RELEASE,2.0.6.RELEASE"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-1273","vulnerabilityDetails":"Spring Data Commons, versions prior to 1.13 to 1.13.10, 2.0 to 2.0.5, and older unsupported versions, contain a property binder vulnerability caused by improper neutralization of special elements. 
An unauthenticated remote malicious user (or attacker) can supply specially crafted request parameters against Spring Data REST backed HTTP resources or using Spring Data\u0027s projection-based request payload binding hat can lead to a remote code execution attack.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1273","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-1273 (High) detected in spring-data-commons-1.13.9.RELEASE.jar, spring-data-commons-2.0.0.RELEASE.jar - ## CVE-2018-1273 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-data-commons-1.13.9.RELEASE.jar</b>, <b>spring-data-commons-2.0.0.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-data-commons-1.13.9.RELEASE.jar</b></p></summary>
<p>Global parent pom.xml to be used by Spring Data modules</p>
<p>Library home page: <a href="http://www.spring.io/spring-data">http://www.spring.io/spring-data</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/appsec/weblog/weblog-spring-app/weblog-spring-app.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework.data/spring-data-commons/1.13.9.RELEASE/3910a598235d2e9c1ca56f34c5e62bb5ce23778/spring-data-commons-1.13.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-jpa-1.5.9.RELEASE.jar (Root Library)
- spring-data-jpa-1.11.9.RELEASE.jar
- :x: **spring-data-commons-1.13.9.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-data-commons-2.0.0.RELEASE.jar</b></p></summary>
<p>Global parent pom.xml to be used by Spring Data modules</p>
<p>Library home page: <a href="http://www.spring.io/spring-data">http://www.spring.io/spring-data</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/elasticsearch/transport-5.3/transport-5.3.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework.data/spring-data-commons/2.0.0.RELEASE/b97dad2d501c2baf49b497733bb19f7a05d52142/spring-data-commons-2.0.0.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-data-elasticsearch-3.0.0.RELEASE.jar (Root Library)
- :x: **spring-data-commons-2.0.0.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Data Commons, versions prior to 1.13 to 1.13.10, 2.0 to 2.0.5, and older unsupported versions, contain a property binder vulnerability caused by improper neutralization of special elements. An unauthenticated remote malicious user (or attacker) can supply specially crafted request parameters against Spring Data REST backed HTTP resources or using Spring Data's projection-based request payload binding that can lead to a remote code execution attack.
<p>Publish Date: 2018-04-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1273>CVE-2018-1273</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1273">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1273</a></p>
<p>Release Date: 2018-04-11</p>
<p>Fix Resolution: 1.13.11.RELEASE,2.0.6.RELEASE</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.data","packageName":"spring-data-commons","packageVersion":"1.13.9.RELEASE","packageFilePaths":["/dd-java-agent/appsec/weblog/weblog-spring-app/weblog-spring-app.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-data-jpa:1.5.9.RELEASE;org.springframework.data:spring-data-jpa:1.11.9.RELEASE;org.springframework.data:spring-data-commons:1.13.9.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.13.11.RELEASE,2.0.6.RELEASE"},{"packageType":"Java","groupId":"org.springframework.data","packageName":"spring-data-commons","packageVersion":"2.0.0.RELEASE","packageFilePaths":["/dd-java-agent/instrumentation/elasticsearch/transport-5.3/transport-5.3.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.data:spring-data-elasticsearch:3.0.0.RELEASE;org.springframework.data:spring-data-commons:2.0.0.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.13.11.RELEASE,2.0.6.RELEASE"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-1273","vulnerabilityDetails":"Spring Data Commons, versions prior to 1.13 to 1.13.10, 2.0 to 2.0.5, and older unsupported versions, contain a property binder vulnerability caused by improper neutralization of special elements. 
An unauthenticated remote malicious user (or attacker) can supply specially crafted request parameters against Spring Data REST backed HTTP resources or using Spring Data\u0027s projection-based request payload binding hat can lead to a remote code execution attack.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1273","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in spring data commons release jar spring data commons release jar cve high severity vulnerability vulnerable libraries spring data commons release jar spring data commons release jar spring data commons release jar global parent pom xml to be used by spring data modules library home page a href path to dependency file dd trace java dd java agent appsec weblog weblog spring app weblog spring app gradle path to vulnerable library home wss scanner gradle caches modules files org springframework data spring data commons release spring data commons release jar dependency hierarchy spring boot starter data jpa release jar root library spring data jpa release jar x spring data commons release jar vulnerable library spring data commons release jar global parent pom xml to be used by spring data modules library home page a href path to dependency file dd trace java dd java agent instrumentation elasticsearch transport transport gradle path to vulnerable library home wss scanner gradle caches modules files org springframework data spring data commons release spring data commons release jar dependency hierarchy spring data elasticsearch release jar root library x spring data commons release jar vulnerable library found in head commit a href found in base branch master vulnerability details spring data commons versions prior to to to and older unsupported versions contain a property binder vulnerability caused 
by improper neutralization of special elements an unauthenticated remote malicious user or attacker can supply specially crafted request parameters against spring data rest backed http resources or using spring data s projection based request payload binding hat can lead to a remote code execution attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution release release isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter data jpa release org springframework data spring data jpa release org springframework data spring data commons release isminimumfixversionavailable true minimumfixversion release release packagetype java groupid org springframework data packagename spring data commons packageversion release packagefilepaths istransitivedependency true dependencytree org springframework data spring data elasticsearch release org springframework data spring data commons release isminimumfixversionavailable true minimumfixversion release release basebranches vulnerabilityidentifier cve vulnerabilitydetails spring data commons versions prior to to to and older unsupported versions contain a property binder vulnerability caused by improper neutralization of special elements an unauthenticated remote malicious user or attacker can supply specially crafted request parameters against spring data rest backed http resources or using spring data projection based request payload binding hat can lead to a remote code execution attack vulnerabilityurl | 0 |
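The CVSS 3 breakdowns embedded in these records map directly onto the published CVSS v3.0 base-score equations, so the quoted 9.8 can be reproduced from the listed metrics. A minimal sketch (scope Unchanged only, which is the only case these records use; weights are the constants from the FIRST.org specification):

```python
import math

# CVSS 3.0 metric weights (from the FIRST.org specification).
AV = {"Network": 0.85, "Adjacent": 0.62, "Local": 0.55, "Physical": 0.2}
AC = {"Low": 0.77, "High": 0.44}
PR_UNCHANGED = {"None": 0.85, "Low": 0.62, "High": 0.27}
UI = {"None": 0.85, "Required": 0.62}
CIA = {"None": 0.0, "Low": 0.22, "High": 0.56}

def round_up(x):
    # CVSS "round up to one decimal place" rule.
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    # Scope: Unchanged.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR_UNCHANGED[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return round_up(min(impact + exploitability, 10))

# Metrics as listed for CVE-2018-1273:
print(base_score("Network", "Low", "None", "None", "High", "High", "High"))  # 9.8
```

The same function reproduces the 8.8 (CVE-2020-10968) and 4.7 (CVE-2019-18660) scores quoted in the other records in this set.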
16,054 | 2,870,254,196 | IssuesEvent | 2015-06-07 00:40:45 | pdelia/away3d | https://api.github.com/repos/pdelia/away3d | closed | GeometryData destroy with error | auto-migrated Priority-Medium Type-Defect | #109 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:52Z
```
What steps will reproduce the problem?
1. try to destroy a GemetryData
2.
3.
What is the expected output? What do you see instead?
Should work, but instead it throws "TypeError: Error #1007: Instantiation
attempted on a non-constructor".
What version of the product are you using? On what operating system?
Lite Branch
Please provide any additional information below.
Change line 120 of GeometryData
from vertices = new null;
to vertices = null;
```
Original issue reported on code.google.com by `filipesilvestrim` on 13 Apr 2010 at 3:00 | 1.0 | GeometryData destroy with error - #109 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:52Z
```
What steps will reproduce the problem?
1. try to destroy a GemetryData
2.
3.
What is the expected output? What do you see instead?
Should work, but instead it throws "TypeError: Error #1007: Instantiation
attempted on a non-constructor".
What version of the product are you using? On what operating system?
Lite Branch
Please provide any additional information below.
Change line 120 of GeometryData
from vertices = new null;
to vertices = null;
```
Original issue reported on code.google.com by `filipesilvestrim` on 13 Apr 2010 at 3:00 | defect | geometrydata destroy with error issue by googlecodeexporter created on what steps will reproduce the problem try to destroy a gemetrydata what is the expected output what do you see instead should work but instead it throws typeerror error instantiation attempted on a non constructor what version of the product are you using on what operating system lite branch please provide any additional information below change line of geometrydata from vertices new null to vertices null original issue reported on code google com by filipesilvestrim on apr at | 1 |
67,883 | 21,219,349,590 | IssuesEvent | 2022-04-11 10:22:56 | combatopera/lagoon | https://api.github.com/repos/combatopera/lagoon | opened | lagoon_sic module that does not transform underscores | defect | currently `from lagoon import foo_bar` may import foo-bar or foo_bar depending on what's installed on your system, which is going to cause surprising behaviour. add a lagoon_sic module that does not transform underscores, and always transform underscores in the main lagoon module. also need a binary version of the new module | 1.0 | lagoon_sic module that does not transform underscores - currently `from lagoon import foo_bar` may import foo-bar or foo_bar depending on what's installed on your system, which is going to cause surprising behaviour. add a lagoon_sic module that does not transform underscores, and always transform underscores in the main lagoon module. also need a binary version of the new module | defect | lagoon sic module that does not transform underscores currently from lagoon import foo bar may import foo bar or foo bar depending on what s installed on your system which is going to cause surprising behaviour add a lagoon sic module that does not transform underscores and always transform underscores in the main lagoon module also need a binary version of the new module | 1 |
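The ambiguity described above can be sketched as follows — this is a hypothetical illustration of the lookup problem, not lagoon's actual resolution code, and the `installed` set stands in for whatever executables happen to be on PATH:

```python
def candidates(attr):
    # An attribute like "foo_bar" may legitimately mean either program name.
    names = [attr]
    dashed = attr.replace("_", "-")
    if dashed != attr:
        names.append(dashed)
    return names

def resolve(attr, installed):
    # installed: set of executable names on PATH (stubbed for the example).
    hits = [n for n in candidates(attr) if n in installed]
    if len(hits) > 1:
        # Which one wins depends on lookup order -- the surprising behaviour.
        raise LookupError(f"{attr!r} is ambiguous: {hits}")
    return hits[0] if hits else None

print(resolve("foo_bar", {"foo-bar"}))  # foo-bar
print(resolve("foo_bar", {"foo_bar"}))  # foo_bar
```

A `lagoon_sic` module as proposed would pin the first branch (no underscore transformation), making the result deterministic regardless of what else is installed.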
68,765 | 21,883,222,255 | IssuesEvent | 2022-05-19 16:00:27 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | New HazelcastInstance instantiation takes up to five minutes since Hazelcast 5.1 | Type: Defect | <!--
Thanks for reporting your issue. Please share with us the following information, to help us resolve your issue quickly and efficiently.
-->
Since we upgraded to Hazelcast 5.1, booting the application started taking a much longer time on some machines. It seems that instantiating a new instance - `Hazelcast.newHazelcastInstance()` - takes a couple of minutes on **machines with a lot of network interfaces**.
From a debugging session it looks like the area of `com.hazelcast.internal.server.tcp.LocalAddressRegistry#registerLocalAddresses` takes a long time.
Happens when I use the default configuration or other configurations.
**Expected behavior**
Instantiation of a new instance member should take a couple of seconds as it did in the past.
**To Reproduce**
I think that booting a member on a computer with multiple network interfaces, container (Docker, Podman) installations, etc., should reproduce it.
**Additional context**
- Unfortunately I did not see anything in the logs that indicate an issue.
- Hazelcast 5.1.1
- A single member or pod in the cluster
- Only members, no clients.
- AdoptOpenJDK 11
- Windows 10 | 1.0 | New HazelcastInstance instantiation takes up to five minutes since Hazelcast 5.1 - <!--
Thanks for reporting your issue. Please share with us the following information, to help us resolve your issue quickly and efficiently.
-->
Since we upgraded to Hazelcast 5.1, booting the application started taking a much longer time on some machines. It seems that instantiating a new instance - `Hazelcast.newHazelcastInstance()` - takes a couple of minutes on **machines with a lot of network interfaces**.
From a debugging session it looks like the area of `com.hazelcast.internal.server.tcp.LocalAddressRegistry#registerLocalAddresses` takes a long time.
Happens when I use the default configuration or other configurations.
**Expected behavior**
Instantiation of a new instance member should take a couple of seconds as it did in the past.
**To Reproduce**
I think that booting a member on a computer with multiple network interfaces, container (Docker, Podman) installations, etc., should reproduce it.
**Additional context**
- Unfortunately I did not see anything in the logs that indicate an issue.
- Hazelcast 5.1.1
- A single member or pod in the cluster
- Only members, no clients.
- AdoptOpenJDK 11
- Windows 10 | defect | new hazelcastinstance instantiation takes up to five minutes since hazelcast thanks for reporting your issue please share with us the following information to help us resolve your issue quickly and efficiently since we upgraded to hazelcast booting the application started taking much longer time on some machines seems that a new instance instantiation hazelcast newhazelcastinstance takes a couple of minutes in machines with a lot of network interfaces from a debugging session it looks like the area of com hazelcast internal server tcp localaddressregistry registerlocaladdresses takes a long time happens when i use the default configuration or other configurations expected behavior instantiation of a new instance member should take a couple of seconds as it did in the past to reproduce i think that booting a member in a computer with multiple network interfaces container docker podman installations etc additional context unfortunately i did not see anything in the logs that indicate an issue hazelcast a single member or pod in the cluster only members no clients adoptopenjdk windows | 1 |
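Since the report points at `LocalAddressRegistry#registerLocalAddresses`, which walks the host's local addresses, a quick host-side probe of how many interfaces the OS exposes (and how long one enumeration pass takes) can support the diagnosis. A sketch in Python for brevity — the slow path itself is inside the Java member, so this only characterizes the host:

```python
import socket
import time

def probe_interfaces():
    """Return (interface_count, elapsed_ms) for one enumeration pass."""
    start = time.perf_counter()
    try:
        interfaces = socket.if_nameindex()  # [(index, name), ...]
    except OSError:
        # Platforms/sandboxes without the interface simply report zero.
        interfaces = []
    elapsed_ms = (time.perf_counter() - start) * 1000
    return len(interfaces), elapsed_ms

count, ms = probe_interfaces()
print(f"{count} interfaces enumerated in {ms:.2f} ms")
```

A large interface count (VPN adapters, Docker/Podman bridges, Hyper-V switches) would be consistent with the registration step dominating startup.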
273,587 | 29,831,033,962 | IssuesEvent | 2023-06-18 09:21:12 | RG4421/ampere-centos-kernel | https://api.github.com/repos/RG4421/ampere-centos-kernel | closed | CVE-2019-18660 (Medium) detected in linuxv5.2 - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-18660 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/powerpc/include/asm/security_features.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/powerpc/include/asm/security_features.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 5.4.1 on powerpc allows Information Exposure because the Spectre-RSB mitigation is not in place for all applicable CPUs, aka CID-39e72bf96f58. This is related to arch/powerpc/kernel/entry_64.S and arch/powerpc/kernel/security.c.
<p>Publish Date: 2019-11-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-18660>CVE-2019-18660</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-18660">https://www.linuxkernelcves.com/cves/CVE-2019-18660</a></p>
<p>Release Date: 2020-01-28</p>
<p>Fix Resolution: v5.5-rc1,v4.14.157,v4.19.87,v4.4.204,v4.9.204,v5.3.14,v5.4.1</p>
</p>
</details>
<p></p>
| True | CVE-2019-18660 (Medium) detected in linuxv5.2 - autoclosed - ## CVE-2019-18660 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/powerpc/include/asm/security_features.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/powerpc/include/asm/security_features.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 5.4.1 on powerpc allows Information Exposure because the Spectre-RSB mitigation is not in place for all applicable CPUs, aka CID-39e72bf96f58. This is related to arch/powerpc/kernel/entry_64.S and arch/powerpc/kernel/security.c.
<p>Publish Date: 2019-11-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-18660>CVE-2019-18660</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-18660">https://www.linuxkernelcves.com/cves/CVE-2019-18660</a></p>
<p>Release Date: 2020-01-28</p>
<p>Fix Resolution: v5.5-rc1,v4.14.157,v4.19.87,v4.4.204,v4.9.204,v5.3.14,v5.4.1</p>
</p>
</details>
<p></p>
| non_defect | cve medium detected in autoclosed cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch amp centos kernel vulnerable source files arch powerpc include asm security features h arch powerpc include asm security features h vulnerability details the linux kernel before on powerpc allows information exposure because the spectre rsb mitigation is not in place for all applicable cpus aka cid this is related to arch powerpc kernel entry s and arch powerpc kernel security c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
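On Linux, the kernel's own view of Spectre-class mitigations like the one in the record above is exposed under sysfs; a hedged sketch for reading it (the directory exists only on kernels that provide the interface, so the function degrades to an empty result elsewhere):

```python
from pathlib import Path

def mitigation_status(vuln_dir="/sys/devices/system/cpu/vulnerabilities"):
    """Map vulnerability name -> kernel-reported status, {} if unsupported."""
    status = {}
    base = Path(vuln_dir)
    if base.is_dir():
        for entry in base.iterdir():
            try:
                status[entry.name] = entry.read_text().strip()
            except OSError:
                pass  # some entries may be unreadable; skip them
    return status

for name, state in sorted(mitigation_status().items()):
    print(f"{name}: {state}")
```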
115,285 | 9,789,569,941 | IssuesEvent | 2019-06-10 10:10:23 | DivanteLtd/storefront-ui | https://api.github.com/repos/DivanteLtd/storefront-ui | closed | [ Desktop ] Create SfTopBar | 1: Easy good first issue styling tests | [Designs on Figma](https://www.figma.com/file/hrwE3VsMBHgdJoS86rVr4W/Desktop-%26-Mobile-Vue-Storefront?node-id=303%3A1239)

Imo we may handle just the bar with this component; the languages menu (country flags) would be another component.
# Proposed API
- `default` slot for centralized content;
- `left` and `right` slots for custom content on respective sides;
No props, no methods, no events...
Please mind that it's **just a proposal**. We're open to discussion regarding component API. | 1.0 | [ Desktop ] Create SfTopBar - [Designs on Figma](https://www.figma.com/file/hrwE3VsMBHgdJoS86rVr4W/Desktop-%26-Mobile-Vue-Storefront?node-id=303%3A1239)

Imo we may handle just the bar with this component; the languages menu (country flags) would be another component.
# Proposed API
- `default` slot for centralized content;
- `left` and `right` slots for custom content on respective sides;
No props, no methods, no events...
Please mind that it's **just a proposal**. We're open to discussion regarding component API. | non_defect | create sftopbar imo we may handle just the bar with this component languages menu country flags would be another component proposed api default slot for centralized content left and right slots for custom content on respective sides no props no methods no events please mind that it s just a proposal we re open to discussion regarding component api | 0 |
61,774 | 17,023,776,667 | IssuesEvent | 2021-07-03 03:47:36 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | No Rendering of generator:source=wind | Component: mapnik Priority: minor Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 5.27pm, Wednesday, 22nd February 2012]**
It seems that currently only the old tag power_source=wind is rendered and the new one (generator:source=wind) is not rendered by mapnik. JOSM for example shows a warning message that power_source=wind is replaced with generator:source=wind and should therefore be used. Maybe changing the tags to the new one in the database is also a job for fixbot. | 1.0 | No Rendering of generator:source=wind - **[Submitted to the original trac issue database at 5.27pm, Wednesday, 22nd February 2012]**
It seems that currently only the old tag power_source=wind is rendered and the new one (generator:source=wind) is not rendered by mapnik. JOSM for example shows a warning message that power_source=wind is replaced with generator:source=wind and should therefore be used. Maybe changing the tags to the new one in the database is also a job for fixbot. | defect | no rendering of generator source wind it seems that currently only the old tag power source wind is rendered and the new one generator source wind is not rendered by mapnik josm for example shows a warning message that power source wind is replaced with generator source wind and should therfor be used maybe changing the tags to new one in database is also a job for fixbot | 1
61,285 | 14,621,061,523 | IssuesEvent | 2020-12-22 20:53:18 | SmartBear/idea-collaborator-plugin | https://api.github.com/repos/SmartBear/idea-collaborator-plugin | opened | CVE-2020-10968 (High) detected in jackson-databind-2.5.0.jar | security vulnerability | ## CVE-2020-10968 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.5.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: idea-collaborator-plugin/client/lib/jackson-databind-2.5.0.jar,idea-collaborator-plugin/collabplugin/collaborator/collaborator/lib/jackson-databind-2.5.0.jar,idea-collaborator-plugin/collaborator-0_7-BETA/collaborator/lib/jackson-databind-2.5.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.5.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/idea-collaborator-plugin/commit/3e67fb2d437ffeadf07751b7979f4e35dbc282a2">3e67fb2d437ffeadf07751b7979f4e35dbc282a2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy).
<p>Publish Date: 2020-03-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968>CVE-2020-10968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10968">https://nvd.nist.gov/vuln/detail/CVE-2020-10968</a></p>
<p>Release Date: 2020-03-26</p>
<p>Fix Resolution: jackson-databind-2.9.10.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.5.0","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.5.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jackson-databind-2.9.10.4"}],"vulnerabilityIdentifier":"CVE-2020-10968","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-10968 (High) detected in jackson-databind-2.5.0.jar - ## CVE-2020-10968 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.5.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: idea-collaborator-plugin/client/lib/jackson-databind-2.5.0.jar,idea-collaborator-plugin/collabplugin/collaborator/collaborator/lib/jackson-databind-2.5.0.jar,idea-collaborator-plugin/collaborator-0_7-BETA/collaborator/lib/jackson-databind-2.5.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.5.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/idea-collaborator-plugin/commit/3e67fb2d437ffeadf07751b7979f4e35dbc282a2">3e67fb2d437ffeadf07751b7979f4e35dbc282a2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy).
<p>Publish Date: 2020-03-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968>CVE-2020-10968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10968">https://nvd.nist.gov/vuln/detail/CVE-2020-10968</a></p>
<p>Release Date: 2020-03-26</p>
<p>Fix Resolution: jackson-databind-2.9.10.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.5.0","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.5.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jackson-databind-2.9.10.4"}],"vulnerabilityIdentifier":"CVE-2020-10968","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library idea collaborator plugin client lib jackson databind jar idea collaborator plugin collabplugin collaborator collaborator lib jackson databind jar idea collaborator plugin collaborator beta collaborator lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org aoju bus proxy provider remoting rmiprovider aka bus proxy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high 
integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org aoju bus proxy provider remoting rmiprovider aka bus proxy vulnerabilityurl | 0 |
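The fixes jackson-databind shipped for this family of CVEs extend an internal denylist of known gadget classes that is consulted before polymorphic deserialization instantiates a type. A simplified, hypothetical sketch of that check — not the library's real `SubTypeValidator`, and the denylist entries here are only illustrative:

```python
# Illustrative entries: the gadget class from CVE-2020-10968 plus two
# classes named in earlier jackson-databind advisories.
DENYLIST = {
    "org.aoju.bus.proxy.provider.remoting.RmiProvider",
    "org.apache.commons.collections.functors.InvokerTransformer",
    "com.sun.rowset.JdbcRowSetImpl",
}

def allow_subtype(class_name):
    """Reject denylisted class names before deserialization instantiates them."""
    return class_name not in DENYLIST

print(allow_subtype("java.util.ArrayList"))                               # True
print(allow_subtype("org.aoju.bus.proxy.provider.remoting.RmiProvider"))  # False
```

Upgrading to 2.9.10.4 as suggested above is still the real remediation; a denylist only blocks gadgets that are already known.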
301,516 | 26,053,913,379 | IssuesEvent | 2022-12-22 22:03:15 | opensearch-project/OpenSearch | https://api.github.com/repos/opensearch-project/OpenSearch | closed | [CI] o.o.aliases.IndexAliasesIT.testSameAlias failure | bug CI flaky-test | Caught on PR #2574 this test was not reproducible and looks like a one off CI blip. Documenting for posterity:
```
REPRODUCE WITH: ./gradlew ':server:internalClusterTest' --tests "org.opensearch.aliases.IndexAliasesIT.testSameAlias" -Dtests.seed=D759B90667E7AFC3 -Dtests.security.manager=true -Dtests.jvm.argline="-XX:TieredStopAtLevel=1 -XX:ReservedCodeCacheSize=64m" -Dtests.locale=sl -Dtests.timezone=Etc/GMT+12 -Druntime.java=17
```
```
2> java.lang.AssertionError: AcknowledgedResponse failed - not acked
Expected: <true>
but: was <false>
at __randomizedtesting.SeedInfo.seed([D759B90667E7AFC3:551A0CCCC00FF5E5]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked(OpenSearchAssertions.java:127)
at org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked(OpenSearchAssertions.java:115)
at org.opensearch.aliases.IndexAliasesIT.lambda$testSameAlias$52(IndexAliasesIT.java:803)
at org.opensearch.aliases.IndexAliasesIT.assertAliasesVersionIncreases(IndexAliasesIT.java:1489)
at org.opensearch.aliases.IndexAliasesIT.assertAliasesVersionIncreases(IndexAliasesIT.java:1480)
at org.opensearch.aliases.IndexAliasesIT.testSameAlias(IndexAliasesIT.java:801)
``` | 1.0 | [CI] o.o.aliases.IndexAliasesIT.testSameAlias failure - Caught on PR #2574 this test was not reproducible and looks like a one off CI blip. Documenting for posterity:
```
REPRODUCE WITH: ./gradlew ':server:internalClusterTest' --tests "org.opensearch.aliases.IndexAliasesIT.testSameAlias" -Dtests.seed=D759B90667E7AFC3 -Dtests.security.manager=true -Dtests.jvm.argline="-XX:TieredStopAtLevel=1 -XX:ReservedCodeCacheSize=64m" -Dtests.locale=sl -Dtests.timezone=Etc/GMT+12 -Druntime.java=17
```
```
2> java.lang.AssertionError: AcknowledgedResponse failed - not acked
Expected: <true>
but: was <false>
at __randomizedtesting.SeedInfo.seed([D759B90667E7AFC3:551A0CCCC00FF5E5]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked(OpenSearchAssertions.java:127)
at org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked(OpenSearchAssertions.java:115)
at org.opensearch.aliases.IndexAliasesIT.lambda$testSameAlias$52(IndexAliasesIT.java:803)
at org.opensearch.aliases.IndexAliasesIT.assertAliasesVersionIncreases(IndexAliasesIT.java:1489)
at org.opensearch.aliases.IndexAliasesIT.assertAliasesVersionIncreases(IndexAliasesIT.java:1480)
at org.opensearch.aliases.IndexAliasesIT.testSameAlias(IndexAliasesIT.java:801)
``` | non_defect | o o aliases indexaliasesit testsamealias failure caught on pr this test was not reproducible and looks like a one off ci blip documenting for posterity reproduce with gradlew server internalclustertest tests org opensearch aliases indexaliasesit testsamealias dtests seed dtests security manager true dtests jvm argline xx tieredstopatlevel xx reservedcodecachesize dtests locale sl dtests timezone etc gmt druntime java java lang assertionerror acknowledgedresponse failed not acked expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org opensearch test hamcrest opensearchassertions assertacked opensearchassertions java at org opensearch test hamcrest opensearchassertions assertacked opensearchassertions java at org opensearch aliases indexaliasesit lambda testsamealias indexaliasesit java at org opensearch aliases indexaliasesit assertaliasesversionincreases indexaliasesit java at org opensearch aliases indexaliasesit assertaliasesversionincreases indexaliasesit java at org opensearch aliases indexaliasesit testsamealias indexaliasesit java | 0 |
100,437 | 12,522,159,561 | IssuesEvent | 2020-06-03 18:40:03 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | List block: confusing icons | Needs Design Feedback | **Describe the bug**
In my opinion, the icons for the list type look very similar to each other. Every time I want to change the list type it takes me some time to get the right one. The numbering could be a little bit bigger, or it could include also 2 and 3. I'm not sure what the purpose is of the upper line? Cc @jasmussen
<img width="277" alt="Screenshot 2020-05-18 at 23 40 46" src="https://user-images.githubusercontent.com/4710635/82262342-0603a100-9961-11ea-9c17-2a976c3f4466.png">
| 1.0 | List block: confusing icons - **Describe the bug**
In my opinion, the icons for the list type look very similar to each other. Every time I want to change the list type it takes me some time to get the right one. The numbering could be a little bit bigger, or it could include also 2 and 3. I'm not sure what the purpose is of the upper line? Cc @jasmussen
<img width="277" alt="Screenshot 2020-05-18 at 23 40 46" src="https://user-images.githubusercontent.com/4710635/82262342-0603a100-9961-11ea-9c17-2a976c3f4466.png">
| non_defect | list block confusing icons describe the bug in my opinion the icons for the list type look very similar to each other every time i want to change the list type it takes me some time to get the right one the numbering could be a little bit bigger or it could include also and i m not sure what the purpose is of the upper line cc jasmussen img width alt screenshot at src | 0 |
64,679 | 26,841,583,461 | IssuesEvent | 2023-02-03 01:13:58 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | opened | RunWhen - evaluation summary | ops and shared services | **Describe the issue**
This is to summarize what we've found out about RunWhen product.
**Definition of done**
- [ ] learn about RunWhen and discuss use cases
- [ ] sum up the following:
- use cases and value-add
- multi-tenant model
- platform services team effort to onboard, implement and maintain
- platform services team effort to support this as a shared service
- additional information
| 1.0 | RunWhen - evaluation summary - **Describe the issue**
This is to summarize what we've found out about the RunWhen product.
**Definition of done**
- [ ] learn about RunWhen and discuss use cases
- [ ] sum up the following:
- use cases and value-add
- multi-tenant model
- platform services team effort to onboard, implement and maintain
- platform services team effort to support this as a shared service
- additional information
| non_defect | runwhen evaluation summary describe the issue this is to summarize what we ve found out about runwhen product definition of done learn about runwhen and discuss use cases sum up the following use cases and value add multi tenant model platform services team effort to onboard implement and maintain platform services team effort to support this as a shared service additional information | 0 |
9,618 | 2,615,163,786 | IssuesEvent | 2015-03-01 06:43:33 | chrsmith/reaver-wps | https://api.github.com/repos/chrsmith/reaver-wps | opened | 1 second per key suddenly timeouts | auto-migrated Priority-Triage Type-Defect | ```
0. What version of Reaver are you using?
version 1.4
1. What operating system are you using (Linux is the only supported OS)?
Backtrack 5 R2, but I did an apt-get update, apt-get upgrade, and apt-get
dist-upgrade.
2. Is your wireless card in monitor mode (yes/no)?
yes
3. What is the signal strength of the Access Point you are trying to crack?
-53
4. What is the manufacturer and model # of the device you are trying to
crack?
Linksys WRT54G
5. What is the entire command line string you are supplying to reaver?
This is what I used to provide the output below
reaver -i mon0 -f -c 1 -b 00:1A:EF:10:B9:DA -d 2 -N -x 60 -vv
This was the one I was using, so it's only showing the keys that reaver has
tried.
reaver -i mon0 -f -c 1 -b 00:1A:EF:10:B9:DA -d 2 -N -x 60 -v
6. Please describe what you think the issue is.
Router lockup. It started with 1 second per key, I thought it was doing fine, it
was able to reach 00635677 from start, then it just stopped and had timeouts.
7. Paste the output from Reaver below.
Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
[+] Switching mon0 to channel 1
[?] Restore previous session for 00:1A:EF:10:B9:DA? [n/Y] Y
[+] Restored previous session
[+] Waiting for beacon from 00:1A:EF:10:B9:DA
[+] Associated with 00:1A:EF:10:B9:DA (ESSID: DSL)
[+] Trying pin 00635677
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
I think that my router is locking me out from trying anymore keys, but I tried
spoofing the mac address to a different on, it still refuses to start. I
thought I'll be able to crack this is 3 hours or less with the 1 second per
key. Can anyone help?
```
Original issue reported on code.google.com by `x.tactic...@gmail.com` on 22 Aug 2012 at 10:24 | 1.0 | 1 second per key suddenly timeouts - ```
0. What version of Reaver are you using?
version 1.4
1. What operating system are you using (Linux is the only supported OS)?
Backtrack 5 R2, but I did an apt-get update, apt-get upgrade, and apt-get
dist-upgrade.
2. Is your wireless card in monitor mode (yes/no)?
yes
3. What is the signal strength of the Access Point you are trying to crack?
-53
4. What is the manufacturer and model # of the device you are trying to
crack?
Linksys WRT54G
5. What is the entire command line string you are supplying to reaver?
This is what I used to provide the output below
reaver -i mon0 -f -c 1 -b 00:1A:EF:10:B9:DA -d 2 -N -x 60 -vv
This was the one I was using, so it's only showing the keys that reaver has
tried.
reaver -i mon0 -f -c 1 -b 00:1A:EF:10:B9:DA -d 2 -N -x 60 -v
6. Please describe what you think the issue is.
Router lockup. It started with 1 second per key, I thought it was doing fine, it
was able to reach 00635677 from start, then it just stopped and had timeouts.
7. Paste the output from Reaver below.
Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
[+] Switching mon0 to channel 1
[?] Restore previous session for 00:1A:EF:10:B9:DA? [n/Y] Y
[+] Restored previous session
[+] Waiting for beacon from 00:1A:EF:10:B9:DA
[+] Associated with 00:1A:EF:10:B9:DA (ESSID: DSL)
[+] Trying pin 00635677
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
I think that my router is locking me out from trying anymore keys, but I tried
spoofing the mac address to a different on, it still refuses to start. I
thought I'll be able to crack this is 3 hours or less with the 1 second per
key. Can anyone help?
```
Original issue reported on code.google.com by `x.tactic...@gmail.com` on 22 Aug 2012 at 10:24 | defect | second per key suddenly timeouts what version of reaver are you using version what operating system are you using linux is the only supported os backtrack but i did an apt get update apt get upgrade and apt get dist upgrade is your wireless card in monitor mode yes no yes what is the signal strength of the access point you are trying to crack what is the manufacturer and model of the device you are trying to crack linksys what is the entire command line string you are supplying to reaver this is what i used to provide the output below reaver i f c b ef da d n x vv this was the one i was using so it s only showing the keys that reaver has tried reaver i f c b ef da d n x v please describe what you think the issue is router lockup it started with second per key i though it was doing fine it was able to reach from start then it just stopped and had timeouts paste the output from reaver below reaver wifi protected setup attack tool copyright c tactical network solutions craig heffner switching to channel restore previous session for ef da y restored previous session waiting for beacon from ef da associated with ef da essid dsl trying pin sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request warning receive timeout occurred sending eapol start request i think that my router is locking me out from trying anymore keys but i tried spoofing the mac address to a different on it still refuses to start i thought i ll be able to crack this is hours or less with the second per key can anyone help original issue reported on code google com by x tactic gmail com on aug at | 1 |
2,502 | 2,607,905,569 | IssuesEvent | 2015-02-26 00:15:28 | chrsmithdemos/zen-coding | https://api.github.com/repos/chrsmithdemos/zen-coding | closed | Replacing tabs with 4 spaces | auto-migrated Priority-Medium Type-Defect | ```
I hooked this wonderful plugin up to Aptana IDE.
As it happens, I am used to using 4 spaces for indentation.
The required spaces are set in the IDE settings,
but the plugin itself, as far as I can tell,
uses tab indentation by default.
So the question: how do I make it indent with spaces
after expanding properties like table+, dl+?
```
-----
Original issue reported on code.google.com by `alexei.k...@gmail.com` on 13 Sep 2010 at 4:13 | 1.0 | Replacing tabs with 4 spaces - ```
I hooked this wonderful plugin up to Aptana IDE.
As it happens, I am used to using 4 spaces for indentation.
The required spaces are set in the IDE settings,
but the plugin itself, as far as I can tell,
uses tab indentation by default.
So the question: how do I make it indent with spaces
after expanding properties like table+, dl+?
```
-----
Original issue reported on code.google.com by `alexei.k...@gmail.com` on 13 Sep 2010 at 4:13 | defect | replacing tabs with spaces i hooked this wonderful plugin up to aptana ide as it happens i am used to using spaces for indentation the required spaces are set in the ide settings but the plugin itself as far as i can tell uses tab indentation by default so the question how do i make it indent with spaces after expanding properties like table dl original issue reported on code google com by alexei k gmail com on sep at | 1 
33,598 | 7,177,590,353 | IssuesEvent | 2018-01-31 14:08:46 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | LazyLoading fails for TabView | defect | **I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
[Plunkr (with primeng@5.2.0-rc.1)](http://plnkr.co/edit/Ci4oZ61TNiagAaxTFyDr?p=preview)
**Current behavior**
TabView does not render templates in lazy loading (with `pTemplate="content"`).
**Minimal reproduction of the problem with instructions**
You can see on plunkr that I copy-pasted the code from TabView's docs. Here the code cycles through TabView's contentChildren: https://github.com/primefaces/primeng/blob/1e66a7c2acb0a77ec102ab9d44caa8f5fbff27fd/src/app/components/tabview/tabview.ts#L113
Though, they still are PrimeTemplates. https://github.com/primefaces/primeng/blob/1e66a7c2acb0a77ec102ab9d44caa8f5fbff27fd/src/app/components/tabview/tabview.ts#L96
And since there is no `<p-templateLoader>` anymore...
I haven't checked on other components, but this might be a shared issue.
* **PrimeNG version:** 5.2.0.rc-1
EDIT: FileUpload has the same issue. https://github.com/primefaces/primeng/blob/1e66a7c2acb0a77ec102ab9d44caa8f5fbff27fd/src/app/components/fileupload/fileupload.ts#L126 | 1.0 | LazyLoading fails for TabView - **I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
[Plunkr (with primeng@5.2.0-rc.1)](http://plnkr.co/edit/Ci4oZ61TNiagAaxTFyDr?p=preview)
**Current behavior**
TabView does not render templates in lazy loading (with `pTemplate="content"`).
**Minimal reproduction of the problem with instructions**
You can see on plunkr that I copy-pasted the code from TabView's docs. Here the code cycles through TabView's contentChildren: https://github.com/primefaces/primeng/blob/1e66a7c2acb0a77ec102ab9d44caa8f5fbff27fd/src/app/components/tabview/tabview.ts#L113
Though, they still are PrimeTemplates. https://github.com/primefaces/primeng/blob/1e66a7c2acb0a77ec102ab9d44caa8f5fbff27fd/src/app/components/tabview/tabview.ts#L96
And since there is no `<p-templateLoader>` anymore...
I haven't checked on other components, but this might be a shared issue.
* **PrimeNG version:** 5.2.0.rc-1
EDIT: FileUpload has the same issue. https://github.com/primefaces/primeng/blob/1e66a7c2acb0a77ec102ab9d44caa8f5fbff27fd/src/app/components/fileupload/fileupload.ts#L126 | defect | lazyloading fails for tabview i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports current behavior tabview does not render templates in lazy loading with ptemplate content minimal reproduction of the problem with instructions you can see on plunkr that i copy pasted the code from tabview s docs here the code cycles through tabview s contentchildren though they still are primetemplates and since there is no anymore i haven t checked on other components but this might be a shared issue primeng version rc edit fileupload has the same issue | 1 |
891 | 2,594,280,084 | IssuesEvent | 2015-02-20 01:25:40 | BALL-Project/ball | https://api.github.com/repos/BALL-Project/ball | closed | Animation recording broken in BALL 1.3 | C: VIEW P: major R: fixed T: defect | **Reported by nicste on 4 Aug 39519473 13:20 UTC**
Apparently the animation recording in BALL 1.3(beta2) is broken. When having recorded a movement, the subsequent playback of the record by pressing "Start" in the Animation menu results in a BALLView hang up. Most probably a (Q)thread issue. | 1.0 | Animation recording broken in BALL 1.3 - **Reported by nicste on 4 Aug 39519473 13:20 UTC**
Apparently the animation recording in BALL 1.3(beta2) is broken. When having recorded a movement, the subsequent playback of the record by pressing "Start" in the Animation menu results in a BALLView hang up. Most probably a (Q)thread issue. | defect | animation recording broken in ball reported by nicste on aug utc apparently the animation recording in ball is broken when having recorded a movement the subsequent playback of the record by pressing start in the animation menu results in a ballview hang up most probably a q thread issue | 1 |
242,577 | 20,254,111,453 | IssuesEvent | 2022-02-14 21:04:07 | OllisGit/OctoPrint-PrintJobHistory | https://api.github.com/repos/OllisGit/OctoPrint-PrintJobHistory | closed | No Print Job History since the 26th ("bad transparency mask") | status: waitingForTestFeedback | All of a sudden I lost my print job history. I deleted the database and recreated, still have nothing. Attached my log file. Anything else you need?
[octoprint.log](https://github.com/OllisGit/OctoPrint-PrintJobHistory/files/7614179/octoprint.log)
| 1.0 | No Print Job History since the 26th ("bad transparency mask") - All of a sudden I lost my print job history. I deleted the database and recreated, still have nothing. Attached my log file. Anything else you need?
[octoprint.log](https://github.com/OllisGit/OctoPrint-PrintJobHistory/files/7614179/octoprint.log)
| non_defect | no print job history since the bad transparency mask all of a sudden i lost my print job history i deleted the database and recreated still have nothing attached my log file anything else you need | 0 |
48,377 | 20,119,403,259 | IssuesEvent | 2022-02-07 23:37:00 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | Amplify: Error downloading #current-cloud-backend.zip from deployment bucket: | service/s3 service/amplify needs-triage | <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
Teraform: 1.0.1
AWS Provider: v3.65.0
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_amplify_backend_environment
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
resource "aws_amplify_app" "core_web_app" {
name = "core-web-app-${var.env}"
}
resource "aws_amplify_branch" "main" {
app_id = aws_amplify_app.core_web_app.id
branch_name = "main"
}
resource "aws_amplify_domain_association" "core_web_app" {
app_id = aws_amplify_app.core_web_app.id
domain_name = "app.${var.env}.example.com"
sub_domain {
branch_name = aws_amplify_branch.main.branch_name
prefix = ""
}
}
resource "aws_amplify_backend_environment" "core_web_app" {
app_id = aws_amplify_app.core_web_app.id
environment_name = var.env
deployment_artifacts = aws_s3_bucket.core_web_app.id
stack_name = "core-web-app-stack"
}
resource "aws_s3_bucket" "core_web_app" {
bucket = "core-web-app-deployment"
}
```
### Problem Description
After creating an Amplify app with the Terraform code above, I then tried to pull the app using the amplify cli and I get the following error.
```bash
$ amplify pull
? Select the authentication method you want to use: AWS profile
For more information on AWS Profiles, see:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
? Please choose the profile you want to use amplify-dev
? Which app are you working on? d8jlijw8d5yq7
Backend environment 'dev' found. Initializing...
Error downloading #current-cloud-backend.zip from deployment bucket: core-web-app-deployment, the error is: The specified key does not exist.
```
Perhaps I am missing something, but this seems like unintended behavior. Why can I not pull the terraform created amplify app? | 2.0 | Amplify: Error downloading #current-cloud-backend.zip from deployment bucket: - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
Teraform: 1.0.1
AWS Provider: v3.65.0
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_amplify_backend_environment
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
resource "aws_amplify_app" "core_web_app" {
name = "core-web-app-${var.env}"
}
resource "aws_amplify_branch" "main" {
app_id = aws_amplify_app.core_web_app.id
branch_name = "main"
}
resource "aws_amplify_domain_association" "core_web_app" {
app_id = aws_amplify_app.core_web_app.id
domain_name = "app.${var.env}.example.com"
sub_domain {
branch_name = aws_amplify_branch.main.branch_name
prefix = ""
}
}
resource "aws_amplify_backend_environment" "core_web_app" {
app_id = aws_amplify_app.core_web_app.id
environment_name = var.env
deployment_artifacts = aws_s3_bucket.core_web_app.id
stack_name = "core-web-app-stack"
}
resource "aws_s3_bucket" "core_web_app" {
bucket = "core-web-app-deployment"
}
```
### Problem Description
After creating an Amplify app with the Terraform code above, I then tried to pull the app using the amplify cli and I get the following error.
```bash
$ amplify pull
? Select the authentication method you want to use: AWS profile
For more information on AWS Profiles, see:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
? Please choose the profile you want to use amplify-dev
? Which app are you working on? d8jlijw8d5yq7
Backend environment 'dev' found. Initializing...
Error downloading #current-cloud-backend.zip from deployment bucket: core-web-app-deployment, the error is: The specified key does not exist.
```
Perhaps I am missing something, but this seems like unintended behavior. Why can I not pull the terraform created amplify app? | non_defect | amplify error downloading current cloud backend zip from deployment bucket please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version teraform aws provider affected resource s aws amplify backend environment terraform configuration files please include all terraform configurations required to reproduce the bug bug reports without a functional reproduction may be closed without investigation hcl resource aws amplify app core web app name core web app var env resource aws amplify branch main app id aws amplify app core web app id branch name main resource aws amplify domain association core web app app id aws amplify app core web app id domain name app var env example com sub domain branch name aws amplify branch main branch name prefix resource aws amplify backend environment core web app app id aws amplify app core web app id environment name var env deployment artifacts aws bucket core web app id stack name core web app stack resource aws bucket core web app bucket core web app deployment problem description after creating an amplify app with the terraform code above i then tried to pull the app using the amplify cli and i get the following error bash amplify 
pull select the authentication method you want to use aws profile for more information on aws profiles see please choose the profile you want to use amplify dev which app are you working on backend environment dev found initializing error downloading current cloud backend zip from deployment bucket core web app deployment the error is the specified key does not exist perhaps i am missing something but this seems like unintended behavior why can i not pull the terraform created amplify app | 0 |
23,026 | 3,754,911,075 | IssuesEvent | 2016-03-12 08:54:51 | openwrt/luci | https://api.github.com/repos/openwrt/luci | closed | add luci-uvc library from fon-ng (patch included) | C: LuCI Applications P: major T: defect | **Reported by reporter on 29 Aug 2009 11:19 UTC**
Here is a patch to add the luci-uvc library from fon-ng (http://trac.fonosfera.org/fon-ng/browser/trunk/luci/libs/uvc). No idea if this is the right way of doing it. The uvc library builds but on the router I get:
```
root@OpenWrt:~# lua test.lua
lua: error loading module 'uvc' from file '/usr/lib/lua/uvc.so':
File not found
stack traceback:
[C]: in function 'require'
test.lua:3: in main chunk
[C]: ?
root@OpenWrt:~# ls -ahl /usr/lib/lua/uvc.so
-rwxr-xr-x 1 root root 11.7K Aug 28 20:11 /usr/lib/lua/uvc.so
root@OpenWrt:~# cat /root/test.lua
#!/usr/bin/lua
local uvc = require("uvc")
local i = 0
if uvc then
while true do
local res
os.execute("sleep 1")
i = i + 1
if i % 2 == 1 then
res = uvc.grab()
else
res = uvc.dump("abc.jpg")
end
if res == false then
print("no cam attached")
elseif res == true then
print("got snapshot")
elseif type(res) == "string" then
print("got raw data")
else
print("Error, cam was removed")
end
end
end
root@OpenWrt:~#
```
| 1.0 | add luci-uvc library from fon-ng (patch included) - **Reported by reporter on 29 Aug 2009 11:19 UTC**
Here is a patch to add the luci-uvc library from fon-ng (http://trac.fonosfera.org/fon-ng/browser/trunk/luci/libs/uvc). No idea if this is the right way of doing it. The uvc library builds but on the router I get:
```
root@OpenWrt:~# lua test.lua
lua: error loading module 'uvc' from file '/usr/lib/lua/uvc.so':
File not found
stack traceback:
[?
[C](C]:): in function 'require'
test.lua:3: in main chunk
[C]: ?
root@OpenWrt:~# ls -ahl /usr/lib/lua/uvc.so
-rwxr-xr-x 1 root root 11.7K Aug 28 20:11 /usr/lib/lua/uvc.so
root@OpenWrt:~# cat /root/test.lua
#!/usr/bin/lua
local uvc = require("uvc")
local i = 0
if uvc then
while true do
local res
os.execute("sleep 1")
i = i + 1
if i % 2 == 1 then
res = uvc.grab()
else
res = uvc.dump("abc.jpg")
end
if res == false then
print("no cam attached")
elseif res == true then
print("got snapshot")
elseif type(res) == "string" then
print("got raw data")
else
print("Error, cam was removed")
end
end
end
root@OpenWrt:~#
```
| defect | add luci uvc library from fon ng patch included reported by reporter on aug utc here is a patch to add the luci uvc library from fon ng no idea if this is the right way of doing it the uvc library builds but on the router i get root openwrt lua test lua lua error loading module uvc from file usr lib lua uvc so file not found stack traceback c in function require test lua in main chunk root openwrt ls ahl usr lib lua uvc so rwxr xr x root root aug usr lib lua uvc so root openwrt cat root test lua usr bin lua local uvc require uvc local i if uvc then while true do local res os execute sleep i i if i then res uvc grab else res uvc dump abc jpg end if res false then print no cam attached elseif res true then print got snapshot elseif type res string then print got raw data else print error cam was removed end end end root openwrt | 1 |
47,563 | 10,120,563,222 | IssuesEvent | 2019-07-31 13:58:48 | atomist-blogs/org-visualizer | https://api.github.com/repos/atomist-blogs/org-visualizer | reopened | Code Inspection: Tslint on master | code-inspection env:gke-int-production:testing env:k8s-internal-production:testing | ### no-duplicate-imports
- [`lib/page/sunburstScript.ts:18`](https://github.com/atomist-blogs/org-visualizer/blob/d95a7f5015bf00a10a312f8cf1b9746e8e5a55b4/lib/page/sunburstScript.ts#L18): _(error)_ Multiple imports from 'd3' can be combined into one.
[atomist:code-inspection:master=@atomist/atomist-sdm] | 1.0 | Code Inspection: Tslint on master - ### no-duplicate-imports
- [`lib/page/sunburstScript.ts:18`](https://github.com/atomist-blogs/org-visualizer/blob/d95a7f5015bf00a10a312f8cf1b9746e8e5a55b4/lib/page/sunburstScript.ts#L18): _(error)_ Multiple imports from 'd3' can be combined into one.
[atomist:code-inspection:master=@atomist/atomist-sdm] | non_defect | code inspection tslint on master no duplicate imports error multiple imports from can be combined into one | 0 |
13,331 | 2,753,791,195 | IssuesEvent | 2015-04-25 01:50:49 | kuri65536/python-for-android | https://api.github.com/repos/kuri65536/python-for-android | closed | sys.getfilesystemencoding() returns None | auto-migrated Defect | ```
What steps will reproduce the problem?
1. Open Python shell on an android device.
2. Enter 'import sys; print sys.filesystemencoding()'
What is the expected output? What do you see instead?
A string "UTF-8" or "mbcs" expected, but sees None.
This causes unicode exception on many os module functions accessing the file
system with filename/path containing non ascii char.
What version of the product are you using? On what operating system?
Tested Android (2.2, 2.3, 3.1). Used PythonForAndroid_r5.apk
Please provide any additional information below.
See linked ASE issue:
http://code.google.com/p/android-scripting/issues/detail?id=575
```
Original issue reported on code.google.com by `anthony....@gmail.com` on 28 Nov 2011 at 10:26 | 1.0 | sys.getfilesystemencoding() returns None - ```
What steps will reproduce the problem?
1. Open Python shell on an android device.
2. Enter 'import sys; print sys.filesystemencoding()'
What is the expected output? What do you see instead?
A string "UTF-8" or "mbcs" expected, but sees None.
This causes unicode exception on many os module functions accessing the file
system with filename/path containing non ascii char.
What version of the product are you using? On what operating system?
Tested Android (2.2, 2.3, 3.1). Used PythonForAndroid_r5.apk
Please provide any additional information below.
See linked ASE issue:
http://code.google.com/p/android-scripting/issues/detail?id=575
```
Original issue reported on code.google.com by `anthony....@gmail.com` on 28 Nov 2011 at 10:26 | defect | sys getfilesystemencoding returns none what steps will reproduce the problem open python shell on an android device enter import sys print sys filesystemencoding what is the expected output what do you see instead a string utf or mbcs expected but sees none this causes unicode exception on many os module functions accessing the file system with filename path containing non ascii char what version of the product are you using on what operating system tested android used pythonforandroid apk please provide any additional information below see linked ase issue original issue reported on code google com by anthony gmail com on nov at | 1 |
39,207 | 15,888,774,528 | IssuesEvent | 2021-04-10 08:56:02 | microsoft/botframework-cli | https://api.github.com/repos/microsoft/botframework-cli | closed | Error: Cannot find module 'antlr4/index' | Bot Services customer-replied-to customer-reported | BF Version: latest
Node: 12.22
OS: Win 64
Tool: PS

When executing the bf luis:build command it fails with the following error:

During install it already gave this warning:

| 1.0 | Error: Cannot find module 'antlr4/index' - BF Version: latest
Node: 12.22
OS: Win 64
Tool: PS

When executing the bf luis:build command it fails with the following error:

During install it already gave this warning:

| non_defect | error cannot find module index bf version latest node os win tool ps when executing the bf luis build command it fails with the following error during install it already gave this warning | 0 |
100,624 | 30,743,371,334 | IssuesEvent | 2023-07-28 13:16:36 | VirtusLab/git-machete | https://api.github.com/repos/VirtusLab/git-machete | opened | Move requirements.* files to requirements/ directory | code quality minor build | So that they don't clutter up the top-level directory.
Multiple places need to be subsequently updated, including:
* tox.ini
* dependabot config
* debian options for `tar-ignore`
* ...
| 1.0 | Move requirements.* files to requirements/ directory - So that they don't clutter up the top-level directory.
Multiple places need to be subsequently updated, including:
* tox.ini
* dependabot config
* debian options for `tar-ignore`
* ...
| non_defect | move requirements files to requirements directory so that they don t clutter up the top level directory multiple places need to be subsequently updated including tox ini dependabot config debian options for tar ignore | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.