Unnamed: 0 int64 3 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 2 430 | labels stringlengths 4 347 | body stringlengths 5 237k | index stringclasses 7 values | text_combine stringlengths 96 237k | label stringclasses 2 values | text stringlengths 96 219k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7,003 | 9,292,889,746 | IssuesEvent | 2019-03-22 05:37:25 | pingcap/tidb | https://api.github.com/repos/pingcap/tidb | closed | TiDB doesn't handle `signed + unsigned` properly | type/compatibility | ## Bug Report
Please answer these questions before submitting your issue. Thanks!
1. What did you do?
If possible, provide a recipe for reproducing the error.
`select cast(1 as signed) + cast(9223372036854775807 as unsigned);`
2. What did you expect to see?
```
mysql> select cast(1 as signed) + cast(9223372036854775807 as unsigned);
+-----------------------------------------------------------+
| cast(1 as signed) + cast(9223372036854775807 as unsigned) |
+-----------------------------------------------------------+
| 9223372036854775808 |
+-----------------------------------------------------------+
```
3. What did you see instead?
```
mysql> select cast(1 as signed) + cast(9223372036854775807 as unsigned);
ERROR 1690 (22003): BIGINT UNSIGNED value is out of range in '(1 + 9223372036854775807)'
```
4. What version of TiDB are you using (`tidb-server -V` or run `select tidb_version();` on TiDB)?
```
mysql> select tidb_version();
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tidb_version() |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Release Version: v3.0.0-beta-231-g20463d6-dirty
Git Commit Hash: 20463d6da90fdf12c0d9d18c15dc33a78334882d
Git Branch: master
UTC Build Time: 2019-03-20 03:32:11
GoVersion: go version go1.12 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
```
| True | TiDB doesn't handle `signed + unsigned` properly - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
1. What did you do?
If possible, provide a recipe for reproducing the error.
`select cast(1 as signed) + cast(9223372036854775807 as unsigned);`
2. What did you expect to see?
```
mysql> select cast(1 as signed) + cast(9223372036854775807 as unsigned);
+-----------------------------------------------------------+
| cast(1 as signed) + cast(9223372036854775807 as unsigned) |
+-----------------------------------------------------------+
| 9223372036854775808 |
+-----------------------------------------------------------+
```
3. What did you see instead?
```
mysql> select cast(1 as signed) + cast(9223372036854775807 as unsigned);
ERROR 1690 (22003): BIGINT UNSIGNED value is out of range in '(1 + 9223372036854775807)'
```
4. What version of TiDB are you using (`tidb-server -V` or run `select tidb_version();` on TiDB)?
```
mysql> select tidb_version();
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| tidb_version() |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Release Version: v3.0.0-beta-231-g20463d6-dirty
Git Commit Hash: 20463d6da90fdf12c0d9d18c15dc33a78334882d
Git Branch: master
UTC Build Time: 2019-03-20 03:32:11
GoVersion: go version go1.12 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
```
| comp | tidb doesn t handle signed unsigned properly bug report please answer these questions before submitting your issue thanks what did you do if possible provide a recipe for reproducing the error select cast as signed cast as unsigned what did you expect to see mysql select cast as signed cast as unsigned cast as signed cast as unsigned what did you see instead mysql select cast as signed cast as unsigned error bigint unsigned value is out of range in what version of tidb are you using tidb server v or run select tidb version on tidb mysql select tidb version tidb version release version beta dirty git commit hash git branch master utc build time goversion go version linux race enabled false tikv min version alpha check table before drop false row in set sec | 1 |
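The compatibility bug in this row comes down to MySQL's result-type rule: when one operand of `+` is BIGINT UNSIGNED, the result type is BIGINT UNSIGNED, so the sum only overflows when it leaves the unsigned 64-bit range. A minimal sketch of that rule (in Python for illustration, not TiDB's actual Go implementation):

```python
UMAX = 2**64 - 1  # BIGINT UNSIGNED upper bound

def add_signed_unsigned(signed_val: int, unsigned_val: int) -> int:
    """Emulate MySQL's `signed + unsigned`: the result type is
    BIGINT UNSIGNED, so only the unsigned range is checked."""
    result = signed_val + unsigned_val
    if result < 0 or result > UMAX:
        raise OverflowError(
            f"BIGINT UNSIGNED value is out of range in "
            f"'({signed_val} + {unsigned_val})'"
        )
    return result

# The query from the report: the sum exceeds the *signed* max but
# fits comfortably in the unsigned range, so MySQL returns it.
print(add_signed_unsigned(1, 9223372036854775807))  # 9223372036854775808
```

The reported TiDB behavior corresponds to checking the sum against the signed range; checking against the unsigned range, as above, reproduces MySQL's answer.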
42,882 | 23,028,554,126 | IssuesEvent | 2022-07-22 11:43:10 | pingcap/tiflow | https://api.github.com/repos/pingcap/tiflow | closed | An enhancement of MySQL sink to turn off safe-mode automatically | component/sink subject/performance area/ticdc type/feature | ### Is your feature request related to a problem?
By default, TiCDC translates upstream `INSERT/UPDATE` into `REPLACE` (the safe-mode) which is slow and may cause deadlock in MySQL.
### Describe the feature you'd like
When TiCDC replicates a table, it can turn off safe-mode automatically once there are no duplicated events that have been replicated before.
TiCDC may output duplicated events when a table is scheduled to another node. The table has to restart replication from the global checkpoint (a checkpoint of the slowest tables), which it may have already replicated past.
So to turn off safe-mode, it has to know a ts that it has **not** replicated before. The trick is simple: it takes the latest tso from PD after it starts replicating, so any txn whose ts is greater than that tso must never have been replicated before. After the table sees such a txn, it can safely turn off safe-mode.
### Describe alternatives you've considered
_No response_
### Teachability, Documentation, Adoption, Migration Strategy
_No response_ | True | An enhancement of MySQL sink to turn off safe-mode automatically - ### Is your feature request related to a problem?
By default, TiCDC translates upstream `INSERT/UPDATE` into `REPLACE` (the safe-mode) which is slow and may cause deadlock in MySQL.
### Describe the feature you'd like
When TiCDC replicates a table, it can turn off safe-mode automatically once there are no duplicated events that have been replicated before.
TiCDC may output duplicated events when a table is scheduled to another node. The table has to restart replication from the global checkpoint (a checkpoint of the slowest tables), which it may have already replicated past.
So to turn off safe-mode, it has to know a ts that it has **not** replicated before. The trick is simple: it takes the latest tso from PD after it starts replicating, so any txn whose ts is greater than that tso must never have been replicated before. After the table sees such a txn, it can safely turn off safe-mode.
### Describe alternatives you've considered
_No response_
### Teachability, Documentation, Adoption, Migration Strategy
_No response_ | non_comp | an enhancement of mysql sink to turn off safe mode automatically is your feature request related to a problem by default ticdc translates upstream insert update into replace the safe mode which is slow and may cause deadlock in mysql describe the feature you d like when ticdc replicates a table it can turn off safe mode automatically when there is no duplicated events that has been replicated before ticdc may output duplicated events when a table is scheduled to another node the table have to replicate start from the global checkpoint a checkpoint of the slowest tables which it may has already replicated so turn off safe mode it has to know a ts that it has not replicated before the trick is simple it takes the latest ts from pd after it starts replicating so any txn that is greater than the tso it must have never replicated before after the table see such txn it can safely turn off safe mode describe alternatives you ve considered no response teachability documentation adoption migration strategy no response | 0 |
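The tso trick described in this row can be modeled as a tiny state machine. This is an illustrative sketch, not TiCDC's actual code; `start_tso` stands for the timestamp fetched from PD when the table's replication (re)starts:

```python
class SafeModeSwitch:
    """Use REPLACE (safe-mode) until a txn newer than the TSO taken
    at startup is seen; such a txn can never be a duplicate."""

    def __init__(self, start_tso: int):
        self.start_tso = start_tso
        self.safe_mode = True

    def statement_for(self, txn_ts: int) -> str:
        if self.safe_mode and txn_ts > self.start_tso:
            # Every txn from here on committed after we started,
            # so it cannot have been replicated before.
            self.safe_mode = False
        return "REPLACE" if self.safe_mode else "INSERT"

switch = SafeModeSwitch(start_tso=100)
print(switch.statement_for(90))   # possibly a re-delivered txn -> REPLACE
print(switch.statement_for(101))  # provably new -> safe-mode off, INSERT
```

Note the strict comparison: a txn with ts equal to the fetched tso could still be a duplicate, so only strictly newer txns disarm safe-mode.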
206,197 | 23,366,738,443 | IssuesEvent | 2022-08-10 15:58:41 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | closed | Move some tests from Lite to Full for acme_fat bucket | team:Wendigo East team:Core Security | The acme_fat bucket's mean duration is 00:07:22. To meet the goal of 5 minutes for lite buckets, review tests to bump a few more to full. | True | Move some tests from Lite to Full for acme_fat bucket - The acme_fat bucket's mean duration is 00:07:22. To meet the goal of 5 minutes for lite buckets, review tests to bump a few more to full. | non_comp | move some tests from lite to full for acme fat bucket the acme fat bucket s mean duration is to meet the goal of minutes for lite buckets review tests to bump a few more to full | 0 |
9,946 | 11,948,471,688 | IssuesEvent | 2020-04-03 11:56:02 | SaucyPigeon/Wild-Cultivation-Fan-Update | https://api.github.com/repos/SaucyPigeon/Wild-Cultivation-Fan-Update | closed | Vegetable Garden Project: duplicate xml node | bug mod compatibility | troopersmith1 11 hours ago
With both this and VGP loaded I'm getting a red error. Something about duplicate xml node for sowtags, load order doesn't matter. | True | Vegetable Garden Project: duplicate xml node - troopersmith1 11 hours ago
With both this and VGP loaded I'm getting a red error. Something about duplicate xml node for sowtags, load order doesn't matter. | comp | vegetable garden project duplicate xml node hours ago with both this and vgp loaded i m getting a red error something about duplicate xml node for sowtags load order doesn t matter | 1 |
4,842 | 7,459,663,172 | IssuesEvent | 2018-03-30 16:16:51 | dominique-mueller/angular-package-builder | https://api.github.com/repos/dominique-mueller/angular-package-builder | closed | Support multiple versions of Angular | BREAKING CHANGE discussion type: compatibility type: feature | - Is it even possible?
- Configuration flag in `.angular-package.json` file | True | Support multiple versions of Angular - - Is it even possible?
- Configuration flag in `.angular-package.json` file | comp | support multiple versions of angular is it even possible configuration flag in angular package json file | 1 |
3,028 | 2,607,969,976 | IssuesEvent | 2015-02-26 00:44:08 | chrsmithdemos/leveldb | https://api.github.com/repos/chrsmithdemos/leveldb | opened | Tunable allowed_seeks - feature request | auto-migrated Priority-Medium Type-Defect | ```
The 'allowed_seeks' value is assigned file_size / 16KB, which is only appropriate in some
specific circumstances.
Here is the assumptions in code:
"
// We arrange to automatically compact this file after
// a certain number of seeks. Let's assume:
// (1) One seek costs 10ms
// (2) Writing or reading 1MB costs 10ms (100MB/s)
// (3) A compaction of 1MB does 25MB of IO:
...
"
About assumption (1) :
A get operation which seeks several files does not mean the seek on the first file actually seeks the disk. It's very likely that the bloom filter told us the data we want is not in that file, and the filter data itself is likely in RAM while the table cache is big enough. On the other hand, even if the result of the bloom filter is a false positive, the file data is likely in the leveldb block cache or the system page cache. So a seek does not necessarily cost 10ms.
In some read-heavy workloads, some people simply disable compaction triggered by
reads, though this may degrade read performance.
My suggestion: how about a tunable allowed_seeks value that can be set
according to the specific circumstances, based on measurement?
Thanks in advance.
```
-----
Original issue reported on code.google.com by `alghak@gmail.com` on 14 Jan 2014 at 10:00 | 1.0 | Tunable allowed_seeks - feature request - ```
The 'allowed_seeks' value is assigned file_size / 16KB, which is only appropriate in some
specific circumstances.
Here is the assumptions in code:
"
// We arrange to automatically compact this file after
// a certain number of seeks. Let's assume:
// (1) One seek costs 10ms
// (2) Writing or reading 1MB costs 10ms (100MB/s)
// (3) A compaction of 1MB does 25MB of IO:
...
"
About assumption (1) :
A get operation which seeks several files does not mean the seek on the first file actually seeks the disk. It's very likely that the bloom filter told us the data we want is not in that file, and the filter data itself is likely in RAM while the table cache is big enough. On the other hand, even if the result of the bloom filter is a false positive, the file data is likely in the leveldb block cache or the system page cache. So a seek does not necessarily cost 10ms.
In some read-heavy workloads, some people simply disable compaction triggered by
reads, though this may degrade read performance.
My suggestion: how about a tunable allowed_seeks value that can be set
according to the specific circumstances, based on measurement?
Thanks in advance.
```
-----
Original issue reported on code.google.com by `alghak@gmail.com` on 14 Jan 2014 at 10:00 | non_comp | tunable allowed seeks feature request the allowed seeks is asigned to file size which is only fine to some specific circumstance here is the assumptions in code we arrange to automatically compact this file after a certain number of seeks let s assume one seek costs writing or reading costs s a compaction of does of io about assumption a get operation which seeks several files does not means the seek on first file actually seek disk it s very likely that bloom filter told us the data we want is not in that file and filter data itself is likely in ram while table cache is big enough on the other hand even if the result of bloom filter is false positive the file data is likely in leveldb block cache or system page cache so a seek does not necessarily cost in some read heavy workload some people simply disable compaction triggered by read while this may degrade read performance my suggestion is how about a tunable allow seeks value that can be set according to specific circumstance based on measurement thanks in advance original issue reported on code google com by alghak gmail com on jan at | 0 |
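For context, the heuristic this row criticises is roughly `allowed_seeks = file_size / 16KB`, clamped to a minimum, in leveldb's version-set code. The requested tunable could be sketched as below; the parameter name `seek_cost_bytes` is hypothetical, not a real leveldb option:

```python
def allowed_seeks(file_size: int, seek_cost_bytes: int = 16 * 1024,
                  min_seeks: int = 100) -> int:
    """How many seeks a file may absorb before being scheduled for
    compaction. `seek_cost_bytes` makes the 16KB constant tunable:
    raising it (or `min_seeks`) suits workloads where most 'seeks'
    are answered by the bloom filter or caches and never hit disk."""
    seeks = file_size // seek_cost_bytes
    return max(seeks, min_seeks)

print(allowed_seeks(2 * 1024 * 1024))             # default: 2MB / 16KB = 128
print(allowed_seeks(2 * 1024 * 1024, 64 * 1024))  # tuned: 32, clamped to 100
```

A measured value for `seek_cost_bytes` (seek latency x sequential throughput) would replace the hard-coded 10ms/100MB-per-second assumptions quoted in the issue.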
70,994 | 3,350,183,154 | IssuesEvent | 2015-11-17 13:38:45 | coreos/rkt | https://api.github.com/repos/coreos/rkt | closed | Add command to cat the pod-manifest | area/usability component/stage0 kind/enhancement low hanging fruit priority/P2 | It would be nice to have a `cat-manifest` command to get the pod-manifest of an active pod. Currently we can only get the pod-manifest by knowing where it is in the rkt root directory.
Something like `rkt cat-manifest UUID`
This would be nice in order to easily rerun a pod using the generated manifest instead of reproducing the command line entered to run the original pod, for example. | 1.0 | Add command to cat the pod-manifest - It would be nice to have a `cat-manifest` command to get the pod-manifest of an active pod. Currently we can only get the pod-manifest by knowing where it is in the rkt root directory.
Something like `rkt cat-manifest UUID`
This would be nice in order to easily rerun a pod using the generated manifest instead of reproducing the command line entered to run the original pod, for example. | non_comp | add command to cat the pod manifest it would be nice to have a cat manifest command to get the pod manifest of an active pod currently we can only get the pod manifest by knowing where it is in the rkt root directory something like rkt cat manifest uuid this would be nice in order to easily rerun a pod using the generated manifest instead of reproducing the command line entered to run the original pod for example | 0 |
1,495 | 4,019,160,629 | IssuesEvent | 2016-05-16 13:59:38 | Yannici/bedwars-reloaded | https://api.github.com/repos/Yannici/bedwars-reloaded | closed | Potions in shop not working | needs discussion Spigot compatibility | From review on spigotmc.org:
I using LAST spigot build for 1.9.2, but all potion in shop are uncraftable. Any help?
We should check if that problem still exists. I will ask him for the shop.yml | True | Potions in shop not working - From review on spigotmc.org:
I using LAST spigot build for 1.9.2, but all potion in shop are uncraftable. Any help?
We should check if that problem still exists. I will ask him for the shop.yml | comp | potions in shop not working from review on spigotmc org i using last spigot build for but all potion in shop are uncraftable any help we should check if that problem still exists i will ask him for the shop yml | 1 |
18,852 | 26,188,516,456 | IssuesEvent | 2023-01-03 05:51:17 | Corail31/tombstone_lite | https://api.github.com/repos/Corail31/tombstone_lite | closed | Corail tombstone and Dragon Survival Mod Crash after claiming inventory if there is any Claw tool. | compatibility 1.18 | ### Minecraft Version
1.18.2
### Forge Version
40.1.84
### Corail Tombstone Version
tombstone-7.5.8-1.18.2
### What happened?
When you die as a dragon with any tool compatible with Claw slot, and try to claim your grave the game crashes instantly.
a temporary solution would be: disable Auto-equipped in client-config for Corail tombstone
### Gametype
Singleplayer, Dedicated Server
### Happen by testing with only Corail Tombstone (required)
Yes
### Occurence
occurs all the time
### Other relevant mods
Dragon Survival Mod and GeckoLib Mod
### Log Link
https://github.com/DragonSurvivalTeam/DragonSurvival/files/10299202/crash-2022-12-24_15.44.19-server.txt
### Tested without any Forge "alternatives" (Magma, Mohist, and so on)
- [X] Yes
### Valid Launcher
- [X] Yes | True | Corail tombstone and Dragon Survival Mod Crash after claiming inventory if there is any Claw tool. - ### Minecraft Version
1.18.2
### Forge Version
40.1.84
### Corail Tombstone Version
tombstone-7.5.8-1.18.2
### What happened?
When you die as a dragon with any tool compatible with Claw slot, and try to claim your grave the game crashes instantly.
a temporary solution would be: disable Auto-equipped in client-config for Corail tombstone
### Gametype
Singleplayer, Dedicated Server
### Happen by testing with only Corail Tombstone (required)
Yes
### Occurence
occurs all the time
### Other relevant mods
Dragon Survival Mod and GeckoLib Mod
### Log Link
https://github.com/DragonSurvivalTeam/DragonSurvival/files/10299202/crash-2022-12-24_15.44.19-server.txt
### Tested without any Forge "alternatives" (Magma, Mohist, and so on)
- [X] Yes
### Valid Launcher
- [X] Yes | comp | corail tombstone and dragon survival mod crash after claiming inventory if there is any claw tool minecraft version forge version corail tombstone version tombstone what happened when you die as a dragon with any tool compatible with claw slot and try to claim your grave the game crashes instantly a temporary solution would be disable auto equipped in client config for corail tombstone gametype singleplayer dedicated server happen by testing with only corail tombstone required yes occurence occurs all the time other relevant mods dragon survival mod and geckolib mod log link tested without any forge alternatives magma mohist and so on yes valid launcher yes | 1 |
19,211 | 26,708,076,406 | IssuesEvent | 2023-01-27 20:11:55 | Keksuccino/FancyMenu | https://api.github.com/repos/Keksuccino/FancyMenu | closed | ReplayMod GUI cannot be opened by Button Action | mod incompatibility | **Describe the issue**
The replay viewer from Replay Mod cannot be opened by a button action.
**Game Log**
<https://gist.github.com/EinsteinMaker/2edb0cea08f12efe1d9b08eafaeb44c2>
**Screenshots**

**Basic Informations (please complete the following information):**
- OS: Ubuntu 22.04.1
- FancyMenu Version: 2.13.2
- Forge/Fabric Version: 0.14.11
- Minecraft Version: 1.19.3
- Active Mods: <https://gist.github.com/EinsteinMaker/fa0af7141f1a98da659785ab46f5b129> (Sorry but I didn't want to type out 151 mods so I copied them from the log)
| True | ReplayMod GUI cannot be opened by Button Action - **Describe the issue**
The replay viewer from Replay Mod cannot be opened by a button action.
**Game Log**
<https://gist.github.com/EinsteinMaker/2edb0cea08f12efe1d9b08eafaeb44c2>
**Screenshots**

**Basic Informations (please complete the following information):**
- OS: Ubuntu 22.04.1
- FancyMenu Version: 2.13.2
- Forge/Fabric Version: 0.14.11
- Minecraft Version: 1.19.3
- Active Mods: <https://gist.github.com/EinsteinMaker/fa0af7141f1a98da659785ab46f5b129> (Sorry but I didn't want to type out 151 mods so I copied them from the log)
| comp | replaymod gui cannot be opened by button action describe the issue the replay viewer from replay mod cannot be opened by a button action game log screenshots basic informations please complete the following information os ubuntu fancymenu version forge fabric version minecraft version active mods sorry but i didn t want to type out mods so i copied them from the log | 1 |
3,634 | 6,522,861,421 | IssuesEvent | 2017-08-29 05:46:14 | progwml6/Natura | https://api.github.com/repos/progwml6/Natura | closed | [1.10.2-4.0.0.93] Nether trees and bushes don't generate in NetherEx biomes | 1.10.2 bug mod compatibility Needs more Information | Forge-12.18.2.2151
NetherEx-1.1.0
NetherEx adds new biomes to the nether that none of Natura's plants generate in. | True | [1.10.2-4.0.0.93] Nether trees and bushes don't generate in NetherEx biomes - Forge-12.18.2.2151
NetherEx-1.1.0
NetherEx adds new biomes to the nether that none of Natura's plants generate in. | comp | nether trees and bushes don t generate in netherex biomes forge netherex netherex adds new biomes to the nether that none of natura s plants generate in | 1 |
31,208 | 2,732,850,452 | IssuesEvent | 2015-04-17 09:44:36 | tiku01/oryx-editor | https://api.github.com/repos/tiku01/oryx-editor | closed | XPath Browser for Property Value | auto-migrated Priority-Medium Type-Feature | ```
What is the feature desired?
A GUI plugin that allows a user to build an XPath expression and save this
expression into a property value for a shape.
Why is this feature important?
We are hand writing XPath functions which can become very complex and for
business users creating processes this is very complicated.
Any applications that provide this or a comparable feature?
There are XPath function builders in Oxygen XML editor, and some Eclipse
plugins.
Could anyone point me in the direction of a developer who could write this Oryx
plugin for me?
I will pay, and contribute the code back to the community ;)
Thanks,
Darren
```
Original issue reported on code.google.com by `entro...@gmail.com` on 3 Nov 2011 at 5:10 | 1.0 | XPath Browser for Property Value - ```
What is the feature desired?
A GUI plugin that allows a user to build an XPath expression and save this
expression into a property value for a shape.
Why is this feature important?
We are hand writing XPath functions which can become very complex and for
business users creating processes this is very complicated.
Any applications that provide this or a comparable feature?
There are XPath function builders in Oxygen XML editor, and some Eclipse
plugins.
Could anyone point me in the direction of a developer who could write this Oryx
plugin for me?
I will pay, and contribute the code back to the community ;)
Thanks,
Darren
```
Original issue reported on code.google.com by `entro...@gmail.com` on 3 Nov 2011 at 5:10 | non_comp | xpath browser for property value what is the feature desired a gui plugin that allows a user to build an xpath expression and save this expression into a property value for a shape why is this feature important we are hand writing xpath functions which can become very complex and for business users creating processes this is very complicated any applications that provide this or a comparable feature there are xpath function builders in oxygen xml editor and some eclipse plugins could anyone point me in the direction of a developer who could write this oryx plugin for me i will pay and contribute the code back to the community thanks darren original issue reported on code google com by entro gmail com on nov at | 0 |
538,535 | 15,771,347,300 | IssuesEvent | 2021-03-31 20:26:55 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | opened | Add support for GPU parameters at StepChain spec level | Further Discussion GPU High Priority New Feature | **Impact of the new feature**
ReqMgr2
**Is your feature request related to a problem? Please describe.**
It's a new project to support GPU processing in central production workflows, where WM system will be the bridge between Offline/CMSSW and the grid resources made available to CMS (through the glideinWMS layer).
**Describe the solution you'd like**
This GH issue depends on #10388, which contains all the specific details to be implemented in this ticket as well.
Then, the solution expected from this issue is to support the same `RequiresGPU` and `GPUParams` parameters at the StepChain spec. Those 2 parameters should also be supported at the Step dictionary level. In case the Step dictionary is missing these 2 parameters, they should be inherited from the top level specification (just as it's done for Multicore).
Note that the same validation should also apply to the Step level parameters.
**Describe alternatives you've considered**
Not to validate the **optional** parameters.
**Additional context**
Major discussion happened here: https://github.com/cms-sw/cmssw/pull/33057
and description of these parameters: https://github.com/cms-sw/cmssw/pull/33057#issuecomment-810914270
| 1.0 | Add support for GPU parameters at StepChain spec level - **Impact of the new feature**
ReqMgr2
**Is your feature request related to a problem? Please describe.**
It's a new project to support GPU processing in central production workflows, where WM system will be the bridge between Offline/CMSSW and the grid resources made available to CMS (through the glideinWMS layer).
**Describe the solution you'd like**
This GH issue depends on #10388, which contains all the specific details to be implemented in this ticket as well.
Then, the solution expected from this issue is to support the same `RequiresGPU` and `GPUParams` parameters at the StepChain spec. Those 2 parameters should also be supported at the Step dictionary level. In case the Step dictionary is missing these 2 parameters, they should be inherited from the top level specification (just as it's done for Multicore).
Note that the same validation should also apply to the Step level parameters.
**Describe alternatives you've considered**
Not to validate the **optional** parameters.
**Additional context**
Major discussion happened here: https://github.com/cms-sw/cmssw/pull/33057
and description of these parameters: https://github.com/cms-sw/cmssw/pull/33057#issuecomment-810914270
| non_comp | add support for gpu parameters at stepchain spec level impact of the new feature is your feature request related to a problem please describe it s a new project to support gpu processing in central production workflows where wm system will be the bridge between offline cmssw and the grid resources made available to cms through the glideinwms layer describe the solution you d like this gh issue depends on which contains all the specific details to be implemented in this ticket as well then the solution expected from this issue is to support the same requiresgpu and gpuparams parameters at the stepchain spec those parameters should also be supported at the step dictionary level in case the step dictionary is missing these parameters they should be inherited from the top level specification just as it s done for multicore note that the same validation should also apply to the step level parameters describe alternatives you ve considered not to validate the optional parameters additional context major discussion happened here and description of these parameters | 0 |
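The inheritance rule requested in this row (Step-level `RequiresGPU`/`GPUParams` falling back to the top-level values, as is done for Multicore) can be sketched as follows. This is an illustrative model of the intended behaviour, not WMCore's actual StdSpec code, and the example values are assumptions:

```python
GPU_KEYS = ("RequiresGPU", "GPUParams")

def resolve_gpu_settings(top_level: dict, step: dict) -> dict:
    """Effective GPU settings for one Step dictionary: a key set on
    the Step wins; a missing key is inherited from the top-level
    (request-wide) specification."""
    return {key: step.get(key, top_level.get(key)) for key in GPU_KEYS}

request = {"RequiresGPU": "optional", "GPUParams": '{"GPUMemoryMB": 8000}'}
step1 = {}                            # inherits both values
step2 = {"RequiresGPU": "forbidden"}  # overrides one, inherits the other

print(resolve_gpu_settings(request, step1))
print(resolve_gpu_settings(request, step2))
```

Validation would then run over each resolved dictionary, so a Step either passes the same checks as the top-level parameters or fails with a Step-specific error.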
8,965 | 10,987,654,197 | IssuesEvent | 2019-12-02 09:39:38 | ably/ably-js | https://api.github.com/repos/ably/ably-js | closed | Investigate possible node 12 compatibility issue | compatibility investigate | node 12 build seems to be failing at node-gyp rebuild: https://travis-ci.org/ably/ably-js/jobs/549912367 | True | Investigate possible node 12 compatibility issue - node 12 build seems to be failing at node-gyp rebuild: https://travis-ci.org/ably/ably-js/jobs/549912367 | comp | investigate possible node compatibility issue node build seems to be failing at node gyp rebuild | 1 |
2,549 | 5,279,612,855 | IssuesEvent | 2017-02-07 11:49:46 | IRIS-Solutions-Team/IRIS-Toolbox | https://api.github.com/repos/IRIS-Solutions-Team/IRIS-Toolbox | closed | FAVAR: different std's of factors | bkw compatibility issue bug | FAVAR code:
Old IRIS:
When running the Kalman filter to fill in missing observations I get a warning, but it is not the issue.
`[fw,fdb,fcc,ff,fu,fe] = filter(w,dv,smplfull,'cross',false);`
_Warning: IRIS Toolbox Warning @ obsolete.
*** Using tseries objects as input/output data to/from VAR objects is obsolete, and will be removed from a future version of IRIS. Use databases (structs) instead._
The problem is that standard deviations of factors (ff.std) in the old version of IRIS are smaller (for first two factors).
[selection_old_iris.pdf](https://github.com/IRIS-Solutions-Team/Tutorial-Simple-SPBC-Model/files/756262/selection_old_iris.pdf)
[selection_new_iris.pdf](https://github.com/IRIS-Solutions-Team/Tutorial-Simple-SPBC-Model/files/756263/selection_new_iris.pdf)
I do not know, which one is correct.
Thanks.
| True | FAVAR: different std's of factors - FAVAR code:
Old IRIS:
When running the Kalman filter to fill in missing observations I get a warning, but it is not the issue.
`[fw,fdb,fcc,ff,fu,fe] = filter(w,dv,smplfull,'cross',false);`
_Warning: IRIS Toolbox Warning @ obsolete.
*** Using tseries objects as input/output data to/from VAR objects is obsolete, and will be removed from a future version of IRIS. Use databases (structs) instead._
The problem is that standard deviations of factors (ff.std) in the old version of IRIS are smaller (for first two factors).
[selection_old_iris.pdf](https://github.com/IRIS-Solutions-Team/Tutorial-Simple-SPBC-Model/files/756262/selection_old_iris.pdf)
[selection_new_iris.pdf](https://github.com/IRIS-Solutions-Team/Tutorial-Simple-SPBC-Model/files/756263/selection_new_iris.pdf)
I do not know, which one is correct.
Thanks.
| comp | favar different std s of factors favar code old iris when running kalman filter to fill missing observation i get warning but it is not the issue filter w dv smplfull cross false warning iris toolbox warning obsolete using tseries objects as input output data to from var objectsis obsolete and will be removed from a future version of iris use databases structs instead the problem is that standard deviations of factors ff std in the old version of iris are smaller for first two factors i do not know which one is correct thanks | 1 |
5,717 | 8,179,563,958 | IssuesEvent | 2018-08-28 16:44:19 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | opened | Invalid block scrim hides block options on mobile | Blocks Mobile App Compatibility [Type] Bug | **Describe the bug**
If a block becomes invalid and you are viewing on mobile then the grey scrim hides the block options.

**To Reproduce**
Steps to reproduce the behavior:
1. View editor using a mobile-sized display (i.e. Chrome dev tools > Device)
2. Make a block invalid (remove the trailing `</p>` of a paragraph)
3. Scroll down to the bottom of the block
4. Note that the block dropdown menu, and other buttons, are hidden behind the grey scrim
**Expected behavior**
The dropdown menu button to be available as it is on desktop. I'm not sure about the other buttons
| True | Invalid block scrim hides block options on mobile - **Describe the bug**
If a block becomes invalid and you are viewing on mobile then the grey scrim hides the block options.

**To Reproduce**
Steps to reproduce the behavior:
1. View editor using a mobile-sized display (i.e. Chrome dev tools > Device)
2. Make a block invalid (remove the trailing `</p>` of a paragraph)
3. Scroll down to the bottom of the block
4. Note that the block dropdown menu, and other buttons, are hidden behind the grey scrim
**Expected behavior**
The dropdown menu button to be available as it is on desktop. I'm not sure about the other buttons
| comp | invalid block scrim hides block options on mobile describe the bug if a block becomes invalid and you are viewing on mobile then the grey scrim hides the block options to reproduce steps to reproduce the behavior view editor using a mobile sized display i e chrome dev tools device make a block invalid remove the trailing of a paragraph scroll down to the bottom of the block note that the block dropdown menu and other buttons are hidden behind the grey scrim expected behavior the dropdown menu button to be available as it is on desktop i m not sure about the other buttons | 1 |
13,215 | 15,577,466,934 | IssuesEvent | 2021-03-17 13:34:19 | KazDragon/terminalpp | https://api.github.com/repos/KazDragon/terminalpp | closed | Draw errors when changing terminal size | Bug Compatibility | On certain terminals, changing the size of the terminal does not leave the cursor in the same position, meaning that the next element drawn may be in the wrong place. | True | Draw errors when changing terminal size - On certain terminals, changing the size of the terminal does not leave the cursor in the same position, meaning that the next element drawn may be in the wrong place. | comp | draw errors when changing terminal size on certain terminals changing the size of the terminal does not leave the cursor in the same position meaning that the next element drawn may be in the wrong place | 1 |
6,375 | 8,685,891,647 | IssuesEvent | 2018-12-03 09:17:42 | jmock-developers/jmock-library | https://api.github.com/repos/jmock-developers/jmock-library | closed | JunitRuleMockery assertIsSatisfied not behaving as expected | Imposterising compatibility | I have been using Junit4 alongside JMock for a very long time now, only recently moved to Junit5 and trying to use my favorite library for mocking objects, I found this release. BUT something interesting happened: when mocked object expectations are not satisfied (not called at all) the tests still pass.
Here's some sample code:
Note that I'm using Project Lombok for automatic generation of setters, getters, constructors, etc., so you may need to modify the code in order for it to work
```Java
import lombok.RequiredArgsConstructor;
/**
* Simple counter containing a field which if it hasn't reached it's desired minimum counts to pass
* will throw an exception.
*
* @author Vasil Mitov (vasil.mitov@clouway.com).
*/
@RequiredArgsConstructor
class MyCounter {
public final int minCountsToPass;
private int counter = 0;
int count() {
counter++;
if (counter < minCountsToPass) {
throw new IllegalStateException();
}
return counter;
}
}
```
```Java
import com.google.common.base.Throwables;
import lombok.RequiredArgsConstructor;
import java.util.concurrent.Callable;
/**
* Retry will be a class used to retry method invocations.
* Note that callable cannot be used with void methods.
*
* @author Vasil Mitov (vasil.mitov@clouway.com).
*/
@RequiredArgsConstructor
public class Retry {
private final int MAX_RETRY_ATTEMPTS;
private final Callable callable;
private final SystemWrapper system;
private boolean success = false;
private String message;
void execute() {
for (int attempt = 1; attempt <= MAX_RETRY_ATTEMPTS; attempt++) {
try {
callable.call();
success = true;
break;
} catch (Exception e) {
message = Throwables.getStackTraceAsString(e);
}
}
if (!success) {
system.notify(String.format("Failed with %d retries.\n Reason: %s", MAX_RETRY_ATTEMPTS, message));
system.exit(-1);
}
}
}
```
```Java
import org.jmock.Expectations;
import org.jmock.integration.junit4.JUnitRuleMockery;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
/**
* @author Vasil Mitov (vasil.mitov@clouway.com).
*/
class RetryTest {
@RegisterExtension
public final JUnitRuleMockery context = new JUnitRuleMockery();
private final SystemWrapper system = context.mock(SystemWrapper.class);
@Test
void retryPasses() {
MyCounter myCounter = new MyCounter(1);
Retry integerRetry = new Retry(2, myCounter::count, system);
integerRetry.execute();
}
@Test
void retryFails() {
MyCounter myCounter = new MyCounter(4);
Retry integerRetry = new Retry(3, myCounter::count, system);
context.checking(new Expectations() {{
oneOf(system).notify(with(any(String.class)));
oneOf(system).exit(-1);
}});
integerRetry.execute();
}
@Test
void testShouldFail() {
MyCounter myCounter = new MyCounter(2);
Retry integerRetry = new Retry(3, myCounter::count, system);
context.checking(new Expectations() {{
oneOf(system).notify(with(any(String.class)));
oneOf(system).exit(-1);
}});
integerRetry.execute();
}
}
```
Note that the last test should fail, as the retry mechanism would have called the counter 3 times, so the minimum has been met. But the test passes. I have made some attempts at coding a custom extension which would simulate the call, which looks something like this:
Note this is a work in progress just for a suggestion
```Java
@RunWith(JUnit4.class)
public class Jmock4ContextImpl implements Jmock4Context {
private JUnitRuleMockery jUnitRuleMockery = new JUnitRuleMockery();
@Override
public void checking(AbstractExpectations expectations, Callable callable) throws Exception {
jUnitRuleMockery.checking(expectations);
callable.call();
jUnitRuleMockery.assertIsSatisfied();
}
@Override
public void checking(AbstractExpectations expectations, Method method) throws Exception {
jUnitRuleMockery.checking(expectations);
method.invoke(null);
jUnitRuleMockery.assertIsSatisfied();
}
@Override
public void checking(AbstractExpectations expectations, Method method, Object... params) throws Exception {
jUnitRuleMockery.checking(expectations);
method.invoke(params);
jUnitRuleMockery.assertIsSatisfied();
}
@Override
public <T> T mock(Class<T> typeToMock) {
return jUnitRuleMockery.mock(typeToMock);
}
}
```
```Java
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ParameterContext;
import org.junit.jupiter.api.extension.ParameterResolutionException;
import org.junit.jupiter.api.extension.ParameterResolver;
/**
* @author Vasil Mitov (vasil.mitov@clouway.com).
*/
public class Jmock4Extention implements ParameterResolver {
@Override
public boolean supportsParameter(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
return parameterContext.getParameter().getType() == Jmock4Context.class;
}
@Override
public Object resolveParameter(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
return new Jmock4ContextImpl();
}
}
```
Would love some feedback on the matter. It's possible i'm doing something wrong but i am stuck on this for a very long time. | True | JunitRuleMockery assertIsSatisfied not behaving as expected - I have been using Junit4 alongside JMock for a very long time now, only recently moved to Junit5 and trying to use my favorite library for mocking objects, I found this release. BUT something interesting happened: when mocked object expectations are not satisfied (not called at all) the tests still pass.
Here's some sample code:
Note that I'm using Project Lombok for automatic generation of setters, getters, constructors, etc., so you may need to modify the code in order for it to work
```Java
import lombok.RequiredArgsConstructor;
/**
* Simple counter containing a field which if it hasn't reached it's desired minimum counts to pass
* will throw an exception.
*
* @author Vasil Mitov (vasil.mitov@clouway.com).
*/
@RequiredArgsConstructor
class MyCounter {
public final int minCountsToPass;
private int counter = 0;
int count() {
counter++;
if (counter < minCountsToPass) {
throw new IllegalStateException();
}
return counter;
}
}
```
```Java
import com.google.common.base.Throwables;
import lombok.RequiredArgsConstructor;
import java.util.concurrent.Callable;
/**
* Retry will be a class used to retry method invocations.
* Note that callable cannot be used with void methods.
*
* @author Vasil Mitov (vasil.mitov@clouway.com).
*/
@RequiredArgsConstructor
public class Retry {
private final int MAX_RETRY_ATTEMPTS;
private final Callable callable;
private final SystemWrapper system;
private boolean success = false;
private String message;
void execute() {
for (int attempt = 1; attempt <= MAX_RETRY_ATTEMPTS; attempt++) {
try {
callable.call();
success = true;
break;
} catch (Exception e) {
message = Throwables.getStackTraceAsString(e);
}
}
if (!success) {
system.notify(String.format("Failed with %d retries.\n Reason: %s", MAX_RETRY_ATTEMPTS, message));
system.exit(-1);
}
}
}
```
```Java
import org.jmock.Expectations;
import org.jmock.integration.junit4.JUnitRuleMockery;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
/**
* @author Vasil Mitov (vasil.mitov@clouway.com).
*/
class RetryTest {
@RegisterExtension
public final JUnitRuleMockery context = new JUnitRuleMockery();
private final SystemWrapper system = context.mock(SystemWrapper.class);
@Test
void retryPasses() {
MyCounter myCounter = new MyCounter(1);
Retry integerRetry = new Retry(2, myCounter::count, system);
integerRetry.execute();
}
@Test
void retryFails() {
MyCounter myCounter = new MyCounter(4);
Retry integerRetry = new Retry(3, myCounter::count, system);
context.checking(new Expectations() {{
oneOf(system).notify(with(any(String.class)));
oneOf(system).exit(-1);
}});
integerRetry.execute();
}
@Test
void testShouldFail() {
MyCounter myCounter = new MyCounter(2);
Retry integerRetry = new Retry(3, myCounter::count, system);
context.checking(new Expectations() {{
oneOf(system).notify(with(any(String.class)));
oneOf(system).exit(-1);
}});
integerRetry.execute();
}
}
```
Note that the last test should fail, as the retry mechanism would have called the counter 3 times, so the minimum has been met. But the test passes. I have made some attempts at coding a custom extension which would simulate the call, which looks something like this:
Note this is a work in progress just for a suggestion
```Java
@RunWith(JUnit4.class)
public class Jmock4ContextImpl implements Jmock4Context {
private JUnitRuleMockery jUnitRuleMockery = new JUnitRuleMockery();
@Override
public void checking(AbstractExpectations expectations, Callable callable) throws Exception {
jUnitRuleMockery.checking(expectations);
callable.call();
jUnitRuleMockery.assertIsSatisfied();
}
@Override
public void checking(AbstractExpectations expectations, Method method) throws Exception {
jUnitRuleMockery.checking(expectations);
method.invoke(null);
jUnitRuleMockery.assertIsSatisfied();
}
@Override
public void checking(AbstractExpectations expectations, Method method, Object... params) throws Exception {
jUnitRuleMockery.checking(expectations);
method.invoke(params);
jUnitRuleMockery.assertIsSatisfied();
}
@Override
public <T> T mock(Class<T> typeToMock) {
return jUnitRuleMockery.mock(typeToMock);
}
}
```
```Java
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ParameterContext;
import org.junit.jupiter.api.extension.ParameterResolutionException;
import org.junit.jupiter.api.extension.ParameterResolver;
/**
* @author Vasil Mitov (vasil.mitov@clouway.com).
*/
public class Jmock4Extention implements ParameterResolver {
@Override
public boolean supportsParameter(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
return parameterContext.getParameter().getType() == Jmock4Context.class;
}
@Override
public Object resolveParameter(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
return new Jmock4ContextImpl();
}
}
```
Would love some feedback on the matter. It's possible i'm doing something wrong but i am stuck on this for a very long time. | comp | junitrulemockery assertissatisfied not behaving as expected i have been using alongside jmock for a very long time now only recently moved to and trying to use my favorite library for mocking objects i found this release but somehting interesting happened when mocked object expectations are not satisfied not called at all the tests still pass here s some sample code note that i m using project lombock for automatic generation of setters getters constructors etc so you may need to modify the code in order for it to work java import lombok requiredargsconstructor simple counter containing a field which if it hasn t reached it s desired minimum counts to pass will throw an exception author vasil mitov vasil mitov clouway com requiredargsconstructor class mycounter public final int mincountstopass private int counter int count counter if counter mincountstopass throw new illegalstateexception return counter java import com google common base throwables import lombok requiredargsconstructor import java util concurrent callable retry will be a class used to retry method invocations note that callable cannot be used with void methods author vasil mitov vasil mitov clouway com requiredargsconstructor public class retry private final int max retry attempts private final callable callable private final systemwrapper system private boolean success false private string message void execute for int attempt attempt max retry attempts attempt try callable call success true break catch exception e message throwables getstacktraceasstring e if success system notify string format failed with d retries n reason s max retry attempts message system exit java import org jmock expectations import org jmock integration junitrulemockery import org junit jupiter api test import org junit jupiter api extension registerextension author vasil mitov vasil mitov 
clouway com class retrytest registerextension public final junitrulemockery context new junitrulemockery private final systemwrapper system context mock systemwrapper class test void retrypasses mycounter mycounter new mycounter retry integerretry new retry mycounter count system integerretry execute test void retryfails mycounter mycounter new mycounter retry integerretry new retry mycounter count system context checking new expectations oneof system notify with any string class oneof system exit integerretry execute test void testshouldfail mycounter mycounter new mycounter retry integerretry new retry mycounter count system context checking new expectations oneof system notify with any string class oneof system exit integerretry execute note that the last test should fail as the retry mechanism would have called the counter times so the minimum has been met but the test passes i have been doing some attempts of method coding a custom extension which would simulate the call which looks something like this note this is a work in progress just for a suggestion java runwith class public class implements private junitrulemockery junitrulemockery new junitrulemockery override public void checking abstractexpectations expectations callable callable throws exception junitrulemockery checking expectations callable call junitrulemockery assertissatisfied override public void checking abstractexpectations expectations method method throws exception junitrulemockery checking expectations method invoke null junitrulemockery assertissatisfied override public void checking abstractexpectations expectations method method object params throws exception junitrulemockery checking expectations method invoke params junitrulemockery assertissatisfied override public t mock class typetomock return junitrulemockery mock typetomock java import org junit jupiter api extension extensioncontext import org junit jupiter api extension parametercontext import org junit jupiter api extension 
parameterresolutionexception import org junit jupiter api extension parameterresolver author vasil mitov vasil mitov clouway com public class implements parameterresolver override public boolean supportsparameter parametercontext parametercontext extensioncontext extensioncontext throws parameterresolutionexception return parametercontext getparameter gettype class override public object resolveparameter parametercontext parametercontext extensioncontext extensioncontext throws parameterresolutionexception return new would love some feedback on the matter it s possible i m doing something wrong but i am stuck on this for a very long time | 1 |
69,902 | 22,744,619,423 | IssuesEvent | 2022-07-07 08:04:47 | melink14/rikaikun | https://api.github.com/repos/melink14/rikaikun | closed | Error message | Type-Defect auto-migrated P2 obsolete | ```
I've been trying to save words into file using Rikaichan, but seem to always
get this error:
Error while saving: [Exception..."Component returned failure code 0x800040005"
(NS_ERROR_FAILURE) [nslFileOutputStream.init]" nsresult: "0x80004005
(NS_ERROR_FAILURE)" location: "JS frame ::
Chrome://rikaichan/content/ricaichan.js :: anonymous :: line 697" data:no]
Any idea what might be wrong?
```
Original issue reported on code.google.com by `joshuagi...@yahoo.co.uk` on 16 Aug 2013 at 4:33
| 1.0 | Error message - ```
I've been trying to save words into file using Rikaichan, but seem to always
get this error:
Error while saving: [Exception..."Component returned failure code 0x800040005"
(NS_ERROR_FAILURE) [nslFileOutputStream.init]" nsresult: "0x80004005
(NS_ERROR_FAILURE)" location: "JS frame ::
Chrome://rikaichan/content/ricaichan.js :: anonymous :: line 697" data:no]
Any idea what might be wrong?
```
Original issue reported on code.google.com by `joshuagi...@yahoo.co.uk` on 16 Aug 2013 at 4:33
| non_comp | error message i ve been trying to save words into file using rikaichan but seem to always get this error error while saving exception component returned failure code ns error failure nsresult ns error failure location js frame chrome rikaichan content ricaichan js anonymous line data no any idea what might be wrong original issue reported on code google com by joshuagi yahoo co uk on aug at | 0 |
83,740 | 3,641,421,661 | IssuesEvent | 2016-02-13 16:33:52 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | misplaced cassandra-controller.yaml excerpt in examples/cassandra | kind/bug kind/example priority/P1 team/ux | It should be `cassandra.yaml` at the place of the current first occurrence of the sample `cassandra-controller.yaml`. | 1.0 | misplaced cassandra-controller.yaml excerpt in examples/cassandra - It should be `cassandra.yaml` at the place of the current first occurrence of the sample `cassandra-controller.yaml`. | non_comp | misplaced cassandra controller yaml excerpt in examples cassandra it should be cassandra yaml at the place of the current first occurrence of the sample cassandra controller yaml | 0 |
73,562 | 14,103,733,402 | IssuesEvent | 2020-11-06 10:43:52 | RadarCOVID/radar-covid-android | https://api.github.com/repos/RadarCOVID/radar-covid-android | closed | Curly braces should be used on conditionals | code quality | 🔑
From Kotlin code convention: https://kotlinlang.org/docs/reference/control-flow.html
```kotlin
// Traditional usage
var max = a
if (a < b) max = b
// With else
var max: Int
if (a > b) {
max = a
} else {
max = b
}
// As expression
val max = if (a > b) a else b
```
There are a few places where the curly braces are omitted | 1.0 | Curly braces should be used on conditionals - 🔑
From Kotlin code convention: https://kotlinlang.org/docs/reference/control-flow.html
```kotlin
// Traditional usage
var max = a
if (a < b) max = b
// With else
var max: Int
if (a > b) {
max = a
} else {
max = b
}
// As expression
val max = if (a > b) a else b
```
There are a few places where the curly braces are omitted | non_comp | curly braces should be used on conditionals 🔑 from kotlin code convention traditional usage var max a if a b max b with else var max int if a b max a else max b as expression val max if a b a else b there are a few places where the curly braces are omitted | 0 |
2,568 | 5,295,120,303 | IssuesEvent | 2017-02-09 12:59:16 | presscustomizr/customizr | https://api.github.com/repos/presscustomizr/customizr | opened | Shortcodes Ultimate : Slider shortcode does not display correctly | bug compatibility-issue | As reported by a user
> When using the slider shortcode in a page and Previewing changes then the slider displays correctly.
>
> Yet viewing the page via menu's then the slider does not display.
>
> Example: [su_slider source="media: 364,365,366,367" width="500" height="340" responsive="no" title="no" centered="no" arrows="no" pages="no" mousewheel="no"]
>
>
> I'm using a child theme of Customizr v3.5.1 with WP version 4.7.2.
>
> Shortcodes Ultimate version 4.9.9 | True | Shortcodes Ultimate : Slider shortcode does not display correctly - As reported by a user
> When using the slider shortcode in a page and Previewing changes then the slider displays correctly.
>
> Yet viewing the page via menu's then the slider does not display.
>
> Example: [su_slider source="media: 364,365,366,367" width="500" height="340" responsive="no" title="no" centered="no" arrows="no" pages="no" mousewheel="no"]
>
>
> I'm using a child theme of Customizr v3.5.1 with WP version 4.7.2.
>
> Shortcodes Ultimate version 4.9.9 | comp | shortcodes ultimate slider shortcode does not display correctly as reported by a user when using the slider shortcode in a page and previewing changes then the slider displays correctly yet viewing the page via menu s then the slider does not display example i m using a child theme of customizr with wp version shortcodes ultimate version | 1 |
228,899 | 7,569,293,993 | IssuesEvent | 2018-04-23 03:28:02 | turt2live/matrix-voyager-bot | https://api.github.com/repos/turt2live/matrix-voyager-bot | opened | Use user ID hashes when comparing users and track servers explicitly | enhancement priority t2bot.io instance | Currently Voyager stores a large list of user IDs to ensure it doesn't accidentally track duplicates. However, this means that (in production) more than 700k users are recorded for the purposes of getting a global user count across the visible network. Instead of storing the user IDs explicitly, the user IDs should be hashed to ensure that voyager can detect duplicate users, making the count accurate, but not collect the user ID explicitly.
This would ensure that users who wish to remain anonymous (the default) are not able to be linked to a particular room in any way without explicitly opting in. Users who opt in to the linking by making their profile public on the graph would be linked, but only for the duration of their opt-in period. Once they opt-out again, their profile information should be discarded such that they become another hash in the database.
The act of hashing user IDs means that the server information is not accessible in a database query. For this reason, voyager should track a distinct list of servers it has encountered (by aliases, user IDs, etc). This should not contain a number of users/aliases/etc, just that the server has been seen.
The processing logic would be something like:
* When an `m.room.member` event needs processing, hash the user ID and split out the server name
* Persist the server name in the store if it is unique
* Discard the server name
* Store the user ID hash if it is unique, updating any applicable counts
When a user explicitly opts-in to linking/tracking on the public graph, Voyager can associate the user ID to the hash by calculating the hash of the requesting user. This is when Voyager can queue a profile fetch. | 1.0 | Use user ID hashes when comparing users and track servers explicitly - Currently Voyager stores a large list of user IDs to ensure it doesn't accidentally track duplicates. However, this means that (in production) more than 700k users are recorded for the purposes of getting a global user count across the visible network. Instead of storing the user IDs explicitly, the user IDs should be hashed to ensure that voyager can detect duplicate users, making the count accurate, but not collect the user ID explicitly.
This would ensure that users who wish to remain anonymous (the default) are not able to be linked to a particular room in any way without explicitly opting in. Users who opt in to the linking by making their profile public on the graph would be linked, but only for the duration of their opt-in period. Once they opt-out again, their profile information should be discarded such that they become another hash in the database.
The act of hashing user IDs means that the server information is not accessible in a database query. For this reason, voyager should track a distinct list of servers it has encountered (by aliases, user IDs, etc). This should not contain a number of users/aliases/etc, just that the server has been seen.
The processing logic would be something like:
* When an `m.room.member` event needs processing, hash the user ID and split out the server name
* Persist the server name in the store if it is unique
* Discard the server name
* Store the user ID hash if it is unique, updating any applicable counts
When a user explicitly opts-in to linking/tracking on the public graph, Voyager can associate the user ID to the hash by calculating the hash of the requesting user. This is when Voyager can queue a profile fetch. | non_comp | use user id hashes when comparing users and track servers explicitly currently voyager stores a large list of user ids to ensure it doesn t accidentally track duplicates however this means that in production more than users are recorded for the purposes of getting a global user count across the visible network instead of storing the user ids explicitly the user ids should be hashed to ensure that voyager can detect duplicate users making the count accurate but not collect the user id explicitly this would ensure that users who wish to remain anonymous the default are not able to be linked to a particular room in any way without explicitly opting in users who opt in to the linking by making their profile public on the graph would be linked but only for the duration of their opt in period once they opt out again their profile information should be discarded such that they become another hash in the database the act of hashing user ids means that the server information is not accessible in a database query for this reason voyager should track a distinct list of servers it has encountered by aliases user ids etc this should not contain a number of users aliases etc just that the server has been seen the processing logic would be something like when an m room member event needs processing hash the user id and split out the server name persist the server name in the store if it is unique discard the server name store the user id hash if it is unique updating any applicable counts when a user explicitly opts in to linking tracking on the public graph voyager can associate the user id to the hash by calculating the hash of the requesting user this is when voyager can queue a profile fetch | 0 |
13,814 | 5,467,265,993 | IssuesEvent | 2017-03-10 00:31:53 | mitchellh/packer | https://api.github.com/repos/mitchellh/packer | closed | build -force should remove existing AWS AMI | bug builder/amazon | According to the docs, -force should remove existing artifacts before building, however this seems to work for QEMU but not for amazon-ebs builder:
# packer build -force template.json
[...]
==> amazon-ebs: Error: name conflicts with an existing AMI: ami-0f160a63
Build 'amazon-ebs' errored: Error: name conflicts with an existing AMI: ami-0f160a63
| 1.0 | build -force should remove existing AWS AMI - According to the docs, -force should remove existing artifacts before building, however this seems to work for QEMU but not for amazon-ebs builder:
# packer build -force template.json
[...]
==> amazon-ebs: Error: name conflicts with an existing AMI: ami-0f160a63
Build 'amazon-ebs' errored: Error: name conflicts with an existing AMI: ami-0f160a63
| non_comp | build force should remove existing aws ami according to the docs force should remove existing artifacts before building however this seems to work for qemu but not for amazon ebs builder packer build force template json amazon ebs error name conflicts with an existing ami ami build amazon ebs errored error name conflicts with an existing ami ami | 0 |
489,586 | 14,108,481,823 | IssuesEvent | 2020-11-06 17:53:27 | AY2021S1-CS2103T-W16-2/tp | https://api.github.com/repos/AY2021S1-CS2103T-W16-2/tp | closed | [PE-D] Format part of UG is sometimes not clear | priority.High severity.Low | When describing the format of the commands, the square bracket placement is not always consistent, so it can be unclear whether the prefixes themselves are compulsory.

<!--session: 1604044893317-510a778e-4091-496a-839f-7db76e56ef50-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: hakujitsu/ped#8 | 1.0 | [PE-D] Format part of UG is sometimes not clear - When describing the format of the commands, the square bracket placement is not always consistent, so it can be unclear whether the prefixes themselves are compulsory.

<!--session: 1604044893317-510a778e-4091-496a-839f-7db76e56ef50-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: hakujitsu/ped#8 | non_comp | format part of ug is sometimes not clear when describing the format of the commands the square bracket placement is not always consistent so it can be unclear whether the prefixes themselves are compulsory labels severity low type documentationbug original hakujitsu ped | 0 |
200,628 | 22,820,373,013 | IssuesEvent | 2022-07-12 01:13:40 | Rossb0b/Next_bac-a-sable | https://api.github.com/repos/Rossb0b/Next_bac-a-sable | opened | CVE-2022-31127 (High) detected in next-auth-3.11.2.tgz | security vulnerability | ## CVE-2022-31127 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>next-auth-3.11.2.tgz</b></p></summary>
<p>Authentication for Next.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/next-auth/-/next-auth-3.11.2.tgz">https://registry.npmjs.org/next-auth/-/next-auth-3.11.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/location/package.json</p>
<p>Path to vulnerable library: /location/node_modules/next-auth/package.json</p>
<p>
Dependency Hierarchy:
- :x: **next-auth-3.11.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
NextAuth.js is a complete open source authentication solution for Next.js applications. An attacker can pass a compromised input to the e-mail [signin endpoint](https://next-auth.js.org/getting-started/rest-api#post-apiauthsigninprovider) that contains some malicious HTML, tricking the e-mail server to send it to the user, so they can perform a phishing attack. Eg.: `balazs@email.com, <a href="http://attacker.com">Before signing in, claim your money!</a>`. This was previously sent to `balazs@email.com`, and the content of the email containing a link to the attacker's site was rendered in the HTML. This has been remedied in the following releases, by simply not rendering that e-mail in the HTML, since it should be obvious to the receiver what e-mail they used: next-auth v3 users before version 3.29.8 are impacted. (We recommend upgrading to v4, as v3 is considered unmaintained. next-auth v4 users before version 4.9.0 are impacted. If for some reason you cannot upgrade, the workaround requires you to sanitize the `email` parameter that is passed to `sendVerificationRequest` and rendered in the HTML. If you haven't created a custom `sendVerificationRequest`, you only need to upgrade. Otherwise, make sure to either exclude `email` from the HTML body or efficiently sanitize it.
<p>Publish Date: 2022-07-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31127>CVE-2022-31127</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/nextauthjs/next-auth/security/advisories/GHSA-pgjx-7f9g-9463">https://github.com/nextauthjs/next-auth/security/advisories/GHSA-pgjx-7f9g-9463</a></p>
<p>Release Date: 2022-07-06</p>
<p>Fix Resolution: next-auth@v4.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-31127 (High) detected in next-auth-3.11.2.tgz - ## CVE-2022-31127 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>next-auth-3.11.2.tgz</b></p></summary>
<p>Authentication for Next.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/next-auth/-/next-auth-3.11.2.tgz">https://registry.npmjs.org/next-auth/-/next-auth-3.11.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/location/package.json</p>
<p>Path to vulnerable library: /location/node_modules/next-auth/package.json</p>
<p>
Dependency Hierarchy:
- :x: **next-auth-3.11.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
NextAuth.js is a complete open source authentication solution for Next.js applications. An attacker can pass a compromised input to the e-mail [signin endpoint](https://next-auth.js.org/getting-started/rest-api#post-apiauthsigninprovider) that contains some malicious HTML, tricking the e-mail server to send it to the user, so they can perform a phishing attack. Eg.: `balazs@email.com, <a href="http://attacker.com">Before signing in, claim your money!</a>`. This was previously sent to `balazs@email.com`, and the content of the email containing a link to the attacker's site was rendered in the HTML. This has been remedied in the following releases, by simply not rendering that e-mail in the HTML, since it should be obvious to the receiver what e-mail they used: next-auth v3 users before version 3.29.8 are impacted. (We recommend upgrading to v4, as v3 is considered unmaintained. next-auth v4 users before version 4.9.0 are impacted. If for some reason you cannot upgrade, the workaround requires you to sanitize the `email` parameter that is passed to `sendVerificationRequest` and rendered in the HTML. If you haven't created a custom `sendVerificationRequest`, you only need to upgrade. Otherwise, make sure to either exclude `email` from the HTML body or efficiently sanitize it.
<p>Publish Date: 2022-07-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31127>CVE-2022-31127</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/nextauthjs/next-auth/security/advisories/GHSA-pgjx-7f9g-9463">https://github.com/nextauthjs/next-auth/security/advisories/GHSA-pgjx-7f9g-9463</a></p>
<p>Release Date: 2022-07-06</p>
<p>Fix Resolution: next-auth@v4.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_comp | cve high detected in next auth tgz cve high severity vulnerability vulnerable library next auth tgz authentication for next js library home page a href path to dependency file tmp ws scm location package json path to vulnerable library location node modules next auth package json dependency hierarchy x next auth tgz vulnerable library found in base branch main vulnerability details nextauth js is a complete open source authentication solution for next js applications an attacker can pass a compromised input to the e mail that contains some malicious html tricking the e mail server to send it to the user so they can perform a phishing attack eg balazs email com this was previously sent to balazs email com and the content of the email containing a link to the attacker s site was rendered in the html this has been remedied in the following releases by simply not rendering that e mail in the html since it should be obvious to the receiver what e mail they used next auth users before version are impacted we recommend upgrading to as is considered unmaintained next auth users before version are impacted if for some reason you cannot upgrade the workaround requires you to sanitize the email parameter that is passed to sendverificationrequest and rendered in the html if you haven t created a custom sendverificationrequest you only need to upgrade otherwise make sure to either exclude email from the html body or efficiently sanitize it publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution next auth step 
up your open source security game with mend | 0 |
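The mitigation this CVE row describes — do not render the raw `email` parameter into the message HTML, or sanitize it first — comes down to ordinary HTML escaping. Below is a minimal, framework-agnostic Python sketch of that idea; it is not next-auth's actual `sendVerificationRequest` implementation, and the function name is hypothetical:

```python
import html


def render_signin_email(email: str) -> str:
    """Build the HTML body for a sign-in message.

    Escaping the user-supplied address neutralizes the injection
    pattern described above (an "address" that smuggles in an
    <a> tag pointing at an attacker's site).
    """
    safe_email = html.escape(email)
    return f"<p>Sign in as <strong>{safe_email}</strong></p>"


# The payload shape from the advisory: an address plus injected HTML.
malicious = 'balazs@email.com, <a href="http://attacker.com">claim your money!</a>'
body = render_signin_email(malicious)

assert "<a href" not in body   # injected tag is not emitted as markup...
assert "&lt;a href" in body    # ...because it was escaped, not dropped
```

The alternative fix the advisory mentions — simply excluding the address from the HTML body — avoids the problem entirely at the cost of a slightly less informative message.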
20,740 | 30,832,180,809 | IssuesEvent | 2023-08-02 03:13:11 | modded-factorio/bobsmods | https://api.github.com/repos/modded-factorio/bobsmods | closed | Error with Py mods | mod compatibility Bob's Enemies | https://mods.factorio.com/mod/bobenemies/discussion/63527213b0aeacff191d6099
```
failed to load mods: pypostprocessing/prototypes/functions/auto_tech.lua:150:
ERROR: dependency loop detected
stack traceback:
[C]: in function 'error'
pypostprocessing/prototypes/functions/auto_tech.lua:150: in function 'run'
pypostprocessing/data-final-fixes:144: in main chunk
mods to be disabled:
pypostprocessing (0.1.0)
```
### Minimal Mod list
- bobenemies
- boblibrary
- stdlib
- pycoalprocessing
- pycoalprocessinggraphics
- pyindustry
- pypostprocessing
### Info From King

There's a function in pypp to create a temp tech for a recipe that can help debug it as it'll let pypp move it to where it thinks it should be. | True | Error with Py mods - https://mods.factorio.com/mod/bobenemies/discussion/63527213b0aeacff191d6099
```
failed to load mods: pypostprocessing/prototypes/functions/auto_tech.lua:150:
ERROR: dependency loop detected
stack traceback:
[C]: in function 'error'
pypostprocessing/prototypes/functions/auto_tech.lua:150: in function 'run'
pypostprocessing/data-final-fixes:144: in main chunk
mods to be disabled:
pypostprocessing (0.1.0)
```
### Minimal Mod list
- bobenemies
- boblibrary
- stdlib
- pycoalprocessing
- pycoalprocessinggraphics
- pyindustry
- pypostprocessing
### Info From King

There's a function in pypp to create a temp tech for a recipe that can help debug it as it'll let pypp move it to where it thinks it should be. | comp | error with py mods failed to load mods pypostprocessing prototypes functions auto tech lua error dependency loop detected stack traceback in function error pypostprocessing prototypes functions auto tech lua in function run pypostprocessing data final fixes in main chunk mods to be disabled pypostprocessing minimal mod list bobenemies boblibrary stdlib pycoalprocessing pycoalprocessinggraphics pyindustry pypostprocessing info from king there s a function in pypp to create a temp tech for a recipe that can help debug it as it ll let pypp move it to where it thinks it should be | 1 |
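The `pypostprocessing` failure in this row is a cycle in the generated tech-dependency graph. The mod's own pass is Lua, but the underlying check is plain graph cycle detection; the sketch below shows the idea in Python (names and data shapes are hypothetical, not `auto_tech.lua`'s actual code):

```python
def find_dependency_loop(deps):
    """Return one dependency cycle as a list of names, or None.

    `deps` maps each tech/recipe name to the names it depends on.
    Iterative DFS with three-color marking, so deep trees don't
    hit the recursion limit.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in deps}
    for start in deps:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(deps.get(start, ())))]
        color[start] = GRAY
        path = [start]
        while stack:
            node, children = stack[-1]
            advanced = False
            for child in children:
                if color.get(child, WHITE) == GRAY:
                    # Back-edge: slice the current path into a cycle.
                    return path[path.index(child):] + [child]
                if color.get(child, WHITE) == WHITE:
                    color[child] = GRAY
                    stack.append((child, iter(deps.get(child, ()))))
                    path.append(child)
                    advanced = True
                    break
            if not advanced:
                color[node] = BLACK
                stack.pop()
                path.pop()
    return None


# A loop like the one pypp reports: a -> b -> c -> a
techs = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}
loop = find_dependency_loop(techs)
assert loop is not None and loop[0] == loop[-1]
```

Reporting the cycle itself (rather than just "dependency loop detected") is what makes errors like this one debuggable, since the offending prerequisite edge is named explicitly.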
7,764 | 9,995,059,470 | IssuesEvent | 2019-07-11 19:15:48 | ChrisAdderley/StationPartsExpansionRedux | https://api.github.com/repos/ChrisAdderley/StationPartsExpansionRedux | closed | USI-LS compat, ModuleHabitation doesn't consume EC | enhancement ext. compatibility next version | Both `PPD-24 'Panorama' Observation Module` and `PXL-9 Astrogation Module` parts have no EC consumption for the ModuleHabitation.
The module configuration is handled by `StationPartsExpansionRedux\Patches\SSPXR-USILS.cfg` patch (l.106-128) | True | USI-LS compat, ModuleHabitation doesn't consume EC - Both `PPD-24 'Panorama' Observation Module` and `PXL-9 Astrogation Module` parts have no EC consumption for the ModuleHabitation.
The module configuration is handled by `StationPartsExpansionRedux\Patches\SSPXR-USILS.cfg` patch (l.106-128) | comp | usi ls compat modulehabitation doesn t consume ec both ppd panorama observation module and pxl astrogation module parts have no ec consumption for the modulehabitation the module configuration is handled by stationpartsexpansionredux patches sspxr usils cfg patch l | 1
516,535 | 14,983,625,313 | IssuesEvent | 2021-01-28 17:27:33 | yalla-coop/scouts | https://api.github.com/repos/yalla-coop/scouts | closed | I can see a dashboard showing me my stats and presenting me further options (volunteer) | back-end backlog front-end priority-3 volunteer | - dashboard for volunteers (new user, not yet signed up) and existing user (mobile and desktop)
- wireframes (existing user): https://www.figma.com/file/9oFoQawgjFYwHi1dG77RH7/The-Scouts-Association-MVP?node-id=1449%3A6093
- wireframes (new user): https://www.figma.com/file/9oFoQawgjFYwHi1dG77RH7/The-Scouts-Association-MVP?node-id=1449%3A5590
---
### Acceptance Criteria:
- [ ] Welcome section w. title with user's name and description
- [ ] Digital score section with title, circle diagram showing user's score (out of 100) and collapsable explanation text (tell me more button)
- [ ] check for user updates: if user has up-skilled it says X points since joining
- [ ] Digital skills section with title, completed skills, completed skills since joining (if any) and collapsable explanation (tell me more button)
- [ ] Become a champion section with title, explanation and CTA button (→ skill areas recommendations page)
- [ ] Create options for first time user, links #56 (when coming from assessment, see wireframes for dashboard - new user)
- [ ] create backend logic (queries, use-cases, controllers, api tests) | 1.0 | I can see a dashboard showing me my stats and presenting me further options (volunteer) - - dashboard for volunteers (new user, not yet signed up) and existing user (mobile and desktop)
- wireframes (existing user): https://www.figma.com/file/9oFoQawgjFYwHi1dG77RH7/The-Scouts-Association-MVP?node-id=1449%3A6093
- wireframes (new user): https://www.figma.com/file/9oFoQawgjFYwHi1dG77RH7/The-Scouts-Association-MVP?node-id=1449%3A5590
---
### Acceptance Criteria:
- [ ] Welcome section w. title with user's name and description
- [ ] Digital score section with title, circle diagram showing user's score (out of 100) and collapsable explanation text (tell me more button)
- [ ] check for user updates: if user has up-skilled it says X points since joining
- [ ] Digital skills section with title, completed skills, completed skills since joining (if any) and collapsable explanation (tell me more button)
- [ ] Become a champion section with title, explanation and CTA button (→ skill areas recommendations page)
- [ ] Create options for first time user, links #56 (when coming from assessment, see wireframes for dashboard - new user)
- [ ] create backend logic (queries, use-cases, controllers, api tests) | non_comp | i can see a dashboard showing me my stats and presenting me further options volunteer dashboard for volunteers new user not yet signed up and existing user mobile and desktop wireframes existing user wireframes new user acceptance criteria welcome section w title with user s name and description digital score section with title circle diagram showing user s score out of and collapsable explanation text tell me more button check for user updates if user has up skilled it says x points since joining digital skills section with title completed skills completed skills since joining if any and collapsable explanation tell me more button become a champion section with title explanation and cta button → skill areas recommendations page create options for first time user links when coming from assessment see wireframes for dashboard new user create backend logic queries use cases controllers api tests | 0 |
12,407 | 9,648,221,093 | IssuesEvent | 2019-05-17 15:42:35 | terraform-providers/terraform-provider-azurerm | https://api.github.com/repos/terraform-providers/terraform-provider-azurerm | closed | Bug Fixes/Enhancements to Application Gateway | bug enhancement service/application-gateway | ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Affected Resource(s)
* `azurerm_application_gateway`
---
The `azurerm_application_gateway` resource is currently missing a selection of fields and also has some bugs which need resolving. Unfortunately there's [a bug in the Application Gateway API where the Application Gateway isn't actually deleted](https://github.com/Azure/azure-rest-api-specs/issues/2187) which prevents us from proceeding this work, since our tests for these resources are failing around 80% of the time due to the Application Gateway not being deleted (meaning this fills up our quota's).
Rather than trying to track the status of this bug across multiple issues - I'm opening this meta-issue to keep track of these bugs and enhancements in one place. Once the bug in the API is resolved - it should be possible to add these enhancements/investigate fixing these bugs; however these are blocked at the moment.
## Blocking API issues
- ~[The Application Gateway API returns that an Application Gateway has been deleted when it hasn't](https://github.com/Azure/azure-rest-api-specs/issues/2187) (which was previously tracked in this repository in https://github.com/terraform-providers/terraform-provider-azurerm/issues/608)~
## Enhancements
- ~[Support for Disabled Rule Groups](https://github.com/terraform-providers/terraform-provider-azurerm/issues/451)~ (fixed in #3394)
- ~[Support for Redirect Rules](https://github.com/terraform-providers/terraform-provider-azurerm/issues/552)~
- ~[Support for Affinity cookie name](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1519)~ (fixed in #3434)
- ~[Support for Connection draining](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1519)~ (fixed in #2778)
- ~[Support for Hostname](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1875)~ (fixed in #2990)
- ~[Support for Diagnostics Logs](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1519)~ (this is the regular Diagnostics resource, being tracked in #657)
- ~[Split Application Gateway resource into multiple resources](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1542)~ not possible at this time due to the API
- ~[Tags](
https://github.com/terraform-providers/terraform-provider-azurerm/issues/1576#issuecomment-406302735)~ (fixed in #2054)
## Bug Fixes
- ~[SSL Certificate recreated every apply](https://github.com/terraform-providers/terraform-provider-azurerm/issues/583)~ (fixed in #2054)
- ~[Changing the Subnet fails](https://github.com/terraform-providers/terraform-provider-azurerm/issues/747)~ (fixed in #3437)
- [Virtual Network Peering gets disconnected when changing a backend pool](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1274) | 1.0 | Bug Fixes/Enhancements to Application Gateway - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Affected Resource(s)
* `azurerm_application_gateway`
---
The `azurerm_application_gateway` resource is currently missing a selection of fields and also has some bugs which need resolving. Unfortunately there's [a bug in the Application Gateway API where the Application Gateway isn't actually deleted](https://github.com/Azure/azure-rest-api-specs/issues/2187) which prevents us from proceeding this work, since our tests for these resources are failing around 80% of the time due to the Application Gateway not being deleted (meaning this fills up our quota's).
Rather than trying to track the status of this bug across multiple issues - I'm opening this meta-issue to keep track of these bugs and enhancements in one place. Once the bug in the API is resolved - it should be possible to add these enhancements/investigate fixing these bugs; however these are blocked at the moment.
## Blocking API issues
- ~[The Application Gateway API returns that an Application Gateway has been deleted when it hasn't](https://github.com/Azure/azure-rest-api-specs/issues/2187) (which was previously tracked in this repository in https://github.com/terraform-providers/terraform-provider-azurerm/issues/608)~
## Enhancements
- ~[Support for Disabled Rule Groups](https://github.com/terraform-providers/terraform-provider-azurerm/issues/451)~ (fixed in #3394)
- ~[Support for Redirect Rules](https://github.com/terraform-providers/terraform-provider-azurerm/issues/552)~
- ~[Support for Affinity cookie name](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1519)~ (fixed in #3434)
- ~[Support for Connection draining](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1519)~ (fixed in #2778)
- ~[Support for Hostname](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1875)~ (fixed in #2990)
- ~[Support for Diagnostics Logs](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1519)~ (this is the regular Diagnostics resource, being tracked in #657)
- ~[Split Application Gateway resource into multiple resources](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1542)~ not possible at this time due to the API
- ~[Tags](
https://github.com/terraform-providers/terraform-provider-azurerm/issues/1576#issuecomment-406302735)~ (fixed in #2054)
## Bug Fixes
- ~[SSL Certificate recreated every apply](https://github.com/terraform-providers/terraform-provider-azurerm/issues/583)~ (fixed in #2054)
- ~[Changing the Subnet fails](https://github.com/terraform-providers/terraform-provider-azurerm/issues/747)~ (fixed in #3437)
- [Virtual Network Peering gets disconnected when changing a backend pool](https://github.com/terraform-providers/terraform-provider-azurerm/issues/1274) | non_comp | bug fixes enhancements to application gateway community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment affected resource s azurerm application gateway the azurerm application gateway resource is currently missing a selection of fields and also has some bugs which need resolving unfortunately there s which prevents us from proceeding this work since our tests for these resources are failing around of the time due to the application gateway not being deleted meaning this fills up our quota s rather than trying to track the status of this bug across multiple issues i m opening this meta issue to keep track of these bugs and enhancements in one place once the bug in the api is resolved it should be possible to add these enhancements investigate fixing these bugs however these are blocked at the moment blocking api issues which was previously tracked in this repository in enhancements fixed in fixed in fixed in fixed in this is the regular diagnostics resource being tracked in not possible at this time due to the api fixed in bug fixes fixed in fixed in | 0 |
11,074 | 9,211,258,582 | IssuesEvent | 2019-03-09 13:51:03 | dpb587/ssoca | https://api.github.com/repos/dpb587/ssoca | opened | `openvpn exec` with management interface doesn't work on Debian Buster | bug platform/linux service/openvpn | When running `openvpn exec`, connections may fail with the following error:
$ ssoca openvpn exec
Thu Mar 7 22:20:02 2019 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 20 2019
Thu Mar 7 22:20:02 2019 library versions: OpenSSL 1.1.1a 20 Nov 2018, LZO 2.10
Thu Mar 7 22:20:02 2019 MANAGEMENT: Connected to management server at [AF_INET]127.0.0.1:37153
Thu Mar 7 22:20:02 2019 MANAGEMENT: CMD 'certificate'
Thu Mar 7 22:20:03 2019 TCP/UDP: Preserving recently used remote address: [AF_INET]192.0.2.1:1194
Thu Mar 7 22:20:03 2019 Socket Buffers: R=[87380->87380] S=[16384->16384]
Thu Mar 7 22:20:03 2019 Attempting to establish TCP connection with [AF_INET]192.0.2.1:1194 [nonblock]
Thu Mar 7 22:20:04 2019 TCP connection established with [AF_INET]192.0.2.1:1194
Thu Mar 7 22:20:04 2019 TCP_CLIENT link local: (not bound)
Thu Mar 7 22:20:04 2019 TCP_CLIENT link remote: [AF_INET]192.0.2.1:1194
Thu Mar 7 22:20:04 2019 TLS: Initial packet from [AF_INET]192.0.2.1, sid=9ec63bb4 f2d748f6
Thu Mar 7 22:20:04 2019 VERIFY OK: depth=1, CN=ssoca
Thu Mar 7 22:20:04 2019 VERIFY KU OK
Thu Mar 7 22:20:04 2019 Validating certificate extended key usage
Thu Mar 7 22:20:04 2019 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
Thu Mar 7 22:20:04 2019 VERIFY EKU OK
Thu Mar 7 22:20:04 2019 VERIFY OK: depth=0, C=USA, O=Cloud Foundry, CN=openvpn
Thu Mar 7 22:20:04 2019 OpenSSL: error:04066076:rsa routines:rsa_ossl_private_encrypt:unknown padding type
Thu Mar 7 22:20:04 2019 OpenSSL: error:141F0006:SSL routines:tls_construct_cert_verify:EVP lib
Thu Mar 7 22:20:04 2019 TLS_ERROR: BIO read tls_read_plaintext error
Thu Mar 7 22:20:04 2019 TLS Error: TLS object -> incoming plaintext read error
Thu Mar 7 22:20:04 2019 TLS Error: TLS handshake failed
Thu Mar 7 22:20:04 2019 Fatal TLS error (check_tls_errors_co), restarting
Thu Mar 7 22:20:04 2019 SIGHUP[soft,tls-error] received, process restarting
It appears to be related to the management interface providing signing operations because when using static certificates, it works.
This has been observed on Debian Buster and can be reproduced with the [`debian:buster` image](https://hub.docker.com/_/debian). This problem was originally misidentified as a bug with Buster's openvpn/openssl packages.
As a workaround, the `--static-certificate` option can be used:
$ ssoca openvpn exec --static-certificate | 1.0 | `openvpn exec` with management interface doesn't work on Debian Buster - When running `openvpn exec`, connections may fail with the following error:
$ ssoca openvpn exec
Thu Mar 7 22:20:02 2019 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 20 2019
Thu Mar 7 22:20:02 2019 library versions: OpenSSL 1.1.1a 20 Nov 2018, LZO 2.10
Thu Mar 7 22:20:02 2019 MANAGEMENT: Connected to management server at [AF_INET]127.0.0.1:37153
Thu Mar 7 22:20:02 2019 MANAGEMENT: CMD 'certificate'
Thu Mar 7 22:20:03 2019 TCP/UDP: Preserving recently used remote address: [AF_INET]192.0.2.1:1194
Thu Mar 7 22:20:03 2019 Socket Buffers: R=[87380->87380] S=[16384->16384]
Thu Mar 7 22:20:03 2019 Attempting to establish TCP connection with [AF_INET]192.0.2.1:1194 [nonblock]
Thu Mar 7 22:20:04 2019 TCP connection established with [AF_INET]192.0.2.1:1194
Thu Mar 7 22:20:04 2019 TCP_CLIENT link local: (not bound)
Thu Mar 7 22:20:04 2019 TCP_CLIENT link remote: [AF_INET]192.0.2.1:1194
Thu Mar 7 22:20:04 2019 TLS: Initial packet from [AF_INET]192.0.2.1, sid=9ec63bb4 f2d748f6
Thu Mar 7 22:20:04 2019 VERIFY OK: depth=1, CN=ssoca
Thu Mar 7 22:20:04 2019 VERIFY KU OK
Thu Mar 7 22:20:04 2019 Validating certificate extended key usage
Thu Mar 7 22:20:04 2019 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
Thu Mar 7 22:20:04 2019 VERIFY EKU OK
Thu Mar 7 22:20:04 2019 VERIFY OK: depth=0, C=USA, O=Cloud Foundry, CN=openvpn
Thu Mar 7 22:20:04 2019 OpenSSL: error:04066076:rsa routines:rsa_ossl_private_encrypt:unknown padding type
Thu Mar 7 22:20:04 2019 OpenSSL: error:141F0006:SSL routines:tls_construct_cert_verify:EVP lib
Thu Mar 7 22:20:04 2019 TLS_ERROR: BIO read tls_read_plaintext error
Thu Mar 7 22:20:04 2019 TLS Error: TLS object -> incoming plaintext read error
Thu Mar 7 22:20:04 2019 TLS Error: TLS handshake failed
Thu Mar 7 22:20:04 2019 Fatal TLS error (check_tls_errors_co), restarting
Thu Mar 7 22:20:04 2019 SIGHUP[soft,tls-error] received, process restarting
It appears to be related to the management interface providing signing operations because when using static certificates, it works.
This has been observed on Debian Buster and can be reproduced with the [`debian:buster` image](https://hub.docker.com/_/debian). This problem was originally misidentified as a bug with Buster's openvpn/openssl packages.
As a workaround, the `--static-certificate` option can be used:
$ ssoca openvpn exec --static-certificate | non_comp | openvpn exec with management interface doesn t work on debian buster when running openvpn exec connections may fail with the following error ssoca openvpn exec thu mar openvpn pc linux gnu built on feb thu mar library versions openssl nov lzo thu mar management connected to management server at thu mar management cmd certificate thu mar tcp udp preserving recently used remote address thu mar socket buffers r s thu mar attempting to establish tcp connection with thu mar tcp connection established with thu mar tcp client link local not bound thu mar tcp client link remote thu mar tls initial packet from sid thu mar verify ok depth cn ssoca thu mar verify ku ok thu mar validating certificate extended key usage thu mar certificate has eku str tls web server authentication expects tls web server authentication thu mar verify eku ok thu mar verify ok depth c usa o cloud foundry cn openvpn thu mar openssl error rsa routines rsa ossl private encrypt unknown padding type thu mar openssl error ssl routines tls construct cert verify evp lib thu mar tls error bio read tls read plaintext error thu mar tls error tls object incoming plaintext read error thu mar tls error tls handshake failed thu mar fatal tls error check tls errors co restarting thu mar sighup received process restarting it appears to be related to the management interface providing signing operations because when using static certificates it works this has been observed on debian buster and can be reproduced with the this problem was originally misidentified as a bug with buster s openvpn openssl packages as a workaround the static certificate option can be used ssoca openvpn exec static certificate | 0 |
83 | 2,550,601,335 | IssuesEvent | 2015-02-01 18:50:48 | angular-ui/ui-sortable | https://api.github.com/repos/angular-ui/ui-sortable | closed | Does ui-sortable play nice with ui.bootstrap.accordion? | incompatibility | I'm having trouble getting a sortable accordion to work. I've [created a jsfiddle](http://jsfiddle.net/whippy/bwq82bvs/) illustrating the problem and asked a [question on stackoverflow](http://stackoverflow.com/questions/26520131/how-can-i-create-a-sortable-accordion-with-angularjs) and am raising the issue here in the hope you can help me see why it is not working. | True | Does ui-sortable play nice with ui.bootstrap.accordion? - I'm having trouble getting a sortable accordion to work. I've [created a jsfiddle](http://jsfiddle.net/whippy/bwq82bvs/) illustrating the problem and asked a [question on stackoverflow](http://stackoverflow.com/questions/26520131/how-can-i-create-a-sortable-accordion-with-angularjs) and am raising the issue here in the hope you can help me see why it is not working. | comp | does ui sortable play nice with ui bootstrap accordion i m having trouble getting a sortable accordion to work i ve illustrating the problem and asked a and am raising the issue here in the hope you can help me see why it is not working | 1 |
116,737 | 4,705,895,587 | IssuesEvent | 2016-10-13 15:39:04 | aces/cbrain | https://api.github.com/repos/aces/cbrain | opened | Signup system to render the "new user" page when approving. | Enhancement Priority: Normal | The signup system should be improved: it should no longer create users at all when clicking the "approve" link or button. Instead, the admin will be redirected to (or more precisely, the system `will render`) the user "new" page with all the input elements already filled in. This is very easy to do.
As a side effect, we must also remove the mass "approve" mechanism: approbation must be done one by one. That's ok with me.
We can cleanup the code quite a bit with this change, as the User model code takes over validation. The only prob is how to update the Signup object to indicate the account was created already. I suggest a quick `after_create` callback in the User model. | 1.0 | Signup system to render the "new user" page when approving. - The signup system should be improved: it should no longer create users at all when clicking the "approve" link or button. Instead, the admin will be redirected to (or more precisely, the system `will render`) the user "new" page with all the input elements already filled in. This is very easy to do.
As a side effect, we must also remove the mass "approve" mechanism: approbation must be done one by one. That's ok with me.
We can cleanup the code quite a bit with this change, as the User model code takes over validation. The only prob is how to update the Signup object to indicate the account was created already. I suggest a quick `after_create` callback in the User model. | non_comp | signup system to render the new user page when approving the signup system should be improved it should no longer create users at all when clicking the approve link or button instead the admin will be redirected to or more precisely the system will render the user new page with all the input elements already filled in this is very easy to do as a side effect we must also remove the mass approve mechanism approbation must be done one by one that s ok with me we can cleanup the code quite a bit with this change as the user model code takes over validation the only prob is how to update the signup object to indicate the account was created already i suggest a quick after create callback in the user model | 0 |
466,341 | 13,400,428,583 | IssuesEvent | 2020-09-03 15:48:33 | way-of-elendil/3.3.5 | https://api.github.com/repos/way-of-elendil/3.3.5 | opened | NPC: Thane Korth'azz et Baron Vaillefendre - Naxxramas | bug priority-medium type-boss type-dungeon | **Description**
16064 - [Thane Korth'azz]
30549 - [Baron Vaillefendre]
10 and 25
The 2 NPCs that can auto-attack and that position themselves at the front follow their target instead of staying at the point where they are supposed to be
https://wowwiki.fandom.com/wiki/Four_Horsemen | 1.0 | NPC: Thane Korth'azz et Baron Vaillefendre - Naxxramas - **Description**
16064 - [Thane Korth'azz]
30549 - [Baron Vaillefendre]
10 and 25
The 2 NPCs that can auto-attack and that position themselves at the front follow their target instead of staying at the point where they are supposed to be
https://wowwiki.fandom.com/wiki/Four_Horsemen | non_comp | npc thane korth azz et baron vaillefendre naxxramas description and the npcs that can auto attack and that position themselves at the front follow their target instead of staying at the point where they are supposed to be | 0
1,484 | 4,006,453,578 | IssuesEvent | 2016-05-12 14:58:25 | yiisoft/yii | https://api.github.com/repos/yiisoft/yii | closed | Cannot regenerate session id - session is not active | compatibility:PHP7 type:bug | I use Codeception with Yiiibridge.
```
1) Tests\Unit\ScoutSearchTest::testSearchAge
PHPUnit_Framework_Exception: session_regenerate_id(): Cannot regenerate session id - session is not active
#1 Codeception\Subscriber\ErrorHandler->errorHandler
#2 /var/www/html/corporation/protected/vendor/yiisoft/yii/framework/web/CHttpSession.php:182
#3 /var/www/html/corporation/protected/vendor/yiisoft/yii/framework/web/auth/CWebUser.php:715
#4 /var/www/html/corporation/protected/vendor/yiisoft/yii/framework/web/auth/CWebUser.php:233
#5 /var/www/html/corporation/protected/models/LoginForm.php:71
#6 /var/www/html/corporation/protected/tests/cui/tests/unit/ScoutSearchTest.php:51
```
Environment:
* CentOS release 6.7
* PHP 7.0.5
* Yii 1.1.17 | True | Cannot regenerate session id - session is not active - I use Codeception with Yiiibridge.
```
1) Tests\Unit\ScoutSearchTest::testSearchAge
PHPUnit_Framework_Exception: session_regenerate_id(): Cannot regenerate session id - session is not active
#1 Codeception\Subscriber\ErrorHandler->errorHandler
#2 /var/www/html/corporation/protected/vendor/yiisoft/yii/framework/web/CHttpSession.php:182
#3 /var/www/html/corporation/protected/vendor/yiisoft/yii/framework/web/auth/CWebUser.php:715
#4 /var/www/html/corporation/protected/vendor/yiisoft/yii/framework/web/auth/CWebUser.php:233
#5 /var/www/html/corporation/protected/models/LoginForm.php:71
#6 /var/www/html/corporation/protected/tests/cui/tests/unit/ScoutSearchTest.php:51
```
Environment:
* CentOS release 6.7
* PHP 7.0.5
* Yii 1.1.17 | comp | cannot regenerate session id session is not active i use codeception with yiiibridge tests unit scoutsearchtest testsearchage phpunit framework exception session regenerate id cannot regenerate session id session is not active codeception subscriber errorhandler errorhandler var www html corporation protected vendor yiisoft yii framework web chttpsession php var www html corporation protected vendor yiisoft yii framework web auth cwebuser php var www html corporation protected vendor yiisoft yii framework web auth cwebuser php var www html corporation protected models loginform php var www html corporation protected tests cui tests unit scoutsearchtest php environment centos release php yii | 1 |
17,253 | 23,795,372,025 | IssuesEvent | 2022-09-02 18:59:14 | mariotaku/moonlight-tv | https://api.github.com/repos/mariotaku/moonlight-tv | opened | [COMPAT] black screen when streaming @4k | compatibility | <!-- Since device support range becomes wider, please submit this issue ONLY when you got problems -->
## Basic information
<!-- Please fill info into following spaces -->
- Device model: (UK6300PUE) <!-- e.g. 55SM8100PJB -->
- Firmware version: (05.45.04) <!-- Can be found in webOS Settings, e.g. 05.00.01 -->
- webOS version: (4.4.0) <!-- Can be found in Moonlight Settings - About, e.g. 4.9.0-53802 -->
- Moonlight version: (1.5.2) <!-- Can be found in Moonlight Settings - About -->
- Is audio working: (Yes) <!-- e.g. Yes/No -->
- Is video working: (720P/1080P Yes 2K/4K No) <!-- e.g. 4K60fps -->
- Is input working: (Yes) <!-- e.g. Gamepad, Remote -->
## Description
<!-- Please describe the issue you have been encountered -->
Thanks for bringing this amazing work to webOS! I recently installed this and enjoy using it so far watching video and playing games with 1080P@60FPS. I am currently trying 2k and 4k, but 2k gives me snow screen and with 4k, it's just a black screen. The sound and input work perfectly, it's only the video that is not working as expected.
I reviewed the troubleshooting guide and tried other video decoders, but none of them worked. I know it's maybe hard to reproduce this so please let me know if there's any log or something I can look for and I could share it here. Thank you so much!
| True | [COMPAT] black screen when streaming @4k - <!-- Since device support range becomes wider, please submit this issue ONLY when you got problems -->
## Basic information
<!-- Please fill info into following spaces -->
- Device model: (UK6300PUE) <!-- e.g. 55SM8100PJB -->
- Firmware version: (05.45.04) <!-- Can be found in webOS Settings, e.g. 05.00.01 -->
- webOS version: (4.4.0) <!-- Can be found in Moonlight Settings - About, e.g. 4.9.0-53802 -->
- Moonlight version: (1.5.2) <!-- Can be found in Moonlight Settings - About -->
- Is audio working: (Yes) <!-- e.g. Yes/No -->
- Is video working: (720P/1080P Yes 2K/4K No) <!-- e.g. 4K60fps -->
- Is input working: (Yes) <!-- e.g. Gamepad, Remote -->
## Description
<!-- Please describe the issue you have been encountered -->
Thanks for bringing this amazing work to webOS! I recently installed this and enjoy using it so far watching video and playing games with 1080P@60FPS. I am currently trying 2k and 4k, but 2k gives me snow screen and with 4k, it's just a black screen. The sound and input work perfectly, it's only the video that is not working as expected.
I reviewed the troubleshooting guide and tried other video decoders, but none of them worked. I know it's maybe hard to reproduce this so please let me know if there's any log or something I can look for and I could share it here. Thank you so much!
| comp | black screen when streaming basic information device model firmware version webos version moonlight version is audio working yes is video working yes no is input working yes description thanks for bringing this amazing work to webos i recently installed this and enjoy using it so far watching video and playing games with i am currently trying and but gives me snow screen and with it s just a black screen the sound and input work perfectly it s only the video that is not working as expected i reviewed the troubleshooting guide and tried other video decoders but none of them worked i know it s maybe hard to reproduce this so please let me know if there s any log or something i can look for and i could share it here thank you so much | 1 |
5,760 | 8,211,783,446 | IssuesEvent | 2018-09-04 14:39:44 | MoonchildProductions/UXP | https://api.github.com/repos/MoonchildProductions/UXP | opened | SVG images with only "use" link ignore parent element size | C: Images Web Compatibility | Demonstration link:
https://info.pmu.fr/programme/courses/R1/C3
Under "Rapports définitifs" should be logos in SVG that are rendered with tiny dimensions because all the SVG includes is a single `use` statement referring to an anchor on the same page. The parent element is sized correctly. As a result, the images are not visible.
This is the same in ESR52 but fixed in Firefox nightly. Fix window wanted. | True | SVG images with only "use" link ignore parent element size - Demonstration link:
https://info.pmu.fr/programme/courses/R1/C3
Under "Rapports définitifs" should be logos in SVG that are rendered with tiny dimensions because all the SVG includes is a single `use` statement referring to an anchor on the same page. The parent element is sized correctly. As a result, the images are not visible.
This is the same in ESR52 but fixed in Firefox nightly. Fix window wanted. | comp | svg images with only use link ignore parent element size demonstration link under rapports définitifs should be logos in svg that are rendered with tiny dimensions because all the svg includes is a single use statement referring to an anchor on the same page the parent element is sized correctly as a result the images are not visible this is the same in but fixed in firefox nightly fix window wanted | 1 |
61,651 | 7,488,748,095 | IssuesEvent | 2018-04-06 03:28:08 | status-im/ideas | https://api.github.com/repos/status-im/ideas | closed | Multi wallet support | clojure design golang | Idea code: DEV#042
Title: Multi wallet support
Status: Draft
Created: 2017-11-23
## Summary
Multiple wallet management
## Vision
Status provides a comprehensive wallet allowing to manage your main Status ethereum address.
As a user I want to leverage this wallet to manage other addresses.
We will also consider if switching address should also impact other status features (DApps browsing, bot usages, ..)
## Swarm Participants
* Lead Contributors: ?
* Contributor: ?
* Contributor: ?
* Evaluator: @jeluard
* Evaluator / Tester : @asemiankevich
## Goals & Implementation Plan
### Wallet switching address
New addresses can be added and browsed in the wallet. All existing features (browsing funds, access to historical data, send/request funds) are functioning.
### Abstraction for external wallet access
Status must be able to interract with `hardwallet` and other hardware wallet. Consider how this impact our current wallet abstractions.
## Goals & Implementation Plan
### Minimum Viable Product
A new address can be added and switched to. All wallet features are functioning.
Goal Date: _To be defined_
Description: any ethereum address can be used in wallet
### Iteration 1
Decide how switching address impacts other Status features
Goal Date: _To be defined_
Description: decision on how address switching impacts Status
### Iteration 2
Identify specific changes needed to interact with `hardwallet`
Goal Date: _To be defined_
Description: `hardwallet` can be used in Status wallet
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). | 1.0 | Multi wallet support - Idea code: DEV#042
Title: Multi wallet support
Status: Draft
Created: 2017-11-23
## Summary
Multiple wallet management
## Vision
Status provides a comprehensive wallet allowing to manage your main Status ethereum address.
As a user I want to leverage this wallet to manage other addresses.
We will also consider if switching address should also impact other status features (DApps browsing, bot usages, ..)
## Swarm Participants
* Lead Contributors: ?
* Contributor: ?
* Contributor: ?
* Evaluator: @jeluard
* Evaluator / Tester : @asemiankevich
## Goals & Implementation Plan
### Wallet switching address
New addresses can be added and browsed in the wallet. All existing features (browsing funds, access to historical data, send/request funds) are functioning.
### Abstraction for external wallet access
Status must be able to interract with `hardwallet` and other hardware wallet. Consider how this impact our current wallet abstractions.
## Goals & Implementation Plan
### Minimum Viable Product
A new address can be added and switched to. All wallet features are functioning.
Goal Date: _To be defined_
Description: any ethereum address can be used in wallet
### Iteration 1
Decide how switching address impacts other Status features
Goal Date: _To be defined_
Description: decision on how address switching impacts Status
### Iteration 2
Identify specific changes needed to interact with `hardwallet`
Goal Date: _To be defined_
Description: `hardwallet` can be used in Status wallet
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). | non_comp | multi wallet support idea code dev title multi wallet support status draft created summary multiple wallet management vision status provides a comprehensive wallet allowing to manage your main status ethereum address as a user i want to leverage this wallet to manage other addresses we will also consider if switching address should also impact other status features dapps browsing bot usages swarm participants lead contributors contributor contributor evaluator jeluard evaluator tester asemiankevich goals implementation plan wallet switching address new addresses can be added and browsed in the wallet all existing features browsing funds access to historical data send request funds are functioning abstraction for external wallet access status must be able to interract with hardwallet and other hardware wallet consider how this impact our current wallet abstractions goals implementation plan minimum viable product a new address can be added and switched to all wallet features are functioning goal date to be defined description any ethereum address can be used in wallet iteration decide how switching address impacts other status features goal date to be defined description decision on how address switching impacts status iteration identify specific changes needed to interact with hardwallet goal date to be defined description hardwallet can be used in status wallet copyright copyright and related rights waived via | 0 |
831,200 | 32,040,899,006 | IssuesEvent | 2023-09-22 19:13:59 | HydrologicEngineeringCenter/HEC-FDA | https://api.github.com/repos/HydrologicEngineeringCenter/HEC-FDA | closed | Levee Validation in Model | enhancement Model PRIORITY | I see. Ok. Please disregard any notion of levee validation, haha. I will handle it in the model.
_Originally posted by @rnugent3 in https://github.com/HydrologicEngineeringCenter/HEC-FDA/issues/963#issuecomment-1728460839_
| 1.0 | Levee Validation in Model - I see. Ok. Please disregard any notion of levee validation, haha. I will handle it in the model.
_Originally posted by @rnugent3 in https://github.com/HydrologicEngineeringCenter/HEC-FDA/issues/963#issuecomment-1728460839_
| non_comp | levee validation in model i see ok please disregard any notion of levee validation haha i will handle it in the model originally posted by in | 0 |
9,346 | 11,386,183,021 | IssuesEvent | 2020-01-29 12:45:28 | AthenaSulisMinerva/CombatExtendedFastTrack | https://api.github.com/repos/AthenaSulisMinerva/CombatExtendedFastTrack | opened | Vanilla Factions Expanded - Medieval | mod compatiblity patch | NOTE: This is for requesting patches for a *new* third-party mod that CE:FT hasn't supported yet; if you are requesting changes to patches for an *existing* mod, please submit a Bug Report instead.
## Basic Information
**Vanilla Factions Expanded - Medieval:**
**Steam Workshop page:**
```
https://steamcommunity.com/sharedfiles/filedetails/?id=1854610483&searchtext=vanilla+medieval
```
## Detailed Information
**What content from the mod needs to be patched by CE:FT?:**
(Tick all the boxes that apply)
- [x] Ammunition (e.g. Shells)
- [x] Apparel, Armor (e.g. Helmets, Vests)
- [x] Apparel, General (e.g. Clothes, Hats, Gasmasks)
- [x] Apparel, Shields (e.g. Viking Shield, Riot Shield)
```
If the mod has feature(s) not covered by the categories above, briefly describe them here:
```
The mod is supposed to spawn medieval sieges with catapults and ballistas replacing mortars if the Vanilla Furniture Expanded - Security mod is installed (https://steamcommunity.com/sharedfiles/filedetails/?id=1845154007&searchtext=vanilla+security).
**Are there existing CE versions of the mod or patches out there?:**
```
N7Huntsman has an incomplete CE patch on the workshop: https://steamcommunity.com/sharedfiles/filedetails/?id=1869326792&searchtext=vanilla+factions+ce
However, it patches the heavy plate armor of this mod inconsistently with the full plate from Vanilla Armor Expanded (i.e. not covering hands and feet).
```
| True | Vanilla Factions Expanded - Medieval - NOTE: This is for requesting patches for a *new* third-party mod that CE:FT hasn't supported yet; if you are requesting changes to patches for an *existing* mod, please submit a Bug Report instead.
## Basic Information
**Vanilla Factions Expanded - Medieval:**
**Steam Workshop page:**
```
https://steamcommunity.com/sharedfiles/filedetails/?id=1854610483&searchtext=vanilla+medieval
```
## Detailed Information
**What content from the mod needs to be patched by CE:FT?:**
(Tick all the boxes that apply)
- [x] Ammunition (e.g. Shells)
- [x] Apparel, Armor (e.g. Helmets, Vests)
- [x] Apparel, General (e.g. Clothes, Hats, Gasmasks)
- [x] Apparel, Shields (e.g. Viking Shield, Riot Shield)
```
If the mod has feature(s) not covered by the categories above, briefly describe them here:
```
The mod is supposed to spawn medieval sieges with catapults and ballistas replacing mortars if the Vanilla Furniture Expanded - Security mod is installed (https://steamcommunity.com/sharedfiles/filedetails/?id=1845154007&searchtext=vanilla+security).
**Are there existing CE versions of the mod or patches out there?:**
```
N7Huntsman has an incomplete CE patch on the workshop: https://steamcommunity.com/sharedfiles/filedetails/?id=1869326792&searchtext=vanilla+factions+ce
However, it patches the heavy plate armor of this mod inconsistently with the full plate from Vanilla Armor Expanded (i.e. not covering hands and feet).
```
| comp | vanilla factions expanded medieval note this is for requesting patches for a new third party mod that ce ft hasn t supported yet if you are requesting changes to patches for an existing mod please submit a bug report instead basic information vanilla factions expanded medieval steam workshop page detailed information what content from the mod needs to be patched by ce ft tick all the boxes that apply ammunition e g shells apparel armor e g helmets vests apparel general e g clothes hats gasmasks apparel shields e g viking shield riot shield if the mod has feature s not covered by the categories above briefly describe them here the mod is supposed to spawn medieval sieges with catapults and ballistas replacing mortars if the vanilla furniture expanded security mod is installed are there existing ce versions of the mod or patches out there has an incomplete ce patch on the workshop however it patches the heavy plate armor of this mod inconsistently with the full plate from vanilla armor expanded i e not covering hands and feet | 1 |
373,273 | 11,038,204,846 | IssuesEvent | 2019-12-08 12:26:17 | TeamHookipa/divecomputer | https://api.github.com/repos/TeamHookipa/divecomputer | closed | Rework Interface to have 6 dives in a row | Front End Low Priority | * Currently, our app only supports 8 dives. This is mainly a limitation of the front-end interface we're using, Semantic UI, which only supports <Grid>'s to only have up to 16 <Grid.Column>'s. 8 dives will take up 8 <Grid.Column>'s plus 7 <Grid.Column>'s for the Surface Interval Input Forms. Rework this such that we can have unlimited dives.
* If we aim to make unlimited dives, we don't want too many columns taking up one row because it's ugly. I think 6 dives in a row is a good compromise. | 1.0 | Rework Interface to have 6 dives in a row - * Currently, our app only supports 8 dives. This is mainly a limitation of the front-end interface we're using, Semantic UI, which only supports <Grid>'s to only have up to 16 <Grid.Column>'s. 8 dives will take up 8 <Grid.Column>'s plus 7 <Grid.Column>'s for the Surface Interval Input Forms. Rework this such that we can have unlimited dives.
* If we aim to make unlimited dives, we don't want too many columns taking up one row because it's ugly. I think 6 dives in a row is a good compromise. | non_comp | rework interface to have dives in a row currently our app only supports dives this is mainly a limitation of the front end interface we re using semantic ui which only supports s to only have up to s dives will take up s plus s for the surface interval input forms rework this such that we can have unlimited dives if we aim to make unlimited dives we don t want too many columns taking up one row because it s ugly i think dives in a row is a good compromise | 0 |
16,793 | 23,153,672,323 | IssuesEvent | 2022-07-29 10:48:34 | arcticicestudio/nord-dircolors | https://api.github.com/repos/arcticicestudio/nord-dircolors | closed | BUG?? Dircolor not working in zsh and alacritty | scope-compatibility scope-ux type-support | Hello,
I am using alacritty with the nord colour pallete and zsh but it's not showing the nord colours in the ls command(it might also not be showing these in others but I have just tested it with ls) BUT it shows them in BASH for whatever reason.....
And yes, I have followed the instructions and put test -r "~/.dir_colors" && eval $(dircolors ~/.dir_colors) in my bash and zshrc as well as put the dir colors file in my home directory. | True | BUG?? Dircolor not working in zsh and alacritty - Hello,
I am using alacritty with the nord colour pallete and zsh but it's not showing the nord colours in the ls command(it might also not be showing these in others but I have just tested it with ls) BUT it shows them in BASH for whatever reason.....
And yes, I have followed the instructions and put test -r "~/.dir_colors" && eval $(dircolors ~/.dir_colors) in my bash and zshrc as well as put the dir colors file in my home directory. | comp | bug dircolor not working in zsh and alacritty hello i am using alacritty with the nord colour pallete and zsh but it s not showing the nord colours in the ls command it might also not be showing these in others but i have just tested it with ls but it shows them in bash for whatever reason and yes i have followed the instructions and put test r dir colors eval dircolors dir colors in my bash and zshrc as well as put the dir colors file in my home directory | 1 |
14,732 | 18,094,858,880 | IssuesEvent | 2021-09-22 07:55:33 | gambitph/Stackable | https://api.github.com/repos/gambitph/Stackable | opened | v2 Container shows block errors on version 3 upgrade | bug [version] V3 v2 compatibility [block] container | <!--
Before posting, make sure that:
1. you are running the latest version of Stackable, and
2. you have searched whether your issue has already been reported
-->
**Describe the bug**
V2 Container block with styles shows block error on v3 upgrade
**To Reproduce**
Steps to reproduce the behavior:
1. Activate 2.17.5
2. Add a v2 Container block > Basic layout
3. Add the following settings:
### General panel
<img width="304" alt="Screen Shot 2021-09-22 at 3 50 11 PM" src="https://user-images.githubusercontent.com/28699204/134303999-4403478a-3e9d-40a1-ba40-4861c7ba9d2f.png">
### Container panel
<img width="304" alt="Screen Shot 2021-09-22 at 3 50 32 PM" src="https://user-images.githubusercontent.com/28699204/134304317-27fec681-d28d-451b-a99b-ea59257de336.png">
### Container Background
<img width="353" alt="Screen Shot 2021-09-22 at 3 50 47 PM" src="https://user-images.githubusercontent.com/28699204/134304382-654c5637-ac16-4169-ba53-1011478e12f4.png">
4. Add a paragraph text inside the Container
5. Text Color panel > Set a Text Color
**Expected behavior**
Should not show block error on upgrade
**Screenshots**
<img width="797" alt="Screen Shot 2021-09-22 at 3 53 04 PM" src="https://user-images.githubusercontent.com/28699204/134304589-4d4c62aa-6164-45e9-a631-c4d4bc7c6a60.png">
| True | v2 Container shows block errors on version 3 upgrade - <!--
Before posting, make sure that:
1. you are running the latest version of Stackable, and
2. you have searched whether your issue has already been reported
-->
**Describe the bug**
V2 Container block with styles shows block error on v3 upgrade
**To Reproduce**
Steps to reproduce the behavior:
1. Activate 2.17.5
2. Add a v2 Container block > Basic layout
3. Add the following settings:
### General panel
<img width="304" alt="Screen Shot 2021-09-22 at 3 50 11 PM" src="https://user-images.githubusercontent.com/28699204/134303999-4403478a-3e9d-40a1-ba40-4861c7ba9d2f.png">
### Container panel
<img width="304" alt="Screen Shot 2021-09-22 at 3 50 32 PM" src="https://user-images.githubusercontent.com/28699204/134304317-27fec681-d28d-451b-a99b-ea59257de336.png">
### Container Background
<img width="353" alt="Screen Shot 2021-09-22 at 3 50 47 PM" src="https://user-images.githubusercontent.com/28699204/134304382-654c5637-ac16-4169-ba53-1011478e12f4.png">
4. Add a paragraph text inside the Container
5. Text Color panel > Set a Text Color
**Expected behavior**
Should not show block error on upgrade
**Screenshots**
<img width="797" alt="Screen Shot 2021-09-22 at 3 53 04 PM" src="https://user-images.githubusercontent.com/28699204/134304589-4d4c62aa-6164-45e9-a631-c4d4bc7c6a60.png">
| comp | container shows block errors on version upgrade before posting make sure that you are running the latest version of stackable and you have searched whether your issue has already been reported describe the bug container block with styles shows block error on upgrade to reproduce steps to reproduce the behavior activate add a container block basic layout add the following settings general panel img width alt screen shot at pm src container panel img width alt screen shot at pm src container background img width alt screen shot at pm src add a paragraph text inside the container text color panel set a text color expected behavior should not show block error on upgrade screenshots img width alt screen shot at pm src | 1 |
46,210 | 11,799,887,088 | IssuesEvent | 2020-03-18 16:36:21 | googleapis/gcs-resumable-upload | https://api.github.com/repos/googleapis/gcs-resumable-upload | closed | Mocha Tests: should resume an interrupted upload failed | api: storage buildcop: issue priority: p1 type: bug | This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
buildID: 0f2c73c94d03168e66e7ae44fe454568b7e5863e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/e08d178f-7c17-4d65-ac77-9376f7b54903), [Sponge](http://sponge2/e08d178f-7c17-4d65-ac77-9376f7b54903)
status: failed | 1.0 | Mocha Tests: should resume an interrupted upload failed - This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
buildID: 0f2c73c94d03168e66e7ae44fe454568b7e5863e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/e08d178f-7c17-4d65-ac77-9376f7b54903), [Sponge](http://sponge2/e08d178f-7c17-4d65-ac77-9376f7b54903)
status: failed | non_comp | mocha tests should resume an interrupted upload failed this test failed to configure my behavior see if i m commenting on this issue too often add the buildcop quiet label and i will stop commenting buildid buildurl status failed | 0 |
12,789 | 15,054,004,262 | IssuesEvent | 2021-02-03 16:55:11 | AlexHuck/TribesRebirth | https://api.github.com/repos/AlexHuck/TribesRebirth | closed | HuffmanProcessor fails to read ShapeBaseData::fileName for the larmor datablock when connecting to a 1.11 server. | bug compatibility | When connecting to a 1.11 server, when the larmor (Light Armor PlayerData) datablock is downloaded, ShapeBase::ShapeBaseData::unpack method reads an incorrect string for the fileName field. Instead of the expected "larmor" it reads "ÿlarmor", all further reading from the BitStream after that point is invalid as well.
This issue is only exhibited when connecting to a 1.11 Tribes server.
It may be possible that some minor adjustment was made to the HuffmanProcessor's lookup tables, some time between this version of Tribes and 1.11.
'ÿ' is ASCII for -1 signed char (255 unsigned char). So, another possible cause may be the way older compilers handed the signedness of 'char' datatype, compared to modern MSVC. If for example Tribes was originally compiled on a toolchain that treats 'char' as signed and modern MSVC is now compiling it as unsigned, that may break some assumptions the HuffmanProcessor depends on.
| True | HuffmanProcessor fails to read ShapeBaseData::fileName for the larmor datablock when connecting to a 1.11 server. - When connecting to a 1.11 server, when the larmor (Light Armor PlayerData) datablock is downloaded, ShapeBase::ShapeBaseData::unpack method reads an incorrect string for the fileName field. Instead of the expected "larmor" it reads "ÿlarmor", all further reading from the BitStream after that point is invalid as well.
This issue is only exhibited when connecting to a 1.11 Tribes server.
It may be possible that some minor adjustment was made to the HuffmanProcessor's lookup tables, some time between this version of Tribes and 1.11.
'ÿ' is ASCII for -1 signed char (255 unsigned char). So, another possible cause may be the way older compilers handed the signedness of 'char' datatype, compared to modern MSVC. If for example Tribes was originally compiled on a toolchain that treats 'char' as signed and modern MSVC is now compiling it as unsigned, that may break some assumptions the HuffmanProcessor depends on.
| comp | huffmanprocessor fails to read shapebasedata filename for the larmor datablock when connecting to a server when connecting to a server when the larmor light armor playerdata datablock is downloaded shapebase shapebasedata unpack method reads an incorrect string for the filename field instead of the expected larmor it reads ÿlarmor all further reading from the bitstream after that point is invalid as well this issue is only exhibited when connecting to a tribes server it may be possible that some minor adjustment was made to the huffmanprocessor s lookup tables some time between this version of tribes and ÿ is ascii for signed char unsigned char so another possible cause may be the way older compilers handed the signedness of char datatype compared to modern msvc if for example tribes was originally compiled on a toolchain that treats char as signed and modern msvc is now compiling it as unsigned that may break some assumptions the huffmanprocessor depends on | 1 |
9,266 | 11,270,623,877 | IssuesEvent | 2020-01-14 11:16:05 | wiremod/wire | https://api.github.com/repos/wiremod/wire | closed | problem with e2 | External compatibility Not a bug Unable to reproduce | So when I try to even CLICK on e2 in the wire tab, it shows that wiremod is creating script errors and I can't use e2 at all. When this error pops up it says to check the console for details, and this is what the console wrote:
[Wiremod] lua/entities/gmod_wire_expression2/cl_init.lua:48: attempt to index field 'PreProcessor' (a nil value)
1. wire_expression2_validate - lua/entities/gmod_wire_expression2/cl_init.lua:48
2. Validate - lua/wire/client/text_editor/wire_expression2_editor.lua:1595
3. SetCode - lua/wire/client/text_editor/wire_expression2_editor.lua:1685
4. NewScript - lua/wire/client/text_editor/wire_expression2_editor.lua:1512
5. Setup - lua/wire/client/text_editor/wire_expression2_editor.lua:1951
6. initE2Editor - lua/wire/stools/expression2.lua:822
7. func - lua/wire/stools/expression2.lua:794
8. FillViaFunction - gamemodes/sandbox/gamemode/spawnmenu/controlpanel.lua:100
9. FillViaTable - gamemodes/sandbox/gamemode/spawnmenu/controlpanel.lua:89
10. ActivateTool - lua/includes/modules/spawnmenu.lua:287
11. DoClick - lua/wire/client/customspawnmenu.lua:577
12. InternalDoClick - lua/vgui/dtree_node.lua:62
13. DoClick - lua/vgui/dtree_node.lua:32
14. unknown - lua/vgui/dlabel.lua:234
[Wiremod] lua/entities/gmod_wire_expression2/cl_init.lua:48: attempt to index field 'PreProcessor' (a nil value)
1. wire_expression2_validate - lua/entities/gmod_wire_expression2/cl_init.lua:48
2. Expression2Upload - lua/wire/stools/expression2.lua:526
3. func - lua/wire/stools/expression2.lua:576
4. unknown - lua/includes/extensions/net.lua:32
Requesting texture value from var "$basetexture" which is not a texture value (material: pp/copy)
] [Wiremod] lua/entities/gmod_wire_expression2/cl_init.lua:48: attempt to index field 'PreProcessor' (a nil value) 1. wire_expression2_validate - lua/entities/gmod_wire_expression2/cl_init.lua:48 2. Expression2Upload - lua/wire/stools/expression2.lua:526
Unknown command: [Wiremod]
] [Wiremod] lua/entities/gmod_wire_expression2/cl_init.lua:48: attempt to index field 'PreProcessor' (a nil value) 1. wire_expression2_validate - lua/entities/gmod_wire_expression2/cl_init.lua:48 2. Expression2Upload - lua/wire/stools/expression2.lua:526
Unknown command: [Wiremod]
[Wiremod] lua/entities/gmod_wire_expression2/cl_init.lua:48: attempt to index field 'PreProcessor' (a nil value)
1. wire_expression2_validate - lua/entities/gmod_wire_expression2/cl_init.lua:48
2. Expression2Upload - lua/wire/stools/expression2.lua:526
3. func - lua/wire/stools/expression2.lua:576
4. unknown - lua/includes/extensions/net.lua:32
[Wiremod] lua/entities/gmod_wire_expression2/cl_init.lua:48: attempt to index field 'PreProcessor' (a nil value)
1. wire_expression2_validate - lua/entities/gmod_wire_expression2/cl_init.lua:48
2. Validate - lua/wire/client/text_editor/wire_expression2_editor.lua:1595
3. SetV - lua/wire/client/text_editor/wire_expression2_editor.lua:1634
4. Open - lua/wire/client/text_editor/wire_expression2_editor.lua:1706
5. func - lua/wire/stools/expression2.lua:628
6. unknown - lua/includes/extensions/net.lua:32
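All of the stack traces above fail at the same place: `cl_init.lua:48` indexes the `PreProcessor` field of something that is nil. As a rough analog of the failure and of a defensive guard (Python stand-in code, not actual Wiremod source; the names merely mirror the log):

```python
class Expression2Editor:
    def __init__(self):
        # Mirrors the situation in the log: the PreProcessor field was
        # never initialized before wire_expression2_validate used it.
        self.PreProcessor = None

def validate(editor):
    # Guarded access degrades gracefully instead of erroring on every click --
    # the unguarded version is the Python equivalent of Lua's
    # "attempt to index field 'PreProcessor' (a nil value)".
    if editor.PreProcessor is None:
        return "PreProcessor not loaded"
    return editor.PreProcessor.process()

print(validate(Expression2Editor()))  # -> PreProcessor not loaded
```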
| True | comp | 1 |
44,127 | 17,836,819,905 | IssuesEvent | 2021-09-03 03:07:31 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | When using VNet integration, "WEBSITE_DNS_SERVER" should be disabled. | app-service/svc triaged cxp doc-enhancement Pri2 | Hi Team,
Regarding "WEBSITE_DNS_SERVER", I had thought that we could override the default value by setting this item.
But when our app uses VNet integration or is in an App Service Environment, the DNS server configuration from the VNet is applied, and we should not be able to override it even if this item is set.
This should be clearly stated in the description.
Best Regards,
Takeshi Katayama
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f081389e-21ca-4d1a-80e1-d96c13ec1ffb
* Version Independent ID: 9d1246c8-bfbf-78af-e87f-4d0dd287da90
* Content: [Environment variables and app settings reference - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/reference-app-settings?tabs=kudu%2Cdotnet)
* Content Source: [articles/app-service/reference-app-settings.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/reference-app-settings.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | 1.0 | non_comp | 0 |
736,993 | 25,495,865,543 | IssuesEvent | 2022-11-27 17:11:21 | codymikol/git-down | https://api.github.com/repos/codymikol/git-down | opened | When selecting a non git directory, nothing happens | MEDIUM PRIORITY | We should bring the user to a screen that will allow them to start a new git repository.
Maybe we can allow them to choose files / directories to ignore as well | 1.0 | non_comp | 0 |
119,610 | 12,035,499,027 | IssuesEvent | 2020-04-13 17:59:57 | liferay/clay | https://api.github.com/repos/liferay/clay | closed | ClayLocalizedInput Storybook example isn't showing flag icon | 3.x comp: documentation type: bug | <!--
Before making a bug, have you used the issue search functionality?
-->
- [X] I have searched the [issues](https://github.com/liferay/clay/issues) of this repository and believe that this is not a duplicate.
## Is there an example you can provide via codesandbox.com?
https://storybook-clayui.netlify.com/?path=/story/components-claylocalizedinput--default
## What are the steps to reproduce?
Open the ClayLocalizedInput example
## What is the expected result?
We can be able to see the flag
## Environment
| Tech | Version |
| ----- | ------- |
| Clay | Development |
| React | ---- |
Hey @kresimir-coko @bryceosterhaus , `spritemap` property value is missing on https://github.com/liferay/clay/blob/master/packages/clay-localized-input/stories/index.tsx#L48 😅 | 1.0 | non_comp | 0 |
6,690 | 8,967,585,032 | IssuesEvent | 2019-01-29 04:08:26 | aguilarjose11/ParachuteTimeSheetTimeMngmntSystem | https://api.github.com/repos/aguilarjose11/ParachuteTimeSheetTimeMngmntSystem | opened | Incompatibility of parachuteCore.tstools in linux. | Incompatibility Linux | Incompatibility in linux build due to importation of ctypes (windows-specific library).
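As a hedged sketch of the problem described above (hypothetical code, not the actual parachuteCore.tstools source): `ctypes` itself imports fine on Linux, but the `ctypes.windll` loader exists only on Windows, so any Windows-specific call has to sit behind a platform gate with a Linux fallback.

```python
import sys

def getcmdxandy():
    """Return a cursor position; gates the Windows-only path by platform."""
    if sys.platform == "win32":
        import ctypes  # importable everywhere, but windll is Windows-only
        # ... a ctypes.windll-based console query would go here ...
        raise NotImplementedError("Windows path elided in this sketch")
    # Non-Windows fallback -- e.g. the proposed C++ helper library,
    # stubbed out here as a fixed origin position.
    return (0, 0)
```

On Linux the fallback runs; on Windows the real `windll`-based query would take over.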
Can try making a personal library in C++ to give support to the getcmdxandy() method of parachuteCore.tstools. | True | comp | 1 |
13,118 | 15,396,848,548 | IssuesEvent | 2021-03-03 21:14:45 | AmProsius/gothic-1-community-patch | https://api.github.com/repos/AmProsius/gothic-1-community-patch | closed | Potion of Velocity has wrong ore value | compatibility easy fix provided session fix validated | **Describe the bug**
Potion of Velocity has the wrong ore value.
**Expected behavior**
The Potion of Velocity now has the correct ore value.
*Do **not** write the specific ore values here, because the fix takes the ore values possibly modified by a mod.*
**Additional context**
This issue was split from #41. The same bug in the Potion of Haste is addressed in [TBA].
The issues are split to be able to give a proper fix status if the fix succeeds for one but fails for the other of the items. | True | comp | 1 |
13,048 | 15,353,432,856 | IssuesEvent | 2021-03-01 08:34:10 | docker/compose-cli | https://api.github.com/repos/docker/compose-cli | opened | Compose : support compose run --no-deps | compatibility compose | compose run option:
`--no-deps Don't start linked services.` | True | comp | 1 |
82,366 | 7,839,238,527 | IssuesEvent | 2018-06-18 13:07:44 | SunwellTracker/issues | https://api.github.com/repos/SunwellTracker/issues | closed | Searing totem bug | Works locally | Requires testing bug | Description: Searing totem attacks whatever is in range.
How it works: When you are fighting a mob and you put down a searing totem, it attacks the mob. However, if that mob dies and there is another in range, it starts attacking it too...
How it should work: Should attack mobs I'm attacking.
Source (you should point out proofs of your report, please give us some source): Does it need a proof? It is logical kinda like gargoyle which you have presented in a preview video. | 1.0 | non_comp | 0 |
787,623 | 27,724,896,923 | IssuesEvent | 2023-03-15 00:53:19 | DoF-6413/chargedUp | https://api.github.com/repos/DoF-6413/chargedUp | closed | Position Presets with Arm | enhancement Medium Priority | Sets Positions for the Arm to be at so drivers don't have to manually do it
- [ ] Place High Grid
- [ ] Place Mid Grid
- [ ] Place Low Grid
- [ ] Pickup HP Station
- [ ] Pickup Floor
Create PID Go To Functions:
- [x] Rotator
- [ ] Telescoper | 1.0 | non_comp | 0 |
20,768 | 10,549,879,782 | IssuesEvent | 2019-10-03 09:44:22 | nidhisi/fiware-idm | https://api.github.com/repos/nidhisi/fiware-idm | opened | CVE-2018-19839 (Medium) detected in CSS::Sass-v3.4.11 | security vulnerability | ## CVE-2018-19839 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>CSS::Sass-v3.4.11</b></p></summary>
<p>
<p>Library home page: <a href=https://metacpan.org/pod/CSS::Sass>https://metacpan.org/pod/CSS::Sass</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhisi/fiware-idm/commit/bc3d677d5bd07a7cd2d8ae62673d63e6c49d17f8">bc3d677d5bd07a7cd2d8ae62673d63e6c49d17f8</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (62)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /fiware-idm/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8/unchecked.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/output.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/b64/cencode.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/source_map.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_values.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/lexer.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8.h
- /fiware-idm/node_modules/node-sass/src/libsass/test/test_node.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8_string.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/plugins.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/node.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass/base.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/json.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/environment.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/position.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/extend.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/contrib/plugin.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8/core.h
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass/functions.h
- /fiware-idm/node_modules/node-sass/src/libsass/test/test_superselector.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_functions.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8_string.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/node.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/subset_map.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/base64vlq.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/listize.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/c99func.c
- /fiware-idm/node_modules/node-sass/src/libsass/src/position.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass/values.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_functions.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/test/test_subset_map.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/paths.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass/context.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/test/test_unification.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_util.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/script/test-leaks.pl
- /fiware-idm/node_modules/node-sass/src/libsass/src/source_map.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/lexer.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/json.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/units.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/to_c.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/units.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/b64/encode.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/file.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/environment.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8/checked.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/plugins.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/listize.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/debug.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass2scss.h
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, the function handle_error in sass_context.cpp allows attackers to cause a denial-of-service resulting from a heap-based buffer over-read via a crafted sass file.
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839>CVE-2018-19839</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839</a></p>
<p>Fix Resolution: 3.5.5</p>
</p>
</details>
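Since the suggested fix is an upgrade to 3.5.5, a plain numeric comparison of dotted versions (a generic sketch, independent of any scanner) shows whether a detected LibSass build predates the patch:

```python
def parse_version(v):
    """Turn a dotted version like '3.4.11' into a tuple for numeric compare."""
    return tuple(int(part) for part in v.split("."))

FIXED = parse_version("3.5.5")  # first LibSass release with the fix

def is_vulnerable(detected):
    return parse_version(detected) < FIXED

print(is_vulnerable("3.4.11"))  # True  -> affected by CVE-2018-19839
print(is_vulnerable("3.5.5"))   # False -> patched
```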
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-19839 (Medium) detected in CSS::Sass-v3.4.11 - ## CVE-2018-19839 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>CSS::Sassv3.4.11</b></p></summary>
<p>
<p>Library home page: <a href=https://metacpan.org/pod/CSS::Sass>https://metacpan.org/pod/CSS::Sass</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhisi/fiware-idm/commit/bc3d677d5bd07a7cd2d8ae62673d63e6c49d17f8">bc3d677d5bd07a7cd2d8ae62673d63e6c49d17f8</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (62)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /fiware-idm/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8/unchecked.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/output.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/b64/cencode.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/source_map.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_values.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/lexer.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8.h
- /fiware-idm/node_modules/node-sass/src/libsass/test/test_node.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8_string.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/plugins.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/node.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass/base.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/json.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/environment.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/position.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/extend.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/contrib/plugin.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8/core.h
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass/functions.h
- /fiware-idm/node_modules/node-sass/src/libsass/test/test_superselector.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_functions.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8_string.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/node.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/subset_map.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/base64vlq.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/listize.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/c99func.c
- /fiware-idm/node_modules/node-sass/src/libsass/src/position.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass/values.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_functions.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/test/test_subset_map.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/paths.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass/context.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/test/test_unification.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/sass_util.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/script/test-leaks.pl
- /fiware-idm/node_modules/node-sass/src/libsass/src/source_map.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/lexer.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/json.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/units.cpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/to_c.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/units.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/b64/encode.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/file.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/environment.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/utf8/checked.h
- /fiware-idm/node_modules/node-sass/src/libsass/src/plugins.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/listize.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/src/debug.hpp
- /fiware-idm/node_modules/node-sass/src/libsass/include/sass2scss.h
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, the function handle_error in sass_context.cpp allows attackers to cause a denial-of-service resulting from a heap-based buffer over-read via a crafted sass file.
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839>CVE-2018-19839</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
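As a cross-check, the 6.5 above can be reproduced from these metrics with the CVSS v3.0 base-score formula. A minimal sketch (metric weights taken from the CVSS v3.0 specification; scope is Unchanged for this vector):

```python
import math

# CVSS v3.0 metric weights for AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H
# (values from the CVSS v3.0 specification).
AV_N = 0.85   # Attack Vector: Network
AC_L = 0.77   # Attack Complexity: Low
PR_N = 0.85   # Privileges Required: None
UI_R = 0.62   # User Interaction: Required
C_N, I_N, A_H = 0.0, 0.0, 0.56  # Confidentiality/Integrity: None, Availability: High

def base_score_unchanged_scope():
    """Base score for the vector above, with scope Unchanged."""
    iss = 1 - (1 - C_N) * (1 - I_N) * (1 - A_H)
    impact = 6.42 * iss                              # scope-unchanged impact
    exploitability = 8.22 * AV_N * AC_L * PR_N * UI_R
    if impact <= 0:
        return 0.0
    raw = min(impact + exploitability, 10.0)
    return math.ceil(raw * 10) / 10                  # CVSS "round up" to one decimal
```

Evaluating this yields 6.5, matching the reported score.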
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839</a></p>
<p>Fix Resolution: 3.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_comp | cve medium detected in css sass cve medium severity vulnerability vulnerable library css library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries fiware idm node modules node sass src libsass src color maps cpp fiware idm node modules node sass src libsass src sass util hpp fiware idm node modules node sass src libsass src unchecked h fiware idm node modules node sass src libsass src output hpp fiware idm node modules node sass src libsass src cencode h fiware idm node modules node sass src libsass src source map cpp fiware idm node modules node sass src libsass src sass values hpp fiware idm node modules node sass src libsass src lexer cpp fiware idm node modules node sass src libsass src h fiware idm node modules node sass src libsass test test node cpp fiware idm node modules node sass src libsass src string cpp fiware idm node modules node sass src libsass src plugins cpp fiware idm node modules node sass src libsass src node hpp fiware idm node modules node sass src libsass include sass base h fiware idm node modules node sass src libsass src json hpp fiware idm node modules node sass src libsass src environment cpp fiware idm node modules node sass src libsass src position hpp fiware idm node modules node sass src libsass src extend hpp fiware idm node modules node sass src libsass src subset map hpp fiware idm node modules node sass src libsass src remove placeholders cpp fiware idm node modules node sass src libsass src sass context hpp fiware idm node modules node sass src libsass src sass hpp fiware idm node modules node sass src libsass src ast fwd decl cpp fiware idm node modules node sass src libsass contrib plugin cpp fiware idm node modules node 
sass src libsass src core h fiware idm node modules node sass src libsass include sass functions h fiware idm node modules node sass src libsass test test superselector cpp fiware idm node modules node sass src libsass src sass functions cpp fiware idm node modules node sass src libsass src string hpp fiware idm node modules node sass src libsass src node cpp fiware idm node modules node sass src libsass src subset map cpp fiware idm node modules node sass src libsass src cpp fiware idm node modules node sass src libsass src listize cpp fiware idm node modules node sass src libsass src c fiware idm node modules node sass src libsass src position cpp fiware idm node modules node sass src libsass src remove placeholders hpp fiware idm node modules node sass src libsass include sass values h fiware idm node modules node sass src libsass src sass functions hpp fiware idm node modules node sass src libsass test test subset map cpp fiware idm node modules node sass src libsass src cpp fiware idm node modules node sass src libsass src memory sharedptr cpp fiware idm node modules node sass src libsass src paths hpp fiware idm node modules node sass src libsass include sass context h fiware idm node modules node sass src libsass src color maps hpp fiware idm node modules node sass src libsass test test unification cpp fiware idm node modules node sass src libsass src sass util cpp fiware idm node modules node sass src libsass script test leaks pl fiware idm node modules node sass src libsass src source map hpp fiware idm node modules node sass src libsass src lexer hpp fiware idm node modules node sass src libsass src memory sharedptr hpp fiware idm node modules node sass src libsass src json cpp fiware idm node modules node sass src libsass src units cpp fiware idm node modules node sass src libsass src to c hpp fiware idm node modules node sass src libsass src units hpp fiware idm node modules node sass src libsass src encode h fiware idm node modules node sass src 
libsass src file hpp fiware idm node modules node sass src libsass src environment hpp fiware idm node modules node sass src libsass src checked h fiware idm node modules node sass src libsass src plugins hpp fiware idm node modules node sass src libsass src listize hpp fiware idm node modules node sass src libsass src debug hpp fiware idm node modules node sass src libsass include h vulnerability details in libsass prior to the function handle error in sass context cpp allows attackers to cause a denial of service resulting from a heap based buffer over read via a crafted sass file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href fix resolution step up your open source security game with whitesource | 0 |
5,890 | 8,356,703,238 | IssuesEvent | 2018-10-02 19:17:09 | NightKosh/Gravestone-mod-Graves | https://api.github.com/repos/NightKosh/Gravestone-mod-Graves | closed | 1.7.10 Compatibility Req w Helpful Villagers | compatibility outdated | Requesting mod compatibility with Helpful Villagers. I'm not a modder but i'm guessing it has something to do with HV wanting to convert all villagers to its mod and it conflicting with the undertaker.
[2018-04-26-7.log.gz](https://github.com/NightKosh/Gravestone-mod-Graves/files/1951977/2018-04-26-7.log.gz)
| True | 1.7.10 Compatibility Req w Helpful Villagers - Requesting mod compatibility with Helpful Villagers. I'm not a modder but i'm guessing it has something to do with HV wanting to convert all villagers to its mod and it conflicting with the undertaker.
[2018-04-26-7.log.gz](https://github.com/NightKosh/Gravestone-mod-Graves/files/1951977/2018-04-26-7.log.gz)
| comp | compatibility req w helpful villagers requesting mod compatibility with helpful villagers i m not a modder but i m guessing it has something to do with hv wanting to convert all villagers to its mod and it conflicting with the undertaker | 1 |
157,131 | 24,628,556,125 | IssuesEvent | 2022-10-16 20:32:56 | dotnet/efcore | https://api.github.com/repos/dotnet/efcore | closed | DbContext Thread Safe | closed-by-design customer-reported | Hi,
Will v7 resolve this issue with .NET Core web sites and multiple concurrent calls running into "A second operation was started on this context instance before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913."?
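For background, this error is by design: a DbContext instance is not thread-safe, and the usual guidance is one instance per request/unit of work (ASP.NET Core's `AddDbContext` registers it with a scoped lifetime). A minimal language-agnostic sketch of the failure mode — `FakeContext` below is purely illustrative, not EF Core's API:

```python
class FakeContext:
    """Illustrative stand-in for a non-thread-safe context; NOT EF Core's API."""
    def __init__(self):
        self._in_flight = False

    def begin_operation(self):
        # A real DbContext similarly detects overlapping operations.
        if self._in_flight:
            raise RuntimeError(
                "A second operation was started on this context instance "
                "before a previous operation completed.")
        self._in_flight = True

    def end_operation(self):
        self._in_flight = False

# Sharing one instance across overlapping requests reproduces the error:
shared = FakeContext()
shared.begin_operation()          # request A starts a query
try:
    shared.begin_operation()      # request B reuses the same instance mid-flight
except RuntimeError as err:
    print(err)

# One instance per request/unit of work avoids the conflict entirely:
a, b = FakeContext(), FakeContext()
a.begin_operation()
b.begin_operation()               # no conflict: separate instances
```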
Best regards | 1.0 | DbContext Thread Safe - Hi,
Will v7 resolve this issue with .NET Core web sites and multiple concurrent calls running into "A second operation was started on this context instance before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913."?
Best regards | non_comp | dbcontext thread safe hi is v will resolve this issue with net core web sites and multiple calls relying to a second operation was started on this context instance before a previous operation completed this is usually caused by different threads concurrently using the same instance of dbcontext for more information on how to avoid threading issues with dbcontext see best regards | 0 |
6,818 | 3,910,657,375 | IssuesEvent | 2016-04-20 00:05:39 | haskell/cabal | https://api.github.com/repos/haskell/cabal | closed | Package with custom Setup.hs "Encountered missing dependencies" | bug nix-local-build urgent | New report (edited by @ezyang)
Cabal 1.23 and later #2731 allow you to skip specifying dependencies which are not part of a buildable component. cabal-install was updated to take advantage of this fact.
However, when a package has a `Custom` setup script, it is possible for the Setup script to be built against an old version of Cabal, which doesn't know to ignore non-buildable dependencies. In this case, cabal-install will pass an insufficient set of dependencies, resulting in an error like this:
```
setup: At least the following dependencies are missing:
process -any, temporary >=1.1
```
(where these are dependencies of non-buildable components.)
A workaround is to explicitly request that all components be built. For example, if there is some flag which must be selected to make a component buildable, you should pass `--constraint="package-name +flagname"`
----
Original bug report:
Trying to build pandoc-citeproc (which has a custom Setup.hs with a couple of hooks) using the latest-packaged version from git in the HVR repository (Version: 1.23+git20160204.0.7aab356~wily) fails to find dependencies already installed in a sandbox (whether the dependencies are installed manually or via the dependency solver).
$ uname -a
Linux <hostname> 4.2.0-30-generic #35-Ubuntu SMP Fri Feb 19 13:52:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.10.3
$ cabal --version
cabal-install version 1.23.0.0
compiled using version 1.23.1.0 of the Cabal library
$ cabal sandbox init
$ cabal install pandoc-citeproc
Configuring pandoc-citeproc-0.9...
setup: At least the following dependencies are missing:
process -any, temporary >=1.1
$ cabal sandbox hc-pkg list process
process-1.2.3.0
$ cabal sandbox hc-pkg list temporary
temporary-1.2.0.4
This was previously filed as jgm/pandoc-citeproc#216 | 1.0 | Package with custom Setup.hs "Encountered missing dependencies" - New report (edited by @ezyang)
Cabal 1.23 and later #2731 allow you to skip specifying dependencies which are not part of a buildable component. cabal-install was updated to take advantage of this fact.
However, when a package has a `Custom` setup script, it is possible for the Setup script to be built against an old version of Cabal, which doesn't know to ignore non-buildable dependencies. In this case, cabal-install will pass an insufficient set of dependencies, resulting in an error like this:
```
setup: At least the following dependencies are missing:
process -any, temporary >=1.1
```
(where these are dependencies of non-buildable components.)
A workaround is to explicitly request that all components be built. For example, if there is some flag which must be selected to make a component buildable, you should pass `--constraint="package-name +flagname"`
----
Original bug report:
Trying to build pandoc-citeproc (which has a custom Setup.hs with a couple of hooks) using the latest-packaged version from git in the HVR repository (Version: 1.23+git20160204.0.7aab356~wily) fails to find dependencies already installed in a sandbox (whether the dependencies are installed manually or via the dependency solver).
$ uname -a
Linux <hostname> 4.2.0-30-generic #35-Ubuntu SMP Fri Feb 19 13:52:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.10.3
$ cabal --version
cabal-install version 1.23.0.0
compiled using version 1.23.1.0 of the Cabal library
$ cabal sandbox init
$ cabal install pandoc-citeproc
Configuring pandoc-citeproc-0.9...
setup: At least the following dependencies are missing:
process -any, temporary >=1.1
$ cabal sandbox hc-pkg list process
process-1.2.3.0
$ cabal sandbox hc-pkg list temporary
temporary-1.2.0.4
This was previously filed as jgm/pandoc-citeproc#216 | non_comp | package with custom setup hs encountered missing dependencies new report edited by ezyang cabal and later allow you to skip specifying dependencies which are not part of a buildable component cabal install was updated to take advantage of this fact however when a package has a custom setup script it is possible for the setup script to be built against an old version of cabal which is doesn t know to ignore non buildable dependencies in this case cabal install will pass an insufficient set of dependencies resulting in an error like this setup at least the following dependencies are missing process any temporary where these are dependencies of non buildable components a workaround is to explicitly request that all components be built for example if there is some flag which must be selected to make a component buildable you should pass constraint package name flagname original bug report trying to build pandoc citeproc which has a custom setup hs with a couple of hooks using the latest packaged version from git in the hvr repository version wily fails to find dependencies already installed in a sandbox whether the dependencies are installed manually or via the dependency solver uname a linux generic ubuntu smp fri feb utc gnu linux ghc version the glorious glasgow haskell compilation system version cabal version cabal install version compiled using version of the cabal library cabal sandbox init cabal install pandoc citeproc configuring pandoc citeproc setup at least the following dependencies are missing process any temporary cabal sandbox hc pkg list process process cabal sandbox hc pkg list temporary temporary this was previously filed as jgm pandoc citeproc | 0 |
121,346 | 4,807,788,350 | IssuesEvent | 2016-11-02 22:35:49 | ObjectiveSubject/cgu | https://api.github.com/repos/ObjectiveSubject/cgu | opened | News: categories instead of tags should appear | From Client High Priority | On the news & events page news tags are appearing instead of categories at the top of the news section

| 1.0 | News: categories instead of tags should appear - On the news & events page news tags are appearing instead of categories at the top of the news section

| non_comp | news categories instead of tags should appear on the news events page news tags are appearing instead of categories at the top of the news section | 0 |
99,950 | 30,588,755,901 | IssuesEvent | 2023-07-21 15:16:17 | rpopuc/gha-build-homolog | https://api.github.com/repos/rpopuc/gha-build-homolog | closed | Build Homolog | build-homolog | ## Description
Realiza deploy automatizado da aplicação.
## Environments
environment_1
## Branches
essa_e_para_dar_erro_um_erro_bom | 1.0 | Build Homolog - ## Description
Performs automated deployment of the application.
## Environments
environment_1
## Branches
essa_e_para_dar_erro_um_erro_bom | non_comp | build homolog description realiza deploy automatizado da aplicação environments environment branches essa e para dar erro um erro bom | 0 |
553,178 | 16,359,543,315 | IssuesEvent | 2021-05-14 07:12:21 | kolint/kolint | https://api.github.com/repos/kolint/kolint | closed | View without viewmodel | priority: low status: confirmed type: bug | **Current behavior**
If no view model is linked, KO Lint will throw this:
```typescript
Diagnostic {
location: undefined,
code: 'KO0002',
message: 'Missing ViewModel reference in file ',
name: 'no-viewmodel-reference',
severity: 3
}
```
**Expected behavior**
To be able not to have linked a view model or suppress the error in some way.
**Steps to reproduce**
```typescript
import * as lint from '...'
;(async () => {
try {
const program = lint.createProgram()
await program.typescriptCompiler.compile('', program.parse('<img data-bind="test: undefined">'))
} catch (err) {
if (err instanceof Error)
throw err
// Will end up here
console.log(err)
}
})()
```
| 1.0 | View without viewmodel - **Current behavior**
If no view model is linked, KO Lint will throw this:
```typescript
Diagnostic {
location: undefined,
code: 'KO0002',
message: 'Missing ViewModel reference in file ',
name: 'no-viewmodel-reference',
severity: 3
}
```
**Expected behavior**
To be able not to have linked a view model or suppress the error in some way.
**Steps to reproduce**
```typescript
import * as lint from '...'
;(async () => {
try {
const program = lint.createProgram()
await program.typescriptCompiler.compile('', program.parse('<img data-bind="test: undefined">'))
} catch (err) {
if (err instanceof Error)
throw err
// Will end up here
console.log(err)
}
})()
```
| non_comp | view without viewmodel current behavior if no view model is linked ko lint will throw this typescript diagnostic location undefined code message missing viewmodel reference in file name no viewmodel reference severity expected behavior to be able not to have linked a view model or suppress the error in some way steps to reproduce typescript import as lint from async try const program lint createprogram await program typescriptcompiler compile program parse catch err if err instanceof error throw err will end up here console log err | 0 |
43,806 | 9,488,180,555 | IssuesEvent | 2019-04-22 18:52:18 | istio/istio | https://api.github.com/repos/istio/istio | opened | Switch existing mixer e2e tests to use new Istio installer | area/policies and telemetry code mauve kind/testing gap | We need to update the various makefiles in istio/istio to use the new installer to generate backwards-compatible (with old installer / paradigm) install artifacts and have that drive the existing e2e tests for policies and telemetry. This will enable transition away from the existing installer while we work to transition tests over to the new framework, etc.
/cc @costinm @mandarjog @kyessenov @ostromart | 1.0 | Switch existing mixer e2e tests to use new Istio installer - We need to update the various makefiles in istio/istio to use the new installer to generate backwards-compatible (with old installer / paradigm) install artifacts and have that drive the existing e2e tests for policies and telemetry. This will enable transition away from the existing installer while we work to transition tests over to the new framework, etc.
/cc @costinm @mandarjog @kyessenov @ostromart | non_comp | switch existing mixer tests to use new istio installer we need to update the various makefiles in istio istio to use the new installer to generate backwards compatible with old installer paradigm install artifacts and have that drive the existing tests for policies and telemetry this will enable transition away from the existing installer while we work to transition tests over to the new framework etc cc costinm mandarjog kyessenov ostromart | 0 |
28,572 | 5,515,878,754 | IssuesEvent | 2017-03-17 18:33:57 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | Docker 1.12 requires go version 1.7.5 while origin documentation points to 1.6 | area/documentation priority/P3 | [provide a description of the issue]
The documentation in contributing.md says we can use Go 1.6 for development, whereas for Docker 1.12 the requirements clearly ask to upgrade to 1.7.5. Also, there are some standard library packages, like context in Go 1.7, which are not available by default in 1.6.
> Installing Go. Currently, OpenShift supports building using Go 1.6.x. Do NOT use $HOME/go for Go installation, save that for the Go workspace below.
So, the documentation in origin contributing.md has to be updated to use golang 1.7. | 1.0 | Docker 1.12 requires go version 1.7.5 while origin documentation points to 1.6 - [provide a description of the issue]
The documentation in contributing.md says we can use Go 1.6 for development, whereas for Docker 1.12 the requirements clearly ask to upgrade to 1.7.5. Also, there are some standard library packages, like context in Go 1.7, which are not available by default in 1.6.
> Installing Go. Currently, OpenShift supports building using Go 1.6.x. Do NOT use $HOME/go for Go installation, save that for the Go workspace below.
So, the documentation in origin contributing.md has to be updated to use golang 1.7. | non_comp | docker requires go version while origin documentation points to the documentation in contributing md says we can use for development where as for docker the requirements clearly ask to upgrade also there are some default libraries like context in go which are not by default in installing go currently openshift supports building using go x do not use home go for go installation save that for the go workspace below so the documentation in origin contributing md has to be updated to use golang | 0 |
337,911 | 24,562,027,991 | IssuesEvent | 2022-10-12 21:16:43 | kotools/types | https://api.github.com/repos/kotools/types | closed | `kotools.types` package documentation | documentation common | ## Description
Document the `kotools.types` package and remove the documentation of unused packages.
## Checklist
- [ ] Implement.
- [ ] Test.
- [ ] Refactor.
- [ ] Update `Work in progress` section in changelog.
| 1.0 | `kotools.types` package documentation - ## Description
Document the `kotools.types` package and remove the documentation of unused packages.
## Checklist
- [ ] Implement.
- [ ] Test.
- [ ] Refactor.
- [ ] Update `Work in progress` section in changelog.
| non_comp | kotools types package documentation description document the kotools types package and remove the documentation of unused packages checklist implement test refactor update work in progress section in changelog | 0 |
139,714 | 20,955,131,760 | IssuesEvent | 2022-03-27 02:05:41 | WatershedXiaolan/Xiaolan-s-Gitbook | https://api.github.com/repos/WatershedXiaolan/Xiaolan-s-Gitbook | opened | How does FrontEnd select a backend host to send data to | system design | we can use client side service discovery (as the decision is made by the client)
i.e., the FE service gets host info from the metadata service and uses a hash function (or any other method) to determine which backend host should receive the data
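A minimal sketch of that client-side selection (the host list and partition key below are hypothetical; real systems often prefer consistent hashing over plain mod-N):

```python
import hashlib

def pick_backend(partition_key, hosts):
    """Deterministically map a key to one backend host (simple mod-N hashing)."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return hosts[int(digest, 16) % len(hosts)]

# Hypothetical host list, e.g. as returned by a metadata/service-registry lookup.
hosts = ["host-a:9092", "host-b:9092", "host-c:9092"]
target = pick_backend("partition-42", hosts)  # same key -> same host every time
```

Note that plain mod-N remaps most keys whenever the host list changes; consistent hashing is the usual refinement when hosts come and go.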
| 1.0 | How does FrontEnd select a backend host to send data to - we can use client side service discovery (as the decision is made by the client)
i.e., the FE service gets host info from the metadata service and uses a hash function (or any other method) to determine which backend host should receive the data
| non_comp | how does frontend select a backend host to send data to we can use client side service discovery as the decision is made by the client i e fe service get info from metadata and use hash function or any method to determine | 0 |
631,424 | 20,151,621,917 | IssuesEvent | 2022-02-09 12:58:34 | ita-social-projects/horondi_client_fe | https://api.github.com/repos/ita-social-projects/horondi_client_fe | closed | [Products Page. Filter] Inconsistent items are displayed when filter items by 'MODEL' | bug priority: high severity: major Functional | **Environment:** Windows 10 Pro 64bit, Firefox 89.0 64bit
**Reproducible:** Always
**Pre-conditions:**
Go to https://horondi-front-staging.azurewebsites.net/
Click on the appropriate category from the drop-down list at the Navigation bar (e. g. menu->backpacks->rolltop)
**Description:**
**Steps to reproduce:**
Choose 'MODEL' (e. g. banana bags)
**Actual result:**
Inconsistent items are displayed
**Expected result:**
The banana bags are shown
[TC_STEP#4](https://jira.softserve.academy/browse/LVHRB-214) | 1.0 | [Products Page. Filter] Inconsistent items are displayed when filter items by 'MODEL' - **Environment:** Windows 10 Pro 64bit, Firefox 89.0 64bit
**Reproducible:** Always
**Pre-conditions:**
Go to https://horondi-front-staging.azurewebsites.net/
Click on the appropriate category from the drop-down list at the Navigation bar (e. g. menu->backpacks->rolltop)
**Description:**
**Steps to reproduce:**
Choose 'MODEL' (e. g. banana bags)
**Actual result:**
Inconsistent items are displayed
**Expected result:**
The banana bags are shown
[TC_STEP#4](https://jira.softserve.academy/browse/LVHRB-214) | non_comp | inconsistent items are displayed when filter items by model environment windows pro firefox reproducible always pre conditions go to click on the appropriate category from the drop down list at the navigation bar e g menu backpacks rolltop description steps to reproduce choose model e g banana bags actual result inconsistent items are displayed expected result the banana bags are showed | 0 |
14,935 | 26,112,674,812 | IssuesEvent | 2022-12-27 22:58:18 | vectordotdev/vector | https://api.github.com/repos/vectordotdev/vector | closed | VRL AST for program reflection and visualization | needs: approval needs: requirements type: feature domain: vrl vrl: parser | This is something I wanted to put on the map so that we can factor it in as we continue to mature VRL. Especially with #6139 on the horizon.
It is very likely that we'll need a way to visualize a VRL program and build it with a GUI while still letting power users write VRL programs directly (think editor settings where you can switch between a GUI and the "source" JSON). In order to achieve this, I assume we'll need the ability to build an AST from a VRL script.
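Purely as an illustration of the idea — none of the names below are VRL's actual grammar or API — a GUI round-trip needs AST nodes that can be both built programmatically and re-serialized back to source:

```python
from dataclasses import dataclass, field

# Hypothetical node shapes -- illustrative only, not VRL's real AST.
@dataclass
class Literal:
    value: object
    def to_source(self):
        return repr(self.value)

@dataclass
class Assignment:
    target: str        # e.g. an event field like "status"
    expr: Literal
    def to_source(self):
        return ".{} = {}".format(self.target, self.expr.to_source())

@dataclass
class Program:
    statements: list = field(default_factory=list)
    def to_source(self):
        return "\n".join(s.to_source() for s in self.statements)

# A GUI builds the tree node-by-node; power users still get source back out:
prog = Program([Assignment("status", Literal("ok"))])
print(prog.to_source())   # .status = 'ok'
```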
| 1.0 | VRL AST for program reflection and visualization - This is something I wanted to put on the map so that we can factor it in as we continue to mature VRL. Especially with #6139 on the horizon.
It is very likely that we'll need a way to visualize a VRL program and build it with a GUI while still letting power users write VRL programs directly (think editor settings where you can switch between a GUI and the "source" JSON). In order to achieve this, I assume we'll need the ability to build an AST from a VRL script.
| non_comp | vrl ast for program reflection and visualization this is something i wanted to put on the map so that we can factor it in as we continue to mature vrl especially with on the horizon it is very likely that we ll need a way to visualize a vrl program and build it with a gui while still letting power users write vrl programs directly think editor settings where you can switch between a gui and the source json in order to achieve this i assume we ll need the ability to build an ast from a vrl script | 0 |
551,322 | 16,166,269,580 | IssuesEvent | 2021-05-01 15:04:34 | sopra-fs21-group-01/server | https://api.github.com/repos/sopra-fs21-group-01/server | closed | A registered User should be able to log in and out | high priority task | Criteria
* Log in asks for username and password
* if one of them is invalid, display accordingly
* log out leads to the log in screen
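A minimal sketch of the first two criteria, with hypothetical names (`USERS`, `login`) standing in for the project's real user service:

```python
# Hypothetical in-memory user store standing in for the real user service.
USERS = {"alice": "s3cret"}

def login(username, password):
    """Return which field failed, so the UI can display it accordingly."""
    if username not in USERS:
        return "invalid username"
    if USERS[username] != password:
        return "invalid password"
    return "ok"
```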
time: 2h
part of: #41 | 1.0 | A registered User should be able to log in and out - Criteria
* Log in asks for username and password
* if one of them is invalid, display accordingly
* log out leads to the log in screen
time: 2h
part of: #41 | non_comp | a registered user should be able to log in and out criteria log in asks for username and password if one of them is invalid display accordingly log out leads to the log in screen time part of | 0 |
99,194 | 12,404,724,603 | IssuesEvent | 2020-05-21 16:01:01 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | Problem when using Material Widgets like Cards with ListWheelScrollview | f: material design f: scrolling framework | Hi i have a Problem when using any Material widgets which usually supports the elevation parameter
like the `Card` or even the `Material` widget. The problem is that when using these kinds of widgets with the `ListWheelScrollView`, it prevents it from scrolling and lays out the cards (as an example) on top of each other. Here is my code:
```
import 'package:flutter/material.dart';
import 'package:superellipse_shape/superellipse_shape.dart';
class HalfCircleExample extends StatelessWidget {
Widget _buildItem() {
return Center(
child: Container(
width: 50,
height: 50,
child: Card(
shape: SuperellipseShape(
borderRadius: BorderRadius.circular(40),
),
),
),
);
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('WheelExample'),
),
body: SafeArea(
child: Center(
child: ListWheelScrollView(
itemExtent: 60,
children: List.generate(
10,
(int i) => _buildItem(),
),
),
),
),
);
}
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: HalfCircleExample(),
);
}
}
void main() => runApp(MyApp());
```
and here is an image of the output :

and here is my flutter doctor :
```
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, v1.12.13+hotfix.5, on Microsoft Windows [Version 10.0.17763.914], locale en-US)
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.0-rc1)
[√] Android Studio (version 3.3)
[√] VS Code (version 1.41.1)
[√] Connected device (1 available)
• No issues found!
``` | 1.0 | Problem when using Material Widgets like Cards with ListWheelScrollview - Hi i have a Problem when using any Material widgets which usually supports the elevation parameter
like the `Card` or even the `Material` widget. The problem is that when using these kinds of widgets with the `ListWheelScrollView`, it prevents it from scrolling and lays out the cards (as an example) on top of each other. Here is my code:
```
import 'package:flutter/material.dart';
import 'package:superellipse_shape/superellipse_shape.dart';
class HalfCircleExample extends StatelessWidget {
Widget _buildItem() {
return Center(
child: Container(
width: 50,
height: 50,
child: Card(
shape: SuperellipseShape(
borderRadius: BorderRadius.circular(40),
),
),
),
);
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('WheelExample'),
),
body: SafeArea(
child: Center(
child: ListWheelScrollView(
itemExtent: 60,
children: List.generate(
10,
(int i) => _buildItem(),
),
),
),
),
);
}
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: HalfCircleExample(),
);
}
}
void main() => runApp(MyApp());
```
and here is an image of the output :

and here is my flutter doctor :
```
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, v1.12.13+hotfix.5, on Microsoft Windows [Version 10.0.17763.914], locale en-US)
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.0-rc1)
[√] Android Studio (version 3.3)
[√] VS Code (version 1.41.1)
[√] Connected device (1 available)
• No issues found!
``` | non_comp | problem when using material widgets like cards with listwheelscrollview hi i have a problem when using any material widgets which usually supports the elevation parameter like the card or even the material widget and the problem is that when using these kind of widgets with the listwhellscrollview it prevents it from scrolling and lays out the cards as an example on top of each other here is my code import package flutter material dart import package superellipse shape superellipse shape dart class halfcircleexample extends statelesswidget widget builditem return center child container width height child card shape superellipseshape borderradius borderradius circular override widget build buildcontext context return scaffold appbar appbar title text wheelexample body safearea child center child listwheelscrollview itemextent children list generate int i builditem class myapp extends statelesswidget override widget build buildcontext context return materialapp title flutter demo theme themedata primaryswatch colors blue home halfcircleexample void main runapp myapp and here is an image of the output and here is my flutter doctor doctor summary to see all details run flutter doctor v flutter channel stable hotfix on microsoft windows locale en us android toolchain develop for android devices android sdk version android studio version vs code version connected device available • no issues found | 0 |
159,081 | 20,036,634,211 | IssuesEvent | 2022-02-02 12:37:36 | kapseliboi/dapp | https://api.github.com/repos/kapseliboi/dapp | opened | CVE-2021-23386 (Medium) detected in dns-packet-1.3.1.tgz | security vulnerability | ## CVE-2021-23386 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dns-packet-1.3.1.tgz</b></p></summary>
<p>An abstract-encoding compliant module for encoding / decoding DNS packets</p>
<p>Library home page: <a href="https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz">https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/dns-packet/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.2.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- bonjour-3.5.0.tgz
- multicast-dns-6.2.3.tgz
- :x: **dns-packet-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/dapp/commit/79de7acd382466c6348d970d41ce91b47fc3366d">79de7acd382466c6348d970d41ce91b47fc3366d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386>CVE-2021-23386</a></p>
</p>
</details>
<p></p>
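The bug class described above — partially filled `allocUnsafe` buffers leaking process memory — can be sketched in a few lines of Node.js. This is an illustrative reduction, not dns-packet's actual code:

```javascript
// Sketch of the bug class behind CVE-2021-23386 (illustrative only — this is
// not dns-packet's actual code). Buffer.allocUnsafe() hands back a slice of a
// shared memory pool without zeroing it, so any bytes an encoder does not
// explicitly write can contain stale process memory that then goes on the wire.
const unsafe = Buffer.allocUnsafe(16); // contents are whatever was in the pool
const safe = Buffer.alloc(16);         // explicitly zero-filled

// A safe encoder either allocates zero-filled memory up front, or fills the
// unsafe buffer before writing partial data into it:
const patched = Buffer.allocUnsafe(16).fill(0);

console.log(safe.every((b) => b === 0));    // true
console.log(patched.every((b) => b === 0)); // true
// `unsafe` may or may not be zeroed — that nondeterminism is the leak.
```

The upstream fix in 1.3.2 takes this general approach (ensuring buffers are zeroed before partial writes); see the release referenced in the Suggested Fix section for the exact change.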
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
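The 6.5 above can be recomputed from the listed metrics. The sketch below is an illustrative CVSS v3.0 base-score calculator with constants taken from the v3.0 specification — it is not WhiteSource's scoring code, and it only handles the Scope: Unchanged case used here:

```javascript
// Reproducing the 6.5 base score above from the listed metrics. Constants are
// from the CVSS v3.0 specification; this is an illustrative calculator, not
// WhiteSource's scoring code, and it only handles Scope: Unchanged.
function cvss30Base({ av, ac, pr, ui, c, i, a }) {
  const iscBase = 1 - (1 - c) * (1 - i) * (1 - a);
  const impact = 6.42 * iscBase; // Scope: Unchanged variant
  const exploitability = 8.22 * av * ac * pr * ui;
  if (impact <= 0) return 0;
  // CVSS "Roundup": keep one decimal place, always rounding toward +infinity.
  return Math.ceil(Math.min(impact + exploitability, 10) * 10) / 10;
}

const score = cvss30Base({
  av: 0.85, // Attack Vector: Network
  ac: 0.77, // Attack Complexity: Low
  pr: 0.62, // Privileges Required: Low (with Scope Unchanged)
  ui: 0.85, // User Interaction: None
  c: 0.56,  // Confidentiality Impact: High
  i: 0,     // Integrity Impact: None
  a: 0,     // Availability Impact: None
});
console.log(score); // 6.5
```

With these inputs, Impact = 6.42 × 0.56 ≈ 3.60 and Exploitability ≈ 2.84, and rounding 6.43 up to one decimal yields the 6.5 shown above.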
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution (dns-packet): 1.3.2</p>
<p>Direct dependency fix Resolution (react-scripts): 3.4.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23386 (Medium) detected in dns-packet-1.3.1.tgz - ## CVE-2021-23386 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dns-packet-1.3.1.tgz</b></p></summary>
<p>An abstract-encoding compliant module for encoding / decoding DNS packets</p>
<p>Library home page: <a href="https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz">https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/dns-packet/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.2.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- bonjour-3.5.0.tgz
- multicast-dns-6.2.3.tgz
- :x: **dns-packet-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/dapp/commit/79de7acd382466c6348d970d41ce91b47fc3366d">79de7acd382466c6348d970d41ce91b47fc3366d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386>CVE-2021-23386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution (dns-packet): 1.3.2</p>
<p>Direct dependency fix Resolution (react-scripts): 3.4.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_comp | cve medium detected in dns packet tgz cve medium severity vulnerability vulnerable library dns packet tgz an abstract encoding compliant module for encoding decoding dns packets library home page a href path to dependency file package json path to vulnerable library node modules dns packet package json dependency hierarchy react scripts tgz root library webpack dev server tgz bonjour tgz multicast dns tgz x dns packet tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package dns packet before it creates buffers with allocunsafe and does not always fill them before forming network packets this can expose internal application memory over unencrypted network when querying crafted invalid domain names publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dns packet direct dependency fix resolution react scripts step up your open source security game with whitesource | 0 |
287,453 | 24,829,826,548 | IssuesEvent | 2022-10-26 01:41:27 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Frequent test failures of `TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop` | priority/backlog kind/failing-test | This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[QEMU_macOS](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=QEMU_macOS&test=TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop)|100.00|
|[Docker_Windows](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Windows&test=TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop)|58.33|
|[Docker_Linux_containerd](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux_containerd&test=TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop)|42.86| | 1.0 | Frequent test failures of `TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop` - This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[QEMU_macOS](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=QEMU_macOS&test=TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop)|100.00|
|[Docker_Windows](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Windows&test=TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop)|58.33|
|[Docker_Linux_containerd](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux_containerd&test=TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop)|42.86| | non_comp | frequent test failures of teststartstop group default different port serial userappexistsafterstop this test has high flake rates for the following environments environment flake rate | 0 |
61,180 | 25,393,803,063 | IssuesEvent | 2022-11-22 06:53:36 | Azure/azure-rest-api-specs | https://api.github.com/repos/Azure/azure-rest-api-specs | closed | Microsoft.EventHub/namespaces missing read-only "status" property definition | Event Hubs Service Attention | A GET on a namespace resource returns a property named "status", but it is not defined in the swagger spec here:
https://github.com/Azure/azure-rest-api-specs/blob/1b0ed8edd58bb7c9ade9a27430759527bd4eec8e/specification/eventhub/resource-manager/Microsoft.EventHub/preview/2018-01-01-preview/namespaces-preview.json#L612-L666
Originally raised under https://github.com/Azure/bicep/issues/2297 | 1.0 | Microsoft.EventHub/namespaces missing read-only "status" property definition - A GET on a namespace resource returns a property named "status", but it is not defined in the swagger spec here:
https://github.com/Azure/azure-rest-api-specs/blob/1b0ed8edd58bb7c9ade9a27430759527bd4eec8e/specification/eventhub/resource-manager/Microsoft.EventHub/preview/2018-01-01-preview/namespaces-preview.json#L612-L666
Originally raised under https://github.com/Azure/bicep/issues/2297 | non_comp | microsoft eventhub namespaces missing read only status property definition a get on a namespace resource returns a property named status but it is not defined in the swagger spec here originally raised under | 0 |
376,373 | 11,146,044,234 | IssuesEvent | 2019-12-23 08:36:59 | teamforus/forus | https://api.github.com/repos/teamforus/forus | closed | Invite current providers: Sent an e-mail to existing provider that a new fund is about to start | Priority: Must have enhancement | ## Main assignee: @
## Context/goal:
On forus-backend we have mail called fund applicable. This mail is sent when the start date is reached. Providers selected to get this mail were providers that had a specific fund_category. As we removed fund categories this email was disconnected.
## Who are applicable providers
We don't know which providers are applicable for a newly created fund but we do know this for funds that are renewed. This issue is about sending an fund applicable mail with a button to easily apply for the fund to providers that were approved for a different fund from a sponsor.
## Userflow
### Prerequisites
- Provider applied for fund_id 1 in 2018
- Sponsor created a new fund, fund_id 2
## Userflow Sponsor
- User logins in sponsor dashboard
- User sees newly created fund that is configured and is now in state 'paused'
- User goes to /organizations/{sponsor_org_id}/providers
- User clicks on fund_id 1
- User clicks invite providers button
- Modal opens where user select fund_id 2; sees preview of e-mail that is sent to providers.
- presses sent and email is sent to all organization owners
## Userflow Provider
- Provider gets a invitation mail for a new fund from a sponsor he already applied for once.
- Provider clicks a magic blue button 'directly apply'
- Provider goes to provider dashboard of fund_id 2 its implementation.
- Provider accepts invitation by accepting invitation token. User will get a option to login to dashboard of implementation of fund invitation
- IN database we have in fund_providers state 'invited'; As user pressed button in mail he is now accepted and gets a SUCCESS! you are ready to serve customers for this fund! popup
this is mvp.
## Tasks
- [x] text for landing page invitation ( cta: update products, organisation details etc )
- [x] text for e-mail
- [x] design for landing page invitation
- [x] create markup for landing page
- [x] implement backend functionality invitation token to accept invitations
Nice to have:
- fund_provider_invitation table for the sponsor to select multiple funds he want to invite the provider list for.
- being able to only select certain provider to invite.
- See status of each invitation
- Resend invitation
- Undo invitation request
- provider can have a setting to auto accept each invitation he gets. ( we need an activity monitor for this to see when provider was last active on our system) | 1.0 | Invite current providers: Sent an e-mail to existing provider that a new fund is about to start - ## Main assignee: @
## Context/goal:
On forus-backend we have mail called fund applicable. This mail is sent when the start date is reached. Providers selected to get this mail were providers that had a specific fund_category. As we removed fund categories this email was disconnected.
## Who are applicable providers
We don't know which providers are applicable for a newly created fund but we do know this for funds that are renewed. This issue is about sending an fund applicable mail with a button to easily apply for the fund to providers that were approved for a different fund from a sponsor.
## Userflow
### Prerequisites
- Provider applied for fund_id 1 in 2018
- Sponsor created a new fund, fund_id 2
## Userflow Sponsor
- User logins in sponsor dashboard
- User sees newly created fund that is configured and is now in state 'paused'
- User goes to /organizations/{sponsor_org_id}/providers
- User clicks on fund_id 1
- User clicks invite providers button
- Modal opens where user select fund_id 2; sees preview of e-mail that is sent to providers.
- presses send and the email is sent to all organization owners
## Userflow Provider
- Provider gets an invitation mail for a new fund from a sponsor he already applied to once.
- Provider clicks a magic blue button 'directly apply'
- Provider goes to provider dashboard of fund_id 2 its implementation.
- Provider accepts invitation by accepting invitation token. User will get a option to login to dashboard of implementation of fund invitation
- IN database we have in fund_providers state 'invited'; As user pressed button in mail he is now accepted and gets a SUCCESS! you are ready to serve customers for this fund! popup
this is mvp.
## Tasks
- [x] text for landing page invitation ( cta: update products, organisation details etc )
- [x] text for e-mail
- [x] design for landing page invitation
- [x] create markup for landing page
- [x] implement backend functionality invitation token to accept invitations
Nice to have:
- fund_provider_invitation table for the sponsor to select multiple funds he want to invite the provider list for.
- being able to only select certain provider to invite.
- See status of each invitation
- Resend invitation
- Undo invitation request
- provider can have a setting to auto accept each invitation he gets. ( we need an activity monitor for this to see when provider was last active on our system) | non_comp | invite current providers sent an e mail to existing provider that a new fund is about to start main asssignee context goal on forus backend we have mail called fund applicable this mail is sent when the start date is reached providers selected to get this mail were providers that had a specific fund category as we removed fund categories this email was disconnected who are applicable providers we don t know which providers are applicable for a newly created fund but we do know this for funds that are renewed this issue is about sending an fund applicable mail with a button to easily apply for the fund to providers that were approved for a different fund from a sponsor userflow prerequites provider applied for fund id in sponsor created a new fund fund id userflow sponsor user logins in sponsor dashboard user sees newly created fund that is configured and is now in state paused user goes to organizations sponsor org id providers user clicks on fund id user clicks invite providers button modal opens where user select fund id sees preview of e mail that is sent to providers presses sent and email is sent to all organization owners userflow provider provider gets a invitation mail for a new fund from a sponsor he already applied for once provider clicks a magic blue button directly apply provider goes to provider dashboard of fund id its implementation provider accepts invitation by accepting invitation token user will get a option to login to dashboard of implementation of fund invitation in database we have in fund providers state invited as user pressed button in mail he is now accepted and gets a success you are ready to serve customers for this fund popup this is mvp tasks text for landing page invitation cta update products organisation details etc text for e mail design for landing page 
invitation create markup for landing page implement backend functionality invitation token to accept invitations nice to have fund provider invitation table for the sponsor to select multiple funds he want to invite the provider list for being able to only select certain provider to invite see status of each invitation resend invitation undo invitation request provider can have a setting to auto accept each invitation he gets we need an activity monitor for this to see when provider was last active on our system | 0 |
18,039 | 24,906,113,541 | IssuesEvent | 2022-10-29 09:08:16 | fumitoh/spyder-modelx | https://api.github.com/repos/fumitoh/spyder-modelx | closed | spyder-modelx does not work with PySide | bug compatibility | ## Description of your problem
Spyder with PySide backend in stead of PyQt5 crashes when attempting to show the propoerties of the objcet selected in the tree in MxExplorer.
## Versions and main components
* spyder-modelx Version: 0.12.0
* Spyder Version: 5.2.2
* Python Version: 3.10
* Operating system: Windows
| True | spyder-modelx does not work with PySide - ## Description of your problem
Spyder with PySide backend in stead of PyQt5 crashes when attempting to show the propoerties of the objcet selected in the tree in MxExplorer.
## Versions and main components
* spyder-modelx Version: 0.12.0
* Spyder Version: 5.2.2
* Python Version: 3.10
* Operating system: Windows
| comp | spyder modelx does not work with pyside description of your problem spyder with pyside backend in stead of crashes when attempting to show the propoerties of the objcet selected in the tree in mxexplorer versions and main components spyder modelx version spyder version python version operating system windows | 1 |
64,804 | 3,218,749,313 | IssuesEvent | 2015-10-08 04:28:22 | TypeStrong/atom-typescript | https://api.github.com/repos/TypeStrong/atom-typescript | opened | tsconfig exclude cleanup | priority:high | Based on analysis done by Blake here : https://github.com/TypeStrong/tsconfig/pull/7 For exclusions, one should use `/**` without any leading `/**/*` (like we do now).
closes https://github.com/TypeStrong/atom-typescript/issues/634
closes https://github.com/TypeStrong/atom-typescript/issues/332
closes https://github.com/TypeStrong/atom-typescript/issues/568
Todo :
* [ ] check default glob
* [ ] document in FAQ
* [ ] fix glob generated for `exclude` | 1.0 | tsconfig exclude cleanup - Based on analysis done by Blake here : https://github.com/TypeStrong/tsconfig/pull/7 For exclusions, one should use `/**` without any leading `/**/*` (like we do now).
closes https://github.com/TypeStrong/atom-typescript/issues/634
closes https://github.com/TypeStrong/atom-typescript/issues/332
closes https://github.com/TypeStrong/atom-typescript/issues/568
Todo :
* [ ] check default glob
* [ ] document in FAQ
* [ ] fix glob generated for `exclude` | non_comp | tsconfig exclude cleanup based on analysis done by blake here for exclusions one should use without any leading like we do now closes closes closes todo check default glob document in faq fix glob generated for exclude | 0 |
18,057 | 24,923,850,778 | IssuesEvent | 2022-10-31 04:42:02 | zer0Kerbal/SimpleConstruction | https://api.github.com/repos/zer0Kerbal/SimpleConstruction | closed | [Bug 🐞]: SCON+KERB bug | bug 🐛 issue: compatibility/patch hacktoberfest contributions-welcome | ### Brief description of your issue
Science lab and ISRU modifiers are being overridden by Kerbalism. Labs lose the ability to manufacture and store SCON components. ISRUs seem to maintain SCON material storage but are unable to manufacture.
[9-2-22 Logs.zip](https://github.com/zer0Kerbal/SimpleConstruction/files/9481730/9-2-22.Logs.zip)
### Steps to reproduce
Have SCON and Kerbalism installed.
### Expected behavior
Ability to store/manufacture SCON components.
### Actual behavior
Unable to store/manufacture SCON components.
### Environment
```shell
Mod Version: SimpleConstruction-4.0.99.30-prerelease
KSP Version: 1.12.3.3173
SCON downloaded through Curse Forge most other mods through CKAN.
KSP.log and ModuleManager.ConfigCache in zipped folder.
```
### How did you download and install this?
CurseForge (download and manual installation) | True | [Bug 🐞]: SCON+KERB bug - ### Brief description of your issue
Science lab and ISRU modifiers are being overridden by Kerbalism. Labs lose the ability to manufacture and store SCON components. ISRUs seem to maintain SCON material storage but are unable to manufacture.
[9-2-22 Logs.zip](https://github.com/zer0Kerbal/SimpleConstruction/files/9481730/9-2-22.Logs.zip)
### Steps to reproduce
Have SCON and Kerbalism installed.
### Expected behavior
Ability to store/manufacture SCON components.
### Actual behavior
Unable to store/manufacture SCON components.
### Environment
```shell
Mod Version: SimpleConstruction-4.0.99.30-prerelease
KSP Version: 1.12.3.3173
SCON downloaded through Curse Forge most other mods through CKAN.
KSP.log and ModuleManager.ConfigCache in zipped folder.
```
### How did you download and install this?
CurseForge (download and manual installation) | comp | scon kerb bug brief description of your issue science labs and isrus modifiers are being overridden by kerbalism labs loose the ability to manufacture and store scon components isrus seem to maintain scon material storage but are unable to manufacture steps to reproduce have scon and kerbalism installed expected behavior ability to store manufacture scon components actual behavior unable to store manufacture scon components environment shell mod version simpleconstruction prerelease ksp version scon downloaded through curse forge most other mods through ckan ksp log and modulemanager configcache in zipped folder how did you download and install this curseforge download and manual installation | 1 |
2,108 | 4,834,703,086 | IssuesEvent | 2016-11-08 15:02:46 | ahebrank/ACF-Link-Picker-Field | https://api.github.com/repos/ahebrank/ACF-Link-Picker-Field | closed | Multiple LP fields - only first one works | ACF 4 compatibility bug | Hello
Just noticed this behaviour. Not sure if this is happening since update to ACF or ACF:LP.
Have multiple link picker fields but when the edit/remove/insert links are used on any bar the first one the edit page just refreshes.
Console shows javascript error but not sure if related...
**/wp-content/plugins/acf-link-picker-field/js/input.js
input.js:118 Uncaught TypeError: Cannot read property 'replace' of undefined(…)**
>>>
if (!post_id || post_id == 0) {
get_postid(url, $postid_input.attr('id').replace('-postid', ''));
}
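A defensive guard along these lines would avoid the crash. `fieldKeyFromInput` and the stub objects below are hypothetical names for illustration only, not the plugin's actual fix:

```javascript
// Hypothetical guard for the input.js fragment quoted above; the function and
// variable names are illustrative, not the plugin's actual fix. The TypeError
// fires because $postid_input.attr('id') returns undefined for the second and
// later link-picker fields, and .replace() is then called on undefined.
function fieldKeyFromInput($postid_input) {
  var id = $postid_input && typeof $postid_input.attr === 'function'
    ? $postid_input.attr('id')
    : undefined;
  // Only derive the field key when the id attribute actually exists.
  return typeof id === 'string' ? id.replace('-postid', '') : null;
}

// Minimal stand-ins for jQuery objects, for illustration only:
var present = { attr: function (name) { return name === 'id' ? 'field_123-postid' : undefined; } };
var missing = { attr: function () { return undefined; } };

console.log(fieldKeyFromInput(present)); // "field_123"
console.log(fieldKeyFromInput(missing)); // null
```

The point of the guard is simply that the field-key derivation is skipped (returning `null`) instead of throwing when the `-postid` input has no `id` attribute.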
Also there is a warning that displays briefly on the console before the page refresh
**Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the user's experience**
Advanced Custom Fields: Version 4.4.11
Advanced Custom Fields: Link Picker Version 1.2.7
I disabled all plugins bar ACF and makes no difference.
Regards,
Gordon | True | Multiple LP fields - only first one works - Hello
Just noticed this behaviour. Not sure if this is happening since update to ACF or ACF:LP.
Have multiple link picker fields but when the edit/remove/insert links are used on any bar the first one the edit page just refreshes.
Console shows javascript error but not sure if related...
**/wp-content/plugins/acf-link-picker-field/js/input.js
input.js:118 Uncaught TypeError: Cannot read property 'replace' of undefined(…)**
>>>
if (!post_id || post_id == 0) {
get_postid(url, $postid_input.attr('id').replace('-postid', ''));
}
Also there is a warning that displays briefly on the console before the page refresh
**Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the user's experience**
Advanced Custom Fields: Version 4.4.11
Advanced Custom Fields: Link Picker Version 1.2.7
I disabled all plugins bar ACF and makes no difference.
Regards,
Gordon | comp | multiple lp fields only first one works hello just noticed this behaviour not sure if this is happening since update to acf or acf lp have multiple link picker fields but when the edit remove insert links are used on any bar the first one the edit page just refreshes console shows javascript error but not sure if related wp content plugins acf link picker field js input js input js uncaught typeerror cannot read property replace of undefined … if post id post id get postid url postid input attr id replace postid also there s is a warning that displays briefly on console before page refresh synchronous xmlhttprequest on the main thread is deprecated because of its detrimental effects to the user s experience advanced custom fields version advanced custom fields link picker version i disabled all plugins bar acf and makes no difference regards gordon | 1 |
165,664 | 26,207,543,158 | IssuesEvent | 2023-01-04 00:57:30 | Expensify/App | https://api.github.com/repos/Expensify/App | closed | [HOLD for BZ checklist] [$2000] mWeb - Manage Members - Blue outline at checkbox @Puneet-here | Reviewing External Engineering Daily Design Awaiting Payment Bug | If you haven’t already, check out our [contributing guidelines](https://github.com/Expensify/ReactNativeChat/blob/main/contributingGuides/CONTRIBUTING.md) for onboarding and email contributors@expensify.com to request to join our Slack channel!
___
## Action Performed:
1. Go to staging.new.expensify.com
2. Go to settings > workspace
3. Manage members page
4. Select a user, then tap on the checkbox of that user
## Expected Result:
The checkbox shouldn't get a blue outline
## Actual Result:
The checkbox gets a blue outline
## Workaround:
Unknown
## Platform:
<!---
Remove any platforms that aren't affected by this issue
--->
Where is this issue occurring?
- Mobile Web
**Version Number:** 1.2.6.0
**Reproducible in staging?:** Yes
**Reproducible in production?:** Yes
**Email or phone of affected tester (no customers):** any
**Logs:** https://stackoverflow.com/c/expensify/questions/4856
**Notes/Photos/Videos:** Any additional supporting documentation
https://user-images.githubusercontent.com/93399543/192280340-d0b1de53-d145-4604-90b4-d4106ec30990.mov
**Expensify/Expensify Issue URL:**
**Issue reported by:** @Puneet-here
**Slack conversation:** https://expensify.slack.com/archives/C01GTK53T8Q/p1662398489632479
[View all open jobs on GitHub](https://github.com/Expensify/App/issues?q=is%3Aopen+is%3Aissue+label%3A%22Help+Wanted%22)
| 1.0 | [HOLD for BZ checklist] [$2000] mWeb - Manage Members - Blue outline at checkbox @Puneet-here - If you haven’t already, check out our [contributing guidelines](https://github.com/Expensify/ReactNativeChat/blob/main/contributingGuides/CONTRIBUTING.md) for onboarding and email contributors@expensify.com to request to join our Slack channel!
___
## Action Performed:
1. Go to staging.new.expensify.com
2. Go to settings > workspace
3. Manage members page
4. Select a user, then tap on the checkbox of that user
## Expected Result:
The checkbox shouldn't get a blue outline
## Actual Result:
The checkbox gets a blue outline
## Workaround:
Unknown
## Platform:
<!---
Remove any platforms that aren't affected by this issue
--->
Where is this issue occurring?
- Mobile Web
**Version Number:** 1.2.6.0
**Reproducible in staging?:** Yes
**Reproducible in production?:** Yes
**Email or phone of affected tester (no customers):** any
**Logs:** https://stackoverflow.com/c/expensify/questions/4856
**Notes/Photos/Videos:** Any additional supporting documentation
https://user-images.githubusercontent.com/93399543/192280340-d0b1de53-d145-4604-90b4-d4106ec30990.mov
**Expensify/Expensify Issue URL:**
**Issue reported by:** @Puneet-here
**Slack conversation:** https://expensify.slack.com/archives/C01GTK53T8Q/p1662398489632479
[View all open jobs on GitHub](https://github.com/Expensify/App/issues?q=is%3Aopen+is%3Aissue+label%3A%22Help+Wanted%22)
| non_comp | mweb manage members blue outline at checkbox puneet here if you haven’t already check out our for onboarding and email contributors expensify com to request to join our slack channel action performed go to staging new expensify com go to settings workspace manage members page select a user now tap on the checkbox of that user expected result the checkbox shouldn t get blue outline actual result the checkbox gets blue outline workaround unknown platform remove any platforms that aren t affected by this issue where is this issue occurring mobile web version number reproducible in staging yes reproducible in production yes email or phone of affected tester no customers any logs notes photos videos any additional supporting documentation expensify expensify issue url issue reported by puneet here slack conversation | 0 |
19,044 | 26,468,734,184 | IssuesEvent | 2023-01-17 04:16:33 | Automattic/woocommerce-subscriptions-core | https://api.github.com/repos/Automattic/woocommerce-subscriptions-core | closed | [HPOS] subscription admin list table has missing status filters and incorrect counts | type: bug compatibility: HPOS | ## Describe the bug
<!-- A clear and concise description of what the bug is. Please be as descriptive as possible, and include screenshots to illustrate. -->
On HPOS environments, some status filters are missing and don't list the correct subscription counts.
<img width="477" alt="Screen Shot 2023-01-12 at 3 00 12 pm" src="https://user-images.githubusercontent.com/8490476/211981352-dd491837-51a2-4524-a732-74259c94398f.png">
eg Subscription specific statuses like active, pending-cancelled don't have status filters and subscriptions with an active or pending cancelled status, don't contribute to the "All" count.
## To Reproduce
<!-- Describe the steps to reproduce the behavior. -->
1. Enable HPOS
2. Purchase a number of subscriptions.
3. Set the subscription statuses so you have a variety of statuses (active, on-hold, pending, pending cancelled, cancelled etc).
4. Go to WooCommerce > Subscriptions list table.
5. Note that the "All" count is incorrect.
6. Note that there aren't filters for all subscription statuses. eg Active.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Like on CPT, all subscription statuses should be shown and the counts should be correct.
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
<img width="467" alt="Screen Shot 2023-01-12 at 3 09 40 pm" src="https://user-images.githubusercontent.com/8490476/211982415-ce74b230-e02d-4042-9a7b-27600873c4ea.png">
- All (2) is incorrect. There are 4 subscriptions.
- There's no filter for Active or Pending cancelled.
| True | [HPOS] subscription admin list table has missing status filters and incorrect counts - ## Describe the bug
<!-- A clear and concise description of what the bug is. Please be as descriptive as possible, and include screenshots to illustrate. -->
On HPOS environments, some status filters are missing and don't list the correct subscription counts.
<img width="477" alt="Screen Shot 2023-01-12 at 3 00 12 pm" src="https://user-images.githubusercontent.com/8490476/211981352-dd491837-51a2-4524-a732-74259c94398f.png">
eg Subscription specific statuses like active, pending-cancelled don't have status filters and subscriptions with an active or pending cancelled status, don't contribute to the "All" count.
## To Reproduce
<!-- Describe the steps to reproduce the behavior. -->
1. Enable HPOS
2. Purchase a number of subscriptions.
3. Set the subscription statuses so you have a variety of statuses (active, on-hold, pending, pending cancelled, cancelled etc).
4. Go to WooCommerce > Subscriptions list table.
5. Note that the "All" count is incorrect.
6. Note that there aren't filters for all subscription statuses. eg Active.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Like on CPT, all subscription statuses should be shown and the counts should be correct.
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
<img width="467" alt="Screen Shot 2023-01-12 at 3 09 40 pm" src="https://user-images.githubusercontent.com/8490476/211982415-ce74b230-e02d-4042-9a7b-27600873c4ea.png">
- All (2) is incorrect. There are 4 subscriptions.
- There's no filter for Active or Pending cancelled.
| comp | subscription admin list table has missing status filters and incorrect counts describe the bug on hpos environments the some status filters are missing and don t list the correct subscription counts img width alt screen shot at pm src eg subscription specific statuses like active pending cancelled don t have status filters and subscriptions with an active or pending cancelled status don t contribute to the all count to reproduce enable hpos purchase a number of subscriptions set the subscription statuses you have a variety of statuses active on hold pending pending cancelled cancelled etc go to woocommerce subscrptions list table note that the all count is incorrect note that there aren t filters for all subscription statuses eg active expected behavior like on cpt all subscription statuses should be shown and the counts should be correct actual behavior img width alt screen shot at pm src all is incorrect there are subscriptions there s no filter for active or pending cancelled | 1 |
72,023 | 18,961,829,697 | IssuesEvent | 2021-11-19 06:32:24 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Encountered ./gen_rules.sh: not found error when compile pytorch on linux. | needs reproduction module: build module: cuda triaged module: nccl module: arm | I encountered an issue: ./gen_rules.sh not found. The log is:
<details>
<summary>log</summary>
```
[11/5078] Performing build step for 'nccl_external'
FAILED: nccl_external-prefix/src/nccl_external-stamp/nccl_external-build nccl/lib/libnccl_static.a
cd /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/third_party/nccl/nccl && env CCACHE_DISABLE=1 SCCACHE_DISABLE=1 make CXX=/usr/bin/c++ CUDA_HOME=/usr/local/cuda NVCC=/usr/local/cuda/bin/nvcc NVCC_GENCODE=-gencode=arch=compute_62,code=sm_62 BUILDDIR=/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl VERBOSE=0 -j && /usr/bin/cmake -E touch /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl_external-prefix/src/nccl_external-stamp/nccl_external-build
make -C src build BUILDDIR=/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl
make[1]: Entering directory '/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/third_party/nccl/nccl/src'
Grabbing include/nccl_net.h > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/include/nccl_net.h
Generating nccl.h.in > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/include/nccl.h
Generating nccl.pc.in > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/lib/pkgconfig/nccl.pc
Compiling init.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/init.o
Compiling enqueue.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/enqueue.o
Compiling bootstrap.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/bootstrap.o
Compiling transport.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport.o
Compiling group.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/group.o
Compiling debug.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/debug.o
Compiling proxy.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/proxy.o
Compiling misc/argcheck.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/misc/argcheck.o
Compiling misc/ibvwrap.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/misc/ibvwrap.o
Compiling transport/shm.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/shm.o
Compiling misc/nvmlwrap.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/misc/nvmlwrap.o
Compiling collectives/sendrecv.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/sendrecv.o
Compiling misc/utils.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/misc/utils.o
Compiling transport/net_socket.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/net_socket.o
Compiling transport/p2p.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/p2p.o
Compiling transport/net.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/net.o
Compiling transport/net_ib.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/net_ib.o
Compiling transport/coll_net.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/coll_net.o
Compiling collectives/all_gather.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/all_gather.o
Compiling collectives/all_reduce.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/all_reduce.o
Compiling collectives/broadcast.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/broadcast.o
Compiling channel.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/channel.o
Compiling collectives/reduce.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/reduce.o
Compiling collectives/reduce_scatter.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/reduce_scatter.o
Compiling graph/topo.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/topo.o
Compiling graph/paths.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/paths.o
Compiling graph/search.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/search.o
Compiling graph/connect.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/connect.o
Compiling graph/trees.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/trees.o
Compiling graph/rings.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/rings.o
Compiling graph/tuning.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/tuning.o
Compiling graph/xml.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/xml.o
make[2]: Entering directory '/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/third_party/nccl/nccl/src/collectives/device'
Generating rules > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/device/Makefile.rules
/bin/sh: 1: ./gen_rules.sh: not found
Compiling functions.cu > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/device/functions.o
nvlink error : Undefined reference to '_Z20ncclSendRecv_copy_i8P14CollectiveArgs' in '/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/device/functions.o'
```
</details>
cc @malfet @seemethere @ngimel | 1.0 | Encountered ./gen_rules.sh: not found error when compile pytorch on linux. - I encountered an issue: ./gen_rules.sh not found. The log is:
<details>
<summary>log</summary>
```
[11/5078] Performing build step for 'nccl_external'
FAILED: nccl_external-prefix/src/nccl_external-stamp/nccl_external-build nccl/lib/libnccl_static.a
cd /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/third_party/nccl/nccl && env CCACHE_DISABLE=1 SCCACHE_DISABLE=1 make CXX=/usr/bin/c++ CUDA_HOME=/usr/local/cuda NVCC=/usr/local/cuda/bin/nvcc NVCC_GENCODE=-gencode=arch=compute_62,code=sm_62 BUILDDIR=/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl VERBOSE=0 -j && /usr/bin/cmake -E touch /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl_external-prefix/src/nccl_external-stamp/nccl_external-build
make -C src build BUILDDIR=/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl
make[1]: Entering directory '/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/third_party/nccl/nccl/src'
Grabbing include/nccl_net.h > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/include/nccl_net.h
Generating nccl.h.in > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/include/nccl.h
Generating nccl.pc.in > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/lib/pkgconfig/nccl.pc
Compiling init.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/init.o
Compiling enqueue.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/enqueue.o
Compiling bootstrap.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/bootstrap.o
Compiling transport.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport.o
Compiling group.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/group.o
Compiling debug.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/debug.o
Compiling proxy.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/proxy.o
Compiling misc/argcheck.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/misc/argcheck.o
Compiling misc/ibvwrap.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/misc/ibvwrap.o
Compiling transport/shm.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/shm.o
Compiling misc/nvmlwrap.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/misc/nvmlwrap.o
Compiling collectives/sendrecv.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/sendrecv.o
Compiling misc/utils.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/misc/utils.o
Compiling transport/net_socket.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/net_socket.o
Compiling transport/p2p.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/p2p.o
Compiling transport/net.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/net.o
Compiling transport/net_ib.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/net_ib.o
Compiling transport/coll_net.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/transport/coll_net.o
Compiling collectives/all_gather.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/all_gather.o
Compiling collectives/all_reduce.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/all_reduce.o
Compiling collectives/broadcast.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/broadcast.o
Compiling channel.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/channel.o
Compiling collectives/reduce.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/reduce.o
Compiling collectives/reduce_scatter.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/reduce_scatter.o
Compiling graph/topo.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/topo.o
Compiling graph/paths.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/paths.o
Compiling graph/search.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/search.o
Compiling graph/connect.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/connect.o
Compiling graph/trees.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/trees.o
Compiling graph/rings.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/rings.o
Compiling graph/tuning.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/tuning.o
Compiling graph/xml.cc > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/graph/xml.o
make[2]: Entering directory '/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/third_party/nccl/nccl/src/collectives/device'
Generating rules > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/device/Makefile.rules
/bin/sh: 1: ./gen_rules.sh: not found
Compiling functions.cu > /home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/device/functions.o
nvlink error : Undefined reference to '_Z20ncclSendRecv_copy_i8P14CollectiveArgs' in '/home/nvidia/zangbin/pytorch-aarch64-main/build/torch/build/nccl/obj/collectives/device/functions.o'
```
</details>
cc @malfet @seemethere @ngimel | non_comp | encountered gen rules sh not found error when compile pytorch on linux i encountered a issue gen rules sh not found the log is log performing build step for nccl external failed nccl external prefix src nccl external stamp nccl external build nccl lib libnccl static a cd home nvidia zangbin pytorch main build torch third party nccl nccl env ccache disable sccache disable make cxx usr bin c cuda home usr local cuda nvcc usr local cuda bin nvcc nvcc gencode gencode arch compute code sm builddir home nvidia zangbin pytorch main build torch build nccl verbose j usr bin cmake e touch home nvidia zangbin pytorch main build torch build nccl external prefix src nccl external stamp nccl external build make c src build builddir home nvidia zangbin pytorch main build torch build nccl make entering directory home nvidia zangbin pytorch main build torch third party nccl nccl src grabbing include nccl net h home nvidia zangbin pytorch main build torch build nccl include nccl net h generating nccl h in home nvidia zangbin pytorch main build torch build nccl include nccl h generating nccl pc in home nvidia zangbin pytorch main build torch build nccl lib pkgconfig nccl pc compiling init cc home nvidia zangbin pytorch main build torch build nccl obj init o compiling enqueue cc home nvidia zangbin pytorch main build torch build nccl obj enqueue o compiling bootstrap cc home nvidia zangbin pytorch main build torch build nccl obj bootstrap o compiling transport cc home nvidia zangbin pytorch main build torch build nccl obj transport o compiling group cc home nvidia zangbin pytorch main build torch build nccl obj group o compiling debug cc home nvidia zangbin pytorch main build torch build nccl obj debug o compiling proxy cc home nvidia zangbin pytorch main build torch build nccl obj proxy o compiling misc argcheck cc home nvidia zangbin pytorch main build torch build nccl obj misc argcheck o compiling misc ibvwrap cc home nvidia zangbin pytorch main build torch build nccl obj misc ibvwrap o compiling transport shm cc home nvidia zangbin pytorch main build torch build nccl obj transport shm o compiling misc nvmlwrap cc home nvidia zangbin pytorch main build torch build nccl obj misc nvmlwrap o compiling collectives sendrecv cc home nvidia zangbin pytorch main build torch build nccl obj collectives sendrecv o compiling misc utils cc home nvidia zangbin pytorch main build torch build nccl obj misc utils o compiling transport net socket cc home nvidia zangbin pytorch main build torch build nccl obj transport net socket o compiling transport cc home nvidia zangbin pytorch main build torch build nccl obj transport o compiling transport net cc home nvidia zangbin pytorch main build torch build nccl obj transport net o compiling transport net ib cc home nvidia zangbin pytorch main build torch build nccl obj transport net ib o compiling transport coll net cc home nvidia zangbin pytorch main build torch build nccl obj transport coll net o compiling collectives all gather cc home nvidia zangbin pytorch main build torch build nccl obj collectives all gather o compiling collectives all reduce cc home nvidia zangbin pytorch main build torch build nccl obj collectives all reduce o compiling collectives broadcast cc home nvidia zangbin pytorch main build torch build nccl obj collectives broadcast o compiling channel cc home nvidia zangbin pytorch main build torch build nccl obj channel o compiling collectives reduce cc home nvidia zangbin pytorch main build torch build nccl obj collectives reduce o compiling collectives reduce scatter cc home nvidia zangbin pytorch main build torch build nccl obj collectives reduce scatter o compiling graph topo cc home nvidia zangbin pytorch main build torch build nccl obj graph topo o compiling graph paths cc home nvidia zangbin pytorch main build torch build nccl obj graph paths o compiling graph search cc home nvidia zangbin pytorch main build torch build nccl obj graph search o compiling graph connect cc home nvidia zangbin pytorch main build torch build nccl obj graph connect o compiling graph trees cc home nvidia zangbin pytorch main build torch build nccl obj graph trees o compiling graph rings cc home nvidia zangbin pytorch main build torch build nccl obj graph rings o compiling graph tuning cc home nvidia zangbin pytorch main build torch build nccl obj graph tuning o compiling graph xml cc home nvidia zangbin pytorch main build torch build nccl obj graph xml o make entering directory home nvidia zangbin pytorch main build torch third party nccl nccl src collectives device generating rules home nvidia zangbin pytorch main build torch build nccl obj collectives device makefile rules bin sh gen rules sh not found compiling functions cu home nvidia zangbin pytorch main build torch build nccl obj collectives device functions o nvlink error undefined reference to copy in home nvidia zangbin pytorch main build torch build nccl obj collectives device functions o cc malfet seemethere ngimel | 0
3,283 | 6,226,150,246 | IssuesEvent | 2017-07-10 17:48:28 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | NullReferenceException from SocketAsyncEventArgs.InnerStartOperationReceiveMessageFrom | area-System.Net.Sockets bug tenet-compatibility | I was working on a UDP library when I hit this miserable exception. Attached is the minimal project I can think of to reproduce the exception.
[SocketAsyncClose.zip](https://github.com/dotnet/corefx/files/1132479/SocketAsyncClose.zip)
Compile and run this project with no debugger attached, and you should see the exception details,
```
System.NullReferenceException: Object reference not set to an instance of an object.
at System.Net.Sockets.SocketAsyncEventArgs.InnerStartOperationReceiveMessageFrom()
at System.Net.Sockets.Socket.ReceiveMessageFromAsync(SocketAsyncEventArgs e)
at SocketAsyncClose.Program.<>c__DisplayClass1_0.<Main>b__0() in c:\users\lextm\documents\visual studio 2017\Projects\SocketAsyncClose\SocketAsyncClose\Program.cs:line 36
```
This piece of code tries to reuse `SocketAsyncEventArgs` objects. And it works fine if `Socket.ReceiveAsync` or `Socket.ReceiveFromAsync` is used.
I tested it on Windows 10 (latest patches, .NET Core 1.0 and 1.1), and both cases give the same exception. I tested on macOS (latest stable, .NET Core 1.0 and 1.1), and cannot reproduce the exception.
Not surprised, as it should be something wrong in `SocketAsyncEventArgs.Windows.cs` I think and is Windows only. | True | NullReferenceException from SocketAsyncEventArgs.InnerStartOperationReceiveMessageFrom - I was working on a UDP library when I hit this miserable exception. Attached is the minimal project I can think of to reproduce the exception.
[SocketAsyncClose.zip](https://github.com/dotnet/corefx/files/1132479/SocketAsyncClose.zip)
Compile and run this project with no debugger attached, and you should see the exception details,
```
System.NullReferenceException: Object reference not set to an instance of an object.
at System.Net.Sockets.SocketAsyncEventArgs.InnerStartOperationReceiveMessageFrom()
at System.Net.Sockets.Socket.ReceiveMessageFromAsync(SocketAsyncEventArgs e)
at SocketAsyncClose.Program.<>c__DisplayClass1_0.<Main>b__0() in c:\users\lextm\documents\visual studio 2017\Projects\SocketAsyncClose\SocketAsyncClose\Program.cs:line 36
```
This piece of code tries to reuse `SocketAsyncEventArgs` objects. And it works fine if `Socket.ReceiveAsync` or `Socket.ReceiveFromAsync` is used.
I tested it on Windows 10 (latest patches, .NET Core 1.0 and 1.1), and both cases give the same exception. I tested on macOS (latest stable, .NET Core 1.0 and 1.1), and cannot reproduce the exception.
Not surprised, as it should be something wrong in `SocketAsyncEventArgs.Windows.cs` I think and is Windows only. | comp | nullreferenceexception from socketasynceventargs innerstartoperationreceivemessagefrom i was working on a udp library when i hit this miserable exception attached is the minimal project i can think of to reproduce the exception compile and run this project with no debugger attached and you should see the exception details system nullreferenceexception object reference not set to an instance of an object at system net sockets socketasynceventargs innerstartoperationreceivemessagefrom at system net sockets socket receivemessagefromasync socketasynceventargs e at socketasyncclose program c b in c users lextm documents visual studio projects socketasyncclose socketasyncclose program cs line this piece of code tries to reuse socketasynceventargs objects and it works fine if socket receiveasync or socket receivefromasync is used i tested it on windows latest patches net core and and both cases give the same exception i tested on macos latest stable net core and and cannot reproduce the exception not surprised as it should be something wrong in socketasynceventargs windows cs i think and is windows only | 1 |
19,392 | 26,908,823,767 | IssuesEvent | 2023-02-06 21:31:48 | WordPress/two-factor | https://api.github.com/repos/WordPress/two-factor | closed | Using auth_cookie filter instead of wp_login hook to start 2FA flow | Compatibility | Currently the flow is started when wp_login is triggered, i.e. when the user has already been logged in, and then the last part of the default login flow is reversed by removing the auth-cookie in the wp_login function.
```
public static function wp_login( $user_login, $user ) {
if ( ! self::is_user_using_two_factor( $user->ID ) ) {
return;
}
// Invalidate the current login session to prevent from being re-used.
self::destroy_current_session_for_user( $user );
// Also clear the cookies which are no longer valid.
wp_clear_auth_cookie();
self::show_two_factor_login( $user );
exit;
}
```
Why not instead use the auth_cookie filter hook to prevent the cookie from being set until 2FA has been completed?
Or use the wp_authenticate action hook, which is triggered before the WP backend authentication process is done, removing the need to destroy the session.
I think that using the wp_login hook, in addition to being somewhat backward (an already completed login is reversed), is more likely to conflict with other hooks on sites that perform actions after a successful login.
Ref:
https://usersinsights.com/wordpress-user-login-hooks/
https://developer.wordpress.org/reference/hooks/auth_cookie/
https://developer.wordpress.org/reference/hooks/wp_authenticate/
| True | Using auth_cookie filter instead of wp_login hook to start 2FA flow - Currently the flow is started when wp_login is triggered, i.e. when the user has already been logged in, and then the last part of the default login flow is reversed by removing the auth-cookie in the wp_login function.
```
public static function wp_login( $user_login, $user ) {
if ( ! self::is_user_using_two_factor( $user->ID ) ) {
return;
}
// Invalidate the current login session to prevent from being re-used.
self::destroy_current_session_for_user( $user );
// Also clear the cookies which are no longer valid.
wp_clear_auth_cookie();
self::show_two_factor_login( $user );
exit;
}
```
Why not instead use the auth_cookie filter hook to prevent the cookie from being set until 2FA has been completed?
Or use the wp_authenticate action hook, which is triggered before the WP backend authentication process is done, removing the need to destroy the session.
I think that using the wp_login hook, in addition to being somewhat backward (an already completed login is reversed), is more likely to conflict with other hooks on sites that perform actions after a successful login.
Ref:
https://usersinsights.com/wordpress-user-login-hooks/
https://developer.wordpress.org/reference/hooks/auth_cookie/
https://developer.wordpress.org/reference/hooks/wp_authenticate/
| comp | using auth cookie filter instead of wp login hook to start flow currently flow is started when wp login is triggered i e when the user has already been logged in and then reversers the last part of the the default login flow by removing the the auth cookie in function wp login public static function wp login user login user if self is user using two factor user id return invalidate the current login session to prevent from being re used self destroy current session for user user also clear the cookies which are no longer valid wp clear auth cookie self show two factor login user exit why don t instead use the hook auth cookie filter to prevent the cookie from being set unit has been completed or use wp authenticate action hook that is triggered before the wp backend authentication process is done removing need to destroy the session i think that use of wp login hook in addition to being somewhat backward as already completed login is reversed is more likely to conflict with other hooks in sites that seek to do actions after successful login ref | 1 |
1,775 | 2,666,940,981 | IssuesEvent | 2015-03-22 02:35:30 | benquarmby/jslintnet-test | https://api.github.com/repos/benquarmby/jslintnet-test | opened | Ability to upgrade JSLint version without recompiling | CodePlex | <b>ChrisNielsen[CodePlex]</b> <br />I am using the MSBuild task. I would like to be able to upgrade the JSLint version without needing to recompile JSLintNet. If it could read jslint.js from the file system instead of as an embedded resource, that would be great.
| 1.0 | Ability to upgrade JSLint version without recompiling - <b>ChrisNielsen[CodePlex]</b> <br />I am using the MSBuild task. I would like to be able to upgrade the JSLint version without needing to recompile JSLintNet. If it could read jslint.js from the file system instead of as an embedded resource, that would be great.
| non_comp | ability to upgrade jslint version without recompiling chrisnielsen i am using the msbuild task i would like to be able to upgrade the jslint version without needing to recompile jslintnet if it could read jslint js from the file system instead of as an embedded resource that would be great | 0 |
492 | 2,910,307,980 | IssuesEvent | 2015-06-21 16:36:04 | Yoast/wordpress-seo | https://api.github.com/repos/Yoast/wordpress-seo | closed | Conflict with Jupiter Theme | Compatibility | Hey there - I recently updated to the newest version of the Jupiter theme (version 4.0.7.4)(http://themeforest.net/item/jupiter-multipurpose-responsive-theme/5177775) and have a conflict with the Wordpress SEO Plugin. It seems to kill the Visual Composer which is a required plugin with jupiter (http://vc.wpbakery.com/) as well as Jupiter's global override section on posts and pages. I can send screen captures if you are interested in details. Thanks! | True | Conflict with Jupiter Theme - Hey there - I recently updated to the newest version of the Jupiter theme (version 4.0.7.4)(http://themeforest.net/item/jupiter-multipurpose-responsive-theme/5177775) and have a conflict with the Wordpress SEO Plugin. It seems to kill the Visual Composer which is a required plugin with jupiter (http://vc.wpbakery.com/) as well as Jupiter's global override section on posts and pages. I can send screen captures if you are interested in details. Thanks! | comp | conflict with jupiter theme hey there i recently updated to the newest version of the jupiter theme version and have a conflict with the wordpress seo plugin it seems to kill the visual composer which is a required plugin with jupiter as well as jupiter s global override section on posts and pages i can send screen captures if you are interested in details thanks | 1 |
92,963 | 11,728,749,218 | IssuesEvent | 2020-03-10 18:05:42 | phetsims/molecules-and-light | https://api.github.com/repos/phetsims/molecules-and-light | opened | Representation of photon emitter button in PDOM | design:a11y | From #295 and mentioned over slack, how would we like this button to appear in the PDOM? @terracoda is going to provide some markup to try and then I will listen to how each sounds on our supported platforms. | 1.0 | Representation of photon emitter button in PDOM - From #295 and mentioned over slack, how would we like this button to appear in the PDOM? @terracoda is going to provide some markup to try and then I will listen to how each sounds on our supported platforms. | non_comp | representation of photon emitter button in pdom from and mentioned over slack how would we like this button to appear in the pdom terracoda is going to provide some markup to try and then i will listen to how each sounds on our supported platforms | 0 |
2,603 | 5,329,885,441 | IssuesEvent | 2017-02-15 15:52:03 | yiisoft/yii | https://api.github.com/repos/yiisoft/yii | closed | unserialize error in PHP 7.1 | compatibility:PHP7 | Hi, I'm using Yii 1.1.17. Everything works with PHP 7.0.x. I upgraded to PHP 7.1 and got this issue:
> unserialize(): Unexpected end of serialized data

Is there any way to fix this? Thanks. | True | comp | 1
1,602 | 4,161,967,912 | IssuesEvent | 2016-06-17 18:32:14 | einsteinsci/betterbeginnings | https://api.github.com/repos/einsteinsci/betterbeginnings | closed | Galacticraft ores compatibility | compatibility help wanted unconfirmed | Ores from Galacticraft (tin, copper and aluminium ore) are all smelted into copper. Seems it is a meta-data issue, as all ore / ingots share the same block/item ID, only the damage value differs.
Happened on version 0.9.4-pre2
| True | comp | 1
231,922 | 18,820,257,897 | IssuesEvent | 2021-11-10 07:19:23 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | opened | An error dialog pops up when cloning/pasting one file share | 🧪 testing :gear: files :beetle: regression | **Storage Explorer Version**: 1.22.0-dev
**Build Number**: 20211110.1
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.0.1
**Architecture**: ia32/x64
**How Found**: From running test cases
**Regression From**: Previous releases (1.21.3)
## Steps to Reproduce ##
1. Expand one storage account -> File Shares.
2. Select one file share -> Clone the file share with a valid name.
3. Check whether succeed to clone.
## Expected Experience ##
Succeed to clone.
## Actual Experience ##
An error dialog pops up.


## More Info ##
1. There is an ongoing activity log even though the error dialog is closed.

2. The cloned file share displays on the tree view. | 1.0 | non_comp | 0
14,440 | 17,420,460,886 | IssuesEvent | 2021-08-04 00:20:52 | VazkiisMods/Botania | https://api.github.com/repos/VazkiisMods/Botania | closed | Entire world goes blank when using Runic Altar and Botanical Brewery | Stale compatibility | Forge version: 1.16.5 - 36.1.16
Botania version: 1.16.5 - 416
# Further Information
Link to crash log: No crash
Steps to reproduce:
1. Throw any appropriate item(s) on the Runic Altar/Brewery to start crafting
2. Look at the Runic Altar (or Brewery with Wand of the Forest in hand)
What I expected to happen:
The runes would craft normally.
What happened instead:
They did, but looking at the Runic Altar when the Altar is receiving mana to craft the runes it causes everything to de-render. Looking away from the Altar makes everything appear again. The only things that can be seen are the tops of liquids, the items in the Altar, and mana bursts. The same thing happens with the Botanical Brewery except only if you are holding a Wand of the Forest. | True | comp | 1
5,002 | 2,765,733,456 | IssuesEvent | 2015-04-29 22:09:37 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | reopened | Proposal: Language support for Tuples | 1 - Planning Area-Language Design Feature Request Language-C# | There are many scenarios where you'd like to group a set of typed values temporarily, without the grouping itself warranting a "concept" or type name of its own.
Other languages use variations over the notion of **tuples** for this. Maybe C# should too.
This proposal follows up on #98 and addresses #102 and #307.
Background
==========
The most common situation where values need to be temporarily grouped, a list of arguments to (e.g.) a method, has syntactic support in C#. However, the probably *second*-most common, a list of *results*, does not.
While there are many situations where tuple support could be useful, the most prevalent by far is the ability to return multiple values from an operation.
Your options today include:
*Out parameters:*
``` c#
public void Tally(IEnumerable<int> values, out int sum, out int count) { ... }
int s, c;
Tally(myValues, out s, out c);
Console.WriteLine($"Sum: {s}, count: {c}");
```
This approach cannot be used for async methods, and it is also rather painful to consume, requiring variables to be first declared (and `var` is not an option), then passed as out parameters in a separate statement, *then* consumed.
On the bright side, because the results are out parameters, they have names, which help indicate which is which.
*System.Tuple:*
``` c#
public Tuple<int, int> Tally(IEnumerable<int> values) { ... }
var t = Tally(myValues);
Console.WriteLine($"Sum: {t.Item1}, count: {t.Item2}");
```
This works for async methods (you could return `Task<Tuple<int, int>>`), and you only need two statements to consume it. On the downside, the consuming code is perfectly obscure - there is nothing to indicate that you are talking about a sum and a count. Finally, there's a cost to allocating the Tuple object.
*Declared transport type*
``` c#
public struct TallyResult { public int Sum; public int Count; }
public TallyResult Tally(IEnumerable<int> values) { ... }
var t = Tally(myValues);
Console.WriteLine($"Sum: {t.Sum}, count: {t.Count}");
```
This has by far the best consumption experience. It works for async methods, the resulting struct has meaningful field names, and being a struct, it doesn't require heap allocation - it is essentially passed on the stack in the same way that the argument list to a method.
The downside of course is the need to declare the transport type. The declaration is meaningless overhead in itself, and since it doesn't represent a clear concept, it is hard to give it a meaningful name. You can name it after the operation that returns it (like I did above), but then you cannot reuse it for other operations.
Tuple syntax
============
If the most common use case is multiple results, it seems reasonable to strive for symmetry with parameter lists and argument lists. If you can squint and see "things going in" and "things coming out" as two sides of the same coin, then that seems to be a good sign that the feature is well integrated into the existing language, and may in fact *improve* the symmetry instead of (or at least in addition to) adding conceptual weight.
Tuple types
-----------
Tuple types would be introduced with syntax very similar to a parameter list:
``` c#
public (int sum, int count) Tally(IEnumerable<int> values) { ... }
var t = Tally(myValues);
Console.WriteLine($"Sum: {t.sum}, count: {t.count}");
```
The syntax `(int sum, int count)` indicates an anonymous struct type with public fields of the given names and types.
Note that this is different from some notions of tuple, where the members are not given names but only positions. This is a common complaint, though, essentially degrading the consumption scenario to that of `System.Tuple` above. For full usefulness, tuple members need to have names.
This is fully compatible with async:
``` c#
public async Task<(int sum, int count)> TallyAsync(IEnumerable<int> values) { ... }
var t = await TallyAsync(myValues);
Console.WriteLine($"Sum: {t.sum}, count: {t.count}");
```
Tuple literals
--------------
With no further syntax additions to C#, tuple values could be created as
``` c#
var t = new (int sum, int count) { sum = 0, count = 0 };
```
Of course that's not very convenient. We should have a syntax for tuple literals, and given the principle above it should closely mirror that of argument lists.
Creating a tuple value of a known target type, should enable leaving out the member names:
``` c#
public (int sum, int count) Tally(IEnumerable<int> values)
{
var s = 0; var c = 0;
foreach (var value in values) { s += value; c++; }
return (s, c); // target typed to (int sum, int count)
}
```
Using named arguments as a syntax analogy it may also be possible to give the names of the tuple fields directly in the literal:
``` c#
public (int sum, int count) Tally(IEnumerable<int> values)
{
var res = (sum: 0, count: 0); // infer tuple type from names and values
foreach (var value in values) { res.sum += value; res.count++; }
return res;
}
```
Which syntax you use would depend on whether the context provides a target type.
Tuple deconstruction
--------------------
Since the grouping represented by tuples is most often "accidental", the consumer of a tuple is likely not to want to even think of the tuple as a "thing". Instead they want to immediately get at the components of it. Just like you don't *first* bundle up the arguments to a method into an object and *then* send the bundle off, you wouldn't want to *first* receive a bundle of values back from a call and *then* pick out the pieces.
Languages with tuple features typically use a deconstruction syntax to receive and "split out" a tuple in one fell swoop:
``` c#
(var sum, var count) = Tally(myValues); // deconstruct result
Console.WriteLine($"Sum: {sum}, count: {count}");
```
This way there's no evidence in the code that a tuple ever *existed*.
Details
=======
That's the general gist of the proposal. Here are a ton of details to think through in the design process.
Struct or class
---------------
As mentioned, I propose to make tuple types structs rather than classes, so that no allocation penalty is associated with them. They should be as lightweight as possible.
Arguably, structs can end up being more costly, because assignment copies a bigger value. So if they are assigned a lot more than they are created, then structs would be a bad choice.
In their very motivation, though, tuples are ephemeral. You would use them when the parts are more important than the whole. So the common pattern would be to construct, return and immediately deconstruct them. In this situation structs are clearly preferable.
Structs also have a number of other benefits, which will become obvious in the following.
Mutability
----------
Should tuples be mutable or immutable? The nice thing about them being structs is that the user can choose. If a reference to the tuple is readonly then the tuple is readonly.
Now a local variable cannot be readonly, unless we adopt #115 (which is likely), but that isn't too big of a deal, because locals are only *used* locally, and so it is easier to stick to an immutable discipline if you so choose.
If tuples are used as fields, then those fields can be readonly if desired.
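As a sketch of the readonly-field case just described (illustrative only, since it uses the proposed tuple syntax; the type and member names here are made up):

``` c#
public class Tallier
{
    // Because the tuple is a struct, marking the field readonly
    // makes the whole tuple immutable through this reference.
    private readonly (int sum, int count) totals;

    public Tallier(int sum, int count)
    {
        totals = (sum: sum, count: count); // only assignable in the constructor
    }

    // totals.sum = 42; anywhere else would be a compile-time error,
    // since members of a readonly struct-typed field cannot be mutated.
}
```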
Value semantics
---------------
Structs have built-in value semantics: `Equals` and `GetHashCode` are automatically implemented in terms of the struct's fields. This isn't always very *efficiently* implemented, so we should make sure that the compiler-generated struct does this efficiently where the runtime doesn't.
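To make that concrete, a hand-written struct already shows the built-in value semantics a compiler-generated tuple struct would inherit — and also why efficiency matters, since the default `ValueType.Equals` falls back to reflection-based field comparison. The `Pair` type here is purely illustrative:

``` c#
public struct Pair
{
    public int Sum;
    public int Count;
}

var a = new Pair { Sum = 10, Count = 2 };
var b = new Pair { Sum = 10, Count = 2 };

// Field-by-field comparison inherited from ValueType:
Console.WriteLine(a.Equals(b)); // True
// Equal values also produce equal hash codes:
Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // True
```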
Tuples as fields
----------------
While multiple results may be the most common usage, you can certainly imagine tuples showing up as part of the state of objects. A particular common case might be where generics is involved, and you want to pass a compound of values for one of the type parameters. Think dictionaries with multiple keys and/or multiple values, etc.
Care needs to be taken with mutable structs in the heap: if multiple threads can mutate, tearing can happen.
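The multi-key dictionary case above might look like this in the proposed syntax (a sketch; the names and contents are invented for illustration):

``` c#
// A dictionary keyed by a compound of two values, without
// declaring a named key type:
var population = new Dictionary<(string city, string country), int>();
population.Add(("Springfield", "US"), 167000);

int n;
if (population.TryGetValue(("Springfield", "US"), out n))
    Console.WriteLine(n);
```

Lookup works here because the tuple struct's value-based `Equals` and `GetHashCode` compare keys member-wise.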
Conversions
-----------
On top of the member-wise conversions implied by target typing, we can certainly allow implicit conversions between tuple types themselves.
Specifically, covariance seems straightforward, because the tuples are value types: As long as each member of the assigned tuple is assignable to the type of the corresponding member of the receiving tuple, things should be good.
You could imagine going a step further, and allowing pointwise conversions between tuples *regardless* of the member names, as long as the arity and types line up. If you want to "reinterpret" a tuple, why shouldn't you be allowed to? Essentially the view would be that assignment from tuple to tuple is just memberwise assignment by position.
``` c#
(double sum, long count) weaken = Tally(...); // why not?
(int s, int c) rename = Tally(...); // why not?
```
Unification across assemblies
-----------------------------
One big question is whether tuple types should unify across assemblies. Currently, compiler generated types don't. As a matter of fact, anonymous types are deliberately kept assembly-local by limitations in the language, such as the fact that there's no type syntax for them!
It might seem obvious that there should be unification of tuple types across assemblies - i.e. that `(int sum, int count)` is the same type when it occurs in assembly A and assembly B. However, given that structs aren't expected to be passed around much, you can certainly imagine them still being useful without that.
Even so, it would probably come as a surprise to developers if there was no interoperability between tuples across assembly boundaries. This may range from having implicit conversions between them, supported by the compiler, to having a true unification supported by the runtime, or implemented with very clever tricks. Such tricks might lead to a less straightforward layout in metadata (such as carrying the tuple member names in separate attributes instead of as actual member names on the generated struct).
This needs further investigation. What would it take to implement tuple unification? Is it worth the price? Are tuples worth doing without it?
Deconstruction and declaration
------------------------------
There's a design issue around whether deconstruction syntax is only for declaring *new* variables for tuple components, or whether it can be used with existing variables:
``` c#
(var sum, var count) = Tally(myValues); // deconstruct into fresh variables
(sum, count) = Tally(otherValues); // deconstruct into existing variables?
```
In other words, is the form `(_, _, _) = e;` a declaration statement, an assignment expression, or something in between?
This discussion intersects meaningfully with #254, declaration expressions.
Relationship with anonymous types
---------------------------------
Since tuples would be compiler generated types just like anonymous types are today, it's useful to consider rationalizing the two with each other as much as possible. With tuples being structs and anonymous types being classes, they won't completely unify, but they could be very similar. Specifically, anonymous types could pick up these properties from tuples:
- There could be a syntax to denote the types! E.g. `{ string Name, int Age}`. If so, we'd need to also figure out the cross-assembly story for them.
- There could be deconstruction syntax for them.
Optional enhancements
=====================
Once in the language, there are additional conveniences that you can imagine adding for tuples.
Tuple members in scope in method body
-------------------------------------
One (the only?) nice aspect of out parameters is that no returning is needed from the method body - they are just assigned to. For the case where a tuple type occurs as a return type of a method you could imagine a similar shortcut:
``` c#
public (int sum, int count) Tally(IEnumerable<int> values)
{
sum = 0; count = 0;
foreach (var value in values) { sum += value; count++; }
}
```
Just like parameters, the names of the tuple are in scope in the method body, and just like out parameters, the only requirement is that they be definitely assigned at the end of the method.
This is taking the parameter-result analogy one step further. However, it would special-case the tuples-for-multiple-returns scenario over other tuple scenarios, and it would also preclude seeing in one place what gets returned.
Splatting
---------
If a method expects n arguments, we could allow a suitable n-tuple to be passed to it. Just like with params arrays, we would first check if there's a method that takes the tuple directly, and otherwise we would try again with the tuple's members as individual arguments:
``` c#
public double Avg(int sum, int count) => count == 0 ? 0 : (double)sum / count; // cast avoids integer division
Console.WriteLine($"Avg: {Avg(Tally(myValues))}");
```
Here, `Tally` returns a tuple of type `(int sum, int count)` that gets splatted to the two arguments to `Avg`.
Conversely, if a method expects a tuple we could allow it to be called with individual arguments, having the compiler automatically assemble them to a tuple, provided that no overload was applicable to the individual arguments.
I doubt that a method would commonly be declared directly to just take a tuple. But it may be a method on a generic type that gets instantiated with a tuple type:
``` c#
var list = new List<(string name, int age)>();
list.Add("John Doe", 66); // "unsplatting" to a tuple
```
There are probably a lot of details to figure out with the splatting and unsplatting rules.
| 1.0 | Proposal: Language support for Tuples - There are many scenarios where you'd like to group a set of typed values temporarily, without the grouping itself warranting a "concept" or type name of its own.
Other languages use variations over the notion of **tuples** for this. Maybe C# should too.
This proposal follows up on #98 and addresses #102 and #307.
Background
==========
The most common situation where values need to be temporarily grouped, a list of arguments to (e.g.) a method, has syntactic support in C#. However, the probably *second*-most common, a list of *results*, does not.
While there are many situations where tuple support could be useful, the most prevalent by far is the ability to return multiple values from an operation.
Your options today include:
*Out parameters:*
``` c#
public void Tally(IEnumerable<int> values, out int sum, out int count) { ... }
int s, c;
Tally(myValues, out s, out c);
Console.WriteLine($"Sum: {s}, count: {c}");
```
This approach cannot be used for async methods, and it is also rather painful to consume, requiring variables to be first declared (and `var` is not an option), then passed as out parameters in a separate statement, *then* consumed.
On the bright side, because the results are out parameters, they have names, which help indicate which is which.
*System.Tuple:*
``` c#
public Tuple<int, int> Tally(IEnumerable<int> values) { ... }
var t = Tally(myValues);
Console.WriteLine($"Sum: {t.Item1}, count: {t.Item2}");
```
This works for async methods (you could return `Task<Tuple<int, int>>`), and you only need two statements to consume it. On the downside, the consuming code is perfectly obscure - there is nothing to indicate that you are talking about a sum and a count. Finally, there's a cost to allocating the Tuple object.
*Declared transport type*
``` c#
public struct TallyResult { public int Sum; public int Count; }
public TallyResult Tally(IEnumerable<int> values) { ... }
var t = Tally(myValues);
Console.WriteLine($"Sum: {t.Sum}, count: {t.Count}");
```
This has by far the best consumption experience. It works for async methods, the resulting struct has meaningful field names, and being a struct, it doesn't require heap allocation - it is essentially passed on the stack in the same way that the argument list to a method.
The downside of course is the need to declare the transport type. THe declaration is meaningless overhead in itself, and since it doesn't represent a clear concept, it is hard to give it a meaningful name. You can name it after the operation that returns it (like I did above), but then you cannot reuse it for other operations.
Tuple syntax
============
If the most common use case is multiple results, it seems reasonable to strive for symmetry with parameter lists and argument lists. If you can squint and see "things going in" and "things coming out" as two sides of the same coin, then that seems to be a good sign that the feature is well integrated into the existing language, and may in fact *improve* the symmetry instead of (or at least in addition to) adding conceptual weight.
Tuple types
-----------
Tuple types would be introduced with syntax very similar to a parameter list:
``` c#
public (int sum, int count) Tally(IEnumerable<int> values) { ... }
var t = Tally(myValues);
Console.WriteLine($"Sum: {t.sum}, count: {t.count}");
```
The syntax `(int sum, int count)` indicates an anonymous struct type with public fields of the given names and types.
Note that this is different from some notions of tuple, where the members are not given names but only positions. This is a common complaint, though, essentially degrading the consumption scenario to that of `System.Tuple` above. For full usefulness, tuples members need to have names.
This is fully compatible with async:
``` c#
public async Task<(int sum, int count)> TallyAsync(IEnumerable<int> values) { ... }
var t = await TallyAsync(myValues);
Console.WriteLine($"Sum: {t.sum}, count: {t.count}");
```
Tuple literals
--------------
With no further syntax additions to C#, tuple values could be created as
``` c#
var t = new (int sum, int count) { sum = 0, count = 0 };
```
Of course that's not very convenient. We should have a syntax for tuple literals, and given the principle above it should closely mirror that of argument lists.
Creating a tuple value of a known target type, should enable leaving out the member names:
``` c#
public (int sum, int count) Tally(IEnumerable<int> values)
{
var s = 0; var c = 0;
foreach (var value in values) { s += value; c++; }
return (s, c); // target typed to (int sum, int count)
}
```
Using named arguments as a syntax analogy it may also be possible to give the names of the tuple fields directly in the literal:
``` c#
public (int sum, int count) Tally(IEnumerable<int> values)
{
var res = (sum: 0, count: 0); // infer tuple type from names and values
foreach (var value in values) { res.sum += value; res.count++; }
return res;
}
```
Which syntax you use would depend on whether the context provides a target type.
Tuple deconstruction
--------------------
Since the grouping represented by tuples is most often "accidental", the consumer of a tuple is likely not to want to even think of the tuple as a "thing". Instead they want to immediately get at the components of it. Just like you don't *first* bundle up the arguments to a method into an object and *then* send the bundle off, you wouldn't want to *first* receive a bundle of values back from a call and *then* pick out the pieces.
Languages with tuple features typically use a deconstruction syntax to receive and "split out" a tuple in one fell swoop:
``` c#
(var sum, var count) = Tally(myValues); // deconstruct result
Console.WriteLine($"Sum: {sum}, count: {count}");
```
This way there's no evidence in the code that a tuple ever *existed*.
Details
=======
That's the general gist of the proposal. Here are a ton of details to think through in the design process.
Struct or class
---------------
As mentioned, I propose to make tuple types structs rather than classes, so that no allocation penalty is associated with them. They should be as lightweight as possible.
Arguably, structs can end up being more costly, because assignment copies a bigger value. So if they are assigned a lot more than they are created, then structs would be a bad choice.
In their very motivation, though, tuples are ephemeral. You would use them when the parts are more important than the whole. So the common pattern would be to construct, return and immediately deconstruct them. In this situation structs are clearly preferable.
Structs also have a number of other benefits, which will become obvious in the following.
Mutability
----------
Should tuples be mutable or immutable? The nice thing about them being structs is that the user can choose. If a reference to the tuple is readonly then the tuple is readonly.
Now a local variable cannot be readonly, unless we adopt #115 (which is likely), but that isn't too big of a deal, because locals are only *used* locally, and so it is easier to stick to an immutable discipline if you so choose.
If tuples are used as fields, then those fields can be readonly if desired.
Value semantics
---------------
Structs have built-in value semantics: `Equals` and `GetHashCode` are automatically implemented in terms of the struct's fields. This isn't always very *efficiently* implemented, so we should make sure that the compiler-generated struct does this efficiently where the runtime doesn't.
Tuples as fields
----------------
While multiple results may be the most common usage, you can certainly imagine tuples showing up as part of the state of objects. A particular common case might be where generics is involved, and you want to pass a compound of values for one of the type parameters. Think dictionaries with multiple keys and/or multiple values, etc.
Care needs to be taken with mutable structs in the heap: if multiple threads can mutate, tearing can happen.
Conversions
-----------
On top of the member-wise conversions implied by target typing, we can certainly allow implicit conversions between tuple types themselves.
Specifically, covariance seems straightforward, because the tuples are value types: As long as each member of the assigned tuple is assignable to the type of the corresponding member of the receiving tuple, things should be good.
You could imagine going a step further, and allowing pointwise conversions between tuples *regardless* of the member names, as long as the arity and types line up. If you want to "reinterpret" a tuple, why shouldn't you be allowed to? Essentially the view would be that assignment from tuple to tuple is just memberwise assignment by position.
``` c#
(double sum, long count) weaken = Tally(...); // why not?
(int s, int c) rename = Tally(...) // why not?
```
Unification across assemblies
-----------------------------
One big question is whether tuple types should unify across assemblies. Currently, compiler generated types don't. As a matter of fact, anonymous types are deliberately kept assembly-local by limitations in the language, such as the fact that there's no type syntax for them!
It might seem obvious that there should be unification of tuple types across assemblies - i.e. that `(int sum, int count)` is the same type when it occurs in assembly A and assembly B. However, given that structs aren't expected to be passed around much, you can certainly imagine them still being useful without that.
Even so, it would probably come as a surprise to developers if there was no interoperability between tuples across assembly boundaries. This may range from having implicit conversions between them, supported by the compiler, to having a true unification supported by the runtime, or implemented with very clever tricks. Such tricks might lead to a less straightforward layout in metadata (such as carrying the tuple member names in separate attributes instead of as actual member names on the generated struct).
This needs further investigation. What would it take to implement tuple unification? Is it worth the price? Are tuples worth doing without it?
Deconstruction and declaration
------------------------------
There's a design issue around whether deconstruction syntax is only for declaring *new* variables for tuple components, or whether it can be used with existing variables:
``` c#
(var sum, var count) = Tally(myValues); // deconstruct into fresh variables
(sum, count) = Tally(otherValues); // deconstruct into existing variables?
```
In other words is the form `(_, _, _) = e;` a declaration statement, an assignment expression, or something in between?
This discussion intersects meaningfully with #254, declaration expressions.
Relationship with anonymous types
---------------------------------
Since tuples would be compiler generated types just like anonymous types are today, it's useful to consider rationalizing the two with each other as much as possible. With tuples being structs and anonymous types being classes, they won't completely unify, but they could be very similar. Specifically, anonymous types could pick up these properties from tuples:
- There could be a syntax to denote the types! E.g. `{ string Name, int Age}`. If so, we'd need to also figure out the cross-assembly story for them.
- There could be deconstruction syntax for them.
Optional enhancements
=====================
Once in the language, there are additional conveniences that you can imagine adding for tuples.
Tuple members in scope in method body
-------------------------------------
One (the only?) nice aspect of out parameters is that no return statement is needed in the method body; they are just assigned to. For the case where a tuple type occurs as the return type of a method, you could imagine a similar shortcut:
``` c#
public (int sum, int count) Tally(IEnumerable<int> values)
{
sum = 0; count = 0;
foreach (var value in values) { sum += value; count++; }
}
```
Just like parameters, the tuple's member names are in scope in the method body, and just like out parameters, the only requirement is that they be definitely assigned at the end of the method.
This is taking the parameter-result analogy one step further. However, it would special-case the tuples-for-multiple-returns scenario over other tuple scenarios, and it would also preclude seeing in one place what gets returned.
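For comparison, the explicit form under this proposal (still hypothetical tuple syntax) keeps the returned values visible in one place:

``` c#
public (int sum, int count) Tally(IEnumerable<int> values)
{
    var sum = 0; var count = 0;
    foreach (var value in values) { sum += value; count++; }
    return (sum, count); // target-typed tuple literal; the return site is explicit
}
```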
Splatting
---------
If a method expects n arguments, we could allow a suitable n-tuple to be passed to it. Just like with params arrays, we would first check if there's a method that takes the tuple directly, and otherwise we would try again with the tuple's members as individual arguments:
``` c#
public double Avg(int sum, int count) => count == 0 ? 0 : (double)sum / count; // cast avoids integer division
Console.WriteLine($"Avg: {Avg(Tally(myValues))}");
```
Here, `Tally` returns a tuple of type `(int sum, int count)` that gets splatted to the two arguments to `Avg`.
Conversely, if a method expects a tuple, we could allow it to be called with individual arguments, having the compiler automatically assemble them into a tuple, provided that no overload was applicable to the individual arguments.
I doubt that a method would commonly be declared directly to just take a tuple. But it may be a method on a generic type that gets instantiated with a tuple type:
``` c#
var list = new List<(string name, int age)>();
list.Add("John Doe", 66); // "unsplatting" to a tuple
```
There are probably a lot of details to figure out with the splatting and unsplatting rules.
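For instance (hypothetical syntax; the method `M` is invented for this sketch), overload resolution would need tie-breaking rules whenever both a tuple-taking candidate and a splatted candidate exist:

``` c#
public void M((int a, int b) pair) { } // candidate 1: takes the tuple itself
public void M(int a, int b) { }        // candidate 2: takes the members

var t = (a: 1, b: 2);
M(t); // The rule above tries candidate 1 first, so it would win here.
      // Interactions with generics, params arrays, and named arguments
      // would all still need to be pinned down.
```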
| non_comp | 0 |
104,814 | 13,124,829,012 | IssuesEvent | 2020-08-06 05:02:05 | ZcashFoundation/zebra | https://api.github.com/repos/ZcashFoundation/zebra | closed | Restrict zebrad commands that are actually available on release build | C-design Poll::Ready | We can:
* delete some commands, or
* move some commands to the utils crate

| 1.0 | Restrict zebrad commands that are actually available on release build - We can:
* delete some commands, or
* move some commands to the utils crate

| non_comp | restrict zebrad commands that are actually available on release build we can delete some commands or move some commands to the utils crate | 0 |
12,502 | 14,785,334,406 | IssuesEvent | 2021-01-12 02:29:21 | kami-blue/client | https://api.github.com/repos/kami-blue/client | closed | Slight cape rendering bugs | -incompatible -module bug | - [ ] Text layer disappears after shifting, also happens while flying. Reappears after a second.
- [ ] Weird "shimmering" affect with enchanted capes
- [ ] Sort of inverted effect in PlayerModel? not super noticeable, see below for image
 | True | Slight cape rendering bugs - - [ ] Text layer disappears after shifting, also happens while flying. Reappears after a second.
- [ ] Weird "shimmering" affect with enchanted capes
- [ ] Sort of inverted effect in PlayerModel? not super noticeable, see below for image
 | comp | slight cape rendering bugs text layer disappears after shifting also happens while flying reappears after a second weird shimmering affect with enchanted capes sort of inverted effect in playermodel not super noticeable see below for image | 1 |
211,975 | 7,225,651,168 | IssuesEvent | 2018-02-10 00:03:43 | KB-Support/kb-support | https://api.github.com/repos/KB-Support/kb-support | opened | Last name field on submission form | Priority: Medium enhancement | Do not protect the last name field from deletion.
The field should still be included when creating new forms (and during installation) but it should be removable. | 1.0 | Last name field on submission form - Do not protect the last name field from deletion.
The field should still be included when creating new forms (and during installation) but it should be removable. | non_comp | last name field on submission form do not protect the last name field from deletion the field should still be included when creating new forms and during installation but it should be removable | 0 |
2,822 | 5,624,019,504 | IssuesEvent | 2017-04-04 16:06:40 | Angry-Pixel/The-Betweenlands | https://api.github.com/repos/Angry-Pixel/The-Betweenlands | closed | Call PreRenderShadersEvent only after RenderWorldLastEvent was called | 1.10 Incompatibility | Add safety check in case another mod somehow cancels the renderWorld call in updateCameraAndRender or the RenderWorldLastEvent
http://pastebin.com/kSPJy6ec | True | Call PreRenderShadersEvent only after RenderWorldLastEvent was called - Add safety check in case another mod somehow cancels the renderWorld call in updateCameraAndRender or the RenderWorldLastEvent
http://pastebin.com/kSPJy6ec | comp | call prerendershadersevent only after renderworldlastevent was called add safety check in case another mod somehow cancels the renderworld call in updatecameraandrender or the renderworldlastevent | 1 |
363,451 | 10,741,348,139 | IssuesEvent | 2019-10-29 20:02:59 | ampproject/amp-github-apps | https://api.github.com/repos/ampproject/amp-github-apps | closed | [owners] Migrate from GCE to GAE | Category: Owners P1: High Priority | The original implementation of the Owners bot was based around a Google Compute Engine (GCE) deployment. This decision was made because GCE provides local filesystem I/O, which is useful for maintaining a copy of the repository, using command-line-fu to identify owners files, and reading the contents of those owners files. No other files are read, and none are written.
Using GCE introduces a handful of issues, most notably:
- deployment is fragile, takes down the app, and relies on a startup script that is not checked in
- there is no reliable way to manage environment variables (especially sensitive keys/tokens); they are just embedded in the startup script, and can't be checked into the repository used to deploy
- the team is less familiar with GCE than GAE, making maintenance and debugging harder
- firewall restrictions around GCE make SSH and debugging difficult
- the amount of bootstrapping and configuration embedded only in GCE instance metadata makes it hard/impossible for other projects to make use of the owners bot
As a resolution to these issues, the plan is to migrate to Google App Engine (GAE). This will be a lot of work up-front, but should make automated and continuous deployment and development of the app much easier and less error-prone going forward.
Steps needed to migrate to a GAE deployment process:
- [x] configure app to read from `.env` file; write the `.env` file within the startup script during migration
- [x] hoist a base `Repository` class out of the `LocalRepository` class, and implement a `RemoteRepository` subclass
- [x] using the [GitHub code search API](https://developer.github.com/v3/search/#search-code), provide the `listOwnersFiles` method
- [x] using the [GitHub file contents API](https://developer.github.com/v3/repos/contents/#get-contents), provide the `readFile` method
- [x] maintain a map from owners file paths to the SHA of their latest commit
- [x] implement a caching layer to prevent overloading GitHub API on file reads
- [x] lift the collection and parsing of owners files/the ownership tree out of the owners check and into a single server-shared tree
- [x] isolate bootstrapping code (ie. initializing teams and ownership trees)
- [x] add webhooks to catch updates to owners files and update the corresponding rules in the cache
- [x] create `app.yaml` and deploy to GAE
- [x] migrate team and tree updating code to use Cron tasks
- [x] spin down GCE version of app
/cc @rsimha @erwinmombay @danielrozenberg | 1.0 | [owners] Migrate from GCE to GAE - The original implementation of the Owners bot was based around a Google Compute Engine (GCE) deployment. This decision was made because GCE provides local filesystem I/O, which is useful for maintaining a copy of the repository, using command-line-fu to identify owners files, and reading the contents of those owners files. No other files are read, and none are written.
Using GCE introduces a handful of issues, most notably:
- deployment is fragile, takes down the app, and relies on a startup script that is not checked in
- there is no reliable way to manage environment variables (especially sensitive keys/tokens); they are just embedded in the startup script, and can't be checked into the repository used to deploy
- the team is less familiar with GCE than GAE, making maintenance and debugging harder
- firewall restrictions around GCE make SSH and debugging difficult
- the amount of bootstrapping and configuration embedded only in GCE instance metadata makes it hard/impossible for other projects to make use of the owners bot
As a resolution to these issues, the plan is to migrate to Google App Engine (GAE). This will be a lot of work up-front, but should make automated and continuous deployment and development of the app much easier and less error-prone going forward.
Steps needed to migrate to a GAE deployment process:
- [x] configure app to read from `.env` file; write the `.env` file within the startup script during migration
- [x] hoist a base `Repository` class out of the `LocalRepository` class, and implement a `RemoteRepository` subclass
- [x] using the [GitHub code search API](https://developer.github.com/v3/search/#search-code), provide the `listOwnersFiles` method
- [x] using the [GitHub file contents API](https://developer.github.com/v3/repos/contents/#get-contents), provide the `readFile` method
- [x] maintain a map from owners file paths to the SHA of their latest commit
- [x] implement a caching layer to prevent overloading GitHub API on file reads
- [x] lift the collection and parsing of owners files/the ownership tree out of the owners check and into a single server-shared tree
- [x] isolate bootstrapping code (ie. initializing teams and ownership trees)
- [x] add webhooks to catch updates to owners files and update the corresponding rules in the cache
- [x] create `app.yaml` and deploy to GAE
- [x] migrate team and tree updating code to use Cron tasks
- [x] spin down GCE version of app
/cc @rsimha @erwinmombay @danielrozenberg | non_comp | migrate from gce to gae the original implementation of the owners bot was based around a google compute engine gce deployment this decision was made because gce provides local filesystem i o which is useful for maintaining a copy of the repository using command line fu to identify owners files and reading the contents of those owners files no other files are read and none are written using gce introduces a handful of issues most notably deployment is fragile takes down the app and relies on a startup script that is not checked in there is no reliable way to manage environment variables especially sensitive keys tokens they are just embedded in the startup script and can t be checked into the repository used to deploy the team is less familiar with gce than gae making maintenance and debugging harder firewall restrictions around gce make ssh and debugging difficult the amount of bootstrapping and configuration embedded only in gce instance metadata makes it hard impossible for other projects to make use of the owners bot as a resolution to these issues the plan is to migrate to google app engine gae this will be a lot of work up front but should make automated and continuous deployment and development of the app much easier and less error prone going forward steps needed to migrate to a gae deployment process configure app to read from env file write the env file within the startup script during migration hoist a base repository class out of the localrepository class and implement a remoterepository subclass using the provide the listownersfiles method using the provide the readfile method maintain a map from owners file paths to the sha of their latest commit implement a caching layer to prevent overloading github api on file reads lift the collection and parsing of owners files the ownership tree out of the owners check and into a single server shared tree isolate bootstrapping code ie initializing teams and 
ownership trees add webhooks to catch updates to owners files and update the corresponding rules in the cache create app yaml and deploy to gae migrate team and tree updating code to use cron tasks spin down gce version of app cc rsimha erwinmombay danielrozenberg | 0 |
2,196 | 4,954,074,601 | IssuesEvent | 2016-12-01 16:37:48 | tex-xet/bidi | https://api.github.com/repos/tex-xet/bidi | opened | Footnotes inside multicols environment overlap footer | Bug Compatibility | If the text inside a `multicols` environment spans more than a page and there are some footnotes, then when the `extrafootnotefeatures` of the `bidi` package is activated and any of the commands
`\paragraphfootnotes`, `\twocolumnfootnotes`, `\threecolumnfootnotes`, ..., and `\tencolumnfootnotes` is used, the footnotes overlap the footer.
````latex
\documentclass{article}
\usepackage{lipsum}
\usepackage{multicol}
\usepackage[extrafootnotefeatures]{bidi}
\paragraphfootnotes
\begin{document}
\begin{multicols}{2}
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\lipsum
\end{multicols}
\end{document}
```` | True | Footnotes inside multicols environment overlap footer - If the text inside a `multicols` environment spans more than a page and there are some footnotes, then when the `extrafootnotefeatures` of the `bidi` package is activated and any of the commands
`\paragraphfootnotes`, `\twocolumnfootnotes`, `\threecolumnfootnotes`, ..., and `\tencolumnfootnotes` is used, the footnotes overlap the footer.
````latex
\documentclass{article}
\usepackage{lipsum}
\usepackage{multicol}
\usepackage[extrafootnotefeatures]{bidi}
\paragraphfootnotes
\begin{document}
\begin{multicols}{2}
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\footnote{This is footnote \thefootnote.}%
\lipsum
\end{multicols}
\end{document}
```` | comp | footnotes inside multicols environment overlap footer if the text inside a multicols environment spans more than a page and there are some footnotes then when the extrafootnotefeatures of the bidi package is activated and any of the commands paragraphfootnotes twocolumnfootnotes threecolumnfootnotes and tencolumnfootnotes is used the footnotes overlap the footer latex documentclass article usepackage lipsum usepackage multicol usepackage bidi paragraphfootnotes begin document begin multicols footnote this is footnote thefootnote footnote this is footnote thefootnote footnote this is footnote thefootnote footnote this is footnote thefootnote footnote this is footnote thefootnote footnote this is footnote thefootnote footnote this is footnote thefootnote footnote this is footnote thefootnote footnote this is footnote thefootnote footnote this is footnote thefootnote lipsum end multicols end document | 1 |
132,559 | 5,188,395,893 | IssuesEvent | 2017-01-20 19:48:47 | giantotter/giantotter_public | https://api.github.com/repos/giantotter/giantotter_public | closed | Script to build an environment | backend: database priority: B | Currently, we do not capture the schemas of Dynamo tables anywhere. We just create them through the AWS dashboard, or on the fly from code. We should have a script that creates the correct Dynamo tables.
Is there anything else the script should create, in terms of AWS?
And, is should this be the same task as issue 648 -- should (and can) the same script setup Dynamo locally and/or on AWS?
| 1.0 | Script to build an environment - Currently, we do not capture the schemas of Dynamo tables anywhere. We just create them through the AWS dashboard, or on the fly from code. We should have a script that creates the correct Dynamo tables.
Is there anything else the script should create, in terms of AWS?
And, is should this be the same task as issue 648 -- should (and can) the same script setup Dynamo locally and/or on AWS?
| non_comp | script to build an environment currently we do not capture the schemas of dynamo tables anywhere we just create them through the aws dashboard or on the fly from code we should have a script that creates the correct dynamo tables is there anything else the script should create in terms of aws and is should this be the same task as issue should and can the same script setup dynamo locally and or on aws | 0 |
14,168 | 17,032,819,243 | IssuesEvent | 2021-07-04 22:53:07 | proofit404/dependencies | https://api.github.com/repos/proofit404/dependencies | closed | Deny to resolve scalar dependencies directly. | backward incompatible feature | Only direct dependencies could be accessed from client code.
Indirect and evaluated dependencies could not be accessed from client code.
Scalar direct dependencies are useless to be stored on the injection scope on their own.
Only classes are allowed to be retrieved directly.
If you need `list` or `dict` to be built from dependencies defined inside the `Injector`, the `shield` object would be considered a class.
```python
class Container(Injector):
foo = 1
Container.foo # error
``` | True | Deny to resolve scalar dependencies directly. - Only direct dependencies could be accessed from client code.
Indirect and evaluated dependencies could not be accessed from client code.
Scalar direct dependencies are useless to be stored on the injection scope on their own.
Only classes are allowed to be retrieved directly.
If you need `list` or `dict` to be built from dependencies defined inside the `Injector`, the `shield` object would be considered a class.
```python
class Container(Injector):
foo = 1
Container.foo # error
``` | comp | deny to resolve scalar dependencies directly only direct dependencies could be accessed from client code indirect and evaluated dependencies could not be accessed from client code scalar direct dependencies are useless to be stored on the injection scope on their own only classes are allowed to be retrieved directly if you need list or dict to be built from dependencies defined inside the injector the shield object would be considered a class python class container injector foo container foo error | 1 |
76,725 | 9,961,746,769 | IssuesEvent | 2019-07-07 08:13:23 | threatspec/threatspec | https://api.github.com/repos/threatspec/threatspec | opened | Generate documentation from code and put it on readthedocs.io | documentation | Hard to keep documentation in sync with changes at the moment, generating from code should help. | 1.0 | Generate documentation from code and put it on readthedocs.io - Hard to keep documentation in sync with changes at the moment, generating from code should help. | non_comp | generate documentation from code and put it on readthedocs io hard to keep documentation in sync with changes at the moment generating from code should help | 0 |
36,664 | 4,751,131,648 | IssuesEvent | 2016-10-22 18:21:56 | eloipuertas/ES2016F | https://api.github.com/repos/eloipuertas/ES2016F | opened | [B] Basic model of a Battering Ram | Design Team-B | Description:
- 3.7. As a designer, I want a model of barracks, so I want to have the riders of Rohan could wait to attack .
[High priority]
> _Acceptance Criteria:_
* _The shape of the Barracks is rectangular._
* _They have a large front door, so the riders can enter in the barracks with their horses._
* _They have a container where the horses eat. In the container there are horses food._
Estimated time effort:: 8h | 1.0 | [B] Basic model of a Battering Ram - Description:
- 3.7. As a designer, I want a model of barracks, so I want to have the riders of Rohan could wait to attack .
[High priority]
> _Acceptance Criteria:_
* _The shape of the Barracks is rectangular._
* _They have a large front door, so the riders can enter in the barracks with their horses._
* _They have a container where the horses eat. In the container there are horses food._
Estimated time effort:: 8h | non_comp | basic model of a battering ram description as a designer i want a model of barracks so i want to have the riders of rohan could wait to attack acceptance criteria the shape of the barracks is rectangular they have a large front door so the riders can enter in the barracks with their horses they have a container where the horses eat in the container there are horses food estimated time effort | 0 |