Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
596,421 | 18,104,276,532 | IssuesEvent | 2021-09-22 17:23:03 | NOAA-GSL/VxLegacyIngest | https://api.github.com/repos/NOAA-GSL/VxLegacyIngest | closed | Overhaul the entire FIM verification workflow | Type: Task Priority: Low | ---
Author Name: **jeffrey.a.hamilton** (jeffrey.a.hamilton)
Original Redmine Issue: 26344, https://vlab.ncep.noaa.gov/redmine/issues/26344
Original Date: 2016-11-15
Original Assignee: jeffrey.a.hamilton
---
The verification scripts included inside the FIM workflow for AC, upper air, and surface verification are in dire need of attention. These scripts do not handle new models, have hard-coded paths, and rely on executables compiled outside of the workflow area. All of this leads to an incredibly difficult system to manage and maintain. Work to alleviate these problems and upgrade where practical and possible.
| 1.0 | Overhaul the entire FIM verification workflow - ---
Author Name: **jeffrey.a.hamilton** (jeffrey.a.hamilton)
Original Redmine Issue: 26344, https://vlab.ncep.noaa.gov/redmine/issues/26344
Original Date: 2016-11-15
Original Assignee: jeffrey.a.hamilton
---
The verification scripts included inside the FIM workflow for AC, upper air, and surface verification are in dire need of attention. These scripts do not handle new models, have hard-coded paths, and rely on executables compiled outside of the workflow area. All of this leads to an incredibly difficult system to manage and maintain. Work to alleviate these problems and upgrade where practical and possible.
| priority | overhaul the entire fim verification workflow author name jeffrey a hamilton jeffrey a hamilton original redmine issue original date original assignee jeffrey a hamilton the verification scripts included inside the fim workflow for ac upper air and surface verification are in dire need of attention these scripts do not handle new models have hard coded paths included and are reliant on executables compiled outside of the workflow area all of this leads to an incredibly difficult system to manage and maintain work to alleviate these problems and upgrade where practical and possible | 1 |
234,782 | 7,726,319,478 | IssuesEvent | 2018-05-24 20:48:29 | SpongePowered/Ore | https://api.github.com/repos/SpongePowered/Ore | closed | Simplify downloads for headless servers | component: backend priority: low status: input wanted | When trying to download from ore on a headless server using curl -O -J -L https://ore.spongepowered.org/Nucleus/Nucleus/versions/1.2.2-S7.0/download
it provides a download link in a README file involving tokens (see below), with a warning that it hasn't been reviewed
curl -O -J -L -d -X "https://ore.spongepowered.org/Nucleus/Nucleus/versions/1.2.2-S7.0/confirm?downloadType=0&token=zxzxzxzxzx&csrfToken=zzxzxzxzxzxz"
Can we rename the file from `README.txt` to something like `<plugin-slug>-<channel>-<version>-README.txt` and add support for Wget, maybe with a warning to use at your own risk
| 1.0 | Simplify downloads for headless servers - When trying to download from ore on a headless server using curl -O -J -L https://ore.spongepowered.org/Nucleus/Nucleus/versions/1.2.2-S7.0/download
it provides a download link in a README file involving tokens (see below), with a warning that it hasn't been reviewed
curl -O -J -L -d -X "https://ore.spongepowered.org/Nucleus/Nucleus/versions/1.2.2-S7.0/confirm?downloadType=0&token=zxzxzxzxzx&csrfToken=zzxzxzxzxzxz"
Can we rename the file from `README.txt` to something like `<plugin-slug>-<channel>-<version>-README.txt` and add support for Wget, maybe with a warning to use at your own risk
| priority | simplify downloads for headless servers when trying to download from ore on a headless server using curl o j l it s provides a download link in a readme file involving tokens see below with a warning that it hasn t been reviewd curl o j l d x can we remame the file from readme txt to something like readme txt and add support for wget maybe with warning to use at your own risk | 1 |
261,856 | 8,246,932,550 | IssuesEvent | 2018-09-11 14:16:47 | ansible/galaxy | https://api.github.com/repos/ansible/galaxy | closed | Ansible pulp plugin | priority/low | Work for this is happening on the Pulp side. Our role will be to consult, and test.
To see the actual status, visit https://pulp.plan.io/projects/ansible_plugin/issues?fixed_version_id=49&set_filter=1&status_id=%2A
Actual plugin code lives here: https://github.com/bmbouter/pulp_ansible | 1.0 | Ansible pulp plugin - Work for this is happening on the Pulp side. Our role will be to consult, and test.
To see the actual status, visit https://pulp.plan.io/projects/ansible_plugin/issues?fixed_version_id=49&set_filter=1&status_id=%2A
Actual plugin code lives here: https://github.com/bmbouter/pulp_ansible | priority | ansible pulp plugin work for this is happening on the pulp side our role will be to consult and test to see the actual status visit actual plugin code lives here | 1 |
20,669 | 2,622,856,253 | IssuesEvent | 2015-03-04 08:07:55 | max99x/dict-lookup-chrome-ext | https://api.github.com/repos/max99x/dict-lookup-chrome-ext | closed | Back and Forward buttons for previous definitions | auto-migrated Priority-Low Type-Enhancement | ```
What steps will reproduce the problem?
1.n/a
2.n/a
3.n/a
What is the expected output? What do you see instead?
n/a
What version of the product are you using? On what operating system?
Dictionary Lookup v.4.1.5...Windows 7
Please provide any additional information below.
IT WOULD BE NICE TO BE ABLE TO BACK-TRACK TO PREVIOUS DEFINITIONS...OR OPEN
MULTIPLE INLINE BOXES!
```
Original issue reported on code.google.com by `dominict...@gmail.com` on 24 Jul 2010 at 5:52 | 1.0 | Back and Forward buttons for previous definitions - ```
What steps will reproduce the problem?
1.n/a
2.n/a
3.n/a
What is the expected output? What do you see instead?
n/a
What version of the product are you using? On what operating system?
Dictionary Lookup v.4.1.5...Windows 7
Please provide any additional information below.
IT WOULD BE NICE TO BE ABLE TO BACK-TRACK TO PREVIOUS DEFINITIONS...OR OPEN
MULTIPLE INLINE BOXES!
```
Original issue reported on code.google.com by `dominict...@gmail.com` on 24 Jul 2010 at 5:52 | priority | back and forward buttons for previous definitions what steps will reproduce the problem n a n a n a what is the expected output what do you see instead n a what version of the product are you using on what operating system dictionary lookup v windows please provide any additional information below it would be nice to be able to back track to previous definitions or open multiple inline boxes original issue reported on code google com by dominict gmail com on jul at | 1 |
803,606 | 29,185,263,262 | IssuesEvent | 2023-05-19 14:57:05 | inlang/inlang | https://api.github.com/repos/inlang/inlang | opened | `inlang machine translate` logs incorrect number of languages | type: bug effort: low priority: low scope: cli | ## Problem
_From https://github.com/osmosis-labs/osmosis-frontend/pull/1575#issuecomment-1554695982_
<img width="910" alt="CleanShot 2023-05-19 at 16 56 42@2x" src="https://github.com/inlang/inlang/assets/35429197/88cc4255-791b-42cc-bde6-2fd1b0aaeb6a">
| 1.0 | `inlang machine translate` logs incorrect number of languages - ## Problem
_From https://github.com/osmosis-labs/osmosis-frontend/pull/1575#issuecomment-1554695982_
<img width="910" alt="CleanShot 2023-05-19 at 16 56 42@2x" src="https://github.com/inlang/inlang/assets/35429197/88cc4255-791b-42cc-bde6-2fd1b0aaeb6a">
| priority | inlang machine translate logs incorrect number of languages problem from img width alt cleanshot at src | 1 |
704,554 | 24,200,696,890 | IssuesEvent | 2022-09-24 14:22:14 | poja/RL | https://api.github.com/repos/poja/RL | closed | Model Compare: save training data from comparison games | priority-low | Currently the number of games played in model comparison is low; we would like to increase it to get a more accurate measure of our model performance.
But these games require a lot of compute, and with the current implementation it's not worth investing so much time in comparison.
By saving training data entries during the comparison games we can afford more games | 1.0 | Model Compare: save training data from comparison games - Currently the number of games played in model comparison is low; we would like to increase it to get a more accurate measure of our model performance.
But these games require a lot of compute, and with the current implementation it's not worth investing so much time in comparison.
By saving training data entries during the comparison games we can afford more games | priority | model compare save training data from comparison games currently the number of games played in model comparison is low we would like to increase it to get more accurate measure of our model performance but these games required a lot of compute and it s not worth it with the current implementation to invest so much time in comparison by saving training data entries during the comparison games we can afford more games | 1 |
307,054 | 9,414,153,705 | IssuesEvent | 2019-04-10 09:29:27 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Handles of valves correction. | Art Low Priority | In real life, the handle of a valve shows the flow direction when open, and looks like an "intersection" when closed.

In Eco, its models are incorrect.

| 1.0 | Handles of valves correction. - In real life, the handle of a valve shows the flow direction when open, and looks like an "intersection" when closed.

In Eco, its models are incorrect.

| priority | handles of valves correction in real life handle of valve show flow direction if open and looks as intersection in closed state in eco his models are incorrect | 1 |
791,705 | 27,873,225,539 | IssuesEvent | 2023-03-21 14:38:53 | BoBAdministration/QA-Bug-Reports | https://api.github.com/repos/BoBAdministration/QA-Bug-Reports | closed | Stray vertice on max fat Megaraptor | low priority | **Describe the Bug**
A stray vertice appears on Megaraptor's throat while it is sitting and max fat
**To Reproduce**
Steps to reproduce the behavior in detail. Please include ALL steps, even menial ones.
1. Log onto any server
2. Spawn in as a Megaraptor and grow to 1.2
3. Eat until max fat
4. Play the sit animation and view the throat area for the stray vertice
**Expected behavior**
There are no stray vertices on Megaraptor's max fat morph
**Actual behavior**
There is a stray vertice on Megaraptor's max fat morph in the throat area
**Screenshots & Video**

**Branch Version**
Live
**Character Information**
1.2 Megaraptor with max food/fat
**Additional Information**
N/A
| 1.0 | Stray vertice on max fat Megaraptor - **Describe the Bug**
A stray vertice appears on Megaraptor's throat while it is sitting and max fat
**To Reproduce**
Steps to reproduce the behavior in detail. Please include ALL steps, even menial ones.
1. Log onto any server
2. Spawn in as a Megaraptor and grow to 1.2
3. Eat until max fat
4. Play the sit animation and view the throat area for the stray vertice
**Expected behavior**
There are no stray vertices on Megaraptor's max fat morph
**Actual behavior**
There is a stray vertice on Megaraptor's max fat morph in the throat area
**Screenshots & Video**

**Branch Version**
Live
**Character Information**
1.2 Megaraptor with max food/fat
**Additional Information**
N/A
| priority | stray vertice on max fat megaraptor describe the bug a stray vertice appears on megaraptor s throat while it is sitting and max fat to reproduce steps to reproduce the behavior in detail please include all steps even menial ones log onto any server spawn in as a megaraptor and grow to eat until max fat play the sit animation and view the throat area for the stray vertice expected behavior there are no stray vertices on megaraptor s max fat morph actual behavior there is a stray vertice on megaraptor s max fat morph in the throat area screenshots video branch version live character information megaraptor with max food fat additional information n a | 1 |
611,884 | 18,983,790,712 | IssuesEvent | 2021-11-21 11:11:51 | chaotic-aur/packages | https://api.github.com/repos/chaotic-aur/packages | closed | [Request] `spicetify-cli-git` | request:new-pkg priority:low | - Link to the package(s) in AUR: [aur.archlinux.org/spicetify-cli-git](https://aur.archlinux.org/packages/spicetify-cli-git)
- Utility this package has for you:
Add
Use the upstream patch fixes without having to wait for a release of the tool
- Do you consider this package(s) to be useful for **every** chaotic user?:
- [ ] YES
- [ ] No, but yes for a great amount.
- [x] No, but yes for a few.
- [ ] No, it's useful only for me.
- Do you consider this package(s) to be useful for feature testing/preview (e.g: mesa-aco, wine-wayland)?:
- [x] YES
- [ ] NO
- Are you sure we don't have this package already (test with `pacman -Ss <pkgname>`)?:
- [x] YES
- Have you tested if this package builds in a clean chroot?:
- [ ] YES
- [x] NO
- Does the package's license allow us to redistribute it?:
- [x] YES (LGPL-2.1 licensed)
- [ ] No clue.
- [ ] No, but the author doesn't really care, it's just for bureaucracy.
- Have you searched the [issues](https://github.com/chaotic-aur/packages/issues) to ensure this request is new (not duplicated)?:
- [x] YES
- Have you read the [README](https://github.com/chaotic-aur/packages#banished-and-rejected-packages) to ensure this package is not banned?:
- [x] YES | 1.0 | [Request] `spicetify-cli-git` - - Link to the package(s) in AUR: [aur.archlinux.org/spicetify-cli-git](https://aur.archlinux.org/packages/spicetify-cli-git)
- Utility this package has for you:
Add
Use the upstream patch fixes without having to wait for a release of the tool
- Do you consider this package(s) to be useful for **every** chaotic user?:
- [ ] YES
- [ ] No, but yes for a great amount.
- [x] No, but yes for a few.
- [ ] No, it's useful only for me.
- Do you consider this package(s) to be useful for feature testing/preview (e.g: mesa-aco, wine-wayland)?:
- [x] YES
- [ ] NO
- Are you sure we don't have this package already (test with `pacman -Ss <pkgname>`)?:
- [x] YES
- Have you tested if this package builds in a clean chroot?:
- [ ] YES
- [x] NO
- Does the package's license allow us to redistribute it?:
- [x] YES (LGPL-2.1 licensed)
- [ ] No clue.
- [ ] No, but the author doesn't really care, it's just for bureaucracy.
- Have you searched the [issues](https://github.com/chaotic-aur/packages/issues) to ensure this request is new (not duplicated)?:
- [x] YES
- Have you read the [README](https://github.com/chaotic-aur/packages#banished-and-rejected-packages) to ensure this package is not banned?:
- [x] YES | priority | spicetify cli git link to the package s in aur utility this package has for you add use the upstream patch fixes without having to wait for a release of the tool do you consider this package s to be useful for every chaotic user yes no but yes for a great amount no but yes for a few no it s useful only for me do you consider this package s to be useful for feature testing preview e g mesa aco wine wayland yes no are you sure we don t have this package already test with pacman ss yes have you tested if this package builds in a clean chroot yes no does the package s license allows us to redistribute it yes lgpl licensed no clue no but the author doesn t really care it s just for bureaucracy have you searched the to ensure this request is new not duplicated yes have you read the to ensure this package is not banned yes | 1 |
387,318 | 11,459,867,017 | IssuesEvent | 2020-02-07 08:27:45 | python-discord/bot | https://api.github.com/repos/python-discord/bot | closed | Tag fuzzy matching and aliasing | area: information priority: 3 - low status: WIP | As discussed briefly in #118 and at least a few times on the server in the recent past, I believe tags could stand to gain from fuzzy matching and possibly from aliasing as well.
In short:
- Fuzzy Matching
- Allows for solving of the plurals problem, like `codeblock` vs `codeblocks` and `f-string` vs `f-strings`
- Potentially solves things like `fstring` vs `f-string`
- Embed should indicate to the user that a fuzzy match was used (e.g. `"Did you mean ... ?"`)
- Aliasing (See #187)
- Requires adding a field to the backend tag model
- Allows for things that fuzzy matching may not realistically be able to hit, like `ytdl` vs `youtube-dl` | 1.0 | Tag fuzzy matching and aliasing - As discussed briefly in #118 and at least a few times on the server in the recent past, I believe tags could stand to gain from fuzzy matching and possibly from aliasing as well.
In short:
- Fuzzy Matching
- Allows for solving of the plurals problem, like `codeblock` vs `codeblocks` and `f-string` vs `f-strings`
- Potentially solves things like `fstring` vs `f-string`
- Embed should indicate to the user that a fuzzy match was used (e.g. `"Did you mean ... ?"`)
- Aliasing (See #187)
- Requires adding a field to the backend tag model
- Allows for things that fuzzy matching may not realistically be able to hit, like `ytdl` vs `youtube-dl` | priority | tag fuzzy matching and aliasing as discussed briefly in and at least a few times on the server in the recent past i believe tags could stand to gain from fuzzy matching and possibly from aliasing as well in short fuzzy matching allows for solving of the plurals problem like codeblock vs codeblocks and f string vs f strings potentially solves things like fstring vs f string embed should indicate to the user that a fuzzy match was used e g did you mean aliasing see requires adding a field to the backend tag model allows for things that fuzzy matching may not realistically be able to hit like ytdl vs youtube dl | 1 |
97,340 | 3,989,030,835 | IssuesEvent | 2016-05-09 12:32:19 | RigsOfRods/rigs-of-rods | https://api.github.com/repos/RigsOfRods/rigs-of-rods | closed | RoR crashes on Auriga when spawning a Gavril Zeta LWB | bug crash low-priority | Steps to reproduce:
+ Load `Auriga Proving Grounds`
+ Do not move, wait until the character touches the asphalt surface
+ Press `CTRL+G`
+ Spawn a `Gavril Zeta LWB`
-> `Segmentation fault`
It does not happen with `0.4.5.1`.
It does not happen when the vehicle is spawned mid air (not touching the ground).
It does not happen when the vehicle is spawned via the truck spawner.
**Edit:** Caused by this commit: 121f507e8ca0b0aa1273f6955dda0fa473181c73
**Edit:** It does not happen when `Caelum` is disabled
**Edit:** It does not seem to happen on resolutions lower than 3840x2160 | 1.0 | RoR crashes on Auriga when spawning a Gavril Zeta LWB - Steps to reproduce:
+ Load `Auriga Proving Grounds`
+ Do not move, wait until the character touches the asphalt surface
+ Press `CTRL+G`
+ Spawn a `Gavril Zeta LWB`
-> `Segmentation fault`
It does not happen with `0.4.5.1`.
It does not happen when the vehicle is spawned mid air (not touching the ground).
It does not happen when the vehicle is spawned via the truck spawner.
**Edit:** Caused by this commit: 121f507e8ca0b0aa1273f6955dda0fa473181c73
**Edit:** It does not happen when `Caelum` is disabled
**Edit:** It does not seem to happen on resolutions lower than 3840x2160 | priority | ror crashes on auriga when spawning a gavril zeta lwb steps to reproduce load auriga proving grounds do not move wait until the character touches the asphalt surface press ctrl g spawn a gavril zeta lwb segmentation fault it does not happen with it does not happen when the vehicle is spawned mid air not touching the ground it does not happen when the vehicle is spawned via the truck spawner edit caused by this commit edit it does not happen when caelum is disabled edit it does not seem to happen on resolutions lower than | 1 |
113,038 | 4,541,881,188 | IssuesEvent | 2016-09-09 19:16:50 | PowerlineApp/powerline-mobile | https://api.github.com/repos/PowerlineApp/powerline-mobile | opened | Change of Address Cache Issue | bug P3 - Low Priority | When user changes address, group cache should be refreshed to avoid problem where user is linked to multiple town state country groups at once. User should only ever be linked to one set of town/state/country groups. | 1.0 | Change of Address Cache Issue - When user changes address, group cache should be refreshed to avoid problem where user is linked to multiple town state country groups at once. User should only ever be linked to one set of town/state/country groups. | priority | change of address cache issue when user changes address group cache should be refreshed to avoid problem where user is linked to multiple town state country groups at once user should only ever be linked to one set of town state country groups | 1 |
129,975 | 5,107,342,513 | IssuesEvent | 2017-01-05 14:40:45 | chef/chef | https://api.github.com/repos/chef/chef | closed | powershell_out with a code block requires double escape for double quotes | Low Priority Windows | ## Description
If I have a script block defined in a library and then pass that to powershell_out, only double-escaped double quotes will end up as valid PowerShell syntax. Since the double quotes are missing, PowerShell will fail trying to run the code that is no longer in a string.
I think the issue is in the powershell_out mixin where the script block is passed into a command to run PowerShell.
[powershell_out.rb](https://github.com/chef/chef/blob/master/lib/chef/mixin/powershell_out.rb#L94)
_Note_: This is happening inside of a LWRP
## Chef Version
``` powershell
> chef -v
Chef Development Kit Version: 0.14.25
chef-client version: 12.10.24
berks version: 4.3.3
kitchen version: 1.8.0
```
## Platform Version
Windows 10 latest
``` powershell
> $PSVersionTable
Name Value
---- -----
PSVersion 5.0.10586.122
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.10586.122
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
```
## Replication Case
``` ruby
def correct_version_running?
script_block = <<-EOF
$myvar= 'variable'
Write-Verbose 'Only single escape needed for other characters: (\\, \#)' -Verbose
Write-Verbose \\"String with $myvar interpolation\\" -Verbose
Write-Verbose \"String with $myvar interpolation\" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
EOF
::Chef::Log.debug("Script block:\n#{script_block}")
result = powershell_out(script_block)
end
```
## Client Output
``` cmd
Mixlib::ShellOut::ShellCommandFailed
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of powershell.exe -NoLogo -NonInteractive -NoProfile -ExecutionPolicy Unrestricted -InputFormat None -Command " $myvar= 'variable'
Write-Verbose 'Only single escape needed for other characters: (\, #)' -Verbose
Write-Verbose \"String with $myvar interpolation\" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
" ----
STDOUT: VERBOSE: Only single escape needed for other characters: (\, #)
VERBOSE: String with variable interpolation
STDERR: Write-Verbose : A positional parameter cannot be found that accepts argument 'with'.
At line:4 char:7
+ Write-Verbose String with $myvar interpolation -Verbose
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Write-Verbose], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.WriteVerboseCommand
Write-Verbose : A positional parameter cannot be found that accepts argument 'with'.
At line:5 char:7
+ Write-Verbose String with $myvar interpolation -Verbose
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Write-Verbose], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.WriteVerboseCommand
---- End output of powershell.exe -NoLogo -NonInteractive -NoProfile -ExecutionPolicy Unrestricted -InputFormat None -Command " $myvar= 'variable'
Write-Verbose 'Only single escape needed for other characters: (\, #)' -Verbose
Write-Verbose \"String with $myvar interpolation\" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
" ----
Ran powershell.exe -NoLogo -NonInteractive -NoProfile -ExecutionPolicy Unrestricted -InputFormat None -Command " $myvar= 'variable'
Write-Verbose 'Only single escape needed for other characters: (\, #)' -Verbose
Write-Verbose \"String with $myvar interpolation\" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
" returned 1
```
## Stacktrace
[chef-stacktrace.txt](https://github.com/chef/chef/files/328952/chef-stacktrace.txt)
| 1.0 | powershell_out with a code block requires double escape for double quotes - ## Description
If I have a script block defined in a library and then pass that to powershell_out, only double-escaped double quotes will end up as valid PowerShell syntax. Since the double quotes are missing, PowerShell will fail trying to run the code that is no longer in a string.
I think the issue is in the powershell_out mixin where the script block is passed into a command to run PowerShell.
[powershell_out.rb](https://github.com/chef/chef/blob/master/lib/chef/mixin/powershell_out.rb#L94)
_Note_: This is happening inside of a LWRP
## Chef Version
``` powershell
> chef -v
Chef Development Kit Version: 0.14.25
chef-client version: 12.10.24
berks version: 4.3.3
kitchen version: 1.8.0
```
## Platform Version
Windows 10 latest
``` powershell
> $PSVersionTable
Name Value
---- -----
PSVersion 5.0.10586.122
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.10586.122
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
```
## Replication Case
``` ruby
def correct_version_running?
script_block = <<-EOF
$myvar= 'variable'
Write-Verbose 'Only single escape needed for other characters: (\\, \#)' -Verbose
Write-Verbose \\"String with $myvar interpolation\\" -Verbose
Write-Verbose \"String with $myvar interpolation\" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
EOF
::Chef::Log.debug("Script block:\n#{script_block}")
result = powershell_out(script_block)
end
```
## Client Output
``` cmd
Mixlib::ShellOut::ShellCommandFailed
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of powershell.exe -NoLogo -NonInteractive -NoProfile -ExecutionPolicy Unrestricted -InputFormat None -Command " $myvar= 'variable'
Write-Verbose 'Only single escape needed for other characters: (\, #)' -Verbose
Write-Verbose \"String with $myvar interpolation\" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
" ----
STDOUT: VERBOSE: Only single escape needed for other characters: (\, #)
VERBOSE: String with variable interpolation
STDERR: Write-Verbose : A positional parameter cannot be found that accepts argument 'with'.
At line:4 char:7
+ Write-Verbose String with $myvar interpolation -Verbose
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Write-Verbose], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.WriteVerboseCommand
Write-Verbose : A positional parameter cannot be found that accepts argument 'with'.
At line:5 char:7
+ Write-Verbose String with $myvar interpolation -Verbose
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Write-Verbose], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.WriteVerboseCommand
---- End output of powershell.exe -NoLogo -NonInteractive -NoProfile -ExecutionPolicy Unrestricted -InputFormat None -Command " $myvar= 'variable'
Write-Verbose 'Only single escape needed for other characters: (\, #)' -Verbose
Write-Verbose \"String with $myvar interpolation\" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
" ----
Ran powershell.exe -NoLogo -NonInteractive -NoProfile -ExecutionPolicy Unrestricted -InputFormat None -Command " $myvar= 'variable'
Write-Verbose 'Only single escape needed for other characters: (\, #)' -Verbose
Write-Verbose \"String with $myvar interpolation\" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
Write-Verbose "String with $myvar interpolation" -Verbose
" returned 1
```
## Stacktrace
[chef-stacktrace.txt](https://github.com/chef/chef/files/328952/chef-stacktrace.txt)
| priority | powershell out with a code block requires double escape for double quotes description if i have a script block defined in a library and then pass that to powershell out only double escaped double quotes will end up as valid powershell syntax since the double quotes are missing powershell will fail trying to run the code that is no longer in a string i think the issue is in the powershell out mixin where the script block is passed into a command to run powershell note this is happening inside of a lwrp chef version powershell chef v chef development kit version chef client version berks version kitchen version platform version windows latest powershell psversiontable name value psversion pscompatibleversions buildversion clrversion wsmanstackversion psremotingprotocolversion serializationversion replication case ruby def correct version running script block eof myvar variable write verbose only single escape needed for other characters verbose write verbose string with myvar interpolation verbose write verbose string with myvar interpolation verbose write verbose string with myvar interpolation verbose eof chef log debug script block n script block result powershell out script block end client output cmd mixlib shellout shellcommandfailed mixlib shellout shellcommandfailed expected process to exit with but received begin output of powershell exe nologo noninteractive noprofile executionpolicy unrestricted inputformat none command myvar variable write verbose only single escape needed for other characters verbose write verbose string with myvar interpolation verbose write verbose string with myvar interpolation verbose write verbose string with myvar interpolation verbose stdout verbose only single escape needed for other characters verbose string with variable interpolation stderr write verbose a positional parameter cannot be found that accepts argument with at line char write verbose string with myvar interpolation verbose categoryinfo invalidargument parameterbindingexception fullyqualifiederrorid positionalparameternotfound microsoft powershell commands writeverbosecommand write verbose a positional parameter cannot be found that accepts argument with at line char write verbose string with myvar interpolation verbose categoryinfo invalidargument parameterbindingexception fullyqualifiederrorid positionalparameternotfound microsoft powershell commands writeverbosecommand end output of powershell exe nologo noninteractive noprofile executionpolicy unrestricted inputformat none command myvar variable write verbose only single escape needed for other characters verbose write verbose string with myvar interpolation verbose write verbose string with myvar interpolation verbose write verbose string with myvar interpolation verbose ran powershell exe nologo noninteractive noprofile executionpolicy unrestricted inputformat none command myvar variable write verbose only single escape needed for other characters verbose write verbose string with myvar interpolation verbose write verbose string with myvar interpolation verbose write verbose string with myvar interpolation verbose returned stacktrace | 1 |
292,436 | 8,958,019,437 | IssuesEvent | 2019-01-27 10:37:09 | OpenPrinting/openprinting.github.io | https://api.github.com/repos/OpenPrinting/openprinting.github.io | closed | Create the Menu Bar: | bug difficulty/low enhancement priority/high | List:
Home Page
Downloads
Tutorials
Project List
Blogs
Contact
Donation
News / Events
Driverless Printing | 1.0 | Create the Menu Bar: - List:
Home Page
Downloads
Tutorials
Project List
Blogs
Contact
Donation
News / Events
Driverless Printing | priority | create the menu bar list home page downloads tutorials project list blogs contact donation news events driverless printing | 1 |
617,756 | 19,404,027,221 | IssuesEvent | 2021-12-19 17:37:16 | NiclasOlofsson/MiNET | https://api.github.com/repos/NiclasOlofsson/MiNET | opened | Update Anvil provider | enhancement Low Priority | The current anvil provider implementation only supports up to 1.11 worlds.
We should update this! | 1.0 | Update Anvil provider - The current anvil provider implementation only supports up to 1.11 worlds.
We should update this! | priority | update anvil provider the current anvil provider implementation only supports up to worlds we should update this | 1 |
530,481 | 15,429,480,063 | IssuesEvent | 2021-03-06 03:49:01 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | Make use of unused Navigation Drawer String | Good First Issue! Help Wanted Priority-Low Stale Strings | `02-string.xml`: `drawer_open` and `drawer_close`
Make use of these for content descriptions on the hamburger button to open and close the drawer. | 1.0 | Make use of unused Navigation Drawer String - `02-string.xml`: `drawer_open` and `drawer_close`
Make use of these for content descriptions on the hamburger button to open and close the drawer. | priority | make use of unused navigation drawer string string xml drawer open and drawer close make use of these for content descriptions on the hamburger button to open and close the drawer | 1
448,481 | 12,951,153,007 | IssuesEvent | 2020-07-19 15:45:03 | alanqchen/Bear-Post | https://api.github.com/repos/alanqchen/Bear-Post | closed | Investigate using localStorage for back button | Low Priority enhancement frontend | When the user goes back, the request should not be made again. Instead the data from the first request should be saved and rendered. | 1.0 | Investigate using localStorage for back button - When the user goes back, the request should not be made again. Instead the data from the first request should be saved and rendered. | priority | investigate using localstorage for back button when the user goes back the request should not be made again instead the data from the first request should be saved and rendered | 1 |
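The record above describes a cache-on-back-navigation idea. A minimal sketch of that logic in Python follows — `localStorage` itself is a browser API, so the dictionary below merely stands in for it, and the names (`PostCache`, `fetch_fn`) are illustrative, not Bear-Post code:

```python
# Hedged sketch: cache the first response per URL and serve it again on
# "back" navigation instead of re-requesting. A dict stands in for
# window.localStorage; all names here are hypothetical.

class PostCache:
    """Serve previously fetched post data instead of requesting it again."""

    def __init__(self, fetch_fn):
        self._fetch_fn = fetch_fn  # e.g. a wrapper around an HTTP GET
        self._store = {}           # stands in for window.localStorage

    def get(self, url):
        if url not in self._store:              # first visit: real request
            self._store[url] = self._fetch_fn(url)
        return self._store[url]                 # back navigation: cached data

calls = []

def fake_fetch(url):
    calls.append(url)
    return {"url": url, "posts": ["first", "second"]}

cache = PostCache(fake_fetch)
first = cache.get("/api/posts?page=1")   # triggers the (fake) request
second = cache.get("/api/posts?page=1")  # served from the cache, no request
```

On repeated `get` calls for the same URL, only one underlying request is made, which is exactly the behavior the issue asks for.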
767,794 | 26,940,822,416 | IssuesEvent | 2023-02-08 01:57:54 | ADACS-Australia/TraceT | https://api.github.com/repos/ADACS-Australia/TraceT | closed | Make the event_log update more than one line per minute | Low Priority Ready to be pulled | The `event_log` only updates 1 line per minute.
As a result, the event log on the website can occasionally get behind. I've seen it be 20 mins behind, but later was able to catch up.
Recommend we update the `twisted_comet_wrapper.py` to poll the output of the twisted log more often (5sec) | 1.0 | Make the event_log update more than one line per minute - The `event_log` only updates 1 line per minute.
As a result, the event log on the website can occasionally get behind. I've seen it be 20 mins behind, but later was able to catch up.
Recommend we update the `twisted_comet_wrapper.py` to poll the output of the twisted log more often (5sec) | priority | make the event log update more than one line per minute the event log only updates line per minute as a result the event log on the website can occasionally get behind i ve seen it be mins behind but later was able to catch up recommend we update the twisted comet wrapper py to poll the output of the twisted log more often | 1 |
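The polling change recommended in this record can be sketched as follows — an assumed behaviour, not the actual `twisted_comet_wrapper.py` code: remember a byte offset into the twisted log and forward every line appended since the last poll, so a burst of events no longer drains at one line per minute.

```python
# Illustrative follower for an append-only log file. In the real wrapper
# this would run in a loop with time.sleep(5) between polls.

import os
import tempfile

class LogFollower:
    """Remembers a byte offset and returns every line appended since last poll."""

    def __init__(self, path):
        self.path = path
        self.offset = 0

    def poll(self):
        with open(self.path, "r") as f:
            f.seek(self.offset)
            new_lines = f.readlines()   # all pending lines, not just one
            self.offset = f.tell()
        return [line.rstrip("\n") for line in new_lines]

# Quick demonstration with a throwaway file
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "twisted.log")
    with open(path, "w") as f:
        f.write("event 1\nevent 2\n")
    follower = LogFollower(path)
    batch1 = follower.poll()            # both pending lines at once
    with open(path, "a") as f:
        f.write("event 3\n")
    batch2 = follower.poll()            # only the newly appended line
```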
771,795 | 27,092,531,860 | IssuesEvent | 2023-02-14 22:28:26 | glific/glific-frontend | https://api.github.com/repos/glific/glific-frontend | closed | Few updates for the settings screen | enhancement good first issue Priority : Low | 1. Options on the settings page are increasing. Going into the page and trying to find a setting might get more time consuming as we add sheet integration and other options. For that, it might be good to provide a faster access to the options:
This list can show up on hovering over the settings icon similar to how it happens for the contact management icon. Clicking on the setting icon will still take users to the page with all the blocks.
<img width="533" alt="image" src="https://user-images.githubusercontent.com/16714604/194508269-6f12297b-bed0-49e3-84f5-85e1baf77563.png">
2. Make the entire block along with the text inside clickable instead of just the edit icon
<img width="737" alt="image" src="https://user-images.githubusercontent.com/16714604/194512041-a2a7e2ce-a7bb-4cf3-aeb3-bf476a802773.png">
| 1.0 | Few updates for the settings screen - 1. Options on the settings page are increasing. Going into the page and trying to find a setting might get more time consuming as we add sheet integration and other options. For that, it might be good to provide a faster access to the options:
This list can show up on hovering over the settings icon similar to how it happens for the contact management icon. Clicking on the setting icon will still take users to the page with all the blocks.
<img width="533" alt="image" src="https://user-images.githubusercontent.com/16714604/194508269-6f12297b-bed0-49e3-84f5-85e1baf77563.png">
2. Make the entire block along with the text inside clickable instead of just the edit icon
<img width="737" alt="image" src="https://user-images.githubusercontent.com/16714604/194512041-a2a7e2ce-a7bb-4cf3-aeb3-bf476a802773.png">
| priority | few updates for the settings screen options on the settings page are increasing going into the page and trying to find a setting might get more time consuming as we add sheet integration and other options for that it might be good to provide a faster access to the options this list can show up on hovering over the settings icon similar to how it happens for the contact management icon clicking on the setting icon will still take users to the page with all the blocks img width alt image src make the entire block along with the text inside clickable instead of just the edit icon img width alt image src | 1 |
655,367 | 21,687,406,186 | IssuesEvent | 2022-05-09 12:38:06 | serverlessworkflow/synapse | https://api.github.com/repos/serverlessworkflow/synapse | opened | Improve workflow instance details display speed & Blazor monaco feedback | enhancement dashboard priority: low weight: 1 | **What would you like to be added**:
Checking the details of a workflow instance can be very slow, the more activities (input/output payloads), the slower it gets. This is due to Blazor Monaco being a little slow and the accumulation its of instances in the workflow instance details. To prevent that problem, accordions should not render their body if it's not "expanded" (instead of just toggling css classes).
It would also be nice to add a loader indicator while Blazor Monaco is processing the data.
**Why is this needed**:
Better UX.
| 1.0 | Improve workflow instance details display speed & Blazor monaco feedback - **What would you like to be added**:
Checking the details of a workflow instance can be very slow, the more activities (input/output payloads), the slower it gets. This is due to Blazor Monaco being a little slow and the accumulation its of instances in the workflow instance details. To prevent that problem, accordions should not render their body if it's not "expanded" (instead of just toggling css classes).
It would also be nice to add a loader indicator while Blazor Monaco is processing the data.
**Why is this needed**:
Better UX.
| priority | improve workflow instance details display speed blazor monaco feedback what would you like to be added checking the details of a workflow instance can be very slow the more activities input output payloads the slower it gets this is due to blazor monaco being a little slow and the accumulation its of instances in the workflow instance details to prevent that problem accordions should not render their body if it s not expanded instead of just toggling css classes it would also be nice to add a loader indicator while blazor monaco is processing the data why is this needed better ux | 1 |
493,703 | 14,236,985,202 | IssuesEvent | 2020-11-18 16:39:07 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [Studio] dates in recently published dashboard should stay expanded when screen refreshes | bug priority: low | Original JIRA Fix Versions:
2.5.2, Original JIRA Components:
Studio,
----------
Original JIRA Description: :
----------
Original JIRA Comments:
I can't reproduce this. Please check the following video:
http://screencast.com/t/dWsIOje1RXSf
----------
Original JIRA:
http://issues.craftercms.org/browse/CRAFTERCMS-2038
---------- | 1.0 | [Studio] dates in recently published dashboard should stay expanded when screen refreshes - Original JIRA Fix Versions:
2.5.2, Original JIRA Components:
Studio,
----------
Original JIRA Description: :
----------
Original JIRA Comments:
I can't reproduce this. Please check the following video:
http://screencast.com/t/dWsIOje1RXSf
----------
Original JIRA:
http://issues.craftercms.org/browse/CRAFTERCMS-2038
---------- | priority | dates in recently published dashboard should stay expanded when screen refreshes original jira fix versions original jira components studio original jira description original jira comments i can t reproduce this please check the following video original jira | 1 |
1,693 | 2,517,631,827 | IssuesEvent | 2015-01-16 16:09:21 | duckduckgo/community-platform | https://api.github.com/repos/duckduckgo/community-platform | closed | Don't bump forum posts when comments set to deleted user | Bug Forum Priority: Low | During account deletion, comment contributions are set to a "deleted user" account (I don't like that there has to be a 'deleted user' account, but that's a separate issue). For each comment encountered, its 'updated' timestamp is reset which bumps the associated thread up the list on the forum.
*edit* A related issue is threads don't have their user reset - only the related comments. | 1.0 | Don't bump forum posts when comments set to deleted user - During account deletion, comment contributions are set to a "deleted user" account (I don't like that there has to be a 'deleted user' account, but that's a separate issue). For each comment encountered, its 'updated' timestamp is reset which bumps the associated thread up the list on the forum.
*edit* A related issue is threads don't have their user reset - only the related comments. | priority | don t bump forum posts when comments set to deleted user during account deletion comment contributions are set to a deleted user account i don t like that there has to be a deleted user account but that s a separate issue for each comment encountered its updated timestamp is reset which bumps the associated thread up the list on the forum edit a related issue is threads don t have their user reset only the related comments | 1 |
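The fix implied by this record — reassign the comment's author without letting the `updated` timestamp reset — can be sketched generically (the community-platform itself is Perl/DBIx::Class; the names below are illustrative):

```python
# Hedged sketch of "don't bump on reassignment": capture the old `updated`
# value before the write and restore it afterwards, since thread ordering
# on the forum is driven by that timestamp.

from dataclasses import dataclass

@dataclass
class Comment:
    user: str
    updated: int  # epoch seconds; drives forum thread ordering

def reassign_to_deleted_user(comment: Comment) -> Comment:
    old_updated = comment.updated
    comment.user = "deleted-user"
    # In an ORM this write would normally bump `updated`; restore it so the
    # parent thread is not pushed to the top of the forum.
    comment.updated = old_updated
    return comment

c = reassign_to_deleted_user(Comment(user="alice", updated=1700000000))
```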
168,350 | 6,370,220,071 | IssuesEvent | 2017-08-01 13:46:03 | Cxbx-Reloaded/Cxbx-Reloaded | https://api.github.com/repos/Cxbx-Reloaded/Cxbx-Reloaded | opened | Add "Drag and Drop" XBE-loading feature to GUI | good-for-beginners low-priority user interface | For this issue, extend the Cxbx GUI so that an XBE can be loaded using drag and drop.
This should only be accepted when emulation is not running.
If no XBE is loaded yet, load the dropped XBE. If an XBE is already loaded, replace it with the dropped XBE.
Be careful to use correct paths. | 1.0 | Add "Drag and Drop" XBE-loading feature to GUI - For this issue, extend the Cxbx GUI so that an XBE can be loaded using drag and drop.
This should only be accepted when emulation is not running.
If no XBE is loaded yet, load the dropped XBE. If an XBE is already loaded, replace it with the dropped XBE.
Be careful to use correct paths. | priority | add drag and drop xbe loading feature to gui for this issue extend the cxbx gui so that an xbe can be loaded using drag and drop this should only be accepted when emulation is not running if no xbe is loaded yet load the dropped xbe if an xbe is already loaded replace it with the dropped xbe be careful to use correct paths | 1
35,839 | 2,793,225,074 | IssuesEvent | 2015-05-11 09:30:26 | bounswe/bounswe2015group3 | https://api.github.com/repos/bounswe/bounswe2015group3 | closed | wiki sidebar | auto-migrated Priority-Low Type-Task | ```
The sidebar needs to have a better look.
```
Original issue reported on code.google.com by `tuba.top...@gmail.com` on 20 Feb 2015 at 11:03 | 1.0 | wiki sidebar - ```
The sidebar needs to have a better look.
```
Original issue reported on code.google.com by `tuba.top...@gmail.com` on 20 Feb 2015 at 11:03 | priority | wiki sidebar the sidebar needs to have a better look original issue reported on code google com by tuba top gmail com on feb at | 1 |
165,333 | 6,274,625,330 | IssuesEvent | 2017-07-18 03:02:18 | Railcraft/Railcraft | https://api.github.com/repos/Railcraft/Railcraft | closed | Fluid Cart fluid issues | bug carts implemented inventory priority-low | testing on the creative server
- [ ] fluid carts won't accept fluid from items
- [ ] fluid doesn't update until gui is opened
- [ ] somehow, fluid can exist/filter can be changed on a tank cart with a different filter

| 1.0 | Fluid Cart fluid issues - testing on the creative server
- [ ] fluid carts won't accept fluid from items
- [ ] fluid doesn't update until gui is opened
- [ ] somehow, fluid can exist/filter can be changed on a tank cart with a different filter

| priority | fluid cart fluid issues testing on the creative server fluid carts won t accept fluid from items fluid doesn t update until gui is opened somehow fluid can exist filter can be changed on a tank cart with a different filter | 1 |
123,654 | 4,866,587,637 | IssuesEvent | 2016-11-15 00:23:32 | lgblgblgb/xemu | https://api.github.com/repos/lgblgblgb/xemu | closed | Audio support for VIC-20 emulator | feature low priority STALLING | Write some (not so precise) audio support emulation for the VIC-20. As it's relatively simple (compared to SIDs) I can write/test it. The next round will then be a generic audio layer, which can be used for all emulated machines based on this work.
| 1.0 | Audio support for VIC-20 emulator - Write some (not so precise) audio support emulation for the VIC-20. As it's relatively simple (compared to SIDs) I can write/test it. The next round will then be a generic audio layer, which can be used for all emulated machines based on this work.
| priority | audio support for vic emulator write some not so precise audio support emulation for the vic as it s relatively simple compared to sids i can write test it the next round will then be a generic audio layer which can be used for all emulated machines based on this work | 1
598,132 | 18,237,655,488 | IssuesEvent | 2021-10-01 09:00:31 | Darklight-03/minecraft-cloudformation-server | https://api.github.com/repos/Darklight-03/minecraft-cloudformation-server | opened | fix timeout on start/stop server | bug priority:low | Sometimes when clicking start or stop on the menu, you get `this interaction failed.`, but then the server does start/stop afterwards.
This is because the API calls to start/stop the server take too long to complete within the 3-second Discord timeout window.
We can return fast and call a new Lambda function which actually executes the commands. | 1.0 | fix timeout on start/stop server - Sometimes when clicking start or stop on the menu, you get `this interaction failed.`, but then the server does start/stop afterwards.
This is because the API calls to start/stop the server take too long to complete within the 3-second Discord timeout window.
We can return fast and call a new Lambda function which actually executes the commands. | priority | fix timeout on start stop server sometimes when clicking start or stop on the menu you get this interaction failed but then the server does start stop afterwards this is because the api calls to start stop the server take too long to complete within the second discord timeout window we can return fast and call a new lambda function which actually executes the commands | 1
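The "return fast" pattern proposed in this record can be sketched like this; a background thread stands in for the second Lambda invocation (in AWS this would be an asynchronous `Invoke` call), and the handler names are illustrative:

```python
# Hedged sketch: acknowledge the Discord interaction immediately and hand
# the slow work to a background worker, so the 3-second ack deadline is met.

import threading

results = []

def start_server(instance_id):
    """Stands in for the slow start/stop API call that blows the 3 s window."""
    results.append(f"started {instance_id}")

def handle_interaction(instance_id):
    # Kick off the slow part without waiting for it...
    worker = threading.Thread(target=start_server, args=(instance_id,))
    worker.start()
    # ...and acknowledge within the 3 second deadline.
    return {"type": "deferred", "content": "Starting server..."}, worker

ack, worker = handle_interaction("i-1234")
worker.join()  # only for the demo; the handler itself never waits
```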
78,061 | 3,509,208,742 | IssuesEvent | 2016-01-08 21:34:13 | leeensminger/OED_Wetlands | https://api.github.com/repos/leeensminger/OED_Wetlands | reopened | Mitigation Reviews - Erroneous required fields | bug - low priority | The following fields are required fields, but should be nullable:
• SITE_SELECTION_PHASE.AGENCY_APPROVAL
• SITE_SELECTION_PHASE.NEPA_REVIEW_APPROVAL
• FINAL_SITE_DETERMINATION_PHASE.NEPA_ISSUES
• FINAL_SITE_DETERMINATION_PHASE.ORE_APPROAL
• FINAL_SITE_DETERMINATION_PHASE.SELECTION_LETTER_SENT
• SITE_ACQUISITION_PHASE.ACQUISITION_APPROVAL
• SITE_ACQUISITION_PHASE.ACQUISITION_LETTER_SENT
| 1.0 | Mitigation Reviews - Erroneous required fields - The following fields are required fields, but should be nullable:
• SITE_SELECTION_PHASE.AGENCY_APPROVAL
• SITE_SELECTION_PHASE.NEPA_REVIEW_APPROVAL
• FINAL_SITE_DETERMINATION_PHASE.NEPA_ISSUES
• FINAL_SITE_DETERMINATION_PHASE.ORE_APPROAL
• FINAL_SITE_DETERMINATION_PHASE.SELECTION_LETTER_SENT
• SITE_ACQUISITION_PHASE.ACQUISITION_APPROVAL
• SITE_ACQUISITION_PHASE.ACQUISITION_LETTER_SENT
| priority | mitigation reviews erroneous required fields the following fields are required fields but should be nullable • site selection phase agency approval • site selection phase nepa review approval • final site determination phase nepa issues • final site determination phase ore approal • final site determination phase selection letter sent • site acquisition phase acquisition approval • site acquisition phase acquisition letter sent | 1 |
252,116 | 8,032,114,298 | IssuesEvent | 2018-07-28 11:19:36 | MoonchildProductions/UXP | https://api.github.com/repos/MoonchildProductions/UXP | closed | Pale Moon can't load about: pages from about:support and about:about when the latter is loaded via various "load in new tab/window" UI actions | App: Pale Moon Assigned C: Networking C: UI Low Priority | It seems this issue is related to the following bugs:
* https://bugzilla.mozilla.org/show_bug.cgi?id=1329032
* https://bugzilla.mozilla.org/show_bug.cgi?id=1331686
* https://bugzilla.mozilla.org/show_bug.cgi?id=1333726
I also found that while these bugs are targeted to Firefox 53-54, they are somehow already partially implemented in the current UXP tree.
At the same time Basilisk doesn't have this problem. | 1.0 | Pale Moon can't load about: pages from about:support and about:about when the latter is loaded via various "load in new tab/window" UI actions - It seems this issue is related to the following bugs:
* https://bugzilla.mozilla.org/show_bug.cgi?id=1329032
* https://bugzilla.mozilla.org/show_bug.cgi?id=1331686
* https://bugzilla.mozilla.org/show_bug.cgi?id=1333726
I also found that while these bugs are targeted to Firefox 53-54, they are somehow already partially implemented in the current UXP tree.
At the same time Basilisk doesn't have this problem. | priority | pale moon can t load about pages from about support and about about when the latter is loaded via various load in new tab window ui actions it seems this issue is related to the following bugs i also found that while these bugs are targeted to firefox they are somehow already partially implemented in the current uxp tree at the same time basilisk doesn t have this problem | 1 |
699,707 | 24,029,010,005 | IssuesEvent | 2022-09-15 13:45:34 | s1lentq/ReGameDLL_CS | https://api.github.com/repos/s1lentq/ReGameDLL_CS | closed | Defuse kit scoreboard msg don't update on pickUp | Type: Bug in original gamedll Priority: Low | > Defuse kit scoreboard message is only set on buy, if you happen to pick it up - it will not be networked.
Very useful imho for when you pick up defuse kits later in the round - and your teammate didn’t buy kits either and he thinks the round is lost… starts defusing it thinking the other teammates don’t have kits.
https://www.youtube.com/watch?v=5e7CUNDLi-Q
Plugin fix:
```pawn
#include <amxmodx>
#include <reapi>
#define AUTHOR "deprale"
#define VERSION "1.0"
#define PLUGIN "[REAPI] DKIT PICKUP FIX"
#define SCORE_STATUS_DEFKIT (1<<3)
#define ITEM_TO_MONITOR ITEM_DEFUSEKIT
public plugin_init( ) {
register_plugin( AUTHOR, VERSION, PLUGIN )
RegisterHookChain( RG_CBasePlayer_HasRestrictItem, "Fw_HasRestrictItem_Pre", 0 )
}
public Fw_HasRestrictItem_Pre( id, ItemID:iItem, ItemRestType: iType )
{
// Hook only player touching
if ( iType == ITEM_TYPE_TOUCHED )
{
if ( iItem == ITEM_TO_MONITOR )
{
message_begin( MSG_BROADCAST, get_user_msgid( "ScoreAttrib" ) )
write_byte( id )
write_byte( SCORE_STATUS_DEFKIT )
message_end( )
}
}
return HC_CONTINUE
}
```
report source:
https://forums.fastcup.net/t/quality-of-life-improvement/6128/2 | 1.0 | Defuse kit scoreboard msg don't update on pickUp - > Defuse kit scoreboard message is only set on buy, if you happen to pick it up - it will not be networked.
Very useful imho for when you pick up defuse kits later in the round - and your teammate didn’t buy kits either and he thinks the round is lost… starts defusing it thinking the other teammates don’t have kits.
https://www.youtube.com/watch?v=5e7CUNDLi-Q
Plugin fix:
```pawn
#include <amxmodx>
#include <reapi>
#define AUTHOR "deprale"
#define VERSION "1.0"
#define PLUGIN "[REAPI] DKIT PICKUP FIX"
#define SCORE_STATUS_DEFKIT (1<<3)
#define ITEM_TO_MONITOR ITEM_DEFUSEKIT
public plugin_init( ) {
register_plugin( AUTHOR, VERSION, PLUGIN )
RegisterHookChain( RG_CBasePlayer_HasRestrictItem, "Fw_HasRestrictItem_Pre", 0 )
}
public Fw_HasRestrictItem_Pre( id, ItemID:iItem, ItemRestType: iType )
{
// Hook only player touching
if ( iType == ITEM_TYPE_TOUCHED )
{
if ( iItem == ITEM_TO_MONITOR )
{
message_begin( MSG_BROADCAST, get_user_msgid( "ScoreAttrib" ) )
write_byte( id )
write_byte( SCORE_STATUS_DEFKIT )
message_end( )
}
}
return HC_CONTINUE
}
```
report source:
https://forums.fastcup.net/t/quality-of-life-improvement/6128/2 | priority | defuse kit scoreboard msg don t update on pickup defuse kit scoreboard message is only set on buy if you happen to pick it up it will not be networked very useful imho for when you pick up defuse kits later in the round and your teammate didn’t buy kits either and he thinks the round is lost… starts defusing it thinking the other teammates don’t have kits plugin fix pawn include include define author deprale define version define plugin dkit pickup fix define score status defkit define item to monitor item defusekit public plugin init register plugin author version plugin registerhookchain rg cbaseplayer hasrestrictitem fw hasrestrictitem pre public fw hasrestrictitem pre id itemid iitem itemresttype itype hook only player touching if itype item type touched if iitem item to monitor message begin msg broadcast get user msgid scoreattrib write byte id write byte score status defkit message end return hc continue report source | 1 |
781,019 | 27,418,836,480 | IssuesEvent | 2023-03-01 15:26:14 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | closed | Historical Flows do not show server alias or IP, but one of the L7 web domains | Low-Priority Bug In Progress | **Environment**:
* OS name: Debian
* OS version: 11.6
* Architecture: x86_64
* ntopng version/revision: ntopng Enterprise M v.5.6.230221
When browsing the *historical flows*, it seems the server hostname is using neither the reverse DNS, the IP or the alias, but just any of the domains the server is providing (webserver).
The server where ntopng runs is a web server providing dozens of domains, but I added an alias to the server and the reverse lookup of the IP also resolves to the alias.
This shows an example in the historical flows. It shows the server using "www.the...." as domain, which is neither the reverse lookup nor the alias. The Info column shows the actual web request domains which have been requested like www.tipo... .
<kbd>
<img width="2244" alt="image" src="https://user-images.githubusercontent.com/8441971/220681888-ec31a69e-da6f-4eaf-8344-a159878fe8b8.png">
</kbd>
When accessing the details, it shows that the web request actually was for a totally different domain www.tipo...
<kbd>
<img width="1524" alt="image" src="https://user-images.githubusercontent.com/8441971/220681916-b955a2fd-da76-476c-8602-c33fe5e84a53.png" border="1">
</kbd>
I would expect the historical flow explorer to show it the same way as the live flows: always using the alias for the server in the "server" column, or at least the IP or the reverse lookup, but nothing from the L7 headers.
<kbd>
<img width="2619" alt="image" src="https://user-images.githubusercontent.com/8441971/220684338-ba686a3f-25ae-41b9-8c21-df794ef5ec67.png">
</kbd>
| 1.0 | Historical Flows do not show server alias or IP, but one of the L7 web domains - **Environment**:
* OS name: Debian
* OS version: 11.6
* Architecture: x86_64
* ntopng version/revision: ntopng Enterprise M v.5.6.230221
When browsing the *historical flows*, it seems the server hostname is using neither the reverse DNS, the IP or the alias, but just any of the domains the server is providing (webserver).
The server where ntopng runs is a web server providing dozens of domains, but I added an alias to the server and the reverse lookup of the IP also resolves to the alias.
This shows an example in the historical flows. It shows the server using "www.the...." as domain, which is neither the reverse lookup nor the alias. The Info column shows the actual web request domains which have been requested like www.tipo... .
<kbd>
<img width="2244" alt="image" src="https://user-images.githubusercontent.com/8441971/220681888-ec31a69e-da6f-4eaf-8344-a159878fe8b8.png">
</kbd>
When accessing the details, it shows that the web request actually was for a totally different domain www.tipo...
<kbd>
<img width="1524" alt="image" src="https://user-images.githubusercontent.com/8441971/220681916-b955a2fd-da76-476c-8602-c33fe5e84a53.png" border="1">
</kbd>
I would expect the historical flow explorer to show it the same way as the live flows: always using the alias for the server in the "server" column, or at least the IP or the reverse lookup, but nothing from the L7 headers.
<kbd>
<img width="2619" alt="image" src="https://user-images.githubusercontent.com/8441971/220684338-ba686a3f-25ae-41b9-8c21-df794ef5ec67.png">
</kbd>
| priority | historical flows do not show server alias or ip but one of the web domains environment os name debian os version architecture ntopng version revision ntopng enterprise m v when browsing the historical flows it seems the server hostname is using neither the reverse dns the ip or the alias but just any of the domains the server is providing webserver the server where ntopng runs is a web server providing dozens of domains but i added an alias to the server and the reverse lookup of the ip also resolves to the alias this shows an example in the historical flows it shows the server using as domain which is neither the reverse lookup nor the alias the info column shows the actual web request domains which have been requested like img width alt image src when accessing the details it shows that the web request actually was for a totally different domain i would expect that the historical flow explorer shows it the same way than the live flows having always the alias for the server as server column or at least the ip or the reverse lookup but not anything of the headers img width alt image src | 1 |
639,460 | 20,754,534,211 | IssuesEvent | 2022-03-15 10:53:57 | Plutonomicon/cardano-browser-tx | https://api.github.com/repos/Plutonomicon/cardano-browser-tx | closed | numbers or adt enums as ffi to number based enums | question lower-priority | Cardano-serialization-library exposes enums as numbers in its API; should we stick to that or define our own ADT enums? The first results in less code and conversion; the latter gives a PS-native feel and needs no smart ctors.
> I liked the previous representation of `NetworkTag`, what was the reason for this change? In the functions that return a `BaseAddress` below, couldn't `netTagToInt` be used?
_Originally posted by @ngua in https://github.com/Plutonomicon/cardano-browser-tx/pull/111#discussion_r808670183_ | 1.0 | numbers or adt enums as ffi to number based enums - Cardano-serialization-library exposes enums as numbers in its API; should we stick to that or define our own ADT enums? The first results in less code and conversion; the latter gives a PS-native feel and needs no smart ctors.
> I liked the previous representation of `NetworkTag`, what was the reason for this change? In the functions that return a `BaseAddress` below, couldn't `netTagToInt` be used?
_Originally posted by @ngua in https://github.com/Plutonomicon/cardano-browser-tx/pull/111#discussion_r808670183_ | priority | numbers or adt enums as ffi to number based enums cardano serialization library exposes enums as numbers in its api should we stick to that or define our own adt enums the first results in less code and conversion the latter gives a ps native feel and needs no smart ctors i liked the previous representation of networktag what was the reason for this change in the functions that return a baseaddress below couldn t nettagtoint be used originally posted by ngua in | 1
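For comparison, Python's `IntEnum` shows the middle ground being discussed — named variants for the typed side that still round-trip through the raw numbers an FFI layer expects. `NetworkTag` and its values here are illustrative, not the actual cardano-serialization-lib encoding:

```python
# Hedged sketch of "named enum over a numeric FFI": the typed code works
# with variants, the boundary sees plain ints, and unknown numbers are
# rejected in smart-constructor style.

from enum import IntEnum

class NetworkTag(IntEnum):
    TESTNET = 0
    MAINNET = 1

def net_tag_to_int(tag: NetworkTag) -> int:
    """Analogue of the netTagToInt conversion mentioned in the review."""
    return int(tag)

def int_to_net_tag(n: int) -> NetworkTag:
    # Smart-constructor-style check: reject numbers with no named variant
    return NetworkTag(n)  # raises ValueError for unknown values

raw = net_tag_to_int(NetworkTag.MAINNET)   # what the FFI boundary sees
back = int_to_net_tag(raw)                 # what the typed code works with
```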
524,421 | 15,213,309,342 | IssuesEvent | 2021-02-17 11:36:45 | nf-core/tools | https://api.github.com/repos/nf-core/tools | closed | Shields.io badge not recognised | linting low-priority | From https://shields.io/
> We support .svg and .json. The default is .svg, which can be omitted from the URL.
So such a badge works:
```Markdown
[](http://bioconda.github.io/)
```
[](http://bioconda.github.io/)
But it's not recognised by linting.
I'm guessing we're checking for the exact existence of this line:
```Markdown
[](http://bioconda.github.io/)
```
We could be slightly more lenient | 1.0 | Shields.io badge not recognised - From https://shields.io/
> We support .svg and .json. The default is .svg, which can be omitted from the URL.
So such a badge works:
```Markdown
[](http://bioconda.github.io/)
```
[](http://bioconda.github.io/)
But it's not recognised by linting.
I'm guessing we're checking for the exact existence of this line:
```Markdown
[](http://bioconda.github.io/)
```
We could be slightly more lenient | priority | shields io badge not recognised from we support svg and json the default is svg which can be omitted from the url so such a badge works markdown but it s not recognised by linting i m guessing we re checking for the exact existence of this line markdown we could be slightly more lenient | 1 |
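A more lenient check along the lines suggested could look like this in Python — the exact badge markdown is assumed (the image part of the badge is elided in the record above), and the real nf-core lint code may differ:

```python
# Hedged sketch: accept the Bioconda badge whether the shields.io URL ends
# in .svg, .json, or has no extension at all. The badge URL below is an
# assumption, not taken verbatim from nf-core/tools.

import re

BADGE_RE = re.compile(
    r"\[!\[install with bioconda\]"
    r"\(https://img\.shields\.io/badge/install%20with-bioconda-brightgreen"
    r"(?:\.svg|\.json)?\)\]"          # extension is optional on shields.io
    r"\(http://bioconda\.github\.io/\)"
)

with_ext = "[![install with bioconda](https://img.shields.io/badge/install%20with-bioconda-brightgreen.svg)](http://bioconda.github.io/)"
without_ext = "[![install with bioconda](https://img.shields.io/badge/install%20with-bioconda-brightgreen)](http://bioconda.github.io/)"

ok_both = bool(BADGE_RE.search(with_ext)) and bool(BADGE_RE.search(without_ext))
rejected = BADGE_RE.search(
    "[![badge](https://example.com/x.svg)](http://bioconda.github.io/)"
) is None
```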
201,254 | 7,028,052,127 | IssuesEvent | 2017-12-25 06:18:26 | WeRelic/RL_ReplayName | https://api.github.com/repos/WeRelic/RL_ReplayName | opened | [LOW] Replay renaming is broken. | bug low priority | Replay names can no longer be changed (or found).
Possible issue with JSON output from rattletrap. | 1.0 | [LOW] Replay renaming is broken. - Replay names can no longer be changed (or found).
Possible issue with JSON output from rattletrap. | priority | replay renaming is broken replay names can no longer be changed or found possible issue with json output from rattletrap | 1 |
442,644 | 12,748,649,819 | IssuesEvent | 2020-06-26 20:37:10 | ooni/probe | https://api.github.com/repos/ooni/probe | opened | Migrate away from unmaintained lottie library | chore effort/XS ooni/probe-desktop priority/low | `probe-desktop` depends on [chenqingspring/react-lottie](https://github.com/chenqingspring/react-lottie) which isn't maintained anymore. We should consider moving to a fork like [this one](https://github.com/crello/react-lottie). There could be others suggested in the issues/PRs of the abandoned repo. | 1.0 | Migrate away from unmaintained lottie library - `probe-desktop` depends on [chenqingspring/react-lottie](https://github.com/chenqingspring/react-lottie) which isn't maintained anymore. We should consider moving to a fork like [this one](https://github.com/crello/react-lottie). There could be others suggested in the issues/PRs of the abandoned repo. | priority | migrate away from unmaintained lottie library probe desktop depends on which isn t maintained anymore we should consider moving to a fork like there could be others suggested in the issues prs of the abandoned repo | 1 |
387,830 | 11,470,829,251 | IssuesEvent | 2020-02-09 06:40:00 | IlchCMS/Ilch-2.0 | https://api.github.com/repos/IlchCMS/Ilch-2.0 | opened | Update-Funktion: getUpdate() anschließend ausführen | Priority: Low Type: Enhancement | Currently, the order in which the getUpdate() methods are executed has to be taken into account.
Example "replacing the vendor folder":
The new vendor folder has to be named e.g. "_vendor" so that it is guaranteed to exist before the admin module's getUpdate() runs.
Had the new folder been named e.g. "vendor_", the admin module's getUpdate() would have run at a point in time when the "vendor_" folder did not yet exist.
Renaming the "vendor_" folder to "vendor", for example, would then fail.
It would probably make more sense here to first unpack the complete update and only afterwards run the getUpdate() of all modules etc. | 1.0 | Update-Funktion: getUpdate() anschließend ausführen - Currently, the order in which the getUpdate() methods are executed has to be taken into account.
Beispiel "Ersetzen des vendor-Ordners":
Den neuen vendor-Ordner muss man z.B. "_vendor" nennen, damit dieser auf jeden Fall vor der Ausführung der getUpdate() des Admin-Moduls existiert.
Hätte man den neuen Ordner z.B. "vendor_" genannt, wäre die getUpdate() des Admin-Moduls zu einem Zeitpunkt ausgeführt worden, in den der "vendor_"-Ordner noch nicht existiert.
Das z.B. Umbenennen des "vendor_"-Ordners in "vendor" würde dann fehlschlagen.
It would probably make more sense here to first unpack the complete update and only afterwards run the getUpdate() of all modules etc. | priority | update funktion getupdate anschließend ausführen aktuell ist es der fall dass man die reihenfolge in der die getupdate ausgeführt werden berücksichtigen muss beispiel ersetzen des vendor ordners den neuen vendor ordner muss man z b vendor nennen damit dieser auf jeden fall vor der ausführung der getupdate des admin moduls existiert hätte man den neuen ordner z b vendor genannt wäre die getupdate des admin moduls zu einem zeitpunkt ausgeführt worden in den der vendor ordner noch nicht existiert das z b umbenennen des vendor ordners in vendor würde dann fehlschlagen hier wäre es wahrscheinlich sinnvoller wenn erst das komplette update entpackt wird und die getupdate aller module usw anschließend ausgeführt werden | 1 |
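The two-phase flow proposed at the end of this record can be sketched in Python. This is a hypothetical model, not Ilch's actual PHP code; all function and file names are invented:

```python
# Hypothetical sketch of the two-phase update flow proposed in the issue:
# phase 1 unpacks the complete update, and only then does phase 2 run every
# module's getUpdate() hook, so no hook depends on the unpack order.

unpacked = []

def unpack(path):
    """Stand-in for extracting one file of the update package."""
    unpacked.append(path)

def admin_get_update():
    # This hook needs the vendor folder; in a two-phase flow it is
    # guaranteed to exist because all unpacking happened in phase 1.
    assert "vendor" in unpacked
    return "admin updated"

def run_update(files, hooks):
    for f in files:                   # phase 1: unpack everything
        unpack(f)
    return [h() for h in hooks]       # phase 2: run all getUpdate() hooks

results = run_update(["vendor", "application", "static"], [admin_get_update])
```

With this ordering, renaming or replacing the vendor folder inside a hook can no longer fail because the folder was not yet extracted.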
321,714 | 9,807,910,797 | IssuesEvent | 2019-06-12 14:37:41 | lbryio/lbry | https://api.github.com/repos/lbryio/lbry | closed | Allow batching of transactions, including publishes/updates | area: wallet priority: low type: new feature | <!--
Thanks for reporting an issue to LBRY and helping us improve!
To make it possible for us to help you, please fill out below information carefully.
Before reporting any issues, please make sure that you're using the latest version.
- App: https://github.com/lbryio/lbry-desktop/releases
- Daemon: https://github.com/lbryio/lbry/releases
We are also available on Discord at https://chat.lbry.io
-->
## The Issue
The SDK should allow support for batching of transactions. This will make large claim updates or publishes possible in a single transaction, which is especially useful for YouTube sync claim updates we plan to make after adding new metadata.
On the transaction list side, should these appear as a single entry with a "batch" tx type/details area?
## System Configuration
<!-- For the app, this info is in the About section at the bottom of the Help page.
You can include a screenshot instead of typing it out -->
<!-- For the daemon, run:
curl 'http://localhost:5279' --data '{"method":"version"}'
and include the full output -->
- LBRY Daemon version:
- LBRY App version:
- LBRY Installation ID:
- Operating system:
## Anything Else
<!-- Include anything else that does not fit into the above sections -->
## Screenshots
<!-- If a screenshot would help explain the bug, please include one or two here -->
## Internal Use
### Acceptance Criteria
1.
2.
3.
### Definition of Done
- [ ] Tested against acceptance criteria
- [ ] Tested against the assumptions of user story
- [ ] The project builds without errors
- [ ] Unit tests are written and passing
- [ ] Tests on devices/browsers listed in the issue have passed
- [ ] QA performed & issues resolved
- [ ] Refactoring completed
- [ ] Any configuration or build changes documented
- [ ] Documentation updated
- [ ] Peer Code Review performed
| 1.0 | Allow batching of transactions, including publishes/updates - <!--
Thanks for reporting an issue to LBRY and helping us improve!
To make it possible for us to help you, please fill out below information carefully.
Before reporting any issues, please make sure that you're using the latest version.
- App: https://github.com/lbryio/lbry-desktop/releases
- Daemon: https://github.com/lbryio/lbry/releases
We are also available on Discord at https://chat.lbry.io
-->
## The Issue
The SDK should allow support for batching of transactions. This will make large claim updates or publishes possible in a single transaction, which is especially useful for YouTube sync claim updates we plan to make after adding new metadata.
On the transaction list side, should these appear as a single entry with a "batch" tx type/details area?
## System Configuration
<!-- For the app, this info is in the About section at the bottom of the Help page.
You can include a screenshot instead of typing it out -->
<!-- For the daemon, run:
curl 'http://localhost:5279' --data '{"method":"version"}'
and include the full output -->
- LBRY Daemon version:
- LBRY App version:
- LBRY Installation ID:
- Operating system:
## Anything Else
<!-- Include anything else that does not fit into the above sections -->
## Screenshots
<!-- If a screenshot would help explain the bug, please include one or two here -->
## Internal Use
### Acceptance Criteria
1.
2.
3.
### Definition of Done
- [ ] Tested against acceptance criteria
- [ ] Tested against the assumptions of user story
- [ ] The project builds without errors
- [ ] Unit tests are written and passing
- [ ] Tests on devices/browsers listed in the issue have passed
- [ ] QA performed & issues resolved
- [ ] Refactoring completed
- [ ] Any configuration or build changes documented
- [ ] Documentation updated
- [ ] Peer Code Review performed
| priority | allow batching of transactions including publishes updates thanks for reporting an issue to lbry and helping us improve to make it possible for us to help you please fill out below information carefully before reporting any issues please make sure that you re using the latest version app daemon we are also available on discord at the issue the sdk should allow support for batching of transactions this will make large claim updates or publishes possible in a single transaction which is especially useful for youtube sync claim updates we plan to make after adding new metadata on the transaction list side should these appear as a single entry with a batch tx type details area system configuration for the app this info is in the about section at the bottom of the help page you can include a screenshot instead of typing it out for the daemon run curl data method version and include the full output lbry daemon version lbry app version lbry installation id operating system anything else screenshots internal use acceptance criteria definition of done tested against acceptance criteria tested against the assumptions of user story the project builds without errors unit tests are written and passing tests on devices browsers listed in the issue have passed qa performed issues resolved refactoring completed any configuration or build changes documented documentation updated peer code review performed | 1 |
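The batching idea from the record above can be sketched as follows. This is an illustrative model only, not the real LBRY SDK API; the function name, fee handling, and dict shape are assumptions:

```python
# Illustrative sketch (not the actual LBRY SDK) of combining several
# publish/update operations into one logical "batch" transaction entry,
# with per-item details for the transaction list's details area.

def batch_transactions(items, fee_per_item=0.001):
    """Combine many publish/update operations into one tx-like dict."""
    if not items:
        raise ValueError("nothing to batch")
    return {
        "type": "batch",
        "count": len(items),
        "fee": round(fee_per_item * len(items), 6),
        "details": [{"claim": name, "op": op} for name, op in items],
    }

tx = batch_transactions([("video-1", "update"), ("video-2", "update"),
                         ("video-3", "publish")])
```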
657,619 | 21,798,169,877 | IssuesEvent | 2022-05-15 23:03:25 | skepfusky/pandapaco-drawing-stats | https://api.github.com/repos/skepfusky/pandapaco-drawing-stats | closed | Update from the old Vue CLI to Next.js + Flask | enhancement priority: LOW | bruh at this point you using ancient technology, but the vite dynamic img prop will be a problem tho, so i dunno
unless I figure out how to pass an img prop from a component, then yeah, I will migrate the project from there
## Todo
- [x] make a FurAffinity and deviant web crawler in Python — pacopanda in FA, and pandapaco in DA
- [x] use bs4 to scrape metadata and to jsonify it but only select of this month's range and not the whole goddamn thing
- [ ] Run this script for 2 days consecutively
- [ ] manually remove bits and pieces to be converted and parsed visually
- [ ] finally update the database with new info
| 1.0 | Update from the old Vue CLI to Next.js + Flask - bruh at this point you using ancient technology, but the vite dynamic img prop will be a problem tho, so i dunno
unless I figure out how to pass an img prop from a component, then yeah, I will migrate the project from there
## Todo
- [x] make a FurAffinity and deviant web crawler in Python — pacopanda in FA, and pandapaco in DA
- [x] use bs4 to scrape metadata and to jsonify it but only select of this month's range and not the whole goddamn thing
- [ ] Run this script for 2 days consecutively
- [ ] manually remove bits and pieces to be converted and parsed visually
- [ ] finally update the database with new info
| priority | update from the old vue cli to next js flask bruh at this point you using ancient technology but the vite dynamic img prop will be a problem tho so i dunno unless if i figured how to pass a img prop from a component then yeah i will migrate the project from there todo make a furaffinity and deviant web crawler in python — pacopanda in fa and pandapaco in da use to scrape metadata and to jsonify it but only select of this month s range and not the whole goddamn thing run this script for days consecutively manually remove bits and pieces to be converted and parsed visually finally update the database with new info | 1 |
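The "only this month's range" filtering and jsonify step from the todo list above could look roughly like this. It is a stdlib-only sketch with invented field names, operating on metadata that has already been scraped:

```python
# Sketch of the month-range filtering step: given already-scraped metadata
# records, keep only entries dated in the current month and serialise them
# to JSON, instead of reprocessing the whole history.
import json
from datetime import date

def this_months_records(records, today=None):
    today = today or date.today()
    keep = [r for r in records
            if date.fromisoformat(r["posted"]).year == today.year
            and date.fromisoformat(r["posted"]).month == today.month]
    return json.dumps(keep, indent=2)

records = [
    {"title": "sketch-a", "posted": "2022-05-01"},
    {"title": "sketch-b", "posted": "2022-05-14"},
    {"title": "old-one", "posted": "2021-12-30"},
]
out = this_months_records(records, today=date(2022, 5, 15))
```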
176,997 | 6,572,533,113 | IssuesEvent | 2017-09-11 03:04:50 | elementary/switchboard-plug-keyboard | https://api.github.com/repos/elementary/switchboard-plug-keyboard | closed | Layouts should be sorted correctly | Priority: Low | Ok It's more a suggestion than a bug. It probably goes beyond the aim of the switchboard plug anyway.
But the sorting of all the keyboards layout is bugging me. It's labeled as "language" then "layout", but the language category really is a mix between languages, layouts, scripts, countries and computer types. There is no logic as to where to find a specific layout.
My suggestion is either:
- make the languages list more accurate: American english, British english and Indian english are still english and should be together. "German (Switzerland)" should disappear and its entries dispatched between French and German. I don't like this one as there will be problems with where to put some international keyboards (Canadian multilingual, etc) and probably with nationalism. Which would be solved with the second suggestion:
- make a list per country. There will be duplicates, especially with the american qwerty going everywhere (unless you assume users from countries with only this layout know that they are in fact using US qwerty) but it would be easier for the user to find its favourite layout.
- Or you make the layout come first: qwerty-based, azerty-based, dvorak-based, devanagari, etc. It assumes that the users knows precisely what he's looking for and that there would be just a few base-layouts. This solution would be the neater as it completely eliminates the duplicates and the single "default" entries in the current languages. Possibly with three levels (script, base-layout, layout)? I've put this suggestion in last but I think it's probably the most elegant. People know what alphabet they want to use, then they can select the base layout by looking quickly at their keyboard, and finally pick the correct layout, most likely identified by its country of origin.
What do you think?
Launchpad Details: [#LP1132574](https://bugs.launchpad.net/bugs/1132574) Damien - 2013-02-24 23:30:44 +0000 | 1.0 | Layouts should be sorted correctly - Ok It's more a suggestion than a bug. It probably goes beyond the aim of the switchboard plug anyway.
But the sorting of all the keyboards layout is bugging me. It's labeled as "language" then "layout", but the language category really is a mix between languages, layouts, scripts, countries and computer types. There is no logic as to where to find a specific layout.
My suggestion is either:
- make the languages list more accurate: American english, British english and Indian english are still english and should be together. "German (Switzerland)" should disappear and its entries dispatched between French and German. I don't like this one as there will be problems with where to put some international keyboards (Canadian multilingual, etc) and probably with nationalism. Which would be solved with the second suggestion:
- make a list per country. There will be duplicates, especially with the american qwerty going everywhere (unless you assume users from countries with only this layout know that they are in fact using US qwerty) but it would be easier for the user to find its favourite layout.
- Or you make the layout come first: qwerty-based, azerty-based, dvorak-based, devanagari, etc. It assumes that the users knows precisely what he's looking for and that there would be just a few base-layouts. This solution would be the neater as it completely eliminates the duplicates and the single "default" entries in the current languages. Possibly with three levels (script, base-layout, layout)? I've put this suggestion in last but I think it's probably the most elegant. People know what alphabet they want to use, then they can select the base layout by looking quickly at their keyboard, and finally pick the correct layout, most likely identified by its country of origin.
What do you think?
Launchpad Details: [#LP1132574](https://bugs.launchpad.net/bugs/1132574) Damien - 2013-02-24 23:30:44 +0000 | priority | layouts should be sorted correctly ok it s more a suggestion than a bug it probably goes beyond the aim of the switchboard plug anyway but the sorting of all the keyboards layout is bugging me it s labeled as language then layout but the language category really is a mix between languages layouts scripts countries and computer types there is no logic as to where to find a specific layout my suggestion is either make the languages list more accurate american english british english and indian english are still english and should be together german switzerland should disappear and its entries dispatched between french and german i don t like this one as there will be problems with where to put some international keyboards canadian multilingual etc and probably with nationalism which would be solved with the second suggestion make a list per country there will be duplicates especially with the american qwerty going everywhere unless you assume users from countries with only this layout know that they are in fact using us qwerty but it would be easier for the user to find its favourite layout or you make the layout come first qwerty based azerty based dvorak based devanagari etc it assumes that the users knows precisely what he s looking for and that there would be just a few base layouts this solution would be the neater as it completely eliminates the duplicates and the single default entries in the current languages possibly with three levels script base layout layout i ve put this suggestion in last but i think it s probably the most elegant people know what alphabet they want to use then they can select the base layout by looking quickly at their keyboard and finally pick the correct layout most likely identified by its country of origin what do you think launchpad details damien | 1 |
790,526 | 27,828,504,331 | IssuesEvent | 2023-03-20 01:05:27 | OpenPrinting/libcups | https://api.github.com/repos/OpenPrinting/libcups | opened | GREASE-like support for ipptool and ippserver | enhancement priority-low | [Copied from original ippsample issue 71](https://github.com/istopwg/ippsample/issues/71)
Investigate implementing [fuzzing](https://en.wikipedia.org/wiki/Fuzzing) and [GREASE](https://tools.ietf.org/html/draft-davidben-tls-grease-01) support into ipptool, specifically for inserting randomized attributes with different names, values, and syntaxes.
Probably there should be a way to insert a random value tag, attribute name, and/or value, plus insert N random attributes.
----
In fuzzing/grease mode we need to make sure we report/record the actual IPP message so that bug reports, etc. can include the full request for testing and debugging.
----
Feedback from morning IPP session:
- Should also include ippserver
- Perhaps use a separate ipptool file generator for the fuzzing
- GREASE support likely needs to be made part of the ipptool/ippserver code (maybe add an API to libcups to append random attributes to the request?)
----
Work list for this bug:
- [ ] libcups: new "void ippAddRandomAttributes(ipp_t *ipp)" API for adding random attributes and values to a request or response in current group. Needs to be public API so that servers based on the public API can implement it.
- [ ] ipptool: new "-g" (GREASE) option to automatically inject random attributes into every request, and a "GREASE" directive inside test files.
- [ ] ippserver: new "-g" (GREASE) option and system.conf "GREASE" directive to enable random attributes in every response.
- [ ] Documentation of GREASE extensions
I'll file a separate issue to track a new fuzzing tool.
| 1.0 | GREASE-like support for ipptool and ippserver - [Copied from original ippsample issue 71](https://github.com/istopwg/ippsample/issues/71)
Investigate implementing [fuzzing](https://en.wikipedia.org/wiki/Fuzzing) and [GREASE](https://tools.ietf.org/html/draft-davidben-tls-grease-01) support into ipptool, specifically for inserting randomized attributes with different names, values, and syntaxes.
Probably there should be a way to insert a random value tag, attribute name, and/or value, plus insert N random attributes.
----
In fuzzing/grease mode we need to make sure we report/record the actual IPP message so that bug reports, etc. can include the full request for testing and debugging.
----
Feedback from morning IPP session:
- Should also include ippserver
- Perhaps use a separate ipptool file generator for the fuzzing
- GREASE support likely needs to be made part of the ipptool/ippserver code (maybe add an API to libcups to append random attributes to the request?)
----
Work list for this bug:
- [ ] libcups: new "void ippAddRandomAttributes(ipp_t *ipp)" API for adding random attributes and values to a request or response in current group. Needs to be public API so that servers based on the public API can implement it.
- [ ] ipptool: new "-g" (GREASE) option to automatically inject random attributes into every request, and a "GREASE" directive inside test files.
- [ ] ippserver: new "-g" (GREASE) option and system.conf "GREASE" directive to enable random attributes in every response.
- [ ] Documentation of GREASE extensions
I'll file a separate issue to track a new fuzzing tool.
| priority | grease like support for ipptool and ippserver investigate implementing and support into ipptool specifically for inserting randomized attributes with different names values and syntaxes probably there should be a way to insert a random value tag attribute name and or value plus insert n random attributes in fuzzing grease mode we need to make sure we report record the actual ipp message so that bug reports etc can include the full request for testing and debugging feedback from morning ipp session should also include ippserver perhaps use a separate ipptool file generator for the fuzzing grease support likely needs to be made part of the ipptool ippserver code maybe add an api to libcups to append random attributes to the request work list for this bug libcups new void ippaddrandomattributes ipp t ipp api for adding random attributes and values to a request or response in current group needs to be public api so that servers based on the public api can implement it ipptool new g grease option to automatically inject random attributes into every request and a grease directive inside test files ippserver new g grease option and system conf grease directive to enable random attributes in every response documentation of grease extensions i ll file a separate issue to track a new fuzzing tool | 1 |
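A rough Python model of the proposed `ippAddRandomAttributes()` behaviour follows. The real API would be C code in libcups; the attribute naming scheme and value range here are assumptions for illustration only:

```python
# Hedged sketch of GREASE-style attribute injection: append N random,
# harmlessly-named attributes to a request so that receivers are exercised
# against attribute names and values they do not recognise.
import random
import string

def add_random_attributes(request, n, rng=None):
    rng = rng or random.Random()
    for _ in range(n):
        name = "grease-" + "".join(rng.choices(string.ascii_lowercase, k=8))
        value = rng.randrange(0, 1_000_000)
        request.setdefault("attributes", {})[name] = value
    return request

req = {"operation": "Print-Job", "attributes": {"job-name": "test"}}
add_random_attributes(req, 3, rng=random.Random(42))
```

Recording `req` after injection (rather than before) matches the note above that bug reports must include the actual message sent.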
322,394 | 9,817,154,471 | IssuesEvent | 2019-06-13 16:03:59 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Carbon X] Notification text color token does not work on dark themes | priority: low type: enhancement 💡 type: visual :art: | ## Detailed description
The color variable assigned to notification text is `$text-01`, [link](https://github.com/IBM/carbon-components/blob/master/src/components/notification/_inline-notification.scss#L123).
In dark themes like theme-g100, the color will render as Gray 10 - #f3f3f3, making this text illegible. We may need a new token for this @aagonzales ?

| 1.0 | [Carbon X] Notification text color token does not work on dark themes - ## Detailed description
The color variable assigned to notification text is `$text-01`, [link](https://github.com/IBM/carbon-components/blob/master/src/components/notification/_inline-notification.scss#L123).
In dark themes like theme-g100, the color will render as Gray 10 - #f3f3f3, making this text illegible. We may need a new token for this @aagonzales ?

| priority | notification text color token does not work on dark themes detailed description the color variable assigned to notification text is text in dark themes like theme the color will render as gray making this text illegible we may need a new token for this aagonzales | 1 |
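The illegibility can be checked numerically. The quick WCAG-style contrast sketch below shows why a token that resolves to Gray 10 (#f3f3f3) on dark themes fails against a light notification background; the background colour used here is an assumption:

```python
# WCAG-style contrast check: light text (#f3f3f3) on a light background has
# almost no contrast, which is why a theme-aware text token is needed.
# Relative luminance per the WCAG 2.x definition.

def _lum(hexcolor):
    def chan(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hexcolor[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * chan(r) + 0.7152 * chan(g) + 0.0722 * chan(b)

def contrast(fg, bg):
    l1, l2 = sorted((_lum(fg), _lum(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

bad = contrast("#f3f3f3", "#ffffff")   # light text on light background
good = contrast("#161616", "#ffffff")  # dark text on light background
```

The 4.5:1 threshold is WCAG AA for normal text; the "bad" pairing lands near 1:1.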
451,974 | 13,044,263,977 | IssuesEvent | 2020-07-29 04:07:34 | TerriaJS/terriajs | https://api.github.com/repos/TerriaJS/terriajs | opened | DE UX Testing: Improve 'Explore map data' & 'Data catalogue' terminology | Low priority | Covering issue no. 32 & 23 of [spreadsheet](https://docs.google.com/spreadsheets/d/10eWTU44vUEk-NKm2UcUPEK7qeHSDnRzrXUbMV6nDuBE/edit#gid=0) (high priority):
**Issue**
> Explore map data doesn't make sense for people interested in 'imagery'. Most novice users don't associate satellite images with 'data'
> Data catalogue is a technical term for novice users, especially those who don't associate satellite imagery with data
**Fixes**
Vic to look at alternate options | 1.0 | DE UX Testing: Improve 'Explore map data' & 'Data catalogue' terminology - Covering issue no. 32 & 23 of [spreadsheet](https://docs.google.com/spreadsheets/d/10eWTU44vUEk-NKm2UcUPEK7qeHSDnRzrXUbMV6nDuBE/edit#gid=0) (high priority):
**Issue**
> Explore map data doesn't make sense for people interested in 'imagery'. Most novice users don't associate satellite images with 'data'
> Data catalogue is a technical term for novice users, especially those who don't associate satellite imagery with data
**Fixes**
Vic to look at alternate options | priority | de ux testing improve explore map data data catalogue terminology covering issue no of high priority issue explore map data doesn t make sense for people interested in imagery most novice users don t associate satellite images with data data catalogue is a technical term for novice users especially those who don t associate satellite imagery with data fixes vic to look at alternate options | 1 |
387,403 | 11,460,800,736 | IssuesEvent | 2020-02-07 10:29:28 | python-discord/bot | https://api.github.com/repos/python-discord/bot | opened | Suggest a list of available commands in case of typos | area: moderation priority: 3 - low type: feature | When we try to invoke a command, we can make little typos, for example
`!muet @shirayuki 72H oh come on, cant you make a proper issue?`
In this case, the bot can then step in and offer a friendly suggestion:
`did you perhaps mean`
`!mute @shirayuki 72H oh come on, cant you make a proper issue?`
The typo in this example was `!muet` when trying to do `!mute`
This will greatly relieve the stress of typing every command right, and thus enhancing the experience with the bot much further. | 1.0 | Suggest a list of available commands in case of typos - When we try to invoke a command, we can make little typos, for example
`!muet @shirayuki 72H oh come on, cant you make a proper issue?`
In this case, the bot can then step in and offer a friendly suggestion:
`did you perhaps mean`
`!mute @shirayuki 72H oh come on, cant you make a proper issue?`
The typo in this example was `!muet` when trying to do `!mute`
This will greatly relieve the stress of typing every command right, and thus enhancing the experience with the bot much further. | priority | suggest a list of available commands in case of typos when we try to invoke a command we can make little typos for example muet shirayuki oh come on cant you make a proper issue in this case the bot can then steps in and do a friendly suggestions did you perhaps mean mute shirayuki oh come on cant you make a proper issue the typo in this example was muet when trying to do mute this will greatly relieve the stress of typing every command rights and thus enhancing the experience with the bot much further | 1 |
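The suggestion behaviour described in this record can be sketched with stdlib `difflib`. The command list and reply format are just examples, not the actual bot's code:

```python
# Minimal sketch of typo-tolerant command matching: when an unknown command
# arrives, find the closest known command name and echo back the corrected
# invocation, as the issue proposes for "!muet" -> "!mute".
import difflib

COMMANDS = ["mute", "unmute", "ban", "kick", "warn"]

def suggest(typed, known=COMMANDS):
    matches = difflib.get_close_matches(typed, known, n=1, cutoff=0.6)
    return matches[0] if matches else None

def reply(message):
    name, _, rest = message.lstrip("!").partition(" ")
    fixed = suggest(name)
    if fixed and fixed != name:
        return f"did you perhaps mean\n!{fixed} {rest}"
    return None
```

`cutoff=0.6` keeps wildly wrong input from producing a suggestion; correctly spelled commands yield no reply at all.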
757,136 | 26,497,803,427 | IssuesEvent | 2023-01-18 07:49:07 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | Enable dReal in pip wheel | component: distribution type: feature request priority: low configuration: python | As of the v0.35.0 release, we had to disable dReal in our wheel builds, because rebuilding its dependencies from source was too difficult. We should work to enable it again. | 1.0 | Enable dReal in pip wheel - As of the v0.35.0 release, we had to disable dReal in our wheel builds, because rebuilding its dependencies from source was too difficult. We should work to enable it again. | priority | enable dreal in pip wheel as of the release we had to disable dreal in our wheel builds because rebuilding its dependencies from source was too difficult we should work to enable it again | 1 |
441,713 | 12,730,606,577 | IssuesEvent | 2020-06-25 07:48:14 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | closed | SNMP table sort not preserved after refresh | SNMP low-priority bug user interface waiting for review | When the table with SNMP devices is refreshed (either periodically or manually by clicking refresh) the order is lost and rows change their position. This happens when the sort column doesn't guarantee per-row uniqueness (e.g, sort by community, sort by interfaces with errors).
| 1.0 | SNMP table sort not preserved after refresh - When the table with SNMP devices is refreshed (either periodically or manually by clicking refresh) the order is lost and rows change their position. This happens when the sort column doesn't guarantee per-row uniqueness (e.g, sort by community, sort by interfaces with errors).
| priority | snmp table sort not preserved after refresh when the table with snmp devices is refreshed either periodically or manually by clicking refresh the order is lost and rows change their position this happens when the sort column doesn t guarantee per row uniqueness e g sort by community sort by interfaces with errors | 1 |
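One common fix for this kind of instability, sketched below, is to append a unique per-row key as a sort tiebreaker, so ties on a non-unique column always resolve the same way across refreshes. The data shape is invented for illustration:

```python
# Sketch of refresh-stable sorting: sort on the user's chosen column first,
# then on a unique per-row key (e.g. the device id), so rows with equal
# column values keep the same relative order after every refresh.

def sorted_rows(rows, column):
    return sorted(rows, key=lambda r: (r[column], r["device_id"]))

refresh_1 = [
    {"device_id": "b", "community": "public"},
    {"device_id": "a", "community": "public"},
    {"device_id": "c", "community": "private"},
]
refresh_2 = list(reversed(refresh_1))  # same data, different arrival order

order_1 = [r["device_id"] for r in sorted_rows(refresh_1, "community")]
order_2 = [r["device_id"] for r in sorted_rows(refresh_2, "community")]
```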
259,887 | 8,201,171,031 | IssuesEvent | 2018-09-01 14:31:33 | richelbilderbeek/BrainWeaver | https://api.github.com/repos/richelbilderbeek/BrainWeaver | closed | Left mouse click with shift should select additively | enhancement low-priority | Currently, LMB always selects a single item. | 1.0 | Left mouse click with shift should select additively - Currently, LMB always selects a single item. | priority | left mouse click with shift should select additively currently lmb always selects a single item | 1 |
747,678 | 26,095,245,726 | IssuesEvent | 2022-12-26 18:18:08 | canaltin-byte/SWE573-SDP-Can | https://api.github.com/repos/canaltin-byte/SWE573-SDP-Can | closed | Home Page Name and surname | enhancement priority : Low Front-end Effort: Medium Home Page | User id should not be accessible. User Name and Surname should be there | 1.0 | Home Page Name and surname - User id should not be accessible. User Name and Surname should be there | priority | home page name and surname user id should not be accessible user name and surname should be there | 1 |
287,591 | 8,817,133,322 | IssuesEvent | 2018-12-30 19:45:24 | nextcloud/user_sql | https://api.github.com/repos/nextcloud/user_sql | closed | Add support for algorithm parameters | feature low priority | Developed on: https://github.com/nextcloud/user_sql/tree/feature/issue%2346
In settings admin panel add hash algorithm options eg.
- CryptBlowfish -> cost,
- CryptArgon2 -> memory cost, time cost, threads
- ...
Now workaround is to change it in code (constructor).
TODO:
- [x] Add dynamically generated number fields in admin panel when choosing appropriate hash (Hash interface should now return array with name, visible name, default value and range - new method)
- [x] Add saving parameters in database
- [x] Add verifying new parameters range when saving configuration
- [x] Check and use these parameters when creating new hash instance
- [x] Add description of parameters in readme
- [x] Update changelog | 1.0 | Add support for algorithm parameters - Developed on: https://github.com/nextcloud/user_sql/tree/feature/issue%2346
In settings admin panel add hash algorithm options eg.
- CryptBlowfish -> cost,
- CryptArgon2 -> memory cost, time cost, threads
- ...
Now workaround is to change it in code (constructor).
TODO:
- [x] Add dynamically generated number fields in admin panel when choosing appropriate hash (Hash interface should now return array with name, visible name, default value and range - new method)
- [x] Add saving parameters in database
- [x] Add verifying new parameters range when saving configuration
- [x] Check and use these parameters when creating new hash instance
- [x] Add description of parameters in readme
- [x] Update changelog | priority | add support for algorithm parameters developed on in settings admin panel add hash algorithm options eg cryptblowfish cost memory cost time cost threads now workaround is to change it in code constructor todo add dynamically generetaed numer fields in admin panel when choosing appropriate hash hash interface should now return array with name visible name default value and range new method add saving parameters in database add verifying new parameters range when saving configuration check and use these parameters when creating new hash instance add description of parameters in readme update changelog | 1 |
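The "verifying new parameters range when saving configuration" item above could be modelled like this. It is a hedged sketch: the parameter names follow the issue, but the default values and numeric ranges are assumptions, not the user_sql app's real limits:

```python
# Sketch of per-algorithm parameter validation: each hash algorithm declares
# its tunable parameters as (default, min, max), and saving the configuration
# rejects out-of-range values while filling in defaults for missing ones.

PARAMS = {
    "crypt_blowfish": {"cost": (10, 4, 31)},
    "crypt_argon2": {"memory_cost": (65536, 8, 4194304),
                     "time_cost": (4, 1, 2 ** 32 - 1),
                     "threads": (1, 1, 255)},
}

def validate(algorithm, settings):
    spec = PARAMS[algorithm]
    checked = {}
    for name, (default, lo, hi) in spec.items():
        value = settings.get(name, default)
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
        checked[name] = value
    return checked
```

Returning the checked dict (rather than mutating settings) keeps the admin-panel save path side-effect free until validation succeeds.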
832,379 | 32,077,793,643 | IssuesEvent | 2023-09-25 12:11:38 | abpframework/abp | https://api.github.com/repos/abpframework/abp | closed | No case-sensitive filtering at the Users page | abp-module-identity priority:low effort-xs | Filter users with no case sensitivity at the Users page.
Reported at https://github.com/volosoft/vs-internal/issues/2821 | 1.0 | No case-sensitive filtering at the Users page - Filter users with no case sensitivity at the Users page.
Reported at https://github.com/volosoft/vs-internal/issues/2821 | priority | no case sensitive filtering at the users page filter users with no case sensitivity at the users page reported at | 1 |
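A minimal sketch of the requested case-insensitive filtering, using `casefold()` to normalise both the query and the target fields before matching. The user model and field names are invented, not ABP's actual schema:

```python
# Sketch of case-insensitive user filtering: casefold both sides of the
# comparison so "ADMIN" and "admin" match the same rows.

def filter_users(users, query):
    q = query.casefold()
    return [u for u in users
            if q in u["user_name"].casefold() or q in u["email"].casefold()]

users = [
    {"user_name": "Admin", "email": "admin@example.com"},
    {"user_name": "alice", "email": "Alice@Example.com"},
    {"user_name": "bob", "email": "bob@example.com"},
]
```

`casefold()` is preferred over `lower()` here because it also handles non-ASCII case mappings (e.g. German ß).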
608,302 | 18,821,247,920 | IssuesEvent | 2021-11-10 08:35:22 | betagouv/service-national-universel | https://api.github.com/repos/betagouv/service-national-universel | closed | fix(inbox): beautify notification badges | enhancement priority-LOW UI/UX | ### Is the feature related to a problem?
_No response_
### Feature
Everything should be centered; right now it doesn't look good.
I suggest using this: https://react-svgr.com/playground/
to turn the SVGs into components -> it saved me a few headaches
### Comments
https://prnt.sc/1wy2cru | 1.0 | fix(inbox): beautify notification badges - ### Fonctionnalité liée à un problème ?
_No response_
### Fonctionnalité
il faudrait que tout soit centré, la c'est pas beau.
je propose d'utiliser ca : https://react-svgr.com/playground/
pour passer les svg en composant -> ca m'a sauvé quelques prises de tetes
### Commentaires
https://prnt.sc/1wy2cru | priority | fix inbox beautify notification badges fonctionnalité liée à un problème no response fonctionnalité il faudrait que tout soit centré la c est pas beau je propose d utiliser ca pour passer les svg en composant ca m a sauvé quelques prises de tetes commentaires | 1 |
342,376 | 10,315,971,380 | IssuesEvent | 2019-08-30 08:54:18 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | Feature shallow clone for Git brings error (Version 1.18.1) | complex: low component: scm priority: high stage: queue type: bug | To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
Hello,
I am using Conan 1.18.1 on Windows 10.
Internally I use `git describe` to build the version number of my package. This uses the latest tag that has been put into git.
Now with the new `shallow clone` feature, meant to save time and space, I run into the problem that there are no tags in the history, so I am unable to generate my package version.
It would be very nice to have a `shallow` property in the `scm` tool. Ideally the default would be `shallow=False`, so as not to break compatibility.
| 1.0 | Feature shallow clone for Git brings error (Version 1.18.1) - To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
Hello,
I am using Conan 1.18.1 on Windows 10.
Internally I use `git describe` to build the version number of my package. This uses the latest tag that has been put into git.
Now with the new `shallow clone` feature, meant to save time and space, I run into the problem that there are no tags in the history, so I am unable to generate my package version.
It would be very nice to have a `shallow` property in the `scm` tool. Ideally the default would be `shallow=False`, so as not to break compatibility.
| priority | feature shallow clone for git brings error version to help us debug your issue please explain i ve read the i ve specified the conan version operating system version and any tool that can be relevant i ve explained the steps to reproduce the error or the motivation use case of the question suggestion hello i am using conan on my windows i use internally git describe to build my version number of my package this uses the latest tag that has been put into git now with the new feature shallow clone to safe time and space i get the problem that there are no tags in the history and i am unable to generate my package version it would be very nice to have a property shallow in the scm tool the best would be to set the default to shallow false to not break compatibility | 1 |
314,037 | 9,592,235,862 | IssuesEvent | 2019-05-09 08:26:06 | luna/dataframes | https://api.github.com/repos/luna/dataframes | opened | Add new types of visualizations for Dataframes | Category: Visualisation Change: Non-Breaking Difficulty: Beginner Priority: Low Type: Enhancement | Connect new visualization libraries (JS) with Dataframes.
| 1.0 | Add new types of visualizations for Dataframes - Connect new visualization libraries (JS) with Dataframes.
| priority | add new types of visualizations for dataframes connect a new visualization libraries js with dataframes | 1 |
725,678 | 24,971,367,564 | IssuesEvent | 2022-11-02 01:42:23 | aws-samples/aws-last-mile-delivery-hyperlocal | https://api.github.com/repos/aws-samples/aws-last-mile-delivery-hyperlocal | closed | Setup yarn dependency checks for all packages | enhancement dependencies priority:medium component:all effort:low | * first candidate would be [depcheck](https://github.com/depcheck/depcheck).
* to make sure that there are no missing dependencies in any packages | 1.0 | Setup yarn dependency checks for all packages - * first candidate would be [depcheck](https://github.com/depcheck/depcheck).
* to make sure that there are no missing dependencies in any packages | priority | setup yarn dependency checks for all packages first candidate would be to make sure that there are no missing dependencies in any packages | 1 |
622,174 | 19,609,584,225 | IssuesEvent | 2022-01-06 13:55:25 | AbsaOSS/hyperdrive-trigger | https://api.github.com/repos/AbsaOSS/hyperdrive-trigger | closed | Other bulk actions analysis | enhancement priority: low UX backend | Other bulk actions analysis. (What is possible and how)
See also #312 (activate / deactivate), #335 (import), #334 (export)
## Delete
The main use-case is to delete workflows that have been created wrongly using bulk create / import / copy. Bulk delete should be subject to the following limitations
- Only workflows in deactivated mode
- Only workflows that the user has created himself
- Maybe: Only workflows without runs
- Maybe: Only workflows of the same project
- Maybe: Only max. 50 workflows at once
- Maybe: Only workflows that have been created more than e.g. 1 week ago
## View Runs
The URL should be able to accept a list of ids; the runs table can then be filtered for multiple ids.
## Actions unsuitable for bulk execution
- Copy: Not suitable, because the user has to change workflow names 1-by-1 anyway.
- Edit: Not suitable, because currently it makes sense only for very few fields, e.g. Project Name. Might be hard to implement and hard to reason about to introduce bulk edit on dynamic fields (e.g. app arguments). This can be better solved using templates and variables.
- Run: Not suitable, because this action is only intended for manual intervention and not bulk action. A user should configure sensors and can then bulk activate workflows.
- Show History: Not suitable. A global history could make sense in the context of a separate admin section.
- Show: Screen is too small to display more than two workflows. A comparison of two workflows could make sense. As a workaround, an (expert) user could export two workflows as json and compare them using a diff tool. | 1.0 | Other bulk actions analysis - Other bulk actions analysis. (What is possible and how)
See also #312 (activate / deactivate), #335 (import), #334 (export)
## Delete
The main use-case is to delete workflows that have been created wrongly using bulk create / import / copy. Bulk delete should be subject to the following limitations
- Only workflows in deactivated mode
- Only workflows that the user has created himself
- Maybe: Only workflows without runs
- Maybe: Only workflows of the same project
- Maybe: Only max. 50 workflows at once
- Maybe: Only workflows that have been created more than e.g. 1 week ago
## View Runs
The URL should be able to accept a list of ids; the runs table can then be filtered for multiple ids.
## Actions unsuitable for bulk execution
- Copy: Not suitable, because the user has to change workflow names 1-by-1 anyway.
- Edit: Not suitable, because currently it makes sense only for very few fields, e.g. Project Name. Might be hard to implement and hard to reason about to introduce bulk edit on dynamic fields (e.g. app arguments). This can be better solved using templates and variables.
- Run: Not suitable, because this action is only intended for manual intervention and not bulk action. A user should configure sensors and can then bulk activate workflows.
- Show History: Not suitable. A global history could make sense in the context of a separate admin section.
- Show: Screen is too small to display more than two workflows. A comparison of two workflows could make sense. As a workaround, an (expert) user could export two workflows as json and compare them using a diff tool. | priority | other bulk actions analysis other bulk actions analysis what is possible and how see also activate deactivate import export delete the main use case is to delete workflows that have been created wrongly using bulk create import copy bulk delete should be subject to the following limitations only workflows in deactivated mode only workflows that the user has created himself maybe only workflows without runs maybe only workflows of the same project maybe only max workflows at once maybe only workflows that have been created more than e g week ago view runs the url should be able to accept a list of ids then the runs table can be filtered for multiple ids actions unsuitable for bulk execution copy not suitable because the user has to change workflow names by anyway edit not suitable because currently it makes sense only for very few fields e g project name might be hard to implement and hard to reason about to introduce bulk edit on dynamic fields e g app arguments this can be better solved using templates and variables run not suitable because this action is only intended for manual intervention and not bulk action a user should configure sensors and can then bulk activate workflows show history not suitable a global history could make sense in the context of a separate admin section show screen is too small to display more than two workflows a comparison of two workflows could make sense as a workaround an expert user could export two workflows as json and compare them using a diff tool | 1 |
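The bulk-delete limitations listed above can be sketched as a single eligibility filter. This is a sketch only — the field names (`active`, `created_by`, `created_at`) and helper names are assumptions for illustration, not hyperdrive-trigger's actual model or API.

```python
# Illustrative sketch of the bulk-delete limitations listed above.
from datetime import datetime, timedelta

def deletable(workflow, current_user, now=None, min_age=timedelta(weeks=1)):
    # Only deactivated workflows, created by the requesting user,
    # and created more than `min_age` ago.
    now = now or datetime.utcnow()
    return (
        not workflow["active"]
        and workflow["created_by"] == current_user
        and now - workflow["created_at"] >= min_age
    )

def select_for_bulk_delete(workflows, current_user, limit=50):
    # "Only max. 50 workflows at once"
    eligible = [w for w in workflows if deletable(w, current_user)]
    if len(eligible) > limit:
        raise ValueError(f"at most {limit} workflows may be deleted at once")
    return eligible
```

The "maybe" conditions (no runs, same project) would slot into `deletable` as further `and` clauses.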
105,160 | 4,231,706,516 | IssuesEvent | 2016-07-04 17:23:12 | JustArchi/ArchiSteamFarm | https://api.github.com/repos/JustArchi/ArchiSteamFarm | closed | Use log4net for logging | Enhancement Low priority Wishlist | It's great to see ArchiSteamFarm getting more and more powerful now. The logging part has been done great so far. But with the ever increasing logging items, I would like to recommend using log4net, which is quite powerful and flexible. Since you've already wrapped up the logging class very nicely, I think it won't be too complicated to move to log4net. | 1.0 | Use log4net for logging - It's great to see ArchiSteamFarm getting more and more powerful now. The logging part has been done great so far. But with the ever increasing logging items, I would like to recommend using log4net, which is quite powerful and flexible. Since you've already wrapped up the logging class very nicely, I think it won't be too complicated to move to log4net. | priority | use for logging it s great to see archisteamfarm getting more and more powerful now the logging part has been done great so far but with the ever increasing logging items i would like to recommend using which is quite powerful and flexible since you ve already wrapped up the logging class very nicely i think it won t be too complicated to move to | 1 |
74,823 | 3,448,846,959 | IssuesEvent | 2015-12-16 10:35:55 | blackwatchint/blackwatchint | https://api.github.com/repos/blackwatchint/blackwatchint | closed | Valtatie 5 (VT5) | Accepted Low Priority Modpack Request | **Description:**
Name of the map is Valtatie 5, or "VT5", and it translates to "Highway 5". I've been working on it for about a year now, and with the help I've had it's beginning to be ready for release. The map is still work in progress.
**Download:** http://www.armaholic.com/page.php?id=28984
**Size:** 152MB | 1.0 | Valtatie 5 (VT5) - **Description:**
Name of the map is Valtatie 5, or "VT5", and it translates to "Highway 5". I've been working on it for about a year now, and with the help I've had it's beginning to be ready for release. The map is still work in progress.
**Download:** http://www.armaholic.com/page.php?id=28984
**Size:** 152MB | priority | valtatie description name of the map is valtatie or and it translates to highway i ve been working it about a year now and with the help i ve had its beginning to be ready for releasing the map is still work in progress download size | 1 |
435,812 | 12,541,663,454 | IssuesEvent | 2020-06-05 12:46:20 | rathena/rathena | https://api.github.com/repos/rathena/rathena | closed | Quick Draw Shot Cannot Be Used When ASPD 193 | component:skill mode:renewal priority:low type:bug | <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**: https://github.com/rathena/rathena/commit/d87ac219862dcacc7c5ade32e752cd401a6ea5d5
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->
* **Client Date**: 20180620
<!-- Please specify the client date you used. -->
* **Server Mode**: Renewal
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**: Rebellion cannot use Quick Draw Shot when ASPD is 193 (in my case, 193 = max ASPD) even after Eternal Chain is used.
* Result: I have tried using ASPD 192 and Quick Draw Shot can be used, but when trying it with ASPD 193, Quick Draw Shot cannot be used anymore.<!-- Describe the issue that you experienced in detail. -->
* Expected Result: Quick Draw Shot can be used too, unless this is the official behaviour. <!-- Describe what you would expect to happen in detail. -->
* How to Reproduce: Try to use Quick Draw Shot after Eternal Chain is used at ASPD 193 and also below 193.<!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. -->
| 1.0 | Quick Draw Shot Cannot Be Used When ASPD 193 - <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**: https://github.com/rathena/rathena/commit/d87ac219862dcacc7c5ade32e752cd401a6ea5d5
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->
* **Client Date**: 20180620
<!-- Please specify the client date you used. -->
* **Server Mode**: Renewal
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**: Rebellion cannot use Quick Draw Shot when ASPD is 193 (in my case, 193 = max ASPD) even after Eternal Chain is used.
* Result: I have tried using ASPD 192 and Quick Draw Shot can be used, but when trying it with ASPD 193, Quick Draw Shot cannot be used anymore.<!-- Describe the issue that you experienced in detail. -->
* Expected Result: Quick Draw Shot can be used too, unless this is the official behaviour. <!-- Describe what you would expect to happen in detail. -->
* How to Reproduce: Try to use Quick Draw Shot after Eternal Chain is used at ASPD 193 and also below 193.<!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. -->
| priority | quick draw shot cannot be used when aspd rathena hash please specify the rathena on which you encountered this issue how to get your github hash cd your rathena directory git rev parse short head copy the resulting hash client date server mode renewal description of issue rebellion cannot use quick draw shot when aspd is in my case max aspd even after eternal chain used result i have tried using aspd and quick draw shot can be used but when trying it with aspd quick draw shot cannot be used anymore expected result quick draw shot can be used too except this is the official behaviour how to reproduce try to use quick draw shot after eternal chain used during aspd and also below | 1 |
692,773 | 23,748,498,220 | IssuesEvent | 2022-08-31 18:13:49 | midas-network/midas-data | https://api.github.com/repos/midas-network/midas-data | closed | Style | low priority | The different terms can be written in different "style", for example:
- Case counts (first letter upper case)
- Diagnostic Tests (first letter of each word in upper case)
- infection case list (lower case)
- sequence_collection (lower case and "special character")
Should we choose one "style" to avoid confusion? | 1.0 | Style - The different terms can be written in different "style", for example:
- Case counts (first letter upper case)
- Diagnostic Tests (first letter of each word in upper case)
- infection case list (lower case)
- sequence_collection (lower case and "special character")
Should we choose one "style" to avoid confusion? | priority | style the different terms can be written in different style for example case counts first letter upper case diagnostic tests first letter of each word in upper case infection case list lower case sequence collection lower case and special character should we choose one style to avoid confusion | 1 |
184,399 | 6,712,757,477 | IssuesEvent | 2017-10-13 10:36:59 | xcodeswift/xcproj | https://api.github.com/repos/xcodeswift/xcproj | opened | Auto generation of schemes | difficulty:moderate priority:low type:thread | ## Context 🕵️♀️
The scheme window in `Xcode` provides a checkbox to autogenerate schemes.
These are stored in `project.xcodeproj/xcuserdata/<username>/schemes` and, since they are specific to each user, not usually carried over in repos, etc.
<img width="316" alt="screen shot 2017-10-13 at 12 28 45" src="https://user-images.githubusercontent.com/2642850/31542392-16ffedf0-b012-11e7-9c75-348d19410ced.png">
## What 🌱
This is a two-fold issue:
1. `xcproj` doesn't handle the flag (e.g. if we read a project and then save it we are not carrying over said flag) `IDEWorkspaceSharedSettings_AutocreateContextsIfNeeded` located in `Fixtures/iOS/Project.xcodeproj/project.xcworkspace/xcshareddata/<username>/WorkspaceSettings.xcsettings`. We should handle this since it's part of the shared data
1. Do we want to support the autogeneration of schemes?
## Proposal 🎉
I think the first point we should address definitely; the second I'm not entirely sure if is something that should fall under the responsibility of `xcproj`; maybe something that falls under [XcodeGen](https://github.com/yonaskolb/XcodeGen) ? | 1.0 | Auto generation of schemes - ## Context 🕵️♀️
The scheme window in `Xcode` provides a checkbox to autogenerate schemes.
These are stored in `project.xcodeproj/xcuserdata/<username>/schemes` and, since they are specific to each user, not usually carried over in repos, etc.
<img width="316" alt="screen shot 2017-10-13 at 12 28 45" src="https://user-images.githubusercontent.com/2642850/31542392-16ffedf0-b012-11e7-9c75-348d19410ced.png">
## What 🌱
This is a two-fold issue:
1. `xcproj` doesn't handle the flag (e.g. if we read a project and then save it we are not carrying over said flag) `IDEWorkspaceSharedSettings_AutocreateContextsIfNeeded` located in `Fixtures/iOS/Project.xcodeproj/project.xcworkspace/xcshareddata/<username>/WorkspaceSettings.xcsettings`. We should handle this since it's part of the shared data
1. Do we want to support the autogeneration of schemes?
## Proposal 🎉
I think the first point we should address definitely; the second I'm not entirely sure if is something that should fall under the responsibility of `xcproj`; maybe something that falls under [XcodeGen](https://github.com/yonaskolb/XcodeGen) ? | priority | auto generation of schemes context 🕵️♀️ the scheme window in xcode provides a checkbox to autogenerate schemes this are stored project xcodeproj xcuserdata schemes and since they are specific to each user not usually carried over on repos etc img width alt screen shot at src what 🌱 this is a fold issue xcproj doesn t handle the flag e g if we read a project and then save it we are not carrying over said flag ideworkspacesharedsettings autocreatecontextsifneeded located in fixtures ios project xcodeproj project xcworkspace xcshareddata workspacesettings xcsettings we should handle this since its part of the shared data do we want to support the autogeneration of schemes proposal 🎉 i think the first point we should address definitely the second i m not entirely sure if is something that should fall under the responsibility of xcproj maybe something that falls under | 1 |
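For reference, the setting behind that checkbox lands in `WorkspaceSettings.xcsettings` as a property list roughly like the following. This is a sketch: the key name comes from the issue text above, but the exact file layout may differ by Xcode version.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<!-- unchecking "Autocreate schemes" presumably writes this as false -->
	<key>IDEWorkspaceSharedSettings_AutocreateContextsIfNeeded</key>
	<false/>
</dict>
</plist>
```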
541,852 | 15,834,922,175 | IssuesEvent | 2021-04-06 17:24:04 | zeoflow/jx | https://api.github.com/repos/zeoflow/jx | opened | SourceVersion is not found | @bug @priority-very-low | **Description:** the SourceVersion is not found in an android library
###### To help us triage faster, please check to make sure you are using the [latest version](https://github.com/zeoflow/jx/releases) of the library.
###### We also happily accept [pull requests](https://github.com/zeoflow/jx/pulls).
| 1.0 | SourceVersion is not found - **Description:** the SourceVersion is not found in an android library
###### To help us triage faster, please check to make sure you are using the [latest version](https://github.com/zeoflow/jx/releases) of the library.
###### We also happily accept [pull requests](https://github.com/zeoflow/jx/pulls).
| priority | sourceversion is not found description the sourceversion is not found in an android library to help us triage faster please check to make sure you are using the of the library we also happily accept | 1 |
579,305 | 17,188,445,070 | IssuesEvent | 2021-07-16 07:28:07 | codeforpakistan/Disability-Certificate---One-Window-Operation | https://api.github.com/repos/codeforpakistan/Disability-Certificate---One-Window-Operation | closed | Updating dashboard with new data | priority: low | What is the current mechanism for updating the dashboard?
- Does the user manually refresh?
- Does the page auto-refresh after X seconds?
- Is the page updated silently using AJAX? (assuming it's using Live-Wire) | 1.0 | Updating dashboard with new data - What is the current mechanism for updating the dashboard?
- Does the user manually refresh?
- Does the page auto-refresh after X seconds?
- Is the page updated silently using AJAX? (assuming it's using Live-Wire) | priority | updating dashboard with new data what is the current mechanism for updating the dashboard does the user manually refresh does the page auto refresh after x seconds is the page updated silently using ajax assuming its using live wire | 1
637,174 | 20,622,823,394 | IssuesEvent | 2022-03-07 19:10:40 | authzed/spicedb | https://api.github.com/repos/authzed/spicedb | closed | Developer API should report duplicate relations/permissions | hint/good first issue area/api v0 priority/3 low area/tooling area/api devtools | They are not legal in a schema, so developer API should report those issues as well | 1.0 | Developer API should report duplicate relations/permissions - They are not legal in a schema, so developer API should report those issues as well | priority | developer api should report duplicate relations permissions they are not legal in a schema so developer api should report those issues as well | 1 |
257,540 | 8,138,764,677 | IssuesEvent | 2018-08-20 15:36:39 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | Hovering a recent space link in home component can introduce an unnecessary scrollbar | SEV4-low area/UI priority/P4 team/platform type/bug | When hovering a link in the recent spaces widget of the home component, a scrollbar can appear on the highlighted list group. This currently only occurs in feature-flag >= experimental, because the trigger for this is the addition of a tooltip when highlighting the space link [0].
Here's an example:

After a bit of digging, what seems to be happening is that the tooltip is causing a text overflow, which is causing a scrollbar to be shown. The f8-card has a `overflow-x: hidden` [1], which when removed stops the scrollbar from appearing, but has a side-effect of lightening the colour of the card border so it's not as quick a stylesheet fix as that.

[0] https://github.com/fabric8-ui/fabric8-ui/blob/master/src/app/home/home.component.html#L92
[1] https://github.com/fabric8-ui/fabric8-ui/blob/master/src/assets/stylesheets/shared/_cards.less#L20 | 1.0 | Hovering a recent space link in home component can introduce an unnecessary scrollbar - When hovering a link in the recent spaces widget of the home component, a scrollbar can appear on the highlighted list group. This currently only occurs in feature-flag >= experimental, because the trigger for this is the addition of a tooltip when highlighting the space link [0].
Here's an example:

After a bit of digging, what seems to be happening is that the tooltip is causing a text overflow, which is causing a scrollbar to be shown. The f8-card has a `overflow-x: hidden` [1], which when removed stops the scrollbar from appearing, but has a side-effect of lightening the colour of the card border so it's not as quick a stylesheet fix as that.

[0] https://github.com/fabric8-ui/fabric8-ui/blob/master/src/app/home/home.component.html#L92
[1] https://github.com/fabric8-ui/fabric8-ui/blob/master/src/assets/stylesheets/shared/_cards.less#L20 | priority | hovering a recent space link in home component can introduce an unneccessary scrollbar when hovering a link in the recent spaces widget of the home component a scrollbar can appear on the highlighted list group this currently only occurs in feature flag experimental because the trigger for this is the addition of a tooltip when highlighting the space link here s an example after a bit of digging what seems to be happening is that the tooltip is causing a text overflow which is causing a scrollbar to be shown the card has a overflow x hidden which when removed stops the scrollbar from appearing but has a side effect of lightening the colour of the card border so it s not as quick a stylesheet fix as that | 1 |
652,161 | 21,524,158,475 | IssuesEvent | 2022-04-28 16:41:38 | pantheon-systems/documentation | https://api.github.com/repos/pantheon-systems/documentation | closed | Add new page in the docs that educates user about headers | New Content Low Priority | Re: Add new page in the docs that educates user about headers
Priority: Low
## Issue Description
Since PS offers Header modification via ACDN: https://pantheon.io/docs/guides/professional-services/advanced-global-cdn#modify-and-filter-headers-at-the-edge, it might be a good internal resource instead of pointing them out in various external sources
### How will this impact users?
This can also help out other depts (sales/CSE, etc.) that need to educate customers about headers
### Context
Having an internal page that explains all about headers, might be a good internal resource instead of pointing them out in various external sources
## Suggested Resolution
Add a page that outlines what headers are all about; we can start by outlining the headers of pantheon.io and explaining the parts
```
curl -sIL pantheon.io
HTTP/1.1 301 Moved Permanently
Content-Type: text/html
Location: https://pantheon.io/
Server: nginx
X-Pantheon-Styx-Hostname: styx-fe2-a-6b6d6f77d6-2h8pw
X-Styx-Req-Id: fc9fff1e-22f6-11ec-b5ec-8e8cf3dee576
Cache-Control: public, max-age=86400
Content-Length: 162
Date: Sat, 02 Oct 2021 05:54:17 GMT
Connection: keep-alive
X-Served-By: cache-mdw17383-MDW, cache-mnl9724-MNL
X-Cache: HIT, HIT
X-Cache-Hits: 1, 1
X-Timer: S1633154057.404067,VS0,VE1
Vary: Cookie, Cookie
Age: 33584
Accept-Ranges: bytes
Via: 1.1 varnish, 1.1 varnish
HTTP/2 200
cache-control: public, max-age=3600
content-language: en
content-security-policy: frame-ancestors https://app.experiencewelcome.com/ https://test-panther.pantheonsite.io/;
content-type: text/html; charset=utf-8
etag: W/"1633149256-0"
expires: Sun, 19 Nov 1978 05:00:00 GMT
last-modified: Sat, 02 Oct 2021 04:34:16 GMT
link: <https://pantheon.io/>; rel="canonical"
permissions-policy: interest-cohort=()
server: nginx
strict-transport-security: max-age=31622400
x-drupal-cache: HIT
x-frame-options: SAMEORIGIN
x-pantheon-styx-hostname: styx-fe2-b-56496ffc66-drgj6
x-styx-req-id: 8fac5f23-2340-11ec-b570-325a77174e1b
date: Sat, 02 Oct 2021 05:54:20 GMT
x-served-by: cache-mdw17373-MDW, cache-mnl9724-MNL
x-cache: HIT, MISS
x-cache-hits: 1, 0
x-timer: S1633154060.092324,VS0,VE245
vary: Accept-Encoding, Cookie, Cookie, Cookie
age: 1986
accept-ranges: bytes
via: 1.1 varnish, 1.1 varnish
content-length: 156587
```
## Additional Information
We can also add other sections like
- security headers (Content-Security-Policy, Referrer-Policy, Permissions-Policy, X-Content-Type-Options, X-Frame-Options)
- webp/image optimization is in effect (so PS can also refer here so users can double-check that their purchased ACDN IO is in effect after implementation)
- what to check in the headers if the WAF is in effect as well as the ACDN
- caching data in the headers
| 1.0 | Add new page in the docs that educates user about headers - Re: Add new page in the docs that educates user about headers
Priority: Low
## Issue Description
Since PS offers Header modification via ACDN: https://pantheon.io/docs/guides/professional-services/advanced-global-cdn#modify-and-filter-headers-at-the-edge, it might be a good internal resource instead of pointing them out in various external sources
### How will this impact users?
This can also help out other depts (sales/CSE, etc.) that need to educate customers about headers
### Context
Having an internal page that explains all about headers, might be a good internal resource instead of pointing them out in various external sources
## Suggested Resolution
Add a page that outlines what headers are all about; we can start by outlining the headers of pantheon.io and explaining the parts
```
curl -sIL pantheon.io
HTTP/1.1 301 Moved Permanently
Content-Type: text/html
Location: https://pantheon.io/
Server: nginx
X-Pantheon-Styx-Hostname: styx-fe2-a-6b6d6f77d6-2h8pw
X-Styx-Req-Id: fc9fff1e-22f6-11ec-b5ec-8e8cf3dee576
Cache-Control: public, max-age=86400
Content-Length: 162
Date: Sat, 02 Oct 2021 05:54:17 GMT
Connection: keep-alive
X-Served-By: cache-mdw17383-MDW, cache-mnl9724-MNL
X-Cache: HIT, HIT
X-Cache-Hits: 1, 1
X-Timer: S1633154057.404067,VS0,VE1
Vary: Cookie, Cookie
Age: 33584
Accept-Ranges: bytes
Via: 1.1 varnish, 1.1 varnish
HTTP/2 200
cache-control: public, max-age=3600
content-language: en
content-security-policy: frame-ancestors https://app.experiencewelcome.com/ https://test-panther.pantheonsite.io/;
content-type: text/html; charset=utf-8
etag: W/"1633149256-0"
expires: Sun, 19 Nov 1978 05:00:00 GMT
last-modified: Sat, 02 Oct 2021 04:34:16 GMT
link: <https://pantheon.io/>; rel="canonical"
permissions-policy: interest-cohort=()
server: nginx
strict-transport-security: max-age=31622400
x-drupal-cache: HIT
x-frame-options: SAMEORIGIN
x-pantheon-styx-hostname: styx-fe2-b-56496ffc66-drgj6
x-styx-req-id: 8fac5f23-2340-11ec-b570-325a77174e1b
date: Sat, 02 Oct 2021 05:54:20 GMT
x-served-by: cache-mdw17373-MDW, cache-mnl9724-MNL
x-cache: HIT, MISS
x-cache-hits: 1, 0
x-timer: S1633154060.092324,VS0,VE245
vary: Accept-Encoding, Cookie, Cookie, Cookie
age: 1986
accept-ranges: bytes
via: 1.1 varnish, 1.1 varnish
content-length: 156587
```
## Additional Information
We can also add other sections like
- security headers (Content-Security-Policy, Referrer-Policy, Permissions-Policy, X-Content-Type-Options, X-Frame-Options)
- webp/image optimization is in effect (so PS can also refer here so users can double-check that their purchased ACDN IO is in effect after implementation)
- what to check in the headers if the WAF is in effect as well as the ACDN
- caching data in the headers
| priority | add new page in the docs that educates user about headers re add new page in the docs that educates user about headers priority low issue description since ps offers header modification via acdn it might be a good internal resource instead of pointing them out in various external sources how will this impact users this can also help out other depts sales cse etc that needs to educate customers about headers context having an internal page that explains all about headers might be a good internal resource instead of pointing them out in various external sources suggested resolution add a page that outlines what are headers all about we can start outlining the headers of pantheon io and explaining the parts curl sil pantheon io http moved permanently content type text html location server nginx x pantheon styx hostname styx a x styx req id cache control public max age content length date sat oct gmt connection keep alive x served by cache mdw cache mnl x cache hit hit x cache hits x timer vary cookie cookie age accept ranges bytes via varnish varnish http cache control public max age content language en content security policy frame ancestors content type text html charset utf etag w expires sun nov gmt last modified sat oct gmt link rel canonical permissions policy interest cohort server nginx strict transport security max age x drupal cache hit x frame options sameorigin x pantheon styx hostname styx b x styx req id date sat oct gmt x served by cache mdw cache mnl x cache hit miss x cache hits x timer vary accept encoding cookie cookie cookie age accept ranges bytes via varnish varnish content length additional information we can also add other sections like security headers content security policy referrer policy permissions policy x content type options x frame options webp image optimization is in effect so ps can also refer here so users can double check that their purchase acdn io is in effect after implementation what to check in the headers if 
the waf is in effect as well as the acdn caching data in the headers | 1 |
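The record above walks through inspecting cache-related response headers with curl. As a rough illustration of the kind of check it describes (header names follow the curl output quoted in the record; the parsing helpers are assumptions, not Pantheon tooling), cache status can be verified programmatically:

```python
def parse_headers(raw: str) -> dict:
    """Parse `curl -I`-style output into a lowercase header dict."""
    headers = {}
    for line in raw.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip().lower()] = value.strip()
    return headers

def served_from_cache(headers: dict) -> bool:
    """A HIT in x-cache (or a nonzero age) indicates a cached response."""
    if headers.get("x-cache", "").upper().startswith("HIT"):
        return True
    return headers.get("age", "0") != "0"

raw = """cache-control: public, max-age=600
x-cache: HIT, MISS
age: 42"""
assert served_from_cache(parse_headers(raw))
```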
113,835 | 4,579,028,561 | IssuesEvent | 2016-09-18 01:58:46 | agauniyal/isaac-core | https://api.github.com/repos/agauniyal/isaac-core | opened | mark non usable gpio pins as occupied | Priority: Low Status: Pending Type: Bug | before entering `main()` since that table is a static member. | 1.0 | mark non usable gpio pins as occupied - before entering `main()` since that table is a static member. | priority | mark non usable gpio pins as occupied before entering main since that table is a static member | 1 |
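The issue above concerns marking non-usable GPIO pins as occupied in a table that exists before `main()` runs. A minimal sketch of the idea (pin numbers and table shape are illustrative, not isaac-core's actual C++ static member):

```python
# Hypothetical pins that must never be handed out as GPIO
# (e.g. power/UART pins); values are made up for illustration.
NON_USABLE_PINS = {0, 1, 14, 15}

# Built at import time, playing the role of the static member table
# that is initialized before main() in the C++ code: non-usable pins
# start out marked occupied.
PIN_TABLE = {pin: (pin in NON_USABLE_PINS) for pin in range(32)}

def request_pin(pin: int) -> bool:
    """Claim a pin; fails if it is occupied or not in the table."""
    if PIN_TABLE.get(pin, True):
        return False
    PIN_TABLE[pin] = True
    return True
```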
217,716 | 7,327,622,805 | IssuesEvent | 2018-03-04 12:30:33 | play2-maven-plugin/play2-maven-plugin | https://api.github.com/repos/play2-maven-plugin/play2-maven-plugin | closed | Add 'ebeanModels' plugin configuration parameter | Component-Maven-Plugin Priority-Low Type-Enhancement | See it in [Play Ebean SBT plugin](https://github.com/playframework/play-ebean/blob/1.0.0/sbt-play-ebean/src/main/scala/play/ebean/sbt/PlayEbean.scala#L15). More information in playframework/play-ebean#25
This parameter is needed only when all the below conditions are met:
- multi-module Maven project
- Ebean models are defined in separate module
- Ebean models are not located in default `models` package
- Play! Framework versions 2.1.x - 2.3.x
For Play! Framework versions 2.4.x+, create in the module containing Ebean models a `reference.conf` file (instead of `application.conf`) containing:
```
ebean.default="custom.models.package.*"
```
The solution with `reference.conf` file is better, because Play! reads this configuration while starting Ebean server. This allows proper Ebean configuration for unit testing. Unit tests for Ebean models can be located in the same module.
The `ebeanModels` configuration parameter is used only by the Maven `ebean-enhance` goal. In this case, Ebean models located in modules without a `reference.conf` file must be included in the `application.conf` file of the application using them. Model unit tests must be moved to the application module as well.
| 1.0 | Add 'ebeanModels' plugin configuration parameter - See it in [Play Ebean SBT plugin](https://github.com/playframework/play-ebean/blob/1.0.0/sbt-play-ebean/src/main/scala/play/ebean/sbt/PlayEbean.scala#L15). More information in playframework/play-ebean#25
This parameter is needed only when all the below conditions are met:
- multi-module Maven project
- Ebean models are defined in separate module
- Ebean models are not located in default `models` package
- Play! Framework versions 2.1.x - 2.3.x
For Play! Framework versions 2.4.x+, create in the module containing Ebean models a `reference.conf` file (instead of `application.conf`) containing:
```
ebean.default="custom.models.package.*"
```
The solution with `reference.conf` file is better, because Play! reads this configuration while starting Ebean server. This allows proper Ebean configuration for unit testing. Unit tests for Ebean models can be located in the same module.
The `ebeanModels` configuration parameter is used only by the Maven `ebean-enhance` goal. In this case, Ebean models located in modules without a `reference.conf` file must be included in the `application.conf` file of the application using them. Model unit tests must be moved to the application module as well.
| priority | add ebeanmodels plugin configuration parameter see it in more information in playframework play ebean this parameter is needed only when all the below conditions are met multi module maven project ebean models are defined in separate module ebean models are not located in default models package play framework versions x x for play framework versions x in module containing ebean models create reference conf instead of application conf file containing ebean default custom models package the solution with reference conf file is better because play reads this configuration while starting ebean server this allows proper ebean configuration for unit testing unit tests for ebean models can be located in the same module ebeanmodels configuration parameters is used only by maven ebean enhance goal in this case ebean models located in modules without reference conf file must be included in application conf file of the application using them models unit tests must be moved to application module as well | 1 |
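A sketch of how the requested parameter might be configured in a `pom.xml`; the `<ebeanModels>` element and its value are assumptions modeled on the SBT plugin's setting linked above, not a finalized API:

```xml
<plugin>
  <groupId>com.google.code.play2-maven-plugin</groupId>
  <artifactId>play2-maven-plugin</artifactId>
  <configuration>
    <!-- Hypothetical: model packages scanned by the ebean-enhance goal,
         mirroring the models setting of the Play Ebean SBT plugin -->
    <ebeanModels>custom.models.package.*</ebeanModels>
  </configuration>
</plugin>
```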
131,560 | 5,156,645,759 | IssuesEvent | 2017-01-16 00:51:22 | facelessuser/pymdown-extensions | https://api.github.com/repos/facelessuser/pymdown-extensions | closed | EscapeAll special considerations for escaped new lines and spaces | Feature Priority - Low | EscapeAll should handle spaces and newlines like Pandoc. Escaped new lines will be `<br>` and escaped spaces will be `&nbsp;`. Make this optional.
| 1.0 | EscapeAll special considerations for escaped new lines and spaces - EscapeAll should handle spaces and newlines like Pandoc. Escaped new lines will be `<br>` and escaped spaces will be `&nbsp;`. Make this optional.
| priority | escapeall special considerations for escaped new lines and spaces escapeall should handle spaces and newlines like pandoc escaped new lines will be and escaped spaces will be nbsp make this optional | 1 |
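The transformation described above can be sketched as follows (illustrative only, not the actual pymdown-extensions implementation):

```python
def escape_all(text: str) -> str:
    """Escaped newlines become <br>; escaped spaces become &nbsp;."""
    # Handle backslash-newline first so the backslash-space rule
    # cannot consume the same backslash.
    text = text.replace("\\\n", "<br>\n")
    text = text.replace("\\ ", "&nbsp;")
    return text
```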
676,211 | 23,119,314,183 | IssuesEvent | 2022-07-27 19:41:28 | midas-network/midas-data | https://api.github.com/repos/midas-network/midas-data | closed | Adding topics | low priority | Should we add the topics:
- In the environmental topic
- land use/land cover
- weather
- air quality
Low priority: we don't have any data yet on these topics but might be necessary later on. | 1.0 | Adding topics - Should we add the topics:
- In the environmental topic
- land use/land cover
- weather
- air quality
Low priority: we don't have any data yet on these topics but might be necessary later on. | priority | adding topics should we add the topics in the environmental topic land use land cover weather air quality low priority we don t have any data yet on these topics but might be necessary later on | 1 |
156,526 | 5,970,779,102 | IssuesEvent | 2017-05-30 23:51:07 | Ks89/angular-modal-gallery | https://api.github.com/repos/Ks89/angular-modal-gallery | closed | replace elementref with renderer to support angular-universal | comp:client effort1:easy (hours) priority:low type:feature | This could be not enough for anglar-universal, but it's a starting point | 1.0 | replace elementref with renderer to support angular-universal - This could be not enough for anglar-universal, but it's a starting point | priority | replace elementref with renderer to support angular universal this could be not enough for anglar universal but it s a starting point | 1 |
163,001 | 6,187,604,429 | IssuesEvent | 2017-07-04 08:01:05 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | If paperSize option is used together with footer template pdf export is misaligned | Bug C: Grid F: PDF Export Needs QA Priority 1 SEV: Low | ### Bug report
If the paperSize option is used together with aggregates, the grid PDF export is misaligned
### Reproduction of the problem
Please refer to the http://dojo.telerik.com/AdAlU example. When grid is exported the footer (last row) is misaligned.
### Environment
* **Kendo UI version:** 2017.2.504
* **jQuery version:** 1.12.3
* **Browser:** [all ]
| 1.0 | If paperSize option is used together with footer template pdf export is misaligned - ### Bug report
If the paperSize option is used together with aggregates, the grid PDF export is misaligned
### Reproduction of the problem
Please refer to the http://dojo.telerik.com/AdAlU example. When grid is exported the footer (last row) is misaligned.
### Environment
* **Kendo UI version:** 2017.2.504
* **jQuery version:** 1.12.3
* **Browser:** [all ]
| priority | if papersize option is used together with footer template pdf export is misaligned bug report if papersize option is used together with aggregates the grid pdf export is misaligned reproduction of the problem please refer to the example when grid is exported the footer last row is misaligned environment kendo ui version jquery version browser | 1 |
625,612 | 19,758,547,587 | IssuesEvent | 2022-01-16 02:11:39 | ScottUK/ladojrp-issues | https://api.github.com/repos/ScottUK/ladojrp-issues | reopened | Replace vMenu? | Class: enhancement Scope: scripts Priority: low | **Describe the feature you'd like implemented**
This is very low on my list, but one day I'd like to replace vMenu with something much more efficient. It would have to be done in Lua since FiveM performs best using Lua. I want to keep every feature we use, but remove the features we don't. The main reason is that vMenu causes the majority of the lag on the server, and replacing it would help circumvent this.
| 1.0 | Replace vMenu? - **Describe the feature you'd like implemented**
This is very low on my list, but one day I'd like to replace vMenu with something much more efficient. It would have to be done in Lua since FiveM performs best using Lua. I want to keep every feature we use, but remove the features we don't. The main reason is that vMenu causes the majority of the lag on the server, and replacing it would help circumvent this.
| priority | replace vmenu describe the feature you d like implemented this is very low on my list but one day i d like to replace vmenu with something much more efficient it would have to be done in lua since fivem performs best using lua i want to keep every feature we use but remove the features we don t the main reason is that vmenu causes the majority of the lag on the server and replacing it would help circumvent this | 1
487,809 | 14,059,915,698 | IssuesEvent | 2020-11-03 04:32:51 | JuezUN/INGInious | https://api.github.com/repos/JuezUN/INGInious | closed | Changes on 'tags' tab in task editor. | Change request Course Administration Feature request Frontend Low Priority Task | 1. Editing a task on the 'tags' tab, the checkbox to decide if the tag is shown to students, that checkbox is not too nice or too big (at least it is not shown very well on Firefox). It should be smaller.
2. The second thing not working well is the tag id: it is not shown when you create the tag and enter this tab again.
3. Every tag should have an option to delete it, as the only way to delete a tag is editing it and leaving the tag id blank.
These concerns are shown in the image:

| 1.0 | Changes on 'tags' tab in task editor. - 1. Editing a task on the 'tags' tab, the checkbox to decide if the tag is shown to students, that checkbox is not too nice or too big (at least it is not shown very well on Firefox). It should be smaller.
2. The second thing not working well is the tag id: it is not shown when you create the tag and enter this tab again.
3. Every tag should have an option to delete it, as the only way to delete a tag is editing it and leaving the tag id blank.
These concerns are shown in the image:

| priority | changes on tags tab in task editor editing a task on the tags tab the checkbox to decide if the tag is shown to students that checkbox is not too nice or too big at least it is not shown very well on firefox it should be smaller the second thing not working well is the tag id it is not shown when you create the tag and enter this tab again every tag should have an option to delete it as the only way to delete a tag is editing it and leaving the tag id blank these concerns are shown in the image | 1
356,233 | 10,590,398,341 | IssuesEvent | 2019-10-09 08:41:19 | kiwicom/schemathesis | https://api.github.com/repos/kiwicom/schemathesis | closed | Add a user-agent to CLI runner | Priority: Low Type: Enhancement | It will make the requests distinguishable
It should include version info | 1.0 | Add a user-agent to CLI runner - It will make the requests distinguishable
It should include version info | priority | add a user agent to cli runner it will make the requests distinguishable it should include version info | 1 |
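A minimal sketch of what is requested (function name and version string are illustrative, not the actual schemathesis implementation):

```python
def build_user_agent(tool: str, version: str) -> str:
    """Version-stamped User-Agent value, e.g. "schemathesis/0.9.0"."""
    return f"{tool}/{version}"

# Sent with every CLI request so server logs can distinguish the runner.
headers = {"User-Agent": build_user_agent("schemathesis", "0.9.0")}
```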
298,008 | 9,188,370,926 | IssuesEvent | 2019-03-06 07:10:34 | cilium/cilium | https://api.github.com/repos/cilium/cilium | opened | Extraneous warning log messages while deleting endpoint ("Ignoring error while deleting endpoint") | priority/low | Hit in #7277 (~master), but doesn't appear related to the PR:
https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/10501/testReport/junit/k8s-1/13/K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled/
During endpoint deletion, we hit these errors:
```
2019-03-06T02:41:13.498482778Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="Unable to delete key 10.10.1.21 from /sys/fs/bpf/tc/globals/cilium_lxc: Unable to delete element from map cilium_lxc: no such file or directory" subsys=daemon
2019-03-06T02:41:13.49848667Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="unable to remove endpoint from global policy map: Unable to delete element from map cilium_policy: no such file or directory" subsys=daemon
```
The "no such file or directory" likely just means that we are attempting to remove an element from the map and the element was already removed from the map.
<details>
<summary>cilium.log filtered by endpointID (below the fold, open to see)</summary>
2019-03-06T02:39:43.401641147Z level=info msg="New endpoint" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:39:43.401679297Z level=debug msg="Refreshing labels of endpoint" containerID=6673a04111 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" infoLabels="k8s:controller-revision-hash=56cf897587,k8s:pod-template-generation=1" subsys=endpoint
2019-03-06T02:39:43.40168597Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:pod-template-generation Value:1 Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.401689447Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:controller-revision-hash Value:56cf897587 Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.401717094Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.namespace Value:default Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.401723877Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.cilium.k8s.policy.serviceaccount Value:default Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.401727794Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.cilium.k8s.policy.cluster Value:default Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.40173152Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:zgroup Value:testDSClient Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.40176299Z level=debug msg="Endpoint has reserved identity, changing synchronously" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:39:43.40177066Z level=debug msg="Resolving identity for labels" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:39:45.33361152Z level=debug msg="Associated container event with endpoint" containerID=6673a04111 containerName=/k8s_POD_testclient-596bw_default_113b2a39-3fb9-11e9-8ea7-080027051dad_1 endpointID=1968 maxRetry=20 retry=2 subsys=workload-watcher willRetry=true
2019-03-06T02:39:45.343950813Z level=debug msg="Refreshing labels of endpoint" containerID=6673a04111 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" infoLabels="container:annotation.kubernetes.io/config.seen=2019-03-06T02:39:30.205869039Z,container:annotation.kubernetes.io/config.source=api,container:io.kubernetes.container.name=POD,container:io.kubernetes.docker.type=podsandbox,container:io.kubernetes.pod.name=testclient-596bw,container:io.kubernetes.pod.uid=113b2a39-3fb9-11e9-8ea7-080027051dad,k8s:controller-revision-hash=56cf897587,k8s:pod-template-generation=1" subsys=endpoint
2019-03-06T02:39:45.343987734Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.docker.type Value:podsandbox Source:container}" subsys=endpoint
2019-03-06T02:39:45.344018277Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:annotation.kubernetes.io/config.source Value:api Source:container}" subsys=endpoint
2019-03-06T02:39:45.344023001Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:annotation.kubernetes.io/config.seen Value:2019-03-06T02:39:30.205869039Z Source:container}" subsys=endpoint
2019-03-06T02:39:45.344025616Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.container.name Value:POD Source:container}" subsys=endpoint
2019-03-06T02:39:45.344049539Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.name Value:testclient-596bw Source:container}" subsys=endpoint
2019-03-06T02:39:45.344054119Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.uid Value:113b2a39-3fb9-11e9-8ea7-080027051dad Source:container}" subsys=endpoint
2019-03-06T02:39:53.41266576Z level=debug msg="Deleting CEP on first run" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:13.440507596Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:23.440877672Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:33.441221111Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:43.443382057Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:53.443486945Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:41:03.447793751Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:41:13.433721275Z level=debug msg="Deleting endpoint" code=OK containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 endpointState=disconnecting ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw policyRevision=0 subsys=endpoint type=0
2019-03-06T02:41:13.43372899Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.43373182Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_next_fail endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.43373424Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_next endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.43374721Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_stale endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.498430541Z level=debug msg="Endpoint removed" code=OK containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 endpointState=disconnected ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw policyRevision=0 subsys=endpoint type=0
2019-03-06T02:41:13.498433116Z level=info msg="Removed endpoint" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.498470701Z level=debug msg="Waiting for proxy updates to complete..." containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.498478653Z level=debug msg="Wait time for proxy updates: 15.136µs" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.498482778Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="Unable to delete key 10.10.1.21 from /sys/fs/bpf/tc/globals/cilium_lxc: Unable to delete element from map cilium_lxc: no such file or directory" subsys=daemon
2019-03-06T02:41:13.49848667Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="unable to remove endpoint from global policy map: Unable to delete element from map cilium_policy: no such file or directory" subsys=daemon
</details>
[pod-kube-system-cilium-f8gx9-cilium-agent.log](https://github.com/cilium/cilium/files/2934898/pod-kube-system-cilium-f8gx9-cilium-agent.log)
[e2d99aa6_K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled.zip](https://github.com/cilium/cilium/files/2934899/e2d99aa6_K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled.zip)
| 1.0 | Extraneous warning log messages while deleting endpoint ("Ignoring error while deleting endpoint") - Hit in #7277 (~master), but doesn't appear related to the PR:
https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/10501/testReport/junit/k8s-1/13/K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled/
During endpoint deletion, we hit these errors:
```
2019-03-06T02:41:13.498482778Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="Unable to delete key 10.10.1.21 from /sys/fs/bpf/tc/globals/cilium_lxc: Unable to delete element from map cilium_lxc: no such file or directory" subsys=daemon
2019-03-06T02:41:13.49848667Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="unable to remove endpoint from global policy map: Unable to delete element from map cilium_policy: no such file or directory" subsys=daemon
```
The "no such file or directory" likely just means that we are attempting to remove an element from the map and the element was already removed from the map.
<details>
<summary>cilium.log filtered by endpointID (below the fold, open to see)</summary>
2019-03-06T02:39:43.401641147Z level=info msg="New endpoint" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:39:43.401679297Z level=debug msg="Refreshing labels of endpoint" containerID=6673a04111 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" infoLabels="k8s:controller-revision-hash=56cf897587,k8s:pod-template-generation=1" subsys=endpoint
2019-03-06T02:39:43.40168597Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:pod-template-generation Value:1 Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.401689447Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:controller-revision-hash Value:56cf897587 Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.401717094Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.namespace Value:default Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.401723877Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.cilium.k8s.policy.serviceaccount Value:default Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.401727794Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.cilium.k8s.policy.cluster Value:default Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.40173152Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:zgroup Value:testDSClient Source:k8s}" subsys=endpoint
2019-03-06T02:39:43.40176299Z level=debug msg="Endpoint has reserved identity, changing synchronously" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:39:43.40177066Z level=debug msg="Resolving identity for labels" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:39:45.33361152Z level=debug msg="Associated container event with endpoint" containerID=6673a04111 containerName=/k8s_POD_testclient-596bw_default_113b2a39-3fb9-11e9-8ea7-080027051dad_1 endpointID=1968 maxRetry=20 retry=2 subsys=workload-watcher willRetry=true
2019-03-06T02:39:45.343950813Z level=debug msg="Refreshing labels of endpoint" containerID=6673a04111 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" infoLabels="container:annotation.kubernetes.io/config.seen=2019-03-06T02:39:30.205869039Z,container:annotation.kubernetes.io/config.source=api,container:io.kubernetes.container.name=POD,container:io.kubernetes.docker.type=podsandbox,container:io.kubernetes.pod.name=testclient-596bw,container:io.kubernetes.pod.uid=113b2a39-3fb9-11e9-8ea7-080027051dad,k8s:controller-revision-hash=56cf897587,k8s:pod-template-generation=1" subsys=endpoint
2019-03-06T02:39:45.343987734Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.docker.type Value:podsandbox Source:container}" subsys=endpoint
2019-03-06T02:39:45.344018277Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:annotation.kubernetes.io/config.source Value:api Source:container}" subsys=endpoint
2019-03-06T02:39:45.344023001Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:annotation.kubernetes.io/config.seen Value:2019-03-06T02:39:30.205869039Z Source:container}" subsys=endpoint
2019-03-06T02:39:45.344025616Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.container.name Value:POD Source:container}" subsys=endpoint
2019-03-06T02:39:45.344049539Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.name Value:testclient-596bw Source:container}" subsys=endpoint
2019-03-06T02:39:45.344054119Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.uid Value:113b2a39-3fb9-11e9-8ea7-080027051dad Source:container}" subsys=endpoint
2019-03-06T02:39:53.41266576Z level=debug msg="Deleting CEP on first run" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:13.440507596Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:23.440877672Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:33.441221111Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:43.443382057Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:40:53.443486945Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:41:03.447793751Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer
2019-03-06T02:41:13.433721275Z level=debug msg="Deleting endpoint" code=OK containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 endpointState=disconnecting ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw policyRevision=0 subsys=endpoint type=0
2019-03-06T02:41:13.43372899Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.43373182Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_next_fail endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.43373424Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_next endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.43374721Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_stale endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.498430541Z level=debug msg="Endpoint removed" code=OK containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 endpointState=disconnected ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw policyRevision=0 subsys=endpoint type=0
2019-03-06T02:41:13.498433116Z level=info msg="Removed endpoint" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.498470701Z level=debug msg="Waiting for proxy updates to complete..." containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.498478653Z level=debug msg="Wait time for proxy updates: 15.136µs" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint
2019-03-06T02:41:13.498482778Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="Unable to delete key 10.10.1.21 from /sys/fs/bpf/tc/globals/cilium_lxc: Unable to delete element from map cilium_lxc: no such file or directory" subsys=daemon
2019-03-06T02:41:13.49848667Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="unable to remove endpoint from global policy map: Unable to delete element from map cilium_policy: no such file or directory" subsys=daemon
</details>
[pod-kube-system-cilium-f8gx9-cilium-agent.log](https://github.com/cilium/cilium/files/2934898/pod-kube-system-cilium-f8gx9-cilium-agent.log)
[e2d99aa6_K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled.zip](https://github.com/cilium/cilium/files/2934899/e2d99aa6_K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled.zip)
| priority | extraneous warning log messages while deleting endpoint ignoring error while deleting endpoint hit in master but doesn t appear related to the pr during endpoint deletion we hit these errors level warning msg ignoring error while deleting endpoint endpointid error unable to delete key from sys fs bpf tc globals cilium lxc unable to delete element from map cilium lxc no such file or directory subsys daemon level warning msg ignoring error while deleting endpoint endpointid error unable to remove endpoint from global policy map unable to delete element from map cilium policy no such file or directory subsys daemon the no such file or directory likely just means that we are attempting to remove an element from the map and the element was already removed from the map cilium log filtered by endpointid below the fold open to see level info msg new endpoint containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpoint level debug msg refreshing labels of endpoint containerid endpointid identitylabels io cilium policy cluster default io cilium policy serviceaccount default io kubernetes pod namespace default zgroup testdsclient infolabels controller revision hash pod template generation subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key pod template generation value source subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key controller revision hash value source subsys endpoint level debug msg assigning security relevant label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes pod namespace value default source subsys endpoint level debug msg assigning security relevant label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient 
obj key io cilium policy serviceaccount value default source subsys endpoint level debug msg assigning security relevant label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io cilium policy cluster value default source subsys endpoint level debug msg assigning security relevant label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key zgroup value testdsclient source subsys endpoint level debug msg endpoint has reserved identity changing synchronously containerid datapathpolicyrevision desiredpolicyrevision endpointid identitylabels io cilium policy cluster default io cilium policy serviceaccount default io kubernetes pod namespace default zgroup testdsclient default testclient subsys endpoint level debug msg resolving identity for labels containerid datapathpolicyrevision desiredpolicyrevision endpointid identitylabels io cilium policy cluster default io cilium policy serviceaccount default io kubernetes pod namespace default zgroup testdsclient default testclient subsys endpoint level debug msg associated container event with endpoint containerid containername pod testclient default endpointid maxretry retry subsys workload watcher willretry true level debug msg refreshing labels of endpoint containerid endpointid identitylabels io cilium policy cluster default io cilium policy serviceaccount default io kubernetes pod namespace default zgroup testdsclient infolabels container annotation kubernetes io config seen container annotation kubernetes io config source api container io kubernetes container name pod container io kubernetes docker type podsandbox container io kubernetes pod name testclient container io kubernetes pod uid controller revision hash pod template generation subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes docker type value podsandbox 
source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key annotation kubernetes io config source value api source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key annotation kubernetes io config seen value source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes container name value pod source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes pod name value testclient source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes pod uid value source container subsys endpoint level debug msg deleting cep on first run containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update 
because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg deleting endpoint code ok containerid datapathpolicyrevision desiredpolicyrevision endpointid endpointstate disconnecting default testclient policyrevision subsys endpoint type level debug msg removing directory containerid datapathpolicyrevision desiredpolicyrevision directory endpointid default testclient subsys endpoint level debug msg removing directory containerid datapathpolicyrevision desiredpolicyrevision directory next fail endpointid default testclient subsys endpoint level debug msg removing directory containerid datapathpolicyrevision desiredpolicyrevision directory next endpointid default testclient subsys endpoint level debug msg removing directory containerid datapathpolicyrevision desiredpolicyrevision directory stale endpointid default testclient subsys endpoint level debug msg endpoint removed code ok containerid datapathpolicyrevision desiredpolicyrevision endpointid endpointstate disconnected default testclient policyrevision subsys endpoint type level info msg removed endpoint containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpoint level debug msg waiting for proxy updates to complete containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpoint level debug msg wait time for proxy updates containerid datapathpolicyrevision 
desiredpolicyrevision endpointid default testclient subsys endpoint level warning msg ignoring error while deleting endpoint endpointid error unable to delete key from sys fs bpf tc globals cilium lxc unable to delete element from map cilium lxc no such file or directory subsys daemon level warning msg ignoring error while deleting endpoint endpointid error unable to remove endpoint from global policy map unable to delete element from map cilium policy no such file or directory subsys daemon | 1 |
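The two warnings at the end of this log are the crux of the report: by the time cleanup runs, the BPF map element is already gone, so "no such file or directory" is harmless during teardown. One common remedy is to treat "not found" as success when deleting. A minimal TypeScript sketch of that pattern (the `MapLike` interface and the error text are illustrative assumptions, not Cilium's actual Go API):

```typescript
// Illustrative sketch: idempotent cleanup that tolerates "already deleted".
// `MapLike` stands in for a BPF-map-style store; not Cilium's real API.
interface MapLike {
  // Throws Error("no such file or directory") when the key is absent.
  delete(key: string): void;
}

function deleteIgnoringNotFound(map: MapLike, key: string): boolean {
  try {
    map.delete(key);
    return true; // element existed and was removed
  } catch (err) {
    if (err instanceof Error && err.message.includes("no such file or directory")) {
      return false; // already gone: a no-op during cleanup, not a failure
    }
    throw err; // genuinely unexpected errors still propagate
  }
}
```

With this pattern a first deletion reports true, a repeated deletion reports false, and only unexpected errors would still surface as warnings.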
752,455 | 26,286,459,178 | IssuesEvent | 2023-01-07 22:13:05 | tchassijordan/Street-Chickn | https://api.github.com/repos/tchassijordan/Street-Chickn | opened | Store cart items in local storage for retrieval after reload | enhancement low priority | When you add items to the cart, they are stored as state in the app. But the way it's wired up now, if you reload the page all the content in the cart will be lost, since the app state is reinitialized on page reload.
Storing the items in the cart will give us the possibility to make the cart work like a non-volatile storage medium, where the items chosen before can be accessed even after reload. | 1.0 | Store cart items in local storage for retrieval after reload - When you add items to the cart, they are stored as state in the app. But the way it's wired up now, if you reload the page all the content in the cart will be lost, since the app state is reinitialized on page reload.
Storing the items in the cart will give us the possibility to make the cart work like a non-volatile storage medium, where the items chosen before can be accessed even after reload. | priority | store cart items in local storage for retrieval after reload when you add items to the cart they are stored in as state in the app but the way it s wired up now if you reload the page all the content in the will be lost since the app state is reinitialized on page reload storing the items in the cart will give us the possibility to make the cart work like a non volatile storage media where the items chosen before can be access even after reload | 1
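A minimal TypeScript sketch of the persistence the issue above asks for, assuming a JSON-serializable cart and a `localStorage`-shaped backend. The storage key and the `CartItem` shape are hypothetical, not the app's actual code:

```typescript
// Hypothetical cart shape and key; the real app's types will differ.
interface CartItem {
  id: string;
  qty: number;
}

const CART_KEY = "street-chickn:cart";

// Narrow localStorage-like interface, so the sketch also runs outside a browser.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function saveCart(items: CartItem[], storage: StorageLike): void {
  // Serialize on every change so a reload can restore the latest state.
  storage.setItem(CART_KEY, JSON.stringify(items));
}

function loadCart(storage: StorageLike): CartItem[] {
  const raw = storage.getItem(CART_KEY);
  return raw ? (JSON.parse(raw) as CartItem[]) : []; // fresh cart on first visit
}
```

In the browser these would be called with `window.localStorage`; the app would call `loadCart` once when initializing its cart state and `saveCart` whenever the cart changes.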
663,064 | 22,160,796,712 | IssuesEvent | 2022-06-04 13:32:22 | PurplePalette/sonolus-fastapi | https://api.github.com/repos/PurplePalette/sonolus-fastapi | opened | Add template string / macro support to announcements | [Type] new feature✨ [Status] help wanted [Priority] low 💤 | ## Reason for the proposal
- To show users more useful information in announcements
## Proposed content
- Support template strings in the announcement display, so that specific strings such as `[TOTAL_LEVELS]` are replaced when rendered
- [TOTAL_LEVELS] / [TOTAL_SKINS] / [TOTAL_PARTICLES] / [TOTAL_BACKGROUNDS] / [TOTAL_ENGINES]
- Shows the total number of the corresponding items registered on the server
- [LAST_UPDATE_LEVEL_DATE_M] [LAST_UPDATE_LEVEL_DATE_D] [LAST_UPDATE_LEVEL_DATE_Y]
- Shows the date (year/month/day) when the most recently registered data on the server was updated
- Only LEVEL is listed as an example, but I think the other element types should be supported as well
- (Split into separate tokens to support different date formats in other languages)
- [LAST_UPDATE_LEVEL_NAME]
- Shows the name of the most recently registered data on the server
- Only LEVEL is listed as an example, but I think the other element types should be supported as well | 1.0 | Add template string / macro support to announcements - ## Reason for the proposal
- To show users more useful information in announcements
## Proposed content
- Support template strings in the announcement display, so that specific strings such as `[TOTAL_LEVELS]` are replaced when rendered
- [TOTAL_LEVELS] / [TOTAL_SKINS] / [TOTAL_PARTICLES] / [TOTAL_BACKGROUNDS] / [TOTAL_ENGINES]
- Shows the total number of the corresponding items registered on the server
- [LAST_UPDATE_LEVEL_DATE_M] [LAST_UPDATE_LEVEL_DATE_D] [LAST_UPDATE_LEVEL_DATE_Y]
- Shows the date (year/month/day) when the most recently registered data on the server was updated
- Only LEVEL is listed as an example, but I think the other element types should be supported as well
- (Split into separate tokens to support different date formats in other languages)
- [LAST_UPDATE_LEVEL_NAME]
- Shows the name of the most recently registered data on the server
- Only LEVEL is listed as an example, but I think the other element types should be supported as well | priority | add template string macro support to announcements reason for the proposal to show users more useful information in announcements proposed content support template strings in the announcement display so that specific strings such as are replaced when rendered shows the total number of the corresponding items registered on the server shows the date year month day when the most recently registered data on the server was updated only level is listed as an example but i think the other element types should be supported as well split into separate tokens to support different date formats in other languages shows the name of the most recently registered data on the server only level is listed as an example but i think the other element types should be supported as well | 1
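The proposed macros boil down to a dictionary substitution over the announcement text. A small TypeScript sketch, assuming a hypothetical `ServerStats` object; only two of the proposed macros are shown, and none of these names come from the sonolus-fastapi codebase itself:

```typescript
// Hypothetical stats source; the real server would query its database.
interface ServerStats {
  totalLevels: number;
  lastUpdateLevelName: string;
}

function renderAnnouncement(template: string, stats: ServerStats): string {
  // Macro names follow the issue's proposal; extend the map for the others.
  const macros: Record<string, string> = {
    "[TOTAL_LEVELS]": String(stats.totalLevels),
    "[LAST_UPDATE_LEVEL_NAME]": stats.lastUpdateLevelName,
  };
  // Replace every occurrence of each macro; unknown text is left untouched.
  return Object.entries(macros).reduce(
    (text, [key, value]) => text.split(key).join(value),
    template,
  );
}
```

Keeping the date macros as separate tokens ([..._DATE_M], [..._DATE_D], [..._DATE_Y]), as the issue suggests, lets each locale order them however its date format requires.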
209,278 | 7,167,138,383 | IssuesEvent | 2018-01-29 19:32:19 | jaedb/Iris | https://api.github.com/repos/jaedb/Iris | closed | [Feature request] Play all tracks from Discover | low priority | I like the Discover tab. But on mobile device it is hard to just play all discovered tracks, at least I don't know any other way to just tap on every track and then play them. Is it possible to add something like "play" button which will add all track to current playlist and play them ? | 1.0 | [Feature request] Play all tracks from Discover - I like the Discover tab. But on mobile device it is hard to just play all discovered tracks, at least I don't know any other way to just tap on every track and then play them. Is it possible to add something like "play" button which will add all track to current playlist and play them ? | priority | play all tracks from discover i like the discover tab but on mobile device it is hard to just play all discovered tracks at least i don t know any other way to just tap on every track and then play them is it possible to add something like play button which will add all track to current playlist and play them | 1 |
754,720 | 26,399,342,236 | IssuesEvent | 2023-01-12 22:55:21 | aave/interface | https://api.github.com/repos/aave/interface | reopened | Rounding error causing transactions to fail when supplying coins | bug priority:low | **Describe the bug**
When attempting to supply all of a wallet's USDC to AAVE's polygon network, the transaction fails.
The UI supplies error text "Internal JSON-RPC error."
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'AAVE Polygon USDC market'
2. Click on 'Supply'
3. Select "Max"
4. Send transaction
5. Observe transaction failure.
**Expected behavior**
The transaction should succeed with all coins from the wallet.
**Screenshots**
N/A
**Desktop (please complete the following information):**
- OS: [e.g. iOS] Windows
- Browser [e.g. chrome, safari] Edge
- Version [e.g. 22] 101
**Additional context**
The transaction succeeded after I manually reduced the transaction size by 0.1 coins. My assumption is that there is a rounding mismatch happening somewhere.
| 1.0 | Rounding error causing transactions to fail when supplying coins - **Describe the bug**
When attempting to supply all of a wallet's USDC to AAVE's polygon network, the transaction fails.
The UI supplies error text "Internal JSON-RPC error."
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'AAVE Polygon USDC market'
2. Click on 'Supply'
3. Select "Max"
4. Send transaction
5. Observe transaction failure.
**Expected behavior**
The transaction should succeed with all coins from the wallet.
**Screenshots**
N/A
**Desktop (please complete the following information):**
- OS: [e.g. iOS] Windows
- Browser [e.g. chrome, safari] Edge
- Version [e.g. 22] 101
**Additional context**
The transaction succeeded after I manually reduced the transaction size by 0.1 coins. My assumption is that there is a rounding mismatch happening somewhere.
| priority | rounding error causing transactions to fail when supplying coins describe the bug when attempting to supply all of a wallet s usdc to aave s polygon network the transaction fails the ui supplies error text internal json rpc error to reproduce steps to reproduce the behavior go to aave polygon usdc market click on supply select max send transaction observe transaction failure expected behavior the transaction should succeed with all coins from the wallet screenshots n a desktop please complete the following information os windows browser edge version additional context the transaction succeeded after i manually reduced the transaction size by coins my assumption is that there is a rounding mismatch happening somewhere | 1 |
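The reporter's guess is plausible: if the UI rounds the wallet balance up to the token's display precision, "Max" can request slightly more than is actually held, and the transaction reverts. One common fix is to truncate rather than round when deriving the max amount. A hedged TypeScript sketch; the helper name is hypothetical and not the Aave interface's real code:

```typescript
// Illustrative fix: floor a decimal-string amount to the token's decimals
// (USDC has 6) instead of rounding, so "Max" never exceeds the balance.
// String arithmetic avoids floating-point error for token amounts.
function floorToDecimals(amount: string, decimals: number): string {
  const [whole, frac = ""] = amount.split(".");
  const kept = frac.slice(0, decimals); // drop excess digits, never round up
  return kept.length > 0 ? `${whole}.${kept}` : whole;
}
```

Applied to a balance like "1.23456789" with 6 decimals this yields "1.234567", whereas rounding could yield "1.234568" and overshoot the wallet by one base unit.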
463,066 | 13,259,049,936 | IssuesEvent | 2020-08-20 16:10:56 | rstudio/plumber | https://api.github.com/repos/rstudio/plumber | closed | `removeNAOrNulls` breaks on no swagger spec | difficulty: novice effort: low help wanted priority: high | From #417
```r
pr <- plumber$new()
pr$handle("GET", "/:path/here", function(){})
pr$run(
port = 1234,
swagger = function(pr_, spec, ...) {
spec$info$title <- Sys.time()
spec
}
)
```
Visiting http://127.0.0.1:1234/openapi.json causes an error
```
<simpleError in if (any(toRemove)) { x[toRemove] <- NULL}: missing value where TRUE/FALSE needed>
``` | 1.0 | `removeNAOrNulls` breaks on no swagger spec - From #417
```r
pr <- plumber$new()
pr$handle("GET", "/:path/here", function(){})
pr$run(
port = 1234,
swagger = function(pr_, spec, ...) {
spec$info$title <- Sys.time()
spec
}
)
```
Visiting http://127.0.0.1:1234/openapi.json causes an error
```
<simpleError in if (any(toRemove)) { x[toRemove] <- NULL}: missing value where TRUE/FALSE needed>
``` | priority | removenaornulls breaks on no swagger spec from r pr plumber new pr handle get path here function pr run port swagger function pr spec spec info title sys time spec visiting causes an error | 1 |
193,600 | 6,886,382,354 | IssuesEvent | 2017-11-21 19:16:55 | WallarooLabs/wallaroo | https://api.github.com/repos/WallarooLabs/wallaroo | closed | Wesley should provide more information on failures. | priority: low | Currently, when failures occur during the Drone CI builds, we have no information on what went wrong, only a high-level message from Wesley that something didn't match. It would be ideal if Wesley printed out additional information.
Additionally, we should look into saving sent.txt and received.txt for each run for debugging.
 | 1.0 | Wesley should provide more information on failures. - Currently, when failures occur during the Drone CI builds, we have no information on what went wrong, only a high-level message from Wesley that something didn't match. It would be ideal if Wesley printed out additional information.
Additionally, we should look into saving sent.txt and received.txt for each run for debugging.
| priority | wesley should provide more information on failures currently when failures occur during the drone ci builds we have no information on what went wrong but only a high level message from wesley that something didn t match it would be ideal if wesley printed out additional information additionally we should look into saving sent txt and received txt for each run for debugging | 1 |
292,430 | 8,957,933,430 | IssuesEvent | 2019-01-27 09:42:02 | joe27g/EnhancedDiscord | https://api.github.com/repos/joe27g/EnhancedDiscord | closed | Server Full format emote | category: minor category: plugins priority: low status: planned type: enhancement | A plugin that shows all available emotes of the server in the full format, like `<a:yeet:416559878291718144>` | 1.0 | Server Full format emote - A plugin that shows all available emotes of the server in the full format, like `<a:yeet:416559878291718144>` | priority | server full format emote a plugin that show all available emotes of the server in the full format like | 1
259,131 | 8,188,172,073 | IssuesEvent | 2018-08-30 00:13:49 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [Art] Weird Graphic effect when deep underwater looking up and around | Low Priority | **Version:** 0.7.0.0 beta staging-d2423858





The last one took me some time to do, to prove it's not the Meteor or the Sun. Also, on the Sun you can see a sort of invisible glitch as well.
Looking at this invisible one, I think it might be the central point of the sky, but it's only visible after you go down 20+ blocks or more | 1.0 | [Art] Weird Graphic effect when deep underwater looking up and around - **Version:** 0.7.0.0 beta staging-d2423858





The last one took me some time to do, to prove it's not the Meteor or the Sun. Also, on the Sun you can see a sort of invisible glitch as well.
Looking at this invisible one, I think it might be the central point of the sky, but it's only visible after you go down 20+ blocks or more | priority | weird graphic effect when deep underwater looking up and around version beta staging the last one took me time to do to prove its not meteor or the sun also on the sun you can see a sort of invisible glitch as well looking at this invisible one i think it might be the central point of the sky but its only visible after you go down blocks or more | 1
300,032 | 9,206,074,519 | IssuesEvent | 2019-03-08 12:38:18 | qissue-bot/QGIS | https://api.github.com/repos/qissue-bot/QGIS | closed | No attribute values for split feature | Category: Digitising Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report | ---
Author Name: **cmoe -** (cmoe -)
Original Redmine Issue: 1381, https://issues.qgis.org/issues/1381
Original Assignee: nobody -
---
If I split a feature, only one part keeps the attribute values:
1. load a layer
2. toggle editing state to start editing
3. split a feature
-> one of the now two parts of the feature has no attribute values in the attribute table
tested with lines and polygons, as shapefile and as PostGIS layer
 | 1.0 | No attribute values for split feature - ---
Author Name: **cmoe -** (cmoe -)
Original Redmine Issue: 1381, https://issues.qgis.org/issues/1381
Original Assignee: nobody -
---
If I split a feature, only one part keeps the attribute values:
1. load a layer
2. toggle editing state to start editing
3. split a feature
-> one of the now two parts of the feature has no attribute values in the attribute table
tested with lines and polygons, as shapefile and as PostGIS layer
| priority | no attribute values for splitted feature author name cmoe cmoe original redmine issue original assignee nobody if i split a feature only one part keeps the attribute values load a layer toggle editing state to start editing split a feature one of the now two parts of the feature has no attribute values in the attribute table tested with lines und polygon as shapefile and as postgis layer | 1 |
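The expected behavior amounts to copying the original attribute map onto every part produced by the split. A TypeScript sketch of that invariant; the `Feature` type here is purely illustrative and not QGIS's actual C++/Python API:

```typescript
// Illustrative feature model: a geometry plus a map of attribute values.
interface Feature {
  geometry: string; // placeholder for a real geometry object
  attributes: Record<string, unknown>;
}

// Every part produced by a split inherits a copy of the original's
// attributes, so neither part ends up with an empty attribute row.
function splitFeature(original: Feature, partGeometries: string[]): Feature[] {
  return partGeometries.map((geometry) => ({
    geometry,
    attributes: { ...original.attributes },
  }));
}
```

Copying (rather than sharing) the attribute object also keeps later edits to one part from silently changing the other.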
5,963 | 2,581,331,944 | IssuesEvent | 2015-02-14 00:23:37 | nprapps/syria | https://api.github.com/repos/nprapps/syria | closed | Chrome/iOS: Title image doesn't display (just occasionally flickers) | Priority: Low | I see a gray screen behind the text instead.
Only Chrome/iOS. Fine in Safari. | 1.0 | Chrome/iOS: Title image doesn't display (just occasionally flickers) - I see a gray screen behind the text instead.
Only Chrome/iOS. Fine in Safari. | priority | chrome ios title image doesn t display just occasionally flickers i see a gray screen behind the text instead only chrome ios fine in safari | 1 |
272,708 | 8,516,427,895 | IssuesEvent | 2018-11-01 02:39:57 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | cancelling work item "add label" or "add assignee" causes "Workitem updated." toast notification | SEV4-low area/planner priority/P4 team/planner type/bug | Even though no changes to a work item have occurred, the user is presented with a toast notification "Workitem updated" when cancelling the addition of a label or an assignee.
**Steps to reproduce:**
- Create a work item
- Open work item
- Click assign labels button
- Click the x button to close the popup
**Observe:** Toast notification saying "Workitem updated."
**Expected:** No toast notification and for dialog to close silently. | 1.0 | cancelling work item "add label" or "add assignee" causes "Workitem updated." toast notification - Even though no changes to a work item have occurred, the user is presented with a toast notification "Workitem updated" when cancelling the addition of a label or an assignee.
**Steps to reproduce:**
- Create a work item
- Open work item
- Click assign labels button
- Click the x button to close the popup
**Observe:** Toast notification saying "Workitem updated."
**Expected:** No toast notification and for dialog to close silently. | priority | cancelling work item add label or add assignee causes workitem updated toast notification even though no changes to a work item have occurred the user is presented with a toast notification workitem updated when cancelling the addition of a label or an assignee steps to reproduce create a work item open work item click assign labels button click the x button to close the popup observe toast notification saying workitem updated expected no toast notification and for dialog to close silently | 1 |
25,238 | 2,678,318,749 | IssuesEvent | 2015-03-26 09:55:52 | cs2103jan2015-t15-1j/main | https://api.github.com/repos/cs2103jan2015-t15-1j/main | closed | Shortcut commands | priority.low type.enhancement | List command abbreviation requests here!
<add>
'a'
<display>
'dis'
'ls'
<edit>
'e'
<delete>
'del'
'rm'
'remove'
<done>
'do'
<undo>
'u'
<redo>
'r'
| 1.0 | Shortcut commands - List command abbreviation requests here!
<add>
'a'
<display>
'dis'
'ls'
<edit>
'e'
<delete>
'del'
'rm'
'remove'
<done>
'do'
<undo>
'u'
<redo>
'r'
| priority | shortcut commands list command abbreviation requests here a dis ls e del rm remove do u r | 1 |
552,904 | 16,330,451,445 | IssuesEvent | 2021-05-12 08:35:04 | sopra-fs21-group-22/client | https://api.github.com/repos/sopra-fs21-group-22/client | closed | #40 As a user I want my distance between me and any other player to be decreased by 1 if I own the SCOPE card, this decrease only goes one way. | Blue cards low priority task user story | Acceptance Criteria:
- [x] The holder of the SCOPE card can shoot at anyone that is +1 of his actual range.
- [x] The range for other players against the holder of this card must stay unchanged
⏰ Time estimate: 1h | 1.0 | #40 As a user I want my distance between me and any other player to be decreased by 1 if I own the SCOPE card, this decrease only goes one way. - Acceptance Criteria:
- [x] The holder of the SCOPE card can shoot at anyone that is +1 of his actual range.
- [x] The range for other players against the holder of this card must stay unchanged
⏰ Time estimate: 1h | priority | as a user i want my distance between me and any other player to be decreased by if i own the scope card this decrease only goes one way acceptance criteria the holder of the scope card can shoot at anyone that is of his actual range the range for other players against the holder of this card must stay unchanged ⏰ time estimate | 1 |
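The two acceptance criteria above can be captured in a single distance function where the SCOPE bonus applies only on the shooter's side, never the target's. A TypeScript sketch; the `Player` shape and the seat-based table distance are assumptions about the game model, not the project's real code:

```typescript
// Illustrative game model: players sit around a circular table.
interface Player {
  seat: number;     // position around the table
  hasScope: boolean;
}

function effectiveDistance(shooter: Player, target: Player, tableSize: number): number {
  const gap = Math.abs(shooter.seat - target.seat);
  const base = Math.min(gap, tableSize - gap); // shortest way around the table
  // SCOPE only helps its owner when shooting; distance is unchanged
  // when others target the SCOPE holder, and never drops below 1.
  return shooter.hasScope ? Math.max(1, base - 1) : base;
}
```

Because the modifier is read from the shooter only, the "one way" criterion holds by construction: swapping shooter and target restores the unmodified distance.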
382,456 | 11,306,701,468 | IssuesEvent | 2020-01-18 15:55:38 | ayumi-cloud/oc-security-module | https://api.github.com/repos/ayumi-cloud/oc-security-module | opened | Add some whitelist exceptions to definitions file | Add to Whitelist Firewall Priority: Low enhancement in-progress | ### Enhancement idea
- [ ] Facebook Tracking
```
/?fbclid=IwAR3nGXGTREt4cH5TvgqxXHF-6gmu2fWaLh2q4QGLe4FVDSVerdfveKK7VsE
```
- [ ] Index is only php allowed
```
index.php
```
| 1.0 | Add some whitelist exceptions to definitions file - ### Enhancement idea
- [ ] Facebook Tracking
```
/?fbclid=IwAR3nGXGTREt4cH5TvgqxXHF-6gmu2fWaLh2q4QGLe4FVDSVerdfveKK7VsE
```
- [ ] Index is only php allowed
```
index.php
```
| priority | add some whitelist exceptions to definitions file enhancement idea facebook tracking fbclid index is only php allowed index php | 1 |
670,763 | 22,703,088,730 | IssuesEvent | 2022-07-05 12:31:13 | hermeznetwork/bridge-ui | https://api.github.com/repos/hermeznetwork/bridge-ui | closed | Improve the tokens search bar | priority: low type: enhancement | Currently, the token list search bar is not aligned with the designs. For example, it's missing the search icon and the clear button is not very aligned with the other icons in the UI (it's also missing padding).
Also, if you try to search for a token address, and then you try to select with the mouse this address but you end up with the mouse outside of the popup it gets closed. The popup should be closing only when you click outside of the popup. | 1.0 | Improve the tokens search bar - Currently, the token list search bar is not aligned with the designs. For example, it's missing the search icon and the clear button is not very aligned with the other icons in the UI (it's also missing padding).
Also, if you try to search for a token address, and then you try to select with the mouse this address but you end up with the mouse outside of the popup it gets closed. The popup should be closing only when you click outside of the popup. | priority | improve the tokens search bar currently the token list search bar is not aligned with the designs for example it s missing the search icon and the clear button is not very aligned with the other icons in the ui it s also missing padding also if you try to search for a token address and then you try to select with the mouse this address but you end up with the mouse outside of the popup it gets closed the popup should be closing only when you click outside of the popup | 1 |
2,818 | 2,533,551,505 | IssuesEvent | 2015-01-24 00:43:30 | pybox2d/pybox2d | https://api.github.com/repos/pybox2d/pybox2d | closed | Reorganize the repository | enhancement imported Priority-Low | _From [santagada](https://code.google.com/u/santagada/) on October 29, 2008 15:37:50_
A more pythonic repository will make it simple for new people to checkout
and install the library
the new format would be:
INSTALL all the info on installing the library on linux/windows/osx
README a simple description of the library and the project
LICENSE the license
setup.py the new setuptools based setup?
Box2d/ the old directory or maybe it could be box2d2 to be compliant
to PEP-8
testbed/ the now Python/testbed directory
contrib/ Should this directory stay? maybe we can move one level down to
be like trunk, branches, tag, wiki, contrib. Maybe also a README there to
explain each of the projects
_Original issue: http://code.google.com/p/pybox2d/issues/detail?id=13_ | 1.0 | Reorganize the repository - _From [santagada](https://code.google.com/u/santagada/) on October 29, 2008 15:37:50_
A more pythonic repository will make it simple for new people to checkout
and install the library
the new format would be:
INSTALL all the info on installing the library on linux/windows/osx
README a simple description of the library and the project
LICENSE the license
setup.py the new setuptools based setup?
Box2d/ the old directory or maybe it could be box2d2 to be compliant
to PEP-8
testbed/ the now Python/testbed directory
contrib/ Should this directory stay? maybe we can move one level down to
be like trunk, branches, tag, wiki, contrib. Maybe also a README there to
explain each of the projects
_Original issue: http://code.google.com/p/pybox2d/issues/detail?id=13_ | priority | reorganize the repository from on october a more pythonic repository will make it simple for new people to checkout and install the library the new format would be install all the info on installing the library on linux windows osx readme a simple description of the library and the project license the license setup py the new setuptools based setup the old directory or maybe it could be to be compliant to pep testbed the now python testbed directory contrib should this directory stay maybe we can move one level down to be like trunk branches tag wiki contrib maybe also a readme there to explain each of the projects original issue | 1 |
504,557 | 14,620,206,816 | IssuesEvent | 2020-12-22 19:17:01 | TheSLinux/gs | https://api.github.com/repos/TheSLinux/gs | closed | Viết tài liệu / chương trình (GUI) giao diện cho nilfs | _important _low_priority clean_up_2020 feature-request | `nilfs` không có giao diện đồ họa, có thể viết một chương trình đơn giản, ví dụ bằng `qt` hoặc `libncurses` để quản lý các `check-points`.
Nhưng đầu tiên là phải chuẩn bị một tài liệu thật tốt cho `nilfs`
| 1.0 | Viết tài liệu / chương trình (GUI) giao diện cho nilfs - `nilfs` không có giao diện đồ họa, có thể viết một chương trình đơn giản, ví dụ bằng `qt` hoặc `libncurses` để quản lý các `check-points`.
Nhưng đầu tiên là phải chuẩn bị một tài liệu thật tốt cho `nilfs`
| priority | viết tài liệu chương trình gui giao diện cho nilfs nilfs không có giao diện đồ họa có thể viết một chương trình đơn giản ví dụ bằng qt hoặc libncurses để quản lý các check points nhưng đầu tiên là phải chuẩn bị một tài liệu thật tốt cho nilfs | 1 |
274,658 | 8,563,864,219 | IssuesEvent | 2018-11-09 15:14:28 | autonomy-and-verification-uol/autonomy-and-verification-uol.github.io | https://api.github.com/repos/autonomy-and-verification-uol/autonomy-and-verification-uol.github.io | closed | orcIDs for researchers | enhancement low priority | It would be nice to have the orcIDs (including the nice pic) for each researcher of the lab who has it | 1.0 | orcIDs for researchers - It would be nice to have the orcIDs (including the nice pic) for each researcher of the lab who has it | priority | orcids for researchers it would be nice to have the orcids including the nice pic for each researcher of the lab who has it | 1 |
552,007 | 16,192,866,417 | IssuesEvent | 2021-05-04 10:59:41 | Haivision/srt | https://api.github.com/repos/Haivision/srt | closed | [BUG] Cookie contest behavior does not match documentation | Priority: Low Type: Bug [docs] | **Describe the bug**
The documentation states (in handshake.md):
> When one party's cookie value is greater than its peer's, it wins the cookie contest and becomes Initiator (the other party becomes the Responder).
However, the cookie contest subtracts the peer's cookie from the host's cookie, then compares with zero. This works in many cases, but cookies can have large absolute value, as they are hashes so each bit (in theory) has equal entropy. This can cause integer underflow, and have a negative cookie win against a positive cookie.
Adding a few print statements to `CUDT::cookieContest`, it's typical to get
```
cookieContest: agent=-1852209696 peer=1950268229
cookie contest won, initator
```
**To Reproduce**
Steps to reproduce the behavior:
1. Add the print statements
2. Run SRT in rendezvous modes a few times
**Expected behavior**
Documentation matches. This probably means updating docs with the real cookie contest algorithm, as it likely shouldn't change
**Desktop (please provide the following information):**
- OS: [e.g. Windows, Linux, macOS,...] linux, but doesn't matter
- SRT Version / commit ID: db097fad533938aa49f5beaf318160947c408499
**Additional context**
I'm writing an alternative implementation of SRT, and found it surprising.
| 1.0 | [BUG] Cookie contest behavior does not match documentation - **Describe the bug**
The documentation states (in handshake.md):
> When one party's cookie value is greater than its peer's, it wins the cookie contest and becomes Initiator (the other party becomes the Responder).
However, the cookie contest subtracts the peer's cookie from the host's cookie, then compares with zero. This works in many cases, but cookies can have large absolute value, as they are hashes so each bit (in theory) has equal entropy. This can cause integer underflow, and have a negative cookie win against a positive cookie.
Adding a few print statements to `CUDT::cookieContest`, it's typical to get
```
cookieContest: agent=-1852209696 peer=1950268229
cookie contest won, initator
```
**To Reproduce**
Steps to reproduce the behavior:
1. Add the print statements
2. Run SRT in rendezvous modes a few times
**Expected behavior**
Documentation matches. This probably means updating docs with the real cookie contest algorithm, as it likely shouldn't change
**Desktop (please provide the following information):**
- OS: [e.g. Windows, Linux, macOS,...] linux, but doesn't matter
- SRT Version / commit ID: db097fad533938aa49f5beaf318160947c408499
**Additional context**
I'm writing an alternative implementation of SRT, and found it surprising.
| priority | cookie contest behavior does not match documentation describe the bug the documentation states in handshake md when one party s cookie value is greater than its peer s it wins the cookie contest and becomes initiator the other party becomes the responder however the cookie contest subtracts the peer s cookie from the host s cookie then compares with zero this works in many cases but cookies can have large absolute value as they are hashes so each bit in theory has equal entropy this can cause integer underflow and have a negative cookie win against a positive cookie adding a few print statements to cudt cookiecontest it s typical to get cookiecontest agent peer cookie contest won initator to reproduce steps to reproduce the behavior add the print statements run srt in rendezvous modes a few times expected behavior documentation matches this probably means updating docs with the real cookie contest algorithm as it likely shouldn t change desktop please provide the following information os linux but doesn t matter srt version commit id additional context i m writing an alternative implementation of srt and found it surprising | 1 |
720,259 | 24,786,016,524 | IssuesEvent | 2022-10-24 09:49:40 | JamCoreModding/utility-belt | https://api.github.com/repos/JamCoreModding/utility-belt | closed | Allow keyboard hotkeys to select belt-bar items when belt is toggled | enhancement priority: low | It would be great to have the ability to select the tools on my belt using my keyboard keys as long as I've toggled the belt on | 1.0 | Allow keyboard hotkeys to select belt-bar items when belt is toggled - It would be great to have the ability to select the tools on my belt using my keyboard keys as long as I've toggled the belt on | priority | allow keyboard hotkeys to select belt bar items when belt is toggled it would be great to have the ability to select the tools on my belt using my keyboard keys as long as i ve toggled the belt on | 1 |
390,882 | 11,565,326,665 | IssuesEvent | 2020-02-20 10:18:09 | bounswe/bounswe2020group4 | https://api.github.com/repos/bounswe/bounswe2020group4 | closed | Meeting #2 Time Decision | Effort: Low Group Work Priority: Critical Status: Pending Type: Communication | We need to set a meeting time for the second meeting. A Doodle may help. | 1.0 | Meeting #2 Time Decision - We need to set a meeting time for the second meeting. A Doodle may help. | priority | meeting time decision we need to set a meeting time for the second meeting a doodle may help | 1 |
132,054 | 5,168,693,578 | IssuesEvent | 2017-01-17 22:17:24 | cryptonomex/graphene | https://api.github.com/repos/cryptonomex/graphene | closed | Refactor proposed tx approvals into separate object type | Low Priority refactor | Instead of storing approvals in a `flat_set` in the proposal object, we should store them as separate implementation objects.
| 1.0 | Refactor proposed tx approvals into separate object type - Instead of storing approvals in a `flat_set` in the proposal object, we should store them as separate implementation objects.
| priority | refactor proposed tx approvals into separate object type instead of storing approvals in a flat set in the proposal object we should store them as separate implementation objects | 1 |
94,972 | 3,933,558,715 | IssuesEvent | 2016-04-25 19:33:18 | ghutchis/avogadro | https://api.github.com/repos/ghutchis/avogadro | closed | Mistake in the Hartree to eV conversion in the Orbitals display | auto-migrated low priority v_1.1.0 | When I open a .log file generated with GAMESS to see the calculated MO's, the energy of the MO's is displayed in the Orbitals sidebar in eV's. However, the energies displayed are the Hartree energies multiplied by 27.2107^2, rather than by 27.2107.
Reported by: *anonymous | 1.0 | Mistake in the Hartree to eV conversion in the Orbitals display - When I open a .log file generated with GAMESS to see the calculated MO's, the energy of the MO's is displayed in the Orbitals sidebar in eV's. However, the energies displayed are the Hartree energies multiplied by 27.2107^2, rather than by 27.2107.
Reported by: *anonymous | priority | mistake in the hartree to ev conversion in the orbitals display when i open a log file generated with gamess to see the calculated mo s the energy of the mo s is displayed in the orbitals sidebar in ev s however the energies displayed are the hartree energies multiplied by rather than by reported by anonymous | 1 |
530,581 | 15,434,197,572 | IssuesEvent | 2021-03-07 01:54:22 | batect/batect | https://api.github.com/repos/batect/batect | closed | Quoted shell-evaluated commands don't accept additional arguments | is:enhancement priority:low stale | From what I've read in the documentation, tasks can be executed with additional arguments passed to the specified command like so: `batect my-task -- arg1 arg2`. Also, when attempting to use shell interpolation in the command, it is required to modify the entrypoint to be something like `/bin/bash -c`, necessitating the command be quoted.
Unfortunately, because the command is considered a single unit, passing additional arguments to the task doesn't add them on the inside. Instead of `my command` being executed as `/bin/bash -c "my command arg1 arg2"`, it gets executed as `/bin/bash -c "my command" arg1 arg2`.
Some fiddling around shows that this can be worked-around by specifying the command as `"my command ${*}" --`. Though adding this to all commands that require any kind of interpolation seems like it could be standardised with an option of some description. | 1.0 | Quoted shell-evaluated commands don't accept additional arguments - From what I've read in the documentation, tasks can be executed with additional arguments passed to the specified command like so: `batect my-task -- arg1 arg2`. Also, when attempting to use shell interpolation in the command, it is required to modify the entrypoint to be something like `/bin/bash -c`, necessitating the command be quoted.
Unfortunately, because the command is considered a single unit, passing additional arguments to the task doesn't add them on the inside. Instead of `my command` being executed as `/bin/bash -c "my command arg1 arg2"`, it gets executed as `/bin/bash -c "my command" arg1 arg2`.
Some fiddling around shows that this can be worked-around by specifying the command as `"my command ${*}" --`. Though adding this to all commands that require any kind of interpolation seems like it could be standardised with an option of some description. | priority | quoted shell evaluated commands don t accept additional arguments from what i ve read in the documentation tasks can be executed with additional arguments passed to the specified command like so batect my task also when attempting to use shell interpolation in the command it is required to modify the entrypoint to be something like bin bash c necessitating the command be quoted unfortunately because the command is considered a single unit passing additional arguments to the task doesn t add them on the inside instead of my command being executed as bin bash c my command it gets executed as bin bash c my command some fiddling around shows that this can be worked around by specifying the command as my command though adding this to all commands that require any kind of interpolation seems like it could be standardised with an option of some description | 1 |
818,545 | 30,694,437,131 | IssuesEvent | 2023-07-26 17:26:29 | Ore-Design/Ore-3D-Reports-Changelog | https://api.github.com/repos/Ore-Design/Ore-3D-Reports-Changelog | closed | NS Data Warehouse Middle-man | enhancement low priority | Create a data middleman hosted on Core and updated daily for the initial pull of NS data on Ore3D load.
Add a checkbox to the launcher that allows the user to decided if they want to pull directly from NS (slow) or Data Warehouse (fast). | 1.0 | NS Data Warehouse Middle-man - Create a data middleman hosted on Core and updated daily for the initial pull of NS data on Ore3D load.
Add a checkbox to the launcher that allows the user to decided if they want to pull directly from NS (slow) or Data Warehouse (fast). | priority | ns data warehouse middle man create a data middleman hosted on core and updated daily for the initial pull of ns data on load add a checkbox to the launcher that allows the user to decided if they want to pull directly from ns slow or data warehouse fast | 1 |