Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
31,982 | 15,170,449,514 | IssuesEvent | 2021-02-12 23:18:18 | microsoft/pxt-arcade | https://api.github.com/repos/microsoft/pxt-arcade | closed | Duplicating a gallery asset is noticeably laggy | asseteditor performance | **Describe the bug**
Significantly slower than duplicating a custom asset | True | | non_main | | 0 |
51,968 | 12,831,704,885 | IssuesEvent | 2020-07-07 06:06:16 | jorgicio/jorgicio-gentoo-overlay | https://api.github.com/repos/jorgicio/jorgicio-gentoo-overlay | closed | x11-misc/optimus-manager: can't work in Gentoo | ebuild fail missing file | I'm using lightdm and i3. optimus-manager can't switch gpu.
```
➜ optimus-manager --status
ERROR: a GPU setup was initiated but Xorg post-start hook did not run.
Log at /var/log/optimus-manager/switch/switch-20200624T234730.log
If your login manager is GDM, make sure to follow those instructions:
https://github.com/Askannz/optimus-manager#important--gnome-and-gdm-users
If your display manager is neither GDM, SDDM nor LightDM, or if you don't use one, read the wiki:
https://github.com/Askannz/optimus-manager/wiki/FAQ,-common-issues,-troubleshooting
Cannot execute command because of previous errors.
```
And there is something wrong with `/etc/lightdm.conf.d/20-optimus-manager.conf`:
```
➜ ag display-setup-script= /etc/lightdm.conf.d
/etc/lightdm.conf.d/20-optimus-manager.conf
4:display-setup-script=/sbin/prime-offload
```
prime-offload is installed at `/usr/bin/prime-offload`, not `/sbin/prime-offload`.
But even after setting the right path in `/etc/lightdm.conf.d/20-optimus-manager.conf`, the error is still the same as before. | 1.0 | | non_main | | 0 |
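The path correction described in this report can be sketched as a small shell snippet. This is an illustration only: both paths and the `display-setup-script` key are taken from the issue, but the `[Seat:*]` section header and the use of temporary files are assumptions made so the example is self-contained.

```shell
# Build a stand-in for /etc/lightdm.conf.d/20-optimus-manager.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[Seat:*]
display-setup-script=/sbin/prime-offload
EOF
fixed=$(mktemp)
# Replace the stale /sbin path with the /usr/bin location reported in the issue.
sed 's|=/sbin/prime-offload$|=/usr/bin/prime-offload|' "$conf" > "$fixed"
grep '^display-setup-script=' "$fixed"   # display-setup-script=/usr/bin/prime-offload
```

On a real system the edit would target `/etc/lightdm.conf.d/20-optimus-manager.conf` in place, and, as the reporter notes, correcting the path alone did not resolve the error.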
207,605 | 7,131,602,151 | IssuesEvent | 2018-01-22 11:37:27 | NukkitX/Nukkit | https://api.github.com/repos/NukkitX/Nukkit | closed | GUI disappears | [Priority] High [Status] Fixed [Type] Bug | ### Issue Description
Crafting wooden items using alternatives to oak planks can cause the client's GUI to disappear.
### Steps to Reproduce the Issue
Example can be seen here https://gfycat.com/PettyUnsungArchaeopteryx
### OS and Versions
* Nukkit Version: 7a1b84f
* Java Version: 8
```
On Mobile and Win10 using both GUI's
```
### Crashdump, Backtrace or Other Files
None at the moment | 1.0 | | non_main | | 0 |
1,497 | 6,486,142,268 | IssuesEvent | 2017-08-19 16:58:18 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | brew cask search fails with "undefined method `map' for nil:NilClass" | awaiting maintainer feedback | This _just_ started happening (I ran `brew cask search <something>`, then ran it again two minutes later and it started failing - I _think_ I ran ``brew update`` in that interval).
#### General troubleshooting steps
- [x] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue.
- [x] I ran `brew update-reset && brew update` and retried my command.
- [x] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
#### Description of issue
```
% brew cask search whatever
Error: undefined method `map' for nil:NilClass
Follow the instructions here:
https://github.com/caskroom/homebrew-cask#reporting-bugs
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/search.rb:20:in `search_remote'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/search.rb:44:in `search'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/search.rb:5:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:97:in `run_command'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:167:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>'
```
#### Output of your command with `--verbose --debug`
```
% brew cask search whatever --verbose --debug
Error: undefined method `map' for nil:NilClass
Follow the instructions here:
https://github.com/caskroom/homebrew-cask#reporting-bugs
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/search.rb:20:in `search_remote'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/search.rb:44:in `search'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/search.rb:5:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:97:in `run_command'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:167:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>'
Error: Kernel.exit
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `exit'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `rescue in run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:155:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>'
```
#### Output of `brew cask doctor`
```
% brew cask doctor
==> Homebrew-Cask Version
Homebrew-Cask 1.3.1-62-g3b92f69
caskroom/homebrew-cask (git revision a152a; last commit 2017-08-19)
==> Homebrew-Cask Install Location
<NONE>
==> Homebrew-Cask Staging Location
/usr/local/Caskroom
==> Homebrew-Cask Cached Downloads
~/Library/Caches/Homebrew/Cask (3 files, 127.4MB)
==> Homebrew-Cask Taps:
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3699 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-fonts (1110 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (164 casks)
==> Contents of $LOAD_PATH
/usr/local/Homebrew/Library/Homebrew/cask/lib
/usr/local/Homebrew/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin16
/Library/Ruby/Site/2.0.0/universal-darwin16
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16
==> Environment Variables
LANG="en_US.UTF-8"
PATH="~/.gimme/versions/go/bin:~/opt/src/chapel-code/chapel-basic/bin/darwin:~/opt/src/chapel-code/chapel-basic/util:~/bin:~/.rbenv/shims:~/.dotfiles/bin:~/.cargo/bin:/usr/local/bin:/usr/local/sbin:~/.config/yarn/global/node_modules/.bin:/usr/local/share/npm/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Homebrew/Library/Taps/buo/homebrew-cask-upgrade/cmd:/usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/cmd:/usr/local/Homebrew/Library/Homebrew/shims/scm"
SHELL="/usr/local/bin/bash"
``` | True | | main | | 1 |
1,035 | 4,827,661,326 | IssuesEvent | 2016-11-07 14:18:38 | jenkinsci/slack-plugin | https://api.github.com/repos/jenkinsci/slack-plugin | opened | Release 2.1 | maintainer communication | This issue is to track progress of releasing slack plugin 2.1
TODO:
- [x] Create a new issue to track the release and give it the label `maintainer communication`.
- [ ] Create a release branch. `git checkout origin/master -b prepare_release`
- [ ] Update the release notes in `CHANGELOG.md`.
- [ ] Open a pull request from `prepare_release` branch to `master` branch. Merge it.
- [ ] Fetch updated `master`.
- [ ] Execute the release plugin.
- [ ] Wait for the plugin to be released into the Jenkins Update Center.
- [ ] Successfully perform an upgrade from the last stable plugin release to the current release. | True | | main | | 1 |
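The branching steps from the checklist above can be sketched against a throwaway repository. This is a hedged illustration: the real release would run in the plugin's actual clone, where `origin/master` exists.

```shell
# Create a scratch repo standing in for the plugin checkout.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
# Checklist step: cut a release-preparation branch.
# (In the real clone this would be: git checkout origin/master -b prepare_release)
git checkout -q -b prepare_release
git branch --show-current
```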
396,398 | 27,116,839,001 | IssuesEvent | 2023-02-15 19:16:32 | bkielbasa/go-ecommerce | https://api.github.com/repos/bkielbasa/go-ecommerce | opened | Describe "quick start" in the main Readme.md file | documentation | I want a pleasant developer experience, so I have to describe how to quickly start the whole project and develop new features/fix bugs. | 1.0 | | non_main | | 0 |
47,853 | 10,163,355,387 | IssuesEvent | 2019-08-07 09:07:07 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Moving `k8s.io/kubernetes/pkg/serviceaccount` to a staging repo | area/code-organization kind/feature priority/important-longterm sig/auth |
**What would you like to be added**:
I would like to see `k8s.io/kubernetes/pkg/serviceaccount` moved out of `k8s.io/kubernetes` into a staging repo. I'm not sure where would be the best place to put this - perhaps `k8s.io/apiserver`?
**Why is this needed**:
I would like to depend on this package in an external project but am restricted from importing `k8s.io/kubernetes/*`. My current solution is to copy the code into the project, which is not ideal.
Is this feasible to do and does this make sense?
/cc @munnerz | 1.0 | | non_main | | 0 |
530,254 | 15,419,280,501 | IssuesEvent | 2021-03-05 09:57:03 | zeebe-io/zeebe | https://api.github.com/repos/zeebe-io/zeebe | closed | Format to json using JSONPB in all zbctl commands | Impact: Usability Priority: Low Scope: clients/go Status: Needs Review Type: Enhancement | **Description**
https://github.com/zeebe-io/zeebe/pull/5943 introduced the --output parameter to the `zbctl status` command, so that output formatting could be switched between `human` (i.e. human readable; default) and `json`.
This JSON formatting required the use of JSONPB due to issues with protobuf (read the original PR for more info). It was chosen not to do this for all existing commands because JSONPB marshals int64 fields as JSON strings, making this a breaking change.
This library should be used for json formatting in all other commands.
| 1.0 | | non_main | | 0 |
402,180 | 27,356,860,565 | IssuesEvent | 2023-02-27 13:27:03 | adobe/react-spectrum | https://api.github.com/repos/adobe/react-spectrum | closed | Required to have `id` present on items passed to TableBody but not documented | documentation help wanted typescript | # 🙋 Documentation Request
Hi, I'm trying to build some table components in TypeScript. It seems that for rendering dynamic collections, I need to include an `id` on the items prop passed to `<TableBody>`. This is not described explicitly in the docs. I expected it to be around here: https://react-spectrum.adobe.com/react-aria/useTable.html#dynamic-collections.
I also figure that the generic on `TableBody` should be something like
```ts
export let TableBody: <T extends { id: number | string }>(props: TableBodyProps<T>) => JSX.Element;
```
That would make sure an `id` is passed. Not passing ids leads to a runtime crash, so it would be very nice to catch this in the types instead.
## 🧢 Joachim from Climaider | 1.0 | Required to have `id` present on items passed to TableBody but not documented - # 🙋 Documentation Request
Hi, I'm trying to build some table components in Typescript. It seems that for rendering dynamic collections, I need to include an `id` on the items prop assed to `<TableBody>`. This is not described explicitly in the docs. I expected it to be around here: https://react-spectrum.adobe.com/react-aria/useTable.html#dynamic-collections.
I also figure that the generic on `TableBody` should be something like
```
export let TableBody: <T extends { id: number | string, }>(props: TableBodyProps<T>) => JSX.Element;
```
To make sure that Id is passed. Not passing Ids lead to a runtime crash, so this would be very nice to catch in the types instead.
## 🧢 Joachim from Climaider | non_main | required to have id present on items passed to tablebody but not documented 🙋 documentation request hi i m trying to build some table components in typescript it seems that for rendering dynamic collections i need to include an id on the items prop assed to this is not described explicitly in the docs i expected it to be around here i also figure that the generic on tablebody should be something like export let tablebody props tablebodyprops jsx element to make sure that id is passed not passing ids lead to a runtime crash so this would be very nice to catch in the types instead 🧢 joachim from climaider | 0 |
242,234 | 26,260,786,142 | IssuesEvent | 2023-01-06 07:32:27 | samq-wsdemo/AdvanceAutoParts-datafastlane | https://api.github.com/repos/samq-wsdemo/AdvanceAutoParts-datafastlane | opened | CVE-2022-25647 (High) detected in gson-2.8.6.jar | security vulnerability | ## CVE-2022-25647 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gson-2.8.6.jar</b></p></summary>
<p>Gson JSON library</p>
<p>Library home page: <a href="https://github.com/google/gson">https://github.com/google/gson</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /le/code/gson/gson/2.8.6/gson-2.8.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **gson-2.8.6.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package com.google.code.gson:gson before 2.8.9 is vulnerable to Deserialization of Untrusted Data via the writeReplace() method in internal classes, which may lead to DoS attacks.
<p>Publish Date: 2022-05-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25647>CVE-2022-25647</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25647`">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25647`</a></p>
<p>Release Date: 2022-05-01</p>
<p>Fix Resolution: 2.8.9</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
1,550 | 6,572,245,377 | IssuesEvent | 2017-09-11 00:32:50 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | selinux_permissive no_reload option should be renamed/deprecated | affects_2.0 feature_idea in progress waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
selinux_permissive module
##### ANSIBLE VERSION
```
ansible 2.0.2.0
```
##### OS / ENVIRONMENT
CentOS 7
##### SUMMARY
The "no_reload" option is badly named. It starts with a "negative" concept (no_), then you set either true or false to enable/disable the negative concept. This double-negative is confusing and should be renamed to use a more obvious/positive concept.
Name should be changed to "reload" with default "true". The no_reload option could still be supported as deprecated to prevent breakage in current playbooks.
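A sketch of how the two forms would read in a playbook — the `reload` option shown here is the proposal from this issue, not something the module supports yet:

```yaml
# Current, double-negative form
- selinux_permissive:
    name: httpd_t
    permissive: true
    no_reload: false    # i.e. *do* reload the policy — confusing

# Proposed, positive form (hypothetical until implemented)
- selinux_permissive:
    name: httpd_t
    permissive: true
    reload: true        # proposed default: true
```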
3,533 | 13,912,056,802 | IssuesEvent | 2020-10-20 18:17:43 | grey-software/Twitter-Focus | https://api.github.com/repos/grey-software/Twitter-Focus | opened | 🚀 Feature Request: Add Grey Software sticker & Project logo | Domain: User Experience Role: Maintainer Type: Enhancement hacktoberfest-accepted | ### Problem Overview 👁️🗨️
Users should be able to see the Grey Software sticker and the Twitter-Focus logo on the README.md file.
### What would you like? 🧰
We want the Grey Software sticker and the Twitter-Focus logo as the header on the README.md file. Below is an example. You would also need to add the Grey Software sticker next to it.

### What alternatives have you considered? 🔍
N/A
### Additional details ℹ️
The Twitter-Focus logo image can be found in this repository under the **src** folder. The file name is **icon.png**. Below is the image of the Grey Software sticker that should be used.

1,290 | 5,467,343,237 | IssuesEvent | 2017-03-10 00:49:41 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_group: add tags | affects_1.8 aws cloud feature_idea waiting_on_maintainer | ##### Issue Type:
Feature Idea
##### Ansible Version:
ansible 1.8
##### Environment:
N/A
##### Summary:
Please add the ability to create and modify tags associated with the security group. At least being able to set the Name tag would be helpful.
##### Steps To Reproduce:
It would be nice if the feature was implemented like the instance_tags parameter in the ec2 module.
##### Expected Results:
The ability to set tags for security groups.
##### Actual Results:
N/A
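A sketch of the requested feature, modeled on the `instance_tags` parameter of the `ec2` module — the `tags` option below is hypothetical and does not exist in the module yet:

```yaml
- name: Ensure web security group exists, with tags
  ec2_group:
    name: web-sg
    description: Security group for web servers
    vpc_id: vpc-123456
    region: us-east-1
    tags:               # requested parameter (hypothetical)
      Name: web-sg
      Environment: staging
```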
183,987 | 31,799,925,969 | IssuesEvent | 2023-09-13 10:24:10 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | opened | Using of CancellationToken in RemoteJSDataStream | design-proposal | I analyzed the source code of ASP.NET Core v6.0.18 using a Svace static analyzer. It found an error of category **HANDLE_LEAK** with the following message
> CancellationTokenSource.CreateLinkedTokenSource(a, b) is not disposed at the end of the function
in method `GetLinkedCancellationToken()`. Here's the source:
https://github.com/dotnet/aspnetcore/blob/28b2bfd3ac67f07a5985550f1bec2e659af02aea/src/Components/Server/src/Circuits/RemoteJSDataStream.cs#L190-L202
First of all, it's useful to note that the method `CancellationTokenSource.CreateLinkedTokenSource()` (link below) already performs all the necessary checks that are implemented in the method above
https://github.com/dotnet/corert/blob/master/src/System.Private.CoreLib/shared/System/Threading/CancellationTokenSource.cs#L748
Moreover, calling `CreateLinkedTokenSource()` actually creates an instance of `CancellationTokenSource`, which should be disposed
### **Proposed update**
This method is used only twice in this class, in the methods
https://github.com/dotnet/aspnetcore/blob/28b2bfd3ac67f07a5985550f1bec2e659af02aea/src/Components/Server/src/Circuits/RemoteJSDataStream.cs#L178-L182
and
https://github.com/dotnet/aspnetcore/blob/28b2bfd3ac67f07a5985550f1bec2e659af02aea/src/Components/Server/src/Circuits/RemoteJSDataStream.cs#L184-L188
and I think these methods can be improved by using the `using` construct and removing `GetLinkedCancellationToken()`, like this (note that the `using` must apply to the `CancellationTokenSource` itself — `CancellationToken` is a struct and not `IDisposable`):
```c#
public override async Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
{
    // Dispose the linked source when the read completes
    using var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(_streamCancellationToken, cancellationToken);
    return await _pipeReaderStream.ReadAsync(buffer.AsMemory(offset, count), linkedCts.Token);
}
```
```c#
public override async ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken = default)
{
    // Dispose the linked source when the read completes
    using var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(_streamCancellationToken, cancellationToken);
    return await _pipeReaderStream.ReadAsync(buffer, linkedCts.Token);
}
```
1,335 | 5,718,848,594 | IssuesEvent | 2017-04-19 20:31:19 | ACP3/cms | https://api.github.com/repos/ACP3/cms | opened | Consider using PSR-6 | core enhancement Maintainability modules Performance | We should look into using the PSR-6 (Caching) spec so that we can quickly replace the underlying caching library.
487 | 3,773,377,549 | IssuesEvent | 2016-03-17 01:49:10 | chinleock/FunctionProfiler | https://api.github.com/repos/chinleock/FunctionProfiler | opened | 2016031700005 Power Breakout 新竹北大店 | Category - Maintainance Region - N2 Severity - Minor Status - Open | ### Station Name
##### 新竹北大店 station
-----------------------------------------------------------------------------
### VM Function Location
##### TWA0000001
-----------------------------------------------------------------------------
### Region
##### N2 - Hsinchu (新竹)
-----------------------------------------------------------------------------
### Occurrence Time
##### 2016-03-16 14:30:25
-----------------------------------------------------------------------------
### Occurrence Duration
##### 2 HR
-----------------------------------------------------------------------------
### Error Code
##### DEF00000001
-----------------------------------------------------------------------------
### Hardware Category
##### n/a
-----------------------------------------------------------------------------
### Description
##### Routine scheduled power outage by Taipower (台電)
-----------------------------------------------------------------------------
### Action
##### n/a
-----------------------------------------------------------------------------
### Related Ticket
##### n/a
-----------------------------------------------------------------------------
### Work Order
##### n/a
-----------------------------------------------------------------------------
### Part Number
##### n/a
-----------------------------------------------------------------------------
### Remark
##### n/a
348 | 3,244,623,158 | IssuesEvent | 2015-10-16 04:09:40 | Homebrew/homebrew | https://api.github.com/repos/Homebrew/homebrew | closed | Don't install node as a dependency if a node version manager is present | features maintainer feedback | This isn't a huge deal, but it would be nice if Homebrew recognized that a node version manager (like `nvm` or `n`) is an acceptable stand-in for a `node` dependency. This way, Homebrew wouldn't needlessly install a separate version outside of that version manager – perhaps it could show a caveat about making sure some sort of "default" version of node is installed using that manager.
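A sketch of the detection logic the Homebrew request above would imply — the install paths are assumptions, and since `nvm` is normally a shell function rather than a binary, the check looks for its default install directory instead:

```shell
# Return 0 if a Node version manager appears to be installed.
has_node_manager() {
  # nvm is sourced into the shell; check its default install directory instead
  [ -s "$HOME/.nvm/nvm.sh" ] && return 0
  # "n" ships an actual executable
  command -v n >/dev/null 2>&1 && return 0
  return 1
}

if has_node_manager; then
  echo "node version manager present; a brew-installed node may be redundant"
else
  echo "no node version manager detected"
fi
```

Homebrew could run such a check before resolving a `node` dependency and print a caveat instead of installing.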
46,259 | 13,055,880,015 | IssuesEvent | 2020-07-30 03:00:31 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | test ticket (Trac #869) | Incomplete Migration Migrated from Trac cmake defect | Migrated from https://code.icecube.wisc.edu/ticket/869
```json
{
"status": "closed",
"changetime": "2015-02-12T06:31:40",
"description": "",
"reporter": "nega",
"cc": "",
"resolution": "invalid",
"_ts": "1423722700498868",
"component": "cmake",
"summary": "test ticket",
"priority": "normal",
"keywords": "",
"time": "2015-02-11T23:11:24",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
83,806 | 16,373,319,110 | IssuesEvent | 2021-05-15 15:42:33 | chanh2000kh/ProjectSE | https://api.github.com/repos/chanh2000kh/ProjectSE | closed | View nutrition information (Xem thông tin dinh dưỡng) | code | Users get an overview of nutrition: the nutrient groups that are good for health and the groups that should be limited. Provide some tips or useful information from large, reputable information sites.
2,598 | 8,823,795,645 | IssuesEvent | 2019-01-02 14:55:52 | citrusframework/citrus | https://api.github.com/repos/citrusframework/citrus | closed | MessageContentBuilder interface broken for buildMessageContent | Prio: High TO REVIEW Type: Maintainance | **Citrus Version**
>= 2.7.3
**Description**
If you upgrade your Citrus version to 2.7.3 or higher, there is a breaking change in the `MessageContentBuilder` interface affecting all implementing classes. We'll correct this in one of the future releases to ensure effortless version upgrades.
**API before change**
```java
/**
* Builds the control message.
* @param context the current test context.
* @param messageType the message type to build.
* @return the constructed message object.
*/
Message buildMessageContent(TestContext context, String messageType);
```
**API after change**
```java
/**
* Builds the control message.
* @param context the current test context.
* @param messageType the message type to build.
* @param direction
* @return the constructed message object.
*/
Message buildMessageContent(TestContext context, String messageType, MessageDirection direction);
```
**Additional information**
We'd have to add the older method signature back to the interface, mark it as deprecated, and ensure that all implementing classes implement the interface correctly
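A sketch of what that could look like with a Java 8 default method — using `MessageDirection.UNBOUND` as the fallback is an assumption, and the surrounding Citrus types (`Message`, `TestContext`, `MessageDirection`) are referenced, not redefined:

```java
public interface MessageContentBuilder {

    Message buildMessageContent(TestContext context, String messageType, MessageDirection direction);

    /**
     * Restored pre-2.7.3 signature, kept for backward compatibility.
     */
    @Deprecated
    default Message buildMessageContent(TestContext context, String messageType) {
        return buildMessageContent(context, messageType, MessageDirection.UNBOUND);
    }
}
```

Existing implementations compiled against the old signature would then keep working while emitting a deprecation warning.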
Issue: #310
Commit: https://github.com/citrusframework/citrus/commit/51e9fa326e0947dcbb46eb9de3dd3790073c4c2c#diff-89200ee0f0faf01c7071b52f73d6f7b5R71
BR,
Sven
5,478 | 27,371,677,880 | IssuesEvent | 2023-02-28 00:20:31 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Refresh of downloaded docker images broke build | type/bug stage/bug-repro maintainer/need-response | ### Description
It appears that `sam local start-lambda` does not ensure reproducible behavior, and I believe this is due to the use of `amazon/aws-sam-cli-emulation-image-java11:latest` when building the (appropriately versioned) `amazon/aws-sam-cli-emulation-image-java11:rapid-1.1.0` docker image.
Our integrated testing framework executes `sam local start-lambda` using version 1.1.0 and had been working fine for weeks. When developers on Windows cleared their docker images using `docker system prune -a` to reclaim hard drive space, the next execution of the build failed. This issue does not affect developers on Linux.
This may be 2 different issues.
1. Behavior is not reproducible due to use of `latest`
2. Something changed to break building the `amazon/aws-sam-cli-emulation-image-java11:rapid-X.Y.Z` image on Windows (and docker?)
### Steps to reproduce
We are running `sam local start-lambda` with a Java application inside of a docker container. When on Windows
* Ensure Docker Desktop is installed and listening to on a TCP socket
* Add volume `-v /c/temp:/run/desktop/mnt/host/c/tmp`
* In the container, the TEMPDIR ENV is set to `/run/desktop/mnt/host/c/tmp`
* In the container, the DOCKER_HOST ENV is set to `tcp://host.docker.internal:2375`
* Build using the Git Bash shell, not the WSL Linux instances, as WSL Linux does not mount with deterministically mapped mounts allowing for accurate TEMPDIR mapping of shared files across all "systems" (host machine, SAM docker image, lambda docker image launched by SAM) to have matching paths.
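Putting the steps above together, the container invocation looks roughly like this — the image name `build-image` is a placeholder, and only the mount and ENV values quoted above come from this report:

```shell
docker run --rm \
  -v /c/temp:/run/desktop/mnt/host/c/tmp \
  -e TEMPDIR=/run/desktop/mnt/host/c/tmp \
  -e DOCKER_HOST=tcp://host.docker.internal:2375 \
  build-image \
  sam local start-lambda
```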
### Observed result
```
docker.trip-processor-infrastructure - STDERR: Found one Lambda function with name 'TfpTripProcessorSetupFunction'
docker.trip-processor-infrastructure - STDERR: Invoking com.vnomicscorp.LambdaMethodHandler::handleRequest (java11)
docker.trip-processor-infrastructure - STDERR: Environment variables overrides data is standard format
docker.trip-processor-infrastructure - STDERR: Loading AWS credentials from session with profile 'None'
docker.trip-processor-infrastructure - STDERR: Resolving code path. Cwd=/app/.aws-sam/build, CodeUri=TfpTripProcessorSetupFunction.zip
docker.trip-processor-infrastructure - STDERR: Resolved absolute path to code is /app/.aws-sam/build/TfpTripProcessorSetupFunction.zip
docker.trip-processor-infrastructure - STDERR: Decompressing /app/.aws-sam/build/TfpTripProcessorSetupFunction.zip
docker.trip-processor-infrastructure - STDERR: Image was not found.
docker.trip-processor-infrastructure - STDERR: Building image...2020-10-01 21:10:34 Exception on /2015-03-31/functions/TfpTripProcessorSetupFunction/invocations [POST]
docker.trip-processor-infrastructure - STDERR: Traceback (most recent call last):
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2317, in wsgi_app
docker.trip-processor-infrastructure - STDERR: response = self.full_dispatch_request()
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1840, in full_dispatch_request
docker.trip-processor-infrastructure - STDERR: rv = self.handle_user_exception(e)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1743, in handle_user_exception
```
Refresh of downloaded docker images broke build

### Description
It appears that `sam local start-lambda` does not ensure reproducible behavior, and I believe this is due to the use of `amazon/aws-sam-cli-emulation-image-java11:latest` when building the (appropriately versioned) `amazon/aws-sam-cli-emulation-image-java11:rapid-1.1.0` Docker image.
Our integrated testing framework executes `sam local start-lambda` using version 1.1.0 and had been working fine for weeks. When developers on Windows cleared their Docker images using `docker system prune -a` to reclaim hard-drive space, the next execution of the build failed. This issue does not affect developers on Linux.
This may be two different issues:
1. Behavior is not reproducible due to use of `latest`
2. Something changed to break building the `amazon/aws-sam-cli-emulation-image-java11:rapid-X.Y.Z` image on Windows (and docker?)
### Steps to reproduce
We are running `sam local start-lambda` with a Java application inside a Docker container. When on Windows:
* Ensure Docker Desktop is installed and listening on a TCP socket
* Add the volume `-v /c/temp:/run/desktop/mnt/host/c/tmp`
* In the container, set the TEMPDIR environment variable to `/run/desktop/mnt/host/c/tmp`
* In the container, set the DOCKER_HOST environment variable to `tcp://host.docker.internal:2375`
* Build from the Git Bash shell, not a WSL Linux instance: WSL Linux does not use deterministically mapped mounts, so TEMPDIR paths of files shared across all "systems" (host machine, SAM Docker image, and the Lambda Docker image launched by SAM) would not match
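The path translation implied by that volume mapping can be sketched as follows (the two prefixes come from the bullets above; the helper itself is hypothetical, not part of our build):

```python
# Host-side Git Bash path prefix and its mount point inside the container,
# per the volume mapping above (-v /c/temp:/run/desktop/mnt/host/c/tmp).
HOST_PREFIX = "/c/temp"
CONTAINER_PREFIX = "/run/desktop/mnt/host/c/tmp"

def to_container_path(host_path: str) -> str:
    """Rewrite a host temp path into the path the container sees."""
    if not host_path.startswith(HOST_PREFIX):
        raise ValueError(f"not under {HOST_PREFIX}: {host_path}")
    return CONTAINER_PREFIX + host_path[len(HOST_PREFIX):]
```

Any shared file whose path round-trips through this mapping identically on the host, in the SAM image, and in the launched Lambda image satisfies the "matching paths" requirement above.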
### Observed result
```docker.trip-processor-infrastructure - STDERR: Found one Lambda function with name 'TfpTripProcessorSetupFunction'
docker.trip-processor-infrastructure - STDERR: Invoking com.vnomicscorp.LambdaMethodHandler::handleRequest (java11)
docker.trip-processor-infrastructure - STDERR: Environment variables overrides data is standard format
docker.trip-processor-infrastructure - STDERR: Loading AWS credentials from session with profile 'None'
docker.trip-processor-infrastructure - STDERR: Resolving code path. Cwd=/app/.aws-sam/build, CodeUri=TfpTripProcessorSetupFunction.zip
docker.trip-processor-infrastructure - STDERR: Resolved absolute path to code is /app/.aws-sam/build/TfpTripProcessorSetupFunction.zip
docker.trip-processor-infrastructure - STDERR: Decompressing /app/.aws-sam/build/TfpTripProcessorSetupFunction.zip
docker.trip-processor-infrastructure - STDERR: Image was not found.
docker.trip-processor-infrastructure - STDERR: Building image...2020-10-01 21:10:34 Exception on /2015-03-31/functions/TfpTripProcessorSetupFunction/invocations [POST]
docker.trip-processor-infrastructure - STDERR: Traceback (most recent call last):
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2317, in wsgi_app
docker.trip-processor-infrastructure - STDERR: response = self.full_dispatch_request()
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1840, in full_dispatch_request
docker.trip-processor-infrastructure - STDERR: rv = self.handle_user_exception(e)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1743, in handle_user_exception
docker.trip-processor-infrastructure - STDERR: reraise(exc_type, exc_value, tb)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 36, in reraise
docker.trip-processor-infrastructure - STDERR: raise value
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1838, in full_dispatch_request
docker.trip-processor-infrastructure - STDERR: rv = self.dispatch_request()
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1824, in dispatch_request
docker.trip-processor-infrastructure - STDERR: return self.view_functions[rule.endpoint](**req.view_args)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/samcli/local/lambda_service/local_lambda_invoke_service.py", line 151, in _invoke_request_handler
docker.trip-processor-infrastructure - STDERR: self.lambda_runner.invoke(function_name, request_data, stdout=stdout_stream_writer, stderr=self.stderr)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/samcli/commands/local/lib/local_lambda.py", line 100, in invoke
docker.trip-processor-infrastructure - STDERR: self.local_runtime.invoke(config, event, debug_context=self.debug_context, stdout=stdout, stderr=stderr)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 69, in invoke
docker.trip-processor-infrastructure - STDERR: container = LambdaContainer(
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 72, in __init__
docker.trip-processor-infrastructure - STDERR: image = LambdaContainer._get_image(image_builder, runtime, layers, debug_options)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 176, in _get_image
docker.trip-processor-infrastructure - STDERR: return image_builder.build(runtime, layers, is_debug)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 125, in build
docker.trip-processor-infrastructure - STDERR: self._build_image(base_image, image_tag, downloaded_layers, is_debug_go, stream=stream_writer)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 202, in _build_image
docker.trip-processor-infrastructure - STDERR: resp_stream = self.docker_client.api.build(
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/docker/api/build.py", line 263, in build
docker.trip-processor-infrastructure - STDERR: response = self._post(
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/docker/utils/decorators.py", line 46, in inner
docker.trip-processor-infrastructure - STDERR: return f(self, *args, **kwargs)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/docker/api/client.py", line 226, in _post
docker.trip-processor-infrastructure - STDERR: return self.post(url, **self._set_request_timeout(kwargs))
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 578, in post
docker.trip-processor-infrastructure - STDERR: return self.request('POST', url, data=data, json=json, **kwargs)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 516, in request
docker.trip-processor-infrastructure - STDERR: prep = self.prepare_request(req)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 449, in prepare_request
docker.trip-processor-infrastructure - STDERR: p.prepare(
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 317, in prepare
docker.trip-processor-infrastructure - STDERR: self.prepare_body(data, files, json)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 477, in prepare_body
docker.trip-processor-infrastructure - STDERR: length = super_len(data)
docker.trip-processor-infrastructure - STDERR: File "/usr/local/lib/python3.8/site-packages/requests/utils.py", line 124, in super_len
docker.trip-processor-infrastructure - STDERR: total_length = os.fstat(fileno).st_size
docker.trip-processor-infrastructure - STDERR: FileNotFoundError: [Errno 2] No such file or directory
docker.trip-processor-infrastructure - STDERR: 2020-10-01 21:10:34 172.27.0.16 - - [01/Oct/2020 21:10:34] "POST /2015-03-31/functions/TfpTripProcessorSetupFunction/invocations HTTP/1.1" 500 -
```
### Expected result
Building with our SAM CLI version pinned to 1.1.0 should behave the same way today as it did 4 weeks ago (before the release of 1.2.0) even when re-creating the available list of docker containers cached in the local docker repository.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Windows with Docker Desktop 2.3.0.4 configured to use WSL and TCP connection to docker host
2. `sam --version`: SAM CLI, version 1.1.0
### Work-around
We found that we can `docker save...` and `docker load...` the `amazon/aws-sam-cli-emulation-image-java11:rapid-X.Y.Z` image from a functioning Linux build into a Windows machine and bring our automated tests back into working order.
587,499 | 17,617,693,108 | IssuesEvent | 2021-08-18 11:52:55 | cormas/cormas | https://api.github.com/repos/cormas/cormas | opened | Rename carre -> square | porting priority 3 | Method occurrences of carre should be replaced by its English translation: square.
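A throwaway script for this kind of mechanical rename could look like the sketch below (the `.st` file extension and the plain-text in-place rewrite are assumptions; Pharo's own refactoring tools would be the safer route for Cormas):

```python
from pathlib import Path

def rename_occurrences(root: str, old: str = "carre", new: str = "square",
                       suffix: str = ".st") -> int:
    """Replace plain-text occurrences of `old` with `new`; return files changed."""
    changed = 0
    for path in Path(root).rglob(f"*{suffix}"):
        text = path.read_text(encoding="utf-8")
        if old in text:
            path.write_text(text.replace(old, new), encoding="utf-8")
            changed += 1
    return changed
```

Note this is case-sensitive, so capitalized selector parts (e.g. `drawCarre`) would need a second pass with `"Carre"`/`"Square"`.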
5,772 | 30,589,068,330 | IssuesEvent | 2023-07-21 15:29:18 | precice/precice | https://api.github.com/repos/precice/precice | closed | Resetting sent data in coupling scheme to zero after sending affects downstream calculations. Why? | bug maintainability | **Describe your setup**
https://github.com/precice/precice/commit/b7b5739b62cbab3948c98b97911f211090d6dea2
**Describe the problem**
To clean things up and to make sure nothing strange is going on, I changed the source code to reset data after sending and before receiving in the coupling scheme (see the commit linked above). However, I observed that some tests fail after this change.
**Steps To Reproduce**
1. Check out branch
2. Run tests
**Expected behaviour**
As far as I know the data should not be used anymore after sending. I also think that this would be the most intuitive behavior. Any idea why this is happening?
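One hypothesis is aliasing: if any component keeps a reference to the buffer that was sent, rather than a copy, resetting that buffer in place changes what the reference later reads. A minimal language-neutral sketch of the effect (plain Python, not preCICE code):

```python
# After "sending", the data is reset in place. If downstream code kept an
# alias to the same buffer instead of a copy, it now observes zeros.
sent_buffer = [1.0, 2.0, 3.0]
downstream_ref = sent_buffer                # alias, not a copy
sent_buffer[:] = [0.0] * len(sent_buffer)   # reset after sending

# downstream_ref now reads all zeros; a copy taken at send time would
# have preserved the original values.
```

If something like this is happening in the C++ code, the analogue would be a retained pointer/reference (or an Eigen view) into the vector that gets reset.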
**Additional context**
This is a pure software engineering issue. But it might also be a bug. I'm not sure here.
444,095 | 12,806,238,264 | IssuesEvent | 2020-07-03 09:04:26 | Leasehold/lisk-dex-ui | https://api.github.com/repos/Leasehold/lisk-dex-ui | closed | Check the user's balance (accounting for pending orders) before submitting the order transaction | priority | To prevent the user from sending signed transactions if they don't have enough money, we should check that the user has enough balance on the front end (instead of backend).
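A front-end check along those lines might look like this sketch (the field names, fee handling, and integer amounts are assumptions, not the actual lisk-dex-ui data model):

```python
def has_sufficient_balance(balance: int, pending_orders, amount: int, fee: int = 0) -> bool:
    """True if `balance` covers the new order plus everything already pending."""
    # Funds already committed by pending (unfilled) orders, including their fees.
    committed = sum(order["amount"] + order.get("fee", 0) for order in pending_orders)
    return balance - committed >= amount + fee
```

Run client-side before signing, this prevents submitting an order the account cannot cover once its pending orders settle.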
2,450 | 8,639,870,109 | IssuesEvent | 2018-11-23 22:14:29 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | RPitx modulation very poor | V1 related (not maintained) | I am trying to get rpitx running properly.
When running it in VFO mode, it performs pretty well.
But when trying to transmit a WAV file, the modulation is very poor and distorted.
The WAV I use has a 48 kHz sample rate.
I run rpitx like this: sudo rpitx -i file.wav -m IQ -f 144600 -s 48000 -l
or like this: sudo rpitx -i file.wav -m IQ -f 144600 -l -c 1
or like this: sudo rpitx -i file.wav -m IQFLOAT -f 144600 -l
or like this: sudo rpitx -i file.ft -m IQ -f 144600 -s 48000 -l
or different combinations of the above.
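To rule out the input file, a known-good test WAV can be generated. The sketch below writes a 48 kHz stereo 16-bit file carrying a single complex tone, assuming rpitx's IQ mode reads I from the left channel and Q from the right (check the rpitx documentation for the actual channel convention):

```python
import math
import struct
import wave

def write_iq_tone(path: str, tone_hz: float = 1000.0, rate: int = 48000,
                  seconds: float = 1.0, amplitude: float = 0.8) -> None:
    """Write a stereo 16-bit PCM WAV: left = I = cos(phase), right = Q = sin(phase)."""
    n = int(rate * seconds)
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)
        frames = bytearray()
        for k in range(n):
            phase = 2.0 * math.pi * tone_hz * k / rate
            i = int(amplitude * 32767 * math.cos(phase))
            q = int(amplitude * 32767 * math.sin(phase))
            frames += struct.pack("<hh", i, q)   # little-endian int16 pair
        w.writeframes(bytes(frames))
```

If the assumptions hold, transmitting the result with `sudo rpitx -i tone.wav -m IQ -f 144600 -s 48000` should give a clean tone offset from the carrier; if that file is also distorted, the problem is not in the source WAV.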
All with the same result: very poor modulation, just squeaks and distortion.
Any ideas how to fix this?
My final plan is to turn a few Pis into 2 m ARDF beacons,
with filtering etc., of course.
73
PA2LB
Lute van de Bult
220,993 | 24,590,371,875 | IssuesEvent | 2022-10-14 01:11:03 | btmluiz/rpg_system | https://api.github.com/repos/btmluiz/rpg_system | opened | CVE-2022-37601 (High) detected in loader-utils-1.2.3.tgz, loader-utils-1.4.0.tgz | security vulnerability | ## CVE-2022-37601 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>loader-utils-1.2.3.tgz</b>, <b>loader-utils-1.4.0.tgz</b></p></summary>
<p>
<details><summary><b>loader-utils-1.2.3.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz</a></p>
<p>Path to dependency file: /frontend/package.json</p>
<p>Path to vulnerable library: /frontend/node_modules/resolve-url-loader/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.1.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **loader-utils-1.2.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>loader-utils-1.4.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz</a></p>
<p>Path to dependency file: /frontend/package.json</p>
<p>Path to vulnerable library: /frontend/node_modules/babel-loader/node_modules/loader-utils/package.json,/frontend/node_modules/mini-css-extract-plugin/node_modules/loader-utils/package.json,/frontend/node_modules/sass-loader/node_modules/loader-utils/package.json,/frontend/node_modules/webpack/node_modules/loader-utils/package.json,/frontend/node_modules/postcss-loader/node_modules/loader-utils/package.json,/frontend/node_modules/html-webpack-plugin/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.1.tgz (Root Library)
- html-webpack-plugin-4.5.0.tgz
- :x: **loader-utils-1.4.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/btmluiz/rpg_system/commit/642081f26c767e4c1dfa3f0f6bea8d382bfbf66a">642081f26c767e4c1dfa3f0f6bea8d382bfbf66a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37601>CVE-2022-37601</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.1</p><p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.1</p>
</p>
</details>
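For triage, installed versions can be compared against the fix resolution listed above (2.0.0 here is the advisory's suggested fix, not an independent claim). A minimal sketch:

```python
def parse_semver(version: str):
    """Parse 'MAJOR.MINOR.PATCH' into a comparable tuple (pre-release tags ignored)."""
    return tuple(int(part) for part in version.split("-")[0].split(".")[:3])

def is_flagged(installed: str, fixed: str = "2.0.0") -> bool:
    """True if `installed` is below the advisory's fix resolution."""
    return parse_semver(installed) < parse_semver(fixed)
```

Both versions found in this project's lockfile (1.2.3 and 1.4.0) would be flagged by this check.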
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.1</p><p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in loader utils tgz loader utils tgz cve high severity vulnerability vulnerable libraries loader utils tgz loader utils tgz loader utils tgz utils for webpack loaders library home page a href path to dependency file frontend package json path to vulnerable library frontend node modules resolve url loader node modules loader utils package json dependency hierarchy react scripts tgz root library resolve url loader tgz x loader utils tgz vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file frontend package json path to vulnerable library frontend node modules babel loader node modules loader utils package json frontend node modules mini css extract plugin node modules loader utils package json frontend node modules sass loader node modules loader utils package json frontend node modules webpack node modules loader utils package json frontend node modules postcss loader node modules loader utils package json frontend node modules html webpack plugin node modules loader utils package json dependency hierarchy react scripts tgz root library html webpack plugin tgz x loader utils tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution vulnerability in function parsequery in parsequery js in webpack loader utils via the name variable in parsequery js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution loader utils direct dependency fix resolution react scripts fix resolution 
loader utils direct dependency fix resolution react scripts step up your open source security game with mend | 0 |
116,810 | 4,707,026,539 | IssuesEvent | 2016-10-13 18:53:36 | ivlab/MinVR2 | https://api.github.com/repos/ivlab/MinVR2 | closed | .ZIP download | sep23priority | When downloading the repository as a .Zip File as opposed to git clone, a build directory already exists and cmake . . does not execute correctly. | 1.0 | .ZIP download - When downloading the repository as a .Zip File as opposed to git clone, a build directory already exists and cmake . . does not execute correctly. | non_main | zip download when downloading the repository as a zip file as opposed to git clone a build directory already exists and cmake does not execute correctly | 0 |
86,398 | 10,740,134,747 | IssuesEvent | 2019-10-29 17:36:36 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | [DESIGN] Original Claim without BIRLS ID | 526 design vsa-benefits | ## User Story or Problem Statement
As a first time disability claimant, after logging in, I need to be notified of how I can proceed with completing my Disability Compensation Claim if I do not have the required VA identification so that I can start the form.
## Goal
_A Veteran who is not able to have a CORP ID created because they are missing a BIRLS ID needs to be notified (before ITF) of how they can proceed with completing their 526_
## Acceptance Criteria
- [ ] _Error handling of CORP ID, but no BIRLS ID
- [ ] _Identify where the error should be displayed (ITF - Intent to File or SIP - Save in Progress) | 1.0 | [DESIGN] Original Claim without BIRLS ID - ## User Story or Problem Statement
As a first time disability claimant, after logging in, I need to be notified of how I can proceed with completing my Disability Compensation Claim if I do not have the required VA identification so that I can start the form.
## Goal
_A Veteran who is not able to have a CORP ID created because they are missing a BIRLS ID needs to be notified (before ITF) of how they can proceed with completing their 526_
## Acceptance Criteria
- [ ] _Error handling of CORP ID, but no BIRLS ID
- [ ] _Identify where the error should be displayed (ITF - Intent to File or SIP - Save in Progress) | non_main | original claim without birls id user story or problem statement as a first time disability claimant after logging in i need to be notified of how i can proceed with completing my disability compensation claim if i do not have the required va identification so that i can start the form goal a veteran who is not able to have a corp id created because they are missing a birls id needs to be notified before itf of how they can proceed with completing their acceptance criteria error handling of corp id but no birls id identify where the error should be displayed itf intent to file or sip save in progress | 0 |
27,248 | 27,954,936,820 | IssuesEvent | 2023-03-24 11:40:05 | drawpile/Drawpile | https://api.github.com/repos/drawpile/Drawpile | closed | Cannot resize window with all docks visible | Usability | Drawpile, by default, starts up with most of the dockable windows docked and visible. This makes the window taller than the space I have available.

When maximized, the window still doesn't fit on the screen.

After detaching or hiding some of the docks, the window became very flexible. I think that Drawpile should avoid this issue either by being able to squeeze the docks, making the sidebar scrollable, making it possible to 'roll up' the docks (like in Inkscape), or open with fewer docks by default.
I am running Parabola GNU/Linux (read Arch Linux) with KDE and Plasma 5.
| True | Cannot resize window with all docks visible - Drawpile, by default, starts up with most of the dockable windows docked and visible. This makes the window taller than the space I have available.

When maximized, the window still doesn't fit on the screen.

After detaching or hiding some of the docks, the window became very flexible. I think that Drawpile should avoid this issue either by being able to squeeze the docks, making the sidebar scrollable, making it possible to 'roll up' the docks (like in Inkscape), or open with fewer docks by default.
I am running Parabola GNU/Linux (read Arch Linux) with KDE and Plasma 5.
| non_main | cannot resize window with all docks visible drawpile by default starts up with most of the dockable windows docked and visible this makes the window taller than the space i have available when maximized the window still doesn t fit on the screen after detaching or hiding some of the docks the window became very flexible i think that drawpile should avoid this issue either by being able to squeeze the docks making the sidebar scrollable making it possible to roll up the docks like in inkscape or open with fewer docks by default i am running parabola gnu linux read arch linux with kde and plasma | 0 |
1,188 | 5,103,431,043 | IssuesEvent | 2017-01-04 21:23:46 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | docker_container module doesn't match existing container when entrypoint is used | affects_2.1 bug_report cloud docker waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
`docker_container`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
None
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux/Fedora 22
##### SUMMARY
<!--- Explain the problem briefly -->
When running a task with an `entrypoint` parameter, a docker container is destroyed and recreated each time after the first time it is run.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
Fill in the variables and run this twice:
```
- name: Create data container
docker_container:
name: "{{ docker_data_name }}"
image: "{{ image }}"
state: present
entrypoint: "/bin/echo Data-only container for {{ name }}"
```
On the second run this produces:
```
TASK [docker-image : Create data container] ************************************
changed: [hostname]
```
Destroy that container, comment out the entrypoint, then run this twice
```
- name: Create data container
docker_container:
name: "{{ docker_data_name }}"
image: "{{ image }}"
state: present
#entrypoint: "/bin/echo Data-only container for {{ name }}"
```
On the second run this produces:
```
TASK [docker-image : Create data container] ************************************
ok: [hostname]
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Ansible reports no change. The created timestamp should be the same as the first time it was run.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Ansible reports a change, and the created timestamp becomes recent.
| True | docker_container module doesn't match existing container when entrypoint is used - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
`docker_container`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
None
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux/Fedora 22
##### SUMMARY
<!--- Explain the problem briefly -->
When running a task with an `entrypoint` parameter, a docker container is destroyed and recreated each time after the first time it is run.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
Fill in the variables and run this twice:
```
- name: Create data container
docker_container:
name: "{{ docker_data_name }}"
image: "{{ image }}"
state: present
entrypoint: "/bin/echo Data-only container for {{ name }}"
```
On the second run this produces:
```
TASK [docker-image : Create data container] ************************************
changed: [hostname]
```
Destroy that container, comment out the entrypoint, then run this twice
```
- name: Create data container
docker_container:
name: "{{ docker_data_name }}"
image: "{{ image }}"
state: present
#entrypoint: "/bin/echo Data-only container for {{ name }}"
```
On the second run this produces:
```
TASK [docker-image : Create data container] ************************************
ok: [hostname]
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Ansible reports no change. The created timestamp should be the same as the first time it was run.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Ansible reports a change, and the created timestamp becomes recent.
| main | docker container module doesn t match existing container when entrypoint is used issue type bug report component name docker container ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux fedora summary when running a task with an entrypoint parameter a docker container is destroyed and recreated each time after the first time it is run steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used fill in the variables and run this twice name create data container docker container name docker data name image image state present entrypoint bin echo data only container for name on the second run this roduces task changed destroy that container comment out the entrypoint then run this twice name create data container docker container name docker data name image image state present entrypoint bin echo data only container for name on the second run this poduces task ok expected results ansible reports no change the created timestamp should be the same as the first time it was run actual results ansible reports a change and the created timestamp becomes recent | 1 |
956 | 4,702,099,560 | IssuesEvent | 2016-10-13 00:16:34 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Document EOS Min Version | affects_2.2 bug_report in progress networking waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_template, eos_config
##### ANSIBLE VERSION
2.2
##### SUMMARY
The latest implementation in devel for 2.2 uses a feature in EOS, session-config. This feature was introduced in EOS 4.15.0F. Therefore, the module documentation should clearly indicate this, otherwise you end up with:
```
TASK [Arista EOS Base Configuration] *******************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: localhost(config-s-ansibl)#
fatal: [172.16.130.201]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_5GSPsQ/ansible_module_eos_template.py\", line 213, in <module>\n main()\n File \"/tmp/ansible_5GSPsQ/ansible_module_eos_template.py\", line 205, in main\n commit=True)\n File \"/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/netcfg.py\", line 58, in load_config\n File \"/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/eos.py\", line 78, in load_config\n File \"/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/eos.py\", line 102, in diff_config\n File \"/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/shell.py\", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: show session-config diffs\r\n% Invalid input\r\nlocalhost(config-s-ansibl)#\n", "module_stdout": "", "msg": "MODULE FAILURE"}
to retry, use: --limit @/ansi/base_configuration.retry
PLAY RECAP *********************************************************************
172.16.130.201 : ok=0 changed=0 unreachable=0 failed=1
```
I'd also like to recommend looking for ``invalid input`` and maybe offering a better message which hints to the user that their version of EOS is too old. | True | Document EOS Min Version - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_template, eos_config
##### ANSIBLE VERSION
2.2
##### SUMMARY
The latest implementation in devel for 2.2 uses a feature in EOS, session-config. This feature was introduced in EOS 4.15.0F. Therefore, the module documentation should clearly indicate this, otherwise you end up with:
```
TASK [Arista EOS Base Configuration] *******************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: localhost(config-s-ansibl)#
fatal: [172.16.130.201]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_5GSPsQ/ansible_module_eos_template.py\", line 213, in <module>\n main()\n File \"/tmp/ansible_5GSPsQ/ansible_module_eos_template.py\", line 205, in main\n commit=True)\n File \"/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/netcfg.py\", line 58, in load_config\n File \"/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/eos.py\", line 78, in load_config\n File \"/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/eos.py\", line 102, in diff_config\n File \"/tmp/ansible_5GSPsQ/ansible_modlib.zip/ansible/module_utils/shell.py\", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: show session-config diffs\r\n% Invalid input\r\nlocalhost(config-s-ansibl)#\n", "module_stdout": "", "msg": "MODULE FAILURE"}
to retry, use: --limit @/ansi/base_configuration.retry
PLAY RECAP *********************************************************************
172.16.130.201 : ok=0 changed=0 unreachable=0 failed=1
```
I'd also like to recommend looking for ``invalid input`` and maybe offering a better message which hints to the user that their version of EOS is too old. | main | document eos min version issue type bug report component name eos template eos config ansible version summary the latest implementation in devel for uses a feature in eos session config this feature was introduced in eos therefore the module documentation should clearly indicate this otherwise you end up with task an exception occurred during task execution to see the full traceback use vvv the error was localhost config s ansibl fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible ansible module eos template py line in n main n file tmp ansible ansible module eos template py line in main n commit true n file tmp ansible ansible modlib zip ansible module utils netcfg py line in load config n file tmp ansible ansible modlib zip ansible module utils eos py line in load config n file tmp ansible ansible modlib zip ansible module utils eos py line in diff config n file tmp ansible ansible modlib zip ansible module utils shell py line in execute nansible module utils network networkerror matched error in response show session config diffs r n invalid input r nlocalhost config s ansibl n module stdout msg module failure to retry use limit ansi base configuration retry play recap ok changed unreachable failed i d also like to recommend looking for invalid input and maybe offering a better message which hints to the user that their version of eos is too old | 1 |
173,667 | 27,510,173,678 | IssuesEvent | 2023-03-06 08:11:07 | status-im/help.status.im | https://api.github.com/repos/status-im/help.status.im | closed | Adjust Status Help content width size | P:information-design P:platform | We need to check with the Design team what would be the recommended content width size in Status Help.
See [this discussion](https://github.com/squidfunk/mkdocs-material/discussions/2842) in the Material for MkDocs forum for information about how to control this parameter. | 1.0 | Adjust Status Help content width size - We need to check with the Design team what would be the recommended content width size in Status Help.
See [this discussion](https://github.com/squidfunk/mkdocs-material/discussions/2842) in the Material for MkDocs forum for information about how to control this parameter. | non_main | adjust status help content width size we need to check with the design team what would be the recommended content width size in status help see in the material for mkdocs forum for information about how to control this parameter | 0 |
460,602 | 13,213,569,554 | IssuesEvent | 2020-08-16 13:29:43 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID :211474] Unchecked return value in tests/kernel/mutex/mutex_api/src/test_mutex_apis.c | Coverity bug has-pr priority: low |
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/476fc405e7/tests/kernel/mutex/mutex_api/src/test_mutex_apis.c#L65
Category: Error handling issues
Function: `tmutex_test_lock`
Component: Tests
CID: [211474](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=211474)
Details:
```
59 {
60 k_mutex_init(pmutex);
61 k_thread_create(&tdata, tstack, STACK_SIZE,
62 entry_fn, pmutex, NULL, NULL,
63 K_PRIO_PREEMPT(0),
64 K_USER | K_INHERIT_PERMS, K_NO_WAIT);
>>> CID 211474: Error handling issues (CHECKED_RETURN)
>>> Calling "k_mutex_lock" without checking return value (as is done elsewhere 14 out of 16 times).
65 k_mutex_lock(pmutex, K_FOREVER);
66 TC_PRINT("access resource from main thread\n");
67
68 /* wait for spawn thread to take action */
69 k_msleep(TIMEOUT);
70 }
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v32951/p12996.
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| 1.0 | [Coverity CID :211474] Unchecked return value in tests/kernel/mutex/mutex_api/src/test_mutex_apis.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/476fc405e7/tests/kernel/mutex/mutex_api/src/test_mutex_apis.c#L65
Category: Error handling issues
Function: `tmutex_test_lock`
Component: Tests
CID: [211474](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=211474)
Details:
```
59 {
60 k_mutex_init(pmutex);
61 k_thread_create(&tdata, tstack, STACK_SIZE,
62 entry_fn, pmutex, NULL, NULL,
63 K_PRIO_PREEMPT(0),
64 K_USER | K_INHERIT_PERMS, K_NO_WAIT);
>>> CID 211474: Error handling issues (CHECKED_RETURN)
>>> Calling "k_mutex_lock" without checking return value (as is done elsewhere 14 out of 16 times).
65 k_mutex_lock(pmutex, K_FOREVER);
66 TC_PRINT("access resource from main thread\n");
67
68 /* wait for spawn thread to take action */
69 k_msleep(TIMEOUT);
70 }
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v32951/p12996.
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| non_main | unchecked return value in tests kernel mutex mutex api src test mutex apis c static code scan issues found in file category error handling issues function tmutex test lock component tests cid details k mutex init pmutex k thread create tdata tstack stack size entry fn pmutex null null k prio preempt k user k inherit perms k no wait cid error handling issues checked return calling k mutex lock without checking return value as is done elsewhere out of times k mutex lock pmutex k forever tc print access resource from main thread n wait for spawn thread to take action k msleep timeout please fix or provide comments in coverity using the link note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file | 0 |
3,741 | 15,712,838,947 | IssuesEvent | 2021-03-27 13:50:21 | FairlySadPanda/vrcbce | https://api.github.com/repos/FairlySadPanda/vrcbce | opened | Quest Support | Maintainer Ticket enhancement | v0.2.0 has no official support for Quest. This is because it was superfluous to the priority of making a smaller, easier-to-edit prefab.
Quest support is required for v1.0.0, as the table needs to have close to feature parity with the original prefab.
The priority here is simplicity. Personally, when doing cross-compatible work "Quest-first" is the correct route. So for 1.0.0, the code needs to fully work assuming Quest-level hardware is the default. (That means that we switch off PC-only features, or allow those features to work on Quest too).
| True | Quest Support - v0.2.0 has no official support for Quest. This is because it was superfluous to the priority of making a smaller, easier-to-edit prefab.
Quest support is required for v1.0.0, as the table needs to have close to feature parity with the original prefab.
The priority here is simplicity. Personally, when doing cross-compatible work "Quest-first" is the correct route. So for 1.0.0, the code needs to fully work assuming Quest-level hardware is the default. (That means that we switch off PC-only features, or allow those features to work on Quest too).
| main | quest support has no official support for quest this is because it was superfluous to the priority of making a smaller easier to edit prefab quest support is required for as the table needs to have close to feature parity with the original prefab the priority here is simplicity personally when doing cross compatible work quest first is the correct route so for the code needs to fully work assuming quest level hardware is the default that means that we switch off pc only features or allow those features to work on quest too | 1 |
4,655 | 24,096,938,338 | IssuesEvent | 2022-09-19 19:39:04 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Bug: How to use --use-container for sam sync (because local python is not same as runtime) | type/feature stage/waiting-for-release area/sync maintainer/need-followup | I was able to build using --use-container so i dont have to bother about the local python version.
But while syncing it creates problem
```
$ sam sync --debug --stack-name santhosh-testing
2022-06-30 12:35:13,556 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2022-06-30 12:35:13,563 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2022-06-30 12:35:13,564 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': '5b6c9cb7-e4a9-4c00-84c5-952ce7f5f5ab', 'installationId': 'a87120b0-ef01-4fa9-a2b4-d881501b50fc', 'sessionId': 'fb48edde-cf6e-46ea-8ac2-31688d6f3178', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.7.10', 'samcliVersion': '1.40.0', 'awsProfileProvided': True, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployWarning', 'warningCount': 0}}]}
2022-06-30 12:35:14,839 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
2022-06-30 12:35:14,839 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': '984e898d-8839-4b3d-8b0e-db500562c47c', 'installationId': 'a87120b0-ef01-4fa9-a2b4-d881501b50fc', 'sessionId': 'fb48edde-cf6e-46ea-8ac2-31688d6f3178', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.7.10', 'samcliVersion': '1.40.0', 'awsProfileProvided': True, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployConditionWarning', 'warningCount': 0}}]}
2022-06-30 12:35:16,040 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
2022-06-30 12:35:16,040 | Using config file: samconfig.toml, config environment: default
2022-06-30 12:35:16,040 | Expand command line arguments to:
2022-06-30 12:35:16,041 | --template_file=/home/simha_personal_data/programming_arch_firefox/extra/Unsorted/vid/web_dev/hss_iqgateway/aws_sam/sam-app/template.yaml --stack_name=santhosh-testing --dependency_layer --capabilities=('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND')
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-1cqyp2bwbvv8u
Default capabilities applied: ('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND')
To override with customized capabilities, use --capabilities flag or set it in samconfig.toml
2022-06-30 12:35:17,944 | Using build directory as .aws-sam/auto-dependency-layer
2022-06-30 12:35:17,945 | Using build directory as .aws-sam/auto-dependency-layer
This feature is currently in beta. Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/.
The SAM CLI will use the AWS Lambda, Amazon API Gateway, and AWS StepFunctions APIs to upload your code without
performing a CloudFormation deployment. This will cause drift in your CloudFormation stack.
**The sync command should only be used against a development stack**.
Confirm that you are synchronizing a development stack and want to turn on beta features.
Enter Y to proceed with the command, or enter N to cancel:
[y/N]: Y
2022-06-30 12:35:22,217 |
Experimental features are enabled for this session.
Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/.
2022-06-30 12:35:22,225 | No Parameters detected in the template
2022-06-30 12:35:22,249 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,249 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,251 | 2 stacks found in the template
2022-06-30 12:35:22,251 | No Parameters detected in the template
2022-06-30 12:35:22,268 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,268 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,269 | 2 resources found in the stack
2022-06-30 12:35:22,269 | No Parameters detected in the template
2022-06-30 12:35:22,289 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,290 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,290 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/'
2022-06-30 12:35:22,290 | --base-dir is not presented, adjusting uri hello_world/ relative to /home/simha_personal_data/programming_arch_firefox/extra/Unsorted/vid/web_dev/hss_iqgateway/aws_sam/sam-app/template.yaml
2022-06-30 12:35:22,290 | No Parameters detected in the template
2022-06-30 12:35:22,307 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,307 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,309 | Executing the build using build context.
2022-06-30 12:35:22,319 | No Parameters detected in the template
2022-06-30 12:35:22,344 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,344 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2022-06-30 12:35:22,345 | Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in AWS SAM. It could result in unexpected behaviors and is not recommended.
2022-06-30 12:35:22,346 | Instantiating build definitions
2022-06-30 12:35:22,351 | Same function build definition found, adding function (Previous: BuildDefinition(python3.7, /home/simha_personal_data/programming_arch_firefox/extra/Unsorted/vid/web_dev/hss_iqgateway/aws_sam/sam-app/hello_world, Zip, , fb2df3d3-6088-413d-a68d-fef68238b0c7, {}, {}, x86_64, []), Current: BuildDefinition(python3.7, /home/simha_personal_data/programming_arch_firefox/extra/Unsorted/vid/web_dev/hss_iqgateway/aws_sam/sam-app/hello_world, Zip, , 8dd59822-1269-438b-9689-5ab100c44500, {}, {}, x86_64, []), Function: Function(function_id='HelloWorldFunction', name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.7', memory=None, timeout=500, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/home/simha_personal_data/programming_arch_firefox/extra/Unsorted/vid/web_dev/hss_iqgateway/aws_sam/sam-app/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/copy_files', 'Method': 'post', 'RestApiId': 'ServerlessRestApi'}}}, metadata={'SamResourceId': 'HelloWorldFunction'}, inlinecode=None, codesign_config_arn=None, architectures=['x86_64'], stack_path=''))
2022-06-30 12:35:22,353 | Async execution started
2022-06-30 12:35:22,353 | Invoking function functools.partial(<bound method CachedOrIncrementalBuildStrategyWrapper.build_single_function_definition of <samcli.lib.build.build_strategy.CachedOrIncrementalBuildStrategyWrapper object at 0x7fd85f7e9210>>, <samcli.lib.build.build_graph.FunctionBuildDefinition object at 0x7fd85f7fab10>)
2022-06-30 12:35:22,353 | Running incremental build for runtime python3.7 for build definition fb2df3d3-6088-413d-a68d-fef68238b0c7
2022-06-30 12:35:22,353 | Waiting for async results
2022-06-30 12:35:22,354 | Manifest file is changed (new hash: 523939fbec58410a5ca9b187c33945d0) or dependency folder (.aws-sam/deps/fb2df3d3-6088-413d-a68d-fef68238b0c7) is missing for fb2df3d3-6088-413d-a68d-fef68238b0c7, downloading dependencies and copying/building source
2022-06-30 12:35:22,354 | Building codeuri: /home/simha_personal_data/programming_arch_firefox/extra/Unsorted/vid/web_dev/hss_iqgateway/aws_sam/sam-app/hello_world runtime: python3.7 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction']
2022-06-30 12:35:22,354 | Building to following folder /home/simha_personal_data/programming_arch_firefox/extra/Unsorted/vid/web_dev/hss_iqgateway/aws_sam/sam-app/.aws-sam/auto-dependency-layer/HelloWorldFunction
2022-06-30 12:35:22,355 | Loading workflow module 'aws_lambda_builders.workflows'
2022-06-30 12:35:22,358 | Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)'
2022-06-30 12:35:22,359 | Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)'
2022-06-30 12:35:22,362 | Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)'
2022-06-30 12:35:22,363 | Registering workflow 'GoDepBuilder' with capability 'Capability(language='go', dependency_manager='dep', application_framework=None)'
2022-06-30 12:35:22,365 | Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)'
2022-06-30 12:35:22,367 | Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)'
2022-06-30 12:35:22,369 | Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)'
2022-06-30 12:35:22,370 | Registering workflow 'DotnetCliPackageBuilder' with capability 'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)'
2022-06-30 12:35:22,371 | Registering workflow 'CustomMakeBuilder' with capability 'Capability(language='provided', dependency_manager=None, application_framework=None)'
2022-06-30 12:35:22,374 | Registering workflow 'NodejsNpmEsbuildBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm-esbuild', application_framework=None)'
2022-06-30 12:35:22,375 | Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)'
2022-06-30 12:35:22,439 | Invalid executable for python at /usr/bin/python
Traceback (most recent call last):
File "aws_lambda_builders/workflow.py", line 66, in wrapper
File "aws_lambda_builders/workflows/python_pip/validator.py", line 51, in validate
aws_lambda_builders.exceptions.MisMatchRuntimeError: python executable found in your path does not match runtime.
Expected version: python3.7, Found version: /usr/bin/python.
Possibly related: https://github.com/awslabs/aws-lambda-builders/issues/30
2022-06-30 12:35:22,490 | Invalid executable for python at /bin/python
Traceback (most recent call last):
File "aws_lambda_builders/workflow.py", line 66, in wrapper
File "aws_lambda_builders/workflows/python_pip/validator.py", line 51, in validate
aws_lambda_builders.exceptions.MisMatchRuntimeError: python executable found in your path does not match runtime.
Expected version: python3.7, Found version: /bin/python.
Possibly related: https://github.com/awslabs/aws-lambda-builders/issues/30
2022-06-30 12:35:22,491 | Exception raised during the execution
Build Failed
2022-06-30 12:35:22,494 | Sending Telemetry: {'metrics': [{'commandRunExperimental': {'requestId': 'fde47179-0655-4f96-89a7-ebf26da76734', 'installationId': 'a87120b0-ef01-4fa9-a2b4-d881501b50fc', 'sessionId': 'fb48edde-cf6e-46ea-8ac2-31688d6f3178', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.7.10', 'samcliVersion': '1.40.0', 'awsProfileProvided': True, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam sync', 'metricSpecificAttributes': {'experimentalAccelerate': True, 'experimentalAll': False, 'experimentalEsbuild': False, 'experimentalMavenScopeAndLayer': False, 'projectType': 'CFN'}, 'duration': 8937, 'exitReason': 'WorkflowFailedError', 'exitCode': 1}}]}
2022-06-30 12:35:23,703 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Error: PythonPipBuilder:Validation - Binary validation failed for python, searched for python in following locations : ['/usr/bin/python', '/bin/python'] which did not satisfy constraints for runtime: python3.7. Do you have python for runtime: python3.7 on your PATH?
```
Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
OS: Linux
sam --version:
$ sam --version
SAM CLI, version 1.40.0
AWS region:
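The failure in the log above comes from a runtime check: the builder looks for a `python` on PATH whose reported version matches the declared runtime (`python3.7`). A minimal sketch of that kind of check — an illustration only, not the actual aws-lambda-builders validator:

```python
import subprocess
import sys

def matches_runtime(python_exe: str, runtime: str) -> bool:
    """Run the candidate interpreter and compare its major.minor version
    against a Lambda runtime string such as 'python3.7'.  This mimics the
    idea of the validator; it is not the real aws-lambda-builders code."""
    expected = runtime.replace("python", "")  # "python3.7" -> "3.7"
    found = subprocess.check_output(
        [python_exe, "-c", "import sys; print('%d.%d' % sys.version_info[:2])"],
        text=True,
    ).strip()
    return found == expected

# The running interpreter always matches its own runtime string:
own_runtime = "python%d.%d" % sys.version_info[:2]
print(matches_runtime(sys.executable, own_runtime))  # True
print(matches_runtime(sys.executable, "python3.7"))  # False unless you actually run 3.7
```

If no matching interpreter is first on PATH, the usual fixes are installing a matching Python (e.g. via pyenv or a virtualenv whose `bin` is prepended to PATH) or building inside a container; whether your SAM CLI version accepts `--use-container` for `sync` is best confirmed with `sam sync --help`.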
7,227 | 10,361,526,327 | IssuesEvent | 2019-09-06 10:14:08 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | closed | CMIP6 tables are out of sync | bug cmor paper preprocessor | There is something going on with the CMIP6 tables. The `esmvalcore/cmor/tables/CMIP6/README.md` file claims that it is version 01.00.30, but the actual tables are version 01.00.10, i.e. quite old and not suitable for current cmorized data.
For instance, the `standard_name` of (Amon, psl) is still `air_pressure_at_sea_level` where the correct one now is `air_pressure_at_mean_sea_level`.
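The CMOR tables are plain JSON, so a mismatch like this can be checked by reading the `psl` entry of the Amon table. A sketch with an inline fragment standing in for the real file — the field names follow the CMIP6 table layout, and the path in the comment is illustrative:

```python
import json

# Inline stand-in for the relevant fragment of CMIP6_Amon.json; a real check
# would open the Amon table JSON shipped under esmvalcore/cmor/tables/CMIP6/.
amon_fragment = """
{
  "variable_entry": {
    "psl": {
      "standard_name": "air_pressure_at_mean_sea_level",
      "units": "Pa"
    }
  }
}
"""

table = json.loads(amon_fragment)
name = table["variable_entry"]["psl"]["standard_name"]
print(name)  # air_pressure_at_mean_sea_level
# A stale 01.00.10 table would instead report "air_pressure_at_sea_level".
```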
5,182 | 26,379,058,738 | IssuesEvent | 2023-01-12 06:49:12 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | NPE performing bazel sync | product: IntelliJ type: user support topic: sync more-data-needed awaiting-maintainer | We've had several users at Stripe experience issues performing a bazel sync through:
+ __Bazel__ -> __Sync__ -> __Sync Project with BUILD Files__
+ __Bazel__ -> __Sync__ -> __Non-Incrementally Sync Project with BUILD Files__
+ Using the Project-View PopupMenu's "__Partially Sync FOLDER/...:all__"
The result is an exception is thrown and the sync is not completed. Stack traces are always from Swing with the error of `Null child not allowed`.
```text
java.lang.NullPointerException: Null child not allowed
at java.desktop/javax.swing.tree.TreePath.pathByAddingChild(TreePath.java:330)
at java.desktop/javax.swing.tree.FixedHeightLayoutCache$SearchInfo.getPath(FixedHeightLayoutCache.java:1468)
at java.desktop/javax.swing.tree.FixedHeightLayoutCache.getPathForRow(FixedHeightLayoutCache.java:213)
at java.desktop/javax.swing.plaf.basic.BasicTreeUI.getPathForRow(BasicTreeUI.java:670)
at java.desktop/javax.swing.JTree.getPathForRow(JTree.java:2210)
at com.intellij.util.ui.tree.TreeUtil.visitVisibleRows(TreeUtil.java:1898)
at com.intellij.util.ui.tree.TreeUtil.visitVisibleRows(TreeUtil.java:1930)
at com.intellij.util.ui.tree.TreeUtil.collectVisibleRows(TreeUtil.java:1950)
at com.intellij.util.ui.tree.TreeUtil.collectExpandedObjects(TreeUtil.java:193)
at com.intellij.util.ui.tree.TreeUtil.collectExpandedPaths(TreeUtil.java:174)
at com.intellij.ide.util.treeView.TreeState.createOn(TreeState.java:159)
at com.intellij.ide.util.treeView.TreeState.createOn(TreeState.java:153)
at com.intellij.ide.projectView.impl.AbstractProjectViewPane.createTreeState(AbstractProjectViewPane.java:724)
at com.intellij.ide.projectView.impl.AbstractProjectViewPane.saveExpandedPaths(AbstractProjectViewPane.java:730)
at com.intellij.ide.scopeView.ScopeViewPane.updateFromRoot(ScopeViewPane.java:219)
at com.intellij.ide.projectView.impl.ProjectViewImpl.refresh(ProjectViewImpl.java:1072)
at com.google.idea.blaze.base.sync.autosync.ProjectTargetManagerImpl$TargetSyncListener.buildStarted(ProjectTargetManagerImpl.java:151)
at com.google.idea.blaze.base.sync.BuildPhaseSyncTask.lambda$notifyBuildStarted$1(BuildPhaseSyncTask.java:139)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
at com.google.idea.blaze.base.sync.BuildPhaseSyncTask.notifyBuildStarted(BuildPhaseSyncTask.java:139)
at com.google.idea.blaze.base.sync.BuildPhaseSyncTask.doRun(BuildPhaseSyncTask.java:183)
at com.google.idea.blaze.base.sync.BuildPhaseSyncTask.run(BuildPhaseSyncTask.java:128)
at com.google.idea.blaze.base.sync.BuildPhaseSyncTask.runBuildPhase(BuildPhaseSyncTask.java:89)
at com.google.idea.blaze.base.sync.SyncPhaseCoordinator.runSync(SyncPhaseCoordinator.java:432)
at com.google.idea.blaze.base.sync.SyncPhaseCoordinator.lambda$syncProject$0(SyncPhaseCoordinator.java:258)
at com.google.idea.blaze.base.scope.Scope.push(Scope.java:57)
at com.google.idea.blaze.base.sync.SyncPhaseCoordinator.lambda$syncProject$1(SyncPhaseCoordinator.java:238)
at com.google.idea.blaze.base.async.executor.ProgressiveTaskWithProgressIndicator.lambda$submitTask$0(ProgressiveTaskWithProgressIndicator.java:79)
at com.google.idea.blaze.base.async.executor.ProgressiveTaskWithProgressIndicator.lambda$submitTaskWithResult$4(ProgressiveTaskWithProgressIndicator.java:127)
at com.intellij.openapi.progress.ProgressManager.lambda$runProcess$0(ProgressManager.java:57)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$2(CoreProgressManager.java:188)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$executeProcessUnderProgress$12(CoreProgressManager.java:624)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:698)
at com.intellij.openapi.progress.impl.CoreProgressManager.computeUnderProgress(CoreProgressManager.java:646)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:623)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:66)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:175)
at com.intellij.openapi.progress.ProgressManager.runProcess(ProgressManager.java:57)
at com.google.idea.blaze.base.async.executor.ProgressiveTaskWithProgressIndicator.lambda$submitTaskWithResult$5(ProgressiveTaskWithProgressIndicator.java:127)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```
From our investigations so far, the problem seems to present itself randomly and without clear resolution. Some users have been able to get past this issue through a battery of fixes, usually after many attempts and it has been unclear what the exact solution has been. Other users have yet to resolve the issue.
Most of our users make use of project-views. While they vary somewhat, they are roughly equivalent to the following example:
__`.ijwb/.bazelproject`__
```yaml
import intellij/my-project.bazelproject
directories:
.
-out
derive_targets_from_directories: false
test_sources:
src/test/*
additional_languages:
scala
```
__`intellij/my-project.bazelproject`__
```yaml
directories:
src
-out
derive_targets_from_directories: false
targets:
//src/main/java/com/stripe/my_project_api/...
//src/test/java/com/stripe/my_project_api/...
test_sources:
src/test/*
```
All of our users are running IntelliJ Ultimate 2021.3 (IU-213.7172.25) and we have noticed this issue across multiple versions of the bazel plugin. The latest report was noticed on __2022.06.28.0.0-api-version-213__. For that user we also attempted to downgrade to __2022.02.23.0.0-api-version-213__, but the issue persisted.
### Resolution Attempts So Far (mixed results):
+ Clean bazel cache `bazel clean --expunge`
+ Invalidate caches and restart
+ Uninstalling/Reinstalling the bazel plugin
+ Uninstalling IJ, cleaning all caches, and re-installing
### Reproduction
We have been unable to reproduce the issues our users are experiencing by replicating their environment. Overriding the experiment settings also doesn't appear to trigger the error, using the following configurations in `idea.properties`:
```properties
experiment.username.override=USER_HAVING_ISSUES
``` | True | main | 1
332,992 | 29,505,119,408 | IssuesEvent | 2023-06-03 07:49:45 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix discrete_fourier_transform.test_numpy_ihfft | NumPy Frontend Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | non_main | 0
2,444 | 8,639,853,889 | IssuesEvent | 2018-11-23 22:06:05 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | Question: How to set AM frequency | V1 related (not maintained) | So there is this code from testam.sh:
```
./piam sampleaudio.wav am.rfa
sudo ./rpitx -m RFA -i am.rfa -f 433900 -l
```
I want to broadcast on 1251 kHz
What do i need to replace 433900 with? I tried with 1251 but nothing happens.
Any help would be appreciated, im still new to frequencies and Linux | True | main | 1
3,535 | 13,912,327,981 | IssuesEvent | 2020-10-20 18:42:22 | polyfacet/ArasDeveloperTool | https://api.github.com/repos/polyfacet/ArasDeveloperTool | closed | Split application to: ConsoleApp, Interfaces and CommonCommands | enhancement maintainability | CommonCommands = The common implementations.
Also add an external Hello World sample plugin | True | main | 1
14,925 | 10,227,287,716 | IssuesEvent | 2019-08-16 20:19:15 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | The first command in "Configure virtual machine availability set-based AKS clusters for SSH access" section is wrong | Pri1 container-service/svc cxp doc-bug triaged | Looks like it should be `az vmss list`, not `az vm list`.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3b81a147-db1c-b92f-88d1-6dd0991a72fe
* Version Independent ID: f5a2c949-498f-5848-d89c-031d1a757120
* Content: [SSH into Azure Kubernetes Service (AKS) cluster nodes](https://docs.microsoft.com/en-us/azure/aks/ssh#feedback)
* Content Source: [articles/aks/ssh.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/ssh.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned** | 1.0 | non_main | 0
1,561 | 6,572,254,771 | IssuesEvent | 2017-09-11 00:39:57 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Yum plugin doesn't support update-to | affects_2.3 feature_idea waiting_on_maintainer | With yum I can say `yum update-to foo-1.2`, which will make sure that the package foo gets updated to specifically version 1.2. It won't install foo-1.2 if a version of foo wasn't already on the system and it won't update the foo package to anything higher than version 1.2 (`yum update foo-1.2` is the same as `yum update foo` if the version of foo already installed is 1.2).
Also, update-to without a version is the same as update. e.g. `yum update foo` == `yum update-to foo`
Something like
`yum: name=httpd-2.2.29-1.4.amzn1 state=update-to`?
| True | main | 1
288,121 | 24,882,768,592 | IssuesEvent | 2022-10-28 03:47:10 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Generalization test for the Orçamento - Execução tag - Monjolos | generalization test development template - Memory (66) tag - Orçamento subtag - Execução | DoD: Run the generalization test of the Orçamento - Execução tag validator for the municipality of Monjolos. | 1.0 | non_main | 0
4,320 | 21,721,515,702 | IssuesEvent | 2022-05-11 00:58:38 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | closed | [MAINTAIN] bioconductor series | maintain | <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
<details>
- [x] r-crossmeta
- [x] r-mfa
- [x] r-srgnet
- [x] r-drugvsdisease
- [x] r-generegionscan
- [ ] r-puma
- [x] r-italics
- [x] r-prolocgui
- [x] r-nadfinder
- [x] r-frma
- [x] r-qqconf
- [x] r-mmuphin
- [x] r-travel
- [x] r-rqt
- [x] r-musicatk
- [x] r-affxparser
- [x] r-deconstructsigs
- [x] r-mimager
- [x] r-singlecelltk
- [x] r-affypara
- [x] r-oligo
- [x] r-pdinfobuilder
- [x] r-cn.farms
- [x] r-gramm4r
- [x] r-gcsscore
- [x] r-dmwr
- [x] r-affyilm
- [x] r-arrayexpresshts
- [x] r-kebabs
- [x] r-scan.upc
- [x] r-chipxpress
- [x] r-arrayexpress
- [x] r-lpsymphony
- [x] r-alps
- [x] r-metap
- [x] r-rgin
- [x] r-pd.mapping50k.xba240
- [x] r-proloc
- [ ] r-interactivedisplay
- [x] r-ideal
- [x] r-ccfindr
- [x] r-bibitr
- [x] r-encodeexplorer
- [x] r-eventpointer
- [x] r-ihw
- [x] r-synapter
- [x] r-swimr
- [x] r-sampling
- [x] r-maaslin2
- [x] r-mirsm
- [x] r-gfa
- [x] r-cancer
<details>
**Log of the bug**
<details>
```
put the output here
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
| True | main | 1
73,831 | 19,830,936,260 | IssuesEvent | 2022-01-20 11:55:13 | reapit/foundations | https://api.github.com/repos/reapit/foundations | opened | There should be a UI to add custom fields to existing shared entities | feature front-end app-builder | **Background context or User story:**
_I should be able to drag and drop a new field into the app builder UI that extends a shared entity model_
**Specification or Acceptance Criteria:**
- Custom fields should be CRUDable
- Should be stored in meta data against main entity | 1.0 | There should be a UI to add custom fields to existing shared entities - **Background context or User story:**
_I should be able to drag and drop a new field into the app builder UI that extends a shared entity model_
**Specification or Acceptance Criteria:**
- Custom fields should be CRUDable
- Should be stored in meta data against main entity | non_main | there should be a ui to add custom fields to existing shared entities background context or user story i should be able to drag and drop a new field into the app builder ui that extends a shared entity model specification or acceptance criteria custom fields should be crudable should be stored in meta data against main entity | 0 |
5,832 | 30,871,323,130 | IssuesEvent | 2023-08-03 11:33:21 | jupyter-naas/awesome-notebooks | https://api.github.com/repos/jupyter-naas/awesome-notebooks | opened | LinkedIn - Send like to latest profile post | templates maintainer | This notebook will follow a profile on LinkedIn and send like to its last posts. It is useful for organizations to increase their visibility on social media.
| True | LinkedIn - Send like to latest profile post - This notebook will follow a profile on LinkedIn and send like to its last posts. It is useful for organizations to increase their visibility on social media.
| main | linkedin send like to latest profile post this notebook will follow a profile on linkedin and send like to its last posts it is useful for organizations to increase their visibility on social media
71,484 | 15,207,762,167 | IssuesEvent | 2021-02-17 00:57:42 | billmcchesney1/foxtrot | https://api.github.com/repos/billmcchesney1/foxtrot | opened | CVE-2019-16335 (High) detected in jackson-databind-2.9.9.1.jar | security vulnerability | ## CVE-2019-16335 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: foxtrot/foxtrot-sql/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar</p>
<p>
Dependency Hierarchy:
- dropwizard-jackson-1.3.13.jar (Root Library)
- :x: **jackson-databind-2.9.9.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/foxtrot/commit/ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc">ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335>CVE-2019-16335</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x">https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x</a></p>
<p>Release Date: 2020-10-20</p>
<p>Fix Resolution: 2.9.10</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9.1","packageFilePaths":["/foxtrot-sql/pom.xml","/foxtrot-core/pom.xml","/foxtrot-server/pom.xml","/foxtrot-common/pom.xml","/foxtrot-translator/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"io.dropwizard:dropwizard-jackson:1.3.13;com.fasterxml.jackson.core:jackson-databind:2.9.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.10"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-16335","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-16335 (High) detected in jackson-databind-2.9.9.1.jar - ## CVE-2019-16335 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: foxtrot/foxtrot-sql/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar</p>
<p>
Dependency Hierarchy:
- dropwizard-jackson-1.3.13.jar (Root Library)
- :x: **jackson-databind-2.9.9.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/foxtrot/commit/ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc">ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335>CVE-2019-16335</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x">https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x</a></p>
<p>Release Date: 2020-10-20</p>
<p>Fix Resolution: 2.9.10</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9.1","packageFilePaths":["/foxtrot-sql/pom.xml","/foxtrot-core/pom.xml","/foxtrot-server/pom.xml","/foxtrot-common/pom.xml","/foxtrot-translator/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"io.dropwizard:dropwizard-jackson:1.3.13;com.fasterxml.jackson.core:jackson-databind:2.9.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.10"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-16335","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file foxtrot foxtrot sql pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy dropwizard jackson jar root library x jackson databind 
jar vulnerable library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to com zaxxer hikari hikaridatasource this is a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree io dropwizard dropwizard jackson com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to com zaxxer hikari hikaridatasource this is a different vulnerability than cve vulnerabilityurl | 0 |
5,795 | 30,702,650,743 | IssuesEvent | 2023-07-27 01:47:50 | redcanaryco/atomic-red-team | https://api.github.com/repos/redcanaryco/atomic-red-team | closed | Idea: Standardize location for downloaded prerequisites. | enhancement maintainers Stale | ### Use-cases
When assessing an air-gapped system it would be nice if all downloaded prerequisites are located at a standard location. Many of them are in $env:Temp / %TEMP% or /tmp - but I have some issues with this approach:
1. It is MOST, not ALL, of the dependencies. It's like it's the wild west on where stuff are being downloaded to / created.
2. The above mentioned folders are full with other irrelevant stuff that is not needed on the target system.
### Proposal
Standardize the location where things are downloaded to, preferably making the base download directory configurable.
Suggestion 1: I started making changes to the .yaml-files to make sure that files was downloaded to PathToAtomicsFolder\<technique id>\prereq\ but as @clr2of8 mentioned to me on Slack it will dirty down the git repository. Could a similar target directory be specified? It would be great if that directory could be globally defined,
Suggestion 2: Alternative putting things in PathToAtomicsFolder\<technique id>\prereq and have that folder git-ignored?
Suggestion 3: Download files to ($env:Temp|%TEMP%|/tmp)/atomic-prereq/, maybe with the Technique-id in the path as well?
If something can be agreed upon I don't mind spending some time and fix all already existing test-cases and submit a PR.
| True | Idea: Standardize location for downloaded prerequisites. - ### Use-cases
When assessing an air-gapped system it would be nice if all downloaded prerequisites are located at a standard location. Many of them are in $env:Temp / %TEMP% or /tmp - but I have some issues with this approach:
1. It is MOST, not ALL, of the dependencies. It's like it's the wild west on where stuff are being downloaded to / created.
2. The above mentioned folders are full with other irrelevant stuff that is not needed on the target system.
### Proposal
Standardize the location where things are downloaded to, preferably making the base download directory configurable.
Suggestion 1: I started making changes to the .yaml-files to make sure that files was downloaded to PathToAtomicsFolder\<technique id>\prereq\ but as @clr2of8 mentioned to me on Slack it will dirty down the git repository. Could a similar target directory be specified? It would be great if that directory could be globally defined,
Suggestion 2: Alternative putting things in PathToAtomicsFolder\<technique id>\prereq and have that folder git-ignored?
Suggestion 3: Download files to ($env:Temp|%TEMP%|/tmp)/atomic-prereq/, maybe with the Technique-id in the path as well?
If something can be agreed upon I don't mind spending some time and fix all already existing test-cases and submit a PR.
| main | idea standardize location for downloaded prerequisites use cases when assessing a air gaped system it would be nice if all downloaded prerequisites are located at a standard location many of them are in env temp temp or tmp but i have some issues with this approach it is most not all of the dependencies it s like it s the wild west on where stuff are being downloaded to created the above mentioned folders are full with other irrelevant stuff that is not needed on the target system proposal standardize the location where things are downloaded to preferably making the base download directory configurable suggestion i started making changes to the yaml files to make sure that files was downloaded to pathtoatomicsfolder prereq but as mentioned to me on slack it will dirty down the git repository could a similar target directory be specified it would be great if that directory could be globally defined suggestion alternative putting things in pathtoatomicsfolder prereq and have that folder git ignored suggestion download files to env temp temp tmp atomic prereq maybe with the technique id in the path as well if something can be agreed upon i don t mind spending some time and fix all already existing test cases and submit a pr | 1 |
199,745 | 6,993,973,682 | IssuesEvent | 2017-12-15 13:41:10 | BlueBrain/neurocurator | https://api.github.com/repos/BlueBrain/neurocurator | closed | [Notebooks] /notebooks should be in the NAT repository | enhancement high priority | Check if the folder _notebooks_ can safely be deleted from the NeuroCurator repository.
These notebooks could have been modified in the NeuroCurator repository after the project splitting and the creation of the NAT repository. | 1.0 | [Notebooks] /notebooks should be in the NAT repository - Check if the folder _notebooks_ can safely be deleted from the NeuroCurator repository.
These notebooks could have been modified in the NeuroCurator repository after the project splitting and the creation of the NAT repository. | non_main | notebooks should be in the nat repository check if the folder notebooks can safely be deleted from the neurocurator repository these notebooks could have been modified in the neurocurator repository after the project splitting and the creation of the nat repository | 0 |
228,606 | 18,244,704,437 | IssuesEvent | 2021-10-01 16:47:59 | ValveSoftware/Proton | https://api.github.com/repos/ValveSoftware/Proton | closed | DOOM 2016 Crash when online multiplayer match begins. (379720) | Need Retest Whitelist Update Request | Every time online match begins the game crash showing a "DOOM Unhandled Exception" message.
Using Vulkan API, Linux Mint 19 64bits, kernel 4.15, driver version 396.51.
My Crash html file: https://pastebin.com/raw/bycgHM8k
Single-player Campaign works fine. | 1.0 | DOOM 2016 Crash when online multiplayer match begins. (379720) - Every time online match begins the game crash showing a "DOOM Unhandled Exception" message.
Using Vulkan API, Linux Mint 19 64bits, kernel 4.15, driver version 396.51.
My Crash html file: https://pastebin.com/raw/bycgHM8k
Single-player Campaign works fine. | non_main | doom crash when online multiplayer match begins every time online match begins the game crash showing a doom unhandled exception message using vulkan api linux mint kernel driver version my crash html file single player campaign works fine | 0 |
5,300 | 26,776,989,427 | IssuesEvent | 2023-01-31 17:53:57 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Modify page routing to allow for any database name | type: enhancement work: backend work: frontend status: draft restricted: maintainers | ## Current behavior
- Many of our pages have URLs that begin with the database name.
- We also have routes that begin with things like `administration` and `auth`.
- Those routing rules produce an ambiguous routing grammar making it impossible to use Mathesar with a database named "administration" (for example).
## Desired behavior
- We should modify the routing rules to be unambiguous.
- One possibility would be changing `/<db_name>/` to `/db/<db_name>`, but we may want to consider other options as well.
| True | Modify page routing to allow for any database name - ## Current behavior
- Many of our pages have URLs that begin with the database name.
- We also have routes that begin with things like `administration` and `auth`.
- Those routing rules produce an ambiguous routing grammar making it impossible to use Mathesar with a database named "administration" (for example).
## Desired behavior
- We should modify the routing rules to be unambiguous.
- One possibility would be changing `/<db_name>/` to `/db/<db_name>`, but we may want to consider other options as well.
| main | modify page routing to allow for any database name current behavior many of our pages have urls that begin with the database name we also have routes that begin with things like administration and auth those routing rules produce an ambiguous routing grammar making it impossible to use mathesar with a database named administration for example desired behavior we should modify the routing rules to be unambiguous one possibility would be changing to db but we may want to consider other options as well | 1 |
333,857 | 24,393,847,590 | IssuesEvent | 2022-10-04 17:24:34 | iannesbitt/readgssi | https://api.github.com/repos/iannesbitt/readgssi | closed | Main Function Documentation | documentation | Not sure if it says this anywhere in the documentation but the output of the main readgssi.readgssi() function claims to be a numpy array but it is actually a dictonary of numpy arrays with the keys being the channel numbers. | 1.0 | Main Function Documentation - Not sure if it says this anywhere in the documentation but the output of the main readgssi.readgssi() function claims to be a numpy array but it is actually a dictonary of numpy arrays with the keys being the channel numbers. | non_main | main function documentation not sure if it says this anywhere in the documentation but the output of the main readgssi readgssi function claims to be a numpy array but it is actually a dictonary of numpy arrays with the keys being the channel numbers | 0 |
344,406 | 10,344,147,025 | IssuesEvent | 2019-09-04 10:30:31 | DisabledMallis/BTDToolbox | https://api.github.com/repos/DisabledMallis/BTDToolbox | closed | Program doesn't exit when pressing exit button | Bug Help Wanted High Priority | When pressing the exit button, if the console is open, it will hide the console, then not exit. You have to wait a few moments and then press the exit button again in order for the program to close. | 1.0 | Program doesn't exit when pressing exit button - When pressing the exit button, if the console is open, it will hide the console, then not exit. You have to wait a few moments and then press the exit button again in order for the program to close. | non_main | program doesn t exit when pressing exit button when pressing the exit button if the console is open it will hide the console then not exit you have to wait a few moments and then press the exit button again in order for the program to close | 0 |
2,186 | 7,716,305,216 | IssuesEvent | 2018-05-23 10:15:16 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Entertainment Monitors Look at template Thunderdome | Bug Maintainability/Hinders improvements Map Issue | Issue reported from Round ID: 85949 (NILONS SERVER [ENGLISH] [US-EAST] [100% FREE NILONS])
Reporting client version: 512
Activate entertainment monitor. Select Thunderdome.
| True | Entertainment Monitors Look at template Thunderdome - Issue reported from Round ID: 85949 (NILONS SERVER [ENGLISH] [US-EAST] [100% FREE NILONS])
Reporting client version: 512
Activate entertainment monitor. Select Thunderdome.
| main | entertainment monitors look at template thunderdome issue reported from round id nilons server reporting client version activate entertainment monitor select thunderdome | 1 |
101,445 | 16,510,883,555 | IssuesEvent | 2021-05-26 03:52:30 | devikab2b/tdm-file-viewer_org | https://api.github.com/repos/devikab2b/tdm-file-viewer_org | opened | CVE-2015-9251 (Medium) detected in jquery-1.10.2.min.js | security vulnerability | ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.10.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: tdm-file-viewer_org/target/site/scoverage/com/capitalone/tdm/AvroViewSQL.scala.html</p>
<p>Path to vulnerable library: tdm-file-viewer_org/target/site/scoverage/com/capitalone/tdm/AvroViewSQL.scala.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/devikab2b/tdm-file-viewer_org/commit/5085cf3adcc780a9b59fe63d06fa3c6ab29eea6b">5085cf3adcc780a9b59fe63d06fa3c6ab29eea6b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-9251 (Medium) detected in jquery-1.10.2.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.10.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: tdm-file-viewer_org/target/site/scoverage/com/capitalone/tdm/AvroViewSQL.scala.html</p>
<p>Path to vulnerable library: tdm-file-viewer_org/target/site/scoverage/com/capitalone/tdm/AvroViewSQL.scala.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/devikab2b/tdm-file-viewer_org/commit/5085cf3adcc780a9b59fe63d06fa3c6ab29eea6b">5085cf3adcc780a9b59fe63d06fa3c6ab29eea6b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tdm file viewer org target site scoverage com capitalone tdm avroviewsql scala html path to vulnerable library tdm file viewer org target site scoverage com capitalone tdm avroviewsql scala html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource | 0 |
54,513 | 3,068,767,551 | IssuesEvent | 2015-08-18 17:09:36 | loklak/loklak_webclient | https://api.github.com/repos/loklak/loklak_webclient | closed | Implement a thought through choice of media items for wall | Feature Priority 1 - High Twitter Wall - Aneesh | Currently there are options that do not make sense, e.g. the user can choose "only images", but can also choose "show videos". There are choices that exclude each other. Please implement the following functionality and UI.
* [x] add a section below the left hand side area "What do you want to show on the wall?" (as described here https://github.com/loklak/loklak_webclient/issues/330) with the title "Which media do you want to show on the wall"
* [x] add button sliders for the following
* [x] Show images Yes - No - Only
* [x] Show video Yes - No - Only
* [ ] Show audio Yes - No - Only (Is this already an implemented option?)
* [x] If the user chooses at one point "Only", all the other options should change their color to grey and become unavailable | 1.0 | Implement a thought through choice of media items for wall - Currently there are options that do not make sense, e.g. the user can choose "only images", but can also choose "show videos". There are choices that exclude each other. Please implement the following functionality and UI.
* [x] add a section below the left hand side area "What do you want to show on the wall?" (as described here https://github.com/loklak/loklak_webclient/issues/330) with the title "Which media do you want to show on the wall"
* [x] add button sliders for the following
* [x] Show images Yes - No - Only
* [x] Show video Yes - No - Only
* [ ] Show audio Yes - No - Only (Is this already an implemented option?)
* [x] If the user chooses at one point "Only", all the other options should change their color to grey and become unavailable | non_main | implement a thought through choice of media items for wall currently there are options that do not make sense e g the user can choose only images but can also choose show videos there are choices that exclude each other please implement the following functionality and ui add a section below the left hand side area what do you want to show on the wall as described here with the title which media do you want to show on the wall add button sliders for the following show images yes no only show video yes no only show audio yes no only is this already an implemented option if the user chooses at one point only all the other options should change their color to grey and become unavailable | 0 |
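The "Yes / No / Only" sliders in the record above describe a small piece of filter logic in which "Only" must override every other choice. As a hedged sketch (plain Python with invented names, not loklak's actual client code), the selection rule could look like this:

```python
# Hypothetical sketch of the "Yes / No / Only" media filter described above.
# If any slider is set to "only", posts lacking that media type are hidden;
# otherwise a post is hidden only if it carries media switched to "no".

def media_filter(settings, post):
    """settings maps media type -> 'yes' | 'no' | 'only';
    post maps media type -> bool (does the post carry this media?)."""
    only_types = [m for m, choice in settings.items() if choice == "only"]
    if only_types:
        # "Only" overrides everything else: the post must carry that media.
        return all(post.get(m, False) for m in only_types)
    # Otherwise drop posts carrying media that is switched to "no".
    return all(not post.get(m, False)
               for m, choice in settings.items() if choice == "no")

# Example: "only images" hides a video-only post but keeps an image post.
settings = {"images": "only", "video": "yes", "audio": "no"}
print(media_filter(settings, {"images": True}))   # True
print(media_filter(settings, {"video": True}))    # False
```

With this shape, greying out the other sliders in the UI mirrors the fact that an "only" setting short-circuits the rest of the choices.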
66,623 | 12,807,690,161 | IssuesEvent | 2020-07-03 12:03:38 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | opened | Missing connection between map locations | Bug Code High prio | Noticed on Singleplayer
Tested on bugfixes branch
Commit: https://github.com/Regalis11/Barotrauma-development/commit/7800bc37a5bb1cc1a3ef04abf8243da1bdfd9876
Save: [https://app.zenhub.com/files/93301055/ad4f1ef3-d8e7-4a34-b340-72f990953c8e/download](https://app.zenhub.com/files/93301055/ad4f1ef3-d8e7-4a34-b340-72f990953c8e/download)
 | 1.0 | Missing connection between map locations - Noticed on Singleplayer
Tested on bugfixes branch
Commit: https://github.com/Regalis11/Barotrauma-development/commit/7800bc37a5bb1cc1a3ef04abf8243da1bdfd9876
Save: [https://app.zenhub.com/files/93301055/ad4f1ef3-d8e7-4a34-b340-72f990953c8e/download](https://app.zenhub.com/files/93301055/ad4f1ef3-d8e7-4a34-b340-72f990953c8e/download)
 | non_main | missing connection between map locations noticed on singleplayer tested on bugfixes branch commit save | 0 |
3,147 | 12,124,988,916 | IssuesEvent | 2020-04-22 14:55:31 | python-restx/flask-restx | https://api.github.com/repos/python-restx/flask-restx | reopened | Flask-RESTX Models Re-Design | enhancement maintainers question | For quite some time there have been significant issues around data models, request
parsing and response marshalling in `flask-restx` (carried over from
`flask-restplus`). The most obvious of these is the [deprecation warning](https://flask-restx.readthedocs.io/en/latest/parsing.html)
about the `reqparse` module in the documentation that has been in place for *far
too long*. These changes have been put off for various reasons which I won't
discuss here; however, now that the new fork is steadily underway, I (and no doubt others) would like
to start addressing this.
Since this digs quite deep into the architecture of `flask-restx` there will be
significant (and likely breaking) changes required. As such, this issue is to
serve as a discussion around the API we would like to provide and some **initial
ideas** of how to best proceed. This is not intended to be the starting point of
hacking something together which makes things worse!
I will set out my current thoughts on the topic, please contribute by adding
more points and expanding on mine with more discussion.
## High Level Goals:
- Uniform API for request *parsing* and response *marshalling*
  - e.g. remove the separation between `reqparse` and `models`
- Generate *correct and valid* Swagger/OpenAPI Specifications
- Validation of input and output data should conform to the generated
  Swagger/OpenAPI Specifications
  - e.g. If the Swagger/OpenAPI spec considers a value valid, the model should too.
- Define models using JSON Schema
  - Supported already, but with numerous issues (@j5awry has been battling for some time)
- OpenAPI 3 support
## General Issues/Discussion Points
- What should the API look like?
  - Continue with the `api.marshal`, `api.doc` decorator style?
- How to define models?
  - Do we force direct usage of another library e.g. Marshmallow or wrap in
    some other API and use the library for the "under the hood" work?
- Model validation
  - External libraries e.g. Marshmallow
- Schema Generation
  - External libraries e.g. Marshmallow
- Backwards compatibility
  - Continue to support `reqparse` and existing `models` interface?
- Swagger 2.0 vs OpenAPI 3.0
  - IMO generating both should be a goal if possible
## Resources/Notable Libraries
- https://marshmallow.readthedocs.io/en/stable/
- https://github.com/fuhrysteve/marshmallow-jsonschema
- https://pydantic-docs.helpmanual.io/
- Faust Models, Serialization and Codecs https://faust.readthedocs.io/en/latest/userguide/models.html
- Faust is not a Flask or even REST library but I have found its Models to be
a nice interface to use.
- https://github.com/apryor6/flask_accepts
- Swagger 2.0 https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md
- OpenAPI 3.0 https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md | True | Flask-RESTX Models Re-Design - For quite some time there have been significant issues around data models, request
parsing and response marshalling in `flask-restx` (carried over from
`flask-restplus`). The most obvious of these is the [deprecation warning](https://flask-restx.readthedocs.io/en/latest/parsing.html)
about the `reqparse` module in the documentation that has been in place for *far
too long*. These changes have been put off for various reasons which I won't
discuss here; however, now that the new fork is steadily underway, I (and no doubt others) would like
to start addressing this.
Since this digs quite deep into the architecture of `flask-restx` there will be
significant (and likely breaking) changes required. As such, this issue is to
serve as a discussion around the API we would like to provide and some **initial
ideas** of how to best proceed. This is not intended to be the starting point of
hacking something together which makes things worse!
I will set out my current thoughts on the topic, please contribute by adding
more points and expanding on mine with more discussion.
## High Level Goals:
- Uniform API for request *parsing* and response *marshalling*
  - e.g. remove the separation between `reqparse` and `models`
- Generate *correct and valid* Swagger/OpenAPI Specifications
- Validation of input and output data should conform to the generated
  Swagger/OpenAPI Specifications
  - e.g. If the Swagger/OpenAPI spec considers a value valid, the model should too.
- Define models using JSON Schema
  - Supported already, but with numerous issues (@j5awry has been battling for some time)
- OpenAPI 3 support
## General Issues/Discussion Points
- What should the API look like?
  - Continue with the `api.marshal`, `api.doc` decorator style?
- How to define models?
  - Do we force direct usage of another library e.g. Marshmallow or wrap in
    some other API and use the library for the "under the hood" work?
- Model validation
  - External libraries e.g. Marshmallow
- Schema Generation
  - External libraries e.g. Marshmallow
- Backwards compatibility
  - Continue to support `reqparse` and existing `models` interface?
- Swagger 2.0 vs OpenAPI 3.0
  - IMO generating both should be a goal if possible
## Resources/Notable Libraries
- https://marshmallow.readthedocs.io/en/stable/
- https://github.com/fuhrysteve/marshmallow-jsonschema
- https://pydantic-docs.helpmanual.io/
- Faust Models, Serialization and Codecs https://faust.readthedocs.io/en/latest/userguide/models.html
- Faust is not a Flask or even REST library but I have found its Models to be
a nice interface to use.
- https://github.com/apryor6/flask_accepts
- Swagger 2.0 https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md
- OpenAPI 3.0 https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md | main | flask restx models re design for quite some time there have been significant issues around data models request parsing and response marshalling in flask restx carried over from flask restplus the most obvious of which is the about the reqparse module in the documentation that has been in place for far too long these changes have been put off for various reasons which i won t discuss here however now the new fork is steadily underway i and no doubt others would like to start addressing this since this digs quite deep into the architecture of flask restx there will be significant and likely breaking changes required as such this issue is to serve as a discussion around the api we would like to provide and some initial ideas of how to best proceed this is not intended to be the starting point of hacking something together which makes things worse i will set out my current thoughts on the topic please contribute by adding more points and expanding on mine with more discussion high level goals uniform api for request parsing and response marshalling e g remove the separation between reqparse and models generate correct and valid swagger openapi specifications validation of input and output data should conform to the generated swagger openapi specifications e g if the swagger openapi spec considers a value valid the model should too define models using json schema supported already but with numerous issues has been battling for some time openapi support general issues discussion points what should the api look like continue with the api marshal api doc decorator style how to define models do we force direct usage of another library e g marshmallow or wrap in some other api and use the library for the under the hood work model validation external libraries e g marshmallow schema generation external libraries e g marshmallow backwards compatibility continue to support reqparse 
and existing models interface swagger vs openapi imo generating both should be a goal if possible resources notable libraries faust models serialization and codecs faust is not a flask or even rest library but i have found it s models to be a nice interface to use swagger openapi | 1 |
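The "uniform API" goal in the issue above, a single model definition driving both request parsing and response marshalling, can be made concrete with a toy sketch. This is illustrative plain Python, not flask-restx or Marshmallow code; all class and field names are invented:

```python
# Minimal sketch of ONE model definition used for both input validation
# (request parsing) and output filtering (response marshalling), the
# "uniform API" goal discussed in the issue. Purely illustrative.

class Field:
    def __init__(self, typ, required=False):
        self.typ, self.required = typ, required

class Model:
    def __init__(self, name, fields):
        self.name, self.fields = name, fields

    def load(self, payload):
        """Parse/validate an incoming payload against the model."""
        out = {}
        for key, field in self.fields.items():
            if key not in payload:
                if field.required:
                    raise ValueError(f"missing required field: {key}")
                continue
            if not isinstance(payload[key], field.typ):
                raise ValueError(f"bad type for field: {key}")
            out[key] = payload[key]
        return out

    def dump(self, obj):
        """Marshal an object: keep only declared fields."""
        return {k: obj[k] for k in self.fields if k in obj}

user = Model("User", {"name": Field(str, required=True), "age": Field(int)})
print(user.load({"name": "ada", "age": 36}))             # parsed input
print(user.dump({"name": "ada", "age": 36, "pw": "x"}))  # 'pw' filtered out
```

Marshmallow's `Schema.load`/`Schema.dump` pair follows the same single-definition shape, which is presumably why it appears in the issue's resource list.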
3,281 | 12,518,106,198 | IssuesEvent | 2020-06-03 12:23:01 | ansible-collections/community.general | https://api.github.com/repos/ansible-collections/community.general | closed | terraform: Add support for multiple variables_file | affects_2.10 cloud feature module needs_maintainer needs_triage plugins | ##### SUMMARY
Add support for multiple variables_file as we can pass multiple var-files in terraform cli:
`terraform apply -var-file="file-1.tfvars" -var-file="file-2.tfvars" -var-file="file-3.tfvars"`
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
terraform module
##### ADDITIONAL INFORMATION
In many projects there are multiple variable files which need to be passed to terraform, but right now the terraform module only supports a single file (type "path"). It could instead accept a list (type "list").
Expected usage:
```yaml
- name: Terraform with multiple variables_file(s)
terraform:
project_path: '/home/centos/terraform'
variables_file:
- "/home/terraform/overrides/file-1.tfvars"
- "/home/terraform/overrides/file-2.tfvars"
- "/home/terraform/overrides/somepath/file-3.tfvars"
targets: some-target
``` | True | terraform: Add support for multiple variables_file - ##### SUMMARY
Add support for multiple variables_file as we can pass multiple var-files in terraform cli:
`terraform apply -var-file="file-1.tfvars" -var-file="file-2.tfvars" -var-file="file-3.tfvars"`
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
terraform module
##### ADDITIONAL INFORMATION
In many projects there are multiple variable files which need to be passed to terraform, but right now the terraform module only supports a single file (type "path"). It could instead accept a list (type "list").
Expected usage:
```yaml
- name: Terraform with multiple variables_file(s)
terraform:
project_path: '/home/centos/terraform'
variables_file:
- "/home/terraform/overrides/file-1.tfvars"
- "/home/terraform/overrides/file-2.tfvars"
- "/home/terraform/overrides/somepath/file-3.tfvars"
targets: some-target
``` | main | terraform add support for multiple variables file summary add support for multiple variables file as we can pass multiple var files in terraform cli terraform apply var file file tfvars var file file tfvars var file file tfvars issue type feature idea component name terraform module additional information in many projects there are multiple variable files which are needed to be passed to terrform but right now the terraform module only supports a single file type path but it could be type list expected usage yaml name terraform with multiple variables file s terraform project path home centos terraform variables file home terraform overrides file tfvars home terraform overrides file tfvars home terraform overrides somepath file tfvars targets some target | 1 |
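The feature requested above mostly amounts to emitting one `-var-file=...` argument per list entry when the module assembles the terraform command line. A hedged sketch (plain Python with an invented function name, not the real Ansible module implementation):

```python
# Illustrative sketch: turn a variables_file value that may be a single
# path or a list of paths into repeated -var-file arguments, the way
# `terraform apply -var-file=a.tfvars -var-file=b.tfvars` expects them.
# Not the actual Ansible module code.

def build_var_file_args(variables_file):
    if variables_file is None:
        return []
    if isinstance(variables_file, str):      # legacy single-path form
        variables_file = [variables_file]
    return [f"-var-file={path}" for path in variables_file]

print(build_var_file_args(["a.tfvars", "b.tfvars"]))
# ['-var-file=a.tfvars', '-var-file=b.tfvars']
```

Accepting either a string or a list, as above, is one way to keep the existing single-file playbooks working unchanged.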
5,119 | 26,072,020,440 | IssuesEvent | 2022-12-24 00:23:13 | omigroup/media | https://api.github.com/repos/omigroup/media | closed | Mastodon | enhancement Make the metaverse more human Maintain sustainable innovation | OMI has had a masto at `@omi@widerweb.org` that wasn't completely set up. I updated our mastodon today. Added our logo and base brand. Still only accessible by the 1password account but you can follow and engage now.
If you have questions, many of us are happy to answer what we have learned from mastodon over the last year. | True | Mastodon - OMI has had a masto at `@omi@widerweb.org` that wasn't completely set up. I updated our mastodon today. Added our logo and base brand. Still only accessible by the 1password account but you can follow and engage now.
If you have questions, many of us are happy to answer what we have learned from mastodon over the last year. | main | mastodon omi has had a masto at omi widerweb org that wasn t completely set up i updated our mastodon today added our logo and base brand still only accessible by the account but you can follow and engage now if you have questions many of us are happy to answer what we have learned from mastodon over the last year | 1 |
222,536 | 7,433,486,102 | IssuesEvent | 2018-03-26 07:44:57 | Semantic-Org/Semantic-UI | https://api.github.com/repos/Semantic-Org/Semantic-UI | closed | [UI] Count-Up Numbers | Low Priority UI Component stale | I've seen a few sites that animate the SUI equivalent of a statistic to the number value it is. So it just counts up from 0 to however much your value is over X amount of time.
Not a high priority but would be cool to have.
The example I saw was here: http://saltstack.com/ (scroll down)
 | 1.0 | [UI] Count-Up Numbers - I've seen a few sites that animate the SUI equivalent of a statistic to the number value it is. So it just counts up from 0 to however much your value is over X amount of time.
Not a high priority but would be cool to have.
The example I saw was here: http://saltstack.com/ (scroll down)
| non_main | count up numbers i ve seen a few sites that animate the sui equivalent of statistic to the number value it is so it just counts up from to how ever much your value is over x amount of time not a high priority but would be cool to have the example i saw was here scroll down | 0 |
3,594 | 14,522,552,960 | IssuesEvent | 2020-12-14 08:58:14 | adda-team/adda | https://api.github.com/repos/adda-team/adda | closed | gridspace for rectangular dipoles | comp-Logic enhancement maintainability pri-Medium | 'gridspace' is a legacy variable in ADDA https://github.com/adda-team/adda/blob/master/src/make_particle.c#L2218
Many places in code still use gridspace instead of gridspaceX, Y, and Z, imply that gridspace==gridspaceX, but sometimes mistake happens https://github.com/adda-team/adda/pull/245/commits/d2796c8e2a5cb1e61547340e2c1d1605d486df16
Code could become clearer if 'gridspace' was completely removed. Maybe we can have gridspace as the maximal one (instead of maxRectScale). Or even have short variables dX, dY, dZ, and dMax. The latter is very convenient for describing the dipole size all over the code and documentation. Then we can avoid a lot of details like "along the greatest dimension".
### Related issues:
https://github.com/adda-team/adda/pull/245
https://github.com/adda-team/adda/issues/196 | True | gridspace for rectangular dipoles - 'gridspace' is a legacy variable in ADDA https://github.com/adda-team/adda/blob/master/src/make_particle.c#L2218
Many places in code still use gridspace instead of gridspaceX, Y, and Z, imply that gridspace==gridspaceX, but sometimes mistake happens https://github.com/adda-team/adda/pull/245/commits/d2796c8e2a5cb1e61547340e2c1d1605d486df16
Code could become clearer if 'gridspace' was completely removed. Maybe we can have gridspace as the maximal one (instead of maxRectScale). Or even have short variables dX, dY, dZ, and dMax. The latter is very convenient for describing the dipole size all over the code and documentation. Then we can avoid a lot of details like "along the greatest dimension".
### Related issues:
https://github.com/adda-team/adda/pull/245
https://github.com/adda-team/adda/issues/196 | main | gridspace for rectangular dipoles gridspace is a legacy variable in adda many places in code still use gridspace instead of gridspacex y and z imply that gridspace gridspacex but sometimes mistake happens code could become clearer if gridspace was completly removed maybe we can have gridspace as the maximal one instead of maxrectscale or even have short variables dx dy dz and dmax the latter is very convenient for describing the dipole size all over the code and documentation then we can avoid a lot of details like along the greatest dimension related issues | 1 |
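The renaming proposed above, per-axis dipole sizes plus a derived maximum, is easy to picture. A hedged sketch in Python (ADDA itself is C; these names follow the issue's proposal rather than the actual source):

```python
# Sketch of the proposed naming: per-axis dipole sizes dX, dY, dZ and a
# derived dMax replacing the legacy scalar `gridspace`. Illustrative only;
# ADDA is written in C and this mirrors the proposal, not real code.

class DipoleGrid:
    def __init__(self, dX, dY, dZ):
        self.dX, self.dY, self.dZ = dX, dY, dZ

    @property
    def dMax(self):
        # "the maximal one": no ambiguity about which axis is meant
        return max(self.dX, self.dY, self.dZ)

    @property
    def is_cubical(self):
        # the legacy gridspace assumption: all axes equal
        return self.dX == self.dY == self.dZ

g = DipoleGrid(1.0, 1.0, 2.0)
print(g.dMax, g.is_cubical)  # 2.0 False
```

Deriving `dMax` instead of storing a separate `gridspace` removes the class of mistakes the issue links to, where code silently assumed `gridspace == gridspaceX`.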
5,514 | 27,559,983,611 | IssuesEvent | 2023-03-07 21:06:43 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | closed | Fix T001 template linting errors | engineering maintain | ## Description
```
T001 Variables should be wrapped in a single whitespace. Ex: {{ this }}
```
This should be possible with the following replacement:
```
sed -i.bak "s|{{\([A-z0-9._]*\)}}|{{ \1 }}|g" **/*.html
```
Delete the `.bak` backup files afterwards.
See: https://djlint.com/docs/linter/ | True | Fix T001 template linting errors - ## Description
```
T001 Variables should be wrapped in a single whitespace. Ex: {{ this }}
```
This should be possible with the following replacement:
```
sed -i.bak "s|{{\([A-z0-9._]*\)}}|{{ \1 }}|g" **/*.html
```
Delete the `.bak` backup files afterwards.
See: https://djlint.com/docs/linter/ | main | fix template linting errors description variables should be wrapped in a single whitespace ex this this should be possible with the following replacement sed i bak s g html delete the bak backup files afterwards see | 1 |
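The sed one-liner in the issue above can be checked in isolation before it is run over every template. A rough Python equivalent (note that sed's `[A-z]` range also matches a few punctuation characters between `Z` and `a`; `[A-Za-z]` below is the stricter reading):

```python
import re

# Python equivalent of the sed replacement above: wrap {{var}} in single
# spaces so `{{name}}` becomes `{{ name }}`. Uses [A-Za-z0-9._], a
# stricter reading of the original's [A-z0-9._] character class.

def fix_t001(text):
    return re.sub(r"{{([A-Za-z0-9._]*)}}", r"{{ \1 }}", text)

print(fix_t001("<p>{{user.name}}</p>"))   # <p>{{ user.name }}</p>
print(fix_t001("{{ already.ok }}"))       # unchanged
```

Like the sed command, this pattern leaves variables that already have spaces untouched, because `{{` must be immediately followed by the identifier for a match.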
5,455 | 27,291,285,378 | IssuesEvent | 2023-02-23 16:46:59 | ipfs/ipfs-docs | https://api.github.com/repos/ipfs/ipfs-docs | closed | Replace algolia with Swiftype for search | P1 kind/enhancement need/maintainers-input | @johndmulhausen could you please add more context here on why we are doing this ? | True | Replace algolia with Swiftype for search - @johndmulhausen could you please add more context here on why we are doing this ? | main | replace algolia with swiftype for search johndmulhausen could you please add more context here on why we are doing this | 1 |
5,629 | 28,243,668,050 | IssuesEvent | 2023-04-06 09:07:25 | coq/platform | https://api.github.com/repos/coq/platform | closed | coq-mathcomp-classic as part of the package pick? | kind: package inclusion approval: has maintainer agreement | coq-mathcomp-classical is currently installed with the Coq platform as a dependency of coq-mathcomp-analysis.
It is, however, not listed as part of the package pick, although it can be useful on its own, for example to perform set-theoretic reasoning.
It might be worth considering listing coq-mathcomp-classical along with coq-mathcomp-analysis as part of the package pick.
| True | coq-mathcomp-classic as part of the package pick? - coq-mathcomp-classical is currently installed with the Coq platform as a dependency of coq-mathcomp-analysis.
It is, however, not listed as part of the package pick, although it can be useful on its own, for example to perform set-theoretic reasoning.
It might be worth considering listing coq-mathcomp-classical along with coq-mathcomp-analysis as part of the package pick.
| main | coq mathcomp classic as part of the package pick coq mathcomp classical is currently installed with the coq platform as a dependency of coq mathcomp analysis it is however not listed as part of the package pick although it can be useful on it own for example to perform set theoretic reasoning it might be worth considering listing coq mathcomp classical along with coq mathcomp analysis as part of the package pick | 1 |
3,625 | 14,660,816,868 | IssuesEvent | 2020-12-29 01:13:20 | timkendall/tql | https://api.github.com/repos/timkendall/tql | opened | Use @babel/types for Codegen | maintainability | [@babel/types](https://babeljs.io/docs/en/babel-types) provides some nice TypeScript AST utils that we could use to make our `Codegen` class perhaps more maintainable. | True | Use @babel/types for Codegen - [@babel/types](https://babeljs.io/docs/en/babel-types) provides some nice TypeScript AST utils that we could use to make our `Codegen` class perhaps more maintainable. | main | use babel types for codegen provides some nice typescript ast utils that we could use to make our codegen class perhaps more maintainable | 1 |
172,789 | 13,347,320,162 | IssuesEvent | 2020-08-29 12:56:45 | WoWManiaUK/Redemption | https://api.github.com/repos/WoWManiaUK/Redemption | opened | [Spells/HOTS] Healing over time double dipping | Fix - Ready to Test | **What is Happening:**
Like damage over time, HOTs are getting SPELLMOD_DOT applied twice (#https://github.com/WoWManiaUK/Redemption/issues/4918) too, making some healers (like druids) get a lot of extra healing
**What Should happen:**
HOTS should get this mod only once.
| 1.0 | [Spells/HOTS] Healing over time double dipping - **What is Happening:**
Like damage over time, HOTs are getting SPELLMOD_DOT applied twice (#https://github.com/WoWManiaUK/Redemption/issues/4918) too, making some healers (like druids) get a lot of extra healing
**What Should happen:**
HOTS should get this mod only once.
| non_main | healing over time double dipping what is happening like damage over time hots is getting twice spellmod dot too making some healers like druids get alot of heal what should happen hots should get this mod only once | 0 |
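The "double dipping" described above is a multiplicative bug that plain numbers make obvious: applying the same percentage modifier twice inflates every tick. The figures and function below are invented for illustration and are not emulator code:

```python
# Illustrates the double-dipping bug: a healing-over-time tick with a
# +20% modifier applied twice heals 1.2x more than intended. Numbers and
# function name are invented for illustration, not emulator code.

def hot_tick(base, modifier, applications):
    amount = base
    for _ in range(applications):   # the bug: applications == 2
        amount *= modifier
    return amount

base, mod = 100.0, 1.20
print(hot_tick(base, mod, 1))  # 120.0  (intended)
print(hot_tick(base, mod, 2))  # 144.0  (double-dipped)
```

The fix the report asks for is simply ensuring the modifier is applied once per tick.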
350,180 | 10,479,785,561 | IssuesEvent | 2019-09-24 05:38:40 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | steveblank-com.cdn.ampproject.org - see bug description | browser-firefox-mobile engine-gecko priority-normal type-tracking-protection-basic | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @extra_labels: type-tracking-protection-basic -->
**URL**: https://steveblank-com.cdn.ampproject.org/v/s/steveblank.com/2019/09/17/agilefall-when-waterfall-sneaks-back-into-agile/amp/?usqp=mq331AQEKAFwAQ%3D%3D&_js_v=0.1#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fsteveblank.com%2F2019%2F09%2F17%2Fagilefall-when-waterfall-sneaks-back-into-agile%2F
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Desktop site not loading up, only the mobile view stays.
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/9/2ce10370-d823-4618-8b73-986376abb8a4.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190909131947</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: true (basic)</li>
</ul>
<p>Console Messages:</p>
<pre>
['[console.info(Powered by AMP HTML Version 1909141411050, https://steveblank-com.cdn.ampproject.org/v/s/steveblank.com/2019/09/17/agilefall-when-waterfall-sneaks-back-into-agile/amp/?usqp=mq331AQEKAFwAQ%3D%3D&_js_v=0.1#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fsteveblank.com%2F2019%2F09%2F17%2Fagilefall-when-waterfall-sneaks-back-into-agile%2F) https://cdn.ampproject.org/rtv/011909141411050/v0.js:546:470]', '[JavaScript Warning: "The resource at https://www.google-analytics.com/r/collect?v=1&_v=a1&ds=AMP&aip&_s=1&dt=AgileFall%20%26%238211%3B%20When%20Waterfall%20Sneaks%20Back%20Into%20Agile&sr=486x915&_utmht=1568737200374&cid=_Vxr-NRxYK01DqBI5kN-tCch_T6HLbdCWhiS8v9X8i3dqaX7s7jo2reGkxHN-GMW&tid=UA-85363375-1&dl=https%3A%2F%2Fsteveblank.com%2F2019%2F09%2F17%2Fagilefall-when-waterfall-sneaks-back-into-agile%2Famp%2F&dr=&sd=24&ul=en-us&de=UTF-8&t=pageview&jid=0.17745584273649628&_r=1&a=3693&z=0.17780856576988224 was blocked because content blocking is enabled." {file: "https://steveblank-com.cdn.ampproject.org/v/s/steveblank.com/2019/09/17/agilefall-when-waterfall-sneaks-back-into-agile/amp/?usqp=mq331AQEKAFwAQ%3D%3D&_js_v=0.1#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fsteveblank.com%2F2019%2F09%2F17%2Fagilefall-when-waterfall-sneaks-back-into-agile%2F" line: 0}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | steveblank-com.cdn.ampproject.org - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @extra_labels: type-tracking-protection-basic -->
**URL**: https://steveblank-com.cdn.ampproject.org/v/s/steveblank.com/2019/09/17/agilefall-when-waterfall-sneaks-back-into-agile/amp/?usqp=mq331AQEKAFwAQ%3D%3D&_js_v=0.1#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fsteveblank.com%2F2019%2F09%2F17%2Fagilefall-when-waterfall-sneaks-back-into-agile%2F
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Desktop site not loading up, only the mobile view stays.
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/9/2ce10370-d823-4618-8b73-986376abb8a4.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190909131947</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: true (basic)</li>
</ul>
<p>Console Messages:</p>
<pre>
['[console.info(Powered by AMP HTML Version 1909141411050, https://steveblank-com.cdn.ampproject.org/v/s/steveblank.com/2019/09/17/agilefall-when-waterfall-sneaks-back-into-agile/amp/?usqp=mq331AQEKAFwAQ%3D%3D&_js_v=0.1#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fsteveblank.com%2F2019%2F09%2F17%2Fagilefall-when-waterfall-sneaks-back-into-agile%2F) https://cdn.ampproject.org/rtv/011909141411050/v0.js:546:470]', '[JavaScript Warning: "The resource at https://www.google-analytics.com/r/collect?v=1&_v=a1&ds=AMP&aip&_s=1&dt=AgileFall%20%26%238211%3B%20When%20Waterfall%20Sneaks%20Back%20Into%20Agile&sr=486x915&_utmht=1568737200374&cid=_Vxr-NRxYK01DqBI5kN-tCch_T6HLbdCWhiS8v9X8i3dqaX7s7jo2reGkxHN-GMW&tid=UA-85363375-1&dl=https%3A%2F%2Fsteveblank.com%2F2019%2F09%2F17%2Fagilefall-when-waterfall-sneaks-back-into-agile%2Famp%2F&dr=&sd=24&ul=en-us&de=UTF-8&t=pageview&jid=0.17745584273649628&_r=1&a=3693&z=0.17780856576988224 was blocked because content blocking is enabled." {file: "https://steveblank-com.cdn.ampproject.org/v/s/steveblank.com/2019/09/17/agilefall-when-waterfall-sneaks-back-into-agile/amp/?usqp=mq331AQEKAFwAQ%3D%3D&_js_v=0.1#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fsteveblank.com%2F2019%2F09%2F17%2Fagilefall-when-waterfall-sneaks-back-into-agile%2F" line: 0}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | steveblank com cdn ampproject org see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description desktop site not loading up only the mobile view stays steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked true basic console messages from with ❤️ | 0 |
20,529 | 13,979,680,235 | IssuesEvent | 2020-10-27 00:46:44 | algorand/go-algorand | https://api.github.com/repos/algorand/go-algorand | closed | catchupTime resets incorrectly while still catching up | Infrastructure bug | <!--
NOTE: If this issue relates to security, please use the vulnerability disclosure form here:
https://www.algorand.com/resources/blog/security
General, developer or support questions concerning Algorand should be directed to the Algorand Forums https://forum.algorand.org/.
-->
### Subject of the issue
When performing a classic sync (non-catchpoint), the `catchupTime` property returned for `status` resets to 0 whenever it detects a catchpoint file being written. This breaks tools that depend on the non-zero -> zero transition to detect the node being caught up.
This catchpoint-related change, in `catchup/service.go:pipelinedFetch()` broke the implementation:
```
// if we're writing a catchpoint file, stop catching up to reduce the memory pressure. Once we finish writing the file we
// could resume with the catchup.
if s.ledger.IsWritingCatchpointFile() {
s.log.Info("Catchup is stopping due to catchpoint file being written")
return
}
```
`catchup/service.go:sync()` resets `s.syncStartNS` when starting to sync. Returning early from `pipelinedFetch()` causes `sync()` to return; so resuming the catchup means `s.syncStartNS` is reset to `Now()` after every interruption, when logically we don't want to reset it when being interrupted in this instance. (`catchupTime` is computed based on `s.syncStartNS`, so this reset opens the possibility for the caller to see a `catchupTime` value of 0 when the node hasn't caught up).
### Your environment
* Software version: v2.1.3
* Node status if applicable:
* Operating System details. (irrelevant)
### Steps to reproduce
Start a new non-relay node with indexer enabled (this disables catchpoint catchup). Loop calling `v2/status` until catchupTime transitions from non-zero to zero; compare lastRound to expected value (polling at 30-second interval, we see this trigger before syncing to round 50,000)
### Expected behaviour
catchupTime should not be reset when intentionally interrupted due to catchpoint file write. This also messes up the `initialSyncComplete` logic - I forget where that's used.
### Actual behaviour
See above
| 1.0 | catchupTime resets incorrectly while still catching up - <!--
NOTE: If this issue relates to security, please use the vulnerability disclosure form here:
https://www.algorand.com/resources/blog/security
General, developer or support questions concerning Algorand should be directed to the Algorand Forums https://forum.algorand.org/.
-->
### Subject of the issue
When performing a classic sync (non-catchpoint), the `catchupTime` property returned for `status` resets to 0 whenever it detects a catchpoint file being written. This breaks tools that depend on the non-zero -> zero transition to detect the node being caught up.
This catchpoint-related change, in `catchup/service.go:pipelinedFetch()` broke the implementation:
```
// if we're writing a catchpoint file, stop catching up to reduce the memory pressure. Once we finish writing the file we
// could resume with the catchup.
if s.ledger.IsWritingCatchpointFile() {
s.log.Info("Catchup is stopping due to catchpoint file being written")
return
}
```
`catchup/service.go:sync()` resets `s.syncStartNS` when starting to sync. Returning early from `pipelinedFetch()` causes `sync()` to return; so resuming the catchup means `s.syncStartNS` is reset to `Now()` after every interruption, when logically we don't want to reset it when being interrupted in this instance. (`catchupTime` is computed based on `s.syncStartNS`, so this reset opens the possibility for the caller to see a `catchupTime` value of 0 when the node hasn't caught up).
### Your environment
* Software version: v2.1.3
* Node status if applicable:
* Operating System details. (irrelevant)
### Steps to reproduce
Start a new non-relay node with indexer enabled (this disables catchpoint catchup). Loop calling `v2/status` until catchupTime transitions from non-zero to zero; compare lastRound to expected value (polling at 30-second interval, we see this trigger before syncing to round 50,000)
### Expected behaviour
catchupTime should not be reset when intentionally interrupted due to catchpoint file write. This also messes up the `initialSyncComplete` logic - I forget where that's used.
### Actual behaviour
See above
| non_main | catchuptime resets incorrectly while still catching up note if this issue relates to security please use the vulnerability disclosure form here general developer or support questions concerning algorand should be directed to the algorand forums subject of the issue when performing a classic sync non catchpoint the catchuptime property returned for status resets to whenever it detects a catchpoint file being written this breaks tools that depend on the non zero zero transition to detect the node being caught up this catchpoint related change in catchup service go pipelinedfetch broke the implementation if we re writing a catchpoint file stop catching up to reduce the memory pressure once we finish writing the file we could resume with the catchup if s ledger iswritingcatchpointfile s log info catchup is stopping due to catchpoint file being written return catchup service go sync resets s syncstartns when starting to sync returning early from pipelinedfetch causes sync to return so resuming the catchup means s syncstartns is reset to now after every interruption when logically we don t want to reset it when being interrupted in this instance catchuptime is computed based on s syncstartns so this reset opens the possibility for the caller to see a catchuptime value of when the node hasn t caught up your environment software version node status if applicable operating system details irrelevant steps to reproduce start a new non relay node with indexer enabled this disables catchpoint catchup loop calling status until catchuptime transitions from non zero to zero compare lastround to expected value polling at second interval we see this trigger before syncing to round expected behaviour catchuptime should not be reset when intentionally interrupted due to catchpoint file write this also messes up the initialsynccomplete logic i forget where that s used actual behaviour see above | 0 |
55,232 | 11,413,171,996 | IssuesEvent | 2020-02-01 17:53:23 | kalwalt/jsartoolkit5 | https://api.github.com/repos/kalwalt/jsartoolkit5 | opened | [ Feature ] Multi NFT markers support | NFT code design | It will be nice to add the Multi **NFT** markers feature. More infos will be added. | 1.0 | [ Feature ] Multi NFT markers support - It will be nice to add the Multi **NFT** markers feature. More infos will be added. | non_main | multi nft markers support it will be nice to add the multi nft markers feature more infos will be added | 0 |
70,109 | 18,018,259,207 | IssuesEvent | 2021-09-16 16:05:21 | golang/go | https://api.github.com/repos/golang/go | opened | x/build: ios-arm64-corellium builders have long wait times | Builders NeedsFix | Users have reported long wait times with ios-arm64-corellium builds:
```
• ios-arm64-corellium | running 1h38m44s
```
```
ios-arm64-corellium rev cfa233d7 (sub-repo mobile rev 855b5ad0) (trybot set for Ib1a2f53); waiting_for_machine; (nil *buildlet.Client), 1h45m36s ago
2021-09-16T14:15:15Z checking_for_snapshot
2021-09-16T14:15:15Z finish_checking_for_snapshot after 35ms
2021-09-16T14:15:15Z get_buildlet
+6335.7s (now)
```
`host-ios-arm64-corellium-ios: 2/2 (1 missing)`
Perhaps the builders need to be rebooted.
@eliasnaur @golang/release | 1.0 | x/build: ios-arm64-corellium builders have long wait times - Users have reported long wait times with ios-arm64-corellium builds:
```
• ios-arm64-corellium | running 1h38m44s
```
```
ios-arm64-corellium rev cfa233d7 (sub-repo mobile rev 855b5ad0) (trybot set for Ib1a2f53); waiting_for_machine; (nil *buildlet.Client), 1h45m36s ago
2021-09-16T14:15:15Z checking_for_snapshot
2021-09-16T14:15:15Z finish_checking_for_snapshot after 35ms
2021-09-16T14:15:15Z get_buildlet
+6335.7s (now)
```
`host-ios-arm64-corellium-ios: 2/2 (1 missing)`
Perhaps the builders need to be rebooted.
@eliasnaur @golang/release | non_main | x build ios corellium builders have long wait times users have reported long wait times with ios corellium builds • ios corellium running ios corellium rev sub repo mobile rev trybot set for waiting for machine nil buildlet client ago checking for snapshot finish checking for snapshot after get buildlet now host ios corellium ios missing perhaps the builders need to be rebooted eliasnaur golang release | 0 |
5,877 | 31,987,286,429 | IssuesEvent | 2023-09-21 01:06:41 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | opened | different enum type having same enum values in a same proto file considerd as error. | type: bug awaiting-maintainer | ### Description of the bug:
_No response_
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
https://snipboard.io/W8MdLr.jpg
### Which Intellij IDE are you using? Please provide the specific version.
IntelliJ IDEA 2023.2.1 (Community Edition) Build #IC-232.9559.62, built on August 23, 2023 Runtime version: 17.0.8+7-b1000.8 amd64 VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o. Linux 6.2.0-32-generic GC: G1 Young Generation, G1 Old Generation Memory: 2048M Cores: 8 Non-Bundled Plugins: idea.plugin.protoeditor (232.9559.10) google-java-format (1.17.0.0) com.google.idea.bazel.ijwb (2023.08.15.0.1-api-version-232) Kotlin: 232-1.9.0-IJ9559.62 Current Desktop: ubuntu:GNOME
### What programming languages and tools are you using? Please provide specific versions.
protobuffers/ java
### What Bazel plugin version are you using?
2023.08.15.0.1-api-version-232
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_ | True | different enum type having same enum values in a same proto file considerd as error. - ### Description of the bug:
_No response_
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
https://snipboard.io/W8MdLr.jpg
### Which Intellij IDE are you using? Please provide the specific version.
IntelliJ IDEA 2023.2.1 (Community Edition) Build #IC-232.9559.62, built on August 23, 2023 Runtime version: 17.0.8+7-b1000.8 amd64 VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o. Linux 6.2.0-32-generic GC: G1 Young Generation, G1 Old Generation Memory: 2048M Cores: 8 Non-Bundled Plugins: idea.plugin.protoeditor (232.9559.10) google-java-format (1.17.0.0) com.google.idea.bazel.ijwb (2023.08.15.0.1-api-version-232) Kotlin: 232-1.9.0-IJ9559.62 Current Desktop: ubuntu:GNOME
### What programming languages and tools are you using? Please provide specific versions.
protobuffers/ java
### What Bazel plugin version are you using?
2023.08.15.0.1-api-version-232
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_ | main | different enum type having same enum values in a same proto file considerd as error description of the bug no response what s the simplest easiest way to reproduce this bug please provide a minimal example if possible which intellij ide are you using please provide the specific version intellij idea community edition build ic built on august runtime version vm openjdk bit server vm by jetbrains s r o linux generic gc young generation old generation memory cores non bundled plugins idea plugin protoeditor google java format com google idea bazel ijwb api version kotlin current desktop ubuntu gnome what programming languages and tools are you using please provide specific versions protobuffers java what bazel plugin version are you using api version have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response | 1 |
461 | 3,665,323,102 | IssuesEvent | 2016-02-19 15:39:51 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Proposal: move caskroom/unofficial | awaiting maintainer feedback meta | Pinging @caskroom/maintainers. If there are no objections, will go ahead with it next Friday (February 19<sup>th</sup>, 2016).
I went through [caskroom/unofficial](https://github.com/caskroom/homebrew-unofficial) a few times lately, and can’t find a good reason to keep it. Apart from the funny notion we’re *officially* hosting a repo we call *unofficial*:
+ It has less than 40 casks.
+ [Many of them are potentially dangerous](https://github.com/caskroom/homebrew-unofficial/blob/master/README.md#homebrew-unofficial) (though in practice none should be).
+ Most are relatively obscure, and hence provide very little value ([which is important](https://github.com/caskroom/homebrew-versions#acceptable-casks)).
+ Many are *really specifically obscure* (specific forks with tiny modifications).
+ Many/most were introduced to serve a single user ([not something we adhere to](https://github.com/caskroom/homebrew-versions#acceptable-casks)).
+ Most, once added, are pretty much abandoned.
Finally, **all the casks in that repo are prime candidates for user taps**, and we should encourage that.
Things to be done:
- [x] Give [caskroom/unofficial](https://github.com/caskroom/homebrew-unofficial) to @alebcay.
- [x] Close https://github.com/caskroom/homebrew-cask/issues/8027.
- [x] Rectify documentation.
- [x] Update the outdated appcasts scripts. | True | Proposal: move caskroom/unofficial - Pinging @caskroom/maintainers. If there are no objections, will go ahead with it next Friday (February 19<sup>th</sup>, 2016).
I went through [caskroom/unofficial](https://github.com/caskroom/homebrew-unofficial) a few times lately, and can’t find a good reason to keep it. Apart from the funny notion we’re *officially* hosting a repo we call *unofficial*:
+ It has less than 40 casks.
+ [Many of them are potentially dangerous](https://github.com/caskroom/homebrew-unofficial/blob/master/README.md#homebrew-unofficial) (though in practice none should be).
+ Most are relatively obscure, and hence provide very little value ([which is important](https://github.com/caskroom/homebrew-versions#acceptable-casks)).
+ Many are *really specifically obscure* (specific forks with tiny modifications).
+ Many/most were introduced to serve a single user ([not something we adhere to](https://github.com/caskroom/homebrew-versions#acceptable-casks)).
+ Most, once added, are pretty much abandoned.
Finally, **all the casks in that repo are prime candidates for user taps**, and we should encourage that.
Things to be done:
- [x] Give [caskroom/unofficial](https://github.com/caskroom/homebrew-unofficial) to @alebcay.
- [x] Close https://github.com/caskroom/homebrew-cask/issues/8027.
- [x] Rectify documentation.
- [x] Update the outdated appcasts scripts. | main | proposal move caskroom unofficial pinging caskroom maintainers if there are no objections will go ahead with it next friday february th i went through a few times lately and can’t find a good reason to keep it apart from the funny notion we’re officially hosting a repo we call unofficial it has less than casks though in practice none should be most are relatively obscure and hence provide very little value many are really specifically obscure specific forks with tiny modifications many most were introduced to serve a single user most once added are pretty much abandoned finally all the casks in that repo are prime candidates for user taps and we should encourage that things to be done give to alebcay close rectify documentation update the outdated appcasts scripts | 1 |
220,772 | 17,260,160,973 | IssuesEvent | 2021-07-22 06:11:30 | IntellectualSites/FastAsyncWorldEdit | https://api.github.com/repos/IntellectualSites/FastAsyncWorldEdit | closed | Error of pasting something had set //rotate -90 , 90 , 45 and -45 | Requires Testing | ### Server Implementation
Paper
### Server Version
1.16.5
### Describe the bug
Hello.. I will report something error in FAWE, it paste something had set rotate 45, -45, 90 and -90 otherwise 180 and -180 and I get something had pasted and it broke.... I want to know how to fix it for //rotate
### To Reproduce
1. I will //pos1 and //pos2 on structure
2. Then //copy and //schem save [name]
3. Then //schem load [name] and //rotate 90, -90, -45 or 45
4. Finally I //paste and it error early.
### Expected behaviour
I think different version between PAPER 1.16.5 and FAWE 1.17
### Screenshots / Videos





### Error log (if applicable)
java.lang.ArrayIndexOutofBoundsException: Index -1 out of bounds for length 17113
### Fawe Debugpaste
https://athion.net/ISPaster/paste/view/26bb1221ef28488fa354c1aa94e05178
### Fawe Version
FastAsyncWorldEdit1.17-05;6d360db
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit-1.17/ and the issue still persists.
### Anything else?
_No response_ | 1.0 | Error of pasting something had set //rotate -90 , 90 , 45 and -45 - ### Server Implementation
Paper
### Server Version
1.16.5
### Describe the bug
Hello.. I will report something error in FAWE, it paste something had set rotate 45, -45, 90 and -90 otherwise 180 and -180 and I get something had pasted and it broke.... I want to know how to fix it for //rotate
### To Reproduce
1. I will //pos1 and //pos2 on structure
2. Then //copy and //schem save [name]
3. Then //schem load [name] and //rotate 90, -90, -45 or 45
4. Finally I //paste and it error early.
### Expected behaviour
I think different version between PAPER 1.16.5 and FAWE 1.17
### Screenshots / Videos





### Error log (if applicable)
java.lang.ArrayIndexOutofBoundsException: Index -1 out of bounds for length 17113
### Fawe Debugpaste
https://athion.net/ISPaster/paste/view/26bb1221ef28488fa354c1aa94e05178
### Fawe Version
FastAsyncWorldEdit1.17-05;6d360db
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit-1.17/ and the issue still persists.
### Anything else?
_No response_ | non_main | error of pasting something had set rotate and server implementation paper server version describe the bug hello i will report something error in fawe it paste something had set rotate and otherwise and and i get something had pasted and it broke i want to know how to fix it for rotate to reproduce i will and on structure then copy and schem save then schem load and rotate or finally i paste and it error early expected behaviour i think different version between paper and fawe screenshots videos error log if applicable java lang arrayindexoutofboundsexception index out of bounds for length fawe debugpaste fawe version checklist i have included a fawe debugpaste i am using the newest build from and the issue still persists anything else no response | 0 |
5,669 | 29,494,905,203 | IssuesEvent | 2023-06-02 16:06:06 | ipfs/js-ipfs | https://api.github.com/repos/ipfs/js-ipfs | closed | Maybe there should be a way for pin rm command not to throw an error when object is not pinned | kind/enhancement need/maintainer-input kind/maybe-in-helia | Otherwise currently user has first to check if object is pinned, if they just try to forget about an object fully. | True | Maybe there should be a way for pin rm command not to throw an error when object is not pinned - Otherwise currently user has first to check if object is pinned, if they just try to forget about an object fully. | main | maybe there should be a way for pin rm command not to throw an error when object is not pinned otherwise currently user has first to check if object is pinned if they just try to forget about an object fully | 1 |
2,025 | 6,757,646,203 | IssuesEvent | 2017-10-24 11:37:33 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Feature request] Увеличение размера изображений | css need-maintainer | ### 1. Запрос
Было бы неплохо, если б изображения были больше. Например, в 1,4 раза, — нужно протестировать, возможно, ещё большее увеличение потребуется.
### 2. Аргументация
В некоторых изображениях требуется назвать, что находится под определённой цифрой. мне — человеку, обладающему средним, но никак не выдающимся зрением, — цифр этих не видно или они плохо видны. Примеры:
+ 
`(10, 11, 12) Тросы, дополнительно удерживающие стеньги` — не разберу, где 10, 11, 12.
+ 
`(2) Рычаг руля, передающий крутящий момент` — не разберу, где цифра 2.
### 3. Дополнительные факты
1. я принял для себя условие, что в задании должно содержаться не более 2 изображений: 1 в вопросе (часть задания, которое находится в начале) и 1 в комментариях. Т. е., если увеличить изображение, оно не уйдёт вверх из-за того, что другие изображения тоже увеличатся.
1. Комната Эрика, в общем, не предназначается для массового отыгрыша. Изображение не уйдёт быстро вверх из-за большого количества ответов/вариантов у игроков.
1. Размер изображений можно увеличить за счёт удаления бесполезной полосы снизу, см. **#12**.
Спасибо. | True | [Feature request] Увеличение размера изображений - ### 1. Запрос
Было бы неплохо, если б изображения были больше. Например, в 1,4 раза, — нужно протестировать, возможно, ещё большее увеличение потребуется.
### 2. Аргументация
В некоторых изображениях требуется назвать, что находится под определённой цифрой. мне — человеку, обладающему средним, но никак не выдающимся зрением, — цифр этих не видно или они плохо видны. Примеры:
+ 
`(10, 11, 12) Тросы, дополнительно удерживающие стеньги` — не разберу, где 10, 11, 12.
+ 
`(2) Рычаг руля, передающий крутящий момент` — не разберу, где цифра 2.
### 3. Дополнительные факты
1. я принял для себя условие, что в задании должно содержаться не более 2 изображений: 1 в вопросе (часть задания, которое находится в начале) и 1 в комментариях. Т. е., если увеличить изображение, оно не уйдёт вверх из-за того, что другие изображения тоже увеличатся.
1. Комната Эрика, в общем, не предназначается для массового отыгрыша. Изображение не уйдёт быстро вверх из-за большого количества ответов/вариантов у игроков.
1. Размер изображений можно увеличить за счёт удаления бесполезной полосы снизу, см. **#12**.
Спасибо. | main | увеличение размера изображений запрос было бы неплохо если б изображения были больше например в раза — нужно протестировать возможно ещё большее увеличение потребуется аргументация в некоторых изображениях требуется назвать что находится под определённой цифрой мне — человеку обладающему средним но никак не выдающимся зрением — цифр этих не видно или они плохо видны примеры тросы дополнительно удерживающие стеньги — не разберу где рычаг руля передающий крутящий момент — не разберу где цифра дополнительные факты я принял для себя условие что в задании должно содержаться не более изображений в вопросе часть задания которое находится в начале и в комментариях т е если увеличить изображение оно не уйдёт вверх из за того что другие изображения тоже увеличатся комната эрика в общем не предназначается для массового отыгрыша изображение не уйдёт быстро вверх из за большого количества ответов вариантов у игроков размер изображений можно увеличить за счёт удаления бесполезной полосы снизу см спасибо | 1 |
103,580 | 16,602,927,175 | IssuesEvent | 2021-06-01 22:16:51 | gms-ws-sandbox/nibrs | https://api.github.com/repos/gms-ws-sandbox/nibrs | opened | CVE-2019-11284 (High) detected in reactor-netty-0.8.8.RELEASE.jar | security vulnerability | ## CVE-2019-11284 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>reactor-netty-0.8.8.RELEASE.jar</b></p></summary>
<p>Reactive Streams Netty driver</p>
<p>Library home page: <a href="https://github.com/reactor/reactor-netty">https://github.com/reactor/reactor-netty</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/projectreactor/netty/reactor-netty/0.8.8.RELEASE/reactor-netty-0.8.8.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-webflux-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-reactor-netty-2.1.5.RELEASE.jar
- :x: **reactor-netty-0.8.8.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pivotal Reactor Netty, versions prior to 0.8.11, passes headers through redirects, including authorization ones. A remote unauthenticated malicious user may gain access to credentials for a different server than they have access to.
<p>Publish Date: 2019-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11284>CVE-2019-11284</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11284</a></p>
<p>Release Date: 2019-10-17</p>
<p>Fix Resolution: 0.8.11</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.projectreactor.netty","packageName":"reactor-netty","packageVersion":"0.8.8.RELEASE","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-webflux:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-reactor-netty:2.1.5.RELEASE;io.projectreactor.netty:reactor-netty:0.8.8.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.8.11"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-11284","vulnerabilityDetails":"Pivotal Reactor Netty, versions prior to 0.8.11, passes headers through redirects, including authorization ones. A remote unauthenticated malicious user may gain access to credentials for a different server than they have access to.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11284","cvss3Severity":"high","cvss3Score":"8.6","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-11284 (High) detected in reactor-netty-0.8.8.RELEASE.jar - ## CVE-2019-11284 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>reactor-netty-0.8.8.RELEASE.jar</b></p></summary>
<p>Reactive Streams Netty driver</p>
<p>Library home page: <a href="https://github.com/reactor/reactor-netty">https://github.com/reactor/reactor-netty</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/projectreactor/netty/reactor-netty/0.8.8.RELEASE/reactor-netty-0.8.8.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-webflux-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-reactor-netty-2.1.5.RELEASE.jar
- :x: **reactor-netty-0.8.8.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pivotal Reactor Netty, versions prior to 0.8.11, passes headers through redirects, including authorization ones. A remote unauthenticated malicious user may gain access to credentials for a different server than they have access to.
<p>Publish Date: 2019-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11284>CVE-2019-11284</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11284</a></p>
<p>Release Date: 2019-10-17</p>
<p>Fix Resolution: 0.8.11</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.projectreactor.netty","packageName":"reactor-netty","packageVersion":"0.8.8.RELEASE","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-webflux:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-reactor-netty:2.1.5.RELEASE;io.projectreactor.netty:reactor-netty:0.8.8.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.8.11"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-11284","vulnerabilityDetails":"Pivotal Reactor Netty, versions prior to 0.8.11, passes headers through redirects, including authorization ones. A remote unauthenticated malicious user may gain access to credentials for a different server than they have access to.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11284","cvss3Severity":"high","cvss3Score":"8.6","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in reactor netty release jar cve high severity vulnerability vulnerable library reactor netty release jar reactive streams netty driver library home page a href path to dependency file nibrs tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository io projectreactor netty reactor netty release reactor netty release jar dependency hierarchy spring boot starter webflux release jar root library spring boot starter reactor netty release jar x reactor netty release jar vulnerable library found in head commit a href found in base branch master vulnerability details pivotal reactor netty versions prior to passes headers through redirects including authorization ones a remote unauthenticated malicious user may gain access to 
credentials for a different server than they have access to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter webflux release org springframework boot spring boot starter reactor netty release io projectreactor netty reactor netty release isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails pivotal reactor netty versions prior to passes headers through redirects including authorization ones a remote unauthenticated malicious user may gain access to credentials for a different server than they have access to vulnerabilityurl | 0 |
5,802 | 30,727,495,672 | IssuesEvent | 2023-07-27 21:00:22 | cncf/tag-contributor-strategy | https://api.github.com/repos/cncf/tag-contributor-strategy | closed | Create a roadmap for the TAG | wg/governance wg/contribgrowth mentoring wg/maintainers-circle | In #329 we discussed how helpful having a roadmap is to encourage contributors and focus their efforts on high impact items. We should take our own advice and make one ourselves. Couple thoughts:
* Avoid making a wishlist. Limit to what we can realistically do with the current amount of time / velocity on projects.
* If needed call out what isn't on the roadmap that people may be looking for and wondering about.
* Prioritize or otherwise call out hard commitments for supporting the TOC.
* Make sure to include any time spent generally supporting projects, sometimes what we do doesn't fall into a "feature" type bucket but is still important. | True | Create a roadmap for the TAG - In #329 we discussed how helpful having a roadmap is to encourage contributors and focus their efforts on high impact items. We should take our own advice and make one ourselves. Couple thoughts:
* Avoid making a wishlist. Limit to what we can realistically do with the current amount of time / velocity on projects.
* If needed call out what isn't on the roadmap that people may be looking for and wondering about.
* Prioritize or otherwise call out hard commitments for supporting the TOC.
* Make sure to include any time spent generally supporting projects, sometimes what we do doesn't fall into a "feature" type bucket but is still important. | main | create a roadmap for the tag in we discussed how helpful having a roadmap is to encourage contributors and focus their efforts on high impact items we should take our own advice and make one ourselves couple thoughts avoid making a wishlist limit to what we can realistically do with the current amount of time velocity on projects if needed call out what isn t on the roadmap that people may be looking for and wondering about prioritize or otherwise call out hard commitments for supporting the toc make sure to include any time spent generally supporting projects sometimes what we do doesn t fall into a feature type bucket but is still important | 1 |
4,484 | 23,361,734,593 | IssuesEvent | 2022-08-10 12:19:42 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [a11y]: Tooltip content not being read by screen reader | status: needs more info type: a11y ♿ status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
carbon-components-react
### Browser
Chrome
### Operating System
Windows
### Package version
7.55
### React version
16.13.1
### Automated testing tool and ruleset
Google Chrome Screen Reader
### Assistive technology
Google Chrome Screen Reader
### Description
Tooltip content not being read by Chrome Screen-reader.
### WCAG 2.1 Violation
_No response_
### Reproduction/example
https://v10-react.carbondesignsystem.com/?path=/story/components-tooltip--default-bottom
### Steps to reproduce
- Click in the link above;
- Start Chrome Screen-reader;
- Click on the tooltip icon.
Expected: Chrome screen-reader to be able to read the tooltip content.
Result: Chrome screen-reader does not read the tooltip content.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [a11y]: Tooltip content not being read by screen reader - ### Package
carbon-components-react
### Browser
Chrome
### Operating System
Windows
### Package version
7.55
### React version
16.13.1
### Automated testing tool and ruleset
Google Chrome Screen Reader
### Assistive technology
Google Chrome Screen Reader
### Description
Tooltip content not being read by Chrome Screen-reader.
### WCAG 2.1 Violation
_No response_
### Reproduction/example
https://v10-react.carbondesignsystem.com/?path=/story/components-tooltip--default-bottom
### Steps to reproduce
- Click in the link above;
- Start Chrome Screen-reader;
- Click on the tooltip icon.
Expected: Chrome screen-reader to be able to read the tooltip content.
Result: Chrome screen-reader does not read the tooltip content.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | tooltip content not being read by screen reader package carbon components react browser chrome operating system windows package version react version automated testing tool and ruleset google chrome screen reader assistive technology google chrome screen reader description tooltip content not being read by chrome screen reader wcag violation no response reproduction example steps to reproduce click in the link above start chrome screen reader click on the tooltip icon expected chrome screen reader to be able to read the tooltip content result chrome screen reader does not read the tooltip content code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
1,671 | 6,574,093,737 | IssuesEvent | 2017-09-11 11:27:28 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | docker_network: unable to deal with network IDs | affects_2.2 bug_report cloud docker waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `docker_network`
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/schwarz/code/infrastructure/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Debian GNU/Linux
##### SUMMARY
`docker` allows addressing networks by ID. Ansible should do the same, for the sake of consistency with the `docker` CLI and other modules.
##### STEPS TO REPRODUCE
``` sh
$ docker network create foo
f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93
$ ansible -m docker_network -a 'name=f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93 state=absent' localhost
```
##### EXPECTED RESULTS
The output should be the same as from `ansible -m docker_network -a 'name=foo state=absent' localhost`.
```
localhost | SUCCESS => {
"actions": [
"Removed network f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93"
],
"changed": true
}
```
##### ACTUAL RESULTS
Instead no network is deleted.
```
localhost | SUCCESS => {
"actions": [],
"changed": false
}
```
| True | docker_network: unable to deal with network IDs - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `docker_network`
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/schwarz/code/infrastructure/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Debian GNU/Linux
##### SUMMARY
`docker` allows addressing networks by ID. Ansible should do the same, for the sake of consistency with the `docker` CLI and other modules.
##### STEPS TO REPRODUCE
``` sh
$ docker network create foo
f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93
$ ansible -m docker_network -a 'name=f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93 state=absent' localhost
```
##### EXPECTED RESULTS
The output should be the same as from `ansible -m docker_network -a 'name=foo state=absent' localhost`.
```
localhost | SUCCESS => {
"actions": [
"Removed network f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93"
],
"changed": true
}
```
##### ACTUAL RESULTS
Instead no network is deleted.
```
localhost | SUCCESS => {
"actions": [],
"changed": false
}
```
| main | docker network unable to deal with network ids issue type bug report component name docker network ansible version ansible config file home schwarz code infrastructure ansible cfg configured module search path default w o overrides configuration n a os environment debian gnu linux summary docker allows addressing networks by id ansible should do the same for the sake of consistency with the docker cli and other modules steps to reproduce sh docker network create foo ansible m docker network a name state absent localhost expected results the output should be the same as from ansible m docker network a name foo state absent localhost localhost success actions removed network changed true actual results instead no network is deleted localhost success actions changed false | 1 |
2,995 | 10,882,661,942 | IssuesEvent | 2019-11-18 01:29:06 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | QCMA v0.4.1 crashing while transferring | unmaintained wontfix | Crashes after selecting a file to copy, as the transfer to the PS Vita begins. I had tried fresh installs and rebooting the console and laptop; there is still no response after attempting the transfer again.
| True | QCMA v0.4.1 crashing while transferring - Crashes after selecting a file to copy, as the transfer to the PS Vita begins. I had tried fresh installs and rebooting the console and laptop; there is still no response after attempting the transfer again.
| main | qcma crashing while transferring crashes after selecting file to copy and transfer to ps vita begins i had tried fresh installs rebooting console and laptop there is still no response after attempting transfer again | 1 |
640,647 | 20,795,260,136 | IssuesEvent | 2022-03-17 08:39:29 | fh-fvtt/zweihander | https://api.github.com/repos/fh-fvtt/zweihander | closed | Fix errors in compendiums. | Priority: Medium | *Originally created by @ghost (https://github.com/fh-fvtt/zweihander/issues/57):*
https://trello.com/c/FKvza9k2/30-fix-errors-in-compendiums
Most of the compendiums have been generated using an external program from existing HTML files. Errors, typos and missing files need to be cross-checked with the Zweihänder CRB and fixed. | 1.0 | Fix errors in compendiums. - *Originally created by @ghost (https://github.com/fh-fvtt/zweihander/issues/57):*
https://trello.com/c/FKvza9k2/30-fix-errors-in-compendiums
Most of the compendiums have been generated using an external program from existing HTML files. Errors, typos and missing files need to be cross-checked with the Zweihänder CRB and fixed. | non_main | fix errors in compendiums originally created by ghost most of the compendiums have been generated using an external program from existing html files errors typos and missing files need to be cross checked with the zweihänder crb and fixed | 0 |
53,609 | 13,261,967,057 | IssuesEvent | 2020-08-20 20:51:43 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [production-histograms] syntax error in nugen_weight.py (Trac #1745) | Migrated from Trac combo simulation defect | http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/production-histograms/trunk/python/histogram_modules/simulation/nugen_weight.py#L12
The series of multiple commas as arguments is invalid. You probably meant to fill these in with actual values?
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1745">https://code.icecube.wisc.edu/projects/icecube/ticket/1745</a>, reported by david.schultzand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"_ts": "1550067215093672",
"description": "http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/production-histograms/trunk/python/histogram_modules/simulation/nugen_weight.py#L12\n\nThe series of multiple commas as arguments is invalid. You probably meant to fill these in with actual values?",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"time": "2016-06-14T15:16:23",
"component": "combo simulation",
"summary": "[production-histograms] syntax error in nugen_weight.py",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [production-histograms] syntax error in nugen_weight.py (Trac #1745) - http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/production-histograms/trunk/python/histogram_modules/simulation/nugen_weight.py#L12
The series of multiple commas as arguments is invalid. You probably meant to fill these in with actual values?
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1745">https://code.icecube.wisc.edu/projects/icecube/ticket/1745</a>, reported by david.schultzand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"_ts": "1550067215093672",
"description": "http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/production-histograms/trunk/python/histogram_modules/simulation/nugen_weight.py#L12\n\nThe series of multiple commas as arguments is invalid. You probably meant to fill these in with actual values?",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"time": "2016-06-14T15:16:23",
"component": "combo simulation",
"summary": "[production-histograms] syntax error in nugen_weight.py",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| non_main | syntax error in nugen weight py trac the series of multiple commas as arguments is invalid you probably meant to fill these in with actual values migrated from json status closed changetime ts description series of multiple commas as arguments is invalid you probably meant to fill these in with actual values reporter david schultz cc resolution fixed time component combo simulation summary syntax error in nugen weight py priority critical keywords milestone owner olivas type defect | 0 |
125,732 | 16,828,829,294 | IssuesEvent | 2021-06-17 23:13:41 | MicrosoftDocs/visualstudio-docs | https://api.github.com/repos/MicrosoftDocs/visualstudio-docs | closed | Link to Image Library is broken | Pri1 doc-bug needs-more-info visual-studio-windows/prod vs-ide-designers/tech |
The link to the Image Library is broken. If this is no longer available, this page should probably be removed or at least edited heavily.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 05fe994a-b63e-dfc1-9de9-97e56a4e4917
* Version Independent ID: 95c63cef-e907-f443-a885-d1265f47c171
* Content: [Image Library - Visual Studio](https://docs.microsoft.com/en-us/visualstudio/designers/the-visual-studio-image-library?view=vs-2019)
* Content Source: [docs/designers/the-visual-studio-image-library.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/master/docs/designers/the-visual-studio-image-library.md)
* Product: **visual-studio-windows**
* Technology: **vs-ide-designers**
* GitHub Login: @TerryGLee
* Microsoft Alias: **tglee** | 1.0 | Link to Image Library is broken -
The link to the Image Library is broken. If this is no longer available, this page should probably be removed or at least edited heavily.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 05fe994a-b63e-dfc1-9de9-97e56a4e4917
* Version Independent ID: 95c63cef-e907-f443-a885-d1265f47c171
* Content: [Image Library - Visual Studio](https://docs.microsoft.com/en-us/visualstudio/designers/the-visual-studio-image-library?view=vs-2019)
* Content Source: [docs/designers/the-visual-studio-image-library.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/master/docs/designers/the-visual-studio-image-library.md)
* Product: **visual-studio-windows**
* Technology: **vs-ide-designers**
* GitHub Login: @TerryGLee
* Microsoft Alias: **tglee** | non_main | link to image library is broken the link to the image library is broken if this is no longer available this page should probably be removed or at least edited heavily document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product visual studio windows technology vs ide designers github login terryglee microsoft alias tglee | 0 |
35,514 | 2,789,936,463 | IssuesEvent | 2015-05-08 22:32:44 | google/google-visualization-api-issues | https://api.github.com/repos/google/google-visualization-api-issues | opened | Mouseover tooltip | Priority-Low Type-Enhancement | Original [issue 569](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=569) created by orwant on 2011-04-01T20:34:00.000Z:
<b>What would you like to see us add to this API?</b>
Custom container tooltip. I want to put in more information, like images, links...
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
All of them.
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| 1.0 | Mouseover tooltip - Original [issue 569](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=569) created by orwant on 2011-04-01T20:34:00.000Z:
<b>What would you like to see us add to this API?</b>
Custom container tooltip. I want to put in more information, like images, links...
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
All of them.
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| non_main | mouseover tooltip original created by orwant on what would you like to see us add to this api custom container tooltip i want to put more informations like image link what component is this issue related to piechart linechart datatable query etc all of them for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved | 0 |
702,920 | 24,141,292,039 | IssuesEvent | 2022-09-21 15:00:53 | mvishnurana1/Diary | https://api.github.com/repos/mvishnurana1/Diary | closed | Pass userID while fetching entries by Date | high priority full-stack | The DB in its current state has no awareness of the logged-in user.
Please pass the logged-in user's ID while making the request for fetching entries.
Also, on the Controller, return HTTP - 400, if no userID is found. | 1.0 | Pass userID while fetching entries by Date - The DB in its current state has no awareness of the logged-in user.
Please pass the logged-in user's ID while making the request for fetching entries.
Also, on the Controller, return HTTP - 400, if no userID is found. | non_main | pass userid while fetching entries by date the db in its current state has no awareness of the logged in user please pass the logged in user s id while making request for fetching requests also on the controller return http if no userid is found | 0 |
1,422 | 6,191,042,908 | IssuesEvent | 2017-07-04 17:23:47 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | opened | Fix zenon.{0.7.1,0.8.0} | needs maintainer action no maintainer | The packages are misbehaving, which leads to obscure errors in other parts of the system (which they shouldn't, but that's another question).
A first fix would be to make them install in their own prefix. I could not understand what in the world led to the `zenon` directory [becoming a symlink](https://github.com/ocaml/opam-repository/issues/9690#issuecomment-312921164).
Unfortunately zenon has no maintainer.
| True | Fix zenon.{0.7.1,0.8.0} - The packages are misbehaving, which leads to obscure errors in other parts of the system (which they shouldn't, but that's another question).
A first fix would be to make them install in their own prefix. I could not understand what in the world led to the `zenon` directory [becoming a symlink](https://github.com/ocaml/opam-repository/issues/9690#issuecomment-312921164).
Unfortunately zenon has no maintainer.
| main | fix zenon the packages are misbehaving which leads to obscure errors in other part of the system which shouldn t but s another question a first fix would be to make them install in their own prefix i could not understand what in the world lead to the zenon directory unfortunately zenon has no maintainer | 1 |
255,312 | 27,484,902,653 | IssuesEvent | 2023-03-04 01:32:27 | panasalap/linux-4.1.15 | https://api.github.com/repos/panasalap/linux-4.1.15 | closed | CVE-2016-10088 (High) detected in linux179e72b561d3d331c850e1a5779688d7a7de5246, linuxlinux-4.1.17 - autoclosed | security vulnerability | ## CVE-2016-10088 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux179e72b561d3d331c850e1a5779688d7a7de5246</b>, <b>linuxlinux-4.1.17</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The sg implementation in the Linux kernel through 4.9 does not properly restrict write operations in situations where the KERNEL_DS option is set, which allows local users to read or write to arbitrary kernel memory locations or cause a denial of service (use-after-free) by leveraging access to a /dev/sg device, related to block/bsg.c and drivers/scsi/sg.c. NOTE: this vulnerability exists because of an incomplete fix for CVE-2016-9576.
<p>Publish Date: 2016-12-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-10088>CVE-2016-10088</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-10088">https://nvd.nist.gov/vuln/detail/CVE-2016-10088</a></p>
<p>Release Date: 2016-12-30</p>
<p>Fix Resolution: linux - 4.9.9-1;linux-zen - 4.9.9-1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-10088 (High) detected in linux179e72b561d3d331c850e1a5779688d7a7de5246, linuxlinux-4.1.17 - autoclosed - ## CVE-2016-10088 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux179e72b561d3d331c850e1a5779688d7a7de5246</b>, <b>linuxlinux-4.1.17</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The sg implementation in the Linux kernel through 4.9 does not properly restrict write operations in situations where the KERNEL_DS option is set, which allows local users to read or write to arbitrary kernel memory locations or cause a denial of service (use-after-free) by leveraging access to a /dev/sg device, related to block/bsg.c and drivers/scsi/sg.c. NOTE: this vulnerability exists because of an incomplete fix for CVE-2016-9576.
<p>Publish Date: 2016-12-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-10088>CVE-2016-10088</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-10088">https://nvd.nist.gov/vuln/detail/CVE-2016-10088</a></p>
<p>Release Date: 2016-12-30</p>
<p>Fix Resolution: linux - 4.9.9-1;linux-zen - 4.9.9-1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable libraries linuxlinux vulnerability details the sg implementation in the linux kernel through does not properly restrict write operations in situations where the kernel ds option is set which allows local users to read or write to arbitrary kernel memory locations or cause a denial of service use after free by leveraging access to a dev sg device related to block bsg c and drivers scsi sg c note this vulnerability exists because of an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux linux zen step up your open source security game with mend | 0 |
2,357 | 8,409,943,546 | IssuesEvent | 2018-10-12 09:02:53 | video-dev/hls.js | https://api.github.com/repos/video-dev/hls.js | opened | Add sample-AES encrypted hosted test stream (replace or move previous one) | Maintainer task | * Should replace: `https://video-dev.github.io/streams/bbbAES/playlists/sample_aes/index.m3u8` (atm disabled in `test-stream.js` because something wrong with key-file hosting, data maybe served corrupted)
* => We need another stream to test sample-AES decryption features
* Ideally hosted directly in the `video-dev/streams` repo
| True | Add sample-AES encrypted hosted test stream (replace or move previous one) - * Should replace: `https://video-dev.github.io/streams/bbbAES/playlists/sample_aes/index.m3u8` (atm disabled in `test-stream.js` because something wrong with key-file hosting, data maybe served corrupted)
* => We need another stream to test sample-AES decryption features
* Ideally hosted directly in the `video-dev/streams` repo
| main | add sample aes encrypted hosted test stream replace or move previous one should replace atm disabled in test stream js because something wrong with key file hosting data maybe served corrupted we need another stream to test sample aes decryption features ideally hosted directly in the video dev streams repo | 1 |
1,251 | 5,316,447,020 | IssuesEvent | 2017-02-13 19:54:16 | espeak-ng/espeak-ng | https://api.github.com/repos/espeak-ng/espeak-ng | closed | Look at using ucd-tools for isalpha, toupper, etc. wide-character support. | maintainability portability resolved/fixed | The `ucd-tools` project is being used in the eSpeak for Android project. It is also needed on platforms like Windows Mobile. Additionally, different versions of platforms support different versions of Unicode. Using `ucd-tools` would make the Unicode support consistent.
| True | Look at using ucd-tools for isalpha, toupper, etc. wide-character support. - The `ucd-tools` project is being used in the eSpeak for Android project. It is also needed on platforms like Windows Mobile. Additionally, different versions of platforms support different versions of Unicode. Using `ucd-tools` would make the Unicode support consistent.
| main | look at using ucd tools for isalpha toupper etc wide character support the ucd tools project is being used in the espeak for android project it is also needed on platforms like windows mobile additionally different versions of platforms support different versions of unicode using ucd tools would make this the unicode support consistent | 1 |
1,382 | 5,997,165,382 | IssuesEvent | 2017-06-03 21:08:00 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Cryptocurrency: "1 Eth" should show conversion, not Wikipedia info | Internal Maintainer Timeout | For example:
- [1 Ltc](https://duckduckgo.com/?q=1+ltc&ia=cryptocurrency) - OK.
- [1 Eth](https://duckduckgo.com/?q=1+Eth&ia=cryptocurrency) - **NOT** (Wiki)
---
IA Page: http://duck.co/ia/view/cryptocurrency
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @claytonspinner
| True | Cryptocurrency: "1 Eth" should show conversion, not Wikipedia info - For example:
- [1 Ltc](https://duckduckgo.com/?q=1+ltc&ia=cryptocurrency) - OK.
- [1 Eth](https://duckduckgo.com/?q=1+Eth&ia=cryptocurrency) - **NOT** (Wiki)
---
IA Page: http://duck.co/ia/view/cryptocurrency
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @claytonspinner
| main | cryptocurrency eth should show conversion not wikipedia info for example ok not wiki ia page claytonspinner | 1 |
5,671 | 29,497,410,764 | IssuesEvent | 2023-06-02 18:14:54 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | closed | Slight v0.5.1 version shows 0.5.0 | 🐛 bug 🚧 maintainer issue | **Description of the bug**
If you download the newest slight binary v0.5.1 and run `slight --version`, it shows v0.5.0.
**Additional context**
| True | Slight v0.5.1 version shows 0.5.0 - **Description of the bug**
If you download the newest slight binary v0.5.1 and run `slight --version`, it shows v0.5.0.
**Additional context**
| main | slight version shows description of the bug if you download the newest slight binary and do slight version it shows additional context | 1 |
3,377 | 13,080,857,607 | IssuesEvent | 2020-08-01 08:58:18 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | sameSite attribute not set correctly | bug maintainability | Javascript warning in Firefox console regarding cookies and "sameSite" attribute when switching the expression language.
### To Reproduce
Steps to reproduce the behavior:
1. Open Expression Editor
2. Then change language from GREL to Python
3. See warning
### Current Results
```
Cookie “scripting.lang” will be soon rejected because it has the “sameSite” attribute set to “none” or an invalid
value, without the “secure” attribute. To know more about the “sameSite“ attribute, read
https://developer.mozilla.org/docs/Web/HTTP/Headers/Set-Cookie/SameSite
project-bundle.js:10919:96
```
### Expected Behavior
no warning in Firefox Console
### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
### Versions<!-- (please complete the following information)-->
- Operating System: Windows 10
- Browser Version: Firefox latest
- JRE or JDK Version: JDK8
- OpenRefine: openrefine-3.4-beta-388-g1dcc832
### Datasets
<!-- If you are allowed and are OK with making your data public, it would be awesome if you can include or attach the data causing the issue or a URL pointing to where the data is.
If you are concerned about keeping your data private, ping us on our [mailing list](https://groups.google.com/forum/#!forum/openrefine) -->
### Additional context
<!-- Add any other context about the problem here. -->
| True | sameSite attribute not set correctly - Javascript warning in Firefox console regarding cookies and "sameSite" attribute when switching the expression language.
### To Reproduce
Steps to reproduce the behavior:
1. Open Expression Editor
2. Then change language from GREL to Python
3. See warning
### Current Results
```
Cookie “scripting.lang” will be soon rejected because it has the “sameSite” attribute set to “none” or an invalid
value, without the “secure” attribute. To know more about the “sameSite“ attribute, read
https://developer.mozilla.org/docs/Web/HTTP/Headers/Set-Cookie/SameSite
project-bundle.js:10919:96
```
### Expected Behavior
no warning in Firefox Console
### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
### Versions<!-- (please complete the following information)-->
- Operating System: Windows 10
- Browser Version: Firefox latest
- JRE or JDK Version: JDK8
- OpenRefine: openrefine-3.4-beta-388-g1dcc832
### Datasets
<!-- If you are allowed and are OK with making your data public, it would be awesome if you can include or attach the data causing the issue or a URL pointing to where the data is.
If you are concerned about keeping your data private, ping us on our [mailing list](https://groups.google.com/forum/#!forum/openrefine) -->
### Additional context
<!-- Add any other context about the problem here. -->
| main | samesite attribute not set correctly javascript warning in firefox console regarding cookies and samesite attribute when switching the expression language to reproduce steps to reproduce the behavior open expression editor then change language from grel to python see warning current results cookie “scripting lang” will be soon rejected because it has the “samesite” attribute set to “none” or an invalid value without the “secure” attribute to know more about the “samesite“ attribute read project bundle js expected behavior no warning in firefox console screenshots versions operating system windows browser version firefox latest jre or jdk version openrefine openrefine beta datasets if you are allowed and are ok with making your data public it would be awesome if you can include or attach the data causing the issue or a url pointing to where the data is if you are concerned about keeping your data private ping us on our additional context | 1 |
1,025 | 4,819,391,596 | IssuesEvent | 2016-11-04 19:06:32 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | haproxy module failure after upgrade to ansible 2.2.0 | affects_2.2 bug_report networking waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- haproxy module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 16.04
##### SUMMARY
<!--- Explain the problem briefly -->
After upgrade to ansible 2.2.0, haprox module execution failed. But the weight has been changed though.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```yml
- name: "Set {{ backend }}/{{ host }} Weight to {{ weight }}"
haproxy:
state: enabled
backend: "{{ backend }}"
host: "{{ host }}"
weight: "{{ weight }}"
socket: "{{ socket }}"
delegate_to: "{{ item }}"
with_items: "{{ groups.upay }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Weight changed and no failure message.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
I've formated the json output message.
```json
"failed: [s1-payment-001 -> s1-upay-001] (item=s1-upay-001) =>"
{
"failed": true,
"item": "s1-upay-001",
"module_stderr": "Shared connection to s1-upay-001 closed.
",
"module_stdout": "Traceback (most recent call last):
File \"/tmp/ansible_gz8hxE/ansible_module_haproxy.py\", line 350, in <module>
main()
File \"/tmp/ansible_gz8hxE/ansible_module_haproxy.py\", line 345, in main
ansible_haproxy.act()
File \"/tmp/ansible_gz8hxE/ansible_module_haproxy.py\", line 317, in act
self.module.exit_json(**self.command_results)
File \"/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\", line 1799, in exit_json
File \"/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\", line 388, in remove_values
File \"/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\", line 388, in <genexpr>
File \"/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\", line 399, in remove_values
TypeError: Value of unknown type: <type 'itertools.imap'>, <itertools.imap object at 0x7ff363f2af90>
",
"msg": "MODULE FAILURE"
}
```
| True | haproxy module failure after upgrade to ansible 2.2.0 - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- haproxy module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 16.04
##### SUMMARY
<!--- Explain the problem briefly -->
After upgrade to ansible 2.2.0, haprox module execution failed. But the weight has been changed though.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```yml
- name: "Set {{ backend }}/{{ host }} Weight to {{ weight }}"
haproxy:
state: enabled
backend: "{{ backend }}"
host: "{{ host }}"
weight: "{{ weight }}"
socket: "{{ socket }}"
delegate_to: "{{ item }}"
with_items: "{{ groups.upay }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Weight changed and no failure message.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
I've formated the json output message.
```json
"failed: [s1-payment-001 -> s1-upay-001] (item=s1-upay-001) =>"
{
"failed": true,
"item": "s1-upay-001",
"module_stderr": "Shared connection to s1-upay-001 closed.
",
"module_stdout": "Traceback (most recent call last):
File \"/tmp/ansible_gz8hxE/ansible_module_haproxy.py\", line 350, in <module>
main()
File \"/tmp/ansible_gz8hxE/ansible_module_haproxy.py\", line 345, in main
ansible_haproxy.act()
File \"/tmp/ansible_gz8hxE/ansible_module_haproxy.py\", line 317, in act
self.module.exit_json(**self.command_results)
File \"/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\", line 1799, in exit_json
File \"/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\", line 388, in remove_values
File \"/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\", line 388, in <genexpr>
File \"/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\", line 399, in remove_values
TypeError: Value of unknown type: <type 'itertools.imap'>, <itertools.imap object at 0x7ff363f2af90>
",
"msg": "MODULE FAILURE"
}
```
| main | haproxy module failure after upgrade to ansible issue type bug report component name haproxy module ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary after upgrade to ansible haprox module execution failed but the weight has been changed though steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used yml name set backend host weight to weight haproxy state enabled backend backend host host weight weight socket socket delegate to item with items groups upay expected results weight changed and no failure message actual results i ve formated the json output message json failed item upay failed true item upay module stderr shared connection to upay closed module stdout traceback most recent call last file tmp ansible ansible module haproxy py line in main file tmp ansible ansible module haproxy py line in main ansible haproxy act file tmp ansible ansible module haproxy py line in act self module exit json self command results file tmp ansible ansible modlib zip ansible module utils basic py line in exit json file tmp ansible ansible modlib zip ansible module utils basic py line in remove values file tmp ansible ansible modlib zip ansible module utils basic py line in file tmp ansible ansible modlib zip ansible module utils basic py line in remove values typeerror value of unknown type msg module failure | 1 |
3,249 | 12,389,686,296 | IssuesEvent | 2020-05-20 09:25:11 | permon/permon | https://api.github.com/repos/permon/permon | opened | suggestions for permonsys.h | maintainability | * `FLLOP_ASSERT`
- rename to`PERMON_ASSERT`
- `#if defined(PETSC_USE_DEBUG)` no-op
* `FllopDebug`
- replace by `PetscInfo` which now offers finer control (after https://gitlab.com/petsc/petsc/-/merge_requests/2216)
* `FLLOP_SETERRQ*` can now be abandoned in favor of `SETERRQ*`
* similar for `FLLOP_EXTERN` and `FLLOP_INTERN`
* `FLLTIC` and `FLLTOC` can be removed
* `FllopTrace` -> `PermonTrace`
- or get rid of it | True | suggestions for permonsys.h - * `FLLOP_ASSERT`
- rename to`PERMON_ASSERT`
- `#if defined(PETSC_USE_DEBUG)` no-op
* `FllopDebug`
- replace by `PetscInfo` which now offers finer control (after https://gitlab.com/petsc/petsc/-/merge_requests/2216)
* `FLLOP_SETERRQ*` can now be abandoned in favor of `SETERRQ*`
* similar for `FLLOP_EXTERN` and `FLLOP_INTERN`
* `FLLTIC` and `FLLTOC` can be removed
* `FllopTrace` -> `PermonTrace`
- or get rid of it | main | suggestions for permonsys h fllop assert rename to permon assert if defined petsc use debug no op fllopdebug replace by petscinfo which now offers finer control after fllop seterrq can now be abandoned in favor of seterrq similar for fllop extern and fllop intern flltic and flltoc can be removed flloptrace permontrace or get rid of it | 1 |
381,434 | 26,451,993,501 | IssuesEvent | 2023-01-16 11:57:43 | wailsapp/wails | https://api.github.com/repos/wailsapp/wails | opened | Add docs about CI in Guides | Documentation | ### Have you read the Documentation Contribution Guidelines?
- [X] I have read the [Documentation Contribution Guidelines](https://wails.io/community-guide#ways-of-contributing).
### Description
# Add docs about CI in Guides
---
Hello, developer,
I think we should add a document about CI building.
We can write examples for each CI product. Or give the users some certain hints.
Just as a tip, like `Routing`.
### Self-service
- [ ] I'd be willing to address this documentation request myself. | 1.0 | Add docs about CI in Guides - ### Have you read the Documentation Contribution Guidelines?
- [X] I have read the [Documentation Contribution Guidelines](https://wails.io/community-guide#ways-of-contributing).
### Description
# Add docs about CI in Guides
---
Hello, developer,
I think we should add a document about CI building.
We can write examples for each CI product. Or give the users some certain hints.
Just as a tip, like `Routing`.
### Self-service
- [ ] I'd be willing to address this documentation request myself. | non_main | add docs about ci in guides have you read the documentation contribution guidelines i have read the description add docs about ci in guides hello developer i think we should add a document about ci building we can write examples for each ci product or give the users some certain hints just as a tip like routing self service i d be willing to address this documentation request myself | 0 |
185,877 | 21,876,179,925 | IssuesEvent | 2022-05-19 10:20:28 | turkdevops/landscapeapp | https://api.github.com/repos/turkdevops/landscapeapp | closed | CVE-2021-37712 (High) detected in tar-6.0.2.tgz - autoclosed | security vulnerability | ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.0.2.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.0.2.tgz">https://registry.npmjs.org/tar/-/tar-6.0.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /.yarn/cache/tar-npm-6.0.2-5a3aaf4b8a-7d28cc13d7.zip</p>
<p>
Dependency Hierarchy:
- ncc-0.33.0.tgz (Root Library)
- node-gyp-7.0.0.tgz
- :x: **tar-6.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/landscapeapp/commit/3657f85158253a9663b2210cdbe1dee4fc4d6249">3657f85158253a9663b2210cdbe1dee4fc4d6249</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 6.1.9</p>
<p>Direct dependency fix Resolution (@vercel/ncc): 0.33.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37712 (High) detected in tar-6.0.2.tgz - autoclosed - ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.0.2.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.0.2.tgz">https://registry.npmjs.org/tar/-/tar-6.0.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /.yarn/cache/tar-npm-6.0.2-5a3aaf4b8a-7d28cc13d7.zip</p>
<p>
Dependency Hierarchy:
- ncc-0.33.0.tgz (Root Library)
- node-gyp-7.0.0.tgz
- :x: **tar-6.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/landscapeapp/commit/3657f85158253a9663b2210cdbe1dee4fc4d6249">3657f85158253a9663b2210cdbe1dee4fc4d6249</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 6.1.9</p>
<p>Direct dependency fix Resolution (@vercel/ncc): 0.33.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in tar tgz autoclosed cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file package json path to vulnerable library yarn cache tar npm zip dependency hierarchy ncc tgz root library node gyp tgz x tar tgz vulnerable library found in head commit a href found in base branch master vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value additionally on windows systems long path portions would resolve to the same file system entities as their short path counterparts a specially crafted tar archive could thus include a directory with one form of the path followed by a symbolic link with a different string that resolves to the same file system entity followed by a file using the first form by first creating a directory and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite these issues were addressed in 
releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution vercel ncc step up your open source security game with whitesource | 0 |
1,362 | 5,874,224,873 | IssuesEvent | 2017-05-15 15:36:28 | OpenLightingProject/ola | https://api.github.com/repos/OpenLightingProject/ola | reopened | "template<class> class std::auto_ptr" is deprecated in C++11/C++14 (GCC6 default) | bug Language-C++ Maintainability OpSys-Linux | When the rebuild for Fedora 24 triggered it failed. This was due to a deprecation warning that was treated as a error.
The deprecation warning was about `template<class> class std::auto_ptr` since this is used in [protoc/CppGenerator.cpp](https://github.com/OpenLightingProject/ola/blob/master/protoc/CppGenerator.cpp) on [line 52](https://github.com/OpenLightingProject/ola/blob/master/protoc/CppGenerator.cpp#L52) and [line 57](https://github.com/OpenLightingProject/ola/blob/master/protoc/CppGenerator.cpp#L57)
The koji build job is [here](http://koji.fedoraproject.org/koji/taskinfo?taskID=12854227) and the build log is [here](https://kojipkgs.fedoraproject.org//work/tasks/4227/12854227/build.log)
| True | "template<class> class std::auto_ptr" is deprecated in C++11/C++14 (GCC6 default) - When the rebuild for Fedora 24 triggered it failed. This was due to a deprecation warning that was treated as a error.
The deprecation warning was about `template<class> class std::auto_ptr` since this is used in [protoc/CppGenerator.cpp](https://github.com/OpenLightingProject/ola/blob/master/protoc/CppGenerator.cpp) on [line 52](https://github.com/OpenLightingProject/ola/blob/master/protoc/CppGenerator.cpp#L52) and [line 57](https://github.com/OpenLightingProject/ola/blob/master/protoc/CppGenerator.cpp#L57)
The koji build job is [here](http://koji.fedoraproject.org/koji/taskinfo?taskID=12854227) and the build log is [here](https://kojipkgs.fedoraproject.org//work/tasks/4227/12854227/build.log)
| main | template class std auto ptr is deprecated in c c default when the rebuild for fedora triggered it failed this was due to a deprecation warning that was treated as a error the deprecation warning was about template class std auto ptr since this is used in on and the koji build job is and the build log is | 1 |
4,147 | 19,723,537,239 | IssuesEvent | 2022-01-13 17:34:21 | omigroup/omigroup | https://api.github.com/repos/omigroup/omigroup | closed | What are the existing tasks in maintaining OMI and serving as a chair today? | Consistently deliver value Cultivate Resiliency Maintain sustainable innovation | It was suggested that we [draft out a list of the tasks and responsibilities](https://hackmd.io/@mrmetaverse/omi-tasks-audit) of current and future Chairs and leaders in OMI to help inform a potentially "better" structure.
_Originally posted by @mrmetaverse in https://github.com/omigroup/omigroup/discussions/157#discussioncomment-1950635_ | True | What are the existing tasks in maintaining OMI and serving as a chair today? - It was suggested that we [draft out a list of the tasks and responsibilities](https://hackmd.io/@mrmetaverse/omi-tasks-audit) of current and future Chairs and leaders in OMI to help inform a potentially "better" structure.
_Originally posted by @mrmetaverse in https://github.com/omigroup/omigroup/discussions/157#discussioncomment-1950635_ | main | what are the existing tasks in maintaining omi and serving as a chair today it was suggested that we of current and future chairs and leaders in omi to help inform a potentially better structure originally posted by mrmetaverse in | 1 |
650 | 4,163,899,406 | IssuesEvent | 2016-06-18 12:11:58 | Particular/NServiceBus.RabbitMQ | https://api.github.com/repos/Particular/NServiceBus.RabbitMQ | closed | Remove ConnectionManager | Size: S State: In Progress - Maintainer Prio Tag: Maintainer Prio Type: Refactoring | Currently, we still have both `ConnectionManager` and `ConnectionFactory` classes. As mentioned in https://github.com/Particular/NServiceBus.RabbitMQ/issues/74#issuecomment-201040218, the scope of what `ConnectionManager` is doing has been decreased. At this point it is only being used to manage the publish connection and passing along the creation of the admin connection to the connection factory:
https://github.com/Particular/NServiceBus.RabbitMQ/blob/develop/src/NServiceBus.RabbitMQ/Connection/ConnectionManager.cs
Instead of keeping `ConnectionManager` around just to pass it into `ChannelProvider`, I think it makes sense kill `ConnectionManager` and just pass `ConnectionFactory` around directly.
This would mean that `ChannelProvider` would get a `ConnectionFactory` and create its own publish connection when it needs it, and then would be responsible for closing it when the endpoint is stopping.
This approach works because at that point we would only ever need to be "creating" connections and the two places that keep connections open (`ChannelProvider` and `MessagePump`) can just deal with the created connection directly.
The `MessagePump` was originally designed this way because it needed to create a specific `ConnectionFactory` to pass in a custom scheduler, but I've cleaned that up a bit, and it no longer requires the custom scheduler. 550664404c2563d01096cec25cca283e038eb95d
Because of this, we could decide on an alternate approach. The `MessagePump` could once again get a "managed" connection from `ConnectionManager`, and then it would use that connection to create a channel.
This would put the `ConnectionManager` back in charge of connection lifetimes.
Currently, as a side effect of the `MessagePump` being responsible for creating a connection, each `MessagePump` instance creates its own connection, so each queue being consumed (main, optional instance queue, satellites) has a separate connection in addition to a separate channel. I was able to take advantage of this and the purpose of each connection is set differently instead of having a generic "consume" purpose: https://github.com/Particular/NServiceBus.RabbitMQ/blob/develop/src/NServiceBus.RabbitMQ/Receiving/MessagePump.cs#L74
This is nice when viewing the connections from the management UI, and we'd lose it if we went back to the `ConnectionManager` being in charge.
Thoughts?
@Particular/rabbitmq-transport-maintainers | True | Remove ConnectionManager - Currently, we still have both `ConnectionManager` and `ConnectionFactory` classes. As mentioned in https://github.com/Particular/NServiceBus.RabbitMQ/issues/74#issuecomment-201040218, the scope of what `ConnectionManager` is doing has been decreased. At this point it is only being used to manage the publish connection and passing along the creation of the admin connection to the connection factory:
https://github.com/Particular/NServiceBus.RabbitMQ/blob/develop/src/NServiceBus.RabbitMQ/Connection/ConnectionManager.cs
Instead of keeping `ConnectionManager` around just to pass it into `ChannelProvider`, I think it makes sense kill `ConnectionManager` and just pass `ConnectionFactory` around directly.
This would mean that `ChannelProvider` would get a `ConnectionFactory` and create its own publish connection when it needs it, and then would be responsible for closing it when the endpoint is stopping.
This approach works because at that point we would only ever need to be "creating" connections and the two places that keep connections open (`ChannelProvider` and `MessagePump`) can just deal with the created connection directly.
The `MessagePump` was originally designed this way because it needed to create a specific `ConnectionFactory` to pass in a custom scheduler, but I've cleaned that up a bit, and it no longer requires the custom scheduler. 550664404c2563d01096cec25cca283e038eb95d
Because of this, we could decide on an alternate approach. The `MessagePump` could once again get a "managed" connection from `ConnectionManager`, and then it would use that connection to create a channel.
This would put the `ConnectionManager` back in charge of connection lifetimes.
Currently, as a side effect of the `MessagePump` being responsible for creating a connection, each `MessagePump` instance creates its own connection, so each queue being consumed (main, optional instance queue, satellites) has a separate connection in addition to a separate channel. I was able to take advantage of this and the purpose of each connection is set differently instead of having a generic "consume" purpose: https://github.com/Particular/NServiceBus.RabbitMQ/blob/develop/src/NServiceBus.RabbitMQ/Receiving/MessagePump.cs#L74
This is nice when viewing the connections from the management UI, and we'd lose it if we went back to the `ConnectionManager` being in charge.
Thoughts?
@Particular/rabbitmq-transport-maintainers | main | remove connectionmanager currently we still have both connectionmanager and connectionfactory classes as mentioned in the scope of what connectionmanager is doing has been decreased at this point it is only being used to manage the publish connection and passing along the creation of the admin connection to the connection factory instead of keeping connectionmanager around just to pass it into channelprovider i think it makes sense kill connectionmanager and just pass connectionfactory around directly this would mean that channelprovider would get a connectionfactory and create its own publish connection when it needs it and then would be responsible for closing it when the endpoint is stopping this approach works because at that point we would only ever need to be creating connections and the two places that keep connections open channelprovider and messagepump can just deal with the created connection directly the messagepump was originally designed this way because it needed to create a specific connectionfactory to pass in a custom scheduler but i ve cleaned that up a bit and it no longer requires the custom scheduler because of this we could decide on an alternate approach the messagepump could once again get a managed connection from connectionmanager and then it would use that connection to create a channel this would put the connectionmanager back in charge of connection lifetimes currently as a side effect of the messagepump being responsible for creating a connection each messagepump instance creates its own connection so each queue being consumed main optional instance queue satellites has a separate connection in addition to a separate channel i was able to take advantage of this and the purpose of each connection is set differently instead of having a generic consume purpose this is nice when viewing the connections from the management ui and we d lose it if we went back to the connectionmanager being 
in charge thoughts particular rabbitmq transport maintainers | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.