Unnamed: 0 int64 9 832k | id float64 2.5B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 4 323 | labels stringlengths 4 2.67k | body stringlengths 23 107k | index stringclasses 4 values | text_combine stringlengths 96 107k | label stringclasses 2 values | text stringlengths 96 56.1k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
957 | 11,802,907,780 | IssuesEvent | 2020-03-18 22:42:34 | NuGet/Home | https://api.github.com/repos/NuGet/Home | reopened | Address VSTHRD010 and VSTHRD109 warnings | Area:Reliability Type:Bug | [VSTHRD010](https://github.com/microsoft/vs-threading/blob/master/doc/analyzers/VSTHRD010.md): Invoke single-threaded types on Main thread
[VSTHRD109](https://github.com/microsoft/vs-threading/blob/master/doc/analyzers/VSTHRD109.md): Switch instead of assert in async methods
Related https://github.com/microsoft/vs-threading/issues/577 | True | reli | address and warnings invoke single threaded types on main thread switch instead of assert in async methods related | 1 |
191 | 5,254,751,309 | IssuesEvent | 2017-02-02 13:51:44 | LeastAuthority/leastauthority.com | https://api.github.com/repos/LeastAuthority/leastauthority.com | closed | the leastauthority.com repo must be restructured to reflect excision of blog source | blog deployment enhancement operations reliability review-needed unfinished business | zancas commented 6 days ago
Actually this is already accomplished on branch: 238_remove_the_blog_2 so simply deploying that branch closes this issue.
zancas commented 2 days ago
I propose that I:
(1) check out current production master on "testing"
(2) run the servers (flapp and web)
(3) view the blog
(4) checkout 238_remove_the_blog_2
(5) re-view the site
(6) run update_testing_blog.py
(7) re-view the site
(8) ask for a pair-peer
(9) clone blog_source to production
(10) checkout 238_remove_the_blog_2 on production CAUSING TEMPORARY OUTAGE
(11) run update_production_blog.py (fixing the outage)
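A plan like this is easy to fat-finger when run by hand; as a sketch (the branch and script names come from the plan above, but the dry-run wrapper itself is hypothetical), the testing-side steps can be scripted with a dry-run mode that prints each command before anything executes:

```python
import subprocess

# Steps (4) and (6) of the plan above as commands; the branch and
# script names are taken from the thread, the wrapper is a sketch.
TESTING_STEPS = [
    ["git", "checkout", "238_remove_the_blog_2"],
    ["python", "update_testing_blog.py"],
]

def run_steps(steps, dry_run=True):
    """Collect each command as a printable string; when dry_run is
    False, also execute it, stopping at the first non-zero exit
    (subprocess.check_call raises CalledProcessError)."""
    planned = []
    for cmd in steps:
        planned.append(" ".join(cmd))
        if not dry_run:
            subprocess.check_call(cmd)
    return planned
```

Running with `dry_run=True` gives a reviewable list of commands before touching testing or production.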
zancas commented 2 days ago
I'll start with steps 1-7, now.
zancas commented 2 days ago
Amendment:
I set the leastauthority.com repo to a commit whose blog_source directory had been removed (with git rm), pushed that repo to testing, and checked out the commit that did not have the blog_source directory.
The blog_source directory which was in the working directory prior to the checkout was unaffected. Therefore this process will not cause an outage.
zancas commented 2 days ago
Even without the --delete-output-directory flag, pelican removes the target output directory.
zancas commented 2 days ago
Instead of git checkout 238_... on production we'll merge 238_... into master on github.
We'll git push --verbose production master:master maintaining synchronization.
We'll:
(1) ssh into production
(2) git merge --ff-only master on production
zancas commented 2 days ago
A clue from /var/log/auth.log on rho:
Jul 8 03:19:43 ip-172-31-47-120 sshd[20735]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Jul 8 03:19:43 ip-172-31-47-120 sshd[20735]: Connection closed by 67.176.52.224 [preauth]
zancas referenced this issue in LeastAuthority/leastauthority.com a day ago: openssh pubkey ECDSA vs RSA mismatch #10 (Open)
zancas commented a day ago
OK, I've figured out the SSH authentication issue. (I THINK).
If I:
mv ~/.ssh/known_hosts $BACKUP
and run one of the update_.._blog.py tools, I get the same error that blocked us earlier in the deployment.
If I then ssh website@testing.leastauthority.com, I get:
0 git_branch: master
/home/arc/.ssh :
$ ssh website@testing.leastauthority.com
The authenticity of host 'testing.leastauthority.com (107.21.225.70)' can't be established.
ECDSA key fingerprint is c8:36:84:9f:20:be:3d:71:c9:e4:15:a9:76:4e:91:17.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'testing.leastauthority.com,107.21.225.70' (ECDSA) to the list of known hosts.
Notice that the key used is an ECDSA key.
If I then run:
python ./update_testing_blog.py
I get the same error as I got when there was no known_hosts file.
Now mv known_hosts aside again and run:
130 git_branch: master
/home/arc/.ssh :
$ ssh-keyscan -t rsa testing.leastauthority.com >> ~/.ssh/known_hosts
# testing.leastauthority.com SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1
0 git_branch: master
/home/arc/.ssh :
$ cat ~/.ssh/known_hosts
testing.leastauthority.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCWvLycjtQeZJpXbhSm/Y5fzbe3JV36CKehVzJQvqrJYg9oOpCwneXFIrIEhOS5Lg6OwXxe1jCqAYL6r+UOjTvMyGc971FU1d1qID71LcTdAiiI2k+8KeJjuOdH2P8WRIL4XQ21xQbqx/o7kb3OTeWDpBdzqrX4bSRwZ5MyzwccUi8ff9kNGjfQk97dt/e+PfxbjOpMm9JCFWGvcXJLvKo8gQCm1q7oajUzexCo8tuwu0TFcOlXCFUore5j8d/eUo0yvJZwgQuoyoSpok/o5sIRoAdVTM3eufkcy1DM0vFDJKP+OGbt+TLBEQo5ibOWEQTb1jpsPcxLVhcFnEIIjFKV
I still get the same error, which confuses me; perhaps it's because the keyscan result was matched to a domain name... /me checks this hypothesis....
Yes, the "mv known_hosts" test confirms this, so the first argument to "update_blog" must match an entry in known_hosts (either IPv4 or domain-name).
It seems that for multihoming, and switching servers, we should map to the domain name for infrastructure servers... indeed SSEC2s have publicly routable domain-names as well IIRC, so I'll make a new issue.
Once the public key in "known_hosts" is of type rsa, and once it's mapped to the IPv4 address passed to the function update_blog, the script authenticates as expected.
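The lookup behaviour described above can be illustrated with a small parser for plain known_hosts entries (a sketch; OpenSSH's real matching also handles hashed hostnames, wildcards, and ports): each line maps one or more names/addresses to a key of one specific type, so an ssh-rsa entry stored under the domain name satisfies neither a client connecting by IPv4 address nor one negotiating ECDSA.

```python
def parse_known_hosts_line(line):
    """Split a plain known_hosts entry into (hosts, key_type, key_blob).

    The first field may list several names/addresses separated by
    commas, e.g. "testing.leastauthority.com,107.21.225.70".
    """
    hosts, key_type, key_blob = line.split(None, 2)
    return hosts.split(","), key_type, key_blob.strip()

def host_key_matches(line, host, wanted_type):
    """True only if both the host *and* the key type match the entry."""
    hosts, key_type, _ = parse_known_hosts_line(line)
    return host in hosts and key_type == wanted_type
```

With the ssh-keyscan entry shown above, a match against "107.21.225.70" fails until the host is re-scanned by IP address, which is consistent with the observed fix.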
zancas commented a day ago
I don't know which software is unable to check ecdsa pubkeys.
paramiko? fabric?
zancas commented 23 hours ago
Next problem, not yet understood:
website@ip-172-31-47-120:~/blog_source$ python ./render_blog.py
WARNING: LOCALE option doesn't contain a correct value
ERROR: Skipping /home/website/blog_source/README.md: could not find information about 'title'
-> copying /home/website/blog_source/themes/lae/static to /home/website/leastauthority.com/content/blog/theme
-> writing /home/website/leastauthority.com/content/blog/feeds/all.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/essay.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/events.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/letters.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/news.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/press.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/press-releases.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/all-en.atom.xml
-> writing /home/website/leastauthority.com/content/blog/least_authority_releases_s4+_as_part_of_reset_the_net.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_performs_security_audit_for_globaleaks.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_performs_security_audit_for_cryptocat.html
-> writing /home/website/leastauthority.com/content/blog/BLAKE2-harder-better-faster-stronger-than-MD5.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_performs_security_audit_for_spideroak.html
-> writing /home/website/leastauthority.com/content/blog/lafs_summit_san_francisco_nov_11_13.html
-> writing /home/website/leastauthority.com/content/blog/lafs_featured_eff_tech_blog.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_joins_open_invention_network.html
-> writing /home/website/leastauthority.com/content/blog/s4_press_on_linuxbsdos.html
-> writing /home/website/leastauthority.com/content/blog/open_letter_silent_circle.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_press_motherboard_vice_com.html
-> writing /home/website/leastauthority.com/content/blog/lafs_press_computerworld.html
-> writing /home/website/leastauthority.com/content/blog/least-authority-announces-prism-proof-storage-service.html
CRITICAL: 'articles_next_page' is undefined
Traceback (most recent call last):
File "./render_blog.py", line 14, in <module>
render_blog()
File "./render_blog.py", line 10, in render_blog
subprocess.check_call(['pelican', '-v', '-s', 'conf.py'])
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['pelican', '-v', '-s', 'conf.py']' returned non-zero exit status 1
website@ip-172-31-47-120:~/blog_source$
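For reference, the traceback implies render_blog.py is essentially a thin subprocess wrapper (a reconstructed sketch; the pelican_cmd parameter is added here for illustration and is not in the original script):

```python
import subprocess

def render_blog(pelican_cmd=("pelican", "-v", "-s", "conf.py")):
    """Run pelican against the repo's conf.py.

    subprocess.check_call raises CalledProcessError on any non-zero
    exit, which is exactly the traceback shown above when pelican hits
    the 'articles_next_page' CRITICAL error and exits with status 1.
    """
    subprocess.check_call(list(pelican_cmd))
```

So the CalledProcessError is just the wrapper surfacing pelican's own failure, not a bug in the wrapper itself.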
zancas commented 22 hours ago
By testing on 'testing' I confirm that pelican 3.2.2 has this issue and 3.3.0 does not.
zancas commented 22 hours ago
Hmmm, here's what testing says about pelican after pip install pelican==3.3.0:
website@ip-10-142-174-67:~/blog_source$ pip show pelican
Name: pelican
Version: 3.3
Location: /usr/local/lib/python2.7/dist-packages
Requires: jinja2, blinker, pygments, docutils, python-dateutil, pytz, six, feedgenerator, unidecode
website@ip-10-142-174-67:~/blog_source$ pelican --version
3.4.0
website@ip-10-142-174-67:~/blog_source$
nathan-at-least commented 22 hours ago
What does which pelican say?
zancas commented 22 hours ago
website@ip-10-142-174-67:~$ which pelican
/usr/local/bin/pelican
nathan-at-least commented 22 hours ago
I'm going to install pelican into a fresh virtualenv to see if my local system shows the same version issue.
zancas commented 22 hours ago
I ran:
pip uninstall pelican
which pelican
on testing and there was no result.
Then I ran:
pip install pelican==3.3.0
and then:
root@ip-10-142-174-67:/home/ubuntu# which pelican
/usr/local/bin/pelican
root@ip-10-142-174-67:/home/ubuntu# pelican --version
3.4.0
root@ip-10-142-174-67:/home/ubuntu# pip show pelican
Name: pelican
Version: 3.3
Location: /usr/local/lib/python2.7/dist-packages
Requires: jinja2, blinker, pygments, docutils, python-dateutil, pytz, six, feedgenerator, unidecode
root@ip-10-142-174-67:/home/ubuntu#
nathan-at-least commented 22 hours ago
If I do a fresh pip install I do not see this issue:
$ mkdir tmpvenv
$ virtualenv !$
virtualenv tmpvenv
Using real prefix '/usr'
New python executable in tmpvenv/bin/python
Installing Setuptools..............................................................................................................................................................................................................................done.
Installing Pip.....................................................................................................................................................................................................................................................................................................................................done.
$ . ./tmpvenv/bin/activate
(tmpvenv)
$ which pip
/home/n/tmp/tmpvenv/bin/pip
(tmpvenv)
$ which pelican
(tmpvenv)
$ pip install 'pelican==3.3.0'
Downloading/unpacking pelican==3.3.0
Downloading pelican-3.3.tar.gz (195kB): 195kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fp%2Fpelican%2Fpelican-3.3.tar.gz
Running setup.py egg_info for package pelican
Downloading/unpacking feedgenerator>=1.6 (from pelican==3.3.0)
Downloading feedgenerator-1.7.tar.gz
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Ff%2Ffeedgenerator%2Ffeedgenerator-1.7.tar.gz
Running setup.py egg_info for package feedgenerator
```
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
```
Downloading/unpacking jinja2>=2.7 (from pelican==3.3.0)
Downloading Jinja2-2.7.3.tar.gz (378kB): 378kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FJ%2FJinja2%2FJinja2-2.7.3.tar.gz
Running setup.py egg_info for package jinja2
```
warning: no files found matching '*' under directory 'custom_fixers'
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
```
Downloading/unpacking pygments (from pelican==3.3.0)
Downloading Pygments-1.6.tar.gz (1.4MB): 1.4MB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPygments%2FPygments-1.6.tar.gz
Running setup.py egg_info for package pygments
Downloading/unpacking docutils (from pelican==3.3.0)
Downloading docutils-0.11.tar.gz (1.6MB): 1.6MB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fd%2Fdocutils%2Fdocutils-0.11.tar.gz
Running setup.py egg_info for package docutils
```
warning: no files found matching 'MANIFEST'
warning: no files found matching '*' under directory 'extras'
warning: no previously-included files matching '.cvsignore' found under directory '*'
warning: no previously-included files matching '*.pyc' found under directory '*'
warning: no previously-included files matching '*~' found under directory '*'
warning: no previously-included files matching '.DS_Store' found under directory '*'
```
Downloading/unpacking pytz>=0a (from pelican==3.3.0)
Downloading pytz-2014.4.tar.bz2 (159kB): 159kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fp%2Fpytz%2Fpytz-2014.4.tar.bz2
Running setup.py egg_info for package pytz
Downloading/unpacking blinker (from pelican==3.3.0)
Downloading blinker-1.3.tar.gz (91kB): 91kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fb%2Fblinker%2Fblinker-1.3.tar.gz
Running setup.py egg_info for package blinker
Downloading/unpacking unidecode (from pelican==3.3.0)
Downloading Unidecode-0.04.16.tar.gz (200kB): 200kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FU%2FUnidecode%2FUnidecode-0.04.16.tar.gz
Running setup.py egg_info for package unidecode
Downloading/unpacking six (from pelican==3.3.0)
Using download cache from /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fs%2Fsix%2Fsix-1.7.3.tar.gz
Running setup.py egg_info for package six
```
no previously-included directories found matching 'documentation/_build'
```
Downloading/unpacking markupsafe (from jinja2>=2.7->pelican==3.3.0)
Downloading MarkupSafe-0.23.tar.gz
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FM%2FMarkupSafe%2FMarkupSafe-0.23.tar.gz
Running setup.py egg_info for package markupsafe
Installing collected packages: pelican, feedgenerator, jinja2, pygments, docutils, pytz, blinker, unidecode, six, markupsafe
Running setup.py install for pelican
```
Installing pelican script to /home/n/tmp/tmpvenv/bin
Installing pelican-import script to /home/n/tmp/tmpvenv/bin
Installing pelican-quickstart script to /home/n/tmp/tmpvenv/bin
Installing pelican-themes script to /home/n/tmp/tmpvenv/bin
```
Running setup.py install for feedgenerator
```
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
```
Running setup.py install for jinja2
```
warning: no files found matching '*' under directory 'custom_fixers'
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
```
Running setup.py install for pygments
```
Installing pygmentize script to /home/n/tmp/tmpvenv/bin
```
Running setup.py install for docutils
changing mode of build/scripts-2.7/rst2html.py from 644 to 755
changing mode of build/scripts-2.7/rst2s5.py from 644 to 755
changing mode of build/scripts-2.7/rst2latex.py from 644 to 755
changing mode of build/scripts-2.7/rst2xetex.py from 644 to 755
changing mode of build/scripts-2.7/rst2man.py from 644 to 755
changing mode of build/scripts-2.7/rst2xml.py from 644 to 755
changing mode of build/scripts-2.7/rst2pseudoxml.py from 644 to 755
changing mode of build/scripts-2.7/rstpep2html.py from 644 to 755
changing mode of build/scripts-2.7/rst2odt.py from 644 to 755
changing mode of build/scripts-2.7/rst2odt_prepstyles.py from 644 to 755
```
warning: no files found matching 'MANIFEST'
warning: no files found matching '*' under directory 'extras'
warning: no previously-included files matching '.cvsignore' found under directory '*'
warning: no previously-included files matching '*.pyc' found under directory '*'
warning: no previously-included files matching '*~' found under directory '*'
warning: no previously-included files matching '.DS_Store' found under directory '*'
changing mode of /home/n/tmp/tmpvenv/bin/rst2html.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2man.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2pseudoxml.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2odt.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rstpep2html.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2xml.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2xetex.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2latex.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2odt_prepstyles.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2s5.py to 755
```
Running setup.py install for pytz
Running setup.py install for blinker
Running setup.py install for unidecode
Running setup.py install for six
```
no previously-included directories found matching 'documentation/_build'
```
Running setup.py install for markupsafe
```
building 'markupsafe._speedups' extension
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c markupsafe/_speedups.c -o build/temp.linux-x86_64-2.7/markupsafe/_speedups.o
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/markupsafe/_speedups.o -o build/lib.linux-x86_64-2.7/markupsafe/_speedups.so
```
Successfully installed pelican feedgenerator jinja2 pygments docutils pytz blinker unidecode six markupsafe
Cleaning up...
(tmpvenv)
$ which pelican
/home/n/tmp/tmpvenv/bin/pelican
(tmpvenv)
$ pelican --version
3.3.0
(tmpvenv)
nathan-at-least commented 21 hours ago
$ pip show pelican
Name: pelican
Version: 3.3
Location: /home/n/tmp/tmpvenv/lib/python2.7/site-packages
Requires: feedgenerator, jinja2, pygments, docutils, pytz, blinker, unidecode, six
(tmpvenv)
nathan-at-least commented 21 hours ago
What happens if you tell bash to hash -r? Could it possibly be caching a reference to a different script?
zancas commented 21 hours ago
root@ip-10-142-174-67:/home/ubuntu# pelican --version
3.4.0
nathan-at-least commented 21 hours ago
Za linked to https://stackoverflow.com/questions/5226311/installing-specific-package-versions-with-pip
Does pip uninstall pelican && pip install 'pelican==3.3.0' help?
zancas commented 21 hours ago
The -I flag seems to make pip install ignore previously satisfied dependencies.
But I still have the same result.
zancas commented 21 hours ago
The above invocation with the single quotes doesn't help.
zancas commented 21 hours ago
How do I find out the source URL of a pip package?
Maybe, as with that stackoverflow ticket... hmmm... no that doesn't make sense.
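One way to answer this: PyPI exposes per-package and per-release metadata, including the download URLs pip resolves, at a JSON endpoint. The helper below (the function name is mine) only builds that URL, so nothing here needs network access; fetching it and reading the "urls" list is left to the caller.

```python
def pypi_json_url(name, version=None):
    """Build the PyPI JSON API URL for a package or a pinned release.

    The JSON document served there includes a "urls" list with the
    source/wheel download URLs for the release.
    """
    base = "https://pypi.python.org/pypi"
    if version is None:
        return "%s/%s/json" % (base, name)
    return "%s/%s/%s/json" % (base, name, version)
```

For example, pypi_json_url("pelican", "3.3.0") points at the metadata for exactly the release being debugged here.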
zancas commented 21 hours ago
After:
root@ip-10-142-174-67:/home/ubuntu# pip uninstall pelican && pip install -I 'pelican==3.2.2'
root@ip-10-142-174-67:/home/ubuntu# pelican --version
3.2.2
root@ip-10-142-174-67:/home/ubuntu# pip show pelican
Name: pelican
Version: 3.2.2
Location: /usr/local/lib/python2.7/dist-packages
Requires: feedgenerator, jinja2, pygments, docutils, pytz, blinker, unidecode, six
So it's specific to 3.3.0!!
BTW 3.4.0 is "LATEST".
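A mismatch like this (pip show reports 3.3 while pelican --version reports 3.4.0) usually means the console script found on $PATH does not belong to the installation pip is describing, e.g. a stale /usr/local/bin/pelican shadowing the fresh install. A small diagnostic sketch (the function name is mine, and importlib.metadata is Python 3.8+, so this is a modern stand-in for the Python 2.7 setup in the thread):

```python
import shutil
from importlib import metadata

def cli_vs_dist(cli_name, dist_name):
    """Return (path of the CLI found on $PATH, version recorded in the
    installed distribution's metadata), with None for whichever is
    missing.

    If the CLI runs but reports a version other than the metadata one,
    suspect a leftover entry-point script from an earlier install.
    """
    cli_path = shutil.which(cli_name)
    try:
        dist_version = metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        dist_version = None
    return cli_path, dist_version
```

Here cli_vs_dist("pelican", "pelican") disagreeing with `pelican --version` would point at the stale-script explanation rather than a pip bug.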
zancas commented 21 hours ago
And, also specific to the testing machine (so far).
zancas commented 21 hours ago
Where does one report pelican/pip bugs....
zancas commented 21 hours ago
Hmmm... this link is live:
https://pypi.python.org/pypi/pelican/3.2.2
nathan-at-least commented 21 hours ago
Why does it work on my machine?
$ pip --version
pip 1.4.1 from /home/n/tmp/tmpvenv/lib/python2.7/site-packages (python 2.7)
(tmpvenv)
zancas commented 21 hours ago
Ha! Jynx:
root@ip-10-142-174-67:/home/ubuntu# pip --version
pip 1.5.4 from /usr/lib/python2.7/dist-packages (python 2.7)
zancas commented 21 hours ago
(LeastAuthority_env) 0 git_branch: master
/home/arc/LeastAuthority_env/LeastAuthority/website/leastauthority.com :
$ pip --version
pip 1.3.1 from /home/arc/LeastAuthority_env/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg (python 2.7)
zancas commented 21 hours ago
I'll try changing the version of pip on testing...
zancas commented 21 hours ago
Sigh... how to install a particular apt package version....
zancas commented 21 hours ago
http://debian-handbook.info/browse/wheezy/sect.apt-get.html
zancas commented 21 hours ago
This link is not available:
https://pypi.python.org/pypi/pelican/3.3.0
So how did my local pip install 3.3.0?
/me runs:
(LeastAuthority_env) 0 git_branch: master
/home/arc/LeastAuthority_env/LeastAuthority/website/leastauthority.com :
$ pip install -v -I pelican==3.3.0 | less
zancas commented 21 hours ago
Phewf, even one "v" is pretty verbose!
zancas commented 20 hours ago
I can't seem to replicate this elsewhere, but I've replicated it many times with pip 1.5.4 on testing.
zancas commented 20 hours ago
Sigh... well pelican 3.4.0 renders successfully when used by update_testing_blog.py
zancas commented 19 hours ago
I installed pelican 3.4.0 on rho like this:
root@ip-172-31-47-120:/home/ubuntu# pip uninstall pelican
root@ip-172-31-47-120:/home/ubuntu# pip install -I pelican==3.4.0
then I ran:
python update_production_blog.py
zancas commented 33 minutes ago
No, I think you should close this issue.
| True | the leastauthority.com repo must be restructured to reflect excision of blog source - zancas commented 6 days ago
Actually this is already accomplished on branch: 238_remove_the_blog_2 so simply deploying that branch closes this issue.
zancas
zancas commented 2 days ago
I propose that I:
(1) check out current production master on "testing"
(2) run the servers (flapp and web)
(3) view the blog
(4) checkout 238_remove_the_blog_2
(5) re View the site
(6) run update_testing_blog.py
(7) re View the site
(8) ask for a pair-peer
(9) clone blog_source to production
(10) checkout 238_remove_the_blog_2 on production CAUSING TEMPORARY OUTAGE
(11) run update_prodution_blog.py (fixing the outage)
zancas
zancas commented 2 days ago
I'll start with steps 1-7, now.
zancas
zancas commented 2 days ago
Ammendment:
I set the leastauthority.com repo to a commit which has had its blog_source directory removed (with git rm), I pushed that repo to testing and checked out the commit that did not have the blog_source directory.
The blog_source directory which was in the working directory prior to the checkout was unaffected. Therefore this process will not cause an outage.
zancas
zancas commented 2 days ago
Even without the --delete-output-directory flag pelican removes the target output directory.
zancas
zancas commented 2 days ago
Instead of git checkout 238_... on production we'll merge 238_... into master on github.
We'll git push --verbose production master:master maintaining synchronization.
We'll:
(1) ssh into production
(2) git merge --ff-only master on production
zancas
zancas commented 2 days ago
A clue from /var/log/auth.log on rho:
Jul 8 03:19:43 ip-172-31-47-120 sshd[20735]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Jul 8 03:19:43 ip-172-31-47-120 sshd[20735]: Connection closed by 67.176.52.224 [preauth]
zancas zancas referenced this issue in LeastAuthority/leastauthority.com a day ago
Open
openssh pubkey ECDSA vs RSA mismatch #10
zancas
zancas commented a day ago
OK, I've figured out the SSH authentication issue. (I THINK).
If I:
mv ~/.ssh/known_hosts $BACKUP
and run one of the update_.._blog.py tools, I get the same error that blocked us earlier in the deployment.
If I then ssh website@testing.leastauthority.com, I get:
0 git_branch: master
/home/arc/.ssh :
$ ssh website@testing.leastauthority.com
The authenticity of host 'testing.leastauthority.com (107.21.225.70)' can't be established.
ECDSA key fingerprint is c8:36:84:9f:20:be:3d:71:c9:e4:15:a9:76:4e:91:17.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'testing.leastauthority.com,107.21.225.70' (ECDSA) to the list of known hosts.
Notice that the key used is an ECDSA key.
If I then run:
python ./update_testing_blog.py
I get the same error as I got when there was no known_hosts file.
Now mv known_hosts aside again and run:
130 git_branch: master
/home/arc/.ssh :
$ ssh-keyscan -t rsa testing.leastauthority.com >> ~/.ssh/known_hosts
# testing.leastauthority.com SSH-2.0-OpenSSH_6.6p1 Ubuntu-2ubuntu1
0 git_branch: master
/home/arc/.ssh :
$ cat ~/.ssh/known_hosts
testing.leastauthority.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCWvLycjtQeZJpXbhSm/Y5fzbe3JV36CKehVzJQvqrJYg9oOpCwneXFIrIEhOS5Lg6OwXxe1jCqAYL6r+UOjTvMyGc971FU1d1qID71LcTdAiiI2k+8KeJjuOdH2P8WRIL4XQ21xQbqx/o7kb3OTeWDpBdzqrX4bSRwZ5MyzwccUi8ff9kNGjfQk97dt/e+PfxbjOpMm9JCFWGvcXJLvKo8gQCm1q7oajUzexCo8tuwu0TFcOlXCFUore5j8d/eUo0yvJZwgQuoyoSpok/o5sIRoAdVTM3eufkcy1DM0vFDJKP+OGbt+TLBEQo5ibOWEQTb1jpsPcxLVhcFnEIIjFKV
I still get the same error, which I am confused about, perhaps it's because the keyscan result was matched to a domain name... /me checks this hypothesis....
Yes, the "mv known_hosts" test confirms this, so the first argument to "update_blog" must be matched to a known host (either IPv4 or domain-name).
It seems that for multihoming, and switching servers, we should map to the domain name for infrastructure servers... indeed SSEC2s have publicly routable domain-names as well IIRC, so I'll make a new issue.
Once the public key in "known_hosts" is of type rsa, and once it's mapped to the IPv4 address passed to the function update_blog
the script authenticates as expected.
zancas
zancas commented a day ago
I don't know which software is unable to check ecdsa pubkeys.
paramiko? fabric?
zancas
zancas commented 23 hours ago
Next problem, not yet understood:
website@ip-172-31-47-120:~/blog_source$ python ./render_blog.py
WARNING: LOCALE option doesn't contain a correct value
ERROR: Skipping /home/website/blog_source/README.md: could not find information about 'title'
-> copying /home/website/blog_source/themes/lae/static to /home/website/leastauthority.com/content/blog/theme
-> writing /home/website/leastauthority.com/content/blog/feeds/all.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/essay.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/events.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/letters.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/news.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/press.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/press-releases.atom.xml
-> writing /home/website/leastauthority.com/content/blog/feeds/all-en.atom.xml
-> writing /home/website/leastauthority.com/content/blog/least_authority_releases_s4+_as_part_of_reset_the_net.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_performs_security_audit_for_globaleaks.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_performs_security_audit_for_cryptocat.html
-> writing /home/website/leastauthority.com/content/blog/BLAKE2-harder-better-faster-stronger-than-MD5.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_performs_security_audit_for_spideroak.html
-> writing /home/website/leastauthority.com/content/blog/lafs_summit_san_francisco_nov_11_13.html
-> writing /home/website/leastauthority.com/content/blog/lafs_featured_eff_tech_blog.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_joins_open_invention_network.html
-> writing /home/website/leastauthority.com/content/blog/s4_press_on_linuxbsdos.html
-> writing /home/website/leastauthority.com/content/blog/open_letter_silent_circle.html
-> writing /home/website/leastauthority.com/content/blog/least_authority_press_motherboard_vice_com.html
-> writing /home/website/leastauthority.com/content/blog/lafs_press_computerworld.html
-> writing /home/website/leastauthority.com/content/blog/least-authority-announces-prism-proof-storage-service.html
CRITICAL: 'articles_next_page' is undefined
Traceback (most recent call last):
File "./render_blog.py", line 14, in <module>
render_blog()
File "./render_blog.py", line 10, in render_blog
subprocess.check_call(['pelican', '-v', '-s', 'conf.py'])
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['pelican', '-v', '-s', 'conf.py']' returned non-zero exit status 1
website@ip-172-31-47-120:~/blog_source$
zancas
zancas commented 22 hours ago
By testing on 'testing' I confirm that pelican 3.2.2 has this issue and 3.3.0 does not.
zancas
zancas commented 22 hours ago
Hmmm, here's what testing says about pelican post pip install pelican==3.3.0
## website@ip-10-142-174-67:~/blog_source$ pip show pelican
Name: pelican
Version: 3.3
Location: /usr/local/lib/python2.7/dist-packages
Requires: jinja2, blinker, pygments, docutils, python-dateutil, pytz, six, feedgenerator, unidecode
website@ip-10-142-174-67:~/blog_source$ pelican --version
3.4.0
website@ip-10-142-174-67:~/blog_source$
nathan-at-least
nathan-at-least commented 22 hours ago
What does which pelican say?
zancas
zancas commented 22 hours ago
website@ip-10-142-174-67:~$ which pelican
/usr/local/bin/pelican
nathan-at-least
nathan-at-least commented 22 hours ago
I'm going to install pelican into a fresh virtualenv to see if my local system shows the same version issue.
zancas
zancas commented 22 hours ago
I ran:
pip uninstall pelican
which pelican
on testing and there was no result.
Then I ran:
pip install pelican==3.3.0
and then:
root@ip-10-142-174-67:/home/ubuntu# which pelican
/usr/local/bin/pelican
root@ip-10-142-174-67:/home/ubuntu# pelican --version
3.4.0
## root@ip-10-142-174-67:/home/ubuntu# pip show pelican
Name: pelican
Version: 3.3
Location: /usr/local/lib/python2.7/dist-packages
Requires: jinja2, blinker, pygments, docutils, python-dateutil, pytz, six, feedgenerator, unidecode
root@ip-10-142-174-67:/home/ubuntu#
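One incidental note on the pip show output: printing Version: 3.3 after requesting pelican==3.3.0 is not by itself a mismatch, because trailing zero release segments are insignificant in Python version comparisons, so 3.3 and 3.3.0 compare equal. A simplified, hand-rolled sketch of that rule (real tools follow the full PEP 440 semantics, which also cover pre-, post-, and dev-releases):

```python
def release_key(version):
    """Split a release version on dots and drop trailing zero segments,
    so "3.3" and "3.3.0" normalize to the same comparison key.
    (A deliberately simplified stand-in for PEP 440 release ordering.)"""
    parts = [int(p) for p in version.split(".")]
    while parts and parts[-1] == 0:
        parts.pop()
    return tuple(parts)

assert release_key("3.3") == release_key("3.3.0")
assert release_key("3.4.0") != release_key("3.3.0")
```

The genuinely surprising part of the output is therefore not "Version: 3.3" but the pelican script reporting 3.4.0.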
nathan-at-least
nathan-at-least commented 22 hours ago
If I do a fresh pip install I do not see this issue:
$ mkdir tmpvenv
$ virtualenv !$
virtualenv tmpvenv
Using real prefix '/usr'
New python executable in tmpvenv/bin/python
Installing Setuptools..............................................................................................................................................................................................................................done.
Installing Pip.....................................................................................................................................................................................................................................................................................................................................done.
$ . ./tmpvenv/bin/activate
(tmpvenv)
$ which pip
/home/n/tmp/tmpvenv/bin/pip
(tmpvenv)
$ which pelican
(tmpvenv)
$ pip install 'pelican==3.3.0'
Downloading/unpacking pelican==3.3.0
Downloading pelican-3.3.tar.gz (195kB): 195kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fp%2Fpelican%2Fpelican-3.3.tar.gz
Running setup.py egg_info for package pelican
Downloading/unpacking feedgenerator>=1.6 (from pelican==3.3.0)
Downloading feedgenerator-1.7.tar.gz
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Ff%2Ffeedgenerator%2Ffeedgenerator-1.7.tar.gz
Running setup.py egg_info for package feedgenerator
```
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
```
Downloading/unpacking jinja2>=2.7 (from pelican==3.3.0)
Downloading Jinja2-2.7.3.tar.gz (378kB): 378kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FJ%2FJinja2%2FJinja2-2.7.3.tar.gz
Running setup.py egg_info for package jinja2
```
warning: no files found matching '*' under directory 'custom_fixers'
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
```
Downloading/unpacking pygments (from pelican==3.3.0)
Downloading Pygments-1.6.tar.gz (1.4MB): 1.4MB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPygments%2FPygments-1.6.tar.gz
Running setup.py egg_info for package pygments
Downloading/unpacking docutils (from pelican==3.3.0)
Downloading docutils-0.11.tar.gz (1.6MB): 1.6MB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fd%2Fdocutils%2Fdocutils-0.11.tar.gz
Running setup.py egg_info for package docutils
```
warning: no files found matching 'MANIFEST'
warning: no files found matching '*' under directory 'extras'
warning: no previously-included files matching '.cvsignore' found under directory '*'
warning: no previously-included files matching '*.pyc' found under directory '*'
warning: no previously-included files matching '*~' found under directory '*'
warning: no previously-included files matching '.DS_Store' found under directory '*'
```
Downloading/unpacking pytz>=0a (from pelican==3.3.0)
Downloading pytz-2014.4.tar.bz2 (159kB): 159kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fp%2Fpytz%2Fpytz-2014.4.tar.bz2
Running setup.py egg_info for package pytz
Downloading/unpacking blinker (from pelican==3.3.0)
Downloading blinker-1.3.tar.gz (91kB): 91kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fb%2Fblinker%2Fblinker-1.3.tar.gz
Running setup.py egg_info for package blinker
Downloading/unpacking unidecode (from pelican==3.3.0)
Downloading Unidecode-0.04.16.tar.gz (200kB): 200kB downloaded
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FU%2FUnidecode%2FUnidecode-0.04.16.tar.gz
Running setup.py egg_info for package unidecode
Downloading/unpacking six (from pelican==3.3.0)
Using download cache from /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fs%2Fsix%2Fsix-1.7.3.tar.gz
Running setup.py egg_info for package six
```
no previously-included directories found matching 'documentation/_build'
```
Downloading/unpacking markupsafe (from jinja2>=2.7->pelican==3.3.0)
Downloading MarkupSafe-0.23.tar.gz
Storing download in cache at /home/n/.pip/cache/https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FM%2FMarkupSafe%2FMarkupSafe-0.23.tar.gz
Running setup.py egg_info for package markupsafe
Installing collected packages: pelican, feedgenerator, jinja2, pygments, docutils, pytz, blinker, unidecode, six, markupsafe
Running setup.py install for pelican
```
Installing pelican script to /home/n/tmp/tmpvenv/bin
Installing pelican-import script to /home/n/tmp/tmpvenv/bin
Installing pelican-quickstart script to /home/n/tmp/tmpvenv/bin
Installing pelican-themes script to /home/n/tmp/tmpvenv/bin
```
Running setup.py install for feedgenerator
```
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
```
Running setup.py install for jinja2
```
warning: no files found matching '*' under directory 'custom_fixers'
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
```
Running setup.py install for pygments
```
Installing pygmentize script to /home/n/tmp/tmpvenv/bin
```
Running setup.py install for docutils
changing mode of build/scripts-2.7/rst2html.py from 644 to 755
changing mode of build/scripts-2.7/rst2s5.py from 644 to 755
changing mode of build/scripts-2.7/rst2latex.py from 644 to 755
changing mode of build/scripts-2.7/rst2xetex.py from 644 to 755
changing mode of build/scripts-2.7/rst2man.py from 644 to 755
changing mode of build/scripts-2.7/rst2xml.py from 644 to 755
changing mode of build/scripts-2.7/rst2pseudoxml.py from 644 to 755
changing mode of build/scripts-2.7/rstpep2html.py from 644 to 755
changing mode of build/scripts-2.7/rst2odt.py from 644 to 755
changing mode of build/scripts-2.7/rst2odt_prepstyles.py from 644 to 755
```
warning: no files found matching 'MANIFEST'
warning: no files found matching '*' under directory 'extras'
warning: no previously-included files matching '.cvsignore' found under directory '*'
warning: no previously-included files matching '*.pyc' found under directory '*'
warning: no previously-included files matching '*~' found under directory '*'
warning: no previously-included files matching '.DS_Store' found under directory '*'
changing mode of /home/n/tmp/tmpvenv/bin/rst2html.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2man.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2pseudoxml.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2odt.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rstpep2html.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2xml.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2xetex.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2latex.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2odt_prepstyles.py to 755
changing mode of /home/n/tmp/tmpvenv/bin/rst2s5.py to 755
```
Running setup.py install for pytz
Running setup.py install for blinker
Running setup.py install for unidecode
Running setup.py install for six
```
no previously-included directories found matching 'documentation/_build'
```
Running setup.py install for markupsafe
```
building 'markupsafe._speedups' extension
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c markupsafe/_speedups.c -o build/temp.linux-x86_64-2.7/markupsafe/_speedups.o
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/markupsafe/_speedups.o -o build/lib.linux-x86_64-2.7/markupsafe/_speedups.so
```
Successfully installed pelican feedgenerator jinja2 pygments docutils pytz blinker unidecode six markupsafe
Cleaning up...
(tmpvenv)
$ which pelican
/home/n/tmp/tmpvenv/bin/pelican
(tmpvenv)
$ pelican --version
3.3.0
(tmpvenv)
nathan-at-least
nathan-at-least commented 21 hours ago
$ pip show pelican
Name: pelican
Version: 3.3
Location: /home/n/tmp/tmpvenv/lib/python2.7/site-packages
Requires: feedgenerator, jinja2, pygments, docutils, pytz, blinker, unidecode, six
(tmpvenv)
nathan-at-least
nathan-at-least commented 21 hours ago
What happens if you tell bash to hash -r? Could it possibly be caching a reference to a different script?
zancas
zancas commented 21 hours ago
root@ip-10-142-174-67:/home/ubuntu# pelican --version
3.4.0
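A disagreement like this, where pip show reports 3.3 but the pelican script on $PATH reports 3.4.0, is the signature of a stale console script or a second copy earlier on the path. One way to check it from Python (a sketch, not the procedure used in this thread; version_pair is an illustrative name, and importlib.metadata needs Python 3.8+, whereas these machines ran Python 2.7, where pkg_resources filled the same role):

```python
import importlib
from importlib import metadata

def version_pair(dist_name, module_name):
    """Return (metadata_version, module_version).

    metadata_version is what `pip show` reads from the installed
    distribution's metadata; module_version is what the imported code
    itself reports. If the two differ, the console script being run is
    probably not backed by the distribution pip thinks is installed."""
    meta_version = metadata.version(dist_name)
    module = importlib.import_module(module_name)
    module_version = getattr(module, "__version__", None)
    return meta_version, module_version
```

Running this inside the same interpreter the script uses pins down which copy is actually being imported.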
nathan-at-least
nathan-at-least commented 21 hours ago
Za linked to https://stackoverflow.com/questions/5226311/installing-specific-package-versions-with-pip
Does pip uninstall pelican && pip install 'pelican==3.3.0' help?
zancas
zancas commented 21 hours ago
The -I flag seems to make pip install ignore previously satisfied dependencies.
But I still have the same result.
zancas
zancas commented 21 hours ago
The above invocation with the single quotes doesn't help.
zancas
zancas commented 21 hours ago
How do I find out the source URL of a pip package?
Maybe, as with that stackoverflow ticket... hmmm... no that doesn't make sense.
zancas
zancas commented 21 hours ago
After:
root@ip-10-142-174-67:/home/ubuntu# pip uninstall pelican && pip install -I 'pelican==3.2.2'
root@ip-10-142-174-67:/home/ubuntu# pelican --version
3.2.2
root@ip-10-142-174-67:/home/ubuntu# pip show pelican
Name: pelican
Version: 3.2.2
Location: /usr/local/lib/python2.7/dist-packages
Requires: feedgenerator, jinja2, pygments, docutils, pytz, blinker, unidecode, six
So it's specific to 3.3.0!!
BTW 3.4.0 is "LATEST".
zancas
zancas commented 21 hours ago
And, also specific to the testing machine (so far).
zancas
zancas commented 21 hours ago
Where does one report pelican/pip bugs....
zancas
zancas commented 21 hours ago
Hmmm... this link is live:
https://pypi.python.org/pypi/pelican/3.2.2
nathan-at-least
nathan-at-least commented 21 hours ago
Why does it work on my machine?
$ pip --version
pip 1.4.1 from /home/n/tmp/tmpvenv/lib/python2.7/site-packages (python 2.7)
(tmpvenv)
zancas
zancas commented 21 hours ago
Ha! Jynx:
root@ip-10-142-174-67:/home/ubuntu# pip --version
pip 1.5.4 from /usr/lib/python2.7/dist-packages (python 2.7)
zancas
zancas commented 21 hours ago
(LeastAuthority_env) 0 git_branch: master
/home/arc/LeastAuthority_env/LeastAuthority/website/leastauthority.com :
$ pip --version
pip 1.3.1 from /home/arc/LeastAuthority_env/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg (python 2.7)
zancas
zancas commented 21 hours ago
I'll try changing the version of pip on testing...
zancas
zancas commented 21 hours ago
Sigh... how to install a particular apt package version...
zancas
zancas commented 21 hours ago
http://debian-handbook.info/browse/wheezy/sect.apt-get.html
zancas
zancas commented 21 hours ago
This link is not available:
https://pypi.python.org/pypi/pelican/3.3.0
So how did my local pip install 3.3.0?
/me runs:
(LeastAuthority_env) 0 git_branch: master
/home/arc/LeastAuthority_env/LeastAuthority/website/leastauthority.com :
$ pip install -v -I pelican==3.3.0 | less
zancas
zancas commented 21 hours ago
Phew, even a single -v is pretty verbose!
zancas
zancas commented 20 hours ago
I can't seem to replicate this elsewhere, but I've replicated it many times with pip 1.5.4 on testing.
zancas
zancas commented 20 hours ago
Sigh... well pelican 3.4.0 renders successfully when used by update_testing_blog.py
zancas
zancas commented 19 hours ago
I installed pelican 3.4.0 on rho like this:
root@ip-172-31-47-120:/home/ubuntu# pip uninstall pelican
root@ip-172-31-47-120:/home/ubuntu# pip install -I pelican==3.4.0
then I ran:
python update_production_blog.py
zancas
zancas commented 33 minutes ago
Nej, I think you should close this issue.
| reli | the leastauthority com repo must be restructured to reflect excision of blog source zancas commented days ago actually this is already accomplished on branch remove the blog so simply deploying that branch closes this issue zancas zancas commented days ago i propose that i check out current production master on testing run the servers flapp and web view the blog checkout remove the blog re view the site run update testing blog py re view the site ask for a pair peer clone blog source to production checkout remove the blog on production causing temporary outage run update prodution blog py fixing the outage zancas zancas commented days ago i ll start with steps now zancas zancas commented days ago ammendment i set the leastauthority com repo to a commit which has had its blog source directory removed with git rm i pushed that repo to testing and checked out the commit that did not have the blog source directory the blog source directory which was in the working directory prior to the checkout was unaffected therefore this process will not cause an outage zancas zancas commented days ago even without the delete output directory flag pelican removes the target output directory zancas zancas commented days ago instead of git checkout on production we ll merge into master on github we ll git push verbose production master master maintaining synchronization we ll ssh into production git merge ff only master on production zancas zancas commented days ago a clue from var log auth log on rho jul ip sshd error could not load host key etc ssh ssh host key jul ip sshd connection closed by zancas zancas referenced this issue in leastauthority leastauthority com a day ago open openssh pubkey ecdsa vs rsa mismatch zancas zancas commented a day ago ok i ve figured out the ssh authentication issue i think if i mv ssh known hosts backup and run one of the update blog py tools i get the same error that blocked us earlier in the deployment if i then ssh website testing 
leastauthority com i get git branch master home arc ssh ssh website testing leastauthority com the authenticity of host testing leastauthority com can t be established ecdsa key fingerprint is be are you sure you want to continue connecting yes no yes warning permanently added testing leastauthority com ecdsa to the list of known hosts notice that the key used is an ecdsa key if i then run python update testing blog py i get the same error as i got when there was no known hosts file now mv known hosts aside again and run git branch master home arc ssh ssh keyscan t rsa testing leastauthority com ssh known hosts testing leastauthority com ssh openssh ubuntu git branch master home arc ssh cat ssh known hosts testing leastauthority com ssh rsa e ogbt i still get the same error which i am confused about perhaps it s because the keyscan result was matched to a domain name me checks this hypothesis yes the mv known hosts test confirms this so the first argument to update blog must be matched to a known host either or domain name it seems that for multihoming and switching servers we should map to the domain name for infrastructure servers indeed have publicly routable domain names as well iirc so i ll make a new issue once the public key in known hosts is of type rsa and once it s mapped to the address passed to the function update blog the script authenticates as expected zancas zancas commented a day ago i don t know which software is unable to check ecdsa pubkeys paramiko fabric zancas zancas commented hours ago next problem not yet understood website ip blog source python render blog py warning locale option doesn t contain a correct value error skipping home website blog source readme md could not find information about title copying home website blog source themes lae static to home website leastauthority com content blog theme writing home website leastauthority com content blog feeds all atom xml writing home website leastauthority com content blog feeds essay 
atom xml writing home website leastauthority com content blog feeds events atom xml writing home website leastauthority com content blog feeds letters atom xml writing home website leastauthority com content blog feeds news atom xml writing home website leastauthority com content blog feeds press atom xml writing home website leastauthority com content blog feeds press releases atom xml writing home website leastauthority com content blog feeds all en atom xml writing home website leastauthority com content blog least authority releases as part of reset the net html writing home website leastauthority com content blog least authority performs security audit for globaleaks html writing home website leastauthority com content blog least authority performs security audit for cryptocat html writing home website leastauthority com content blog harder better faster stronger than html writing home website leastauthority com content blog least authority performs security audit for spideroak html writing home website leastauthority com content blog lafs summit san francisco nov html writing home website leastauthority com content blog lafs featured eff tech blog html writing home website leastauthority com content blog least authority joins open invention network html writing home website leastauthority com content blog press on linuxbsdos html writing home website leastauthority com content blog open letter silent circle html writing home website leastauthority com content blog least authority press motherboard vice com html writing home website leastauthority com content blog lafs press computerworld html writing home website leastauthority com content blog least authority announces prism proof storage service html critical articles next page is undefined traceback most recent call last file render blog py line in render blog file render blog py line in render blog subprocess check call file usr lib subprocess py line in check call raise calledprocesserror retcode cmd 
subprocess calledprocesserror command returned non zero exit status website ip blog source zancas zancas commented hours ago by testing on testing i confirm that pelican has this issue and does not zancas zancas commented hours ago hmmm here s what testing says about pelican post pip install pelican website ip blog source pip show pelican name pelican version location usr local lib dist packages requires blinker pygments docutils python dateutil pytz six feedgenerator unidecode website ip blog source pelican version website ip blog source nathan at least nathan at least commented hours ago what does which pelican say zancas zancas commented hours ago website ip which pelican usr local bin pelican nathan at least nathan at least commented hours ago i m going to install pelican into a fresh virtualenv to see if my local system shows the same version issue zancas zancas commented hours ago i ran pip uninstall pelican which pelican on testing and there was no result then i ran pip install pelican and then root ip home ubuntu which pelican usr local bin pelican root ip home ubuntu pelican version root ip home ubuntu pip show pelican name pelican version location usr local lib dist packages requires blinker pygments docutils python dateutil pytz six feedgenerator unidecode root ip home ubuntu nathan at least nathan at least commented hours ago if i do a fresh pip install i do not see this issue mkdir tmpvenv virtualenv virtualenv tmpvenv using real prefix usr new python executable in tmpvenv bin python installing setuptools done installing pip done tmpvenv bin activate tmpvenv which pip home n tmp tmpvenv bin pip tmpvenv which pelican tmpvenv pip install pelican downloading unpacking pelican downloading pelican tar gz downloaded storing download in cache at home n pip cache https python org tar gz running setup py egg info for package pelican downloading unpacking feedgenerator from pelican downloading feedgenerator tar gz storing download in cache at home n pip cache 
https python org tar gz running setup py egg info for package feedgenerator warning no files found matching under directory tests warning no previously included files matching pyc found under directory tests downloading unpacking from pelican downloading tar gz downloaded storing download in cache at home n pip cache https python org tar gz running setup py egg info for package warning no files found matching under directory custom fixers warning no previously included files matching found under directory docs build warning no previously included files matching pyc found under directory warning no previously included files matching pyc found under directory docs warning no previously included files matching pyo found under directory warning no previously included files matching pyo found under directory docs downloading unpacking pygments from pelican downloading pygments tar gz downloaded storing download in cache at home n pip cache https python org tar gz running setup py egg info for package pygments downloading unpacking docutils from pelican downloading docutils tar gz downloaded storing download in cache at home n pip cache https python org tar gz running setup py egg info for package docutils warning no files found matching manifest warning no files found matching under directory extras warning no previously included files matching cvsignore found under directory warning no previously included files matching pyc found under directory warning no previously included files matching found under directory warning no previously included files matching ds store found under directory downloading unpacking pytz from pelican downloading pytz tar downloaded storing download in cache at home n pip cache https python org tar running setup py egg info for package pytz downloading unpacking blinker from pelican downloading blinker tar gz downloaded storing download in cache at home n pip cache https python org tar gz running setup py egg info for package blinker 
downloading unpacking unidecode from pelican downloading unidecode tar gz downloaded storing download in cache at home n pip cache https python org tar gz running setup py egg info for package unidecode downloading unpacking six from pelican using download cache from home n pip cache https python org tar gz running setup py egg info for package six no previously included directories found matching documentation build downloading unpacking markupsafe from pelican downloading markupsafe tar gz storing download in cache at home n pip cache https python org tar gz running setup py egg info for package markupsafe installing collected packages pelican feedgenerator pygments docutils pytz blinker unidecode six markupsafe running setup py install for pelican installing pelican script to home n tmp tmpvenv bin installing pelican import script to home n tmp tmpvenv bin installing pelican quickstart script to home n tmp tmpvenv bin installing pelican themes script to home n tmp tmpvenv bin running setup py install for feedgenerator warning no files found matching under directory tests warning no previously included files matching pyc found under directory tests running setup py install for warning no files found matching under directory custom fixers warning no previously included files matching found under directory docs build warning no previously included files matching pyc found under directory warning no previously included files matching pyc found under directory docs warning no previously included files matching pyo found under directory warning no previously included files matching pyo found under directory docs running setup py install for pygments installing pygmentize script to home n tmp tmpvenv bin running setup py install for docutils changing mode of build scripts py from to changing mode of build scripts py from to changing mode of build scripts py from to changing mode of build scripts py from to changing mode of build scripts py from to changing mode of 
build scripts py from to changing mode of build scripts py from to changing mode of build scripts py from to changing mode of build scripts py from to changing mode of build scripts prepstyles py from to warning no files found matching manifest warning no files found matching under directory extras warning no previously included files matching cvsignore found under directory warning no previously included files matching pyc found under directory warning no previously included files matching found under directory warning no previously included files matching ds store found under directory changing mode of home n tmp tmpvenv bin py to changing mode of home n tmp tmpvenv bin py to changing mode of home n tmp tmpvenv bin py to changing mode of home n tmp tmpvenv bin py to changing mode of home n tmp tmpvenv bin py to changing mode of home n tmp tmpvenv bin py to changing mode of home n tmp tmpvenv bin py to changing mode of home n tmp tmpvenv bin py to changing mode of home n tmp tmpvenv bin prepstyles py to changing mode of home n tmp tmpvenv bin py to running setup py install for pytz running setup py install for blinker running setup py install for unidecode running setup py install for six no previously included directories found matching documentation build running setup py install for markupsafe building markupsafe speedups extension linux gnu gcc pthread fno strict aliasing dndebug g fwrapv wall wstrict prototypes fpic i usr include c markupsafe speedups c o build temp linux markupsafe speedups o linux gnu gcc pthread shared wl wl bsymbolic functions wl z relro fno strict aliasing dndebug g fwrapv wall wstrict prototypes d fortify source g fstack protector param ssp buffer size wformat werror format security build temp linux markupsafe speedups o o build lib linux markupsafe speedups so successfully installed pelican feedgenerator pygments docutils pytz blinker unidecode six markupsafe cleaning up tmpvenv which pelican home n tmp tmpvenv bin pelican tmpvenv 
pelican version tmpvenv nathan at least nathan at least commented hours ago pip show pelican name pelican version location home n tmp tmpvenv lib site packages requires feedgenerator pygments docutils pytz blinker unidecode six tmpvenv nathan at least nathan at least commented hours ago what happens if you tell bash to hash r could it possible be caching a reference to a different script zancas zancas commented hours ago root ip home ubuntu pelican version nathan at least nathan at least commented hours ago za linked to does pip uninstall pelican pip install pelican help zancas zancas commented hours ago the i flag seems to make pip install i ignore previously met dependencies but i still have the same result zancas zancas commented hours ago the above invocation with the s doesn t help zancas zancas commented hours ago how do i find out the source url of a pip package maybe as with that stackoverflow ticket hmmm no that doesn t make sense zancas zancas commented hours ago after root ip home ubuntu pip uninstall pelican pip install i pelican root ip home ubuntu pelican version root ip home ubuntu pip show pelican name pelican version location usr local lib dist packages requires feedgenerator pygments docutils pytz blinker unidecode six so it s specific to btw is latest zancas zancas commented hours ago and also specific to the testing machine so far zancas zancas commented hours ago where does one report pelican pip bugs zancas zancas commented hours ago hmmm this link is live nathan at least nathan at least commented hours ago why does it work on my machine pip version pip from home n tmp tmpvenv lib site packages python tmpvenv zancas zancas commented hours ago ha jynx root ip home ubuntu pip version pip from usr lib dist packages python zancas zancas commented hours ago leastauthority env git branch master home arc leastauthority env leastauthority website leastauthority com pip version pip from home arc leastauthority env lib site packages pip egg python 
zancas zancas commented hours ago i ll try changing the version of pip on testing zancas zancas commented hours ago sigh how to install a particular apt package version zancas zancas commented hours ago zancas zancas commented hours ago this link is not available so how did my local pip install me runs leastauthority env git branch master home arc leastauthority env leastauthority website leastauthority com pip install v i pelican less zancas zancas commented hours ago phewf even v is pretty verbose zancas zancas commented hours ago i can t seem to replicate this elsewhere but i ve replicated it many times with pip on testing zancas zancas commented hours ago sigh well pelican renders successfully when used by update testing blog py zancas zancas commented hours ago i installed pelican on rho like this root ip home ubuntu pip uninstall pelican root ip home ubuntu pip install i pelican then i ran python update production blog py zancas zancas commented minutes ago nej i think you should close this issue | 1 |
108,140 | 9,276,127,282 | IssuesEvent | 2019-03-20 01:30:47 | knative/serving | https://api.github.com/repos/knative/serving | closed | test/upgrade.TestRunLatestServicePostUpgrade is flaky | area/test-and-release kind/bug | ## In what area(s)?
/area test-and-release
## Expected Behavior
test/upgrade.TestRunLatestServicePostUpgrade passes 100% of the time.
## Actual Behavior
test/upgrade.TestRunLatestServicePostUpgrade is flaky, instead.
/assign @vagababov
18,344 | 3,686,857,022 | IssuesEvent | 2016-02-25 04:06:04 | kumulsoft/Fixed-Assets | https://api.github.com/repos/kumulsoft/Fixed-Assets | closed | Import Utility - Stage 5 Posting | bug Discussion Fixed Ready for testing URGENT | When posting Asset Amount = Transaction Amount = Asset Value

28,166 | 11,596,584,293 | IssuesEvent | 2020-02-24 19:14:26 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | opened | [Android] Safe Browsing doesn't work in android-core | android-related security | ## Description <!-- Provide a brief description of the issue -->
Safe Browsing isn't blocking anything in android core.
## Steps to reproduce <!-- Please add a series of steps to reproduce the issue -->
1. Visit https://testsafebrowsing.appspot.com/
2. Try all of the links in the "Webpage Warnings"
## Actual result <!-- Please add screenshots if needed -->
Not blocked.
## Expected result
Should be showing the red interstitial pages just like on desktop.
## Issue reproduces how often <!-- [Easily reproduced/Intermittent issue/No steps to reproduce] -->
always
## Issue happens on <!-- Mention yes or no -->
- Current Play Store version? no
- Beta build? yes
## Device details
- Device (Phone, Tablet, Phablet): Pixel 3a
- Android version: 10
## Brave version
```
Brave 1.5.90 Chromium: 80.0.3987.100 (Official Build) beta (32-bit)
Revision 3f00c26d457663a424865bbef1179f72eec1b9fe-refs/branch-heads/3987@{#864}
OS Android 10; Pixel 3a Build/QQ1A.200205.002
``` | True | [Android] Safe Browsing doesn't work in android-core - ## Description <!-- Provide a brief description of the issue -->
Safe Browsing isn't blocking anything in android core.
## Steps to reproduce <!-- Please add a series of steps to reproduce the issue -->
1. Visit https://testsafebrowsing.appspot.com/
2. Try all of the links in the "Webpage Warnings"
## Actual result <!-- Please add screenshots if needed -->
Not blocked.
## Expected result
Should be showing the red interstitial pages just like on desktop.
## Issue reproduces how often <!-- [Easily reproduced/Intermittent issue/No steps to reproduce] -->
always
## Issue happens on <!-- Mention yes or no -->
- Current Play Store version? no
- Beta build? yes
## Device details
- Device (Phone, Tablet, Phablet): Pixel 3a
- Android version: 10
## Brave version
```
Brave 1.5.90 Chromium: 80.0.3987.100 (Official Build) beta (32-bit)
Revision 3f00c26d457663a424865bbef1179f72eec1b9fe-refs/branch-heads/3987@{#864}
OS Android 10; Pixel 3a Build/QQ1A.200205.002
``` | non_reli | safe browsing doesn t work in android core description safe browsing isn t blocking anything in android core steps to reproduce visit try all of the links in the webpage warnings actual result not blocked expected result should be showing the red interstitial pages just like on desktop issue reproduces how often always issue happens on current play store version no beta build yes device details device phone tablet phablet pixel android version brave version brave chromium official build beta bit revision refs branch heads os android pixel build | 0 |
1,534 | 16,773,505,407 | IssuesEvent | 2021-06-14 17:41:16 | emmamei/cdkey | https://api.github.com/repos/emmamei/cdkey | opened | What is difference between windNormal and windAmount? | bug reliabilityfix | There is a variable windAmount and windNormal; what's the difference?
Probably windNormal is the *normal* winding and windAmount adjusts based on whatever - but windAmount probably does *not* need to be anything but a local variable. | True | What is difference between windNormal and windAmount? - There is a variable windAmount and windNormal; what's the difference?
Probably windNormal is the *normal* winding and windAmount adjusts based on whatever - but windAmount probably does *not* need to be anything but a local variable. | reli | what is difference between windnormal and windamount there is a variable windamount and windnormal what s the difference probably windnormal is the normal winding and windamount adjust based on whatever but windamount probably does not need to be anything but a local variable | 1
10,459 | 26,992,473,992 | IssuesEvent | 2023-02-09 21:08:58 | MicrosoftDocs/architecture-center | https://api.github.com/repos/MicrosoftDocs/architecture-center | closed | Add numbers to diagram | assigned-to-author triaged architecture-center/svc example-scenario/subsvc Pri1 | There is a numerical listing below the diagram; it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a
* Version Independent ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a
* Content: [Network-hardened web app - Azure Example Scenarios](https://docs.microsoft.com/en-us/azure/architecture/example-scenario/security/hardened-web-app)
* Content Source: [docs/example-scenario/security/hardened-web-app.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/example-scenario/security/hardened-web-app.yml)
* Service: **architecture-center**
* Sub-service: **example-scenario**
* GitHub Login: @damaccar
* Microsoft Alias: **damaccar** | 1.0 | Add numbers to diagram - There is a numerical listing below the diagram; it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a
* Version Independent ID: d2c472ec-9be4-6e0e-eac6-0236e2e1044a
* Content: [Network-hardened web app - Azure Example Scenarios](https://docs.microsoft.com/en-us/azure/architecture/example-scenario/security/hardened-web-app)
* Content Source: [docs/example-scenario/security/hardened-web-app.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/example-scenario/security/hardened-web-app.yml)
* Service: **architecture-center**
* Sub-service: **example-scenario**
* GitHub Login: @damaccar
* Microsoft Alias: **damaccar** | non_reli | add numbers to diagram there is a numerical listing below the diagram it would be helpful if the diagram showed the numbers so that the text descriptions could be more easily correlated document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service example scenario github login damaccar microsoft alias damaccar | 0 |
2,036 | 22,782,792,503 | IssuesEvent | 2022-07-08 22:20:02 | neondatabase/neon | https://api.github.com/repos/neondatabase/neon | opened | There should be no way of creating | t/bug a/reliability c/storage | Follow-up of the last failures in https://github.com/neondatabase/neon/issues/1667#issuecomment-1179001544, see also #2062.
Looking at our `neon_local timeline branch` command, it works entirely with pageserver: https://github.com/neondatabase/neon/blob/39d86ed29e9a2887e6dc339fbb0eeb6040c12cc8/neon_local/src/main.rs#L695-L703
Moreover, the test runs it with `--ancestor-branch-name` parameter only, no `--ancestor-start-lsn`. So it's the pageserver who decides what LSN to create the branch at. Hence, if the pageserver is not fully caught up with safekeepers, the branch starts a little earlier than it should, truncating parts of the last transaction (effectively erasing it from the second timeline, I guess).
If so, I argue it's not a flaky test, it's a design bug. We should have clear semantics of what "branching off a named branch" is (probably in terms of `commit_lsn`/`flush_lsn`/whatever on Safekeepers) and there should be no way to violate it by misusing the API. For example, there should be no way to call the pageserver branch creation API without specifying not only the exact parent timeline, but the exact LSN in it.
I suspect there may be other similar bugs lurking around; the pageserver is probably used as the source of truth in multiple places (see e.g. #809). I suspect they would be impossible for our end users to debug unless we expose both safekeepers/pageserver's LSNs somehow, and very hard if we do. | True | There should be no way of creating - Follow-up of the last failures in https://github.com/neondatabase/neon/issues/1667#issuecomment-1179001544, see also #2062.
Looking at our `neon_local timeline branch` command, it works entirely with pageserver: https://github.com/neondatabase/neon/blob/39d86ed29e9a2887e6dc339fbb0eeb6040c12cc8/neon_local/src/main.rs#L695-L703
Moreover, the test runs it with `--ancestor-branch-name` parameter only, no `--ancestor-start-lsn`. So it's the pageserver who decides what LSN to create the branch at. Hence, if the pageserver is not fully caught up with safekeepers, the branch starts a little earlier than it should, truncating parts of the last transaction (effectively erasing it from the second timeline, I guess).
If so, I argue it's not a flaky test, it's a design bug. We should have clear semantics of what "branching off a named branch" is (probably in terms of `commit_lsn`/`flush_lsn`/whatever on Safekeepers) and there should be no way to violate it by misusing the API. For example, there should be no way to call the pageserver branch creation API without specifying not only the exact parent timeline, but the exact LSN in it.
I suspect there may be other similar bugs lurking around; the pageserver is probably used as the source of truth in multiple places (see e.g. #809). I suspect they would be impossible for our end users to debug unless we expose both safekeepers/pageserver's LSNs somehow, and very hard if we do. | reli | there should be no way of creating follow up of the last failures in see also looking at our neon local timeline branch command it works entirely with pageserver moreover the test runs it with ancestor branch name parameter only no ancestor start lsn so it s the pageserver who decides what lsn to create the branch at hence if the pageserver is not fully caught up with safekeepers the branch starts a little earlier than it should truncating parts of the last transaction effectively erasing it from the second timeline i guess if so i argue it s not flaky test it s a design bug we should have a clear semantics of what is branching off a named branch is probably in terms of commit lsn flush lsn whatever on safekeepers and there should be no way to violate it by misusing the api for example there should no way to call the pageserver branch creation api without specifying not only the exact parent timeline but the exact lsn in it i suspect there may be other similar bugs lurking around the pageserver is probably used as the source of truth in multiple places see e g i suspect they would be impossible for our end users to debug unless we expose both safekeepers pageserver s lsns somehow and very hard if we do | 1 |
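The misuse-resistant branch-creation API this issue argues for can be sketched as follows (a sketch with hypothetical names, not the actual pageserver API): the branch point is a mandatory, typed argument, so the caller — who can consult the safekeepers' `commit_lsn` — decides where the branch starts instead of the pageserver silently picking whatever LSN it has caught up to.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Lsn:
    """A concrete log sequence number; callers must construct one explicitly."""
    value: int


def create_branch(parent_timeline: str, ancestor_lsn: Lsn) -> dict:
    # Requiring an explicit Lsn removes the "no --ancestor-start-lsn"
    # code path entirely: there is no default for the server to fill in.
    if not isinstance(ancestor_lsn, Lsn):
        raise TypeError("ancestor_lsn must be an explicit Lsn")
    return {"parent": parent_timeline, "start_lsn": ancestor_lsn.value}
```

With this shape, calling `create_branch("main", None)` fails immediately instead of creating a branch at an unintended point.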
3,022 | 31,623,332,008 | IssuesEvent | 2023-09-06 02:02:13 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | closed | [FEA] Support Split and Retry for GpuTopN | feature request reliability | **Is your feature request related to a problem? Please describe.**
`GpuTopN` does not support Retry or SplitAndRetry. We really should support it.
**Describe the solution you'd like**
`GpuTopN` has some state that is stored as the result of reading a new batch from . It should be fairly simple to do all of the processing in a retry block that returns the result that needs to be saved for the next iteration. The retry block could also split the input data as needed. | True | [FEA] Support Split and Retry for GpuTopN - **Is your feature request related to a problem? Please describe.**
`GpuTopN` does not support Retry or SplitAndRetry. We really should support it.
**Describe the solution you'd like**
`GpuTopN` has some state that is stored as the result of reading a new batch from . It should be fairly simple to do all of the processing in a retry block that returns the result that needs to be saved for the next iteration. The retry block could also split the input data as needed. | reli | support split and retry for gputopn is your feature request related to a problem please describe gputopn does not support retry or splitandretry we really should support it describe the solution you d like gputopn has some state that is stored as the result of reading a new batch from it should be fairly simple to do all of the processing in a retry block that returns that result that needs to be saved for the next iteration the retry block could also split the input data as needed | 1
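The split-and-retry control flow requested in this issue can be illustrated with a small sketch (plain Python with a simulated OOM; `process` and `split` are caller-supplied stand-ins, not the real spark-rapids/RMM machinery): on an allocation failure the batch is split in half and each piece is retried, preserving every partial result.

```python
class OutOfGpuMemory(Exception):
    """Stand-in for a GPU allocation failure (e.g. an RMM OOM)."""


def process_with_split_retry(batch, process, split, max_depth=8):
    """Run `process` on `batch`; on OOM, split the batch and retry
    each piece, collecting the partial results in order."""
    if max_depth == 0:
        # A batch that cannot be split any further is a hard failure.
        raise OutOfGpuMemory("batch can no longer be split")
    try:
        return [process(batch)]
    except OutOfGpuMemory:
        results = []
        for part in split(batch):
            results.extend(
                process_with_split_retry(part, process, split, max_depth - 1))
        return results
```

For `GpuTopN` specifically, `process` would correspond to the per-batch sort/take step, and the state carried between iterations would be whatever the retry block returns.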
461,366 | 13,229,038,107 | IssuesEvent | 2020-08-18 07:27:22 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | Navigation via the map is broken on Team Assignment page | Priority: High | It seems that the map component is generating an invalid url when one clicks on the map (when trying to drill down jurisdictions). | 1.0 | Navigation via the map is broken on Team Assignment page - It seems that the map component is generating an invalid url when one clicks on the map (when trying to drill down jurisdictions). | non_reli | navigation via the map is broken on team assignment page it seems that the map component is generating an invalid url when one clicks on the map when trying to drill down jurisdictions | 0 |
228,610 | 18,244,712,658 | IssuesEvent | 2021-10-01 16:48:36 | ValveSoftware/Proton | https://api.github.com/repos/ValveSoftware/Proton | closed | Doom 2016 (379720) mouse sensitivity not uniform | Need Retest Whitelist Update Request | # Compatibility Report
- Name of the game with compatibility issues: DOOM 2016
- Steam AppID of the game: 379720
## System Information
- GPU: RTX 2070
- Driver/LLVM version: nvidia 418.43
- Kernel version: 5.0.2
- Link to full system information report as [Gist](https://gist.github.com/axb993/1335f879346877eb413d84bae1ebbefa)
- Proton version: 3.16-8 Beta
## I confirm:
- [ Y ] that I haven't found an existing compatibility report for this game.
- [ Y ] that I have checked whether there are updates for my system available.
(Cannot attach log as it's ~70MB, I can upload to Drive or something if necessary)
## Symptoms
Mouse is more sensitive when moving to the right than any other direction. Trying to move the cursor to the left edge of the screen results in it hovering near the edge before moving slightly back to the right. The cursor also starts in the top left corner of the screen.
[I recorded the problem in case that description doesn't make sense.](https://i.imgur.com/XNtLyrv.mp4)
This happens with Vulkan and OpenGL, fullscreen or windowed. Using Proton 3.16-8 Beta as both 3.16-4 and 3.7-8 Beta do not launch, and 3.7-8 hangs on a black screen. Everything was working perfectly before today.
## Reproduction
Start game, move mouse. Thought this could be a duplicate of #147 but changing polling rate/DPI has no effect. | 1.0 | Doom 2016 (379720) mouse sensitivity not uniform - # Compatibility Report
- Name of the game with compatibility issues: DOOM 2016
- Steam AppID of the game: 379720
## System Information
- GPU: RTX 2070
- Driver/LLVM version: nvidia 418.43
- Kernel version: 5.0.2
- Link to full system information report as [Gist](https://gist.github.com/axb993/1335f879346877eb413d84bae1ebbefa)
- Proton version: 3.16-8 Beta
## I confirm:
- [ Y ] that I haven't found an existing compatibility report for this game.
- [ Y ] that I have checked whether there are updates for my system available.
(Cannot attach log as it's ~70MB, I can upload to Drive or something if necessary)
## Symptoms
Mouse is more sensitive when moving to the right than any other direction. Trying to move the cursor to the left edge of the screen results in it hovering near the edge before moving slightly back to the right. The cursor also starts in the top left corner of the screen.
[I recorded the problem in case that description doesn't make sense.](https://i.imgur.com/XNtLyrv.mp4)
This happens with Vulkan and OpenGL, fullscreen or windowed. Using Proton 3.16-8 Beta as both 3.16-4 and 3.7-8 Beta do not launch, and 3.7-8 hangs on a black screen. Everything was working perfectly before today.
## Reproduction
Start game, move mouse. Thought this could be a duplicate of #147 but changing polling rate/DPI has no effect. | non_reli | doom mouse sensitivity not uniform compatibility report name of the game with compatibility issues doom steam appid of the game system information gpu rtx driver llvm version nvidia kernel version link to full system information report as proton version beta i confirm that i haven t found an existing compatibility report for this game that i have checked whether there are updates for my system available cannot attach log as it s i can upload to drive or something if necessary symptoms mouse is more sensitive when moving to the right than any other direction trying to move the cursor to the left edge of the screen results in it hovering near the edge before moving slightly back to the right the cursor also starts in the top left corner of the screen this happens with vulkan and opengl fullscreen or windowed using proton beta as both and beta do not launch and hangs on a black screen everything was working perfectly before today reproduction start game move mouse thought this could be a duplicate of but changing polling rate dpi has no effect | 0 |
148,404 | 5,681,232,160 | IssuesEvent | 2017-04-13 05:25:32 | psouza4/mediacentermaster | https://api.github.com/repos/psouza4/mediacentermaster | closed | Process New Downloads Does Not Tell Kodi To Update Library | Affects-General-Usability Component-Functionality Feature-Downloads Feature-Fetching Feature-TV Fixed / Resolved Player-Kodi-XBMC Priority-Low Type-Bug | Thread acknowledgement here: http://forums.mediacentermaster.com/viewtopic.php?f=4&t=11825
When I run process new downloads, MCM does not seem to properly tell Kodi to scan for new content. I enabled debug, and the only post message I see is a GUI.ShowNotification message, which to me seems like it would not actually tell Kodi to scan for new content, but just show users that new content is available.
When I run Tools > Run Kodi integration, I do see the post command "VideoLibrary.Scan" in the debug logs. This command is not present when I run process new downloads.
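For reference, the missing call is an ordinary Kodi JSON-RPC request. A minimal sketch of building it (host and port taken from the XBMC_API settings below; the function only constructs the request, it does not send it):

```python
import json
import urllib.request


def kodi_scan_request(host="localhost", port=1979):
    """Build (but do not send) the JSON-RPC request that asks Kodi to
    rescan its video library -- the VideoLibrary.Scan call seen in the
    Kodi-integration debug log but absent from process new downloads."""
    payload = {"jsonrpc": "2.0", "method": "VideoLibrary.Scan", "id": "mcm"}
    return urllib.request.Request(
        "http://%s:%d/jsonrpc" % (host, port),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
```

Actually sending it with `urllib.request.urlopen` would additionally need the configured JSON-RPC username and password, so the sketch stops at building the request.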
*Support Info:*
Installation name: Media Center Master
Installation date: 20140302
Media Center Master: 2.16.4517.861
Windows version: Windows 10 Pro (64-bit)
Windows version (raw): Microsoft Windows NT 6.2.9200.0
.NET framework information:
Runtime loaded: 2.0.50727.8745
Runtimes installed:
2.0.50727.4927 SP2
3.0.30729.4926 SP2
3.5.30729.4926 SP1
4.6.01586
4.0.0.0
AppData path: C:\Users\pvr\AppData\Roaming\Peter Souza IV\Media Center Master
AppSettingKey path: HKEY_CURRENT_USER\SOFTWARE\Peter Souza IV\Media Center Master
Command line: "C:\Program Files (x86)\Media Center Master\MCMStubLauncher.exe" /skipUAC
Current path: C:\Program Files (x86)\Media Center Master
Administrative details:
User is administrator: true
UAC status: enabled
Elevation status: elevated
Proxy:(none in use)
*Kodi version:*
17.1-RC1
*Settings:*
AllowSlowUIColumns: False
AnnouncementsLast: 1401473239
AppendParentalRatingsToMovieDescription: False
Art_AutoResize: True
Art_BackdropWidth: 1280
Art_EnableOldArtChooser: False
Art_PosterWidth: 320
Art_SeenNoPromptDisclaimer: True
Artwork_FanartTV: False
Artwork_FanartTV_ClearArt: True
Artwork_FanartTV_ClearArt_SaveAs: clearart
Artwork_FanartTV_ClearLogo: True
Artwork_FanartTV_DiscArt: True
Artwork_FanartTV_DiscArt_SaveAs: disc
Artwork_FanartTV_HD: True
Artwork_FanartTV_Logo_SaveAs: logo
Artwork_FanartTV_MovieBanner: True
Artwork_FanartTV_MovieBanner_SaveAs: banner
Artwork_FanartTV_Thumbnail: True
Artwork_FanartTV_Thumbnail_SaveAs: thumb
AskForIMDBIDOnUnknownMovies: True
AutoFetchMetaDataForTVScanFolders: True
AutoMaximizePosterWindow: False
AutoProcess_Fetch: True
AutoProcess_Move: True
AutoProcess_ParseAndIgnore: False
AutoProcess_Rename: True
AutoProcessNewTV: True
AutoRunNewMediaList: False
AutoShrinkPostersForDVDID: True
BestSummary: Use smallest available description
ConfirmationPromptTimeout: 30
CreateBlank: False
CreateDVDIDMetaData: False
CreateMovieNFOMetaData: True
CreatePyTiVoMetadata: False
DeepInspectVideoFor3D: False
DetectAndOrganizeSingleFileItems: True
DisableTorrenting: False
DontDownloadTVBefore: 7/12/2015 11:19:24 AM
DontFetchPictures: False
DontSaveWorkingCache: False
DontTryToElevateUAC: False
DotNetCheck_Status: Pass
Download_ParserInTestMode: False
DownloadAllBackdrops: True
DownloadCastThumbs: False
DownloadCrewThumbs: False
Downloader_AcceptArchives: True
Downloader_AgeThreshold: 4
Downloader_AllowAVI: True
Downloader_AllowFLV: False
Downloader_AllowMKV: True
Downloader_AllowMP4: True
Downloader_AllowMPG: True
Downloader_AllowWMV: False
Downloader_AvoidSubs: True
Downloader_Avoidx264: False
Downloader_CleanUp: True
Downloader_ExtractArchives: True
Downloader_InspectTrackers: True
Downloader_Movie_1080HD_SizeMax: 15000000000
Downloader_Movie_1080HD_SizeMin: 0
Downloader_Movie_720HD_SizeMax: 10000000000
Downloader_Movie_720HD_SizeMin: 0
Downloader_Movie_SD_SizeMax: 1500000000
Downloader_Movie_SD_SizeMin: 0
Downloader_MovieAllowForeign: False
Downloader_MovieExcludeTerms: cam r5 ts telesync webscr* dvdscr* scr dubbed german fr french dutch* swesub* vosfr nlsubs nl_subs dvd4 dvd5 dvd9
Downloader_MovieExcludeTerms_1080HD: 720*
Downloader_MovieExcludeTerms_720HD: 1080*
Downloader_MovieExcludeTerms_SD: 720* 1080*
Downloader_MovieIncludeTerms_1080HD: 1080*
Downloader_MovieIncludeTerms_720HD: 720*
Downloader_MovieQuality: 5
Downloader_ParserIgnoresReadOnly: True
Downloader_RatingThreshold: 4.5
Downloader_ReplaceUseBestQuality: False
Downloader_RunAtStartup: True
Downloader_Torrents_SkipPeerCheckOnNewUploads: True
Downloader_TV_1080HD_SizeMax: 8000000000
Downloader_TV_1080HD_SizeMin: 2000
Downloader_TV_720HD_SizeMax: 4000000000
Downloader_TV_720HD_SizeMin: 1000
Downloader_TV_SD_SizeMax: 1250000000
Downloader_TV_SD_SizeMin: 1000
Downloader_TVAllowForeign: False
Downloader_TVExcludeTerms: preair flv dubbed german fr french dutch* swesub* vosfr nlsubs nl_subs dvd4 dvd5 dvd9
Downloader_TVExcludeTerms_1080HD: 1080*
Downloader_TVExcludeTerms_720HD: 1080*
Downloader_TVExcludeTerms_SD: 720* 1080*
Downloader_TVIncludeTerms_1080HD: 720* x265*
Downloader_TVIncludeTerms_720HD: 720*
Downloader_TVQuality: 11
Downloader_UpgradeMovieQuality: False
Downloader_WantObscure: False
DownloadHistory_Filter: vander
DownloadHistory_Library: True
DownloadHistory_Movies: True
DownloadHistory_NotFound: True
DownloadHistory_Skip: True
DownloadHistory_Success: True
DownloadHistory_TV: True
DualPass: True
EpisodeRenamingConvention: N - SXXEYY - T
FilteringDisabled: True
FolderCleaner_AutoCleanOnFetch: True
FolderCleaner_DeleteSamples: True
FolderCleaner_DeleteStaleMetadata: True
FolderCleaner_DeleteUnknownFolders: True
FolderCleaner_DeleteUnknownImages: True
FolderCleaner_DeleteUnusedExternalMetadata: True
FolderCleaner_TestMode: False
ForcedTitleTag: True
HasSeenTooltipOnMinimize: True
HideTitleBarLicensing: False
HighlightMissingMeta: True
HTML_FastVideoInspection: True
IgnorePeriodFilesAndDirectories: True
ImagesByNameMode: New
Interval_AutoScan: 86400
Interval_AutoScanEpisodes: 86400
Interval_DownloadNewMovies: 86400
Interval_DownloadNewTV: 21600
Interval_DownloadParser: 1800
Interval_DownloadTorrentFeeds: 3600
Interval_NewMediaList: 7200
Interval_uTorrent: 900
LastFetchMode: 2
LicenseCache: SVAfg*******************
LicenseCode: MCMLT******************
LicenseDetails: {Privacy Censor}
LicensingMode: 2
LicensingSystemNew: True
LimitBackdropDownloads: 5
LogLimit: 500
LogTimestamp: False
LogToFile: True
ManualMovieConfirm: True
MaxFetcherThreads: 2
MCMInstallCheck: 1
MediaListSort: ListTitle_NoArticles/A
Metadata_AllowFetcherKeywordTags: True
Metadata_aTVFlash: False
Metadata_Boxee: False
Metadata_DLNA: False
Metadata_Embedded: False
Metadata_JRiver: False
Metadata_MediaBrowserOld: False
Metadata_MythTV: False
Metadata_WDTV: False
Metadata_XStreamer: False
MinimizeToTray: True
MoreRelaxedSearching: True
MoveMovieDownloadsTo: E:\My Videos\Movies
Movie_VotedScoreSource: primary fetcher
Netgear_AutoReduceFileSize: True
Netgear_CompatibleTextFields: False
Netgear_EmbedPoster: True
Netgear_Force_EASCII: True
Netgear_MetaData: False
Netgear_TLENID3Standard: True
NewMediaListDays: 7
NMTPathMapping: C:\Example\Path**file:///opt/sybhttpd/localhost.drives/NETWORK_SHARE/EXAMPLESHARE|
NoCommas: False
NoConfirmOnOneResult: True
NonblockingConfirmationPrompts: True
NoSpaces: False
NoSpaces2: False
OnlyShowAnnouncements: False
Plugin_Fetcher_Adult: TMDbA
Plugin_Fetcher_Movies1: tMDB
Plugin_Fetcher_Movies2: IMDB
Plugin_Fetcher_TV: thetvdb.com
PopupMoveWindow: False
PostProcessing_Hide: True
PostProcessing_Wait: True
PostProcessingTVEpisode_Hide: True
PostProcessingTVEpisode_Wait: True
PostProcessingTVShow_Hide: True
PostProcessingTVShow_Wait: True
ReadOnlyFoldersVisible: True
RelaxedTVDetection: True
RenameFiles: True
RenameFolders: True
Renamer_Movie: T T (Y)
Renamer_Series: T, T (Y)
RenamerDownloadStyle: False
RespectIDs: True
RuT_EnableManager: False
ScanFolders: E:\My Videos\TV Shows|E:\My Videos\Movies|I:\My Videos\TV Shows
SearchWithoutPath: False
SeasonRenamingConvention: Season {S}
ShowDebug: False
SortByOriginalFolder: False
SortWithoutArticle: True
Subtitle_NamingMode: Single
Subtitles_PreferedLanguage: in English
Subtitles_PreferedLanguages: en,en
ThemedBackgrounds: True
ThumbnailExtract_Auto: False
ThumbnailExtract_Lots: False
TMDb_CountryAbbreviation: US
TMDb_LanguageAbbreviation: en
TMDb_LanguageID: 7
TMDb_LanguageName: English
TorrentDownloadPath: C:\Users\pvr\Dropbox\Torrents
TorrentDownloadPathOther: C:\Users\pvr\Dropbox\Torrents
TorrentFinishedPath: I:\uTorrent\Done
Torrents_AllowMagnetLinks: True
Torrents_DisableVirusProtection: True
Torrents_DownloadDelay: 8
Torrents_MinimumEpisodeSeeds: 1
Torrents_MinimumMovieSeeds: 35
Trailers_AllowAdaptiveYouTubeTrailerMuxing: False
Trailers_AlwaysReEncode: False
Trailers_AutoDownload: False
Trailers_AutoReEncode: False
Trailers_AutoReplace: True
Trailers_MaxBitRate: 1600
Trailers_SaveLocation: 6
Trailers_TargetSize: 480 (pixels high)
Trailers_WatchOutput: False
Trakt: False
Trakt_Auto: False
Trakt_AutoAddTV: False
Trakt_AutoDownload: False
Trakt_AutoDownloadCheckRelease: True
Trakt_ReleaseDayOffset: 0
TranslateSEEtoAAA: False
TTVDB_LanguageAbbreviation: en
TTVDB_LanguageID: 7
TTVDB_LanguageName: English
TV_Overrides: dora the explorer*74697*|girls*220411*|law and order ci*71489*4203|law and order*72368*|mash*70994*4315|nexo knights*304433*|real housewives beverly hills*196741*|real housewives of new york city*84669*|real housewives of new york*84669*|real housewives of nyc*84669*|scandal us*248841*|scandal*248841*|some assembly required*279412*|tfh*71475*|the flash*279121*|the office us*73244*6061|the real housewives new york city*84669*|transformers*72499*|million dollar listing la*128521*
TVAutoScanCheckMissingThumbs: True
TVFetchSeasonBanners: True
TVFetchSeasonPosters: True
TVShowBias: US
TVThemeMusic: False
TVVideoBackdrops: False
UITheme: Windows Media Center
UpdateFile: Update.txt
UpdatePath: http://download.MediaCenterMaster.com/AutoUpdate/
UseAutoDetectionMode: False
UseAutoUpdate: True
UseCloudNetwork: True
UseConsolodatedGenres: False
UseExistingPosters: True
UseFetcherThreads: True
Usenet_AutoLoadNZB: False
Usenet_Binsearch: False
Usenet_CacheLocation: C:\Users\pvr\AppData\Roaming\Peter Souza IV\Media Center Master\Usenet
Usenet_MaxConnections: 20
Usenet_MCMIndex: False
Usenet_MoveNZBFile: False
Usenet_NZBsu: False
Usenet_Password: {Privacy Censor}
Usenet_Port: 119
Usenet_SSL: False
Usenet_UseMCMToDownload: True
UsePostProcessing: False
UsePostProcessingTVEpisode: False
UsePostProcessingTVShow: False
UseSecondaryGenres: True
UseWindowsProxy: True
UT_AutoForceTVDownloads: False
UT_AutoLabel: False
UT_AutoRemoveCompletes: True
UT_AutoRepair: True
UT_EnableManager: True
UT_Hostname: localhost
UT_LeaveSeedingTorrentsWithETA: False
UT_Password: {Privacy Censor}
UT_Port: 24932
UT_RemoteAddMagnets: True
UT_RemoteAddTorrents: True
UT_SeedUntilRatio: 0.000
UT_Username: {Privacy Censor}
VersionUpgrader: 5
WantExtendedTVStatus: False
WantMovieTorrentDownloader: False
WantNewMovieDownloads: False
WantOpenSubtitlesOrg: False
WantQueueAbortConfirmation: True
WantSmoothImages: False
WantSubtitleDownloads: False
WantSubtitleDownloads_Movies: False
WantSubtitleDownloads_TV: False
WantTorrent_eztv: True
WantTorrent_isohunt: True
WantTorrent_kickasstorrents: True
WantTorrent_newtorrents: True
WantTorrent_rarbg: True
WantTorrent_thepiratebay: True
WantTorrent_thepiratebay_new: True
WantTorrent_torrentbutler: True
WantTorrentAutoStart: True
WantTVTorrentDownloader: True
WantUsenet: False
WantVideoDetails: False
WhatToDoWithBadEpisodeDownloads: Re-download
XBMC_AlternatePosterSize: True
XBMC_API_Host: localhost
XBMC_API_Integration: True
XBMC_API_Notifications: True
XBMC_API_Password: {Privacy Censor}
XBMC_API_Port: 1979
XBMC_API_Scan: True
XBMC_API_Username: {Privacy Censor}
XBMC_ExtraFanart: True
XBMC_FillTitleFromOriginalTitle: False
XBMC_GenerateFanArtwork: True
XBMC_GeneratePosterArtwork: True
XBMC_NeverDeleteExtraFanart: True
XBMC_SpecialsSeasonHackFix: True
XBMC_Version: 12
YAMJEnabled: False
*Version Info:*
File versions (current + install path):
AutoUpdate.dll version: 2.29.114.881
MCMStubLauncher.exe version: 2.10.114.881
unins000.exe version: 51.52.0.0
File versions (%AppData%\bin):
AdultDVDEmpire.dll version: 3.20.32615.597
AutoUpdate.dll version: 2.30.34914.1417
cdUniverse.dll version: 3.5.214.914
ffmpeg.exe version: none
GayDVDEmpire.dll version: 3.20.32615.597
ICSharpCode.SharpZipLib.dll version: 0.86.0.518
IMDB.dll version: 3.27.17413.1304
Ionic.Zip.dll version: 1.9.1.8
MCMIDTag.dll version: 2.10.114.0
MCMMKVTag.dll version: 2.10.114.0
MCMWTV.dll version: 2.10.114.0
MediaCenterMaster.exe version: 2.16.4517.861
MediaInfo.dll version: 0.7.65.0
MonoTorrent.dll version: 0.80.0.0
Newtonsoft.Json.dll version: 5.0.5.16108
ObjectListView.dll version: 2.10.114.881
par2.exe version: none
RottenTomatoes.dll version: 1.2.19713.596
Sublight.Plugins.SubtitleProvider.dll version: 1.0.0.0
SublightPlugin.dll version: 2.13.34914.1417
System.Data.SQLite.dll version: 1.0.74.0
tMDB.dll version: 5.32.25415.922
TMDbA.dll version: 1.5.36113.600
unrar.dll version: 4.0.4.3
VidInfo.exe version: 2.15.25315.814
*Debug logs:*
-----------------------------------------------------------------------------------------
MCM: [F#-1] now working on Southern Charm (2014)
[22:01:21.758] DEBUG: Saved to cache: E:\My Videos\TV Shows\Southern Charm (2014)
thetvdb.com Updated base metadata for this show
[22:01:21.835] DEBUG: thetvdb.com Looking for artwork to download for this show
[22:01:21.878] DEBUG: thetvdb.com Finished looking for artwork to download for this show
[22:01:21.922] DEBUG: thetvdb.com Looking for episodes that need metadata fetched for this show
[22:01:21.987] DEBUG: thetvdb.com ... season list obtained (count = 1, now getting lists of episodes...
[22:01:22.069] DEBUG: thetvdb.com ...... looking for episodes in: Season 1
thetvdb.com Fetching season 1 banner...
thetvdb.com none available
[22:01:22.154] DEBUG: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\Southern Charm - S01E01 - Peter Pan “Sin”Drome.mp4 (file): true
[22:01:26.656] DEBUG: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\Southern Charm - S01E02 - Sh-Epic Fail!.mp4 (file): true
[22:01:26.677] DEBUG: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\Southern Charm - S01E02 - Sh-Epic Fail!.mp4
thetvdb.com Fetching data for season 1, episode 2
thetvdb.com "Sh-Epic Fail!"
[22:01:30.058] Fetching URL (POST): http://localhost:1979/jsonrpc
[22:01:30.059] DEBUG: Request data type switched to JSON
[22:01:30.061] Posted data: {"jsonrpc": "2.0", "method": "JSONRPC.Version", "id": "mcm"}
thetvdb.com Fetching season 1, episode 2 thumbnail...
[22:01:30.092] DEBUG: http://thetvdb.com/banners/episodes/278266/4814999.jpg
[22:01:30.092] DEBUG: Downloading: http://thetvdb.com/banners/episodes/278266/4814999.jpg
[22:01:30.093] DEBUG: ... to: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\metadata\4814999.jpg
[22:01:30.093] Fetching URL (GET): http://thetvdb.com/banners/episodes/278266/4814999.jpg
[22:01:30.110] Received data: {"id":"mcm","jsonrpc":"2.0","result":{"version":{"major":8,"minor":0,"patch":0}}}
[22:01:30.110] DEBUG: Version information from Kodi/XBMC: {"id":"mcm","jsonrpc":"2.0","result":{"version":{"major":8,"minor":0,"patch":0}}}
[22:01:30.139] Fetching URL (POST): http://localhost:1979/jsonrpc
[22:01:30.141] DEBUG: Request data type switched to JSON
[22:01:30.165] Posted data: {"jsonrpc": "2.0", "method": "GUI.ShowNotification", "params": {"title": "MCM: New Episode Fetched", "message": "Southern Charm - S01E02 - Sh-Epic Fail!", "displaytime": 8000}, "id": "mcm"}
[22:01:30.233] Received data: {"id":"mcm","jsonrpc":"2.0","result":"OK"}
[22:01:30.454] DEBUG: The image downloaded has a size of: 400x225
[22:01:30.520] DEBUG: Successfully wrote image data to disk.
[22:01:31.591] DEBUG: Operations ... completed community web conversation: 1,379ms
[22:01:32.911] DEBUG: Submitted details.
thetvdb.com Done with this show!
[22:01:33.123] DEBUG: Saved to cache: E:\My Videos\TV Shows\Southern Charm (2014)
[22:01:33.152] DEBUG: Saved to cache: I:\uTorrent
[22:01:35.354] DEBUG: Saved to cache: E:\My Videos\TV Shows\Southern Charm (2014)
[22:01:35.606] DEBUG: Saved to cache: E:\My Videos\TV Shows\Southern Charm (2014)
[22:01:40.202] DEBUG: Saved to cache: E:\My Videos\TV Shows\Southern Charm (2014)
MCM: Now resizing artwork for Southern Charm (2014)
[22:01:44.494] DEBUG: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\Southern Charm - S01E01 - Peter Pan “Sin”Drome.mp4 (file): true
[22:01:44.500] DEBUG: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\Southern Charm - S01E02 - Sh-Epic Fail!.mp4 (file): true
MCM: [F#210] Created Kodi/XBMC metadata files
MCM: Cleaning folder E:\My Videos\TV Shows\Southern Charm (2014)...
[22:01:44.939] DEBUG: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\Southern Charm - S01E01 - Peter Pan “Sin”Drome.mp4 (file): true
[22:01:48.090] DEBUG: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\Southern Charm - S01E02 - Sh-Epic Fail!.mp4 (file): true
[22:01:48.264] DEBUG: Saved to cache: E:\My Videos\TV Shows\Southern Charm (2014)
MCM: ... 0 files deleted
[22:01:48.729] DEBUG: E:\My Videos\TV Shows\Southern Charm (2014)\Season 1\Southern Charm - S01E01 - Peter Pan “Sin”Drome.mp4 (file): true
MCM: ...done!
Process New Downloads Does Not Tell Kodi To Update Library - Thread acknowledgement here: http://forums.mediacentermaster.com/viewtopic.php?f=4&t=11825
When I run Process New Downloads, MCM does not seem to properly tell Kodi to scan for new content. I enabled debug logging, and the only POST message I see is a GUI.ShowNotification call, which to me seems like it would not actually tell Kodi to scan for new content, but just show users that new content is available.
When I run Tools > Run Kodi integration, I do see the POST command "VideoLibrary.Scan" in the debug logs. This command is not present when I run Process New Downloads.
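The distinction the reporter is describing can be sketched in a few lines of Python. The endpoint URL and both payload shapes mirror the requests visible in this report's debug logs (host `localhost` and port `1979` come from this user's `XBMC_API_Host`/`XBMC_API_Port` settings); the helper itself is illustrative, not MCM's actual code:

```python
import json
from urllib import request

# Endpoint as it appears in this report's debug logs; the host/port come
# from the reporter's XBMC_API_Host / XBMC_API_Port settings and will
# differ on other installs.
KODI_URL = "http://localhost:1979/jsonrpc"

def jsonrpc_payload(method, params=None):
    """Build a Kodi JSON-RPC 2.0 request body (illustrative helper)."""
    body = {"jsonrpc": "2.0", "method": method, "id": "mcm"}
    if params is not None:
        body["params"] = params
    return body

# GUI.ShowNotification only pops a toast on screen -- it does not make
# Kodi rescan anything.  This is the only call in this report's logs.
notify = jsonrpc_payload(
    "GUI.ShowNotification",
    {"title": "MCM: New Episode Fetched",
     "message": "Southern Charm - S01E02 - Sh-Epic Fail!",
     "displaytime": 8000})

# VideoLibrary.Scan is the call that actually triggers a library update,
# and it is the one missing from the Process New Downloads logs.
scan = jsonrpc_payload("VideoLibrary.Scan")

def post(payload, url=KODI_URL):
    """POST a payload to Kodi; needs a reachable Kodi JSON-RPC server."""
    req = request.Request(url, data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Per the reporter, Tools > Run Kodi integration does send `VideoLibrary.Scan`; the notification payload alone leaves the library stale.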
*Support Info:*
Installation name: Media Center Master
Installation date: 20140302
Media Center Master: 2.16.4517.861
Windows version: Windows 10 Pro (64-bit)
Windows version (raw): Microsoft Windows NT 6.2.9200.0
.NET framework information:
Runtime loaded: 2.0.50727.8745
Runtimes installed:
2.0.50727.4927 SP2
3.0.30729.4926 SP2
3.5.30729.4926 SP1
4.6.01586
4.0.0.0
AppData path: C:\Users\pvr\AppData\Roaming\Peter Souza IV\Media Center Master
AppSettingKey path: HKEY_CURRENT_USER\SOFTWARE\Peter Souza IV\Media Center Master
Command line: "C:\Program Files (x86)\Media Center Master\MCMStubLauncher.exe" /skipUAC
Current path: C:\Program Files (x86)\Media Center Master
Administrative details:
User is administrator: true
UAC status: enabled
Elevation status: elevated
Proxy:(none in use)
*Kodi version:*
17.1-RC1
*Settings:*
AllowSlowUIColumns: False
AnnouncementsLast: 1401473239
AppendParentalRatingsToMovieDescription: False
Art_AutoResize: True
Art_BackdropWidth: 1280
Art_EnableOldArtChooser: False
Art_PosterWidth: 320
Art_SeenNoPromptDisclaimer: True
Artwork_FanartTV: False
Artwork_FanartTV_ClearArt: True
Artwork_FanartTV_ClearArt_SaveAs: clearart
Artwork_FanartTV_ClearLogo: True
Artwork_FanartTV_DiscArt: True
Artwork_FanartTV_DiscArt_SaveAs: disc
Artwork_FanartTV_HD: True
Artwork_FanartTV_Logo_SaveAs: logo
Artwork_FanartTV_MovieBanner: True
Artwork_FanartTV_MovieBanner_SaveAs: banner
Artwork_FanartTV_Thumbnail: True
Artwork_FanartTV_Thumbnail_SaveAs: thumb
AskForIMDBIDOnUnknownMovies: True
AutoFetchMetaDataForTVScanFolders: True
AutoMaximizePosterWindow: False
AutoProcess_Fetch: True
AutoProcess_Move: True
AutoProcess_ParseAndIgnore: False
AutoProcess_Rename: True
AutoProcessNewTV: True
AutoRunNewMediaList: False
AutoShrinkPostersForDVDID: True
BestSummary: Use smallest available description
ConfirmationPromptTimeout: 30
CreateBlank: False
CreateDVDIDMetaData: False
CreateMovieNFOMetaData: True
CreatePyTiVoMetadata: False
DeepInspectVideoFor3D: False
DetectAndOrganizeSingleFileItems: True
DisableTorrenting: False
DontDownloadTVBefore: 7/12/2015 11:19:24 AM
DontFetchPictures: False
DontSaveWorkingCache: False
DontTryToElevateUAC: False
DotNetCheck_Status: Pass
Download_ParserInTestMode: False
DownloadAllBackdrops: True
DownloadCastThumbs: False
DownloadCrewThumbs: False
Downloader_AcceptArchives: True
Downloader_AgeThreshold: 4
Downloader_AllowAVI: True
Downloader_AllowFLV: False
Downloader_AllowMKV: True
Downloader_AllowMP4: True
Downloader_AllowMPG: True
Downloader_AllowWMV: False
Downloader_AvoidSubs: True
Downloader_Avoidx264: False
Downloader_CleanUp: True
Downloader_ExtractArchives: True
Downloader_InspectTrackers: True
Downloader_Movie_1080HD_SizeMax: 15000000000
Downloader_Movie_1080HD_SizeMin: 0
Downloader_Movie_720HD_SizeMax: 10000000000
Downloader_Movie_720HD_SizeMin: 0
Downloader_Movie_SD_SizeMax: 1500000000
Downloader_Movie_SD_SizeMin: 0
Downloader_MovieAllowForeign: False
Downloader_MovieExcludeTerms: cam r5 ts telesync webscr* dvdscr* scr dubbed german fr french dutch* swesub* vosfr nlsubs nl_subs dvd4 dvd5 dvd9
Downloader_MovieExcludeTerms_1080HD: 720*
Downloader_MovieExcludeTerms_720HD: 1080*
Downloader_MovieExcludeTerms_SD: 720* 1080*
Downloader_MovieIncludeTerms_1080HD: 1080*
Downloader_MovieIncludeTerms_720HD: 720*
Downloader_MovieQuality: 5
Downloader_ParserIgnoresReadOnly: True
Downloader_RatingThreshold: 4.5
Downloader_ReplaceUseBestQuality: False
Downloader_RunAtStartup: True
Downloader_Torrents_SkipPeerCheckOnNewUploads: True
Downloader_TV_1080HD_SizeMax: 8000000000
Downloader_TV_1080HD_SizeMin: 2000
Downloader_TV_720HD_SizeMax: 4000000000
Downloader_TV_720HD_SizeMin: 1000
Downloader_TV_SD_SizeMax: 1250000000
Downloader_TV_SD_SizeMin: 1000
Downloader_TVAllowForeign: False
Downloader_TVExcludeTerms: preair flv dubbed german fr french dutch* swesub* vosfr nlsubs nl_subs dvd4 dvd5 dvd9
Downloader_TVExcludeTerms_1080HD: 1080*
Downloader_TVExcludeTerms_720HD: 1080*
Downloader_TVExcludeTerms_SD: 720* 1080*
Downloader_TVIncludeTerms_1080HD: 720* x265*
Downloader_TVIncludeTerms_720HD: 720*
Downloader_TVQuality: 11
Downloader_UpgradeMovieQuality: False
Downloader_WantObscure: False
DownloadHistory_Filter: vander
DownloadHistory_Library: True
DownloadHistory_Movies: True
DownloadHistory_NotFound: True
DownloadHistory_Skip: True
DownloadHistory_Success: True
DownloadHistory_TV: True
DualPass: True
EpisodeRenamingConvention: N - SXXEYY - T
FilteringDisabled: True
FolderCleaner_AutoCleanOnFetch: True
FolderCleaner_DeleteSamples: True
FolderCleaner_DeleteStaleMetadata: True
FolderCleaner_DeleteUnknownFolders: True
FolderCleaner_DeleteUnknownImages: True
FolderCleaner_DeleteUnusedExternalMetadata: True
FolderCleaner_TestMode: False
ForcedTitleTag: True
HasSeenTooltipOnMinimize: True
HideTitleBarLicensing: False
HighlightMissingMeta: True
HTML_FastVideoInspection: True
IgnorePeriodFilesAndDirectories: True
ImagesByNameMode: New
Interval_AutoScan: 86400
Interval_AutoScanEpisodes: 86400
Interval_DownloadNewMovies: 86400
Interval_DownloadNewTV: 21600
Interval_DownloadParser: 1800
Interval_DownloadTorrentFeeds: 3600
Interval_NewMediaList: 7200
Interval_uTorrent: 900
LastFetchMode: 2
LicenseCache: SVAfg*******************
LicenseCode: MCMLT******************
LicenseDetails: {Privacy Censor}
LicensingMode: 2
LicensingSystemNew: True
LimitBackdropDownloads: 5
LogLimit: 500
LogTimestamp: False
LogToFile: True
ManualMovieConfirm: True
MaxFetcherThreads: 2
MCMInstallCheck: 1
MediaListSort: ListTitle_NoArticles/A
Metadata_AllowFetcherKeywordTags: True
Metadata_aTVFlash: False
Metadata_Boxee: False
Metadata_DLNA: False
Metadata_Embedded: False
Metadata_JRiver: False
Metadata_MediaBrowserOld: False
Metadata_MythTV: False
Metadata_WDTV: False
Metadata_XStreamer: False
MinimizeToTray: True
MoreRelaxedSearching: True
MoveMovieDownloadsTo: E:\My Videos\Movies
Movie_VotedScoreSource: primary fetcher
Netgear_AutoReduceFileSize: True
Netgear_CompatibleTextFields: False
Netgear_EmbedPoster: True
Netgear_Force_EASCII: True
Netgear_MetaData: False
Netgear_TLENID3Standard: True
NewMediaListDays: 7
NMTPathMapping: C:\Example\Path**file:///opt/sybhttpd/localhost.drives/NETWORK_SHARE/EXAMPLESHARE|
NoCommas: False
NoConfirmOnOneResult: True
NonblockingConfirmationPrompts: True
NoSpaces: False
NoSpaces2: False
OnlyShowAnnouncements: False
Plugin_Fetcher_Adult: TMDbA
Plugin_Fetcher_Movies1: tMDB
Plugin_Fetcher_Movies2: IMDB
Plugin_Fetcher_TV: thetvdb.com
PopupMoveWindow: False
PostProcessing_Hide: True
PostProcessing_Wait: True
PostProcessingTVEpisode_Hide: True
PostProcessingTVEpisode_Wait: True
PostProcessingTVShow_Hide: True
PostProcessingTVShow_Wait: True
ReadOnlyFoldersVisible: True
RelaxedTVDetection: True
RenameFiles: True
RenameFolders: True
Renamer_Movie: T T (Y)
Renamer_Series: T, T (Y)
RenamerDownloadStyle: False
RespectIDs: True
RuT_EnableManager: False
ScanFolders: E:\My Videos\TV Shows|E:\My Videos\Movies|I:\My Videos\TV Shows
SearchWithoutPath: False
SeasonRenamingConvention: Season {S}
ShowDebug: False
SortByOriginalFolder: False
SortWithoutArticle: True
Subtitle_NamingMode: Single
Subtitles_PreferedLanguage: in English
Subtitles_PreferedLanguages: en,en
ThemedBackgrounds: True
ThumbnailExtract_Auto: False
ThumbnailExtract_Lots: False
TMDb_CountryAbbreviation: US
TMDb_LanguageAbbreviation: en
TMDb_LanguageID: 7
TMDb_LanguageName: English
TorrentDownloadPath: C:\Users\pvr\Dropbox\Torrents
TorrentDownloadPathOther: C:\Users\pvr\Dropbox\Torrents
TorrentFinishedPath: I:\uTorrent\Done
Torrents_AllowMagnetLinks: True
Torrents_DisableVirusProtection: True
Torrents_DownloadDelay: 8
Torrents_MinimumEpisodeSeeds: 1
Torrents_MinimumMovieSeeds: 35
Trailers_AllowAdaptiveYouTubeTrailerMuxing: False
Trailers_AlwaysReEncode: False
Trailers_AutoDownload: False
Trailers_AutoReEncode: False
Trailers_AutoReplace: True
Trailers_MaxBitRate: 1600
Trailers_SaveLocation: 6
Trailers_TargetSize: 480 (pixels high)
Trailers_WatchOutput: False
Trakt: False
Trakt_Auto: False
Trakt_AutoAddTV: False
Trakt_AutoDownload: False
Trakt_AutoDownloadCheckRelease: True
Trakt_ReleaseDayOffset: 0
TranslateSEEtoAAA: False
TTVDB_LanguageAbbreviation: en
TTVDB_LanguageID: 7
TTVDB_LanguageName: English
TV_Overrides: dora the explorer*74697*|girls*220411*|law and order ci*71489*4203|law and order*72368*|mash*70994*4315|nexo knights*304433*|real housewives beverly hills*196741*|real housewives of new york city*84669*|real housewives of new york*84669*|real housewives of nyc*84669*|scandal us*248841*|scandal*248841*|some assembly required*279412*|tfh*71475*|the flash*279121*|the office us*73244*6061|the real housewives new york city*84669*|transformers*72499*|million dollar listing la*128521*
TVAutoScanCheckMissingThumbs: True
TVFetchSeasonBanners: True
TVFetchSeasonPosters: True
TVShowBias: US
TVThemeMusic: False
TVVideoBackdrops: False
UITheme: Windows Media Center
UpdateFile: Update.txt
UpdatePath: http://download.MediaCenterMaster.com/AutoUpdate/
UseAutoDetectionMode: False
UseAutoUpdate: True
UseCloudNetwork: True
UseConsolodatedGenres: False
UseExistingPosters: True
UseFetcherThreads: True
Usenet_AutoLoadNZB: False
Usenet_Binsearch: False
Usenet_CacheLocation: C:\Users\pvr\AppData\Roaming\Peter Souza IV\Media Center Master\Usenet
Usenet_MaxConnections: 20
Usenet_MCMIndex: False
Usenet_MoveNZBFile: False
Usenet_NZBsu: False
Usenet_Password: {Privacy Censor}
Usenet_Port: 119
Usenet_SSL: False
Usenet_UseMCMToDownload: True
UsePostProcessing: False
UsePostProcessingTVEpisode: False
UsePostProcessingTVShow: False
UseSecondaryGenres: True
UseWindowsProxy: True
UT_AutoForceTVDownloads: False
UT_AutoLabel: False
UT_AutoRemoveCompletes: True
UT_AutoRepair: True
UT_EnableManager: True
UT_Hostname: localhost
UT_LeaveSeedingTorrentsWithETA: False
UT_Password: {Privacy Censor}
UT_Port: 24932
UT_RemoteAddMagnets: True
UT_RemoteAddTorrents: True
UT_SeedUntilRatio: 0.000
UT_Username: {Privacy Censor}
VersionUpgrader: 5
WantExtendedTVStatus: False
WantMovieTorrentDownloader: False
WantNewMovieDownloads: False
WantOpenSubtitlesOrg: False
WantQueueAbortConfirmation: True
WantSmoothImages: False
WantSubtitleDownloads: False
WantSubtitleDownloads_Movies: False
WantSubtitleDownloads_TV: False
WantTorrent_eztv: True
WantTorrent_isohunt: True
WantTorrent_kickasstorrents: True
WantTorrent_newtorrents: True
WantTorrent_rarbg: True
WantTorrent_thepiratebay: True
WantTorrent_thepiratebay_new: True
WantTorrent_torrentbutler: True
WantTorrentAutoStart: True
WantTVTorrentDownloader: True
WantUsenet: False
WantVideoDetails: False
WhatToDoWithBadEpisodeDownloads: Re-download
XBMC_AlternatePosterSize: True
XBMC_API_Host: localhost
XBMC_API_Integration: True
XBMC_API_Notifications: True
XBMC_API_Password: {Privacy Censor}
XBMC_API_Port: 1979
XBMC_API_Scan: True
XBMC_API_Username: {Privacy Censor}
XBMC_ExtraFanart: True
XBMC_FillTitleFromOriginalTitle: False
XBMC_GenerateFanArtwork: True
XBMC_GeneratePosterArtwork: True
XBMC_NeverDeleteExtraFanart: True
XBMC_SpecialsSeasonHackFix: True
XBMC_Version: 12
YAMJEnabled: False
*Version Info:*
File versions (current + install path):
AutoUpdate.dll version: 2.29.114.881
MCMStubLauncher.exe version: 2.10.114.881
unins000.exe version: 51.52.0.0
File versions (%AppData%\bin):
AdultDVDEmpire.dll version: 3.20.32615.597
AutoUpdate.dll version: 2.30.34914.1417
cdUniverse.dll version: 3.5.214.914
ffmpeg.exe version: none
GayDVDEmpire.dll version: 3.20.32615.597
ICSharpCode.SharpZipLib.dll version: 0.86.0.518
IMDB.dll version: 3.27.17413.1304
Ionic.Zip.dll version: 1.9.1.8
MCMIDTag.dll version: 2.10.114.0
MCMMKVTag.dll version: 2.10.114.0
MCMWTV.dll version: 2.10.114.0
MediaCenterMaster.exe version: 2.16.4517.861
MediaInfo.dll version: 0.7.65.0
MonoTorrent.dll version: 0.80.0.0
Newtonsoft.Json.dll version: 5.0.5.16108
ObjectListView.dll version: 2.10.114.881
par2.exe version: none
RottenTomatoes.dll version: 1.2.19713.596
Sublight.Plugins.SubtitleProvider.dll version: 1.0.0.0
SublightPlugin.dll version: 2.13.34914.1417
System.Data.SQLite.dll version: 1.0.74.0
tMDB.dll version: 5.32.25415.922
TMDbA.dll version: 1.5.36113.600
unrar.dll version: 4.0.4.3
VidInfo.exe version: 2.15.25315.814
*Debug logs:*
-----------------------------------------------------------------------------------------
MCM: [F#-1] now working on Southern Charm (2014)
[22:01:21.758] DEBUG: Saved to cache: E:\My Videos\TV Shows\Southern Charm (2014)
thetvdb.com Updated base metadata for this show
[22:01:21.835] DEBUG: thetvdb.com Looking for artwork to download for this show
[22:01:21.878] DEBUG: thetvdb.com Finished looking for artwork to download for this show
675 | 2,537,432,143 | IssuesEvent | 2015-01-26 20:32:50 | biocore/biom-format | https://api.github.com/repos/biocore/biom-format | closed | misleading error message if no h5py/HDF5 | bug documentation | If h5py/HDF5 isn't installed, attempting to read a BIOM HDF5 file with `biom.load_table` results in a misleading error message:
```
Traceback (most recent call last):
File "/usr/local/bin/observation_metadata_correlation.py", line 238, in <module>
main()
File "/usr/local/bin/observation_metadata_correlation.py", line 175, in main
bt = load_table(opts.otu_table_fp)
File "/usr/local/lib/python2.7/dist-packages/biom/parse.py", line 553, in load_table
raise TypeError("%s does not appear to be a BIOM file!" % f)
TypeError: otu_table.biom does not appear to be a BIOM file!
``` | 1.0 | non_reli | 0
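The `TypeError` in the traceback above is misleading because the loader falls through to its generic "does not appear to be a BIOM file" branch when the optional h5py dependency is missing. A minimal sketch of a clearer failure mode — `detect_biom_format` and this `load_table` are illustrative, not the actual biom-format API:

```python
# Illustrative sketch, not biom-format's real code: recognise an HDF5
# BIOM file by its 8-byte signature, so that a missing h5py can be
# reported explicitly instead of "does not appear to be a BIOM file".
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"  # standard HDF5 superblock signature

def detect_biom_format(path):
    """Return 'hdf5' or 'json' based on the file's magic bytes."""
    with open(path, "rb") as fh:
        return "hdf5" if fh.read(8) == HDF5_MAGIC else "json"

def load_table(path):
    """Loader sketch with an explicit missing-dependency error."""
    if detect_biom_format(path) == "hdf5":
        try:
            import h5py  # noqa: F401 -- optional dependency
        except ImportError:
            raise ImportError(
                "%s is an HDF5 BIOM file, but h5py/HDF5 support is "
                "not installed" % path)
    # ...actual JSON or HDF5 parsing would follow here...
```

With this check, `otu_table.biom` in the report above would fail with an explicit "h5py is not installed" message rather than the generic one.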
232 | 5,529,947,224 | IssuesEvent | 2017-03-21 00:23:48 | Storj/bridge | https://api.github.com/repos/Storj/bridge | closed | Bridge API request timeouts | needs review reliability | ### Expected Behavior
```
Download does not fail
```
### Actual Behavior
```
[Thu Mar 02 2017 01:54:21 GMT+0100 (W. Europe Standard Time)] [info] Received 1444077 of 18621244 bytes
[Thu Mar 02 2017 01:54:21 GMT+0100 (W. Europe Standard Time)] [info] Received 1451377 of 18621244 bytes
[Thu Mar 02 2017 01:54:21 GMT+0100 (W. Europe Standard Time)] [info] Received 1463057 of 18621244 bytes
[Thu Mar 02 2017 01:54:21 GMT+0100 (W. Europe Standard Time)] [debug] Response Body: undefined
[Thu Mar 02 2017 01:54:21 GMT+0100 (W. Europe Standard Time)] [debug] Request failed, reason: ESOCKETTIMEDOUT - retrying (false)...
[Thu Mar 02 2017 01:54:21 GMT+0100 (W. Europe Standard Time)] [warn] Failed to download shard, reason: ESOCKETTIMEDOUT
[Thu Mar 02 2017 01:54:21 GMT+0100 (W. Europe Standard Time)] [info] Received 1467437 of 18621244 bytes
[Thu Mar 02 2017 01:54:21 GMT+0100 (W. Europe Standard Time)] [error] Failed to download file
Waiting for 0 seconds, press a key to continue ...
```
```
[Thu Mar 02 2017 01:57:54 GMT+0100 (W. Europe Standard Time)] [info] Received 2198489 of 18621244 bytes
[Thu Mar 02 2017 01:57:55 GMT+0100 (W. Europe Standard Time)] [debug] Response Body: undefined
[Thu Mar 02 2017 01:57:55 GMT+0100 (W. Europe Standard Time)] [debug] Request failed, reason: ESOCKETTIMEDOUT - retrying (false)...
[Thu Mar 02 2017 01:57:55 GMT+0100 (W. Europe Standard Time)] [warn] Failed to download shard, reason: ESOCKETTIMEDOUT
[Thu Mar 02 2017 01:57:55 GMT+0100 (W. Europe Standard Time)] [error] Failed to download file
Waiting for 0 seconds, press a key to continue ...
```
### Steps to Reproduce
1. Download a file from a public bucket
2. ...
3. ...
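The log above shows the client giving up after a single `ESOCKETTIMEDOUT` (`retrying (false)`). A language-agnostic sketch, written here in Python, of retrying a shard from alternate mirrors with exponential backoff — the function name and parameters are illustrative, not the actual Storj client API:

```python
# Sketch of the retry behaviour the log suggests is missing: instead of
# aborting the whole download on one socket timeout, try each mirror on
# every pass and back off between passes.
import time

def download_shard(fetch, mirrors, retries=3, backoff=1.0):
    """Try every mirror on each pass, doubling the delay between passes."""
    delay = backoff
    last_err = None
    for _ in range(retries):
        for mirror in mirrors:
            try:
                return fetch(mirror)
            except TimeoutError as err:  # stand-in for ESOCKETTIMEDOUT
                last_err = err
        time.sleep(delay)
        delay *= 2
    raise last_err
```

Under this scheme a transient timeout on one farmer would not surface as "Failed to download file" unless every mirror failed on every pass.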
| True | reli | 1
2,556 | 26,330,374,910 | IssuesEvent | 2023-01-10 10:20:18 | informalsystems/hermes | https://api.github.com/repos/informalsystems/hermes | closed | One send packet stuck on Osmosis channel `channel-0` to Cosmos-Hub | A: bug A: blocked I: rpc E: osmosis O: reliability |
## Summary of Bug
@faddat discovered this `SendPacket` stuck last week.
- sequence number: 1,275,507
- the packet was created 4 months ago ( 2022-09-08 01:33:45 )
- Tx hash: https://www.mintscan.io/osmosis/txs/F506105FAB8A7D26D950043B732C935A2896733C7AEB695F729CDC0631963252
- Packet stage: sent, not received yet at Cosmos-Hub
- Underlying issue making this packet difficult to relay: the packet data is ~10 MB, and in particular the `Receiver` field is huge
- Complete packet data can also be seen here: https://gist.github.com/adizere/65fac551eaf793b8244721a88d02090c
## Version
v1.2
## Steps to Reproduce
- Using `clear packets`, Hermes is not able to pull the packet data for this packet.
```
hermes clear packets --chain osmosis-1 --port transfer --channel channel-0
2023-01-09T11:24:50.964255Z INFO ThreadId(01) using default configuration from '/Users/faddat/.hermes/config.toml'
2023-01-09T11:24:50.964627Z INFO ThreadId(01) running Hermes v1.2.0+00fe26ed
2023-01-09T11:25:04.664258Z INFO ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}: 2 unreceived packets found: [ 1275507, 1642799 ]
2023-01-09T11:25:04.664347Z WARN ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:query_send_packet_events{chain=osmosis-1 height=<=1-7722248 sequences=[Sequence(1275507), Sequence(1642799)]}: will do QueryPacketEventDataRequest from src_chain
2023-01-09T11:25:04.664421Z WARN ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:query_send_packet_events{chain=osmosis-1 height=<=1-7722248 sequences=[Sequence(1275507), Sequence(1642799)]}: query_packets_from_txs
2023-01-09T11:25:04.664458Z WARN ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:query_send_packet_events{chain=osmosis-1 height=<=1-7722248 sequences=[Sequence(1275507), Sequence(1642799)]}: /tx_search for request QueryPacketEventDataRequest { event_id: SendPacket, source_channel_id: ChannelId("channel-0"), source_port_id: PortId("transfer"), destination_channel_id: ChannelId("channel-141"), destination_port_id: PortId("transfer"), sequences: [Sequence(1275507), Sequence(1642799)], height: SmallerEqual(Specific(Height { revision: 1, height: 7722248 })) }
2023-01-09T11:25:18.475936Z WARN ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}: encountered query failure while pulling packet data: RPC error to endpoint https://rpc-osmosis-archive-ia.cosmosia.notional.ventures/: serde parse error: EOF while parsing a string at line 188 column 524468
SUCCESS []
```
The key error is `serde parse error: EOF while parsing a string at line 188 column 524468`.
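For context, "EOF while parsing a string" means the JSON response was cut off mid-string — column 524468 is roughly the 512 KiB mark, which hints at a response-size cap on the archive node or its proxy. A small sketch (illustrative, not Hermes code) of surfacing that condition explicitly:

```python
# Parse a /tx_search response and turn a decode failure into a message
# that points at likely truncation rather than a bare serde/JSON error.
import json

def parse_rpc_response(raw: str):
    """Parse an RPC response, flagging probable truncation on failure."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as err:
        raise RuntimeError(
            "RPC response invalid after %d bytes (%s); the node or its "
            "proxy may be capping the response size" % (len(raw), err)
        ) from err
```

A diagnostic along these lines would have made the failure mode obvious from the `clear packets` output alone.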
- Jacob also discovered that using `tx packet-recv` (instead of `clear packets`) gets Hermes noticeably further:
```
2023-01-09T14:30:00.924712Z TRACE ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:query_send_packet_events{chain=osmosis-1 height=<=1-7724204 sequences=[Sequence(1275507)]}: end_block_events []
2023-01-09T14:30:00.926956Z INFO ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}: pulled packet data for 1 events out of 1 sequences: 1275507..=1275507; events.total=1 events.left=0
2023-01-09T14:30:01.395287Z TRACE ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:generate_operational_data{tracking_id=packet-recv}: processing event event=SendPacket(SendPacket { packet: seq:1275507, path:channel-0/transfer->channel-141/transfer, toh:no timeout, tos:Timestamp(2022-09-08T22:35:35Z)) }) at height 1-7724204
2023-01-09T14:30:03.331731Z TRACE ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:generate_operational_data{tracking_id=packet-recv}: build timeout for channel packet=seq:1275507, path:channel-0/transfer->channel-141/transfer, toh:no timeout, tos:Timestamp(2022-09-08T22:35:35Z)) height=4-13599968
2023-01-09T14:30:03.879642Z TRACE ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:generate_operational_data{tracking_id=packet-recv}: built timeout msg packet=seq:1275507, path:channel-0/transfer->channel-141/transfer, toh:no timeout, tos:Timestamp(2022-09-08T22:35:35Z)) height=4-13599969
2023-01-09T14:30:03.880406Z TRACE ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:generate_operational_data{tracking_id=packet-recv}: collected event msg.type_url=/ibc.core.channel.v1.MsgTimeout event=SendPacket(SendPacket { packet: seq:1275507, path:channel-0/transfer->channel-141/transfer, toh:no timeout, tos:Timestamp(2022-09-08T22:35:35Z)) }) at height 1-7724204
2023-01-09T14:30:03.880676Z DEBUG ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:schedule{odata=packet-recv ->Source @4-13599968; len=1}: connection delay need not be taken into account: client update message will be prepended later
2023-01-09T14:30:03.880738Z DEBUG ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}: retrying retry.current=1 retry.max=5
2023-01-09T14:30:03.880996Z DEBUG ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}: prepending Source client update at height 4-13599969
2023-01-09T14:30:06.736708Z TRACE ThreadId(21) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:foreign_client.wait_and_build_update_client_with_trusted{client=cosmoshub-4->osmosis-1:07-tendermint-1 target_height=4-13599969}:foreign_client.build_update_client_with_trusted{client=cosmoshub-4->osmosis-1:07-tendermint-1 target_height=4-13599969}: light client verification trusted=4-13599963 target=4-13599969
2023-01-09T14:30:21.291374Z TRACE ThreadId(21) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:foreign_client.wait_and_build_update_client_with_trusted{client=cosmoshub-4->osmosis-1:07-tendermint-1 target_height=4-13599969}:foreign_client.build_update_client_with_trusted{client=cosmoshub-4->osmosis-1:07-tendermint-1 target_height=4-13599969}: adjusting headers with 0 supporting headers trusted=4-13599963 target=13599969
2023-01-09T14:30:21.291431Z TRACE ThreadId(21) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:foreign_client.wait_and_build_update_client_with_trusted{client=cosmoshub-4->osmosis-1:07-tendermint-1 target_height=4-13599969}:foreign_client.build_update_client_with_trusted{client=cosmoshub-4->osmosis-1:07-tendermint-1 target_height=4-13599969}: fetching header height=4-13599964
2023-01-09T14:30:28.609044Z DEBUG ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:foreign_client.wait_and_build_update_client_with_trusted{client=cosmoshub-4->osmosis-1:07-tendermint-1 target_height=4-13599969}:foreign_client.build_update_client_with_trusted{client=cosmoshub-4->osmosis-1:07-tendermint-1 target_height=4-13599969}: building a MsgUpdateAnyClient from trusted height 4-13599963 to target height 4-13599969
2023-01-09T14:30:28.611898Z INFO ThreadId(01) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}: assembled batch of 2 message(s)
2023-01-09T14:30:29.582360Z DEBUG ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}: sending 2 messages as 2 batches to chain osmosis-1 in parallel
2023-01-09T14:30:29.582596Z DEBUG ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}:send_tx_with_account_sequence_retry{chain=osmosis-1 account.sequence=8}: max fee, for use in tx simulation: Fee { amount: "7501uosmo", gas_limit: 3000000 }
2023-01-09T14:30:31.904180Z DEBUG ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}:send_tx_with_account_sequence_retry{chain=osmosis-1 account.sequence=8}:estimate_gas: tx simulation successful, gas amount used: 520049
2023-01-09T14:30:31.904324Z DEBUG ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}:send_tx_with_account_sequence_retry{chain=osmosis-1 account.sequence=8}: send_tx: using 520049 gas, fee Fee { amount: "1431uosmo", gas_limit: 572053 } id=osmosis-1
2023-01-09T14:30:34.297676Z DEBUG ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}:send_tx_with_account_sequence_retry{chain=osmosis-1 account.sequence=8}: gas estimation succeeded
2023-01-09T14:30:34.297749Z DEBUG ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}:send_tx_with_account_sequence_retry{chain=osmosis-1 account.sequence=8}: tx was successfully broadcasted, increasing account sequence number response=Response { code: Ok, data: b"", log: "[]", hash: Hash::Sha256(6D81E1214BA130B3B232AAD8C6F6ED1B551B46E6F5A42FF5ABF4E6A6BF91AC75) } account.sequence.old=8 account.sequence.new=9
2023-01-09T14:30:34.299232Z DEBUG ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}:send_tx_with_account_sequence_retry{chain=osmosis-1 account.sequence=9}: max fee, for use in tx simulation: Fee { amount: "7501uosmo", gas_limit: 3000000 }
2023-01-09T14:30:39.801049Z ERROR ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}:send_tx_with_account_sequence_retry{chain=osmosis-1 account.sequence=9}:estimate_gas: failed to simulate tx. propagating error to caller: gRPC call failed with status: status: Internal, message: "protocol error: received message with invalid compression flag: 60 (valid flags are 0 and 1) while receiving response with status: 413 Payload Too Large", details: [], metadata: MetadataMap { headers: {"server": "nginx/1.22.0", "date": "Mon, 09 Jan 2023 14:30:39 GMT", "content-type": "text/html", "content-length": "183"} }
2023-01-09T14:30:39.801182Z ERROR ThreadId(11) relay_recv_packet_and_timeout_messages{src_chain=osmosis-1 src_port=transfer src_channel=channel-0 dst_chain=cosmoshub-4}:relay{odata=packet-recv ->Source @4-13599968; len=1}:send_messages_and_wait_commit{chain=osmosis-1 tracking_id=packet-recv}:send_tx_with_account_sequence_retry{chain=osmosis-1 account.sequence=9}: gas estimation failed or encountered another unrecoverable error error=gRPC call failed with status: status: Internal, message: "protocol error: received message with invalid compression flag: 60 (valid flags are 0 and 1) while receiving response with status: 413 Payload Too Large", details: [], metadata: MetadataMap { headers: {"server": "nginx/1.22.0", "date": "Mon, 09 Jan 2023 14:30:39 GMT", "content-type": "text/html", "content-length": "183"} }
ERROR link error: link failed with underlying error: gRPC call failed with status: status: Internal, message: "protocol error: received message with invalid compression flag: 60 (valid flags are 0 and 1) while receiving response with status: 413 Payload Too Large", details: [], metadata: MetadataMap { headers: {"server": "nginx/1.22.0", "date": "Mon, 09 Jan 2023 14:30:39 GMT", "content-type": "text/html", "content-length": "183"} }
```
The key error here is HTTP 413 (Payload Too Large), most likely from the nginx frontend in front of Notional's RPC nodes.
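If the 413 does come from nginx, the operator-side fix would be along these lines — a sketch only, since the exact limit and upstream name are assumptions; nginx's default `client_max_body_size` is 1m, far below this ~10 MB packet:

```nginx
server {
    listen 443 ssl http2;           # gRPC requires HTTP/2
    # default is 1m; the MsgTimeout simulation for this packet is ~10 MB
    client_max_body_size 16m;

    location / {
        grpc_pass grpc://cosmos_grpc_backend;  # upstream name illustrative
    }
}
```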
## Acceptance Criteria
- [ ] packet sequence number `1,275,507` relayed
____
## For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate milestone (priority) applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
| True |
ERROR link error: link failed with underlying error: gRPC call failed with status: status: Internal, message: "protocol error: received message with invalid compression flag: 60 (valid flags are 0 and 1) while receiving response with status: 413 Payload Too Large", details: [], metadata: MetadataMap { headers: {"server": "nginx/1.22.0", "date": "Mon, 09 Jan 2023 14:30:39 GMT", "content-type": "text/html", "content-length": "183"} }
```
The main part here is HTTP error 413 (Payload Too Large), most likely returned by the nginx frontend to Notional's RPC nodes rather than by the gRPC service itself. The "invalid compression flag: 60" error is consistent with this: 60 is the ASCII code for `<`, i.e. the client is trying to parse the first byte of nginx's HTML error page (note the `content-type: text/html` header in the response) as a gRPC message frame.
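If the 413 does come from nginx, the usual remedy is raising the request-body limit on the proxied gRPC endpoint. A hedged sketch follows — the directives are standard nginx, but the ports, upstream address, and server-block layout are assumptions for illustration, not Notional's actual configuration:

```nginx
# Hypothetical server block fronting a gRPC endpoint. The point is only the
# size limit: nginx returns 413 when a request body exceeds
# client_max_body_size (default 1m), which a tx-simulation payload carrying
# a ~38KB packet plus proofs can exceed once other messages are batched in.
server {
    listen 9090 http2;
    location / {
        grpc_pass grpc://127.0.0.1:9091;
        client_max_body_size 16m;   # raise the threshold that triggers 413
    }
}
```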
## Acceptance Criteria
- [ ] packet sequence number `1,275,507` relayed
____
## For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate milestone (priority) applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
| reli | one send packet stuck on osmosis channel channel to cosmos hub ☺ v ✰ thanks for opening an issue ✰ v before smashing the submit button please review the template v please also ensure that this is not a duplicate issue ☺ summary of bug faddat discovered this sendpacket stuck last week sequence number the packet was created months ago tx hash packet stage sent not received yet at cosmos hub underlying issue why this packet is difficult to relay the packet data is in size in particular the receiver field is huge complete packet data can also be seen here version steps to reproduce using clear packets hermes is not able to pull the packet data for this packet hermes clear packets chain osmosis port transfer channel channel info threadid using default configuration from users faddat hermes config toml info threadid running hermes info threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub unreceived packets found warn threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub query send packet events chain osmosis height sequences will do querypacketeventdatarequest from src chain warn threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub query send packet events chain osmosis height sequences query packets from txs warn threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub query send packet events chain osmosis height sequences tx search for request querypacketeventdatarequest event id sendpacket source channel id channelid channel source port id portid transfer destination channel id channelid channel destination port id portid transfer sequences height smallerequal specific height revision height warn threadid relay recv packet and timeout messages src chain osmosis src port transfer src 
channel channel dst chain cosmoshub encountered query failure while pulling packet data rpc error to endpoint serde parse error eof while parsing a string at line column success the main part is serde parse error eof while parsing a string at line column jacob also discovered interestingly that using tx packet recv instead of clear packets gets hermes more progress trace threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub query send packet events chain osmosis height sequences end block events info threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub pulled packet data for events out of sequences events total events left trace threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub generate operational data tracking id packet recv processing event event sendpacket sendpacket packet seq path channel transfer channel transfer toh no timeout tos timestamp at height trace threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub generate operational data tracking id packet recv build timeout for channel packet seq path channel transfer channel transfer toh no timeout tos timestamp height trace threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub generate operational data tracking id packet recv built timeout msg packet seq path channel transfer channel transfer toh no timeout tos timestamp height trace threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub generate operational data tracking id packet recv collected event msg type url ibc core channel msgtimeout event sendpacket sendpacket packet seq path channel transfer channel transfer toh no timeout tos 
timestamp at height debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub schedule odata packet recv source len connection delay need not be taken into account client update message will be prepended later debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len retrying retry current retry max debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len prepending source client update at height trace threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len foreign client wait and build update client with trusted client cosmoshub osmosis tendermint target height foreign client build update client with trusted client cosmoshub osmosis tendermint target height light client verification trusted target trace threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len foreign client wait and build update client with trusted client cosmoshub osmosis tendermint target height foreign client build update client with trusted client cosmoshub osmosis tendermint target height adjusting headers with supporting headers trusted target trace threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len foreign client wait and build update client with trusted client cosmoshub osmosis tendermint target height foreign client build update client with trusted client cosmoshub osmosis tendermint target height fetching header height debug threadid relay recv packet and timeout messages src chain osmosis src port 
transfer src channel channel dst chain cosmoshub relay odata packet recv source len foreign client wait and build update client with trusted client cosmoshub osmosis tendermint target height foreign client build update client with trusted client cosmoshub osmosis tendermint target height building a msgupdateanyclient from trusted height to target height info threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len assembled batch of message s debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv sending messages as batches to chain osmosis in parallel debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv send tx with account sequence retry chain osmosis account sequence max fee for use in tx simulation fee amount gas limit debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv send tx with account sequence retry chain osmosis account sequence estimate gas tx simulation successful gas amount used debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv send tx with account sequence retry chain osmosis account sequence send tx using gas fee fee amount gas limit id osmosis debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src 
channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv send tx with account sequence retry chain osmosis account sequence gas estimation succeeded debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv send tx with account sequence retry chain osmosis account sequence tx was successfully broadcasted increasing account sequence number response response code ok data b log hash hash account sequence old account sequence new debug threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv send tx with account sequence retry chain osmosis account sequence max fee for use in tx simulation fee amount gas limit error threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv send tx with account sequence retry chain osmosis account sequence estimate gas failed to simulate tx propagating error to caller grpc call failed with status status internal message protocol error received message with invalid compression flag valid flags are and while receiving response with status payload too large details metadata metadatamap headers server nginx date mon jan gmt content type text html content length error threadid relay recv packet and timeout messages src chain osmosis src port transfer src channel channel dst chain cosmoshub relay odata packet recv source len send messages and wait commit chain osmosis tracking id packet recv send tx with account sequence retry chain osmosis account sequence 
gas estimation failed or encountered another unrecoverable error error grpc call failed with status status internal message protocol error received message with invalid compression flag valid flags are and while receiving response with status payload too large details metadata metadatamap headers server nginx date mon jan gmt content type text html content length error link error link failed with underlying error grpc call failed with status status internal message protocol error received message with invalid compression flag valid flags are and while receiving response with status payload too large details metadata metadatamap headers server nginx date mon jan gmt content type text html content length the main part here is http error likely because of nginx frontend to notional rpc nodes acceptance criteria packet sequence number relayed for admin use not duplicate issue appropriate labels applied appropriate milestone priority applied appropriate contributors tagged contributor assigned self assigned | 1 |
377,980 | 11,188,868,158 | IssuesEvent | 2020-01-02 07:23:19 | wso2/product-microgateway | https://api.github.com/repos/wso2/product-microgateway | closed | Move to ballerina 1.0.0 | Priority/Normal Type/Improvement | ### Describe your problem(s)
The Ballerina team will be releasing 1.0.0 soon. We need to start migrating to the new version.
### Describe your solution
Start the migration process on top of the current master build, since ballerina team is ready to do the alpha.
### How will you implement it
* Get the core component to build with new ballerina syntax
* Move to new ballerina filters
* Get mgw build to work
| 1.0 | Move to ballerina 1.0.0 - ### Describe your problem(s)
Ballerina team will be releasing the 1.0.0 soon. We need to start migrating to new version.
### Describe your solution
Start the migration process on top of the current master build, since ballerina team is ready to do the alpha.
### How will you implement it
* Get the core component to build with new ballerina syntax
* Move to new ballerina filters
* Get mgw build to work
| non_reli | move to ballerina describe your problem s ballerina team will be releasing the soon we need to start migrating to new version describe your solution start the migration process on top of the current master build since ballerina team is ready to do the alpha how will you implement it get the core component to build with new ballerina syntax move to new ballerina filters get mgw build to work | 0 |
2,712 | 27,203,584,477 | IssuesEvent | 2023-02-20 11:25:30 | celo-org/celo-blockchain | https://api.github.com/repos/celo-org/celo-blockchain | closed | Validator stuck in `Error in handling Istanbul message` | bug dev-support theme: client-reliability archive | **Describe the bug**
`keefer` attempted to key rotate onto 1.1.0 on mainnet and had issues on the new validator.
Initial logs just after epoch block
```
WARN [11-05|17:03:44.177] Error in handling istanbul message address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd func=handleEvents cur_seq=3404160 cur_epoch=197 cur_round=0 des_round=1 state="Waiting for new round" address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd err="not an elected validator 0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd"
INFO [11-05|17:03:49.177] Reset timer to do round change address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd func=resetRoundChangeTimer cur_seq=3404160 cur_epoch=197 cur_round=0 des_round=2 state="Waiting for new round" address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd timeout=7s
WARN [11-05|17:03:49.178] Error in handling istanbul message address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd func=handleEvents cur_seq=3404160 cur_epoch=197 cur_round=0 des_round=2 state="Waiting for new round" address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd err="not an elected validator 0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd"
```
Logs 7 minutes later just prior to manual restart
```
WARN [11-05|17:10:10.328] Error in handling istanbul message address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd func=handleEvents cur_seq=3404160 cur_epoch=197 cur_round=0 des_round=8 state="Waiting for new round" address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd err="not an elected validator 0x3ed95D6D4Ce36Ea7B349cD401e324316D956331a"
```
At that point the node was restarted and started to participate in consensus.
**Possible Cause**
The node was stuck on block `3404160`, the epoch block. The specific failure message is due to [this line](https://github.com/celo-org/celo-blockchain/blob/61a60d70418d683847434a24f675d0c1f483a956/consensus/istanbul/utils.go#L67). I suspect that the current validator set is not being updated causing this to get stuck. | True | Validator stuck in `Error in handling Istanbul message` - **Describe the bug**
`keefer` attempted to key rotate onto 1.1.0 on mainnet and had issues on the new validator.
Initial logs just after epoch block
```
WARN [11-05|17:03:44.177] Error in handling istanbul message address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd func=handleEvents cur_seq=3404160 cur_epoch=197 cur_round=0 des_round=1 state="Waiting for new round" address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd err="not an elected validator 0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd"
INFO [11-05|17:03:49.177] Reset timer to do round change address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd func=resetRoundChangeTimer cur_seq=3404160 cur_epoch=197 cur_round=0 des_round=2 state="Waiting for new round" address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd timeout=7s
WARN [11-05|17:03:49.178] Error in handling istanbul message address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd func=handleEvents cur_seq=3404160 cur_epoch=197 cur_round=0 des_round=2 state="Waiting for new round" address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd err="not an elected validator 0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd"
```
Logs 7 minutes later just prior to manual restart
```
WARN [11-05|17:10:10.328] Error in handling istanbul message address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd func=handleEvents cur_seq=3404160 cur_epoch=197 cur_round=0 des_round=8 state="Waiting for new round" address=0x38Ff035534e45661D8DCB74A1BB55e8B5275feCd err="not an elected validator 0x3ed95D6D4Ce36Ea7B349cD401e324316D956331a"
```
At that point the node was restarted and started to participate in consensus.
**Possible Cause**
The node was stuck on block `3404160`, the epoch block. The specific failure message is due to [this line](https://github.com/celo-org/celo-blockchain/blob/61a60d70418d683847434a24f675d0c1f483a956/consensus/istanbul/utils.go#L67). I suspect that the current validator set is not being updated causing this to get stuck. | reli | validator stuck in error in handling istanbul message describe the bug keefer attempted to key rotate onto on mainnet and had issues on the new validator initial logs just after epoch block warn error in handling istanbul message address func handleevents cur seq cur epoch cur round des round state waiting for new round address err not an elected validator info reset timer to do round change address func resetroundchangetimer cur seq cur epoch cur round des round state waiting for new round address timeout warn error in handling istanbul message address func handleevents cur seq cur epoch cur round des round state waiting for new round address err not an elected validator logs minutes later just prior to manual restart warn error in handling istanbul message address func handleevents cur seq cur epoch cur round des round state waiting for new round address err not an elected validator at that point the node was restarted and started to participate in consensus possible cause the node was stuck on block the epoch block the specific failure message is due to i suspect that the current validator set is not being updated causing this to get stuck | 1 |
335,177 | 30,016,416,229 | IssuesEvent | 2023-06-26 19:05:09 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | sqlccl: interesting failures from TestExplainRedactDDL | C-bug skipped-test T-sql-queries | `TestExplainRedactDDL` is producing some interesting failures that don't seem to have anything to do with redaction of `EXPLAIN` output. For example, these:
- https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_BazelExtendedCi/9144645?showRootCauses=true&expandBuildChangesSection=true&expandBuildProblemsSection=true&expandBuildTestsSection=true&logFilter=debug
- https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_BazelEssentialCi/9130717?buildTab=overview&showRootCauses=true&expandBuildProblemsSection=true&expandBuildTestsSection=true&expandBuildChangesSection=true
I'm going to check in this test as skipped, and then investigate these failures.
Jira issue: CRDB-25650 | 1.0 | sqlccl: interesting failures from TestExplainRedactDDL - `TestExplainRedactDDL` is producing some interesting failures that don't seem to have anything to do with redaction of `EXPLAIN` output. For example, these:
- https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_BazelExtendedCi/9144645?showRootCauses=true&expandBuildChangesSection=true&expandBuildProblemsSection=true&expandBuildTestsSection=true&logFilter=debug
- https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_BazelEssentialCi/9130717?buildTab=overview&showRootCauses=true&expandBuildProblemsSection=true&expandBuildTestsSection=true&expandBuildChangesSection=true
I'm going to check in this test as skipped, and then investigate these failures.
Jira issue: CRDB-25650 | non_reli | sqlccl interesting failures from testexplainredactddl testexplainredactddl is producing some interesting failures that don t seem to have anything to do with redaction of explain output for example these i m going to check this test in skipped and then investigate these failures jira issue crdb | 0 |
5,527 | 2,777,351,544 | IssuesEvent | 2015-05-05 07:25:42 | h2opaulh2o/IP-A3-Project | https://api.github.com/repos/h2opaulh2o/IP-A3-Project | closed | Rewrite testcases for NewsGetter | Testing | Rewrite testcases for NewsGetter based on teacher's instructions. | 1.0 | Rewrite testcases for NewsGetter - Rewrite testcases for NewsGetter based on teacher's instructions. | non_reli | rewrite testcases for newsgetter rewrite testcases for newsgetter based on teacher s instructions | 0 |
123,246 | 26,235,255,037 | IssuesEvent | 2023-01-05 06:24:05 | Azure/autorest.csharp | https://api.github.com/repos/Azure/autorest.csharp | closed | The property marked as @key in CADL should be readonly in the generated models | v3 Client DPG DPG/RLC v2.0 GA Epic: Model Generation WS: Code Generation | https://microsoft.github.io/cadl/docs/standard-library/built-in-decorators/#key
@key - mark a model property as the key to identify instances of that type
So this means the property could not be set at the client level and it should be readonly.
Please help to verify this behavior. | 1.0 | The property marked as @key in CADL should be readonly in the generated models - https://microsoft.github.io/cadl/docs/standard-library/built-in-decorators/#key
@key - mark a model property as the key to identify instances of that type
So this means the property could not be set at the client level and it should be readonly.
Please help to verify this behavior. | non_reli | the property marked as key in cadl should be readonly in the generated models key mark a model property as the key to identify instances of that type so this means the property could not be set at the client level and it should be readonly please help to verify this behavior | 0 |
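For reference, the decorator in question is applied per property. A minimal sketch of such a model follows — the `Pet`/`name` names are made up for illustration, not taken from the generated clients under discussion:

```cadl
// Hypothetical model: @key marks `name` as the identifying property,
// which is why the generated client model should expose it as read-only.
model Pet {
  @key
  name: string;
}
```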
745,224 | 25,975,305,159 | IssuesEvent | 2022-12-19 14:27:06 | EESSI/eessi-bot-software-layer | https://api.github.com/repos/EESSI/eessi-bot-software-layer | closed | Update PR comments when job starts running | priority:low enhancement difficulty:medium | At the moment for each launched job the status table contains three lines (max):
Submitted -- added when job is submitted with UserHeld status
Released -- added when job manager removed the UserHeld status (usually a job is waiting to be started for some time)
Finished -- added when job manager recognises the job has ended
Released does not mean the job has started. The job manager could add another line when the job has started. | 1.0 | Update PR comments when job starts running - At the moment for each launched job the status table contains three lines (max):
Submitted -- added when job is submitted with UserHeld status
Released -- added when job manager removed the UserHeld status (usually a job is waiting to be started for some time)
Finished -- added when job manager recognises the job has ended
Released does not mean the job has started. The job manager could add another line when the job has started. | non_reli | update pr comments when job starts running at the moment for each launched job the status table contains three lines max submitted added when job is submitted with userheld status released added when job manager removed the userheld status usually a job is waiting to be started for some time finished added when job manager recognises the job has ended released does not mean the job has started the job manager could add another line when the job has started | 0 |
1,687 | 18,682,098,186 | IssuesEvent | 2021-11-01 07:33:43 | ppy/osu-framework | https://api.github.com/repos/ppy/osu-framework | closed | TextBox crashes if Current is disabled after a Text set | area:UI type:reliability | Found while looking at ppy/osu#10412.
Given the following test case (place in `TestSceneTextBox`):
```csharp
[Test]
public void TestTextAndCurrent()
{
InsertableTextBox textBox = null;
AddStep("add textbox", () =>
{
textBoxes.Add(textBox = new InsertableTextBox
{
Size = new Vector2(200, 40),
Text = "hello"
});
});
AddStep("set text and disable current", () =>
{
textBox.Text = "goodbye";
textBox.Current.Disabled = true;
});
}
```
the result is a crash to desktop with the following stack trace:
```
Unhandled exception. System.InvalidOperationException: Can not set value to "" as bindable is disabled.
at osu.Framework.Bindables.Bindable`1.set_Value(T value) in /home/dachb/Dokumenty/opensource/osu-framework/osu.Framework/Bindables/Bindable.cs:line 95
at osu.Framework.Graphics.UserInterface.TextBox.updateCursorAndLayout() in /home/dachb/Dokumenty/opensource/osu-framework/osu.Framework/Graphics/UserInterface/TextBox.cs:line 333
at osu.Framework.Graphics.UserInterface.TextBox.UpdateAfterChildren() in /home/dachb/Dokumenty/opensource/osu-framework/osu.Framework/Graphics/UserInterface/TextBox.cs:line 353
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree() in /home/dachb/Dokumenty/opensource/osu-framework/osu.Framework/Graphics/Containers/CompositeDrawable.cs:line 875
```
It's not a blocker for the issue, since there was another problem game-side, but this really ought to get fixed, as there's still a possibility for the game to hit this and I can't get to it today, so opening for tracking.
Yet another reason to dive into `TextBox`, I guess? | True | TextBox crashes if Current is disabled after a Text set - Found while looking at ppy/osu#10412.
Given the following test case (place in `TestSceneTextBox`):
```csharp
[Test]
public void TestTextAndCurrent()
{
InsertableTextBox textBox = null;
AddStep("add textbox", () =>
{
textBoxes.Add(textBox = new InsertableTextBox
{
Size = new Vector2(200, 40),
Text = "hello"
});
});
AddStep("set text and disable current", () =>
{
textBox.Text = "goodbye";
textBox.Current.Disabled = true;
});
}
```
the result is a crash to desktop with the following stack trace:
```
Unhandled exception. System.InvalidOperationException: Can not set value to "" as bindable is disabled.
at osu.Framework.Bindables.Bindable`1.set_Value(T value) in /home/dachb/Dokumenty/opensource/osu-framework/osu.Framework/Bindables/Bindable.cs:line 95
at osu.Framework.Graphics.UserInterface.TextBox.updateCursorAndLayout() in /home/dachb/Dokumenty/opensource/osu-framework/osu.Framework/Graphics/UserInterface/TextBox.cs:line 333
at osu.Framework.Graphics.UserInterface.TextBox.UpdateAfterChildren() in /home/dachb/Dokumenty/opensource/osu-framework/osu.Framework/Graphics/UserInterface/TextBox.cs:line 353
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree() in /home/dachb/Dokumenty/opensource/osu-framework/osu.Framework/Graphics/Containers/CompositeDrawable.cs:line 875
```
It's not a blocker for the linked issue, since there was another problem game-side, but this really ought to get fixed: there's still a possibility for the game to hit this. I can't get to it today, so I'm opening this issue for tracking.
Yet another reason to dive into `TextBox`, I guess? | reli | textbox crashes if current is disabled after a text set found while looking at ppy osu given the following test case place in testscenetextbox csharp public void testtextandcurrent insertabletextbox textbox null addstep add textbox textboxes add textbox new insertabletextbox size new text hello addstep set text and disable current textbox text goodbye textbox current disabled true the result is a crash to desktop with the following stack trace unhandled exception system invalidoperationexception can not set value to as bindable is disabled at osu framework bindables bindable set value t value in home dachb dokumenty opensource osu framework osu framework bindables bindable cs line at osu framework graphics userinterface textbox updatecursorandlayout in home dachb dokumenty opensource osu framework osu framework graphics userinterface textbox cs line at osu framework graphics userinterface textbox updateafterchildren in home dachb dokumenty opensource osu framework osu framework graphics userinterface textbox cs line at osu framework graphics containers compositedrawable updatesubtree in home dachb dokumenty opensource osu framework osu framework graphics containers compositedrawable cs line it s not a blocker for the issue since there was another problem game side but this really ought to get fixed as there s still a possibility for the game to hit this and i can t get to it today so opening for tracking yet another reason to dive into textbox i guess | 1 |
528 | 8,335,812,531 | IssuesEvent | 2018-09-28 04:43:10 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | SocketHttpHandler set up with MaximumConnectionsPerServer could deadlock on concurrent request cancellation | area-System.Net.Http.SocketsHttpHandler bug tenet-reliability | @karelz @stephentoub
After the deadlock hits the process has to be restarted. If continued to be used the visible symptoms are the inability to communicate with a certain endpoint, the process may eventually run out of available threads.
Repro project: [DeadlockInSocketsHandler](https://github.com/baal2000/DeadlockInSocketsHandler)
Tested in Windows on SDK 2.1.301
Compile the console app and run. It would produce output similar to:
```
Running the test...
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
Deadlock detected: 2 requests are not completed
Finished the test. Press any key to exit.
```
The deadlock is caused by a race condition, meaning it strikes after a random number of test repetitions on each new application run. The constant values `MaximumConnectionsPerServer` and `MaxRequestCount` can be modified to increase/decrease the probability of the deadlock, but `MaxRequestCount` must be higher than `MaximumConnectionsPerServer` to force some requests into the `ConnectionWaiter` queue. The current values `1` and `2` are the lowest possible. They still reliably reproduce the issue and produce a clean picture of the threads.
One may then attach to the running process or dump it to investigate the threads.
There would be 2 deadlocked threads, for example, named "A" and "B".
**Thread A**
```
System.Private.CoreLib.dll!System.Threading.SpinWait.SpinOnce(int sleep1Threshold)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.WaitForCallbackToComplete(long id)
System.Net.Http.dll!System.Net.Http.HttpConnectionPool.DecrementConnectionCount()
System.Net.Http.dll!System.Net.Http.HttpConnection.Dispose(bool disposing)
System.Net.Http.dll!System.Net.Http.HttpConnection.RegisterCancellation.AnonymousMethod__65_0(object s)
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
DeadlockInSocketsHandler.dll!DeadlockInSocketsHandler.Program.DeadlockTestCore.AnonymousMethod__0() Line 83
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot)
System.Private.CoreLib.dll!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
**Thread B**
```
System.Net.Http.dll!System.Net.Http.HttpConnectionPool.GetConnectionAsync.AnonymousMethod__38_0(object s)
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
DeadlockInSocketsHandler.dll!DeadlockInSocketsHandler.Program.DeadlockTestCore.AnonymousMethod__0() Line 83
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot)
System.Private.CoreLib.dll!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
**Explanation**
**Thread A**
1. HttpConnectionPool.DecrementConnectionCount() [entered](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L785) `lock(SyncObj)`
2. Spin-waits in CancellationTokenSource.WaitForCallbackToComplete for Thread B to complete HttpConnectionPool.GetConnectionAsync.[AnonymousMethod__38_0](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L282) callback
**Thread B**
1. HttpConnectionPool.GetConnectionAsync.[AnonymousMethod__38_0](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L282) callback waits to enter lock([SyncObj](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L284)) that is held by Thread A
2. SyncObj can never be released by Thread A because Thread A is going to spin-wait infinitely unless Thread B makes progress.
**Conclusion**
Both threads **cannot move**, which confirms the deadlock.
**Workarounds**
1. Cancel the requests to the same endpoint serially by the application. The request cancellation could be queued and then processed sequentially on a single worker thread. Or the cancellation threads could be synchronized by a lock.
2. If possible, do not set the MaxConnectionsPerServer property. | True | SocketHttpHandler set up with MaximumConnectionsPerServer could deadlock on concurrent request cancellation - @karelz @stephentoub
After the deadlock hits, the process has to be restarted. If it continues to be used, the visible symptoms are the inability to communicate with a certain endpoint; the process may eventually run out of available threads.
Repro project: [DeadlockInSocketsHandler](https://github.com/baal2000/DeadlockInSocketsHandler)
Tested in Windows on SDK 2.1.301
Compile the console app and run. It would produce output similar to:
```
Running the test...
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
Deadlock detected: 2 requests are not completed
Finished the test. Press any key to exit.
```
The deadlock is caused by a race condition, meaning it strikes after a random number of test repetitions on each new application run. The constant values `MaximumConnectionsPerServer` and `MaxRequestCount` can be modified to increase/decrease the probability of the deadlock, but `MaxRequestCount` must be higher than `MaximumConnectionsPerServer` to force some requests into the `ConnectionWaiter` queue. The current values `1` and `2` are the lowest possible. They still reliably reproduce the issue and produce a clean picture of the threads.
One may then attach to the running process or dump it to investigate the threads.
There would be 2 deadlocked threads, for example, named "A" and "B".
**Thread A**
```
System.Private.CoreLib.dll!System.Threading.SpinWait.SpinOnce(int sleep1Threshold)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.WaitForCallbackToComplete(long id)
System.Net.Http.dll!System.Net.Http.HttpConnectionPool.DecrementConnectionCount()
System.Net.Http.dll!System.Net.Http.HttpConnection.Dispose(bool disposing)
System.Net.Http.dll!System.Net.Http.HttpConnection.RegisterCancellation.AnonymousMethod__65_0(object s)
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
DeadlockInSocketsHandler.dll!DeadlockInSocketsHandler.Program.DeadlockTestCore.AnonymousMethod__0() Line 83
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot)
System.Private.CoreLib.dll!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
**Thread B**
```
System.Net.Http.dll!System.Net.Http.HttpConnectionPool.GetConnectionAsync.AnonymousMethod__38_0(object s)
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
DeadlockInSocketsHandler.dll!DeadlockInSocketsHandler.Program.DeadlockTestCore.AnonymousMethod__0() Line 83
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot)
System.Private.CoreLib.dll!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
**Explanation**
**Thread A**
1. HttpConnectionPool.DecrementConnectionCount() [entered](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L785) `lock(SyncObj)`
2. Spin-waits in CancellationTokenSource.WaitForCallbackToComplete for Thread B to complete HttpConnectionPool.GetConnectionAsync.[AnonymousMethod__38_0](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L282) callback
**Thread B**
1. HttpConnectionPool.GetConnectionAsync.[AnonymousMethod__38_0](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L282) callback waits to enter lock([SyncObj](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L284)) that is held by Thread A
2. SyncObj can never be released by Thread A because Thread A is going to spin-wait infinitely unless Thread B makes progress.
**Conclusion**
Both threads **cannot move**, which confirms the deadlock.
**Workarounds**
1. Cancel the requests to the same endpoint serially by the application. The request cancellation could be queued and then processed sequentially on a single worker thread. Or the cancellation threads could be synchronized by a lock.
2. If possible, oo not set MaxConnectionsPerServer property. | reli | sockethttphandler set up with maximumconnectionsperserver could deadlock on concurrent request cancellation karelz stephentoub after the deadlock hits the process has to be restarted if continued to be used the visible symptoms are the inability to communicate with a certain endpoint the process may eventually run out of available threads repro project tested in windows on sdk compile the console app and run it would produce output similar to running the test no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed deadlock detected requests are not completed finished the test press any key to exit the deadlock is caused by a race condition meaning it would strike after a random count of the test repetitions on each new application run the constant values maximumconnectionsperserver and maxrequestcount can be modified to increase decrease probability of the deadlock but maxrequestcount must be higher than maximumconnectionsperserver to force some requests into connectionwaiter queue the current values and are the lowest possible they still reliably reproduce the issue and produce clean threads picture one may then attach to the running process or dump it to investigate the threads there would be deadlocked threads for example named a and b thread a system private corelib dll system threading spinwait spinonce int system private corelib dll system threading cancellationtokensource waitforcallbacktocomplete long id system net http dll system net http httpconnectionpool decrementconnectioncount system net http dll system net http httpconnection dispose bool 
disposing system net http dll system net http httpconnection registercancellation anonymousmethod object s system private corelib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state system private corelib dll system threading cancellationtokensource executecallbackhandlers bool throwonfirstexception system private corelib dll system threading cancellationtokensource executecallbackhandlers bool throwonfirstexception deadlockinsocketshandler dll deadlockinsocketshandler program deadlocktestcore anonymousmethod line system private corelib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state system private corelib dll system threading tasks task executewiththreadlocal ref system threading tasks task currenttaskslot system private corelib dll system threading threadpoolworkqueue dispatch thread b system net http dll system net http httpconnectionpool getconnectionasync anonymousmethod object s system private corelib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state system private corelib dll system threading cancellationtokensource executecallbackhandlers bool throwonfirstexception system private corelib dll system threading cancellationtokensource executecallbackhandlers bool throwonfirstexception deadlockinsocketshandler dll deadlockinsocketshandler program deadlocktestcore anonymousmethod line system private corelib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state system private corelib dll system threading tasks task executewiththreadlocal ref system threading tasks task currenttaskslot system private corelib dll system threading threadpoolworkqueue 
dispatch explanation thread a httpconnectionpool decrementconnectioncount lock syncobj spin waits in cancellationtokensource waitforcallbacktocomplete for thread b to complete httpconnectionpool getconnectionasync callback thread b httpconnectionpool getconnectionasync callback waits to enter lock that is held by thread a syncobj can never be released thread a because it is going to spin wait infinitely unless thread b makes progress conclusion both threads cannot move that confirms the deadlock workarounds cancel the requests to the same endpoint serially by the application the request cancellation could be queued and then processed sequentially on a single a worker thread or the cancellation threads could be synchronized by a lock if possible oo not set maxconnectionsperserver property | 1 |
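Workaround 1 in the row above (serialize cancellations so their callbacks never run concurrently) can be sketched independently of .NET. The following is an illustrative Python sketch, not the HttpClient code from the issue; the `SerialCanceller` name and its API are invented for the example. Because every cancellation callback is drained by a single worker thread, two callbacks can never race for a shared pool lock the way Threads A and B do in the stack traces.

```python
import queue
import threading

class SerialCanceller:
    """Funnels cancellation callbacks through one worker thread so they
    execute strictly one at a time (illustrative sketch of workaround 1;
    not the .NET SocketsHttpHandler code from the issue)."""

    def __init__(self):
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def _drain(self):
        while True:
            cancel_fn = self._queue.get()
            if cancel_fn is None:   # sentinel: stop the worker
                break
            cancel_fn()             # callbacks run sequentially, never in parallel

    def cancel(self, cancel_fn):
        """Queue a cancellation; returns immediately, the worker runs it."""
        self._queue.put(cancel_fn)

    def shutdown(self):
        """Flush pending cancellations, then stop the worker."""
        self._queue.put(None)
        self._worker.join()
```

The lock-based variant mentioned in the same workaround would instead wrap each callback invocation in a shared `threading.Lock`, which serializes execution without a dedicated worker thread.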
1,003 | 2,566,913,588 | IssuesEvent | 2015-02-09 00:32:48 | JukkaL/mypy | https://api.github.com/repos/JukkaL/mypy | opened | Add documentation for Optional[t] | documentation pep484 priority | Document `Optional[t]`. In particular, mention that it's currently for documentation only (equivalent to `t` when type checking, since `Union[t, None]` is the same as `t`). | 1.0 | Add documentation for Optional[t] - Document `Optional[t]`. In particular, mention that it's currently for documentation only (equivalent to `t` when type checking, since `Union[t, None]` is the same as `t`). | non_reli | add documentation for optional document optional in particular mention that it s currently for documentation only equivalent to t when type checking since union is the same as t | 0
496 | 7,931,539,969 | IssuesEvent | 2018-07-07 01:20:25 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Visual Studio crashed when type { object = ref | 4 - In Review Area-Compilers Bug Tenet-Reliability | **Version Used**:
15.7.4
**Steps to Reproduce**:
open any method, or C# interactive and start writing:
var temp = new { object = ref
**Expected Behavior**:
An error indicating that the keyword "object" cannot be used here.
**Actual Behavior**:
Visual Studio crashed
| True | Visual Studio crashed when type { object = ref - **Version Used**:
15.7.4
**Steps to Reproduce**:
open any method, or C# interactive and start writing:
var temp = new { object = ref
**Expected Behavior**:
An error indicating that the keyword "object" cannot be used here.
**Actual Behavior**:
Visual Studio crashed
| reli | visual studio crashed when type object ref version used steps to reproduce open any method or c interactive and start writing var temp new object ref expected behavior error that object appears actual behavior visual studio crashed | 1 |
1,767 | 19,585,073,333 | IssuesEvent | 2022-01-05 05:11:34 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Intellisense, Syntax highlight and Error Window are badly broken with incremental source generator in Visual Studio 17.0.4 | Bug Area-IDE Tenet-Reliability Investigation Required | **Version Used**:
Visual Studio 2022 17.0.4
Microsoft.CodeAnalysis.Analyzers 3.3.3
Microsoft.CodeAnalysis.CSharp.Workspaces 4.0.1
**Steps to Reproduce**:
1. Clone the [repro project](https://github.com/fberasategui/SourceGenerator.IntellisenseBug.Demo)
2. Open `SourceGenerator.IntellisenseBug.Demo.sln` in Visual Studio 2022 17.0.4
3. Build the solution
4. Observe the following behavior:

**Expected Behavior**:
- Intellisense and syntax highlight should continue to work with source-generated code.
- Visual Studio should not report errors in the Errors Window when the build actually succeeds, as per the Output Window.
- List of generated files should be shown in solution explorer under `Analyzers -> SourceGen.IntellisenseBug.Demo` instead of "this generator is not generating any files." message.
**Actual Behavior**:
- Everything works fine up until variable `c11` is declared.
- Intellisense and syntax highlight are broken when `c11` is declared, and never recover afterwards.
- Visual Studio reports errors in the Errors Window, but the build successfully passes
- Solution Explorer shows "this generator is not generating files." message instead of properly showing the generated file.
**More details**:
The source generator creates a single file with several empty C# classes named `Class1`, `Class2`, `Class3` and so forth, depending on the number of classes declared in the consuming project in the first variable declaration:
- `var classes = 5;` means there will be `Class1` to `Class5`
- `var classes = 10;` means there will be `Class1` to `Class10`
- and so on.
Using `<ProjectReference/>` and `<PackageReference/>` to consume the source generator, in both cases the behavior is the same. | True | Intellisense, Syntax highlight and Error Window are badly broken with incremental source generator in Visual Studio 17.0.4 - **Version Used**:
Visual Studio 2022 17.0.4
Microsoft.CodeAnalysis.Analyzers 3.3.3
Microsoft.CodeAnalysis.CSharp.Workspaces 4.0.1
**Steps to Reproduce**:
1. Clone the [repro project](https://github.com/fberasategui/SourceGenerator.IntellisenseBug.Demo)
2. Open `SourceGenerator.IntellisenseBug.Demo.sln` in Visual Studio 2022 17.0.4
3. Build the solution
4. Observe the following behavior:

**Expected Behavior**:
- Intellisense and syntax highlight should continue to work with source-generated code.
- Visual Studio should not report errors in the Errors Window when the build actually succeeds, as per the Output Window.
- List of generated files should be shown in solution explorer under `Analyzers -> SourceGen.IntellisenseBug.Demo` instead of "this generator is not generating any files." message.
**Actual Behavior**:
- Everything works fine up until variable `c11` is declared.
- Intellisense and syntax highlight are broken when `c11` is declared, and never recover afterwards.
- Visual Studio reports errors in the Errors Window, but the build successfully passes
- Solution Explorer shows "this generator is not generating files." message instead of properly showing the generated file.
**More details**:
The source generator creates a single file with several empty C# classes named `Class1`, `Class2`, `Class3` and so forth, depending on the number of classes declared in the consuming project in the first variable declaration:
- `var classes = 5;` means there will be `Class1` to `Class5`
- `var classes = 10;` means there will be `Class1` to `Class10`
- and so on.
Using `<ProjectReference/>` and `<PackageReference/>` to consume the source generator, in both cases the behavior is the same. | reli | intellisense syntax highlight and error window are badly broken with incremental source generator in visual studio version used visual studio microsoft codeanalysis analyzers microsoft codeanalysis csharp workspaces steps to reproduce clone the open sourcegenerator intellisensebug demo sln in visual studio build the solution observe the following behavior expected behavior intellisense and syntax highlight should continue to work with source generated code visual studio should not report errors in the errors window when the build actually succeeds as per the output window list of generated files should be shown in solution explorer under analyzers sourcegen intellisensebug demo instead of this generator is not generating any files message actual behavior everything works fine up until variable is declared intellisense and syntax highlight is broken when is declared and never recover afterwards visual studio reports errors in the errors window but the build successfully passes solution explorer shows this generator is not generating files message instead of properly showing the generated file more details the source generator creates a single file with several empty c classes named and so forth depending on the number of classes declared in the consuming project in the first variable declaration var classes means there will be to var classes means there will be to and so on using and to consume the source generator in both cases the behavior is the same | 1 |
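The generator described in the row above (emit empty classes `Class1`..`ClassN`, where N comes from the first `var classes = N;` declaration in the consuming project) is simple to model. Below is an illustrative Python stand-in for that logic — the real repro is a C# incremental source generator, so the function name and the exact output formatting here are assumptions, not the repro's actual code. It also mirrors the "this generator is not generating any files." case by returning an empty string when no declaration is found.

```python
import re

def generate_classes(source: str) -> str:
    """Mimic the repro's generator: find the first 'var classes = N;'
    declaration and emit N empty C# classes as a single source string.
    (Illustrative Python stand-in for the C# incremental generator.)"""
    match = re.search(r"var\s+classes\s*=\s*(\d+)\s*;", source)
    if match is None:
        return ""  # nothing found -> no file generated
    count = int(match.group(1))
    return "\n".join(f"public class Class{i} {{ }}"
                     for i in range(1, count + 1))
```

So `var classes = 5;` in the consumer yields `Class1` through `Class5`, and raising the count to 11 is what introduces `Class11` — the point at which the IDE symptoms above first appear.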
915 | 11,592,662,190 | IssuesEvent | 2020-02-24 11:59:26 | sohaibaslam/learning_site | https://api.github.com/repos/sohaibaslam/learning_site | opened | Broken Crawlers 24, Feb 2020 | crawler broken/unreliable | 1. **24sevres eu(100%)/fr(100%)/uk(100%)/us(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **americaneagle ca(100%)**
1. **ami ch(100%)/cn(100%)/dk(100%)/jp(100%)/kr(100%)/li(100%)/pl(100%)/uk(100%)/us(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **beginningboutique au(100%)**
1. **bergdorfgoodman (100%)/cn(100%)/hk(100%)/jp(100%)/mo(100%)/ru(100%)/tw(100%)**
1. **bijoubrigitte nl(100%)**
1. **boyner tr(100%)**
1. **browns us(100%)**
1. **canda cz(100%)**
1. **charmingcharlie us(100%)**
1. **clarks eu(100%)**
1. **coach ca(100%)**
1. **conforama fr(100%)**
1. **converse at(100%)/au(100%)/kr(100%)/nl(100%)**
1. **dunelondon uk(100%)**
1. **eastbay us(100%)**
1. **endclothing fr(100%)**
1. **falabella cl(100%)/co(100%)**
1. **footaction us(100%)**
1. **footlocker be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/uk(100%)**
1. **furla au(100%)/de(100%)/it(100%)/uk(100%)/us(100%)**
1. **hermes ca(100%)**
1. **hm ae(100%)/kw(100%)/sa(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hunter (100%)**
1. **ikea pt(100%)**
1. **jimmyjazz us(100%)**
1. **klingel de(100%)**
1. **leroymerlin es(100%)**
1. **levi my(100%)**
1. **lifestylestores in(100%)**
1. **lncc kr(100%)**
1. **maccosmetics no(100%)**
1. **made ch(100%)**
1. **maisonsdumonde be(100%)/ch(100%)/it(100%)/uk(100%)**
1. **massimodutti eg(100%)/jo(100%)/qa(100%)**
1. **michaelkors ca(100%)**
1. **mothercare sa(100%)**
1. **muji de(100%)/fr(100%)/uk(100%)**
1. **myntradesktop in(100%)**
1. **overkill de(100%)/uk(100%)**
1. **oysho cn(100%)**
1. **parfois ad(100%)/al(100%)/am(100%)/ao(100%)/at(100%)/ba(100%)/be(100%)/bg(100%)/bh(100%)/br(100%)/by(100%)/ch(100%)/co(100%)/cz(100%)/de(100%)/dk(100%)/do(100%)/ee(100%)/eg(100%)/es(100%)/fi(100%)/fr(100%)/ge(100%)/gr(100%)/gt(100%)/hr(100%)/hu(100%)/ie(100%)/ir(100%)/it(100%)/jo(100%)/kw(100%)/lb(100%)/lt(100%)/lu(100%)/lv(100%)/ly(100%)/ma(100%)/mc(100%)/mk(100%)/mt(100%)/mx(100%)/mz(100%)/nl(100%)/om(100%)/pa(100%)/pe(100%)/ph(100%)/pl(100%)/pt(100%)/qa(100%)/ro(100%)/rs(100%)/sa(100%)/se(100%)/si(100%)/sk(100%)/tn(100%)/uk(100%)/us(100%)/ve(100%)/ye(100%)**
1. **patagonia ca(100%)/us(100%)**
1. **popup br(100%)**
1. **pullandbear kr(100%)/lb(100%)**
1. **rakuten us(100%)**
1. **reebok au(100%)**
1. **runnerspoint de(100%)**
1. **saksfifthavenue mo(100%)**
1. **selfridges us(100%)**
1. **shein de(100%)/se(100%)**
1. **simons ca(100%)**
1. **solebox de(100%)/uk(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stefaniamode au(100%)**
1. **stradivarius mo(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **superstep ua(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)**
1. **tommyhilfiger us(100%)**
1. **topbrands ru(100%)**
1. **toryburch de(100%)/eu(100%)/fr(100%)/it(100%)/uk(100%)**
1. **vans ru(100%)**
1. **vergegirl au(100%)/eu(100%)/nz(100%)/uk(100%)/us(100%)**
1. **watchshop ca(100%)/eu(100%)/ru(100%)/se(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **wenz de(100%)**
1. **xxl dk(100%)/fi(100%)/no(100%)/se(100%)**
1. **zalandolounge de(100%)**
1. **zara ba(100%)**
1. **zilingo id(100%)/my(100%)**
| True | Broken Crawlers 24, Feb 2020 - 1. **24sevres eu(100%)/fr(100%)/uk(100%)/us(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **americaneagle ca(100%)**
1. **ami ch(100%)/cn(100%)/dk(100%)/jp(100%)/kr(100%)/li(100%)/pl(100%)/uk(100%)/us(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **beginningboutique au(100%)**
1. **bergdorfgoodman (100%)/cn(100%)/hk(100%)/jp(100%)/mo(100%)/ru(100%)/tw(100%)**
1. **bijoubrigitte nl(100%)**
1. **boyner tr(100%)**
1. **browns us(100%)**
1. **canda cz(100%)**
1. **charmingcharlie us(100%)**
1. **clarks eu(100%)**
1. **coach ca(100%)**
1. **conforama fr(100%)**
1. **converse at(100%)/au(100%)/kr(100%)/nl(100%)**
1. **dunelondon uk(100%)**
1. **eastbay us(100%)**
1. **endclothing fr(100%)**
1. **falabella cl(100%)/co(100%)**
1. **footaction us(100%)**
1. **footlocker be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/uk(100%)**
1. **furla au(100%)/de(100%)/it(100%)/uk(100%)/us(100%)**
1. **hermes ca(100%)**
1. **hm ae(100%)/kw(100%)/sa(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hunter (100%)**
1. **ikea pt(100%)**
1. **jimmyjazz us(100%)**
1. **klingel de(100%)**
1. **leroymerlin es(100%)**
1. **levi my(100%)**
1. **lifestylestores in(100%)**
1. **lncc kr(100%)**
1. **maccosmetics no(100%)**
1. **made ch(100%)**
1. **maisonsdumonde be(100%)/ch(100%)/it(100%)/uk(100%)**
1. **massimodutti eg(100%)/jo(100%)/qa(100%)**
1. **michaelkors ca(100%)**
1. **mothercare sa(100%)**
1. **muji de(100%)/fr(100%)/uk(100%)**
1. **myntradesktop in(100%)**
1. **overkill de(100%)/uk(100%)**
1. **oysho cn(100%)**
1. **parfois ad(100%)/al(100%)/am(100%)/ao(100%)/at(100%)/ba(100%)/be(100%)/bg(100%)/bh(100%)/br(100%)/by(100%)/ch(100%)/co(100%)/cz(100%)/de(100%)/dk(100%)/do(100%)/ee(100%)/eg(100%)/es(100%)/fi(100%)/fr(100%)/ge(100%)/gr(100%)/gt(100%)/hr(100%)/hu(100%)/ie(100%)/ir(100%)/it(100%)/jo(100%)/kw(100%)/lb(100%)/lt(100%)/lu(100%)/lv(100%)/ly(100%)/ma(100%)/mc(100%)/mk(100%)/mt(100%)/mx(100%)/mz(100%)/nl(100%)/om(100%)/pa(100%)/pe(100%)/ph(100%)/pl(100%)/pt(100%)/qa(100%)/ro(100%)/rs(100%)/sa(100%)/se(100%)/si(100%)/sk(100%)/tn(100%)/uk(100%)/us(100%)/ve(100%)/ye(100%)**
1. **patagonia ca(100%)/us(100%)**
1. **popup br(100%)**
1. **pullandbear kr(100%)/lb(100%)**
1. **rakuten us(100%)**
1. **reebok au(100%)**
1. **runnerspoint de(100%)**
1. **saksfifthavenue mo(100%)**
1. **selfridges us(100%)**
1. **shein de(100%)/se(100%)**
1. **simons ca(100%)**
1. **solebox de(100%)/uk(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stefaniamode au(100%)**
1. **stradivarius mo(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **superstep ua(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)**
1. **tommyhilfiger us(100%)**
1. **topbrands ru(100%)**
1. **toryburch de(100%)/eu(100%)/fr(100%)/it(100%)/uk(100%)**
1. **vans ru(100%)**
1. **vergegirl au(100%)/eu(100%)/nz(100%)/uk(100%)/us(100%)**
1. **watchshop ca(100%)/eu(100%)/ru(100%)/se(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **wenz de(100%)**
1. **xxl dk(100%)/fi(100%)/no(100%)/se(100%)**
1. **zalandolounge de(100%)**
1. **zara ba(100%)**
1. **zilingo id(100%)/my(100%)**
| reli | broken crawlers feb eu fr uk us abcmart kr abercrombie cn hk jp americaneagle ca ami ch cn dk jp kr li pl uk us asos ae au ch cn hk id my nl ph pl ru sa sg th us vn babyshop ae sa beginningboutique au bergdorfgoodman cn hk jp mo ru tw bijoubrigitte nl boyner tr browns us canda cz charmingcharlie us clarks eu coach ca conforama fr converse at au kr nl dunelondon uk eastbay us endclothing fr falabella cl co footaction us footlocker be de dk es fr it lu nl no uk furla au de it uk us hermes ca hm ae kw sa hollister cn hk jp tw hunter ikea pt jimmyjazz us klingel de leroymerlin es levi my lifestylestores in lncc kr maccosmetics no made ch maisonsdumonde be ch it uk massimodutti eg jo qa michaelkors ca mothercare sa muji de fr uk myntradesktop in overkill de uk oysho cn parfois ad al am ao at ba be bg bh br by ch co cz de dk do ee eg es fi fr ge gr gt hr hu ie ir it jo kw lb lt lu lv ly ma mc mk mt mx mz nl om pa pe ph pl pt qa ro rs sa se si sk tn uk us ve ye patagonia ca us popup br pullandbear kr lb rakuten us reebok au runnerspoint de saksfifthavenue mo selfridges us shein de se simons ca solebox de uk splashfashions ae bh sa stefaniamode au stradivarius mo stylebop au ca cn de es fr hk jp kr mo sg us superstep ua tods cn gr pt tommybahama bh tommyhilfiger us topbrands ru toryburch de eu fr it uk vans ru vergegirl au eu nz uk us watchshop ca eu ru se wayfair ca de uk wenz de xxl dk fi no se zalandolounge de zara ba zilingo id my | 1 |
219 | 5,437,132,561 | IssuesEvent | 2017-03-06 05:19:34 | Azure/azure-webjobs-sdk-script | https://api.github.com/repos/Azure/azure-webjobs-sdk-script | opened | Need reliable way of waking up the runtime | reliability | Today, we rely on hitting / to wake it up, but from Portal and Scale controller. But that fails if user enables Proxies and takes over the root.
We need to come up with an alternative API to reliably wake up the runtime. | True | Need reliable way of waking up the runtime - Today, we rely on hitting / to wake it up, but from Portal and Scale controller. But that fails if user enables Proxies and takes over the root.
We need to come up with an alternative API to reliably wake up the runtime. | reli | need reliable way of waking up the runtime today we rely on hitting to wake it up but from portal and scale controller but that fails if user enables proxies and takes over the root we need to come up with an alternative api to reliably wake up the runtime | 1 |
1,790 | 19,844,869,484 | IssuesEvent | 2022-01-21 04:12:53 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Debug build of lazer crashes on launch when migrating to Realm | area:database type:reliability | # Steps to repro
1. Delete lazer data folder.
2. Checkout 2ad0ea35be5c170546b8cfca8f7cae0939567bcd and run that version. (Probably not related to that specific commit, it's just the commit I happened to be on when I first found the bug.)
3. Choose to start fresh at the specified location when prompted.
4. Close the game.
5. Check out master (at 1a207251628d50ae701ec73a4c47f0f27ca29930 right now) and run the game.
6. Game fails to launch with an IOException:
<details>
<summary>Stack Trace</summary>
```
2022-01-20 13:43:20 [verbose]: Beginning realm file store cleanup
2022-01-20 13:43:20 [verbose]: Finished realm file store cleanup (0 of 0 deleted)
2022-01-20 13:43:21 [important]: Your development database has been fully migrated to realm. If you switch back to a pre-realm branch and need your previous database, rename the backup file back to "client.db".
2022-01-20 13:43:21 [important]:
2022-01-20 13:43:21 [important]: Note that doing this can potentially leave your file store in a bad state.
2022-01-20 13:43:21 [error]: An unhandled error has occurred.
2022-01-20 13:43:21 [error]: System.IO.IOException: The process cannot access the file 'D:\Games\osu!lazer\data\client.db' because it is being used by another process.
2022-01-20 13:43:21 [error]: at System.IO.FileStream.ValidateFileHandle(SafeFileHandle fileHandle)
2022-01-20 13:43:21 [error]: at System.IO.FileStream.CreateFileOpenHandle(FileMode mode, FileShare share, FileOptions options)
2022-01-20 13:43:21 [error]: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
2022-01-20 13:43:21 [error]: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
2022-01-20 13:43:21 [error]: at System.IO.File.Open(String path, FileMode mode, FileAccess access, FileShare share)
2022-01-20 13:43:21 [error]: at osu.Framework.Platform.NativeStorage.GetStream(String path, FileAccess access, FileMode mode)
2022-01-20 13:43:21 [error]: at osu.Game.IO.WrappedStorage.GetStream(String path, FileAccess access, FileMode mode) in D:\Projects\OSU\osu\osu.Game\IO\WrappedStorage.cs:line 71
2022-01-20 13:43:21 [error]: at osu.Game.Database.DatabaseContextFactory.CreateBackup(String backupFilename) in D:\Projects\OSU\osu\osu.Game\Database\DatabaseContextFactory.cs:line 157
2022-01-20 13:43:21 [error]: at osu.Game.Database.EFToRealmMigrator.ensureBackup() in D:\Projects\OSU\osu\osu.Game\Database\EFToRealmMigrator.cs:line 409
2022-01-20 13:43:21 [error]: at osu.Game.Database.EFToRealmMigrator.migrateBeatmaps(OsuDbContext ef) in D:\Projects\OSU\osu\osu.Game\Database\EFToRealmMigrator.cs:line 80
2022-01-20 13:43:21 [error]: at osu.Game.Database.EFToRealmMigrator.Run() in D:\Projects\OSU\osu\osu.Game\Database\EFToRealmMigrator.cs:line 45
2022-01-20 13:43:21 [error]: at osu.Game.OsuGameBase.load(ReadableKeyCombinationProvider keyCombinationProvider) in D:\Projects\OSU\osu\osu.Game\OsuGameBase.cs:line 198
```
</details>
Here's how the data folder looks like after the crash: (Backup files are missing)

I can confirm that the migration itself is successful since the game launches just fine if I delete `client.db`. Modified settings are still persisted. It is only crashing in the backup creation part.
# Logs
[performance.log](https://github.com/ppy/osu/files/7905701/performance.log)
[runtime.log](https://github.com/ppy/osu/files/7905702/runtime.log)
[database.log](https://github.com/ppy/osu/files/7905703/database.log)
[network.log](https://github.com/ppy/osu/files/7905704/network.log)
| True | Debug build of lazer crashes on launch when migrating to Realm - # Steps to repro
1. Delete lazer data folder.
2. Checkout 2ad0ea35be5c170546b8cfca8f7cae0939567bcd and run that version. (Probably not related to that specific commit, it's just the commit I happened to be on when I first found the bug.)
3. Choose to start fresh at the specified location when prompted.
4. Close the game.
5. Check out master (at 1a207251628d50ae701ec73a4c47f0f27ca29930 right now) and run the game.
6. Game fails to launch with an IOException:
<details>
<summary>Stack Trace</summary>
```
2022-01-20 13:43:20 [verbose]: Beginning realm file store cleanup
2022-01-20 13:43:20 [verbose]: Finished realm file store cleanup (0 of 0 deleted)
2022-01-20 13:43:21 [important]: Your development database has been fully migrated to realm. If you switch back to a pre-realm branch and need your previous database, rename the backup file back to "client.db".
2022-01-20 13:43:21 [important]:
2022-01-20 13:43:21 [important]: Note that doing this can potentially leave your file store in a bad state.
2022-01-20 13:43:21 [error]: An unhandled error has occurred.
2022-01-20 13:43:21 [error]: System.IO.IOException: The process cannot access the file 'D:\Games\osu!lazer\data\client.db' because it is being used by another process.
2022-01-20 13:43:21 [error]: at System.IO.FileStream.ValidateFileHandle(SafeFileHandle fileHandle)
2022-01-20 13:43:21 [error]: at System.IO.FileStream.CreateFileOpenHandle(FileMode mode, FileShare share, FileOptions options)
2022-01-20 13:43:21 [error]: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
2022-01-20 13:43:21 [error]: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
2022-01-20 13:43:21 [error]: at System.IO.File.Open(String path, FileMode mode, FileAccess access, FileShare share)
2022-01-20 13:43:21 [error]: at osu.Framework.Platform.NativeStorage.GetStream(String path, FileAccess access, FileMode mode)
2022-01-20 13:43:21 [error]: at osu.Game.IO.WrappedStorage.GetStream(String path, FileAccess access, FileMode mode) in D:\Projects\OSU\osu\osu.Game\IO\WrappedStorage.cs:line 71
2022-01-20 13:43:21 [error]: at osu.Game.Database.DatabaseContextFactory.CreateBackup(String backupFilename) in D:\Projects\OSU\osu\osu.Game\Database\DatabaseContextFactory.cs:line 157
2022-01-20 13:43:21 [error]: at osu.Game.Database.EFToRealmMigrator.ensureBackup() in D:\Projects\OSU\osu\osu.Game\Database\EFToRealmMigrator.cs:line 409
2022-01-20 13:43:21 [error]: at osu.Game.Database.EFToRealmMigrator.migrateBeatmaps(OsuDbContext ef) in D:\Projects\OSU\osu\osu.Game\Database\EFToRealmMigrator.cs:line 80
2022-01-20 13:43:21 [error]: at osu.Game.Database.EFToRealmMigrator.Run() in D:\Projects\OSU\osu\osu.Game\Database\EFToRealmMigrator.cs:line 45
2022-01-20 13:43:21 [error]: at osu.Game.OsuGameBase.load(ReadableKeyCombinationProvider keyCombinationProvider) in D:\Projects\OSU\osu\osu.Game\OsuGameBase.cs:line 198
```
</details>
Here's how the data folder looks like after the crash: (Backup files are missing)

I can confirm that the migration itself is successful since the game launches just fine if I delete `client.db`. Modified settings are still persisted. It is only crashing in the backup creation part.
# Logs
[performance.log](https://github.com/ppy/osu/files/7905701/performance.log)
[runtime.log](https://github.com/ppy/osu/files/7905702/runtime.log)
[database.log](https://github.com/ppy/osu/files/7905703/database.log)
[network.log](https://github.com/ppy/osu/files/7905704/network.log)
| reli | debug build of lazer crashes on launch when migrating to realm steps to repro delete lazer data folder checkout and run that version probably not related to that specific commit it s just the commit i happened to be on when i first found the bug choose to start fresh at the specified location when prompted close the game check out master at right now and run the game game fails to launch with an ioexception stack trace beginning realm file store cleanup finished realm file store cleanup of deleted your development database has been fully migrated to realm if you switch back to a pre realm branch and need your previous database rename the backup file back to client db note that doing this can potentially leave your file store in a bad state an unhandled error has occurred system io ioexception the process cannot access the file d games osu lazer data client db because it is being used by another process at system io filestream validatefilehandle safefilehandle filehandle at system io filestream createfileopenhandle filemode mode fileshare share fileoptions options at system io filestream ctor string path filemode mode fileaccess access fileshare share buffersize fileoptions options at system io filestream ctor string path filemode mode fileaccess access fileshare share at system io file open string path filemode mode fileaccess access fileshare share at osu framework platform nativestorage getstream string path fileaccess access filemode mode at osu game io wrappedstorage getstream string path fileaccess access filemode mode in d projects osu osu osu game io wrappedstorage cs line at osu game database databasecontextfactory createbackup string backupfilename in d projects osu osu osu game database databasecontextfactory cs line at osu game database eftorealmmigrator ensurebackup in d projects osu osu osu game database eftorealmmigrator cs line at osu game database eftorealmmigrator migratebeatmaps osudbcontext ef in d projects osu osu osu game database 
eftorealmmigrator cs line at osu game database eftorealmmigrator run in d projects osu osu osu game database eftorealmmigrator cs line at osu game osugamebase load readablekeycombinationprovider keycombinationprovider in d projects osu osu osu game osugamebase cs line here s how the data folder looks like after the crash backup files are missing i can confirm that the migration itself is successful since the game launches just fine if i delete client db modified settings are still persisted it is only crashing in the backup creation part logs | 1 |
269 | 5,921,298,207 | IssuesEvent | 2017-05-22 22:37:51 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Crash writing a VB implements clause that did not resolve. | Area-IDE Bug Language-VB Tenet-Reliability |
```
Application: devenv.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.CodeAnalysis.VisualBasic.Completion.Providers.ImplementsClauseCompletionProvider.GetSymbolsWorker(SyntaxContext context, Int32 position, OptionSet options, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.Completion.Providers.AbstractSymbolCompletionProvider.GetSymbolsWorker(Int32 position, Boolean preselect, SyntaxContext context, OptionSet options, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.Completion.Providers.AbstractSymbolCompletionProvider.<GetItemsWorkerAsync>d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Completion.Providers.AbstractSymbolCompletionProvider.<ProvideCompletionsAsync>d__14.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Completion.CompletionServiceWithProviders.<GetContextAsync>d__31.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Completion.CompletionServiceWithProviders.<ComputeNonEmptyCompletionContextsAsync>d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)
at Microsoft.CodeAnalysis.Completion.CompletionServiceWithProviders.<GetCompletionsAsync>d__22.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.Completion.Controller.Session.ModelComputer.<GetCompletionListAsync>d__14.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.Completion.Controller.Session.ModelComputer.<DoInBackgroundAsync>d__13.MoveNext()
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception, System.Action`1<System.Exception>)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception)
at Roslyn.Utilities.TaskExtensions.ReportFatalErrorWorker(System.Threading.Tasks.Task, System.Object)
at System.Threading.Tasks.ContinuationTaskFromTask.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.StandardTaskContinuation.Run(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.Task.ContinueWithCore(System.Threading.Tasks.Task, System.Threading.Tasks.TaskScheduler, System.Threading.CancellationToken, System.Threading.Tasks.TaskContinuationOptions)
at System.Threading.Tasks.Task.ContinueWith(System.Action`2<System.Threading.Tasks.Task,System.Object>, System.Object, System.Threading.Tasks.TaskScheduler, System.Threading.CancellationToken, System.Threading.Tasks.TaskContinuationOptions, System.Threading.StackCrawlMark ByRef)
at System.Threading.Tasks.Task.ContinueWith(System.Action`2<System.Threading.Tasks.Task,System.Object>, System.Object, System.Threading.CancellationToken, System.Threading.Tasks.TaskContinuationOptions, System.Threading.Tasks.TaskScheduler)
at Roslyn.Utilities.TaskExtensions.ReportFatalError(System.Threading.Tasks.Task, System.Object)
at System.Threading.Tasks.ContinuationTaskFromResultTask`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.StandardTaskContinuation.Run(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.Task.FinishContinuations()
at System.Threading.Tasks.Task.FinishStageThree()
at System.Threading.Tasks.Task.FinishStageTwo()
at System.Threading.Tasks.Task.Finish(Boolean)
at System.Threading.Tasks.Task`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].TrySetException(System.Object)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].TrySetFromTask(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].ProcessInnerTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].ProcessCompletedOuterTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InvokeCore(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Invoke(System.Threading.Tasks.Task)
at System.Threading.Tasks.Task.FinishContinuations()
at System.Threading.Tasks.Task.FinishStageThree()
at System.Threading.Tasks.Task.FinishStageTwo()
at System.Threading.Tasks.Task.Finish(Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.TaskScheduler.TryExecuteTask(System.Threading.Tasks.Task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.Completion.PrioritizedTaskScheduler.ThreadStart()
at System.Threading.ThreadHelper.ThreadStart_Context(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
at System.Threading.ThreadHelper.ThreadStart()
``` | True | Crash writing a VB implements clause that did not resolve. -
```
Application: devenv.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.CodeAnalysis.VisualBasic.Completion.Providers.ImplementsClauseCompletionProvider.GetSymbolsWorker(SyntaxContext context, Int32 position, OptionSet options, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.Completion.Providers.AbstractSymbolCompletionProvider.GetSymbolsWorker(Int32 position, Boolean preselect, SyntaxContext context, OptionSet options, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.Completion.Providers.AbstractSymbolCompletionProvider.<GetItemsWorkerAsync>d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Completion.Providers.AbstractSymbolCompletionProvider.<ProvideCompletionsAsync>d__14.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Completion.CompletionServiceWithProviders.<GetContextAsync>d__31.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Completion.CompletionServiceWithProviders.<ComputeNonEmptyCompletionContextsAsync>d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)
at Microsoft.CodeAnalysis.Completion.CompletionServiceWithProviders.<GetCompletionsAsync>d__22.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.Completion.Controller.Session.ModelComputer.<GetCompletionListAsync>d__14.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.Completion.Controller.Session.ModelComputer.<DoInBackgroundAsync>d__13.MoveNext()
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception, System.Action`1<System.Exception>)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception)
at Roslyn.Utilities.TaskExtensions.ReportFatalErrorWorker(System.Threading.Tasks.Task, System.Object)
at System.Threading.Tasks.ContinuationTaskFromTask.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.StandardTaskContinuation.Run(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.Task.ContinueWithCore(System.Threading.Tasks.Task, System.Threading.Tasks.TaskScheduler, System.Threading.CancellationToken, System.Threading.Tasks.TaskContinuationOptions)
at System.Threading.Tasks.Task.ContinueWith(System.Action`2<System.Threading.Tasks.Task,System.Object>, System.Object, System.Threading.Tasks.TaskScheduler, System.Threading.CancellationToken, System.Threading.Tasks.TaskContinuationOptions, System.Threading.StackCrawlMark ByRef)
at System.Threading.Tasks.Task.ContinueWith(System.Action`2<System.Threading.Tasks.Task,System.Object>, System.Object, System.Threading.CancellationToken, System.Threading.Tasks.TaskContinuationOptions, System.Threading.Tasks.TaskScheduler)
at Roslyn.Utilities.TaskExtensions.ReportFatalError(System.Threading.Tasks.Task, System.Object)
at System.Threading.Tasks.ContinuationTaskFromResultTask`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.StandardTaskContinuation.Run(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.Task.FinishContinuations()
at System.Threading.Tasks.Task.FinishStageThree()
at System.Threading.Tasks.Task.FinishStageTwo()
at System.Threading.Tasks.Task.Finish(Boolean)
at System.Threading.Tasks.Task`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].TrySetException(System.Object)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].TrySetFromTask(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].ProcessInnerTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].ProcessCompletedOuterTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InvokeCore(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.__Canon, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Invoke(System.Threading.Tasks.Task)
at System.Threading.Tasks.Task.FinishContinuations()
at System.Threading.Tasks.Task.FinishStageThree()
at System.Threading.Tasks.Task.FinishStageTwo()
at System.Threading.Tasks.Task.Finish(Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.TaskScheduler.TryExecuteTask(System.Threading.Tasks.Task)
at Microsoft.CodeAnalysis.Editor.Implementation.IntelliSense.Completion.PrioritizedTaskScheduler.ThreadStart()
at System.Threading.ThreadHelper.ThreadStart_Context(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
at System.Threading.ThreadHelper.ThreadStart()
``` | reli | crash writing a vb implements clause that did not resolve application devenv exe framework version description the application requested process termination through system environment failfast string message message system nullreferenceexception object reference not set to an instance of an object at microsoft codeanalysis visualbasic completion providers implementsclausecompletionprovider getsymbolsworker syntaxcontext context position optionset options cancellationtoken cancellationtoken at microsoft codeanalysis completion providers abstractsymbolcompletionprovider getsymbolsworker position boolean preselect syntaxcontext context optionset options cancellationtoken cancellationtoken at microsoft codeanalysis completion providers abstractsymbolcompletionprovider d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis completion providers abstractsymbolcompletionprovider d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis completion completionservicewithproviders d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis completion completionservicewithproviders d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter 
handlenonsuccessanddebuggernotification task task at system runtime compilerservices taskawaiter validateend task task at microsoft codeanalysis completion completionservicewithproviders d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis editor implementation intellisense completion controller session modelcomputer d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis editor implementation intellisense completion controller session modelcomputer d movenext stack at system environment failfast system string system exception at microsoft codeanalysis failfast onfatalexception system exception at microsoft codeanalysis errorreporting fatalerror report system exception system action at microsoft codeanalysis errorreporting fatalerror report system exception at roslyn utilities taskextensions reportfatalerrorworker system threading tasks task system object at system threading tasks continuationtaskfromtask innerinvoke at system threading tasks task execute at system threading tasks task executioncontextcallback system object at system threading executioncontext runinternal system threading executioncontext system threading contextcallback system object boolean at system threading executioncontext run system threading executioncontext system threading contextcallback system object boolean at system threading tasks task executewiththreadlocal system threading tasks task byref at system threading tasks task executeentry boolean at system threading tasks threadpooltaskscheduler tryexecutetaskinline system threading tasks task 
boolean at system threading tasks taskscheduler tryruninline system threading tasks task boolean at system threading tasks taskcontinuation inlineifpossibleorelsequeue system threading tasks task boolean at system threading tasks standardtaskcontinuation run system threading tasks task boolean at system threading tasks task continuewithcore system threading tasks task system threading tasks taskscheduler system threading cancellationtoken system threading tasks taskcontinuationoptions at system threading tasks task continuewith system action system object system threading tasks taskscheduler system threading cancellationtoken system threading tasks taskcontinuationoptions system threading stackcrawlmark byref at system threading tasks task continuewith system action system object system threading cancellationtoken system threading tasks taskcontinuationoptions system threading tasks taskscheduler at roslyn utilities taskextensions reportfatalerror system threading tasks task system object at system threading tasks continuationtaskfromresulttask innerinvoke at system threading tasks task execute at system threading tasks task executioncontextcallback system object at system threading executioncontext runinternal system threading executioncontext system threading contextcallback system object boolean at system threading executioncontext run system threading executioncontext system threading contextcallback system object boolean at system threading tasks task executewiththreadlocal system threading tasks task byref at system threading tasks task executeentry boolean at system threading tasks threadpooltaskscheduler tryexecutetaskinline system threading tasks task boolean at system threading tasks taskscheduler tryruninline system threading tasks task boolean at system threading tasks taskcontinuation inlineifpossibleorelsequeue system threading tasks task boolean at system threading tasks standardtaskcontinuation run system threading tasks task boolean at system 
threading tasks task finishcontinuations at system threading tasks task finishstagethree at system threading tasks task finishstagetwo at system threading tasks task finish boolean at system threading tasks task trysetexception system object at system threading tasks unwrappromise trysetfromtask system threading tasks task boolean at system threading tasks unwrappromise processinnertask system threading tasks task at system threading tasks unwrappromise processcompletedoutertask system threading tasks task at system threading tasks unwrappromise invokecore system threading tasks task at system threading tasks unwrappromise invoke system threading tasks task at system threading tasks task finishcontinuations at system threading tasks task finishstagethree at system threading tasks task finishstagetwo at system threading tasks task finish boolean at system threading tasks task executewiththreadlocal system threading tasks task byref at system threading tasks task executeentry boolean at system threading tasks taskscheduler tryexecutetask system threading tasks task at microsoft codeanalysis editor implementation intellisense completion prioritizedtaskscheduler threadstart at system threading threadhelper threadstart context system object at system threading executioncontext runinternal system threading executioncontext system threading contextcallback system object boolean at system threading executioncontext run system threading executioncontext system threading contextcallback system object boolean at system threading executioncontext run system threading executioncontext system threading contextcallback system object at system threading threadhelper threadstart | 1 |
353 | 6,890,371,763 | IssuesEvent | 2017-11-22 13:48:54 | Storj/bridge | https://api.github.com/repos/Storj/bridge | closed | Uneven shard distribution and download traffic | reliability | ### Package Versions
Replace the values below using the output from `npm list storj-bridge`.
```
root@storj:~# storj --version
Storjcli: 4.0.0 | Core: 6.0.15
```
Replace the values below using the output from `node --version`.
```
v6.9.4
```
### Expected Behavior
Please describe the program's expected behavior. Include an example of your
usage code in the back ticks below if applicable.
I uploaded a tiny file. Farmer `e6ac6e2f73990df43daf3ca61f97e9210b7bbe80` received the shard and 5 other farmer received mirrors. If I try to download the file I would expect a even distribution beween all 6 farmer. They should all get a few download bytes.
```
root@storj:~# storj upload-file 57449d89dc436e0a48f041aa .storjcli/upload
[Tue Jan 31 2017 22:22:10 GMT+0100 (CET)] [info] 1 file(s) to upload.
[...] > Enter your passphrase to unlock your keyring > ********
[Tue Jan 31 2017 22:22:14 GMT+0100 (CET)] [info] Generating encryption key...
[Tue Jan 31 2017 22:22:14 GMT+0100 (CET)] [warn] [ upload ] Already exists in bucket. Uploading to (2017-01-31T21;22;14.937Z)-upload
[Tue Jan 31 2017 22:22:14 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Creating storage token... (retry: 0)
[Tue Jan 31 2017 22:22:15 GMT+0100 (CET)] [info] Encrypting file ".storjcli/upload"
[Tue Jan 31 2017 22:22:15 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Encryption complete
[Tue Jan 31 2017 22:22:15 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Storing file, hang tight!
[Tue Jan 31 2017 22:22:15 GMT+0100 (CET)] [info] Creating file staging frame
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Trying to upload shard /opt/tmp/9d7bfdf84b40 index 0
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Hash for this shard is: e78cbd02eacf2d603887687a2bfe3657c9353372
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Audit generation for shard done.
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Waiting on a storage offer from the network...
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Querying bridge for contract for e78cbd02eacf2d603887687a2bfe3657c9353372 (retry: 0)
[Tue Jan 31 2017 22:22:17 GMT+0100 (CET)] [info] Contract negotiated with: {"userAgent":"6.0.15","protocol":"1.1.0","address":"003s.storjlt.xyz","port":4601,"nodeID":"e6ac6e2f73990df43daf3ca61f97e9210b7bbe80","lastSeen":1485897735360}
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] Shard transfer completed! 0 remaining...
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] Transfer finished, creating entry.. (retry: 0)
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] sending exchange report
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Cleaning up...
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Finished cleaning!
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Encryption key saved to keyring.
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] File successfully stored in bucket.
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] Name: (2017-01-31T21;22;14.937Z)-upload, Type: application/octet-stream, Size: 1280 bytes, ID: 76cf610ece582d974724db91
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] 1 of 1 files uploaded
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] Done.
root@storj:~# storj list-mirrors 57449d89dc436e0a48f041aa 76cf610ece582d974724db91
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info]
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Established
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] -----------
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Shard: 0
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Hash: e78cbd02eacf2d603887687a2bfe3657c9353372
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://47.184.44.104:53261/e7c0f0d9ace3573ce1ef284ad646e75326e3854b
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://80.76.183.72:9101/e6b872ad219ecf753cc9228a550bdd234f6745e0
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client015.storj.dk:15056/e7871598b710ccf94f6630b5b9d9caed54f4d607
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://152.231.100.127:24054/e79aaac645c571dd8178b68065cfd5c40693f328
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://92.52.142.15:27832/e7e466fc25166bb2e7b85b740d4041083f1c1b3f
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info]
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Available
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] ---------
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Shard: 0
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Hash: e78cbd02eacf2d603887687a2bfe3657c9353372
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://89.176.149.45:37700/e7028eddc21f9ceaa4de94e80290f23382a49dcf
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client014.storj.dk:15050/e7c0f7c62b37d2fcf364dce4ee181e5c917d75ca
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client012.storj.dk:15056/e6ef38ec5173863cbf57ed87138449a81f25c345
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client021.storj.dk:15049/e672512fd7ac2091abe95e90bec433696844a083
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client015.storj.dk:15051/e697688f4d89b82f55105c9ee0bc84592cf16e13
```
### Actual Behavior
Please describe the program's actual behavior. Please include any stack traces
or log output in the back ticks below.
5 times in a row I downloaded from the same farmer `e7c0f0d9ace3573ce1ef284ad646e75326e3854b`. I did the same test 3 times with new uploaded files. The download traffic goes always to the same mirror farmer. The original farmer and the other 4 mirror farmer will get 0 traffic.
```
root@storj:~# storj get-pointers 57449d89dc436e0a48f041aa 76cf610ece582d974724db91
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Listing pointers for shards 0 - 0
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] -----------------------------------------
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info]
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Index: 0
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Hash: e78cbd02eacf2d603887687a2bfe3657c9353372
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Token: 876841e23eb5900ae5244b06b5f09706ab36584e
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Farmer: storj://47.184.44.104:53261/e7c0f0d9ace3573ce1ef284ad646e75326e3854b
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info]
```
### Steps to Reproduce
Please include the steps the reproduce the issue, numbered below. Include as
much detail as possible.
1. Upload a new file.
2. Wait until the bridge created 5 mirrors.
3. Download the files multiple times and track the nodeIDs.
| True | Uneven shard distribution and download traffic - ### Package Versions
Replace the values below using the output from `npm list storj-bridge`.
```
root@storj:~# storj --version
Storjcli: 4.0.0 | Core: 6.0.15
```
Replace the values below using the output from `node --version`.
```
v6.9.4
```
### Expected Behavior
Please describe the program's expected behavior. Include an example of your
usage code in the back ticks below if applicable.
I uploaded a tiny file. Farmer `e6ac6e2f73990df43daf3ca61f97e9210b7bbe80` received the shard and 5 other farmer received mirrors. If I try to download the file I would expect a even distribution beween all 6 farmer. They should all get a few download bytes.
```
root@storj:~# storj upload-file 57449d89dc436e0a48f041aa .storjcli/upload
[Tue Jan 31 2017 22:22:10 GMT+0100 (CET)] [info] 1 file(s) to upload.
[...] > Enter your passphrase to unlock your keyring > ********
[Tue Jan 31 2017 22:22:14 GMT+0100 (CET)] [info] Generating encryption key...
[Tue Jan 31 2017 22:22:14 GMT+0100 (CET)] [warn] [ upload ] Already exists in bucket. Uploading to (2017-01-31T21;22;14.937Z)-upload
[Tue Jan 31 2017 22:22:14 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Creating storage token... (retry: 0)
[Tue Jan 31 2017 22:22:15 GMT+0100 (CET)] [info] Encrypting file ".storjcli/upload"
[Tue Jan 31 2017 22:22:15 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Encryption complete
[Tue Jan 31 2017 22:22:15 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Storing file, hang tight!
[Tue Jan 31 2017 22:22:15 GMT+0100 (CET)] [info] Creating file staging frame
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Trying to upload shard /opt/tmp/9d7bfdf84b40 index 0
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Hash for this shard is: e78cbd02eacf2d603887687a2bfe3657c9353372
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Audit generation for shard done.
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Waiting on a storage offer from the network...
[Tue Jan 31 2017 22:22:16 GMT+0100 (CET)] [info] Querying bridge for contract for e78cbd02eacf2d603887687a2bfe3657c9353372 (retry: 0)
[Tue Jan 31 2017 22:22:17 GMT+0100 (CET)] [info] Contract negotiated with: {"userAgent":"6.0.15","protocol":"1.1.0","address":"003s.storjlt.xyz","port":4601,"nodeID":"e6ac6e2f73990df43daf3ca61f97e9210b7bbe80","lastSeen":1485897735360}
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] Shard transfer completed! 0 remaining...
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] Transfer finished, creating entry.. (retry: 0)
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] sending exchange report
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Cleaning up...
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Finished cleaning!
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] Encryption key saved to keyring.
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] [ (2017-01-31T21;22;14.937Z)-upload ] File successfully stored in bucket.
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] Name: (2017-01-31T21;22;14.937Z)-upload, Type: application/octet-stream, Size: 1280 bytes, ID: 76cf610ece582d974724db91
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] 1 of 1 files uploaded
[Tue Jan 31 2017 22:22:18 GMT+0100 (CET)] [info] Done.
root@storj:~# storj list-mirrors 57449d89dc436e0a48f041aa 76cf610ece582d974724db91
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info]
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Established
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] -----------
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Shard: 0
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Hash: e78cbd02eacf2d603887687a2bfe3657c9353372
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://47.184.44.104:53261/e7c0f0d9ace3573ce1ef284ad646e75326e3854b
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://80.76.183.72:9101/e6b872ad219ecf753cc9228a550bdd234f6745e0
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client015.storj.dk:15056/e7871598b710ccf94f6630b5b9d9caed54f4d607
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://152.231.100.127:24054/e79aaac645c571dd8178b68065cfd5c40693f328
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://92.52.142.15:27832/e7e466fc25166bb2e7b85b740d4041083f1c1b3f
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info]
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Available
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] ---------
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Shard: 0
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] Hash: e78cbd02eacf2d603887687a2bfe3657c9353372
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://89.176.149.45:37700/e7028eddc21f9ceaa4de94e80290f23382a49dcf
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client014.storj.dk:15050/e7c0f7c62b37d2fcf364dce4ee181e5c917d75ca
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client012.storj.dk:15056/e6ef38ec5173863cbf57ed87138449a81f25c345
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client021.storj.dk:15049/e672512fd7ac2091abe95e90bec433696844a083
[Tue Jan 31 2017 22:22:47 GMT+0100 (CET)] [info] storj://client015.storj.dk:15051/e697688f4d89b82f55105c9ee0bc84592cf16e13
```
### Actual Behavior
Please describe the program's actual behavior. Please include any stack traces
or log output in the back ticks below.
5 times in a row I downloaded from the same farmer `e7c0f0d9ace3573ce1ef284ad646e75326e3854b`. I did the same test 3 times with new uploaded files. The download traffic goes always to the same mirror farmer. The original farmer and the other 4 mirror farmer will get 0 traffic.
```
root@storj:~# storj get-pointers 57449d89dc436e0a48f041aa 76cf610ece582d974724db91
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Listing pointers for shards 0 - 0
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] -----------------------------------------
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info]
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Index: 0
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Hash: e78cbd02eacf2d603887687a2bfe3657c9353372
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Token: 876841e23eb5900ae5244b06b5f09706ab36584e
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info] Farmer: storj://47.184.44.104:53261/e7c0f0d9ace3573ce1ef284ad646e75326e3854b
[Tue Jan 31 2017 22:31:08 GMT+0100 (CET)] [info]
```
### Steps to Reproduce
Please include the steps the reproduce the issue, numbered below. Include as
much detail as possible.
1. Upload a new file.
2. Wait until the bridge created 5 mirrors.
3. Download the files multiple times and track the nodeIDs.
| reli | uneven shard distribution and download traffic package versions replace the values below using the output from npm list storj bridge root storj storj version storjcli core replace the values below using the output from node version expected behavior please describe the program s expected behavior include an example of your usage code in the back ticks below if applicable i uploaded a tiny file farmer received the shard and other farmer received mirrors if i try to download the file i would expect a even distribution beween all farmer they should all get a few download bytes root storj storj upload file storjcli upload file s to upload enter your passphrase to unlock your keyring generating encryption key already exists in bucket uploading to upload creating storage token retry encrypting file storjcli upload encryption complete storing file hang tight creating file staging frame trying to upload shard opt tmp index hash for this shard is audit generation for shard done waiting on a storage offer from the network querying bridge for contract for retry contract negotiated with useragent protocol address storjlt xyz port nodeid lastseen shard transfer completed remaining transfer finished creating entry retry sending exchange report cleaning up finished cleaning encryption key saved to keyring file successfully stored in bucket name upload type application octet stream size bytes id of files uploaded done root storj storj list mirrors established shard hash storj storj storj storj dk storj storj available shard hash storj storj storj dk storj storj dk storj storj dk storj storj dk actual behavior please describe the program s actual behavior please include any stack traces or log output in the back ticks below times in a row i downloaded from the same farmer i did the same test times with new uploaded files the download traffic goes always to the same mirror farmer the original farmer and the other mirror farmer will get traffic root storj storj get pointers 
listing pointers for shards index hash token farmer storj steps to reproduce please include the steps the reproduce the issue numbered below include as much detail as possible upload a new file wait until the bridge created mirrors download the files multiple times and track the nodeids | 1 |
46,759 | 13,055,971,615 | IssuesEvent | 2020-07-30 03:16:23 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | [Serialization] (boost)serialization of raw and shared_ptr is broken (Trac #1832) | Incomplete Migration Migrated from Trac combo core defect | Migrated from https://code.icecube.wisc.edu/ticket/1832
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:54",
"description": "the serialization of shared pointers through the icecube::serialization interface is not working.\n\nerror below.\n\ncweaver suspects it hanging on the 'enable_if' switch.\n\n{{{\n[ 95%] Building CXX object IceHiveZ/CMakeFiles/IceHiveZ.dir/private/IceHiveZ/internals/Relation.cxx.o\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.cxx:12:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.h:20:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/serialization.h:32:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/portable_binary_archive.hpp:8:\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr_helper.hpp:181:30: error: no matching\n member function for call to 'insert'\n result = m_o_sp->insert(std::make_pair(oid, s));\n ~~~~~~~~^~~~~~\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr.hpp:171:7: note: in instantiation of\n function template specialization 'icecube::serialization::shared_ptr_helper<boost::shared_ptr>::reset<const\n OMKeyHash::CompactOMKeyHashService>' requested here\n h.reset(t,r); \n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/split_free.hpp:58:9: note: in instantiation of\n function template specialization 'icecube::serialization::load<icecube::archive::portable_binary_iarchive, const\n OMKeyHash::CompactOMKeyHashService>' requested here\n load(ar, t, v);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/split_free.hpp:74:12: note: in instantiation of member\n function 'icecube::serialization::free_loader<icecube::archive::portable_binary_iarchive, boost::shared_ptr<const\n OMKeyHash::CompactOMKeyHashService> >::invoke' requested 
here\n typex::invoke(ar, t, file_version);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr.hpp:187:29: note: in instantiation of\n function template specialization 'icecube::serialization::split_free<icecube::archive::portable_binary_iarchive,\n boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> >' requested here\n icecube::serialization::split_free(ar, t, file_version);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/serialization.hpp:128:9: note: in instantiation of\n function template specialization 'icecube::serialization::serialize<icecube::archive::portable_binary_iarchive, const\n OMKeyHash::CompactOMKeyHashService>' requested here\n serialize(ar, t, v);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/iserializer.hpp:179:29: note: (skipping 20 contexts\n in backtrace; use -ftemplate-backtrace-limit=0 to see all)\n icecube::serialization::serialize_adl(\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/common_iarchive.hpp:66:18: note: in instantiation of\n function template specialization 'icecube::archive::load<icecube::archive::portable_binary_iarchive, const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n archive::load(* this->This(), t);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/portable_binary_archive.hpp:144:10: note: in instantiation of\n function template specialization\n 'icecube::archive::detail::common_iarchive<icecube::archive::portable_binary_iarchive>::load_override<const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n ::load_override(t, static_cast<int>(version));\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/interface_iarchive.hpp:60:23: note: in 
instantiation\n of function template specialization 'icecube::archive::portable_binary_iarchive::load_override<const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n this->This()->load_override(t, 0);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/interface_iarchive.hpp:67:32: note: in instantiation\n of function template specialization\n 'icecube::archive::detail::interface_iarchive<icecube::archive::portable_binary_iarchive>::operator>><const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n return *(this->This()) >> t;\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.h:150:6: note: in instantiation of function\n template specialization 'icecube::archive::detail::interface_iarchive<icecube::archive::portable_binary_iarchive>::operator&<const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n ar & icecube::serialization::make_nvp(\"hasher\",hasher_);\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1089:9: note: candidate function not viable:\n no known conversion from 'pair<typename __make_pair_return<const void *&>::type, shared_ptr<const\n OMKeyHash::CompactOMKeyHashService>>' to 'const pair<const key_type, shared_ptr<void>>' for 1st argument\n insert(const value_type& __v) {return __tree_.__insert_unique(__v);}\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1118:10: note: candidate function not viable:\n no known conversion from 'pair<typename __make_pair_return<const void *&>::type, typename __make_pair_return<shared_ptr<const\n CompactOMKeyHashService> &>::type>' (aka 'pair<const void *, boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> >') to\n 
'initializer_list<value_type>' (aka 'initializer_list<pair<const void *const, boost::shared_ptr<void> > >') for 1st argument\n void insert(initializer_list<value_type> __il)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1074:42: note: candidate template ignored:\n disabled by 'enable_if' [with _Pp = std::__1::pair<const void *, boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> >]\n class = typename enable_if<is_constructible<value_type, _Pp>::value>::type>\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1082:18: note: candidate function template\n not viable: requires 2 arguments, but 1 was provided\n iterator insert(const_iterator __pos, _Pp&& __p)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1109:14: note: candidate function template\n not viable: requires 2 arguments, but 1 was provided\n void insert(_InputIterator __f, _InputIterator __l)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1093:9: note: candidate function not viable:\n requires 2 arguments, but 1 was provided\n insert(const_iterator __p, const value_type& __v)\n}}}",
"reporter": "mzoll",
"cc": "olivas",
"resolution": "invalid",
"_ts": "1550067174476394",
"component": "combo core",
"summary": "[Serialization] (boost)serialization of raw and shared_ptr is broken",
"priority": "blocker",
"keywords": "serilaization shared_ptr",
"time": "2016-08-19T19:46:49",
"milestone": "",
"owner": "cweaver",
"type": "defect"
}
```
| 1.0 | [Serialization] (boost)serialization of raw and shared_ptr is broken (Trac #1832) - Migrated from https://code.icecube.wisc.edu/ticket/1832
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:54",
"description": "the serialization of shared pointers through the icecube::serialization interface is not working.\n\nerror below.\n\ncweaver suspects it hanging on the 'enable_if' switch.\n\n{{{\n[ 95%] Building CXX object IceHiveZ/CMakeFiles/IceHiveZ.dir/private/IceHiveZ/internals/Relation.cxx.o\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.cxx:12:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.h:20:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/serialization.h:32:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/portable_binary_archive.hpp:8:\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr_helper.hpp:181:30: error: no matching\n member function for call to 'insert'\n result = m_o_sp->insert(std::make_pair(oid, s));\n ~~~~~~~~^~~~~~\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr.hpp:171:7: note: in instantiation of\n function template specialization 'icecube::serialization::shared_ptr_helper<boost::shared_ptr>::reset<const\n OMKeyHash::CompactOMKeyHashService>' requested here\n h.reset(t,r); \n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/split_free.hpp:58:9: note: in instantiation of\n function template specialization 'icecube::serialization::load<icecube::archive::portable_binary_iarchive, const\n OMKeyHash::CompactOMKeyHashService>' requested here\n load(ar, t, v);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/split_free.hpp:74:12: note: in instantiation of member\n function 'icecube::serialization::free_loader<icecube::archive::portable_binary_iarchive, boost::shared_ptr<const\n OMKeyHash::CompactOMKeyHashService> >::invoke' requested 
here\n typex::invoke(ar, t, file_version);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr.hpp:187:29: note: in instantiation of\n function template specialization 'icecube::serialization::split_free<icecube::archive::portable_binary_iarchive,\n boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> >' requested here\n icecube::serialization::split_free(ar, t, file_version);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/serialization.hpp:128:9: note: in instantiation of\n function template specialization 'icecube::serialization::serialize<icecube::archive::portable_binary_iarchive, const\n OMKeyHash::CompactOMKeyHashService>' requested here\n serialize(ar, t, v);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/iserializer.hpp:179:29: note: (skipping 20 contexts\n in backtrace; use -ftemplate-backtrace-limit=0 to see all)\n icecube::serialization::serialize_adl(\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/common_iarchive.hpp:66:18: note: in instantiation of\n function template specialization 'icecube::archive::load<icecube::archive::portable_binary_iarchive, const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n archive::load(* this->This(), t);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/portable_binary_archive.hpp:144:10: note: in instantiation of\n function template specialization\n 'icecube::archive::detail::common_iarchive<icecube::archive::portable_binary_iarchive>::load_override<const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n ::load_override(t, static_cast<int>(version));\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/interface_iarchive.hpp:60:23: note: in 
instantiation\n of function template specialization 'icecube::archive::portable_binary_iarchive::load_override<const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n this->This()->load_override(t, 0);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/interface_iarchive.hpp:67:32: note: in instantiation\n of function template specialization\n 'icecube::archive::detail::interface_iarchive<icecube::archive::portable_binary_iarchive>::operator>><const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n return *(this->This()) >> t;\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.h:150:6: note: in instantiation of function\n template specialization 'icecube::archive::detail::interface_iarchive<icecube::archive::portable_binary_iarchive>::operator&<const\n icecube::serialization::nvp<boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> > >' requested here\n ar & icecube::serialization::make_nvp(\"hasher\",hasher_);\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1089:9: note: candidate function not viable:\n no known conversion from 'pair<typename __make_pair_return<const void *&>::type, shared_ptr<const\n OMKeyHash::CompactOMKeyHashService>>' to 'const pair<const key_type, shared_ptr<void>>' for 1st argument\n insert(const value_type& __v) {return __tree_.__insert_unique(__v);}\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1118:10: note: candidate function not viable:\n no known conversion from 'pair<typename __make_pair_return<const void *&>::type, typename __make_pair_return<shared_ptr<const\n CompactOMKeyHashService> &>::type>' (aka 'pair<const void *, boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> >') to\n 
'initializer_list<value_type>' (aka 'initializer_list<pair<const void *const, boost::shared_ptr<void> > >') for 1st argument\n void insert(initializer_list<value_type> __il)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1074:42: note: candidate template ignored:\n disabled by 'enable_if' [with _Pp = std::__1::pair<const void *, boost::shared_ptr<const OMKeyHash::CompactOMKeyHashService> >]\n class = typename enable_if<is_constructible<value_type, _Pp>::value>::type>\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1082:18: note: candidate function template\n not viable: requires 2 arguments, but 1 was provided\n iterator insert(const_iterator __pos, _Pp&& __p)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1109:14: note: candidate function template\n not viable: requires 2 arguments, but 1 was provided\n void insert(_InputIterator __f, _InputIterator __l)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1093:9: note: candidate function not viable:\n requires 2 arguments, but 1 was provided\n insert(const_iterator __p, const value_type& __v)\n}}}",
"reporter": "mzoll",
"cc": "olivas",
"resolution": "invalid",
"_ts": "1550067174476394",
"component": "combo core",
"summary": "[Serialization] (boost)serialization of raw and shared_ptr is broken",
"priority": "blocker",
"keywords": "serilaization shared_ptr",
"time": "2016-08-19T19:46:49",
"milestone": "",
"owner": "cweaver",
"type": "defect"
}
```
| non_reli | boost serialization of raw and shared ptr is broken trac migrated from json status closed changetime description the serialization of shared pointers through the icecube serialization interface is not working n nerror below n ncweaver suspects it hanging on the enable if switch n n n building cxx object icehivez cmakefiles icehivez dir private icehivez internals relation cxx o nin file included from data user mzoll meta projects icerec trunk src icehivez private icehivez internals relation cxx nin file included from data user mzoll meta projects icerec trunk src icehivez private icehivez internals relation h nin file included from data user mzoll meta projects icerec trunk src icetray public icetray serialization h nin file included from data user mzoll meta projects icerec trunk src icetray public icetray portable binary archive hpp n data user mzoll meta projects icerec trunk src serialization public serialization shared ptr helper hpp error no matching n member function for call to insert n result m o sp insert std make pair oid s n n data user mzoll meta projects icerec trunk src serialization public serialization shared ptr hpp note in instantiation of n function template specialization icecube serialization shared ptr helper reset requested here n h reset t r n n data user mzoll meta projects icerec trunk src serialization public serialization split free hpp note in instantiation of n function template specialization icecube serialization load requested here n load ar t v n n data user mzoll meta projects icerec trunk src serialization public serialization split free hpp note in instantiation of member n function icecube serialization free loader invoke requested here n typex invoke ar t file version n n data user mzoll meta projects icerec trunk src serialization public serialization shared ptr hpp note in instantiation of n function template specialization icecube serialization split free requested here n icecube serialization split free ar t 
file version n n data user mzoll meta projects icerec trunk src serialization public serialization serialization hpp note in instantiation of n function template specialization icecube serialization serialize requested here n serialize ar t v n n data user mzoll meta projects icerec trunk src serialization public archive detail iserializer hpp note skipping contexts n in backtrace use ftemplate backtrace limit to see all n icecube serialization serialize adl n n data user mzoll meta projects icerec trunk src serialization public archive detail common iarchive hpp note in instantiation of n function template specialization icecube archive load requested here n archive load this this t n n data user mzoll meta projects icerec trunk src icetray public icetray portable binary archive hpp note in instantiation of n function template specialization n icecube archive detail common iarchive load override requested here n load override t static cast version n n data user mzoll meta projects icerec trunk src serialization public archive detail interface iarchive hpp note in instantiation n of function template specialization icecube archive portable binary iarchive load override requested here n this this load override t n n data user mzoll meta projects icerec trunk src serialization public archive detail interface iarchive hpp note in instantiation n of function template specialization n icecube archive detail interface iarchive operator requested here n return this this t n n data user mzoll meta projects icerec trunk src icehivez private icehivez internals relation h note in instantiation of function n template specialization icecube archive detail interface iarchive operator requested here n ar icecube serialization make nvp hasher hasher n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function not viable n no known conversion from pair type shared ptr to const pair for argument n insert const value type v return tree insert 
unique v n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function not viable n no known conversion from pair type typename make pair return type aka pair to n initializer list aka initializer list for argument n void insert initializer list il n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate template ignored n disabled by enable if n class typename enable if value type n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function template n not viable requires arguments but was provided n iterator insert const iterator pos pp p n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function template n not viable requires arguments but was provided n void insert inputiterator f inputiterator l n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function not viable n requires arguments but was provided n insert const iterator p const value type v n reporter mzoll cc olivas resolution invalid ts component combo core summary boost serialization of raw and shared ptr is broken priority blocker keywords serilaization shared ptr time milestone owner cweaver type defect | 0 |
115,921 | 11,890,261,674 | IssuesEvent | 2020-03-28 17:32:37 | ISPP-LinkedPet/backend_isppet | https://api.github.com/repos/ISPP-LinkedPet/backend_isppet | opened | LinkedPet announcement | documentation | - Production of the announcement for the application with the videos provided in Google Drive | 1.0 | LinkedPet announcement - - Production of the announcement for the application with the videos provided in Google Drive | non_reli | linkedpet announcement production of the announcement for the application with the videos provided in google drive | 0
318,265 | 27,296,543,028 | IssuesEvent | 2023-02-23 20:53:27 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Generalization test for the Servidores - Dados dos Servidores tag - Dores de Campos | generalization test development | DoD: Perform the generalization test of the validator for the Servidores - Dados dos Servidores tag for the municipality of Dores de Campos. | 1.0 | Generalization test for the Servidores - Dados dos Servidores tag - Dores de Campos - DoD: Perform the generalization test of the validator for the Servidores - Dados dos Servidores tag for the municipality of Dores de Campos. | non_reli | generalization test for the servidores dados dos servidores tag dores de campos dod perform the generalization test of the validator for the servidores dados dos servidores tag for the municipality of dores de campos | 0
212,264 | 7,229,884,267 | IssuesEvent | 2018-02-12 00:02:25 | sussol/mobile | https://api.github.com/repos/sussol/mobile | opened | Crash when hardware back button 2 modals open | Bug Priority: Medium | Build Number: 2.1.0-rc5
Description: Can crash the app by pressing hardware back button when 2 modals are open.
Reproducible: yes
Reproduction Steps:
Open a selector modal (such as item selector), and while that is open click finalise to bring that modal up. Press hardware back button twice and you will crash to android main screen.
Comments: Probably should be able to open the finalise modal anyway when adding items to an invoice.
| 1.0 | Crash when hardware back button 2 modals open - Build Number: 2.1.0-rc5
Description: Can crash the app by pressing hardware back button when 2 modals are open.
Reproducible: yes
Reproduction Steps:
Open a selector modal (such as item selector), and while that is open click finalise to bring that modal up. Press hardware back button twice and you will crash to android main screen.
Comments: Probably should be able to open the finalise modal anyway when adding items to an invoice.
| non_reli | crash when hardware back button modals open build number description can crash the app by pressing hardware back button when modals are open reproducible yes reproduction steps open a selector modal such as item selector and while that is open click finalise to bring that modal up press hardware back button twice and you will crash to android main screen comments probably should be able to open the finalise modal anyway when adding items to an invoice | 0 |
278,975 | 30,702,430,413 | IssuesEvent | 2023-07-27 01:29:34 | artsking/packages_apps_settings_10.0.0_r33 | https://api.github.com/repos/artsking/packages_apps_settings_10.0.0_r33 | reopened | CVE-2023-20955 (High) detected in Settingsandroid-10.0.0_r33 | Mend: dependency security vulnerability | ## CVE-2023-20955 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Settingsandroid-10.0.0_r33</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/packages/apps/Settings>https://android.googlesource.com/platform/packages/apps/Settings</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/packages_apps_settings_10.0.0_r33/commit/081b5699d08adc751bd29d01eff86bb13c550019">081b5699d08adc751bd29d01eff86bb13c550019</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/src/com/android/settings/applications/appinfo/AppInfoDashboardFragment.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In onPrepareOptionsMenu of AppInfoDashboardFragment.java, there is a possible way to bypass admin restrictions and uninstall applications for all users due to a missing permission check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11, Android-12, Android-12L, Android-13. Android ID: A-258653813
<p>Publish Date: 2023-03-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20955>CVE-2023-20955</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://android.googlesource.com/platform/packages/apps/Settings/+/f3b323e378ee5d98875711216cbd92f4fa795fc0">https://android.googlesource.com/platform/packages/apps/Settings/+/f3b323e378ee5d98875711216cbd92f4fa795fc0</a></p>
<p>Release Date: 2023-03-24</p>
<p>Fix Resolution: android-13.0.0_r32</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2023-20955 (High) detected in Settingsandroid-10.0.0_r33 - ## CVE-2023-20955 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Settingsandroid-10.0.0_r33</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/packages/apps/Settings>https://android.googlesource.com/platform/packages/apps/Settings</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/packages_apps_settings_10.0.0_r33/commit/081b5699d08adc751bd29d01eff86bb13c550019">081b5699d08adc751bd29d01eff86bb13c550019</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/src/com/android/settings/applications/appinfo/AppInfoDashboardFragment.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In onPrepareOptionsMenu of AppInfoDashboardFragment.java, there is a possible way to bypass admin restrictions and uninstall applications for all users due to a missing permission check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11, Android-12, Android-12L, Android-13. Android ID: A-258653813
<p>Publish Date: 2023-03-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20955>CVE-2023-20955</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://android.googlesource.com/platform/packages/apps/Settings/+/f3b323e378ee5d98875711216cbd92f4fa795fc0">https://android.googlesource.com/platform/packages/apps/Settings/+/f3b323e378ee5d98875711216cbd92f4fa795fc0</a></p>
<p>Release Date: 2023-03-24</p>
<p>Fix Resolution: android-13.0.0_r32</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_reli | cve high detected in settingsandroid cve high severity vulnerability vulnerable library settingsandroid library home page a href found in head commit a href found in base branch master vulnerable source files src com android settings applications appinfo appinfodashboardfragment java vulnerability details in onprepareoptionsmenu of appinfodashboardfragment java there is a possible way to bypass admin restrictions and uninstall applications for all users due to a missing permission check this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with mend | 0 |
1,339 | 15,093,422,389 | IssuesEvent | 2021-02-07 00:27:12 | FoundationDB/fdb-kubernetes-operator | https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator | opened | Getting status JSON through the client library rather than the CLI | reliability | Getting status JSON from the CLI can be fragile, since the output from the CLI is not guaranteed to be exclusively a valid JSON document. For instance, if the database is slow, the status JSON command can emit a warning message. Instead, we could try getting it through the client library from the `\xff\xff/status/json` key. | True | Getting status JSON through the client library rather than the CLI - Getting status JSON from the CLI can be fragile, since the output from the CLI is not guaranteed to be exclusively a valid JSON document. For instance, if the database is slow, the status JSON command can emit a warning message. Instead, we could try getting it through the client library from the `\xff\xff/status/json` key. | reli | getting status json through the client library rather than the cli getting status json from the cli can be fragile since the output from the cli is not guaranteed to be exclusively a valid json document for instance if the database is slow the status json command can emit a warning message instead we could try getting it through the client library from the xff xff status json key | 1 |
644 | 9,367,856,514 | IssuesEvent | 2019-04-03 07:09:21 | LiskHQ/lisk | https://api.github.com/repos/LiskHQ/lisk | closed | Change "eslint" and "prettier --write" command order in .lintstagedrc.json | :hammer: reliability enhancement refactoring | ### Expected behavior
Code must be linted after prettier changes.
```json
{
"*.js": ["prettier --write", "eslint", "git add"],
"*.{json,md}": ["prettier --write", "git add"]
}
```
### Actual behavior
Code is being rewritten by prettier after eslint checks. This is creating eslint errors.
```json
{
"*.js": ["eslint", "prettier --write", "git add"],
"*.{json,md}": ["prettier --write", "git add"]
}
```
### Steps to reproduce
### Which version(s) does this affect? (Environment, OS, etc...)
| True | Change "eslint" and "prettier --write" command order in .lintstagedrc.json - ### Expected behavior
Code must be linted after prettier changes.
```json
{
"*.js": ["prettier --write", "eslint", "git add"],
"*.{json,md}": ["prettier --write", "git add"]
}
```
### Actual behavior
Code is being rewritten by prettier after eslint checks. This is creating eslint errors.
```json
{
"*.js": ["eslint", "prettier --write", "git add"],
"*.{json,md}": ["prettier --write", "git add"]
}
```
### Steps to reproduce
### Which version(s) does this affect? (Environment, OS, etc...)
| reli | change eslint and prettier write command order in lintstagedrc json expected behavior code must be linted after prettier changes json js json md actual behavior code is being rewritten by prettier after eslint checks this is creating eslint errors json js json md steps to reproduce which version s does this affect environment os etc | 1 |
91,278 | 11,491,364,296 | IssuesEvent | 2020-02-11 18:51:07 | the-tale/the-tale | https://api.github.com/repos/the-tale/the-tale | closed | Projects: mechanics rework | comp_game_logic cont_game_designe est_medium type_improvement | Perhaps it makes sense not to split the inner circle into two parts (allies, opponents).
Discussion: http://the-tale.org/forum/threads/1539?page=108#m185882 | 1.0 | Projects: mechanics rework - Perhaps it makes sense not to split the inner circle into two parts (allies, opponents).
Discussion: http://the-tale.org/forum/threads/1539?page=108#m185882 | non_reli | projects mechanics rework perhaps it makes sense not to split the inner circle into two parts allies opponents discussion | 0
299,794 | 25,927,946,996 | IssuesEvent | 2022-12-16 07:09:26 | apache/shardingsphere | https://api.github.com/repos/apache/shardingsphere | closed | Remove spring dependency in shardingsphere-integration-test-scaling | type: refactor in: test | There is a JDBCTemplate class used in shardingsphere-integration-test-scaling, which is unnecessary.
Please use raw JDBC to access the database and keep the module's dependencies simpler. | 1.0 | Remove spring dependency in shardingsphere-integration-test-scaling - There is a JDBCTemplate class used in shardingsphere-integration-test-scaling, which is unnecessary.
Please use raw JDBC to access the database and keep the module's dependencies simpler. | non_reli | remove spring dependency in shardingsphere integration test scaling there is a jdbctemplate class used in shardingsphere integration test scaling which is unnecessary please use raw jdbc to access the database and keep the module s dependencies simpler | 0
1,345 | 15,115,636,213 | IssuesEvent | 2021-02-09 04:59:52 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | closed | NPE in SqlDatabaseOperationsImpl.listBySqlServer | Mgmt Track 1 customer-reported question tenet-reliability | Hi, sporadically NPE happens here (pretty rare), there is no way to reproduce
```
java.lang.NullPointerException
at com.microsoft.azure.management.sql.implementation.SqlDatabaseOperationsImpl.listBySqlServer(SqlDatabaseOperationsImpl.java:162)
at com.microsoft.azure.management.sql.implementation.SqlDatabaseOperationsImpl.list(SqlDatabaseOperationsImpl.java:218)
```
Azure SDK for Java 1.30.0 | True | NPE in SqlDatabaseOperationsImpl.listBySqlServer - Hi, sporadically NPE happens here (pretty rare), there is no way to reproduce
```
java.lang.NullPointerException
at com.microsoft.azure.management.sql.implementation.SqlDatabaseOperationsImpl.listBySqlServer(SqlDatabaseOperationsImpl.java:162)
at com.microsoft.azure.management.sql.implementation.SqlDatabaseOperationsImpl.list(SqlDatabaseOperationsImpl.java:218)
```
Azure SDK for Java 1.30.0 | reli | npe in sqldatabaseoperationsimpl listbysqlserver hi sporadically npe happens here pretty rare there is no way to reproduce java lang nullpointerexception at com microsoft azure management sql implementation sqldatabaseoperationsimpl listbysqlserver sqldatabaseoperationsimpl java at com microsoft azure management sql implementation sqldatabaseoperationsimpl list sqldatabaseoperationsimpl java azure sdk for java | 1 |
592,935 | 17,934,461,657 | IssuesEvent | 2021-09-10 13:43:04 | dotCMS/core | https://api.github.com/repos/dotCMS/core | opened | Performance issues on Users portlet with many roles present | Type : Bug Severity : Support Requested Severity : Support Priority | **Describe the bug**
When there are many roles present (100s), the Users portlet's performance is drastically impacted. We are currently pulling all of the roles repeatedly: every role is pulled individually in a call to `api/role/loadbyid/id/{roleId}`.
The roles are pulled on 3 different occasions that I can find:
When you click a user
When you click the user details tab
When you add a role
Reproduced on: 21.09, 5.3.8.5
Ticket: https://dotcms.zendesk.com/agent/tickets/105139
**To Reproduce**
1. Open your network tab in dev tools
2. Navigate to the Users portlet
3. Select a user and watch the requests in the network tab
4. Go to roles, add a role, watch the requests again.
5. Go back to user details, watch the requests again.
**Expected behavior**
Roles do not typically change often, so a single load of them when you land on the Users portlet should suffice, and they should be loaded in a single call if possible.
| 1.0 | Performance issues on Users portlet with many roles present - **Describe the bug**
When there are many roles present (100s), the Users portlet's performance is drastically impacted. We are currently pulling all of the roles repeatedly: every role is pulled individually in a call to `api/role/loadbyid/id/{roleId}`.
The roles are pulled on 3 different occasions that I can find:
When you click a user
When you click the user details tab
When you add a role
Reproduced on: 21.09, 5.3.8.5
Ticket: https://dotcms.zendesk.com/agent/tickets/105139
**To Reproduce**
1. Open your network tab in dev tools
2. Navigate to the Users portlet
3. Select a user and watch the requests in the network tab
4. Go to roles, add a role, watch the requests again.
5. Go back to user details, watch the requests again.
**Expected behavior**
Roles do not typically change often, so a single load of them when you land on the Users portlet should suffice, and they should be loaded in a single call if possible.
| non_reli | performance issues on users portlet with many roles present describe the bug when there are many roles present the users portlet performance is drastically impacted we currently are pulling all of the roles repeatedly every role is pulled individually in call to api role loadbyid id roleid the roles are pulled on different occasions that i can find when you click a user when you click the user details tab when you add a role reproduced on ticket to reproduce open your network tab in dev tools navigate to the users portlet select a user and watch the requests in the network tab go to roles add a role watch the requests again go back to user details watch the requests again expected behavior roles do not typically change often so a single load of them when you land on the users portlet should suffice and they should be loaded in a single call if possible | 0 |
632,695 | 20,204,650,623 | IssuesEvent | 2022-02-11 18:50:15 | kubernetes/release | https://api.github.com/repos/kubernetes/release | closed | krel release-notes with --create-website-pr errors with git ssh fingerprint | kind/bug sig/release area/release-eng needs-priority | #### What happened:
Trying to generate the PRs to update the website https://kubernetes.io/releases/ with missing release notes for patch releases.
Running the command
```
krel release-notes --fix --fork csantanapr --tag v1.22.1 --list-v2 --create-website-pr --log-level trace
```
Noticed I'm only using the flag `--create-website-pr`
If you using both flags `--create-draft-pr` and `--create-website-pr` it works because the git clone will be done with https (create-draft-pr) and the create-website function will skip the git clone using ssh
Here is the background on the problem https://github.com/go-git/go-git/issues/411
I proposed we switch to https cloning since for k/k, this is consistence and it always works
#### What you expected to happen:
To not have errors and creates the PR
#### How to reproduce it (as minimally and precisely as possible):
- Create rsa private key pair for github
- Setup known hosts file
- Run the command
```
krel release-notes --fix --fork csantanapr --tag v1.22.1 --list-v2 --create-website-pr --log-level trace
```
#### Anything else we need to know?:
```
❯ krel release-notes --fix --fork csantanapr --tag v1.23.1 --list-v2 --create-website-pr
INFO Checking if a PR can be created from csantanapr/release-notes
Create website pull request? (Y/n) (1/10)
y
INFO Generating release notes for tag v1.23.1
INFO Cloning kubernetes/sig-release to read mapping files
INFO Cloning kubernetes/kubernetes
ERRO Clone repository failed. Tracked progress:
FATA creating website PR: generating release notes in JSON format: cloning default github repo: unable to clone repo: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
```
It errors with git ssh problems like this. Notice that it is trying to git clone over ssh: `git clone git@github.com/kubernetes/kubernetes`
`DEBU Using repository url "git@github.com:kubernetes/kubernetes" file="git/git.go:371`
I have a private ssh key in `~/.ssh/id_rsa`
And have the correct known_hosts
```
ssh-keyscan github.com > ~/.ssh/known_hosts
```
I'm able to git clone using `git`
```
git clone git@github.com/kubernetes/kubernetes
```
I can even test the ssh connection
```
$ ssh -T git@github.com
Hi mojotx! You've successfully authenticated, but GitHub does not provide shell access.
```
#### Environment:
- OS (e.g: `cat /etc/os-release`): MacOS and Ubuntu
| 1.0 | krel release-notes with --create-website-pr errors with git ssh fingerprint - #### What happened:
Trying to generate the PRs to update the website https://kubernetes.io/releases/ with missing release notes for patch releases.
Running the command
```
krel release-notes --fix --fork csantanapr --tag v1.22.1 --list-v2 --create-website-pr --log-level trace
```
Noticed I'm only using the flag `--create-website-pr`
If you using both flags `--create-draft-pr` and `--create-website-pr` it works because the git clone will be done with https (create-draft-pr) and the create-website function will skip the git clone using ssh
Here is the background on the problem https://github.com/go-git/go-git/issues/411
I proposed we switch to https cloning since for k/k, this is consistence and it always works
#### What you expected to happen:
To not have errors and creates the PR
#### How to reproduce it (as minimally and precisely as possible):
- Create rsa private key pair for github
- Setup known hosts file
- Run the command
```
krel release-notes --fix --fork csantanapr --tag v1.22.1 --list-v2 --create-website-pr --log-level trace
```
#### Anything else we need to know?:
```
❯ krel release-notes --fix --fork csantanapr --tag v1.23.1 --list-v2 --create-website-pr
INFO Checking if a PR can be created from csantanapr/release-notes
Create website pull request? (Y/n) (1/10)
y
INFO Generating release notes for tag v1.23.1
INFO Cloning kubernetes/sig-release to read mapping files
INFO Cloning kubernetes/kubernetes
ERRO Clone repository failed. Tracked progress:
FATA creating website PR: generating release notes in JSON format: cloning default github repo: unable to clone repo: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
```
It errors with git ssh problems like this. Notice that it is trying to git clone over ssh: `git clone git@github.com/kubernetes/kubernetes`
`DEBU Using repository url "git@github.com:kubernetes/kubernetes" file="git/git.go:371`
I have a private ssh key in `~/.ssh/id_rsa`
And have the correct known_hosts
```
ssh-keyscan github.com > ~/.ssh/known_hosts
```
I'm able to git clone using `git`
```
git clone git@github.com/kubernetes/kubernetes
```
I can even test the ssh connection
```
$ ssh -T git@github.com
Hi mojotx! You've successfully authenticated, but GitHub does not provide shell access.
```
#### Environment:
- OS (e.g: `cat /etc/os-release`): MacOS and Ubuntu
| non_reli | krel release notes with create website pr errors with git ssh fingerprint what happened trying to generate the prs to update the website with missing release notes for patch releases running the command krel release notes fix fork csantanapr tag list create website pr log level trace noticed i m only using the flag create website pr if you using both flags create draft pr and create website pr it works because the git clone will be done with https create draft pr and the create website function will skip the git clone using ssh here is the background on the problem i proposed we switch to https cloning since for k k this is consistence and it always works what you expected to happen to not have errors and creates the pr how to reproduce it as minimally and precisely as possible create rsa private key pair for github setup known hosts file run the command krel release notes fix fork csantanapr tag list create website pr log level trace anything else we need to know ❯ krel release notes fix fork csantanapr tag list create website pr info checking if a pr can be created from csantanapr release notes create website pull request y n y info generating release notes for tag info cloning kubernetes sig release to read mapping files info cloning kubernetes kubernetes erro clone repository failed tracked progress fata creating website pr generating release notes in json format cloning default github repo unable to clone repo ssh handshake failed ssh unable to authenticate attempted methods no supported methods remain it errors with git ssh problems like this notice that is trying to git clone with ssh git clone git github com kubernetes kubernetes debu using repository url git github com kubernetes kubernetes file git git go i have a private ssh key in ssh id rsa and have the correct known hosts ssh keyscan github com ssh known hosts i m able to git clone using git git clone git github com kubernetes kubernetes i can even test the ssh connection ssh t git github com hi mojotx you ve successfully authenticated but github does not provide shell access environment os e g cat etc os release macos and ubuntu | 0 |
126,567 | 17,933,739,131 | IssuesEvent | 2021-09-10 12:53:21 | microsoft/AzureTRE | https://api.github.com/repos/microsoft/AzureTRE | closed | [Story] All egress traffic routed through Azure Firewall | enhancement story security | ## Description
As a cloud administrator
I want to ensure that all egress traffic is managed in the Azure Firewall
So that I have one place only to allow egress traffic
The TRE solution network is designed as a hub and spoke where all egress traffic should be routed through the Azure Firewall.
## Acceptance criteria
- [x] All management subnets routes egress traffic through the Azure Firewall (have the default route associated)
- [x] #736
- [x] #738
- [x] #739
- [x] #735
- [x] #745
- [x] #746
- [x] #747
- [x] #748 | True | [Story] All egress traffic routed through Azure Firewall - ## Description
As a cloud administrator
I want to ensure that all egress traffic is managed in the Azure Firewall
So that I have one place only to allow egress traffic
The TRE solution network is designed as a hub and spoke where all egress traffic should be routed through the Azure Firewall.
## Acceptance criteria
- [x] All management subnets routes egress traffic through the Azure Firewall (have the default route associated)
- [x] #736
- [x] #738
- [x] #739
- [x] #735
- [x] #745
- [x] #746
- [x] #747
- [x] #748 | non_reli | all egress traffic routed through azure firewall description as a cloud administrator i want to ensure that all egress traffic is managed in the azure firewall so that i have one place only to allow egress traffic the tre solution network is designed as a hub and spoke where all egress traffic should be routed through the azure firewall acceptance criteria all management subnets routes egress traffic through the azure firewall have the default route associated | 0 |
2,391 | 25,101,664,148 | IssuesEvent | 2022-11-08 14:03:29 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Mixed benchmarks always fall in a weird state after several hours | kind/bug severity/high area/reliability | **Describe the bug**
I have mentioned earlier, that for me it looks like the mixed benchmarks are in a weird state. We can see always really high backpressure which is from my side not expected.

The Interesting is that this only happens after several hours of execution.

Furthermore, this also happens even after fixing #10532 so this seem to be not related!
We can see this behavior on all our recent mixed benchmarks.
The following processes are executed in them.



[With that load:](https://github.com/camunda/zeebe/blob/medic-cw-benchmarks/setupKWBenchmark.sh#L57-L67)
```
# reduce the load ~ so we roughly reach 200 PI
sed_inplace "s/rate=100/rate=50/" timer.yaml
sed_inplace "s/rate=100/rate=50/" publisher.yaml
sed_inplace "s/rate=200/rate=100/" starter.yaml
```
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Take the part from the Medic benchmark creation:
https://github.com/camunda/zeebe/blob/medic-cw-benchmarks/setupKWBenchmark.sh#L57-L67
It seems to always happen after 5 hours !
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
**Expected behavior**
We have the same backpressure and processing as in the first ~5 hours.
<!-- A clear and concise description of what you expected to happen. -->
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
<STACKTRACE>
```
</p>
</details>
**Environment:**
- OS: <!-- [e.g. Linux] -->
- Zeebe Version: 8.1.x <!-- [e.g. 0.20.0] -->
- Configuration: <!-- [e.g. exporters etc.] -->
| True | Mixed benchmarks always fall in a weird state after several hours - **Describe the bug**
I have mentioned earlier, that for me it looks like the mixed benchmarks are in a weird state. We can see always really high backpressure which is from my side not expected.

The Interesting is that this only happens after several hours of execution.

Furthermore, this also happens even after fixing #10532 so this seem to be not related!
We can see this behavior on all our recent mixed benchmarks.
The following processes are executed in them.



[With that load:](https://github.com/camunda/zeebe/blob/medic-cw-benchmarks/setupKWBenchmark.sh#L57-L67)
```
# reduce the load ~ so we roughly reach 200 PI
sed_inplace "s/rate=100/rate=50/" timer.yaml
sed_inplace "s/rate=100/rate=50/" publisher.yaml
sed_inplace "s/rate=200/rate=100/" starter.yaml
```
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Take the part from the Medic benchmark creation:
https://github.com/camunda/zeebe/blob/medic-cw-benchmarks/setupKWBenchmark.sh#L57-L67
It seems to always happen after 5 hours !
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
**Expected behavior**
We have the same backpressure and processing as in the first ~5 hours.
<!-- A clear and concise description of what you expected to happen. -->
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
<STACKTRACE>
```
</p>
</details>
**Environment:**
- OS: <!-- [e.g. Linux] -->
- Zeebe Version: 8.1.x <!-- [e.g. 0.20.0] -->
- Configuration: <!-- [e.g. exporters etc.] -->
| reli | mixed benchmarks always fall in a weird state after several hours describe the bug i have mentioned earlier that for me it looks like the mixed benchmarks are in a weird state we can see always really high backpressure which is from my side not expected the interesting is that this only happens after several hours of execution furthermore this also happens even after fixing so this seem to be not related we can see this behavior on all our recent mixed benchmarks the following processes are executed in them reduce the load so we roughly reach pi sed inplace s rate rate timer yaml sed inplace s rate rate publisher yaml sed inplace s rate rate starter yaml to reproduce take the part from the medic benchmark creation it seems to always happen after hours steps to reproduce the behavior if possible add a minimal reproducer code sample when using the java client expected behavior we have the same backpressure and processing as in the first hours log stacktrace full stacktrace environment os zeebe version x configuration | 1 |
289,938 | 21,796,738,893 | IssuesEvent | 2022-05-15 18:50:17 | kukeiko/entity-space | https://api.github.com/repos/kukeiko/entity-space | closed | new example app | documentation | ## What
Start a new example app, and look at using primng
## Why
I want to create an app that we can actually use. the current products is a bit useless for that.
| 1.0 | new example app - ## What
Start a new example app, and look at using primng
## Why
I want to create an app that we can actually use. the current products is a bit useless for that.
| non_reli | new example app what start a new example app and look at using primng why i want to create an app that we can actually use the current products is a bit useless for that | 0 |
33,300 | 9,098,500,899 | IssuesEvent | 2019-02-20 00:12:37 | gini/dexter | https://api.github.com/repos/gini/dexter | closed | ID and Secret-id is still required even when the binary has been built with the said parameters | build question | Hey Team,
I tried out Dexter on an Ubuntu 18 host and looks to be id & secret id are still required to get a successful login.
Built the binary with following parameters:
`CLIENT_ID=529839178945-toh9cj642someid7hvu7id9n5.apps.googleusercontent.com CLIENT_SECRET=VzK7vrksome_secret3ts6AfZa6Diz OS=linux make`
When a login attempt being made via dexter as follows getting a http 400.
`$dexter auth
`
However when all the parameters are specified as follows can log in just fine.
`dexter auth -i xxxxxx -s ffdfdff`
Appreciate your feedback guys, could be something that I might have missed.
| 1.0 | ID and Secret-id is still required even when the binary has been built with the said parameters - Hey Team,
I tried out Dexter on an Ubuntu 18 host and looks to be id & secret id are still required to get a successful login.
Built the binary with following parameters:
`CLIENT_ID=529839178945-toh9cj642someid7hvu7id9n5.apps.googleusercontent.com CLIENT_SECRET=VzK7vrksome_secret3ts6AfZa6Diz OS=linux make`
When a login attempt being made via dexter as follows getting a http 400.
`$dexter auth
`
However when all the parameters are specified as follows can log in just fine.
`dexter auth -i xxxxxx -s ffdfdff`
Appreciate your feedback guys, could be something that I might have missed.
| non_reli | id and secret id is still required even when the binary has been built with the said parameters hey team i tried out dexter on an ubuntu host and looks to be id secret id are still required to get a successful login built the binary with following parameters client id apps googleusercontent com client secret os linux make when a login attempt being made via dexter as follows getting a http dexter auth however when all the parameters are specified as follows can log in just fine dexter auth i xxxxxx s ffdfdff appreciate your feedback guys could be something that i might have missed | 0 |
1,246 | 14,264,992,694 | IssuesEvent | 2020-11-20 16:29:20 | FoundationDB/fdb-kubernetes-operator | https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator | opened | Failing reconciliation when an instance does not have a pod defined | reliability | In the UpdateStatus action, we check that each instance has a pod defined, and then run checks on the pods to make sure things are ready. If containers are not ready, we add the pod to the failing pods list. I think when an instance does not have a pod, we should treat it as failing, or otherwise add it to a list that will cause reconciliation to get blocked. This will prevent the user from getting confused and thinking the cluster is in a fully healthy state when in reality there are processes that are not configured at all. | True | Failing reconciliation when an instance does not have a pod defined - In the UpdateStatus action, we check that each instance has a pod defined, and then run checks on the pods to make sure things are ready. If containers are not ready, we add the pod to the failing pods list. I think when an instance does not have a pod, we should treat it as failing, or otherwise add it to a list that will cause reconciliation to get blocked. This will prevent the user from getting confused and thinking the cluster is in a fully healthy state when in reality there are processes that are not configured at all. 
| reli | failing reconciliation when an instance does not have a pod defined in the updatestatus action we check that each instance has a pod defined and then run checks on the pods to make sure things are ready if containers are not ready we add the pod to the failing pods list i think when an instance does not have a pod we should treat it as failing or otherwise add it to a list that will cause reconciliation to get blocked this will prevent the user from getting confused and thinking the cluster is in a fully healthy state when in reality there are processes that are not configured at all | 1 |
129 | 4,122,388,020 | IssuesEvent | 2016-06-09 01:56:57 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | SIGABRT_ASSERT_libcoreclr.so!WKS::gc_heap::seg_mapping_table_add_segment | bug GC reliability | **The notes in this bug refer to the Ubuntu.14.04 dump [rc3-24202-00_0150](https://rapreqs.blob.core.windows.net/sschaab/BodyPart_b37ef701-45ae-42e3-af72-0c154ca00fde?sv=2015-04-05&sr=b&sig=n8qJCuj2ej%2Bh%2Bb9BUxOwqZV8XWXezf1Jpy66aHn2aBY%3D&st=2016-06-03T22%3A18%3A14Z&se=2017-06-03T22%3A18%3A14Z&sp=r). Other dumps are available if needed.**
STOP_REASON:
SIGABRT
FAULT_SYMBOL:
libcoreclr.so!WKS::gc_heap::seg_mapping_table_add_segment
FAILURE_HASH:
SIGABRT_libcoreclr.so!WKS::gc_heap::seg_mapping_table_add_segment
FAULT_STACK:
libc.so.6!__GI_raise
libc.so.6!__GI_abort
libcoreclr.so!UNKNOWN
libcoreclr.so!sigtrap_handler(int, siginfo_t*, void*)
libclrjit.so!sigtrap_handler(int, siginfo_t*, void*)
libpthread.so.0!???
libcoreclr.so!DBG_DebugBreak
libcoreclr.so!DebugBreak
libcoreclr.so!DbgAssertDialog
libcoreclr.so!WKS::gc_heap::seg_mapping_table_add_segment(WKS::heap_segment*, WKS::gc_heap*)
libcoreclr.so!WKS::gc_heap::get_segment(unsigned long, int)
libcoreclr.so!WKS::gc_heap::get_segment_for_loh(unsigned long)
libcoreclr.so!WKS::gc_heap::get_large_segment(unsigned long, int*)
libcoreclr.so!WKS::gc_heap::loh_get_new_seg(WKS::generation*, unsigned long, int, int*, oom_reason*)
libcoreclr.so!WKS::gc_heap::allocate_large(int, unsigned long, alloc_context*, int)
libcoreclr.so!WKS::gc_heap::try_allocate_more_space(alloc_context*, unsigned long, int)
libcoreclr.so!WKS::gc_heap::allocate_more_space(alloc_context*, unsigned long, int)
libcoreclr.so!WKS::gc_heap::allocate_large_object(unsigned long, long&)
libcoreclr.so!WKS::GCHeap::Alloc(alloc_context*, unsigned long, unsigned int)
libcoreclr.so!Alloc(unsigned long, int, int)
libcoreclr.so!FastAllocatePrimitiveArray(MethodTable*, unsigned int, int)
libcoreclr.so!JIT_NewArr1(CORINFO_CLASS_STRUCT_*, long)
libcoreclr.so!JIT_NewArr1VC_MP_FastPortable(CORINFO_CLASS_STRUCT_*, long)
System.Runtime.Numerics.dll!System.Numerics.BigInteger.op_LeftShift(System.Numerics.BigInteger, Int32)
System.Runtime.Numerics.Tests.dll!System.Numerics.Tests.logTest.LargeValueLogTests(Int32, Int32, Int32, Int32)
System.Runtime.Numerics.Tests.dll!System.Numerics.Tests.logTest.RunLargeValueLogTests()
rc3-24202-00_0150.exe!stress.generated.UnitTests.UT8()
stress.execution.dll!stress.execution.UnitTest.Execute()
stress.execution.dll!stress.execution.DedicatedThreadWorkerStrategy.RunWorker(stress.execution.ITestPattern, System.Threading.CancellationToken)
stress.execution.dll!stress.execution.DedicatedThreadWorkerStrategy+<>c__DisplayClass1_0.<SpawnWorker>b__0()
System.Private.CoreLib.ni.dll!System.Threading.Tasks.Task.Execute()
System.Private.CoreLib.ni.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
System.Private.CoreLib.ni.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
System.Private.CoreLib.ni.dll!System.Threading.Tasks.Task.ExecuteEntry(Boolean)
System.Private.CoreLib.ni.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
libcoreclr.so!CallDescrWorkerInternal
libcoreclr.so!CallDescrWorkerWithHandler(CallDescrData*, int)
libcoreclr.so!MethodDescCallSite::CallTargetWorker(unsigned long const*)
libcoreclr.so!MethodDescCallSite::Call(unsigned long const*)
libcoreclr.so!ThreadNative::KickOffThread_Worker(void*)
libcoreclr.so!ManagedThreadBase_DispatchInner(ManagedThreadCallState*)
libcoreclr.so!ManagedThreadBase_DispatchMiddle(ManagedThreadCallState*)
libcoreclr.so!ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)::$_6::operator()(ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)::TryArgs*) const::{lambda(Param*)#1}::operator()(Param*) const
libcoreclr.so!ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)::$_6::operator()(ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)::TryArgs*) const
libcoreclr.so!ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)
libcoreclr.so!ManagedThreadBase_FullTransitionWithAD(ADID, void (*)(void*), void*, UnhandledExceptionLocation)
libcoreclr.so!ManagedThreadBase::KickOff(ADID, void (*)(void*), void*)
libcoreclr.so!ThreadNative::KickOffThread(void*)
libcoreclr.so!Thread::intermediateThreadProc(void*)
libcoreclr.so!CorUnix::CPalThread::ThreadEntry(void*)
libpthread.so.0!start_thread
libc.so.6!__clone
FAULT_THREAD:
thread #1: tid = 61178, 0x00007f67acc04cc9 libc.so.6`__GI_raise(sig=6) + 57 at raise.c:56, name = 'corerun', stop reason = signal SIGABRT
**Looking at the source for frame 9 (gc.cpp line 3498) it looks like ((size_t)(begin_entry->seg1) == ro_in_entry) evals to true so it asserts**
frame #9: 0x00007f67ac1d0787 libcoreclr.so`WKS::gc_heap::seg_mapping_table_add_segment(seg=0x00007f6424d68000, hp=0x0000000000000000) + 1591 at gc.cpp:3498
(lldb) fr v -D1
(WKS::heap_segment *) seg = 0x00007f6424d68000
(WKS::gc_heap *) hp = 0x0000000000000000
(size_t) seg_end = 140069031018495
(size_t) begin_index = 1043588
(WKS::seg_mapping *) begin_entry = 0x00007f66d16ff318
(size_t) end_index = 1043595
(WKS::seg_mapping *) end_entry = 0x00007f66d16ff3c0
(lldb) expr -- begin_entry->seg1
(WKS::heap_segment *) $0 = 0x00007f641cd68000
(lldb) expr -- *begin_entry->seg1
(WKS::heap_segment) $1 = {
allocated = 0x00007f641cd69000 <no value available>
committed = 0x00007f641cd6a000 <no value available>
reserved = 0x00007f6464d68000 <no value available>
used = 0x00007f641cd69000 <no value available>
mem = 0x00007f641cd69000 <no value available>
flags = 8
next = 0x0000000000000000
plan_allocated = 0x00007f641cd69000 <no value available>
background_allocated = 0x0000000000000000 <no value available>
saved_bg_allocated = 0x0000000000000000 <no value available>
padandplug = {
plugandgap = {
gap = 0
reloc = 0
= {
m_pair = (left = 0, right = 0)
lr = 0
}
m_plug = {
skew = ([0] = <no value available>)
}
}
}
}
dprintf (1, ("set entry %d seg1 and %d seg0 to %Ix", begin_index, end_index, seg));
assert ((begin_entry->seg1 == 0) || ((size_t)(begin_entry->seg1) == ro_in_entry));
| True | SIGABRT_ASSERT_libcoreclr.so!WKS::gc_heap::seg_mapping_table_add_segment - **The notes in this bug refer to the Ubuntu.14.04 dump [rc3-24202-00_0150](https://rapreqs.blob.core.windows.net/sschaab/BodyPart_b37ef701-45ae-42e3-af72-0c154ca00fde?sv=2015-04-05&sr=b&sig=n8qJCuj2ej%2Bh%2Bb9BUxOwqZV8XWXezf1Jpy66aHn2aBY%3D&st=2016-06-03T22%3A18%3A14Z&se=2017-06-03T22%3A18%3A14Z&sp=r). Other dumps are available if needed.**
STOP_REASON:
SIGABRT
FAULT_SYMBOL:
libcoreclr.so!WKS::gc_heap::seg_mapping_table_add_segment
FAILURE_HASH:
SIGABRT_libcoreclr.so!WKS::gc_heap::seg_mapping_table_add_segment
FAULT_STACK:
libc.so.6!__GI_raise
libc.so.6!__GI_abort
libcoreclr.so!UNKNOWN
libcoreclr.so!sigtrap_handler(int, siginfo_t*, void*)
libclrjit.so!sigtrap_handler(int, siginfo_t*, void*)
libpthread.so.0!???
libcoreclr.so!DBG_DebugBreak
libcoreclr.so!DebugBreak
libcoreclr.so!DbgAssertDialog
libcoreclr.so!WKS::gc_heap::seg_mapping_table_add_segment(WKS::heap_segment*, WKS::gc_heap*)
libcoreclr.so!WKS::gc_heap::get_segment(unsigned long, int)
libcoreclr.so!WKS::gc_heap::get_segment_for_loh(unsigned long)
libcoreclr.so!WKS::gc_heap::get_large_segment(unsigned long, int*)
libcoreclr.so!WKS::gc_heap::loh_get_new_seg(WKS::generation*, unsigned long, int, int*, oom_reason*)
libcoreclr.so!WKS::gc_heap::allocate_large(int, unsigned long, alloc_context*, int)
libcoreclr.so!WKS::gc_heap::try_allocate_more_space(alloc_context*, unsigned long, int)
libcoreclr.so!WKS::gc_heap::allocate_more_space(alloc_context*, unsigned long, int)
libcoreclr.so!WKS::gc_heap::allocate_large_object(unsigned long, long&)
libcoreclr.so!WKS::GCHeap::Alloc(alloc_context*, unsigned long, unsigned int)
libcoreclr.so!Alloc(unsigned long, int, int)
libcoreclr.so!FastAllocatePrimitiveArray(MethodTable*, unsigned int, int)
libcoreclr.so!JIT_NewArr1(CORINFO_CLASS_STRUCT_*, long)
libcoreclr.so!JIT_NewArr1VC_MP_FastPortable(CORINFO_CLASS_STRUCT_*, long)
System.Runtime.Numerics.dll!System.Numerics.BigInteger.op_LeftShift(System.Numerics.BigInteger, Int32)
System.Runtime.Numerics.Tests.dll!System.Numerics.Tests.logTest.LargeValueLogTests(Int32, Int32, Int32, Int32)
System.Runtime.Numerics.Tests.dll!System.Numerics.Tests.logTest.RunLargeValueLogTests()
rc3-24202-00_0150.exe!stress.generated.UnitTests.UT8()
stress.execution.dll!stress.execution.UnitTest.Execute()
stress.execution.dll!stress.execution.DedicatedThreadWorkerStrategy.RunWorker(stress.execution.ITestPattern, System.Threading.CancellationToken)
stress.execution.dll!stress.execution.DedicatedThreadWorkerStrategy+<>c__DisplayClass1_0.<SpawnWorker>b__0()
System.Private.CoreLib.ni.dll!System.Threading.Tasks.Task.Execute()
System.Private.CoreLib.ni.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
System.Private.CoreLib.ni.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
System.Private.CoreLib.ni.dll!System.Threading.Tasks.Task.ExecuteEntry(Boolean)
System.Private.CoreLib.ni.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
libcoreclr.so!CallDescrWorkerInternal
libcoreclr.so!CallDescrWorkerWithHandler(CallDescrData*, int)
libcoreclr.so!MethodDescCallSite::CallTargetWorker(unsigned long const*)
libcoreclr.so!MethodDescCallSite::Call(unsigned long const*)
libcoreclr.so!ThreadNative::KickOffThread_Worker(void*)
libcoreclr.so!ManagedThreadBase_DispatchInner(ManagedThreadCallState*)
libcoreclr.so!ManagedThreadBase_DispatchMiddle(ManagedThreadCallState*)
libcoreclr.so!ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)::$_6::operator()(ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)::TryArgs*) const::{lambda(Param*)#1}::operator()(Param*) const
libcoreclr.so!ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)::$_6::operator()(ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)::TryArgs*) const
libcoreclr.so!ManagedThreadBase_DispatchOuter(ManagedThreadCallState*)
libcoreclr.so!ManagedThreadBase_FullTransitionWithAD(ADID, void (*)(void*), void*, UnhandledExceptionLocation)
libcoreclr.so!ManagedThreadBase::KickOff(ADID, void (*)(void*), void*)
libcoreclr.so!ThreadNative::KickOffThread(void*)
libcoreclr.so!Thread::intermediateThreadProc(void*)
libcoreclr.so!CorUnix::CPalThread::ThreadEntry(void*)
libpthread.so.0!start_thread
libc.so.6!__clone
FAULT_THREAD:
thread #1: tid = 61178, 0x00007f67acc04cc9 libc.so.6`__GI_raise(sig=6) + 57 at raise.c:56, name = 'corerun', stop reason = signal SIGABRT
**Looking at the source for frame 9 (gc.cpp line 3498) it looks like ((size_t)(begin_entry->seg1) == ro_in_entry) evals to true so it asserts**
frame #9: 0x00007f67ac1d0787 libcoreclr.so`WKS::gc_heap::seg_mapping_table_add_segment(seg=0x00007f6424d68000, hp=0x0000000000000000) + 1591 at gc.cpp:3498
(lldb) fr v -D1
(WKS::heap_segment *) seg = 0x00007f6424d68000
(WKS::gc_heap *) hp = 0x0000000000000000
(size_t) seg_end = 140069031018495
(size_t) begin_index = 1043588
(WKS::seg_mapping *) begin_entry = 0x00007f66d16ff318
(size_t) end_index = 1043595
(WKS::seg_mapping *) end_entry = 0x00007f66d16ff3c0
(lldb) expr -- begin_entry->seg1
(WKS::heap_segment *) $0 = 0x00007f641cd68000
(lldb) expr -- *begin_entry->seg1
(WKS::heap_segment) $1 = {
allocated = 0x00007f641cd69000 <no value available>
committed = 0x00007f641cd6a000 <no value available>
reserved = 0x00007f6464d68000 <no value available>
used = 0x00007f641cd69000 <no value available>
mem = 0x00007f641cd69000 <no value available>
flags = 8
next = 0x0000000000000000
plan_allocated = 0x00007f641cd69000 <no value available>
background_allocated = 0x0000000000000000 <no value available>
saved_bg_allocated = 0x0000000000000000 <no value available>
padandplug = {
plugandgap = {
gap = 0
reloc = 0
= {
m_pair = (left = 0, right = 0)
lr = 0
}
m_plug = {
skew = ([0] = <no value available>)
}
}
}
}
dprintf (1, ("set entry %d seg1 and %d seg0 to %Ix", begin_index, end_index, seg));
assert ((begin_entry->seg1 == 0) || ((size_t)(begin_entry->seg1) == ro_in_entry));
| reli | sigabrt assert libcoreclr so wks gc heap seg mapping table add segment the notes in this bug refer to the ubuntu dump other dumps are available if needed stop reason sigabrt fault symbol libcoreclr so wks gc heap seg mapping table add segment failure hash sigabrt libcoreclr so wks gc heap seg mapping table add segment fault stack libc so gi raise libc so gi abort libcoreclr so unknown libcoreclr so sigtrap handler int siginfo t void libclrjit so sigtrap handler int siginfo t void libpthread so libcoreclr so dbg debugbreak libcoreclr so debugbreak libcoreclr so dbgassertdialog libcoreclr so wks gc heap seg mapping table add segment wks heap segment wks gc heap libcoreclr so wks gc heap get segment unsigned long int libcoreclr so wks gc heap get segment for loh unsigned long libcoreclr so wks gc heap get large segment unsigned long int libcoreclr so wks gc heap loh get new seg wks generation unsigned long int int oom reason libcoreclr so wks gc heap allocate large int unsigned long alloc context int libcoreclr so wks gc heap try allocate more space alloc context unsigned long int libcoreclr so wks gc heap allocate more space alloc context unsigned long int libcoreclr so wks gc heap allocate large object unsigned long long libcoreclr so wks gcheap alloc alloc context unsigned long unsigned int libcoreclr so alloc unsigned long int int libcoreclr so fastallocateprimitivearray methodtable unsigned int int libcoreclr so jit corinfo class struct long libcoreclr so jit mp fastportable corinfo class struct long system runtime numerics dll system numerics biginteger op leftshift system numerics biginteger system runtime numerics tests dll system numerics tests logtest largevaluelogtests system runtime numerics tests dll system numerics tests logtest runlargevaluelogtests exe stress generated unittests stress execution dll stress execution unittest execute stress execution dll stress execution dedicatedthreadworkerstrategy runworker stress execution itestpattern system threading cancellationtoken stress execution dll stress execution dedicatedthreadworkerstrategy c b system private corelib ni dll system threading tasks task execute system private corelib ni dll system threading executioncontext run system threading executioncontext system threading contextcallback system object system private corelib ni dll system threading tasks task executewiththreadlocal system threading tasks task byref system private corelib ni dll system threading tasks task executeentry boolean system private corelib ni dll system threading executioncontext run system threading executioncontext system threading contextcallback system object libcoreclr so calldescrworkerinternal libcoreclr so calldescrworkerwithhandler calldescrdata int libcoreclr so methoddesccallsite calltargetworker unsigned long const libcoreclr so methoddesccallsite call unsigned long const libcoreclr so threadnative kickoffthread worker void libcoreclr so managedthreadbase dispatchinner managedthreadcallstate libcoreclr so managedthreadbase dispatchmiddle managedthreadcallstate libcoreclr so managedthreadbase dispatchouter managedthreadcallstate operator managedthreadbase dispatchouter managedthreadcallstate tryargs const lambda param operator param const libcoreclr so managedthreadbase dispatchouter managedthreadcallstate operator managedthreadbase dispatchouter managedthreadcallstate tryargs const libcoreclr so managedthreadbase dispatchouter managedthreadcallstate libcoreclr so managedthreadbase fulltransitionwithad adid void void void unhandledexceptionlocation libcoreclr so managedthreadbase kickoff adid void void void libcoreclr so threadnative kickoffthread void libcoreclr so thread intermediatethreadproc void libcoreclr so corunix cpalthread threadentry void libpthread so start thread libc so clone fault thread thread tid libc so gi raise sig at raise c name corerun stop reason signal sigabrt looking at the source for frame gc cpp line it looks like size t begin entry ro in entry evals to true so it asserts frame libcoreclr so wks gc heap seg mapping table add segment seg hp at gc cpp lldb fr v wks heap segment seg wks gc heap hp size t seg end size t begin index wks seg mapping begin entry size t end index wks seg mapping end entry lldb expr begin entry wks heap segment lldb expr begin entry wks heap segment allocated committed reserved used mem flags next plan allocated background allocated saved bg allocated padandplug plugandgap gap reloc m pair left right lr m plug skew dprintf set entry d and d to ix begin index end index seg assert begin entry size t begin entry ro in entry | 1 |
966 | 11,837,045,705 | IssuesEvent | 2020-03-23 13:35:28 | sohaibaslam/learning_site | https://api.github.com/repos/sohaibaslam/learning_site | opened | Broken Crawlers 23, Mar 2020 | crawler broken/unreliable | 1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/uk(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **benetton lv(100%)**
1. **charmingcharlie us(100%)**
1. **conforama fr(100%)**
1. **converse au(100%)**
1. **falabella cl(100%)/co(100%)**
1. **footaction us(100%)**
1. **footlocker be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **hm kw(100%)/sa(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **klingel de(100%)**
1. **lifestylestores in(100%)**
1. **louisvuitton cn(100%)**
1. **made ch(100%)/de(100%)**
1. **michaelkors ca(100%)**
1. **mothercare sa(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **rakuten us(100%)**
1. **runnerspoint de(100%)**
1. **runwaysale za(100%)**
1. **saksfifthavenue mo(100%)**
1. **simons ca(100%)**
1. **snipes de(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)**
1. **topbrands ru(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **zalandolounge de(100%)**
1. **zara pe(100%)/uy(100%)**
| True | Broken Crawlers 23, Mar 2020 - 1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/uk(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **benetton lv(100%)**
1. **charmingcharlie us(100%)**
1. **conforama fr(100%)**
1. **converse au(100%)**
1. **falabella cl(100%)/co(100%)**
1. **footaction us(100%)**
1. **footlocker be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **hm kw(100%)/sa(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **klingel de(100%)**
1. **lifestylestores in(100%)**
1. **louisvuitton cn(100%)**
1. **made ch(100%)/de(100%)**
1. **michaelkors ca(100%)**
1. **mothercare sa(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **rakuten us(100%)**
1. **runnerspoint de(100%)**
1. **runwaysale za(100%)**
1. **saksfifthavenue mo(100%)**
1. **simons ca(100%)**
1. **snipes de(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)**
1. **topbrands ru(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **zalandolounge de(100%)**
1. **zara pe(100%)/uy(100%)**
| reli | broken crawlers mar abcmart kr abercrombie cn hk jp americaneagle ca ami cn dk jp kr uk asos ae au ch cn hk id my nl ph pl ru sa sg th us vn babyshop ae sa benetton lv charmingcharlie us conforama fr converse au falabella cl co footaction us footlocker be de dk es fr it lu nl no se uk hm kw sa hollister cn hk jp tw klingel de lifestylestores in louisvuitton cn made ch de michaelkors ca mothercare sa popup br prettysecrets in rakuten us runnerspoint de runwaysale za saksfifthavenue mo simons ca snipes de splashfashions ae bh sa stylebop au ca cn de es fr hk jp kr mo sg us tods cn gr pt tommybahama bh topbrands ru wayfair ca de uk zalandolounge de zara pe uy | 1 |
1,018 | 12,304,812,372 | IssuesEvent | 2020-05-11 21:10:31 | microsoft/PTVS | https://api.github.com/repos/microsoft/PTVS | closed | Unexpected error in Python debugger Debug Interactive | area:Debugger area:REPL bug tenet:Reliability | I had typed in 'webpage' and was waiting for the value to be displayed, when I received the error. I see the value is actually there:
[Window Title]
devenv.exe
[Main Instruction]
An unexpected error occurred
[Content]
Please press Ctrl+C to copy the contents of this dialog and report this error to our issue tracker.
[^] Hide details [Close]
[Expanded Information]
```
Build: 15.9.18254.1
System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)
at System.Collections.Generic.List`1.get_Item(Int32 index)
at Microsoft.PythonTools.Repl.PythonDebugProcessReplEvaluator..ctor(IServiceProvider serviceProvider, PythonProcess process, IThreadIdMapper threadIdMapper)
at Microsoft.PythonTools.Repl.PythonDebugReplEvaluator.<AttachProcessAsync>d__80.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.PythonTools.Repl.PythonDebugReplEvaluator.<OnReadyForInputAsync>d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.PythonTools.Infrastructure.VSTaskExtensions.<HandleAllExceptions>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.VisualStudio.Telemetry.WindowsErrorReporting.WatsonReport.GetClrWatsonExceptionInfo(Exception exceptionObject)
``` | True | Unexpected error in Python debugger Debug Interactive - I had typed in 'webpage' and was waiting for the value to be displayed, when I received the error. I see the value is actually there:
[Window Title]
devenv.exe
[Main Instruction]
An unexpected error occurred
[Content]
Please press Ctrl+C to copy the contents of this dialog and report this error to our issue tracker.
[^] Hide details [Close]
[Expanded Information]
```
Build: 15.9.18254.1
System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)
at System.Collections.Generic.List`1.get_Item(Int32 index)
at Microsoft.PythonTools.Repl.PythonDebugProcessReplEvaluator..ctor(IServiceProvider serviceProvider, PythonProcess process, IThreadIdMapper threadIdMapper)
at Microsoft.PythonTools.Repl.PythonDebugReplEvaluator.<AttachProcessAsync>d__80.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.PythonTools.Repl.PythonDebugReplEvaluator.<OnReadyForInputAsync>d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.PythonTools.Infrastructure.VSTaskExtensions.<HandleAllExceptions>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.VisualStudio.Telemetry.WindowsErrorReporting.WatsonReport.GetClrWatsonExceptionInfo(Exception exceptionObject)
``` | reli | unexpected error in python debugger debug interactive i had typed in webpage and was waiting for the value to be displayed when i received the error i see the value is actually there devenv exe an unexpected error occurred please press ctrl c to copy the contents of this dialog and report this error to our issue tracker hide details build system argumentoutofrangeexception index was out of range must be non negative and less than the size of the collection parameter name index at system throwhelper throwargumentoutofrangeexception exceptionargument argument exceptionresource resource at system collections generic list get item index at microsoft pythontools repl pythondebugprocessreplevaluator ctor iserviceprovider serviceprovider pythonprocess process ithreadidmapper threadidmapper at microsoft pythontools repl pythondebugreplevaluator d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft pythontools repl pythondebugreplevaluator d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft pythontools infrastructure vstaskextensions d movenext end of stack trace from previous location where exception was thrown at microsoft visualstudio telemetry windowserrorreporting watsonreport getclrwatsonexceptioninfo exception exceptionobject | 1 |
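Editor's note: the stack trace in the record above bottoms out in `List`1.get_Item` inside the `PythonDebugProcessReplEvaluator` constructor, i.e. code indexed into a collection that turned out to be empty or shorter than expected. A minimal sketch of that failure class and its guarded equivalent — written in Python with hypothetical names, not actual PTVS code:

```python
def first_thread_id(thread_ids):
    """Guarded version of the failing pattern: collection[0] on a possibly-empty list.

    The names are illustrative. The real code indexes a collection of debuggee
    threads inside a constructor; if the debuggee has not reported any threads
    yet, the unguarded index raises ArgumentOutOfRangeException (IndexError in
    Python) and surfaces as an unhandled-exception dialog.
    """
    if not thread_ids:
        return None  # let the caller fall back instead of crashing the IDE
    return thread_ids[0]
```

A caller can then treat `None` as "no active thread yet" rather than letting the exception escape.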
75,743 | 9,884,317,280 | IssuesEvent | 2019-06-24 21:46:12 | Azure/azure-cosmos-js | https://api.github.com/repos/Azure/azure-cosmos-js | closed | Move samples to TypeScript | discussion-wanted documentation improvement planning v3 | Given that we have a great experience for development in TypeScript, I propose we move our existing samples to TypeScript.
We probably need 1 JS sample around. I propose we write a new one which does a little bit of everything. | 1.0 | Move samples to TypeScript - Given that we have a great experience for development in TypeScript, I propose we move our existing samples to TypeScript.
We probably need 1 JS sample around. I propose we write a new one which does a little bit of everything. | non_reli | move samples to typescript given that we have a great experience for development in typescript i propose we move our existing samples to typescript we probably need js sample around i propose we write a new one which does a little bit of everything | 0 |
756 | 10,469,437,573 | IssuesEvent | 2019-09-22 20:41:45 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | [ARM64/Linux] error MSB6006: "csc.dll" exited with code 139. | bug reliability | I'm trying to do a Mono `--with-core=only` build, which uses a stock .NET Core to build our System.Private.CoreLib:
`dotnet build -p:TargetsUnix=true -v:diag -p:BuildArch=arm64 -p:OutputPath=bin/arm64 -p:RoslynPropsFile="../../../netcore/roslyn/packages/microsoft.net.compilers.toolset/3.1.0-beta3-19213-02/build/Microsoft.Net.Compilers.Toolset.props" System.Private.CoreLib.csproj`
This errors out in Roslyn, with no more detail than `/tmp/mono/netcore/roslyn/packages/microsoft.net.compilers.toolset/3.1.0-beta3-19213-02/tasks/netcoreapp2.1/Microsoft.CSharp.Core.targets(58,5): error MSB6006: "csc.dll" exited with code 139. [/tmp/mono/mcs/class/System.Private.CoreLib/System.Private.CoreLib.csproj]`
Using all the most bleeding edge things - dotnet 3.0.100-preview5-011568, Roslyn 3.1.0-beta3-19213-02, on Ubuntu 16.04 on AMD Opteron A1170 (SoftIron Overdrive 3000)
Example failure on https://dev.azure.com/dnceng/public/_build/results?buildId=194840
[msbuild.binlog.zip](https://github.com/dotnet/coreclr/files/3202924/msbuild.binlog.zip) | True | [ARM64/Linux] error MSB6006: "csc.dll" exited with code 139. - I'm trying to do a Mono `--with-core=only` build, which uses a stock .NET Core to build our System.Private.CoreLib:
`dotnet build -p:TargetsUnix=true -v:diag -p:BuildArch=arm64 -p:OutputPath=bin/arm64 -p:RoslynPropsFile="../../../netcore/roslyn/packages/microsoft.net.compilers.toolset/3.1.0-beta3-19213-02/build/Microsoft.Net.Compilers.Toolset.props" System.Private.CoreLib.csproj`
This errors out in Roslyn, with no more detail than `/tmp/mono/netcore/roslyn/packages/microsoft.net.compilers.toolset/3.1.0-beta3-19213-02/tasks/netcoreapp2.1/Microsoft.CSharp.Core.targets(58,5): error MSB6006: "csc.dll" exited with code 139. [/tmp/mono/mcs/class/System.Private.CoreLib/System.Private.CoreLib.csproj]`
Using all the most bleeding edge things - dotnet 3.0.100-preview5-011568, Roslyn 3.1.0-beta3-19213-02, on Ubuntu 16.04 on AMD Opteron A1170 (SoftIron Overdrive 3000)
Example failure on https://dev.azure.com/dnceng/public/_build/results?buildId=194840
[msbuild.binlog.zip](https://github.com/dotnet/coreclr/files/3202924/msbuild.binlog.zip) | reli | error csc dll exited with code i m trying to do a mono with core only build which uses a stock net core to build our system private corelib dotnet build p targetsunix true v diag p buildarch p outputpath bin p roslynpropsfile netcore roslyn packages microsoft net compilers toolset build microsoft net compilers toolset props system private corelib csproj this errors out in roslyn with no more detail than tmp mono netcore roslyn packages microsoft net compilers toolset tasks microsoft csharp core targets error csc dll exited with code using all the most bleeding edge things dotnet roslyn on ubuntu on amd opteron softiron overdrive example failure on | 1 |
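Editor's note: exit code 139 in the record above is not a compiler diagnostic. On Unix, a status above 128 means the process was killed by signal (status − 128), and 139 − 128 = 11 is SIGSEGV — so csc.dll crashed with a segmentation fault rather than reporting an error. A small decoder, assuming POSIX signal numbering (this is an illustration, not part of the build tooling):

```python
import signal

def describe_exit(status):
    """Decode a shell-style exit status: 128 + N means 'killed by signal N'."""
    if status > 128:
        n = status - 128
        try:
            name = signal.Signals(n).name  # e.g. 11 -> 'SIGSEGV' on Linux
        except ValueError:
            name = f"signal {n}"
        return f"terminated by {name}"
    return f"exited normally with status {status}"
```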
198,925 | 22,674,191,463 | IssuesEvent | 2022-07-04 01:26:07 | turkdevops/WordPress | https://api.github.com/repos/turkdevops/WordPress | opened | CVE-2022-25758 (Medium) detected in scss-tokenizer-0.2.3.tgz | security vulnerability | ## CVE-2022-25758 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>scss-tokenizer-0.2.3.tgz</b></p></summary>
<p>A tokenzier for Sass' SCSS syntax</p>
<p>Library home page: <a href="https://registry.npmjs.org/scss-tokenizer/-/scss-tokenizer-0.2.3.tgz">https://registry.npmjs.org/scss-tokenizer/-/scss-tokenizer-0.2.3.tgz</a></p>
<p>Path to dependency file: /wp-content/themes/twentynineteen/package.json</p>
<p>Path to vulnerable library: /wp-content/themes/twentynineteen/node_modules/scss-tokenizer/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.14.1.tgz (Root Library)
- sass-graph-2.2.5.tgz
- :x: **scss-tokenizer-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/WordPress/commit/a30d128bbb79f203ffa32cc8f88c681f2c014e5b">a30d128bbb79f203ffa32cc8f88c681f2c014e5b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package scss-tokenizer are vulnerable to Regular Expression Denial of Service (ReDoS) via the loadAnnotation() function, due to the usage of insecure regex.
<p>Publish Date: 2022-07-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25758>CVE-2022-25758</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-25758">https://nvd.nist.gov/vuln/detail/CVE-2022-25758</a></p>
<p>Release Date: 2022-07-01</p>
<p>Fix Resolution: no_fix</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-25758 (Medium) detected in scss-tokenizer-0.2.3.tgz - ## CVE-2022-25758 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>scss-tokenizer-0.2.3.tgz</b></p></summary>
<p>A tokenzier for Sass' SCSS syntax</p>
<p>Library home page: <a href="https://registry.npmjs.org/scss-tokenizer/-/scss-tokenizer-0.2.3.tgz">https://registry.npmjs.org/scss-tokenizer/-/scss-tokenizer-0.2.3.tgz</a></p>
<p>Path to dependency file: /wp-content/themes/twentynineteen/package.json</p>
<p>Path to vulnerable library: /wp-content/themes/twentynineteen/node_modules/scss-tokenizer/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.14.1.tgz (Root Library)
- sass-graph-2.2.5.tgz
- :x: **scss-tokenizer-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/WordPress/commit/a30d128bbb79f203ffa32cc8f88c681f2c014e5b">a30d128bbb79f203ffa32cc8f88c681f2c014e5b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package scss-tokenizer are vulnerable to Regular Expression Denial of Service (ReDoS) via the loadAnnotation() function, due to the usage of insecure regex.
<p>Publish Date: 2022-07-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25758>CVE-2022-25758</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-25758">https://nvd.nist.gov/vuln/detail/CVE-2022-25758</a></p>
<p>Release Date: 2022-07-01</p>
<p>Fix Resolution: no_fix</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_reli | cve medium detected in scss tokenizer tgz cve medium severity vulnerability vulnerable library scss tokenizer tgz a tokenzier for sass scss syntax library home page a href path to dependency file wp content themes twentynineteen package json path to vulnerable library wp content themes twentynineteen node modules scss tokenizer package json dependency hierarchy node sass tgz root library sass graph tgz x scss tokenizer tgz vulnerable library found in head commit a href found in base branch master vulnerability details all versions of package scss tokenizer are vulnerable to regular expression denial of service redos via the loadannotation function due to the usage of insecure regex publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution no fix step up your open source security game with mend | 0 |
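Editor's note: the vulnerability class in the record above (ReDoS) comes from regexes whose nested quantifiers force exponential backtracking on near-miss inputs. The pattern below is a generic stand-in for illustration, not the actual regex from scss-tokenizer's `loadAnnotation()`:

```python
import re

UNSAFE = re.compile(r"(a+)+$")  # nested quantifiers: backtracks exponentially
SAFE = re.compile(r"a+$")       # same language, linear-time matching

def matches(pattern, text):
    """Return True if the pattern is found anywhere in the text."""
    return pattern.search(text) is not None

# Both patterns accept benign input instantly, but feeding UNSAFE a near-miss
# such as "a" * 40 + "!" makes a backtracking engine grind for a very long
# time -- which is what an attacker-controlled input string can trigger.
```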
358 | 6,902,952,932 | IssuesEvent | 2017-11-26 04:34:38 | willamm/WaveSimulator | https://api.github.com/repos/willamm/WaveSimulator | closed | Handle assert when adding shape out of bounds | safety / reliability TODO | Should handle gracefully instead of crashing, change to using an exception instead of an assert? | True | Handle assert when adding shape out of bounds - Should handle gracefully instead of crashing, change to using an exception instead of an assert? | reli | handle assert when adding shape out of bounds should handle gracefully instead of crashing change to using an exception instead of an assert | 1 |
3,035 | 31,789,143,224 | IssuesEvent | 2023-09-13 00:52:27 | Cyfrin/2023-08-sparkn | https://api.github.com/repos/Cyfrin/2023-08-sparkn | closed | Cannot use block.timestamp or block.timestamp + any value between 0-15 seconds as closeTime | medium ai-triage-Inclusivity ai-triage-Unreliable ai-triage-Risk ai-triage-MissingMinimumCheck finding-greif-proxy-mev | # Cannot use block.timestamp or block.timestamp + any value between 0-15 seconds as closeTime
### Severity
Medium Risk
### Relevant GitHub Links
<a data-meta="codehawks-github-link" href="https://github.com/Cyfrin/2023-08-sparkn/blob/0f139b2dc53905700dd29a01451b330f829653e9/src/ProxyFactory.sol#L110">https://github.com/Cyfrin/2023-08-sparkn/blob/0f139b2dc53905700dd29a01451b330f829653e9/src/ProxyFactory.sol#L110</a>
## Summary
In the ```setContest``` function, if the organizer decides to set ```closeTime``` as ```block.timestamp``` or ```block.timestamp + 15 sec```, then the transaction will likely revert, which is not the behavior the protocol intends.
## Vulnerability Details
The protocol states that the organizer has the ability to immediately end the contest by providing ```block.timestamp``` as the ```closeTime``` when the owner sets the contest.
This is the feature the protocol desires to have because maybe organizers have already decided on the winners and want to just distribute the reward to the winners without any delay.
There are two checks in the ```setContest``` function to see if the ```closeTime``` is in the past or if it is more than 28 days + currentTime.
```js
if (closeTime > block.timestamp + MAX_CONTEST_PERIOD || closeTime < block.timestamp) {
revert ProxyFactory__CloseTimeNotInRange();
}
```
Imagine the scenario when the ```closeTime``` is ```block.timestamp + 5s```. Note that the ```closeTime``` is set off-chain.
Here, when we send the transaction with this data, the tx will sit in the mempool; if the miner decides not to include this tx in the next block, the whole logic becomes invalid and ```closeTime``` will be less than the current ```block.timestamp```, resulting in a transaction revert.
This is a somewhat complex issue, so I will try to explain it with numbers. I couldn't write a PoC for this due to some testing limitations.
- The ethers.js or any other frontend library fetches the ```block.timestamp``` from the blockchain; let's say it is 1000.
- The ```closeTime``` the organizer wants to set is 1000 + 5 = 1005
- The ```setContest``` tx goes to mempool
- We all know that the miner can manipulate the current timestamp by 0-12s.
- If the miner decides to set the timestamp as 1000 (prev time) + 10s, then our ```closeTime``` will be in the past, resulting in a transaction revert.
- In another scenario, the miner sets ```block.timestamp``` as 1000 + 5s but decides not to include our ```setContest``` in that block; by the next block our closeTime is already in the past, so the transaction will revert.
## Impact
This will result in unexpected behavior of the system due to how the blockchain is architected.
Also, the severity of this depends on how often the organizer sets closeTime like this. If this happens frequently then this can be categorized as high, and if not, medium or low. This totally depends on how the organizer wishes to set it.
## Tools Used
manual review
## Recommendations
There are two recommendations. The protocol can use whichever seems best.
- Set some minimum closeTime, like 10 mins or so.
- Rather than setting the closeTime, it is encouraged to set the contest duration. This will break the other part of the system, so use this only as a point of reference.
```js
function setContest(address organizer, bytes32 contestId, uint256 contestDuration, address implementation)
public
onlyOwner
{
if (organizer == address(0) || implementation == address(0)) revert ProxyFactory__NoZeroAddress();
if (contestDuration > MAX_CONTEST_PERIOD || contestDuration == 0) revert ProxyFactory__CloseTimeNotInRange();
bytes32 salt = _calculateSalt(organizer, contestId, implementation);
if (saltToCloseTime[salt] != 0) revert ProxyFactory__ContestIsAlreadyRegistered();
saltToCloseTime[salt] = contestDuration + block.timestamp;
emit SetContest(organizer, contestId,contestDuration+block.timestamp, implementation);
}
``` | True | Cannot use block.timestamp or block.timestamp + any value between 0-15 seconds as closeTime - # Cannot use block.timestamp or block.timestamp + any value between 0-15 seconds as closeTime
### Severity
Medium Risk
### Relevant GitHub Links
<a data-meta="codehawks-github-link" href="https://github.com/Cyfrin/2023-08-sparkn/blob/0f139b2dc53905700dd29a01451b330f829653e9/src/ProxyFactory.sol#L110">https://github.com/Cyfrin/2023-08-sparkn/blob/0f139b2dc53905700dd29a01451b330f829653e9/src/ProxyFactory.sol#L110</a>
## Summary
In the ```setContest``` function, if the organizer decides to set ```closeTime``` as ```block.timestamp``` or ```block.timestamp + 15 sec```, then the transaction will likely revert, which is not the behavior the protocol intends.
## Vulnerability Details
The protocol states that the organizer has the ability to immediately end the contest by providing ```block.timestamp``` as the ```closeTime``` when the owner sets the contest.
This is the feature the protocol desires to have because maybe organizers have already decided on the winners and want to just distribute the reward to the winners without any delay.
There are two checks in the ```setContest``` function to see if the ```closeTime``` is in the past or if it is more than 28 days + currentTime.
```js
if (closeTime > block.timestamp + MAX_CONTEST_PERIOD || closeTime < block.timestamp) {
revert ProxyFactory__CloseTimeNotInRange();
}
```
Imagine the scenario when the ```closeTime``` is ```block.timestamp + 5s```. Note that the ```closeTime``` is set off-chain.
Here, when we send the transaction with this data, the tx will sit in the mempool; if the miner decides not to include this tx in the next block, the whole logic becomes invalid and ```closeTime``` will be less than the current ```block.timestamp```, resulting in a transaction revert.
This is a somewhat complex issue, so I will try to explain it with numbers. I couldn't write a PoC for this due to some testing limitations.
- The ethers.js or any other frontend library fetches the ```block.timestamp``` from the blockchain; let's say it is 1000.
- The ```closeTime``` the organizer wants to set is 1000 + 5 = 1005
- The ```setContest``` tx goes to mempool
- We all know that the miner can manipulate the current timestamp by 0-12s.
- If the miner decides to set the timestamp as 1000 (prev time) + 10s, then our ```closeTime``` will be in the past, resulting in a transaction revert.
- In another scenario, the miner sets ```block.timestamp``` as 1000 + 5s but decides not to include our ```setContest``` in that block; by the next block our closeTime is already in the past, so the transaction will revert.
## Impact
This will result in unexpected behavior of the system due to how the blockchain is architected.
Also, the severity of this depends on how often the organizer sets closeTime like this. If this happens frequently then this can be categorized as high, and if not, medium or low. This totally depends on how the organizer wishes to set it.
## Tools Used
manual review
## Recommendations
There are two recommendations. The protocol can use whichever seems best.
- Set some minimum closeTime, like 10 mins or so.
- Rather than setting the closeTime, it is encouraged to set the contest duration. This will break the other part of the system, so use this only as a point of reference.
```js
function setContest(address organizer, bytes32 contestId, uint256 contestDuration, address implementation)
public
onlyOwner
{
if (organizer == address(0) || implementation == address(0)) revert ProxyFactory__NoZeroAddress();
if (contestDuration > MAX_CONTEST_PERIOD || contestDuration == 0) revert ProxyFactory__CloseTimeNotInRange();
bytes32 salt = _calculateSalt(organizer, contestId, implementation);
if (saltToCloseTime[salt] != 0) revert ProxyFactory__ContestIsAlreadyRegistered();
saltToCloseTime[salt] = contestDuration + block.timestamp;
emit SetContest(organizer, contestId,contestDuration+block.timestamp, implementation);
}
``` | reli | cannot use block timestamp or block timestamp any value between seconds as closetime cannot use block timestamp or block timestamp any value between seconds as closetime severity medium risk relevant github links a data meta codehawks github link href summary in the setcontest function if the organizer decides to set closetime as block timestamp or block timestamp sec then the transaction will likely revert which is not the intended behavior protocol want vulnerability details the protocol states that if the organizer wishes then he has the ability to immediately end the contest providing the closetime as block timestamp when the owner sets the contest this is the feature the protocol desires to have because maybe organizers have already decided on the winners and want to just distribute the reward to the winners without any delay there are two checks in the setcontest function to see if the closetime is in the past or if it is more than days currenttime js if closetime block timestamp max contest period closetime block timestamp revert proxyfactory closetimenotinrange imagine the scenario when the closetime is block timestamp note that the closetime is set off chain here when we send the transaction with this data then the tx will sit in the mempool and if minner decides to not include this tx in the next block then the whole logic will become invalid and closetime will be less than the block timestamp resulting in the transaction revert this is a bit complex issue so i will try to explain it with numbers i couldn t write a poc for this due to some testing limitations the etherjs or any frontend library fetch the block timestamp from the blockchain let s say it is the closetime the organizer wants to set is the setcontest tx goes to mempool we all know the fact the miner can manipulate the current timestamp between if the miner decided to set the timestamp as prev time s then our closetime will be in the past resulting in the transaction revert in 
another scenario if the miner sets a block timetsamp as but he decided to not include our setcontest in the block then our closetime is still in the past so the transaction will revert impact this will result in the unexpected behavior of the system due to how the blockchain is architectured also the severity of this depends on how often the organizer sets closetime like this if this happens frequently then this can be categorized as high and if not the medium or low this totally depends of how the organizer wish to set it tools used manual review recommendations there are two recommendation protocol can use whatever seems best set some minimum closetime like mins or soo rather than setting the closetime it is encouraged to set the contest duration this will break the other part of the system so use this as only the point of reference js function setcontest address organizer contestid contestduration address implementation public onlyowner if organizer address implementation address revert proxyfactory nozeroaddress if contestduration max contest period contestduration revert proxyfactory closetimenotinrange salt calculatesalt organizer contestid implementation if salttoclosetime revert proxyfactory contestisalreadyregistered salttoclosetime contestduration block timestamp emit setcontest organizer contestid contestduration block timestamp implementation | 1 |
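Editor's note: the race described in the record above reduces to one observation — the range check runs against the timestamp of the block that eventually includes the transaction, not the timestamp the frontend read. A small Python model of the Solidity check (the 28-day constant mirrors `MAX_CONTEST_PERIOD`; this is a model for illustration, not the contract):

```python
MAX_CONTEST_PERIOD = 28 * 24 * 60 * 60  # 28 days, as in ProxyFactory

def close_time_in_range(close_time, block_ts):
    """Mirror of the check that reverts with ProxyFactory__CloseTimeNotInRange."""
    return block_ts <= close_time <= block_ts + MAX_CONTEST_PERIOD

# The frontend reads block.timestamp == 1000 and targets closeTime == 1005;
# the check would pass at that instant, but if the including block is mined
# at 1010 the same closeTime is already in the past and setContest reverts.
```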
1,942 | 21,956,748,885 | IssuesEvent | 2022-05-24 12:46:49 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | closed | Source Tiktok Marketing: have at least 90% unit test coverage | type/enhancement area/connectors new-connector area/reliability certification team/databases certification/beta team/connectors-python autoteam | Increase coverage up to 90%
> Task :airbyte-integrations:connectors:source-tiktok-marketing:unitTest
Name Stmts Miss Cover
---------------------------------------------------------
source_tiktok_marketing/__init__.py 2 0 100%
source_tiktok_marketing/spec.py 65 19 71%
source_tiktok_marketing/streams.py 314 98 69%
source_tiktok_marketing/source.py 45 16 64%
---------------------------------------------------------
TOTAL 426 133 69% | True | Source Tiktok Marketing: have at least 90% unit test coverage - Increase coverage up to 90%
> Task :airbyte-integrations:connectors:source-tiktok-marketing:unitTest
Name Stmts Miss Cover
---------------------------------------------------------
source_tiktok_marketing/__init__.py 2 0 100%
source_tiktok_marketing/spec.py 65 19 71%
source_tiktok_marketing/streams.py 314 98 69%
source_tiktok_marketing/source.py 45 16 64%
---------------------------------------------------------
TOTAL 426 133 69% | reli | source tiktok marketing have at least unit test coverage increase coverage up to task airbyte integrations connectors source tiktok marketing unittest name stmts miss cover source tiktok marketing init py source tiktok marketing spec py source tiktok marketing streams py source tiktok marketing source py total | 1 |
3,028 | 31,758,491,052 | IssuesEvent | 2023-09-12 01:53:25 | Project-Reclass/toynet | https://api.github.com/repos/Project-Reclass/toynet | closed | Instrument ToyNet Backend in Datadog | enhancement observability reliability | Implement:
- APM tracing for each endpoint in macroflask except sessions
- Logging for each endpoint in macroflask except sessions
- Usage Metrics for each endpoint in macroflask except sessions | True | Instrument ToyNet Backend in Datadog - Implement:
- APM tracing for each endpoint in macroflask except sessions
- Logging for each endpoint in macroflask except sessions
- Usage Metrics for each endpoint in macroflask except sessions | reli | instrument toynet backend in datadog implement apm tracing for each endpoint in macroflask except sessions logging for each endpoint in macroflask except sessions usage metrics for each endpoint in macroflask except sessions | 1 |
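Editor's note: the three items in the record above (tracing, logging, usage metrics per endpoint) share one mechanical shape — wrap each handler exactly once. A framework-free stdlib sketch of that shape; in production the counter and log lines would be replaced by Datadog's client libraries, which are assumptions here rather than shown:

```python
import functools
import logging
import time
from collections import Counter

logging.basicConfig(level=logging.INFO)
CALL_COUNTS = Counter()  # stand-in for a per-endpoint usage metric

def instrumented(endpoint):
    """Wrap a handler with a timing log and a call counter, keyed by endpoint."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            CALL_COUNTS[endpoint] += 1
            try:
                return fn(*args, **kwargs)
            finally:
                logging.info("%s took %.3fms", endpoint,
                             (time.perf_counter() - start) * 1e3)
        return wrapper
    return decorate

@instrumented("/api/toynet/value")  # hypothetical endpoint name
def get_value():
    return {"value": 42}
```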
84,603 | 24,360,173,680 | IssuesEvent | 2022-10-03 10:59:23 | speedb-io/speedb | https://api.github.com/repos/speedb-io/speedb | closed | makefile: speed up test runs startup time | enhancement Upstreamable build | The Makefile has many places where it needlessly invokes a sub-make instead of declaring the dependencies correctly. This is causing slowdowns on startup, and especially noticeable during test runs (`make check`) where each time a sub-make is invoked it regenerates `make_config.mk` and that takes a long time. Rework the dependency graph so that at least running tests doesn't involve invoking make three times. | 1.0 | makefile: speed up test runs startup time - The Makefile has many places where it needlessly invokes a sub-make instead of declaring the dependencies correctly. This is causing slowdowns on startup, and especially noticeable during test runs (`make check`) where each time a sub-make is invoked it regenerates `make_config.mk` and that takes a long time. Rework the dependency graph so that at least running tests doesn't involve invoking make three times. | non_reli | makefile speed up test runs startup time the makefile has many places where it needlessly invokes a sub make instead of declaring the dependencies correctly this is causing slowdowns on startup and especially noticeable during test runs make check where each time a sub make is invoked it regenerates make config mk and that takes a long time rework the dependency graph so that at least running tests doesn t involve invoking make three times | 0 |
925 | 11,706,362,050 | IssuesEvent | 2020-03-07 21:39:05 | sohaibaslam/learning_site | https://api.github.com/repos/sohaibaslam/learning_site | opened | Broken Crawlers 08, Mar 2020 | crawler broken/unreliable | 1. **24sevres eu(100%)/fr(100%)/uk(100%)/us(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **adidas pl(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/uk(100%)/us(100%)**
1. **antonioli es(100%)**
1. **arket uk(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **babywalz at(100%)/ch(100%)/de(100%)**
1. **bananarepublic ca(100%)**
1. **benetton lv(100%)**
1. **bijoubrigitte de(100%)/nl(100%)**
1. **boconcept at(100%)/de(100%)**
1. **boozt uk(100%)**
1. **borbonese eu(100%)/it(100%)/uk(100%)**
1. **buckle us(100%)**
1. **charmingcharlie us(100%)**
1. **chloe kr(100%)**
1. **clarks eu(100%)**
1. **coach ca(100%)**
1. **conforama fr(100%)**
1. **converse au(100%)/kr(100%)/nl(42%)**
1. **cos (100%)/at(100%)/hu(34%)**
1. **creationl de(100%)**
1. **dfs uk(100%)**
1. **dickssportinggoods us(100%)**
1. **eastbay us(100%)**
1. **ernstings de(100%)**
1. **falabella cl(100%)/co(100%)**
1. **fanatics us(100%)**
1. **fendi cn(100%)**
1. **footaction us(100%)**
1. **footlocker (100%)/be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **gap ca(100%)**
1. **getthelabel au(100%)/dk(100%)**
1. **harrods (100%)**
1. **heine at(100%)**
1. **hermes at(100%)/ca(100%)/de(50%)/es(50%)/fr(67%)/nl(50%)/se(50%)/uk(67%)**
1. **hm ae(100%)/cz(34%)/eu(100%)/jp(37%)/kw(100%)/pl(100%)/sa(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hunter (100%)**
1. **ikea au(100%)/pt(100%)**
1. **intersport es(84%)/fr(100%)**
1. **intimissimi cn(100%)/hk(100%)/jp(100%)**
1. **jackwills (100%)**
1. **jeffreycampbell us(100%)**
1. **klingel de(100%)**
1. **lacoste cn(100%)**
1. **laredouteapi es(100%)**
1. **lefties sa(100%)**
1. **levi my(100%)**
1. **lifestylestores in(100%)**
1. **made ch(100%)/de(100%)/es(100%)/nl(100%)/uk(100%)**
1. **massimodutti ad(49%)/al(50%)/am(49%)/az(50%)/ba(50%)/by(51%)/co(51%)/cr(48%)/cy(50%)/do(47%)/ec(51%)/eg(100%)/ge(46%)/gt(49%)/hk(49%)/hn(49%)/id(47%)/il(51%)/in(49%)/kz(50%)/mc(57%)/mk(50%)/mo(47%)/my(100%)/pa(49%)/ph(100%)/rs(49%)/sa(100%)/sg(45%)/th(100%)/tn(51%)/tw(49%)/ua(52%)/vn(100%)**
1. **maxfashion bh(100%)**
1. **melijoe be(44%)/cn(100%)/fr(33%)/kr(89%)/uk(81%)**
1. **michaelkors ca(100%)/us(33%)**
1. **missguided pl(100%)**
1. **moncler ru(100%)**
1. **monki nl(100%)/pl(100%)**
1. **moosejaw us(100%)**
1. **mothercare sa(100%)**
1. **mq se(100%)**
1. **mrporter ie(100%)**
1. **mrprice uk(100%)**
1. **muji de(100%)/uk(67%)**
1. **offspring uk(100%)**
1. **oldnavy ca(100%)**
1. **parfois ad(100%)/al(100%)/am(100%)/ao(100%)/at(100%)/ba(100%)/be(100%)/bg(100%)/bh(100%)/br(100%)/by(100%)/ch(100%)/co(100%)/cz(100%)/de(100%)/dk(100%)/do(100%)/ee(100%)/eg(100%)/es(100%)/fi(100%)/fr(100%)/ge(100%)/gr(100%)/gt(100%)/hr(100%)/hu(100%)/ie(100%)/ir(100%)/it(100%)/jo(100%)/kw(100%)/lb(100%)/lt(100%)/lu(100%)/lv(100%)/ly(100%)/ma(100%)/mc(100%)/mk(100%)/mt(100%)/mx(100%)/mz(100%)/nl(100%)/om(100%)/pa(100%)/pe(100%)/ph(100%)/pl(100%)/pt(100%)/qa(100%)/ro(100%)/rs(100%)/sa(100%)/se(100%)/si(100%)/sk(100%)/tn(100%)/uk(100%)/us(100%)/ve(100%)/ye(100%)**
1. **patagonia ca(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbear kr(100%)/qa(100%)/tw(100%)**
1. **rakuten fr(100%)/us(100%)**
1. **ralphlauren cn(30%)/de(100%)**
1. **runnerspoint de(100%)**
1. **runwaysale za(100%)**
1. **sainsburys uk(100%)**
1. **saksfifthavenue mo(100%)/ru(68%)**
1. **selfridges es(100%)/fr(84%)/hk(74%)/kr(70%)/mo(35%)/sa(35%)/tw(30%)**
1. **shoedazzle us(100%)**
1. **simons ca(100%)**
1. **snipes de(100%)**
1. **solebox de(100%)/uk(100%)**
1. **soliver de(100%)**
1. **speedo us(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stefaniamode au(100%)**
1. **stories be(100%)**
1. **stradivarius lb(100%)/sg(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **superbalist za(100%)**
1. **thread uk(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/jp(37%)/nl(100%)/pt(100%)**
1. **tommybahama bh(100%)/de(100%)/ph(100%)/za(100%)**
1. **tommyhilfiger jp(100%)/us(100%)**
1. **topbrands ru(100%)**
1. **trendygolf uk(100%)**
1. **undefeated us(100%)**
1. **underarmour ca(100%)/pe(100%)**
1. **watchshop eu(100%)/ru(100%)/se(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **weekday eu(100%)**
1. **wenz de(100%)**
1. **westwingnow ch(100%)**
1. **womanwithin us(100%)**
1. **xxl dk(100%)**
1. **zalandolounge de(100%)**
1. **zalora id(100%)/my(85%)/tw(100%)**
1. **zara kw(100%)/mo(100%)/pe(100%)/uy(100%)**
1. **zilingo my(100%)**
1. farfetch kr(46%)/mo(98%)/nl(68%)
1. snkrs eu(98%)
1. hibbett us(97%)
1. vip cn(95%)
1. nike hk(93%)/kr(79%)
1. lasula uk(92%)
1. dolcegabbana uk(87%)
1. koton tr(85%)
1. schwab de(84%)
1. burberry (74%)/au(78%)/be(82%)/bg(78%)/ch(73%)/dk(68%)/es(75%)/fi(74%)/ie(80%)/jp(73%)/my(77%)/pt(75%)/ro(69%)/sg(68%)/sk(73%)/tw(78%)/us(65%)
1. shoecarnival us(81%)
1. yoox at(74%)/be(42%)/fr(77%)/hk(40%)/ru(33%)/uk(52%)
1. zalando at(68%)/ch(61%)/de(61%)/dk(48%)/fi(31%)/it(44%)/pl(75%)
1. stevemadden us(73%)
1. maxmara dk(30%)/it(71%)/pl(30%)/se(30%)
1. zivame in(69%)
1. jelmoli ch(65%)
1. aloyoga us(62%)
1. bensherman eu(61%)
1. pinkboutique uk(61%)
1. timberland my(60%)
1. timberlandtrans sg(60%)
1. sfera es(59%)
1. venteprivee es(59%)/fr(53%)/it(57%)
1. limango de(58%)
1. misssixty cn(55%)
1. anniebing us(54%)
1. liujo es(40%)/it(52%)
1. anayi jp(50%)
1. brunellocucinelli cn(50%)
1. revolve us(50%)
1. amazon uk(49%)
1. sportchek ca(47%)
1. strellson at(43%)/be(42%)/ch(46%)/de(45%)/fr(40%)/nl(44%)
1. theory jp(45%)
1. jcpenney (43%)
1. hugoboss cn(41%)
1. lululemon cn(41%)
1. marinarinaldi at(37%)/be(37%)/cz(38%)/de(38%)/dk(36%)/es(38%)/fr(37%)/hu(36%)/ie(38%)/it(41%)/nl(38%)/pl(37%)/pt(36%)/ro(38%)/se(37%)/uk(37%)/us(32%)
1. prada (38%)/at(31%)/ch(37%)/de(38%)/dk(38%)/es(36%)/fi(38%)/gr(31%)/ie(41%)/it(38%)/no(32%)/pt(38%)/se(39%)
1. acne dk(40%)
1. interightatjd cn(40%)
1. petitbateau uk(40%)
1. alaia ae(35%)/at(34%)/bh(37%)/ca(31%)/de(32%)/dk(38%)/fi(36%)/fr(31%)/hk(35%)/it(36%)/jp(30%)/nl(34%)/no(34%)/pl(34%)/pt(38%)/se(34%)/tr(37%)/uk(36%)/us(33%)/za(34%)
1. mango am(31%)/bm(30%)/by(31%)/dz(31%)/is(31%)/kr(38%)
1. gstar (31%)/at(35%)/au(34%)/bg(33%)/ch(36%)/cz(35%)/de(37%)/ee(36%)/hr(36%)/hu(33%)/ie(30%)/lt(34%)/lv(34%)/pl(35%)/ru(37%)/si(36%)/sk(35%)
1. ssfshop kr(37%)
1. terranovastyle at(34%)/de(34%)/es(36%)/fr(36%)/it(35%)/nl(37%)/uk(35%)
1. tigerofsweden at(33%)/be(36%)/ca(33%)/ch(33%)/de(34%)/dk(35%)/es(36%)/fi(33%)/fr(33%)/ie(33%)/it(33%)/nl(35%)/no(37%)/se(34%)/uk(33%)
1. vionicshoes uk(37%)
1. only ca(30%)/us(36%)
1. replayjeans at(33%)/au(35%)/be(35%)/ca(35%)/ch(35%)/de(35%)/dk(33%)/es(32%)/eu(35%)/fr(35%)/it(32%)/no(34%)/pt(34%)/se(32%)/uk(35%)/us(33%)
1. vans at(31%)/de(33%)/dk(34%)/es(33%)/it(33%)/nl(33%)/pl(33%)/se(35%)/uk(33%)
1. justcavalli ae(33%)/at(33%)/au(32%)/ca(34%)/de(33%)/dk(33%)/es(30%)/fr(33%)/hk(30%)/it(33%)/jp(34%)/nl(33%)/no(30%)/pl(33%)/pt(33%)/ru(33%)/se(33%)/tr(34%)/uk(32%)/us(33%)/za(33%)
1. maxandco es(34%)/it(34%)/uk(34%)
1. neimanmarcus cn(34%)
1. aboutyou hu(31%)/pl(32%)/ro(31%)/sk(31%)
1. calvinklein cz(30%)/hr(30%)/hu(32%)/pl(30%)
1. boardiesapparel au(31%)
1. bonpoint es(30%)/it(31%)
1. dvf br(31%)
1. ochirly cn(31%)
1. patriziapepe at(30%)/be(30%)/bg(30%)/ca(30%)/ie(30%)/lu(30%)/us(30%)
| True | Broken Crawlers 08, Mar 2020 - 1. **24sevres eu(100%)/fr(100%)/uk(100%)/us(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **adidas pl(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/uk(100%)/us(100%)**
1. **antonioli es(100%)**
1. **arket uk(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **babywalz at(100%)/ch(100%)/de(100%)**
1. **bananarepublic ca(100%)**
1. **benetton lv(100%)**
1. **bijoubrigitte de(100%)/nl(100%)**
1. **boconcept at(100%)/de(100%)**
1. **boozt uk(100%)**
1. **borbonese eu(100%)/it(100%)/uk(100%)**
1. **buckle us(100%)**
1. **charmingcharlie us(100%)**
1. **chloe kr(100%)**
1. **clarks eu(100%)**
1. **coach ca(100%)**
1. **conforama fr(100%)**
1. **converse au(100%)/kr(100%)/nl(42%)**
1. **cos (100%)/at(100%)/hu(34%)**
1. **creationl de(100%)**
1. **dfs uk(100%)**
1. **dickssportinggoods us(100%)**
1. **eastbay us(100%)**
1. **ernstings de(100%)**
1. **falabella cl(100%)/co(100%)**
1. **fanatics us(100%)**
1. **fendi cn(100%)**
1. **footaction us(100%)**
1. **footlocker (100%)/be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **gap ca(100%)**
1. **getthelabel au(100%)/dk(100%)**
1. **harrods (100%)**
1. **heine at(100%)**
1. **hermes at(100%)/ca(100%)/de(50%)/es(50%)/fr(67%)/nl(50%)/se(50%)/uk(67%)**
1. **hm ae(100%)/cz(34%)/eu(100%)/jp(37%)/kw(100%)/pl(100%)/sa(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hunter (100%)**
1. **ikea au(100%)/pt(100%)**
1. **intersport es(84%)/fr(100%)**
1. **intimissimi cn(100%)/hk(100%)/jp(100%)**
1. **jackwills (100%)**
1. **jeffreycampbell us(100%)**
1. **klingel de(100%)**
1. **lacoste cn(100%)**
1. **laredouteapi es(100%)**
1. **lefties sa(100%)**
1. **levi my(100%)**
1. **lifestylestores in(100%)**
1. **made ch(100%)/de(100%)/es(100%)/nl(100%)/uk(100%)**
1. **massimodutti ad(49%)/al(50%)/am(49%)/az(50%)/ba(50%)/by(51%)/co(51%)/cr(48%)/cy(50%)/do(47%)/ec(51%)/eg(100%)/ge(46%)/gt(49%)/hk(49%)/hn(49%)/id(47%)/il(51%)/in(49%)/kz(50%)/mc(57%)/mk(50%)/mo(47%)/my(100%)/pa(49%)/ph(100%)/rs(49%)/sa(100%)/sg(45%)/th(100%)/tn(51%)/tw(49%)/ua(52%)/vn(100%)**
1. **maxfashion bh(100%)**
1. **melijoe be(44%)/cn(100%)/fr(33%)/kr(89%)/uk(81%)**
1. **michaelkors ca(100%)/us(33%)**
1. **missguided pl(100%)**
1. **moncler ru(100%)**
1. **monki nl(100%)/pl(100%)**
1. **moosejaw us(100%)**
1. **mothercare sa(100%)**
1. **mq se(100%)**
1. **mrporter ie(100%)**
1. **mrprice uk(100%)**
1. **muji de(100%)/uk(67%)**
1. **offspring uk(100%)**
1. **oldnavy ca(100%)**
1. **parfois ad(100%)/al(100%)/am(100%)/ao(100%)/at(100%)/ba(100%)/be(100%)/bg(100%)/bh(100%)/br(100%)/by(100%)/ch(100%)/co(100%)/cz(100%)/de(100%)/dk(100%)/do(100%)/ee(100%)/eg(100%)/es(100%)/fi(100%)/fr(100%)/ge(100%)/gr(100%)/gt(100%)/hr(100%)/hu(100%)/ie(100%)/ir(100%)/it(100%)/jo(100%)/kw(100%)/lb(100%)/lt(100%)/lu(100%)/lv(100%)/ly(100%)/ma(100%)/mc(100%)/mk(100%)/mt(100%)/mx(100%)/mz(100%)/nl(100%)/om(100%)/pa(100%)/pe(100%)/ph(100%)/pl(100%)/pt(100%)/qa(100%)/ro(100%)/rs(100%)/sa(100%)/se(100%)/si(100%)/sk(100%)/tn(100%)/uk(100%)/us(100%)/ve(100%)/ye(100%)**
1. **patagonia ca(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbear kr(100%)/qa(100%)/tw(100%)**
1. **rakuten fr(100%)/us(100%)**
1. **ralphlauren cn(30%)/de(100%)**
1. **runnerspoint de(100%)**
1. **runwaysale za(100%)**
1. **sainsburys uk(100%)**
1. **saksfifthavenue mo(100%)/ru(68%)**
1. **selfridges es(100%)/fr(84%)/hk(74%)/kr(70%)/mo(35%)/sa(35%)/tw(30%)**
1. **shoedazzle us(100%)**
1. **simons ca(100%)**
1. **snipes de(100%)**
1. **solebox de(100%)/uk(100%)**
1. **soliver de(100%)**
1. **speedo us(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stefaniamode au(100%)**
1. **stories be(100%)**
1. **stradivarius lb(100%)/sg(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **superbalist za(100%)**
1. **thread uk(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/jp(37%)/nl(100%)/pt(100%)**
1. **tommybahama bh(100%)/de(100%)/ph(100%)/za(100%)**
1. **tommyhilfiger jp(100%)/us(100%)**
1. **topbrands ru(100%)**
1. **trendygolf uk(100%)**
1. **undefeated us(100%)**
1. **underarmour ca(100%)/pe(100%)**
1. **watchshop eu(100%)/ru(100%)/se(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **weekday eu(100%)**
1. **wenz de(100%)**
1. **westwingnow ch(100%)**
1. **womanwithin us(100%)**
1. **xxl dk(100%)**
1. **zalandolounge de(100%)**
1. **zalora id(100%)/my(85%)/tw(100%)**
1. **zara kw(100%)/mo(100%)/pe(100%)/uy(100%)**
1. **zilingo my(100%)**
1. farfetch kr(46%)/mo(98%)/nl(68%)
1. snkrs eu(98%)
1. hibbett us(97%)
1. vip cn(95%)
1. nike hk(93%)/kr(79%)
1. lasula uk(92%)
1. dolcegabbana uk(87%)
1. koton tr(85%)
1. schwab de(84%)
1. burberry (74%)/au(78%)/be(82%)/bg(78%)/ch(73%)/dk(68%)/es(75%)/fi(74%)/ie(80%)/jp(73%)/my(77%)/pt(75%)/ro(69%)/sg(68%)/sk(73%)/tw(78%)/us(65%)
1. shoecarnival us(81%)
1. yoox at(74%)/be(42%)/fr(77%)/hk(40%)/ru(33%)/uk(52%)
1. zalando at(68%)/ch(61%)/de(61%)/dk(48%)/fi(31%)/it(44%)/pl(75%)
1. stevemadden us(73%)
1. maxmara dk(30%)/it(71%)/pl(30%)/se(30%)
1. zivame in(69%)
1. jelmoli ch(65%)
1. aloyoga us(62%)
1. bensherman eu(61%)
1. pinkboutique uk(61%)
1. timberland my(60%)
1. timberlandtrans sg(60%)
1. sfera es(59%)
1. venteprivee es(59%)/fr(53%)/it(57%)
1. limango de(58%)
1. misssixty cn(55%)
1. anniebing us(54%)
1. liujo es(40%)/it(52%)
1. anayi jp(50%)
1. brunellocucinelli cn(50%)
1. revolve us(50%)
1. amazon uk(49%)
1. sportchek ca(47%)
1. strellson at(43%)/be(42%)/ch(46%)/de(45%)/fr(40%)/nl(44%)
1. theory jp(45%)
1. jcpenney (43%)
1. hugoboss cn(41%)
1. lululemon cn(41%)
1. marinarinaldi at(37%)/be(37%)/cz(38%)/de(38%)/dk(36%)/es(38%)/fr(37%)/hu(36%)/ie(38%)/it(41%)/nl(38%)/pl(37%)/pt(36%)/ro(38%)/se(37%)/uk(37%)/us(32%)
1. prada (38%)/at(31%)/ch(37%)/de(38%)/dk(38%)/es(36%)/fi(38%)/gr(31%)/ie(41%)/it(38%)/no(32%)/pt(38%)/se(39%)
1. acne dk(40%)
1. interightatjd cn(40%)
1. petitbateau uk(40%)
1. alaia ae(35%)/at(34%)/bh(37%)/ca(31%)/de(32%)/dk(38%)/fi(36%)/fr(31%)/hk(35%)/it(36%)/jp(30%)/nl(34%)/no(34%)/pl(34%)/pt(38%)/se(34%)/tr(37%)/uk(36%)/us(33%)/za(34%)
1. mango am(31%)/bm(30%)/by(31%)/dz(31%)/is(31%)/kr(38%)
1. gstar (31%)/at(35%)/au(34%)/bg(33%)/ch(36%)/cz(35%)/de(37%)/ee(36%)/hr(36%)/hu(33%)/ie(30%)/lt(34%)/lv(34%)/pl(35%)/ru(37%)/si(36%)/sk(35%)
1. ssfshop kr(37%)
1. terranovastyle at(34%)/de(34%)/es(36%)/fr(36%)/it(35%)/nl(37%)/uk(35%)
1. tigerofsweden at(33%)/be(36%)/ca(33%)/ch(33%)/de(34%)/dk(35%)/es(36%)/fi(33%)/fr(33%)/ie(33%)/it(33%)/nl(35%)/no(37%)/se(34%)/uk(33%)
1. vionicshoes uk(37%)
1. only ca(30%)/us(36%)
1. replayjeans at(33%)/au(35%)/be(35%)/ca(35%)/ch(35%)/de(35%)/dk(33%)/es(32%)/eu(35%)/fr(35%)/it(32%)/no(34%)/pt(34%)/se(32%)/uk(35%)/us(33%)
1. vans at(31%)/de(33%)/dk(34%)/es(33%)/it(33%)/nl(33%)/pl(33%)/se(35%)/uk(33%)
1. justcavalli ae(33%)/at(33%)/au(32%)/ca(34%)/de(33%)/dk(33%)/es(30%)/fr(33%)/hk(30%)/it(33%)/jp(34%)/nl(33%)/no(30%)/pl(33%)/pt(33%)/ru(33%)/se(33%)/tr(34%)/uk(32%)/us(33%)/za(33%)
1. maxandco es(34%)/it(34%)/uk(34%)
1. neimanmarcus cn(34%)
1. aboutyou hu(31%)/pl(32%)/ro(31%)/sk(31%)
1. calvinklein cz(30%)/hr(30%)/hu(32%)/pl(30%)
1. boardiesapparel au(31%)
1. bonpoint es(30%)/it(31%)
1. dvf br(31%)
1. ochirly cn(31%)
1. patriziapepe at(30%)/be(30%)/bg(30%)/ca(30%)/ie(30%)/lu(30%)/us(30%)
| reli | broken crawlers mar eu fr uk us abcmart kr abercrombie cn hk jp adidas pl americaneagle ca ami cn dk jp kr uk us antonioli es arket uk asos ae au ch cn hk id my nl ph pl ru sa sg th us vn babyshop ae sa babywalz at ch de bananarepublic ca benetton lv bijoubrigitte de nl boconcept at de boozt uk borbonese eu it uk buckle us charmingcharlie us chloe kr clarks eu coach ca conforama fr converse au kr nl cos at hu creationl de dfs uk dickssportinggoods us eastbay us ernstings de falabella cl co fanatics us fendi cn footaction us footlocker be de dk es fr it lu nl no se uk gap ca getthelabel au dk harrods heine at hermes at ca de es fr nl se uk hm ae cz eu jp kw pl sa hollister cn hk jp tw hunter ikea au pt intersport es fr intimissimi cn hk jp jackwills jeffreycampbell us klingel de lacoste cn laredouteapi es lefties sa levi my lifestylestores in made ch de es nl uk massimodutti ad al am az ba by co cr cy do ec eg ge gt hk hn id il in kz mc mk mo my pa ph rs sa sg th tn tw ua vn maxfashion bh melijoe be cn fr kr uk michaelkors ca us missguided pl moncler ru monki nl pl moosejaw us mothercare sa mq se mrporter ie mrprice uk muji de uk offspring uk oldnavy ca parfois ad al am ao at ba be bg bh br by ch co cz de dk do ee eg es fi fr ge gr gt hr hu ie ir it jo kw lb lt lu lv ly ma mc mk mt mx mz nl om pa pe ph pl pt qa ro rs sa se si sk tn uk us ve ye patagonia ca popup br prettysecrets in pullandbear kr qa tw rakuten fr us ralphlauren cn de runnerspoint de runwaysale za sainsburys uk saksfifthavenue mo ru selfridges es fr hk kr mo sa tw shoedazzle us simons ca snipes de solebox de uk soliver de speedo us splashfashions ae bh sa stefaniamode au stories be stradivarius lb sg stylebop au ca cn de es fr hk jp kr mo sg us superbalist za thread uk us tods cn gr jp nl pt tommybahama bh de ph za tommyhilfiger jp us topbrands ru trendygolf uk undefeated us underarmour ca pe watchshop eu ru se wayfair ca de uk weekday eu wenz de westwingnow ch womanwithin us xxl dk 
zalandolounge de zalora id my tw zara kw mo pe uy zilingo my farfetch kr mo nl snkrs eu hibbett us vip cn nike hk kr lasula uk dolcegabbana uk koton tr schwab de burberry au be bg ch dk es fi ie jp my pt ro sg sk tw us shoecarnival us yoox at be fr hk ru uk zalando at ch de dk fi it pl stevemadden us maxmara dk it pl se zivame in jelmoli ch aloyoga us bensherman eu pinkboutique uk timberland my timberlandtrans sg sfera es venteprivee es fr it limango de misssixty cn anniebing us liujo es it anayi jp brunellocucinelli cn revolve us amazon uk sportchek ca strellson at be ch de fr nl theory jp jcpenney hugoboss cn lululemon cn marinarinaldi at be cz de dk es fr hu ie it nl pl pt ro se uk us prada at ch de dk es fi gr ie it no pt se acne dk interightatjd cn petitbateau uk alaia ae at bh ca de dk fi fr hk it jp nl no pl pt se tr uk us za mango am bm by dz is kr gstar at au bg ch cz de ee hr hu ie lt lv pl ru si sk ssfshop kr terranovastyle at de es fr it nl uk tigerofsweden at be ca ch de dk es fi fr ie it nl no se uk vionicshoes uk only ca us replayjeans at au be ca ch de dk es eu fr it no pt se uk us vans at de dk es it nl pl se uk justcavalli ae at au ca de dk es fr hk it jp nl no pl pt ru se tr uk us za maxandco es it uk neimanmarcus cn aboutyou hu pl ro sk calvinklein cz hr hu pl boardiesapparel au bonpoint es it dvf br ochirly cn patriziapepe at be bg ca ie lu us | 1 |
612,896 | 19,058,730,537 | IssuesEvent | 2021-11-26 02:43:14 | innohack2021/HotSource | https://api.github.com/repos/innohack2021/HotSource | closed | Env: Set up GitHub Actions for CI | Environment Priority: very high | # Set up GitHub Actions for CI
## Type
- [x] Set Up
- [x] Environment
## Details
Automating builds on PRs will save us time.
We will test with the default template that GitHub provides.
We plan to flesh out the configuration further later on.
## Checklist
- [x] Create the workflow directory
- [x] Create ci.yaml | 1.0 | Env: Set up GitHub Actions for CI - # Set up GitHub Actions for CI
## Type
- [x] Set Up
- [x] Environment
## Details
Automating builds on PRs will save us time.
We will test with the default template that GitHub provides.
We plan to flesh out the configuration further later on.
## Checklist
- [x] Create the workflow directory
- [x] Create ci.yaml | non_reli | env set up github actions for ci set up github actions for ci type set up environment details automating builds on prs will save us time we will test with the default template that github provides we plan to flesh out the configuration further later on checklist create the workflow directory create ci yaml | 0
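The checklist above amounts to a single workflow file under the new directory. A minimal sketch of the kind of default build template GitHub provides (the trigger, job name, and build step here are illustrative assumptions, not the project's actual `ci.yaml`):

```yaml
# .github/workflows/ci.yaml -- illustrative sketch only.
# Runs a build automatically on every pull request, as described above.
name: CI

on:
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the PR's code.
      - uses: actions/checkout@v2
      # Assumed build command; replace with the project's real one.
      - name: Build
        run: ./build.sh
```

Committing a file like this to the workflow directory is enough for GitHub to start running the job on each PR; richer configuration can be layered on later.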
589 | 8,743,763,175 | IssuesEvent | 2018-12-12 20:07:24 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | Update chosen mail servers with new library binding UpdateMailservers | chat-reliability | # Problem
We need to know which mail servers are selected by the user. Discussion is in https://github.com/status-im/status-go/issues/1285. The binding accepts a JSON-encoded list of enodes and returns an error if any enode cannot be unmarshalled into an enode.Node data object.
https://github.com/status-im/status-go/blob/8b310db28d0a535804f61a274900ebca20a40821/lib/library.go#L467
# Implementation
The update must be made before doing any requests. For now status-react can preserve all of its other logic; the status-go part will use the mail server that was added by status-react. If the mail server changes, status-go must be notified of this change.
<blockquote><img src="https://avatars3.githubusercontent.com/u/11767950?s=400&v=4" width="48" align="right"><div><img src="https://assets-cdn.github.com/favicon.ico" height="14"> GitHub</div><div><strong><a href="https://github.com/status-im/status-go">status-im/status-go</a></strong></div><div>The Status module that consumes go-ethereum. Contribute to status-im/status-go development by creating an account on GitHub.</div></blockquote> | True | Update chosen mail servers with new library binding UpdateMailservers - # Problem
We need to know which mail servers are selected by the user. Discussion is in https://github.com/status-im/status-go/issues/1285. The binding accepts a JSON-encoded list of enodes and returns an error if any enode cannot be unmarshalled into an enode.Node data object.
https://github.com/status-im/status-go/blob/8b310db28d0a535804f61a274900ebca20a40821/lib/library.go#L467
# Implementation
The update must be made before doing any requests. For now status-react can preserve all of its other logic; the status-go part will use the mail server that was added by status-react. If the mail server changes, status-go must be notified of this change.
<blockquote><img src="https://avatars3.githubusercontent.com/u/11767950?s=400&v=4" width="48" align="right"><div><img src="https://assets-cdn.github.com/favicon.ico" height="14"> GitHub</div><div><strong><a href="https://github.com/status-im/status-go">status-im/status-go</a></strong></div><div>The Status module that consumes go-ethereum. Contribute to status-im/status-go development by creating an account on GitHub.</div></blockquote> | reli | update chosen mail servers with new library binding updatemailservers problem we need to know which mail servers are selected by the user discussion is in binding accepts json encoded list of enodes and returns error if any enode cannot be unmarshalled into enode node data objects implementation update must be made before doing any requests for now status react can preserve all other logic status go part will use mail server that was added by status react if mail server changes status go must be notified of this change github | 1 |
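The contract described in that record — a JSON-encoded list of enodes, rejected if any entry fails enode unmarshalling — can be sketched from the caller's side. This is an illustrative helper, not part of status-react or status-go; the `enode://` prefix check only approximates the real `enode.Node` unmarshalling that status-go performs:

```python
import json

def encode_mailservers(enodes):
    """Encode the chosen mail servers as the JSON payload that the
    UpdateMailservers binding expects (hypothetical helper)."""
    for enode in enodes:
        # Rough client-side sanity check; status-go itself rejects any
        # entry that cannot be unmarshalled into an enode.Node.
        if not enode.startswith("enode://"):
            raise ValueError("not an enode URL: " + enode)
    return json.dumps(enodes)
```

On the client side such a payload would be sent before any mail-server requests are made, and again whenever the chosen mail server changes.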
440 | 7,520,336,931 | IssuesEvent | 2018-04-12 14:14:24 | CCI-MOC/hil | https://api.github.com/repos/CCI-MOC/hil | opened | Project related CLI calls | easy nice to have reliability | The following calls are required for a project.
It should list the resources allocated to the project, like <node>, <networks>, <users>.
For Networks, it should show which ones the project owns versus which ones are shared with the project.
For Users, it should show which are `admin` vs which are `regular`.
This call should be accessible to an administrator and to any user that belongs to the <project>.
```
project show <project_name>
``` | True | Project related CLI calls - The following calls are required for a project.
It should list the resources allocated to the project, like <node>, <networks>, <users>.
For Networks, it should show which ones the project owns versus which ones are shared with the project.
For Users, it should show which are `admin` vs which are `regular`.
This call should be accessible to an administrator and to any user that belongs to the <project>.
```
project show <project_name>
``` | reli | project related cli calls following calls are required for project should list resources allocated to the project like for networks it should show which the project owns versus which are shared with the project for users it should show which are admin vs which are regular this call should be accessed by administrator and by any user that belongs to the project show | 1 |
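The access rule stated in that record — administrators plus any user who belongs to the project — can be sketched as a simple predicate. The names and data shapes here are illustrative assumptions, not HIL's actual API:

```python
def can_show_project(user, project_name):
    """Return True if `user` may run `project show <project_name>`:
    either the user is an administrator, or the project is one the
    user belongs to (illustrative rule taken from the issue text)."""
    return user.get("is_admin", False) or project_name in user.get("projects", [])
```

A CLI handler would call this check first and refuse to list the project's nodes, networks, and users when it returns False.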
619 | 9,037,288,325 | IssuesEvent | 2019-02-09 08:56:01 | Cha-OS/colabo | https://api.github.com/repos/Cha-OS/colabo | opened | State & Content persistence | CRITICAL! Motivation UX.UsrOnBoard+AvoidUsrErr backend domain:Workshops feature reliability | - text writing auto-save (local and on server)
- preserving the state in the app, to be restored upon leaving the app accidentally
- and all the other content/context preservation based issues that might frustrate writers, create friction and lead them not to use the app for writing, or even losing the poems/texts | True | State & Content persistence - - text writing auto-save (local and on server)
- preserving the state in the app, to be restored upon leaving the app accidentally
- and all the other content/context preservation based issues that might frustrate writers, create friction and lead them not to use the app for writing, or even losing the poems/texts | reli | state content persistence text writing auto save local and on server preserving the state in the app to be restored upon leaving the app accidentally and all the other content context preservation based issues that might frustrate writers create friction and lead them not to use the app for writing or even losing the poems texts | 1
1,294 | 14,658,359,273 | IssuesEvent | 2020-12-28 17:43:02 | PrismarineJS/mineflayer | https://api.github.com/repos/PrismarineJS/mineflayer | closed | Improve printing of errors (in examples) | reliability | I get this error when attempting to use `mineflayer`
> ```You are using a pure-javascript implementation of RSA.
> Your performance might be subpar. Please consider installing URSA```
How do I fix it?
| True | Improve printing of errors (in examples) - I get this error when attempting to use `mineflayer`
> ```You are using a pure-javascript implementation of RSA.
> Your performance might be subpar. Please consider installing URSA```
How do I fix it?
| reli | improve printing of errors in examples i get this error when attempting to use mineflayer you are using a pure javascript implementation of rsa your performance might be subpar please consider installing ursa how do i fix it | 1 |
307,169 | 23,186,938,217 | IssuesEvent | 2022-08-01 09:12:24 | timescale/tobs | https://api.github.com/repos/timescale/tobs | closed | Docs: describe how to use tobs in multi-cluster environment | documentation epic/tobs-production-readiness | <!-- Feel free to ask questions in #promscale on Timescale Slack! -->
**What is missing?**
We are aiming to deploy tobs internally in a multi-cluster hub-and-spoke architecture. It would be good to document how this is done and what configuration options are necessary.
**Why do we need it?**
To offer better experience in multi-cluster environments
**Anything else we need to know?**:
| 1.0 | Docs: describe how to use tobs in multi-cluster environment - <!-- Feel free to ask questions in #promscale on Timescale Slack! -->
**What is missing?**
We are aiming to deploy tobs internally in a multi-cluster hub-and-spoke architecture. It would be good to document how this is done and what configuration options are necessary.
**Why do we need it?**
To offer better experience in multi-cluster environments
**Anything else we need to know?**:
| non_reli | docs describe how to use tobs in multi cluster environment what is missing we are aiming to deploy tobs internally in a multi cluster hub and spoke architecture it would be good to document how this is done and what configuration options are necessary why do we need it to offer better experience in multi cluster environments anything else we need to know | 0 |
29,389 | 5,664,009,102 | IssuesEvent | 2017-04-11 00:26:12 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | closed | MDAnalysisTests raises exception when imported | defect testing | ### Expected behaviour
```python
import MDAnalysisTests
```
and
```python
import MDAnalysis.tests
```
should import (and make eg files available in `datafiles`.
### Actual behaviour
Both imports fail with
```
In [3]: import MDAnalysisTests
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-3-52912a4f0416> in <module>()
----> 1 import MDAnalysisTests
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/MDAnalysisTests/__init__.py in <module>()
141 pass
142
--> 143 from MDAnalysisTests.util import (
144 block_import,
145 executable_not_found,
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/MDAnalysisTests/util.py in <module>()
37 from functools import wraps
38 import importlib
---> 39 import mock
40 import os
41
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/mock/__init__.py in <module>()
1 from __future__ import absolute_import
----> 2 import mock.mock as _mock
3 from mock.mock import *
4 __all__ = _mock.__all__
5 #import mock.mock as _mock
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/mock/mock.py in <module>()
69 from pbr.version import VersionInfo
70
---> 71 _v = VersionInfo('mock').semantic_version()
72 __version__ = _v.release_string()
73 version_info = _v.version_tuple()
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/version.pyc in semantic_version(self)
458 """Return the SemanticVersion object for this version."""
459 if self._semantic is None:
--> 460 self._semantic = self._get_version_from_pkg_resources()
461 return self._semantic
462
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/version.pyc in _get_version_from_pkg_resources(self)
445 # installed into anything. Revert to setup-time logic.
446 from pbr import packaging
--> 447 result_string = packaging.get_version(self.package)
448 return SemanticVersion.from_pip_string(result_string)
449
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/packaging.pyc in get_version(package_name, pre_version)
748 " to pbr.version.VersionInfo. Project name {name} was"
749 " given, but was not able to be found.".format(
--> 750 name=package_name))
751
752
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name mock was given, but was not able to be found.
```
### Code to reproduce the behaviour
```python
import MDAnalysisTests
```
### Current version of MDAnalysis:
(run `python -c "import MDAnalysis as mda; print(mda.__version__)"`)
0.16.0 (pip upgraded in a virtualenv) | 1.0 | MDAnalysisTests raises exception when imported - ### Expected behaviour
```python
import MDAnalysisTests
```
and
```python
import MDAnalysis.tests
```
should import (and make eg files available in `datafiles`.
### Actual behaviour
Both imports fail with
```
In [3]: import MDAnalysisTests
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-3-52912a4f0416> in <module>()
----> 1 import MDAnalysisTests
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/MDAnalysisTests/__init__.py in <module>()
141 pass
142
--> 143 from MDAnalysisTests.util import (
144 block_import,
145 executable_not_found,
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/MDAnalysisTests/util.py in <module>()
37 from functools import wraps
38 import importlib
---> 39 import mock
40 import os
41
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/mock/__init__.py in <module>()
1 from __future__ import absolute_import
----> 2 import mock.mock as _mock
3 from mock.mock import *
4 __all__ = _mock.__all__
5 #import mock.mock as _mock
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/mock/mock.py in <module>()
69 from pbr.version import VersionInfo
70
---> 71 _v = VersionInfo('mock').semantic_version()
72 __version__ = _v.release_string()
73 version_info = _v.version_tuple()
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/version.pyc in semantic_version(self)
458 """Return the SemanticVersion object for this version."""
459 if self._semantic is None:
--> 460 self._semantic = self._get_version_from_pkg_resources()
461 return self._semantic
462
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/version.pyc in _get_version_from_pkg_resources(self)
445 # installed into anything. Revert to setup-time logic.
446 from pbr import packaging
--> 447 result_string = packaging.get_version(self.package)
448 return SemanticVersion.from_pip_string(result_string)
449
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/packaging.pyc in get_version(package_name, pre_version)
748 " to pbr.version.VersionInfo. Project name {name} was"
749 " given, but was not able to be found.".format(
--> 750 name=package_name))
751
752
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name mock was given, but was not able to be found.
```
### Code to reproduce the behaviour
```python
import MDAnalysisTests
```
### Current version of MDAnalysis:
(run `python -c "import MDAnalysis as mda; print(mda.__version__)"`)
0.16.0 (pip upgraded in a virtualenv) | non_reli | mdanalysistests raises exception when imported expected behaviour python import mdanalysistests and python import mdanalysis tests should import and make eg files available in datafiles actual behaviour both imports fail with in import mdanalysistests exception traceback most recent call last in import mdanalysistests users oliver virtualenvs mda clean lib site packages mdanalysistests init py in pass from mdanalysistests util import block import executable not found users oliver virtualenvs mda clean lib site packages mdanalysistests util py in from functools import wraps import importlib import mock import os users oliver virtualenvs mda clean lib site packages mock init py in from future import absolute import import mock mock as mock from mock mock import all mock all import mock mock as mock users oliver virtualenvs mda clean lib site packages mock mock py in from pbr version import versioninfo v versioninfo mock semantic version version v release string version info v version tuple users oliver virtualenvs mda clean lib site packages pbr version pyc in semantic version self return the semanticversion object for this version if self semantic is none self semantic self get version from pkg resources return self semantic users oliver virtualenvs mda clean lib site packages pbr version pyc in get version from pkg resources self installed into anything revert to setup time logic from pbr import packaging result string packaging get version self package return semanticversion from pip string result string users oliver virtualenvs mda clean lib site packages pbr packaging pyc in get version package name pre version to pbr version versioninfo project name name was given but was not able to be found format name package name exception versioning for this project requires either an sdist tarball or access to an upstream git repository it s also possible that there is a mismatch between the package name in setup cfg and 
the argument given to pbr version versioninfo project name mock was given but was not able to be found code to reproduce the behaviour python import mdanalysistests currently version of mdanalysis run python c import mdanalysis as mda print mda version pip upgraded in a virtualenv | 0 |
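The traceback in the record above fires because pbr looks up the `mock` distribution's metadata at import time and raises when it cannot be found. A minimal Python sketch of the same lookup, using `importlib.metadata` as a stand-in for pbr's `pkg_resources` path (this is not pbr's actual code), shows how that failure can be turned into a recoverable `None` instead of an exception:

```python
from importlib import metadata

def installed_version(package_name):
    """Return the installed version string for a distribution, or None
    when its metadata cannot be found (the condition that makes pbr's
    VersionInfo raise in the traceback above)."""
    try:
        return metadata.version(package_name)
    except metadata.PackageNotFoundError:
        return None

# A distribution that is certainly not installed yields None rather
# than an exception at import time.
print(installed_version("definitely-not-a-real-distribution-xyz"))  # None
```

In practice the reporter's fix was environment-side (reinstalling `mock` so its metadata is present), but the sketch illustrates why a missing distribution record produces this exact error.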
2,017 | 22,599,367,412 | IssuesEvent | 2022-06-29 07:45:08 | jina-ai/jina | https://api.github.com/repos/jina-ai/jina | closed | Epic: Improve reliability | epic/reliability | We want to improve the behavior of core from a reliability perspective. This covers mostly error modeling, handling and reporting.
**This includes taking on these known issues:**
1. Error reporting for Flows failing to start is confusing. It includes multiple different stack traces. Only one of them usually is relevant and our users easily get lost.
2. Errors during DataRequest processing are reportedly very awkward at the moment. Some of those errors get caught automatically by gRPC and reported back as `Unexpected <class...` errors
3. If a Kubernetes Pod is killed, we may lose some messages in the process
4. Gateway/Head/Workers are not protected against being overloaded. Clients sending too fast might crash instances
5. We have generally good test coverage, but no chaos/load system test in K8s
To solve the above issues we want to do the following things. The list is ordered by priority:
- [x] #4624
- [x] #4625
- [x] #4626
~~#4627~~ cancelled
~~#4628~~ cancelled
- [x] #4629
- [x] #4630
~~ #4631~~ cancelled
- [x] #4654
- [x] #4657
- [x] https://github.com/jina-ai/jina/issues/4817
- [x] #4763
- [x] #4767
- [x] #4877
- [x] #4878 | True | Epic: Improve reliability - We want to improve the behavior of core from a reliability perspective. This covers mostly error modeling, handling and reporting.
**This includes taking on these known issues:**
1. Error reporting for Flows failing to start is confusing. It includes multiple different stack traces. Only one of them usually is relevant and our users easily get lost.
2. Errors during DataRequest processing are reportedly very awkward at the moment. Some of those errors get caught automatically by gRPC and reported back as `Unexpected <class...` errors
3. If a Kubernetes Pod is killed, we may lose some messages in the process
4. Gateway/Head/Workers are not protected against being overloaded. Clients sending too fast might crash instances
5. We have generally good test coverage, but no chaos/load system test in K8s
To solve the above issues we want to do the following things. The list is ordered by priority:
- [x] #4624
- [x] #4625
- [x] #4626
~~#4627~~ cancelled
~~#4628~~ cancelled
- [x] #4629
- [x] #4630
~~ #4631~~ cancelled
- [x] #4654
- [x] #4657
- [x] https://github.com/jina-ai/jina/issues/4817
- [x] #4763
- [x] #4767
- [x] #4877
- [x] #4878 | reli | epic improve reliability we want to improve the behavior of core from a reliability perspective this covers mostly error modeling handling and reporting this includes taking on these known issues error reporting for flows failing to start is confusing it includes multiple different stack traces only one of them usually is relevant and our users easily get lost errors during datarequest processing are reportedly very awkward at the moment some of those errors get caught automatically by grpc and reported back as unexpected class errors if a kubernetes pod is killed we may lose some messages in the process gateway head workers are not protected against being overloaded clients sending too fast might crash instances we have generally good test coverage but no chaos load system test in to solve the above issues we want to do the following things the list is ordered by priority cancelled cancelled cancelled | 1 |
73,991 | 14,166,245,284 | IssuesEvent | 2020-11-12 08:37:44 | GSG-G9/choose-your-side | https://api.github.com/repos/GSG-G9/choose-your-side | closed | handle Uncaught Type Errors for your Inputs | code-review | https://github.com/GSG-G9/choose-your-side/blob/010991f86fb82bf48dfd474db9da16695231fa78/main.js#L72
https://github.com/GSG-G9/choose-your-side/blob/010991f86fb82bf48dfd474db9da16695231fa78/main.js#L108
**I think it would be better to declare your API's URL and its variables at the top of the file and then call them wherever you want**
https://github.com/GSG-G9/choose-your-side/blob/010991f86fb82bf48dfd474db9da16695231fa78/main.js#L65-L77
Here, when I type Palestine for example, it displays data, but when I delete it, it displays the text "can't be empty" and the previous data still appears under this text, so it would be good to hide the previous `covidValue` value when there is no entered data!
Also, writing a more descriptive message instead of "can't be empty", like "Country Name can't be empty", would be better.
Also, an error appears when there is a problem with the Country Name:
**Uncaught TypeError: Cannot read property 'Country' of undefined**
So try to handle errors or wrong values as well.
https://github.com/GSG-G9/choose-your-side/blob/010991f86fb82bf48dfd474db9da16695231fa78/main.js#L101-L113
There is an error that appears in the console:
**Uncaught TypeError: Cannot read property '0' of null**
so try to handle this error when the user enters a non-existing value | 1.0 | handle Uncaught Type Errors for your Inputs - https://github.com/GSG-G9/choose-your-side/blob/010991f86fb82bf48dfd474db9da16695231fa78/main.js#L72
https://github.com/GSG-G9/choose-your-side/blob/010991f86fb82bf48dfd474db9da16695231fa78/main.js#L108
**I think it would be better to declare your API's URL and its variables at the top of the file and then call them wherever you want**
https://github.com/GSG-G9/choose-your-side/blob/010991f86fb82bf48dfd474db9da16695231fa78/main.js#L65-L77
Here, when I type Palestine for example, it displays data, but when I delete it, it displays the text "can't be empty" and the previous data still appears under this text, so it would be good to hide the previous `covidValue` value when there is no entered data!
Also, writing a more descriptive message instead of "can't be empty", like "Country Name can't be empty", would be better.
Also, an error appears when there is a problem with the Country Name:
**Uncaught TypeError: Cannot read property 'Country' of undefined**
So try to handle errors or wrong values as well.
https://github.com/GSG-G9/choose-your-side/blob/010991f86fb82bf48dfd474db9da16695231fa78/main.js#L101-L113
There is an error that appears in the console:
**Uncaught TypeError: Cannot read property '0' of null**
so try to handle this error when there is an error or not existing value entered by the user | non_reli | handle uncaught type errors for your inputs i think it would be better to declare your api s url and its variables at the top of the file then call them anywhere you want here when i type palestine for example it displays data but when i delete it displays the text can t be empty and the previous data still appeared under this text so it s good to hide the previous covidvalue value when there is no entered data also if you write a more descriptive message instead of can t be empty like country name can t be empty would be better also there is error appeared when there is a problem in country name which is uncaught typeerror cannot read property country of undefined so try to handle error or wrong values also there is an error that appeared in the console which is uncaught typeerror cannot read property of null so try to handle this error when there is an error or not existing value entered by the user | 0 |
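The review in the record above concerns JavaScript, but the guard pattern it asks for is language-agnostic: validate empty input, and never index into a possibly-null lookup result. A hedged Python sketch of the same pattern (the `Countries`/`Country` field names are assumptions about the API response shape, not taken from the project's code):

```python
def country_cases(api_response, country_name):
    """Look up one country's record in a parsed API response without
    crashing on missing or null data (the Python analogue of the
    'Cannot read property ... of undefined/null' errors above)."""
    if not country_name or not country_name.strip():
        return "Country Name can't be empty"
    # Treat a null response or a missing list as "no records" instead
    # of indexing into None.
    records = (api_response or {}).get("Countries") or []
    for record in records:
        if record.get("Country") == country_name:
            return record
    return "No data found for {!r}".format(country_name)

print(country_cases(None, ""))  # Country Name can't be empty
```

The same three branches (empty input, matching record, readable "no data" message) map directly onto the behaviours the reviewer requested.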
232,056 | 7,653,950,319 | IssuesEvent | 2018-05-10 07:13:03 | metasfresh/metasfresh | https://api.github.com/repos/metasfresh/metasfresh | closed | webui: BPartner window: show BPartner Product tab | branch:master branch:release priority:high status:waiting-for-feedback type:enhancement | ### Is this a bug or feature request?
follow-up of https://github.com/metasfresh/metasfresh/issues/3834
### What is the current behavior?
#### Which are the steps to reproduce?
### What is the expected or desired behavior?
| 1.0 | webui: BPartner window: show BPartner Product tab - ### Is this a bug or feature request?
follow-up of https://github.com/metasfresh/metasfresh/issues/3834
### What is the current behavior?
#### Which are the steps to reproduce?
### What is the expected or desired behavior?
| non_reli | webui bpartner window show bpartner product tab is this a bug or feature request follow up of what is the current behavior which are the steps to reproduce what is the expected or desired behavior | 0 |
116,931 | 15,027,036,099 | IssuesEvent | 2021-02-01 23:53:12 | dotnet/winforms | https://api.github.com/repos/dotnet/winforms | closed | Windows Forms Designer Copies Global Project Resource Image to Form Resource File | :boom: regression area: VS designer | * .NET Core Version:
.NET 5.0-rc1 using Visual Studio 16.8.0 Preview 3.0.
* Have you experienced this same bug with .NET Framework?:
No.
**Problem description:**
Assigning a project image resource, e.g. to the `Image` property of a button, using the Windows Forms Designer results in copying the corresponding resource file from the global project resources to the local form resource and adding the following line in the designer code file:
`this.InOK.Image = ((System.Drawing.Image)(resources.GetObject("InOK.Image")));`
**Expected behavior:**
The global resource should be used directly as it was the case previously in the .NET Framework. The expected line in the designer code should be similar to this:
`this.InOK.Image = global::QuickArchive.Properties.Resources.ok_16;` | 1.0 | Windows Forms Designer Copies Global Project Resource Image to Form Resource File - * .NET Core Version:
.NET 5.0-rc1 using Visual Studio 16.8.0 Preview 3.0.
* Have you experienced this same bug with .NET Framework?:
No.
**Problem description:**
Assigning a project image resource e. g. to the `Image` property of a button using the Windows Forms Designer results in copying the corresponding resource file from the global project resources to the local form resource and adding following line in the designer code file:
`this.InOK.Image = ((System.Drawing.Image)(resources.GetObject("InOK.Image")));`
**Expected behavior:**
The global resource should be used directly as it was the case previously in the .NET Framework. The expected line in the designer code should be similar to this:
`this.InOK.Image = global::QuickArchive.Properties.Resources.ok_16;` | non_reli | windows forms designer copies global project resource image to form resource file net core version net using visual studio preview have you experienced this same bug with net framework no problem description assigning a project image resource e g to the image property of a button using the windows forms designer results in copying the corresponding resource file from the global project resources to the local form resource and adding following line in the designer code file this inok image system drawing image resources getobject inok image expected behavior the global resource should be used directly as it was the case previously in the net framework the expected line in the designer code should be similar to this this inok image global quickarchive properties resources ok | 0 |
1,251 | 14,291,936,748 | IssuesEvent | 2020-11-23 23:49:03 | microsoft/azuredatastudio | https://api.github.com/repos/microsoft/azuredatastudio | closed | ModelView components not being initialized as disabled correctly | Area - Reliability Bug | A couple places aren't getting initialized as disabled correctly after #13261
**Problem 1:**
Project radio button should be disabled if there aren't any other projects open (this works correctly in the November release):
https://github.com/microsoft/azuredatastudio/blob/749989cd0b8cea6c00a3d6b0e5137cb512594237/extensions%2Fsql-database-projects%2Fsrc%2Fdialogs%2FaddDatabaseReferenceDialog.ts#L196
Steps to repro:
1. Go to Projects viewlet
2. Create a project by clicking "Create new" button
3. Right click on "Database References" in project tree and click "Add database reference"
Expected: Project radio button should be disabled because no other projects can be added as a reference
Actual: Project radio button is enabled

**Problem 2:**
Workspace inputbox should be disabled. It gets disabled if you toggle between the radio buttons
https://github.com/microsoft/azuredatastudio/blob/ddc8c000901dc2b9bafc7be5e91085a2a4b99a88/extensions%2Fdata-workspace%2Fsrc%2Fdialogs%2FdialogBase.ts#L92-L95
Steps to repro:
1. Go to Projects viewlet
2. Click "Open Existing" button to open dialog
Expected: workspace inputbox should be disabled
Actual: workspace inputbox is enabled

| True | ModelView components not being initialized as disabled correctly - A couple places aren't getting initialized as disabled correctly after #13261
**Problem 1:**
Project radio button should be disabled if there aren't any other projects open (this works correctly in the November release):
https://github.com/microsoft/azuredatastudio/blob/749989cd0b8cea6c00a3d6b0e5137cb512594237/extensions%2Fsql-database-projects%2Fsrc%2Fdialogs%2FaddDatabaseReferenceDialog.ts#L196
Steps to repro:
1. Go to Projects viewlet
2. Create a project by clicking "Create new" button
3. Right click on "Database References" in project tree and click "Add database reference"
Expected: Project radio button should be disabled because no other projects can be added as a reference
Actual: Project radio button is enabled

**Problem 2:**
Workspace inputbox should be disabled. It gets disabled if you toggle between the radio buttons
https://github.com/microsoft/azuredatastudio/blob/ddc8c000901dc2b9bafc7be5e91085a2a4b99a88/extensions%2Fdata-workspace%2Fsrc%2Fdialogs%2FdialogBase.ts#L92-L95
Steps to repro:
1. Go to Projects viewlet
2. Click "Open Existing" button to open dialog
Expected: workspace inputbox should be disabled
Actual: workspace inputbox is enabled

| reli | modelview components not being initialized as disabled correctly a couple places aren t getting initialized as disabled correctly after problem project radio button should be disabled if there aren t any other projects open this works correctly in the november release steps to repro go to projects viewlet create a project by clicking create new button right click on database references in project tree and click add database reference expected project radio button should be disabled because no other projects can be added as a reference actual project radio button is enabled problem workspace inputbox should be disabled it gets disabled if you toggle between the radio buttons steps to repro go to projects viewlet click open existing button to open dialog expected workspace inputbox should be disabled actual workspace inputbox is enabled | 1 |
2,542 | 26,161,256,626 | IssuesEvent | 2022-12-31 15:18:32 | bryopsida/syslog-portal | https://api.github.com/repos/bryopsida/syslog-portal | closed | Health Endpoint | reliability | As a user I want the application to be resilient and automatically address and/or route around issues. A standard way to integrate with orchestrators that provide this is a health end point. | True | Health Endpoint - As a user I want the application to be resilient and automatically address and/or route around issues. A standard way to integrate with orchestrators that provide this is a health end point. | reli | health endpoint as a user i want the application to be resilient and automatically address and or route around issues a standard way to integrate with orchestrators that provide this is a health end point | 1 |
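The health-endpoint record above describes the standard probe pattern orchestrators rely on. A minimal sketch of what such an endpoint might return, written in Python for illustration only (syslog-portal itself is a Node/TypeScript project, and the check names here are hypothetical):

```python
import json
import time

def health_check(checks):
    """Run named probe callables and build an orchestrator-friendly
    payload: an overall status plus per-check results."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = {"healthy": bool(check())}
        except Exception as exc:  # a failing probe must not crash the endpoint
            results[name] = {"healthy": False, "error": str(exc)}
    status = "ok" if all(r["healthy"] for r in results.values()) else "degraded"
    return {"status": status, "timestamp": int(time.time()), "checks": results}

print(json.dumps(health_check({"disk": lambda: True})))
```

Serving this payload from an HTTP route gives a liveness/readiness hook that lets the orchestrator restart or route around unhealthy instances, which is exactly the resilience goal the issue states.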
371 | 2,676,612,446 | IssuesEvent | 2015-03-25 18:36:13 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | Security error in PHP interop client | core php security wrapped languages | When I try to run my PHP interop client (with TLS enabled) against another interop server (node, in this case), I get this error
```
E0311 16:29:19.493414992 25329 ssl_transport_security.c:833] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number.
E0311 16:29:19.493429581 25329 secure_transport_setup.c:213] Handshake failed with error TSI_PROTOCOL_FAILURE
E0311 16:29:19.493434390 25329 secure_channel_create.c:98] Secure transport setup failed with error 2.
``` | True | Security error in PHP interop client - When I try to run my PHP interop client (with TLS enabled) against another interop server (node, in this case), I get this error
```
E0311 16:29:19.493414992 25329 ssl_transport_security.c:833] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number.
E0311 16:29:19.493429581 25329 secure_transport_setup.c:213] Handshake failed with error TSI_PROTOCOL_FAILURE
E0311 16:29:19.493434390 25329 secure_channel_create.c:98] Secure transport setup failed with error 2.
``` | non_reli | security error in php interop client when i try to run my php interop client with tls enabled against another interop server node in this case i get this error ssl transport security c handshake failed with fatal error ssl error ssl error ssl routines get record wrong version number secure transport setup c handshake failed with error tsi protocol failure secure channel create c secure transport setup failed with error | 0 |
2,788 | 27,750,898,735 | IssuesEvent | 2023-03-15 20:39:00 | cds-snc/notification-planning-core | https://api.github.com/repos/cds-snc/notification-planning-core | closed | Align postgres version in VSCode devcontainers to the one used in staging/production | Reliability Incident Action | ## Description
As a GCNotify developer,
I want to use the same database version in my VSCode container as the one deployed in staging/production environments ,
So that I know ahead of time if there will be backward compatibility issues with the database version we use and the database features that I am using for implementing my work.
### WHY are we building?
Reduce compatibility issues when deploying.
### WHAT are we building?
Aligning the postgres version of our VSCode devcontainers to the one deployed to staging and production environments.
### VALUE created by our solution
Fewer incidents, more reliability!
## Acceptance Criteria
- [ ] All vscode devcontainer versions are aligned with staging and production environments.
## QA Steps
- [ ] Verify the version deployed in the devcontainer is the same as the one in staging/production environments.
## Related Tasks
* [Create Cloudwatch Alerts for failed Kubernetes Pods](https://app.zenhub.com/workspaces/notify-planning-core-6411dfb7c95fb80014e0cab0/issues/gh/cds-snc/notification-planning-core/19)
* [Move database migration logic out of the Kubernetes API pod initialization](https://app.zenhub.com/workspaces/notify-planning-core-6411dfb7c95fb80014e0cab0/issues/gh/cds-snc/notification-planning-core/20)
* [Raise alarm when database migration fails](https://app.zenhub.com/workspaces/notify-planning-core-6411dfb7c95fb80014e0cab0/issues/gh/cds-snc/notification-planning-core/21) | True | Align postgres version in VSCode devcontainers to the one used in staging/production - ## Description
As a GCNotify developer,
I want to use the same database version in my VSCode container as the one deployed in staging/production environments ,
So that I know ahead of time if there will be backward compatibility issues with the database version we use and the database features that I am using for implementing my work.
### WHY are we building?
Reduce compatibility issues when deploying.
### WHAT are we building?
Aligning the postgres version of our VSCode devcontainers to the one deployed to staging and production environments.
### VALUE created by our solution
Fewer incidents, more reliability!
## Acceptance Criteria
- [ ] All vscode devcontainer versions are aligned with staging and production environments.
## QA Steps
- [ ] Verify the version deployed in the devcontainer is the same as the one in staging/production environments.
## Related Tasks
* [Create Cloudwatch Alerts for failed Kubernetes Pods](https://app.zenhub.com/workspaces/notify-planning-core-6411dfb7c95fb80014e0cab0/issues/gh/cds-snc/notification-planning-core/19)
* [Move database migration logic out of the Kubernetes API pod initialization](https://app.zenhub.com/workspaces/notify-planning-core-6411dfb7c95fb80014e0cab0/issues/gh/cds-snc/notification-planning-core/20)
* [Raise alarm when database migration fails](https://app.zenhub.com/workspaces/notify-planning-core-6411dfb7c95fb80014e0cab0/issues/gh/cds-snc/notification-planning-core/21) | reli | align postgres version in vscode devcontainers to the one used in staging production description as a gcnotify developer i want to use the same database version in my vscode container as the one deployed in staging production environments so that i know ahead if there will be backward compatibility issues with the database version we use in and the database features that i am using for implementing my work why are we building reduce compatibility issues when deploying what are we building aligning the postgres version of our vscode devcontainers to the one deployed to staging and production environments value created by our solution less incident more reliability acceptance criteria all vscode devcontainer versions are aligned with staging and production environments qa steps verify the version deployed in the devcontainer is the same as the one in staging production environments related tasks | 1 |
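The acceptance criterion in the record above ("all devcontainer versions are aligned with staging and production") can be enforced by a small CI-style guard. The sketch below assumes plain dotted PostgreSQL version strings, and the specific version numbers are hypothetical; for PostgreSQL it compares only the major version, since that is what most compatibility concerns hinge on:

```python
def parse_version(text):
    """Turn a version string such as '13.4' or '13.4.1' into a
    comparable tuple of integers."""
    return tuple(int(part) for part in text.strip().split("."))

def versions_aligned(devcontainer, production):
    """True when both PostgreSQL version strings share the same major
    version, which is what feature and dump/restore compatibility
    mostly depends on."""
    return parse_version(devcontainer)[0] == parse_version(production)[0]

print(versions_aligned("13.4", "13.7"))  # True: both are major version 13
```

Wiring such a check into CI (reading the devcontainer image tag and the deployed version) would catch drift before it causes the deploy-time surprises the issue describes.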
62,440 | 3,185,503,064 | IssuesEvent | 2015-09-28 05:33:17 | HellscreamWoW/Tracker | https://api.github.com/repos/HellscreamWoW/Tracker | closed | Mistweaver Monks not regening mana | Priority-High Type-Backend | i just reached level 10 and my mist weaver monk got mana added as a thing, but she won't regen. Even if i drink water or other mana Regen items | 1.0 | Mistweaver Monks not regening mana - i just reached level 10 and my mist weaver monk got mana added as a thing, but she won't regen. Even if i drink water or other mana Regen items | non_reli | mistweaver monks not regening mana i just reached level and my mist weaver monk got mana added as a thing but she won t regen even if i drink water or other mana regen items | 0 |
29,971 | 13,190,349,448 | IssuesEvent | 2020-08-13 10:02:00 | wellcomecollection/platform | https://api.github.com/repos/wellcomecollection/platform | closed | Work out the cost of transferring everything from AWS to Azure | 💸 Costs 📦 Storage service | * Where are we going to spend money?
- GetObject calls from S3
- Data transfer out from S3 for replication
- (Maybe) NAT Gateway traffic out from a replicator in a private subnet
- (Maybe) Data transfer out of Azure for verification
- (Maybe) NAT Gateway traffic in from verification in Azure
* How much is it going to cost?
For https://github.com/wellcomecollection/platform/issues/4414 | 1.0 | Work out the cost of transferring everything from AWS to Azure - * Where are we going to spend money?
- GetObject calls from S3
- Data transfer out from S3 for replication
- (Maybe) NAT Gateway traffic out from a replicator in a private subnet
- (Maybe) Data transfer out of Azure for verification
- (Maybe) NAT Gateway traffic in from verification in Azure
* How much is it going to cost?
For https://github.com/wellcomecollection/platform/issues/4414 | non_reli | work out the cost of transferring everything from aws to azure where are we going to spend money getobject calls from data transfer out from for replication maybe nat gateway traffic out from a replicator in a private subnet maybe data transfer out of azure for verification maybe nat gateway traffic in from verification in azure how much is it going to cost for | 0 |
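The cost drivers listed in the record above (GetObject calls plus data transfer out of S3) combine into a simple back-of-the-envelope estimate. The rates and volumes below are illustrative placeholders, not current AWS pricing, which is tiered and region-dependent:

```python
def transfer_cost_usd(total_gb, objects,
                      egress_per_gb=0.09, get_per_1000=0.0004):
    """Rough S3-to-Azure replication cost: data egress out of S3 plus
    GetObject request charges. The default rates are illustrative
    placeholders, not current AWS pricing."""
    egress = total_gb * egress_per_gb          # per-GB transfer out
    requests = (objects / 1000.0) * get_per_1000  # per-1000 GET requests
    return round(egress + requests, 2)

print(transfer_cost_usd(total_gb=100_000, objects=2_000_000))  # 9000.8
```

Even with placeholder numbers the sketch makes one thing visible: egress dominates, so NAT Gateway and verification traffic (the "maybe" items in the list) matter mainly if they route the same bytes a second time.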
368,858 | 10,885,378,283 | IssuesEvent | 2019-11-18 10:16:40 | yalla-coop/curenetics | https://api.github.com/repos/yalla-coop/curenetics | closed | Show 'N/A' rather than 'No data' on clinical results | 1-point bug priority-1 | Change text on results when it hasn't found the relevant data from the clinical trial
Relates to #149 | 1.0 | Show 'N/A' rather than 'No data' on clinical results - Change text on results when it hasn't found the relevant data from the clinical trial
Relates to #149 | non_reli | show n a rather than no data on clinical results change text on results when it hasn t found the relevant data from the clinical trial relates to | 0 |
1,175 | 13,542,232,710 | IssuesEvent | 2020-09-16 17:01:04 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | NullReferenceException in NullableWalker.VisitConditionalOperatorCore | 4 - In Review Area-Compilers Bug New Language Feature - Nullable Reference Types Tenet-Reliability | **Version Used**:
Visual Studio 16.8.0 Preview 1
Targeting .NET 5.0.100-preview.7.20366.6
**Steps to Reproduce**:
1. Clone and open solution found [here](https://github.com/JakenVeina/Battleship.NET/tree/dotnet-roslyn-issue-46954).
2. Open file "Battleship.NET.Avalonia\Gamespace\Running\RunningGamespaceBoardTileViewModel.cs"
3. Select and copy code segment from Line 42 Character 44 to Line 57 Character 90. [Screenshot](https://media.discordapp.net/attachments/401165307475132416/745503014672138314/unknown.png)
4. Paste code segment at Line 33 Character 57. [Screenshot](https://media.discordapp.net/attachments/401165307475132416/745503088382574663/unknown.png)
5. Observe VS crash
**Expected Behavior**:
VS does not crash while attempting to copy and paste code.
**Actual Behavior**:
VS crashes due to a `NullReferenceException` from `Microsoft.CodeAnalysis.CSharp.NullableWalker.VisitConditionalOperatorCore`
[Full stack trace](https://paste.mod.gg/yovubopuca.sql) (actually, not full, Windows Event Viewer doesn't capture the whole thing).
| True | NullReferenceException in NullableWalker.VisitConditionalOperatorCore - **Version Used**:
Visual Studio 16.8.0 Preview 1
Targeting .NET 5.0.100-preview.7.20366.6
**Steps to Reproduce**:
1. Clone and open solution found [here](https://github.com/JakenVeina/Battleship.NET/tree/dotnet-roslyn-issue-46954).
2. Open file "Battleship.NET.Avalonia\Gamespace\Running\RunningGamespaceBoardTileViewModel.cs"
3. Select and copy code segment from Line 42 Character 44 to Line 57 Character 90. [Screenshot](https://media.discordapp.net/attachments/401165307475132416/745503014672138314/unknown.png)
4. Paste code segment at Line 33 Character 57. [Screenshot](https://media.discordapp.net/attachments/401165307475132416/745503088382574663/unknown.png)
5. Observe VS crash
**Expected Behavior**:
VS does not crash while attempting to copy and paste code.
**Actual Behavior**:
VS crashes due to a `NullReferenceException` from `Microsoft.CodeAnalysis.CSharp.NullableWalker.VisitConditionalOperatorCore`
[Full stack trace](https://paste.mod.gg/yovubopuca.sql) (actually, not full, Windows Event Viewer doesn't capture the whole thing).
| reli | nullreferenceexception in nullablewalker visitconditionaloperatorcore version used visual studio preview targeting net preview steps to reproduce clone and open solution found open file battleship net avalonia gamespace running runninggamespaceboardtileviewmodel cs select and copy code segment from line character to line character paste code segment at line character observe vs crash expected behavior vs does not crash while attempting to copy and paste code actual behavior vs crashes due to a nullreferenceexception from microsoft codeanalysis csharp nullablewalker visitconditionaloperatorcore actually not full windows event viewer doesn t capture the whole thing | 1 |
706 | 9,978,788,331 | IssuesEvent | 2019-07-09 20:49:04 | crossplaneio/crossplane | https://api.github.com/repos/crossplaneio/crossplane | closed | Stale Object Modification Error | bug reliability | * Bug Report
It appears we are hitting a concurrent update issue on a given object:
```
"error": "failed to update status of CRD instance postgresql-b00a1c60-5d60-11e9-9440-9cb6d08bde99:
Operation cannot be fulfilled on cloudsqlinstances.database.gcp.crossplane.io \"postgresql-b00a1c60-5d60-11e9-9440-9cb6d08bde99\":
the object has been modified; please apply your changes to the latest version and try again",
"stacktrace":
"github.com/crossplaneio/crossplane/vendor/github.com/go-logr/zapr.(*zapLogger).Error
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/crossplaneio/crossplane/pkg/controller/gcp/database.(*Reconciler).Reconcile
/home/illya/go/src/github.com/crossplaneio/crossplane/pkg/controller/gcp/database/cloudsql_instance.go:214\ngithub.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.Until
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"
```
I stumbled over this issue few times on different resources.
This could be an intermittent issue on my dev environment.
| True | Stale Object Modification Error - * Bug Report
It appears we hitting concurrent update issue on a given object:
```
"error": "failed to update status of CRD instance postgresql-b00a1c60-5d60-11e9-9440-9cb6d08bde99:
Operation cannot be fulfilled on cloudsqlinstances.database.gcp.crossplane.io \"postgresql-b00a1c60-5d60-11e9-9440-9cb6d08bde99\":
the object has been modified; please apply your changes to the latest version and try again",
"stacktrace":
"github.com/crossplaneio/crossplane/vendor/github.com/go-logr/zapr.(*zapLogger).Error
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/crossplaneio/crossplane/pkg/controller/gcp/database.(*Reconciler).Reconcile
/home/illya/go/src/github.com/crossplaneio/crossplane/pkg/controller/gcp/database/cloudsql_instance.go:214\ngithub.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.Until
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"
```
I stumbled over this issue few times on different resources.
This could be an intermittent issue on my dev environment.
| reli | stale object modification error bug report it appears we hitting concurrent update issue on a given object error failed to update status of crd instance postgresql operation cannot be fulfilled on cloudsqlinstances database gcp crossplane io postgresql the object has been modified please apply your changes to the latest version and try again stacktrace github com crossplaneio crossplane vendor github com go logr zapr zaplogger error home illya go src github com crossplaneio crossplane vendor github com go logr zapr zapr go ngithub com crossplaneio crossplane pkg controller gcp database reconciler reconcile home illya go src github com crossplaneio crossplane pkg controller gcp database cloudsql instance go ngithub com crossplaneio crossplane vendor sigs io controller runtime pkg internal controller controller processnextworkitem home illya go src github com crossplaneio crossplane vendor sigs io controller runtime pkg internal controller controller go ngithub com crossplaneio crossplane vendor sigs io controller runtime pkg internal controller controller start home illya go src github com crossplaneio crossplane vendor sigs io controller runtime pkg internal controller controller go ngithub com crossplaneio crossplane vendor io apimachinery pkg util wait jitteruntil home illya go src github com crossplaneio crossplane vendor io apimachinery pkg util wait wait go ngithub com crossplaneio crossplane vendor io apimachinery pkg util wait jitteruntil home illya go src github com crossplaneio crossplane vendor io apimachinery pkg util wait wait go ngithub com crossplaneio crossplane vendor io apimachinery pkg util wait until home illya go src github com crossplaneio crossplane vendor io apimachinery pkg util wait wait go i stumbled over this issue few times on different resources this could be an intermittent issue on my dev environment | 1 |
2,968 | 30,686,218,615 | IssuesEvent | 2023-07-26 12:32:29 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | NullPointerException: Cannot invoke "io.camunda.zeebe.engine.state.instance.TimerInstance.getElementInstanceKey()" because "timer" is null | kind/bug severity/high area/reliability component/db | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
NPE occurred on medic benchmark CW30. In the `DueDateTimerChecker` we ran into a NPE. In this part of the code we iterate through the due date column family to see which timers we need to trigger. We get the timer instances, but this returns null.
```java
dueDateColumnFamily.whileTrue(
(key, nil) -> {
final DbLong dueDate = key.first();
boolean consumed = false;
if (dueDate.getValue() <= timestamp) {
final var elementAndTimerKey = key.second();
final TimerInstance timerInstance = timerInstanceColumnFamily.get(elementAndTimerKey);
consumed = consumer.visit(timerInstance);
}
if (!consumed) {
nextDueDate = dueDate.getValue();
}
return consumed;
});
```
This seems to be some inconsistency in the state. How could we have a due date in the state, but don't know about the timer accompanying it?
It is interesting how the issue only occurred once. This means the due date is removed from the column family and one point, otherwise I would expect it to fail every time the due date checker runs. This is not the case, benchmark runs fine even after the error, making me believe it only happened once before the state became consistent again.
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
Not clear
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
We should always be able to find the timer belonging to the due date.
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.NullPointerException: Cannot invoke "io.camunda.zeebe.engine.state.instance.TimerInstance.getElementInstanceKey()" because "timer" is null
at io.camunda.zeebe.engine.processing.timer.DueDateTimerChecker$WriteTriggerTimerCommandVisitor.visit(DueDateTimerChecker.java:119) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.state.instance.DbTimerInstanceState.lambda$processTimersWithDueDateBefore$1(DbTimerInstanceState.java:98) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.visit(TransactionalColumnFamily.java:397) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.lambda$forEachInPrefix$20(TransactionalColumnFamily.java:376) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.ColumnFamilyContext.withPrefixKey(ColumnFamilyContext.java:112) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.forEachInPrefix(TransactionalColumnFamily.java:360) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.forEachInPrefix(TransactionalColumnFamily.java:331) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.lambda$whileTrue$9(TransactionalColumnFamily.java:177) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.lambda$ensureInOpenTransaction$19(TransactionalColumnFamily.java:314) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.DefaultTransactionContext.runInNewTransaction(DefaultTransactionContext.java:61) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.DefaultTransactionContext.runInTransaction(DefaultTransactionContext.java:33) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.ensureInOpenTransaction(TransactionalColumnFamily.java:313) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.whileTrue(TransactionalColumnFamily.java:177) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.state.instance.DbTimerInstanceState.processTimersWithDueDateBefore(DbTimerInstanceState.java:90) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.processing.timer.DueDateTimerChecker$TriggerTimersSideEffect.apply(DueDateTimerChecker.java:101) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.processing.timer.DueDateTimerChecker$TriggerTimersSideEffect.apply(DueDateTimerChecker.java:69) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.processing.scheduled.DueDateChecker$TriggerEntitiesTask.execute(DueDateChecker.java:119) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl.lambda$toRunnable$5(ProcessingScheduleServiceImpl.java:135) ~[zeebe-stream-platform-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorJob.invoke(ActorJob.java:94) ~[zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorJob.execute(ActorJob.java:45) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorTask.execute(ActorTask.java:119) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorThread.executeCurrentTask(ActorThread.java:109) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorThread.doWork(ActorThread.java:87) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorThread.run(ActorThread.java:205) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
```
</p>
</details>
**Environment:**
- OS: SaaS
- Zeebe Version: CW30 benchmark
| True | NullPointerException: Cannot invoke "io.camunda.zeebe.engine.state.instance.TimerInstance.getElementInstanceKey()" because "timer" is null - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
NPE occurred on medic benchmark CW30. In the `DueDateTimerChecker` we ran into a NPE. In this part of the code we iterate through the due date column family to see which timers we need to trigger. We get the timer instances, but this returns null.
```java
dueDateColumnFamily.whileTrue(
(key, nil) -> {
final DbLong dueDate = key.first();
boolean consumed = false;
if (dueDate.getValue() <= timestamp) {
final var elementAndTimerKey = key.second();
final TimerInstance timerInstance = timerInstanceColumnFamily.get(elementAndTimerKey);
consumed = consumer.visit(timerInstance);
}
if (!consumed) {
nextDueDate = dueDate.getValue();
}
return consumed;
});
```
This seems to be some inconsistency in the state. How could we have a due date in the state, but don't know about the timer accompanying it?
It is interesting how the issue only occurred once. This means the due date is removed from the column family and one point, otherwise I would expect it to fail every time the due date checker runs. This is not the case, benchmark runs fine even after the error, making me believe it only happened once before the state became consistent again.
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
Not clear
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
We should always be able to find the timer belonging to the due date.
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.NullPointerException: Cannot invoke "io.camunda.zeebe.engine.state.instance.TimerInstance.getElementInstanceKey()" because "timer" is null
at io.camunda.zeebe.engine.processing.timer.DueDateTimerChecker$WriteTriggerTimerCommandVisitor.visit(DueDateTimerChecker.java:119) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.state.instance.DbTimerInstanceState.lambda$processTimersWithDueDateBefore$1(DbTimerInstanceState.java:98) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.visit(TransactionalColumnFamily.java:397) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.lambda$forEachInPrefix$20(TransactionalColumnFamily.java:376) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.ColumnFamilyContext.withPrefixKey(ColumnFamilyContext.java:112) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.forEachInPrefix(TransactionalColumnFamily.java:360) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.forEachInPrefix(TransactionalColumnFamily.java:331) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.lambda$whileTrue$9(TransactionalColumnFamily.java:177) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.lambda$ensureInOpenTransaction$19(TransactionalColumnFamily.java:314) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.DefaultTransactionContext.runInNewTransaction(DefaultTransactionContext.java:61) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.DefaultTransactionContext.runInTransaction(DefaultTransactionContext.java:33) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.ensureInOpenTransaction(TransactionalColumnFamily.java:313) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.db.impl.rocksdb.transaction.TransactionalColumnFamily.whileTrue(TransactionalColumnFamily.java:177) ~[zeebe-db-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.state.instance.DbTimerInstanceState.processTimersWithDueDateBefore(DbTimerInstanceState.java:90) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.processing.timer.DueDateTimerChecker$TriggerTimersSideEffect.apply(DueDateTimerChecker.java:101) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.processing.timer.DueDateTimerChecker$TriggerTimersSideEffect.apply(DueDateTimerChecker.java:69) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.engine.processing.scheduled.DueDateChecker$TriggerEntitiesTask.execute(DueDateChecker.java:119) ~[zeebe-workflow-engine-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl.lambda$toRunnable$5(ProcessingScheduleServiceImpl.java:135) ~[zeebe-stream-platform-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorJob.invoke(ActorJob.java:94) ~[zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorJob.execute(ActorJob.java:45) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorTask.execute(ActorTask.java:119) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorThread.executeCurrentTask(ActorThread.java:109) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorThread.doWork(ActorThread.java:87) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
at io.camunda.zeebe.scheduler.ActorThread.run(ActorThread.java:205) [zeebe-scheduler-8.3.0-SNAPSHOT.jar:8.3.0-SNAPSHOT]
```
</p>
</details>
**Environment:**
- OS: SaaS
- Zeebe Version: CW30 benchmark
| reli | nullpointerexception cannot invoke io camunda zeebe engine state instance timerinstance getelementinstancekey because timer is null describe the bug npe occurred on medic benchmark in the duedatetimerchecker we ran into a npe in this part of the code we iterate through the due date column family to see which timers we need to trigger we get the timer instances but this returns null java duedatecolumnfamily whiletrue key nil final dblong duedate key first boolean consumed false if duedate getvalue timestamp final var elementandtimerkey key second final timerinstance timerinstance timerinstancecolumnfamily get elementandtimerkey consumed consumer visit timerinstance if consumed nextduedate duedate getvalue return consumed this seems to be some inconsistency in the state how could we have a due date in the state but don t know about the timer accompanying it it is interesting how the issue only occurred once this means the due date is removed from the column family and one point otherwise i would expect it to fail every time the due date checker runs this is not the case benchmark runs fine even after the error making me believe it only happened once before the state became consistent again to reproduce steps to reproduce the behavior if possible add a minimal reproducer code sample when using the java client not clear expected behavior we should always be able to find the timer belonging to the due date log stacktrace full stacktrace java lang nullpointerexception cannot invoke io camunda zeebe engine state instance timerinstance getelementinstancekey because timer is null at io camunda zeebe engine processing timer duedatetimerchecker writetriggertimercommandvisitor visit duedatetimerchecker java at io camunda zeebe engine state instance dbtimerinstancestate lambda processtimerswithduedatebefore dbtimerinstancestate java at io camunda zeebe db impl rocksdb transaction transactionalcolumnfamily visit transactionalcolumnfamily java at io camunda zeebe db impl 
rocksdb transaction transactionalcolumnfamily lambda foreachinprefix transactionalcolumnfamily java at io camunda zeebe db impl rocksdb transaction columnfamilycontext withprefixkey columnfamilycontext java at io camunda zeebe db impl rocksdb transaction transactionalcolumnfamily foreachinprefix transactionalcolumnfamily java at io camunda zeebe db impl rocksdb transaction transactionalcolumnfamily foreachinprefix transactionalcolumnfamily java at io camunda zeebe db impl rocksdb transaction transactionalcolumnfamily lambda whiletrue transactionalcolumnfamily java at io camunda zeebe db impl rocksdb transaction transactionalcolumnfamily lambda ensureinopentransaction transactionalcolumnfamily java at io camunda zeebe db impl rocksdb transaction defaulttransactioncontext runinnewtransaction defaulttransactioncontext java at io camunda zeebe db impl rocksdb transaction defaulttransactioncontext runintransaction defaulttransactioncontext java at io camunda zeebe db impl rocksdb transaction transactionalcolumnfamily ensureinopentransaction transactionalcolumnfamily java at io camunda zeebe db impl rocksdb transaction transactionalcolumnfamily whiletrue transactionalcolumnfamily java at io camunda zeebe engine state instance dbtimerinstancestate processtimerswithduedatebefore dbtimerinstancestate java at io camunda zeebe engine processing timer duedatetimerchecker triggertimerssideeffect apply duedatetimerchecker java at io camunda zeebe engine processing timer duedatetimerchecker triggertimerssideeffect apply duedatetimerchecker java at io camunda zeebe engine processing scheduled duedatechecker triggerentitiestask execute duedatechecker java at io camunda zeebe stream impl processingscheduleserviceimpl lambda torunnable processingscheduleserviceimpl java at io camunda zeebe scheduler actorjob invoke actorjob java at io camunda zeebe scheduler actorjob execute actorjob java at io camunda zeebe scheduler actortask execute actortask java at io camunda zeebe scheduler 
actorthread executecurrenttask actorthread java at io camunda zeebe scheduler actorthread dowork actorthread java at io camunda zeebe scheduler actorthread run actorthread java environment os saas zeebe version benchmark | 1 |
306,067 | 23,142,764,081 | IssuesEvent | 2022-07-28 20:16:42 | JanssenProject/jans | https://api.github.com/repos/JanssenProject/jans | closed | docs: minor formatting issue in yml spacing | area-documentation | Several items in the yml are misaligned, resulting in incorrect hierarchy. | 1.0 | docs: minor formatting issue in yml spacing - Several items in the yml are misaligned, resulting in incorrect hierarchy. | non_reli | docs minor formatting issue in yml spacing several items in the yml are misaligned resulting in incorrect hierarchy | 0 |
2,624 | 26,670,229,936 | IssuesEvent | 2023-01-26 09:37:50 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | NPE when handling follow-up events of process instance modification | kind/bug area/reliability component/engine | **Describe the bug**
```
java.lang.NullPointerException: Cannot invoke "io.camunda.zeebe.engine.state.instance.ElementInstance.getValue()" because "processInstance" is null
```
**To Reproduce**
Unclear
**Expected behavior**
No NPE
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.NullPointerException: Cannot invoke "io.camunda.zeebe.engine.state.instance.ElementInstance.getValue()" because "processInstance" is null
at io.camunda.zeebe.engine.state.appliers.ProcessInstanceModifiedEventApplier.incrementNumberOfTakenSequenceFlows(ProcessInstanceModifiedEventApplier.java:61) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.state.appliers.ProcessInstanceModifiedEventApplier.applyState(ProcessInstanceModifiedEventApplier.java:39) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.state.appliers.ProcessInstanceModifiedEventApplier.applyState(ProcessInstanceModifiedEventApplier.java:21) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.state.appliers.EventAppliers.applyState(EventAppliers.java:259) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.processing.streamprocessor.writers.ResultBuilderBackedEventApplyingStateWriter.appendFollowUpEvent(ResultBuilderBackedEventApplyingStateWriter.java:41) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.processing.processinstance.ProcessInstanceModificationProcessor.processRecord(ProcessInstanceModificationProcessor.java:223) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.Engine.process(Engine.java:127) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.streamprocessor.ProcessingStateMachine.lambda$processCommand$3(ProcessingStateMachine.java:261) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84) ~[zeebe-db-8.1.5.jar:8.1.5]
at io.camunda.zeebe.streamprocessor.ProcessingStateMachine.processCommand(ProcessingStateMachine.java:257) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.streamprocessor.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:206) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.streamprocessor.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:182) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorJob.invoke(ActorJob.java:92) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorJob.execute(ActorJob.java:45) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorTask.execute(ActorTask.java:119) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorThread.executeCurrentTask(ActorThread.java:106) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorThread.doWork(ActorThread.java:87) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorThread.run(ActorThread.java:198) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
```
</p>
</details>
**Environment:**
- SaaS
- Zeebe Version: 8.1.5, 8.2.0-alpha3
[Error group](https://console.cloud.google.com/errors/detail/COzEm_6Kp6S_OQ;service=zeebe;time=P7D?project=camunda-cloud-240911)
Somewhat similar issue: #10606 | True | NPE when handling follow-up events of process instance modification - **Describe the bug**
```
java.lang.NullPointerException: Cannot invoke "io.camunda.zeebe.engine.state.instance.ElementInstance.getValue()" because "processInstance" is null
```
**To Reproduce**
Unclear
**Expected behavior**
No NPE
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.NullPointerException: Cannot invoke "io.camunda.zeebe.engine.state.instance.ElementInstance.getValue()" because "processInstance" is null
at io.camunda.zeebe.engine.state.appliers.ProcessInstanceModifiedEventApplier.incrementNumberOfTakenSequenceFlows(ProcessInstanceModifiedEventApplier.java:61) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.state.appliers.ProcessInstanceModifiedEventApplier.applyState(ProcessInstanceModifiedEventApplier.java:39) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.state.appliers.ProcessInstanceModifiedEventApplier.applyState(ProcessInstanceModifiedEventApplier.java:21) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.state.appliers.EventAppliers.applyState(EventAppliers.java:259) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.processing.streamprocessor.writers.ResultBuilderBackedEventApplyingStateWriter.appendFollowUpEvent(ResultBuilderBackedEventApplyingStateWriter.java:41) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.processing.processinstance.ProcessInstanceModificationProcessor.processRecord(ProcessInstanceModificationProcessor.java:223) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.engine.Engine.process(Engine.java:127) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.streamprocessor.ProcessingStateMachine.lambda$processCommand$3(ProcessingStateMachine.java:261) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84) ~[zeebe-db-8.1.5.jar:8.1.5]
at io.camunda.zeebe.streamprocessor.ProcessingStateMachine.processCommand(ProcessingStateMachine.java:257) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.streamprocessor.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:206) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.streamprocessor.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:182) ~[zeebe-workflow-engine-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorJob.invoke(ActorJob.java:92) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorJob.execute(ActorJob.java:45) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorTask.execute(ActorTask.java:119) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorThread.executeCurrentTask(ActorThread.java:106) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorThread.doWork(ActorThread.java:87) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
at io.camunda.zeebe.scheduler.ActorThread.run(ActorThread.java:198) ~[zeebe-scheduler-8.1.5.jar:8.1.5]
```
</p>
</details>
**Environment:**
- SaaS
- Zeebe Version: 8.1.5, 8.2.0-alpha3
[Error group](https://console.cloud.google.com/errors/detail/COzEm_6Kp6S_OQ;service=zeebe;time=P7D?project=camunda-cloud-240911)
Somewhat similar issue: #10606 | reli | npe when handling follow up events of process instance modification describe the bug java lang nullpointerexception cannot invoke io camunda zeebe engine state instance elementinstance getvalue because processinstance is null to reproduce unclear expected behavior no npe log stacktrace full stacktrace java lang nullpointerexception cannot invoke io camunda zeebe engine state instance elementinstance getvalue because processinstance is null at io camunda zeebe engine state appliers processinstancemodifiedeventapplier incrementnumberoftakensequenceflows processinstancemodifiedeventapplier java at io camunda zeebe engine state appliers processinstancemodifiedeventapplier applystate processinstancemodifiedeventapplier java at io camunda zeebe engine state appliers processinstancemodifiedeventapplier applystate processinstancemodifiedeventapplier java at io camunda zeebe engine state appliers eventappliers applystate eventappliers java at io camunda zeebe engine processing streamprocessor writers resultbuilderbackedeventapplyingstatewriter appendfollowupevent resultbuilderbackedeventapplyingstatewriter java at io camunda zeebe engine processing processinstance processinstancemodificationprocessor processrecord processinstancemodificationprocessor java at io camunda zeebe engine engine process engine java at io camunda zeebe streamprocessor processingstatemachine lambda processcommand processingstatemachine java at io camunda zeebe db impl rocksdb transaction zeebetransaction run zeebetransaction java at io camunda zeebe streamprocessor processingstatemachine processcommand processingstatemachine java at io camunda zeebe streamprocessor processingstatemachine trytoreadnextrecord processingstatemachine java at io camunda zeebe streamprocessor processingstatemachine readnextrecord processingstatemachine java at io camunda zeebe scheduler actorjob invoke actorjob java at io camunda zeebe scheduler actorjob execute actorjob java at io 
camunda zeebe scheduler actortask execute actortask java at io camunda zeebe scheduler actorthread executecurrenttask actorthread java at io camunda zeebe scheduler actorthread dowork actorthread java at io camunda zeebe scheduler actorthread run actorthread java environment saas zeebe version somewhat similar issue | 1 |
16,830 | 2,615,124,713 | IssuesEvent | 2015-03-01 05:52:40 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | Rename extension projects | auto-migrated Milestone-Version1.7.0 Priority-Medium Type-Enhancement | ```
Rename google-api-client-extensions-* to google-api-extensions-*
```
Original issue reported on code.google.com by `rmis...@google.com` on 16 Feb 2012 at 4:30 | 1.0 | Rename extension projects - ```
Rename google-api-client-extensions-* to google-api-extensions-*
```
Original issue reported on code.google.com by `rmis...@google.com` on 16 Feb 2012 at 4:30 | non_reli | rename extension projects rename google api client extensions to google api extensions original issue reported on code google com by rmis google com on feb at | 0 |
2,404 | 25,167,524,855 | IssuesEvent | 2022-11-10 22:24:55 | hyperlane-xyz/hyperlane-monorepo | https://api.github.com/repos/hyperlane-xyz/hyperlane-monorepo | closed | Use QuorumProvider / FallbackProvider in ts/rs | reliability epic | - [x] https://github.com/abacus-network/abacus-monorepo/issues/868
- [x] https://github.com/abacus-network/abacus-monorepo/issues/869
- [x] https://github.com/abacus-network/abacus-monorepo/issues/870
- [x] https://github.com/abacus-network/abacus-monorepo/issues/871
- [ ] https://github.com/abacus-network/abacus-monorepo/issues/872
- [x] https://github.com/abacus-network/abacus-monorepo/issues/873
- [ ] #875
- [ ] #874 | True | Use QuorumProvider / FallbackProvider in ts/rs - - [x] https://github.com/abacus-network/abacus-monorepo/issues/868
- [x] https://github.com/abacus-network/abacus-monorepo/issues/869
- [x] https://github.com/abacus-network/abacus-monorepo/issues/870
- [x] https://github.com/abacus-network/abacus-monorepo/issues/871
- [ ] https://github.com/abacus-network/abacus-monorepo/issues/872
- [x] https://github.com/abacus-network/abacus-monorepo/issues/873
- [ ] #875
- [ ] #874 | reli | use quorumprovider fallbackprovider in ts rs | 1 |
109,701 | 11,647,928,971 | IssuesEvent | 2020-03-01 17:49:45 | scullyio/scully | https://api.github.com/repos/scullyio/scully | opened | Document steps to release v8 of ng-lib | documentation | I would like to help release ng-lib-v8 for my clients. But I can't because I don't know what we do. | 1.0 | Document steps to release v8 of ng-lib - I would like to help release ng-lib-v8 for my clients. But I can't because I don't know what we do. | non_reli | document steps to release of ng lib i would like to help release ng lib for my clients but i can t cause i don t know what we do | 0
209,313 | 16,190,575,738 | IssuesEvent | 2021-05-04 07:53:23 | gatsbyjs/gatsby | https://api.github.com/repos/gatsbyjs/gatsby | closed | [docs] design iconography for different types of callouts | not stale type: documentation | Similar to the discussion in #15616, we've talked about designing iconography to mark notes, warnings, and errors: things like a red lobster for an error/caution, an owl for an advanced tip, and a purple octopus for a note.
This issue will be satisfied when we have icons/illustrations to use in components. The goal should be to introduce a cohesive style that can be used throughout the docs, including for more editorial visual aids. | 1.0 | [docs] design iconography for different types of callouts - Similar to the discussion in #15616, we've talked about designing iconography to mark notes, warnings, and errors: things like a red lobster for an error/caution, an owl for an advanced tip, and a purple octopus for a note.
This issue will be satisfied when we have icons/illustrations to use in components. The goal should be to introduce a cohesive style that can be used throughout the docs, including for more editorial visual aids. | non_reli | design iconography for different types of callouts similar to the discussion in we ve talked about designing iconography to mark notes warnings and errors things like a red lobster for an error caution an owl for an advanced tip and a purple octopus for a note this issue will be satisfied when we have icons illustrations to use in components the goal should be to introduce a cohesive style that can be used throughout the docs including for more editorial visual aids | 0 |
295,714 | 9,100,604,960 | IssuesEvent | 2019-02-20 09:00:45 | jenkins-x/jx | https://api.github.com/repos/jenkins-x/jx | closed | Unable to create GKE cluster with JX due to ingress controllers stuck in creating state | area/GKE area/create-cluster kind/bug priority/awaiting-more-evidence | ### Summary
I attempted to create a GKE cluster with Jenkins X installed on it using `jx create terraform`. The cluster creation works and all pods come up, and the CLI tool even says jx installed successfully on the cluster. However, the hook and monocular-api ingress controllers are stuck in a creating state and eventually give me this error: `googleapi: Error 400: Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1, invalid`. I've deleted everything and reran `jx create terraform` a few times with this same issue occurring. Also, the GitHub hooks get 200 responses from the cluster. In terms of functionality, Jenkins is not discovering repos in my GitHub at all.
### Steps to reproduce the behavior
Run `jx create terraform` on a gke cluster using all the defaults.
### Jx version
The output of `jx version` is:
```
jx 1.3.508
jenkins x platform 0.0.2871
Kubernetes cluster v1.9.7-gke.7
kubectl v1.11.3
helm client v2.11.0+g2e55dbe
helm server v2.11.0+g2e55dbe
git git version 2.17.1 (Apple Git-112)
```
### Kubernetes cluster
What kind of Kubernetes cluster are you using & how did you create it?
I'm using a GKE cluster that was created by terraform using `jx create terraform`.
### Operating system / Environment
Running jx commands on Mac OS X Mojave 10.14
### Expected behavior
GKE cluster and jx associated Kubernetes resources all create successfully.
### Actual behavior
The hook and monocular-api ingress controllers return errors during creation. The error is `googleapi: Error 400: Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1, invalid`
| 1.0 | Unable to create GKE cluster with JX due to ingress controllers stuck in creating state - ### Summary
I attempted to create a GKE cluster with Jenkins X installed on it using `jx create terraform`. The cluster creation works and all pods come up, and the CLI tool even says jx installed successfully on the cluster. However, the hook and monocular-api ingress controllers are stuck in a creating state and eventually give me this error: `googleapi: Error 400: Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1, invalid`. I've deleted everything and reran `jx create terraform` a few times with this same issue occurring. Also, the GitHub hooks get 200 responses from the cluster. In terms of functionality, Jenkins is not discovering repos in my GitHub at all.
### Steps to reproduce the behavior
Run `jx create terraform` on a gke cluster using all the defaults.
### Jx version
The output of `jx version` is:
```
jx 1.3.508
jenkins x platform 0.0.2871
Kubernetes cluster v1.9.7-gke.7
kubectl v1.11.3
helm client v2.11.0+g2e55dbe
helm server v2.11.0+g2e55dbe
git git version 2.17.1 (Apple Git-112)
```
### Kubernetes cluster
What kind of Kubernetes cluster are you using & how did you create it?
I'm using a GKE cluster that was created by terraform using `jx create terraform`.
### Operating system / Environment
Running jx commands on Mac OS X Mojave 10.14
### Expected behavior
GKE cluster and jx associated Kubernetes resources all create successfully.
### Actual behavior
The hook and monocular-api ingress controllers return errors during creation. The error is `googleapi: Error 400: Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1, invalid`
| non_reli | unable to create gke cluster with jx due to ingress controllers stuck in creating state summary i am attempted to create a gke cluster with jenkins x installed on it using jx create terraform the cluster creation works and all pods come up and the cli tool even says jx installed successfully on the cluster however the hook and monocular api ingress controller are stuck in a creating state and eventually gives me this error googleapi error invalid value for field namedports port must be greater than or equal to invalid i ve deleted everything and reran jx create terraform a few times with this same issue occurring also the github hooks get responses from the cluster in terms of functionality jenkins is not discovering repos in my github at all steps to reproduce the behavior run jx create terraform on a gke cluster using all the defaults jx version the output of jx version is jx jenkins x platform kubernetes cluster gke kubectl helm client helm server git git version apple git kubernetes cluster what kind of kubernetes cluster are you using how did you create it i m using a gke cluster that was created by terraform using jx create terraform operating system environment running jx commands on mac os x mojave expected behavior gke cluster and jx associated kubernetes resources all create successfully actual behavior the hook and monocular api ingress controller returns errors during creation error is googleapi error invalid value for field namedports port must be greater than or equal to invalid | 0 |
9 | 2,553,534,449 | IssuesEvent | 2015-02-03 00:47:52 | Bubbus/ACF-Missiles | https://api.github.com/repos/Bubbus/ACF-Missiles | opened | Rocket pod caliber limitation. | bug reliability | I forgot to limit the pods exclusively to only load rockets matching their caliber. | True | Rocket pod caliber limitation. - I forgot to limit the pods exclusively to only load rockets matching their caliber. | reli | rocket pod caliber limitation i forgot to limit the pods exclusively to only load rockets matching their caliber | 1 |
1,155 | 13,434,913,744 | IssuesEvent | 2020-09-07 12:08:00 | akka/akka | https://api.github.com/repos/akka/akka | closed | Chunked messages in reliable delivery | 1 - triaged t:reliable-delivery t:typed | To avoid head of line blocking from serialization and transfer of large messages | True | Chunked messages in reliable delivery - To avoid head of line blocking from serialization and transfer of large messages | reli | chunked messages in reliable delivery to avoid head of line blocking from serialization and transfer of large messages | 1 |
167,660 | 13,039,059,850 | IssuesEvent | 2020-07-28 16:07:43 | onmoon/openapi-server-bundle | https://api.github.com/repos/onmoon/openapi-server-bundle | opened | Test for OnMoon\OpenApiServerBundle\CodeGenerator\Definitions | easy tests | Cover the OnMoon\OpenApiServerBundle\CodeGenerator\Definitions namespace with unit tests | 1.0 | Test for OnMoon\OpenApiServerBundle\CodeGenerator\Definitions - Cover the OnMoon\OpenApiServerBundle\CodeGenerator\Definitions namespace with unit tests | non_reli | test for onmoon openapiserverbundle codegenerator definitions cover the onmoon openapiserverbundle codegenerator definitions namespace with unit tests | 0 |
749,469 | 26,163,335,518 | IssuesEvent | 2022-12-31 23:34:04 | feast-dev/feast | https://api.github.com/repos/feast-dev/feast | closed | Python feature server integration tests are flaky | wontfix kind/bug priority/p0 critical | ## Expected Behavior
## Current Behavior
If I comment out all the other integration tests other than the 2 local tests and the 2 Go-based tests, and then run these 4 tests with `FEAST_USAGE=False IS_TEST=True python -m pytest -n 8 --integration sdk/python/tests/integration/online_store/test_universal_online.py::test_online_retrieval`, some of the tests will sometimes fail with errors like the following:
```
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=6572): Max retries exceeded with url: /get-online-features (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13184af10>: Failed to establish a new connection: [Errno 61] Connection refused'))
```
Similarly, on PRs, these tests are failing with errors such as:
```
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='localhost', port=6572): Read timed out. (read timeout=30)
```
However, these tests never fail when run individually. This suggests there is somehow some contention when the tests are run with multiprocessing.
Note that the recent test failures on master are all due to this flakiness.
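One common mitigation for this class of flake is to poll the server's port before issuing requests instead of assuming the server is up. This is purely a sketch of the general technique, not the feature server's actual test harness; the `wait_for_port` helper and its parameters are made up for illustration (only the host/port come from the error messages above):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connect to (host, port) succeeds or the timeout expires.

    Returns True once the server accepts a connection, False on timeout, so a
    test can fail with a clear "server never came up" message instead of a raw
    ConnectionError deep inside a request.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.1)  # brief back-off before retrying
    return False


# usage in a test session, e.g. before hitting /get-online-features:
#   assert wait_for_port("localhost", 6572), "feature server never came up"
```

With a helper like this, worker contention under `pytest -n 8` shows up as a readable timeout rather than a `NewConnectionError`.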
## Steps to reproduce
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
| 1.0 | Python feature server integration tests are flaky - ## Expected Behavior
## Current Behavior
If I comment out all the other integration tests other than the 2 local tests and the 2 Go-based tests, and then run these 4 tests with `FEAST_USAGE=False IS_TEST=True python -m pytest -n 8 --integration sdk/python/tests/integration/online_store/test_universal_online.py::test_online_retrieval`, some of the tests will sometimes fail with errors like the following:
```
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=6572): Max retries exceeded with url: /get-online-features (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13184af10>: Failed to establish a new connection: [Errno 61] Connection refused'))
```
Similarly, on PRs, these tests are failing with errors such as:
```
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='localhost', port=6572): Read timed out. (read timeout=30)
```
However, these tests never fail when run individually. This suggests there is somehow some contention when the tests are run with multiprocessing.
Note that the recent test failures on master are all due to this flakiness.
## Steps to reproduce
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
| non_reli | python feature server integration tests are flaky expected behavior current behavior if i comment out all the other integration tests other than the local tests and the go based tests and then run these tests with feast usage false is test true python m pytest n integration sdk python tests integration online store test universal online py test online retrieval some of the tests will sometimes fail with errors like the following requests exceptions connectionerror httpconnectionpool host localhost port max retries exceeded with url get online features caused by newconnectionerror failed to establish a new connection connection refused similarly on prs these tests are failing with errors such as requests exceptions readtimeout httpconnectionpool host localhost port read timed out read timeout however these tests never fail when run individually this suggests there is somehow some contention when the tests are run with multiprocessing note that the recent test failures on master are all due to this flakiness steps to reproduce specifications version platform subsystem possible solution | 0 |
41,401 | 12,832,061,521 | IssuesEvent | 2020-07-07 06:56:23 | rvvergara/react-redux-firebase-boilerplate | https://api.github.com/repos/rvvergara/react-redux-firebase-boilerplate | closed | CVE-2018-11694 (High) detected in node-sass-4.14.1.tgz, node-sass-v4.14.1 | security vulnerability | ## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-redux-firebase-boilerplate/package.json</p>
<p>Path to vulnerable library: /react-redux-firebase-boilerplate/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/react-redux-firebase-boilerplate/commit/297ddd7e6eb164fe6d8f46cb91e905e1e40d0ce9">297ddd7e6eb164fe6d8f46cb91e905e1e40d0ce9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-11694 (High) detected in node-sass-4.14.1.tgz, node-sass-v4.14.1 - ## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-redux-firebase-boilerplate/package.json</p>
<p>Path to vulnerable library: /react-redux-firebase-boilerplate/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/react-redux-firebase-boilerplate/commit/297ddd7e6eb164fe6d8f46cb91e905e1e40d0ce9">297ddd7e6eb164fe6d8f46cb91e905e1e40d0ce9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_reli | cve high detected in node sass tgz node sass cve high severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm react redux firebase boilerplate package json path to vulnerable library react redux firebase boilerplate node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource | 0 |
821,105 | 30,805,062,366 | IssuesEvent | 2023-08-01 06:28:00 | steedos/steedos-widgets | https://api.github.com/repos/steedos/steedos-widgets | closed | [Feature]: 表单微页面换成标准的ObjectForm | priority: High done new feature | ### Summary 摘要
- [x] 应用 (apps)
- [x] 页面布局 (page layouts)
### Why should this be worked on? 此需求的应用场景? (What is the use case for this requirement?)
方便国际化和后续修改 (To ease internationalization and future changes) | 1.0 | [Feature]: 表单微页面换成标准的ObjectForm - ### Summary 摘要
- [x] 应用 (apps)
- [x] 页面布局 (page layouts)
### Why should this be worked on? 此需求的应用场景? (What is the use case for this requirement?)
方便国际化和后续修改 (To ease internationalization and future changes) | non_reli | 表单微页面换成标准的objectform summary 摘要 应用 页面布局 why should this be worked on 此需求的应用场景? 方便国际化和后续修改 | 0
2,937 | 30,343,632,614 | IssuesEvent | 2023-07-11 14:10:36 | adoptium/infrastructure | https://api.github.com/repos/adoptium/infrastructure | closed | Investigate Suitability Of Windows 2022 Build Machines For Running Wix/Installers | os:windows reliability currency | Whilst investigating some issues relating to the migration of the windows build infrastructure, from Windows 2012 to Windows 2022, I've noticed that the "build windows installer" jobs are currently only running on the Windows 2012 machines.
Some evaluative testing of the create windows installer jobs on the windows 2022 machines should be undertaken before migrating the jenkins jobs to the new machine. | True | Investigate Suitability Of Windows 2022 Build Machines For Running Wix/Installers - Whilst investigating some issues relating to the migration of the windows build infrastructure, from Windows 2012 to Windows 2022, I've noticed that the "build windows installer" jobs are currently only running on the Windows 2012 machines.
Some evaluative testing of the create windows installer jobs on the windows 2022 machines should be undertaken before migrating the jenkins jobs to the new machine. | reli | investigate suitability of windows build machines for running wix installers whilst investigating some issues relating to the migration of the windows build infrastructure from windows to windows i ve noticed that the build windows installer jobs are currently only running on the windows machines some evaluative testing of the create windows installer jobs on the windows machines should be undertaken before migrating the jenkins jobs to the new machine | 1 |
373,970 | 11,053,557,440 | IssuesEvent | 2019-12-10 11:38:33 | bounswe/bounswe2019group4 | https://api.github.com/repos/bounswe/bounswe2019group4 | opened | Backend Additional Trading Equipment | Back-End Priority: Medium Type: Development | Since we define trading equipment as "Indices, stocks, ETFs, commodities, currencies, funds, bonds and cryptocurrencies" in our requirements glossary, we need to provide more trading equipment types. Currently we only have currencies.
We fetch currency values from [Alpha Vantage](https://www.alphavantage.co/documentation/) 3rd party API. This API also provides cryptocurrencies and stocks. We can use it. | 1.0 | Backend Additional Trading Equipment - Since we define trading equipment as "Indices, stocks, ETFs, commodities, currencies, funds, bonds and cryptocurrencies" in our requirements glossary, we need to provide more trading equipment types. Currently we only have currencies.
We fetch currency values from [Alpha Vantage](https://www.alphavantage.co/documentation/) 3rd party API. This API also provides cryptocurrencies and stocks. We can use it. | non_reli | backend additional trading equipment since we define trading equipment as indices stocks etfs commodities currencies funds bonds and cryptocurrencies in our requirements glossary we need to provide more trading equipment types currently we only have currencies we fetch currency values from party api this api also provides cryptocurrencies and stocks we can use it | 0 |
428 | 7,461,454,730 | IssuesEvent | 2018-03-31 03:07:59 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | [Linux/ARM32] Instruction cache flush on arm32 linux | Post-ZBB arch-arm32 area-VM os-linux reliability | A crash occurred when using AOT images. The images were generated with the `/FragileNonVersionable` option, but that does not appear to be relevant here. When analyzing the problem, we found that the crash was caused by a SIGSEGV while accessing a memory address, because the contents of the `*.ni.dll` file partly differ from the values loaded in memory. The code at the differing point was associated with the target address of a `movt`/`movw` instruction pair.
On a kernel dump, SIGSEGV occurs when trying to access a strange memory address (0x100174d4).
```
[2-117.0372] 5860: fc52f7fe 63b8f243 3377f6ca 6819681b
9a019803 f7fe6803 bf00fc4d 93042301
[2-117.0372] 5880: e7ffbf00 b0069804 8c10e8bd 4c1ce92d
0b10f10d f2439001 f6ca5338 681b[3377] <-- crashed here, 0x3377
[2-117.0372] 58a0: 2b00681b f7fed001 9801fcb3 fc38f7fe
bf00bf00 8c1ce8bd 4c10e92d f10db086
[2-117.0372] 58c0: 23000b20 93019302 91049005 f2439203
f6ca5338 681b3377 2b00681b f7fed001
```
The contents of the XXX.ni.dll file are:
```
0003850: c1f2 0102 0198 0021 43f2 bd63 c1f2 0103 .......!C..c....
0003860: fef7 52fc 41f2 b863 c1f2 0103 1b68 1968 ..R.A..c.....h.h
0003870: 0398 019a 0368 fef7 4dfc 00bf 0123 0493 .....h..M....#..
0003880: 00bf ffe7 0498 06b0 bde8 108c 2de9 1c4c ............-..L
0003890: 0df1 100b 0190 41f2 3853 c1f2 [0103] 1b68 ......A.8S.....h <-- The file contains a value of 0x0301 instead of 0x3377
00038a0: 1b68 002b 01d0 fef7 b3fc 0198 fef7 38fc .h.+..........8.
00038b0: 00bf 00bf bde8 1c8c 2de9 104c 86b0 0df1 ........-..L....
00038c0: 200b 0023 0293 0193 0590 0491 0392 41f2 ..#..........A.
```
Based on the values in memory, the area consists of the following code; the problematic part is the `ldr r3, [r3, #0]` instruction.
```gdb
(gdb) p/x &code
$2 = 0x1102c
(gdb) p/x 0xad06-0xace0
$3 = 0x26
(gdb) p/x 0x1102c+0x26
$4 = 0x11052
(gdb) x/17i 0x1102c+1
0x1102d <code+1>: ldr r0, [sp, #44] ; 0x2c
0x1102f <code+3>: ldr r1, [sp, #20]
0x11031 <code+5>: ldr r2, [sp, #24]
0x11033 <code+7>: ldr r3, [r0, #0]
0x11035 <code+9>: bl 0xf470
0x11039 <code+13>: nop
0x1103b <code+15>: movw r3, #30480 ; 0x7710
0x1103f <code+19>: movt r3, #43969 ; 0xabc1
0x11043 <code+23>: ldr r0, [r3, #0]
0x11045 <code+25>: bl 0xf6f4
0x11049 <code+29>: str r0, [sp, #16]
0x1104b <code+31>: movw r3, #29908 ; 0x74d4
0x1104f <code+35>: movt r3, #43969 ; 0xabc1
0x11053 <code+39>: ldr r0, [r3, #0] <-- SIGSEGV HERE
0x11055 <code+41>: movw r3, #30448 ; 0x76f0
0x11059 <code+45>: movt r3, #43969 ; 0xabc1
0x1105d <code+49>: ldr r1, [r3, #0]
```
The strange thing is that the memory address that should be accessed in this area is `0xabc174d4` instead of `0x100174d4`.
It seemed to be related to relocation.
We looked at the code and we could find the relevant code in `PEImageLayout::ApplyBaseRelocations()`.
Prompted by reports of an instruction cache flush issue in another JIT VM, we tried forcing the cache flush as follows, and the symptom disappeared.
```diff
diff --git a/src/vm/peimagelayout.cpp b/src/vm/peimagelayout.cpp
index 93ab77c..ca4b487 100644
--- a/src/vm/peimagelayout.cpp
+++ b/src/vm/peimagelayout.cpp
@@ -227,6 +227,8 @@ void PEImageLayout::ApplyBaseRelocations()
#ifdef _TARGET_ARM_
case IMAGE_REL_BASED_THUMB_MOV32:
PutThumb2Mov32((UINT16 *)address, GetThumb2Mov32((UINT16 *)address) + delta);
+
+ ClrFlushInstructionCache(address, 8);
break;
#endif
```
Basically, I think that cache flush through `mprotect()` is already well done in CoreCLR.
However, when I searched the internet, I found the following.
> You would expect that the mmap / mprotect syscalls would establish mappings that are immediately updated, I see that the kernel does indeed flush caches on mprotect. In that case, no cache flush would be required.
> However, I also see that some versions of libc do call cacheflush after mprotect, which would imply that some environments would need the caches flushed (or have previously). I'd take a guess that this is a workaround to a bug.
https://stackoverflow.com/questions/2777725/does-mprotect-flush-the-instruction-cache-on-arm-linux
I looked at the `glibc` source code and found the following information:
- [Here](https://github.com/bminor/glibc/blob/master/sysdeps/unix/sysv/linux/arm/dl-machine.h#L22) defines CLEAR_CACHE.
- [This](https://github.com/bminor/glibc/blob/master/elf/dl-reloc.c#L298) is the only place where CLEAR_CACHE is used while relocating an object.
- [Here](https://github.com/bminor/glibc/blob/master/sysdeps/arm/dl-machine.h#L30) warns that CLEAR_CACHE must be present to support text relocation.
It seems that `mprotect()` alone is not enough to support dynamic relocation of code on Linux/ARM.
Perhaps the comment /* This definition is Linux-specific. */ in dl-machine.h explains why there was no problem on other platforms before.
I have some questions at this point.
- Is this the right solution? Is it the best way to solve this problem?
- Does it have any impact on performance?
- If this is really a problem, why has it not been fixed in the Linux kernel before?
- Has anyone encountered this problem on `Windows/ARM`?
Any comments are welcome.
| True | [Linux/ARM32] Instruction cache flush on arm32 linux - A crash occurred when using AOT images. The images were generated with the `/FragileNonVersionable` option, but that does not appear to be relevant here. When analyzing the problem, we found that the crash was caused by a SIGSEGV while accessing a memory address, because the contents of the `*.ni.dll` file partly differ from the values loaded in memory. The code at the differing point was associated with the target address of a `movt`/`movw` instruction pair.
On a kernel dump, SIGSEGV occurs when trying to access a strange memory address (0x100174d4).
```
[2-117.0372] 5860: fc52f7fe 63b8f243 3377f6ca 6819681b
9a019803 f7fe6803 bf00fc4d 93042301
[2-117.0372] 5880: e7ffbf00 b0069804 8c10e8bd 4c1ce92d
0b10f10d f2439001 f6ca5338 681b[3377] <-- crashed here, 0x3377
[2-117.0372] 58a0: 2b00681b f7fed001 9801fcb3 fc38f7fe
bf00bf00 8c1ce8bd 4c10e92d f10db086
[2-117.0372] 58c0: 23000b20 93019302 91049005 f2439203
f6ca5338 681b3377 2b00681b f7fed001
```
The contents of the XXX.ni.dll file are:
```
0003850: c1f2 0102 0198 0021 43f2 bd63 c1f2 0103 .......!C..c....
0003860: fef7 52fc 41f2 b863 c1f2 0103 1b68 1968 ..R.A..c.....h.h
0003870: 0398 019a 0368 fef7 4dfc 00bf 0123 0493 .....h..M....#..
0003880: 00bf ffe7 0498 06b0 bde8 108c 2de9 1c4c ............-..L
0003890: 0df1 100b 0190 41f2 3853 c1f2 [0103] 1b68 ......A.8S.....h <-- The file contains a value of 0x0301 instead of 0x3377
00038a0: 1b68 002b 01d0 fef7 b3fc 0198 fef7 38fc .h.+..........8.
00038b0: 00bf 00bf bde8 1c8c 2de9 104c 86b0 0df1 ........-..L....
00038c0: 200b 0023 0293 0193 0590 0491 0392 41f2 ..#..........A.
```
Based on the values in memory, the area consists of the following code; the problematic part is the `ldr r3, [r3, #0]` instruction.
```gdb
(gdb) p/x &code
$2 = 0x1102c
(gdb) p/x 0xad06-0xace0
$3 = 0x26
(gdb) p/x 0x1102c+0x26
$4 = 0x11052
(gdb) x/17i 0x1102c+1
0x1102d <code+1>: ldr r0, [sp, #44] ; 0x2c
0x1102f <code+3>: ldr r1, [sp, #20]
0x11031 <code+5>: ldr r2, [sp, #24]
0x11033 <code+7>: ldr r3, [r0, #0]
0x11035 <code+9>: bl 0xf470
0x11039 <code+13>: nop
0x1103b <code+15>: movw r3, #30480 ; 0x7710
0x1103f <code+19>: movt r3, #43969 ; 0xabc1
0x11043 <code+23>: ldr r0, [r3, #0]
0x11045 <code+25>: bl 0xf6f4
0x11049 <code+29>: str r0, [sp, #16]
0x1104b <code+31>: movw r3, #29908 ; 0x74d4
0x1104f <code+35>: movt r3, #43969 ; 0xabc1
0x11053 <code+39>: ldr r0, [r3, #0] <-- SIGSEGV HERE
0x11055 <code+41>: movw r3, #30448 ; 0x76f0
0x11059 <code+45>: movt r3, #43969 ; 0xabc1
0x1105d <code+49>: ldr r1, [r3, #0]
```
The strange thing is that the memory address that should be accessed in this area is `0xabc174d4` instead of `0x100174d4`.
It seemed to be related to relocation.
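To make the address arithmetic concrete, here is a small sketch (my own illustration, not CoreCLR code) of how a `movw`/`movt` pair composes a 32-bit target address, and how executing a stale `movt` immediate would produce exactly the faulting address. The pre-relocation immediate `0x1001` is an assumption inferred from the fault address, not something stated in the dumps.

```rust
// Sketch: a Thumb-2 movw/movt pair builds a 32-bit address from two
// 16-bit immediates: movw writes the low half, movt the high half.
fn compose(movw_imm16: u32, movt_imm16: u32) -> u32 {
    (movt_imm16 << 16) | movw_imm16
}

fn main() {
    // Relocated pair from the disassembly above: movw #0x74d4 / movt #0xabc1.
    assert_eq!(compose(0x74d4, 0xabc1), 0xabc1_74d4);
    // If the CPU still executes a stale, pre-relocation movt immediate
    // (hypothetically 0x1001) out of the instruction cache, the load
    // targets the bogus address seen in the crash.
    assert_eq!(compose(0x74d4, 0x1001), 0x1001_74d4);
    println!("stale movt 0x1001 -> faulting address 0x100174d4");
}
```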
We looked at the code and found the relevant logic in `PEImageLayout::ApplyBaseRelocations()`.
Prompted by reports of instruction cache flush issues in other JIT VMs, we tried forcing a cache flush as follows, and the symptom of the problem was eliminated.
```diff
diff --git a/src/vm/peimagelayout.cpp b/src/vm/peimagelayout.cpp
index 93ab77c..ca4b487 100644
--- a/src/vm/peimagelayout.cpp
+++ b/src/vm/peimagelayout.cpp
@@ -227,6 +227,8 @@ void PEImageLayout::ApplyBaseRelocations()
#ifdef _TARGET_ARM_
case IMAGE_REL_BASED_THUMB_MOV32:
PutThumb2Mov32((UINT16 *)address, GetThumb2Mov32((UINT16 *)address) + delta);
+
+ ClrFlushInstructionCache(address, 8);
break;
#endif
```
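For illustration, here is a rough pure-Rust model of what the patched code path does: decode the split imm16 fields of a Thumb-2 `movw`/`movt` pair, add the load delta, and re-encode. This is an assumption-laden sketch following the ARM T3/T1 encodings, not CoreCLR's actual `GetThumb2Mov32`/`PutThumb2Mov32`; the pre-relocation pair is hypothetical, though the file halfwords `c1f2 0103` shown earlier do seem to decode to `movt r3, #0x1001`, matching the faulting upper half. What a pure-computation model cannot show is exactly the point of the diff: after the halfwords are rewritten, the instruction cache over those bytes must be invalidated.

```rust
// Sketch of a Thumb-2 MOV32 base relocation (T3 MOVW / T1 MOVT encodings).
// The imm16 is split as imm4:i:imm3:imm8 across the two halfwords.
fn decode_imm16(hw1: u16, hw2: u16) -> u16 {
    let imm4 = hw1 & 0xF;
    let i = (hw1 >> 10) & 1;
    let imm3 = (hw2 >> 12) & 7;
    let imm8 = hw2 & 0xFF;
    (imm4 << 12) | (i << 11) | (imm3 << 8) | imm8
}

fn encode_imm16(hw1: u16, hw2: u16, imm16: u16) -> (u16, u16) {
    let imm4 = (imm16 >> 12) & 0xF;
    let i = (imm16 >> 11) & 1;
    let imm3 = (imm16 >> 8) & 7;
    let imm8 = imm16 & 0xFF;
    ((hw1 & !0x040F) | (i << 10) | imm4,
     (hw2 & !0x70FF) | (imm3 << 12) | imm8)
}

// Rewrites [movw hw1, hw2, movt hw1, hw2] in place, adding the load delta.
fn apply_thumb2_mov32_reloc(insns: &mut [u16; 4], delta: u32) {
    let low = decode_imm16(insns[0], insns[1]) as u32;
    let high = decode_imm16(insns[2], insns[3]) as u32;
    let new = ((high << 16) | low).wrapping_add(delta);
    let (a, b) = encode_imm16(insns[0], insns[1], (new & 0xFFFF) as u16);
    let (c, d) = encode_imm16(insns[2], insns[3], (new >> 16) as u16);
    *insns = [a, b, c, d];
    // Real code must now flush the icache over these 8 bytes, e.g.
    // ClrFlushInstructionCache(address, 8) / __clear_cache().
}

fn main() {
    // Hypothetical pre-reloc pair: movw r3, #0x74d4 ; movt r3, #0x1001.
    let mut insns: [u16; 4] = [0xF247, 0x43D4, 0xF2C1, 0x0301];
    apply_thumb2_mov32_reloc(&mut insns, 0x9BC0_0000);
    assert_eq!(decode_imm16(insns[0], insns[1]), 0x74D4); // low half unchanged
    assert_eq!(decode_imm16(insns[2], insns[3]), 0xABC1); // movt now #0xabc1
    println!("relocated target = 0xabc174d4");
}
```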
Basically, I thought that the cache flush performed through `mprotect()` was already handled well in CoreCLR.
However, when I searched the Internet, I found the following.
> You would expect that the mmap / mprotect syscalls would establish mappings that are immediately updated, I see that the kernel does indeed flush caches on mprotect. In that case, no cache flush would be required.
> However, I also see that some versions of libc do call cacheflush after mprotect, which would imply that some environments would need the caches flushed (or have previously). I'd take a guess that this is a workaround to a bug.
https://stackoverflow.com/questions/2777725/does-mprotect-flush-the-instruction-cache-on-arm-linux
I looked at the `glibc` source code and found the following information:
- [Here](https://github.com/bminor/glibc/blob/master/sysdeps/unix/sysv/linux/arm/dl-machine.h#L22) defines CLEAR_CACHE.
- [This](https://github.com/bminor/glibc/blob/master/elf/dl-reloc.c#L298) is the only place where CLEAR_CACHE is used while relocating an object.
- [Here](https://github.com/bminor/glibc/blob/master/sysdeps/arm/dl-machine.h#L30) warns that CLEAR_CACHE must be present to support text relocation.
It seems that `mprotect()` alone is not enough to support dynamic relocation of code on `Linux/ARM`.
Perhaps the comment `/* This definition is Linux-specific. */` in `dl-machine.h` explains why there was no problem on other platforms before.
I have some questions at this point.
- Is this the right solution? Is it the best way to solve this problem?
- Doesn't it have any impact on performance?
- If this is really a problem, why hasn't it been fixed in the Linux kernel before?
- Has anyone encountered this problem on `Windows/ARM`?
Any comments are welcome.
| reli | instruction cache flush on linux a crash occurred when using aot images it was generated with fragilenonversionable option but it looks not the point here when analyzing the problem the crash was occurred by sigseg while accessing the memory address because the value of ni dll partly differs from the value loaded in memory the code at a different point was associated with an target address of movt movw instruction on a kernel dump sigsegv occurs when trying to access a strange memory address
crashed here
contents of the xxx ni dll file is c c
r a c h h
h m
l
a h the file contains a value of instead of h
l
a based on the values in the memory the area is composed of the following code and the problematic part is the ldr part gdb gdb p x code
gdb p x
gdb p x
gdb x
ldr
ldr
ldr
ldr
bl
nop
movw
movt
ldr
bl
str
movw
movt
ldr sigsegv here
movw
movt
ldr the strange thing is that the memory address that should be accessed in this area is instead of it seemed to be related to relocation we looked at the code and we could find the relevant code in peimagelayout applybaserelocations in response to the story that there was an issue of the instruction cache flush associated with the other jit vm we tried to force the cache flush as follows and the symptom of the problem was eliminated diff diff git a src vm peimagelayout cpp b src vm peimagelayout cpp index a src vm peimagelayout cpp b src vm peimagelayout cpp void peimagelayout applybaserelocations ifdef target arm case image rel based thumb address address delta clrflushinstructioncache address break endif basically i think that cache flush through mprotect is already well done in coreclr however when i searched the internet i can found the following you would expect that the mmap mprotect syscalls would establish mappings that are immediately updated i see that the kernel does indeed flush caches on mprotect in that case no cache flush would be required however i also see that some versions of libc do call cacheflush after mprotect which would imply that some environments would need the caches flushed or have previously i d take a guess that this is a workaround to a bug i looked at the glibc source code and found the following information
defines clear cache
is the only place where clear cache is used while relocating object
warnings that there the clear cache should be present to support text relocation
it seems that mprotect alone is not enough to support dynamic relocation of code linux arm
perhaps the comment this definition is linux specific in dl machine h explains why there was no problem in other platform before i have some questions at this point is this the right solution is it a best way to solve this problem doesn t it has any impact on performance if this is a really problem why has not this been fixed in the linux kernel before is anyone encountered this problem on windows arm i want anyone to comment | 1 |
729 | 10,149,676,318 | IssuesEvent | 2019-08-05 15:43:46 | pulumi/pulumi | https://api.github.com/repos/pulumi/pulumi | closed | Lock down everything by default | area/tools impact/reliability | We wish to ensure Pulumi is the only way to modify environments by default. As a result, step #1 of adopting it in an existing environment should be to modify permissions to disable resource modification for any account other than Pulumi. Similarly, we probably want to disable direct SSH access to server resources. In both cases, it would be nice if any "exceptions" (like temporary SSH access during debugging) were supported by official and auditable workflows, including checking for drift afterwards to ensure our state still reflects live state, and vice versa. | True | Lock down everything by default - We wish to ensure Pulumi is the only way to modify environments by default. As a result, step #1 of adopting it in an existing environment should be to modify permissions to disable resource modification for any account other than Pulumi. Similarly, we probably want to disable direct SSH access to server resources. In both cases, it would be nice if any "exceptions" (like temporary SSH access during debugging) were supported by official and auditable workflows, including checking for drift afterwards to ensure our state still reflects live state, and vice versa. | reli | lock down everything by default we wish to ensure pulumi is the only way to modify environments by default as a result step of adopting it in an existing environment should be to modify permissions to disable resource modification for any account other than pulumi similarly we probably want to disable direct ssh access to server resources in both cases it would be nice if any exceptions like temporary ssh access during debugging were supported by official and auditable workflows including checking for drift afterwards to ensure our state still reflects live state and vice versa | 1 |
50,705 | 12,545,022,091 | IssuesEvent | 2020-06-05 18:12:15 | rook/rook | https://api.github.com/repos/rook/rook | opened | Integration tests are being skipped for PRs without a [test] flag | bug build | <!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://slack.rook.io/).
3. Did you already search the existing open issues for anything similar? -->
**Is this a bug report or feature request?**
* Bug Report
**Deviation from expected behavior:**
Since the merge of #4028 the integration tests are being skipped when there is no `[test]` or `[test storage provider]` flag.
For example, look at #5045 in [this build](https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-5045/50/pipeline/). The pre-build Jenkins step shows that there are Go files, but the tests are skipped because of this message:
```
No code changes detected! Just building.
```
It seems something went wrong with the [last check for .go files](https://github.com/rook/rook/blob/master/Jenkinsfile#L72).
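The Jenkinsfile check itself is Groovy/shell, but the intended logic is simple. Here is a sketch of it in Rust, a hypothetical illustration rather than the actual pipeline code: given the changed paths for a PR, tests should run whenever any `.go` file changed, whether or not the commit message carries a `[test]` flag.

```rust
// Hypothetical model of the "should we run integration tests?" decision.
fn should_run_tests(changed_paths: &[&str], commit_msg: &str) -> bool {
    let has_go_changes = changed_paths.iter().any(|p| p.ends_with(".go"));
    // Matches both "[test]" and "[test storage provider]".
    let has_test_flag = commit_msg.contains("[test");
    has_go_changes || has_test_flag
}

fn main() {
    // A PR like #5045: Go files changed, but no [test] flag in the message.
    assert!(should_run_tests(&["pkg/operator/ceph/cluster.go"], "fix: mons"));
    // Docs-only change without a flag: skipping is fine.
    assert!(!should_run_tests(&["Documentation/ceph.md"], "docs update"));
    println!("go changes alone should trigger the integration tests");
}
```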
**Expected behavior:**
**How to reproduce it (minimal and precise):**
<!-- Please let us know any circumstances for reproduction of your bug. -->
**File(s) to submit**:
* Cluster CR (custom resource), typically called `cluster.yaml`, if necessary
* Operator's logs, if necessary
* Crashing pod(s) logs, if necessary
To get logs, use `kubectl -n <namespace> logs <pod name>`
When pasting logs, always surround them with backticks or use the `insert code` button from the Github UI.
Read [Github documentation if you need help](https://help.github.com/en/articles/creating-and-highlighting-code-blocks).
**Environment**:
* OS (e.g. from /etc/os-release):
* Kernel (e.g. `uname -a`):
* Cloud provider or hardware configuration:
* Rook version (use `rook version` inside of a Rook Pod):
* Storage backend version (e.g. for ceph do `ceph -v`):
* Kubernetes version (use `kubectl version`):
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/master/ceph-toolbox.html)):
| 1.0 | Integration tests are being skipped for PRs without a [test] flag - <!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://slack.rook.io/).
3. Did you already search the existing open issues for anything similar? -->
**Is this a bug report or feature request?**
* Bug Report
**Deviation from expected behavior:**
Since the merge of #4028 the integration tests are being skipped when there is no `[test]` or `[test storage provider]` flag.
For example, look at #5045 in [this build](https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-5045/50/pipeline/). The pre-build Jenkins step shows that there are Go files, but the tests are skipped because of this message:
```
No code changes detected! Just building.
```
It seems something went wrong with the [last check for .go files](https://github.com/rook/rook/blob/master/Jenkinsfile#L72).
**Expected behavior:**
**How to reproduce it (minimal and precise):**
<!-- Please let us know any circumstances for reproduction of your bug. -->
**File(s) to submit**:
* Cluster CR (custom resource), typically called `cluster.yaml`, if necessary
* Operator's logs, if necessary
* Crashing pod(s) logs, if necessary
To get logs, use `kubectl -n <namespace> logs <pod name>`
When pasting logs, always surround them with backticks or use the `insert code` button from the Github UI.
Read [Github documentation if you need help](https://help.github.com/en/articles/creating-and-highlighting-code-blocks).
**Environment**:
* OS (e.g. from /etc/os-release):
* Kernel (e.g. `uname -a`):
* Cloud provider or hardware configuration:
* Rook version (use `rook version` inside of a Rook Pod):
* Storage backend version (e.g. for ceph do `ceph -v`):
* Kubernetes version (use `kubectl version`):
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/master/ceph-toolbox.html)):
| non_reli | integration tests are being skipped for prs without a flag are you in the right place for issues or feature requests please create an issue in this repository for general technical and non technical questions we are happy to help you on our did you already search the existing open issues for anything similar is this a bug report or feature request bug report deviation from expected behavior since the merge of the integration tests are being skipped when there is no or flag for example look at in the pre build jenkins step shows that there are go files but the tests are skipped because of this message no code changes detected just building it seems something went wrong with the expected behavior how to reproduce it minimal and precise file s to submit cluster cr custom resource typically called cluster yaml if necessary operator s logs if necessary crashing pod s logs if necessary to get logs use kubectl n logs when pasting logs always surround them with backticks or use the insert code button from the github ui read environment os e g from etc os release kernel e g uname a cloud provider or hardware configuration rook version use rook version inside of a rook pod storage backend version e g for ceph do ceph v kubernetes version use kubectl version kubernetes cluster type e g tectonic gke openshift storage backend status e g for ceph use ceph health in the | 0 |
11,957 | 7,747,781,860 | IssuesEvent | 2018-05-30 05:36:22 | pingcap/tikv | https://api.github.com/repos/pingcap/tikv | opened | sending messages in batch | performance raft | I wrote a simple test for gRPC, comparing sending 10 msgs individually with a buffer hint against sending one msg that contains 10 msgs in a batch.
The proto is:
```
syntax = "proto3";
package raft;
message Peer {
uint64 id = 1;
uint64 store_id = 2;
bool is_learner = 3;
}
message RegionEpoch {
uint64 conf_ver = 1;
uint64 version = 2;
}
message Heartbeat {
uint64 to = 1;
uint64 term = 2;
uint64 log_term = 3;
uint64 index = 4;
uint64 commit = 5;
}
message Message {
uint64 region_id = 1;
Peer from_peer = 2;
Peer to_peer = 3;
RegionEpoch epoch = 4;
Heartbeat msg = 5;
}
message Messages {
repeated Message msgs = 1;
}
message Done {
}
service Raft {
rpc One(stream Message) returns (Done) {}
rpc Multi(stream Messages) returns (Done) {}
}
```
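To illustrate the batched shape of the API, here is a small pure-Rust sketch. The hand-written structs stand in for the protobuf-generated types (an assumption, since the generated code is not shown): a buffer collects individual `Message`s and emits one `Messages` batch once it holds 10 of them, mirroring the 10-in-1 grouping used by the test below.

```rust
// Stand-ins for the generated protobuf types (illustrative only).
#[derive(Default, Clone)]
struct Message { region_id: u64 }
#[derive(Default)]
struct Messages { msgs: Vec<Message> }

// Collects messages and yields a full batch every `cap` messages.
struct Batcher { buf: Vec<Message>, cap: usize }

impl Batcher {
    fn new(cap: usize) -> Self { Batcher { buf: Vec::new(), cap } }
    fn push(&mut self, m: Message) -> Option<Messages> {
        self.buf.push(m);
        if self.buf.len() >= self.cap {
            Some(Messages { msgs: std::mem::take(&mut self.buf) })
        } else {
            None
        }
    }
}

fn main() {
    let mut b = Batcher::new(10);
    let mut batches = 0;
    for i in 0..100 {
        if let Some(batch) = b.push(Message { region_id: i }) {
            assert_eq!(batch.msgs.len(), 10);
            batches += 1;
        }
    }
    // 100 messages become 10 Multi calls instead of 100 individual sends.
    assert_eq!(batches, 10);
    println!("batched 100 messages into {} Messages", batches);
}
```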
The server implementation is very easy: receive all messages and reply with one `Done` msg.
The client looks:
```rust
fn test_one(num: usize, client: &RaftClient) {
let t = Instant::now();
let (mut sink, receiver) = client.one().unwrap();
for _ in 0..num {
for _ in 0..9 {
sink = sink.send((new_msg(), WriteFlags::default().buffer_hint(true)))
.wait()
.unwrap();
}
sink = sink.send((new_msg(), WriteFlags::default()))
.wait()
.unwrap();
}
future::poll_fn(|| sink.close()).wait().unwrap();
receiver.wait().unwrap();
println!("one time {:?}", t.elapsed())
}
fn test_multi(num: usize, client: &RaftClient) {
let t = Instant::now();
let (mut sink, receiver) = client.multi().unwrap();
for _ in 0..num {
sink = sink.send((new_msgs(), WriteFlags::default()))
.wait()
.unwrap();
}
future::poll_fn(|| sink.close()).wait().unwrap();
receiver.wait().unwrap();
println!("multi time {:?}", t.elapsed())
}
```
Then I use `num = 100000` for the test and get the result:
```
multi time Duration { secs: 3, nanos: 135709348 }
one time Duration { secs: 18, nanos: 973024439 }
```
As you can see, batching reduces the total time dramatically, roughly 6 times shorter here (about 18.97s vs 3.14s). So I think sending msgs in batch has a big benefit, especially for TiKV <-> TiKV traffic.
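A crude cost model helps explain the gap. This is my own sketch with made-up overhead numbers, not measurements: each send carries a fixed per-call cost (framing, wakeups, syscalls), so one 10-message `Messages` pays that cost once instead of ten times.

```rust
// Toy cost model: total = sends * per_send_overhead + msgs * per_msg_cost.
fn total_cost(msgs: u64, batch_size: u64, per_send: u64, per_msg: u64) -> u64 {
    let sends = (msgs + batch_size - 1) / batch_size; // ceiling division
    sends * per_send + msgs * per_msg
}

fn main() {
    // Arbitrary units chosen only to show the shape of the effect.
    let (msgs, per_send, per_msg) = (1_000_000u64, 17u64, 2u64);
    let one_by_one = total_cost(msgs, 1, per_send, per_msg);
    let batched = total_cost(msgs, 10, per_send, per_msg);
    // In this model, batching by 10 is more than 5x cheaper, because the
    // per-send overhead dominates the per-message cost.
    assert!(batched * 5 < one_by_one);
    println!("one-by-one = {}, batched = {}", one_by_one, batched);
}
```

When per-message work dominates instead, the benefit shrinks, which is one reason an actual benchmark on real TiKV traffic is still needed.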
But we also need to do a benchmark for it, and we must also consider backward compatibility. | True | sending messages in batch - I wrote a simple test for gRPC, comparing sending 10 msgs individually with a buffer hint against sending one msg that contains 10 msgs in a batch.
The proto is:
```
syntax = "proto3";
package raft;
message Peer {
uint64 id = 1;
uint64 store_id = 2;
bool is_learner = 3;
}
message RegionEpoch {
uint64 conf_ver = 1;
uint64 version = 2;
}
message Heartbeat {
uint64 to = 1;
uint64 term = 2;
uint64 log_term = 3;
uint64 index = 4;
uint64 commit = 5;
}
message Message {
uint64 region_id = 1;
Peer from_peer = 2;
Peer to_peer = 3;
RegionEpoch epoch = 4;
Heartbeat msg = 5;
}
message Messages {
repeated Message msgs = 1;
}
message Done {
}
service Raft {
rpc One(stream Message) returns (Done) {}
rpc Multi(stream Messages) returns (Done) {}
}
```
The server implementation is very easy: receive all messages and reply with one `Done` msg.
The client looks:
```rust
fn test_one(num: usize, client: &RaftClient) {
let t = Instant::now();
let (mut sink, receiver) = client.one().unwrap();
for _ in 0..num {
for _ in 0..9 {
sink = sink.send((new_msg(), WriteFlags::default().buffer_hint(true)))
.wait()
.unwrap();
}
sink = sink.send((new_msg(), WriteFlags::default()))
.wait()
.unwrap();
}
future::poll_fn(|| sink.close()).wait().unwrap();
receiver.wait().unwrap();
println!("one time {:?}", t.elapsed())
}
fn test_multi(num: usize, client: &RaftClient) {
let t = Instant::now();
let (mut sink, receiver) = client.multi().unwrap();
for _ in 0..num {
sink = sink.send((new_msgs(), WriteFlags::default()))
.wait()
.unwrap();
}
future::poll_fn(|| sink.close()).wait().unwrap();
receiver.wait().unwrap();
println!("multi time {:?}", t.elapsed())
}
```
Then I use `num = 100000` for the test and get the result:
```
multi time Duration { secs: 3, nanos: 135709348 }
one time Duration { secs: 18, nanos: 973024439 }
```
As you can see, batching reduces the total time dramatically, roughly 6 times shorter here (about 18.97s vs 3.14s). So I think sending msgs in batch has a big benefit, especially for TiKV <-> TiKV traffic.
But we also need to do a benchmark for it, and we must also consider backward compatibility. | non_reli | sending messages in batch i wrote a simple test for grpc send msgs with buffer hint and send one msg which contains msgs in batch the proto is syntax package raft message peer id store id bool is learner message regionepoch conf ver version message heartbeat to term log term index commit message message region id peer from peer peer to peer regionepoch epch heartbeat msg message messages repeated message msgs message done service raft rpc one stream message returns done rpc multi stream messages returns done the server implementation is very easy receive all messages and reply one done msg the client looks rust fn test one num usize client raftclient let t instant now let mut sink receiver client one unwrap for in num for in sink sink send new msg writeflags default buffer hint true wait unwrap sink sink send new msg writeflags default wait unwrap future poll fn sink close wait unwrap receiver wait unwrap println one time t elapsed fn test multi num usize client raftclient let t instant now let mut sink receiver client multi unwrap for in num sink sink send new msgs writeflags default wait unwrap future poll fn sink close wait unwrap receiver wait unwrap println multi time t elapsed then i use num for test and get the result multi time duration secs nanos one time duration secs nanos as you can see using batch can reduce the total time too much maybe times shorter so i think it has a big benefit to send msgs in batch especially for tikv tikv but we also need to do a benchmark for it and we must also consider backward compatibility | 0 |