Dataset schema (column, dtype, and value or length range):

| Column | Dtype | Min | Max |
|--------|-------|-----|-----|
| Unnamed: 0 | int64 | 0 | 832k |
| id | float64 | 2.49B | 32.1B |
| type | stringclasses | 1 value | |
| created_at | stringlengths | 19 | 19 |
| repo | stringlengths | 5 | 112 |
| repo_url | stringlengths | 34 | 141 |
| action | stringclasses | 3 values | |
| title | stringlengths | 1 | 757 |
| labels | stringlengths | 4 | 664 |
| body | stringlengths | 3 | 261k |
| index | stringclasses | 10 values | |
| text_combine | stringlengths | 96 | 261k |
| label | stringclasses | 2 values | |
| text | stringlengths | 96 | 232k |
| binary_label | int64 | 0 | 1 |
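The `label` and `binary_label` columns in the schema above encode the same annotation twice: `label` is one of two string classes ("defect" / "non_defect"), and `binary_label` is an int64 in {0, 1}. In every sample row that follows, "defect" pairs with 1 and "non_defect" with 0. A minimal sketch of that pairing (the rows below are invented placeholders, not records from the dataset):

```python
# Illustrative records only: the field names follow the schema above,
# but the values are made up for this sketch.
rows = [
    {"label": "defect", "binary_label": None},
    {"label": "non_defect", "binary_label": None},
]

# In the sample rows, binary_label mirrors label:
# "defect" -> 1, "non_defect" -> 0.
for r in rows:
    r["binary_label"] = 1 if r["label"] == "defect" else 0

print([r["binary_label"] for r in rows])  # [1, 0]
```

Presumably the integer column exists so the target can be fed directly to a binary classifier, while the string form stays readable when browsing the data.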
14,611
2,829,610,116
IssuesEvent
2015-05-23 02:06:28
awesomebing1/fuzzdb
https://api.github.com/repos/awesomebing1/fuzzdb
closed
http://studuj.czu.cz/clemson-vs-georgia-tech-live-streaming-ncaaf
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? http://studuj.czu.cz/clemson-vs-georgia-tech-live-streaming-ncaaf http://studuj.czu.cz/clemson-vs-georgia-tech-live-streaming-ncaaf http://studuj.czu.cz/clemson-vs-georgia-tech-live-streaming-ncaaf What version of the product are you using? On what operating system? Please provide any additional information below. ``` Original issue reported on code.google.com by `sabujhos...@gmail.com` on 15 Nov 2014 at 3:44
1.0
http://studuj.czu.cz/clemson-vs-georgia-tech-live-streaming-ncaaf - ``` What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? http://studuj.czu.cz/clemson-vs-georgia-tech-live-streaming-ncaaf http://studuj.czu.cz/clemson-vs-georgia-tech-live-streaming-ncaaf http://studuj.czu.cz/clemson-vs-georgia-tech-live-streaming-ncaaf What version of the product are you using? On what operating system? Please provide any additional information below. ``` Original issue reported on code.google.com by `sabujhos...@gmail.com` on 15 Nov 2014 at 3:44
defect
what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by sabujhos gmail com on nov at
1
129,994
27,605,313,853
IssuesEvent
2023-03-09 12:38:04
OpenSource-Journey/Your-Github-Contributions
https://api.github.com/repos/OpenSource-Journey/Your-Github-Contributions
closed
Add support for eslint, prettier and commit hook
enhancement code-quality code-formatting
Currently, we don't have support for code formatting and linting, this leads to unnecessary changes to the file when someone contributes to this project. so we need eslint rules set for project development and prettier for code formatting.
2.0
Add support for eslint, prettier and commit hook - Currently, we don't have support for code formatting and linting, this leads to unnecessary changes to the file when someone contributes to this project. so we need eslint rules set for project development and prettier for code formatting.
non_defect
add support for eslint prettier and commit hook currently we don t have support for code formatting and linting this leads to unnecessary changes to the file when someone contributes to this project so we need eslint rules set for project development and prettier for code formatting
0
350,187
31,861,723,965
IssuesEvent
2023-09-15 11:25:56
jmg049/wavers
https://api.github.com/repos/jmg049/wavers
closed
Testing 1#: Load testing
bug test
A segfault has appeared when loading multiple files via a for loop, specifically for 32-bit floats and likely 64-bit floats and 32-ints. This was hopefully addressed in this [commit](https://github.com/jmg049/wavers/commit/6fd8b0e8879d6f030011387b4604289b0d75b41d) by fixing the values used to initialize the sizes and capacity of the underlying buffers. However, due to time limits at the time this has yet to be tested in Python and the root cause determined for sure.
1.0
Testing 1#: Load testing - A segfault has appeared when loading multiple files via a for loop, specifically for 32-bit floats and likely 64-bit floats and 32-ints. This was hopefully addressed in this [commit](https://github.com/jmg049/wavers/commit/6fd8b0e8879d6f030011387b4604289b0d75b41d) by fixing the values used to initialize the sizes and capacity of the underlying buffers. However, due to time limits at the time this has yet to be tested in Python and the root cause determined for sure.
non_defect
testing load testing a segfault has appeared when loading multiple files via a for loop specifically for bit floats and likely bit floats and ints this was hopefully addressed in this by fixing the values used to initialize the sizes and capacity of the underlying buffers however due to time limits at the time this has yet to be tested in python and the root cause determined for sure
0
379,539
11,223,115,683
IssuesEvent
2020-01-07 21:50:15
acidanthera/bugtracker
https://api.github.com/repos/acidanthera/bugtracker
closed
Add a linux build of macserial to MacInfoPkg
priority:normal project:serial
I wanted to use macserial on linux, but only found macOS and Windows builds. I adapted the macserial build tool to compile it on linux. The code is at [https://github.com/chriswayg/MacInfoPkg/blob/master/macserial/build-linux.tool](https://github.com/chriswayg/MacInfoPkg/blob/master/macserial/build-linux.tool) but it is not integrated into the original build tool, thus I did not make a pull request. I have used it on Debian testing (actually grml). almost all commands work except: ``` --generate (-g) generate serial for current model --sys (-s) get system info ``` Thus my request, if you could add a linux build of macserial. > When to use issue tracker: > * You have new resources or patches to add but cannot make a pull request
1.0
Add a linux build of macserial to MacInfoPkg - I wanted to use macserial on linux, but only found macOS and Windows builds. I adapted the macserial build tool to compile it on linux. The code is at [https://github.com/chriswayg/MacInfoPkg/blob/master/macserial/build-linux.tool](https://github.com/chriswayg/MacInfoPkg/blob/master/macserial/build-linux.tool) but it is not integrated into the original build tool, thus I did not make a pull request. I have used it on Debian testing (actually grml). almost all commands work except: ``` --generate (-g) generate serial for current model --sys (-s) get system info ``` Thus my request, if you could add a linux build of macserial. > When to use issue tracker: > * You have new resources or patches to add but cannot make a pull request
non_defect
add a linux build of macserial to macinfopkg i wanted to use macserial on linux but only found macos and windows builds i adapted the macserial build tool to compile it on linux the code is at but it is not integrated into the original build tool thus i did not make a pull request i have used it on debian testing actually grml almost all commands work except generate g generate serial for current model sys s get system info thus my request if you could add a linux build of macserial when to use issue tracker you have new resources or patches to add but cannot make a pull request
0
10,789
2,622,189,720
IssuesEvent
2015-03-04 00:22:31
byzhang/cudpp
https://api.github.com/repos/byzhang/cudpp
closed
findFile/findDir search in wrong direction, and therefore find wrong path if the startDir is repeated in the path
auto-migrated Component-Tests Milestone-Release2.0 Priority-High Type-Defect
``` When CUDPP is in a path that has the name "cudpp" in it twice, for example, the way I keep branches: ~/src/idav/branches/proj/cudpp/release1.1/cudpp/ cudpp_testrig -rand fails to find its files. This is because cutupPath goes from the root of the path above, finding the first /cudpp first. It should instead work backwards up the tree, so it finds the closest instance of "startDir", rather than the farthest -- I think this is what users will expect. I think the correct way to do this is not using strtok, but by using the chdir() to traverse up the tree until either the startDir is found or the root is hit. I find it hard to believe each OS doesn't have a built-in function to do this, but a quick google search turns up nothing easy... This needs to be fixed. However I think we can leave it until after the release. ``` Original issue reported on code.google.com by `harr...@gmail.com` on 29 Jun 2009 at 7:42
1.0
findFile/findDir search in wrong direction, and therefore find wrong path if the startDir is repeated in the path - ``` When CUDPP is in a path that has the name "cudpp" in it twice, for example, the way I keep branches: ~/src/idav/branches/proj/cudpp/release1.1/cudpp/ cudpp_testrig -rand fails to find its files. This is because cutupPath goes from the root of the path above, finding the first /cudpp first. It should instead work backwards up the tree, so it finds the closest instance of "startDir", rather than the farthest -- I think this is what users will expect. I think the correct way to do this is not using strtok, but by using the chdir() to traverse up the tree until either the startDir is found or the root is hit. I find it hard to believe each OS doesn't have a built-in function to do this, but a quick google search turns up nothing easy... This needs to be fixed. However I think we can leave it until after the release. ``` Original issue reported on code.google.com by `harr...@gmail.com` on 29 Jun 2009 at 7:42
defect
findfile finddir search in wrong direction and therefore find wrong path if the startdir is repeated in the path when cudpp is in a path that has the name cudpp in it twice for example the way i keep branches src idav branches proj cudpp cudpp cudpp testrig rand fails to find its files this is because cutuppath goes from the root of the path above finding the first cudpp first it should instead work backwards up the tree so it finds the closest instance of startdir rather than the farthest i think this is what users will expect i think the correct way to do this is not using strtok but by using the chdir to traverse up the tree until either the startdir is found or the root is hit i find it hard to believe each os doesn t have a built in function to do this but a quick google search turns up nothing easy this needs to be fixed however i think we can leave it until after the release original issue reported on code google com by harr gmail com on jun at
1
132,917
12,521,824,161
IssuesEvent
2020-06-03 18:04:17
alphagov/govuk-frontend
https://api.github.com/repos/alphagov/govuk-frontend
opened
Consider renaming group for media queries to break points
awaiting triage documentation sass / css
## What [/settings/_media-queries.scss](https://github.com/alphagov/govuk-frontend/blob/master/src/govuk/settings/_media-queries.scss#L2) uses the group name `media-queries` but actually contains breakpoint related things. This means that frontend docs have a [Media queries section](https://frontend.design-system.service.gov.uk/sass-api-reference/#media-queries) that contains information about breakpoints but sounds like it has information about media queries themselves - I was helping someone on support and this tripped me up! The actual [media query mixin](http://127.0.0.1:4567/sass-api-reference/#govuk-media-query) is further down the page. We could consider renaming the group from `media-queries` to `break-points` (and also renaming the file, although this could be a breaking change for some users). ## Why Renaming the group from `media-queries` to `break-points` would be more reflective of what the file contains and reduce the duplication of things called media query/queries in the docs. ## Who needs to know about this Devs, Mark
1.0
Consider renaming group for media queries to break points - ## What [/settings/_media-queries.scss](https://github.com/alphagov/govuk-frontend/blob/master/src/govuk/settings/_media-queries.scss#L2) uses the group name `media-queries` but actually contains breakpoint related things. This means that frontend docs have a [Media queries section](https://frontend.design-system.service.gov.uk/sass-api-reference/#media-queries) that contains information about breakpoints but sounds like it has information about media queries themselves - I was helping someone on support and this tripped me up! The actual [media query mixin](http://127.0.0.1:4567/sass-api-reference/#govuk-media-query) is further down the page. We could consider renaming the group from `media-queries` to `break-points` (and also renaming the file, although this could be a breaking change for some users). ## Why Renaming the group from `media-queries` to `break-points` would be more reflective of what the file contains and reduce the duplication of things called media query/queries in the docs. ## Who needs to know about this Devs, Mark
non_defect
consider renaming group for media queries to break points what uses the group name media queries but actually contains breakpoint related things this means that frontend docs have a that contains information about breakpoints but sounds like it has information about media queries themselves i was helping someone on support and this tripped me up the actual is further down the page we could consider renaming the group from media queries to break points and also renaming the file although this could be a breaking change for some users why renaming the group from media queries to break points would be more reflective of what the file contains and reduce the duplication of things called media query queries in the docs who needs to know about this devs mark
0
288,467
24,905,692,897
IssuesEvent
2022-10-29 07:55:58
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: pgjdbc failed
C-test-failure O-robot O-roachtest T-sql-experience branch-release-22.2
roachtest.pgjdbc [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6975915?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6975915?buildTab=artifacts#/pgjdbc) on release-22.2 @ [5bbbb4fefe2cca87aa8dd34c007702bb2faa20ef](https://github.com/cockroachdb/cockroach/commits/5bbbb4fefe2cca87aa8dd34c007702bb2faa20ef): ``` test artifacts and logs in: /artifacts/pgjdbc/run_1 orm_helpers.go:191,orm_helpers.go:117,java_helpers.go:220,pgjdbc.go:206,pgjdbc.go:218,test_runner.go:930: Tests run on Cockroach v22.2.0-beta.2-347-g5bbbb4fefe Tests run against pgjdbc REL42.3.3 5792 Total Tests Run 5042 tests passed 750 tests failed 72 tests skipped 178 tests ignored 0 tests passed unexpectedly 1 test failed unexpectedly 0 tests expected failed but skipped 0 tests expected failed but not run --- --- FAIL: org.postgresql.test.jdbc2.StatementTest.testShortQueryTimeout - unknown (unexpected) For a full summary look at the pgjdbc artifacts An updated blocklist (pgjdbcBlocklist) is available in the artifacts' pgjdbc log ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_encrypted=false</code> , <code>ROACHTEST_fs=ext4</code> , <code>ROACHTEST_localSSD=true</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*pgjdbc.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-20574
2.0
roachtest: pgjdbc failed - roachtest.pgjdbc [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6975915?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6975915?buildTab=artifacts#/pgjdbc) on release-22.2 @ [5bbbb4fefe2cca87aa8dd34c007702bb2faa20ef](https://github.com/cockroachdb/cockroach/commits/5bbbb4fefe2cca87aa8dd34c007702bb2faa20ef): ``` test artifacts and logs in: /artifacts/pgjdbc/run_1 orm_helpers.go:191,orm_helpers.go:117,java_helpers.go:220,pgjdbc.go:206,pgjdbc.go:218,test_runner.go:930: Tests run on Cockroach v22.2.0-beta.2-347-g5bbbb4fefe Tests run against pgjdbc REL42.3.3 5792 Total Tests Run 5042 tests passed 750 tests failed 72 tests skipped 178 tests ignored 0 tests passed unexpectedly 1 test failed unexpectedly 0 tests expected failed but skipped 0 tests expected failed but not run --- --- FAIL: org.postgresql.test.jdbc2.StatementTest.testShortQueryTimeout - unknown (unexpected) For a full summary look at the pgjdbc artifacts An updated blocklist (pgjdbcBlocklist) is available in the artifacts' pgjdbc log ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_encrypted=false</code> , <code>ROACHTEST_fs=ext4</code> , <code>ROACHTEST_localSSD=true</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*pgjdbc.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-20574
non_defect
roachtest pgjdbc failed roachtest pgjdbc with on release test artifacts and logs in artifacts pgjdbc run orm helpers go orm helpers go java helpers go pgjdbc go pgjdbc go test runner go tests run on cockroach beta tests run against pgjdbc total tests run tests passed tests failed tests skipped tests ignored tests passed unexpectedly test failed unexpectedly tests expected failed but skipped tests expected failed but not run fail org postgresql test statementtest testshortquerytimeout unknown unexpected for a full summary look at the pgjdbc artifacts an updated blocklist pgjdbcblocklist is available in the artifacts pgjdbc log parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb sql experience jira issue crdb
0
651,210
21,469,606,712
IssuesEvent
2022-04-26 08:20:10
earth-chris/elapid
https://api.github.com/repos/earth-chris/elapid
closed
GDAL_DATA path missing if gdal conda package isn't installed
bug low priority
In testing `pip install elapid` from a clean conda environment (`conda create --name test python==3.7`), the following error message is displayed: ``` ERROR 4: Unable to open EPSG support file gcs.csv. Try setting the GDAL_DATA environment variable to point to the directory containing EPSG csv files. ``` I believe this occurs because `elapid` only requires `rasterio`, which uses the minimal `libgdal` requirement and not the full `gdal`/`gdal-bin` install. This greatly simplifies build requirements but introduces this error. Rasterio has [documented the issue](https://rasterio.readthedocs.io/en/latest/faq.html). `fiona` packages their own `gdal_data` directory, which is here on my conda install: `$CONDA_ENV_DIR/lib/python3.7/site-packages/fiona/gdal_data/`. I suspect the `GDAL_DATA` variable could be set with `os.environ.update(GDAL_DATA='/path/to/fiona/gdal-data/`). But this will have to check that it's not referenced by other packages or its not overwriting a user default.
1.0
GDAL_DATA path missing if gdal conda package isn't installed - In testing `pip install elapid` from a clean conda environment (`conda create --name test python==3.7`), the following error message is displayed: ``` ERROR 4: Unable to open EPSG support file gcs.csv. Try setting the GDAL_DATA environment variable to point to the directory containing EPSG csv files. ``` I believe this occurs because `elapid` only requires `rasterio`, which uses the minimal `libgdal` requirement and not the full `gdal`/`gdal-bin` install. This greatly simplifies build requirements but introduces this error. Rasterio has [documented the issue](https://rasterio.readthedocs.io/en/latest/faq.html). `fiona` packages their own `gdal_data` directory, which is here on my conda install: `$CONDA_ENV_DIR/lib/python3.7/site-packages/fiona/gdal_data/`. I suspect the `GDAL_DATA` variable could be set with `os.environ.update(GDAL_DATA='/path/to/fiona/gdal-data/`). But this will have to check that it's not referenced by other packages or its not overwriting a user default.
non_defect
gdal data path missing if gdal conda package isn t installed in testing pip install elapid from a clean conda environment conda create name test python the following error message is displayed error unable to open epsg support file gcs csv try setting the gdal data environment variable to point to the directory containing epsg csv files i believe this occurs because elapid only requires rasterio which uses the minimal libgdal requirement and not the full gdal gdal bin install this greatly simplifies build requirements but introduces this error rasterio has fiona packages their own gdal data directory which is here on my conda install conda env dir lib site packages fiona gdal data i suspect the gdal data variable could be set with os environ update gdal data path to fiona gdal data but this will have to check that it s not referenced by other packages or its not overwriting a user default
0
33,118
7,035,460,809
IssuesEvent
2017-12-28 00:07:45
OGMS/ogms
https://api.github.com/repos/OGMS/ogms
closed
predisposition to disease of type X is schema for creating classes, not a class
auto-migrated Priority-Medium Type-Defect
``` should have some way of representing such schemas ``` Original issue reported on code.google.com by `alanruttenberg@gmail.com` on 4 Sep 2009 at 11:18
1.0
predisposition to disease of type X is schema for creating classes, not a class - ``` should have some way of representing such schemas ``` Original issue reported on code.google.com by `alanruttenberg@gmail.com` on 4 Sep 2009 at 11:18
defect
predisposition to disease of type x is schema for creating classes not a class should have some way of representing such schemas original issue reported on code google com by alanruttenberg gmail com on sep at
1
66,385
20,168,923,459
IssuesEvent
2022-02-10 08:35:22
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Element overwrites config dir symlink with text file
T-Defect
### Steps to reproduce ``` cd ~/.config ln -s ../.cfgkeep/Element-Nightly Element-Nightly /opt/Element-Nightly/element-desktop-nightly ``` After that, launch it like normal. It does not always replace the symlink on launch, but no matter what, after some hours of running, it replaces it. ### Outcome #### What did you expect? Normal usage #### What happened instead? Once it finally replaces the symlink with a file, it will fail to open at all. If you leave it running after a successful launch, it will eventually replace the symlink with a file, and then messages will fail to send (it will be unable to access encryption keys and whatnot). ### Operating system Manjaro Linux ### Application version Element Nightly version: 2022020501 Olm version: 3.2.8 ### How did you install the app? AUR ### Homeserver matrix.org ### Will you send logs? Yes
1.0
Element overwrites config dir symlink with text file - ### Steps to reproduce ``` cd ~/.config ln -s ../.cfgkeep/Element-Nightly Element-Nightly /opt/Element-Nightly/element-desktop-nightly ``` After that, launch it like normal. It does not always replace the symlink on launch, but no matter what, after some hours of running, it replaces it. ### Outcome #### What did you expect? Normal usage #### What happened instead? Once it finally replaces the symlink with a file, it will fail to open at all. If you leave it running after a successful launch, it will eventually replace the symlink with a file, and then messages will fail to send (it will be unable to access encryption keys and whatnot). ### Operating system Manjaro Linux ### Application version Element Nightly version: 2022020501 Olm version: 3.2.8 ### How did you install the app? AUR ### Homeserver matrix.org ### Will you send logs? Yes
defect
element overwrites config dir symlink with text file steps to reproduce cd config ln s cfgkeep element nightly element nightly opt element nightly element desktop nightly after that launch it like normal it does not always replace the symlink on launch but no matter what after some hours of running it replaces it outcome what did you expect normal usage what happened instead once it finally replaces the symlink with a file it will fail to open at all if you leave it running after a successful launch it will eventually replace the symlink with a file and then messages will fail to send it will be unable to access encryption keys and whatnot operating system manjaro linux application version element nightly version olm version how did you install the app aur homeserver matrix org will you send logs yes
1
44,230
12,062,190,671
IssuesEvent
2020-04-16 02:15:51
Automattic/wp-calypso
https://api.github.com/repos/Automattic/wp-calypso
closed
Themes info page Open Live Demo link broken when not using a single site as context
Themes [Pri] High [Type] Defect
On a theme info page, when I click on the Open Live Demo link, like on this one: https://wordpress.com/theme/alves it goes to a blank page. It only works if you right click and open in a new tab. Looking at the JS console we see related errors: > async-load-components-web-preview-component.a2ff7de6b2ed0a0ce13c.min.js:1 Uncaught (in promise) TypeError: Cannot read property 'URL' of null at Function.Object.recordTracksEvent [as mapToProps] (async-load-components-web-preview-component.a2ff7de6b2ed0a0ce13c.min.js:1)
1.0
Themes info page Open Live Demo link broken when not using a single site as context - On a theme info page, when I click on the Open Live Demo link, like on this one: https://wordpress.com/theme/alves it goes to a blank page. It only works if you right click and open in a new tab. Looking at the JS console we see related errors: > async-load-components-web-preview-component.a2ff7de6b2ed0a0ce13c.min.js:1 Uncaught (in promise) TypeError: Cannot read property 'URL' of null at Function.Object.recordTracksEvent [as mapToProps] (async-load-components-web-preview-component.a2ff7de6b2ed0a0ce13c.min.js:1)
defect
themes info page open live demo link broken when not using a single site as context on a theme info page when i click on the open live demo link like on this one it goes to a blank page it only works if you right click and open in a new tab looking at the js console we see related errors async load components web preview component min js uncaught in promise typeerror cannot read property url of null at function object recordtracksevent async load components web preview component min js
1
21,067
3,455,646,301
IssuesEvent
2015-12-17 21:04:57
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Assert failure in VM after commit of https://codereview.chromium.org/1410383020/
area-vm Priority-Medium Type-Defect
On the debug buildbots, an assert is failing in the test vm/cc/PrintJSON FAILED: none-vm debug_ia32 vm/cc/PrintJSON Expected: Pass Actual: Crash CommandOutput[run_vm_unittest]: stdout: Running test: PrintJSON stderr: /Volumes/data/b/build/slave/vm-mac-debug-ia32-be/build/sdk/runtime/vm/object.cc:7578: error: expected: cls.LookupField(field_name) == this->raw() void Field::PrintJSONImpl(JSONStream* stream, bool ref) const { 7526 JSONObject jsobj(stream); 7575 JSONObject jsobj(stream); 7527 Class& cls = Class::Handle(owner()); 7576 Class& cls = Class::Handle(owner()); 7528 String& field_name = String::Handle(name()); 7577 String& field_name = String::Handle(name()); 7529 ASSERT(cls.LookupField(field_name) == this->raw()); 7578 ASSERT(cls.LookupField(field_name) == this->raw()); 7530 field_name = String::EncodeIRI(field_name); 7579 field_name = String::EncodeIRI(field_name); 7531 AddCommonObjectProperties(&jsobj, "Field", ref); 7580 AddCommonObjectProperties(&jsobj, "Field", ref); 7532 jsobj.AddFixedServiceId("classes/%" Pd "/fields/%s", 7581 jsobj.AddFixedServiceId("classes/%" Pd "/fields/%
1.0
Assert failure in VM after commit of https://codereview.chromium.org/1410383020/ - On the debug buildbots, an assert is failing in the test vm/cc/PrintJSON FAILED: none-vm debug_ia32 vm/cc/PrintJSON Expected: Pass Actual: Crash CommandOutput[run_vm_unittest]: stdout: Running test: PrintJSON stderr: /Volumes/data/b/build/slave/vm-mac-debug-ia32-be/build/sdk/runtime/vm/object.cc:7578: error: expected: cls.LookupField(field_name) == this->raw() void Field::PrintJSONImpl(JSONStream* stream, bool ref) const { 7526 JSONObject jsobj(stream); 7575 JSONObject jsobj(stream); 7527 Class& cls = Class::Handle(owner()); 7576 Class& cls = Class::Handle(owner()); 7528 String& field_name = String::Handle(name()); 7577 String& field_name = String::Handle(name()); 7529 ASSERT(cls.LookupField(field_name) == this->raw()); 7578 ASSERT(cls.LookupField(field_name) == this->raw()); 7530 field_name = String::EncodeIRI(field_name); 7579 field_name = String::EncodeIRI(field_name); 7531 AddCommonObjectProperties(&jsobj, "Field", ref); 7580 AddCommonObjectProperties(&jsobj, "Field", ref); 7532 jsobj.AddFixedServiceId("classes/%" Pd "/fields/%s", 7581 jsobj.AddFixedServiceId("classes/%" Pd "/fields/%
defect
assert failure in vm after commit of on the debug buildbots an assert is failing in the test vm cc printjson failed none vm debug vm cc printjson expected pass actual crash commandoutput stdout running test printjson stderr volumes data b build slave vm mac debug be build sdk runtime vm object cc error expected cls lookupfield field name this raw void field printjsonimpl jsonstream stream bool ref const jsonobject jsobj stream jsonobject jsobj stream class cls class handle owner class cls class handle owner string field name string handle name string field name string handle name assert cls lookupfield field name this raw assert cls lookupfield field name this raw field name string encodeiri field name field name string encodeiri field name addcommonobjectproperties jsobj field ref addcommonobjectproperties jsobj field ref jsobj addfixedserviceid classes pd fields s jsobj addfixedserviceid classes pd fields
1
72,073
23,919,074,650
IssuesEvent
2022-09-09 15:08:24
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[🐛 Bug]: webElement.sendKeys carriage return to a textarea
I-defect needs-triaging G-chromedriver
### What happened? I am using selenium to type multi-line text to <textarea> in html. When the text contains carriage return character (\r or U+000D), I found selenium didn't type it in the text area (attached my reproduction code). The newline character (\n or U+000A) is typed. According to <textarea> w3c standard, <textarea> normalizes newline remaining \r. https://html.spec.whatwg.org/#the-textarea-element:normalize-newlines https://infra.spec.whatwg.org/#normalize-newlines quote "To normalize newlines in a [string], replace every U+000D CR U+000A LF [code point] pair with a single U+000A LF [code point] and then replace every remaining U+000D CR [code point]." When I use selenium to sendKeys a text with both \r\n and \r (e.g. "line 1\r\nline 2\rline 3"), I only see \n was typed but \r is ignored. Because if \r is also type, I would see <textarea> normalize \r\n -> \n and \r -> \n. Without using selenium, I can prove the \r can be typed to <textarea> and is normalized to \n. This is my jsbin demo code: https://jsbin.com/xapisezini/edit?html,js,console,output ### How can we reproduce the issue?
```shell /* reproduce selenium sendKey test */ import org.openqa.selenium.WebElement; import org.openqa.selenium.chrome.ChromeDriver; import java.util.concurrent.TimeUnit; public class MyTest { private WebDriver driver; @Before public void setup() { System.setProperty("webdriver.chrome.driver","chromedriver"); driver = new ChromeDriver(); driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); } @Test public void testTextArea() throws InterruptedException { driver.get(<localhost URL to hello.html>); WebElement ta = driver.findElement(By.id("w3review")); ta.sendKeys("line 1\r\nline 2\rline 3"); } @After public void teardown() { driver.quit(); } } /* hello.html */ <!DOCTYPE html> <html> <body> <h1>The textarea element</h1> <p><label>Reproduce sendKeys:</label></p> <textarea id="w3review"></textarea> </body> </html> ``` ### Relevant log output ```shell After selenium sendKeys to <textarea>, the observed textarea is line 1 line 2line 3 I am expecting desired textarea line 1 line 2 line 3 ``` ### Operating System Debian 10 ### Selenium version Java <groupId>org.seleniumhq.selenium</groupId><artifactId>selenium-support</artifactId><version>2.31.0</version> ### What are the browser(s) and version(s) where you see this issue? Chrome 99 ### What are the browser driver(s) and version(s) where you see this issue? <groupId>org.seleniumhq.selenium</groupId><artifactId>selenium-chrome-driver</artifactId><version>4.4.0</version> ### Are you using Selenium Grid? _No response_
1.0
[🐛 Bug]: webElement.sendKeys carriage return to a textarea - ### What happened? I am using selenium to type multi-line text to <textarea> in html. When the text contains carriage return character (\r or U+000D), I found selenium didn't type it in the text area (attached my reproduction code). The newline character (\n or U+000A) is typed. According to <textarea> w3c standard, <textarea> normalizes newline remaining \r. https://html.spec.whatwg.org/#the-textarea-element:normalize-newlines https://infra.spec.whatwg.org/#normalize-newlines quote "To normalize newlines in a [string], replace every U+000D CR U+000A LF [code point] pair with a single U+000A LF [code point] and then replace every remaining U+000D CR [code point]." When I use selenium to sendKeys a text with both \r\n and \r (e.g. "line 1\r\nline 2\rline 3"), I only see \n was typed but \r is ignored. Because if \r is also type, I would see <textarea> normalize \r\n -> \n and \r -> \n. Without using selenium, I can prove the \r can be typed to <textarea> and is normalized to \n. This is my jsbin demo code: https://jsbin.com/xapisezini/edit?html,js,console,output ### How can we reproduce the issue? ```shell /* reproduce selenium sendKey test */ import org.openqa.selenium.WebElement; import org.openqa.selenium.chrome.ChromeDriver; import java.util.concurrent.TimeUnit; public class MyTest { private WebDriver driver; @Before public void setup() { System.setProperty("webdriver.chrome.driver","chromedriver"); driver = new ChromeDriver(); driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); } @Test public void testTextArea() throws InterruptedException { driver.get(<localhost URL to hello.html>); WebElement ta = driver.findElement(By.id("w3review")); ta.sendKeys("line 1\r\nline 2\rline 3"); } @After public void teardown() { driver.quit(); } } /* hello.html */ <!DOCTYPE html> <html> <body> <h1>The textarea element</h1> <p><label>Reproduce sendKeys:</label></p> <textarea id="w3review"></textarea> </body> </html> ``` ### Relevant log output ```shell After selenium sendKeys to <textarea>, the observed textarea is line 1 line 2line 3 I am expecting desired textarea line 1 line 2 line 3 ``` ### Operating System Debian 10 ### Selenium version Java <groupId>org.seleniumhq.selenium</groupId><artifactId>selenium-support</artifactId><version>2.31.0</version> ### What are the browser(s) and version(s) where you see this issue? Chrome 99 ### What are the browser driver(s) and version(s) where you see this issue? <groupId>org.seleniumhq.selenium</groupId><artifactId>selenium-chrome-driver</artifactId><version>4.4.0</version> ### Are you using Selenium Grid? _No response_
defect
webelement sendkeys carriage return to a textarea what happened i am using selenium to type multi line text to in html when the text contains carriage return character r or u i found selenium didn t type it in the text area attached my reproduction code the newline character n or u is typed according to standard normalizes newline remaining r quote to normalize newlines in a replace every u cr u lf pair with a single u lf and then replace every remaining u cr when i use selenium to sendkeys a text with both r n and r e g line r nline rline i only see n was typed but r is ignored because if r is also type i would see normalize r n n and r n without using selenium i can prove the r can be typed to and is normalized to n this is my jsbin demo code how can we reproduce the issue shell reproduce selenium sendkey test import org openqa selenium webelement import org openqa selenium chrome chromedriver import java util concurrent timeunit public class mytest private webdriver driver before public void setup system setproperty webdriver chrome driver chromedriver driver new chromedriver driver manage timeouts implicitlywait timeunit seconds test public void testtextarea throws interruptedexception driver get webelement ta driver findelement by id ta sendkeys line r nline rline after public void teardown driver quit hello html the textarea element reproduce sendkeys relevant log output shell after selenium sendkeys to the observed textarea is line line i am expecting desired textarea line line line operating system debian selenium version java org seleniumhq selenium selenium support what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue org seleniumhq selenium selenium chrome driver are you using selenium grid no response
1
280,227
30,808,212,310
IssuesEvent
2023-08-01 08:39:30
open-component-model/ocm
https://api.github.com/repos/open-component-model/ocm
opened
Fix code scanning alert - Clear-text logging of sensitive information
area/security
<!-- Warning: The suggested title contains the alert rule name. This can expose security information. --> Tracking issue for: - [ ] https://github.com/open-component-model/ocm/security/code-scanning/2
True
Fix code scanning alert - Clear-text logging of sensitive information - <!-- Warning: The suggested title contains the alert rule name. This can expose security information. --> Tracking issue for: - [ ] https://github.com/open-component-model/ocm/security/code-scanning/2
non_defect
fix code scanning alert clear text logging of sensitive information tracking issue for
0
24,082
3,908,546,911
IssuesEvent
2016-04-19 16:11:24
google/guava
https://api.github.com/repos/google/guava
opened
Work around Samsung 5.0.x Atomic*FieldUpdater bug in InterruptibleTask
package: concurrent platform: android type: defect
We already do in `AbstractFuture`, and we're about to do so in `AggregateFutureState`.
1.0
Work around Samsung 5.0.x Atomic*FieldUpdater bug in InterruptibleTask - We already do in `AbstractFuture`, and we're about to do so in `AggregateFutureState`.
defect
work around samsung x atomic fieldupdater bug in interruptibletask we already do in abstractfuture and we re about to do so in aggregatefuturestate
1
47,428
13,056,181,355
IssuesEvent
2020-07-30 03:54:35
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
phys-services: Missing pybindings (Trac #541)
Migrated from Trac booking defect
These classes could likely use some fancy shmancy pybindings in the phys-services project: I3CutValuesStd I3CascadeCutValues I3CascadeCutValuesStd Migrated from https://code.icecube.wisc.edu/ticket/541 ```json { "status": "closed", "changetime": "2014-11-23T03:37:56", "description": "These classes could likely use some fancy shmancy pybindings in the phys-services project:\n\nI3CutValuesStd\nI3CascadeCutValues\nI3CascadeCutValuesStd", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "_ts": "1416713876862109", "component": "booking", "summary": "phys-services: Missing pybindings", "priority": "normal", "keywords": "", "time": "2009-03-11T20:18:29", "milestone": "", "owner": "troy", "type": "defect" } ```
1.0
phys-services: Missing pybindings (Trac #541) - These classes could likely use some fancy shmancy pybindings in the phys-services project: I3CutValuesStd I3CascadeCutValues I3CascadeCutValuesStd Migrated from https://code.icecube.wisc.edu/ticket/541 ```json { "status": "closed", "changetime": "2014-11-23T03:37:56", "description": "These classes could likely use some fancy shmancy pybindings in the phys-services project:\n\nI3CutValuesStd\nI3CascadeCutValues\nI3CascadeCutValuesStd", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "_ts": "1416713876862109", "component": "booking", "summary": "phys-services: Missing pybindings", "priority": "normal", "keywords": "", "time": "2009-03-11T20:18:29", "milestone": "", "owner": "troy", "type": "defect" } ```
defect
phys services missing pybindings trac these classes could likely use some fancy shmancy pybindings in the phys services project migrated from json status closed changetime description these classes could likely use some fancy shmancy pybindings in the phys services project n reporter blaufuss cc resolution fixed ts component booking summary phys services missing pybindings priority normal keywords time milestone owner troy type defect
1
13,814
2,784,102,924
IssuesEvent
2015-05-07 07:14:43
sylingd/phpsocks5
https://api.github.com/repos/sylingd/phpsocks5
closed
程序不能正常工作
auto-migrated Priority-Medium Type-Defect
``` 我已经部署好了该程序,但是使用firefox 用socks5代理会显示空页.我根据其他人的反馈以为是firefox超时 .我换用了polipo firefox显示502 Server dropped connection .请帮忙看一下是什么问题 ``` Original issue reported on code.google.com by `fenggao...@gmail.com` on 4 Apr 2011 at 5:05 Attachments: * [phpsocks5_log.log](https://storage.googleapis.com/google-code-attachments/phpsocks5/issue-20/comment-0/phpsocks5_log.log) * [socks5err.log](https://storage.googleapis.com/google-code-attachments/phpsocks5/issue-20/comment-0/socks5err.log)
1.0
程序不能正常工作 - ``` 我已经部署好了该程序,但是使用firefox 用socks5代理会显示空页.我根据其他人的反馈以为是firefox超时 .我换用了polipo firefox显示502 Server dropped connection .请帮忙看一下是什么问题 ``` Original issue reported on code.google.com by `fenggao...@gmail.com` on 4 Apr 2011 at 5:05 Attachments: * [phpsocks5_log.log](https://storage.googleapis.com/google-code-attachments/phpsocks5/issue-20/comment-0/phpsocks5_log.log) * [socks5err.log](https://storage.googleapis.com/google-code-attachments/phpsocks5/issue-20/comment-0/socks5err.log)
defect
程序不能正常工作 我已经部署好了该程序 但是使用firefox 我根据其他人的反馈以为是firefox超时 我换用了polipo server dropped connection 请帮忙看一下是什么问题 original issue reported on code google com by fenggao gmail com on apr at attachments
1
328,328
28,114,491,159
IssuesEvent
2023-03-31 09:42:05
wazuh/wazuh-qa
https://api.github.com/repos/wazuh/wazuh-qa
closed
Debian Linux 11 SCA policy - checks 5 to 5.2.22
team/qa feature/sca dev-testing subteam/qa-main level/task type/test
| Target version | Related issue | Related PR | |---|---|---| | 4.4.x | #3825 | https://github.com/wazuh/wazuh/pull/16017 | |Check Id and Name| Status| Extra| |---|---|---| |5 Access, Authentication and Authorization||| |5.1 Configure time-based job schedulers||| |5.1.1 Ensure cron daemon is enabled and running (Automated)|🟢|| |5.1.2 Ensure permissions on /etc/crontab are configured (Automated)|🟢|| |5.1.3 Ensure permissions on /etc/cron.hourly are configured (Automated)|🟢|| |5.1.4 Ensure permissions on /etc/cron.daily are configured (Automated)|🟢|| |5.1.5 Ensure permissions on /etc/cron.weekly are configured (Automated)|🟢|| |5.1.6 Ensure permissions on /etc/cron.monthly are configured (Automated)|🟢|| |5.1.7 Ensure permissions on /etc/cron.d are configured (Automated)|🟢|| |5.1.8 Ensure cron is restricted to authorized users (Automated)|🟢|| |5.1.9 Ensure at is restricted to authorized users (Automated)|🟢|| |||| |5.2 Configure SSH Server||| |5.2.1 Ensure permissions on /etc/ssh/sshd_config are configured (Automated)|🟢|| |5.2.2 Ensure permissions on SSH private host key files are configured (Automated)|⚫|| |5.2.3 Ensure permissions on SSH public host key files are configured (Automated)|⚫|| |5.2.4 Ensure SSH access is limited (Automated)|🟢|| |5.2.5 Ensure SSH LogLevel is appropriate (Automated)|🟢|| |5.2.6 Ensure SSH PAM is enabled (Automated)|🟢|| |5.2.7 Ensure SSH root login is disabled (Automated)|🟢|| |5.2.8 Ensure SSH HostbasedAuthentication is disabled (Automated)|🟢|| |5.2.9 Ensure SSH PermitEmptyPasswords is disabled (Automated)|🟢|| |5.2.10 Ensure SSH PermitUserEnvironment is disabled (Automated)|🟢|| |5.2.11 Ensure SSH IgnoreRhosts is enabled (Automated)|🟢|| |5.2.12 Ensure SSH X11 forwarding is disabled (Automated)|🟢|| |5.2.13 Ensure only strong Ciphers are used (Automated)|🟢|| |5.2.14 Ensure only strong MAC algorithms are used (Automated)|🟢|| |5.2.15 Ensure only strong Key Exchange algorithms are used (Automated)|🟢|| |5.2.16 Ensure SSH AllowTcpForwarding is disabled (Automated)|🟢|| |5.2.17 Ensure SSH warning banner is configured (Automated)|🟢|| |5.2.18 Ensure SSH MaxAuthTries is set to 4 or less (Automated)|🟢|| |5.2.19 Ensure SSH MaxStartups is configured (Automated)|🟢|| |5.2.20 Ensure SSH MaxSessions is set to 10 or less (Automated)|🟢|| |5.2.21 Ensure SSH LoginGraceTime is set to one minute or less (Automated)|🟢|| |5.2.22 Ensure SSH Idle Timeout Interval is configured (Automated)|🟢||
2.0
Debian Linux 11 SCA policy - checks 5 to 5.2.22 - | Target version | Related issue | Related PR | |---|---|---| | 4.4.x | #3825 | https://github.com/wazuh/wazuh/pull/16017 | |Check Id and Name| Status| Extra| |---|---|---| |5 Access, Authentication and Authorization||| |5.1 Configure time-based job schedulers||| |5.1.1 Ensure cron daemon is enabled and running (Automated)|🟢|| |5.1.2 Ensure permissions on /etc/crontab are configured (Automated)|🟢|| |5.1.3 Ensure permissions on /etc/cron.hourly are configured (Automated)|🟢|| |5.1.4 Ensure permissions on /etc/cron.daily are configured (Automated)|🟢|| |5.1.5 Ensure permissions on /etc/cron.weekly are configured (Automated)|🟢|| |5.1.6 Ensure permissions on /etc/cron.monthly are configured (Automated)|🟢|| |5.1.7 Ensure permissions on /etc/cron.d are configured (Automated)|🟢|| |5.1.8 Ensure cron is restricted to authorized users (Automated)|🟢|| |5.1.9 Ensure at is restricted to authorized users (Automated)|🟢|| |||| |5.2 Configure SSH Server||| |5.2.1 Ensure permissions on /etc/ssh/sshd_config are configured (Automated)|🟢|| |5.2.2 Ensure permissions on SSH private host key files are configured (Automated)|⚫|| |5.2.3 Ensure permissions on SSH public host key files are configured (Automated)|⚫|| |5.2.4 Ensure SSH access is limited (Automated)|🟢|| |5.2.5 Ensure SSH LogLevel is appropriate (Automated)|🟢|| |5.2.6 Ensure SSH PAM is enabled (Automated)|🟢|| |5.2.7 Ensure SSH root login is disabled (Automated)|🟢|| |5.2.8 Ensure SSH HostbasedAuthentication is disabled (Automated)|🟢|| |5.2.9 Ensure SSH PermitEmptyPasswords is disabled (Automated)|🟢|| |5.2.10 Ensure SSH PermitUserEnvironment is disabled (Automated)|🟢|| |5.2.11 Ensure SSH IgnoreRhosts is enabled (Automated)|🟢|| |5.2.12 Ensure SSH X11 forwarding is disabled (Automated)|🟢|| |5.2.13 Ensure only strong Ciphers are used (Automated)|🟢|| |5.2.14 Ensure only strong MAC algorithms are used (Automated)|🟢|| |5.2.15 Ensure only strong Key Exchange algorithms are used (Automated)|🟢|| |5.2.16 Ensure SSH AllowTcpForwarding is disabled (Automated)|🟢|| |5.2.17 Ensure SSH warning banner is configured (Automated)|🟢|| |5.2.18 Ensure SSH MaxAuthTries is set to 4 or less (Automated)|🟢|| |5.2.19 Ensure SSH MaxStartups is configured (Automated)|🟢|| |5.2.20 Ensure SSH MaxSessions is set to 10 or less (Automated)|🟢|| |5.2.21 Ensure SSH LoginGraceTime is set to one minute or less (Automated)|🟢|| |5.2.22 Ensure SSH Idle Timeout Interval is configured (Automated)|🟢||
non_defect
debian linux sca policy checks to target version related issue related pr x check id and name status extra access authentication and authorization configure time based job schedulers ensure cron daemon is enabled and running automated 🟢 ensure permissions on etc crontab are configured automated 🟢 ensure permissions on etc cron hourly are configured automated 🟢 ensure permissions on etc cron daily are configured automated 🟢 ensure permissions on etc cron weekly are configured automated 🟢 ensure permissions on etc cron monthly are configured automated 🟢 ensure permissions on etc cron d are configured automated 🟢 ensure cron is restricted to authorized users automated 🟢 ensure at is restricted to authorized users automated 🟢 configure ssh server ensure permissions on etc ssh sshd config are configured automated 🟢 ensure permissions on ssh private host key files are configured automated ⚫ ensure permissions on ssh public host key files are configured automated ⚫ ensure ssh access is limited automated 🟢 ensure ssh loglevel is appropriate automated 🟢 ensure ssh pam is enabled automated 🟢 ensure ssh root login is disabled automated 🟢 ensure ssh hostbasedauthentication is disabled automated 🟢 ensure ssh permitemptypasswords is disabled automated 🟢 ensure ssh permituserenvironment is disabled automated 🟢 ensure ssh ignorerhosts is enabled automated 🟢 ensure ssh forwarding is disabled automated 🟢 ensure only strong ciphers are used automated 🟢 ensure only strong mac algorithms are used automated 🟢 ensure only strong key exchange algorithms are used automated 🟢 ensure ssh allowtcpforwarding is disabled automated 🟢 ensure ssh warning banner is configured automated 🟢 ensure ssh maxauthtries is set to or less automated 🟢 ensure ssh maxstartups is configured automated 🟢 ensure ssh maxsessions is set to or less automated 🟢 ensure ssh logingracetime is set to one minute or less automated 🟢 ensure ssh idle timeout interval is configured automated 🟢
0
54,072
13,382,881,344
IssuesEvent
2020-09-02 09:28:04
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Turbo Table Column Resize Is Ignored If Smaller Than minWidth [expand mode]
LTS-PORTABLE defect
**I'm submitting a ...** (check one with "x") ``` [ x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Current behavior** Set [style.minWidth] for a resizeable column Try to resize the column to be narrower than its min-width Result: resize is ignored, the column has the same width as before the resize. There is a closed issue similar to this one. https://github.com/primefaces/primeng/issues/5937 But that only solved the issue in fit mode. The issue is still there in Expand mode. Can we fix expand mode as well? You can easily repro it even in the documentation **Expected behavior** The column should be resized to its min-width. **What is the motivation / use case for changing the behavior?** This behavior is not user friendly. * **Angular version:** 5.X 7 * **PrimeNG version:** 5.X 7.10
1.0
Turbo Table Column Resize Is Ignored If Smaller Than minWidth [expand mode] - **I'm submitting a ...** (check one with "x") ``` [ x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Current behavior** Set [style.minWidth] for a resizeable column Try to resize the column to be narrower than its min-width Result: resize is ignored, the column has the same width as before the resize. There is a closed issue similar to this one. https://github.com/primefaces/primeng/issues/5937 But that only solved the issue in fit mode. The issue is still there in Expand mode. Can we fix expand mode as well? You can easily repro it even in the documentation **Expected behavior** The column should be resized to its min-width. **What is the motivation / use case for changing the behavior?** This behavior is not user friendly. * **Angular version:** 5.X 7 * **PrimeNG version:** 5.X 7.10
defect
turbo table column resize is ignored if smaller than minwidth i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see current behavior set for a resizeable column try to resize the column to be narrower than its min width result resize is ignored the column has the same width as before the resize there is a closed issue similar to this one but that only solved the issue in fit mode the issue is still there in expand mode can we fix expand mode as well you can easily repro it even in the documentation expected behavior the column should be resized to its min width what is the motivation use case for changing the behavior this behavior is not user friendly angular version x primeng version x
1
15,648
2,867,996,360
IssuesEvent
2015-06-05 16:15:47
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
bad `method.node` value with analyzer 0.25.0+1 and Dart-1.10.0
Area-Analyzer NeedsInfo Priority-Medium Type-Defect
*This issue was originally filed by @a14n* _____ Through source_gen-0.4.0+1 I get a bad method.node result when analyzing the following code: &nbsp;&nbsp;abstract class \_Test implements JsInterface { &nbsp;&nbsp;&nbsp;&nbsp;String m1() =&gt; null; &nbsp;&nbsp;&nbsp;&nbsp;m2(); &nbsp;&nbsp;} For the methodElement corresponding to `m2` I get `String m1();` for `methodElement.node.toString()` and `m2() → dynamic` for `methodElement.toString()`. Note that if m2 is explicitly defined as `dynamic m2()` `methodElement.node.toString()` is correct.
1.0
bad `method.node` value with analyzer 0.25.0+1 and Dart-1.10.0 - *This issue was originally filed by @a14n* _____ Through source_gen-0.4.0+1 I get a bad method.node result when analyzing the following code: &nbsp;&nbsp;abstract class \_Test implements JsInterface { &nbsp;&nbsp;&nbsp;&nbsp;String m1() =&gt; null; &nbsp;&nbsp;&nbsp;&nbsp;m2(); &nbsp;&nbsp;} For the methodElement corresponding to `m2` I get `String m1();` for `methodElement.node.toString()` and `m2() → dynamic` for `methodElement.toString()`. Note that if m2 is explicitly defined as `dynamic m2()` `methodElement.node.toString()` is correct.
defect
bad method node value with analyzer and dart this issue was originally filed by through source gen i get a bad method node result when analyzing the following code nbsp nbsp abstract class test implements jsinterface nbsp nbsp nbsp nbsp string gt null nbsp nbsp nbsp nbsp nbsp nbsp for the methodelement corresponding to i get string for methodelement node tostring and → dynamic for methodelement tostring note that if is explicitly defined as dynamic methodelement node tostring is correct
1
93,430
8,415,378,204
IssuesEvent
2018-10-13 14:09:23
pandas-dev/pandas
https://api.github.com/repos/pandas-dev/pandas
closed
CI/TST: hypothesis tests regularly fail due time-out
CI Testing
I have seen the following a few times: ``` ______________ TestDataFrameAggregate.test_frequency_is_original _______________ [gw0] linux -- Python 3.7.0 /home/travis/miniconda3/envs/pandas/bin/python self = <pandas.tests.frame.test_apply.TestDataFrameAggregate object at 0x7f17a4ae6ef0> @given(index=indices(5), num_columns=integers(0, 5)) > def test_frequency_is_original(self, index, num_columns): pandas/tests/frame/test_apply.py:1159: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/core.py:695: in evaluate_test_data result = self.execute(data, collect=True) ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/core.py:610: in execute result = self.test_runner(data, run) ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/executors.py:58: in default_new_style_executor return function(data) ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/core.py:606: in run return test(*args, **kwargs) pandas/tests/frame/test_apply.py:1159: in test_frequency_is_original def test_frequency_is_original(self, index, num_columns): ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/core.py:566: in test runtime, ceil(runtime / 100) * 100, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ message = 'Test took 298.69ms to run. In future the default deadline setting will be 200ms, which will make this an error. You c...ue of e.g. 300 to turn tests slower than this into an error, or you can set it to None to disable this check entirely.' s = settings(buffer_size=8192, database=DirectoryBasedExampleDatabase('/home/travis/build/jorisvandenbossche/pandas/.hypot...step_count=50, suppress_health_check=[HealthCheck.too_slow], timeout=60, use_coverage=True, verbosity=Verbosity.normal) def note_deprecation(message, s=None): # type: (str, settings) -> None if s is None: s = settings.default assert s is not None verbosity = s.verbosity warning = HypothesisDeprecationWarning(message) if verbosity > Verbosity.quiet: > warnings.warn(warning, stacklevel=3) E hypothesis.errors.HypothesisDeprecationWarning: Test took 298.69ms to run. In future the default deadline setting will be 200ms, which will make this an error. You can set deadline to an explicit value of e.g. 300 to turn tests slower than this into an error, or you can set it to None to disable this check entirely. ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/_settings.py:828: HypothesisDeprecationWarning ---------------------------------- Hypothesis ---------------------------------- You can add @seed(169950230187726778200653375172917819369) to this test or run pytest with --hypothesis-seed=169950230187726778200653375172917819369 to reproduce this failure. ```
1.0
CI/TST: hypothesis tests regularly fail due time-out - I have seen the following a few times: ``` ______________ TestDataFrameAggregate.test_frequency_is_original _______________ [gw0] linux -- Python 3.7.0 /home/travis/miniconda3/envs/pandas/bin/python self = <pandas.tests.frame.test_apply.TestDataFrameAggregate object at 0x7f17a4ae6ef0> @given(index=indices(5), num_columns=integers(0, 5)) > def test_frequency_is_original(self, index, num_columns): pandas/tests/frame/test_apply.py:1159: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/core.py:695: in evaluate_test_data result = self.execute(data, collect=True) ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/core.py:610: in execute result = self.test_runner(data, run) ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/executors.py:58: in default_new_style_executor return function(data) ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/core.py:606: in run return test(*args, **kwargs) pandas/tests/frame/test_apply.py:1159: in test_frequency_is_original def test_frequency_is_original(self, index, num_columns): ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/core.py:566: in test runtime, ceil(runtime / 100) * 100, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ message = 'Test took 298.69ms to run. In future the default deadline setting will be 200ms, which will make this an error. You c...ue of e.g. 300 to turn tests slower than this into an error, or you can set it to None to disable this check entirely.' s = settings(buffer_size=8192, database=DirectoryBasedExampleDatabase('/home/travis/build/jorisvandenbossche/pandas/.hypot...step_count=50, suppress_health_check=[HealthCheck.too_slow], timeout=60, use_coverage=True, verbosity=Verbosity.normal) def note_deprecation(message, s=None): # type: (str, settings) -> None if s is None: s = settings.default assert s is not None verbosity = s.verbosity warning = HypothesisDeprecationWarning(message) if verbosity > Verbosity.quiet: > warnings.warn(warning, stacklevel=3) E hypothesis.errors.HypothesisDeprecationWarning: Test took 298.69ms to run. In future the default deadline setting will be 200ms, which will make this an error. You can set deadline to an explicit value of e.g. 300 to turn tests slower than this into an error, or you can set it to None to disable this check entirely. ../../../miniconda3/envs/pandas/lib/python3.7/site-packages/hypothesis/_settings.py:828: HypothesisDeprecationWarning ---------------------------------- Hypothesis ---------------------------------- You can add @seed(169950230187726778200653375172917819369) to this test or run pytest with --hypothesis-seed=169950230187726778200653375172917819369 to reproduce this failure. ```
non_defect
ci tst hypothesis tests regularly fail due time out i have seen the following a few times testdataframeaggregate test frequency is original linux python home travis envs pandas bin python self given index indices num columns integers def test frequency is original self index num columns pandas tests frame test apply py envs pandas lib site packages hypothesis core py in evaluate test data result self execute data collect true envs pandas lib site packages hypothesis core py in execute result self test runner data run envs pandas lib site packages hypothesis executors py in default new style executor return function data envs pandas lib site packages hypothesis core py in run return test args kwargs pandas tests frame test apply py in test frequency is original def test frequency is original self index num columns envs pandas lib site packages hypothesis core py in test runtime ceil runtime message test took to run in future the default deadline setting will be which will make this an error you c ue of e g to turn tests slower than this into an error or you can set it to none to disable this check entirely s settings buffer size database directorybasedexampledatabase home travis build jorisvandenbossche pandas hypot step count suppress health check timeout use coverage true verbosity verbosity normal def note deprecation message s none type str settings none if s is none s settings default assert s is not none verbosity s verbosity warning hypothesisdeprecationwarning message if verbosity verbosity quiet warnings warn warning stacklevel e hypothesis errors hypothesisdeprecationwarning test took to run in future the default deadline setting will be which will make this an error you can set deadline to an explicit value of e g to turn tests slower than this into an error or you can set it to none to disable this check entirely envs pandas lib site packages hypothesis settings py hypothesisdeprecationwarning hypothesis you can add seed to this test or run pytest with hypothesis seed to reproduce this failure
0
54,513
13,755,155,734
IssuesEvent
2020-10-06 18:00:56
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
resilvering starts instantly after finishing in an infinite loop
Status: Triage Needed Type: Defect
(This is also a copy of my [question](https://unix.stackexchange.com/questions/612997/zfs-pool-in-permanent-resilvering-loop-unable-to-detach-remove-devices) on unix stackoverflow) <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | archlinux Distribution Version | Linux Kernel | 5.8.12-arch1-1 Architecture | x86_64 ZFS Version | 0.8.4-1 SPL Version | 0.8.4-1 <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ### Describe the problem you're observing My pool: ❯ zpool status pool: wdblack state: ONLINE status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Mon Oct 5 13:35:14 2020 2.84T scanned at 586M/s, 852G issued at 172M/s, 9.52T total 66.5M resilvered, 8.74% done, 0 days 14:44:07 to go remove: Removal of vdev 0 copied 5.99T in 15h17m, completed on Sat Jun 6 03:15:41 2020 29.5M memory used for removed device mappings config: NAME STATE READ WRITE CKSUM wdblack ONLINE 0 0 0 wwn-0x50014ee26390e982 ONLINE 0 0 82 wwn-0x5000cca27ec99833-part1 ONLINE 0 0 264K (resilvering) ata-WDC_WD80EZAZ-11TDBA0_JEH52XAN ONLINE 0 0 0 ata-GOODRAM_C100_FF180744082600124110 ONLINE 0 0 0 sdg ONLINE 0 0 0 The -part1 has been in resilvering loop for a week now. It starts itself every time. Zfs features: NAME PROPERTY VALUE SOURCE wdblack type filesystem - wdblack creation Sat Mar 9 10:54 2019 - wdblack used 9.56T - wdblack available 10.5T - wdblack referenced 96K - wdblack compressratio 1.00x - wdblack mounted yes - wdblack quota none default wdblack reservation none default wdblack recordsize 128K local wdblack mountpoint /home/agilob/disk local wdblack sharenfs off default wdblack checksum on default wdblack compression off default wdblack atime off local wdblack devices on default wdblack exec on default wdblack setuid on default wdblack readonly off default wdblack zoned off default wdblack snapdir hidden default wdblack aclinherit restricted default wdblack createtxg 1 - wdblack canmount on default wdblack xattr on default wdblack copies 1 default wdblack version 5 - wdblack utf8only off - wdblack normalization none - wdblack casesensitivity sensitive - wdblack vscan off default wdblack nbmand off default wdblack sharesmb off default wdblack refquota none default wdblack refreservation none default wdblack guid 5626685650647801653 - wdblack primarycache all default wdblack secondarycache all default wdblack usedbysnapshots 0B - wdblack usedbydataset 96K - wdblack usedbychildren 9.56T - wdblack usedbyrefreservation 0B - wdblack logbias latency default wdblack objsetid 51 - wdblack dedup off default wdblack mlslabel none default wdblack sync standard default wdblack dnodesize legacy default wdblack refcompressratio 1.00x - wdblack written 96K - wdblack logicalused 9.56T - wdblack logicalreferenced 42K - wdblack volmode default default wdblack filesystem_limit none default wdblack snapshot_limit none default wdblack filesystem_count none default wdblack snapshot_count none default wdblack snapdev hidden default wdblack acltype off default wdblack context none default wdblack fscontext none default wdblack defcontext none default wdblack rootcontext none default wdblack relatime off local wdblack redundant_metadata all default wdblack overlay off default wdblack encryption off default wdblack keylocation none default wdblack keyformat none default wdblack pbkdf2iters 0 default wdblack special_small_blocks 0 default I'm unable to detach any device, even those that aren't resilvering ❯ sudo zpool detach wdblack /dev/sdg cannot detach /dev/sdg: only applicable to mirror and replacing vdevs ❯ sudo zpool detach wdblack wwn-0x5000cca27ec99833-part1 cannot detach wwn-0x5000cca27ec99833-part1: only applicable to mirror and replacing vdevs Can't remove then either ❯ sudo zpool remove wdblack wwn-0x5000cca27ec99833-part1 cannot remove wwn-0x5000cca27ec99833-part1: Pool busy; removal may already be in progress ❯ sudo zpool remove wdblack sdg cannot remove sdg: invalid config; all top-level vdevs must have the same sector size and not be raidz. If I remember correctly, resilvering has started itself after starting scrubing, which can't be changed: ❯ sudo zpool scrub -s wdblack cannot cancel scrubbing wdblack: currently resilvering ❯ sudo zpool scrub wdblack cannot scrub wdblack: currently resilvering `zpool status wdblack` reports there are files damaged, but the number changes "randomly" it's usually between 7 (most often) and over 3000000, right now it's `errors: 259134 data errors, use '-v' for a list` over 300000 is number of my all files on the pool! Right now it reports over 259k errors, but when I sudo `zpool status wdblack -v` it can only print 7 files, not 259k. I'm using archlinux, at the time of writing Linux 5.8.12-arch1-1 #1 SMP PREEMPT Sat, 26 Sep 2020 21:42:58 +0000 x86_64 GNU/Linux ❯ zfs version zfs-0.8.4-1 zfs-kmod-0.8.4-1 Resilver starts instantly after finishing Oct 6 2020 18:41:03.798523850 sysevent.fs.zfs.resilver_finish version = 0x0 class = "sysevent.fs.zfs.resilver_finish" pool = "wdblack" pool_guid = 0x509d876228a22ecc pool_state = 0x0 pool_context = 0x0 time = 0x5f7cac2f 0x2f9881ca eid = 0x27e4e Oct 6 2020 18:41:03.798523850 sysevent.fs.zfs.history_event version = 0x0 class = "sysevent.fs.zfs.history_event" pool = "wdblack" pool_guid = 0x509d876228a22ecc pool_state = 0x0 pool_context = 0x0 history_internal_str = "errors=159477" history_internal_name = "starting deferred resilver" history_txg = 0x7246f3 history_time = 0x5f7cac2f time = 0x5f7cac2f 0x2f9881ca eid = 0x27e4f Oct 6 2020 18:41:09.005195427 sysevent.fs.zfs.resilver_start version = 0x0 class = "sysevent.fs.zfs.resilver_start" pool = "wdblack" pool_guid = 0x509d876228a22ecc pool_state = 0x0 pool_context = 0x0 time = 0x5f7cac35 0x4f46a3 eid = 0x27e50 ### Describe how to reproduce the problem No idea. ### Include any warning/errors/backtraces from the system logs None.
1.0
resilvering starts instantly after finishing in an infinite loop - (This is also a copy of my [question](https://unix.stackexchange.com/questions/612997/zfs-pool-in-permanent-resilvering-loop-unable-to-detach-remove-devices) on unix stackoverflow) <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | archlinux Distribution Version | Linux Kernel | 5.8.12-arch1-1 Architecture | x86_64 ZFS Version | 0.8.4-1 SPL Version | 0.8.4-1 <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ### Describe the problem you're observing My pool: ❯ zpool status pool: wdblack state: ONLINE status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Mon Oct 5 13:35:14 2020 2.84T scanned at 586M/s, 852G issued at 172M/s, 9.52T total 66.5M resilvered, 8.74% done, 0 days 14:44:07 to go remove: Removal of vdev 0 copied 5.99T in 15h17m, completed on Sat Jun 6 03:15:41 2020 29.5M memory used for removed device mappings config: NAME STATE READ WRITE CKSUM wdblack ONLINE 0 0 0 wwn-0x50014ee26390e982 ONLINE 0 0 82 wwn-0x5000cca27ec99833-part1 ONLINE 0 0 264K (resilvering) ata-WDC_WD80EZAZ-11TDBA0_JEH52XAN ONLINE 0 0 0 ata-GOODRAM_C100_FF180744082600124110 ONLINE 0 0 0 sdg ONLINE 0 0 0 The -part1 has been in resilvering loop for a week now. It starts itself every time. 
Zfs features: NAME PROPERTY VALUE SOURCE wdblack type filesystem - wdblack creation Sat Mar 9 10:54 2019 - wdblack used 9.56T - wdblack available 10.5T - wdblack referenced 96K - wdblack compressratio 1.00x - wdblack mounted yes - wdblack quota none default wdblack reservation none default wdblack recordsize 128K local wdblack mountpoint /home/agilob/disk local wdblack sharenfs off default wdblack checksum on default wdblack compression off default wdblack atime off local wdblack devices on default wdblack exec on default wdblack setuid on default wdblack readonly off default wdblack zoned off default wdblack snapdir hidden default wdblack aclinherit restricted default wdblack createtxg 1 - wdblack canmount on default wdblack xattr on default wdblack copies 1 default wdblack version 5 - wdblack utf8only off - wdblack normalization none - wdblack casesensitivity sensitive - wdblack vscan off default wdblack nbmand off default wdblack sharesmb off default wdblack refquota none default wdblack refreservation none default wdblack guid 5626685650647801653 - wdblack primarycache all default wdblack secondarycache all default wdblack usedbysnapshots 0B - wdblack usedbydataset 96K - wdblack usedbychildren 9.56T - wdblack usedbyrefreservation 0B - wdblack logbias latency default wdblack objsetid 51 - wdblack dedup off default wdblack mlslabel none default wdblack sync standard default wdblack dnodesize legacy default wdblack refcompressratio 1.00x - wdblack written 96K - wdblack logicalused 9.56T - wdblack logicalreferenced 42K - wdblack volmode default default wdblack filesystem_limit none default wdblack snapshot_limit none default wdblack filesystem_count none default wdblack snapshot_count none default wdblack snapdev hidden default wdblack acltype off default wdblack context none default wdblack fscontext none default wdblack defcontext none default wdblack rootcontext none default wdblack relatime off local wdblack redundant_metadata all default wdblack overlay off 
default wdblack encryption off default wdblack keylocation none default wdblack keyformat none default wdblack pbkdf2iters 0 default wdblack special_small_blocks 0 default I'm unable to detach any device, even those that aren't resilvering ❯ sudo zpool detach wdblack /dev/sdg cannot detach /dev/sdg: only applicable to mirror and replacing vdevs ❯ sudo zpool detach wdblack wwn-0x5000cca27ec99833-part1 cannot detach wwn-0x5000cca27ec99833-part1: only applicable to mirror and replacing vdevs Can't remove then either ❯ sudo zpool remove wdblack wwn-0x5000cca27ec99833-part1 cannot remove wwn-0x5000cca27ec99833-part1: Pool busy; removal may already be in progress ❯ sudo zpool remove wdblack sdg cannot remove sdg: invalid config; all top-level vdevs must have the same sector size and not be raidz. If I remember correctly, resilvering has started itself after starting scrubing, which can't be changed: ❯ sudo zpool scrub -s wdblack cannot cancel scrubbing wdblack: currently resilvering ❯ sudo zpool scrub wdblack cannot scrub wdblack: currently resilvering `zpool status wdblack` reports there are files damaged, but the number changes "randomly" it's usually between 7 (most often) and over 3000000, right now it's `errors: 259134 data errors, use '-v' for a list` over 300000 is number of my all files on the pool! Right now it reports over 259k errors, but when I sudo `zpool status wdblack -v` it can only print 7 files, not 259k. 
I'm using archlinux, at the time of writing Linux 5.8.12-arch1-1 #1 SMP PREEMPT Sat, 26 Sep 2020 21:42:58 +0000 x86_64 GNU/Linux ❯ zfs version zfs-0.8.4-1 zfs-kmod-0.8.4-1 Resilver starts instantly after finishing Oct 6 2020 18:41:03.798523850 sysevent.fs.zfs.resilver_finish version = 0x0 class = "sysevent.fs.zfs.resilver_finish" pool = "wdblack" pool_guid = 0x509d876228a22ecc pool_state = 0x0 pool_context = 0x0 time = 0x5f7cac2f 0x2f9881ca eid = 0x27e4e Oct 6 2020 18:41:03.798523850 sysevent.fs.zfs.history_event version = 0x0 class = "sysevent.fs.zfs.history_event" pool = "wdblack" pool_guid = 0x509d876228a22ecc pool_state = 0x0 pool_context = 0x0 history_internal_str = "errors=159477" history_internal_name = "starting deferred resilver" history_txg = 0x7246f3 history_time = 0x5f7cac2f time = 0x5f7cac2f 0x2f9881ca eid = 0x27e4f Oct 6 2020 18:41:09.005195427 sysevent.fs.zfs.resilver_start version = 0x0 class = "sysevent.fs.zfs.resilver_start" pool = "wdblack" pool_guid = 0x509d876228a22ecc pool_state = 0x0 pool_context = 0x0 time = 0x5f7cac35 0x4f46a3 eid = 0x27e50 ### Describe how to reproduce the problem No idea. ### Include any warning/errors/backtraces from the system logs None.
defect
resilvering starts instantly after finishing in an infinite loop this is also a copy of my on unix stackoverflow thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name archlinux distribution version linux kernel architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing my pool ❯ zpool status pool wdblack state online status one or more devices is currently being resilvered the pool will continue to function possibly in a degraded state action wait for the resilver to complete scan resilver in progress since mon oct scanned at s issued at s total resilvered done days to go remove removal of vdev copied in completed on sat jun memory used for removed device mappings config name state read write cksum wdblack online wwn online wwn online resilvering ata wdc online ata goodram online sdg online the has been in resilvering loop for a week now it starts itself every time zfs features name property value source wdblack type filesystem wdblack creation sat mar wdblack used wdblack available wdblack referenced wdblack compressratio wdblack mounted yes wdblack quota none default wdblack reservation none default wdblack recordsize local wdblack mountpoint home agilob disk local wdblack sharenfs off default wdblack checksum on default wdblack compression off default wdblack atime off local wdblack devices on default wdblack exec on default wdblack setuid on default wdblack readonly off default wdblack zoned off default wdblack snapdir hidden default wdblack aclinherit restricted default wdblack createtxg wdblack canmount on default wdblack xattr on default wdblack copies default wdblack version wdblack off wdblack 
normalization none wdblack casesensitivity sensitive wdblack vscan off default wdblack nbmand off default wdblack sharesmb off default wdblack refquota none default wdblack refreservation none default wdblack guid wdblack primarycache all default wdblack secondarycache all default wdblack usedbysnapshots wdblack usedbydataset wdblack usedbychildren wdblack usedbyrefreservation wdblack logbias latency default wdblack objsetid wdblack dedup off default wdblack mlslabel none default wdblack sync standard default wdblack dnodesize legacy default wdblack refcompressratio wdblack written wdblack logicalused wdblack logicalreferenced wdblack volmode default default wdblack filesystem limit none default wdblack snapshot limit none default wdblack filesystem count none default wdblack snapshot count none default wdblack snapdev hidden default wdblack acltype off default wdblack context none default wdblack fscontext none default wdblack defcontext none default wdblack rootcontext none default wdblack relatime off local wdblack redundant metadata all default wdblack overlay off default wdblack encryption off default wdblack keylocation none default wdblack keyformat none default wdblack default wdblack special small blocks default i m unable to detach any device even those that aren t resilvering ❯ sudo zpool detach wdblack dev sdg cannot detach dev sdg only applicable to mirror and replacing vdevs ❯ sudo zpool detach wdblack wwn cannot detach wwn only applicable to mirror and replacing vdevs can t remove then either ❯ sudo zpool remove wdblack wwn cannot remove wwn pool busy removal may already be in progress ❯ sudo zpool remove wdblack sdg cannot remove sdg invalid config all top level vdevs must have the same sector size and not be raidz if i remember correctly resilvering has started itself after starting scrubing which can t be changed ❯ sudo zpool scrub s wdblack cannot cancel scrubbing wdblack currently resilvering ❯ sudo zpool scrub wdblack cannot scrub wdblack 
currently resilvering zpool status wdblack reports there are files damaged but the number changes randomly it s usually between most often and over right now it s errors data errors use v for a list over is number of my all files on the pool right now it reports over errors but when i sudo zpool status wdblack v it can only print files not i m using archlinux at the time of writing linux smp preempt sat sep gnu linux ❯ zfs version zfs zfs kmod resilver starts instantly after finishing oct sysevent fs zfs resilver finish version class sysevent fs zfs resilver finish pool wdblack pool guid pool state pool context time eid oct sysevent fs zfs history event version class sysevent fs zfs history event pool wdblack pool guid pool state pool context history internal str errors history internal name starting deferred resilver history txg history time time eid oct sysevent fs zfs resilver start version class sysevent fs zfs resilver start pool wdblack pool guid pool state pool context time eid describe how to reproduce the problem no idea include any warning errors backtraces from the system logs none
1
17,123
2,974,600,413
IssuesEvent
2015-07-15 02:15:09
Reimashi/jotai
https://api.github.com/repos/Reimashi/jotai
closed
Hard disk not seen
auto-migrated Priority-Medium Type-Defect
``` What is the expected output? What do you see instead? Temperature of my second hard disk. Don't see the disk. What version of the product are you using? On what operating system? 0.3.2 Beta Windows 7 Home Premium 64-Bit SP1 Please provide any additional information below. - Mainboard ASUS P5N72-T PREMIUM - not seen: 2nd disk Samsung HD1003SJ on ide mode nforce sata (nvidia driver) - seen: 1st disk OCZ-VERTEX2 3.5 120.0GB on MSI Star USB3/SATA6 (ms driver) Both drives are seen with other tools (i.e. CrystalDiskInfo 4.0.1). Additinal system info see: http://www.sysprofile.de/id79017 Please attach a Report created with "File / Save Report...". ``` Original issue reported on code.google.com by `matthias...@arcor.de` on 2 Jun 2011 at 7:30 Attachments: * [OpenHardwareMonitor.Report-20110602.txt](https://storage.googleapis.com/google-code-attachments/open-hardware-monitor/issue-232/comment-0/OpenHardwareMonitor.Report-20110602.txt)
1.0
Hard disk not seen - ``` What is the expected output? What do you see instead? Temperature of my second hard disk. Don't see the disk. What version of the product are you using? On what operating system? 0.3.2 Beta Windows 7 Home Premium 64-Bit SP1 Please provide any additional information below. - Mainboard ASUS P5N72-T PREMIUM - not seen: 2nd disk Samsung HD1003SJ on ide mode nforce sata (nvidia driver) - seen: 1st disk OCZ-VERTEX2 3.5 120.0GB on MSI Star USB3/SATA6 (ms driver) Both drives are seen with other tools (i.e. CrystalDiskInfo 4.0.1). Additinal system info see: http://www.sysprofile.de/id79017 Please attach a Report created with "File / Save Report...". ``` Original issue reported on code.google.com by `matthias...@arcor.de` on 2 Jun 2011 at 7:30 Attachments: * [OpenHardwareMonitor.Report-20110602.txt](https://storage.googleapis.com/google-code-attachments/open-hardware-monitor/issue-232/comment-0/OpenHardwareMonitor.Report-20110602.txt)
defect
hard disk not seen what is the expected output what do you see instead temperature of my second hard disk don t see the disk what version of the product are you using on what operating system beta windows home premium bit please provide any additional information below mainboard asus t premium not seen disk samsung on ide mode nforce sata nvidia driver seen disk ocz on msi star ms driver both drives are seen with other tools i e crystaldiskinfo additinal system info see please attach a report created with file save report original issue reported on code google com by matthias arcor de on jun at attachments
1
9,760
3,069,922,081
IssuesEvent
2015-08-18 23:16:19
marklogic/marklogic-sesame
https://api.github.com/repos/marklogic/marklogic-sesame
closed
Adding triples of format ntriples fails
Bug minor test
Adding triples of format ntriples fails with exception. con.add(MarkLogicRepositoryConnectionTest.class.getResourceAsStream( "family-tree.nt"), "", RDFFormat.NTRIPLES, dirgraph); com.marklogic.client.FailedRequestException: Local message: failed to apply resource at graphs: Internal Server Error. Server Message: XDMP-DOCBADCHAR: xdmp:nquad($body, $options) -- Unexpected character found: '' (0xfeff) at :1:0 . See the MarkLogic server error log for further detail. at com.marklogic.client.impl.JerseyServices.checkStatus(JerseyServices.java:4600) at com.marklogic.client.impl.JerseyServices.postResource(JerseyServices.java:3494) at com.marklogic.client.impl.JerseyServices.mergeGraph(JerseyServices.java:5429) at com.marklogic.client.impl.GraphManagerImpl.merge(GraphManagerImpl.java:207) at com.marklogic.client.impl.GraphManagerImpl.merge(GraphManagerImpl.java:193) at com.marklogic.semantics.sesame.client.MarkLogicClientImpl.performAdd(MarkLogicClientImpl.java:244) at com.marklogic.semantics.sesame.client.MarkLogicClient.sendAdd(MarkLogicClient.java:138) at com.marklogic.semantics.sesame.MarkLogicRepositoryConnection.add(MarkLogicRepositoryConnection.java:500) at com.marklogic.sesame.functionaltests.MarkLogicRepositoryConnectionTest.testPrepareQuery1(MarkLogicRepositoryConnectionTest.java:1074) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
1.0
Adding triples of format ntriples fails - Adding triples of format ntriples fails with exception. con.add(MarkLogicRepositoryConnectionTest.class.getResourceAsStream( "family-tree.nt"), "", RDFFormat.NTRIPLES, dirgraph); com.marklogic.client.FailedRequestException: Local message: failed to apply resource at graphs: Internal Server Error. Server Message: XDMP-DOCBADCHAR: xdmp:nquad($body, $options) -- Unexpected character found: '' (0xfeff) at :1:0 . See the MarkLogic server error log for further detail. at com.marklogic.client.impl.JerseyServices.checkStatus(JerseyServices.java:4600) at com.marklogic.client.impl.JerseyServices.postResource(JerseyServices.java:3494) at com.marklogic.client.impl.JerseyServices.mergeGraph(JerseyServices.java:5429) at com.marklogic.client.impl.GraphManagerImpl.merge(GraphManagerImpl.java:207) at com.marklogic.client.impl.GraphManagerImpl.merge(GraphManagerImpl.java:193) at com.marklogic.semantics.sesame.client.MarkLogicClientImpl.performAdd(MarkLogicClientImpl.java:244) at com.marklogic.semantics.sesame.client.MarkLogicClient.sendAdd(MarkLogicClient.java:138) at com.marklogic.semantics.sesame.MarkLogicRepositoryConnection.add(MarkLogicRepositoryConnection.java:500) at com.marklogic.sesame.functionaltests.MarkLogicRepositoryConnectionTest.testPrepareQuery1(MarkLogicRepositoryConnectionTest.java:1074) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
non_defect
adding triples of format ntriples fails adding triples of format ntriples fails with exception con add marklogicrepositoryconnectiontest class getresourceasstream family tree nt rdfformat ntriples dirgraph com marklogic client failedrequestexception local message failed to apply resource at graphs internal server error server message xdmp docbadchar xdmp nquad body options unexpected character found  at see the marklogic server error log for further detail at com marklogic client impl jerseyservices checkstatus jerseyservices java at com marklogic client impl jerseyservices postresource jerseyservices java at com marklogic client impl jerseyservices mergegraph jerseyservices java at com marklogic client impl graphmanagerimpl merge graphmanagerimpl java at com marklogic client impl graphmanagerimpl merge graphmanagerimpl java at com marklogic semantics sesame client marklogicclientimpl performadd marklogicclientimpl java at com marklogic semantics sesame client marklogicclient sendadd marklogicclient java at com marklogic semantics sesame marklogicrepositoryconnection add marklogicrepositoryconnection java at com marklogic sesame functionaltests marklogicrepositoryconnectiontest marklogicrepositoryconnectiontest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners 
parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner run parentrunner java at org eclipse jdt internal runner run java at org eclipse jdt internal junit runner testexecution run testexecution java at org eclipse jdt internal junit runner remotetestrunner runtests remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner runtests remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner run remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner main remotetestrunner java
0
14,074
2,789,885,649
IssuesEvent
2015-05-08 22:10:15
google/google-visualization-api-issues
https://api.github.com/repos/google/google-visualization-api-issues
closed
Memory leak when using gauges
Priority-Medium Type-Defect
Original [issue 425](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=425) created by orwant on 2010-10-08T09:38:31.000Z: <b>What steps will reproduce the problem? Please provide a link to a</b> <b>demonstration page if at all possible, or attach code.</b> Visiting the below page shows that a memory leak exists in the gauges when they are redrawn. http://code.google.com/apis/visualization/documentation/gallery/gauge.html#Methods <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> Gauges <b>Are you using the test environment (version 1.1)?</b> <b>(If you are not sure, answer NO)</b> NO <b>What operating system and browser are you using?</b> Windows 7, IE v8.0.7600.16385 &amp; Chrome 6.0.472.63 <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
1.0
Memory leak when using gauges - Original [issue 425](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=425) created by orwant on 2010-10-08T09:38:31.000Z: <b>What steps will reproduce the problem? Please provide a link to a</b> <b>demonstration page if at all possible, or attach code.</b> Visiting the below page shows that a memory leak exists in the gauges when they are redrawn. http://code.google.com/apis/visualization/documentation/gallery/gauge.html#Methods <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> Gauges <b>Are you using the test environment (version 1.1)?</b> <b>(If you are not sure, answer NO)</b> NO <b>What operating system and browser are you using?</b> Windows 7, IE v8.0.7600.16385 &amp; Chrome 6.0.472.63 <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
defect
memory leak when using gauges original created by orwant on what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code visiting the below page shows that a memory leak exists in the gauges when they are redrawn what component is this issue related to piechart linechart datatable query etc gauges are you using the test environment version if you are not sure answer no no what operating system and browser are you using windows ie amp chrome for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
1
71,252
23,508,215,741
IssuesEvent
2022-08-18 14:19:27
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
p:panel without header generation results in missing border-top
:lady_beetle: defect :bangbang: needs-triage
### Describe the bug I created my own theme and that may be involved, but this is what I'm seeing. It wasn't an issue in PF 8 with one of the community themes. If I put a p:panel around some content I expect to get a box on the page, with all four borders. What I am getting is a box with no top border. Its in the style that way (border: 1px solid #color and then border-top: 0). Which makes sense if the panel has a header facet because the header will have a bottom border which blends into the non-existent top border. But if no header facet is defined, there's no top border and I have to add a style to the panel to fix the issue, hard coding the style and color. If I had to change the theme again, I'd also have to search my program for all references to that code and update it, which should be avoided. Like I said, I used to get boxes with p:panel w/o a header facet, now I don't. Seems like a bug to me. ### Reproducer _No response_ ### Expected behavior A p:panel without a header facet should render a box with all four borders. ### PrimeFaces edition Community ### PrimeFaces version 11.0.0 ### Theme Custom ### JSF implementation Mojarra ### JSF version 2.3 ### Browser(s) Chrome and Edge
1.0
p:panel without header generation results in missing border-top - ### Describe the bug I created my own theme and that may be involved, but this is what I'm seeing. It wasn't an issue in PF 8 with one of the community themes. If I put a p:panel around some content I expect to get a box on the page, with all four borders. What I am getting is a box with no top border. Its in the style that way (border: 1px solid #color and then border-top: 0). Which makes sense if the panel has a header facet because the header will have a bottom border which blends into the non-existent top border. But if no header facet is defined, there's no top border and I have to add a style to the panel to fix the issue, hard coding the style and color. If I had to change the theme again, I'd also have to search my program for all references to that code and update it, which should be avoided. Like I said, I used to get boxes with p:panel w/o a header facet, now I don't. Seems like a bug to me. ### Reproducer _No response_ ### Expected behavior A p:panel without a header facet should render a box with all four borders. ### PrimeFaces edition Community ### PrimeFaces version 11.0.0 ### Theme Custom ### JSF implementation Mojarra ### JSF version 2.3 ### Browser(s) Chrome and Edge
defect
p panel without header generation results in missing border top describe the bug i created my own theme and that may be involved but this is what i m seeing it wasn t an issue in pf with one of the community themes if i put a p panel around some content i expect to get a box on the page with all four borders what i am getting is a box with no top border its in the style that way border solid color and then border top which makes sense if the panel has a header facet because the header will have a bottom border which blends into the non existent top border but if no header facet is defined there s no top border and i have to add a style to the panel to fix the issue hard coding the style and color if i had to change the theme again i d also have to search my program for all references to that code and update it which should be avoided like i said i used to get boxes with p panel w o a header facet now i don t seems like a bug to me reproducer no response expected behavior a p panel without a header facet should render a box with all four borders primefaces edition community primefaces version theme custom jsf implementation mojarra jsf version browser s chrome and edge
1
65,850
19,720,463,003
IssuesEvent
2022-01-13 14:55:53
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
spoiler is shown at notifcation
T-Defect
### Steps to reproduce * let someone send a spoiler in a room, where you get notifications * look at the notification the notification: ![image](https://user-images.githubusercontent.com/44570204/149347670-b62f22e0-3c85-425b-a467-77408b93f4e9.png) in element: ![image](https://user-images.githubusercontent.com/44570204/149347840-80f3339f-7343-4e93-87ae-67cb759ea1da.png) ### Outcome #### What did you expect? Show a notification that a spoiler was send, without the message content. #### What happened instead? A message with the content of the spoiler is shown. ### Operating system Manjaro ### Application version Element version: 1.9.8 ### How did you install the app? https://archlinux.org/packages/community/x86_64/element-desktop/ ### Homeserver _No response_ ### Will you send logs? No
1.0
spoiler is shown at notifcation - ### Steps to reproduce * let someone send a spoiler in a room, where you get notifications * look at the notification the notification: ![image](https://user-images.githubusercontent.com/44570204/149347670-b62f22e0-3c85-425b-a467-77408b93f4e9.png) in element: ![image](https://user-images.githubusercontent.com/44570204/149347840-80f3339f-7343-4e93-87ae-67cb759ea1da.png) ### Outcome #### What did you expect? Show a notification that a spoiler was send, without the message content. #### What happened instead? A message with the content of the spoiler is shown. ### Operating system Manjaro ### Application version Element version: 1.9.8 ### How did you install the app? https://archlinux.org/packages/community/x86_64/element-desktop/ ### Homeserver _No response_ ### Will you send logs? No
defect
spoiler is shown at notifcation steps to reproduce let someone send a spoiler in a room where you get notifications look at the notification the notification in element outcome what did you expect show a notification that a spoiler was send without the message content what happened instead a message with the content of the spoiler is shown operating system manjaro application version element version how did you install the app homeserver no response will you send logs no
1
2,191
2,603,977,724
IssuesEvent
2015-02-24 19:01:57
chrsmith/nishazi6
https://api.github.com/repos/chrsmith/nishazi6
opened
沈阳沈阳疱疹治疗医治
auto-migrated Priority-Medium Type-Defect
``` 沈阳沈阳疱疹治疗医治〓沈陽軍區政治部醫院性病〓TEL:024-3 1023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。� ��于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌� ��歷史悠久、設備精良、技術權威、專家云集,是預防、保健 、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲�� �部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、� ��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空 軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體�� �等功。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:29
1.0
沈阳沈阳疱疹治疗医治 - ``` 沈阳沈阳疱疹治疗医治〓沈陽軍區政治部醫院性病〓TEL:024-3 1023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。� ��于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌� ��歷史悠久、設備精良、技術權威、專家云集,是預防、保健 、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲�� �部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、� ��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空 軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體�� �等功。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:29
defect
沈阳沈阳疱疹治疗医治 沈阳沈阳疱疹治疗医治〓沈陽軍區政治部醫院性病〓tel: 〓 , 。� �� 。是一所與新中國同建立共輝煌� ��歷史悠久、設備精良、技術權威、專家云集,是預防、保健 、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲�� �部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、� ��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空 軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體�� �等功。 original issue reported on code google com by gmail com on jun at
1
578,543
17,147,471,982
IssuesEvent
2021-07-13 16:05:22
ballerina-platform/ballerina-standard-library
https://api.github.com/repos/ballerina-platform/ballerina-standard-library
closed
Get NPE when doing a outbound call
Priority/High Team/PCP Type/Bug module/http
```ballerina import ballerina/http; service / on new http:Listener(8090) { resource function get stats/[string country](http:Caller caller, http:Request request) returns error? { http:Client covid19Client = check new ("https://disease.sh"); string path1 = string `/v3/covid-19/countries/${country}`; CovidCountry statusByCountry = check covid19Client->get(path1); decimal totalCases = statusByCountry?.cases ?: 0d; http:Client worldBankClient = check new ("http://api.worldbank.org/v2"); string path2 = string `/country/${country}/indicator/SP.POP.TOTL`; CountryPopulation[] populationByCountry = check worldBankClient->get(path2); // int population = (populationByCountry[0]?.value ?: 0) / 1000000; // decimal totalCasesPerMillion = totalCases / <decimal>population; // string countryName = country; // json payload = {country : countryName, totalCasesPerMillion : totalCasesPerMillion}; check caller->respond(totalCases); } } # Covid-19 status of the given country public type CovidCountry record { # Last updated timestamp decimal updated?; # Country name string country?; # Country information record { # Country Id decimal _id?; # Country ISO2 code string iso2?; # Country ISO3 code string iso3?; # Latitude decimal lat?; # Longtitude decimal long?; # URL for the country flag string flag?;} countryInfo?; # Total cases decimal cases?; # Today cases decimal todayCases?; # Total deaths decimal deaths?; # Today deaths decimal todayDeaths?; # Total recovered decimal recovered?; # Today recovered decimal todayRecovered?; # Active cases decimal active?; # Critical cases decimal critical?; # Cases per one million decimal casesPerOneMillion?; # Deaths per one million decimal deathsPerOneMillion?; # Total number of Covid-19 tests administered decimal tests?; # Covid-19 tests for one million decimal testsPerOneMillion?; # Total population decimal population?; # Continent name string continent?; # One case per people decimal oneCasePerPeople?; # One death per people decimal 
oneDeathPerPeople?; # One test per people decimal oneTestPerPeople?; # Active cases per one million decimal activePerOneMillion?; # Recovered cases per one million decimal recoveredPerOneMillion?; # Critical cases per one million decimal criticalPerOneMillion?; }; public type CountryPopulation record { # World bank indicator Indicator indicator?; # Country Country country?; # Date-range by year, month or quarter that scopes the result-set. string date?; # Country population int? value?; }; # Data indicator public type Indicator record { # Id of the indicator string id?; # Value represent by the indicator string value?; }; # Represent a Country public type Country record { # Country code string id?; # Country name string value?; }; ``` Request ``` curl -v http://localhost:8090/stats/LK ```
1.0
Get NPE when doing a outbound call - ```ballerina import ballerina/http; service / on new http:Listener(8090) { resource function get stats/[string country](http:Caller caller, http:Request request) returns error? { http:Client covid19Client = check new ("https://disease.sh"); string path1 = string `/v3/covid-19/countries/${country}`; CovidCountry statusByCountry = check covid19Client->get(path1); decimal totalCases = statusByCountry?.cases ?: 0d; http:Client worldBankClient = check new ("http://api.worldbank.org/v2"); string path2 = string `/country/${country}/indicator/SP.POP.TOTL`; CountryPopulation[] populationByCountry = check worldBankClient->get(path2); // int population = (populationByCountry[0]?.value ?: 0) / 1000000; // decimal totalCasesPerMillion = totalCases / <decimal>population; // string countryName = country; // json payload = {country : countryName, totalCasesPerMillion : totalCasesPerMillion}; check caller->respond(totalCases); } } # Covid-19 status of the given country public type CovidCountry record { # Last updated timestamp decimal updated?; # Country name string country?; # Country information record { # Country Id decimal _id?; # Country ISO2 code string iso2?; # Country ISO3 code string iso3?; # Latitude decimal lat?; # Longtitude decimal long?; # URL for the country flag string flag?;} countryInfo?; # Total cases decimal cases?; # Today cases decimal todayCases?; # Total deaths decimal deaths?; # Today deaths decimal todayDeaths?; # Total recovered decimal recovered?; # Today recovered decimal todayRecovered?; # Active cases decimal active?; # Critical cases decimal critical?; # Cases per one million decimal casesPerOneMillion?; # Deaths per one million decimal deathsPerOneMillion?; # Total number of Covid-19 tests administered decimal tests?; # Covid-19 tests for one million decimal testsPerOneMillion?; # Total population decimal population?; # Continent name string continent?; # One case per people decimal oneCasePerPeople?; # One death 
per people decimal oneDeathPerPeople?; # One test per people decimal oneTestPerPeople?; # Active cases per one million decimal activePerOneMillion?; # Recovered cases per one million decimal recoveredPerOneMillion?; # Critical cases per one million decimal criticalPerOneMillion?; }; public type CountryPopulation record { # World bank indicator Indicator indicator?; # Country Country country?; # Date-range by year, month or quarter that scopes the result-set. string date?; # Country population int? value?; }; # Data indicator public type Indicator record { # Id of the indicator string id?; # Value represent by the indicator string value?; }; # Represent a Country public type Country record { # Country code string id?; # Country name string value?; }; ``` Request ``` curl -v http://localhost:8090/stats/LK ```
non_defect
get npe when doing a outbound call ballerina import ballerina http service on new http listener resource function get stats http caller caller http request request returns error http client check new string string covid countries country covidcountry statusbycountry check get decimal totalcases statusbycountry cases http client worldbankclient check new string string country country indicator sp pop totl countrypopulation populationbycountry check worldbankclient get int population populationbycountry value decimal totalcasespermillion totalcases population string countryname country json payload country countryname totalcasespermillion totalcasespermillion check caller respond totalcases covid status of the given country public type covidcountry record last updated timestamp decimal updated country name string country country information record country id decimal id country code string country code string latitude decimal lat longtitude decimal long url for the country flag string flag countryinfo total cases decimal cases today cases decimal todaycases total deaths decimal deaths today deaths decimal todaydeaths total recovered decimal recovered today recovered decimal todayrecovered active cases decimal active critical cases decimal critical cases per one million decimal casesperonemillion deaths per one million decimal deathsperonemillion total number of covid tests administered decimal tests covid tests for one million decimal testsperonemillion total population decimal population continent name string continent one case per people decimal onecaseperpeople one death per people decimal onedeathperpeople one test per people decimal onetestperpeople active cases per one million decimal activeperonemillion recovered cases per one million decimal recoveredperonemillion critical cases per one million decimal criticalperonemillion public type countrypopulation record world bank indicator indicator indicator country country country date range by year month or quarter 
that scopes the result set string date country population int value data indicator public type indicator record id of the indicator string id value represent by the indicator string value represent a country public type country record country code string id country name string value request curl v
0
824,092
31,141,141,865
IssuesEvent
2023-08-16 00:09:28
googleapis/python-firestore
https://api.github.com/repos/googleapis/python-firestore
closed
Continuous fuzzing integration with Google's OSS-Fuzz
api: firestore type: feature request priority: p3
Hi, I was wondering if you would like to integrate continuous fuzzing by way of OSS-Fuzz? Fuzzing is a way to automate test-case generation and has been heavily used for memory unsafe languages. Recently efforts have been put into fuzzing memory safe languages and Python is one of the languages where it would be great to use fuzzing. In this https://github.com/google/oss-fuzz/pull/8014 I did an initial integration into OSS-Fuzz. OSS-Fuzz is a free service run by Google that performs continuous fuzzing of important open source projects. Since this is also a Google project you should be able to reach out internally e.g. to the Google Open Source Security team (GOSST) if you have any questions. If you would like to integrate, the only thing I need is a list of email(s) that will get access to the data produced by OSS-Fuzz, such as bug reports, coverage reports and more stats. Notice the emails affiliated with the project will be public in the OSS-Fuzz repo, as they will be part of a configuration file.
1.0
Continuous fuzzing integration with Google's OSS-Fuzz - Hi, I was wondering if you would like to integrate continuous fuzzing by way of OSS-Fuzz? Fuzzing is a way to automate test-case generation and has been heavily used for memory unsafe languages. Recently efforts have been put into fuzzing memory safe languages and Python is one of the languages where it would be great to use fuzzing. In this https://github.com/google/oss-fuzz/pull/8014 I did an initial integration into OSS-Fuzz. OSS-Fuzz is a free service run by Google that performs continuous fuzzing of important open source projects. Since this is also a Google project you should be able to reach out internally e.g. to the Google Open Source Security team (GOSST) if you have any questions. If you would like to integrate, the only thing I need is a list of email(s) that will get access to the data produced by OSS-Fuzz, such as bug reports, coverage reports and more stats. Notice the emails affiliated with the project will be public in the OSS-Fuzz repo, as they will be part of a configuration file.
non_defect
continuous fuzzing integration with google s oss fuzz hi i was wondering if you would like to integrate continuous fuzzing by way of oss fuzz fuzzing is a way to automate test case generation and has been heavily used for memory unsafe languages recently efforts have been put into fuzzing memory safe languages and python is one of the languages where it would be great to use fuzzing in this i did an initial integration into oss fuzz oss fuzz is a free service run by google that performs continuous fuzzing of important open source projects since this is also a google project you should be able to reach out internally e g to the google open source security team gosst if you have any questions if you would like to integrate the only thing i need is a list of email s that will get access to the data produced by oss fuzz such as bug reports coverage reports and more stats notice the emails affiliated with the project will be public in the oss fuzz repo as they will be part of a configuration file
0
35,141
7,602,701,126
IssuesEvent
2018-04-29 05:09:03
DotJoshJohnson/vscode-xml
https://api.github.com/repos/DotJoshJohnson/vscode-xml
closed
Should not remove newlines from inside XML tag
(XML Formatter) Defect
We have some XML whose values contain newlines. The auto-format feature is removing those newlines when saving. This causes us to lose important formatting. Please do not replace newlines inside of XML values.
1.0
Should not remove newlines from inside XML tag - We have some XML whose values contain newlines. The auto-format feature is removing those newlines when saving. This causes us to lose important formatting. Please do not replace newlines inside of XML values.
defect
should not remove newlines from inside xml tag we have some xml whose values contain newlines the auto format feature is removing those newlines when saving this causes us to lose important formatting please do not replace newlines inside of xml values
1
267,639
20,238,449,533
IssuesEvent
2022-02-14 06:23:35
binance-chain/bsc
https://api.github.com/repos/binance-chain/bsc
closed
Can't sync the last 64-100 blocks(
documentation question
I can't sync the last 64-100 blocks, what am I doing wrong? Someone can tell me where and how I can sync the node without problems? **list of my servers on which I am trying sync the node** 
vultr Bare Metal 8 cores / 16 threads @ 3.7 GHz RAM 128 GB 10 Gbps Network 1.9 TB NVMe SSD 
AWS m5zn.3xlarge 12 vCPUs RAM 48.0 GiB volum gp3 IOPS 15000 Throughput (MiB/s) 950 
AWS i3.2xlarge 8 vCPUs RAM 61.0 GiB 1900 GB NVMe SSD 
AWS Dedicated Hosts i3en.2xlarge 8 CPU RAM 64.0 GiB RAID0 5000 GB (2 * 2500 GB NVMe SSD) DigitalOcean Memory-Optimized 16 vCPUs RAM 128 GB RAID0 2.34 TB 6x SSD DigitalOcean 8 vCPUs RAM 16 GB Memory RAID0 5 volume **Config and disk tests one of my servers ** (
 AWS Dedicated Hosts i3en.2xlarge 8 CPU RAM 64.0 GiB RAID0 5000 GB (2 * 2500 GB NVMe SSD) ) instance: Geth/v1.1.7-74f6b613/linux-amd64/go1.16.10 at block: 0 (Mon Apr 20 2020 13:46:54 GMT+0000 (UTC)) modules: debug:1.0 eth:1.0 net:1.0 personal:1.0 rpc:1.0 web3:1.0 To exit, press ctrl-d > net.peerCount 62 > eth.syncing { currentBlock: 14220573, highestBlock: 14220648, knownStates: 1604080420, pulledStates: 1603983829, startingBlock: 14217784 } > **Сommand to start a node** `--syncmode fast --http --http.addr 0.0.0.0 --http.vhosts '*' --cache 40960 --http.api 'eth,net,web3,personal,debug' --rpc.allow-unprotected-txs --allow-insecure-unlock --txlookuplimit 0 --pprof --pprof.addr=0.0.0.0 --pprof.port=6060 --metrics` **config.toml** `[Eth] NetworkId = 56 NoPruning = false NoPrefetch = false LightPeers = 100 UltraLightFraction = 75 TrieTimeout = 100000000000 EnablePreimageRecording = false EWASMInterpreter = "" EVMInterpreter = "" [Eth.Miner] GasFloor = 30000000 GasCeil = 40000000 GasPrice = 1000000000 Recommit = 10000000000 Noverify = false [Eth.TxPool] Locals = [] NoLocals = true Journal = "transactions.rlp" Rejournal = 3600000000000 PriceLimit = 1000000000 PriceBump = 10 AccountSlots = 512 GlobalSlots = 10000 AccountQueue = 256 GlobalQueue = 5000 Lifetime = 10800000000000 [Eth.GPO] Blocks = 20 Percentile = 60 OracleThreshold = 20 [Node] IPCPath = "geth.ipc" HTTPHost = "0.0.0.0" NoUSB = true InsecureUnlockAllowed = false HTTPPort = 8545 HTTPVirtualHosts = ["localhost"] HTTPModules = ["eth", "net", "web3", "txpool", "parlia"] WSPort = 8546 WSModules = ["net", "web3", "eth"] [Node.P2P] MaxPeers = 200 NoDiscovery = false BootstrapNodes = 
["enode://1cc4534b14cfe351ab740a1418ab944a234ca2f702915eadb7e558a02010cb7c5a8c295a3b56bcefa7701c07752acd5539cb13df2aab8ae2d98934d712611443@52.71.43.172:30311","enode://28b1d16562dac280dacaaf45d54516b85bc6c994252a9825c5cc4e080d3e53446d05f63ba495ea7d44d6c316b54cd92b245c5c328c37da24605c4a93a0d099c4@34.246.65.14:30311","enode://5a7b996048d1b0a07683a949662c87c09b55247ce774aeee10bb886892e586e3c604564393292e38ef43c023ee9981e1f8b335766ec4f0f256e57f8640b079d5@35.73.137.11:30311"] StaticNodes = ["enode://b208ecff0e78b0de84fbed72ce4a39590903ead22759dc77b560298550394dd723b232daa42c76596a6763447dc3a7951027cc5fb3f8085d42775023c9412c63@23.21.154.239:30311", "enode://bcccac7dfbd21bd1dbbb7bf64ef7af6986520091f00320905e5919d12167fc8c94698099946933747ad2ea24745d35ebc980f9ba1475a839c83574d5ccf318d9@34.196.94.250:30311", "enode://8661a9799cb9c202c87c895b06de689279fb3b9795b7a00a33be04d0651331a3f0603b1056fb2a8a9eeea76a4396f6fa8d29d03b4f3f29f34adf33fe0e96d905@44.197.67.153:30311", "enode://3fe3aca8482301a3b966e40e66b9a924ede7bb10b6f76e2c1c0aa421e293189e44d381c652d9f875ea7f865829e9a8e6a5d006e5a5e54176aa2069dd18b18ef6@23.22.69.150:30311", "enode://29af5f724433add3047a2f20b4c82ca587fe507c579531798a4bad25cbcbb0243d9da94d14f8388b80a7780568db39c1bee7a8862b8bebe7f8c6eb34246d5569@3.215.144.57:30311", "enode://82776830495703594e88b2225509a42517545ec53c3da93eea51eaa1556e3a1d56a2bd342aca5519261d40ae9ade8ff81289d7fa1a999e6e7f698aad0aa0fce6@34.236.200.111:30311", "enode://ffb8116321f8c01bc0f33852d95f8779cd2e29711c089026464907b55fdaed8d1261f49ab68f0012594633f290f005d284e4b49cb50eba7b3634f4901ec9fa7d@34.193.27.17:30311", "enode://97be405ab074c15eca1a82625f6516f08488f3c39033d72e81e98996e0f3cbc5c047e89276f953c27a08f20f41ec334e819096be543206368ae1f71c0a438b88@3.216.10.24:30311", "enode://bcdd46e1e49986d3ea31010933e7910409afcc7766dbb9335c9f27202a2361a621632a4b5a8e2bfa9bf2019a49a3658122c3e5d6e8d8a16a6984df91e7aed6c9@107.21.209.99:30311", 
"enode://75c5f28810dda7533bfe6dd3ba16b61c3dd2ab5198edbd8279c5e34b22615932ed826da146fdb11f62680ad238a2d961bb1c7eb6f63e2a78683584ad7e9e0103@107.20.8.154:30311"] ListenAddr = ":30311" EnableMsgEvents = false [Node.HTTPTimeouts] ReadTimeout = 30000000000 WriteTimeout = 30000000000 IdleTimeout = 120000000000 [Node.LogConfig] FilePath = "bsc.log" MaxBytesSize = 10485760 Level = "info" FileRoot = "" ` **Testing disk performance with utilities ioping and fio** ioping -c 100 --- test/ (ext4 /dev/md0) ioping statistics --- 99 requests completed in 32.1 ms, 396 KiB read, 3.09 k iops, 12.1 MiB/s generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s min/avg/max/mdev = 53.7 us / 323.8 us / 2.90 ms / 747.5 us fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75 fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 fio-3.16 Starting 1 process Jobs: 1 (f=1): [m(1)][100.0%][r=424MiB/s,w=141MiB/s][r=109k,w=36.2k IOPS][eta 00m:00s] fiotest: (groupid=0, jobs=1): err= 0: pid=12054: Sun Jan 9 17:09:39 2022 read: IOPS=109k, BW=425MiB/s (446MB/s)(6141MiB/14446msec) bw ( KiB/s): min=428912, max=438360, per=100.00%, avg=435327.82, stdev=1906.74, samples=28 iops : min=107228, max=109590, avg=108831.93, stdev=476.70, samples=28 write: IOPS=36.3k, BW=142MiB/s (149MB/s)(2051MiB/14446msec); 0 zone resets bw ( KiB/s): min=142504, max=147592, per=100.00%, avg=145367.86, stdev=1278.79, samples=28 iops : min=35626, max=36898, avg=36341.96, stdev=319.71, samples=28 cpu : usr=23.41%, sys=64.77%, ctx=424696, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued rwts: total=1572145,525007,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, 
window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: bw=425MiB/s (446MB/s), 425MiB/s-425MiB/s (446MB/s-446MB/s), io=6141MiB (6440MB), run=14446-14446msec WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=2051MiB (2150MB), run=14446-14446msec Disk stats (read/write): md0: ios=1566737/523226, merge=0/0, ticks=142500/10184, in_queue=152684, util=99.36%, aggrios=786072/262504, aggrmerge=0/0, aggrticks=73772/5866, aggrin_queue=79637, aggrutil=99.07% nvme2n1: ios=786291/262285, merge=0/0, ticks=72881/5722, in_queue=78603, util=99.06% nvme1n1: ios=785854/262724, merge=0/1, ticks=74663/6010, in_queue=80672, util=99.07% fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randread fiotest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 fio-3.16 Starting 1 process Jobs: 1 (f=1): [r(1)][100.0%][r=605MiB/s][r=155k IOPS][eta 00m:00s] fiotest: (groupid=0, jobs=1): err= 0: pid=12067: Sun Jan 9 17:11:00 2022 read: IOPS=155k, BW=606MiB/s (635MB/s)(8192MiB/13518msec) bw ( KiB/s): min=617944, max=622664, per=100.00%, avg=620542.56, stdev=1151.26, samples=27 iops : min=154486, max=155666, avg=155135.63, stdev=287.61, samples=27 cpu : usr=32.61%, sys=67.39%, ctx=28, majf=0, minf=73 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued rwts: total=2097152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: bw=606MiB/s (635MB/s), 606MiB/s-606MiB/s (635MB/s-635MB/s), io=8192MiB (8590MB), run=13518-13518msec Disk stats (read/write): md0: ios=2078610/0, merge=0/0, ticks=166488/0, in_queue=166488, util=99.30%, aggrios=1048576/0, aggrmerge=0/0, 
aggrticks=87131/0, aggrin_queue=87131, aggrutil=99.00% nvme2n1: ios=1048576/0, merge=0/0, ticks=86110/0, in_queue=86110, util=98.99% nvme1n1: ios=1048576/0, merge=0/0, ticks=88152/0, in_queue=88152, util=99.00% fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=fiotest --bs=4k --iodepth=64 --size=8G --readwrite=randwrite fiotest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 fio-3.16 Starting 1 process Jobs: 1 (f=1): [w(1)][100.0%][w=504MiB/s][w=129k IOPS][eta 00m:00s] fiotest: (groupid=0, jobs=1): err= 0: pid=12079: Sun Jan 9 17:12:03 2022 write: IOPS=125k, BW=490MiB/s (513MB/s)(8192MiB/16730msec); 0 zone resets bw ( KiB/s): min=124064, max=517392, per=99.96%, avg=501189.06, stdev=68829.52, samples=33 iops : min=31016, max=129348, avg=125297.24, stdev=17207.37, samples=33 cpu : usr=13.24%, sys=57.40%, ctx=1033830, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): WRITE: bw=490MiB/s (513MB/s), 490MiB/s-490MiB/s (513MB/s-513MB/s), io=8192MiB (8590MB), run=16730-16730msec Disk stats (read/write): md0: ios=5065/2086436, merge=0/0, ticks=432/40740, in_queue=41172, util=99.46%, aggrios=2532/1048576, aggrmerge=0/0, aggrticks=226/22662, aggrin_queue=22888, aggrutil=99.22% nvme2n1: ios=2408/1048576, merge=0/0, ticks=192/22242, in_queue=22434, util=99.22% nvme1n1: ios=2657/1048576, merge=0/0, ticks=260/23082, in_queue=23343, util=99.22%
1.0
Can't sync the last 64-100 blocks( - I can't sync the last 64-100 blocks, what am I doing wrong? Someone can tell me where and how I can sync the node without problems? **list of my servers on which I am trying sync the node** 
vultr Bare Metal 8 cores / 16 threads @ 3.7 GHz RAM 128 GB 10 Gbps Network 1.9 TB NVMe SSD 
AWS m5zn.3xlarge 12 vCPUs RAM 48.0 GiB volum gp3 IOPS 15000 Throughput (MiB/s) 950 
AWS i3.2xlarge 8 vCPUs RAM 61.0 GiB 1900 GB NVMe SSD 
AWS Dedicated Hosts i3en.2xlarge 8 CPU RAM 64.0 GiB RAID0 5000 GB (2 * 2500 GB NVMe SSD) DigitalOcean Memory-Optimized 16 vCPUs RAM 128 GB RAID0 2.34 TB 6x SSD DigitalOcean 8 vCPUs RAM 16 GB Memory RAID0 5 volume **Config and disk tests one of my servers ** (
 AWS Dedicated Hosts i3en.2xlarge 8 CPU RAM 64.0 GiB RAID0 5000 GB (2 * 2500 GB NVMe SSD) ) instance: Geth/v1.1.7-74f6b613/linux-amd64/go1.16.10 at block: 0 (Mon Apr 20 2020 13:46:54 GMT+0000 (UTC)) modules: debug:1.0 eth:1.0 net:1.0 personal:1.0 rpc:1.0 web3:1.0 To exit, press ctrl-d > net.peerCount 62 > eth.syncing { currentBlock: 14220573, highestBlock: 14220648, knownStates: 1604080420, pulledStates: 1603983829, startingBlock: 14217784 } > **Сommand to start a node** `--syncmode fast --http --http.addr 0.0.0.0 --http.vhosts '*' --cache 40960 --http.api 'eth,net,web3,personal,debug' --rpc.allow-unprotected-txs --allow-insecure-unlock --txlookuplimit 0 --pprof --pprof.addr=0.0.0.0 --pprof.port=6060 --metrics` **config.toml** `[Eth] NetworkId = 56 NoPruning = false NoPrefetch = false LightPeers = 100 UltraLightFraction = 75 TrieTimeout = 100000000000 EnablePreimageRecording = false EWASMInterpreter = "" EVMInterpreter = "" [Eth.Miner] GasFloor = 30000000 GasCeil = 40000000 GasPrice = 1000000000 Recommit = 10000000000 Noverify = false [Eth.TxPool] Locals = [] NoLocals = true Journal = "transactions.rlp" Rejournal = 3600000000000 PriceLimit = 1000000000 PriceBump = 10 AccountSlots = 512 GlobalSlots = 10000 AccountQueue = 256 GlobalQueue = 5000 Lifetime = 10800000000000 [Eth.GPO] Blocks = 20 Percentile = 60 OracleThreshold = 20 [Node] IPCPath = "geth.ipc" HTTPHost = "0.0.0.0" NoUSB = true InsecureUnlockAllowed = false HTTPPort = 8545 HTTPVirtualHosts = ["localhost"] HTTPModules = ["eth", "net", "web3", "txpool", "parlia"] WSPort = 8546 WSModules = ["net", "web3", "eth"] [Node.P2P] MaxPeers = 200 NoDiscovery = false BootstrapNodes = 
["enode://1cc4534b14cfe351ab740a1418ab944a234ca2f702915eadb7e558a02010cb7c5a8c295a3b56bcefa7701c07752acd5539cb13df2aab8ae2d98934d712611443@52.71.43.172:30311","enode://28b1d16562dac280dacaaf45d54516b85bc6c994252a9825c5cc4e080d3e53446d05f63ba495ea7d44d6c316b54cd92b245c5c328c37da24605c4a93a0d099c4@34.246.65.14:30311","enode://5a7b996048d1b0a07683a949662c87c09b55247ce774aeee10bb886892e586e3c604564393292e38ef43c023ee9981e1f8b335766ec4f0f256e57f8640b079d5@35.73.137.11:30311"] StaticNodes = ["enode://b208ecff0e78b0de84fbed72ce4a39590903ead22759dc77b560298550394dd723b232daa42c76596a6763447dc3a7951027cc5fb3f8085d42775023c9412c63@23.21.154.239:30311", "enode://bcccac7dfbd21bd1dbbb7bf64ef7af6986520091f00320905e5919d12167fc8c94698099946933747ad2ea24745d35ebc980f9ba1475a839c83574d5ccf318d9@34.196.94.250:30311", "enode://8661a9799cb9c202c87c895b06de689279fb3b9795b7a00a33be04d0651331a3f0603b1056fb2a8a9eeea76a4396f6fa8d29d03b4f3f29f34adf33fe0e96d905@44.197.67.153:30311", "enode://3fe3aca8482301a3b966e40e66b9a924ede7bb10b6f76e2c1c0aa421e293189e44d381c652d9f875ea7f865829e9a8e6a5d006e5a5e54176aa2069dd18b18ef6@23.22.69.150:30311", "enode://29af5f724433add3047a2f20b4c82ca587fe507c579531798a4bad25cbcbb0243d9da94d14f8388b80a7780568db39c1bee7a8862b8bebe7f8c6eb34246d5569@3.215.144.57:30311", "enode://82776830495703594e88b2225509a42517545ec53c3da93eea51eaa1556e3a1d56a2bd342aca5519261d40ae9ade8ff81289d7fa1a999e6e7f698aad0aa0fce6@34.236.200.111:30311", "enode://ffb8116321f8c01bc0f33852d95f8779cd2e29711c089026464907b55fdaed8d1261f49ab68f0012594633f290f005d284e4b49cb50eba7b3634f4901ec9fa7d@34.193.27.17:30311", "enode://97be405ab074c15eca1a82625f6516f08488f3c39033d72e81e98996e0f3cbc5c047e89276f953c27a08f20f41ec334e819096be543206368ae1f71c0a438b88@3.216.10.24:30311", "enode://bcdd46e1e49986d3ea31010933e7910409afcc7766dbb9335c9f27202a2361a621632a4b5a8e2bfa9bf2019a49a3658122c3e5d6e8d8a16a6984df91e7aed6c9@107.21.209.99:30311", 
"enode://75c5f28810dda7533bfe6dd3ba16b61c3dd2ab5198edbd8279c5e34b22615932ed826da146fdb11f62680ad238a2d961bb1c7eb6f63e2a78683584ad7e9e0103@107.20.8.154:30311"] ListenAddr = ":30311" EnableMsgEvents = false [Node.HTTPTimeouts] ReadTimeout = 30000000000 WriteTimeout = 30000000000 IdleTimeout = 120000000000 [Node.LogConfig] FilePath = "bsc.log" MaxBytesSize = 10485760 Level = "info" FileRoot = "" ` **Testing disk performance with utilities ioping and fio** ioping -c 100 --- test/ (ext4 /dev/md0) ioping statistics --- 99 requests completed in 32.1 ms, 396 KiB read, 3.09 k iops, 12.1 MiB/s generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s min/avg/max/mdev = 53.7 us / 323.8 us / 2.90 ms / 747.5 us fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75 fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 fio-3.16 Starting 1 process Jobs: 1 (f=1): [m(1)][100.0%][r=424MiB/s,w=141MiB/s][r=109k,w=36.2k IOPS][eta 00m:00s] fiotest: (groupid=0, jobs=1): err= 0: pid=12054: Sun Jan 9 17:09:39 2022 read: IOPS=109k, BW=425MiB/s (446MB/s)(6141MiB/14446msec) bw ( KiB/s): min=428912, max=438360, per=100.00%, avg=435327.82, stdev=1906.74, samples=28 iops : min=107228, max=109590, avg=108831.93, stdev=476.70, samples=28 write: IOPS=36.3k, BW=142MiB/s (149MB/s)(2051MiB/14446msec); 0 zone resets bw ( KiB/s): min=142504, max=147592, per=100.00%, avg=145367.86, stdev=1278.79, samples=28 iops : min=35626, max=36898, avg=36341.96, stdev=319.71, samples=28 cpu : usr=23.41%, sys=64.77%, ctx=424696, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued rwts: total=1572145,525007,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, 
window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: bw=425MiB/s (446MB/s), 425MiB/s-425MiB/s (446MB/s-446MB/s), io=6141MiB (6440MB), run=14446-14446msec WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=2051MiB (2150MB), run=14446-14446msec Disk stats (read/write): md0: ios=1566737/523226, merge=0/0, ticks=142500/10184, in_queue=152684, util=99.36%, aggrios=786072/262504, aggrmerge=0/0, aggrticks=73772/5866, aggrin_queue=79637, aggrutil=99.07% nvme2n1: ios=786291/262285, merge=0/0, ticks=72881/5722, in_queue=78603, util=99.06% nvme1n1: ios=785854/262724, merge=0/1, ticks=74663/6010, in_queue=80672, util=99.07% fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randread fiotest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 fio-3.16 Starting 1 process Jobs: 1 (f=1): [r(1)][100.0%][r=605MiB/s][r=155k IOPS][eta 00m:00s] fiotest: (groupid=0, jobs=1): err= 0: pid=12067: Sun Jan 9 17:11:00 2022 read: IOPS=155k, BW=606MiB/s (635MB/s)(8192MiB/13518msec) bw ( KiB/s): min=617944, max=622664, per=100.00%, avg=620542.56, stdev=1151.26, samples=27 iops : min=154486, max=155666, avg=155135.63, stdev=287.61, samples=27 cpu : usr=32.61%, sys=67.39%, ctx=28, majf=0, minf=73 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued rwts: total=2097152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): READ: bw=606MiB/s (635MB/s), 606MiB/s-606MiB/s (635MB/s-635MB/s), io=8192MiB (8590MB), run=13518-13518msec Disk stats (read/write): md0: ios=2078610/0, merge=0/0, ticks=166488/0, in_queue=166488, util=99.30%, aggrios=1048576/0, aggrmerge=0/0, 
aggrticks=87131/0, aggrin_queue=87131, aggrutil=99.00% nvme2n1: ios=1048576/0, merge=0/0, ticks=86110/0, in_queue=86110, util=98.99% nvme1n1: ios=1048576/0, merge=0/0, ticks=88152/0, in_queue=88152, util=99.00% fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=fiotest --bs=4k --iodepth=64 --size=8G --readwrite=randwrite fiotest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 fio-3.16 Starting 1 process Jobs: 1 (f=1): [w(1)][100.0%][w=504MiB/s][w=129k IOPS][eta 00m:00s] fiotest: (groupid=0, jobs=1): err= 0: pid=12079: Sun Jan 9 17:12:03 2022 write: IOPS=125k, BW=490MiB/s (513MB/s)(8192MiB/16730msec); 0 zone resets bw ( KiB/s): min=124064, max=517392, per=99.96%, avg=501189.06, stdev=68829.52, samples=33 iops : min=31016, max=129348, avg=125297.24, stdev=17207.37, samples=33 cpu : usr=13.24%, sys=57.40%, ctx=1033830, majf=0, minf=8 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% issued rwts: total=0,2097152,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=64 Run status group 0 (all jobs): WRITE: bw=490MiB/s (513MB/s), 490MiB/s-490MiB/s (513MB/s-513MB/s), io=8192MiB (8590MB), run=16730-16730msec Disk stats (read/write): md0: ios=5065/2086436, merge=0/0, ticks=432/40740, in_queue=41172, util=99.46%, aggrios=2532/1048576, aggrmerge=0/0, aggrticks=226/22662, aggrin_queue=22888, aggrutil=99.22% nvme2n1: ios=2408/1048576, merge=0/0, ticks=192/22242, in_queue=22434, util=99.22% nvme1n1: ios=2657/1048576, merge=0/0, ticks=260/23082, in_queue=23343, util=99.22%
non_defect
can t sync the last blocks i can t sync the last blocks what am i doing wrong someone can tell me where and how i can sync the node without problems list of my servers on which i am trying sync the node
vultr bare metal cores threads ghz ram gb gbps network tb nvme ssd 
aws vcpus ram gib volum iops throughput mib s 
aws vcpus ram gib gb nvme ssd 
aws dedicated hosts cpu ram gib gb gb nvme ssd digitalocean memory optimized vcpus ram gb tb ssd digitalocean vcpus ram gb memory volume config and disk tests one of my servers
 aws dedicated hosts cpu ram gib gb gb nvme ssd instance geth linux at block mon apr gmt utc modules debug eth net personal rpc to exit press ctrl d net peercount eth syncing currentblock highestblock knownstates pulledstates startingblock сommand to start a node syncmode fast http http addr http vhosts cache http api eth net personal debug rpc allow unprotected txs allow insecure unlock txlookuplimit pprof pprof addr pprof port metrics config toml networkid nopruning false noprefetch false lightpeers ultralightfraction trietimeout enablepreimagerecording false ewasminterpreter evminterpreter gasfloor gasceil gasprice recommit noverify false locals nolocals true journal transactions rlp rejournal pricelimit pricebump accountslots globalslots accountqueue globalqueue lifetime blocks percentile oraclethreshold ipcpath geth ipc httphost nousb true insecureunlockallowed false httpport httpvirtualhosts httpmodules wsport wsmodules maxpeers nodiscovery false bootstrapnodes staticnodes listenaddr enablemsgevents false readtimeout writetimeout idletimeout filepath bsc log maxbytessize level info fileroot testing disk performance with utilities ioping and fio ioping c test dev ioping statistics requests completed in ms kib read k iops mib s generated requests in min kib iops kib s min avg max mdev us us ms us fio randrepeat ioengine libaio direct gtod reduce name fiotest filename testfio bs iodepth size readwrite randrw rwmixread fiotest g rw randrw bs r w t ioengine libaio iodepth fio starting process jobs f fiotest groupid jobs err pid sun jan read iops bw s s bw kib s min max per avg stdev samples iops min max avg stdev samples write iops bw s s zone resets bw kib s min max per avg stdev samples iops min max avg stdev samples cpu usr sys ctx majf minf io depths submit complete issued rwts total short dropped latency target window percentile depth run status group all jobs read bw s s s s s s io run write bw s s s s s s io run disk stats read write ios merge ticks in 
queue util aggrios aggrmerge aggrticks aggrin queue aggrutil ios merge ticks in queue util ios merge ticks in queue util fio randrepeat ioengine libaio direct gtod reduce name fiotest filename testfio bs iodepth size readwrite randread fiotest g rw randread bs r w t ioengine libaio iodepth fio starting process jobs f fiotest groupid jobs err pid sun jan read iops bw s s bw kib s min max per avg stdev samples iops min max avg stdev samples cpu usr sys ctx majf minf io depths submit complete issued rwts total short dropped latency target window percentile depth run status group all jobs read bw s s s s s s io run disk stats read write ios merge ticks in queue util aggrios aggrmerge aggrticks aggrin queue aggrutil ios merge ticks in queue util ios merge ticks in queue util fio randrepeat ioengine libaio direct gtod reduce name fiotest filename fiotest bs iodepth size readwrite randwrite fiotest g rw randwrite bs r w t ioengine libaio iodepth fio starting process jobs f fiotest groupid jobs err pid sun jan write iops bw s s zone resets bw kib s min max per avg stdev samples iops min max avg stdev samples cpu usr sys ctx majf minf io depths submit complete issued rwts total short dropped latency target window percentile depth run status group all jobs write bw s s s s s s io run disk stats read write ios merge ticks in queue util aggrios aggrmerge aggrticks aggrin queue aggrutil ios merge ticks in queue util ios merge ticks in queue util
0
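The ioping/fio summaries quoted in the record above can be reduced to comparable numbers with a small helper. This is an illustrative parser, not part of fio; the regex assumes fio's `IOPS=<n>k` summary format (fio can also emit machine-readable output directly via `--output-format=json`):

```python
import re

def parse_fio_iops(line):
    # Extract the IOPS figure from an fio summary line such as
    # "read: IOPS=109k, BW=425MiB/s (446MB/s)".
    m = re.search(r"IOPS=(\d+(?:\.\d+)?)(k?)", line)
    if not m:
        return None
    value = float(m.group(1))
    # fio abbreviates thousands with a trailing "k".
    return int(round(value * 1000)) if m.group(2) == "k" else int(round(value))

print(parse_fio_iops("read: IOPS=109k, BW=425MiB/s (446MB/s)"))    # 109000
print(parse_fio_iops("write: IOPS=36.3k, BW=142MiB/s (149MB/s)"))  # 36300
```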
71,608
23,719,764,444
IssuesEvent
2022-08-30 14:30:09
FreeRADIUS/freeradius-server
https://api.github.com/repos/FreeRADIUS/freeradius-server
closed
[defect]: SQL-User-Name not available on first query in a unlang SQL/MySQL xlat
defect
### What type of defect/bug is this? Unexpected behaviour (obvious or verified by project member) ### How can the issue be reproduced? In a stock Debian 11 install (3.0.25 or 3.2.0) enable SQL and tune the parameters for the MySQL/MariaDB server/DB used. Uncomment some example from the "users" file: (example) **lameuser Auth-Type := Reject Reply-Message = "Your account has been disabled."** Add this lines (bold) in the beginning of the "authorize" section (just to show the problem) ... authorize { **update control { &Tmp-String-0 = "%{sql:SELECT '%{SQL-User-Name}' AS val}" &Tmp-String-1 = "%{sql:SELECT '%{SQL-User-Name}' AS val}" }** ... Run the server in debug mode and do a simple "radtest": **radtest -t pap lameuser anypass 127.0.0.1 0 testing123** ### Log output from the FreeRADIUS daemon ```shell The freeradius -X shows (relevant lines below only) Behavior: The SQL-User-Name seems to be created in the first %{sql: ... } but is expanded to a null '' string on the query and copied to Tmp-String-0 (in the p.o.c. code) In the second (and following) %{sql: ...} uses the correct expansion is done and Tmp-String-1 has the SQL-User-Name copied. ... 
Ready to process requests (0) Received Access-Request Id 65 from 127.0.0.1:53928 to 127.0.0.1:1812 length 78 (0) User-Name = "lameuser" (0) User-Password = "nopass" (0) NAS-IP-Address = 127.0.1.1 (0) NAS-Port = 0 (0) Message-Authenticator = 0xfca03c1c853d861789ff20f5b9cd9cd5 (0) # Executing section authorize from file /etc/freeradius/sites-enabled/default (0) authorize { (0) update control { (0) EXPAND %{User-Name} (0) --> lameuser (0) SQL-User-Name set to 'lameuser' rlm_sql (sql): Reserved connection (0) (0) Executing select query: SELECT '' AS val rlm_sql (sql): Released connection (0) Need more connections to reach 10 spares rlm_sql (sql): Opening additional connection (5), 1 of 27 pending slots used rlm_sql_mysql: Starting connect to MySQL server rlm_sql_mysql: Connected to database 'radlocal' on db1.net.ipl.pt via TCP/IP, server version 10.5.16-MariaDB-1:10.5.16+maria~buster-log, protocol version 10 (0) EXPAND %{sql:SELECT '%{SQL-User-Name}' AS val} (0) --> (0) &Tmp-String-0 = rlm_sql (sql): Reserved connection (1) rlm_sql (sql): Released connection (1) (0) EXPAND %{User-Name} (0) --> lameuser (0) SQL-User-Name set to 'lameuser' rlm_sql (sql): Reserved connection (2) (0) Executing select query: SELECT 'lameuser' AS val rlm_sql (sql): Released connection (2) (0) EXPAND %{sql:SELECT '%{SQL-User-Name}' AS val} (0) --> lameuser (0) &Tmp-String-1 = lameuser (0) } # update control = noop (0) policy filter_username { ... ``` ### Relevant log output from client utilities Not relevant for the problem. ### Backtrace from LLDB or GDB _No response_
1.0
[defect]: SQL-User-Name not available on first query in a unlang SQL/MySQL xlat - ### What type of defect/bug is this? Unexpected behaviour (obvious or verified by project member) ### How can the issue be reproduced? In a stock Debian 11 install (3.0.25 or 3.2.0) enable SQL and tune the parameters for the MySQL/MariaDB server/DB used. Uncomment some example from the "users" file: (example) **lameuser Auth-Type := Reject Reply-Message = "Your account has been disabled."** Add this lines (bold) in the beginning of the "authorize" section (just to show the problem) ... authorize { **update control { &Tmp-String-0 = "%{sql:SELECT '%{SQL-User-Name}' AS val}" &Tmp-String-1 = "%{sql:SELECT '%{SQL-User-Name}' AS val}" }** ... Run the server in debug mode and do a simple "radtest": **radtest -t pap lameuser anypass 127.0.0.1 0 testing123** ### Log output from the FreeRADIUS daemon ```shell The freeradius -X shows (relevant lines below only) Behavior: The SQL-User-Name seems to be created in the first %{sql: ... } but is expanded to a null '' string on the query and copied to Tmp-String-0 (in the p.o.c. code) In the second (and following) %{sql: ...} uses the correct expansion is done and Tmp-String-1 has the SQL-User-Name copied. ... 
Ready to process requests (0) Received Access-Request Id 65 from 127.0.0.1:53928 to 127.0.0.1:1812 length 78 (0) User-Name = "lameuser" (0) User-Password = "nopass" (0) NAS-IP-Address = 127.0.1.1 (0) NAS-Port = 0 (0) Message-Authenticator = 0xfca03c1c853d861789ff20f5b9cd9cd5 (0) # Executing section authorize from file /etc/freeradius/sites-enabled/default (0) authorize { (0) update control { (0) EXPAND %{User-Name} (0) --> lameuser (0) SQL-User-Name set to 'lameuser' rlm_sql (sql): Reserved connection (0) (0) Executing select query: SELECT '' AS val rlm_sql (sql): Released connection (0) Need more connections to reach 10 spares rlm_sql (sql): Opening additional connection (5), 1 of 27 pending slots used rlm_sql_mysql: Starting connect to MySQL server rlm_sql_mysql: Connected to database 'radlocal' on db1.net.ipl.pt via TCP/IP, server version 10.5.16-MariaDB-1:10.5.16+maria~buster-log, protocol version 10 (0) EXPAND %{sql:SELECT '%{SQL-User-Name}' AS val} (0) --> (0) &Tmp-String-0 = rlm_sql (sql): Reserved connection (1) rlm_sql (sql): Released connection (1) (0) EXPAND %{User-Name} (0) --> lameuser (0) SQL-User-Name set to 'lameuser' rlm_sql (sql): Reserved connection (2) (0) Executing select query: SELECT 'lameuser' AS val rlm_sql (sql): Released connection (2) (0) EXPAND %{sql:SELECT '%{SQL-User-Name}' AS val} (0) --> lameuser (0) &Tmp-String-1 = lameuser (0) } # update control = noop (0) policy filter_username { ... ``` ### Relevant log output from client utilities Not relevant for the problem. ### Backtrace from LLDB or GDB _No response_
defect
sql user name not available on first query in a unlang sql mysql xlat what type of defect bug is this unexpected behaviour obvious or verified by project member how can the issue be reproduced in a stock debian install or enable sql and tune the parameters for the mysql mariadb server db used uncomment some example from the users file example lameuser auth type reject reply message your account has been disabled add this lines bold in the beginning of the authorize section just to show the problem authorize update control tmp string sql select sql user name as val tmp string sql select sql user name as val run the server in debug mode and do a simple radtest radtest t pap lameuser anypass log output from the freeradius daemon shell the freeradius x shows relevant lines below only behavior the sql user name seems to be created in the first sql but is expanded to a null string on the query and copied to tmp string in the p o c code in the second and following sql uses the correct expansion is done and tmp string has the sql user name copied ready to process requests received access request id from to length user name lameuser user password nopass nas ip address nas port message authenticator executing section authorize from file etc freeradius sites enabled default authorize update control expand user name lameuser sql user name set to lameuser rlm sql sql reserved connection executing select query select as val rlm sql sql released connection need more connections to reach spares rlm sql sql opening additional connection of pending slots used rlm sql mysql starting connect to mysql server rlm sql mysql connected to database radlocal on net ipl pt via tcp ip server version mariadb maria buster log protocol version expand sql select sql user name as val tmp string rlm sql sql reserved connection rlm sql sql released connection expand user name lameuser sql user name set to lameuser rlm sql sql reserved connection executing select query select lameuser as val rlm sql 
sql released connection expand sql select sql user name as val lameuser tmp string lameuser update control noop policy filter username relevant log output from client utilities not relevant for the problem backtrace from lldb or gdb no response
1
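The ordering visible in the FreeRADIUS debug log above — the first `%{sql:...}` expands `SQL-User-Name` to an empty string while the second expands it correctly — can be modelled in a few lines. This is a sketch of the observed outcome only, not of FreeRADIUS internals:

```python
# Toy model (assumption: on the first call the query template is expanded
# before SQL-User-Name has been populated for this request).
class RequestCtx:
    def __init__(self):
        self.sql_user_name = ""  # attribute not yet populated

def sql_xlat(ctx, user_name):
    # The query argument is expanded with the *current* value first ...
    query = f"SELECT '{ctx.sql_user_name}' AS val"
    # ... and only then is SQL-User-Name set, too late for this expansion.
    ctx.sql_user_name = user_name
    return query

ctx = RequestCtx()
print(sql_xlat(ctx, "lameuser"))  # SELECT '' AS val
print(sql_xlat(ctx, "lameuser"))  # SELECT 'lameuser' AS val
```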
7,003
2,610,321,490
IssuesEvent
2015-02-26 19:43:41
chrsmith/republic-at-war
https://api.github.com/repos/chrsmith/republic-at-war
closed
SFX
auto-migrated Priority-Medium Type-Defect
``` Kamino, the background SFX audio is far too loud ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 9 May 2011 at 12:13
1.0
SFX - ``` Kamino, the background SFX audio is far too loud ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 9 May 2011 at 12:13
defect
sfx kamino the background sfx audio is far too loud original issue reported on code google com by gmail com on may at
1
231,167
25,490,706,274
IssuesEvent
2022-11-27 02:17:51
CliffCrerar/learn-transact-SQL-fundamentals
https://api.github.com/repos/CliffCrerar/learn-transact-SQL-fundamentals
closed
CVE-2021-23364 (Medium) detected in browserslist-4.7.0.tgz - autoclosed
security vulnerability
## CVE-2021-23364 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserslist-4.7.0.tgz</b></p></summary> <p>Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset</p> <p>Library home page: <a href="https://registry.npmjs.org/browserslist/-/browserslist-4.7.0.tgz">https://registry.npmjs.org/browserslist/-/browserslist-4.7.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/react-dev-utils/node_modules/browserslist/package.json</p> <p> Dependency Hierarchy: - docz-1.3.2.tgz (Root Library) - docz-core-1.2.0.tgz - react-dev-utils-9.1.0.tgz - :x: **browserslist-4.7.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/CliffCrerar/learn-transact-SQL-fundamentals/commit/370244ac880085782f78402864a99636da8d1348">370244ac880085782f78402864a99636da8d1348</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package browserslist from 4.0.0 and before 4.16.5 are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries. 
<p>Publish Date: 2021-04-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23364>CVE-2021-23364</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364</a></p> <p>Release Date: 2021-04-28</p> <p>Fix Resolution (browserslist): 4.16.5</p> <p>Direct dependency fix Resolution (docz): 2.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23364 (Medium) detected in browserslist-4.7.0.tgz - autoclosed - ## CVE-2021-23364 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserslist-4.7.0.tgz</b></p></summary> <p>Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset</p> <p>Library home page: <a href="https://registry.npmjs.org/browserslist/-/browserslist-4.7.0.tgz">https://registry.npmjs.org/browserslist/-/browserslist-4.7.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/react-dev-utils/node_modules/browserslist/package.json</p> <p> Dependency Hierarchy: - docz-1.3.2.tgz (Root Library) - docz-core-1.2.0.tgz - react-dev-utils-9.1.0.tgz - :x: **browserslist-4.7.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/CliffCrerar/learn-transact-SQL-fundamentals/commit/370244ac880085782f78402864a99636da8d1348">370244ac880085782f78402864a99636da8d1348</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package browserslist from 4.0.0 and before 4.16.5 are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries. 
<p>Publish Date: 2021-04-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23364>CVE-2021-23364</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364</a></p> <p>Release Date: 2021-04-28</p> <p>Fix Resolution (browserslist): 4.16.5</p> <p>Direct dependency fix Resolution (docz): 2.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in browserslist tgz autoclosed cve medium severity vulnerability vulnerable library browserslist tgz share target browsers between different front end tools like autoprefixer stylelint and babel env preset library home page a href path to dependency file package json path to vulnerable library node modules react dev utils node modules browserslist package json dependency hierarchy docz tgz root library docz core tgz react dev utils tgz x browserslist tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package browserslist from and before are vulnerable to regular expression denial of service redos during parsing of queries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution browserslist direct dependency fix resolution docz step up your open source security game with mend
0
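The suggested fix in the record above bumps the transitive `browserslist` to 4.16.5, with the direct-dependency fix being `docz` 2.0.0. When the root library cannot be upgraded, one common mitigation is pinning the transitive version from the consuming project — sketched here as a Yarn `resolutions` entry (an assumption about the project's package manager; npm 8+ uses an `overrides` field instead):

```json
{
  "resolutions": {
    "browserslist": "^4.16.5"
  }
}
```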
81,116
30,717,956,653
IssuesEvent
2023-07-27 14:10:17
openziti/zrok
https://api.github.com/repos/openziti/zrok
closed
Expired Account Request; Second Attempt Fails
defect
Request an account; let the request expire... second attempt will run into a unique constraint on the email address (now that `v0.4` is using soft deletes).
1.0
Expired Account Request; Second Attempt Fails - Request an account; let the request expire... second attempt will run into a unique constraint on the email address (now that `v0.4` is using soft deletes).
defect
expired account request second attempt fails request an account let the request expire second attempt will run into a unique constraint on the email address now that is using soft deletes
1
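One standard fix for the failure described above — a unique email constraint colliding with soft-deleted rows — is a partial unique index that only applies to rows not yet soft-deleted. Illustrated here with SQLite and made-up table/column names, not zrok's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account_requests (email TEXT, deleted_at TEXT)")
# Uniqueness is enforced only among active (not soft-deleted) rows.
con.execute("""CREATE UNIQUE INDEX uq_active_email
               ON account_requests (email) WHERE deleted_at IS NULL""")

con.execute("INSERT INTO account_requests VALUES ('user@example.com', NULL)")
# First request expires: soft delete instead of removing the row.
con.execute("UPDATE account_requests SET deleted_at = '2023-07-27' "
            "WHERE email = 'user@example.com'")
# A second request for the same email now succeeds.
con.execute("INSERT INTO account_requests VALUES ('user@example.com', NULL)")
print(con.execute("SELECT COUNT(*) FROM account_requests").fetchone()[0])  # 2
```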
25,247
4,256,267,154
IssuesEvent
2016-07-10 01:07:50
catmaid/CATMAID
https://api.github.com/repos/catmaid/CATMAID
closed
Need to revisit/remove front end permissions enforcement
difficulty: low priority: low type: defect
This leads to fun behavior like not being able to deactivate a skeleton activated through other means, e.g., a search.
1.0
Need to revisit/remove front end permissions enforcement - This leads to fun behavior like not being able to deactivate a skeleton activated through other means, e.g., a search.
defect
need to revisit remove front end permissions enforcement this leads to fun behavior like not being able to deactivate a skeleton activated through other means e g a search
1
16,200
20,712,205,517
IssuesEvent
2022-03-12 04:03:00
ethereum/EIPs
https://api.github.com/repos/ethereum/EIPs
closed
Archival of Abandoned/Withdrawn EIPs
type: EIP1 (Process) stale
The Abandoned (*name may change, see #2941*) status is introduced for EIPs which are no longer pursued due to various reasons. There has been some concerns that the current process is designed at merging drafts as soon as possible (*even if we fail at that*) will eventually result in a lot abandoned EIPs. In order to clean up eips.ethereum.org, one possible solution could be considering a process of "archival": after a certain time period, Abandoned EIPs are archived. Archived EIPs: - are listed under a separate section called "Archive" on eips.ethereum.org and do not show up under the respective categories nor under "All" - their title and header is displayed unchanged - their body is replaced with a text explaining they are archived and can be found in github, also it would explain how to revive them (-> mark them draft)
1.0
Archival of Abandoned/Withdrawn EIPs - The Abandoned (*name may change, see #2941*) status is introduced for EIPs which are no longer pursued due to various reasons. There has been some concerns that the current process is designed at merging drafts as soon as possible (*even if we fail at that*) will eventually result in a lot abandoned EIPs. In order to clean up eips.ethereum.org, one possible solution could be considering a process of "archival": after a certain time period, Abandoned EIPs are archived. Archived EIPs: - are listed under a separate section called "Archive" on eips.ethereum.org and do not show up under the respective categories nor under "All" - their title and header is displayed unchanged - their body is replaced with a text explaining they are archived and can be found in github, also it would explain how to revive them (-> mark them draft)
non_defect
archival of abandoned withdrawn eips the abandoned name may change see status is introduced for eips which are no longer pursued due to various reasons there has been some concerns that the current process is designed at merging drafts as soon as possible even if we fail at that will eventually result in a lot abandoned eips in order to clean up eips ethereum org one possible solution could be considering a process of archival after a certain time period abandoned eips are archived archived eips are listed under a separate section called archive on eips ethereum org and do not show up under the respective categories nor under all their title and header is displayed unchanged their body is replaced with a text explaining they are archived and can be found in github also it would explain how to revive them mark them draft
0
32,150
6,721,320,277
IssuesEvent
2017-10-16 11:13:55
CenturyLinkCloud/mdw-designer
https://api.github.com/repos/CenturyLinkCloud/mdw-designer
closed
NullPointerExceptions when executing Service API Tests
defect
For example, when running AdminApisWorkgroups.test: ``` !ENTRY com.centurylink.mdw.designer.ui 4 4 2017-10-13 09:06:14.953 !MESSAGE Error !STACK 0 java.lang.NullPointerException at com.centurylink.mdw.plugin.designer.model.AutomatedTestSuite.writeTestResults(AutomatedTestSuite.java:522) at com.centurylink.mdw.plugin.designer.model.AutomatedTestSuite.writeTestCaseResults(AutomatedTestSuite.java:407) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView.updateTreeAndProgressBar(AutomatedTestView.java:693) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView.access$12(AutomatedTestView.java:566) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView$3.run(AutomatedTestView.java:348) at java.lang.Thread.run(Thread.java:745) !ENTRY com.centurylink.mdw.designer.ui 4 4 2017-10-13 09:06:17.480 !MESSAGE Error !STACK 0 java.lang.NullPointerException at com.centurylink.mdw.plugin.designer.model.AutomatedTestSuite.writeTestResults(AutomatedTestSuite.java:522) at com.centurylink.mdw.plugin.designer.model.AutomatedTestSuite.writeTestCaseResults(AutomatedTestSuite.java:407) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView.updateTreeAndProgressBar(AutomatedTestView.java:693) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView.access$12(AutomatedTestView.java:566) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView$3.run(AutomatedTestView.java:359) at java.lang.Thread.run(Thread.java:745) ```
1.0
NullPointerExceptions when executing Service API Tests - For example, when running AdminApisWorkgroups.test: ``` !ENTRY com.centurylink.mdw.designer.ui 4 4 2017-10-13 09:06:14.953 !MESSAGE Error !STACK 0 java.lang.NullPointerException at com.centurylink.mdw.plugin.designer.model.AutomatedTestSuite.writeTestResults(AutomatedTestSuite.java:522) at com.centurylink.mdw.plugin.designer.model.AutomatedTestSuite.writeTestCaseResults(AutomatedTestSuite.java:407) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView.updateTreeAndProgressBar(AutomatedTestView.java:693) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView.access$12(AutomatedTestView.java:566) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView$3.run(AutomatedTestView.java:348) at java.lang.Thread.run(Thread.java:745) !ENTRY com.centurylink.mdw.designer.ui 4 4 2017-10-13 09:06:17.480 !MESSAGE Error !STACK 0 java.lang.NullPointerException at com.centurylink.mdw.plugin.designer.model.AutomatedTestSuite.writeTestResults(AutomatedTestSuite.java:522) at com.centurylink.mdw.plugin.designer.model.AutomatedTestSuite.writeTestCaseResults(AutomatedTestSuite.java:407) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView.updateTreeAndProgressBar(AutomatedTestView.java:693) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView.access$12(AutomatedTestView.java:566) at com.centurylink.mdw.plugin.designer.views.AutomatedTestView$3.run(AutomatedTestView.java:359) at java.lang.Thread.run(Thread.java:745) ```
defect
nullpointerexceptions when executing service api tests for example when running adminapisworkgroups test entry com centurylink mdw designer ui message error stack java lang nullpointerexception at com centurylink mdw plugin designer model automatedtestsuite writetestresults automatedtestsuite java at com centurylink mdw plugin designer model automatedtestsuite writetestcaseresults automatedtestsuite java at com centurylink mdw plugin designer views automatedtestview updatetreeandprogressbar automatedtestview java at com centurylink mdw plugin designer views automatedtestview access automatedtestview java at com centurylink mdw plugin designer views automatedtestview run automatedtestview java at java lang thread run thread java entry com centurylink mdw designer ui message error stack java lang nullpointerexception at com centurylink mdw plugin designer model automatedtestsuite writetestresults automatedtestsuite java at com centurylink mdw plugin designer model automatedtestsuite writetestcaseresults automatedtestsuite java at com centurylink mdw plugin designer views automatedtestview updatetreeandprogressbar automatedtestview java at com centurylink mdw plugin designer views automatedtestview access automatedtestview java at com centurylink mdw plugin designer views automatedtestview run automatedtestview java at java lang thread run thread java
1
126,371
26,834,402,081
IssuesEvent
2023-02-02 18:17:57
ubuntu-flutter-community/software
https://api.github.com/repos/ubuntu-flutter-community/software
closed
Cut ExploreModel into two models: SearchModel and ExploreModel
enhancement: code 👩‍💻
Since the pages are detached we should also detach the models so some of the selects would not be needed any more eventually. in general it is always better to have small models for small parts of the UI so you have more control over the rebuilds of individual UI parts without changing other parts.
1.0
Cut ExploreModel into two models: SearchModel and ExploreModel - Since the pages are detached we should also detach the models so some of the selects would not be needed any more eventually. in general it is always better to have small models for small parts of the UI so you have more control over the rebuilds of individual UI parts without changing other parts.
non_defect
cut exploremodel into two models searchmodel and exploremodel since the pages are detached we should also detach the models so some of the selects would not be needed any more eventually in general it is always better to have small models for small parts of the ui so you have more control over the rebuilds of individual ui parts without changing other parts
0
346
2,533,453,619
IssuesEvent
2015-01-23 23:33:20
srabbelier-google/issue-export-test-3
https://api.github.com/repos/srabbelier-google/issue-export-test-3
closed
NumberFormatException inside FirefoxWebElement.getSize()
Priority-Medium Type-Defect
Original [issue 1](https://code.google.com/p/selenium/issues/detail?id=1) created by srabbelier-google on 2009-08-26T08:38:47.000Z: affected version: 0.9.7089 java.lang.NumberFormatException: For input string: &quot;328.3999938964844&quot; at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48) at java.lang.Integer.parseInt(Integer.java:456) at java.lang.Integer.parseInt(Integer.java:497) at org.openqa.selenium.firefox.FirefoxWebElement.getSize(FirefoxWebElement.java: 154)
1.0
NumberFormatException inside FirefoxWebElement.getSize() - Original [issue 1](https://code.google.com/p/selenium/issues/detail?id=1) created by srabbelier-google on 2009-08-26T08:38:47.000Z: affected version: 0.9.7089 java.lang.NumberFormatException: For input string: &quot;328.3999938964844&quot; at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48) at java.lang.Integer.parseInt(Integer.java:456) at java.lang.Integer.parseInt(Integer.java:497) at org.openqa.selenium.firefox.FirefoxWebElement.getSize(FirefoxWebElement.java: 154)
defect
numberformatexception inside firefoxwebelement getsize original created by srabbelier google on affected version java lang numberformatexception for input string quot quot at java lang numberformatexception forinputstring numberformatexception java at java lang integer parseint integer java at java lang integer parseint integer java at org openqa selenium firefox firefoxwebelement getsize firefoxwebelement java
1
210,690
23,768,696,429
IssuesEvent
2022-09-01 14:39:27
alieint/aspnetcore-2.1.24
https://api.github.com/repos/alieint/aspnetcore-2.1.24
opened
system.text.regularexpressions.4.3.0.nupkg: 1 vulnerabilities (highest severity is: 7.5)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.text.regularexpressions.4.3.0.nupkg</b></p></summary> <p></p> <p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p> <p>Path to dependency file: /src/PackageArchive/Scenario.WebApp/Scenario.WebApp.csproj</p> <p>Path to vulnerable library: /tmp/ws-ua_20220901135018_MKJNEV/dotnet_MQZKOH/20220901135018/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg,/obj/pkgs/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> <p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/e512408cb0b9fc17164d22b08f507d2e41131490">e512408cb0b9fc17164d22b08f507d2e41131490</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2019-0820](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0820) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | detected in multiple dependencies | Direct | System.Text.RegularExpressions - 4.3.1 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-0820</summary> ### Vulnerable Libraries - <b>system.text.regularexpressions.4.3.0.nupkg</b>, <b>system.text.regularexpressions.4.3.0.nupkg</b></p> <p> ### <b>system.text.regularexpressions.4.3.0.nupkg</b></p> <p></p> <p>Library home page: <a 
href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p> <p>Path to dependency file: /src/PackageArchive/Scenario.WebApp/Scenario.WebApp.csproj</p> <p>Path to vulnerable library: /tmp/ws-ua_20220901135018_MKJNEV/dotnet_MQZKOH/20220901135018/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg,/obj/pkgs/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> Dependency Hierarchy: - :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library) ### <b>system.text.regularexpressions.4.3.0.nupkg</b></p> <p>Provides the System.Text.RegularExpressions.Regex class, an implementation of a regular expression e...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p> <p>Path to dependency file: /src/Http/Http/test/Microsoft.AspNetCore.Http.Tests.csproj</p> <p>Path to vulnerable library: /ages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg,/home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> Dependency Hierarchy: - :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/e512408cb0b9fc17164d22b08f507d2e41131490">e512408cb0b9fc17164d22b08f507d2e41131490</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A denial of service vulnerability exists when .NET Framework and .NET Core improperly process RegEx strings, aka '.NET Framework and .NET Core Denial of Service Vulnerability'. This CVE ID is unique from CVE-2019-0980, CVE-2019-0981. 
Mend Note: After conducting further research, Mend has determined that CVE-2019-0820 only affects environments with versions 4.3.0 and 4.3.1 only on netcore50 environment of system.text.regularexpressions.nupkg. <p>Publish Date: 2019-05-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0820>CVE-2019-0820</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-cmhx-cq75-c4mj">https://github.com/advisories/GHSA-cmhx-cq75-c4mj</a></p> <p>Release Date: 2019-05-16</p> <p>Fix Resolution: System.Text.RegularExpressions - 4.3.1</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
system.text.regularexpressions.4.3.0.nupkg: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.text.regularexpressions.4.3.0.nupkg</b></p></summary> <p></p> <p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p> <p>Path to dependency file: /src/PackageArchive/Scenario.WebApp/Scenario.WebApp.csproj</p> <p>Path to vulnerable library: /tmp/ws-ua_20220901135018_MKJNEV/dotnet_MQZKOH/20220901135018/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg,/obj/pkgs/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> <p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/e512408cb0b9fc17164d22b08f507d2e41131490">e512408cb0b9fc17164d22b08f507d2e41131490</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2019-0820](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0820) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | detected in multiple dependencies | Direct | System.Text.RegularExpressions - 4.3.1 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-0820</summary> ### Vulnerable Libraries - <b>system.text.regularexpressions.4.3.0.nupkg</b>, <b>system.text.regularexpressions.4.3.0.nupkg</b></p> <p> ### <b>system.text.regularexpressions.4.3.0.nupkg</b></p> <p></p> <p>Library home page: <a 
href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p> <p>Path to dependency file: /src/PackageArchive/Scenario.WebApp/Scenario.WebApp.csproj</p> <p>Path to vulnerable library: /tmp/ws-ua_20220901135018_MKJNEV/dotnet_MQZKOH/20220901135018/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg,/obj/pkgs/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> Dependency Hierarchy: - :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library) ### <b>system.text.regularexpressions.4.3.0.nupkg</b></p> <p>Provides the System.Text.RegularExpressions.Regex class, an implementation of a regular expression e...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p> <p>Path to dependency file: /src/Http/Http/test/Microsoft.AspNetCore.Http.Tests.csproj</p> <p>Path to vulnerable library: /ages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg,/home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> Dependency Hierarchy: - :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/e512408cb0b9fc17164d22b08f507d2e41131490">e512408cb0b9fc17164d22b08f507d2e41131490</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A denial of service vulnerability exists when .NET Framework and .NET Core improperly process RegEx strings, aka '.NET Framework and .NET Core Denial of Service Vulnerability'. This CVE ID is unique from CVE-2019-0980, CVE-2019-0981. 
Mend Note: After conducting further research, Mend has determined that CVE-2019-0820 only affects environments with versions 4.3.0 and 4.3.1 only on netcore50 environment of system.text.regularexpressions.nupkg. <p>Publish Date: 2019-05-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0820>CVE-2019-0820</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-cmhx-cq75-c4mj">https://github.com/advisories/GHSA-cmhx-cq75-c4mj</a></p> <p>Release Date: 2019-05-16</p> <p>Fix Resolution: System.Text.RegularExpressions - 4.3.1</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_defect
system text regularexpressions nupkg vulnerabilities highest severity is vulnerable library system text regularexpressions nupkg library home page a href path to dependency file src packagearchive scenario webapp scenario webapp csproj path to vulnerable library tmp ws ua mkjnev dotnet mqzkoh system text regularexpressions system text regularexpressions nupkg obj pkgs system text regularexpressions system text regularexpressions nupkg found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high detected in multiple dependencies direct system text regularexpressions details cve vulnerable libraries system text regularexpressions nupkg system text regularexpressions nupkg system text regularexpressions nupkg library home page a href path to dependency file src packagearchive scenario webapp scenario webapp csproj path to vulnerable library tmp ws ua mkjnev dotnet mqzkoh system text regularexpressions system text regularexpressions nupkg obj pkgs system text regularexpressions system text regularexpressions nupkg dependency hierarchy x system text regularexpressions nupkg vulnerable library system text regularexpressions nupkg provides the system text regularexpressions regex class an implementation of a regular expression e library home page a href path to dependency file src http http test microsoft aspnetcore http tests csproj path to vulnerable library ages system text regularexpressions system text regularexpressions nupkg home wss scanner nuget packages system text regularexpressions system text regularexpressions nupkg dependency hierarchy x system text regularexpressions nupkg vulnerable library found in head commit a href found in base branch main vulnerability details a denial of service vulnerability exists when net framework and net core improperly process regex strings aka net framework and net core denial of service vulnerability this cve id is unique from cve cve mend note after conducting further 
research mend has determined that cve only affects environments with versions and only on environment of system text regularexpressions nupkg publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system text regularexpressions step up your open source security game with mend
0
4,659
2,610,138,172
IssuesEvent
2015-02-26 18:43:37
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
Box problem
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. taking a box, which contains the thorwballs, with the plane makes you get another plane instead of it ``` ----- Original issue reported on code.google.com by `alessand...@gmail.com` on 29 Sep 2010 at 12:47
1.0
Box problem - ``` What steps will reproduce the problem? 1. taking a box, which contains the thorwballs, with the plane makes you get another plane instead of it ``` ----- Original issue reported on code.google.com by `alessand...@gmail.com` on 29 Sep 2010 at 12:47
defect
box problem what steps will reproduce the problem taking a box which contains the thorwballs with the plane makes you get another plane instead of it original issue reported on code google com by alessand gmail com on sep at
1
69,421
22,345,920,271
IssuesEvent
2022-06-15 07:48:00
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
CacheConfigHolder misses DataPersistenceConfig
Type: Defect Team: Core Module: Config
**Describe the bug** `CacheConfigHolder` which used in client serialization of `CacheConfig` doesn't have `DataPersistenceConfig` element which exists in the real cache config object. Therefore, these cache config accesses from the client miss this element or updates on this element are ignored. See: - https://github.com/hazelcast/hazelcast/blob/edabe3f269fec45323f0a4ef817fb27ec3b57be2/hazelcast/src/main/java/com/hazelcast/client/impl/protocol/codec/holder/CacheConfigHolder.java - https://github.com/hazelcast/hazelcast/blob/edabe3f269fec45323f0a4ef817fb27ec3b57be2/hazelcast/src/main/java/com/hazelcast/config/AbstractCacheConfig.java#L95
1.0
CacheConfigHolder misses DataPersistenceConfig - **Describe the bug** `CacheConfigHolder` which used in client serialization of `CacheConfig` doesn't have `DataPersistenceConfig` element which exists in the real cache config object. Therefore, these cache config accesses from the client miss this element or updates on this element are ignored. See: - https://github.com/hazelcast/hazelcast/blob/edabe3f269fec45323f0a4ef817fb27ec3b57be2/hazelcast/src/main/java/com/hazelcast/client/impl/protocol/codec/holder/CacheConfigHolder.java - https://github.com/hazelcast/hazelcast/blob/edabe3f269fec45323f0a4ef817fb27ec3b57be2/hazelcast/src/main/java/com/hazelcast/config/AbstractCacheConfig.java#L95
defect
cacheconfigholder misses datapersistenceconfig describe the bug cacheconfigholder which used in client serialization of cacheconfig doesn t have datapersistenceconfig element which exists in the real cache config object therefore these cache config accesses from the client miss this element or updates on this element are ignored see
1
78,459
15,573,055,506
IssuesEvent
2021-03-17 08:03:03
rammatzkvosky/789
https://api.github.com/repos/rammatzkvosky/789
reopened
CVE-2018-11307 (High) detected in jackson-databind-2.8.7.jar
security vulnerability
## CVE-2018-11307 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: 789/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.7.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/789/commit/f9b1c2e88187d7e6e5d7e812ef82431893dff972">f9b1c2e88187d7e6e5d7e812ef82431893dff972</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.5. Use of Jackson default typing along with a gadget class from iBatis allows exfiltration of content. Fixed in 2.7.9.4, 2.8.11.2, and 2.9.6. <p>Publish Date: 2019-07-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11307>CVE-2018-11307</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2032">https://github.com/FasterXML/jackson-databind/issues/2032</a></p> <p>Release Date: 2019-03-17</p> <p>Fix Resolution: jackson-databind-2.9.6</p> </p> </details> <p></p>
True
CVE-2018-11307 (High) detected in jackson-databind-2.8.7.jar - ## CVE-2018-11307 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: 789/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.7.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/789/commit/f9b1c2e88187d7e6e5d7e812ef82431893dff972">f9b1c2e88187d7e6e5d7e812ef82431893dff972</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.5. Use of Jackson default typing along with a gadget class from iBatis allows exfiltration of content. Fixed in 2.7.9.4, 2.8.11.2, and 2.9.6. 
<p>Publish Date: 2019-07-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11307>CVE-2018-11307</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2032">https://github.com/FasterXML/jackson-databind/issues/2032</a></p> <p>Release Date: 2019-03-17</p> <p>Fix Resolution: jackson-databind-2.9.6</p> </p> </details> <p></p>
non_defect
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details an issue was discovered in fasterxml jackson databind through use of jackson default typing along with a gadget class from ibatis allows exfiltration of content fixed in and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind
0
104,195
4,202,941,410
IssuesEvent
2016-06-28 01:41:56
ePADD/epadd
https://api.github.com/repos/ePADD/epadd
opened
Search terms "dad AND mom" not highlighted in some results
Bug Medium priority
Version Jun 27 (Jun 8, 216 in About ePADD) ![screen shot 2016-06-27 at 6 40 00 pm](https://cloud.githubusercontent.com/assets/1050899/16401141/b0a2e0b6-3c96-11e6-805a-35f7026e1b91.png) ![screen shot 2016-06-27 at 6 39 47 pm](https://cloud.githubusercontent.com/assets/1050899/16401146/baa5f8a0-3c96-11e6-93ff-2634e4fd2b98.png)
1.0
Search terms "dad AND mom" not highlighted in some results - Version Jun 27 (Jun 8, 216 in About ePADD) ![screen shot 2016-06-27 at 6 40 00 pm](https://cloud.githubusercontent.com/assets/1050899/16401141/b0a2e0b6-3c96-11e6-805a-35f7026e1b91.png) ![screen shot 2016-06-27 at 6 39 47 pm](https://cloud.githubusercontent.com/assets/1050899/16401146/baa5f8a0-3c96-11e6-93ff-2634e4fd2b98.png)
non_defect
search terms dad and mom not highlighted in some results version jun jun in about epadd
0
101,891
16,533,032,656
IssuesEvent
2021-05-27 08:33:15
ZcashFoundation/zebra
https://api.github.com/repos/ZcashFoundation/zebra
opened
Rate-limit useless GetData to peers, Credit: Equilibrium
A-network A-rust C-bug C-security I-remote-node-overload P-High S-needs-investigation S-needs-triage
**Is your feature request related to a problem? Please describe.** Zebra sends lots of GetData (next block chain hashes) messages to peers, even if those peers: * ignore them * reject them * send useless answers This bug could be a significant issue in smaller networks. **Describe the solution you'd like** We should rate-limit GetData messages, particularly to peers that don't give us useful responses. These messages are sent by the syncer: `obtain_tips`: https://github.com/ZcashFoundation/zebra/blob/5cdcc5255f8c66fe0fc0f2a41b6e7b37e8530b1f/zebrad/src/components/sync.rs#L373-L384 `extend_tips`: https://github.com/ZcashFoundation/zebra/blob/5cdcc5255f8c66fe0fc0f2a41b6e7b37e8530b1f/zebrad/src/components/sync.rs#L485-L499 We might want to use part of the crawler refactor from #2160, where we de-duplicate the `candidates.update()` call. That will make it easier to rate-limit the requests. **Describe alternatives you've considered** We could limit fanout to the number of active peers (#TODO) - this only helps when we have 1-3 peers, which is unlikely on mainnet We could rate-limit identical messages to the same peer (#2153) These alternatives would slow down syncing: - rate-limit all messages to the peer set (#2153) - rate limit all messages to individual peers (#2153)
True
Rate-limit useless GetData to peers, Credit: Equilibrium - **Is your feature request related to a problem? Please describe.** Zebra sends lots of GetData (next block chain hashes) messages to peers, even if those peers: * ignore them * reject them * send useless answers This bug could be a significant issue in smaller networks. **Describe the solution you'd like** We should rate-limit GetData messages, particularly to peers that don't give us useful responses. These messages are sent by the syncer: `obtain_tips`: https://github.com/ZcashFoundation/zebra/blob/5cdcc5255f8c66fe0fc0f2a41b6e7b37e8530b1f/zebrad/src/components/sync.rs#L373-L384 `extend_tips`: https://github.com/ZcashFoundation/zebra/blob/5cdcc5255f8c66fe0fc0f2a41b6e7b37e8530b1f/zebrad/src/components/sync.rs#L485-L499 We might want to use part of the crawler refactor from #2160, where we de-duplicate the `candidates.update()` call. That will make it easier to rate-limit the requests. **Describe alternatives you've considered** We could limit fanout to the number of active peers (#TODO) - this only helps when we have 1-3 peers, which is unlikely on mainnet We could rate-limit identical messages to the same peer (#2153) These alternatives would slow down syncing: - rate-limit all messages to the peer set (#2153) - rate limit all messages to individual peers (#2153)
non_defect
rate limit useless getdata to peers credit equilibrium is your feature request related to a problem please describe zebra sends lots of getdata next block chain hashes messages to peers even if those peers ignore them reject them send useless answers this bug could be a significant issue in smaller networks describe the solution you d like we should rate limit getdata messages particularly to peers that don t give us useful responses these messages are sent by the syncer obtain tips extend tips we might want to use part of the crawler refactor from where we de duplicate the candidates update call that will make it easier to rate limit the requests describe alternatives you ve considered we could limit fanout to the number of active peers todo this only helps when we have peers which is unlikely on mainnet we could rate limit identical messages to the same peer these alternatives would slow down syncing rate limit all messages to the peer set rate limit all messages to individual peers
0
4,203
2,610,089,154
IssuesEvent
2015-02-26 18:27:00
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳冬天如何祛痘
auto-migrated Priority-Medium Type-Defect
``` 深圳冬天如何祛痘【深圳韩方科颜全国热线400-869-1818,24小时 QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:33
1.0
深圳冬天如何祛痘 - ``` 深圳冬天如何祛痘【深圳韩方科颜全国热线400-869-1818,24小时 QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:33
defect
深圳冬天如何祛痘 深圳冬天如何祛痘【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 original issue reported on code google com by szft com on may at
1
777,317
27,275,864,039
IssuesEvent
2023-02-23 05:07:25
KDT3-miniproject-team1/backend
https://api.github.com/repos/KDT3-miniproject-team1/backend
closed
[Fix] 일부 수정사항 반영
For : API Priority : Low Status : Completed Type : Refactory
## 🔨개발 할 기능 수정사항 반영 ## 🧩 세부 기능 해당 기능에 대한 세부 계획 작성 (ex. -[ ] 로그인 시 아이디 비번 입력 받기) - [x] 필요없는 출력문 제거 - [x] 관심상품에 productId 추가
1.0
[Fix] 일부 수정사항 반영 - ## 🔨개발 할 기능 수정사항 반영 ## 🧩 세부 기능 해당 기능에 대한 세부 계획 작성 (ex. -[ ] 로그인 시 아이디 비번 입력 받기) - [x] 필요없는 출력문 제거 - [x] 관심상품에 productId 추가
non_defect
일부 수정사항 반영 🔨개발 할 기능 수정사항 반영 🧩 세부 기능 해당 기능에 대한 세부 계획 작성 ex 로그인 시 아이디 비번 입력 받기 필요없는 출력문 제거 관심상품에 productid 추가
0
23,355
4,932,415,607
IssuesEvent
2016-11-28 13:37:20
Jumpscale/jscockpit
https://api.github.com/repos/Jumpscale/jscockpit
opened
cockpit-doc: Runs Walkthrough
type_documentation type_feature
## GOAL: Explain what the **Runs** page is all about ## DESCRIPTION: Placeholder: https://github.com/Jumpscale/jscockpit/blob/master/docs/walkthrough/Runs/Runs.md
1.0
cockpit-doc: Runs Walkthrough - ## GOAL: Explain what the **Runs** page is all about ## DESCRIPTION: Placeholder: https://github.com/Jumpscale/jscockpit/blob/master/docs/walkthrough/Runs/Runs.md
non_defect
cockpit doc runs walkthrough goal explain what the runs page is all about description placeholder
0
502,315
14,544,279,708
IssuesEvent
2020-12-15 17:57:49
microsoft/PowerToys
https://api.github.com/repos/microsoft/PowerToys
closed
improve auto-update logic on PC resume
Area-Setup/Install Issue-Bug Priority-2
With the latest update from 18.1 to 18.2, I only got the update notification after I restarted PowerToys elevated. I'm not sure if it was the elevation or the restart itself that triggered it, but only then I got the notification. I usually hibernate my Surface, but PowerToys was running after resume.
1.0
improve auto-update logic on PC resume - With the latest update from 18.1 to 18.2, I only got the update notification after I restarted PowerToys elevated. I'm not sure if it was the elevation or the restart itself that triggered it, but only then I got the notification. I usually hibernate my Surface, but PowerToys was running after resume.
non_defect
improve auto update logic on pc resume with the latest update from to i only got the update notification after i restarted powertoys elevated i m not sure if it was the elevation or the restart itself that triggered it but only then i got the notification i usually hibernate my surface but powertoys was running after resume
0
80,800
30,536,222,783
IssuesEvent
2023-07-19 17:33:51
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
BUG: Tests fail due to DeprecationWarning on imp
defect
### Describe your issue. I'm setting up a SciPy dev environment on the development branch. python dev.py test fails. Numerous tests do not pass. I could be doing something goofy but I'm trying to follow the docs as well as I can. I don't know why a DeprecationWarning causes an error or why imp is involved. ### Reproducing Code Example ```python python dev.py test ``` ### Error message ```shell A representative sample: _ ERROR collecting build-install/lib/python3.10/site-packages/scipy/sparse/csgr aph/tests/test_matching.py _ ../../../../venv/scipy3.10/lib/python3.10/site-packages/_pytest/runner.py:341: in from_call result: Optional[TResult] = func() cls = <class '_pytest.runner.CallInfo'> duration = 0.003461237996816635 excinfo = <ExceptionInfo DeprecationWarning("the imp module is depre cated in favour of importlib and slated for removal in Python 3.12; see the mod ule's documentation for alternative uses") tblen=38> func = <function pytest_make_collect_report.<locals>.<lambda> at 0x16bd5aa70> precise_start = 230735.183075331 precise_stop = 230735.186536569 reraise = None result = None start = 1689736893.95786 stop = 1689736893.961324 when = 'collect' Short form: ERROR scipy/cluster/tests/test_disjoint_set.py - DeprecationWarning: the imp module is deprecated in favour of importlib an... So many errors are triggered that I'm not sure that all of the errors have this form. ``` ### SciPy/NumPy/Python version and system information ```shell import sys, numpy; print(numpy.__version__, sys.version_info) 1.22.4 sys.version_info(major=3, minor=10, micro=11, releaselevel='final', serial=0) I'm running an Intel Mac, Python 3.10, installed from python.org, running in a venv. I installed dependencies using this gist: https://gist.github.com/stefanv/0b052fa4014fa07e18e81fe544afc9f9 ```
1.0
BUG: Tests fail due to DeprecationWarning on imp - ### Describe your issue. I'm setting up a SciPy dev environment on the development branch. python dev.py test fails. Numerous tests do not pass. I could be doing something goofy but I'm trying to follow the docs as well as I can. I don't know why a DeprecationWarning causes an error or why imp is involved. ### Reproducing Code Example ```python python dev.py test ``` ### Error message ```shell A representative sample: _ ERROR collecting build-install/lib/python3.10/site-packages/scipy/sparse/csgr aph/tests/test_matching.py _ ../../../../venv/scipy3.10/lib/python3.10/site-packages/_pytest/runner.py:341: in from_call result: Optional[TResult] = func() cls = <class '_pytest.runner.CallInfo'> duration = 0.003461237996816635 excinfo = <ExceptionInfo DeprecationWarning("the imp module is depre cated in favour of importlib and slated for removal in Python 3.12; see the mod ule's documentation for alternative uses") tblen=38> func = <function pytest_make_collect_report.<locals>.<lambda> at 0x16bd5aa70> precise_start = 230735.183075331 precise_stop = 230735.186536569 reraise = None result = None start = 1689736893.95786 stop = 1689736893.961324 when = 'collect' Short form: ERROR scipy/cluster/tests/test_disjoint_set.py - DeprecationWarning: the imp module is deprecated in favour of importlib an... So many errors are triggered that I'm not sure that all of the errors have this form. ``` ### SciPy/NumPy/Python version and system information ```shell import sys, numpy; print(numpy.__version__, sys.version_info) 1.22.4 sys.version_info(major=3, minor=10, micro=11, releaselevel='final', serial=0) I'm running an Intel Mac, Python 3.10, installed from python.org, running in a venv. I installed dependencies using this gist: https://gist.github.com/stefanv/0b052fa4014fa07e18e81fe544afc9f9 ```
defect
bug tests fail due to deprecationwarning on imp describe your issue i m setting up a scipy dev environment on the development branch python dev py test fails numerous tests do not pass i could be doing something goofy but i m trying to follow the docs as well as i can i don t know why a deprecationwarning causes an error or why imp is involved reproducing code example python python dev py test error message shell a representative sample error collecting build install lib site packages scipy sparse csgr aph tests test matching py venv lib site packages pytest runner py in from call result optional func cls duration excinfo exceptioninfo deprecationwarning the imp module is depre cated in favour of importlib and slated for removal in python see the mod ule s documentation for alternative uses tblen func at precise start precise stop reraise none result none start stop when collect short form error scipy cluster tests test disjoint set py deprecationwarning the imp module is deprecated in favour of importlib an so many errors are triggered that i m not sure that all of the errors have this form scipy numpy python version and system information shell import sys numpy print numpy version sys version info sys version info major minor micro releaselevel final serial i m running an intel mac python installed from python org running in a venv i installed dependencies using this gist
1
79,153
28,017,964,875
IssuesEvent
2023-03-28 01:21:44
salsadigitalauorg/civictheme_source
https://api.github.com/repos/salsadigitalauorg/civictheme_source
closed
[DEFECT] test sync 1
Type: Defect
### Summary summary 1 ### Steps to reproduce str 1 ### Observed outcome oo 1 ### Expected outcome eo 1 <br/> --- JIRA: CIVIC-1401
1.0
[DEFECT] test sync 1 - ### Summary summary 1 ### Steps to reproduce str 1 ### Observed outcome oo 1 ### Expected outcome eo 1 <br/> --- JIRA: CIVIC-1401
defect
test sync summary summary steps to reproduce str observed outcome oo expected outcome eo jira civic
1
460,333
13,208,261,847
IssuesEvent
2020-08-15 03:29:02
larissatrochta/THP-Website-QA
https://api.github.com/repos/larissatrochta/THP-Website-QA
opened
Carousel Control Arrows Are Uncentered
low severity medium priority
When viewing the website on desktop, the carousel arrows to shift slides are uncentered. The controls need to be adjusted, possibly using the class in the CSS to align them. Steps to Reproduce: 1. Visit http://thehoneypot.larissatrochta.ca/ on desktop 2. Scroll down to the "What's Good" carousel 3. View the carousel control arrows Expected Result: The carousel control arrows are centered vertically with the images. Actual Result: The carousel control arrows appear close to the bottom of the images. On desktop view: <img width="1022" alt="Screen Shot 2020-08-14 at 8 23 35 PM" src="https://user-images.githubusercontent.com/66489216/90304404-29f55680-de6c-11ea-9126-47b7608d0263.png">
1.0
Carousel Control Arrows Are Uncentered - When viewing the website on desktop, the carousel arrows to shift slides are uncentered. The controls need to be adjusted, possibly using the class in the CSS to align them. Steps to Reproduce: 1. Visit http://thehoneypot.larissatrochta.ca/ on desktop 2. Scroll down to the "What's Good" carousel 3. View the carousel control arrows Expected Result: The carousel control arrows are centered vertically with the images. Actual Result: The carousel control arrows appear close to the bottom of the images. On desktop view: <img width="1022" alt="Screen Shot 2020-08-14 at 8 23 35 PM" src="https://user-images.githubusercontent.com/66489216/90304404-29f55680-de6c-11ea-9126-47b7608d0263.png">
non_defect
carousel control arrows are uncentered when viewing the website on desktop the carousel arrows to shift slides are uncentered the controls need to be adjusted possibly using the class in the css to align them steps to reproduce visit on desktop scroll down to the what s good carousel view the carousel control arrows expected result the carousel control arrows are centered vertically with the images actual result the carousel control arrows appear close to the bottom of the images on desktop view img width alt screen shot at pm src
0
300,456
22,679,777,651
IssuesEvent
2022-07-04 08:51:35
kyma-project/kyma
https://api.github.com/repos/kyma-project/kyma
reopened
kyma eventing documentation - get started
area/documentation area/eventing
This section should cover one easy tutorial to get familiar with - connection of sink and event type (subscription) - publishing one event (on epp) - checking that this event was received in the sink This tutorial should use existing building blocks: - how to setup a kyma cluster - how to deploy a workload If possible it should be entirely solved using busola (with optional commandline calls on separate tabs) Ideas: - sending and listening for cloudevents could be done using https://github.com/cloudevents/conformance
1.0
kyma eventing documentation - get started - This section should cover one easy tutorial to get familiar with - connection of sink and event type (subscription) - publishing one event (on epp) - checking that this event was received in the sink This tutorial should use existing building blocks: - how to setup a kyma cluster - how to deploy a workload If possible it should be entirely solved using busola (with optional commandline calls on separate tabs) Ideas: - sending and listening for cloudevents could be done using https://github.com/cloudevents/conformance
non_defect
kyma eventing documentation get started this section should cover one easy tutorial to get familiar with connection of sink and event type subscription publishing one event on epp checking that this event was received in the sink this tutorial should use existing building blocks how to setup a kyma cluster how to deploy a workload if possible it should be entirely solved using busola with optional commandline calls on separate tabs ideas sending and listening for cloudevents could be done using
0
323,986
9,881,892,140
IssuesEvent
2019-06-24 15:35:55
grpc/grpc
https://api.github.com/repos/grpc/grpc
closed
Error loading native libs (1.18.0) and error with incorrect symbol
kind/bug lang/C# priority/P3
### What version of gRPC and what language are you using? I've tried both 1.18.0 and 1.19.0. ### What operating system (Linux, Windows,...) and version? OSX and Windows. ### What did you do? This is a simplified structure of my project. - coredriver (.NET **Core** 2.x) - fwdriver (.NET **Framework** 4.6.2) - sharedlib (.NET **Standard** 2.0) The shared library makes use of gRPC - by means of a NuGet package `Grpc`. A build/run for .NET *Core* works smoothly. However, for .NET Framework I'm having problems. The build goes well and the *native libraries* are placed properly in the output/publish directory, together with the other DLLs. But when I run the application I get the following exception: ``` System.IO.FileNotFoundException: Error loading native library. Not found in any of the possible locations: C:\Windows\Microsoft.Net\assembly\GAC_MSIL\Grpc.Core\v4.0_1.0.0.0__d754f35622e28bad\grpc_csharp_ext.x86.dll,C:\Windows\Microsoft.Net\assembly\GAC_MSIL\Grpc.Core\v4.0_1.0.0.0__d754f35622e28bad\runtimes/win/native\grpc_csharp_ext.x86.dll,C:\Windows\Microsoft.Net\assembly\GAC_MSIL\Grpc.Core\v4.0_1.0.0.0__d754f35622e28bad\../..\runtimes/win/native\grpc_csharp_ext.x86.dll at Grpc.Core.Internal.UnmanagedLibrary.FirstValidLibraryPath(String[] libraryPathAlternatives) at Grpc.Core.Internal.UnmanagedLibrary..ctor(String[] libraryPathAlternatives) at Grpc.Core.Internal.NativeExtension.LoadUnmanagedLibrary() at Grpc.Core.Internal.NativeExtension.LoadNativeMethods() at Grpc.Core.Internal.NativeExtension..ctor() at Grpc.Core.Internal.NativeExtension.Get() at Grpc.Core.GrpcEnvironment.GrpcNativeInit() at Grpc.Core.GrpcEnvironment..ctor() at Grpc.Core.GrpcEnvironment.AddRef() at Grpc.Core.Channel..ctor(String target, ChannelCredentials credentials, IEnumerable`1 options) at Grpc.Core.Channel..ctor(String host, Int32 port, ChannelCredentials credentials, IEnumerable`1 options) ``` I'm surprised by the search path for loading... 
I googled around and found numerous reports with this trace, with slight variations: [here](https://github.com/grpc/grpc/issues/12570), [there](https://github.com/grpc/grpc/issues/5589), [again](https://github.com/grpc/grpc/issues/17451), etc. Most of them are for older versions of gRPC. Nevertheless, I gave it a try by adding an explicit NuGet reference of `Grpc.Core` (in the dependent project as well). Didn't work... I even considered upgrading to 1.19.0, but I then get this exception: ``` System.MissingMethodException: Method not found: 'Void Grpc.Core.CallOptions..ctor(Grpc.Core.Metadata, System.Nullable`1<System.DateTime>, System.Threading.CancellationToken, Grpc.Core.WriteOptions, Grpc.Core.ContextPropagationToken, Grpc.Core.CallCredentials)'. at io.shiftleft.generic2cpg.infra.GrpcLogger..ctor() ``` Before I dig further, is there (still) any issue in this area?
1.0
Error loading native libs (1.18.0) and error with incorrect symbol - ### What version of gRPC and what language are you using? I've tried both 1.18.0 and 1.19.0. ### What operating system (Linux, Windows,...) and version? OSX and Windows. ### What did you do? This is a simplified structure of my project. - coredriver (.NET **Core** 2.x) - fwdriver (.NET **Framework** 4.6.2) - sharedlib (.NET **Standard** 2.0) The shared library makes use of gRPC - by means of a NuGet package `Grpc`. A build/run for .NET *Core* works smoothly. However, for .NET Framework I'm having problems. The build goes well and the *native libraries* are placed properly in the output/publish directory, together with the other DLLs. But when I run the application I get the following exception: ``` System.IO.FileNotFoundException: Error loading native library. Not found in any of the possible locations: C:\Windows\Microsoft.Net\assembly\GAC_MSIL\Grpc.Core\v4.0_1.0.0.0__d754f35622e28bad\grpc_csharp_ext.x86.dll,C:\Windows\Microsoft.Net\assembly\GAC_MSIL\Grpc.Core\v4.0_1.0.0.0__d754f35622e28bad\runtimes/win/native\grpc_csharp_ext.x86.dll,C:\Windows\Microsoft.Net\assembly\GAC_MSIL\Grpc.Core\v4.0_1.0.0.0__d754f35622e28bad\../..\runtimes/win/native\grpc_csharp_ext.x86.dll at Grpc.Core.Internal.UnmanagedLibrary.FirstValidLibraryPath(String[] libraryPathAlternatives) at Grpc.Core.Internal.UnmanagedLibrary..ctor(String[] libraryPathAlternatives) at Grpc.Core.Internal.NativeExtension.LoadUnmanagedLibrary() at Grpc.Core.Internal.NativeExtension.LoadNativeMethods() at Grpc.Core.Internal.NativeExtension..ctor() at Grpc.Core.Internal.NativeExtension.Get() at Grpc.Core.GrpcEnvironment.GrpcNativeInit() at Grpc.Core.GrpcEnvironment..ctor() at Grpc.Core.GrpcEnvironment.AddRef() at Grpc.Core.Channel..ctor(String target, ChannelCredentials credentials, IEnumerable`1 options) at Grpc.Core.Channel..ctor(String host, Int32 port, ChannelCredentials credentials, IEnumerable`1 options) ``` I'm surprised by the search path 
for loading... I googled around and found numerous reports with this trace, with slight variations: [here](https://github.com/grpc/grpc/issues/12570), [there](https://github.com/grpc/grpc/issues/5589), [again](https://github.com/grpc/grpc/issues/17451), etc. Most of them are for older versions of gRPC. Nevertheless, I gave it a try by adding an explicit NuGet reference of `Grpc.Core` (in the dependent project as well). Didn't work... I even considered upgrading to 1.19.0, but I then get this exception: ``` System.MissingMethodException: Method not found: 'Void Grpc.Core.CallOptions..ctor(Grpc.Core.Metadata, System.Nullable`1<System.DateTime>, System.Threading.CancellationToken, Grpc.Core.WriteOptions, Grpc.Core.ContextPropagationToken, Grpc.Core.CallCredentials)'. at io.shiftleft.generic2cpg.infra.GrpcLogger..ctor() ``` Before I dig further, is there (still) any issue in this area?
non_defect
error loading native libs and error with incorrect symbol what version of grpc and what language are you using i ve tried both and what operating system linux windows and version osx and windows what did you do this is a simplified structure of my project coredriver net core x fwdriver net framework sharedlib net standard the shared library makes use of grpc by means of a nuget package grpc a build run for net core works smoothly however for net framework i m having problems the build goes well and the native libraries are placed properly in the output publish directory together with the other dlls but when i run the application i get the following exception system io filenotfoundexception error loading native library not found in any of the possible locations c windows microsoft net assembly gac msil grpc core grpc csharp ext dll c windows microsoft net assembly gac msil grpc core runtimes win native grpc csharp ext dll c windows microsoft net assembly gac msil grpc core runtimes win native grpc csharp ext dll at grpc core internal unmanagedlibrary firstvalidlibrarypath string librarypathalternatives at grpc core internal unmanagedlibrary ctor string librarypathalternatives at grpc core internal nativeextension loadunmanagedlibrary at grpc core internal nativeextension loadnativemethods at grpc core internal nativeextension ctor at grpc core internal nativeextension get at grpc core grpcenvironment grpcnativeinit at grpc core grpcenvironment ctor at grpc core grpcenvironment addref at grpc core channel ctor string target channelcredentials credentials ienumerable options at grpc core channel ctor string host port channelcredentials credentials ienumerable options i m surprised by the search path for loading i googled around and found numerous reports with this trace with slight variations etc most of them are for older versions of grpc nevertheless i gave it a try by adding an explicit nuget reference of grpc core in the dependent project as well didn t work i 
even considered upgrading to but i then get this exception system missingmethodexception method not found void grpc core calloptions ctor grpc core metadata system nullable system threading cancellationtoken grpc core writeoptions grpc core contextpropagationtoken grpc core callcredentials at io shiftleft infra grpclogger ctor before i dig further is there still any issue in this area
0
18,033
3,021,485,315
IssuesEvent
2015-07-31 15:01:17
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
Cake\I18n\Time numeric sorting in collection
Defect
Not sure if this is is a framework issue or I'm doing something wrong with the ORM. I'm querying a couple different models and using a collection to display them sorted by the `modified` property to make a makeshift news feed. This is on the latest Cake release, 3.0.9. The problem is in the sortBy method for the Collection. The expected behavior is all the various models to be in the `$recent` collection and sorted by their `modified` date descending. The only sort method that returns as expected is `SORT_NUMERIC`, but it throws the error: `Object of class Cake\I18n\Time could not be converted to double` twice. Changing to `SORT_NATURAL`, `SORT_STRING`, `SORT_LOCALE_STRING` or leaving the sort option off will display the wrong entities based on the date. ```php $recent_uploads = $this->Users->Uploads->find('recent')->limit(10)->all(); $recent_charts = $this->Users->Clients->Charts->find('recent')->limit(10)->all(); $recent_users = $this->Users->find('recent')->limit(10)->all(); $recent = (new Collection([])) ->append($recent_charts) ->append($recent_uploads) ->append($recent_users) ->sortBy('modified', SORT_DESC, SORT_NUMERIC) ->take(10); $feed = $recent->toArray(); ``` Suppressing the error will also cause the incorrect sort order. I tried also just passing the result sets into the new collection, but the sort ordering was incorrect as well. Any help or direction would be greatly appreciated. Thanks!
1.0
Cake\I18n\Time numeric sorting in collection - Not sure if this is is a framework issue or I'm doing something wrong with the ORM. I'm querying a couple different models and using a collection to display them sorted by the `modified` property to make a makeshift news feed. This is on the latest Cake release, 3.0.9. The problem is in the sortBy method for the Collection. The expected behavior is all the various models to be in the `$recent` collection and sorted by their `modified` date descending. The only sort method that returns as expected is `SORT_NUMERIC`, but it throws the error: `Object of class Cake\I18n\Time could not be converted to double` twice. Changing to `SORT_NATURAL`, `SORT_STRING`, `SORT_LOCALE_STRING` or leaving the sort option off will display the wrong entities based on the date. ```php $recent_uploads = $this->Users->Uploads->find('recent')->limit(10)->all(); $recent_charts = $this->Users->Clients->Charts->find('recent')->limit(10)->all(); $recent_users = $this->Users->find('recent')->limit(10)->all(); $recent = (new Collection([])) ->append($recent_charts) ->append($recent_uploads) ->append($recent_users) ->sortBy('modified', SORT_DESC, SORT_NUMERIC) ->take(10); $feed = $recent->toArray(); ``` Suppressing the error will also cause the incorrect sort order. I tried also just passing the result sets into the new collection, but the sort ordering was incorrect as well. Any help or direction would be greatly appreciated. Thanks!
defect
cake time numeric sorting in collection not sure if this is is a framework issue or i m doing something wrong with the orm i m querying a couple different models and using a collection to display them sorted by the modified property to make a makeshift news feed this is on the latest cake release the problem is in the sortby method for the collection the expected behavior is all the various models to be in the recent collection and sorted by their modified date descending the only sort method that returns as expected is sort numeric but it throws the error object of class cake time could not be converted to double twice changing to sort natural sort string sort locale string or leaving the sort option off will display the wrong entities based on the date php recent uploads this users uploads find recent limit all recent charts this users clients charts find recent limit all recent users this users find recent limit all recent new collection append recent charts append recent uploads append recent users sortby modified sort desc sort numeric take feed recent toarray suppressing the error will also cause the incorrect sort order i tried also just passing the result sets into the new collection but the sort ordering was incorrect as well any help or direction would be greatly appreciated thanks
1
69,988
22,777,569,060
IssuesEvent
2022-07-08 15:52:40
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
GenerationOption.DEFAULT should act as STORED in client side computed columns
T: Defect C: Functionality P: Medium E: Professional Edition E: Enterprise Edition
When the `GenerationOption.DEFAULT` is set in client side computed columns, then nothing happens, i.e. no emulation is enabled. This is certainly not desired. The `DEFAULT` option just means that when generating DDL for server side computed columns, we don't want to generate any explicit `VIRTUAL` or `STORED` keyword, unless it is required. So, let's default to `STORED`, which is probably the more reasonable default, especially for client side computed columns, which default to `STORED` e.g. when emulating them via #13418
1.0
GenerationOption.DEFAULT should act as STORED in client side computed columns - When the `GenerationOption.DEFAULT` is set in client side computed columns, then nothing happens, i.e. no emulation is enabled. This is certainly not desired. The `DEFAULT` option just means that when generating DDL for server side computed columns, we don't want to generate any explicit `VIRTUAL` or `STORED` keyword, unless it is required. So, let's default to `STORED`, which is probably the more reasonable default, especially for client side computed columns, which default to `STORED` e.g. when emulating them via #13418
defect
generationoption default should act as stored in client side computed columns when the generationoption default is set in client side computed columns then nothing happens i e no emulation is enabled this is certainly not desired the default option just means that when generating ddl for server side computed columns we don t want to generate any explicit virtual or stored keyword unless it is required so let s default to stored which is probably the more reasonable default especially for client side computed columns which default to stored e g when emulating them via
1
129,012
17,662,870,563
IssuesEvent
2021-08-21 21:54:59
PyO3/pyo3
https://api.github.com/repos/PyO3/pyo3
closed
Feature request: raw data API for PyString
enhancement hard needs-design
PyUnicode internally stores its data in various variations. See https://docs.python.org/3/c-api/unicode.html. PyO3's `PyString` currently only allows you to get at UTF-8 / Rust `str` compatible variations of the data. rust-cpython - by contrast - exposes a `PyString.data()` returning a `PyStringData` enum: ```rust pub enum PyStringData<'a> { Latin1(&'a [u8]), Utf8(&'a [u8]), Utf16(&'a [u16]), Utf32(&'a [u32]), } ``` This API enables Rust to have access to the raw bytes backing a Python string, not the UTF-8 normalization of it (if different). PyOxidizer was relying on this API for testing. (There are some low-level tests around encoding handling that need to verify exact byte sequences and Python string representations are being handled properly.) While I'm certainly capable of using `unsafe` Python C APIs to get at the raw string data to close this feature gap, I was curious if PyO3 would be interested in a PR to expose a `PyStringData` enumeration for `PyString` instances. Here is my proposal: 1. `PyString` gains a `pub fn data(&self) -> PyStringData<'_>` 2. `PyStringData` is an enum with a variant for each internal Python string variation. 3. `PyString.data()` calls out to `PyUnicode_READY()` + `PyUnicode_{KIND, DATA, GET_LENGTH}` and constructs a `PyStringData` with a slice. I'd be willing to contribute a PR for this feature if there is interest.
1.0
Feature request: raw data API for PyString - PyUnicode internally stores its data in various variations. See https://docs.python.org/3/c-api/unicode.html. PyO3's `PyString` currently only allows you to get at UTF-8 / Rust `str` compatible variations of the data. rust-cpython - by contrast - exposes a `PyString.data()` returning a `PyStringData` enum: ```rust pub enum PyStringData<'a> { Latin1(&'a [u8]), Utf8(&'a [u8]), Utf16(&'a [u16]), Utf32(&'a [u32]), } ``` This API enables Rust to have access to the raw bytes backing a Python string, not the UTF-8 normalization of it (if different). PyOxidizer was relying on this API for testing. (There are some low-level tests around encoding handling that need to verify exact byte sequences and Python string representations are being handled properly.) While I'm certainly capable of using `unsafe` Python C APIs to get at the raw string data to close this feature gap, I was curious if PyO3 would be interested in a PR to expose a `PyStringData` enumeration for `PyString` instances. Here is my proposal: 1. `PyString` gains a `pub fn data(&self) -> PyStringData<'_>` 2. `PyStringData` is an enum with a variant for each internal Python string variation. 3. `PyString.data()` calls out to `PyUnicode_READY()` + `PyUnicode_{KIND, DATA, GET_LENGTH}` and constructs a `PyStringData` with a slice. I'd be willing to contribute a PR for this feature if there is interest.
non_defect
feature request raw data api for pystring pyunicode internally stores its data in various variations see s pystring currently only allows you to get at utf rust str compatible variations of the data rust cpython by contrast exposes a pystring data returning a pystringdata enum rust pub enum pystringdata a a a a this api enables rust to have access to the raw bytes backing a python string not the utf normalization of it if different pyoxidizer was relying on this api for testing there are some low level tests around encoding handling that need to verify exact byte sequences and python string representations are being handled properly while i m certainly capable of using unsafe python c apis to get at the raw string data to close this feature gap i was curious if would be interested in a pr to expose a pystringdata enumeration for pystring instances here is my proposal pystring gains a pub fn data self pystringdata pystringdata is an enum with a variant for each internal python string variation pystring data calls out to pyunicode ready pyunicode kind data get length and constructs a pystringdata with a slice i d be willing to contribute a pr for this feature if there is interest
0
53,999
13,302,700,934
IssuesEvent
2020-08-25 14:35:30
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
ndimage.rotate misses some values (Trac #1378)
Migrated from Trac defect prio-normal scipy.ndimage
_Original ticket http://projects.scipy.org/scipy/ticket/1378 on 2011-01-29 by trac user tmcookies, assigned to unknown._ When rotating 3d-structure elements, some values get lost. I have two cases that i can show: ``` import numpy as np from scipy.ndimage import rotate cTA = np.array([ [[-1, -1, -1], [-1, -1, -1], [-1, -1, -1]], [[ 0, 0, 0], [ 0, 1, 0], [ 0, 0, 0]], [[ 0, 0, 0], [ 0, 1, 0], [ 0, 0, 0]] ]) cTA1 = rotate(cTA, 90,(0,1)) cTA6 = rotate(cTA,180,(0,1)) cTA6_ = rotate(cTA1,90,(0,1)) print cTA6 print cTA6_ ``` The result of cTA6 and cTA6_ should be the same (rotating 180 or two times 90 should be the same), but the 180-rotation loses three values at the bottom array. The result i get is the following: ``` [[[ 0 0 0] [ 0 1 0] [ 0 0 0]] [[ 0 0 0] [ 0 1 0] [ 0 0 0]] [[-1 -1 -1] [-1 -1 -1] [ 0 0 0]]] __here are the missing '-1' __ [[[ 0 0 0] [ 0 1 0] [ 0 0 0]] [[ 0 0 0] [ 0 1 0] [ 0 0 0]] [[-1 -1 -1] [-1 -1 -1] [-1 -1 -1]]] ``` By the way: i'm running Ubuntu 10.10 32bit (yeah, they still didn't switch to 0.8)
1.0
ndimage.rotate misses some values (Trac #1378) - _Original ticket http://projects.scipy.org/scipy/ticket/1378 on 2011-01-29 by trac user tmcookies, assigned to unknown._ When rotating 3d-structure elements, some values get lost. I have two cases that i can show: ``` import numpy as np from scipy.ndimage import rotate cTA = np.array([ [[-1, -1, -1], [-1, -1, -1], [-1, -1, -1]], [[ 0, 0, 0], [ 0, 1, 0], [ 0, 0, 0]], [[ 0, 0, 0], [ 0, 1, 0], [ 0, 0, 0]] ]) cTA1 = rotate(cTA, 90,(0,1)) cTA6 = rotate(cTA,180,(0,1)) cTA6_ = rotate(cTA1,90,(0,1)) print cTA6 print cTA6_ ``` The result of cTA6 and cTA6_ should be the same (rotating 180 or two times 90 should be the same), but the 180-rotation loses three values at the bottom array. The result i get is the following: ``` [[[ 0 0 0] [ 0 1 0] [ 0 0 0]] [[ 0 0 0] [ 0 1 0] [ 0 0 0]] [[-1 -1 -1] [-1 -1 -1] [ 0 0 0]]] __here are the missing '-1' __ [[[ 0 0 0] [ 0 1 0] [ 0 0 0]] [[ 0 0 0] [ 0 1 0] [ 0 0 0]] [[-1 -1 -1] [-1 -1 -1] [-1 -1 -1]]] ``` By the way: i'm running Ubuntu 10.10 32bit (yeah, they still didn't switch to 0.8)
defect
ndimage rotate misses some values trac original ticket on by trac user tmcookies assigned to unknown when rotating structure elements some values get lost i have two cases that i can show import numpy as np from scipy ndimage import rotate cta np array rotate cta rotate cta rotate print print the result of and should be the same rotating or two times should be the same but the rotation loses three values at the bottom array the result i get is the following here are the missing by the way i m running ubuntu yeah they still didn t switch to
1
21,623
3,525,845,239
IssuesEvent
2016-01-14 00:24:43
gadLinux/hibernate-generic-dao
https://api.github.com/repos/gadLinux/hibernate-generic-dao
closed
Issue when Implementing Ehcache
auto-migrated Priority-Medium Type-Defect
``` We are using GenericDAO in our project. Now we are implementing Hibernate Secondary Cache(EhCache) with the existing code. Issue: Cannot fetch entity from Secondlevel cache. Because we have to set setCacheable(true) with each criteria query. We don't know how to set it with GenericDAO. Eg: Product product = (Product) session.createCriteria(Product.class).setCacheable(true) .add(Restrictions.eq("id", id)) .uniqueResult(); How can I do with the same with google GenericDAO? Please help us. ``` Original issue reported on code.google.com by `sonus...@gmail.com` on 27 Jun 2015 at 10:31
1.0
Issue when Implementing Ehcache - ``` We are using GenericDAO in our project. Now we are implementing Hibernate Secondary Cache(EhCache) with the existing code. Issue: Cannot fetch entity from Secondlevel cache. Because we have to set setCacheable(true) with each criteria query. We don't know how to set it with GenericDAO. Eg: Product product = (Product) session.createCriteria(Product.class).setCacheable(true) .add(Restrictions.eq("id", id)) .uniqueResult(); How can I do with the same with google GenericDAO? Please help us. ``` Original issue reported on code.google.com by `sonus...@gmail.com` on 27 Jun 2015 at 10:31
defect
issue when implementing ehcache we are using genericdao in our project now we are implementing hibernate secondary cache ehcache with the existing code issue cannot fetch entity from secondlevel cache because we have to set setcacheable true with each criteria query we don t know how to set it with genericdao eg product product product session createcriteria product class setcacheable true add restrictions eq id id uniqueresult how can i do with the same with google genericdao please help us original issue reported on code google com by sonus gmail com on jun at
1
16,646
21,710,184,265
IssuesEvent
2022-05-10 13:18:14
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Migrate errors with `The underlying table for model `_prisma_migrations` does not exist.` when using a non default PostgreSQL schema
bug/1-unconfirmed kind/bug process/candidate topic: schema team/schema topic: postgresql
### Bug description Here the project is using a non default PostgreSQL schema `error-handling-prod` (i.e. not `public`) `npx prisma migrate dev` errors with ``` Environment variables loaded from prisma/.env Prisma schema loaded from prisma/schema.prisma Datasource "db": PostgreSQL database "mydb123", schema "error-handling-prod" at "localhost:5432" PostgreSQL database mydb123 created at localhost:5432 Applying migration `20220510114355_baseline` Error: P1014 The underlying table for model `_prisma_migrations` does not exist. ``` Screenshots from TablePlus, it seems the `_prisma_migrations` was created in the `error-handling-prod` schema but Prisma Migrate tries to find it in the `public` schema where it's missing maybe? <img width="760" alt="Screen Shot 2022-05-10 at 15 15 51" src="https://user-images.githubusercontent.com/1328733/167637261-f3575bcb-8e33-4268-a29c-f94684066e25.png"> <img width="759" alt="Screen Shot 2022-05-10 at 15 15 44" src="https://user-images.githubusercontent.com/1328733/167637275-d90798f7-e5e0-4fc7-b452-131109f40a0a.png"> ### How to reproduce Clone `npx prisma migrate dev` ### Expected behavior _No response_ ### Prisma information See reproduction: ### Environment & setup - OS: macOS - Database: PostgreSQL - Node.js version: 16.10 ### Prisma Version ``` 3.14.0-dev.60 ```
1.0
Migrate errors with `The underlying table for model `_prisma_migrations` does not exist.` when using a non default PostgreSQL schema - ### Bug description Here the project is using a non default PostgreSQL schema `error-handling-prod` (i.e. not `public`) `npx prisma migrate dev` errors with ``` Environment variables loaded from prisma/.env Prisma schema loaded from prisma/schema.prisma Datasource "db": PostgreSQL database "mydb123", schema "error-handling-prod" at "localhost:5432" PostgreSQL database mydb123 created at localhost:5432 Applying migration `20220510114355_baseline` Error: P1014 The underlying table for model `_prisma_migrations` does not exist. ``` Screenshots from TablePlus, it seems the `_prisma_migrations` was created in the `error-handling-prod` schema but Prisma Migrate tries to find it in the `public` schema where it's missing maybe? <img width="760" alt="Screen Shot 2022-05-10 at 15 15 51" src="https://user-images.githubusercontent.com/1328733/167637261-f3575bcb-8e33-4268-a29c-f94684066e25.png"> <img width="759" alt="Screen Shot 2022-05-10 at 15 15 44" src="https://user-images.githubusercontent.com/1328733/167637275-d90798f7-e5e0-4fc7-b452-131109f40a0a.png"> ### How to reproduce Clone `npx prisma migrate dev` ### Expected behavior _No response_ ### Prisma information See reproduction: ### Environment & setup - OS: macOS - Database: PostgreSQL - Node.js version: 16.10 ### Prisma Version ``` 3.14.0-dev.60 ```
non_defect
migrate errors with the underlying table for model prisma migrations does not exist when using a non default postgresql schema bug description here the project is using a non default postgresql schema error handling prod i e not public npx prisma migrate dev errors with environment variables loaded from prisma env prisma schema loaded from prisma schema prisma datasource db postgresql database schema error handling prod at localhost postgresql database created at localhost applying migration baseline error the underlying table for model prisma migrations does not exist screenshots from tableplus it seems the prisma migrations was created in the error handling prod schema but prisma migrate tries to find it in the public schema where it s missing maybe img width alt screen shot at src img width alt screen shot at src how to reproduce clone npx prisma migrate dev expected behavior no response prisma information see reproduction environment setup os macos database postgresql node js version prisma version dev
0
11,097
2,632,749,202
IssuesEvent
2015-03-08 13:32:10
simonsteele/pn
https://api.github.com/repos/simonsteele/pn
closed
BIG Memory leak when refreshing files by answering yes to "refresh and lose changes"
Component-Logic Priority-Medium Type-Defect
Original [issue 35](https://code.google.com/p/pnotepad/issues/detail?id=35) created by simonsteele on 2008-04-15T08:08:08.000Z: <b>What steps will reproduce the problem?</b> 1. Open large document (about 5MB)in PN. Use task manager to see memory usage. 2. Open document in notepad and add a line. Save. 3. In PN: Answer yes to refresh and lose changes. <b>What is the expected output? What do you see instead?</b> Expected output: Memory usage by pn.exe increase by a few bytes Instead: Memory usage of pn.exe, as reported by task manager, has increased by several megabytes! <b>What version of the product are you using? On what operating system?</b> WinXP. Tested on both 2.0.7.706-devel 2.0.8.718 with same result <b>Please provide any additional information below.</b>
1.0
BIG Memory leak when refreshing files by answering yes to "refresh and lose changes" - Original [issue 35](https://code.google.com/p/pnotepad/issues/detail?id=35) created by simonsteele on 2008-04-15T08:08:08.000Z: <b>What steps will reproduce the problem?</b> 1. Open large document (about 5MB)in PN. Use task manager to see memory usage. 2. Open document in notepad and add a line. Save. 3. In PN: Answer yes to refresh and lose changes. <b>What is the expected output? What do you see instead?</b> Expected output: Memory usage by pn.exe increase by a few bytes Instead: Memory usage of pn.exe, as reported by task manager, has increased by several megabytes! <b>What version of the product are you using? On what operating system?</b> WinXP. Tested on both 2.0.7.706-devel 2.0.8.718 with same result <b>Please provide any additional information below.</b>
defect
big memory leak when refreshing files by answering yes to refresh and lose changes original created by simonsteele on what steps will reproduce the problem open large document about in pn use task manager to see memory usage open document in notepad and add a line save in pn answer yes to refresh and lose changes what is the expected output what do you see instead expected output memory usage by pn exe increase by a few bytes instead memory usage of pn exe as reported by task manager has increased by several megabytes what version of the product are you using on what operating system winxp tested on both devel with same result please provide any additional information below
1
59,095
11,944,281,697
IssuesEvent
2020-04-03 01:57:28
azriel91/autexousious
https://api.github.com/repos/azriel91/autexousious
closed
Ongoing Code Maintenance
M: code
Tasks: * ~~Update Rust nightly and clippy~~ For gitlab runners, run a [gitlab runner setup pipeline](https://gitlab.com/azriel91/gitlab_runner_setup/pipelines). Rustfmt and Clippy are now installed on Linux Rust nightly as part of the gitlab runner setup. * [x] Update dependencies ```bash # Runs rustfmt, cargo update, and cargo outdated cargo make --no-workspace maintain ``` * ~~Remove `TODO`s~~ * [x] Run clippy (stable). * [x] Run `cargo udeps` and remove unused crates. * ~~Code coverage -- add tests / `kcov-ignore`s.~~
1.0
Ongoing Code Maintenance - Tasks: * ~~Update Rust nightly and clippy~~ For gitlab runners, run a [gitlab runner setup pipeline](https://gitlab.com/azriel91/gitlab_runner_setup/pipelines). Rustfmt and Clippy are now installed on Linux Rust nightly as part of the gitlab runner setup. * [x] Update dependencies ```bash # Runs rustfmt, cargo update, and cargo outdated cargo make --no-workspace maintain ``` * ~~Remove `TODO`s~~ * [x] Run clippy (stable). * [x] Run `cargo udeps` and remove unused crates. * ~~Code coverage -- add tests / `kcov-ignore`s.~~
non_defect
ongoing code maintenance tasks update rust nightly and clippy for gitlab runners run a rustfmt and clippy are now installed on linux rust nightly as part of the gitlab runner setup update dependencies bash runs rustfmt cargo update and cargo outdated cargo make no workspace maintain remove todo s run clippy stable run cargo udeps and remove unused crates code coverage add tests kcov ignore s
0
249,741
18,858,232,223
IssuesEvent
2021-11-12 09:31:57
sebbycake/pe
https://api.github.com/repos/sebbycake/pe
opened
Unnecessary repetition of information in creating a group
severity.VeryLow type.DocumentationBug
![image.png](https://raw.githubusercontent.com/sebbycake/pe/main/files/0cfebd79-3335-4a35-b009-72a2152fea62.png) Optional description is already mentioned in the previous sentence, hence there's repetition <!--session: 1636704596119-e7104d61-fc70-458b-a135-3434d542885b--> <!--Version: Web v3.4.1-->
1.0
Unnecessary repetition of information in creating a group - ![image.png](https://raw.githubusercontent.com/sebbycake/pe/main/files/0cfebd79-3335-4a35-b009-72a2152fea62.png) Optional description is already mentioned in the previous sentence, hence there's repetition <!--session: 1636704596119-e7104d61-fc70-458b-a135-3434d542885b--> <!--Version: Web v3.4.1-->
non_defect
unnecessary repetition of information in creating a group optional description is already mentioned in the previous sentence hence there s repetition
0
240,643
20,053,562,810
IssuesEvent
2022-02-03 09:38:19
dask/distributed
https://api.github.com/repos/dask/distributed
reopened
Flay test test_worker_reconnects_mid_compute*
flaky test
The two tests seem to fail occasionally on windows `test_worker_reconnects_mid_compute_multiple_states_on_scheduler` `test_worker_reconnects_mid_compute` https://github.com/dask/distributed/runs/4556308866?check_suite_focus=true [windows-latest-3.8-ci1-timeouts.zip](https://github.com/dask/distributed/files/7752878/windows-latest-3.8-ci1-timeouts.zip)
1.0
Flay test test_worker_reconnects_mid_compute* - The two tests seem to fail occasionally on windows `test_worker_reconnects_mid_compute_multiple_states_on_scheduler` `test_worker_reconnects_mid_compute` https://github.com/dask/distributed/runs/4556308866?check_suite_focus=true [windows-latest-3.8-ci1-timeouts.zip](https://github.com/dask/distributed/files/7752878/windows-latest-3.8-ci1-timeouts.zip)
non_defect
flay test test worker reconnects mid compute the two tests seem to fail occasionally on windows test worker reconnects mid compute multiple states on scheduler test worker reconnects mid compute
0
49,713
13,187,255,192
IssuesEvent
2020-08-13 02:50:17
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
[dst/filterscripts] DSTExtractor setting SubEventID to 0 in L2 (Trac #1915)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1915">https://code.icecube.wisc.edu/ticket/1915</a>, reported by david.schultz and owned by juancarlos</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:13:30", "description": "Apparently DSTExtractor resets the event header, including the SubEventID. So all the SubEventIDs in L2 are 0.\n\nHow it happens:\nin the segment dst.extractor.ExtractDST:\n extract_to_frame = True\nthis causes the header to be overwritten.\n\nMaybe we should set this to False in filterscripts? Not sure what else it controls though.", "reporter": "david.schultz", "cc": "claudio.kopper", "resolution": "fixed", "_ts": "1550067210114669", "component": "combo reconstruction", "summary": "[dst/filterscripts] DSTExtractor setting SubEventID to 0 in L2", "priority": "critical", "keywords": "", "time": "2016-11-29T22:22:46", "milestone": "", "owner": "juancarlos", "type": "defect" } ``` </p> </details>
1.0
[dst/filterscripts] DSTExtractor setting SubEventID to 0 in L2 (Trac #1915) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1915">https://code.icecube.wisc.edu/ticket/1915</a>, reported by david.schultz and owned by juancarlos</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:13:30", "description": "Apparently DSTExtractor resets the event header, including the SubEventID. So all the SubEventIDs in L2 are 0.\n\nHow it happens:\nin the segment dst.extractor.ExtractDST:\n extract_to_frame = True\nthis causes the header to be overwritten.\n\nMaybe we should set this to False in filterscripts? Not sure what else it controls though.", "reporter": "david.schultz", "cc": "claudio.kopper", "resolution": "fixed", "_ts": "1550067210114669", "component": "combo reconstruction", "summary": "[dst/filterscripts] DSTExtractor setting SubEventID to 0 in L2", "priority": "critical", "keywords": "", "time": "2016-11-29T22:22:46", "milestone": "", "owner": "juancarlos", "type": "defect" } ``` </p> </details>
defect
dstextractor setting subeventid to in trac migrated from json status closed changetime description apparently dstextractor resets the event header including the subeventid so all the subeventids in are n nhow it happens nin the segment dst extractor extractdst n extract to frame true nthis causes the header to be overwritten n nmaybe we should set this to false in filterscripts not sure what else it controls though reporter david schultz cc claudio kopper resolution fixed ts component combo reconstruction summary dstextractor setting subeventid to in priority critical keywords time milestone owner juancarlos type defect
1
40,843
10,179,904,278
IssuesEvent
2019-08-09 08:58:47
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
ColumnToggler: Keyboard trap on Nova and Luna
6.2.23 7.0.6 defect
## Summary ColumnToggler suffers a keyboard trap. When focusing checkboxes of this component with the keyboard you are not able to access other elements on the page. - Navigation with TAB is restricted to the checkboxes - It is not possible to leave the overlay with the keyboard - You can not close the overlay with the keyboard This can be reproduced in the [showcase](https://www.primefaces.org/showcase/ui/data/datatable/columnToggler.xhtml). More details on the WCAG success criteria can be found here: https://www.w3.org/TR/UNDERSTANDING-WCAG20/keyboard-operation-trapping.html ## 1) Environment - It does not work with 7.0.4 - It does not work in the showcase: https://www.primefaces.org/showcase/ui/data/datatable/columnToggler.xhtml ## 2) Expected behavior - As a keyboard user I would like to leave the ColumnToggler with the keyboard - As a keyboard user I would like to close the ColumnToggler overlay ## 3) Actual behavior - Keyboard trap ## 4) Steps to reproduce - Navigate to the ColumnToggler checkboxes with the keyboard - Try to leave the component with the keyboard only ## 5) Sample XHTML - Please refer to https://www.primefaces.org/showcase/ui/data/datatable/columnToggler.xhtml ## 6) Sample bean - Please refer to https://www.primefaces.org/showcase/ui/data/datatable/columnToggler.xhtml
1.0
ColumnToggler: Keyboard trap on Nova and Luna - ## Summary ColumnToggler suffers a keyboard trap. When focusing checkboxes of this component with the keyboard you are not able to access other elements on the page. - Navigation with TAB is restricted to the checkboxes - It is not possible to leave the overlay with the keyboard - You can not close the overlay with the keyboard This can be reproduced in the [showcase](https://www.primefaces.org/showcase/ui/data/datatable/columnToggler.xhtml). More details on the WCAG success criteria can be found here: https://www.w3.org/TR/UNDERSTANDING-WCAG20/keyboard-operation-trapping.html ## 1) Environment - It does not work with 7.0.4 - It does not work in the showcase: https://www.primefaces.org/showcase/ui/data/datatable/columnToggler.xhtml ## 2) Expected behavior - As a keyboard user I would like to leave the ColumnToggler with the keyboard - As a keyboard user I would like to close the ColumnToggler overlay ## 3) Actual behavior - Keyboard trap ## 4) Steps to reproduce - Navigate to the ColumnToggler checkboxes with the keyboard - Try to leave the component with the keyboard only ## 5) Sample XHTML - Please refer to https://www.primefaces.org/showcase/ui/data/datatable/columnToggler.xhtml ## 6) Sample bean - Please refer to https://www.primefaces.org/showcase/ui/data/datatable/columnToggler.xhtml
defect
columntoggler keyboard trap on nova and luna summary columntoggler suffers a keyboard trap when focusing checkboxes of this component with the keyboard you are not able to access other elements on the page navigation with tab is restricted to the checkboxes it is not possible to leave the overlay with the keyboard you can not close the overlay with the keyboard this can be reproduced in the more details on the wcag success criteria can be found here environment it does not work with it does not work in the showcase expected behavior as a keyboard user i would like to leave the columntoggler with the keyboard as a keyboard user i would like to close the columntoggler overlay actual behavior keyboard trap steps to reproduce navigate to the columntoggler checkboxes with the keyboard try to leave the component with the keyboard only sample xhtml please refer to sample bean please refer to
1
319,508
9,745,259,573
IssuesEvent
2019-06-03 09:12:03
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
accounts.google.com - see bug description
browser-firefox engine-gecko priority-critical
<!-- @browser: Firefox 65.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://accounts.google.com/signin/v2/identifier?service=youtube&uilel=3&passive=true&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Den%26next%3D%252F&hl=en&flowName=GlifWebSignIn&flowEntry=ServiceLogin **Browser / Version**: Firefox 65.0 **Operating System**: Windows 10 **Tested Another Browser**: No **Problem type**: Something else **Description**: can not sign in unless i am on private **Steps to Reproduce**: same as above [![Screenshot Description](https://webcompat.com/uploads/2019/6/fa8a3d3a-8716-4c98-a4dc-0d26f6a9506c-thumb.jpeg)](https://webcompat.com/uploads/2019/6/fa8a3d3a-8716-4c98-a4dc-0d26f6a9506c.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190211233335</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: release</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.google.com/ServiceLogin?service=youtube&uilel=3&passive=true&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Den%26next%3D%252F&hl=en" line: 1}]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." 
{file: "https://accounts.google.com/ServiceLogin?service=youtube&uilel=3&passive=true&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Den%26next%3D%252F&hl=en" line: 1}]', u'[JavaScript Warning: "Content Security Policy: Ignoring x-frame-options because of frame-ancestors directive."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.youtube.com/accounts/CheckConnection?pmpo=https%3A%2F%2Faccounts.google.com&v=1628305952&timestamp=1559368830050" line: 1}]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.youtube.com/accounts/CheckConnection?pmpo=https%3A%2F%2Faccounts.google.com&v=1628305952&timestamp=1559368830050" line: 1}]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.youtube.com/accounts/CheckConnection?pmpo=https%3A%2F%2Faccounts.google.com&v=1628305952&timestamp=1559368830050" line: 1}]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.youtube.com/accounts/CheckConnection?pmpo=https%3A%2F%2Faccounts.google.com&v=1628305952&timestamp=1559368830050" line: 1}]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
accounts.google.com - see bug description - <!-- @browser: Firefox 65.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://accounts.google.com/signin/v2/identifier?service=youtube&uilel=3&passive=true&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Den%26next%3D%252F&hl=en&flowName=GlifWebSignIn&flowEntry=ServiceLogin **Browser / Version**: Firefox 65.0 **Operating System**: Windows 10 **Tested Another Browser**: No **Problem type**: Something else **Description**: can not sign in unless i am on private **Steps to Reproduce**: same as above [![Screenshot Description](https://webcompat.com/uploads/2019/6/fa8a3d3a-8716-4c98-a4dc-0d26f6a9506c-thumb.jpeg)](https://webcompat.com/uploads/2019/6/fa8a3d3a-8716-4c98-a4dc-0d26f6a9506c.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190211233335</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: release</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.google.com/ServiceLogin?service=youtube&uilel=3&passive=true&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Den%26next%3D%252F&hl=en" line: 1}]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." 
{file: "https://accounts.google.com/ServiceLogin?service=youtube&uilel=3&passive=true&continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Den%26next%3D%252F&hl=en" line: 1}]', u'[JavaScript Warning: "Content Security Policy: Ignoring x-frame-options because of frame-ancestors directive."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.youtube.com/accounts/CheckConnection?pmpo=https%3A%2F%2Faccounts.google.com&v=1628305952&timestamp=1559368830050" line: 1}]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.youtube.com/accounts/CheckConnection?pmpo=https%3A%2F%2Faccounts.google.com&v=1628305952&timestamp=1559368830050" line: 1}]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.youtube.com/accounts/CheckConnection?pmpo=https%3A%2F%2Faccounts.google.com&v=1628305952&timestamp=1559368830050" line: 1}]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://accounts.youtube.com/accounts/CheckConnection?pmpo=https%3A%2F%2Faccounts.google.com&v=1628305952&timestamp=1559368830050" line: 1}]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
accounts google com see bug description url browser version firefox operating system windows tested another browser no problem type something else description can not sign in unless i am on private steps to reproduce same as above browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel release console messages u u u u u u u u from with ❤️
0
44,666
5,638,839,963
IssuesEvent
2017-04-06 13:02:22
redhat-ipaas/ipaas-ui
https://api.github.com/repos/redhat-ipaas/ipaas-ui
closed
create connection e2e test
testing
this should be part of `@smoke` tests together with create integration and others
1.0
create connection e2e test - this should be part of `@smoke` tests together with create integration and others
non_defect
create connection test this should be part of smoke tests together with create integration and others
0
24,132
3,917,074,252
IssuesEvent
2016-04-21 06:25:09
irnawansuprapti/openbiz-cubi
https://api.github.com/repos/irnawansuprapti/openbiz-cubi
closed
Skin Cancer Relating to Tanning
auto-migrated Priority-Medium spam Type-Defect
``` We are conversation around avoiding too some sun danger. Although a worthy tan can micturate us face sexier and sunshine does meliorate in providing vitamin D to the cells of our body, too much sun is real bad for our rind.If we satisfy too untold in the sun without infliction we present somebody prematurely aging peel for careful. If you get to output in the sun achieve trusty that you use white emollient lotion together with crapulence sufficiency food.All of these tips gift aid you a lot in duty your rind fair and you should never overlook them. By but pursuing them you will observation improvements and you gift be on your way towards that keen perception pare you require. http://www.skinphysiciantips.com/instant-wrinkle-repair/ ``` Original issue reported on code.google.com by `CleoHo...@gmail.com` on 15 Apr 2015 at 6:54
1.0
Skin Cancer Relating to Tanning - ``` We are conversation around avoiding too some sun danger. Although a worthy tan can micturate us face sexier and sunshine does meliorate in providing vitamin D to the cells of our body, too much sun is real bad for our rind.If we satisfy too untold in the sun without infliction we present somebody prematurely aging peel for careful. If you get to output in the sun achieve trusty that you use white emollient lotion together with crapulence sufficiency food.All of these tips gift aid you a lot in duty your rind fair and you should never overlook them. By but pursuing them you will observation improvements and you gift be on your way towards that keen perception pare you require. http://www.skinphysiciantips.com/instant-wrinkle-repair/ ``` Original issue reported on code.google.com by `CleoHo...@gmail.com` on 15 Apr 2015 at 6:54
defect
skin cancer relating to tanning we are conversation around avoiding too some sun danger although a worthy tan can micturate us face sexier and sunshine does meliorate in providing vitamin d to the cells of our body too much sun is real bad for our rind if we satisfy too untold in the sun without infliction we present somebody prematurely aging peel for careful if you get to output in the sun achieve trusty that you use white emollient lotion together with crapulence sufficiency food all of these tips gift aid you a lot in duty your rind fair and you should never overlook them by but pursuing them you will observation improvements and you gift be on your way towards that keen perception pare you require original issue reported on code google com by cleoho gmail com on apr at
1
24,134
3,917,074,266
IssuesEvent
2016-04-21 06:25:09
irnawansuprapti/openbiz-cubi
https://api.github.com/repos/irnawansuprapti/openbiz-cubi
closed
Skin Tightening
auto-migrated Priority-Medium spam Type-Defect
``` switch and slowly roasting them to a scrumptious lobster red. With the sun?s drive sunbaked by the Winter, group tend to forget near the dangers of UV, believe that the cloud overcompensate or down temperatures mean that their skin is innocuous, that there?s no chance of fervent or worsened.This is criminal, and since each motion to hear almost the realities of UV soft, and how we can somebody protect ourselves from its pervasive danger.What is UV illuminating? Just put, it?s electromagnetic therapy with a wavelength that?s shorter than that of telescopic perch, and person than x rays.Perceptible loose, UV rays, x rays?all of these are retributive obloquy for diverse wavelengths of electromagnetic rays. http://www.skinphysiciantips.com/instant-wrinkle-repair/ ``` Original issue reported on code.google.com by `CleoHo...@gmail.com` on 15 Apr 2015 at 7:27
1.0
Skin Tightening - ``` switch and slowly roasting them to a scrumptious lobster red. With the sun's drive sunbaked by the Winter, group tend to forget near the dangers of UV, believe that the cloud overcompensate or down temperatures mean that their skin is innocuous, that there's no chance of fervent or worsened.This is criminal, and since each motion to hear almost the realities of UV soft, and how we can somebody protect ourselves from its pervasive danger.What is UV illuminating? Just put, it's electromagnetic therapy with a wavelength that's shorter than that of telescopic perch, and person than x rays.Perceptible loose, UV rays, x rays: all of these are retributive obloquy for diverse wavelengths of electromagnetic rays. http://www.skinphysiciantips.com/instant-wrinkle-repair/ ``` Original issue reported on code.google.com by `CleoHo...@gmail.com` on 15 Apr 2015 at 7:27
defect
skin tightening switch and slowly roasting them to a scrumptious lobster red with the sun s drive sunbaked by the winter group tend to forget near the dangers of uv believe that the cloud overcompensate or down temperatures mean that their skin is innocuous that there s no chance of fervent or worsened this is criminal and since each motion to hear almost the realities of uv soft and how we can somebody protect ourselves from its pervasive danger what is uv illuminating just put it s electromagnetic therapy with a wavelength that s shorter than that of telescopic perch and person than x rays perceptible loose uv rays x rays all of these are retributive obloquy for diverse wavelengths of electromagnetic rays original issue reported on code google com by cleoho gmail com on apr at
1
515,284
14,959,095,668
IssuesEvent
2021-01-27 02:20:30
codeforsanjose/gov-agenda-notifier
https://api.github.com/repos/codeforsanjose/gov-agenda-notifier
closed
Hook up with Admin Agenda Items Table with GraphQL
Low Priority
Hook up with Admin Agenda Items Table with GraphQL
1.0
Hook up with Admin Agenda Items Table with GraphQL - Hook up with Admin Agenda Items Table with GraphQL
non_defect
hook up with admin agenda items table with graphql hook up with admin agenda items table with graphql
0
68,858
21,929,539,488
IssuesEvent
2022-05-23 08:31:38
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
closed
Change of Number at the beginning of a message
T-Defect
### Steps to reproduce Write any date in the Element desktop programm, which converts it into a list. Following Screenshot shows the message displayed in the desktop app: ![Bug report](https://user-images.githubusercontent.com/105990583/169666087-bff6c731-35c6-456d-89dd-592b69a3a448.png) Now look at the message in the android app. ### Outcome #### What did you expect? The same message as it is displayed in the desktop programm. #### What happened instead? The number got converted/rearranged. Following screenshot shows the message displayed in the android app: ![Bug Report 2](https://user-images.githubusercontent.com/105990583/169666262-899ef3e8-ee1a-42d5-8a91-8f1279261372.jpg) The bug was found when i wrote a date and accidentally started a list (eg.: 13. May is my birthday), which was then converted and displayed as a completely diffierent date in the android app. So all of my friends now think that my birthday is at 1. May :/ ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store _No response_ ### Homeserver _No response_ ### Will you send logs? No
1.0
Change of Number at the beginning of a message - ### Steps to reproduce Write any date in the Element desktop programm, which converts it into a list. Following Screenshot shows the message displayed in the desktop app: ![Bug report](https://user-images.githubusercontent.com/105990583/169666087-bff6c731-35c6-456d-89dd-592b69a3a448.png) Now look at the message in the android app. ### Outcome #### What did you expect? The same message as it is displayed in the desktop programm. #### What happened instead? The number got converted/rearranged. Following screenshot shows the message displayed in the android app: ![Bug Report 2](https://user-images.githubusercontent.com/105990583/169666262-899ef3e8-ee1a-42d5-8a91-8f1279261372.jpg) The bug was found when i wrote a date and accidentally started a list (eg.: 13. May is my birthday), which was then converted and displayed as a completely diffierent date in the android app. So all of my friends now think that my birthday is at 1. May :/ ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store _No response_ ### Homeserver _No response_ ### Will you send logs? No
defect
change of number at the beginning of a message steps to reproduce write any date in the element desktop programm which converts it into a list following screenshot shows the message displayed in the desktop app now look at the message in the android app outcome what did you expect the same message as it is displayed in the desktop programm what happened instead the number got converted rearranged following screenshot shows the message displayed in the android app the bug was found when i wrote a date and accidentally started a list eg may is my birthday which was then converted and displayed as a completely diffierent date in the android app so all of my friends now think that my birthday is at may your phone model no response operating system version no response application version and app store no response homeserver no response will you send logs no
1
8,190
2,611,470,169
IssuesEvent
2015-02-27 05:15:14
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
"complicated" Hand-drawn maps result in protocol violation
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Choose Hand drawn Map 2. Draw a bit more complicated map 3. Click back What is the expected output? What do you see instead? The map should be accepted and seen in the preview by others. "Protocol violation" happens and the client crashes. What version of the product are you using? On what operating system? 0.9.15 Ubuntu 10.04, 64 bit ``` Original issue reported on code.google.com by `tobias.s...@gmail.com` on 3 Mar 2011 at 7:07
1.0
"complicated" Hand-drawn maps result in protocol violation - ``` What steps will reproduce the problem? 1. Choose Hand drawn Map 2. Draw a bit more complicated map 3. Click back What is the expected output? What do you see instead? The map should be accepted and seen in the preview by others. "Protocol violation" happens and the client crashes. What version of the product are you using? On what operating system? 0.9.15 Ubuntu 10.04, 64 bit ``` Original issue reported on code.google.com by `tobias.s...@gmail.com` on 3 Mar 2011 at 7:07
defect
complicated hand drawn maps result in protocol violation what steps will reproduce the problem choose hand drawn map draw a bit more complicated map click back what is the expected output what do you see instead the map should be accepted and seen in the preview by others protocol violation happens and the client crashes what version of the product are you using on what operating system ubuntu bit original issue reported on code google com by tobias s gmail com on mar at
1
223,615
17,117,287,987
IssuesEvent
2021-07-11 16:12:27
nextauthjs/next-auth
https://api.github.com/repos/nextauthjs/next-auth
closed
Add missing error explanations to docs
documentation good first issue
Currently, the [Errors documentation page](https://next-auth.js.org/errors) has missing/empty sections all over. We should document when these errors occur and describe how to resolve them, if possible.
1.0
Add missing error explanations to docs - Currently, the [Errors documentation page](https://next-auth.js.org/errors) has missing/empty sections all over. We should document when these errors occur and describe how to resolve them, if possible.
non_defect
add missing error explanations to docs currently the has missing empty sections all over we should document when these errors occur and describe how to resolve them if possible
0
80,916
30,595,826,436
IssuesEvent
2023-07-21 21:56:54
zed-industries/community
https://api.github.com/repos/zed-industries/community
opened
Removing empty brackets
defect triage admin read
### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it Language: TSX If I add an empty React props using **intellisense** and then remove the first brackets, only the first bracket is removed. If I create empty brackets manually and then remove the first bracket, both brackets are correctly removed. The same use case can be applied to "". ### Environment Zed: v0.95.3 (stable) OS: macOS 13.4.1 Memory: 16 GiB Architecture: x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature https://github.com/zed-industries/community/assets/7671531/277128d6-2d8f-4694-85e1-575932513477 ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
1.0
Removing empty brackets - ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it Language: TSX If I add an empty React props using **intellisense** and then remove the first brackets, only the first bracket is removed. If I create empty brackets manually and then remove the first bracket, both brackets are correctly removed. The same use case can be applied to "". ### Environment Zed: v0.95.3 (stable) OS: macOS 13.4.1 Memory: 16 GiB Architecture: x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature https://github.com/zed-industries/community/assets/7671531/277128d6-2d8f-4694-85e1-575932513477 ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
defect
removing empty brackets check for existing issues completed describe the bug provide steps to reproduce it language tsx if i add an empty react props using intellisense and then remove the first brackets only the first bracket is removed if i create empty brackets manually and then remove the first bracket both brackets are correctly removed the same use case can be applied to environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
1
131,858
28,042,128,538
IssuesEvent
2023-03-28 19:23:43
FrancescoXX/4c-site
https://api.github.com/repos/FrancescoXX/4c-site
closed
[Bug]: Project Card Styling Issue
🛠 goal: fix 💻 aspect: code 🤖 aspect: dx
### Description Styling Issue with the Project card, where the "Repo" and "View" button are placed asymmetrical with respect to the project card. ![4C-Project-card-centering-issue](https://user-images.githubusercontent.com/83387409/228032818-a432c25f-6942-4718-a243-72a6eedaab36.png) ### To Reproduce 1. Go to https://www.4c.rocks/ 2. Click on 'Projects' 3. Centering issue with the "Repo" and "View" button can be seen with respect to the cards ### Anything else? _No response_ ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/FrancescoXX/4c-site/blob/main/CODE_OF_CONDUCT.md)
1.0
[Bug]: Project Card Styling Issue - ### Description Styling Issue with the Project card, where the "Repo" and "View" button are placed asymmetrical with respect to the project card. ![4C-Project-card-centering-issue](https://user-images.githubusercontent.com/83387409/228032818-a432c25f-6942-4718-a243-72a6eedaab36.png) ### To Reproduce 1. Go to https://www.4c.rocks/ 2. Click on 'Projects' 3. Centering issue with the "Repo" and "View" button can be seen with respect to the cards ### Anything else? _No response_ ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/FrancescoXX/4c-site/blob/main/CODE_OF_CONDUCT.md)
non_defect
project card styling issue description styling issue with the project card where the repo and view button are placed asymmetrical with respect to the project card to reproduce go to click on projects centering issue with the repo and view button can be seen with respect to the cards anything else no response code of conduct i agree to follow this project s
0
361,155
25,327,980,925
IssuesEvent
2022-11-18 10:51:59
scs/substrate-api-client
https://api.github.com/repos/scs/substrate-api-client
closed
Document breaking changes
F9-question F4-documentation Q3-substantial
Currently, there is no documentation or change log that makes any breaking changes from one commit to another visible. To fix this I'm proposing the following: - Add a tagging scheme according to the polkadot verison (because the api-client is following the substrate releases already with branches, adding a tag would not cause much additional effort). - Similar to the worker, CI could create a release log that emphasizes the label `E1-breaksapi`. As this would probably affect the work with the worker, what do you think about that @clangenb @OverOrion ?
1.0
Document breaking changes - Currently, there is no documentation or change log that makes any breaking changes from one commit to another visible. To fix this I'm proposing the following: - Add a tagging scheme according to the polkadot verison (because the api-client is following the substrate releases already with branches, adding a tag would not cause much additional effort). - Similar to the worker, CI could create a release log that emphasizes the label `E1-breaksapi`. As this would probably affect the work with the worker, what do you think about that @clangenb @OverOrion ?
non_defect
document breaking changes currently there is no documentation or change log that makes any breaking changes from one commit to another visible to fix this i m proposing the following add a tagging scheme according to the polkadot verison because the api client is following the substrate releases already with branches adding a tag would not cause much additional effort similar to the worker ci could create a release log that emphasizes the label breaksapi as this would probably affect the work with the worker what do you think about that clangenb overorion
0
269,378
8,435,258,203
IssuesEvent
2018-10-17 12:42:02
CS2113-AY1819S1-T12-2/main
https://api.github.com/repos/CS2113-AY1819S1-T12-2/main
opened
Implement different predicates for different categories
priority.high type.task v1.2
- Name - Phone - Email - Address - Tag
1.0
Implement different predicates for different categories - - Name - Phone - Email - Address - Tag
non_defect
implement different predicates for different categories name phone email address tag
0
77,096
26,770,708,414
IssuesEvent
2023-01-31 13:56:52
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Space panel avatar colour and alignment regressions
T-Defect X-Regression S-Major X-Release-Blocker A-Spaces A11y O-Frequent
### Steps to reproduce 1. Enable light mode 2. Have some spaces without image 3. Look at the space panel ### Outcome #### What did you expect? Dark grey letters on a light grey background #### What happened instead? Dark grey letters on coloured background. Hard to read ![image](https://user-images.githubusercontent.com/6216686/215510365-594a1bf5-ff55-4f48-81fc-c828c1b68340.png) ### Operating system Ubuntu 22.04.1 LTS ### Browser information Firefox 109.0.1 ### URL for webapp https://develop.element.io/ ### Application version Element version: 50f2b532e907-react-3e2bf5640e17-js-c142232f4d13 Olm version: 3.2.12 ### Homeserver matrix.org ### Will you send logs? No
1.0
Space panel avatar colour and alignment regressions - ### Steps to reproduce 1. Enable light mode 2. Have some spaces without image 3. Look at the space panel ### Outcome #### What did you expect? Dark grey letters on a light grey background #### What happened instead? Dark grey letters on coloured background. Hard to read ![image](https://user-images.githubusercontent.com/6216686/215510365-594a1bf5-ff55-4f48-81fc-c828c1b68340.png) ### Operating system Ubuntu 22.04.1 LTS ### Browser information Firefox 109.0.1 ### URL for webapp https://develop.element.io/ ### Application version Element version: 50f2b532e907-react-3e2bf5640e17-js-c142232f4d13 Olm version: 3.2.12 ### Homeserver matrix.org ### Will you send logs? No
defect
space panel avatar colour and alignment regressions steps to reproduce enable light mode have some spaces without image look at the space panel outcome what did you expect dark grey letters on a light grey background what happened instead dark grey letters on coloured background hard to read operating system ubuntu lts browser information firefox url for webapp application version element version react js olm version homeserver matrix org will you send logs no
1
134,805
30,190,981,526
IssuesEvent
2023-07-04 15:21:33
astro-informatics/sopt
https://api.github.com/repos/astro-informatics/sopt
closed
Identify and cleanup multiple consecutive namespace closing and reopening within same code/header file
code-quality
There are instances such as [`cpp/sopt/objective_functions.h`](https://github.com/astro-informatics/sopt/pull/368/files#diff-5b4764c334510006d37fe6edc4927a7a862feb21db2b77689c734a005f2a6053) where a namespace is closed and immediately reopened. This issue is for identifying and cleaning up such redundancies in the codebase, discussed in #370. Shall be closed by #392. The offending files can be searched for using ripgrep's multiline regex: ``` rg -Ul '^\} *\/\/ *\bnamespace\b(.*)$\n*(?:^ *$)*^ *\bnamespace\b *\1' cpp/ ```
1.0
Identify and cleanup multiple consecutive namespace closing and reopening within same code/header file - There are instances such as [`cpp/sopt/objective_functions.h`](https://github.com/astro-informatics/sopt/pull/368/files#diff-5b4764c334510006d37fe6edc4927a7a862feb21db2b77689c734a005f2a6053) where a namespace is closed and immediately reopened. This issue is for identifying and cleaning up such redundancies in the codebase, discussed in #370. Shall be closed by #392. The offending files can be searched for using ripgrep's multiline regex: ``` rg -Ul '^\} *\/\/ *\bnamespace\b(.*)$\n*(?:^ *$)*^ *\bnamespace\b *\1' cpp/ ```
non_defect
identify and cleanup multiple consecutive namespace closing and reopening within same code header file there are instances such as where a namespace is closed and immediately reopened this issue is for identifying and cleaning up such redundancies in the codebase discussed in shall be closed by the offending files can be searched for using ripgrep s multiline regex rg ul bnamespace b n bnamespace b cpp
0
216,681
24,287,896,691
IssuesEvent
2022-09-29 01:11:14
Nivaskumark/kernel_v4.19.72_old
https://api.github.com/repos/Nivaskumark/kernel_v4.19.72_old
opened
CVE-2022-3303 (Medium) detected in linuxlinux-4.19.83
security vulnerability
## CVE-2022-3303 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/ce49083a1c14be2d13cb5e878257d293e6c748bc">ce49083a1c14be2d13cb5e878257d293e6c748bc</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/core/oss/pcm_oss.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/core/oss/pcm_oss.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A race condition flaw was found in the Linux kernel sound subsystem due to improper locking. It could lead to a NULL pointer dereference while handling the SNDCTL_DSP_SYNC ioctl. 
A privileged local user (root or member of the audio group) could use this flaw to crash the system, resulting in a denial of service condition <p>Publish Date: 2022-09-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3303>CVE-2022-3303</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-3303">https://www.linuxkernelcves.com/cves/CVE-2022-3303</a></p> <p>Release Date: 2022-09-27</p> <p>Fix Resolution: v5.15.68,v5.19.9,v6.0-rc5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-3303 (Medium) detected in linuxlinux-4.19.83 - ## CVE-2022-3303 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/ce49083a1c14be2d13cb5e878257d293e6c748bc">ce49083a1c14be2d13cb5e878257d293e6c748bc</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/core/oss/pcm_oss.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/core/oss/pcm_oss.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A race condition flaw was found in the Linux kernel sound subsystem due to improper locking. It could lead to a NULL pointer dereference while handling the SNDCTL_DSP_SYNC ioctl. 
A privileged local user (root or member of the audio group) could use this flaw to crash the system, resulting in a denial of service condition <p>Publish Date: 2022-09-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3303>CVE-2022-3303</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-3303">https://www.linuxkernelcves.com/cves/CVE-2022-3303</a></p> <p>Release Date: 2022-09-27</p> <p>Fix Resolution: v5.15.68,v5.19.9,v6.0-rc5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files sound core oss pcm oss c sound core oss pcm oss c vulnerability details a race condition flaw was found in the linux kernel sound subsystem due to improper locking it could lead to a null pointer dereference while handling the sndctl dsp sync ioctl a privileged local user root or member of the audio group could use this flaw to crash the system resulting in a denial of service condition publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
42,069
5,424,368,174
IssuesEvent
2017-03-03 00:55:18
eblondel/geosapi
https://api.github.com/repos/eblondel/geosapi
closed
Set Travis CI docker geoserver instance for integration tests
integration tests
All docker images tested work, but no sample data configured. To investigate if there is a way to configure basic sample data in Geoserver, otherwise all tests will need to be revised. Known docker geoserver images https://github.com/oscarfonts/docker-geoserver - question sent to owner (https://github.com/oscarfonts/docker-geoserver/issues/6) https://github.com/kartoza/docker-geoserver https://github.com/thinkWhere/GeoServer-Docker https://bitbucket.org/ololoteam/geoserver-docker
1.0
Set Travis CI docker geoserver instance for integration tests - All docker images tested work, but no sample data configured. To investigate if there is a way to configure basic sample data in Geoserver, otherwise all tests will need to be revised. Known docker geoserver images https://github.com/oscarfonts/docker-geoserver - question sent to owner (https://github.com/oscarfonts/docker-geoserver/issues/6) https://github.com/kartoza/docker-geoserver https://github.com/thinkWhere/GeoServer-Docker https://bitbucket.org/ololoteam/geoserver-docker
non_defect
set travis ci docker geoserver instance for integration tests all docker images tested work but no sample data configured to investigate if there is a way to configure basic sample data in geoserver otherwise all tests will need to be revised known docker geoserver images question sent to owner
0
58,178
16,413,761,057
IssuesEvent
2021-05-19 01:55:16
puppetlabs/r10k
https://api.github.com/repos/puppetlabs/r10k
closed
config global option warning message
[type] defect stale
Half of the time the following command is displaying me the deprecation warning, and the rest of time I do not get warning. This has left me unsure if I should change the command, or not, and if how as it seems to me that the options are wrote correctly. ``` # r10k deploy --config /etc/r10k.yaml environment --puppetfile [r10k - WARN] Calling `r10k --config <action>` as a global option is deprecated; \ use r10k <action> --config # r10k version 1.4.2 ``` Looking the changes in between versions 1.4.2 to 1.5.1 it seems there might not be fix to this issue, but I could be wrong as well. git log 3a7e77f..da70982
1.0
config global option warning message - Half of the time the following command is displaying me the deprecation warning, and the rest of time I do not get warning. This has left me unsure if I should change the command, or not, and if how as it seems to me that the options are wrote correctly. ``` # r10k deploy --config /etc/r10k.yaml environment --puppetfile [r10k - WARN] Calling `r10k --config <action>` as a global option is deprecated; \ use r10k <action> --config # r10k version 1.4.2 ``` Looking the changes in between versions 1.4.2 to 1.5.1 it seems there might not be fix to this issue, but I could be wrong as well. git log 3a7e77f..da70982
defect
config global option warning message half of the time the following command is displaying me the deprecation warning and the rest of time i do not get warning this has left me unsure if i should change the command or not and if how as it seems to me that the options are wrote correctly deploy config etc yaml environment puppetfile calling config as a global option is deprecated use config version looking the changes in between versions to it seems there might not be fix to this issue but i could be wrong as well git log
1
362,570
25,381,674,431
IssuesEvent
2022-11-21 18:04:42
open-horizon/open-horizon.github.io
https://api.github.com/repos/open-horizon/open-horizon.github.io
closed
Documentation📄: Fedora 37 support
documentation
### What is the current documentation state? Add support for Fedora 37 Add Fedora 37 to the following pages: - https://open-horizon.github.io/docs/installing/adding_devices.html - https://open-horizon.github.io/docs/installing/advanced_man_install.html ### Where is this stated? _No response_ ### Why do you want to improve the statement? _No response_ ### Proposed Statement _No response_ ### Additional context. _No response_
1.0
Documentation📄: Fedora 37 support - ### What is the current documentation state? Add support for Fedora 37 Add Fedora 37 to the following pages: - https://open-horizon.github.io/docs/installing/adding_devices.html - https://open-horizon.github.io/docs/installing/advanced_man_install.html ### Where is this stated? _No response_ ### Why do you want to improve the statement? _No response_ ### Proposed Statement _No response_ ### Additional context. _No response_
non_defect
documentation📄 fedora support what is the current documentation state add support for fedora add fedora to the following pages where is this stated no response why do you want to improve the statement no response proposed statement no response additional context no response
0
442,973
30,868,601,591
IssuesEvent
2023-08-03 09:44:42
SequentiaSEQ/Sequentia-Core-Elements
https://api.github.com/repos/SequentiaSEQ/Sequentia-Core-Elements
reopened
Roadmap around Sequentia
documentation question
**Alpha** - [ ] Testnet - [ ] Fix bugs - [ ] Run the Node testnet - [ ] Lightning Node - [ ] Explorer **Beta** - [ ] Mainnet - [ ] Official Release
1.0
Roadmap around Sequentia - **Alpha** - [ ] Testnet - [ ] Fix bugs - [ ] Run the Node testnet - [ ] Lightning Node - [ ] Explorer **Beta** - [ ] Mainnet - [ ] Official Release
non_defect
roadmap around sequentia alpha testnet fix bugs run the node testnet lightning node explorer beta mainnet official release
0
31,818
6,630,733,059
IssuesEvent
2017-09-25 02:01:41
extnet/Ext.NET
https://api.github.com/repos/extnet/Ext.NET
reopened
Field Note on checkbox shows merged with checkbox
4.x defect
Found: 4.2.0 This issue was spotted once an user posted a test case for a 3.3.0 issue with field notes, but the forum thread itself is not complaining about this specific issue in 4.x, but something else. That forum thread would be: [Large note makes indicator hard to see](http://forums.ext.net/showthread.php?61710). At least text fields works fine. Following a test code: ```xml <%@ Page Language="C#" %> <!DOCTYPE html> <html> <head id="Head1" runat="server"> <title>Field notes bug with checkboxes</title> </head> <body> <form id="Form1" runat="server"> <ext:ResourceManager ID="ResourceManager1" runat="server" /> <ext:Window ID="Window1" runat="server" Icon="ApplicationFormAdd" Width="500" Hidden="false" Modal="true" Layout="FitLayout" Title="TEST"> <Items> <ext:FormPanel ID="FormPanel1" runat="server" Layout="VBoxLayout" BodyPadding="5" > <Items> <ext:TextField ID="TextField1" runat="server" FieldLabel="Test Text field" Note="This is a really really really large note,<br/>I am glad i entered it like this" /> <ext:Checkbox ID="CheckboxField1" runat="server" IndicatorText="*" AllowBlank="false" FieldLabel="Test Large Note" Note="This is a really really really large note,<br/>I am glad i entered it like this" /> </Items> </ext:FormPanel> </Items> </ext:Window> </form> </body> </html> ```
1.0
Field Note on checkbox shows merged with checkbox - Found: 4.2.0 This issue was spotted once an user posted a test case for a 3.3.0 issue with field notes, but the forum thread itself is not complaining about this specific issue in 4.x, but something else. That forum thread would be: [Large note makes indicator hard to see](http://forums.ext.net/showthread.php?61710). At least text fields works fine. Following a test code: ```xml <%@ Page Language="C#" %> <!DOCTYPE html> <html> <head id="Head1" runat="server"> <title>Field notes bug with checkboxes</title> </head> <body> <form id="Form1" runat="server"> <ext:ResourceManager ID="ResourceManager1" runat="server" /> <ext:Window ID="Window1" runat="server" Icon="ApplicationFormAdd" Width="500" Hidden="false" Modal="true" Layout="FitLayout" Title="TEST"> <Items> <ext:FormPanel ID="FormPanel1" runat="server" Layout="VBoxLayout" BodyPadding="5" > <Items> <ext:TextField ID="TextField1" runat="server" FieldLabel="Test Text field" Note="This is a really really really large note,<br/>I am glad i entered it like this" /> <ext:Checkbox ID="CheckboxField1" runat="server" IndicatorText="*" AllowBlank="false" FieldLabel="Test Large Note" Note="This is a really really really large note,<br/>I am glad i entered it like this" /> </Items> </ext:FormPanel> </Items> </ext:Window> </form> </body> </html> ```
defect
field note on checkbox shows merged with checkbox found this issue was spotted once an user posted a test case for a issue with field notes but the forum thread itself is not complaining about this specific issue in x but something else that forum thread would be at least text fields works fine following a test code xml field notes bug with checkboxes ext window id runat server icon applicationformadd width hidden false modal true layout fitlayout title test ext formpanel id runat server layout vboxlayout bodypadding ext textfield id runat server fieldlabel test text field note this is a really really really large note i am glad i entered it like this ext checkbox id runat server indicatortext allowblank false fieldlabel test large note note this is a really really really large note i am glad i entered it like this
1
69,002
17,513,067,004
IssuesEvent
2021-08-11 01:45:47
tdwg/rs.tdwg.org
https://api.github.com/repos/tdwg/rs.tdwg.org
closed
harmonize summary in Jekyll header and abstract in header section
stds landing pg build script
Similar to Issue #21 , the value of the "summary" attribute in the Jekyll header section is similar (and in some cases identical) to the "Abstract" item in the header section. The script manages them as separate items ("summary" column of the [pageInfo.csv](https://github.com/tdwg/rs.tdwg.org/blob/stds-pages/html/stds-pages/pageInfo.csv) file and "description" column of the [standards metadata table](https://github.com/tdwg/rs.tdwg.org/blob/stds-pages/standards/standards.csv). These two fields should be compared and reviewed for all of the standards to make sure that they present the information as desired.
1.0
harmonize summary in Jekyll header and abstract in header section - Similar to Issue #21 , the value of the "summary" attribute in the Jekyll header section is similar (and in some cases identical) to the "Abstract" item in the header section. The script manages them as separate items ("summary" column of the [pageInfo.csv](https://github.com/tdwg/rs.tdwg.org/blob/stds-pages/html/stds-pages/pageInfo.csv) file and "description" column of the [standards metadata table](https://github.com/tdwg/rs.tdwg.org/blob/stds-pages/standards/standards.csv). These two fields should be compared and reviewed for all of the standards to make sure that they present the information as desired.
non_defect
harmonize summary in jekyll header and abstract in header section similar to issue the value of the summary attribute in the jekyll header section is similar and in some cases identical to the abstract item in the header section the script manages them as separate items summary column of the file and description column of the these two fields should be compared and reviewed for all of the standards to make sure that they present the information as desired
0
29,410
4,501,210,590
IssuesEvent
2016-09-01 08:36:51
mattbearman/lime
https://api.github.com/repos/mattbearman/lime
closed
Test
test
## Details ## **Submitted:** August 24, 2016 15:40 **Category:** Test **Sender Email:** **Website:** Test **URL:** http://staging-www.bugmuncher.com/ **Operating System:** Mac OS X Yosemite **Browser:** Chrome 52.0.2743.116 **Browser Size:** 1280 x 1316 **User Agent:** Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 **Description:** Integrations! ## Screenshot ## ![Screenshot](http://staging-api.bugmuncher.com/feedback/326fc2c9f48b68404f78c54a56679d4152b3d11b/screenshot.png) 1. integrations! ## Browser Plugins ## Widevine Content Decryption Module Chrome PDF Viewer Shockwave Flash Native Client ## Events ## **method:** GET **url:** http://staging-www.bugmuncher.com/ **timestamp:** Wed Aug 24 2016 16:40:29 GMT+0100 (BST) **type:** page_load --- **method:** POST **url:** http://sumome.com/api/load/ **timestamp:** Wed Aug 24 2016 16:40:30 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/googleanalytics/load **timestamp:** Wed Aug 24 2016 16:40:30 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/contentanalytics/status **timestamp:** Wed Aug 24 2016 16:40:31 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/heatmaps/status **timestamp:** Wed Aug 24 2016 16:40:31 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/listbuilder/load **timestamp:** Wed Aug 24 2016 16:40:31 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/scrollbox/load **timestamp:** Wed Aug 24 2016 16:40:31 GMT+0100 (BST) **type:** ajax --- **content:** Feedback Button Clicked **timestamp:** Wed Aug 24 2016 16:40:33 GMT+0100 (BST) **type:** bugmuncher --- **type:** bugmuncher **content:** Feedback Report Submitted **timestamp:** Wed Aug 24 2016 16:40:50 GMT+0100 (BST) ---
1.0
Test - ## Details ## **Submitted:** August 24, 2016 15:40 **Category:** Test **Sender Email:** **Website:** Test **URL:** http://staging-www.bugmuncher.com/ **Operating System:** Mac OS X Yosemite **Browser:** Chrome 52.0.2743.116 **Browser Size:** 1280 x 1316 **User Agent:** Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 **Description:** Integrations! ## Screenshot ## ![Screenshot](http://staging-api.bugmuncher.com/feedback/326fc2c9f48b68404f78c54a56679d4152b3d11b/screenshot.png) 1. integrations! ## Browser Plugins ## Widevine Content Decryption Module Chrome PDF Viewer Shockwave Flash Native Client ## Events ## **method:** GET **url:** http://staging-www.bugmuncher.com/ **timestamp:** Wed Aug 24 2016 16:40:29 GMT+0100 (BST) **type:** page_load --- **method:** POST **url:** http://sumome.com/api/load/ **timestamp:** Wed Aug 24 2016 16:40:30 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/googleanalytics/load **timestamp:** Wed Aug 24 2016 16:40:30 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/contentanalytics/status **timestamp:** Wed Aug 24 2016 16:40:31 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/heatmaps/status **timestamp:** Wed Aug 24 2016 16:40:31 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/listbuilder/load **timestamp:** Wed Aug 24 2016 16:40:31 GMT+0100 (BST) **type:** ajax --- **method:** POST **url:** http://sumome.com/apps/scrollbox/load **timestamp:** Wed Aug 24 2016 16:40:31 GMT+0100 (BST) **type:** ajax --- **content:** Feedback Button Clicked **timestamp:** Wed Aug 24 2016 16:40:33 GMT+0100 (BST) **type:** bugmuncher --- **type:** bugmuncher **content:** Feedback Report Submitted **timestamp:** Wed Aug 24 2016 16:40:50 GMT+0100 (BST) ---
non_defect
test details submitted august category test sender email website test url operating system mac os x yosemite browser chrome browser size x user agent mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari description integrations screenshot integrations browser plugins widevine content decryption module chrome pdf viewer shockwave flash native client events method get url timestamp wed aug gmt bst type page load method post url timestamp wed aug gmt bst type ajax method post url timestamp wed aug gmt bst type ajax method post url timestamp wed aug gmt bst type ajax method post url timestamp wed aug gmt bst type ajax method post url timestamp wed aug gmt bst type ajax method post url timestamp wed aug gmt bst type ajax content feedback button clicked timestamp wed aug gmt bst type bugmuncher type bugmuncher content feedback report submitted timestamp wed aug gmt bst
0
25,029
4,177,493,999
IssuesEvent
2016-06-22 00:20:02
schuel/hmmm
https://api.github.com/repos/schuel/hmmm
closed
No Courses found. Why not propose a new one for "gre"?
defect
this answer on the start-page is most of the time not showing the full searched string. title shows it correctly
1.0
No Courses found. Why not propose a new one for "gre"? - this answer on the start-page is most of the time not showing the full searched string. title shows it correctly
defect
no courses found why not propose a new one for gre this answer on the start page is most of the time not showing the full searched string title shows it correctly
1
94,733
11,905,930,107
IssuesEvent
2020-03-30 19:27:20
harmony-one/harmony
https://api.github.com/repos/harmony-one/harmony
closed
Duplicate bls keys as internal harmony node shouldn't be allowed.
design
When external node uses same BLS keys are harmony nodes, the logic is messed up, external nodes may get reward because harmony node signed the block. We shouldn't allow bls keys that's same as internal harmony nodes keys
1.0
Duplicate bls keys as internal harmony node shouldn't be allowed. - When external node uses same BLS keys are harmony nodes, the logic is messed up, external nodes may get reward because harmony node signed the block. We shouldn't allow bls keys that's same as internal harmony nodes keys
non_defect
duplicate bls keys as internal harmony node shouldn t be allowed when external node uses same bls keys are harmony nodes the logic is messed up external nodes may get reward because harmony node signed the block we shouldn t allow bls keys that s same as internal harmony nodes keys
0
302,083
22,786,675,948
IssuesEvent
2022-07-09 11:01:34
theapsgroup/steampipe-plugin-freshservice
https://api.github.com/repos/theapsgroup/steampipe-plugin-freshservice
closed
Final Pre-Submission Review
documentation enhancement
This issue is to do a final review of all the docs, especially the table examples before submitting `v0.0.1` to Steampipe for publishing on the hub.
1.0
Final Pre-Submission Review - This issue is to do a final review of all the docs, especially the table examples before submitting `v0.0.1` to Steampipe for publishing on the hub.
non_defect
final pre submission review this issue is to do a final review of all the docs especially the table examples before submitting to steampipe for publishing on the hub
0
67,876
13,043,001,489
IssuesEvent
2020-07-29 00:09:54
sisterAn/JavaScript-Algorithms
https://api.github.com/repos/sisterAn/JavaScript-Algorithms
opened
字节&leetcode70:爬楼梯问题
LeetCode 字节
假设你正在爬楼梯。需要 n 阶你才能到达楼顶。 每次你可以爬 1 或 2 个台阶。你有多少种不同的方法可以爬到楼顶呢? **注意:** 给定 n 是一个正整数。 **示例 1:** ```js 输入: 2 输出: 2 解释: 有两种方法可以爬到楼顶。 1. 1 阶 + 1 阶 2. 2 阶 ``` **示例 2:** ```js 输入: 3 输出: 3 解释: 有三种方法可以爬到楼顶。 1. 1 阶 + 1 阶 + 1 阶 2. 1 阶 + 2 阶 3. 2 阶 + 1 阶 ``` 附赠leetcode地址:[leetcode](https://leetcode-cn.com/problems/climbing-stairs)
1.0
字节&leetcode70:爬楼梯问题 - 假设你正在爬楼梯。需要 n 阶你才能到达楼顶。 每次你可以爬 1 或 2 个台阶。你有多少种不同的方法可以爬到楼顶呢? **注意:** 给定 n 是一个正整数。 **示例 1:** ```js 输入: 2 输出: 2 解释: 有两种方法可以爬到楼顶。 1. 1 阶 + 1 阶 2. 2 阶 ``` **示例 2:** ```js 输入: 3 输出: 3 解释: 有三种方法可以爬到楼顶。 1. 1 阶 + 1 阶 + 1 阶 2. 1 阶 + 2 阶 3. 2 阶 + 1 阶 ``` 附赠leetcode地址:[leetcode](https://leetcode-cn.com/problems/climbing-stairs)
non_defect
字节 :爬楼梯问题 假设你正在爬楼梯。需要 n 阶你才能到达楼顶。 每次你可以爬 或 个台阶。你有多少种不同的方法可以爬到楼顶呢? 注意: 给定 n 是一个正整数。 示例 : js 输入: 输出: 解释: 有两种方法可以爬到楼顶。 阶 阶 阶 示例 : js 输入: 输出: 解释: 有三种方法可以爬到楼顶。 阶 阶 阶 阶 阶 阶 阶 附赠leetcode地址:
0
98,593
16,387,762,518
IssuesEvent
2021-05-17 12:45:48
fitzinbox/Exomiser
https://api.github.com/repos/fitzinbox/Exomiser
opened
CVE-2017-18640 (High) detected in snakeyaml-1.23.jar
security vulnerability
## CVE-2017-18640 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.23.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: Exomiser/exomiser-cli/pom.xml</p> <p>Path to vulnerable library: Exomiser/exomiser-cli/target/lib/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,canner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar</p> <p> Dependency Hierarchy: - :x: **snakeyaml-1.23.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fitzinbox/Exomiser/commit/3a0ae5a0b72ae7a7e59a638af862c28aa80dcdf6">3a0ae5a0b72ae7a7e59a638af862c28aa80dcdf6</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The Alias feature in SnakeYAML 1.18 allows entity expansion during a load operation, a related issue to CVE-2003-1564. 
<p>Publish Date: 2019-12-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18640>CVE-2017-18640</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640</a></p> <p>Release Date: 2019-12-12</p> <p>Fix Resolution: org.yaml:snakeyaml:1.26</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-18640 (High) detected in snakeyaml-1.23.jar - ## CVE-2017-18640 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.23.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: Exomiser/exomiser-cli/pom.xml</p> <p>Path to vulnerable library: Exomiser/exomiser-cli/target/lib/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,canner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar</p> <p> Dependency Hierarchy: - :x: **snakeyaml-1.23.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fitzinbox/Exomiser/commit/3a0ae5a0b72ae7a7e59a638af862c28aa80dcdf6">3a0ae5a0b72ae7a7e59a638af862c28aa80dcdf6</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The Alias feature in SnakeYAML 1.18 allows entity expansion during a load operation, a related issue to CVE-2003-1564. 
<p>Publish Date: 2019-12-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18640>CVE-2017-18640</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640</a></p> <p>Release Date: 2019-12-12</p> <p>Fix Resolution: org.yaml:snakeyaml:1.26</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in snakeyaml jar cve high severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file exomiser exomiser cli pom xml path to vulnerable library exomiser exomiser cli target lib snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar canner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details the alias feature in snakeyaml allows entity expansion during a load operation a related issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with whitesource
0
233,891
25,780,411,935
IssuesEvent
2022-12-09 15:27:20
smb-h/Estates-price-prediction
https://api.github.com/repos/smb-h/Estates-price-prediction
closed
CVE-2022-29200 (Medium) detected in tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl
security vulnerability
## CVE-2022-29200 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/smb-h/Estates-price-prediction/commit/43d8dec55efbdc71655c52119862fee409624fda">43d8dec55efbdc71655c52119862fee409624fda</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.LSTMBlockCell` does not fully validate the input arguments. This results in a `CHECK`-failure which can be used to trigger a denial of service attack. The code does not validate the ranks of any of the arguments to this API call. This results in `CHECK`-failures when the elements of the tensor are accessed. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue. 
<p>Publish Date: 2022-05-20 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-29200>CVE-2022-29200</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200</a></p> <p>Release Date: 2022-05-20</p> <p>Fix Resolution: tensorflow - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-cpu - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-gpu - 2.6.4,2.7.2,2.8.1,2.9.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-29200 (Medium) detected in tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl - ## CVE-2022-29200 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/73/a3/142f73d0e076f5582fd8da29c68af0413bf529933eed09f86a8857fab0d6/tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **tensorflow-2.6.3-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/smb-h/Estates-price-prediction/commit/43d8dec55efbdc71655c52119862fee409624fda">43d8dec55efbdc71655c52119862fee409624fda</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.LSTMBlockCell` does not fully validate the input arguments. This results in a `CHECK`-failure which can be used to trigger a denial of service attack. The code does not validate the ranks of any of the arguments to this API call. This results in `CHECK`-failures when the elements of the tensor are accessed. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue. 
<p>Publish Date: 2022-05-20 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-29200>CVE-2022-29200</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200</a></p> <p>Release Date: 2022-05-20</p> <p>Fix Resolution: tensorflow - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-cpu - 2.6.4,2.7.2,2.8.1,2.9.0;tensorflow-gpu - 2.6.4,2.7.2,2.8.1,2.9.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x tensorflow whl vulnerable library found in head commit a href found in base branch main vulnerability details tensorflow is an open source platform for machine learning prior to versions and the implementation of tf raw ops lstmblockcell does not fully validate the input arguments this results in a check failure which can be used to trigger a denial of service attack the code does not validate the ranks of any of the arguments to this api call this results in check failures when the elements of the tensor are accessed versions and contain a patch for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend
0
326,814
9,961,513,267
IssuesEvent
2019-07-07 05:31:06
dhis2/maintenance-app
https://api.github.com/repos/dhis2/maintenance-app
closed
Attributes missing from option form
bug priority:medium stale wontfix
When adding/editing an option in an optionSet the custom attributes linked to the option do not show.
1.0
Attributes missing from option form - When adding/editing an option in an optionSet the custom attributes linked to the option do not show.
non_defect
attributes missing from option form when adding editing an option in an optionset the custom attributes linked to the option do not show
0
9,948
25,778,979,621
IssuesEvent
2022-12-09 14:23:16
OWASP/raider
https://api.github.com/repos/OWASP/raider
opened
Fix logging for Plugins
bug architecture
Plugins aren't logging information correctly at the moment, it doesn't use the same logger rest of raider does.
1.0
Fix logging for Plugins - Plugins aren't logging information correctly at the moment, it doesn't use the same logger rest of raider does.
non_defect
fix logging for plugins plugins aren t logging information correctly at the moment it doesn t use the same logger rest of raider does
0
74,342
25,076,130,415
IssuesEvent
2022-11-07 15:37:37
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[🐛 Bug]: By raises AttributeError: type object 'By' has no attribute 'XPATH'
I-defect needs-triaging
### What happened? First, link to the [StackOverflow](https://stackoverflow.com/questions/74348377/python-selenium-by-raises-attributeerror-type-object-by-has-no-attribute-xpa) post i created, it seems that it may not be a mistake by me, but a problem indeed: Here is my code: ``` import socket import httpcore import re import os import json import selenium import httpx as web from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from bs4 import BeautifulSoup from time import sleep def main_test(): chrome_options = Options() prefs = {"download.default_directory": f"{os.getcwd()}/Music"} chrome_options.add_argument("user-data-dir=selenium") chrome_options.add_experimental_option("prefs", prefs) dr = webdriver.Chrome(options=chrome_options, service=Service(ChromeDriverManager().install())) dr.get(URL) print(f"{selenium.__version__=}") dr.find_element(By.XPATH, "/html/body/div[1]/div[1]/div/div[1]/ul/li[2]/a").click() dr.quit() if __name__ == '__main__': main_test() ``` and this raises this exception: ``` selenium.__version__='4.6.0' Traceback (most recent call last): File "/Users/andrea/Dev/Python/custom_scripts/ytchannel/main.py", line 142, in <module> main_test() File "/Users/andrea/Dev/Python/custom_scripts/ytchannel/main.py", line 137, in main_test dr.find_element(By.XPATH, "/html/body/div[1]/div[1]/div/div[1]/ul/li[2]/a").click() AttributeError: type object 'By' has no attribute 'XPATH' ``` ### How can we reproduce the issue? ```shell To reproduce the issue, just copy paste my code and my imports. 
You'll need to install packaging cause there is a currently a bug with from webdriver_manager.chrome import ChromeDriverManager so, go ahead pip install packaging ``` ### Relevant log output ```shell selenium.__version__='4.6.0' Traceback (most recent call last): File "/Users/andrea/Dev/Python/custom_scripts/ytchannel/main.py", line 142, in <module> main_test() File "/Users/andrea/Dev/Python/custom_scripts/ytchannel/main.py", line 137, in main_test dr.find_element(By.XPATH, "/html/body/div[1]/div[1]/div/div[1]/ul/li[2]/a").click() AttributeError: type object 'By' has no attribute 'XPATH' ``` ### Operating System macOS Ventura ### Selenium version Python 3.9.6, Selenium 4.6.0 ### What are the browser(s) and version(s) where you see this issue? Chrome 107.0.5304.87 (Official Build) (arm64) ### What are the browser driver(s) and version(s) where you see this issue? ChromeDriver 107.0.5304.62 (1eec40d3a5764881c92085aaee66d25075c159aa-refs/branch-heads/5304@{#942}) ### Are you using Selenium Grid? _No response_
1.0
[🐛 Bug]: By raises AttributeError: type object 'By' has no attribute 'XPATH' - ### What happened? First, link to the [StackOverflow](https://stackoverflow.com/questions/74348377/python-selenium-by-raises-attributeerror-type-object-by-has-no-attribute-xpa) post i created, it seems that it may not be a mistake by me, but a problem indeed: Here is my code: ``` import socket import httpcore import re import os import json import selenium import httpx as web from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from bs4 import BeautifulSoup from time import sleep def main_test(): chrome_options = Options() prefs = {"download.default_directory": f"{os.getcwd()}/Music"} chrome_options.add_argument("user-data-dir=selenium") chrome_options.add_experimental_option("prefs", prefs) dr = webdriver.Chrome(options=chrome_options, service=Service(ChromeDriverManager().install())) dr.get(URL) print(f"{selenium.__version__=}") dr.find_element(By.XPATH, "/html/body/div[1]/div[1]/div/div[1]/ul/li[2]/a").click() dr.quit() if __name__ == '__main__': main_test() ``` and this raises this exception: ``` selenium.__version__='4.6.0' Traceback (most recent call last): File "/Users/andrea/Dev/Python/custom_scripts/ytchannel/main.py", line 142, in <module> main_test() File "/Users/andrea/Dev/Python/custom_scripts/ytchannel/main.py", line 137, in main_test dr.find_element(By.XPATH, "/html/body/div[1]/div[1]/div/div[1]/ul/li[2]/a").click() AttributeError: type object 'By' has no attribute 'XPATH' ``` ### How can we reproduce the issue? ```shell To reproduce the issue, just copy paste my code and my imports. 
You'll need to install packaging cause there is a currently a bug with from webdriver_manager.chrome import ChromeDriverManager so, go ahead pip install packaging ``` ### Relevant log output ```shell selenium.__version__='4.6.0' Traceback (most recent call last): File "/Users/andrea/Dev/Python/custom_scripts/ytchannel/main.py", line 142, in <module> main_test() File "/Users/andrea/Dev/Python/custom_scripts/ytchannel/main.py", line 137, in main_test dr.find_element(By.XPATH, "/html/body/div[1]/div[1]/div/div[1]/ul/li[2]/a").click() AttributeError: type object 'By' has no attribute 'XPATH' ``` ### Operating System macOS Ventura ### Selenium version Python 3.9.6, Selenium 4.6.0 ### What are the browser(s) and version(s) where you see this issue? Chrome 107.0.5304.87 (Official Build) (arm64) ### What are the browser driver(s) and version(s) where you see this issue? ChromeDriver 107.0.5304.62 (1eec40d3a5764881c92085aaee66d25075c159aa-refs/branch-heads/5304@{#942}) ### Are you using Selenium Grid? _No response_
defect
by raises attributeerror type object by has no attribute xpath what happened first link to the post i created it seems that it may not be a mistake by me but a problem indeed here is my code import socket import httpcore import re import os import json import selenium import httpx as web from selenium import webdriver from selenium webdriver chrome options import options from selenium webdriver chrome service import service from selenium webdriver common by import by from webdriver manager chrome import chromedrivermanager from import beautifulsoup from time import sleep def main test chrome options options prefs download default directory f os getcwd music chrome options add argument user data dir selenium chrome options add experimental option prefs prefs dr webdriver chrome options chrome options service service chromedrivermanager install dr get url print f selenium version dr find element by xpath html body div div div div ul li a click dr quit if name main main test and this raises this exception selenium version traceback most recent call last file users andrea dev python custom scripts ytchannel main py line in main test file users andrea dev python custom scripts ytchannel main py line in main test dr find element by xpath html body div div div div ul li a click attributeerror type object by has no attribute xpath how can we reproduce the issue shell to reproduce the issue just copy paste my code and my imports you ll need to install packaging cause there is a currently a bug with from webdriver manager chrome import chromedrivermanager so go ahead pip install packaging relevant log output shell selenium version traceback most recent call last file users andrea dev python custom scripts ytchannel main py line in main test file users andrea dev python custom scripts ytchannel main py line in main test dr find element by xpath html body div div div div ul li a click attributeerror type object by has no attribute xpath operating system macos ventura selenium 
version python selenium what are the browser s and version s where you see this issue chrome official build what are the browser driver s and version s where you see this issue chromedriver refs branch heads are you using selenium grid no response
1
9,197
7,861,403,924
IssuesEvent
2018-06-22 00:13:13
Dallas-Makerspace/tracker
https://api.github.com/repos/Dallas-Makerspace/tracker
closed
A Development/Beta Testing website for Calendar
Change Request Committee/Infrastructure Priority/HIGH System/Overcloud wontfix
## Expected Behavior To be able to demonstrate functionality to those that do not code for testing and debugging. ## Actual Behavior No such ability exists yet. ## Steps to Reproduce the Problem 1. 1. 1. ## Specifications (The version of the project, operating system, hardware etc.) Something similar to the production website that I have access to, so I can upload and install the calendar. However you can make it work, is fine with me as long as people can get to it remotely.
1.0
A Development/Beta Testing website for Calendar - ## Expected Behavior To be able to demonstrate functionality to those that do not code for testing and debugging. ## Actual Behavior No such ability exists yet. ## Steps to Reproduce the Problem 1. 1. 1. ## Specifications (The version of the project, operating system, hardware etc.) Something similar to the production website that I have access to, so I can upload and install the calendar. However you can make it work, is fine with me as long as people can get to it remotely.
non_defect
a development beta testing website for calendar expected behavior to be able to demonstrate functionality to those that do not code for testing and debugging actual behavior no such ability exists yet steps to reproduce the problem specifications the version of the project operating system hardware etc something similar to the production website that i have access to so i can upload and install the calendar however you can make it work is fine with me as long as people can get to it remotely
0