Dataset preview: GitHub `IssuesEvent` records labeled for defect classification. The viewer reports 15 columns with the following dtypes and value statistics:

| Column | Dtype | Values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 5 – 112 |
| repo_url | string | lengths 34 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 757 |
| labels | string | lengths 4 – 664 |
| body | string | lengths 3 – 261k |
| index | string | 10 classes |
| text_combine | string | lengths 96 – 261k |
| label | string | 2 classes |
| text | string | lengths 96 – 232k |
| binary_label | int64 | 0 – 1 |

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
57,300
| 15,729,891,037
|
IssuesEvent
|
2021-03-29 15:19:27
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
buffer overrun: false positives when checking std::string usage (Trac #285)
|
Incomplete Migration Migrated from Trac Other defect hyd_danmar
|
Migrated from https://trac.cppcheck.net/ticket/285
```json
{
"status": "closed",
"changetime": "2009-05-06T19:56:15",
"description": "This was reported via email from Slava Semushin\n\n it produce false positives (without -a flag) in case like\n{{{\nfor (int i = 0; i <= str.size(); ++i) {\n if (i == str.size() || str[i] == ' ') {\n break;\n }\n}\n}}}\n\nIn this case cppcheck reports about bounds overrun, but if i ==\nstr.size() then str[i] == ' ' shouldn't be evaluated.\n\n",
"reporter": "hyd_danmar",
"cc": "slava.semushin@gmail.com",
"resolution": "fixed",
"_ts": "1241639775000000",
"component": "Other",
"summary": "buffer overrun: false positives when checking std::string usage",
"priority": "",
"keywords": "",
"time": "2009-05-05T17:28:06",
"milestone": "1.32",
"owner": "hyd_danmar",
"type": "defect"
}
```
|
1.0
|
buffer overrun: false positives when checking std::string usage (Trac #285) - Migrated from https://trac.cppcheck.net/ticket/285
```json
{
"status": "closed",
"changetime": "2009-05-06T19:56:15",
"description": "This was reported via email from Slava Semushin\n\n it produce false positives (without -a flag) in case like\n{{{\nfor (int i = 0; i <= str.size(); ++i) {\n if (i == str.size() || str[i] == ' ') {\n break;\n }\n}\n}}}\n\nIn this case cppcheck reports about bounds overrun, but if i ==\nstr.size() then str[i] == ' ' shouldn't be evaluated.\n\n",
"reporter": "hyd_danmar",
"cc": "slava.semushin@gmail.com",
"resolution": "fixed",
"_ts": "1241639775000000",
"component": "Other",
"summary": "buffer overrun: false positives when checking std::string usage",
"priority": "",
"keywords": "",
"time": "2009-05-05T17:28:06",
"milestone": "1.32",
"owner": "hyd_danmar",
"type": "defect"
}
```
|
defect
|
buffer overrun false positives when checking std string usage trac migrated from json status closed changetime description this was reported via email from slava semushin n n it produce false positives without a flag in case like n nfor int i i str size i n if i str size str n break n n n n nin this case cppcheck reports about bounds overrun but if i nstr size then str shouldn t be evaluated n n reporter hyd danmar cc slava semushin gmail com resolution fixed ts component other summary buffer overrun false positives when checking std string usage priority keywords time milestone owner hyd danmar type defect
| 1
|
25,181
| 4,232,536,796
|
IssuesEvent
|
2016-07-05 00:31:33
|
arkayenro/arkinventory
|
https://api.github.com/repos/arkayenro/arkinventory
|
closed
|
Chauffeured Chopper in ArkInventory_Mounts Titan Panel integration
|
auto-migrated Priority-Medium Type-Defect
|
```
Downloaded from > curse
What steps will reproduce the problem?
1. Collect 35 Heirloom Items and add Chauffeured Chopper to Mounts Collection.
2. Log in with a Character that normally cannot ride any Mount (is now able to
use the Chauffeured Chopper).
3. Cannot mount the Chauffeured Chopper using the ArkInventory_Mounts Button in
TitanPanel.
The new Mount "Chauffeured Chopper" obtained from the achievement "Heirloom
Hoarder" which requires 35 Heirlooms can be used without the ability
"Apprentice Riding" and also there is no Level restriction.
Therefore any Level 1 Character is able to use it.
While using TitanPanel and the ArkInventory_Mounts button, it is not possible
to mount the Chauffeured Chopper, since it is complaining about "Skill not high
enough".
Used Version of ArkInventory is 3.05.00
```
Original issue reported on code.google.com by `dfbloods...@googlemail.com` on 26 Feb 2015 at 4:38
|
1.0
|
Chauffeured Chopper in ArkInventory_Mounts Titan Panel integration - ```
Downloaded from > curse
What steps will reproduce the problem?
1. Collect 35 Heirloom Items and add Chauffeured Chopper to Mounts Collection.
2. Log in with a Character that normally cannot ride any Mount (is now able to
use the Chauffeured Chopper).
3. Cannot mount the Chauffeured Chopper using the ArkInventory_Mounts Button in
TitanPanel.
The new Mount "Chauffeured Chopper" obtained from the achievement "Heirloom
Hoarder" which requires 35 Heirlooms can be used without the ability
"Apprentice Riding" and also there is no Level restriction.
Therefore any Level 1 Character is able to use it.
While using TitanPanel and the ArkInventory_Mounts button, it is not possible
to mount the Chauffeured Chopper, since it is complaining about "Skill not high
enough".
Used Version of ArkInventory is 3.05.00
```
Original issue reported on code.google.com by `dfbloods...@googlemail.com` on 26 Feb 2015 at 4:38
|
defect
|
chauffeured chopper in arkinventory mounts titan panel integration downloaded from curse what steps will reproduce the problem collect heirloom items and add chauffeured chopper to mounts collection log in with a character that normally cannot ride any mount is now able to use the chauffeured chopper cannot mount the chauffeured chopper using the arkinventory mounts button in titanpanel the new mount chauffeured chopper obtained from the achievement heirloom hoarder which requires heirlooms can be used without the ability apprentice riding and also there is no level restriction therefore any level character is able to use it while using titanpanel and the arkinventory mounts button it is not possible to mount the chauffeured chopper since it is complaining about skill not high enough used version of arkinventory is original issue reported on code google com by dfbloods googlemail com on feb at
| 1
|
33,237
| 7,058,076,852
|
IssuesEvent
|
2018-01-04 18:53:15
|
maloep/romcollectionbrowser
|
https://api.github.com/repos/maloep/romcollectionbrowser
|
closed
|
Enhancement Request: Sort systems A-Z
|
auto-migrated Priority-Medium Type-Defect
|
```
Systems seem to be always listed by ID or otherwise. Can we have an option or
default to sort the system names?
```
Original issue reported on code.google.com by `mdegu...@gmail.com` on 7 Sep 2014 at 1:49
|
1.0
|
Enhancement Request: Sort systems A-Z - ```
Systems seem to be always listed by ID or otherwise. Can we have an option or
default to sort the system names?
```
Original issue reported on code.google.com by `mdegu...@gmail.com` on 7 Sep 2014 at 1:49
|
defect
|
enhancement request sort systems a z systems seem to be always listed by id or otherwise can we have an option or default to sort the system names original issue reported on code google com by mdegu gmail com on sep at
| 1
|
166,920
| 6,314,727,010
|
IssuesEvent
|
2017-07-24 11:42:29
|
PaddlePaddle/Paddle
|
https://api.github.com/repos/PaddlePaddle/Paddle
|
closed
|
Remove Projection and Operator
|
low priority
|
Paddle 中跟Layer相似的概念还有Projection和Operator,引入Function #892 的目的是把Computation相关的用统一的形式表示,Layer的计算实现上变成是对一些Function的调用,所以应该是可以去掉Projection和Operator。
```
/**
* A projection takes one Argument as input, calculate the result and add it
* to output Argument.
*/
class Projection
```
```
/**
* Operator like Projection, but takes more than one Arguments as input.
* @note: Operator can't have parameters.
*/
class Operator
```
|
1.0
|
Remove Projection and Operator - Paddle 中跟Layer相似的概念还有Projection和Operator,引入Function #892 的目的是把Computation相关的用统一的形式表示,Layer的计算实现上变成是对一些Function的调用,所以应该是可以去掉Projection和Operator。
```
/**
* A projection takes one Argument as input, calculate the result and add it
* to output Argument.
*/
class Projection
```
```
/**
* Operator like Projection, but takes more than one Arguments as input.
* @note: Operator can't have parameters.
*/
class Operator
```
|
non_defect
|
remove projection and operator paddle 中跟layer相似的概念还有projection和operator,引入function 的目的是把computation相关的用统一的形式表示,layer的计算实现上变成是对一些function的调用,所以应该是可以去掉projection和operator。 a projection takes one argument as input calculate the result and add it to output argument class projection operator like projection but takes more than one arguments as input note operator can t have parameters class operator
| 0
|
38,791
| 8,966,908,425
|
IssuesEvent
|
2019-01-29 00:55:10
|
svigerske/Ipopt
|
https://api.github.com/repos/svigerske/Ipopt
|
closed
|
IpAdaptiveMuUpdate.cpp:700: missing break
|
Ipopt defect
|
Issue created by migration from Trac.
Original creator: dcb314
Original creation time: 2013-09-05 07:58:09
Assignee: ipopt-team
Version: 3.11
I just ran the static analyser "cppcheck" over the source
code of coin-or-iopt-3.11.0
It said
[IpAdaptiveMuUpdate.cpp:700] -> [IpAdaptiveMuUpdate.cpp:702]: (warning) Variable 'centrality' is reassigned a value before the old one has been used. 'break;' missing?
Source code is
case 2:
centrality = complty/xi;
case 3:
centrality = complty/pow(xi,3);
break;
Suggest add break statement.
|
1.0
|
IpAdaptiveMuUpdate.cpp:700: missing break - Issue created by migration from Trac.
Original creator: dcb314
Original creation time: 2013-09-05 07:58:09
Assignee: ipopt-team
Version: 3.11
I just ran the static analyser "cppcheck" over the source
code of coin-or-iopt-3.11.0
It said
[IpAdaptiveMuUpdate.cpp:700] -> [IpAdaptiveMuUpdate.cpp:702]: (warning) Variable 'centrality' is reassigned a value before the old one has been used. 'break;' missing?
Source code is
case 2:
centrality = complty/xi;
case 3:
centrality = complty/pow(xi,3);
break;
Suggest add break statement.
|
defect
|
ipadaptivemuupdate cpp missing break issue created by migration from trac original creator original creation time assignee ipopt team version i just ran the static analyser cppcheck over the source code of coin or iopt it said warning variable centrality is reassigned a value before the old one has been used break missing source code is case centrality complty xi case centrality complty pow xi break suggest add break statement
| 1
|
18,558
| 3,071,852,534
|
IssuesEvent
|
2015-08-19 14:19:36
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
FileUpload fails to execute listener on every second upload
|
defect
|
Hi, I would like to see this google code issue resolved: https://code.google.com/p/primefaces/issues/detail?id=6157, it still happens to me with PrimeFaces 5.2.9 and MyFaces 2.2.7, I think it's a problem of the MyFaces implementation, but the patch suggested in the issue works fine, could someone implement it in the 5.3 or 5.2.* release?
I copy/paste the whole issue:
Reported by ryan.ben...@gmail.com, Sep 23, 2013
Primefaces 4.0.RC1 - MyFaces 2.1.12 - Spring Webflow 2.3.2 - Jetty/Tomcat 7
I found that overriding the isTransient method of the org.primefaces.component.fileupload.FileUpload component fixes the issue. It looks like the Common and Native decoders are setting the value to true after queuing the FileUpload event and this then removes the component from the ui tree when it is next processed. It would be good to know why it is set to transient as this seems to be new with the 4.0 branch. For anyone else wondering, you can override the component by creating a class:
```
public class NonTransientFileUpload extends org.primefaces.component.fileupload.FileUpload {
@Override
public boolean isTransient() {
return false;
}
}
```
and then add the following to the webapps faces-config
```
<component>
<component-type>org.primefaces.component.FileUpload</component-type>
<component-class>mypackage.NonTransientFileUpload</component-class>
</component>
```
|
1.0
|
FileUpload fails to execute listener on every second upload - Hi, I would like to see this google code issue resolved: https://code.google.com/p/primefaces/issues/detail?id=6157, it still happens to me with PrimeFaces 5.2.9 and MyFaces 2.2.7, I think it's a problem of the MyFaces implementation, but the patch suggested in the issue works fine, could someone implement it in the 5.3 or 5.2.* release?
I copy/paste the whole issue:
Reported by ryan.ben...@gmail.com, Sep 23, 2013
Primefaces 4.0.RC1 - MyFaces 2.1.12 - Spring Webflow 2.3.2 - Jetty/Tomcat 7
I found that overriding the isTransient method of the org.primefaces.component.fileupload.FileUpload component fixes the issue. It looks like the Common and Native decoders are setting the value to true after queuing the FileUpload event and this then removes the component from the ui tree when it is next processed. It would be good to know why it is set to transient as this seems to be new with the 4.0 branch. For anyone else wondering, you can override the component by creating a class:
```
public class NonTransientFileUpload extends org.primefaces.component.fileupload.FileUpload {
@Override
public boolean isTransient() {
return false;
}
}
```
and then add the following to the webapps faces-config
```
<component>
<component-type>org.primefaces.component.FileUpload</component-type>
<component-class>mypackage.NonTransientFileUpload</component-class>
</component>
```
|
defect
|
fileupload fails to execute listener on every second upload hi i would like to see this google code issue resolved it still happens to me with primefaces and myfaces i think it s a problem of the myfaces implementation but the patch suggested in the issue works fine could someone implement it in the or release i copy paste the whole issue reported by ryan ben gmail com sep primefaces myfaces spring webflow jetty tomcat i found that overriding the istransient method of the org primefaces component fileupload fileupload component fixes the issue it looks like the common and native decoders are setting the value to true after queuing the fileupload event and this then removes the component from the ui tree when it is next processed it would be good to know why it is set to transient as this seems to be new with the branch for anyone else wondering you can override the component by creating a class public class nontransientfileupload extends org primefaces component fileupload fileupload override public boolean istransient return false and then add the following to the webapps faces config org primefaces component fileupload mypackage nontransientfileupload
| 1
|
20,613
| 3,388,391,305
|
IssuesEvent
|
2015-11-29 08:07:45
|
crutchcorn/stagger
|
https://api.github.com/repos/crutchcorn/stagger
|
reopened
|
tag[TIT2] should return a single frame, not a 1-element list
|
auto-migrated Priority-Medium Type-Defect
|
```
For frame types that don't _allow_duplicates, Tag's __getitem__ should
automatically extract the single frame from the list stored in _frames.
>>> tag[TIT2]
TIT2(text="Staralfur")
>>> tag._frames[TIT2]
[TIT2(text="Staralfur")]
Also, __setitem__ should throw an error if it is given a multi-element list.
```
Original issue reported on code.google.com by `Karoly.Lorentey` on 15 Jun 2009 at 2:36
|
1.0
|
tag[TIT2] should return a single frame, not a 1-element list - ```
For frame types that don't _allow_duplicates, Tag's __getitem__ should
automatically extract the single frame from the list stored in _frames.
>>> tag[TIT2]
TIT2(text="Staralfur")
>>> tag._frames[TIT2]
[TIT2(text="Staralfur")]
Also, __setitem__ should throw an error if it is given a multi-element list.
```
Original issue reported on code.google.com by `Karoly.Lorentey` on 15 Jun 2009 at 2:36
|
defect
|
tag should return a single frame not a element list for frame types that don t allow duplicates tag s getitem should automatically extract the single frame from the list stored in frames tag text staralfur tag frames also setitem should throw an error if it is given a multi element list original issue reported on code google com by karoly lorentey on jun at
| 1
|
29,152
| 5,558,187,573
|
IssuesEvent
|
2017-03-24 14:10:31
|
richgel999/miniz
|
https://api.github.com/repos/richgel999/miniz
|
reopened
|
Compiler warnings on strict aliasing
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. gcc -Wall -Werror -O2 miniz.c
2.
3.
What is the expected output? What do you see instead?
Expected output: Compiler errors due to no main().
Actual output:
Compiler errors due to type punning.
miniz.c: In function ‘tdefl_find_match’:
miniz.c:2270:3: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c:2282:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c:2282:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c:2282:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c:2294:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c: In function ‘tdefl_compress_fast’:
miniz.c:2367:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
What version of the product are you using? On what operating system?
9.1.15, all MINIZ_NO_* enabled. Linux Mint 15, AMD64, GCC 4.7.3-1ubuntu1.
Please provide any additional information below.
Disabling warnings, optimization, or both, makes this go away.
```
Original issue reported on code.google.com by `blub...@gmail.com` on 24 Nov 2013 at 5:02
|
1.0
|
Compiler warnings on strict aliasing - ```
What steps will reproduce the problem?
1. gcc -Wall -Werror -O2 miniz.c
2.
3.
What is the expected output? What do you see instead?
Expected output: Compiler errors due to no main().
Actual output:
Compiler errors due to type punning.
miniz.c: In function ‘tdefl_find_match’:
miniz.c:2270:3: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c:2282:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c:2282:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c:2282:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c:2294:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
miniz.c: In function ‘tdefl_compress_fast’:
miniz.c:2367:7: error: dereferencing type-punned pointer will break
strict-aliasing rules [-Werror=strict-aliasing]
What version of the product are you using? On what operating system?
9.1.15, all MINIZ_NO_* enabled. Linux Mint 15, AMD64, GCC 4.7.3-1ubuntu1.
Please provide any additional information below.
Disabling warnings, optimization, or both, makes this go away.
```
Original issue reported on code.google.com by `blub...@gmail.com` on 24 Nov 2013 at 5:02
|
defect
|
compiler warnings on strict aliasing what steps will reproduce the problem gcc wall werror miniz c what is the expected output what do you see instead expected output compiler errors due to no main actual output compiler errors due to type punning miniz c in function ‘tdefl find match’ miniz c error dereferencing type punned pointer will break strict aliasing rules miniz c error dereferencing type punned pointer will break strict aliasing rules miniz c error dereferencing type punned pointer will break strict aliasing rules miniz c error dereferencing type punned pointer will break strict aliasing rules miniz c error dereferencing type punned pointer will break strict aliasing rules miniz c in function ‘tdefl compress fast’ miniz c error dereferencing type punned pointer will break strict aliasing rules what version of the product are you using on what operating system all miniz no enabled linux mint gcc please provide any additional information below disabling warnings optimization or both makes this go away original issue reported on code google com by blub gmail com on nov at
| 1
|
260,903
| 8,216,642,395
|
IssuesEvent
|
2018-09-05 09:48:32
|
MLB-LED-Scoreboard/mlb-led-scoreboard
|
https://api.github.com/repos/MLB-LED-Scoreboard/mlb-led-scoreboard
|
closed
|
Update 32x32 standings images in the README
|
enhancement low priority
|
Some small changes have been made since those were taken.
I'll wait till the season starts before doing it.
|
1.0
|
Update 32x32 standings images in the README - Some small changes have been made since those were taken.
I'll wait till the season starts before doing it.
|
non_defect
|
update standings images in the readme some small changes have been made since those were taken i ll wait till the season starts before doing it
| 0
|
157,604
| 24,697,436,356
|
IssuesEvent
|
2022-10-19 13:08:28
|
NEARWEEK/CORE
|
https://api.github.com/repos/NEARWEEK/CORE
|
opened
|
Re-design for fund3r page
|
design
|
As of now the submission.nearweek page still have the NEAR Foundation style. With help from our designer Frederik, we want to do a makeover for submission.nearweek
- [ ] Update logo
- [ ] Update favicon
- [ ] Re-design page in style with the NEARWEEK identity
## 🤼♂️ Reviewer
@Kisgus
## 🔗 Work doc(s) / inspirational links
|
1.0
|
Re-design for fund3r page - As of now the submission.nearweek page still have the NEAR Foundation style. With help from our designer Frederik, we want to do a makeover for submission.nearweek
- [ ] Update logo
- [ ] Update favicon
- [ ] Re-design page in style with the NEARWEEK identity
## 🤼♂️ Reviewer
@Kisgus
## 🔗 Work doc(s) / inspirational links
|
non_defect
|
re design for page as of now the submission nearweek page still have the near foundation style with help from our designer frederik we want to do a makeover for submission nearweek update logo update favicon re design page in style with the nearweek identity 🤼♂️ reviewer kisgus 🔗 work doc s inspirational links
| 0
|
66,049
| 19,909,123,608
|
IssuesEvent
|
2022-01-25 15:33:29
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
"Ctrl K" and "Ctrl + Shift + D" should be translatable
|
T-Defect
|
### Steps to reproduce
Non-English languages use words other than "Ctrl" for that key, so shortcuts like "Ctrl K" and "Ctrl + Shift + D" should be translatable.
I was going to submit a PR but I wasn't sure whether we should use concatenation like in BasicMessageComposer line 69, or provide these as separate translatable strings like "Use Ctrl + Enter to send a message".
### Outcome
#### What did you expect?
In German, "Ctrl K" should display as "Strg K"
#### What happened instead?
The English is used everywhere.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
"Ctrl K" and "Ctrl + Shift + D" should be translatable - ### Steps to reproduce
Non-English languages use words other than "Ctrl" for that key, so shortcuts like "Ctrl K" and "Ctrl + Shift + D" should be translatable.
I was going to submit a PR but I wasn't sure whether we should use concatenation like in BasicMessageComposer line 69, or provide these as separate translatable strings like "Use Ctrl + Enter to send a message".
### Outcome
#### What did you expect?
In German, "Ctrl K" should display as "Strg K"
#### What happened instead?
The English is used everywhere.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
ctrl k and ctrl shift d should be translatable steps to reproduce non english languages use words other than ctrl for that key so shortcuts like ctrl k and ctrl shift d should be translatable i was going to submit a pr but i wasn t sure whether we should use concatenation like in basicmessagecomposer line or provide these as separate translatable strings like use ctrl enter to send a message outcome what did you expect in german ctrl k should display as strg k what happened instead the english is used everywhere operating system no response browser information no response url for webapp no response application version no response homeserver no response will you send logs no
| 1
|
181,112
| 30,625,032,844
|
IssuesEvent
|
2023-07-24 10:51:39
|
readthedocs/readthedocs.org
|
https://api.github.com/repos/readthedocs/readthedocs.org
|
closed
|
Automatically version PDF/ePUB output
|
Improvement Needed: design decision
|
In our sphinx theme HTML output, we show the version of the docs as the verbose slug name that RTD uses -- ie, `latest`, and `stable`. On the PDF output, and I assume on the epub output as well, we don't override the conf.py `release` setting. This can result in an HTML output with a different version than the PDF output -- for instance that HTML version says `latest` and the PDF output doesn't state a version or requires the user to set a `release` config variable manually. The `release` configuration variable is what controls the version disaply on the PDF/epub output.
- We don't normally set the `version` config variable, but rather do this through a secondary variable particular to our theme. Should we be setting `version` if the user hasn't already instead? I'm curious if there are strong arguments against this one.
- We could set `release` if the user has not set the `release` variable. This would still allow for local override, otherwise the version on the pdf/epub output would match the HTML version
|
1.0
|
Automatically version PDF/ePUB output - In our sphinx theme HTML output, we show the version of the docs as the verbose slug name that RTD uses -- ie, `latest`, and `stable`. On the PDF output, and I assume on the epub output as well, we don't override the conf.py `release` setting. This can result in an HTML output with a different version than the PDF output -- for instance that HTML version says `latest` and the PDF output doesn't state a version or requires the user to set a `release` config variable manually. The `release` configuration variable is what controls the version disaply on the PDF/epub output.
- We don't normally set the `version` config variable, but rather do this through a secondary variable particular to our theme. Should we be setting `version` if the user hasn't already instead? I'm curious if there are strong arguments against this one.
- We could set `release` if the user has not set the `release` variable. This would still allow for local override, otherwise the version on the pdf/epub output would match the HTML version
|
non_defect
|
automatically version pdf epub output in our sphinx theme html output we show the version of the docs as the verbose slug name that rtd uses ie latest and stable on the pdf output and i assume on the epub output as well we don t override the conf py release setting this can result in an html output with a different version than the pdf output for instance that html version says latest and the pdf output doesn t state a version or requires the user to set a release config variable manually the release configuration variable is what controls the version disaply on the pdf epub output we don t normally set the version config variable but rather do this through a secondary variable particular to our theme should we be setting version if the user hasn t already instead i m curious if there are strong arguments against this one we could set release if the user has not set the release variable this would still allow for local override otherwise the version on the pdf epub output would match the html version
| 0
|
4,304
| 2,610,091,209
|
IssuesEvent
|
2015-02-26 18:27:34
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
深圳痘印怎么消
|
auto-migrated Priority-Medium Type-Defect
|
```
深圳痘印怎么消【深圳韩方科颜全国热线400-869-1818,24小时QQ4
008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方��
�—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科�
��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康
祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治��
�粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘�
��
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:47
|
1.0
|
深圳痘印怎么消 - ```
深圳痘印怎么消【深圳韩方科颜全国热线400-869-1818,24小时QQ4
008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方��
�—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科�
��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康
祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治��
�粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘�
��
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:47
|
defect
|
深圳痘印怎么消 深圳痘印怎么消【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方�� �—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科� ��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康 祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治�� �粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘� �� original issue reported on code google com by szft com on may at
| 1
|
27,482
| 5,031,333,879
|
IssuesEvent
|
2016-12-16 06:22:38
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
opened
|
Calling Console.WriteLine() should write blank line in Bridge.Console
|
defect
|
### Expected
```js
<-- nothing
```
### Actual
```js
null
```
### Steps To Reproduce
http://deck.net/cb55b734f78d4b69ea99049e3af1a707
```cs
public class Program
{
public static void Main()
{
Console.WriteLine();
}
}
```
|
1.0
|
Calling Console.WriteLine() should write blank line in Bridge.Console - ### Expected
```js
<-- nothing
```
### Actual
```js
null
```
### Steps To Reproduce
http://deck.net/cb55b734f78d4b69ea99049e3af1a707
```cs
public class Program
{
public static void Main()
{
Console.WriteLine();
}
}
```
|
defect
|
calling console writeline should write blank line in bridge console expected js nothing actual js null steps to reproduce cs public class program public static void main console writeline
| 1
|
64,411
| 18,669,743,464
|
IssuesEvent
|
2021-10-30 13:31:30
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Scrolling beyond room list is jerky again
|
T-Defect
|
### Steps to reproduce
It seems #17460 #17565 #18440 has resurfaced.
### Outcome
#### What did you expect?
Scroll to the bottom
#### What happened instead?
https://user-images.githubusercontent.com/2803622/139534634-c12e9e03-5612-41f9-8cde-e7c95ffe18b9.mp4
### Operating system
arch / sway / ozone
### Application version
Element Nightly version: 2021102801 Olm version: 3.2.3
### How did you install the app?
aur
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Scrolling beyond room list is jerky again - ### Steps to reproduce
It seems #17460 #17565 #18440 has resurfaced.
### Outcome
#### What did you expect?
Scroll to the bottom
#### What happened instead?
https://user-images.githubusercontent.com/2803622/139534634-c12e9e03-5612-41f9-8cde-e7c95ffe18b9.mp4
### Operating system
arch / sway / ozone
### Application version
Element Nightly version: 2021102801 Olm version: 3.2.3
### How did you install the app?
aur
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
scrolling beyond room list is jerky again steps to reproduce it seems has resurfaced outcome what did you expect scroll to the bottom what happened instead operating system arch sway ozone application version element nightly version olm version how did you install the app aur homeserver no response will you send logs no
| 1
|
78,019
| 27,278,569,628
|
IssuesEvent
|
2023-02-23 08:15:35
|
colour-science/colour
|
https://api.github.com/repos/colour-science/colour
|
closed
|
[BUG] Bump numpy dependency to 1.21 minimum.
|
Defect
|
### Description
Hello, just update the venv of one of my project and got an exception related to numpy dependency.
My project is using poetry for venv management :
- `colour` using the develop branch for github, updated few minutes ago
- `numpy=1.20.*`
> venv is resolved to `colour=0.4.2 682c406` and `numpy=1.20.3`
[Numpy Doc mention](https://numpy.org/devdocs/reference/typing.html#numpy.typing.NDArray) that NDArray is a feature of 1.21 so I guess I got my issue as poetry is resolving `1.20.3`.
Having a look at the [colour's pyproject.toml](https://github.com/colour-science/colour/blob/682c40654f5f9951f1daddde12777d3b3a4d416d/pyproject.toml#L48) I notice it still specify `>= 1.20` while it should actually be `>= 1.21` now.
Cheers.
Liam.
### Code for Reproduction
```python
import colour
```
### Exception Message
```shell
[...]
from numpy.typing import ArrayLike, NDArray
ImportError: cannot import name 'NDArray' from 'numpy.typing'
```
### Environment Information
```shell
pyproject.toml :
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
colour = { git = "https://github.com/colour-science/colour.git", branch = "develop", extras = ["plotting"] }
numpy = "1.20.*"
```
|
1.0
|
[BUG] Bump numpy dependency to 1.21 minimum. - ### Description
Hello, just update the venv of one of my project and got an exception related to numpy dependency.
My project is using poetry for venv management :
- `colour` using the develop branch for github, updated few minutes ago
- `numpy=1.20.*`
> venv is resolved to `colour=0.4.2 682c406` and `numpy=1.20.3`
[Numpy Doc mention](https://numpy.org/devdocs/reference/typing.html#numpy.typing.NDArray) that NDArray is a feature of 1.21 so I guess I got my issue as poetry is resolving `1.20.3`.
Having a look at the [colour's pyproject.toml](https://github.com/colour-science/colour/blob/682c40654f5f9951f1daddde12777d3b3a4d416d/pyproject.toml#L48) I notice it still specify `>= 1.20` while it should actually be `>= 1.21` now.
Cheers.
Liam.
### Code for Reproduction
```python
import colour
```
### Exception Message
```shell
[...]
from numpy.typing import ArrayLike, NDArray
ImportError: cannot import name 'NDArray' from 'numpy.typing'
```
### Environment Information
```shell
pyproject.toml :
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
colour = { git = "https://github.com/colour-science/colour.git", branch = "develop", extras = ["plotting"] }
numpy = "1.20.*"
```
|
defect
|
bump numpy dependency to minimum description hello just update the venv of one of my project and got an exception related to numpy dependency my project is using poetry for venv management colour using the develop branch for github updated few minutes ago numpy venv is resolved to colour and numpy that ndarray is a feature of so i guess i got my issue as poetry is resolving having a look at the i notice it still specify while it should actually be now cheers liam code for reproduction python import colour exception message shell from numpy typing import arraylike ndarray importerror cannot import name ndarray from numpy typing environment information shell pyproject toml python colour git branch develop extras numpy
| 1
|
57,513
| 15,822,968,785
|
IssuesEvent
|
2021-04-05 23:31:43
|
bigbluebutton/bigbluebutton
|
https://api.github.com/repos/bigbluebutton/bigbluebutton
|
closed
|
externalUserId not present in all redis messages
|
module: core priority: normal type: defect
|
Originally reported on Google Code with ID 1362
```
[This issue is a result of the thread: https://groups.google.com/forum/?fromgroups=#!topic/bigbluebutton-dev/q_Ffc4eba0A]
The application that includes bbb can easily log bbb events by leveraging redis PubSub
(adding a new subscriber).
Having the externalUserId in all the messages will be very useful to avoid having to
figure out the mapping between external and internal user id after the first message.
Below, some sample messages, the first one containing the external user id, the second
one without it.
User joined message: {"externalUserId":"3","internalUserId":"45","meetingId":"287da0651bbe7af557b588ce0c9aeaa9a39487a6-1353098991675","role":"VIEWER","messageId":"UserJoinedEvent","fullname":"user1"}
User left message:
{"internalUserId":"45","meetingId":"287da0651bbe7af557b588ce0c9aeaa9a39487a6-1353098991675","messageId":"UserLeftEvent"}
```
Reported by `federicoboerr` on 2012-11-19 15:01:31
|
1.0
|
externalUserId not present in all redis messages - Originally reported on Google Code with ID 1362
```
[This issue is a result of the thread: https://groups.google.com/forum/?fromgroups=#!topic/bigbluebutton-dev/q_Ffc4eba0A]
The application that includes bbb can easily log bbb events by leveraging redis PubSub
(adding a new subscriber).
Having the externalUserId in all the messages will be very useful to avoid having to
figure out the mapping between external and internal user id after the first message.
Below, some sample messages, the first one containing the external user id, the second
one without it.
User joined message: {"externalUserId":"3","internalUserId":"45","meetingId":"287da0651bbe7af557b588ce0c9aeaa9a39487a6-1353098991675","role":"VIEWER","messageId":"UserJoinedEvent","fullname":"user1"}
User left message:
{"internalUserId":"45","meetingId":"287da0651bbe7af557b588ce0c9aeaa9a39487a6-1353098991675","messageId":"UserLeftEvent"}
```
Reported by `federicoboerr` on 2012-11-19 15:01:31
|
defect
|
externaluserid not present in all redis messages originally reported on google code with id the application that includes bbb can easily log bbb events by leveraging redis pubsub adding a new subscriber having the externaluserid in all the messages will be very useful to avoid having to figure out the mapping between external and internal user id after the first message below some sample messages the first one containing the external user id the second one without it user joined message externaluserid internaluserid meetingid role viewer messageid userjoinedevent fullname user left message internaluserid meetingid messageid userleftevent reported by federicoboerr on
| 1
|
65,793
| 19,695,390,807
|
IssuesEvent
|
2022-01-12 11:34:09
|
martinrotter/rssguard
|
https://api.github.com/repos/martinrotter/rssguard
|
closed
|
[BUG]: Parse HTML in Title (to support RTL titles)
|
Type-Defect Component-Plugins-Feedly
|
### Brief description of the feature request
Hi,
Some feed will use an html `div` tag to align the title and set it direction for readability.
e.g a Hebrew feed which use the div tag and direction attribute

while in Feedly it display the titles correctly, with the current aliment and direction (RTL)

|
1.0
|
[BUG]: Parse HTML in Title (to support RTL titles) - ### Brief description of the feature request
Hi,
Some feed will use an html `div` tag to align the title and set it direction for readability.
e.g a Hebrew feed which use the div tag and direction attribute

while in Feedly it display the titles correctly, with the current aliment and direction (RTL)

|
defect
|
parse html in title to support rtl titles brief description of the feature request hi some feed will use an html div tag to align the title and set it direction for readability e g a hebrew feed which use the div tag and direction attribute while in feedly it display the titles correctly with the current aliment and direction rtl
| 1
|
50,569
| 13,187,584,285
|
IssuesEvent
|
2020-08-13 03:53:50
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
[dataclasses] I3MCTreePhysicsLibrary.get_most_energetic_primary fails if particles in tree-head are not marked properly as "Primary" (Trac #958)
|
Migrated from Trac combo core defect
|
I describe the Python side of this issue, but the same holds for C++.
get_most_energetic_primary filters the tree for I3Particles with is_primary is true and picks the most energetic. It returns none if there are no particles with the property is_primary is true.
On the other hand, I3MCTree.primaries yields all particles the head of the tree, independent of whether is_primary is true or not. It is based solely on the tree topology.
I found a real-life example of tree where I3MCTree.primaries yields some primary particles, while get_most_energetic_primary yields nothing. The reason is that the primary particles are not properly marked as is_primary. I expect this conflict to happen often.
We should fix either I3MCTree.primaries or get_most_energetic_primary, since they yield conflicting information for such a case. I suggest to fix get_most_energetic_primary, by making it also taking particles into account, which are primaries by topology of the tree.
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/958
, reported by hdembinski and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:13:59",
"description": "I describe the Python side of this issue, but the same holds for C++.\n\nget_most_energetic_primary filters the tree for I3Particles with is_primary is true and picks the most energetic. It returns none if there are no particles with the property is_primary is true.\n\nOn the other hand, I3MCTree.primaries yields all particles the head of the tree, independent of whether is_primary is true or not. It is based solely on the tree topology.\n\nI found a real-life example of tree where I3MCTree.primaries yields some primary particles, while get_most_energetic_primary yields nothing. The reason is that the primary particles are not properly marked as is_primary. I expect this conflict to happen often.\n\nWe should fix either I3MCTree.primaries or get_most_energetic_primary, since they yield conflicting information for such a case. I suggest to fix get_most_energetic_primary, by making it also taking particles into account, which are primaries by topology of the tree.",
"reporter": "hdembinski",
"cc": "",
"resolution": "fixed",
"_ts": "1458335639558230",
"component": "combo core",
"summary": "[dataclasses] I3MCTreePhysicsLibrary.get_most_energetic_primary fails if particles in tree-head are not marked properly as \"Primary\"",
"priority": "normal",
"keywords": "dataio I3MCTreePhysicsLibrary dataclasses",
"time": "2015-05-02T22:55:49",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[dataclasses] I3MCTreePhysicsLibrary.get_most_energetic_primary fails if particles in tree-head are not marked properly as "Primary" (Trac #958) - I describe the Python side of this issue, but the same holds for C++.
get_most_energetic_primary filters the tree for I3Particles with is_primary is true and picks the most energetic. It returns none if there are no particles with the property is_primary is true.
On the other hand, I3MCTree.primaries yields all particles the head of the tree, independent of whether is_primary is true or not. It is based solely on the tree topology.
I found a real-life example of tree where I3MCTree.primaries yields some primary particles, while get_most_energetic_primary yields nothing. The reason is that the primary particles are not properly marked as is_primary. I expect this conflict to happen often.
We should fix either I3MCTree.primaries or get_most_energetic_primary, since they yield conflicting information for such a case. I suggest to fix get_most_energetic_primary, by making it also taking particles into account, which are primaries by topology of the tree.
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/958
, reported by hdembinski and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:13:59",
"description": "I describe the Python side of this issue, but the same holds for C++.\n\nget_most_energetic_primary filters the tree for I3Particles with is_primary is true and picks the most energetic. It returns none if there are no particles with the property is_primary is true.\n\nOn the other hand, I3MCTree.primaries yields all particles the head of the tree, independent of whether is_primary is true or not. It is based solely on the tree topology.\n\nI found a real-life example of tree where I3MCTree.primaries yields some primary particles, while get_most_energetic_primary yields nothing. The reason is that the primary particles are not properly marked as is_primary. I expect this conflict to happen often.\n\nWe should fix either I3MCTree.primaries or get_most_energetic_primary, since they yield conflicting information for such a case. I suggest to fix get_most_energetic_primary, by making it also taking particles into account, which are primaries by topology of the tree.",
"reporter": "hdembinski",
"cc": "",
"resolution": "fixed",
"_ts": "1458335639558230",
"component": "combo core",
"summary": "[dataclasses] I3MCTreePhysicsLibrary.get_most_energetic_primary fails if particles in tree-head are not marked properly as \"Primary\"",
"priority": "normal",
"keywords": "dataio I3MCTreePhysicsLibrary dataclasses",
"time": "2015-05-02T22:55:49",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
defect
|
get most energetic primary fails if particles in tree head are not marked properly as primary trac i describe the python side of this issue but the same holds for c get most energetic primary filters the tree for with is primary is true and picks the most energetic it returns none if there are no particles with the property is primary is true on the other hand primaries yields all particles the head of the tree independent of whether is primary is true or not it is based solely on the tree topology i found a real life example of tree where primaries yields some primary particles while get most energetic primary yields nothing the reason is that the primary particles are not properly marked as is primary i expect this conflict to happen often we should fix either primaries or get most energetic primary since they yield conflicting information for such a case i suggest to fix get most energetic primary by making it also taking particles into account which are primaries by topology of the tree migrated from reported by hdembinski and owned by olivas json status closed changetime description i describe the python side of this issue but the same holds for c n nget most energetic primary filters the tree for with is primary is true and picks the most energetic it returns none if there are no particles with the property is primary is true n non the other hand primaries yields all particles the head of the tree independent of whether is primary is true or not it is based solely on the tree topology n ni found a real life example of tree where primaries yields some primary particles while get most energetic primary yields nothing the reason is that the primary particles are not properly marked as is primary i expect this conflict to happen often n nwe should fix either primaries or get most energetic primary since they yield conflicting information for such a case i suggest to fix get most energetic primary by making it also taking particles into account which are primaries by topology of the tree reporter hdembinski cc resolution fixed ts component combo core summary get most energetic primary fails if particles in tree head are not marked properly as primary priority normal keywords dataio dataclasses time milestone owner olivas type defect
| 1
|
17,239
| 2,985,338,878
|
IssuesEvent
|
2015-07-18 23:11:58
|
JakobKallin/RPG-Ambience
|
https://api.github.com/repos/JakobKallin/RPG-Ambience
|
closed
|
Some content partially hidden
|
Browser-specific Defect
|
Some content is partially hidden due to a seemingly Chrome-specific flexbox bug, likely one of those described in [flexbugs](https://github.com/philipwalton/flexbugs) (["Minimum content sizing of flex items not honored"](https://github.com/philipwalton/flexbugs#1-minimum-content-sizing-of-flex-items-not-honored)). This issue appeared earlier this year and must have been caused by a Chrome update, because the CSS was not altered in that time.
|
1.0
|
Some content partially hidden - Some content is partially hidden due to a seemingly Chrome-specific flexbox bug, likely one of those described in [flexbugs](https://github.com/philipwalton/flexbugs) (["Minimum content sizing of flex items not honored"](https://github.com/philipwalton/flexbugs#1-minimum-content-sizing-of-flex-items-not-honored)). This issue appeared earlier this year and must have been caused by a Chrome update, because the CSS was not altered in that time.
|
defect
|
some content partially hidden some content is partially hidden due to a seemingly chrome specific flexbox bug likely one of those described in this issue appeared earlier this year and must have been caused by a chrome update because the css was not altered in that time
| 1
|
61,648
| 17,023,747,938
|
IssuesEvent
|
2021-07-03 03:37:53
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
New users are given a proprietary map by default
|
Component: potlatch2 Priority: major Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 12.02pm, Thursday, 22nd September 2011]**
The default Potlatch 2 background is being rendered as the Ordnance Survey 1:50000 map in parts of Britain. New users are likely to assume that the background is something they can legally trace from.
There are more details at http://comments.gmane.org/gmane.comp.gis.openstreetmap.region.gb/6684. (The last link should be http://uk.bing.com/maps/?v=2&cp=55.58972057046764~-3.549976721405983&lvl=14&dir=0&sty=h&eo=0&where1=Coulter%2C%20South%20Lanarkshire&form=LMLTCC, sorry for posting a plan text email.)
--
Andrew
|
1.0
|
New users are given a proprietary map by default - **[Submitted to the original trac issue database at 12.02pm, Thursday, 22nd September 2011]**
The default Potlatch 2 background is being rendered as the Ordnance Survey 1:50000 map in parts of Britain. New users are likely to assume that the background is something they can legally trace from.
There are more details at http://comments.gmane.org/gmane.comp.gis.openstreetmap.region.gb/6684. (The last link should be http://uk.bing.com/maps/?v=2&cp=55.58972057046764~-3.549976721405983&lvl=14&dir=0&sty=h&eo=0&where1=Coulter%2C%20South%20Lanarkshire&form=LMLTCC, sorry for posting a plan text email.)
--
Andrew
|
defect
|
new users are given a proprietary map by default the default potlatch background is being rendered as the ordnance survey map in parts of britain new users are likely to assume that the background is something they can legally trace from there are more details at the last link should be sorry for posting a plan text email andrew
| 1
|
66,032
| 19,905,044,891
|
IssuesEvent
|
2022-01-25 11:54:44
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
`maven-deploy.sh` should not continue executing if one of the `mvn` command invocations fail
|
T: Defect
|
### Expected behavior
`maven-deploy.sh` should not continue executing if one of the `mvn` command invocations fail
### Actual behavior
Errors are silently ignored and the script happily carries on, even though the `mvn deploy:deploy-file` command has failed.
### Steps to reproduce the problem
```shell
$ ./maven-deploy.sh -u https://artifacts.example.com/repository/some-repository-name -r artifacts
```
Quoting myself from https://github.com/jOOQ/jOOQ/issues/12939#issuecomment-1021064947:
> On another note; would it be worth adding `set -e` (and potentially `-o pipefail`) to this script as well? Some people hate this kind of error handling (and it surely has some drawbacks), but... as the script currently stands, if the upload fails with e.g. `401 Unauthorized` or `403 Forbidden`, the script will happily carry on and try to upload all the other files as well.
>
> I personally like the idea of "fail fast" and I think it could be applied here as well. Failing to upload one of the files could just as well error with some reasonable error message instead of just continuing with the rest of the files.
You _could_ even consider adding `set -u` (fail on undefined variables) but it's even more controversial and will likely require script changes. But at the very least, `set -e -o pipefail` would be worth considering.
Note that `set -e` is considered bad practice by some people; this link might be of interest: e.g. http://mywiki.wooledge.org/BashFAQ/105.
Another option would be to do the `||` approach instead, i.e. run `mvn deploy:deploy-file -Dfile=jOOQ-pom/pom.xml -DgroupId=org.jooq.trial-java-8 -DartifactId=jooq-parent -DrepositoryId=$REPOSITORY -Durl=$URL -Dversion=$VERSION -Dpackaging=pom || handle_error` and define `handle_error` as a shell function instead (which in turn could print an error and `exit 1`). That's "safer" in one sense but causes a bit of noise in the actual script code.
Anyway, those are implementation details and I think the first question is: _are we happy about the current semantics_ or do we want to fail more gracefully in case of e.g. 401/403 being returned from the Maven repository?
### Versions
- jOOQ: 3.16.2
- Java: 11.0.13
- OS: Debian GNU/Linux 11 (`bullseye`)
|
1.0
|
`maven-deploy.sh` should not continue executing if one of the `mvn` command invocations fail - ### Expected behavior
`maven-deploy.sh` should not continue executing if one of the `mvn` command invocations fail
### Actual behavior
Errors are silently ignored and the script happily carries on, even though the `mvn deploy:deploy-file` command has failed.
### Steps to reproduce the problem
```shell
$ ./maven-deploy.sh -u https://artifacts.example.com/repository/some-repository-name -r artifacts
```
Quoting myself from https://github.com/jOOQ/jOOQ/issues/12939#issuecomment-1021064947:
> On another note; would it be worth adding `set -e` (and potentially `-o pipefail`) to this script as well? Some people hate this kind of error handling (and it surely has some drawbacks), but... as the script currently stands, if the upload fails with e.g. `401 Unauthorized` or `403 Forbidden`, the script will happily carry on and try to upload all the other files as well.
>
> I personally like the idea of "fail fast" and I think it could be applied here as well. Failing to upload one of the files could just as well error with some reasonable error message instead of just continuing with the rest of the files.
You _could_ even consider adding `set -u` (fail on undefined variables) but it's even more controversial and will likely require script changes. But at the very least, `set -e -o pipefail` would be worth considering.
Note that `set -e` is considered bad practice by some people; this link might be of interest: e.g. http://mywiki.wooledge.org/BashFAQ/105.
Another option would be to do the `||` approach instead, i.e. run `mvn deploy:deploy-file -Dfile=jOOQ-pom/pom.xml -DgroupId=org.jooq.trial-java-8 -DartifactId=jooq-parent -DrepositoryId=$REPOSITORY -Durl=$URL -Dversion=$VERSION -Dpackaging=pom || handle_error` and define `handle_error` as a shell function instead (which in turn could print an error and `exit 1`). That's "safer" in one sense but causes a bit of noise in the actual script code.
Anyway, those are implementation details and I think the first question is: _are we happy about the current semantics_ or do we want to fail more gracefully in case of e.g. 401/403 being returned from the Maven repository?
### Versions
- jOOQ: 3.16.2
- Java: 11.0.13
- OS: Debian GNU/Linux 11 (`bullseye`)
|
defect
|
maven deploy sh should not continue executing if one of the mvn command invocations fail expected behavior maven deploy sh should not continue executing if one of the mvn command invocations fail actual behavior errors are silently ignored and the script happily carries on even though the mvn deploy deploy file command has failed steps to reproduce the problem shell maven deploy sh u r artifacts quoting myself from on another note would it be worth adding set e and potentially o pipefail to this script as well some people hate this kind of error handling and it surely has some drawbacks but as the script currently stands if the upload fails with e g unauthorized or forbidden the script will happily carry on and try to upload all the other files as well i personally like the idea of fail fast and i think it could be applied here as well failing to upload one of the files could just as well error with some reasonable error message instead of just continuing with the rest of the files you could even consider adding set u fail on undefined variables but it s even more controversial and will likely require script changes but at the very least set e o pipefail would be worth considering note that set e is considered bad practice by some people this link might be of interest e g another option would be to do the approach instead i e run mvn deploy deploy file dfile jooq pom pom xml dgroupid org jooq trial java dartifactid jooq parent drepositoryid repository durl url dversion version dpackaging pom handle error and define handle error as a shell function instead which in turn could print an error and exit that s safer in one sense but causes a bit of noise in the actual script code anyway those are implementation details and i think the first question is are we happy about the current semantics or do we want to fail more gracefully in case of e g being returned from the maven repository versions jooq java os debian gnu linux bullseye
| 1
|
50,355
| 13,187,457,119
|
IssuesEvent
|
2020-08-13 03:28:33
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
glshovel quits on Help-> About -> Close (Trac #576)
|
Migrated from Trac defect glshovel
|
When running the GLshovel, Help-> about brings up a nice
dialog box about the program. But the "Close" button for this dialog
will quit the GLShovel rather than just close the dialog box.
Reported on EL5 at Pole, and confirmed as well on Ubu 8.10 at UMD.
Reported version number also seems quite random.
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/576
, reported by blaufuss and owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2011-05-11T22:19:13",
"description": "When running the GLshovel, Help-> about brings up a nice \ndialog box about the program. But the \"Close\" button for this dialog\nwill quit the GLShovel rather than just close the dialog box.\n\nReported on EL5 at Pole, and confirmed as well on Ubu 8.10 at UMD.\n\nReported version number also seems quite random.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1305152353000000",
"component": "glshovel",
"summary": "glshovel quits on Help-> About -> Close",
"priority": "normal",
"keywords": "",
"time": "2009-11-18T19:05:13",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
glshovel quits on Help-> About -> Close (Trac #576) - When running the GLshovel, Help-> about brings up a nice
dialog box about the program. But the "Close" button for this dialog
will quit the GLShovel rather than just close the dialog box.
Reported on EL5 at Pole, and confirmed as well on Ubu 8.10 at UMD.
Reported version number also seems quite random.
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/576
, reported by blaufuss and owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2011-05-11T22:19:13",
"description": "When running the GLshovel, Help-> about brings up a nice \ndialog box about the program. But the \"Close\" button for this dialog\nwill quit the GLShovel rather than just close the dialog box.\n\nReported on EL5 at Pole, and confirmed as well on Ubu 8.10 at UMD.\n\nReported version number also seems quite random.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1305152353000000",
"component": "glshovel",
"summary": "glshovel quits on Help-> About -> Close",
"priority": "normal",
"keywords": "",
"time": "2009-11-18T19:05:13",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
defect
|
glshovel quits on help about close trac when running the glshovel help about brings up a nice dialog box about the program but the close button for this dialog will quit the glshovel rather than just close the dialog box reported on at pole and confirmed as well on ubu at umd reported version number also seems quite random migrated from reported by blaufuss and owned by troy json status closed changetime description when running the glshovel help about brings up a nice ndialog box about the program but the close button for this dialog nwill quit the glshovel rather than just close the dialog box n nreported on at pole and confirmed as well on ubu at umd n nreported version number also seems quite random reporter blaufuss cc resolution fixed ts component glshovel summary glshovel quits on help about close priority normal keywords time milestone owner troy type defect
| 1
|
679,159
| 23,222,580,614
|
IssuesEvent
|
2022-08-02 19:46:28
|
rathena/rathena
|
https://api.github.com/repos/rathena/rathena
|
closed
|
Item script bug.
|
component:database status:confirmed priority:low mode:renewal type:bug
|
<!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**: Latest
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->
* **Client Date**:
<!-- Please specify the client date you used. -->
* **Server Mode**: Renewal
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**:
db/re/item_db_equip.yml
Line: 153191
bonus bAllTraits,6-(JobLevel/5);
Should be
bonus bAllTraitStats,6-(JobLevel/5);
|
1.0
|
Item script bug. - <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**: Latest
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->
* **Client Date**:
<!-- Please specify the client date you used. -->
* **Server Mode**: Renewal
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**:
db/re/item_db_equip.yml
Line: 153191
bonus bAllTraits,6-(JobLevel/5);
Should be
bonus bAllTraitStats,6-(JobLevel/5);
|
non_defect
|
item script bug rathena hash latest please specify the rathena on which you encountered this issue how to get your github hash cd your rathena directory git rev parse short head copy the resulting hash client date server mode renewal description of issue db re item db equip yml line bonus balltraits joblevel should be bonus balltraitstats joblevel
| 0
|
779,236
| 27,345,764,154
|
IssuesEvent
|
2023-02-27 04:44:42
|
AY2223S2-CS2103T-W15-1/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-W15-1/tp
|
closed
|
Update README.md page to match project
|
type.Chore priority.High
|
Add a UI mockup of intended final product.
Update all contents to match own project.
Update link of GitHub Actions build status badge to reflect build status of own repo.
Acknowledge original source of the code.
|
1.0
|
Update README.md page to match project - Add a UI mockup of intended final product.
Update all contents to match own project.
Update link of GitHub Actions build status badge to reflect build status of own repo.
Acknowledge original source of the code.
|
non_defect
|
update readme md page to match project add a ui mockup of intended final product update all contents to match own project update link of github actions build status badge to reflect build status of own repo acknowledge original source of the code
| 0
|
18,490
| 3,067,858,810
|
IssuesEvent
|
2015-08-18 13:05:33
|
gbif/ipt
|
https://api.github.com/repos/gbif/ipt
|
closed
|
Identifiers Changed for Unregistered Datasets in IPT 2.2
|
bug Milestone-Release2.3 Priority-High Type-Defect wontfix
|
iDigBio ran into this when consuming the RSS feeds from a recently updated IPT.
Old Format:
http://ipt.flmnh.ufl.edu:8080/ipt/resource.do?id=herbarium
New Format:
http://ipt.flmnh.ufl.edu:8080/ipt/resource?id=herbarium
That cat is kind of out of the bag here, I'm not sure if there is much point in patching this at this point. I just want to make you guys aware that people do consume those as identifiers, they are the guid value in the RSS feed when a dataset is unregistered, and that changing them is problematic.
|
1.0
|
Identifiers Changed for Unregistered Datasets in IPT 2.2 - iDigBio ran into this when consuming the RSS feeds from a recently updated IPT.
Old Format:
http://ipt.flmnh.ufl.edu:8080/ipt/resource.do?id=herbarium
New Format:
http://ipt.flmnh.ufl.edu:8080/ipt/resource?id=herbarium
That cat is kind of out of the bag here; I'm not sure there is much point in patching this now. I just want to make you guys aware that people do consume those as identifiers (they are the guid value in the RSS feed when a dataset is unregistered), and that changing them is problematic.
|
defect
|
identifiers changed for unregistered datasets in ipt idigbio ran into this when consuming the rss feeds from a recently updated ipt old format new format that cat is kind of out of the bag here i m not sure if there is much point in patching this at this point i just want to make you guys aware that people do consume those as identifiers they are the guid value in the rss feed when a dataset is unregistered and that changing them is problematic
| 1
|
650,917
| 21,442,941,131
|
IssuesEvent
|
2022-04-25 00:45:01
|
GC-spigot/AdvancedEnchantments
|
https://api.github.com/repos/GC-spigot/AdvancedEnchantments
|
closed
|
Heads issue
|
Priority: Critical Resolution: Accepted
|
### Describe the bug
Placing a Minecraft head against a block (wall) turns the head to face north and sets it to steve. After disabling AdvancedEnchantments, it seems to fix the issue. Console reveals no issues when heads are placed.
(Server is up to date with AE 8.9.16)
### How to reproduce
Placing heads against blocks (IT MUST BE FULL BLOCKS) seems to cause the issue.
### Screenshots / Videos
https://gyazo.com/502385f2c32f6e6ec9893973350e796b
https://gyazo.com/67a5f2e757e2fe968400aee4b7366e0b
### "/ae plinfo" link
Could not create debug link! Error: Server returned HTTP response code: 500 for URL: https://paste.md-5.net/documents
### Server Log
N/A (No errors)
|
1.0
|
Heads issue - ### Describe the bug
Placing a Minecraft head against a block (wall) turns the head to face north and sets it to steve. After disabling AdvancedEnchantments, it seems to fix the issue. Console reveals no issues when heads are placed.
(Server is up to date with AE 8.9.16)
### How to reproduce
Placing heads against blocks (IT MUST BE FULL BLOCKS) seems to cause the issue.
### Screenshots / Videos
https://gyazo.com/502385f2c32f6e6ec9893973350e796b
https://gyazo.com/67a5f2e757e2fe968400aee4b7366e0b
### "/ae plinfo" link
Could not create debug link! Error: Server returned HTTP response code: 500 for URL: https://paste.md-5.net/documents
### Server Log
N/A (No errors)
|
non_defect
|
heads issue describe the bug placing a minecraft head against a block wall turns the head to face north and sets it to steve after disabling advancedenchantments it seems to fix the issue console reveals no issues when heads are placed server is up to date with ae how to reproduce placing heads against blocks it must be full blocks seems to cause the issue screenshots videos ae plinfo link could not create debug link error server returned http response code for url server log n a no errors
| 0
|
60,845
| 17,023,537,731
|
IssuesEvent
|
2021-07-03 02:32:06
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Railway and highway bridges
|
Component: mapnik Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 11.52pm, Monday, 11th January 2010]**
http://www.openstreetmap.org/edit?lat=39.875278&lon=-75.259287&zoom=18
Here, the layer=1 motorway bridge is shown to be passing over a railway bridge that is designated layer=2.
|
1.0
|
Railway and highway bridges - **[Submitted to the original trac issue database at 11.52pm, Monday, 11th January 2010]**
http://www.openstreetmap.org/edit?lat=39.875278&lon=-75.259287&zoom=18
Here, the layer=1 motorway bridge is shown to be passing over a railway bridge that is designated layer=2.
|
defect
|
railway and highway bridges here the layer motorway bridge is shown to be passing over a railway bridge that is designated layer
| 1
|
287,701
| 31,856,301,404
|
IssuesEvent
|
2023-09-15 07:42:52
|
Trinadh465/linux-4.1.15_CVE-2023-26607
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-26607
|
opened
|
CVE-2022-32296 (Low) detected in linuxlinux-4.6
|
Mend: dependency security vulnerability
|
## CVE-2022-32296 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-26607/commit/6fca0e3f2f14e1e851258fd815766531370084b0">6fca0e3f2f14e1e851258fd815766531370084b0</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/inet_hashtables.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/inet_hashtables.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 5.17.9 allows TCP servers to identify clients by observing what source ports are used. This occurs because of use of Algorithm 4 ("Double-Hash Port Selection Algorithm") of RFC 6056.
<p>Publish Date: 2022-06-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-32296>CVE-2022-32296</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-32296">https://www.linuxkernelcves.com/cves/CVE-2022-32296</a></p>
<p>Release Date: 2022-06-05</p>
<p>Fix Resolution: v4.9.320,v4.14.285,v4.19.249,v5.4.201,v5.10.125,v5.15.41,v5.17.9,v5.18-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-32296 (Low) detected in linuxlinux-4.6 - ## CVE-2022-32296 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-26607/commit/6fca0e3f2f14e1e851258fd815766531370084b0">6fca0e3f2f14e1e851258fd815766531370084b0</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/inet_hashtables.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/inet_hashtables.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 5.17.9 allows TCP servers to identify clients by observing what source ports are used. This occurs because of use of Algorithm 4 ("Double-Hash Port Selection Algorithm") of RFC 6056.
<p>Publish Date: 2022-06-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-32296>CVE-2022-32296</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-32296">https://www.linuxkernelcves.com/cves/CVE-2022-32296</a></p>
<p>Release Date: 2022-06-05</p>
<p>Fix Resolution: v4.9.320,v4.14.285,v4.19.249,v5.4.201,v5.10.125,v5.15.41,v5.17.9,v5.18-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve low detected in linuxlinux cve low severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files net inet hashtables c net inet hashtables c vulnerability details the linux kernel before allows tcp servers to identify clients by observing what source ports are used this occurs because of use of algorithm double hash port selection algorithm of rfc publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
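The vulnerability record above refers to Algorithm 4 ("Double-Hash Port Selection Algorithm") of RFC 6056. A minimal sketch of that scheme follows; the hash function (keyed blake2b), port range, and table size are illustrative stand-ins, not the kernel's actual implementation, which uses siphash over the connection tuple and also probes for an unused port.

```python
import hashlib

# Illustrative sketch of RFC 6056 Algorithm 4 (Double-Hash Port
# Selection).  Two keyed hashes over the destination: F gives a
# per-destination offset, G picks a counter bucket, so unrelated
# destinations do not share one global counter.
MIN_PORT, MAX_PORT = 32768, 60999  # typical ephemeral range, assumed
TABLE_SIZE = 256
counter_table = [0] * TABLE_SIZE  # per-bucket connection counters

def _keyed_hash(key: bytes, *fields) -> int:
    h = hashlib.blake2b(key=key, digest_size=8)
    for f in fields:
        h.update(str(f).encode())
    return int.from_bytes(h.digest(), "big")

def pick_port(saddr, daddr, dport, secret1=b"s1", secret2=b"s2") -> int:
    num_ports = MAX_PORT - MIN_PORT + 1
    offset = _keyed_hash(secret1, saddr, daddr, dport)      # F
    index = _keyed_hash(secret2, saddr, daddr, dport) % TABLE_SIZE  # G
    port = MIN_PORT + (offset + counter_table[index]) % num_ports
    counter_table[index] += 1  # next pick for this bucket moves on by one
    return port
```

The information leak described in the CVE arises because the per-bucket counter advances predictably, so an observer who can see source ports may correlate connections from the same client.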
|
80,423
| 30,282,800,865
|
IssuesEvent
|
2023-07-08 09:30:14
|
scoutplan/scoutplan
|
https://api.github.com/repos/scoutplan/scoutplan
|
closed
|
[Scoutplan Production/production] NoMethodError: undefined method `youth' for nil:NilClass
|
defect
|
## Backtrace
line 43 of [PROJECT_ROOT]/app/views/events/partials/show/_payment.slim: _app_views_events_partials_show__payment_slim__178278537601882565_145080
line 14 of [PROJECT_ROOT]/app/views/events/show.html.slim: block in _app_views_events_show_html_slim___1465711909595258218_141940
line 4 of [PROJECT_ROOT]/app/views/events/show.html.slim: _app_views_events_show_html_slim___1465711909595258218_141940
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/97366583)
|
1.0
|
[Scoutplan Production/production] NoMethodError: undefined method `youth' for nil:NilClass - ## Backtrace
line 43 of [PROJECT_ROOT]/app/views/events/partials/show/_payment.slim: _app_views_events_partials_show__payment_slim__178278537601882565_145080
line 14 of [PROJECT_ROOT]/app/views/events/show.html.slim: block in _app_views_events_show_html_slim___1465711909595258218_141940
line 4 of [PROJECT_ROOT]/app/views/events/show.html.slim: _app_views_events_show_html_slim___1465711909595258218_141940
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/97366583)
|
defect
|
nomethoderror undefined method youth for nil nilclass backtrace line of app views events partials show payment slim app views events partials show payment slim line of app views events show html slim block in app views events show html slim line of app views events show html slim app views events show html slim
| 1
|
36,682
| 8,058,583,157
|
IssuesEvent
|
2018-08-02 18:57:10
|
extnet/Ext.NET
|
https://api.github.com/repos/extnet/Ext.NET
|
closed
|
Grid with BufferedRenderer throws JS error on record insertion
|
2.x 3.x 4.x defect fixed-in-latest-extjs sencha
|
http://forums.ext.net/showthread.php?26533
http://www.sencha.com/forum/showthread.php?272367
**Update:** Issue still open after 6.0.1 release.
|
1.0
|
Grid with BufferedRenderer throws JS error on record insertion - http://forums.ext.net/showthread.php?26533
http://www.sencha.com/forum/showthread.php?272367
**Update:** Issue still open after 6.0.1 release.
|
defect
|
grid with bufferedrenderer throws js error on record insertion update issue still open after release
| 1
|
270,564
| 23,519,024,375
|
IssuesEvent
|
2022-08-19 02:25:09
|
woowa-techcamp-2022/android-banchan-04
|
https://api.github.com/repos/woowa-techcamp-2022/android-banchan-04
|
closed
|
Problem separating database insert and update
|
fix test day11
|
## CheckList
- [ ] When the Primary Key has `autoGenerate = true`, an insert automatically generates a new entity id, so implement updates by making insert and update explicit and wrapping them in a Transaction to prevent conflicts
|
1.0
|
Problem separating database insert and update - ## CheckList
- [ ] When the Primary Key has `autoGenerate = true`, an insert automatically generates a new entity id, so implement updates by making insert and update explicit and wrapping them in a Transaction to prevent conflicts
|
non_defect
|
problem separating database insert and update checklist when the primary key has autogenerate true an insert automatically generates a new entity id so implement update by making insert and update explicit and wrapping them in a transaction to prevent conflict
| 0
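The checklist in the record above (keep insert and update explicit inside one transaction, so an auto-generated primary key is not regenerated on every save) can be sketched outside Room with plain SQLite; the table and column names here are hypothetical, standing in for a Room entity.

```python
import sqlite3

# Hypothetical table standing in for a Room entity whose primary key
# uses autoGenerate = true (AUTOINCREMENT in SQLite terms).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dish ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT UNIQUE, qty INTEGER)"
)

def upsert(name: str, qty: int) -> int:
    """Insert when the row is new, update when it exists.

    Runs inside one transaction so the check and the write are atomic,
    and the existing auto-generated id is preserved instead of a fresh
    id being generated on every save.
    """
    with conn:  # opens a transaction; commits on success, rolls back on error
        row = conn.execute(
            "SELECT id FROM dish WHERE name = ?", (name,)
        ).fetchone()
        if row is None:
            cur = conn.execute(
                "INSERT INTO dish (name, qty) VALUES (?, ?)", (name, qty)
            )
            return cur.lastrowid
        conn.execute("UPDATE dish SET qty = ? WHERE id = ?", (qty, row[0]))
        return row[0]
```

In Room the same shape would be a `@Transaction` DAO method dispatching between `@Insert` and `@Update`; the point in both cases is that a bare insert on an existing logical row would mint a new id rather than modify the old one.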
|
191,336
| 6,828,197,293
|
IssuesEvent
|
2017-11-08 19:35:02
|
r-lib/styler
|
https://api.github.com/repos/r-lib/styler
|
closed
|
Add style_dir() to RStudio add-in
|
Complexity: Low Priority: Low Status: Postponed Type: Enhancement
|
Currently the add-in provides 3 options to style active region, active file, or the whole package. It would be awesome to add another option to style all files in the working directory. There's already a `style_dir()` function that can be bound to the option.
|
1.0
|
Add style_dir() to RStudio add-in - Currently the add-in provides 3 options to style active region, active file, or the whole package. It would be awesome to add another option to style all files in the working directory. There's already a `style_dir()` function that can be bound to the option.
|
non_defect
|
add style dir to rstudio add in currently the add in provides options to style active region active file or the whole package it would be awesome to add another option to style all files in the working directory there s already a style dir function that can be bound to the option
| 0
|
55,481
| 14,514,322,144
|
IssuesEvent
|
2020-12-13 07:51:13
|
ascott18/TellMeWhen
|
https://api.github.com/repos/ascott18/TellMeWhen
|
closed
|
[Bug] Opacity Showing at 100% When Set to 'Hidden' for All-Unit Buff/Debuffs
|
S: resolved T: defect
|
**What version of TellMeWhen are you using? **
<!-- Found in-game at the top of TMW's configuration window. "The latest" is not a version. -->
Version 9.0.2
**What steps will reproduce the problem?**
1. Select All-Unit Buffs/Debuffs
2. Input a debuff like Entangling Roots in the 'What to Track' section
3. Set the Opacity & Color to 'Hidden' or zero
4. Test the behavior by casting the debuff (Entangling Roots in my case) on a target - you'll notice the icon is shown when the debuff is present rather than being hidden or invisible.
<!-- Add more steps if needed -->
**What do you expect to happen? What happens instead?**
I know that the icon showing at full Opacity when set to 'Hidden' is a bug because if you set the Opacity to a super low value like 1%, it actually works as intended and shows the icon at 1% opacity whenever the debuff is present on any target.
**Screenshots and Export Strings**
<!-- If your issue pertains to a specific icon or group, please post the relevant export string(s).
To get an export string, open the icon editor, and click the button labeled "Import/Export/Backup". Select the "To String" option for the appropriate export type (icon, group, or profile), and then press CTRL+C to copy it to your clipboard.
Additionally, if applicable, add screenshots to help explain your problem. You can paste images directly into GitHub issues, or you can upload files as well. -->
**Additional Info**
<!-- Please add any additional information you think will be useful in reproducing and/or solving the issue. -->

|
1.0
|
[Bug] Opacity Showing at 100% When Set to 'Hidden' for All-Unit Buff/Debuffs - **What version of TellMeWhen are you using?**
<!-- Found in-game at the top of TMW's configuration window. "The latest" is not a version. -->
Version 9.0.2
**What steps will reproduce the problem?**
1. Select All-Unit Buffs/Debuffs
2. Input a debuff like Entangling Roots in the 'What to Track' section
3. Set the Opacity & Color to 'Hidden' or zero
4. Test the behavior by casting the debuff (Entangling Roots in my case) on a target - you'll notice the icon is shown when the debuff is present rather than being hidden or invisible.
<!-- Add more steps if needed -->
**What do you expect to happen? What happens instead?**
I know that the icon showing at full Opacity when set to 'Hidden' is a bug because if you set the Opacity to a super low value like 1%, it actually works as intended and shows the icon at 1% opacity whenever the debuff is present on any target.
**Screenshots and Export Strings**
<!-- If your issue pertains to a specific icon or group, please post the relevant export string(s).
To get an export string, open the icon editor, and click the button labeled "Import/Export/Backup". Select the "To String" option for the appropriate export type (icon, group, or profile), and then press CTRL+C to copy it to your clipboard.
Additionally, if applicable, add screenshots to help explain your problem. You can paste images directly into GitHub issues, or you can upload files as well. -->
**Additional Info**
<!-- Please add any additional information you think will be useful in reproducing and/or solving the issue. -->

|
defect
|
opacity showing at when set to hidden for all unit buff debuffs what version of tellmewhen are you using version what steps will reproduce the problem select all unit buffs debuffs input a debuff like entangling roots in the what to track section set the opacity color to hidden or zero test the behavior by casting the debuff entangling roots in my case on a target you ll notice the icon is shown when the debuff is present rather than being hidden or invisible what do you expect to happen what happens instead i know that the icon showing at full opacity when set t o hidden is a bug because if you set the opacity to a super low value like it actually works as intended and shows the icon at opacity whenever the debuff is present on any target screenshots and export strings if your issue pertains to a specific icon or group please post the relevant export string s to get an export string open the icon editor and click the button labeled import export backup select the to string option for the appropriate export type icon group or profile and then press ctrl c to copy it to your clipboard additionally if applicable add screenshots to help explain your problem you can paste images directly into github issues or you can upload files as well additional info
| 1
|
21,340
| 3,489,285,720
|
IssuesEvent
|
2016-01-03 19:11:31
|
zaproxy/zaproxy
|
https://api.github.com/repos/zaproxy/zaproxy
|
closed
|
HTTPS Info classcast exception
|
Priority-Medium Type-Defect
|
```
I get these exceptions when running the httpsInfo add-on.
We have various trees that don't contain SiteNodes, so the class should check that first ;)
java.lang.ClassCastException: org.zaproxy.zap.extension.script.ScriptNode cannot be
cast to org.parosproxy.paros.model.SiteNode
at org.zaproxy.zap.extension.httpsInfo.MenuEntry.isEnableForComponent(Unknown Source)
at org.parosproxy.paros.view.MainPopupMenu.handleMenuItem(MainPopupMenu.java:151)
at org.parosproxy.paros.view.MainPopupMenu.show(MainPopupMenu.java:123)
at org.zaproxy.zap.extension.scripts.ScriptsListPanel$5.mouseClicked(Unknown Source)
at org.zaproxy.zap.extension.scripts.ScriptsListPanel$5.mouseReleased(Unknown Source)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:290)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:289)
at java.awt.Component.processMouseEvent(Component.java:6505)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3321)
at java.awt.Component.processEvent(Component.java:6270)
at java.awt.Container.processEvent(Container.java:2229)
at java.awt.Component.dispatchEventImpl(Component.java:4861)
at java.awt.Container.dispatchEventImpl(Container.java:2287)
at java.awt.Component.dispatchEvent(Component.java:4687)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4832)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4492)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4422)
at java.awt.Container.dispatchEventImpl(Container.java:2273)
at java.awt.Window.dispatchEventImpl(Window.java:2719)
at java.awt.Component.dispatchEvent(Component.java:4687)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:723)
at java.awt.EventQueue.access$200(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:682)
at java.awt.EventQueue$3.run(EventQueue.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)
at java.awt.EventQueue$4.run(EventQueue.java:696)
at java.awt.EventQueue$4.run(EventQueue.java:694)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:693)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:244)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:163)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:147)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:139)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:97)
java.lang.ClassCastException: org.zaproxy.zap.extension.alert.AlertNode cannot be cast
to org.parosproxy.paros.model.SiteNode
at org.zaproxy.zap.extension.httpsInfo.MenuEntry.isEnableForComponent(Unknown Source)
at org.parosproxy.paros.view.MainPopupMenu.handleMenuItem(MainPopupMenu.java:151)
at org.parosproxy.paros.view.MainPopupMenu.show(MainPopupMenu.java:123)
at org.zaproxy.zap.extension.alert.AlertPanel$2.mouseClicked(AlertPanel.java:245)
at java.awt.AWTEventMulticaster.mouseClicked(AWTEventMulticaster.java:270)
at java.awt.Component.processMouseEvent(Component.java:6508)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3321)
at java.awt.Component.processEvent(Component.java:6270)
at java.awt.Container.processEvent(Container.java:2229)
at java.awt.Component.dispatchEventImpl(Component.java:4861)
at java.awt.Container.dispatchEventImpl(Container.java:2287)
at java.awt.Component.dispatchEvent(Component.java:4687)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4832)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4501)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4422)
at java.awt.Container.dispatchEventImpl(Container.java:2273)
at java.awt.Window.dispatchEventImpl(Window.java:2719)
at java.awt.Component.dispatchEvent(Component.java:4687)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:723)
at java.awt.EventQueue.access$200(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:682)
at java.awt.EventQueue$3.run(EventQueue.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)
at java.awt.EventQueue$4.run(EventQueue.java:696)
at java.awt.EventQueue$4.run(EventQueue.java:694)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:693)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:244)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:163)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:147)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:139)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:97)
```
Original issue reported on code.google.com by `psiinon` on 2013-08-06 13:29:17
|
1.0
|
HTTPS Info classcast exception - ```
I get these exceptions when running the httpsInfo add-on.
We have various trees that don't contain SiteNodes, so the class should check that first ;)
java.lang.ClassCastException: org.zaproxy.zap.extension.script.ScriptNode cannot be
cast to org.parosproxy.paros.model.SiteNode
at org.zaproxy.zap.extension.httpsInfo.MenuEntry.isEnableForComponent(Unknown Source)
at org.parosproxy.paros.view.MainPopupMenu.handleMenuItem(MainPopupMenu.java:151)
at org.parosproxy.paros.view.MainPopupMenu.show(MainPopupMenu.java:123)
at org.zaproxy.zap.extension.scripts.ScriptsListPanel$5.mouseClicked(Unknown Source)
at org.zaproxy.zap.extension.scripts.ScriptsListPanel$5.mouseReleased(Unknown Source)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:290)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:289)
at java.awt.Component.processMouseEvent(Component.java:6505)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3321)
at java.awt.Component.processEvent(Component.java:6270)
at java.awt.Container.processEvent(Container.java:2229)
at java.awt.Component.dispatchEventImpl(Component.java:4861)
at java.awt.Container.dispatchEventImpl(Container.java:2287)
at java.awt.Component.dispatchEvent(Component.java:4687)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4832)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4492)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4422)
at java.awt.Container.dispatchEventImpl(Container.java:2273)
at java.awt.Window.dispatchEventImpl(Window.java:2719)
at java.awt.Component.dispatchEvent(Component.java:4687)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:723)
at java.awt.EventQueue.access$200(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:682)
at java.awt.EventQueue$3.run(EventQueue.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)
at java.awt.EventQueue$4.run(EventQueue.java:696)
at java.awt.EventQueue$4.run(EventQueue.java:694)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:693)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:244)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:163)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:147)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:139)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:97)
java.lang.ClassCastException: org.zaproxy.zap.extension.alert.AlertNode cannot be cast
to org.parosproxy.paros.model.SiteNode
at org.zaproxy.zap.extension.httpsInfo.MenuEntry.isEnableForComponent(Unknown Source)
at org.parosproxy.paros.view.MainPopupMenu.handleMenuItem(MainPopupMenu.java:151)
at org.parosproxy.paros.view.MainPopupMenu.show(MainPopupMenu.java:123)
at org.zaproxy.zap.extension.alert.AlertPanel$2.mouseClicked(AlertPanel.java:245)
at java.awt.AWTEventMulticaster.mouseClicked(AWTEventMulticaster.java:270)
at java.awt.Component.processMouseEvent(Component.java:6508)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3321)
at java.awt.Component.processEvent(Component.java:6270)
at java.awt.Container.processEvent(Container.java:2229)
at java.awt.Component.dispatchEventImpl(Component.java:4861)
at java.awt.Container.dispatchEventImpl(Container.java:2287)
at java.awt.Component.dispatchEvent(Component.java:4687)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4832)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4501)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4422)
at java.awt.Container.dispatchEventImpl(Container.java:2273)
at java.awt.Window.dispatchEventImpl(Window.java:2719)
at java.awt.Component.dispatchEvent(Component.java:4687)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:723)
at java.awt.EventQueue.access$200(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:682)
at java.awt.EventQueue$3.run(EventQueue.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)
at java.awt.EventQueue$4.run(EventQueue.java:696)
at java.awt.EventQueue$4.run(EventQueue.java:694)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:693)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:244)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:163)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:147)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:139)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:97)
```
Original issue reported on code.google.com by `psiinon` on 2013-08-06 13:29:17
|
defect
|
https info classcast exception i get thess exception when running the httpsinfo add on we various trees that dont contain sitenodes so the class should check that first java lang classcastexception org zaproxy zap extension script scriptnode cannot be cast to org parosproxy paros model sitenode at org zaproxy zap extension httpsinfo menuentry isenableforcomponent unknown source at org parosproxy paros view mainpopupmenu handlemenuitem mainpopupmenu java at org parosproxy paros view mainpopupmenu show mainpopupmenu java at org zaproxy zap extension scripts scriptslistpanel mouseclicked unknown source at org zaproxy zap extension scripts scriptslistpanel mousereleased unknown source at java awt awteventmulticaster mousereleased awteventmulticaster java at java awt awteventmulticaster mousereleased awteventmulticaster java at java awt component processmouseevent component java at javax swing jcomponent processmouseevent jcomponent java at java awt component processevent component java at java awt container processevent container java at java awt component dispatcheventimpl component java at java awt container dispatcheventimpl container java at java awt component dispatchevent component java at java awt lightweightdispatcher retargetmouseevent container java at java awt lightweightdispatcher processmouseevent container java at java awt lightweightdispatcher dispatchevent container java at java awt container dispatcheventimpl container java at java awt window dispatcheventimpl window java at java awt component dispatchevent component java at java awt eventqueue dispatcheventimpl eventqueue java at java awt eventqueue access eventqueue java at java awt eventqueue run eventqueue java at java awt eventqueue run eventqueue java at java security accesscontroller doprivileged native method at java security protectiondomain dointersectionprivilege protectiondomain java at java security protectiondomain dointersectionprivilege protectiondomain java at java awt eventqueue run 
eventqueue java at java awt eventqueue run eventqueue java at java security accesscontroller doprivileged native method at java security protectiondomain dointersectionprivilege protectiondomain java at java awt eventqueue dispatchevent eventqueue java at java awt eventdispatchthread pumponeeventforfilters eventdispatchthread java at java awt eventdispatchthread pumpeventsforfilter eventdispatchthread java at java awt eventdispatchthread pumpeventsforhierarchy eventdispatchthread java at java awt eventdispatchthread pumpevents eventdispatchthread java at java awt eventdispatchthread pumpevents eventdispatchthread java at java awt eventdispatchthread run eventdispatchthread java java lang classcastexception org zaproxy zap extension alert alertnode cannot be cast to org parosproxy paros model sitenode at org zaproxy zap extension httpsinfo menuentry isenableforcomponent unknown source at org parosproxy paros view mainpopupmenu handlemenuitem mainpopupmenu java at org parosproxy paros view mainpopupmenu show mainpopupmenu java at org zaproxy zap extension alert alertpanel mouseclicked alertpanel java at java awt awteventmulticaster mouseclicked awteventmulticaster java at java awt component processmouseevent component java at javax swing jcomponent processmouseevent jcomponent java at java awt component processevent component java at java awt container processevent container java at java awt component dispatcheventimpl component java at java awt container dispatcheventimpl container java at java awt component dispatchevent component java at java awt lightweightdispatcher retargetmouseevent container java at java awt lightweightdispatcher processmouseevent container java at java awt lightweightdispatcher dispatchevent container java at java awt container dispatcheventimpl container java at java awt window dispatcheventimpl window java at java awt component dispatchevent component java at java awt eventqueue dispatcheventimpl eventqueue java at java awt eventqueue 
access eventqueue java at java awt eventqueue run eventqueue java at java awt eventqueue run eventqueue java at java security accesscontroller doprivileged native method at java security protectiondomain dointersectionprivilege protectiondomain java at java security protectiondomain dointersectionprivilege protectiondomain java at java awt eventqueue run eventqueue java at java awt eventqueue run eventqueue java at java security accesscontroller doprivileged native method at java security protectiondomain dointersectionprivilege protectiondomain java at java awt eventqueue dispatchevent eventqueue java at java awt eventdispatchthread pumponeeventforfilters eventdispatchthread java at java awt eventdispatchthread pumpeventsforfilter eventdispatchthread java at java awt eventdispatchthread pumpeventsforhierarchy eventdispatchthread java at java awt eventdispatchthread pumpevents eventdispatchthread java at java awt eventdispatchthread pumpevents eventdispatchthread java at java awt eventdispatchthread run eventdispatchthread java original issue reported on code google com by psiinon on
| 1
|
30,498
| 6,143,275,472
|
IssuesEvent
|
2017-06-27 04:48:02
|
fieldenms/tg
|
https://api.github.com/repos/fieldenms/tg
|
closed
|
Revalidation of dependent properties should take into account requiredness
|
Complexity: low Defect Developer productivity Modelling P1
|
### Description
The requiredness of properties is the first validation step for all properties that are required by definition -- this is a contract.
This contract is followed when properties are changed directly, but it is violated upon revalidation of dependent properties. This violation needs to be rectified.
### Expected outcome
Proper following of the validation contract whereby no BCE handlers are invoked if a corresponding property is required, but has no value assigned.
### Actual outcome
Revalidating a dependent property, which is required by definition, but was not yet assigned a value, leads to NPE exceptions in BCE handlers for such property.
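The intended validation order can be sketched as follows. Note that `Property`, `required`, and `run_bce_handlers` are hypothetical names used for illustration only, not TG's actual API:

```python
class Property:
    """Hypothetical stand-in for an entity property (illustration only)."""
    def __init__(self, required, value):
        self.required = required
        self.value = value

    def run_bce_handlers(self):
        # A BCE handler may dereference self.value, so invoking it while the
        # value is still missing is what produces the NPE this issue describes.
        return len(self.value)

def revalidate(prop):
    # Contract: requiredness is checked FIRST. If a required property has no
    # value assigned yet, report a requiredness failure and skip BCE handlers.
    if prop.required and prop.value is None:
        return "requiredness-violation"
    return prop.run_bce_handlers()

print(revalidate(Property(required=True, value=None)))   # requiredness-violation
print(revalidate(Property(required=True, value="abc")))  # 3
```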
|
1.0
|
|
defect
|
revalidation of dependent properties should take into account requiredness description the requiredness of properties is the first validation step for all required by the definition properties this is a contract this contract is followed when changing properties directly but is violated upon revalidation of dependent properties this violation needs to be rectified expected outcome proper following of the validation contract whereby no bce handlers are invoked if a corresponding property is required but has no value assigned actual outcome revalidating a dependent property which is required by definition but was not yet assigned a value leads to npe exceptions in bce handlers for such property
| 1
|
275,674
| 30,281,827,274
|
IssuesEvent
|
2023-07-08 06:38:21
|
KOSASIH/Microfarma
|
https://api.github.com/repos/KOSASIH/Microfarma
|
opened
|
spring-cloud-starter-stream-kafka-4.0.3.jar: 4 vulnerabilities (highest severity is: 7.5)
|
Mend: dependency security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-cloud-starter-stream-kafka-4.0.3.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/kafka/kafka-clients/3.3.2/kafka-clients-3.3.2.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (spring-cloud-starter-stream-kafka version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2023-34454](https://www.mend.io/vulnerability-database/CVE-2023-34454) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | snappy-java-1.1.8.4.jar | Transitive | N/A* | ❌ |
| [CVE-2023-34453](https://www.mend.io/vulnerability-database/CVE-2023-34453) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | snappy-java-1.1.8.4.jar | Transitive | N/A* | ❌ |
| [CVE-2023-34455](https://www.mend.io/vulnerability-database/CVE-2023-34455) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | snappy-java-1.1.8.4.jar | Transitive | N/A* | ❌ |
| [CVE-2023-25194](https://www.mend.io/vulnerability-database/CVE-2023-25194) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.6 | kafka-clients-3.3.2.jar | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-34454</summary>
### Vulnerable Library - <b>snappy-java-1.1.8.4.jar</b></p>
<p>snappy-java: A fast compression/decompression library</p>
<p>Library home page: <a href="https://github.com/xerial/snappy-java">https://github.com/xerial/snappy-java</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-stream-kafka-4.0.3.jar (Root Library)
- spring-cloud-stream-binder-kafka-4.0.3.jar
- spring-kafka-3.0.7.jar
- kafka-clients-3.3.2.jar
- :x: **snappy-java-1.1.8.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
snappy-java is a fast compressor/decompressor for Java. Due to unchecked multiplications, an integer overflow may occur in versions prior to 1.1.10.1, causing an unrecoverable fatal error.
The function `compress(char[] input)` in the file `Snappy.java` receives an array of characters and compresses it. It does so by multiplying the length by 2 and passing it to the `rawCompress` function.
Since the length is not tested, the multiplication by two can cause an integer overflow and become negative. The rawCompress function then uses the received length and passes it to the natively compiled maxCompressedLength function, using the returned value to allocate a byte array.
Since the maxCompressedLength function treats the length as an unsigned integer, it doesn’t care that it is negative, and it returns a valid value, which is casted to a signed integer by the Java engine. If the result is negative, a `java.lang.NegativeArraySizeException` exception will be raised while trying to allocate the array `buf`. On the other side, if the result is positive, the `buf` array will successfully be allocated, but its size might be too small to use for the compression, causing a fatal Access Violation error.
The same issue exists also when using the `compress` functions that receive double, float, int, long and short, each using a different multiplier that may cause the same issue. The issue most likely won’t occur when using a byte array, since creating a byte array of size 0x80000000 (or any other negative value) is impossible in the first place.
Version 1.1.10.1 contains a patch for this issue.
<p>Publish Date: 2023-06-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-34454>CVE-2023-34454</a></p>
</p>
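The overflow described above can be reproduced arithmetically. The sketch below simulates Java's 32-bit signed `int` wraparound in Python; the `to_int32` helper is ours, added for the simulation, and is not part of snappy-java:

```python
INT32_MAX = 2**31 - 1

def to_int32(n):
    """Wrap an arbitrary integer to Java's 32-bit signed int semantics."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n > INT32_MAX else n

# A char[] of length 0x40000000 (~1.07 billion chars). compress(char[])
# multiplies the length by 2 (bytes per char) with no overflow check.
char_len = 0x40000000
byte_len = to_int32(char_len * 2)
print(byte_len)  # -2147483648: negative, so allocating buf raises
                 # java.lang.NegativeArraySizeException in Java
```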
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-fjpj-2g6w-x25r">https://github.com/xerial/snappy-java/security/advisories/GHSA-fjpj-2g6w-x25r</a></p>
<p>Release Date: 2023-06-15</p>
<p>Fix Resolution: org.xerial.snappy:snappy-java:1.1.10.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-34453</summary>
### Vulnerable Library - <b>snappy-java-1.1.8.4.jar</b></p>
<p>snappy-java: A fast compression/decompression library</p>
<p>Library home page: <a href="https://github.com/xerial/snappy-java">https://github.com/xerial/snappy-java</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-stream-kafka-4.0.3.jar (Root Library)
- spring-cloud-stream-binder-kafka-4.0.3.jar
- spring-kafka-3.0.7.jar
- kafka-clients-3.3.2.jar
- :x: **snappy-java-1.1.8.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
snappy-java is a fast compressor/decompressor for Java. Due to unchecked multiplications, an integer overflow may occur in versions prior to 1.1.10.1, causing a fatal error.
The function `shuffle(int[] input)` in the file `BitShuffle.java` receives an array of integers and applies a bit shuffle on it. It does so by multiplying the length by 4 and passing it to the natively compiled shuffle function. Since the length is not tested, the multiplication by four can cause an integer overflow and become a smaller value than the true size, or even zero or negative. In the case of a negative value, a `java.lang.NegativeArraySizeException` exception will raise, which can crash the program. In a case of a value that is zero or too small, the code that afterwards references the shuffled array will assume a bigger size of the array, which might cause exceptions such as `java.lang.ArrayIndexOutOfBoundsException`.
The same issue exists also when using the `shuffle` functions that receive a double, float, long and short, each using a different multiplier that may cause the same issue.
Version 1.1.10.1 contains a patch for this vulnerability.
<p>Publish Date: 2023-06-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-34453>CVE-2023-34453</a></p>
</p>
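The multiply-by-4 variant behaves the same way; depending on the input length the result can be negative or even zero. Again simulating Java's 32-bit signed `int` in Python (the `to_int32` helper is ours, not part of snappy-java):

```python
def to_int32(n):
    """Wrap an arbitrary integer to Java's 32-bit signed int semantics."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n > 2**31 - 1 else n

# shuffle(int[]) multiplies the element count by 4 bytes with no check.
print(to_int32(0x20000000 * 4))  # -2147483648: NegativeArraySizeException
print(to_int32(0x40000000 * 4))  # 0: "valid" but far smaller than the true size
```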
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-pqr6-cmr2-h8hf">https://github.com/xerial/snappy-java/security/advisories/GHSA-pqr6-cmr2-h8hf</a></p>
<p>Release Date: 2023-06-15</p>
<p>Fix Resolution: org.xerial.snappy:snappy-java:1.1.10.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-34455</summary>
### Vulnerable Library - <b>snappy-java-1.1.8.4.jar</b></p>
<p>snappy-java: A fast compression/decompression library</p>
<p>Library home page: <a href="https://github.com/xerial/snappy-java">https://github.com/xerial/snappy-java</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-stream-kafka-4.0.3.jar (Root Library)
- spring-cloud-stream-binder-kafka-4.0.3.jar
- spring-kafka-3.0.7.jar
- kafka-clients-3.3.2.jar
- :x: **snappy-java-1.1.8.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
snappy-java is a fast compressor/decompressor for Java. Due to use of an unchecked chunk length, an unrecoverable fatal error can occur in versions prior to 1.1.10.1.
The code in the function `hasNextChunk` in the file `SnappyInputStream.java` checks if a given stream has more chunks to read. It does that by attempting to read 4 bytes. If it wasn’t possible to read the 4 bytes, the function returns false. Otherwise, if 4 bytes were available, the code treats them as the length of the next chunk.
In the case that the `compressed` variable is null, a byte array is allocated with the size given by the input data. Since the code doesn’t test the legality of the `chunkSize` variable, it is possible to pass a negative number (such as 0xFFFFFFFF which is -1), which will cause the code to raise a `java.lang.NegativeArraySizeException` exception. A worse case would happen when passing a huge positive value (such as 0x7FFFFFFF), which would raise the fatal `java.lang.OutOfMemoryError` error.
Version 1.1.10.1 contains a patch for this issue.
<p>Publish Date: 2023-06-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-34455>CVE-2023-34455</a></p>
</p>
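The two failure modes above follow directly from interpreting the 4 header bytes as a signed big-endian integer. A minimal sketch (`read_chunk_size` is a hypothetical helper mimicking the described behavior, not snappy-java's API):

```python
import struct

def read_chunk_size(header):
    # The stream reader treats the next 4 bytes as a signed big-endian int
    # and allocates a buffer of that size without a sanity check.
    (size,) = struct.unpack(">i", header)
    return size

print(read_chunk_size(b"\xff\xff\xff\xff"))  # -1 -> NegativeArraySizeException
print(read_chunk_size(b"\x7f\xff\xff\xff"))  # 2147483647 -> OutOfMemoryError
```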
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh">https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh</a></p>
<p>Release Date: 2023-06-15</p>
<p>Fix Resolution: org.xerial.snappy:snappy-java:1.1.10.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2023-25194</summary>
### Vulnerable Library - <b>kafka-clients-3.3.2.jar</b></p>
<p></p>
<p>Library home page: <a href="https://kafka.apache.org">https://kafka.apache.org</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/kafka/kafka-clients/3.3.2/kafka-clients-3.3.2.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-stream-kafka-4.0.3.jar (Root Library)
- spring-cloud-stream-binder-kafka-4.0.3.jar
- spring-kafka-3.0.7.jar
- :x: **kafka-clients-3.3.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A possible security vulnerability has been identified in Apache Kafka Connect.
This requires access to a Kafka Connect worker, and the ability to create/modify connectors on it with an arbitrary Kafka client SASL JAAS config
and a SASL-based security protocol, which has been possible on Kafka Connect clusters since Apache Kafka 2.3.0.
When configuring the connector via the Kafka Connect REST API, an authenticated operator can set the `sasl.jaas.config`
property for any of the connector's Kafka clients to "com.sun.security.auth.module.JndiLoginModule", which can be done via the
`producer.override.sasl.jaas.config`, `consumer.override.sasl.jaas.config`, or `admin.override.sasl.jaas.config` properties.
This will allow the server to connect to the attacker's LDAP server
and deserialize the LDAP response, which the attacker can use to execute java deserialization gadget chains on the Kafka connect server.
An attacker can cause unrestricted deserialization of untrusted data, or an RCE vulnerability when there are gadgets in the classpath.
Since Apache Kafka 3.0.0, users are allowed to specify these properties in connector configurations for Kafka Connect clusters running with out-of-the-box
configurations. Before Apache Kafka 3.0.0, users may not specify these properties unless the Kafka Connect cluster has been reconfigured with a connector
client override policy that permits them.
Since Apache Kafka 3.4.0, we have added a system property ("-Dorg.apache.kafka.disallowed.login.modules") to disable the problematic login modules usage
in SASL JAAS configuration. Also by default "com.sun.security.auth.module.JndiLoginModule" is disabled in Apache Kafka 3.4.0.
We advise Kafka Connect users to validate connector configurations and only allow trusted JNDI configurations. Also examine connector dependencies for
vulnerable versions and, as remediation options, either upgrade the connectors, upgrade the specific dependency, or remove the connectors. Finally,
in addition to leveraging the "org.apache.kafka.disallowed.login.modules" system property, Kafka Connect users can also implement their own connector
client config override policy, which can be used to control which Kafka client properties can be overridden directly in a connector config and which cannot.
<p>Publish Date: 2023-02-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-25194>CVE-2023-25194</a></p>
</p>
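The config-validation mitigation can be sketched as follows. The property names come from the advisory above, but `validate_connector_config` itself is a hypothetical helper illustrating the idea behind a connector client config override policy, not Kafka's actual API:

```python
DISALLOWED_LOGIN_MODULES = {"com.sun.security.auth.module.JndiLoginModule"}

OVERRIDE_KEYS = (
    "producer.override.sasl.jaas.config",
    "consumer.override.sasl.jaas.config",
    "admin.override.sasl.jaas.config",
)

def validate_connector_config(config):
    """Reject configs whose SASL JAAS overrides name a disallowed login module."""
    for key in OVERRIDE_KEYS:
        value = config.get(key, "")
        if any(module in value for module in DISALLOWED_LOGIN_MODULES):
            raise ValueError(f"disallowed login module in {key}")
    return True

print(validate_connector_config({"name": "my-sink"}))  # True
```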
<p></p>
### CVSS 3 Score Details (<b>6.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://kafka.apache.org/cve-list">https://kafka.apache.org/cve-list</a></p>
<p>Release Date: 2023-02-07</p>
<p>Fix Resolution: org.apache.kafka:kafka-clients:3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
spring-cloud-starter-stream-kafka-4.0.3.jar: 4 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-cloud-starter-stream-kafka-4.0.3.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/kafka/kafka-clients/3.3.2/kafka-clients-3.3.2.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (spring-cloud-starter-stream-kafka version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2023-34454](https://www.mend.io/vulnerability-database/CVE-2023-34454) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | snappy-java-1.1.8.4.jar | Transitive | N/A* | ❌ |
| [CVE-2023-34453](https://www.mend.io/vulnerability-database/CVE-2023-34453) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | snappy-java-1.1.8.4.jar | Transitive | N/A* | ❌ |
| [CVE-2023-34455](https://www.mend.io/vulnerability-database/CVE-2023-34455) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | snappy-java-1.1.8.4.jar | Transitive | N/A* | ❌ |
| [CVE-2023-25194](https://www.mend.io/vulnerability-database/CVE-2023-25194) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.6 | kafka-clients-3.3.2.jar | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-34454</summary>
### Vulnerable Library - <b>snappy-java-1.1.8.4.jar</b></p>
<p>snappy-java: A fast compression/decompression library</p>
<p>Library home page: <a href="https://github.com/xerial/snappy-java">https://github.com/xerial/snappy-java</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-stream-kafka-4.0.3.jar (Root Library)
- spring-cloud-stream-binder-kafka-4.0.3.jar
- spring-kafka-3.0.7.jar
- kafka-clients-3.3.2.jar
- :x: **snappy-java-1.1.8.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
snappy-java is a fast compressor/decompressor for Java. Due to unchecked multiplications, an integer overflow may occur in versions prior to 1.1.10.1, causing an unrecoverable fatal error.
The function `compress(char[] input)` in the file `Snappy.java` receives an array of characters and compresses it. It does so by multiplying the length by 2 and passing it to the rawCompress` function.
Since the length is not tested, the multiplication by two can cause an integer overflow and become negative. The rawCompress function then uses the received length and passes it to the natively compiled maxCompressedLength function, using the returned value to allocate a byte array.
Since the maxCompressedLength function treats the length as an unsigned integer, it doesn’t care that it is negative, and it returns a valid value, which is casted to a signed integer by the Java engine. If the result is negative, a `java.lang.NegativeArraySizeException` exception will be raised while trying to allocate the array `buf`. On the other side, if the result is positive, the `buf` array will successfully be allocated, but its size might be too small to use for the compression, causing a fatal Access Violation error.
The same issue exists also when using the `compress` functions that receive double, float, int, long and short, each using a different multiplier that may cause the same issue. The issue most likely won’t occur when using a byte array, since creating a byte array of size 0x80000000 (or any other negative value) is impossible in the first place.
Version 1.1.10.1 contains a patch for this issue.
<p>Publish Date: 2023-06-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-34454>CVE-2023-34454</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-fjpj-2g6w-x25r">https://github.com/xerial/snappy-java/security/advisories/GHSA-fjpj-2g6w-x25r</a></p>
<p>Release Date: 2023-06-15</p>
<p>Fix Resolution: org.xerial.snappy:snappy-java:1.1.10.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-34453</summary>
### Vulnerable Library - <b>snappy-java-1.1.8.4.jar</b></p>
<p>snappy-java: A fast compression/decompression library</p>
<p>Library home page: <a href="https://github.com/xerial/snappy-java">https://github.com/xerial/snappy-java</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-stream-kafka-4.0.3.jar (Root Library)
- spring-cloud-stream-binder-kafka-4.0.3.jar
- spring-kafka-3.0.7.jar
- kafka-clients-3.3.2.jar
- :x: **snappy-java-1.1.8.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
snappy-java is a fast compressor/decompressor for Java. Due to unchecked multiplications, an integer overflow may occur in versions prior to 1.1.10.1, causing a fatal error.
The function `shuffle(int[] input)` in the file `BitShuffle.java` receives an array of integers and applies a bit shuffle on it. It does so by multiplying the length by 4 and passing it to the natively compiled shuffle function. Since the length is not tested, the multiplication by four can cause an integer overflow and become a smaller value than the true size, or even zero or negative. In the case of a negative value, a `java.lang.NegativeArraySizeException` exception will raise, which can crash the program. In a case of a value that is zero or too small, the code that afterwards references the shuffled array will assume a bigger size of the array, which might cause exceptions such as `java.lang.ArrayIndexOutOfBoundsException`.
The same issue exists also when using the `shuffle` functions that receive a double, float, long and short, each using a different multiplier that may cause the same issue.
Version 1.1.10.1 contains a patch for this vulnerability.
<p>Publish Date: 2023-06-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-34453>CVE-2023-34453</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-pqr6-cmr2-h8hf">https://github.com/xerial/snappy-java/security/advisories/GHSA-pqr6-cmr2-h8hf</a></p>
<p>Release Date: 2023-06-15</p>
<p>Fix Resolution: org.xerial.snappy:snappy-java:1.1.10.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-34455</summary>
### Vulnerable Library - <b>snappy-java-1.1.8.4.jar</b></p>
<p>snappy-java: A fast compression/decompression library</p>
<p>Library home page: <a href="https://github.com/xerial/snappy-java">https://github.com/xerial/snappy-java</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-stream-kafka-4.0.3.jar (Root Library)
- spring-cloud-stream-binder-kafka-4.0.3.jar
- spring-kafka-3.0.7.jar
- kafka-clients-3.3.2.jar
- :x: **snappy-java-1.1.8.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
snappy-java is a fast compressor/decompressor for Java. Due to use of an unchecked chunk length, an unrecoverable fatal error can occur in versions prior to 1.1.10.1.
The code in the function hasNextChunk in the file SnappyInputStream.java checks if a given stream has more chunks to read. It does that by attempting to read 4 bytes. If it wasn’t possible to read the 4 bytes, the function returns false. Otherwise, if 4 bytes were available, the code treats them as the length of the next chunk.
In the case that the `compressed` variable is null, a byte array is allocated with the size given by the input data. Since the code doesn’t test the legality of the `chunkSize` variable, it is possible to pass a negative number (such as 0xFFFFFFFF which is -1), which will cause the code to raise a `java.lang.NegativeArraySizeException` exception. A worse case would happen when passing a huge positive value (such as 0x7FFFFFFF), which would raise the fatal `java.lang.OutOfMemoryError` error.
Version 1.1.10.1 contains a patch for this issue.
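The unchecked chunk length can likewise be sketched in isolation. This is a hypothetical illustration — the class and method are invented and this is not the actual `SnappyInputStream` code — showing four untrusted header bytes read as a big-endian length and used for allocation without validation:

```java
public class ChunkLengthDemo {
    // Reads a 4-byte big-endian length and allocates a buffer without
    // validating it, mirroring the flaw: 0xFFFFFFFF (-1) raises
    // NegativeArraySizeException, while a huge positive value such as
    // 0x7FFFFFFF can trigger OutOfMemoryError instead.
    static byte[] allocateUnchecked(byte[] header) {
        int chunkSize = ((header[0] & 0xFF) << 24)
                | ((header[1] & 0xFF) << 16)
                | ((header[2] & 0xFF) << 8)
                | (header[3] & 0xFF);
        return new byte[chunkSize]; // no bounds check on chunkSize
    }

    public static void main(String[] args) {
        byte[] bad = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF};
        try {
            allocateUnchecked(bad);
        } catch (NegativeArraySizeException e) {
            System.out.println("NegativeArraySizeException"); // prints this
        }
    }
}
```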
<p>Publish Date: 2023-06-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-34455>CVE-2023-34455</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh">https://github.com/xerial/snappy-java/security/advisories/GHSA-qcwq-55hx-v3vh</a></p>
<p>Release Date: 2023-06-15</p>
<p>Fix Resolution: org.xerial.snappy:snappy-java:1.1.10.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2023-25194</summary>
### Vulnerable Library - <b>kafka-clients-3.3.2.jar</b></p>
<p></p>
<p>Library home page: <a href="https://kafka.apache.org">https://kafka.apache.org</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/kafka/kafka-clients/3.3.2/kafka-clients-3.3.2.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-stream-kafka-4.0.3.jar (Root Library)
- spring-cloud-stream-binder-kafka-4.0.3.jar
- spring-kafka-3.0.7.jar
- :x: **kafka-clients-3.3.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Microfarma/commit/0aff84cffb9d2e0e49468891392bdd7ab59d6aef">0aff84cffb9d2e0e49468891392bdd7ab59d6aef</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A possible security vulnerability has been identified in Apache Kafka Connect.
This requires access to a Kafka Connect worker, and the ability to create/modify connectors on it with an arbitrary Kafka client SASL JAAS config
and a SASL-based security protocol, which has been possible on Kafka Connect clusters since Apache Kafka 2.3.0.
When configuring the connector via the Kafka Connect REST API, an authenticated operator can set the `sasl.jaas.config`
property for any of the connector's Kafka clients to "com.sun.security.auth.module.JndiLoginModule", which can be done via the
`producer.override.sasl.jaas.config`, `consumer.override.sasl.jaas.config`, or `admin.override.sasl.jaas.config` properties.
This will allow the server to connect to the attacker's LDAP server
and deserialize the LDAP response, which the attacker can use to execute java deserialization gadget chains on the Kafka connect server.
Attacker can cause unrestricted deserialization of untrusted data (or) RCE vulnerability when there are gadgets in the classpath.
Since Apache Kafka 3.0.0, users are allowed to specify these properties in connector configurations for Kafka Connect clusters running with out-of-the-box
configurations. Before Apache Kafka 3.0.0, users may not specify these properties unless the Kafka Connect cluster has been reconfigured with a connector
client override policy that permits them.
Since Apache Kafka 3.4.0, we have added a system property ("-Dorg.apache.kafka.disallowed.login.modules") to disable the problematic login modules usage
in SASL JAAS configuration. Also by default "com.sun.security.auth.module.JndiLoginModule" is disabled in Apache Kafka 3.4.0.
We advise the Kafka Connect users to validate connector configurations and only allow trusted JNDI configurations. Also examine connector dependencies for
vulnerable versions and either upgrade their connectors, upgrading that specific dependency, or removing the connectors as options for remediation. Finally,
in addition to leveraging the "org.apache.kafka.disallowed.login.modules" system property, Kafka Connect users can also implement their own connector
client config override policy, which can be used to control which Kafka client properties can be overridden directly in a connector config and which cannot.
<p>Publish Date: 2023-02-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-25194>CVE-2023-25194</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://kafka.apache.org/cve-list">https://kafka.apache.org/cve-list</a></p>
<p>Release Date: 2023-02-07</p>
<p>Fix Resolution: org.apache.kafka:kafka-clients:3.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_defect
|
spring cloud starter stream kafka jar vulnerabilities highest severity is vulnerable library spring cloud starter stream kafka jar path to dependency file pom xml path to vulnerable library home wss scanner repository org apache kafka kafka clients kafka clients jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in spring cloud starter stream kafka version remediation available high snappy java jar transitive n a high snappy java jar transitive n a high snappy java jar transitive n a medium kafka clients jar transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library snappy java jar snappy java a fast compression decompression library library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org xerial snappy snappy java snappy java jar dependency hierarchy spring cloud starter stream kafka jar root library spring cloud stream binder kafka jar spring kafka jar kafka clients jar x snappy java jar vulnerable library found in head commit a href found in base branch main vulnerability details snappy java is a fast compressor decompressor for java due to unchecked multiplications an integer overflow may occur in versions prior to causing an unrecoverable fatal error the function compress char input in the file snappy java receives an array of characters and compresses it it does so by multiplying the length by and passing it to the rawcompress function since the length is not tested the multiplication by two can cause an integer overflow and become negative the rawcompress function then uses the received length and passes it to the natively compiled maxcompressedlength function using the returned value to allocate a byte array since the maxcompressedlength function treats the length as an 
unsigned integer it doesn’t care that it is negative and it returns a valid value which is casted to a signed integer by the java engine if the result is negative a java lang negativearraysizeexception exception will be raised while trying to allocate the array buf on the other side if the result is positive the buf array will successfully be allocated but its size might be too small to use for the compression causing a fatal access violation error the same issue exists also when using the compress functions that receive double float int long and short each using a different multiplier that may cause the same issue the issue most likely won’t occur when using a byte array since creating a byte array of size or any other negative value is impossible in the first place version contains a patch for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org xerial snappy snappy java step up your open source security game with mend cve vulnerable library snappy java jar snappy java a fast compression decompression library library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org xerial snappy snappy java snappy java jar dependency hierarchy spring cloud starter stream kafka jar root library spring cloud stream binder kafka jar spring kafka jar kafka clients jar x snappy java jar vulnerable library found in head commit a href found in base branch main vulnerability details snappy java is a fast compressor decompressor for java due to unchecked multiplications an integer overflow may occur in versions prior to causing a fatal error the function shuffle int input 
in the file bitshuffle java receives an array of integers and applies a bit shuffle on it it does so by multiplying the length by and passing it to the natively compiled shuffle function since the length is not tested the multiplication by four can cause an integer overflow and become a smaller value than the true size or even zero or negative in the case of a negative value a java lang negativearraysizeexception exception will raise which can crash the program in a case of a value that is zero or too small the code that afterwards references the shuffled array will assume a bigger size of the array which might cause exceptions such as java lang arrayindexoutofboundsexception the same issue exists also when using the shuffle functions that receive a double float long and short each using a different multiplier that may cause the same issue version contains a patch for this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org xerial snappy snappy java step up your open source security game with mend cve vulnerable library snappy java jar snappy java a fast compression decompression library library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org xerial snappy snappy java snappy java jar dependency hierarchy spring cloud starter stream kafka jar root library spring cloud stream binder kafka jar spring kafka jar kafka clients jar x snappy java jar vulnerable library found in head commit a href found in base branch main vulnerability details snappy java is a fast compressor decompressor for java due to use of an unchecked chunk length an 
unrecoverable fatal error can occur in versions prior to the code in the function hasnextchunk in the filesnappyinputstream java checks if a given stream has more chunks to read it does that by attempting to read bytes if it wasn’t possible to read the bytes the function returns false otherwise if bytes were available the code treats them as the length of the next chunk in the case that the compressed variable is null a byte array is allocated with the size given by the input data since the code doesn’t test the legality of the chunksize variable it is possible to pass a negative number such as which is which will cause the code to raise a java lang negativearraysizeexception exception a worse case would happen when passing a huge positive value such as which would raise the fatal java lang outofmemoryerror error version contains a patch for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org xerial snappy snappy java step up your open source security game with mend cve vulnerable library kafka clients jar library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org apache kafka kafka clients kafka clients jar dependency hierarchy spring cloud starter stream kafka jar root library spring cloud stream binder kafka jar spring kafka jar x kafka clients jar vulnerable library found in head commit a href found in base branch main vulnerability details a possible security vulnerability has been identified in apache kafka connect this requires access to a kafka connect worker and the ability to create modify connectors on it with an arbitrary kafka client 
sasl jaas config and a sasl based security protocol which has been possible on kafka connect clusters since apache kafka when configuring the connector via the kafka connect rest api an authenticated operator can set the sasl jaas config property for any of the connector s kafka clients to com sun security auth module jndiloginmodule which can be done via the producer override sasl jaas config consumer override sasl jaas config or admin override sasl jaas config properties this will allow the server to connect to the attacker s ldap server and deserialize the ldap response which the attacker can use to execute java deserialization gadget chains on the kafka connect server attacker can cause unrestricted deserialization of untrusted data or rce vulnerability when there are gadgets in the classpath since apache kafka users are allowed to specify these properties in connector configurations for kafka connect clusters running with out of the box configurations before apache kafka users may not specify these properties unless the kafka connect cluster has been reconfigured with a connector client override policy that permits them since apache kafka we have added a system property dorg apache kafka disallowed login modules to disable the problematic login modules usage in sasl jaas configuration also by default com sun security auth module jndiloginmodule is disabled in apache kafka we advise the kafka connect users to validate connector configurations and only allow trusted jndi configurations also examine connector dependencies for vulnerable versions and either upgrade their connectors upgrading that specific dependency or removing the connectors as options for remediation finally in addition to leveraging the org apache kafka disallowed login modules system property kafka connect users can also implement their own connector client config override policy which can be used to control which kafka client properties can be overridden directly in a connector config and 
which cannot publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache kafka kafka clients step up your open source security game with mend
| 0
|
27,461
| 5,026,626,285
|
IssuesEvent
|
2016-12-15 13:14:59
|
ONSdigital/eq-survey-runner
|
https://api.github.com/repos/ONSdigital/eq-survey-runner
|
opened
|
Show further guidance on household composition is in wrong place
|
design-defect
|
### Expected behaviour
Show further guidance shown below Add Person button
### Actual behaviour
Show further guidance shown below each person's last name.
### Steps to reproduce the behaviour
- navigate to HH composition screen.
### Technical information
#### Browser
Chrome
#### Operating System
macOS
### Screenshot

|
1.0
|
Show further guidance on household composition is in wrong place - ### Expected behaviour
Show further guidance shown below Add Person button
### Actual behaviour
Show further guidance shown below each person's last name.
### Steps to reproduce the behaviour
- navigate to HH composition screen.
### Technical information
#### Browser
Chrome
#### Operating System
macOS
### Screenshot

|
defect
|
show further guidance on household composition is in wrong place expected behaviour show further guidance shown below add person button actual behaviour show further guidance shown below each person s last name steps to reproduce the behaviour navigate to hh composition screen technical information browser chrome operating system macos screenshot
| 1
|
139,497
| 11,271,645,460
|
IssuesEvent
|
2020-01-14 13:28:34
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
QueryCacheNoEventLossTest.no_event_lost_during_migrations__with_many_parallel_nodes
|
Source: Internal Team: Core Type: Test-Failure
|
- Fails on `Hazelcast-3.x-nightly`
- Fails on [this build](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-3.x-nightly/348/testReport/com.hazelcast.map.impl.querycache/QueryCacheNoEventLossTest/no_event_lost_during_migrations__with_many_parallel_nodes/)
- Error
```
expected:<0> but was:<1>
```
- Stack Trace
```
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at com.hazelcast.map.impl.querycache.QueryCacheNoEventLossTest.no_event_lost_during_migrations(QueryCacheNoEventLossTest.java:123)
at com.hazelcast.map.impl.querycache.QueryCacheNoEventLossTest.access$000(QueryCacheNoEventLossTest.java:53)
at com.hazelcast.map.impl.querycache.QueryCacheNoEventLossTest$1.run(QueryCacheNoEventLossTest.java:74)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1356)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1458)
at com.hazelcast.map.impl.querycache.QueryCacheNoEventLossTest.no_event_lost_during_migrations__with_many_parallel_nodes(QueryCacheNoEventLossTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:114)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
kindly check
|
1.0
|
QueryCacheNoEventLossTest.no_event_lost_during_migrations__with_many_parallel_nodes - - Fails on `Hazelcast-3.x-nightly`
- Fails on [this build](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-3.x-nightly/348/testReport/com.hazelcast.map.impl.querycache/QueryCacheNoEventLossTest/no_event_lost_during_migrations__with_many_parallel_nodes/)
- Error
```
expected:<0> but was:<1>
```
- Stack Trace
```
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at com.hazelcast.map.impl.querycache.QueryCacheNoEventLossTest.no_event_lost_during_migrations(QueryCacheNoEventLossTest.java:123)
at com.hazelcast.map.impl.querycache.QueryCacheNoEventLossTest.access$000(QueryCacheNoEventLossTest.java:53)
at com.hazelcast.map.impl.querycache.QueryCacheNoEventLossTest$1.run(QueryCacheNoEventLossTest.java:74)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1356)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1458)
at com.hazelcast.map.impl.querycache.QueryCacheNoEventLossTest.no_event_lost_during_migrations__with_many_parallel_nodes(QueryCacheNoEventLossTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:114)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
kindly check
|
non_defect
|
querycachenoeventlosstest no event lost during migrations with many parallel nodes fails on hazelcast x nightly fails on error expected but was stack trace java lang assertionerror expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org junit assert assertequals assert java at com hazelcast map impl querycache querycachenoeventlosstest no event lost during migrations querycachenoeventlosstest java at com hazelcast map impl querycache querycachenoeventlosstest access querycachenoeventlosstest java at com hazelcast map impl querycache querycachenoeventlosstest run querycachenoeventlosstest java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast map impl querycache querycachenoeventlosstest no event lost during migrations with many parallel nodes querycachenoeventlosstest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java util concurrent futuretask run futuretask java at java lang thread run thread java kindly check
| 0
|
28,256
| 5,226,538,457
|
IssuesEvent
|
2017-01-27 21:42:42
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
Wrong height for SelectOneMenu with SelectItemGroups
|
defect
|
**Problem**
The height of the SelectOneMenu is wrong if it contains fewer than 10 SelectItemGroups. It only counts the root elements and not the children.
**Possible fix:**
Add something like the following method to [SelectOneMenuRenderer](https://github.com/primefaces/primefaces/blob/master/src/main/java/org/primefaces/component/selectonemenu/SelectOneMenuRenderer.java)
``` java
private int getSize(List<SelectItem> items) {
int sum = 0;
for (SelectItem item : items) {
if(item instanceof SelectItemGroup) {
SelectItemGroup selectItemGroup = (SelectItemGroup) item;
sum += selectItemGroup.getSelectItems().length; //Childs
}
sum++; //Add a direct child or the group header
}
return sum;
}
```
Change [line 235](https://github.com/primefaces/primefaces/blob/master/src/main/java/org/primefaces/component/selectonemenu/SelectOneMenuRenderer.java#L235) to
``` java
writer.writeAttribute("style", "height:" + calculateWrapperHeight(menu, getSize(selectItems), null);
```
Thanks
|
1.0
|
Wrong height for SelectOneMenu with SelectItemGroups - **Problem**
The height of the SelectOneMenu is wrong if it contains fewer than 10 SelectItemGroups. It only counts the root elements and not the children.
**Possible fix:**
Add something like the following method to [SelectOneMenuRenderer](https://github.com/primefaces/primefaces/blob/master/src/main/java/org/primefaces/component/selectonemenu/SelectOneMenuRenderer.java)
``` java
private int getSize(List<SelectItem> items) {
int sum = 0;
for (SelectItem item : items) {
if(item instanceof SelectItemGroup) {
SelectItemGroup selectItemGroup = (SelectItemGroup) item;
sum += selectItemGroup.getSelectItems().length; //Childs
}
sum++; //Add a direct child or the group header
}
return sum;
}
```
Change [line 235](https://github.com/primefaces/primefaces/blob/master/src/main/java/org/primefaces/component/selectonemenu/SelectOneMenuRenderer.java#L235) to
``` java
writer.writeAttribute("style", "height:" + calculateWrapperHeight(menu, getSize(selectItems), null);
```
Thanks
|
defect
|
wrong height for selectonemenu with selectitemgroups problem the height of the selectonemenu is wrong if it contains less than selectitemgroups it only counts the root elements and not the childs possible fix add something like the following method to java private int getsize list items int sum for selectitem item items if item instanceof selectitemgroup selectitemgroup selectitemgroup selectitemgroup item sum selectitemgroup getselectitems length childs sum add a direct child or the group header return sum change to java writer writeattribute style height calculatewrapperheight menu getsize selectitems null thanks
| 1
|
2,346
| 2,607,896,771
|
IssuesEvent
|
2015-02-26 00:11:44
|
chrsmithdemos/zen-coding
|
https://api.github.com/repos/chrsmithdemos/zen-coding
|
closed
|
Aptana. Zen-codding doesn't work after restarting studio.
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Install zen-codding (copy files into "Scripts" folder in project directory);
2. Refresh project - all is ok now, zen-codding is available;
3. Restart Aptana studio - zen-codding doesn't work;
4. To fix this issue you need to reinstall zen-codding, but after
restarting studio the problem appears again.
What version of the product are you using? On what operating system?
MS Windows XP Pro SP3
Aptana Studio, build: 1.5.1.026124
Zen.Coding-Aptana.v0.3
Please provide any additional information below.
Screen grabs are in attachment.
```
-----
Original issue reported on code.google.com by `bezz.mail` on 3 Oct 2009 at 9:30
Attachments:
* [zc_before_restart.png](https://storage.googleapis.com/google-code-attachments/zen-coding/issue-26/comment-0/zc_before_restart.png)
* [zc_after_restart.png](https://storage.googleapis.com/google-code-attachments/zen-coding/issue-26/comment-0/zc_after_restart.png)
|
1.0
|
Aptana. Zen-codding doesn't work after restarting studio. - ```
What steps will reproduce the problem?
1. Install zen-codding (copy files into "Scripts" folder in project directory);
2. Refresh project - all is ok now, zen-codding is available;
3. Restart Aptana studio - zen-codding doesn't work;
4. To fix this issue you need to reinstall zen-codding, but after
restarting studio the problem appears again.
What version of the product are you using? On what operating system?
MS Windows XP Pro SP3
Aptana Studio, build: 1.5.1.026124
Zen.Coding-Aptana.v0.3
Please provide any additional information below.
Screen grabs are in attachment.
```
-----
Original issue reported on code.google.com by `bezz.mail` on 3 Oct 2009 at 9:30
Attachments:
* [zc_before_restart.png](https://storage.googleapis.com/google-code-attachments/zen-coding/issue-26/comment-0/zc_before_restart.png)
* [zc_after_restart.png](https://storage.googleapis.com/google-code-attachments/zen-coding/issue-26/comment-0/zc_after_restart.png)
|
defect
|
aptana zen codding doesn t work after restarting studio what steps will reproduce the problem install zen codding copy files into scripts folder in project directory refresh project all is ok now zen codding is available restart aptana studio zen codding doesn t work to fix this issue you need to reinstall zen codding but after restarting studio the problem appears again what version of the product are you using on what operating system ms windows xp pro aptana studio build zen coding aptana please provide any additional information below screen grabs are in attachment original issue reported on code google com by bezz mail on oct at attachments
| 1
|
40,822
| 10,169,709,775
|
IssuesEvent
|
2019-08-08 01:50:56
|
nhattn/abot
|
https://api.github.com/repos/nhattn/abot
|
closed
|
Remove all marked with [Obsolete] attribute
|
Milestone-Release2.0 Priority-Medium Type-Defect auto-migrated
|
```
For next major version get rid of these!!!
```
Original issue reported on code.google.com by `sjdir...@gmail.com` on 22 Oct 2013 at 4:40
|
1.0
|
Remove all marked with [Obsolete] attribute - ```
For next major version get rid of these!!!
```
Original issue reported on code.google.com by `sjdir...@gmail.com` on 22 Oct 2013 at 4:40
|
defect
|
remove all marked with attribute for next major version get rid of these original issue reported on code google com by sjdir gmail com on oct at
| 1
|
57,289
| 15,729,483,390
|
IssuesEvent
|
2021-03-29 14:54:17
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
opened
|
Vet Center Outstations are failing to push to lighthouse.
|
Defect Product Support Team Vet Center
|
**Describe the defect**
When a Vet Center outstation status is edited, the status change is queued to go to lighthouse. This part is working. On queue processing, the outstation items in the queue are returning 500 errors from the API.

```
[error] 500 Server error: `POST https://api.va.gov/services/va_facilities/v0/facilities/vc_3121OS/cms-overlay` resulted in a `500 Internal Server Error` response:
{"errors":[{"title":"Internal server error","detail":"JSON parse error: Cannot deserialize value of type `gov.va.api.lig (truncated...)
[error] Post API: failure to process queued items. Total items in queue: 1.
```
Needs more investigation: Steve's initial hunch is this is a data issue. Some piece of data in the queued item is missing or misconfigured (like a difference in field names between vetcenters and outstations... or something like that)
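Pending that investigation, a minimal pre-queue check in the spirit of that hunch might look like the sketch below. The field names are hypothetical placeholders, not the actual Lighthouse cms-overlay schema:

```python
def missing_fields(payload, required=("operating_status", "detailed_status")):
    # Hypothetical required keys: flag queued items whose payload lacks a
    # field the endpoint is assumed to expect, before they are pushed and
    # come back as 500 responses.
    return [field for field in required if payload.get(field) in (None, "")]

# An outstation payload missing a key would be caught before queue processing:
missing_fields({"operating_status": "closed"})  # -> ["detailed_status"]
```

If the hunch is right, running a check like this over the queued outstation items should show which field differs between vet centers and outstations.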
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
When a status is changed and the queued item is processed, the push to lighthouse runs correctly without a 500 response.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information if relevant, or delete):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days).
## Labels
(You can delete this section once it's complete)
- [x] Issue type (red) (defaults to "Defect")
- [ ] CMS subsystem (green)
- [ ] CMS practice area (blue)
- [x] CMS objective (orange) (not needed for bug tickets)
- [ ] CMS-supported product (black)
|
1.0
|
Vet Center Outstations are failing to push to lighthouse. - **Describe the defect**
When a vetcenter outstation status is edited, the status change is queued to go to lighthouse. This part is working. On queue processing, the outstation items in the queue are returning 500 errors from the API.

```
[error] 500 Server error: `POST https://api.va.gov/services/va_facilities/v0/facilities/vc_3121OS/cms-overlay` resulted in a `500 Internal Server Error` response:
{"errors":[{"title":"Internal server error","detail":"JSON parse error: Cannot deserialize value of type `gov.va.api.lig (truncated...)
[error] Post API: failure to process queued items. Total items in queue: 1.
```
Needs more investigation: Steve's initial hunch is this is a data issue. Some piece of data in the queued item is missing or misconfigured (like a difference in field names between vetcenters and outstations... or something like that)
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
When a status is changed and the queued item is processed, the push to lighthouse runs correctly without a 500 response.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information if relevant, or delete):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days).
## Labels
(You can delete this section once it's complete)
- [x] Issue type (red) (defaults to "Defect")
- [ ] CMS subsystem (green)
- [ ] CMS practice area (blue)
- [x] CMS objective (orange) (not needed for bug tickets)
- [ ] CMS-supported product (black)
|
defect
|
vet center outstations are failing to push to lighthouse describe the defect when a vetcenter outstation status is editied the status change is queued to go to lighthouse this part is working on queue processing the outstation items in the queue are returning errors from the api server error post resulted in a internal server error response errors title internal server error detail json parse error cannot deserialize value of type gov va api lig truncated post api failure to process queued items total items in queue needs more investigation steve s initial hunch is this is a data issue some piece of data in the queued item is missing or misconfigured like a difference in fields names between vetcenters and outstations or something like that to reproduce steps to reproduce the behavior expected behavior when a status is changed and the queued item is processed the push to lighthouse runs correctly without a response screenshots if applicable add screenshots to help explain your problem desktop please complete the following information if relevant or delete os browser version additional context add any other context about the problem here reach out to the product managers to determine if it should be escalated as critical prevents users from accomplishing their work with no known workaround and needs to be addressed within business days labels you can delete this section once it s complete issue type red defaults to defect cms subsystem green cms practice area blue cms objective orange not needed for bug tickets cms supported product black
| 1
|
5,446
| 2,610,187,790
|
IssuesEvent
|
2015-02-26 18:59:32
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
浅谈遗传的色斑怎么去掉
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
那一夜,坐着缓慢行驶的轿车,走过,曾经最美好的画面,��
�数曾经你给我的感动,一个挺拔的身影,一张帅气的娃娃脸�
��牵着我的手,漫步在星空下,灯光洒落,身后印下相依快乐
的你我......我已将它埋入心底,谢谢你,希望你要的幸福她��
�以给你。那一夜,我对于这满脸的色斑达到了难于明喻的痛�
��!遗传的色斑怎么去掉,
《客户案例》
我想知道一起拥有的回忆是浓了你还是醉了我,在这个��
�花飘落的季节里,我又想起了你,想起了那个雪地里雨伞下�
��长而又安静的拥抱,回想起你的眼泪曾伴随着雪花一起纷飞
,我想那是一种多么绝望而又撕心裂肺、痛彻心扉的的凄美��
�谢谢你曾用你的青春带走了我的忧伤,我在26岁那年,脸上��
�出了许多黄褐斑。看到满脸的黄褐斑,我心中甚是苦恼。于�
��,为了恢复年轻的美丽的容颜,我决定治疗黄褐斑。可是,
黄褐斑的治疗如何进行最有效呢? 怎样去除,面部黄褐斑</br>
后来为了去斑我开始依赖于网络,在网上搜索可以帮我��
�掉斑点的方法,只要是有点希望,我便会想法去试试,一个�
��然的机会让我遇见了黛芙薇尔去斑,遇见就不再错过,这句
话说的真好。于是我就深入的了解了一下黛芙薇尔,最后我��
�质询了他们的专家跟客服,他们说我的病情很适合他们的产�
��,于是我就定购了几个周期的黛芙薇尔。怎样去除,面部黄��
�斑</br>
我每天坚持使用,并保持一定量的户外运动,坚持了一��
�多月,就发现脸上的斑点真的慢慢变淡了。接下来的日子我�
��心情更加开朗乐观起来,过了半个月的时间,就发现脸上的
黄褐斑已经彻底去除了!我觉得那时候自己是多么自信的女人�
��,我的幸福终于被我牢牢抓住了!真的很感谢黛芙薇尔,是��
�让我摆脱了色斑的困扰,彻底去除了我脸上的色斑,现在着�
��觉得没斑的感觉真是太好啦。
阅读了遗传的色斑怎么去掉,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
遗传的色斑怎么去掉,同时为您分享祛斑小方法
1、买适量的黄豆、醋(我用的是米醋)
2、洗干净黄豆,把醋倒出半瓶。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 4:18
|
1.0
|
浅谈遗传的色斑怎么去掉 - ```
《摘要》
那一夜,坐着缓慢行驶的轿车,走过,曾经最美好的画面,��
�数曾经你给我的感动,一个挺拔的身影,一张帅气的娃娃脸�
��牵着我的手,漫步在星空下,灯光洒落,身后印下相依快乐
的你我......我已将它埋入心底,谢谢你,希望你要的幸福她��
�以给你。那一夜,我对于这满脸的色斑达到了难于明喻的痛�
��!遗传的色斑怎么去掉,
《客户案例》
我想知道一起拥有的回忆是浓了你还是醉了我,在这个��
�花飘落的季节里,我又想起了你,想起了那个雪地里雨伞下�
��长而又安静的拥抱,回想起你的眼泪曾伴随着雪花一起纷飞
,我想那是一种多么绝望而又撕心裂肺、痛彻心扉的的凄美��
�谢谢你曾用你的青春带走了我的忧伤,我在26岁那年,脸上��
�出了许多黄褐斑。看到满脸的黄褐斑,我心中甚是苦恼。于�
��,为了恢复年轻的美丽的容颜,我决定治疗黄褐斑。可是,
黄褐斑的治疗如何进行最有效呢? 怎样去除,面部黄褐斑</br>
后来为了去斑我开始依赖于网络,在网上搜索可以帮我��
�掉斑点的方法,只要是有点希望,我便会想法去试试,一个�
��然的机会让我遇见了黛芙薇尔去斑,遇见就不再错过,这句
话说的真好。于是我就深入的了解了一下黛芙薇尔,最后我��
�质询了他们的专家跟客服,他们说我的病情很适合他们的产�
��,于是我就定购了几个周期的黛芙薇尔。怎样去除,面部黄��
�斑</br>
我每天坚持使用,并保持一定量的户外运动,坚持了一��
�多月,就发现脸上的斑点真的慢慢变淡了。接下来的日子我�
��心情更加开朗乐观起来,过了半个月的时间,就发现脸上的
黄褐斑已经彻底去除了!我觉得那时候自己是多么自信的女人�
��,我的幸福终于被我牢牢抓住了!真的很感谢黛芙薇尔,是��
�让我摆脱了色斑的困扰,彻底去除了我脸上的色斑,现在着�
��觉得没斑的感觉真是太好啦。
阅读了遗传的色斑怎么去掉,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
遗传的色斑怎么去掉,同时为您分享祛斑小方法
1、买适量的黄豆、醋(我用的是米醋)
2、洗干净黄豆,把醋倒出半瓶。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 4:18
|
defect
|
浅谈遗传的色斑怎么去掉 《摘要》 那一夜,坐着缓慢行驶的轿车,走过,曾经最美好的画面,�� �数曾经你给我的感动,一个挺拔的身影,一张帅气的娃娃脸� ��牵着我的手,漫步在星空下,灯光洒落,身后印下相依快乐 的你我 我已将它埋入心底,谢谢你,希望你要的幸福她�� �以给你。那一夜,我对于这满脸的色斑达到了难于明喻的痛� ��!遗传的色斑怎么去掉, 《客户案例》 我想知道一起拥有的回忆是浓了你还是醉了我,在这个�� �花飘落的季节里,我又想起了你,想起了那个雪地里雨伞下� ��长而又安静的拥抱,回想起你的眼泪曾伴随着雪花一起纷飞 ,我想那是一种多么绝望而又撕心裂肺、痛彻心扉的的凄美�� �谢谢你曾用你的青春带走了我的忧伤, ,脸上�� �出了许多黄褐斑。看到满脸的黄褐斑,我心中甚是苦恼。于� ��,为了恢复年轻的美丽的容颜,我决定治疗黄褐斑。可是, 黄褐斑的治疗如何进行最有效呢 怎样去除 面部黄褐斑 后来为了去斑我开始依赖于网络,在网上搜索可以帮我�� �掉斑点的方法,只要是有点希望,我便会想法去试试,一个� ��然的机会让我遇见了黛芙薇尔去斑,遇见就不再错过,这句 话说的真好。于是我就深入的了解了一下黛芙薇尔,最后我�� �质询了他们的专家跟客服,他们说我的病情很适合他们的产� ��,于是我就定购了几个周期的黛芙薇尔。怎样去除 面部黄�� �斑 我每天坚持使用,并保持一定量的户外运动,坚持了一�� �多月,就发现脸上的斑点真的慢慢变淡了。接下来的日子我� ��心情更加开朗乐观起来,过了半个月的时间,就发现脸上的 黄褐斑已经彻底去除了 我觉得那时候自己是多么自信的女人� ��,我的幸福终于被我牢牢抓住了 真的很感谢黛芙薇尔,是�� �让我摆脱了色斑的困扰,彻底去除了我脸上的色斑,现在着� ��觉得没斑的感觉真是太好啦。 阅读了遗传的色斑怎么去掉,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� 
�斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 遗传的色斑怎么去掉,同时为您分享祛斑小方法 、买适量的黄豆、醋(我用的是米醋) 、洗干净黄豆,把醋倒出半瓶。 original issue reported on code google com by additive gmail com on jul at
| 1
|
98,351
| 11,071,739,464
|
IssuesEvent
|
2019-12-12 08:53:05
|
scarv/xcrypto
|
https://api.github.com/repos/scarv/xcrypto
|
opened
|
Repository management todo list.
|
documentation repository
|
**Context:**
When the project first started, *this* repository was a mono-repo containing the specification, reference implementation and a good deal of software. Before the `v0.15.0` release, we split the repository such that it acted as a "super-repo", with submodule pointers to the relevant components. This made sense at the time in order to allow development of the specification independently of the reference implementation while we were changing things rapidly. This was doubly true when we developed the reference implementation as a co-processor, rather than now, where it is tightly integrated into a CPU.
**Changes needed:**
Now we are at version `v1.0.0`, it makes sense to update the repository structure again:
- [ ] Merge the specification from scarv/xcrypto-spec back into this repository.
- [ ] Merge the example RTL from scarv/xcrypto-rtl into this repository.
- [ ] Archive / mothball the old scarv/xcrypto-ref implementation, since it implements the old `v0.15.0` branch specification. Delete the submodule reference from this repository.
- [ ] Add submodule links to the scarv/scarv-cpu and scarv/scarv-soc repositories, which implement the `v1.0.0` branch of XCrypto.
- [ ] Adopt a patch-based method of adding `bintutils` and `spike` changes, similar to how riscv/riscv-bitmanip works.
- This will make it easier to keep repositories consistent wrt. submodule references. It will rely on picking a known-good binutils/spike version, and then some periodic work to merge upstream changes.
- This will be annoying but not too painful, as the XCrypto changes are orthogonal to other work.
The end result of this work should see the scarv/xcrypto repository look and work much more like the riscv/riscv-bitmanip repository, and be much easier to manage as a result.
Future work on XCrypto, particularly with respect to the next iteration, will then be done on a dedicated `v2.0.0` branch and then merged back onto master.
|
1.0
|
Repository management todo list. - **Context:**
When the project first started, *this* repository was a mono-repo containing the specification, reference implementation and a good deal of software. Before the `v0.15.0` release, we split the repository such that it acted as a "super-repo", with submodule pointers to the relevant components. This made sense at the time in order to allow development of the specification independently of the reference implementation while we were changing things rapidly. This was doubly true when we developed the reference implementation as a co-processor, rather than now, where it is tightly integrated into a CPU.
**Changes needed:**
Now we are at version `v1.0.0`, it makes sense to update the repository structure again:
- [ ] Merge the specification from scarv/xcrypto-spec back into this repository.
- [ ] Merge the example RTL from scarv/xcrypto-rtl into this repository.
- [ ] Archive / mothball the old scarv/xcrypto-ref implementation, since it implements the old `v0.15.0` branch specification. Delete the submodule reference from this repository.
- [ ] Add submodule links to the scarv/scarv-cpu and scarv/scarv-soc repositories, which implement the `v1.0.0` branch of XCrypto.
- [ ] Adopt a patch-based method of adding `bintutils` and `spike` changes, similar to how riscv/riscv-bitmanip works.
- This will make it easier to keep repositories consistent wrt. submodule references. It will rely on picking a known-good binutils/spike version, and then some periodic work to merge upstream changes.
- This will be annoying but not too painful, as the XCrypto changes are orthogonal to other work.
The end result of this work should see the scarv/xcrypto repository look and work much more like the riscv/riscv-bitmanip repository, and be much easier to manage as a result.
Future work on XCrypto, particularly with respect to the next iteration, will then be done on a dedicated `v2.0.0` branch and then merged back onto master.
|
non_defect
|
repository management todo list context when the project first started this repository was a mono repo containing the specification reference implementation and a good deal of software before the release we split the repository such that it acted as a super repo with submodule pointers to the relevent components this made sense at the time in order to allow development of the specification independently of the reference implementation while we were changing things rapidly this was doubly true when we developed the reference implementation as a co processor rather than a now where it is tightly integrated into a cpu changes needed now we are at version it makes sense to update the repository structure again merge the specification from scarv xcrypto spec back into this repository merge the example rtl from scarv xcrypto rtl into this repository archive mothball the old scarv xcrypto ref implementation since it implements the old branch specification delete the submodule reference from this repository add submodule links to the scarv scarv cpu and scarv scarv soc repositories which implement the branch of xcrypto adopt a patch based method of adding bintutils and spike changes similar to how riscv riscv bitmanip works this will make it easier to keep repositories consistent wrt submodule references it will rely on picking a known good binutils spike version and then some periodic work to merge upstream changes this will be annoying but not too painful as the xcrypto changes are orthogonal to other work the end result of this work should see the scarv xcrypto repository look and work much more like the riscv riscv bitmanip repository and be much eaiser to manage as a result future work on xcrypto particularly with respect to the next iteration will then be done on a dedicated branch and then merged back onto master
| 0
|
21,247
| 3,696,341,541
|
IssuesEvent
|
2016-02-27 00:39:32
|
department-of-veterans-affairs/vets-website
|
https://api.github.com/repos/department-of-veterans-affairs/vets-website
|
opened
|
quick links - Veteran feedback
|
design
|
can we review the quick links? there are links for all of the "resources" except VEC? VEC Veteran user feedback is that users can't find employment information. can we consider making a quick link for employment? (not "vec" but something relevant to employment--like "create a civilian resume" or something?
also, can we revisit the yellow bar on "how can i find VA locations" why does that one have a yellow bar? that seems confusing--or at least unclear?
@gnakm
|
1.0
|
quick links - Veteran feedback - can we review the quick links? there are links for all of the "resources" except VEC? VEC Veteran user feedback is that users can't find employment information. can we consider making a quick link for employment? (not "vec" but something relevant to employment--like "create a civilian resume" or something?
also, can we revisit the yellow bar on "how can i find VA locations" why does that one have a yellow bar? that seems confusing--or at least unclear?
@gnakm
|
non_defect
|
quick links veteran feedback can we review the quick links there are links for all of the resources except vec vec veteran user feedback is that users can t find employment information can we consider making a quick link for employment not vec but something relevant to employment like create a civilian resume or something also can we revisit the yellow bar on how can i find va locations why does that one have a yellow bar that seems confusing or at least unclear gnakm
| 0
|
606,790
| 18,768,368,990
|
IssuesEvent
|
2021-11-06 10:57:42
|
dougdot3/Sing-to-Say-Android
|
https://api.github.com/repos/dougdot3/Sing-to-Say-Android
|
closed
|
URGENT FIX Landing Page
|
priority
|
We STILL have an issue with how the video screen constraints are set up. This is an urgent issue. They can not see any of the app at all because they can't even click the buttons.
See:
<img width="615" alt="Screen Shot 2021-11-03 at 12 51 30 PM" src="https://user-images.githubusercontent.com/7491145/140174193-0ba3d25d-1704-41f7-ad95-62ee397f50cf.png">
|
1.0
|
URGENT FIX Landing Page - We STILL have an issue with how the video screen constraints are set up. This is an urgent issue. They can not see any of the app at all because they can't even click the buttons.
See:
<img width="615" alt="Screen Shot 2021-11-03 at 12 51 30 PM" src="https://user-images.githubusercontent.com/7491145/140174193-0ba3d25d-1704-41f7-ad95-62ee397f50cf.png">
|
non_defect
|
urgent fix landing page we still have an issue with how the video screen constraints are set up this is an urgent issue they can not see any of the app at all because they can t even click the buttons see img width alt screen shot at pm src
| 0
|
70,743
| 23,299,250,391
|
IssuesEvent
|
2022-08-07 03:53:54
|
colour-science/colour
|
https://api.github.com/repos/colour-science/colour
|
closed
|
Incorrect log function in the pupil diameter function
|
Defect Minor
|
https://github.com/colour-science/colour/blob/491b6ef1c754792af37241c5c10d2863342d4a95/colour/contrast/barten1999.py#L131
The `np.log` function should be replaced by `np.log10`
See Beau's work at [JOV](https://jov.arvojournals.org/article.aspx?articleid=2279420) that traces the log function back to the work by Crawford (1936)
With `np.log10`, the Barten (flat) curve in the example should evaluate instead to:
[0.021876435681538915, 0.014184813713158736, 0.00952448986054018, 0.006680496306613172, 0.004924641286022344, 0.0038228778810919495, 0.003118856932347983, 0.002662754642667843, 0.002367458781369199, 0.0021814667512135366]
With `np.log10`, the Barten (flat) and Barten (ramp) curves, evaluated at `X0=60, L = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]` are:
Barten (flat):
[0.06299067646573944, 0.021876435681538915, 0.008678146175280605, 0.004311183591087381, 0.0027590747863252907, 0.0021814667512135366]
Barten (ramp):
[0.12598135293147888, 0.04375287136307783, 0.01735629235056121, 0.008622367182174762, 0.0055181495726505814, 0.004362933502427073]
which fit closer to Figure 33 of ITU BT.2246 and also Figure 4 of the following paper:
S. Miller, M. Nezamabadi, S. Daly, "Perceptual Signal Coding for More Efficient Usage of Bit Codes," SMPTE Meeting Presentation, 2012
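A minimal sketch of the reported fix, assuming the Crawford-style pupil diameter formula d = 5 - 3*tanh(0.4*log10(L*X_0*Y_0/40^2)); this is an illustration of the one-line change, not the library's actual code:

```python
import numpy as np

def pupil_diameter(L, X_0=60.0, Y_0=None, log_fn=np.log10):
    """Sketch of the Barten (1999) pupil diameter after Crawford (1936).

    `log_fn` lets us compare the reported bug (np.log) against the
    fix (np.log10) without changing anything else.
    """
    Y_0 = X_0 if Y_0 is None else Y_0
    return 5 - 3 * np.tanh(0.4 * log_fn(L * X_0 * Y_0 / 40 ** 2))

d_fixed = pupil_diameter(100.0)                 # base-10 log, per Crawford
d_buggy = pupil_diameter(100.0, log_fn=np.log)  # natural log, as reported
```

Since ln(x) is about 2.3 times log10(x), the natural-log variant pushes the tanh argument toward saturation faster, shrinking the computed diameter whenever the luminance term exceeds 1 and shifting the resulting sensitivity curves, consistent with the corrected values above.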
|
1.0
|
Incorrect log function in the pupil diameter function - https://github.com/colour-science/colour/blob/491b6ef1c754792af37241c5c10d2863342d4a95/colour/contrast/barten1999.py#L131
The `np.log` function should be replaced by `np.log10`
See Beau's work at [JOV](https://jov.arvojournals.org/article.aspx?articleid=2279420) that traces the log function back to the work by Crawford (1936)
With `np.log10`, the Barten (flat) curve in the example should evaluate instead to:
[0.021876435681538915, 0.014184813713158736, 0.00952448986054018, 0.006680496306613172, 0.004924641286022344, 0.0038228778810919495, 0.003118856932347983, 0.002662754642667843, 0.002367458781369199, 0.0021814667512135366]
With `np.log10`, the Barten (flat) and Barten (ramp) curves, evaluated at `X0=60, L = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]` are:
Barten (flat):
[0.06299067646573944, 0.021876435681538915, 0.008678146175280605, 0.004311183591087381, 0.0027590747863252907, 0.0021814667512135366]
Barten (ramp):
[0.12598135293147888, 0.04375287136307783, 0.01735629235056121, 0.008622367182174762, 0.0055181495726505814, 0.004362933502427073]
which fit closer to Figure 33 of ITU BT.2246 and also Figure 4 of the following paper:
S. Miller, M. Nezamabadi, S. Daly, "Perceptual Signal Coding for More Efficient Usage of Bit Codes," SMPTE Meeting Presentation, 2012
|
defect
|
incorrect log function in the pupil diameter function the np log function should be replaced by np see beau s work at that traces the log function back to the work by crawford with np the barten flat curve in the example should evaluate instead to with np the barten flat and barten ramp curves evaluated at l are barten flat barten ramp which fit closer to figure of itu bt and also figure of the following paper s miller m nezamabadi s daly perceptual signal coding for more efficient usage of bit codes smpte meeting presentation
| 1
|
70,601
| 23,260,780,559
|
IssuesEvent
|
2022-08-04 13:20:52
|
owncloud/ocis
|
https://api.github.com/repos/owncloud/ocis
|
closed
|
Provider is nil if oidc unresolvable
|
Type:Bug Category:Defect
|
https://github.com/owncloud/ocis/blob/e0c94f6c21d9dee340d5eff3ee3d95c0229c72d0/ocis-pkg/middleware/openidconnect.go#L76
needs graceful exit if provider still nil.
Reproducible by using docker and ufw firewall. Reasons not exactly clear.
Docker container is not able to resolve own domain due to firewall mess-up.
|
1.0
|
Provider is nil if oidc unresolvable - https://github.com/owncloud/ocis/blob/e0c94f6c21d9dee340d5eff3ee3d95c0229c72d0/ocis-pkg/middleware/openidconnect.go#L76
needs graceful exit if provider still nil.
Reproducible by using docker and ufw firewall. Reasons not exactly clear.
Docker container is not able to resolve own domain due to firewall mess-up.
|
defect
|
provider is nil if oidc unresolvable needs graceful exit if provider still nil reproducible by using docker and ufw firewall reasons not exactly clear docker container is not able to resolve own domain due to firewall mess up
| 1
|
799,406
| 28,306,405,198
|
IssuesEvent
|
2023-04-10 11:28:11
|
telerik/kendo-ui-core
|
https://api.github.com/repos/telerik/kendo-ui-core
|
closed
|
Grid Toolbar attributes aren't applied
|
Bug C: Grid jQuery Priority 5 Next LIB FP: Planned
|
### Bug report
Html Attributes aren't applied to the items of the Grid's Toolbar.
The behavior is a regression occurring in version 2023.1.314
### Reproduction of the problem
1. Run this [Dojo](https://dojo.telerik.com/IcepUjob/5)
2. Inspect the Toolbar button
### Expected/desired behavior
Configuring the **attributes** property of the item should apply them to the Html.
### Environment
* **Kendo UI version:** 2023.1.314
|
1.0
|
Grid Toolbar attributes aren't applied - ### Bug report
Html Attributes aren't applied to the items of the Grid's Toolbar.
The behavior is a regression occurring in version 2023.1.314
### Reproduction of the problem
1. Run this [Dojo](https://dojo.telerik.com/IcepUjob/5)
2. Inspect the Toolbar button
### Expected/desired behavior
Configuring the **attributes** property of the item should apply them to the Html.
### Environment
* **Kendo UI version:** 2023.1.314
|
non_defect
|
grid toolbar attributes aren t applied bug report html attributes aren t applied to the items of the grid s toolbar the behavior is a regression occurring in version reproduction of the problem run this inspect the toolbar button expected desired behavior configuring the attributes property of the item should apply them to the html environment kendo ui version
| 0
|
57,764
| 16,046,327,720
|
IssuesEvent
|
2021-04-22 14:01:49
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
New lightbox navigation buttons shown over room buttons
|
A-Media P1 S-Major T-Defect X-Regression X-Release-Blocker
|
### Description
New lightbox navigation buttons are shown over room buttons, they're mixed without background dimmed.

https://github.com/vector-im/element-web/issues/17014#issuecomment-824553695
### Steps to reproduce
- just click the image
### Version information
- **Platform**: Element Web 1.7.26-rc.1
- **Browser**: Firefox 89b2
- **OS**: Windows 10
- **URL**: private homeserver
|
1.0
|
New lightbox navigation buttons shown over room buttons - ### Description
New lightbox navigation buttons are shown over the room buttons; the two sets of buttons are mixed together and the background is not dimmed.

https://github.com/vector-im/element-web/issues/17014#issuecomment-824553695
### Steps to reproduce
- just click the image
### Version information
- **Platform**: Element Web 1.7.26-rc.1
- **Browser**: Firefox 89b2
- **OS**: Windows 10
- **URL**: private homeserver
|
defect
|
new lightbox navigation buttons shown over room buttons description new lightbox navigation buttons are shown over room buttons they re mixed without background dimmed steps to reproduce just click the image version information platform element web rc browser firefox os windows url private homeserver
| 1
|
9,922
| 6,518,236,942
|
IssuesEvent
|
2017-08-28 07:02:33
|
cortoproject/corto
|
https://api.github.com/repos/cortoproject/corto
|
closed
|
Rename event mask constants
|
Corto:ObjectManagement Corto:Usability
|
The `eventMask` has constants that allow applications to specify which events to observe. For example:
```c
corto_observe(CORTO_ON_DEFINE|CORTO_ON_SCOPE).callback(cb);
```
Historically the `eventMask` was only used for the `corto_observe` function, and the `ON` prefix made sense to indicate it was an event. The `eventMask` has since however been used in multiple places to also communicate which event has taken place, which results in code like:
```
if (e->event & CORTO_ON_DEFINE) {
...
}
```
which doesn't look natural.
Therefore, the constants in `eventMask` will remove the `ON_` prefix.
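For illustration, the dual use reads naturally once the prefix is gone. A Python analogue of the mask (the names below are hypothetical stand-ins for the C constants, not the actual corto API):

```python
from enum import IntFlag

class Event(IntFlag):
    # Hypothetical analogues of the renamed constants: DEFINE, not ON_DEFINE.
    DEFINE = 1 << 0
    UPDATE = 1 << 1
    DELETE = 1 << 2
    SCOPE = 1 << 3

def describe(event):
    # The same mask type now reads well both when subscribing
    # (Event.DEFINE | Event.SCOPE) and when testing a delivered event.
    if event & Event.DEFINE:
        return "object defined"
    if event & Event.DELETE:
        return "object deleted"
    return "other"

subscription = Event.DEFINE | Event.SCOPE  # what to observe
delivered = describe(Event.DEFINE)         # what happened
```

The point of the rename is exactly this symmetry: `event & Event.DEFINE` no longer carries the subscription-only connotation that `ON_` implied.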
|
True
|
Rename event mask constants - The `eventMask` has constants that allow applications to specify which events to observe. For example:
```c
corto_observe(CORTO_ON_DEFINE|CORTO_ON_SCOPE).callback(cb);
```
Historically the `eventMask` was only used for the `corto_observe` function, and the `ON` prefix made sense to indicate it was an event. The `eventMask` has since however been used in multiple places to also communicate which event has taken place, which results in code like:
```
if (e->event & CORTO_ON_DEFINE) {
...
}
```
which doesn't look natural.
Therefore, the constants in `eventMask` will remove the `ON_` prefix.
|
non_defect
|
rename event mask constants the eventmask has constants that allow applications to specify which events to observe for example c corto observe corto on define corto on scope callback cb historically the eventmask was only used for the corto observe function and the on prefix made sense to indicate it was an event the eventmask has since however been used in multiple places to also communicate which event has taken place which results in code like if e event corto on define which doesn t look natural therefore the constants in eventmask will remove the on prefix
| 0
|
17,832
| 3,013,269,287
|
IssuesEvent
|
2015-07-29 07:47:51
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
3.x TranslateBehavior::groupTranslations fails for empty hasOne properties.
|
Defect i18n ORM
|
Following example:
```php
public $fixtures = [
'core.translates',
'core.attachments',
'core.comments',
];
public function testTranslations()
{
$locator = new TableLocator;
$attachments = $locator->get('Attachments');
$comments = $locator->get('Comments');
$attachments->addBehavior('Translate');
$comments->addBehavior('Translate');
$comments->hasOne('Attachments', [
'targetTable' => $attachments
]);
$query = $comments->find('translations')->contain(['Attachments' => function($q){
return $q->find('translations');
}]);
$query->toArray();
}
```
fails with:
> Fatal error: Call to a member function get() on a non-object in ROOT/vendor/cakephp/cakephp/src/ORM/Behavior/TranslateBehavior.php on line 451
`$row` in `TranslateBehavior::groupTranslations()` appears to be `null`. I think it fails when a `$comment` has no `$attachment` associated with it.
Using 3.1 dev branch.
|
1.0
|
3.x TranslateBehavior::groupTranslations fails for empty hasOne properties. - Following example:
```php
public $fixtures = [
'core.translates',
'core.attachments',
'core.comments',
];
public function testTranslations()
{
$locator = new TableLocator;
$attachments = $locator->get('Attachments');
$comments = $locator->get('Comments');
$attachments->addBehavior('Translate');
$comments->addBehavior('Translate');
$comments->hasOne('Attachments', [
'targetTable' => $attachments
]);
$query = $comments->find('translations')->contain(['Attachments' => function($q){
return $q->find('translations');
}]);
$query->toArray();
}
```
fails with:
> Fatal error: Call to a member function get() on a non-object in ROOT/vendor/cakephp/cakephp/src/ORM/Behavior/TranslateBehavior.php on line 451
`$row` in `TranslateBehavior::groupTranslations()` appears to be `null`. I think it fails when a `$comment` has no `$attachment` associated with it.
Using 3.1 dev branch.
|
defect
|
x translatebehavior grouptranslations fails for empty hasone properties following example php public fixtures core translates core attachments core comments public function testtranslations locator new tablelocator attachments locator get attachments comments locator get comments attachments addbehavior translate comments addbehavior translate comments hasone attachments targettable attachments query comments find translations contain attachments function q return q find translations query toarray fails with fatal error call to a member function get on a non object in root vendor cakephp cakephp src orm behavior translatebehavior php on line row in translatebehavior grouptranslations appears to be null i think it fails when a comment has no attachment associated with it using dev branch
| 1
|
48,946
| 13,185,167,781
|
IssuesEvent
|
2020-08-12 20:51:24
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
dataio shouldn't depend on dataclasses (Trac #508)
|
Incomplete Migration Migrated from Trac dataio defect
|
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/508
, reported by troy and owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T20:15:37",
"description": "see I3FileImpl, mentions I3Geometry etc, can use new muxer logic\nthat doesn't know about these frames\n\nblaufuss wisely suggests creating a minidataclasses that shows what the dependencies are\n\n\n\n\n",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1351714537000000",
"component": "dataio",
"summary": "dataio shouldn't depend on dataclasses",
"priority": "normal",
"keywords": "",
"time": "2009-01-08T15:05:18",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
dataio shouldn't depend on dataclasses (Trac #508) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/508
, reported by troy and owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T20:15:37",
"description": "see I3FileImpl, mentions I3Geometry etc, can use new muxer logic\nthat doesn't know about these frames\n\nblaufuss wisely suggests creating a minidataclasses that shows what the dependencies are\n\n\n\n\n",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1351714537000000",
"component": "dataio",
"summary": "dataio shouldn't depend on dataclasses",
"priority": "normal",
"keywords": "",
"time": "2009-01-08T15:05:18",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
defect
|
dataio shouldn t depend on dataclasses trac migrated from reported by troy and owned by troy json status closed changetime description see mentions etc can use new muxer logic nthat doesn t know about these frames n nblaufuss wisely suggests creating a minidataclasses that shows what the dependencies are n n n n n reporter troy cc resolution wont or cant fix ts component dataio summary dataio shouldn t depend on dataclasses priority normal keywords time milestone owner troy type defect
| 1
|
70,368
| 9,412,961,179
|
IssuesEvent
|
2019-04-10 06:24:26
|
benwiley4000/cassette
|
https://api.github.com/repos/benwiley4000/cassette
|
closed
|
Have some level of API docs completeness
|
documentation
|
Maybe not all the examples will be perfect yet but we need to document all the exposed API including prop descriptions, for the beta release.
|
1.0
|
Have some level of API docs completeness - Maybe not all the examples will be perfect yet but we need to document all the exposed API including prop descriptions, for the beta release.
|
non_defect
|
have some level of api docs completeness maybe not all the examples will be perfect yet but we need to document all the exposed api including prop descriptions for the beta release
| 0
|
70,538
| 23,224,973,422
|
IssuesEvent
|
2022-08-02 22:30:37
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Mandating colouration and typography defined by the system to improve accessibility causes disablement of all iconography.
|
T-Defect Z-Platform-Specific S-Major A11y O-Uncommon
|
### Steps to reproduce:
Configure "``" to be "``", and "``" to be "``" via "`firefox --new-tab about:config`".
### Outcome
No iconography is presented.
#### What did you expect?
Iconography should be presented.
#### What happened instead?

### Operating system
"GNU Software" and the kernel of "Linux".
### Browser information
"Mozilla Firefox Flatpak 95.0b9 (64-bit)".
### URL for webapp
"http://develop.element.io".
### Application version
"Element version: b2e8f212e410-react-7f6f98443865-js-b2d83c1f80b5 Olm version: 3.2.3".
### Homeserver
"matrix.org".
### Will you send logs?
Yes.
|
1.0
|
Mandating colouration and typography defined by the system to improve accessibility causes disablement of all iconography. - ### Steps to reproduce:
Configure "``" to be "``", and "``" to be "``" via "`firefox --new-tab about:config`".
### Outcome
No iconography is presented.
#### What did you expect?
Iconography should be presented.
#### What happened instead?

### Operating system
"GNU Software" and the kernel of "Linux".
### Browser information
"Mozilla Firefox Flatpak 95.0b9 (64-bit)".
### URL for webapp
"http://develop.element.io".
### Application version
"Element version: b2e8f212e410-react-7f6f98443865-js-b2d83c1f80b5 Olm version: 3.2.3".
### Homeserver
"matrix.org".
### Will you send logs?
Yes.
|
defect
|
mandating colouration and typography defined by the system to improve accessibility causes disablement of all iconography steps to reproduce configure to be and to be via firefox new tab about config outcome no iconography is presented what did you expect iconography should be presented what happened instead operating system gnu software and the kernel of linux browser information mozilla firefox flatpak bit url for webapp application version element version react js olm version homeserver matrix org will you send logs yes
| 1
|
5,933
| 2,610,218,197
|
IssuesEvent
|
2015-02-26 19:09:24
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
ecp.dll
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Алан Яковлев'''
Привет всем не подскажите где можно найти
.ecp.dll. где то видел уже
'''Аврор Антонов'''
Вот держи линк http://bit.ly/16QxjXm
'''Вениамин Казаков'''
Спасибо вроде то но просит телефон вводить
'''Амос Павлов'''
Неа все ок у меня ничего не списало
'''Аполлинарий Русаков'''
Неа все ок у меня ничего не списало
Информация о файле: ecp.dll
Загружен: В этом месяце
Скачан раз: 921
Рейтинг: 492
Средняя скорость скачивания: 552
Похожих файлов: 25
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 1:36
|
1.0
|
ecp.dll - ```
'''Алан Яковлев'''
Привет всем не подскажите где можно найти
.ecp.dll. где то видел уже
'''Аврор Антонов'''
Вот держи линк http://bit.ly/16QxjXm
'''Вениамин Казаков'''
Спасибо вроде то но просит телефон вводить
'''Амос Павлов'''
Неа все ок у меня ничего не списало
'''Аполлинарий Русаков'''
Неа все ок у меня ничего не списало
Информация о файле: ecp.dll
Загружен: В этом месяце
Скачан раз: 921
Рейтинг: 492
Средняя скорость скачивания: 552
Похожих файлов: 25
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 1:36
|
defect
|
ecp dll алан яковлев привет всем не подскажите где можно найти ecp dll где то видел уже аврор антонов вот держи линк вениамин казаков спасибо вроде то но просит телефон вводить амос павлов неа все ок у меня ничего не списало аполлинарий русаков неа все ок у меня ничего не списало информация о файле ecp dll загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at
| 1
|
171,299
| 13,228,150,482
|
IssuesEvent
|
2020-08-18 05:26:08
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: follower-reads/nodes=3 failed
|
C-test-failure O-roachtest O-robot branch-release-19.1 release-blocker
|
[(roachtest).follower-reads/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2168273&tab=buildLog) on [release-19.1@ad8e561d46993a1f4faab1728986ed7f1adff368](https://github.com/cockroachdb/cockroach/commits/ad8e561d46993a1f4faab1728986ed7f1adff368):
```
The test failed on branch=release-19.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/follower-reads/nodes=3/run_1
follower_reads.go:196,test_runner.go:754: error verifying node values: pq: AS OF SYSTEM TIME: only constant expressions or experimental_follower_read_timestamp are allowed
```
<details><summary>More</summary><p>
Artifacts: [/follower-reads/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2168273&tab=artifacts#/follower-reads/nodes=3)
Related:
- #51709 roachtest: follower-reads/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202007220233_v20.2.0-alpha.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202007220233_v20.2.0-alpha.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50262 roachtest: follower-reads/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50135 roachtest: follower-reads/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006032224_v20.2.0-alpha.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006032224_v20.2.0-alpha.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Afollower-reads%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: follower-reads/nodes=3 failed - [(roachtest).follower-reads/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2168273&tab=buildLog) on [release-19.1@ad8e561d46993a1f4faab1728986ed7f1adff368](https://github.com/cockroachdb/cockroach/commits/ad8e561d46993a1f4faab1728986ed7f1adff368):
```
The test failed on branch=release-19.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/follower-reads/nodes=3/run_1
follower_reads.go:196,test_runner.go:754: error verifying node values: pq: AS OF SYSTEM TIME: only constant expressions or experimental_follower_read_timestamp are allowed
```
<details><summary>More</summary><p>
Artifacts: [/follower-reads/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2168273&tab=artifacts#/follower-reads/nodes=3)
Related:
- #51709 roachtest: follower-reads/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202007220233_v20.2.0-alpha.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202007220233_v20.2.0-alpha.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50262 roachtest: follower-reads/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50135 roachtest: follower-reads/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006032224_v20.2.0-alpha.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006032224_v20.2.0-alpha.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Afollower-reads%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_defect
|
roachtest follower reads nodes failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts follower reads nodes run follower reads go test runner go error verifying node values pq as of system time only constant expressions or experimental follower read timestamp are allowed more artifacts related roachtest follower reads nodes failed roachtest follower reads nodes failed roachtest follower reads nodes failed powered by
| 0
|
16,519
| 2,910,078,484
|
IssuesEvent
|
2015-06-21 11:43:43
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
Enter key submits the form on picklist filter event
|
5.1.20 5.2.7 defect
|
The enter key submits the form when picklist source filter or target filter event is triggered. This can be replicated on showcase example as well. The picklist example submits the form and the dialog will prompt. It has to be handled similar to datatable filter etc.
|
1.0
|
Enter key submits the form on picklist filter event - The enter key submits the form when picklist source filter or target filter event is triggered. This can be replicated on showcase example as well. The picklist example submits the form and the dialog will prompt. It has to be handled similar to datatable filter etc.
|
defect
|
enter key submits the form on picklist filter event the enter key submits the form when picklist source filter or target filter event is triggered this can be replicated on showcase example as well the picklist example submits the form and the dialog will prompt it has to be handled similar to datatable filter etc
| 1
|
820,808
| 30,790,025,939
|
IssuesEvent
|
2023-07-31 15:33:01
|
sibvisions/flutter_jvx
|
https://api.github.com/repos/sibvisions/flutter_jvx
|
closed
|
Chat Interface (PoC)
|
enhancement Priority
|
> Regarding chat UI test using [flutter_chat_ui](https://pub.dev/packages/flutter_chat_ui):
Currently not possible as there is a version conflict with the recently added html_editor_enhanced.
Making matter worse, the editor seems to be lacking a proper maintainer, so there is no ETA of when or if a fix will be provided.
_Originally posted by @Bungeefan in https://github.com/sibvisions/flutter_jvx/issues/151#issuecomment-1621779761_
Continuation for Chat PoC.
|
1.0
|
Chat Interface (PoC) - > Regarding chat UI test using [flutter_chat_ui](https://pub.dev/packages/flutter_chat_ui):
Currently not possible as there is a version conflict with the recently added html_editor_enhanced.
Making matter worse, the editor seems to be lacking a proper maintainer, so there is no ETA of when or if a fix will be provided.
_Originally posted by @Bungeefan in https://github.com/sibvisions/flutter_jvx/issues/151#issuecomment-1621779761_
Continuation for Chat PoC.
|
non_defect
|
chat interface poc regarding chat ui test using currently not possible as there is a version conflict with the recently added html editor enhanced making matter worse the editor seems to be lacking a proper maintainer so there is no eta of when or if a fix will be provided originally posted by bungeefan in continuation for chat poc
| 0
|
79,344
| 28,116,531,558
|
IssuesEvent
|
2023-03-31 11:11:25
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Unread room status not going away
|
T-Defect
|
### Steps to reproduce
1. Room has unread notification on desktop device
2. Opening room, restarting app does not make the notification go away
3. Sending a message to the room clears the notification due to https://github.com/matrix-org/matrix-js-sdk/pull/3139
4. The unread notification does not appear in other devices, just one.

### Outcome
#### What did you expect?
The unread room notification to go away when the room is opened.
#### What happened instead?
The unread room notification persists, until I write a new message.
### Operating system
Windows
### Application version
1.11.25
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Unread room status not going away - ### Steps to reproduce
1. Room has unread notification on desktop device
2. Opening room, restarting app does not make the notification go away
3. Sending a message to the room clears the notification due to https://github.com/matrix-org/matrix-js-sdk/pull/3139
4. The unread notification does not appear in other devices, just one.

### Outcome
#### What did you expect?
The unread room notification to go away when the room is opened.
#### What happened instead?
The unread room notification persists, until I write a new message.
### Operating system
Windows
### Application version
1.11.25
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
unread room status not going away steps to reproduce room has unread notification on desktop device opening room restarting app does not make the notification go away sending a message to the room clears the notification due to the unread notification does not appear in other devices just one outcome what did you expect the unread room notification to go away when the room is opened what happened instead the unread room notification persists until i write a new message operating system windows application version how did you install the app no response homeserver no response will you send logs no
| 1
|
57,209
| 15,726,563,254
|
IssuesEvent
|
2021-03-29 11:29:28
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
False positive, (style) Redundant code - begins with numeric constant (Trac #88)
|
False positive Incomplete Migration Migrated from Trac aggro80 defect
|
Migrated from https://trac.cppcheck.net/ticket/88
```json
{
"status": "closed",
"changetime": "2009-02-08T09:52:14",
"description": "{{{\n(style) Redundant code: Found a statement that begins with numeric constant\n}}}\n\n\n{{{\nstruct P\n{\n double a;\n double b;\n};\n\nvoid f()\n{\n const P values[2] =\n {\n { 346.1,114.1 }, { 347.1,111.1 }\n };\n}\n\n}}}\n\n",
"reporter": "aggro80",
"cc": "",
"resolution": "fixed",
"_ts": "1234086734000000",
"component": "False positive",
"summary": "False positive, (style) Redundant code - begins with numeric constant",
"priority": "",
"keywords": "",
"time": "2009-02-08T09:20:36",
"milestone": "1.29",
"owner": "aggro80",
"type": "defect"
}
```
|
1.0
|
False positive, (style) Redundant code - begins with numeric constant (Trac #88) - Migrated from https://trac.cppcheck.net/ticket/88
```json
{
"status": "closed",
"changetime": "2009-02-08T09:52:14",
"description": "{{{\n(style) Redundant code: Found a statement that begins with numeric constant\n}}}\n\n\n{{{\nstruct P\n{\n double a;\n double b;\n};\n\nvoid f()\n{\n const P values[2] =\n {\n { 346.1,114.1 }, { 347.1,111.1 }\n };\n}\n\n}}}\n\n",
"reporter": "aggro80",
"cc": "",
"resolution": "fixed",
"_ts": "1234086734000000",
"component": "False positive",
"summary": "False positive, (style) Redundant code - begins with numeric constant",
"priority": "",
"keywords": "",
"time": "2009-02-08T09:20:36",
"milestone": "1.29",
"owner": "aggro80",
"type": "defect"
}
```
|
defect
|
false positive style redundant code begins with numeric constant trac migrated from json status closed changetime description n style redundant code found a statement that begins with numeric constant n n n n nstruct p n n double a n double b n n nvoid f n n const p values n n n n n n n n reporter cc resolution fixed ts component false positive summary false positive style redundant code begins with numeric constant priority keywords time milestone owner type defect
| 1
|
216,733
| 24,294,570,967
|
IssuesEvent
|
2022-09-29 09:02:33
|
billmcchesney1/hadoop
|
https://api.github.com/repos/billmcchesney1/hadoop
|
closed
|
CVE-2022-40154 (High) detected in woodstox-core-5.0.3.jar - autoclosed
|
security vulnerability
|
## CVE-2022-40154 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>woodstox-core-5.0.3.jar</b></p></summary>
<p>Woodstox is a high-performance XML processor that
implements Stax (JSR-173), SAX2 and Stax2 APIs</p>
<p>Library home page: <a href="https://github.com/FasterXML/woodstox">https://github.com/FasterXML/woodstox</a></p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/target/lib/woodstox-core-5.0.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **woodstox-core-5.0.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using Xstream to serialise XML data may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack overflow. This effect may support a denial of service attack.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40154>CVE-2022-40154</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
True
|
CVE-2022-40154 (High) detected in woodstox-core-5.0.3.jar - autoclosed - ## CVE-2022-40154 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>woodstox-core-5.0.3.jar</b></p></summary>
<p>Woodstox is a high-performance XML processor that
implements Stax (JSR-173), SAX2 and Stax2 APIs</p>
<p>Library home page: <a href="https://github.com/FasterXML/woodstox">https://github.com/FasterXML/woodstox</a></p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/target/lib/woodstox-core-5.0.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **woodstox-core-5.0.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using Xstream to serialise XML data may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack overflow. This effect may support a denial of service attack.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40154>CVE-2022-40154</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
non_defect
|
cve high detected in woodstox core jar autoclosed cve high severity vulnerability vulnerable library woodstox core jar woodstox is a high performance xml processor that implements stax jsr and apis library home page a href path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn server hadoop yarn server timelineservice hbase hadoop yarn server timelineservice hbase common target lib woodstox core jar dependency hierarchy x woodstox core jar vulnerable library found in head commit a href found in base branch trunk vulnerability details those using xstream to serialise xml data may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stack overflow this effect may support a denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
| 0
|
9,060
| 3,834,080,298
|
IssuesEvent
|
2016-04-01 08:11:28
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
Joomla 3.5 Admin Problem
|
No Code Attached Yet
|
Just upgraded to 3.5, now I can only see active position items under the "Select Position" drop down menu, all other positions are gone. See attached screen shot. HELP!
|
1.0
|
Joomla 3.5 Admin Problem - Just upgraded to 3.5, now I can only see active position items under the "Select Position" drop down menu, all other positions are gone. See attached screen shot. HELP!
|
non_defect
|
joomla admin problem just upgraded to now i can only see active position items under the select position drop down menu all other positions are gone see attached screen shot help
| 0
|
115,599
| 14,850,842,012
|
IssuesEvent
|
2021-01-18 05:36:46
|
glific/glific-frontend
|
https://api.github.com/repos/glific/glific-frontend
|
closed
|
Display webhook logs
|
design
|
**Describe the task**
For organizations, we are keeping a log of all the webhook calls. We need to show the historical data for easy debugging and tracing for the org staff.
**Expected behavior**
1. Create a page that shows the list of paginated webhook logs. A table with columns:
a. Timestamp
b. Webhook URL (request)
c. Webhook response
d. Status (success/error)
e. other things we're capturing at backend
2. Filters to easily search within the logs:
a. By timestamp
b. By status (success/error)
|
1.0
|
Display webhook logs - **Describe the task**
For organizations, we are keeping a log of all the webhook calls. We need to show the historical data for easy debugging and tracing for the org staff.
**Expected behavior**
1. Create a page that shows the list of paginated webhook logs. A table with columns:
a. Timestamp
b. Webhook URL (request)
c. Webhook response
d. Status (success/error)
e. other things we're capturing at backend
2. Filters to easily search within the logs:
a. By timestamp
b. By status (success/error)
|
non_defect
|
display webhook logs describe the task for organizations we are keeping a log of all the webhook calls we need to show the historical data for easy debugging and tracing for the org staff expected behavior create a page that shows the list of paginated webhook logs a table with columns a timestamp b webhook url request c webhook response d status success error e other things we re capturing at backend filters to easily search within the logs a by timestamp b by status success error
| 0
|
85,146
| 3,687,111,941
|
IssuesEvent
|
2016-02-25 06:11:22
|
movabletype/smartphone-app
|
https://api.github.com/repos/movabletype/smartphone-app
|
closed
|
Auto-renamed asset file name run over the screen
|
Priority: LOW question
|
Steps:
1. Upload an image to the app.
2. Upload the same image to the same path.
3. The app renames the file automatically and... look at the attached image.
The renamed file name is always "40 digits + .jpg", and the screen cannot display the last 3 letters.
@oguraayumi What do you think? Can you display all 44 digits on the screen somehow?

|
1.0
|
Auto-renamed asset file name run over the screen - Steps:
1. Upload an image to the app.
2. Upload the same image to the same path.
3. The app renames the file automatically and... look at the attached image.
The renamed file name is always "40 digits + .jpg", and the screen cannot display the last 3 letters.
@oguraayumi What do you think? Can you display all 44 digits on the screen somehow?

|
non_defect
|
auto renamed asset file name run over the screen steps upload a image to the app upload same image to same path the app rename file name automatically and look at attached image renamed file name is always digit jpg and the screen size cannot display last letters oguraayumi what do you think can you display all digit in the screen somehow
| 0
|
31,000
| 6,393,924,545
|
IssuesEvent
|
2017-08-04 08:55:05
|
lagom/lagom
|
https://api.github.com/repos/lagom/lagom
|
closed
|
Deprecate CassandraConfig and associated code
|
topic:development-environment type:defect
|
This appears to be code that was once used by the development environment, but isn't anymore.
Unfortunately, it was exposed as a public API (probably accidentally).
* https://www.lagomframework.com/documentation/1.3.x/java/api/com/lightbend/lagom/javadsl/persistence/cassandra/CassandraConfig.html
* https://www.lagomframework.com/documentation/1.3.x/scala/api/com/lightbend/lagom/scaladsl/persistence/cassandra/CassandraConfig.html
However it seems unlikely that anyone is actually using it.
|
1.0
|
Deprecate CassandraConfig and associated code - This appears to be code that was once used by the development environment, but isn't anymore.
Unfortunately, it was exposed as a public API (probably accidentally).
* https://www.lagomframework.com/documentation/1.3.x/java/api/com/lightbend/lagom/javadsl/persistence/cassandra/CassandraConfig.html
* https://www.lagomframework.com/documentation/1.3.x/scala/api/com/lightbend/lagom/scaladsl/persistence/cassandra/CassandraConfig.html
However it seems unlikely that anyone is actually using it.
|
defect
|
deprecate cassandraconfig and associated code this appears to be code that was once used by the development environment but isn t anymore unfortunately it was exposed as a public api probably accidentally however it seems unlikely that anyone is actually using it
| 1
|
15,611
| 2,862,510,504
|
IssuesEvent
|
2015-06-04 05:27:23
|
pmaupin/pdfrw
|
https://api.github.com/repos/pmaupin/pdfrw
|
closed
|
Spurious brackets in URIs.
|
auto-migrated Priority-Medium question Type-Defect
|
```
1. Get a PDF with a URI in an annotation.
2. Run this code on it:
#!/usr/bin/env python
import sys
import os
from pdfrw import PdfReader, PdfWriter
def convert(inpfn, outfn):
pdf = PdfReader(inpfn)
for K in pdf.Root.Pages.Kids:
if K.Annots is not None:
for An in K.Annots:
if An.A is not None:
if An.A.URI is not None:
An.A.URI = An.A.URI
outdata = PdfWriter()
outdata.trailer = pdf
outdata.write(outfn)
for inpfn in sys.argv[1:]:
print inpfn, ':'
outfn = 'out/' + inpfn
convert(inpfn, outfn)
Expected output: the output PDF should be identical to the input.
Actual result: In the output PDF the URI will have extra brackets added around
it, ie instead of
http://www.example.com
the URI now points to:
(http://www.example.com)
which fails to open correctly in any PDF reader.
Using version 0.1-1 on Ubuntu 14.04.
```
Original issue reported on code.google.com by `a.j.bux...@gmail.com` on 21 Oct 2014 at 4:16
|
1.0
|
Spurious brackets in URIs. - ```
1. Get a PDF with a URI in an annotation.
2. Run this code on it:
#!/usr/bin/env python
import sys
import os
from pdfrw import PdfReader, PdfWriter
def convert(inpfn, outfn):
pdf = PdfReader(inpfn)
for K in pdf.Root.Pages.Kids:
if K.Annots is not None:
for An in K.Annots:
if An.A is not None:
if An.A.URI is not None:
An.A.URI = An.A.URI
outdata = PdfWriter()
outdata.trailer = pdf
outdata.write(outfn)
for inpfn in sys.argv[1:]:
print inpfn, ':'
outfn = 'out/' + inpfn
convert(inpfn, outfn)
Expected output: the output PDF should be identical to the input.
Actual result: In the output PDF the URI will have extra brackets added around
it, ie instead of
http://www.example.com
the URI now points to:
(http://www.example.com)
which fails to open correctly in any PDF reader.
Using version 0.1-1 on Ubuntu 14.04.
```
Original issue reported on code.google.com by `a.j.bux...@gmail.com` on 21 Oct 2014 at 4:16
|
defect
|
spurious brackets in uris get a pdf with a uri in an annotation run this code on it usr bin env python import sys import os from pdfrw import pdfreader pdfwriter def convert inpfn outfn pdf pdfreader inpfn for k in pdf root pages kids if k annots is not none for an in k annots if an a is not none if an a uri is not none an a uri an a uri outdata pdfwriter outdata trailer pdf outdata write outfn for inpfn in sys argv print inpfn outfn out inpfn convert inpfn outfn expected output the output pdf should be identical to the input actual result in the output pdf the uri will have extra brackets added around it ie instead of the uri now points to which fails to open correctly in any pdf reader using version on ubuntu original issue reported on code google com by a j bux gmail com on oct at
| 1
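The symptom in the pdfrw record above — a URI round-tripped as `(http://www.example.com)` — matches a raw Python string being re-serialized as a PDF literal string, which is parenthesis-delimited. A minimal pure-Python sketch of detecting and undoing that wrapping (a post-processing workaround for illustration, not pdfrw's API):

```python
def unwrap_literal_string(uri):
    """Strip one pair of PDF literal-string parentheses, e.g.
    '(http://www.example.com)' -> 'http://www.example.com'."""
    if uri.startswith("(") and uri.endswith(")"):
        return uri[1:-1]
    return uri
```

Already-clean URIs pass through unchanged, so the function is safe to apply to every annotation.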
|
487,043
| 14,018,365,166
|
IssuesEvent
|
2020-10-29 16:44:39
|
AY2021S1-CS2103-F09-2/tp
|
https://api.github.com/repos/AY2021S1-CS2103-F09-2/tp
|
closed
|
View Available Functions
|
priority.Medium type.Story
|
As a potential user exploring the application, I can view the functions available for use so that I can get on board to the application faster.
|
1.0
|
View Available Functions - As a potential user exploring the application, I can view the functions available for use so that I can get on board to the application faster.
|
non_defect
|
view available functions as a potential user exploring the application i can view the functions available for use so that i can get on board to the application faster
| 0
|
155,606
| 5,957,609,013
|
IssuesEvent
|
2017-05-29 03:23:01
|
FDPA/fdpa
|
https://api.github.com/repos/FDPA/fdpa
|
closed
|
Craft: Build Volunteer Form
|
Priority ready for work
|
Form fields [are here](https://docs.google.com/document/d/1DIpYrAl_P-te3iS-lqUIalKxFQI4G2aDkRP35DCGr0U/edit#heading=h.xzytf345cr20), we may be able to duplicate the Join Form page in some way with this.
|
1.0
|
Craft: Build Volunteer Form - Form fields [are here](https://docs.google.com/document/d/1DIpYrAl_P-te3iS-lqUIalKxFQI4G2aDkRP35DCGr0U/edit#heading=h.xzytf345cr20), we may be able to duplicate the Join Form page in some way with this.
|
non_defect
|
craft build volunteer form form fields we may be able to duplicate the join form page in some way with this
| 0
|
143,039
| 19,142,615,703
|
IssuesEvent
|
2021-12-02 01:45:33
|
heltondoria/DemoApplication
|
https://api.github.com/repos/heltondoria/DemoApplication
|
opened
|
CVE-2021-22096 (Medium) detected in spring-web-4.3.13.RELEASE.jar
|
security vulnerability
|
## CVE-2021-22096 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-4.3.13.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /DemoApplication/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/4.3.13.RELEASE/7cd084992d546165ede3e99bc31ee49c937f0ce7/spring-web-4.3.13.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- jhipster-1.3.0.jar (Root Library)
- spring-boot-starter-web-1.5.9.RELEASE.jar
- :x: **spring-web-4.3.13.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
<p>Publish Date: 2021-10-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096>CVE-2021-22096</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22096">https://tanzu.vmware.com/security/cve-2021-22096</a></p>
<p>Release Date: 2021-10-28</p>
<p>Fix Resolution: org.springframework:spring:5.2.18.RELEASE,5.3.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-22096 (Medium) detected in spring-web-4.3.13.RELEASE.jar - ## CVE-2021-22096 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-4.3.13.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /DemoApplication/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/4.3.13.RELEASE/7cd084992d546165ede3e99bc31ee49c937f0ce7/spring-web-4.3.13.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- jhipster-1.3.0.jar (Root Library)
- spring-boot-starter-web-1.5.9.RELEASE.jar
- :x: **spring-web-4.3.13.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
<p>Publish Date: 2021-10-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096>CVE-2021-22096</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22096">https://tanzu.vmware.com/security/cve-2021-22096</a></p>
<p>Release Date: 2021-10-28</p>
<p>Fix Resolution: org.springframework:spring:5.2.18.RELEASE,5.3.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in spring web release jar cve medium severity vulnerability vulnerable library spring web release jar spring web library home page a href path to dependency file demoapplication build gradle path to vulnerable library root gradle caches modules files org springframework spring web release spring web release jar dependency hierarchy jhipster jar root library spring boot starter web release jar x spring web release jar vulnerable library vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide malicious input to cause the insertion of additional log entries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring release step up your open source security game with whitesource
| 0
|
23,668
| 4,964,382,549
|
IssuesEvent
|
2016-12-03 18:57:58
|
facebook/osquery
|
https://api.github.com/repos/facebook/osquery
|
closed
|
Events are disabled
|
documentation FIM question
|
I am running osquery on Ubuntu 16.04 LTS. I tried running commands like-
select * from file_events;
but got the below error:
W1202 19:24:59.876035 4565 virtual_table.cpp:492] Table file_events is event-based but events are disabled
W1202 19:24:59.876058 4565 virtual_table.cpp:499] Please see the table documentation: https://osquery.io/docs/#file_events
what might I be doing wrong? Do I need to enable any configurations?
|
1.0
|
Events are disabled - I am running osquery on Ubuntu 16.04 LTS. I tried running commands like-
select * from file_events;
but got the below error:
W1202 19:24:59.876035 4565 virtual_table.cpp:492] Table file_events is event-based but events are disabled
W1202 19:24:59.876058 4565 virtual_table.cpp:499] Please see the table documentation: https://osquery.io/docs/#file_events
what might I be doing wrong? Do I need to enable any configurations?
|
non_defect
|
events are disabled i am running osquery on ubuntu lts i tried running commands like select from file events but got the below error virtual table cpp table file events is event based but events are disabled virtual table cpp please see the table documentation what might i be doing wrong do i need to enable any configurations
| 0
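The warning in the osquery record above means the events subsystem is disabled; enabling it is a flag/config change rather than a code change. A flagfile sketch might look like the following (flag names vary by osquery version — check `osqueryd --help` before relying on them):

```
--disable_events=false
--enable_file_events=true
```

Note that the `file_events` table additionally needs `file_paths` entries in the osquery config before it returns any rows.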
|
75,719
| 26,012,111,679
|
IssuesEvent
|
2022-12-21 03:28:34
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
zpool hang when open autotrim and continue working about 3 days under high load
|
Type: Defect Status: Stale Status: Triage Needed
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | 14.04.1
Linux Kernel | 4.4.0-24-generic
Architecture | x86_64
ZFS Version | 0.8.6-1
SPL Version | 0.8.6-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
Create zpool with autotrim = on, then create 4 zvol to use. After about 3 days, zpool hang for ever.
### Describe how to reproduce the problem
1. create zpool with autotrim = on
2. create 4 zvols with fc target to export to other pc.
3. perform stress test
4. about 3 days, zpool hung
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->

|
1.0
|
zpool hang when open autotrim and continue working about 3 days under high load - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | 14.04.1
Linux Kernel | 4.4.0-24-generic
Architecture | x86_64
ZFS Version | 0.8.6-1
SPL Version | 0.8.6-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
Create zpool with autotrim = on, then create 4 zvol to use. After about 3 days, zpool hang for ever.
### Describe how to reproduce the problem
1. create zpool with autotrim = on
2. create 4 zvols with fc target to export to other pc.
3. perform stress test
4. about 3 days, zpool hung
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->

|
defect
|
zpool hang when open autotrim and continue working about days under high load thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name ubuntu distribution version linux kernel generic architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing create zpool with autotrim on then create zvol to use after about days zpool hang for ever describe how to reproduce the problem create zpool with autotrim on create zvols with fc target to export to other pc perform stress test about days zpool hung include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
| 1
|
68,347
| 9,168,011,522
|
IssuesEvent
|
2019-03-02 18:34:17
|
BHoM/XML_Toolkit
|
https://api.github.com/repos/BHoM/XML_Toolkit
|
opened
|
XML_Toolkit: ExportType Evolvement
|
discussion documentation
|
@FraserGreenroyd
We need to adjust our `ExportType ` Currently we have Undefined, gbXMLTAS, gbXMLIES.
This could be a bit misleading.
Let's keep this as live thread and we bring all steps and fixes that are implemented and at the end we will move this into Documentation before we close this.
Please add anything we do in each type below
## gbXMLTAS
is in fact native gbXML without construction as per Revit export so our standard
## gbXMLIES
`gbXMLIES` is gbXMLTAS + all ies specific fixes as follow:
- replacing glazed building elements wall, floor, roof... `GLZ` with Curtain Wall approach so inserting (*wall, floor is better than curtain wall as snapping is more accurate)
<img src="https://user-images.githubusercontent.com/26113670/53685894-98963100-3d18-11e9-92bf-85245725a01f.png" width=260>
- opening into BuildingElement replace SLD items like Doors, Windows with separate new Building Element to capture different construction
- replace `Air` walls, floors and make then surfaceTypeEnum: Air (*we prefer to draw with Air Walls,Floor for better snapping see here as gbXML export centre line)
<img src="https://user-images.githubusercontent.com/26113670/53685876-608eee00-3d18-11e9-8e4a-bb482f23f813.png" width=260>
## Conclusion
We need to implement new ExportType: `Custom` that will allow to input all fixes we want to have from above. As I might want in TAS to have replace GLZ element as Curtain Wall etc...
Obiousley we will add predefine versions for e+/OpenStudio but we need to have one `custom `option.
Maybe would be worthwhile to have specific fixes set up as Enum `CustomFixesTypes `to allow easy selection and then as list connecting to `Custom` Node.
As I am in charge of discipline content and you control Toolkit structure this is for you to decide how this should be implemented.
This will allow 100% flexibility and modularity there will be a lot of user of HAP, Dialux, SolarComputer in future so let's make this change sooner than later.
|
1.0
|
XML_Toolkit: ExportType Evolvement - @FraserGreenroyd
We need to adjust our `ExportType ` Currently we have Undefined, gbXMLTAS, gbXMLIES.
This could be a bit misleading.
Let's keep this as live thread and we bring all steps and fixes that are implemented and at the end we will move this into Documentation before we close this.
Please add anything we do in each type below
## gbXMLTAS
is in fact native gbXML without construction as per Revit export so our standard
## gbXMLIES
`gbXMLIES` is gbXMLTAS + all ies specific fixes as follow:
- replacing glazed building elements wall, floor, roof... `GLZ` with Curtain Wall approach so inserting (*wall, floor is better than curtain wall as snapping is more accurate)
<img src="https://user-images.githubusercontent.com/26113670/53685894-98963100-3d18-11e9-92bf-85245725a01f.png" width=260>
- opening into BuildingElement replace SLD items like Doors, Windows with separate new Building Element to capture different construction
- replace `Air` walls, floors and make then surfaceTypeEnum: Air (*we prefer to draw with Air Walls,Floor for better snapping see here as gbXML export centre line)
<img src="https://user-images.githubusercontent.com/26113670/53685876-608eee00-3d18-11e9-8e4a-bb482f23f813.png" width=260>
## Conclusion
We need to implement new ExportType: `Custom` that will allow to input all fixes we want to have from above. As I might want in TAS to have replace GLZ element as Curtain Wall etc...
Obiousley we will add predefine versions for e+/OpenStudio but we need to have one `custom `option.
Maybe would be worthwhile to have specific fixes set up as Enum `CustomFixesTypes `to allow easy selection and then as list connecting to `Custom` Node.
As I am in charge of discipline content and you control Toolkit structure this is for you to decide how this should be implemented.
This will allow 100% flexibility and modularity there will be a lot of user of HAP, Dialux, SolarComputer in future so let's make this change sooner than later.
|
non_defect
|
xml toolkit exporttype evolvement frasergreenroyd we need to adjust our exporttype currently we have undefined gbxmltas gbxmlies this could be a bit misleading let s keep this as live thread and we bring all steps and fixes that are implemented and at the end we will move this into documentation before we close this please add anything we do in each type below gbxmltas is in fact native gbxml without construction as per revit export so our standard gbxmlies gbxmlies is gbxmltas all ies specific fixes as follow replacing glazed building elements wall floor roof glz with curtain wall approach so inserting wall floor is better than curtain wall as snapping is more accurate opening into buildingelement replace sld items like doors windows with separate new building element to capture different construction replace air walls floors and make then surfacetypeenum air we prefer to draw with air walls floor for better snapping see here as gbxml export centre line conclusion we need to implement new exporttype custom that will allow to input all fixes we want to have from above as i might want in tas to have replace glz element as curtain wall etc obiousley we will add predefine versions for e openstudio but we need to have one custom option maybe would be worthwhile to have specific fixes set up as enum customfixestypes to allow easy selection and then as list connecting to custom node as i am in charge of discipline content and you control toolkit structure this is for you to decide how this should be implemented this will allow flexibility and modularity there will be a lot of user of hap dialux solarcomputer in future so let s make this change sooner than later
| 0
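The `Custom` export type proposed in the record above is essentially an enum of export targets plus a user-selectable set of fixes. A language-neutral sketch of that design (Python for illustration only — the toolkit itself is C#/BHoM, and all names here are assumptions):

```python
from enum import Enum, auto

class ExportType(Enum):
    UNDEFINED = auto()
    GBXML_TAS = auto()   # native gbXML, as per Revit export
    GBXML_IES = auto()   # native gbXML plus all IES-specific fixes
    CUSTOM = auto()      # caller picks the fixes

class CustomFixType(Enum):
    GLAZED_AS_CURTAIN_WALL = auto()  # replace GLZ elements with curtain-wall approach
    OPENING_AS_ELEMENT = auto()      # split SLD openings (doors, windows) into own elements
    AIR_SURFACE_TYPE = auto()        # mark Air walls/floors with surfaceTypeEnum Air

def fixes_for(export_type, custom_fixes=()):
    """Resolve which fixes apply for a given export type."""
    if export_type is ExportType.GBXML_IES:
        return set(CustomFixType)    # IES preset = all known fixes
    if export_type is ExportType.CUSTOM:
        return set(custom_fixes)     # user-selected subset
    return set()                     # plain/native export: no fixes
```

Presets for other targets (E+/OpenStudio, HAP, Dialux) would then just be named subsets of `CustomFixType`.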
|
24,119
| 3,917,070,280
|
IssuesEvent
|
2016-04-21 06:23:36
|
irnawansuprapti/openbiz-cubi
|
https://api.github.com/repos/irnawansuprapti/openbiz-cubi
|
closed
|
Care Your Skin To Look Younger
|
auto-migrated Priority-Medium spam Type-Defect
|
```
Our feature, impertinence, chins and smell region is titled the T-zone.These
areas somebody writer oil glands then any else parts of our embody and
essential to be doped with diametric statement to foreclose overproduction of
oil.For stark mortal of acne problems, a set of products that plow divergent
areas of your confronting is required and sometimes divergent set for the
homophonic extent. Our skin changes with the seasons and we also screw to be
more irritable to await for changes in our pare status. It?s historic that the
ripe pare work products and software to be disclosed.
http://nitroshredadvice.com/novus-serum/
```
Original issue reported on code.google.com by `ChunAnt...@gmail.com` on 17 Apr 2015 at 6:31
|
1.0
|
Care Your Skin To Look Younger - ```
Our feature, impertinence, chins and smell region is titled the T-zone.These
areas somebody writer oil glands then any else parts of our embody and
essential to be doped with diametric statement to foreclose overproduction of
oil.For stark mortal of acne problems, a set of products that plow divergent
areas of your confronting is required and sometimes divergent set for the
homophonic extent. Our skin changes with the seasons and we also screw to be
more irritable to await for changes in our pare status. It?s historic that the
ripe pare work products and software to be disclosed.
http://nitroshredadvice.com/novus-serum/
```
Original issue reported on code.google.com by `ChunAnt...@gmail.com` on 17 Apr 2015 at 6:31
|
defect
|
care your skin to look younger our feature impertinence chins and smell region is titled the t zone these areas somebody writer oil glands then any else parts of our embody and essential to be doped with diametric statement to foreclose overproduction of oil for stark mortal of acne problems a set of products that plow divergent areas of your confronting is required and sometimes divergent set for the homophonic extent our skin changes with the seasons and we also screw to be more irritable to await for changes in our pare status it s historic that the ripe pare work products and software to be disclosed original issue reported on code google com by chunant gmail com on apr at
| 1
|
47,727
| 13,066,148,736
|
IssuesEvent
|
2020-07-30 21:05:33
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
genie - add a check to fail on all genies < v 2.8.6 (Trac #1062)
|
Migrated from Trac cmake defect
|
Migrated from https://code.icecube.wisc.edu/ticket/1062
```json
{
"status": "closed",
"changetime": "2015-07-21T20:39:57",
"description": "",
"reporter": "nega",
"cc": "melanie.day",
"resolution": "fixed",
"_ts": "1437511197741352",
"component": "cmake",
"summary": "genie - add a check to fail on all genies < v 2.8.6",
"priority": "normal",
"keywords": "",
"time": "2015-07-21T20:09:14",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
1.0
|
genie - add a check to fail on all genies < v 2.8.6 (Trac #1062) -
Migrated from https://code.icecube.wisc.edu/ticket/1062
```json
{
"status": "closed",
"changetime": "2015-07-21T20:39:57",
"description": "",
"reporter": "nega",
"cc": "melanie.day",
"resolution": "fixed",
"_ts": "1437511197741352",
"component": "cmake",
"summary": "genie - add a check to fail on all genies < v 2.8.6",
"priority": "normal",
"keywords": "",
"time": "2015-07-21T20:09:14",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
defect
|
genie add a check to fail on all genies v trac migrated from json status closed changetime description reporter nega cc melanie day resolution fixed ts component cmake summary genie add a check to fail on all genies v priority normal keywords time milestone owner nega type defect
| 1
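The check requested in the genie ticket above is a one-liner in CMake; `GENIE_VERSION` is an assumed variable name that the project's find-module would have to set:

```cmake
if(GENIE_VERSION VERSION_LESS "2.8.6")
  message(FATAL_ERROR "GENIE ${GENIE_VERSION} found, but 2.8.6 or newer is required")
endif()
```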
|
12,031
| 7,764,051,774
|
IssuesEvent
|
2018-06-01 18:48:39
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
iOS Simulator rasterizer only hits 8fps
|
severe: performance ⌺ platform-ios
|
It's OK if this is expected. Mostly seeking to confirm. @chinmaygarde
<img width="366" alt="screen shot 2017-04-03 at 7 39 46 pm" src="https://cloud.githubusercontent.com/assets/11857803/24639574/73844c9c-18a5-11e7-9024-52fcc86a1a21.png">
Unclear if this log message is intended:
```
[WARNING:../../flutter/shell/common/platform_view.cc(162)] WARNING: Could not setup an OpenGL context on the resource loader.
```
|
True
|
iOS Simulator rasterizer only hits 8fps - It's OK if this is expected. Mostly seeking to confirm. @chinmaygarde
<img width="366" alt="screen shot 2017-04-03 at 7 39 46 pm" src="https://cloud.githubusercontent.com/assets/11857803/24639574/73844c9c-18a5-11e7-9024-52fcc86a1a21.png">
Unclear if this log message is intended:
```
[WARNING:../../flutter/shell/common/platform_view.cc(162)] WARNING: Could not setup an OpenGL context on the resource loader.
```
|
non_defect
|
ios simulator rasterizer only hits it s ok if this is expected mostly seeking to confirm chinmaygarde img width alt screen shot at pm src unclear if this log message is intended warning could not setup an opengl context on the resource loader
| 0
|
65,904
| 19,771,375,059
|
IssuesEvent
|
2022-01-17 10:22:52
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
Record.into() does not consider table names to match fields
|
T: Defect
|
### Expected behavior
Selecting from multiple tables with left outer joins and then mapping them into multiple POJOs corresponding to the joined tables should correctly set the respective fields.
### Actual behavior
The Record.into() method does not consider table names when matching fields, which leads to wrong values being mapped when the record has multiple fields with the same name but different table names (e.g. an "id" column in different tables).
### Steps to reproduce the problem
I have noticed the problem while fetching all the entities that we represent with joined tables inheritance. In order to fetch all child entities we use leftJoins on all children with parent table.
Below query results in a record with PARENT_TABLE.ID, CHILD1.ID, CHILD2.ID etc. fields. For an entity of type CHILD1, PARENT.ID and CHILD1.ID fields have the same value while CHILD2.ID field has null value since there is no entity in CHILD2 table.
```
dslContext.select()
.from(PARENT)
.leftJoin(CHILD1).on(PARENT.ID.eq(CHILD1.ID))
.leftJoin(CHILD2).on(PARENT.ID.eq(CHILD2.ID))
```
When we try to map the result record into a generated POJO of type ParentTable with `record.into(Parent.class)` DefaultRecordMapper first sets the id field with the value from PARENT.ID field then with the value from CHILD1.ID then with the value from CHILD2.ID field which sets it to null eventually.
This is also the case for other common fields such as created_at, updated_at etc. that exists in all our tables.
While debugging I found that [this](https://github.com/jOOQ/jOOQ/blob/efcf5676d438201eb094e70a855834bcff79daef/jOOQ/src/main/java/org/jooq/impl/DefaultRecordMapper.java#L826) code sets the members based on the name alone.
### Versions
- jOOQ: 3.16 Community Edition
- Java: 17
- Database (include vendor): PostgreSQL
- OS:
- JDBC Driver (include name if inofficial driver): jdbc
|
1.0
|
Record.into() does not consider table names to match fields - ### Expected behavior
Selecting from multiple tables with left outer joins and then mapping them into multiple POJOs corresponding to the joined tables should correctly set the respective fields.
### Actual behavior
Record.into() method does not consider table names to match fields. Which leads to wrong values being mapped when the record has multiple fields with same name but different table names (i.e "id" column in different tables).
### Steps to reproduce the problem
I have noticed the problem while fetching all the entities that we represent with joined tables inheritance. In order to fetch all child entities we use leftJoins on all children with parent table.
Below query results in a record with PARENT_TABLE.ID, CHILD1.ID, CHILD2.ID etc. fields. For an entity of type CHILD1, PARENT.ID and CHILD1.ID fields have the same value while CHILD2.ID field has null value since there is no entity in CHILD2 table.
```
dslContext.select()
.from(PARENT)
.leftJoin(CHILD1).on(PARENT.ID.eq(CHILD1.ID))
.leftJoin(CHILD2).on(PARENT.ID.eq(CHILD2.ID))
```
When we try to map the result record into a generated POJO of type ParentTable with `record.into(Parent.class)` DefaultRecordMapper first sets the id field with the value from PARENT.ID field then with the value from CHILD1.ID then with the value from CHILD2.ID field which sets it to null eventually.
This is also the case for other common fields such as created_at, updated_at etc. that exists in all our tables.
While debugging i found that `[this](https://github.com/jOOQ/jOOQ/blob/efcf5676d438201eb094e70a855834bcff79daef/jOOQ/src/main/java/org/jooq/impl/DefaultRecordMapper.java#L826)` code is setting the members based on just the name.
### Versions
- jOOQ: 3.16 Community Edition
- Java: 17
- Database (include vendor): PostgreSQL
- OS:
- JDBC Driver (include name if inofficial driver): jdbc
|
defect
|
record into does not consider table names to match fields expected behavior selecting from multiple tables with left outer joins and then mapping them into multiple pojos corresponding to the joined tables should correctly set the respective fields actual behavior record into method does not consider table names to match fields which leads to wrong values being mapped when the record has multiple fields with same name but different table names i e id column in different tables steps to reproduce the problem i have noticed the problem while fetching all the entities that we represent with joined tables inheritance in order to fetch all child entities we use leftjoins on all children with parent table below query results in a record with parent table id id id etc fields for an entity of type parent id and id fields have the same value while id field has null value since there is no entity in table dslcontext select from parent leftjoin on parent id eq id leftjoin on parent id eq id when we try to map the result record into a generated pojo of type parenttable with record into parent class defaultrecordmapper first sets the id field with the value from parent id field then with the value from id then with the value from id field which sets it to null eventually this is also the case for other common fields such as created at updated at etc that exists in all our tables while debugging i found that code is setting the members based on just the name versions jooq community edition java database include vendor postgresql os jdbc driver include name if inofficial driver jdbc
| 1
|
33,856
| 7,274,429,607
|
IssuesEvent
|
2018-02-21 09:58:58
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
EventJournal loses data if 2 nodes terminate
|
Estimation: M Source: Internal Team: Core Type: Defect
|
Here is a reproducer for the issue
https://gist.github.com/gurbuzali/897f5cd15d3347e29fb5af6c4ed2e453
- I start a 4 node cluster and a client
- start a thread to produce some data for event journal
- terminate one instance
- wait for some time
- terminate second instance
- check total count of events in the journal
|
1.0
|
EventJournal loses data if 2 nodes terminate - Here is a reproducer for the issue
https://gist.github.com/gurbuzali/897f5cd15d3347e29fb5af6c4ed2e453
- I start a 4 node cluster and a client
- start a thread to produce some data for event journal
- terminate one instance
- wait for some time
- terminate second instance
- check total count of events in the journal
|
defect
|
eventjournal loses data if nodes terminate here is a reproducer for the issue i start a node cluster and a client start a thread to produce some data for event journal terminate one instance wait for some time terminate second instance check total count of events in the journal
| 1
|
288,794
| 8,851,626,267
|
IssuesEvent
|
2019-01-08 16:11:47
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
developer.mozilla.org - desktop site instead of mobile site
|
browser-firefox priority-important
|
<!-- @browser: Firefox 65.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0 -->
<!-- @reported_with: -->
**URL**: https://developer.mozilla.org/en-US/docs/Web/Apps/Progressive/Add_to_home_screen
**Browser / Version**: Firefox 65.0
**Operating System**: Linux
**Tested Another Browser**: Yes
**Problem type**: Desktop site instead of mobile site
**Description**: home screen default embed user personal settings
**Steps to Reproduce**:
samsung home screen default taking advantage to embed proirity of setting for desktop password.
[](https://webcompat.com/uploads/2018/12/69c217ff-11f9-4c21-934b-ea309d0116e4.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Reported by @antitrackers
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
developer.mozilla.org - desktop site instead of mobile site - <!-- @browser: Firefox 65.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0 -->
<!-- @reported_with: -->
**URL**: https://developer.mozilla.org/en-US/docs/Web/Apps/Progressive/Add_to_home_screen
**Browser / Version**: Firefox 65.0
**Operating System**: Linux
**Tested Another Browser**: Yes
**Problem type**: Desktop site instead of mobile site
**Description**: home screen default embed user personal settings
**Steps to Reproduce**:
samsung home screen default taking advantage to embed proirity of setting for desktop password.
[](https://webcompat.com/uploads/2018/12/69c217ff-11f9-4c21-934b-ea309d0116e4.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Reported by @antitrackers
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
developer mozilla org desktop site instead of mobile site url browser version firefox operating system linux tested another browser yes problem type desktop site instead of mobile site description home screen default embed user personal settings steps to reproduce samsung home screen default taking advantage to embed proirity of setting for desktop password browser configuration none reported by antitrackers from with ❤️
| 0
|
384,336
| 26,582,677,420
|
IssuesEvent
|
2023-01-22 16:54:42
|
vlang/v
|
https://api.github.com/repos/vlang/v
|
closed
|
specify files to convert
|
Unit: Documentation
|
### Describe the issue
Both below works (in examples/call_v_from_c):
```
$ v -shared -o v_test_math.c v_test_math.v
$ v -shared -o v_test_both.c .
```
But not below
```
$ v -shared -o v_test_math.c v_test_math.v v_test_print.v
Too many targets. Specify just one target: <target.v|target_directory>.
```
Is it possible to specify a list of files to be converted into C, rather than the full directory?
Use case is simple. I have two folders: vsrc_a, and vsrc_b, each containing some v sources. Now if I do:
```
$ vsrc_a$ v -shared -o vsrc_a.c .
$ vsrc_b$ v -shared -o vsrc_b.c .
$ gcc -o test vsrc_a.c vsrc_b.c
Won't work because vsrc_a.c and vsrc_b.c contain duplicated boilerplate codes.
```
### Links
No link.
|
1.0
|
specify files to convert - ### Describe the issue
Both below works (in examples/call_v_from_c):
```
$ v -shared -o v_test_math.c v_test_math.v
$ v -shared -o v_test_both.c .
```
But not below
```
$ v -shared -o v_test_math.c v_test_math.v v_test_print.v
Too many targets. Specify just one target: <target.v|target_directory>.
```
Is it possible to specify a list of files to be converted into C, rather than the full directory?
Use case is simple. I have two folders: vsrc_a, and vsrc_b, each containing some v sources. Now if I do:
```
$ vsrc_a$ v -shared -o vsrc_a.c .
$ vsrc_b$ v -shared -o vsrc_b.c .
$ gcc -o test vsrc_a.c vsrc_b.c
Won't work because vsrc_a.c and vsrc_b.c contain duplicated boilerplate codes.
```
### Links
No link.
|
non_defect
|
specify files to convert describe the issue both below works in examples call v from c v shared o v test math c v test math v v shared o v test both c but not below v shared o v test math c v test math v v test print v too many targets specify just one target is it possible to specify a list of files to be converted into c rather than the full directory use case is simple i have two folders vsrc a and vsrc b each containing some v sources now if i do vsrc a v shared o vsrc a c vsrc b v shared o vsrc b c gcc o test vsrc a c vsrc b c won t work because vsrc a c and vsrc b c contain duplicated boilerplate codes links no link
| 0
|
381,121
| 11,273,857,986
|
IssuesEvent
|
2020-01-14 17:20:21
|
myceworld/myce
|
https://api.github.com/repos/myceworld/myce
|
closed
|
[third party] update coingecko
|
Priority: Low Type: Bug
|
**Describe the solution**
https://www.coingecko.com/en/coins/myce shows no `circulating supply` and affects several sites taking the info from there
|
1.0
|
[third party] update coingecko - **Describe the solution**
https://www.coingecko.com/en/coins/myce shows no `circulating supply` and affects several sites taking the info from there
|
non_defect
|
update coingecko describe the solution shows no circulating supply and affects several sites taking the info from there
| 0
|
40,311
| 6,818,977,246
|
IssuesEvent
|
2017-11-07 08:32:08
|
DanielBeato/GESPRO_PracticaGestionTareas_1718
|
https://api.github.com/repos/DanielBeato/GESPRO_PracticaGestionTareas_1718
|
closed
|
Documentar introducción
|
documentation
|
**Introducción**
- Situación actual abejas / importancia de estas
- Cómo un apicultor inspecciona el colmenar
- Intentos de automatizar este proceso
- Nuestra solución
**Estructura de la memoria**
|
1.0
|
Documentar introducción - **Introducción**
- Situación actual abejas / importancia de estas
- Cómo un apicultor inspecciona el colmenar
- Intentos de automatizar este proceso
- Nuestra solución
**Estructura de la memoria**
|
non_defect
|
documentar introducción introducción situación actual abejas importancia de estas cómo un apicultor inspecciona el colmenar intentos de automatizar este proceso nuestra solución estructura de la memoria
| 0
|
447,013
| 31,592,092,560
|
IssuesEvent
|
2023-09-05 00:03:35
|
shuding/nextra
|
https://api.github.com/repos/shuding/nextra
|
closed
|
Broken link for useConfig in docs theme documentation
|
documentation
|
There is a link to explain the usage of `useConfig` in [this section of docs theme documentation](https://nextra.site/docs/docs-theme/theme-configuration#dynamic-tags-based-on-page).
It is pointing to [`https://nextra.site/docs/docs-theme/api/use-config`](https://nextra.site/docs/docs-theme/api/use-config)
I tried to search for it in case the page has been moved, but didn't find it.
|
1.0
|
Broken link for useConfig in docs theme documentation - There is a link to explain the usage of `useConfig` in [this section of docs theme documentation](https://nextra.site/docs/docs-theme/theme-configuration#dynamic-tags-based-on-page).
It is pointing to [`https://nextra.site/docs/docs-theme/api/use-config`](https://nextra.site/docs/docs-theme/api/use-config)
I tried to search for it in case the page has been moved, but didn't find it.
|
non_defect
|
broken link for useconfig in docs theme documentation there is a link to explain the usage of useconfig in it is pointing to i tried to search for it in case the page has been moved but didn t find it
| 0
|
166,173
| 26,292,035,682
|
IssuesEvent
|
2023-01-08 14:33:42
|
SINZAK/sinzak-ios
|
https://api.github.com/repos/SINZAK/sinzak-ios
|
closed
|
[Design] 탭바 디자인
|
Design
|
### Feature 설명
탭바 디자인
- [ ] 애셋 업데이트
- [ ] 탭바
### Preceded issue(Optional)
_No response_
### Related View(Optional)
- e.g. SignupView
### 우선 순위
P1🔥
### 예상 마감일 + 소요시간
~ 12/2(2h)
### 실제 마감일 + 소요시간
_No response_
|
1.0
|
[Design] 탭바 디자인 - ### Feature 설명
탭바 디자인
- [ ] 애셋 업데이트
- [ ] 탭바
### Preceded issue(Optional)
_No response_
### Related View(Optional)
- e.g. SignupView
### 우선 순위
P1🔥
### 예상 마감일 + 소요시간
~ 12/2(2h)
### 실제 마감일 + 소요시간
_No response_
|
non_defect
|
탭바 디자인 feature 설명 탭바 디자인 애셋 업데이트 탭바 preceded issue optional no response related view optional e g signupview 우선 순위 🔥 예상 마감일 소요시간 실제 마감일 소요시간 no response
| 0
|
112,240
| 9,558,445,049
|
IssuesEvent
|
2019-05-03 14:14:57
|
ansible-community/ara
|
https://api.github.com/repos/ansible-community/ara
|
opened
|
Missing unit tests for offline and http clients
|
help wanted tests
|
Although the clients are exercised through integration tests, there are no unit tests right now. Even a minimal amount of coverage would be nice.
|
1.0
|
Missing unit tests for offline and http clients - Although the clients are exercised through integration tests, there are no unit tests right now. Even a minimal amount of coverage would be nice.
|
non_defect
|
missing unit tests for offline and http clients although the clients are exercised through integration tests there are no unit tests right now even a minimal amount of coverage would be nice
| 0
|
6,634
| 3,869,987,091
|
IssuesEvent
|
2016-04-10 22:31:47
|
Urigo/angular-meteor
|
https://api.github.com/repos/Urigo/angular-meteor
|
closed
|
Release Angular1Meteor through Npm
|
component: build:npm
|
- [ ] expose Angular1Meteor through Npm as default.
- [ ] use it in an Atmosphere package to be compatible with Meteor versions before 1.3.
- [ ] expose old legacy code to Npm as well as older version 1.2
|
1.0
|
Release Angular1Meteor through Npm - - [ ] expose Angular1Meteor through Npm as default.
- [ ] use it in an Atmosphere package to be compatible with Meteor versions before 1.3.
- [ ] expose old legacy code to Npm as well as older version 1.2
|
non_defect
|
release through npm expose through npm as default use it in an atmosphere package to be compatible with meteor versions before expose old legacy code to npm as well as older version
| 0
|
33,812
| 7,255,223,829
|
IssuesEvent
|
2018-02-16 14:11:28
|
mlpack/mlpack
|
https://api.github.com/repos/mlpack/mlpack
|
closed
|
Greedy KNN doesn't always return enough results
|
D: moderate P: minor T: defect
|
@nvasil pointed this one out. If you try to use k-nearest-neighbor search with the greedy algorithm, it may not return results. Here is a simple way to reproduce the issue:
```
$ bin/mlpack_knn -r test_data_3_1000.csv -k 3 -a greedy -n n.csv -d d.csv -l 1
```
That will calculate the nearest neighbors of every point in `test_data_3_1000.csv` (though any dataset will work) building a tree with a leaf size of 1 (meaning that every leaf in the tree holds only one point). Since we are using the same query set as the reference set, a point cannot be its own nearest neighbor. So for some point p, the greedy algorithm will descend the tree directly to the leaf containing only the point p, and no result will be saved. In our example above, this means that no nearest neighbors are set; we can see that by looking at the results in n.csv and d.csv:
```
$ head d.csv
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
```
```
$ head n.csv
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
```
The responsible code is in `src/mlpack/core/tree/greedy_single_tree_traverser_impl.hpp`: the recursion there simply recurses until hitting a leaf node. However, intuitively, the change that needs to happen is that for nearest neighbor search, we need to terminate the recursion and perform all the point-to-point base cases the level before `referenceNode.NumDescendants() < k`.
This would be a good issue for someone who is looking to learn about the tree recursion code in mlpack.
|
1.0
|
Greedy KNN doesn't always return enough results - @nvasil pointed this one out. If you try to use k-nearest-neighbor search with the greedy algorithm, it may not return results. Here is a simple way to reproduce the issue:
```
$ bin/mlpack_knn -r test_data_3_1000.csv -k 3 -a greedy -n n.csv -d d.csv -l 1
```
That will calculate the nearest neighbors of every point in `test_data_3_1000.csv` (though any dataset will work) building a tree with a leaf size of 1 (meaning that every leaf in the tree holds only one point). Since we are using the same query set as the reference set, a point cannot be its own nearest neighbor. So for some point p, the greedy algorithm will descend the tree directly to the leaf containing only the point p, and no result will be saved. In our example above, this means that no nearest neighbors are set; we can see that by looking at the results in n.csv and d.csv:
```
$ head d.csv
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
1.79769313486232e+308,1.79769313486232e+308,1.79769313486232e+308
```
```
$ head n.csv
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
8017,8017,8017
```
The responsible code is in `src/mlpack/core/tree/greedy_single_tree_traverser_impl.hpp`: the recursion there simply recurses until hitting a leaf node. However, intuitively, the change that needs to happen is that for nearest neighbor search, we need to terminate the recursion and perform all the point-to-point base cases the level before `referenceNode.NumDescendants() < k`.
This would be a good issue for someone who is looking to learn about the tree recursion code in mlpack.
|
defect
|
greedy knn doesn t always return enough results nvasil pointed this one out if you try to use k nearest neighbor search with the greedy algorithm it may not return results here is a simple way to reproduce the issue bin mlpack knn r test data csv k a greedy n n csv d d csv l that will calculate the nearest neighbors of every point in test data csv though any dataset will work building a tree with a leaf size of meaning that every leaf in the tree holds only one point since we are using the same query set as the reference set a point cannot be its own nearest neighbor so for some point p the greedy algorithm will descend the tree directly to the leaf containing only the point p and no result will be saved in our example above this means that no nearest neighbors are set we can see that by looking at the results in n csv and d csv head d csv head n csv the responsible code is in src mlpack core tree greedy single tree traverser impl hpp the recursion there simply recurses until hitting a leaf node however intuitively the change that needs to happen is that for nearest neighbor search we need to terminate the recursion and perform all the point to point base cases the level before referencenode numdescendants k this would be a good issue for someone who is looking to learn about the tree recursion code in mlpack
| 1
|
19,407
| 3,201,927,214
|
IssuesEvent
|
2015-10-02 10:48:29
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
closed
|
BIND backend fails to ignore bind logging configuration
|
auth defect
|
* using pdns-3.4.6-1.fc22.x86_64 as provided by standard Fedora 22
* using BIND backend
When trying to start with bind configuration file from bind version 9.9.3, I get the following error:
Sep 27 17:41:01 test2 pdns[8645]: Error parsing bind configuration: Error in bind configuration '/var/named/named.conf' on line 43: syntax error
Sep 27 17:41:01 test2 pdns[8645]: Caught an exception instantiating a backend: Error in bind configuration '/var/named/named.conf' on line 43: syntax error
Here's a snippet of the offending configuration file, with line 43 annotated.
logging {
channel query-log {
file "data/named-query.log" versions 1000 size 100m; # line 43
print-time yes;
};
category queries { query-log; };
channel default_debug {
file "data/named.run";
severity dynamic;
print-time yes;
};
};
I'm suspecting that line 48, also starting with `file` might also cause issues if one would simply remove line 43.. didn't try that though. I just removed the whole "logging" snippet presented above after which pdns started fine.
|
1.0
|
BIND backend fails to ignore bind logging configuration - * using pdns-3.4.6-1.fc22.x86_64 as provided by standard Fedora 22
* using BIND backend
When trying to start with bind configuration file from bind version 9.9.3, I get the following error:
Sep 27 17:41:01 test2 pdns[8645]: Error parsing bind configuration: Error in bind configuration '/var/named/named.conf' on line 43: syntax error
Sep 27 17:41:01 test2 pdns[8645]: Caught an exception instantiating a backend: Error in bind configuration '/var/named/named.conf' on line 43: syntax error
Here's a snippet of the offending configuration file, with line 43 annotated.
logging {
channel query-log {
file "data/named-query.log" versions 1000 size 100m; # line 43
print-time yes;
};
category queries { query-log; };
channel default_debug {
file "data/named.run";
severity dynamic;
print-time yes;
};
};
I'm suspecting that line 48, also starting with `file` might also cause issues if one would simply remove line 43.. didn't try that though. I just removed the whole "logging" snippet presented above after which pdns started fine.
|
defect
|
bind backend fails to ignore bind logging configuration using pdns as provided by standard fedora using bind backend when trying to start with bind configuration file from bind version i get the following error sep pdns error parsing bind configuration error in bind configuration var named named conf on line syntax error sep pdns caught an exception instantiating a backend error in bind configuration var named named conf on line syntax error here s a snippet of the offending configuration file with line annotated logging channel query log file data named query log versions size line print time yes category queries query log channel default debug file data named run severity dynamic print time yes i m suspecting that line also starting with file might also cause issues if one would simply remove line didn t try that though i just removed the whole logging snippet presented above after which pdns started fine
| 1
|
45,584
| 13,130,326,406
|
IssuesEvent
|
2020-08-06 15:12:09
|
whatwg/html
|
https://api.github.com/repos/whatwg/html
|
closed
|
System Font Lists / Dialog
|
addition/proposal needs implementer interest security/privacy topic: canvas
|
Due to the possibilities provided with WebGL v2 and WebAssembly, there are more and more companies pushing into the multimedia space with their Web apps, especially as this gives a unified distribution platform for both the Web and the Desktop.
We are for example providing a full featured painting and image editing application here: www.paintsupreme3d.com.
Now, we draw fonts using the 2D canvas. The biggest drawback right now for web based multimedia applications is that they cannot offer the user a list of the SYSTEM fonts to choose from for drawing text.
For web apps to be able to compete with native desktop apps, this is a major drawback right now.
Would it be possible at one stage to either:
1) Provide a list of the installed system fonts to the application so that the app can provide its own font dialog to the user.
2) Provide a browser font dialog where the selected font is passed to the 2D canvas if 1) provides a fingerprinting security issue.
This would be a big step for multimedia web app providers like us.
|
True
|
System Font Lists / Dialog - Due to the possibilities provided with WebGL v2 and WebAssembly, there are more and more companies pushing into the multimedia space with their Web apps, especially as this gives a unified distribution platform for both the Web and the Desktop.
We are for example providing a full featured painting and image editing application here: www.paintsupreme3d.com.
Now, we draw fonts using the 2D canvas. The biggest drawback right now for web based multimedia applications is that they cannot offer the user a list of the SYSTEM fonts to choose from for drawing text.
For web apps to be able to compete with native desktop apps, this is a major drawback right now.
Would it be possible at one stage to either:
1) Provide a list of the installed system fonts to the application so that the app can provide its own font dialog to the user.
2) Provide a browser font dialog where the selected font is passed to the 2D canvas if 1) provides a fingerprinting security issue.
This would be a big step for multimedia web app providers like us.
|
non_defect
|
system font lists dialog due to the possibilities provided with webgl and webassembly there are more and more companies pushing into the multimedia space with their web apps especially as this gives a unified distribution platform for both the web and the desktop we are for example providing a full featured painting and image editing application here now we draw fonts using the canvas the biggest drawback right now for web based multimedia applications is that they cannot offer the user a list of the system fonts to choose from for drawing text for web apps to be able to compete with native desktop apps this is a major drawback right now would it be possible at one stage to either provide a list of the installed system fonts to the application so that the app can provide its own font dialog to the user provide a browser font dialog where the selected font is passed to the canvas if provides a fingerprinting security issue this would be a big step for multimedia web app providers like us
| 0
|
58,316
| 16,483,991,333
|
IssuesEvent
|
2021-05-24 15:21:03
|
snowplow/snowplow-objc-tracker
|
https://api.github.com/repos/snowplow/snowplow-objc-tracker
|
closed
|
Fix duplicate NS_SWIFT_NAME macro
|
priority:high status:completed type:defect
|
**Describe the bug**
Both the protocol SPNetworkController and the class SPNetworkControllerImpl have the same NS_SWIFT_NAME of NetworkController causing Xcode to report sift references to NetworkController as ambiguous.
**To Reproduce**
With a swift environment adding the below will generate the error:
`var networkController: NetworkController?`
**Expected behavior**
Other similar examples in the module have Impl appened to the NS_SWIFT_NAME for the class. eg. NetworkControllerImpl.
Both EmitterController and GDPRController are examples of this.
**Additional context**
Discovered while attempting to Mock TrackerController for unit testing of our implementation.
SnowplowTracker 2.0.1
Xcode 12.5
|
1.0
|
Fix duplicate NS_SWIFT_NAME macro - **Describe the bug**
Both the protocol SPNetworkController and the class SPNetworkControllerImpl have the same NS_SWIFT_NAME of NetworkController causing Xcode to report sift references to NetworkController as ambiguous.
**To Reproduce**
With a swift environment adding the below will generate the error:
`var networkController: NetworkController?`
**Expected behavior**
Other similar examples in the module have Impl appened to the NS_SWIFT_NAME for the class. eg. NetworkControllerImpl.
Both EmitterController and GDPRController are examples of this.
**Additional context**
Discovered while attempting to Mock TrackerController for unit testing of our implementation.
SnowplowTracker 2.0.1
Xcode 12.5
|
defect
|
fix duplicate ns swift name macro describe the bug both the protocol spnetworkcontroller and the class spnetworkcontrollerimpl have the same ns swift name of networkcontroller causing xcode to report sift references to networkcontroller as ambiguous to reproduce with a swift environment adding the below will generate the error var networkcontroller networkcontroller expected behavior other similar examples in the module have impl appened to the ns swift name for the class eg networkcontrollerimpl both emittercontroller and gdprcontroller are examples of this additional context discovered while attempting to mock trackercontroller for unit testing of our implementation snowplowtracker xcode
| 1
|
617,656
| 19,402,098,572
|
IssuesEvent
|
2021-12-19 11:18:05
|
alexcoder04/rfap-go-server
|
https://api.github.com/repos/alexcoder04/rfap-go-server
|
opened
|
Server console
|
enhancement priority:low
|
Some kind of way to control the server: restart it, reload the configuration, shut down, ...
Original issue on the main repo:
> What should we use for the console?
> Stdin is not an option, because we don't have a window with the server running open all the time; moreover the server would print messages while you are typing your command, so it wouldn't really work.
> A Unix domain socket would be much better, but it's not supported on Windows (does Windows have something like that?).
Another socket listening on localhost? Seems complicated to me.
|
1.0
|
Server console - Some kind of way to control the server: restart it, reload the configuration, shut down, ...
Original issue on the main repo:
> What should we use for the console?
> Stdin is not an option, because we don't have a window with the server running open all the time; moreover the server would print messages while you are typing your command, so it wouldn't really work.
> A Unix domain socket would be much better, but it's not supported on Windows (does Windows have something like that?).
Another socket listening on localhost? Seems complicated to me.
|
non_defect
|
server console some kind of way to control the server restart it reload the configuration shut down original issue on the main repo what should we use for the console stdin is not an option because we don t have a window with the server running open all the time moreover the server would print messages while you are typing your command so it wouldn t really work a unix domain socket would be much better but it s not supported on windows does windows have something like that another socket listening on localhost seems complicated to me
| 0
|
66,095
| 19,982,007,074
|
IssuesEvent
|
2022-01-30 02:57:17
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
opened
|
Removing a disk file from a mirror does not produce an error
|
Type: Defect
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | 20.04.3 LTS
Kernel Version | 5.4.0-96-generic
Architecture | x86-64
OpenZFS Version | zfs-0.8.3-1ubuntu12.13, zfs-kmod-0.8.3-1ubuntu12.13
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
I have a pool with a single mirror backed by 2 files. When I delete one of these files and run `zpool scrub` I expect it to show me that the pool is degraded, but that doesn't happen. It shows that everything is healthy, until I reboot the machine, and then the pool finally becomes degraded.
### Describe how to reproduce the problem
Create 2 disk files, 300MB each:
```
dd if=/dev/zero of=/home/sg/zpool_testing/disk1.img bs=1M count=300
dd if=/dev/zero of=/home/sg/zpool_testing/disk2.img bs=1M count=300
```
Output:
```
300+0 records in
300+0 records out
314572800 bytes (315 MB, 300 MiB) copied, 0.407219 s, 772 MB/s
300+0 records in
300+0 records out
314572800 bytes (315 MB, 300 MiB) copied, 0.266665 s, 1.2 GB/s
```
Create a ZFS pool with 1 mirror backed by the 2 files created above. Add some data in this newly created pool and print zpool status.
```
zpool create pool1 mirror /home/sg/zpool_testing/disk1.img /home/sg/zpool_testing/disk2.img
echo hello world > /pool1/hello.txt
zpool status
```
Output:
```
pool: pool1
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
pool1 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/home/sg/zpool_testing/disk1.img ONLINE 0 0 0
/home/sg/zpool_testing/disk2.img ONLINE 0 0 0
errors: No known data errors
```
Delete a disk from the mirror, scrub the pool, and print the pool status:
```
rm /home/sg/zpool_testing/disk2.img
zpool scrub pool1
sleep 1
zpool status
```
Output:
```
pool: pool1
state: ONLINE
scan: scrub repaired 0B in 0 days 00:00:00 with 0 errors on Sun Jan 30 02:46:46 2022
config:
NAME STATE READ WRITE CKSUM
pool1 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/home/sg/zpool_testing/disk1.img ONLINE 0 0 0
/home/sg/zpool_testing/disk2.img ONLINE 0 0 0
errors: No known data errors
```
Restarting the server and running `zpool status` finally outputs the error which should have been shown before:
```
pool: pool1
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://zfsonlinux.org/msg/ZFS-8000-2Q
scan: scrub repaired 0B in 0 days 00:00:00 with 0 errors on Sun Jan 30 02:46:46 2022
config:
NAME STATE READ WRITE CKSUM
pool1 DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
/home/sg/zpool_testing/disk1.img ONLINE 0 0 0
15136532953254090313 UNAVAIL 0 0 0 was /home/sg/zpool_testing/disk2.img
errors: No known data errors
```
|
1.0
|
Removing a disk file from a mirror does not produce an error - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | 20.04.3 LTS
Kernel Version | 5.4.0-96-generic
Architecture | x86-64
OpenZFS Version | zfs-0.8.3-1ubuntu12.13, zfs-kmod-0.8.3-1ubuntu12.13
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
I have a pool with a single mirror backed by 2 files. When I delete one of these files and run `zpool scrub` I expect it to show me that the pool is degraded, but that doesn't happen. It shows that everything is healthy, until I reboot the machine, and then the pool finally becomes degraded.
### Describe how to reproduce the problem
Create 2 disk files, 300MB each:
```
dd if=/dev/zero of=/home/sg/zpool_testing/disk1.img bs=1M count=300
dd if=/dev/zero of=/home/sg/zpool_testing/disk2.img bs=1M count=300
```
Output:
```
300+0 records in
300+0 records out
314572800 bytes (315 MB, 300 MiB) copied, 0.407219 s, 772 MB/s
300+0 records in
300+0 records out
314572800 bytes (315 MB, 300 MiB) copied, 0.266665 s, 1.2 GB/s
```
Create a ZFS pool with 1 mirror backed by the 2 files created above. Add some data in this newly created pool and print zpool status.
```
zpool create pool1 mirror /home/sg/zpool_testing/disk1.img /home/sg/zpool_testing/disk2.img
echo hello world > /pool1/hello.txt
zpool status
```
Output:
```
pool: pool1
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
pool1 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/home/sg/zpool_testing/disk1.img ONLINE 0 0 0
/home/sg/zpool_testing/disk2.img ONLINE 0 0 0
errors: No known data errors
```
Delete a disk from the mirror, scrub the pool, and print the pool status:
```
rm /home/sg/zpool_testing/disk2.img
zpool scrub pool1
sleep 1
zpool status
```
Output:
```
pool: pool1
state: ONLINE
scan: scrub repaired 0B in 0 days 00:00:00 with 0 errors on Sun Jan 30 02:46:46 2022
config:
NAME STATE READ WRITE CKSUM
pool1 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/home/sg/zpool_testing/disk1.img ONLINE 0 0 0
/home/sg/zpool_testing/disk2.img ONLINE 0 0 0
errors: No known data errors
```
Restarting the server and running `zpool status` finally outputs the error which should have been shown before:
```
pool: pool1
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://zfsonlinux.org/msg/ZFS-8000-2Q
scan: scrub repaired 0B in 0 days 00:00:00 with 0 errors on Sun Jan 30 02:46:46 2022
config:
NAME STATE READ WRITE CKSUM
pool1 DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
/home/sg/zpool_testing/disk1.img ONLINE 0 0 0
15136532953254090313 UNAVAIL 0 0 0 was /home/sg/zpool_testing/disk2.img
errors: No known data errors
```
|
defect
|
removing a disk file from a mirror does not produce an error thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name ubuntu distribution version lts kernel version generic architecture openzfs version zfs zfs kmod command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing i have a pool with a single mirror backed by files when i delete one of these files and run zpool scrub i expect it to show me that the pool is degraded but that doesn t happen it shows that everything is healthy until i reboot the machine and then the pool finally becomes degraded describe how to reproduce the problem create disk files each dd if dev zero of home sg zpool testing img bs count dd if dev zero of home sg zpool testing img bs count output records in records out bytes mb mib copied s mb s records in records out bytes mb mib copied s gb s create a zfs pool with mirror backed by the files created above add some data in this newly created pool and print zpool status zpool create mirror home sg zpool testing img home sg zpool testing img echo hello world hello txt zpool status output pool state online scan none requested config name state read write cksum online mirror online home sg zpool testing img online home sg zpool testing img online errors no known data errors delete a disk from the mirror scrub the pool and print the pool status rm home sg zpool testing img zpool scrub sleep zpool status output pool state online scan scrub repaired in days with errors on sun jan config name state read write cksum online mirror online home sg zpool testing img online home sg zpool testing img online errors no known data errors restarting the server and running zpool status finally outputs the error which should have been shown before pool state degraded status one or more devices could not be opened sufficient replicas exist for the pool to continue functioning in a degraded state action attach the missing device and online it using zpool online see scan scrub repaired in days with errors on sun jan config name state read write cksum degraded mirror degraded home sg zpool testing img online unavail was home sg zpool testing img errors no known data errors
| 1
|
28,952
| 5,444,152,733
|
IssuesEvent
|
2017-03-07 01:37:21
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
EntityTrait::setHidden() with $merge set to TRUE returns wrong results.
|
Defect ORM
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: v3.4.2.
* Platform and Target: PHP 7.0.15-0ubuntu0.16.10.4
When the `_hidden` property is not empty and we call `$entity->setHidden(['some_field'], true)`, the ["merge"](https://api.cakephp.org/3.4/source-class-Cake.Datasource.EntityTrait.html#436) fails.
I don't know about other versions, but in php7, Array + Array only works well with associative arrays.
|
1.0
|
EntityTrait::setHidden() with $merge set to TRUE returns wrong results. - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: v3.4.2.
* Platform and Target: PHP 7.0.15-0ubuntu0.16.10.4
When the `_hidden` property is not empty and we call `$entity->setHidden(['some_field'], true)`, the ["merge"](https://api.cakephp.org/3.4/source-class-Cake.Datasource.EntityTrait.html#436) fails.
I don't know about other versions, but in php7, Array + Array only works well with associative arrays.
|
defect
|
entitytrait sethidden with merge set to true returns wrong results this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target php when the hidden property is not empty and we call entity sethidden true the fails i don t know about other versions but in array array only works well with associative arrays
| 1
|
27,778
| 5,100,695,388
|
IssuesEvent
|
2017-01-04 13:13:04
|
ribasco/async-gamequery-lib
|
https://api.github.com/repos/ribasco/async-gamequery-lib
|
closed
|
Missing implementation for Steam ReportCheatData
|
defect
|
`ReportCheatData` does not have any implementation.
- Create interface `SteamCheatReportingService`
- Add method `reportCheatData` into `SteamCheatReportingService`
- Add implementation for `ReportCheatData `class
|
1.0
|
Missing implementation for Steam ReportCheatData - `ReportCheatData` does not have any implementation.
- Create interface `SteamCheatReportingService`
- Add method `reportCheatData` into `SteamCheatReportingService`
- Add implementation for `ReportCheatData `class
|
defect
|
missing implementation for steam reportcheatdata reportcheatdata does not have any implementation create interface steamcheatreportingservice add method reportcheatdata into steamcheatreportingservice add implementation for reportcheatdata class
| 1
|
53,946
| 13,262,544,241
|
IssuesEvent
|
2020-08-20 22:01:20
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
[cmake] make tarball gives directories different names (Trac #2373)
|
Migrated from Trac cmake defect
|
For some reason the normal build directories put simprod-scripts in a directory called `simprod-scripts` with a dash, but make tarball creates a directory called `simprod_scripts` with an underscore. This messes up a lot of my scripts. All other directory names remain unaffected.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2373">https://code.icecube.wisc.edu/projects/icecube/ticket/2373</a>, reported by kjmeagherand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"_ts": "1593001902142004",
"description": "For some reason the normal build directories put simprod-scripts in a directory called `simprod-scripts` with a dash, but make tarball creates a directory called `simprod_scripts` with an underscore. This messes up a lot of my scripts. All other directory names remain unaffected.",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"time": "2019-11-08T16:44:28",
"component": "cmake",
"summary": "[cmake] make tarball gives directories different names",
"priority": "normal",
"keywords": "",
"milestone": "Autumnal Equinox 2020",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[cmake] make tarball gives directories different names (Trac #2373) - For some reason the normal build directories put simprod-scripts in a directory called `simprod-scripts` with a dash, but make tarball creates a directory called `simprod_scripts` with an underscore. This messes up a lot of my scripts. All other directory names remain unaffected.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2373">https://code.icecube.wisc.edu/projects/icecube/ticket/2373</a>, reported by kjmeagherand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"_ts": "1593001902142004",
"description": "For some reason the normal build directories put simprod-scripts in a directory called `simprod-scripts` with a dash, but make tarball creates a directory called `simprod_scripts` with an underscore. This messes up a lot of my scripts. All other directory names remain unaffected.",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"time": "2019-11-08T16:44:28",
"component": "cmake",
"summary": "[cmake] make tarball gives directories different names",
"priority": "normal",
"keywords": "",
"milestone": "Autumnal Equinox 2020",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
make tarball gives directories different names trac for some reason the normal build directories put simprod scripts in a directory called simprod scripts with a dash but make tarball creates a directory called simprod scripts with an underscore this messes up a lot of my scripts all other directory names remain unaffected migrated from json status closed changetime ts description for some reason the normal build directories put simprod scripts in a directory called simprod scripts with a dash but make tarball creates a directory called simprod scripts with an underscore this messes up a lot of my scripts all other directory names remain unaffected reporter kjmeagher cc resolution fixed time component cmake summary make tarball gives directories different names priority normal keywords milestone autumnal equinox owner nega type defect
| 1
|
103,246
| 4,165,807,893
|
IssuesEvent
|
2016-06-19 18:56:55
|
kaytotes/ImprovedBlizzardUI
|
https://api.github.com/repos/kaytotes/ImprovedBlizzardUI
|
closed
|
Battleground kill feed font issues
|
bug core high priority pvp
|
The Battleground kill feed font seems to be setting incorrectly. Is way smaller than it should be compared to what is currently on WoD live build. Needs to be looked at and adjusted.

|
1.0
|
Battleground kill feed font issues - The Battleground kill feed font seems to be setting incorrectly. Is way smaller than it should be compared to what is currently on WoD live build. Needs to be looked at and adjusted.

|
non_defect
|
battleground kill feed font issues the battleground kill feed font seems to be setting incorrectly is way smaller than it should be compared to what is currently on wod live build needs to be looked at and adjusted
| 0
|
61,976
| 17,023,823,580
|
IssuesEvent
|
2021-07-03 04:02:28
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Nominatim doesn't report POIs mapped as a relation
|
Component: nominatim Priority: minor Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 12.18pm, Sunday, 16th September 2012]**
When a POI (e.g. camping) is mapped as a relation of closed ways (several pieces of land) it is not reported as a result of the search.
e.g. looking for "camping oostwold" is returning 3 campings.
Camping Pool (http://www.openstreetmap.org/browse/relation/2267012) is missing.
Or is there something missing in the mapping?
|
1.0
|
Nominatim doesn't report POIs mapped as a relation - **[Submitted to the original trac issue database at 12.18pm, Sunday, 16th September 2012]**
When a POI (e.g. camping) is mapped as a relation of closed ways (several pieces of land) it is not reported as a result of the search.
e.g. looking for "camping oostwold" is returning 3 campings.
Camping Pool (http://www.openstreetmap.org/browse/relation/2267012) is missing.
Or is there something missing in the mapping?
|
defect
|
nominatim doesn t report pois mapped as a relation when a poi e g camping is mapped as a relation of closed ways several pieces of land it is not reported as a result of the search e g looking for camping oostwold is returning campings camping pool is missing or is there something missing in the mapping
| 1
|
57,408
| 15,768,599,561
|
IssuesEvent
|
2021-03-31 17:24:27
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
Incompatibility between periodic boundary conditions and a Lagrange multiplier constraint implemented as a scalarkerel/scalarvariable in tensor_mechanics
|
C: MOOSE C: libMesh P: normal T: defect
|
## Bug Description
I am trying to impose Hill-Mandel type constraints on a periodic RVE with the ability to impose arbitrary macroscale stress or strain conditions on the cell. I would like to impose periodic BCs using the existing MOOSE system and then impose the cell average constraint using Lagrange multipliers.
As an initial test I've implemented the approach in a branch on my fork (https://github.com/reverendbedford/moose/tree/lagrange_periodic) for small strains and just for control over the macroscale stress. This branch works if you just impose the cell-average constraint (+ constrain rigid body modes) but does not work if you then turn on periodic boundary conditions. Whatever MOOSE does to enforce the periodic conditions affects the Jacobian contributions of the Lagrange multiplier, making the system really, really singular (essentially the Lagrange multiplier equation becomes 0 = 0).
## Steps to Reproduce
1) Clone https://github.com/reverendbedford/moose/tree/lagrange_periodic. This includes only two new objects:
a) A scalarkernel to enforce the Lagrange multiplier constraint: DummyLagrange (though not really, see below)
b) A kernel to provide the off-diagonal entries to the StressDivergenceTensor Jacobian: HomogenizationConstraint (though not really, see below)
2) Run the examples in the modules/tensor_mechanics_examples/lagrange_periodic directory
I provided 4 1D examples. All the same problems occur in 2D and 3D.
1) 1d-noperiodic-works.i: just the lagrange multiplier, does what it should.
2) 1d-pbcs-fails.i: lagrange multiplier + PBCs: fails to converge
3) 1d-periodic-nodecon-penalty-works.i: applies periodic boundary conditions using node-node constraints and the penalty formulation: works
4) 1d-periodic-nodecon-kinematic-fails.i: applies periodic conditions using node-node constraints and the kinematic formulation: fails
So things don't work if periodic constraints rearrange the system of equations. Things work if the equations aren't rearranged (either no periodicity or enforce it with a penalty).
## Impact
We've discussed an alternate implementation on the mailing list (something like the current GeneralStrain capability but with a stress so that it works for large deformations) but I would much prefer this method as it seems much cleaner.
## Potential causes
1) Actually my DummLagrange scalarkernel does nothing except provide an explicit 0 for the on-diagonal Jacobian for the lagrange multiplier (this zero is the correct exact Jacobian). All the work in imposing the residual and off-diagonal (non-zero) Jacobian contributions is done in the HomogenizationConstraint kernel. This is very non-MOOSE like, but as the example in moose/test/tests/kernels/scalar_constraint points out, it is much more efficient than doing things with the proper division of labor between the kernel and the scalarkernel. Does this oddity cause my issue though?
2) The system is actually unstable with the PBCs. I don't think so as
a) The penalty method works
b) It turns out in 2D just imposing the average stress constraint by itself results in an unstable system, presumably because you need the periodic constraint to fully-constrain things.
3) An actual bug with PBCs.
## Why I think this might actually be a bug and not a problem with how I'm trying to implement things
If you compare the Jacobian between the 1d-noperiodic-works case and the 1d-pbcs-fails case you see that MOOSE is actually altering the linear equations provided by the Lagrange multiplier (the last row of the matrix contains non-zero entries for the first case and is all zeros for the second case). Why should the periodic constraint on disp_x do anything to that equation?
|
1.0
|
Incompatibility between periodic boundary conditions and a Lagrange multiplier constraint implemented as a scalarkerel/scalarvariable in tensor_mechanics - ## Bug Description
I am trying to impose Hill-Mandel type constraints on a periodic RVE with the ability to impose arbitrary macroscale stress or strain conditions on the cell. I would like to impose periodic BCs using the existing MOOSE system and then impose the cell average constraint using Lagrange multipliers.
As an initial test I've implemented the approach in a branch on my fork (https://github.com/reverendbedford/moose/tree/lagrange_periodic) for small strains and just for control over the macroscale stress. This branch works if you just impose the cell-average constraint (+ constrain rigid body modes) but does not work if you then turn on periodic boundary conditions. Whatever MOOSE does to enforce the periodic conditions affects the Jacobian contributions of the Lagrange multiplier, making the system really, really singular (essentially the Lagrange multiplier equation becomes 0 = 0).
## Steps to Reproduce
1) Clone https://github.com/reverendbedford/moose/tree/lagrange_periodic. This includes only two new objects:
a) A scalarkernel to enforce the Lagrange multiplier constraint: DummyLagrange (though not really, see below)
b) A kernel to provide the off-diagonal entries to the StressDivergenceTensor Jacobian: HomogenizationConstraint (though not really, see below)
2) Run the examples in the modules/tensor_mechanics_examples/lagrange_periodic directory
I provided 4 1D examples. All the same problems occur in 2D and 3D.
1) 1d-noperiodic-works.i: just the lagrange multiplier, does what it should.
2) 1d-pbcs-fails.i: lagrange multiplier + PBCs: fails to converge
3) 1d-periodic-nodecon-penalty-works.i: applies periodic boundary conditions using node-node constraints and the penalty formulation: works
4) 1d-periodic-nodecon-kinematic-fails.i: applies periodic conditions using node-node constraints and the kinematic formulation: fails
So things don't work if periodic constraints rearrange the system of equations. Things work if the equations aren't rearranged (either no periodicity or enforce it with a penalty).
## Impact
We've discussed an alternate implementation on the mailing list (something like the current GeneralStrain capability but with a stress so that it works for large deformations) but I would much prefer this method as it seems much cleaner.
## Potential causes
1) Actually my DummLagrange scalarkernel does nothing except provide an explicit 0 for the on-diagonal Jacobian for the lagrange multiplier (this zero is the correct exact Jacobian). All the work in imposing the residual and off-diagonal (non-zero) Jacobian contributions is done in the HomogenizationConstraint kernel. This is very non-MOOSE like, but as the example in moose/test/tests/kernels/scalar_constraint points out, it is much more efficient than doing things with the proper division of labor between the kernel and the scalarkernel. Does this oddity cause my issue though?
2) The system is actually unstable with the PBCs. I don't think so as
a) The penalty method works
b) It turns out in 2D just imposing the average stress constraint by itself results in an unstable system, presumably because you need the periodic constraint to fully-constrain things.
3) An actual bug with PBCs.
## Why I think this might actually be a bug and not a problem with how I'm trying to implement things
If you compare the Jacobian between the 1d-noperiodic-works case and the 1d-pbcs-fails case you see that MOOSE is actually altering the linear equations provided by the Lagrange multiplier (the last row of the matrix contains non-zero entries for the first case and is all zeros for the second case). Why should the periodic constraint on disp_x do anything to that equation?
|
defect
|
incompatibility between periodic boundary conditions and a lagrange multiplier constraint implemented as a scalarkerel scalarvariable in tensor mechanics bug description i am trying to impose hill mandel type constraints on a periodic rve with the ability to impose arbitrary macroscale stress or strain conditions on the cell i would like to impose periodic bcs using the existing moose system and then impose the cell average constraint using lagrange multipliers as an initial test i ve implemented the approach in a branch on my fork for small strains and just for control over the macroscale stress this branch works if you just impose the cell average constraint constrain rigid body modes but does not work if you then turn on periodic boundary conditions whatever moose does to enforce the periodic conditions affects the jacobian contributions of the lagrange multiplier making the system really really singular essentially the lagrange multiplier equation becomes steps to reproduce clone this includes only two new objects a a scalarkernel to enforce the lagrange multiplier constraint dummylagrange though not really see below b a kernel to provide the off diagonal entries to the stressdivergencetensor jacobian homogenizationconstraint though not really see below run the examples in the modules tensor mechanics examples lagrange periodic directory i provided examples all the same problems occur in and noperiodic works i just the lagrange multiplier does what it should pbcs fails i lagrange multiplier pbcs fails to converge periodic nodecon penalty works i applies periodic boundary conditions using node node constraints and the penalty formulation works periodic nodecon kinematic fails i applies periodic conditions using node node constraints and the kinematic formulation fails so things don t work if periodic constraints rearrange the system of equations things work if the equations aren t rearranged either no periodicity or enforce it with a penalty impact we ve discussed an alternate implementation on the mailing list something like the current generalstrain capability but with a stress so that it works for large deformations but i would much prefer this method as it seems much cleaner potential causes actually my dummlagrange scalarkernel does nothing except provide an explicit for the on diagonal jacobian for the lagrange multiplier this zero is the correct exact jacobian all the work in imposing the residual and off diagonal non zero jacobian contributions is done in the homogenizationconstraint kernel this is very non moose like but as the example in moose test tests kernels scalar constraint points out it is much more efficient than doing things with the proper division of labor between the kernel and the scalarkernel does this oddity cause my issue though the system is actually unstable with the pbcs i don t think so as a the penalty method works b it turns out in just imposing the average stress constraint by itself results in an unstable system presumably because you need the periodic constraint to fully constrain things an actual bug with pbcs why i think this might actually be a bug and not a problem with how i m trying to implement things if you compare the jacobian between the noperiodic works case and the pbcs fails case you see that moose is actually altering the linear equations provided by the lagrange multiplier the last row of the matrix contains non zero entries for the first case and is all zeros for the second case why should the periodic constraint on disp x do anything to that equation
| 1
|
76,565
| 7,539,945,121
|
IssuesEvent
|
2018-04-17 03:27:48
|
omegaup/omegaup
|
https://api.github.com/repos/omegaup/omegaup
|
closed
|
Identity refactor: phase 4
|
5 P1 omegaUp For Contests
|
This is part of feature #1300
https://docs.google.com/document/d/1COTwqOYNvuYER5jkeKZWJXxSdVn_2X-CjoDolmEiQ1Y/edit#
Change Clarifications to reference the identity instead of the user.
|
1.0
|
Identity refactor: phase 4 - This is part of feature #1300
https://docs.google.com/document/d/1COTwqOYNvuYER5jkeKZWJXxSdVn_2X-CjoDolmEiQ1Y/edit#
Change Clarifications to reference the identity instead of the user.
|
non_defect
|
identity refactor phase this is part of feature change clarifications to reference the identity instead of the user
| 0
|