Dataset schema (one row per column: name, dtype, observed min/max or class count):

column         dtype           stats
Unnamed: 0     int64           0 to 832k
id             float64         2.49B to 32.1B
type           stringclasses   1 value
created_at     stringlengths   19 to 19
repo           stringlengths   5 to 112
repo_url       stringlengths   34 to 141
action         stringclasses   3 values
title          stringlengths   1 to 757
labels         stringlengths   4 to 664
body           stringlengths   3 to 261k
index          stringclasses   10 values
text_combine   stringlengths   96 to 261k
label          stringclasses   2 values
text           stringlengths   96 to 232k
binary_label   int64           0 to 1
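The `label` and `binary_label` columns appear to carry the same information in two encodings. A minimal Python sketch of the mapping, inferred from the sample rows below (an inference from the visible data, not documented dataset behavior; the names `LABEL_TO_BINARY` and `encode_label` are illustrative):

```python
# Mapping inferred from the sample rows: every "defect" row carries
# binary_label 1, every "non_defect" row carries binary_label 0.
LABEL_TO_BINARY = {"defect": 1, "non_defect": 0}

def encode_label(label: str) -> int:
    """Map a `label` string to its `binary_label` encoding."""
    return LABEL_TO_BINARY[label]
```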
Unnamed: 0: 25,362
id: 4,306,653,918
type: IssuesEvent
created_at: 2016-07-21 04:47:46
repo: kraigs-android/kraigsandroid
repo_url: https://api.github.com/repos/kraigs-android/kraigsandroid
action: closed
title: Need bigger Dismiss slider for us nearsighted folks for Alarm Klock; best. app. ever. otherwise.
labels: auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1.Anything 2. 3. What is the expected output? What do you see instead? Have a hell of a time sliding the Dismiss slider when groggy. Sometimes the phone slides or rotates when trying to use slider on the table. Just too What version of the product are you using? On what operating system? MotoDroid, whatever latest Klock and OS are Please provide any additional information below. Really appreciate it. This thing is great, best app and by far the most frequently used on my phone. Just frustrating to get it to shut up once it goes off! Thanky kindly, Craig. Now if only the phone access slider were bigger, too... Nate W., Oak Harbor, OH; AKA Oogie Wa Wa ``` Original issue reported on code.google.com by `oogi...@gmail.com` on 4 Jan 2012 at 10:46
index: 1.0
Need bigger Dismiss slider for us nearsighted folks for Alarm Klock; best. app. ever. otherwise. - ``` What steps will reproduce the problem? 1.Anything 2. 3. What is the expected output? What do you see instead? Have a hell of a time sliding the Dismiss slider when groggy. Sometimes the phone slides or rotates when trying to use slider on the table. Just too What version of the product are you using? On what operating system? MotoDroid, whatever latest Klock and OS are Please provide any additional information below. Really appreciate it. This thing is great, best app and by far the most frequently used on my phone. Just frustrating to get it to shut up once it goes off! Thanky kindly, Craig. Now if only the phone access slider were bigger, too... Nate W., Oak Harbor, OH; AKA Oogie Wa Wa ``` Original issue reported on code.google.com by `oogi...@gmail.com` on 4 Jan 2012 at 10:46
label: defect
need bigger dismiss slider for us nearsighted folks for alarm klock best app ever otherwise what steps will reproduce the problem anything what is the expected output what do you see instead have a hell of a time sliding the dismiss slider when groggy sometimes the phone slides or rotates when trying to use slider on the table just too what version of the product are you using on what operating system motodroid whatever latest klock and os are please provide any additional information below really appreciate it this thing is great best app and by far the most frequently used on my phone just frustrating to get it to shut up once it goes off thanky kindly craig now if only the phone access slider were bigger too nate w oak harbor oh aka oogie wa wa original issue reported on code google com by oogi gmail com on jan at
binary_label: 1
Unnamed: 0: 7,578
id: 2,603,730,619
type: IssuesEvent
created_at: 2015-02-24 17:37:55
repo: w33tmaricich/OpenCV-Eye-Detection
repo_url: https://api.github.com/repos/w33tmaricich/OpenCV-Eye-Detection
action: opened
title: Output Specification Crash
labels: bug current priority
Can Not Specify Output File - When you attempt to specify an output file, the program crashes. It seems to crash when you are attempting to execute the haar cascade search algorithm at around line 200 ``` haar_left.detectMultiScale(imgCopy, lefts); haar_right.detectMultiScale(imgCopy, rights); haar_pair.detectMultiScale(imgCopy, pairs); ``` If this is something wrong with opencv i need to figure out what it is. This issue never arose before. I should look at past code and see what has changed between then and now.
index: 1.0
Output Specification Crash - Can Not Specify Output File - When you attempt to specify an output file, the program crashes. It seems to crash when you are attempting to execute the haar cascade search algorithm at around line 200 ``` haar_left.detectMultiScale(imgCopy, lefts); haar_right.detectMultiScale(imgCopy, rights); haar_pair.detectMultiScale(imgCopy, pairs); ``` If this is something wrong with opencv i need to figure out what it is. This issue never arose before. I should look at past code and see what has changed between then and now.
label: non_defect
output specification crash can not specify output file when you attempt to specify an output file the program crashes it seems to crash when you are attempting to execute the haar cascade search algorithm at around line haar left detectmultiscale imgcopy lefts haar right detectmultiscale imgcopy rights haar pair detectmultiscale imgcopy pairs if this is something wrong with opencv i need to figure out what it is this issue never arose before i should look at past code and see what has changed between then and now
binary_label: 0
Unnamed: 0: 16,816
id: 2,948,312,481
type: IssuesEvent
created_at: 2015-07-06 01:19:47
repo: Winetricks/winetricks
repo_url: https://api.github.com/repos/Winetricks/winetricks
action: closed
title: wishlist: software BestPractice
labels: auto-migrated Priority-Medium Type-Defect
``` Forwarded from https://bugs.launchpad.net/ubuntu/+source/winetricks/+bug/718643 Wishlist to add package: BestPractice is a musician's practice tool, to slow down or speed up music, either from an MP3 file or directly from a CD. Ordinarily the sound is distorted when slowed down our sped up - you get the effect like when playing a 33 rpm record on 45 rpm speed (remember the Chipmunks?). BestPractice tries to correct this, so you can slow down and speed up music, while keeping the original pitch. It is also possible to change the pitch of the music without affecting its tempo. Play along with for instance Eb tuned guitars without retuning your own, or slow down that high-speed guitar solo on a CD that you like to learn. Website: http://bestpractice.sourceforge.net/ -- http://sourceforge.net/projects/bestpractice/ License: GNU General Public License (GPL) ``` Original issue reported on code.google.com by `jari.aalto.fi@gmail.com` on 24 Aug 2012 at 10:41
index: 1.0
wishlist: software BestPractice - ``` Forwarded from https://bugs.launchpad.net/ubuntu/+source/winetricks/+bug/718643 Wishlist to add package: BestPractice is a musician's practice tool, to slow down or speed up music, either from an MP3 file or directly from a CD. Ordinarily the sound is distorted when slowed down our sped up - you get the effect like when playing a 33 rpm record on 45 rpm speed (remember the Chipmunks?). BestPractice tries to correct this, so you can slow down and speed up music, while keeping the original pitch. It is also possible to change the pitch of the music without affecting its tempo. Play along with for instance Eb tuned guitars without retuning your own, or slow down that high-speed guitar solo on a CD that you like to learn. Website: http://bestpractice.sourceforge.net/ -- http://sourceforge.net/projects/bestpractice/ License: GNU General Public License (GPL) ``` Original issue reported on code.google.com by `jari.aalto.fi@gmail.com` on 24 Aug 2012 at 10:41
label: defect
wishlist software bestpractice forwarded from wishlist to add package bestpractice is a musician s practice tool to slow down or speed up music either from an file or directly from a cd ordinarily the sound is distorted when slowed down our sped up you get the effect like when playing a rpm record on rpm speed remember the chipmunks bestpractice tries to correct this so you can slow down and speed up music while keeping the original pitch it is also possible to change the pitch of the music without affecting its tempo play along with for instance eb tuned guitars without retuning your own or slow down that high speed guitar solo on a cd that you like to learn website license gnu general public license gpl original issue reported on code google com by jari aalto fi gmail com on aug at
binary_label: 1
Unnamed: 0: 60,988
id: 17,023,573,876
type: IssuesEvent
created_at: 2021-07-03 02:43:43
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: closed
title: issues with rendering railway=tram and light_rail
labels: Component: mapnik Priority: major Resolution: duplicate Type: defect
**[Submitted to the original trac issue database at 2.02am, Tuesday, 6th April 2010]** First, I'll say that there's no clear real-world line between tram and light_rail, so they should be rendered similarly (which they are). One of the few places with justifiable usage of both is Portland, Oregon: http://www.openstreetmap.org/?lat=45.5158&lon=-122.6727&zoom=14&layers=B000FTF. You can see that at some zooms tram is thinner than light_rail (which makes sense). That said, there are some issues. This may not be an exhaustive list, but I've noticed: [[BR]] *Street-running segments of light_rail, in other words ways tagged highway=* and rail=light_rail, are only rendered as highway. [[BR]] *Tram bridges, in other words ways tagged railway=tram and bridge=yes, are not rendered as bridges. [[BR]] Both of these cases are rendered properly if the other railway tag is used, which has led some people (myself included, sadly) to tag for the renderer, taking advantage of the fact that there is no clear line between tram and light_rail.
index: 1.0
issues with rendering railway=tram and light_rail - **[Submitted to the original trac issue database at 2.02am, Tuesday, 6th April 2010]** First, I'll say that there's no clear real-world line between tram and light_rail, so they should be rendered similarly (which they are). One of the few places with justifiable usage of both is Portland, Oregon: http://www.openstreetmap.org/?lat=45.5158&lon=-122.6727&zoom=14&layers=B000FTF. You can see that at some zooms tram is thinner than light_rail (which makes sense). That said, there are some issues. This may not be an exhaustive list, but I've noticed: [[BR]] *Street-running segments of light_rail, in other words ways tagged highway=* and rail=light_rail, are only rendered as highway. [[BR]] *Tram bridges, in other words ways tagged railway=tram and bridge=yes, are not rendered as bridges. [[BR]] Both of these cases are rendered properly if the other railway tag is used, which has led some people (myself included, sadly) to tag for the renderer, taking advantage of the fact that there is no clear line between tram and light_rail.
label: defect
issues with rendering railway tram and light rail first i ll say that there s no clear real world line between tram and light rail so they should be rendered similarly which they are one of the few places with justifiable usage of both is portland oregon you can see that at some zooms tram is thinner than light rail which makes sense that said there are some issues this may not be an exhaustive list but i ve noticed street running segments of light rail in other words ways tagged highway and rail light rail are only rendered as highway tram bridges in other words ways tagged railway tram and bridge yes are not rendered as bridges both of these cases are rendered properly if the other railway tag is used which has led some people myself included sadly to tag for the renderer taking advantage of the fact that there is no clear line between tram and light rail
binary_label: 1
Unnamed: 0: 19,959
id: 3,284,036,506
type: IssuesEvent
created_at: 2015-10-28 15:12:45
repo: Axinom/x2js
repo_url: https://api.github.com/repos/Axinom/x2js
action: closed
title: Set a null prefix
labels: auto-migrated Priority-Medium Type-Defect
``` I'd like to have no prefix set by the parsing at all. The issue comes in from: config.attributePrefix = config.attributePrefix || "_"; When passing in an empty string (attributePrefix: ""), this evaluates to false, and the underscore is set as the prefix. ``` Original issue reported on code.google.com by `colinwma...@gmail.com` on 7 Oct 2014 at 8:29
index: 1.0
Set a null prefix - ``` I'd like to have no prefix set by the parsing at all. The issue comes in from: config.attributePrefix = config.attributePrefix || "_"; When passing in an empty string (attributePrefix: ""), this evaluates to false, and the underscore is set as the prefix. ``` Original issue reported on code.google.com by `colinwma...@gmail.com` on 7 Oct 2014 at 8:29
label: defect
set a null prefix i d like to have no prefix set by the parsing at all the issue comes in from config attributeprefix config attributeprefix when passing in an empty string attributeprefix this evaluates to false and the underscore is set as the prefix original issue reported on code google com by colinwma gmail com on oct at
binary_label: 1
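The x2js record above describes a falsy-default bug: `config.attributePrefix || "_"` replaces an explicitly empty prefix with the underscore because `""` is falsy. The same pitfall reads naturally in Python, where `or` also treats an empty string as missing. A minimal sketch (the function names are illustrative, not from x2js):

```python
def attribute_prefix_buggy(configured):
    # Mirrors `config.attributePrefix || "_"`: an empty string is falsy,
    # so a caller's explicit "" is silently replaced by "_".
    return configured or "_"

def attribute_prefix_fixed(configured):
    # Fall back only when the option was genuinely omitted (None here),
    # so an explicit empty prefix is respected.
    return "_" if configured is None else configured
```

In modern JavaScript the equivalent fix would use nullish coalescing (`config.attributePrefix ?? "_"`), which falls back only on `null`/`undefined`.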
Unnamed: 0: 84,871
id: 10,419,728,149
type: IssuesEvent
created_at: 2019-09-15 18:42:09
repo: jmfayard/buildSrcVersions
repo_url: https://api.github.com/repos/jmfayard/buildSrcVersions
action: closed
title: Don't apply the gradle-versions-plugin
labels: documentation
If you see this error: ``` An exception occurred applying plugin request [id: 'de.fayard.buildSrcVersions', version: '0.5.0'] > Failed to apply plugin [id 'de.fayard.buildSrcVersions'] > Cannot add task 'dependencyUpdates' as a task with that name already exists. ``` then remove the gradle-versions-plugin ```diff plugins { - id("com.github.ben-manes.versions") version "0.25.0" id("de.fayard.buildSrcVersions") version "0.5.0" } ``` It's already applied by the plugin
index: 1.0
Don't apply the gradle-versions-plugin - If you see this error: ``` An exception occurred applying plugin request [id: 'de.fayard.buildSrcVersions', version: '0.5.0'] > Failed to apply plugin [id 'de.fayard.buildSrcVersions'] > Cannot add task 'dependencyUpdates' as a task with that name already exists. ``` then remove the gradle-versions-plugin ```diff plugins { - id("com.github.ben-manes.versions") version "0.25.0" id("de.fayard.buildSrcVersions") version "0.5.0" } ``` It's already applied by the plugin
label: non_defect
don t apply the gradle versions plugin if you see this error an exception occurred applying plugin request failed to apply plugin cannot add task dependencyupdates as a task with that name already exists then remove the gradle versions plugin diff plugins id com github ben manes versions version id de fayard buildsrcversions version it s already applied by the plugin
binary_label: 0
Unnamed: 0: 26,568
id: 4,761,280,703
type: IssuesEvent
created_at: 2016-10-25 07:40:05
repo: primefaces/primeng
repo_url: https://api.github.com/repos/primefaces/primeng
action: closed
title: Disabled radio still clickable via label
labels: defect
When you have a radio button that is disabled you can still click on the text beside the radio button to toggle it. For example, dispite the fact that `BAR` is disabled, I can still click on the text `BAR` to select the button: ![image](https://cloud.githubusercontent.com/assets/5657503/18934943/bf1d0c68-85aa-11e6-9c9b-f444db554b68.png) After Click: ![image](https://cloud.githubusercontent.com/assets/5657503/18934978/fe1f6a96-85aa-11e6-8142-54bcbf85d314.png) Code: ``` <p-radioButton class="radio-holder" value="bar" label="BAR" [disabled]="true" [(ngModel)]="foo" ></p-radioButton> <p-radioButton class="radio-holder" value="baz" label="BAZ" [disabled]="true" [(ngModel)]="foo" ></p-radioButton> ``` Clicking on the actual button works as selected (it won't change the selected state).
index: 1.0
Disabled radio still clickable via label - When you have a radio button that is disabled you can still click on the text beside the radio button to toggle it. For example, dispite the fact that `BAR` is disabled, I can still click on the text `BAR` to select the button: ![image](https://cloud.githubusercontent.com/assets/5657503/18934943/bf1d0c68-85aa-11e6-9c9b-f444db554b68.png) After Click: ![image](https://cloud.githubusercontent.com/assets/5657503/18934978/fe1f6a96-85aa-11e6-8142-54bcbf85d314.png) Code: ``` <p-radioButton class="radio-holder" value="bar" label="BAR" [disabled]="true" [(ngModel)]="foo" ></p-radioButton> <p-radioButton class="radio-holder" value="baz" label="BAZ" [disabled]="true" [(ngModel)]="foo" ></p-radioButton> ``` Clicking on the actual button works as selected (it won't change the selected state).
label: defect
disabled radio still clickable via label when you have a radio button that is disabled you can still click on the text beside the radio button to toggle it for example dispite the fact that bar is disabled i can still click on the text bar to select the button after click code p radiobutton class radio holder value bar label bar true foo p radiobutton class radio holder value baz label baz true foo clicking on the actual button works as selected it won t change the selected state
binary_label: 1
Unnamed: 0: 70,230
id: 23,062,988,622
type: IssuesEvent
created_at: 2022-07-25 11:35:59
repo: primefaces/primeng
repo_url: https://api.github.com/repos/primefaces/primeng
action: opened
title: MultiSelect: NVDA/JAWS should announce the combo box (multiselect) items are checked or unchecked when it receives keyboard focus.
labels: defect
### Describe the bug When using Multiselect component NVDA/JAWS should announce the combo box items are checked or unchecked when it receives keyboard focus. The expected output was achieved with the earlier version of Primeng. The current version does not have html input element inside div element. Example <div> <input type=” checkbox”> </div> ### Environment Production ### Reproducer https://www.primefaces.org/primeng/multiselect ### Angular version 13.3.1 ### PrimeNG version 13.3.0 ### Build / Runtime Angular CLI App ### Language TypeScript ### Node version (for AoT issues node --version) v14.18.3 ### Browser(s) _No response_ ### Steps to reproduce the behavior We have verified the issue on below Primeng URL URL: (https://www.primefaces.org/primeng/multiselect) Steps to reproduce: 1.Start NVDA, once the above URL is open. 2. Also, add [virtualScroll]="true" so that the NVDA will start reading the data from the multiselect. <h5>Basic</h5> <p-multiSelect [options]="cities" [(ngModel)]="selectedCities1" defaultLabel="Select a City" optionLabel="name" **[virtualScroll]="true">** </p-multiSelect> 4. To navigate between the multiselect items use alt+down arrow key. 5. When we click on enter to check/uncheck the checkbox NVDA/JAWS does not read if the checkbox is checked or unchecked. ### Expected behavior When using Multiselect component NVDA/JAWS should announce the combo box items are checked or unchecked when it receives keyboard focus.
index: 1.0
MultiSelect: NVDA/JAWS should announce the combo box (multiselect) items are checked or unchecked when it receives keyboard focus. - ### Describe the bug When using Multiselect component NVDA/JAWS should announce the combo box items are checked or unchecked when it receives keyboard focus. The expected output was achieved with the earlier version of Primeng. The current version does not have html input element inside div element. Example <div> <input type=” checkbox”> </div> ### Environment Production ### Reproducer https://www.primefaces.org/primeng/multiselect ### Angular version 13.3.1 ### PrimeNG version 13.3.0 ### Build / Runtime Angular CLI App ### Language TypeScript ### Node version (for AoT issues node --version) v14.18.3 ### Browser(s) _No response_ ### Steps to reproduce the behavior We have verified the issue on below Primeng URL URL: (https://www.primefaces.org/primeng/multiselect) Steps to reproduce: 1.Start NVDA, once the above URL is open. 2. Also, add [virtualScroll]="true" so that the NVDA will start reading the data from the multiselect. <h5>Basic</h5> <p-multiSelect [options]="cities" [(ngModel)]="selectedCities1" defaultLabel="Select a City" optionLabel="name" **[virtualScroll]="true">** </p-multiSelect> 4. To navigate between the multiselect items use alt+down arrow key. 5. When we click on enter to check/uncheck the checkbox NVDA/JAWS does not read if the checkbox is checked or unchecked. ### Expected behavior When using Multiselect component NVDA/JAWS should announce the combo box items are checked or unchecked when it receives keyboard focus.
label: defect
multiselect nvda jaws should announce the combo box multiselect items are checked or unchecked when it receives keyboard focus describe the bug when using multiselect component nvda jaws should announce the combo box items are checked or unchecked when it receives keyboard focus the expected output was achieved with the earlier version of primeng the current version does not have html input element inside div element example environment production reproducer angular version primeng version build runtime angular cli app language typescript node version for aot issues node version browser s no response steps to reproduce the behavior we have verified the issue on below primeng url url steps to reproduce start nvda once the above url is open also add true so that the nvda will start reading the data from the multiselect basic to navigate between the multiselect items use alt down arrow key when we click on enter to check uncheck the checkbox nvda jaws does not read if the checkbox is checked or unchecked expected behavior when using multiselect component nvda jaws should announce the combo box items are checked or unchecked when it receives keyboard focus
binary_label: 1
Unnamed: 0: 28,726
id: 5,345,005,686
type: IssuesEvent
created_at: 2017-02-17 15:55:42
repo: chandanbansal/gmail-backup
repo_url: https://api.github.com/repos/chandanbansal/gmail-backup
action: closed
title: Network error occured, disconnected Trying to reconnect (1)
labels: auto-migrated Priority-Medium Type-Defect
``` I have a problem when i try to restore one account, this account when recover 4,5% have a error below: Restored 4.5%: xxx@xxx.xxx.xx - Re: [subject] - Proposta - Painel Testes Network error occured, disconnected Trying to reconnect (1) Reconnected! I use a gmail backup revision 15. Anyone can help me? tks and sorry my english ```
index: 1.0
Network error occured, disconnected Trying to reconnect (1) - ``` I have a problem when i try to restore one account, this account when recover 4,5% have a error below: Restored 4.5%: xxx@xxx.xxx.xx - Re: [subject] - Proposta - Painel Testes Network error occured, disconnected Trying to reconnect (1) Reconnected! I use a gmail backup revision 15. Anyone can help me? tks and sorry my english ```
label: defect
network error occured disconnected trying to reconnect i have a problem when i try to restore one account this account when recover have a error below restored xxx xxx xxx xx re proposta painel testes network error occured disconnected trying to reconnect reconnected i use a gmail backup revision anyone can help me tks and sorry my english
binary_label: 1
Unnamed: 0: 244,608
id: 18,763,240,122
type: IssuesEvent
created_at: 2021-11-05 19:13:54
repo: kanellakise/group1-groupProject1
repo_url: https://api.github.com/repos/kanellakise/group1-groupProject1
action: closed
title: Revise README
labels: documentation
In the README, the repository and deployed website links are in the wrong places (they're swapped) We could add more information, like info from our presentation about future development goals and development challenges.
index: 1.0
Revise README - In the README, the repository and deployed website links are in the wrong places (they're swapped) We could add more information, like info from our presentation about future development goals and development challenges.
label: non_defect
revise readme in the readme the repository and deployed website links are in the wrong places they re swapped we could add more information like info from our presentation about future development goals and development challenges
binary_label: 0
Unnamed: 0: 19,669
id: 3,237,383,570
type: IssuesEvent
created_at: 2015-10-14 11:35:17
repo: dart-lang/sdk
repo_url: https://api.github.com/repos/dart-lang/sdk
action: opened
title: js_dart_to_string_test fails on Dartium
labels: Area-Dartium Priority-Medium Type-Defect
The test html/js_dart_to_string_test is failing with a RuntimeError on Dartium and drt (content shell). The failure is: FAIL 1 FAIL Expectation: toString custom dart . Expected: '#fooBar#' Actual: '[object DartObject]' Which: is different. Expected: #fooBar# Actual: [object Da ... ^ Differ at offset 0 package:unittest/src/simple_configuration.dart 128:34 SimpleConfiguration.onExpectFailure package:unittest/src/simple_configuration.dart 24:13 _ExpectFailureHandler.fail package:unittest/src/matcher/expect.dart 121:5 DefaultFailureHandler.failMatch package:unittest/src/matcher/expect.dart 95:20 expect http://127.0.0.1:34265/root_dart/tests/html/js_dart_to_string_test.dart 42:7 main.<fn>.<fn> package:unittest/src/internal_test_case.dart 120:37 InternalTestCase.run.<fn> dart:async/zone.dart 914 _rootRunUnary
index: 1.0
js_dart_to_string_test fails on Dartium - The test html/js_dart_to_string_test is failing with a RuntimeError on Dartium and drt (content shell). The failure is: FAIL 1 FAIL Expectation: toString custom dart . Expected: '#fooBar#' Actual: '[object DartObject]' Which: is different. Expected: #fooBar# Actual: [object Da ... ^ Differ at offset 0 package:unittest/src/simple_configuration.dart 128:34 SimpleConfiguration.onExpectFailure package:unittest/src/simple_configuration.dart 24:13 _ExpectFailureHandler.fail package:unittest/src/matcher/expect.dart 121:5 DefaultFailureHandler.failMatch package:unittest/src/matcher/expect.dart 95:20 expect http://127.0.0.1:34265/root_dart/tests/html/js_dart_to_string_test.dart 42:7 main.<fn>.<fn> package:unittest/src/internal_test_case.dart 120:37 InternalTestCase.run.<fn> dart:async/zone.dart 914 _rootRunUnary
label: defect
js dart to string test fails on dartium the test html js dart to string test is failing with a runtimeerror on dartium and drt content shell the failure is fail fail expectation tostring custom dart expected foobar actual which is different expected foobar actual object da differ at offset package unittest src simple configuration dart simpleconfiguration onexpectfailure package unittest src simple configuration dart expectfailurehandler fail package unittest src matcher expect dart defaultfailurehandler failmatch package unittest src matcher expect dart expect main package unittest src internal test case dart internaltestcase run dart async zone dart rootrununary
binary_label: 1
Unnamed: 0: 53,754
id: 13,262,239,227
type: IssuesEvent
created_at: 2020-08-20 21:22:21
repo: icecube-trac/tix4
repo_url: https://api.github.com/repos/icecube-trac/tix4
action: closed
title: Revisit Random Number Generation (Trac #2013)
labels: Migrated from Trac combo core defect
Currently icetray has its own random number generator interface. With 3 instances: * I3TRandom: ROOT's implementation of mt19937 * I3GSLRandom: uses GSL's implementation of mt19937 but can be changed with and environment variable * I3SPRNGRandomService: combines the output of SPRNG and GSL for reasons which are not adequately explained in the documentation. Random number generation has not been revisited in the past 13 years. Things to consider: * boost/c++11 has a random number generator interface which has all the functions we need (except for the unused `PoissonD`) * we only use version 2.0a of SPRNG, newer versions are available * SPRNG has a failure mode where it uses the exact same stream for every job in a batch * SPRNG has a nonstandard install script and who knows how long it will continue to work with current compilers * It is unclear if combining SPRNG and mt19937 is a statistically valid RNG Requirements: * Determine whether to continue to use our custom RNG interface or switch to the c++11 one * Determine if there is a better RNG for batch processing (ie it can derive multiple streams from both the dataset number and the job number which are all independent). Bonus points for having a normal build system. * should be able to set the seed without causing all streams to be identical <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2013">https://code.icecube.wisc.edu/projects/icecube/ticket/2013</a>, reported by kjmeagher and owned by juancarlos</em></summary> <p> ```json { "status": "closed", "changetime": "2019-09-18T05:54:49", "_ts": "1568786089339356", "description": "Currently icetray has its own random number generator interface. \nWith 3 instances:\n * I3TRandom: ROOT's implementation of mt19937\n * I3GSLRandom: uses GSL's implementation of mt19937 but can be changed with and environment variable\n * I3SPRNGRandomService: combines the output of SPRNG and GSL for reasons which are not adequately explained in the documentation. \n\nRandom number generation has not been revisited in the past 13 years. Things to consider:\n * boost/c++11 has a random number generator interface which has all the functions we need (except for the unused `PoissonD`)\n * we only use version 2.0a of SPRNG, newer versions are available\n * SPRNG has a failure mode where it uses the exact same stream for every job in a batch \n * SPRNG has a nonstandard install script and who knows how long it will continue to work with current compilers\n * It is unclear if combining SPRNG and mt19937 is a statistically valid RNG\n\nRequirements: \n * Determine whether to continue to use our custom RNG interface or switch to the c++11 one\n * Determine if there is a better RNG for batch processing (ie it can derive multiple streams from both the dataset number and the job number which are all independent). Bonus points for having a normal build system.\n * should be able to set the seed without causing all streams to be identical \n\n\n\n", "reporter": "kjmeagher", "cc": "cweaver, olivas", "resolution": "duplicate", "time": "2017-05-09T19:53:04", "component": "combo core", "summary": "Revisit Random Number Generation", "priority": "normal", "keywords": "random, rng, gsl, sprng", "milestone": "Long-Term Future", "owner": "juancarlos", "type": "defect" } ``` </p> </details>
index: 1.0
Revisit Random Number Generation (Trac #2013) - Currently icetray has its own random number generator interface. With 3 instances: * I3TRandom: ROOT's implementation of mt19937 * I3GSLRandom: uses GSL's implementation of mt19937 but can be changed with and environment variable * I3SPRNGRandomService: combines the output of SPRNG and GSL for reasons which are not adequately explained in the documentation. Random number generation has not been revisited in the past 13 years. Things to consider: * boost/c++11 has a random number generator interface which has all the functions we need (except for the unused `PoissonD`) * we only use version 2.0a of SPRNG, newer versions are available * SPRNG has a failure mode where it uses the exact same stream for every job in a batch * SPRNG has a nonstandard install script and who knows how long it will continue to work with current compilers * It is unclear if combining SPRNG and mt19937 is a statistically valid RNG Requirements: * Determine whether to continue to use our custom RNG interface or switch to the c++11 one * Determine if there is a better RNG for batch processing (ie it can derive multiple streams from both the dataset number and the job number which are all independent). Bonus points for having a normal build system. * should be able to set the seed without causing all streams to be identical <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2013">https://code.icecube.wisc.edu/projects/icecube/ticket/2013</a>, reported by kjmeagher and owned by juancarlos</em></summary> <p> ```json { "status": "closed", "changetime": "2019-09-18T05:54:49", "_ts": "1568786089339356", "description": "Currently icetray has its own random number generator interface. \nWith 3 instances:\n * I3TRandom: ROOT's implementation of mt19937\n * I3GSLRandom: uses GSL's implementation of mt19937 but can be changed with and environment variable\n * I3SPRNGRandomService: combines the output of SPRNG and GSL for reasons which are not adequately explained in the documentation. \n\nRandom number generation has not been revisited in the past 13 years. Things to consider:\n * boost/c++11 has a random number generator interface which has all the functions we need (except for the unused `PoissonD`)\n * we only use version 2.0a of SPRNG, newer versions are available\n * SPRNG has a failure mode where it uses the exact same stream for every job in a batch \n * SPRNG has a nonstandard install script and who knows how long it will continue to work with current compilers\n * It is unclear if combining SPRNG and mt19937 is a statistically valid RNG\n\nRequirements: \n * Determine whether to continue to use our custom RNG interface or switch to the c++11 one\n * Determine if there is a better RNG for batch processing (ie it can derive multiple streams from both the dataset number and the job number which are all independent). Bonus points for having a normal build system.\n * should be able to set the seed without causing all streams to be identical \n\n\n\n", "reporter": "kjmeagher", "cc": "cweaver, olivas", "resolution": "duplicate", "time": "2017-05-09T19:53:04", "component": "combo core", "summary": "Revisit Random Number Generation", "priority": "normal", "keywords": "random, rng, gsl, sprng", "milestone": "Long-Term Future", "owner": "juancarlos", "type": "defect" } ``` </p> </details>
defect
revisit random number generation trac currently icetray has its own random number generator interface with instances root s implementation of uses gsl s implementation of but can be changed with and environment variable combines the output of sprng and gsl for reasons which are not adequately explained in the documentation random number generation has not been revisited in the past years things to consider boost c has a random number generator interface which has all the functions we need except for the unused poissond we only use version of sprng newer versions are available sprng has a failure mode where it uses the exact same stream for every job in a batch sprng has a nonstandard install script and who knows how long it will continue to work with current compilers it is unclear if combining sprng and is a statistically valid rng requirements determine whether to continue to use our custom rng interface or switch to the c one determine if there is a better rng for batch processing ie it can derive multiple streams from both the dataset number and the job number which are all independent bonus points for having a normal build system should be able to set the seed without causing all streams to be identical migrated from json status closed changetime ts description currently icetray has its own random number generator interface with instances n root s implementation of n uses gsl s implementation of but can be changed with and environment variable n combines the output of sprng and gsl for reasons which are not adequately explained in the documentation n nrandom number generation has not been revisited in the past years things to consider n boost c has a random number generator interface which has all the functions we need except for the unused poissond n we only use version of sprng newer versions are available n sprng has a failure mode where it uses the exact same stream for every job in a batch n sprng has a nonstandard install script and who knows how long it 
will continue to work with current compilers n it is unclear if combining sprng and is a statistically valid rng n nrequirements n determine whether to continue to use our custom rng interface or switch to the c one n determine if there is a better rng for batch processing ie it can derive multiple streams from both the dataset number and the job number which are all independent bonus points for having a normal build system n should be able to set the seed without causing all streams to be identical n n n n reporter kjmeagher cc cweaver olivas resolution duplicate time component combo core summary revisit random number generation priority normal keywords random rng gsl sprng milestone long term future owner juancarlos type defect
1
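The RNG ticket above asks for a generator that can "derive multiple streams from both the dataset number and the job number which are all independent" without SPRNG's identical-stream failure mode. A minimal sketch of that technique, using only the Python standard library (this is not icetray's interface; `job_rng`, the hashing scheme, and the dataset/job numbers are illustrative assumptions):

```python
import hashlib
import random

def job_rng(dataset_number: int, job_number: int) -> random.Random:
    # Derive the seed from BOTH identifiers, so every (dataset, job)
    # pair gets its own stream, and the same pair always reproduces
    # the same stream -- the two requirements named in the ticket.
    digest = hashlib.sha256(f"{dataset_number}:{job_number}".encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

a = job_rng(21002, 0).random()
b = job_rng(21002, 1).random()
assert a != b                            # different jobs -> different streams
assert job_rng(21002, 0).random() == a   # each stream is reproducible
```

Because the seed is a digest of both numbers, setting a per-dataset seed can never cause all job streams to collapse into one, which is the SPRNG batch failure the ticket describes.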
48,041
13,067,410,076
IssuesEvent
2020-07-31 00:21:51
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
[I3Db] IceACT geometry is missing (Trac #1697)
Migrated from Trac combo reconstruction defect
The IceACT geometry is missing in the GCD files since I3Db starts with string 1 but IceACT is at string 0. Issue for 2015 & 2016. It is required to resolve it now since IceACT mainboard will be enabled today. Blocks L2 offline processing. Migrated from https://code.icecube.wisc.edu/ticket/1697 ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "description": "The IceACT geometry is missing in the GCD files since I3Db starts with string 1 but IceACT is at string 0.\n\nIssue for 2015 & 2016.\n\nIt is required to resolve it now since IceACT mainboard will be enabled today.\n\nBlocks L2 offline processing.", "reporter": "joertlin", "cc": "", "resolution": "fixed", "_ts": "1550067117911749", "component": "combo reconstruction", "summary": "[I3Db] IceACT geometry is missing", "priority": "blocker", "keywords": "", "time": "2016-05-10T15:18:53", "milestone": "", "owner": "joertlin", "type": "defect" } ```
1.0
[I3Db] IceACT geometry is missing (Trac #1697) - The IceACT geometry is missing in the GCD files since I3Db starts with string 1 but IceACT is at string 0. Issue for 2015 & 2016. It is required to resolve it now since IceACT mainboard will be enabled today. Blocks L2 offline processing. Migrated from https://code.icecube.wisc.edu/ticket/1697 ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "description": "The IceACT geometry is missing in the GCD files since I3Db starts with string 1 but IceACT is at string 0.\n\nIssue for 2015 & 2016.\n\nIt is required to resolve it now since IceACT mainboard will be enabled today.\n\nBlocks L2 offline processing.", "reporter": "joertlin", "cc": "", "resolution": "fixed", "_ts": "1550067117911749", "component": "combo reconstruction", "summary": "[I3Db] IceACT geometry is missing", "priority": "blocker", "keywords": "", "time": "2016-05-10T15:18:53", "milestone": "", "owner": "joertlin", "type": "defect" } ```
defect
iceact geometry is missing trac the iceact geometry is missing in the gcd files since starts with string but iceact is at string issue for it is required to resolve it now since iceact mainboard will be enabled today blocks offline processing migrated from json status closed changetime description the iceact geometry is missing in the gcd files since starts with string but iceact is at string n nissue for n nit is required to resolve it now since iceact mainboard will be enabled today n nblocks offline processing reporter joertlin cc resolution fixed ts component combo reconstruction summary iceact geometry is missing priority blocker keywords time milestone owner joertlin type defect
1
671,272
22,752,820,399
IssuesEvent
2022-07-07 14:20:17
feast-dev/feast
https://api.github.com/repos/feast-dev/feast
closed
Spark source unable to accept parquet file folder path
kind/bug priority/p2
Spark source has been working well with providing a single parquet file path. But when a folder path is provided it shows following error: ``` To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 07/07/2022 02:43:18 PM ERROR:Spark read of file source failed. Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 184, in get_table_query_string df = spark_session.read.format(self.file_format).load(self.path) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\readwriter.py", line 204, in load return self._df(self._jreader.load(path)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__ return_value = get_return_value( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\utils.py", line 117, in deco raise converted from None pyspark.sql.utils.AnalysisException: Path does not exist: file:/data Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 184, in get_table_query_string df = spark_session.read.format(self.file_format).load(self.path) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\readwriter.py", line 204, in load return self._df(self._jreader.load(path)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__ return_value = get_return_value( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\utils.py", line 117, in deco raise converted from None pyspark.sql.utils.AnalysisException: Path does not exist: file:/data Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\runpy.py", line 192, in _run_module_as_main return _run_code(code, main_globals, None, 
File "C:\Users\Admin\anaconda3\envs\feast_21\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Admin\anaconda3\envs\feast_21\Scripts\feast.exe\__main__.py", line 7, in <module> File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1062, in main rv = self.invoke(ctx) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 763, in invoke return __callback(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\decorators.py", line 26, in new_func return f(get_current_context(), *args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\cli.py", line 489, in apply_total_command apply_total(repo_config, repo, skip_source_validation) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\usage.py", line 280, in wrapper raise exc.with_traceback(traceback) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\usage.py", line 269, in wrapper return func(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\repo_operations.py", line 276, in apply_total apply_total_with_repo_instance( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\repo_operations.py", line 234, in apply_total_with_repo_instance data_source.validate(store.config) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 149, in validate 
self.get_table_column_names_and_types(config) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 165, in get_table_column_names_and_types df = spark_session.sql(f"SELECT * FROM {self.get_table_query_string()}") File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 190, in get_table_query_string df.createOrReplaceTempView(tmp_table_name) UnboundLocalError: local variable 'df' referenced before assignment ```
1.0
Spark source unable to accept parquet file folder path - Spark source has been working well with providing a single parquet file path. But when a folder path is provided it shows following error: ``` To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 07/07/2022 02:43:18 PM ERROR:Spark read of file source failed. Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 184, in get_table_query_string df = spark_session.read.format(self.file_format).load(self.path) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\readwriter.py", line 204, in load return self._df(self._jreader.load(path)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__ return_value = get_return_value( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\utils.py", line 117, in deco raise converted from None pyspark.sql.utils.AnalysisException: Path does not exist: file:/data Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 184, in get_table_query_string df = spark_session.read.format(self.file_format).load(self.path) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\readwriter.py", line 204, in load return self._df(self._jreader.load(path)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__ return_value = get_return_value( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\utils.py", line 117, in deco raise converted from None pyspark.sql.utils.AnalysisException: Path does not exist: file:/data Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\runpy.py", line 192, in 
_run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\Admin\anaconda3\envs\feast_21\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Admin\anaconda3\envs\feast_21\Scripts\feast.exe\__main__.py", line 7, in <module> File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1062, in main rv = self.invoke(ctx) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 763, in invoke return __callback(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\decorators.py", line 26, in new_func return f(get_current_context(), *args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\cli.py", line 489, in apply_total_command apply_total(repo_config, repo, skip_source_validation) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\usage.py", line 280, in wrapper raise exc.with_traceback(traceback) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\usage.py", line 269, in wrapper return func(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\repo_operations.py", line 276, in apply_total apply_total_with_repo_instance( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\repo_operations.py", line 234, in apply_total_with_repo_instance data_source.validate(store.config) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", 
line 149, in validate self.get_table_column_names_and_types(config) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 165, in get_table_column_names_and_types df = spark_session.sql(f"SELECT * FROM {self.get_table_query_string()}") File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 190, in get_table_query_string df.createOrReplaceTempView(tmp_table_name) UnboundLocalError: local variable 'df' referenced before assignment ```
non_defect
spark source unable to accept parquet file folder path spark source has been working well with providing a single parquet file path but when a folder path is provided it shows following error to adjust logging level use sc setloglevel newlevel for sparkr use setloglevel newlevel pm error spark read of file source failed traceback most recent call last file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in get table query string df spark session read format self file format load self path file c users admin envs feast lib site packages pyspark sql readwriter py line in load return self df self jreader load path file c users admin envs feast lib site packages java gateway py line in call return value get return value file c users admin envs feast lib site packages pyspark sql utils py line in deco raise converted from none pyspark sql utils analysisexception path does not exist file data traceback most recent call last file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in get table query string df spark session read format self file format load self path file c users admin envs feast lib site packages pyspark sql readwriter py line in load return self df self jreader load path file c users admin envs feast lib site packages java gateway py line in call return value get return value file c users admin envs feast lib site packages pyspark sql utils py line in deco raise converted from none pyspark sql utils analysisexception path does not exist file data traceback most recent call last file c users admin envs feast lib runpy py line in run module as main return run code code main globals none file c users admin envs feast lib runpy py line in run code exec code run globals file c users admin envs feast scripts feast exe main py line in file c users admin envs feast lib site packages click core py line in call return self main 
args kwargs file c users admin envs feast lib site packages click core py line in main rv self invoke ctx file c users admin envs feast lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file c users admin envs feast lib site packages click core py line in invoke return ctx invoke self callback ctx params file c users admin envs feast lib site packages click core py line in invoke return callback args kwargs file c users admin envs feast lib site packages click decorators py line in new func return f get current context args kwargs file c users admin envs feast lib site packages feast cli py line in apply total command apply total repo config repo skip source validation file c users admin envs feast lib site packages feast usage py line in wrapper raise exc with traceback traceback file c users admin envs feast lib site packages feast usage py line in wrapper return func args kwargs file c users admin envs feast lib site packages feast repo operations py line in apply total apply total with repo instance file c users admin envs feast lib site packages feast repo operations py line in apply total with repo instance data source validate store config file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in validate self get table column names and types config file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in get table column names and types df spark session sql f select from self get table query string file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in get table query string df createorreplacetempview tmp table name unboundlocalerror local variable df referenced before assignment
0
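The Feast traceback above ends in `UnboundLocalError: local variable 'df' referenced before assignment` because the real Spark error (`Path does not exist`) is caught, logged, and then the code falls through to use `df` anyway. A sketch of that failure pattern and its fix in plain Python (`read_parquet`, `load_table_*`, and the fake paths are illustrative stand-ins, not Feast's actual code):

```python
import logging

def read_parquet(path):
    # Stand-in for spark_session.read.format(...).load(path).
    if path != "good":
        raise OSError(f"Path does not exist: {path}")
    class DF:
        def head(self):
            return "row"
    return DF()

def load_table_broken(path: str):
    try:
        df = read_parquet(path)
    except OSError:
        logging.error("read of file source failed")
        # Missing `raise` here: execution continues with `df` unbound.
    return df.head()  # UnboundLocalError when the read failed

def load_table_fixed(path: str):
    try:
        df = read_parquet(path)
    except OSError:
        logging.error("read of file source failed")
        raise  # surface the original Spark/IO error instead
    return df.head()

assert load_table_fixed("good") == "row"
```

With the re-raise in place, a bad folder path would report the underlying path error directly rather than the misleading `UnboundLocalError`.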
330
2,495,567,958
IssuesEvent
2015-01-06 12:35:33
bwapi/bwapi
https://api.github.com/repos/bwapi/bwapi
opened
Create deterministic container variants
enhancement Priority-Low
Having deterministic container variants available will ensure that the `seed_override` configuration option does not go to waste. This is because the ordering of units for example is determined by their memory address, which is non-deterministic in a game.
1.0
Create deterministic container variants - Having deterministic container variants available will ensure that the `seed_override` configuration option does not go to waste. This is because the ordering of units for example is determined by their memory address, which is non-deterministic in a game.
non_defect
create deterministic container variants having deterministic container variants available will ensure that the seed override configuration option does not go to waste this is because the ordering of units for example is determined by their memory address which is non deterministic in a game
0
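The BWAPI record above notes that ordering units by memory address is non-deterministic, defeating a fixed seed. A small sketch of the contrast between address-based and stable-ID ordering (`Unit` and `uid` are hypothetical stand-ins for engine objects):

```python
class Unit:
    def __init__(self, uid):
        self.uid = uid

units = [Unit(3), Unit(1), Unit(2)]

# Address-based ordering: `id()` returns the object's memory address
# in CPython, so this order can change from run to run.
by_address = sorted(units, key=id)

# Stable-ID ordering: identical on every run, so seeded replays match.
by_uid = sorted(units, key=lambda u: u.uid)

assert [u.uid for u in by_uid] == [1, 2, 3]
```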
208,881
16,165,497,179
IssuesEvent
2021-05-01 12:02:51
arturo-lang/arturo
https://api.github.com/repos/arturo-lang/arturo
opened
[Numbers\actanh] add documentation example
documentation easy library todo
[Numbers\actanh] add documentation example https://github.com/arturo-lang/arturo/blob/8571f479d0df2d615129a4a307feffd9d66e5f91/src/library/Numbers.nim#L159 ```text push(newFloating(abs(x.f))) else: push(newFloating(abs(x.z))) builtin "acos", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse cosine of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print acos 0 ; 1.570796326794897 print acos 0.3 ; 1.266103672779499 print acos 1.0 ; 0.0 """: ########################################################## if x.kind==Complex: push(newComplex(arccos(x.z))) else: push(newFloating(arccos(asFloat(x)))) builtin "acosh", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic cosine of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print acosh 1.0 ; 0.0 print acosh 2 ; 1.316957896924817 print acosh 5.0 ; 2.292431669561178 """: ########################################################## if x.kind==Complex: push(newComplex(arccosh(x.z))) else: push(newFloating(arccosh(asFloat(x)))) builtin "acsec", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse cosecant of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\acsec): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arccsc(x.z))) else: push(newFloating(arccsc(asFloat(x)))) builtin "acsech", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic cosecant of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\acsech): add documentation example # labels: documentation, easy, library 
example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arccsch(x.z))) else: push(newFloating(arccsch(asFloat(x)))) builtin "actan", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse cotangent of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\actan): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arccot(x.z))) else: push(newFloating(arccot(asFloat(x)))) builtin "actanh", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic cotangent of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\actanh): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arccoth(x.z))) else: push(newFloating(arccoth(asFloat(x)))) builtin "angle", alias = unaliased, rule = PrefixPrecedence, description = "calculate the phase angle of given number", args = { "number" : {Complex} }, attrs = NoAttrs, returns = {Floating}, # TODO(Numbers\angle): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## push(newFloating(phase(x.z))) builtin "asec", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse secant of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\asec): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arcsec(x.z))) else: 
push(newFloating(arcsec(asFloat(x)))) builtin "asech", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic secant of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\asech): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arcsech(x.z))) else: push(newFloating(arcsech(asFloat(x)))) builtin "asin", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse sine of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print asin 0 ; 0.0 print asin 0.3 ; 0.3046926540153975 print asin 1.0 ; 1.570796326794897 """: ########################################################## if x.kind==Complex: push(newComplex(arcsin(x.z))) else: push(newFloating(arcsin(asFloat(x)))) builtin "asinh", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic sine of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print asinh 0 ; 0.0 print asinh 0.3 ; 0.2956730475634224 print asinh 1.0 ; 0.881373587019543 """: ########################################################## if x.kind==Complex: push(newComplex(arcsinh(x.z))) else: push(newFloating(arcsinh(asFloat(x)))) builtin "atan", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse tangent of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print atan 0 ; 0.0 print atan 0.3 ; 0.2914567944778671 print atan 1.0 ; 0.7853981633974483 """: ########################################################## if x.kind==Complex: push(newComplex(arctan(x.z))) else: push(newFloating(arctan(asFloat(x)))) builtin "atan2", 
alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse tangent of y / x", args = { "y" : {Integer,Floating}, "x" : {Integer,Floating} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\atan2): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## push(newFloating(arctan2(asFloat(y), asFloat(x)))) builtin "atanh", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic tangent of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print atanh 0 ; 0.0 print atanh 0.3 ; 0.3095196042031118 print atanh 1.0 ; inf """: ########################################################## if x.kind==Complex: push(newComplex(arctanh(x.z))) else: push(newFloating(arctanh(asFloat(x)))) builtin "average", alias = unaliased, ``` 3b953065dc2d3c349207dd3355c26486772be4bb
1.0
[Numbers\actanh] add documentation example - [Numbers\actanh] add documentation example https://github.com/arturo-lang/arturo/blob/8571f479d0df2d615129a4a307feffd9d66e5f91/src/library/Numbers.nim#L159 ```text push(newFloating(abs(x.f))) else: push(newFloating(abs(x.z))) builtin "acos", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse cosine of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print acos 0 ; 1.570796326794897 print acos 0.3 ; 1.266103672779499 print acos 1.0 ; 0.0 """: ########################################################## if x.kind==Complex: push(newComplex(arccos(x.z))) else: push(newFloating(arccos(asFloat(x)))) builtin "acosh", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic cosine of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print acosh 1.0 ; 0.0 print acosh 2 ; 1.316957896924817 print acosh 5.0 ; 2.292431669561178 """: ########################################################## if x.kind==Complex: push(newComplex(arccosh(x.z))) else: push(newFloating(arccosh(asFloat(x)))) builtin "acsec", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse cosecant of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\acsec): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arccsc(x.z))) else: push(newFloating(arccsc(asFloat(x)))) builtin "acsech", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic cosecant of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\acsech): add documentation 
example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arccsch(x.z))) else: push(newFloating(arccsch(asFloat(x)))) builtin "actan", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse cotangent of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\actan): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arccot(x.z))) else: push(newFloating(arccot(asFloat(x)))) builtin "actanh", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic cotangent of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\actanh): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arccoth(x.z))) else: push(newFloating(arccoth(asFloat(x)))) builtin "angle", alias = unaliased, rule = PrefixPrecedence, description = "calculate the phase angle of given number", args = { "number" : {Complex} }, attrs = NoAttrs, returns = {Floating}, # TODO(Numbers\angle): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## push(newFloating(phase(x.z))) builtin "asec", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse secant of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\asec): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: 
push(newComplex(arcsec(x.z))) else: push(newFloating(arcsec(asFloat(x)))) builtin "asech", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic secant of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\asech): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## if x.kind==Complex: push(newComplex(arcsech(x.z))) else: push(newFloating(arcsech(asFloat(x)))) builtin "asin", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse sine of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print asin 0 ; 0.0 print asin 0.3 ; 0.3046926540153975 print asin 1.0 ; 1.570796326794897 """: ########################################################## if x.kind==Complex: push(newComplex(arcsin(x.z))) else: push(newFloating(arcsin(asFloat(x)))) builtin "asinh", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic sine of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print asinh 0 ; 0.0 print asinh 0.3 ; 0.2956730475634224 print asinh 1.0 ; 0.881373587019543 """: ########################################################## if x.kind==Complex: push(newComplex(arcsinh(x.z))) else: push(newFloating(arcsinh(asFloat(x)))) builtin "atan", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse tangent of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print atan 0 ; 0.0 print atan 0.3 ; 0.2914567944778671 print atan 1.0 ; 0.7853981633974483 """: ########################################################## if x.kind==Complex: push(newComplex(arctan(x.z))) else: 
push(newFloating(arctan(asFloat(x)))) builtin "atan2", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse tangent of y / x", args = { "y" : {Integer,Floating}, "x" : {Integer,Floating} }, attrs = NoAttrs, returns = {Floating,Complex}, # TODO(Numbers\atan2): add documentation example # labels: documentation, easy, library example = """ """: ########################################################## push(newFloating(arctan2(asFloat(y), asFloat(x)))) builtin "atanh", alias = unaliased, rule = PrefixPrecedence, description = "calculate the inverse hyperbolic tangent of given angle", args = { "angle" : {Integer,Floating,Complex} }, attrs = NoAttrs, returns = {Floating,Complex}, example = """ print atanh 0 ; 0.0 print atanh 0.3 ; 0.3095196042031118 print atanh 1.0 ; inf """: ########################################################## if x.kind==Complex: push(newComplex(arctanh(x.z))) else: push(newFloating(arctanh(asFloat(x)))) builtin "average", alias = unaliased, ``` 3b953065dc2d3c349207dd3355c26486772be4bb
non_defect
add documentation example add documentation example text push newfloating abs x f else push newfloating abs x z builtin acos alias unaliased rule prefixprecedence description calculate the inverse cosine of given angle args angle integer floating complex attrs noattrs returns floating complex example print acos print acos print acos if x kind complex push newcomplex arccos x z else push newfloating arccos asfloat x builtin acosh alias unaliased rule prefixprecedence description calculate the inverse hyperbolic cosine of given angle args angle integer floating complex attrs noattrs returns floating complex example print acosh print acosh print acosh if x kind complex push newcomplex arccosh x z else push newfloating arccosh asfloat x builtin acsec alias unaliased rule prefixprecedence description calculate the inverse cosecant of given angle args angle integer floating complex attrs noattrs returns floating complex todo numbers acsec add documentation example labels documentation easy library example if x kind complex push newcomplex arccsc x z else push newfloating arccsc asfloat x builtin acsech alias unaliased rule prefixprecedence description calculate the inverse hyperbolic cosecant of given angle args angle integer floating complex attrs noattrs returns floating complex todo numbers acsech add documentation example labels documentation easy library example if x kind complex push newcomplex arccsch x z else push newfloating arccsch asfloat x builtin actan alias unaliased rule prefixprecedence description calculate the inverse cotangent of given angle args angle integer floating complex attrs noattrs returns floating complex todo numbers actan add documentation example labels documentation easy library example if x kind complex push newcomplex arccot x z else push newfloating arccot asfloat x builtin actanh alias unaliased rule prefixprecedence description calculate the inverse hyperbolic cotangent of given angle args angle integer floating complex attrs noattrs 
returns floating complex todo numbers actanh add documentation example labels documentation easy library example if x kind complex push newcomplex arccoth x z else push newfloating arccoth asfloat x builtin angle alias unaliased rule prefixprecedence description calculate the phase angle of given number args number complex attrs noattrs returns floating todo numbers angle add documentation example labels documentation easy library example push newfloating phase x z builtin asec alias unaliased rule prefixprecedence description calculate the inverse secant of given angle args angle integer floating complex attrs noattrs returns floating complex todo numbers asec add documentation example labels documentation easy library example if x kind complex push newcomplex arcsec x z else push newfloating arcsec asfloat x builtin asech alias unaliased rule prefixprecedence description calculate the inverse hyperbolic secant of given angle args angle integer floating complex attrs noattrs returns floating complex todo numbers asech add documentation example labels documentation easy library example if x kind complex push newcomplex arcsech x z else push newfloating arcsech asfloat x builtin asin alias unaliased rule prefixprecedence description calculate the inverse sine of given angle args angle integer floating complex attrs noattrs returns floating complex example print asin print asin print asin if x kind complex push newcomplex arcsin x z else push newfloating arcsin asfloat x builtin asinh alias unaliased rule prefixprecedence description calculate the inverse hyperbolic sine of given angle args angle integer floating complex attrs noattrs returns floating complex example print asinh print asinh print asinh if x kind complex push newcomplex arcsinh x z else push newfloating arcsinh asfloat x builtin atan alias unaliased rule prefixprecedence description calculate the inverse tangent of given angle args angle integer floating complex attrs noattrs returns floating complex 
example print atan print atan print atan if x kind complex push newcomplex arctan x z else push newfloating arctan asfloat x builtin alias unaliased rule prefixprecedence description calculate the inverse tangent of y x args y integer floating x integer floating attrs noattrs returns floating complex todo numbers add documentation example labels documentation easy library example push newfloating asfloat y asfloat x builtin atanh alias unaliased rule prefixprecedence description calculate the inverse hyperbolic tangent of given angle args angle integer floating complex attrs noattrs returns floating complex example print atanh print atanh print atanh inf if x kind complex push newcomplex arctanh x z else push newfloating arctanh asfloat x builtin average alias unaliased
0
247,416
7,918,522,213
IssuesEvent
2018-07-04 13:35:06
xwikisas/application-xpoll
https://api.github.com/repos/xwikisas/application-xpoll
closed
Left Navigation panel doesn't show when accessing the Polls app
Priority: Minor Type: Bug
**Preconditions** Have the XPoll Application installed. **Steps to reproduce** 1. Click on Polls from the Applications panel Expected results: The Navigation panel appears under Applications. Actual results: When accessing the Polls app the Navigation panel doesn't appear on the left side of the screen. **Objective** Make sure the application doesn't overwrite the default panel configuration. ![navigationfromhomepage](https://user-images.githubusercontent.com/22794181/42267433-a8ddad90-7f81-11e8-916c-2fc6a7a3e790.jpg) ![nonavigationpanel](https://user-images.githubusercontent.com/22794181/42267437-acc21310-7f81-11e8-98ce-82a22b3e6799.jpg) Environment: IE 11, local 8.4.5 instance with MySQL Affects Version/s: 1.6.6
1.0
Left Navigation panel doesn't show when accessing the Polls app - **Preconditions** Have the XPoll Application installed. **Steps to reproduce** 1. Click on Polls from the Applications panel Expected results: The Navigation panel appears under Applications. Actual results: When accessing the Polls app the Navigation panel doesn't appear on the left side of the screen. **Objective** Make sure the application doesn't overwrite the default panel configuration. ![navigationfromhomepage](https://user-images.githubusercontent.com/22794181/42267433-a8ddad90-7f81-11e8-916c-2fc6a7a3e790.jpg) ![nonavigationpanel](https://user-images.githubusercontent.com/22794181/42267437-acc21310-7f81-11e8-98ce-82a22b3e6799.jpg) Environment: IE 11, local 8.4.5 instance with MySQL Affects Version/s: 1.6.6
non_defect
left navigation panel doesn t show when accessing the polls app preconditions have the xpoll application installed steps to reproduce click on polls from the applications panel expected results the navigation panel appears under applications actual results when accessing the polls app the navigation panel doesn t appear in the left side of the screen objective make sure there the application doesn t overwrite the default panel configuration environment ie local instance with mysql affects version s
0
44,382
12,124,470,731
IssuesEvent
2020-04-22 14:13:48
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
"No database selected" error when calling dslContext.resultQuery(sql)
T: Defect
String sql = String.format("select * from workorder where workorder.id in "); sql=sql+idString; resultQuery = dslContext.resultQuery(sql).fetchInto(Workorder.class); error: "jOOQ; uncategorized SQLException for SQL [select * from workorder where workorder.id in ('1')]; SQL state [3D000]; error code [1046]; No database selected; nested exception is java.sql.SQLException: No database selected",
1.0
"No database selected" error when calling dslContext.resultQuery(sql) - String sql = String.format("select * from workorder where workorder.id in "); sql=sql+idString; resultQuery = dslContext.resultQuery(sql).fetchInto(Workorder.class); error: "jOOQ; uncategorized SQLException for SQL [select * from workorder where workorder.id in ('1')]; SQL state [3D000]; error code [1046]; No database selected; nested exception is java.sql.SQLException: No database selected",
defect
the mistake that no database selected appears when dslcontext resultquery sql string sql string format select from workorder where workorder id in sql sql idstring resultquery dslcontext resultquery sql fetchinto workorder class mistake jooq uncategorized sqlexception for sql sql state error code no database selected nested exception is java sql sqlexception no database selected
1
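MySQL's error code 1046 / SQL state 3D000 ("No database selected") in the record above means the JDBC session has no default schema, so the unqualified `workorder` table cannot be resolved; the usual remedies are to pick a database on the connection (e.g. in the JDBC URL) or to qualify the table name. A minimal sketch in Python, used here only to illustrate the query shape — the schema name `mydb`, the helper `in_query`, and the switch to `?` bind placeholders (instead of concatenating `idString`) are all assumptions for illustration, not taken from the report:

```python
def in_query(schema, table, ids):
    """Build a schema-qualified IN-clause query with bind placeholders,
    avoiding both the unqualified table name (the cause of error 1046
    when no default database is selected) and raw string concatenation
    of the id list."""
    placeholders = ", ".join(["?"] * len(ids))
    return (f"select * from {schema}.{table} "
            f"where {table}.id in ({placeholders})")
```

With jOOQ itself, supplying the database in the connection URL (e.g. `jdbc:mysql://host/mydb`) or using the generated, schema-qualified table objects would normally avoid this string surgery entirely.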
612,687
19,028,446,392
IssuesEvent
2021-11-24 08:00:52
Square789/PydayNightFunkin
https://api.github.com/repos/Square789/PydayNightFunkin
opened
Try a custom renderer
enhancement priority: high
- Draw time is skyrocketing/fluctuating around 6-8ms. (And if update also takes 1-3 ms, there is a serious risk of dropped frames) - Try custom groups that divide the OpenGL state that needs to be set and a custom batch that: - Sorts a scene tree by groups that need drawing - Sets hard borders when it identifies sprites that need to be drawn on top of each other (order is set and different) - Reorders groups inside of order blocks so that expensive changes such as shader program sets occur as little as possible: ``` | --- order 1 --- | | ----- order 2 ----- | this | [X]-[X] [Y] [X] | | [Y] [X] [Y]-[Y] [X] | (6 program switches) becomes | [X]-[X]-[X] [Y] |-| [Y]-[Y]-[Y] [X]-[X] | (2 program switches) ``` - This is incinerating the pyglet warranty sticker and will be a very delayed shot to the foot
1.0
Try a custom renderer - - Draw time is skyrocketing/fluctuating around 6-8ms. (And if update also takes 1-3 ms, there is a serious risk of dropped frames) - Try custom groups that divide the OpenGL state that needs to be set and a custom batch that: - Sorts a scene tree by groups that need drawing - Sets hard borders when it identifies sprites that need to be drawn on top of each other (order is set and different) - Reorders groups inside of order blocks so that expensive changes such as shader program sets occur as little as possible: ``` | --- order 1 --- | | ----- order 2 ----- | this | [X]-[X] [Y] [X] | | [Y] [X] [Y]-[Y] [X] | (6 program switches) becomes | [X]-[X]-[X] [Y] |-| [Y]-[Y]-[Y] [X]-[X] | (2 program switches) ``` - This is incinerating the pyglet warranty sticker and will be a very delayed shot to the foot
non_defect
try a custom renderer draw time is skyrocketing fluctuating around and if update also takes ms there is a serious risk of dropped frames try custom groups that divide the opengl state that needs to be set and a custom batch that sorts a scene tree by groups that need drawing sets hard borders when it identifies sprites that need to be drawn on top of each other order is set and different reorders groups inside of order blocks so that expensive changes such as shader program sets occurr as little as possible order order this program switches becomes program switches this is incinerating the pyglet warranty sticker and will be a very delayed shot to the foot
0
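The reordering this report sketches — grouping draw calls by shader program inside each order block, and opening a block with the program the previous block ended on — can be modelled in a few lines. This is an illustrative Python sketch (the names `reorder_blocks` and `count_switches` are invented here, not from the codebase); it reproduces the 6-switch → 2-switch example from the issue, but ignores the hard borders a real batch would have to respect for overlapping sprites:

```python
def reorder_blocks(blocks):
    """Group calls by program within each order block (stable, so
    relative order per program is preserved), and put the program the
    previous block ended on first, so no switch is paid at the block
    boundary when avoidable."""
    result, prev = [], None
    for block in blocks:
        groups = {}
        for prog in block:
            groups.setdefault(prog, []).append(prog)
        # Stable sort: the program matching `prev` (if any) comes first.
        keys = sorted(groups, key=lambda p: p != prev)
        ordered = [p for k in keys for p in groups[k]]
        result.append(ordered)
        prev = ordered[-1] if ordered else prev
    return result

def count_switches(blocks):
    """Count adjacent pairs with differing programs across all blocks."""
    flat = [p for b in blocks for p in b]
    return sum(a != b for a, b in zip(flat, flat[1:]))
```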
17,304
2,998,204,579
IssuesEvent
2015-07-23 12:53:35
bardsoftware/ganttproject
https://api.github.com/repos/bardsoftware/ganttproject
closed
Value changed in Overview Tab is discarded if not confirmed by Enter
auto-migrated TasksTable Type-Defect __Target-Ostrava
``` What steps will reproduce the problem? 1. Single Click in the Date-Field on the Gantt-Tab 2. change the date 3. click into the next field 4. Changed date is discarded and old Date is back What version of the product are you using? On what operating system? 2.7 Build 1891, Windows, Linux ``` Original issue reported on code.google.com by `jojo2...@googlemail.com` on 23 Apr 2015 at 11:35
1.0
Value changed in Overview Tab is discarded if not confirmed by Enter - ``` What steps will reproduce the problem? 1. Single Click in the Date-Field on the Gantt-Tab 2. change the date 3. click into the next field 4. Changed date is discarded and old Date is back What version of the product are you using? On what operating system? 2.7 Build 1891, Windows, Linux ``` Original issue reported on code.google.com by `jojo2...@googlemail.com` on 23 Apr 2015 at 11:35
defect
value changed in overview tab is discarded if not confirmed by enter what steps will reproduce the problem single click in the date field on the gantt tab change the date klick into the next field changed date is discarded and old date is back what version of the product are you using on what operating system build windows linux original issue reported on code google com by googlemail com on apr at
1
5,487
2,610,188,595
IssuesEvent
2015-02-26 18:59:45
chrsmith/quchuseban
https://api.github.com/repos/chrsmith/quchuseban
opened
求助如何才能淡色斑
auto-migrated Priority-Medium Type-Defect
``` 《摘要》 愉快跟不愉快的回忆,比如一个硬币的两面,存在于我们的�� �一段情感里。就像那个著名的“蝴蝶效应”,如果你经常记� ��不愉快的人、不愉快的事,生活就跟着变得不愉快起来。相 反,有些女人却能在跟老公吵架的时候及其她求婚时的表情�� �他怀抱的温暖。这里的“吵”是一种乐观、积极的沟通方式� ��这样的女人即便是面临命运得不测风云,也不会唉声叹气, 而当它是动力。面带微笑、坦然自处,男人有乐观女人的相�� �,一生都将阳光灿烂。如何才能淡色斑, 《客户案例》   我想知道一起拥有的回忆是浓了你还是醉了我,在这个�� �花飘落的季节里,我又想起了你,想起了那个雪地里雨伞下� ��长而又安静的拥抱,回想起你的眼泪曾伴随着雪花一起纷飞 ,我想那是一种多么绝望而又撕心裂肺、痛彻心扉的的凄美�� �谢谢你曾用你的青春带走了我的忧伤,我在26岁那年,脸上�� �出了许多黄褐斑。看到满脸的黄褐斑,我心中甚是苦恼。于� ��,为了恢复年轻的美丽的容颜,我决定治疗黄褐斑。可是, 黄褐斑的治疗如何进行最有效呢? 怎样去除,面部黄褐斑</br>   后来为了去斑我开始依赖于网络,在网上搜索可以帮我�� �掉斑点的方法,只要是有点希望,我便会想法去试试,一个� ��然的机会让我遇见了黛芙薇尔去斑,遇见就不再错过,这句 话说的真好。于是我就深入的了解了一下黛芙薇尔,最后我�� �质询了他们的专家跟客服,他们说我的病情很适合他们的产� ��,于是我就定购了几个周期的黛芙薇尔。怎样去除,面部黄�� �斑</br>   我每天坚持使用,并保持一定量的户外运动,坚持了一�� �多月,就发现脸上的斑点真的慢慢变淡了。接下来的日子我� ��心情更加开朗乐观起来,过了半个月的时间,就发现脸上的 黄褐斑已经彻底去除了!我觉得那时候自己是多么自信的女人� ��,我的幸福终于被我牢牢抓住了!真的很感谢黛芙薇尔,是�� �让我摆脱了色斑的困扰,彻底去除了我脸上的色斑,现在着� ��觉得没斑的感觉真是太好啦。 阅读了如何才能淡色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 如何才能淡色斑,同时为您分享祛斑小方法 1、将带根的香菜洗净,加水煎煮,用菜汤洗脸,坚持使用可� ��令面部的色斑逐渐消除。 2、桃花、杏花各10克以水浸泡,过滤浸汁,用于洗脸。 ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 4:30
1.0
求助如何才能淡色斑 - ``` 《摘要》 愉快跟不愉快的回忆,比如一个硬币的两面,存在于我们的�� �一段情感里。就像那个著名的“蝴蝶效应”,如果你经常记� ��不愉快的人、不愉快的事,生活就跟着变得不愉快起来。相 反,有些女人却能在跟老公吵架的时候及其她求婚时的表情�� �他怀抱的温暖。这里的“吵”是一种乐观、积极的沟通方式� ��这样的女人即便是面临命运得不测风云,也不会唉声叹气, 而当它是动力。面带微笑、坦然自处,男人有乐观女人的相�� �,一生都将阳光灿烂。如何才能淡色斑, 《客户案例》   我想知道一起拥有的回忆是浓了你还是醉了我,在这个�� �花飘落的季节里,我又想起了你,想起了那个雪地里雨伞下� ��长而又安静的拥抱,回想起你的眼泪曾伴随着雪花一起纷飞 ,我想那是一种多么绝望而又撕心裂肺、痛彻心扉的的凄美�� �谢谢你曾用你的青春带走了我的忧伤,我在26岁那年,脸上�� �出了许多黄褐斑。看到满脸的黄褐斑,我心中甚是苦恼。于� ��,为了恢复年轻的美丽的容颜,我决定治疗黄褐斑。可是, 黄褐斑的治疗如何进行最有效呢? 怎样去除,面部黄褐斑</br>   后来为了去斑我开始依赖于网络,在网上搜索可以帮我�� �掉斑点的方法,只要是有点希望,我便会想法去试试,一个� ��然的机会让我遇见了黛芙薇尔去斑,遇见就不再错过,这句 话说的真好。于是我就深入的了解了一下黛芙薇尔,最后我�� �质询了他们的专家跟客服,他们说我的病情很适合他们的产� ��,于是我就定购了几个周期的黛芙薇尔。怎样去除,面部黄�� �斑</br>   我每天坚持使用,并保持一定量的户外运动,坚持了一�� �多月,就发现脸上的斑点真的慢慢变淡了。接下来的日子我� ��心情更加开朗乐观起来,过了半个月的时间,就发现脸上的 黄褐斑已经彻底去除了!我觉得那时候自己是多么自信的女人� ��,我的幸福终于被我牢牢抓住了!真的很感谢黛芙薇尔,是�� �让我摆脱了色斑的困扰,彻底去除了我脸上的色斑,现在着� ��觉得没斑的感觉真是太好啦。 阅读了如何才能淡色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 如何才能淡色斑,同时为您分享祛斑小方法 1、将带根的香菜洗净,加水煎煮,用菜汤洗脸,坚持使用可� ��令面部的色斑逐渐消除。 2、桃花、杏花各10克以水浸泡,过滤浸汁,用于洗脸。 ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 4:30
defect
求助如何才能淡色斑 《摘要》 愉快跟不愉快的回忆,比如一个硬币的两面,存在于我们的�� �一段情感里。就像那个著名的“蝴蝶效应”,如果你经常记� ��不愉快的人、不愉快的事,生活就跟着变得不愉快起来。相 反,有些女人却能在跟老公吵架的时候及其她求婚时的表情�� �他怀抱的温暖。这里的“吵”是一种乐观、积极的沟通方式� ��这样的女人即便是面临命运得不测风云,也不会唉声叹气, 而当它是动力。面带微笑、坦然自处,男人有乐观女人的相�� �,一生都将阳光灿烂。如何才能淡色斑, 《客户案例》   我想知道一起拥有的回忆是浓了你还是醉了我,在这个�� �花飘落的季节里,我又想起了你,想起了那个雪地里雨伞下� ��长而又安静的拥抱,回想起你的眼泪曾伴随着雪花一起纷飞 ,我想那是一种多么绝望而又撕心裂肺、痛彻心扉的的凄美�� �谢谢你曾用你的青春带走了我的忧伤, ,脸上�� �出了许多黄褐斑。看到满脸的黄褐斑,我心中甚是苦恼。于� ��,为了恢复年轻的美丽的容颜,我决定治疗黄褐斑。可是, 黄褐斑的治疗如何进行最有效呢 怎样去除 面部黄褐斑   后来为了去斑我开始依赖于网络,在网上搜索可以帮我�� �掉斑点的方法,只要是有点希望,我便会想法去试试,一个� ��然的机会让我遇见了黛芙薇尔去斑,遇见就不再错过,这句 话说的真好。于是我就深入的了解了一下黛芙薇尔,最后我�� �质询了他们的专家跟客服,他们说我的病情很适合他们的产� ��,于是我就定购了几个周期的黛芙薇尔。怎样去除 面部黄�� �斑   我每天坚持使用,并保持一定量的户外运动,坚持了一�� �多月,就发现脸上的斑点真的慢慢变淡了。接下来的日子我� ��心情更加开朗乐观起来,过了半个月的时间,就发现脸上的 黄褐斑已经彻底去除了 我觉得那时候自己是多么自信的女人� ��,我的幸福终于被我牢牢抓住了 真的很感谢黛芙薇尔,是�� �让我摆脱了色斑的困扰,彻底去除了我脸上的色斑,现在着� ��觉得没斑的感觉真是太好啦。 阅读了如何才能淡色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》    黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗   答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 
而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来    ,服用黛芙薇尔美白,会伤身体吗 有副作用吗   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖    ,去除黄褐斑之后,会反弹吗   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗    ,你们的价格有点贵,能不能便宜一点   答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗    ,我适合用黛芙薇尔精华液吗   答:黛芙薇尔适用人群:    、生理紊乱引起的黄褐斑人群    、生育引起的妊娠斑人群    、年纪增长引起的老年斑人群    、化妆品色素沉积、辐射斑人群    、长期日照引起的日晒斑人群    、肌肤暗淡急需美白的人群 《祛斑小方法》 如何才能淡色斑,同时为您分享祛斑小方法 、将带根的香菜洗净,加水煎煮,用菜汤洗脸,坚持使用可� ��令面部的色斑逐渐消除。 、桃花、 ,过滤浸汁,用于洗脸。 original issue reported on code google com by additive gmail com on jul at
1
5,158
18,755,522,831
IssuesEvent
2021-11-05 10:14:05
mozilla-mobile/firefox-ios
https://api.github.com/repos/mozilla-mobile/firefox-ios
closed
Mobile Test Eng - Standardize Slack Notifications - Step 1
eng:automation
Our team is working on having a common view for the notifications we receive from the different CI/tools when builds/tests run. As a first step we will introduce a footer and the header will be slightly different. After that we will modify the payload of those notifications. ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-3404)
1.0
Mobile Test Eng - Standardize Slack Notifications - Step 1 - Our team is working on having a common view for the notifications we receive from the different CI/tools when builds/tests run. As a first step we will introduce a footer and the header will be slightly different. After that we will modify the payload of those notifications. ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-3404)
non_defect
mobile test eng standarize slack notifications step our team is working on having a common view for the notifications we receive from the different ci tools when builds tests run as a first step we will introduce a footer and the header will be slightly different after that we will modify the payload of those notifications ┆issue is synchronized with this
0
53,994
13,300,697,005
IssuesEvent
2020-08-25 11:48:10
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
Sidebar: Modal doesn't block clicks to view
defect
It worked correctly in 8.0: the sidebar closed when clicking in the modal / elements on the "hidden" view
1.0
Sidebar: Modal doesn't block clicks to view - It worked correctly in 8.0: the sidebar closed when clicking in the modal / elements on the "hidden" view
defect
sidebar modal doesnt block clicks to view it worked correctly in the sidebar closed when clicking in the modal elements on the hidden view
1
71,425
18,737,728,868
IssuesEvent
2021-11-04 09:51:15
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
closed
Typo in generated hello-world
Type/Bug Team/DevTools Points/0.5 Area/ProjectAPI Area/BuildTools
Hello-world program generated by running `bal new` has a comment that says ``` // Prints `Hello World` ``` but it actually prints ``` Hello, World! ```
1.0
Typo in generated hello-world - Hello-world program generated by running `bal new` has a comment that says ``` // Prints `Hello World` ``` but it actually prints ``` Hello, World! ```
non_defect
typo in generated hello world hello world program generated by running bal new has a comment that says prints hello world but it actually prints hello world
0
186
2,495,192,892
IssuesEvent
2015-01-06 08:18:03
tntim96/JSCover
https://api.github.com/repos/tntim96/JSCover
closed
complete coverage of javascript coalesce
enhancement Fix applied - please re-test
is it possible to get complete coverage of this line? this.imageFactory = stuff.imageFactory || function() { return new Image() }; it's a hack of dependency injection. it's never evaluated to false, but the right-most is always true.
1.0
complete coverage of javascript coalesce - is it possible to get complete coverage of this line? this.imageFactory = stuff.imageFactory || function() { return new Image() }; it's a hack of dependency injection. it's never evaluated to false, but the right-most is always true.
non_defect
complete coverage of javascript coalesce is it possible to get complete coverage of this line this imagefactory stuff imagefactory function return new image it s a hack of dependency injection it s never evaluated to false but the right most is always true
0
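The `||` default-injection idiom asked about above is, to a coverage tool, a two-path branch: left operand truthy vs. falsy. Full branch coverage therefore just needs one test per path — one constructing the object with an injected factory and one without. A small Python analogue of the pattern (names invented for illustration, not from the project):

```python
def make_image_factory(injected=None):
    # Mirrors JS `this.imageFactory = stuff.imageFactory || function () {...}`:
    # the right-hand default is only evaluated when `injected` is falsy,
    # so one call with an injected factory and one without together
    # exercise both sides of the branch.
    return injected or (lambda: "new-image")
```

In the original JavaScript this suggests no code change is required for full coverage — only a test that builds the object without `stuff.imageFactory` set, so the right-hand `function() { return new Image() }` actually runs.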
70,811
13,535,649,988
IssuesEvent
2020-09-16 07:54:43
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
Codelens with padded with whitespace not showing
author-verification-requested bug code-lens electron-9-update
We found our codelens disappear in specific case in the latest version of VS Code(1.49.0) which won’t repro in previous version. The root cause is the styling issue which results the third codelens squeezed to another line. ![image](https://user-images.githubusercontent.com/9346867/93162313-17aa5900-f747-11ea-8bee-3b45f24cdb81.png) If the codelens starts or ends with an space will trigger this issue.
1.0
Codelens with padded with whitespace not showing - We found our codelens disappear in specific case in the latest version of VS Code(1.49.0) which won’t repro in previous version. The root cause is the styling issue which results the third codelens squeezed to another line. ![image](https://user-images.githubusercontent.com/9346867/93162313-17aa5900-f747-11ea-8bee-3b45f24cdb81.png) If the codelens starts or ends with an space will trigger this issue.
non_defect
codelens with padded with whitespace not showing we found our codelens disappear in specific case in the latest version of vs code which won’t repro in previous version the root cause is the styling issue which results the third codelens squeezed to another line if the codelens starts or ends with an space will trigger this issue
0
440,608
30,751,753,008
IssuesEvent
2023-07-28 19:59:12
erik-rt/layerfusion
https://api.github.com/repos/erik-rt/layerfusion
opened
Rewrite the README
documentation
The README is for an older version of this app. I rewrote it so I need to update the docs.
1.0
Rewrite the README - The README is for an older version of this app. I rewrote it so I need to update the docs.
non_defect
rewrite the readme the readme is for an older version of this app i rewrote it so i need to update the docs
0
119,870
10,076,201,779
IssuesEvent
2019-07-24 15:44:27
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
Multiple CCR tests with uncaught exception on windows
:Distributed/CCR >test-failure v6.8.2 v7.2.2 v7.3.1 v7.4.0 v8.0.0
https://scans.gradle.com/s/vl3yktrv4xey4/tests/htwk6wzdfugzg-ntvlkoitzt6ms?openStackTraces=WzIsMSwwXQ ``` com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=243, name=elasticsearch[follower0][generic][T#4], state=RUNNABLE, group=TGRP-AutoFollowIT]Close stacktrace at __randomizedtesting.SeedInfo.seed([DE1704EC3D6BE8E2:CD64108D18DC74F7]:0) Caused by: java.lang.IllegalStateException: java.lang.InterruptedExceptionClose stacktrace at __randomizedtesting.SeedInfo.seed([DE1704EC3D6BE8E2]:0) at org.elasticsearch.transport.ConnectionManager.close(ConnectionManager.java:259) at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:699) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.lang.Thread.run(Thread.java:834) Caused by: java.lang.InterruptedException: (No message provided)Close stacktrace at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1040) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232) at org.elasticsearch.transport.ConnectionManager.close(ConnectionManager.java:256) at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:699) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.lang.Thread.run(Thread.java:834) ```
1.0
Multiple CCR tests with uncaught exception on windows - https://scans.gradle.com/s/vl3yktrv4xey4/tests/htwk6wzdfugzg-ntvlkoitzt6ms?openStackTraces=WzIsMSwwXQ ``` com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=243, name=elasticsearch[follower0][generic][T#4], state=RUNNABLE, group=TGRP-AutoFollowIT]Close stacktrace at __randomizedtesting.SeedInfo.seed([DE1704EC3D6BE8E2:CD64108D18DC74F7]:0) Caused by: java.lang.IllegalStateException: java.lang.InterruptedExceptionClose stacktrace at __randomizedtesting.SeedInfo.seed([DE1704EC3D6BE8E2]:0) at org.elasticsearch.transport.ConnectionManager.close(ConnectionManager.java:259) at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:699) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.lang.Thread.run(Thread.java:834) Caused by: java.lang.InterruptedException: (No message provided)Close stacktrace at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1040) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232) at org.elasticsearch.transport.ConnectionManager.close(ConnectionManager.java:256) at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:699) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.lang.Thread.run(Thread.java:834) ```
non_defect
multiple ccr tests with uncaught exception on windows com carrotsearch randomizedtesting uncaughtexceptionerror captured an uncaught exception in thread thread state runnable group tgrp autofollowit close stacktrace at randomizedtesting seedinfo seed caused by java lang illegalstateexception java lang interruptedexceptionclose stacktrace at randomizedtesting seedinfo seed at org elasticsearch transport connectionmanager close connectionmanager java at org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java lang interruptedexception no message provided close stacktrace at java util concurrent locks abstractqueuedsynchronizer doacquiresharedinterruptibly abstractqueuedsynchronizer java at java util concurrent locks abstractqueuedsynchronizer acquiresharedinterruptibly abstractqueuedsynchronizer java at java util concurrent countdownlatch await countdownlatch java at org elasticsearch transport connectionmanager close connectionmanager java at org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
0
68,604
21,724,830,063
IssuesEvent
2022-05-11 06:30:36
hpi-swa-teaching/SVGMorph
https://api.github.com/repos/hpi-swa-teaching/SVGMorph
opened
SVG Bounding Boxes
defect User Story
### Who wants to do what? As a user, I want the entire SVG graphic to be displayed. Currently, the graphic is cut off. The SVG file with the code ```svg <svg xmlns="http://www.w3.org/2000/svg" viewBox="-52 -53 100 100" stroke-width="2"> <g fill="none"> <ellipse stroke="#66899a" rx="6" ry="44"/> <ellipse stroke="#e1d85d" rx="6" ry="44" transform="rotate(-66)"/> <ellipse stroke="#80a3cf" rx="6" ry="44" transform="rotate(66)"/> <circle stroke="#4b541f" r="44"/> </g> <g fill="#66899a" stroke="white"> <circle fill="#80a3cf" r="13"/> <circle cy="-44" r="9"/> <circle cx="-40" cy="18" r="9"/> <circle cx="40" cy="18" r="9"/> </g> </svg> ``` is rendered by the SVG morph as follows (note the missing centering): ![Atom SVG in Squeak](https://user-images.githubusercontent.com/49586507/167571062-ce4298ff-2d6e-4c93-85c2-b0252dc72979.png) The expected appearance, however, would be the following: ![Atom SVG](https://user-images.githubusercontent.com/49586507/167571203-7bc3f1fa-cd4b-474c-814f-b63de24c4370.png) ### Why does one want to do this? So that SVGs are displayed correctly. ### Additional context For further graphic artifacts, see #12.
1.0
SVG Bounding Boxes - ### Who wants to do what? As a user, I want the entire SVG graphic to be displayed. Currently, the graphic is cut off. The SVG file with the code ```svg <svg xmlns="http://www.w3.org/2000/svg" viewBox="-52 -53 100 100" stroke-width="2"> <g fill="none"> <ellipse stroke="#66899a" rx="6" ry="44"/> <ellipse stroke="#e1d85d" rx="6" ry="44" transform="rotate(-66)"/> <ellipse stroke="#80a3cf" rx="6" ry="44" transform="rotate(66)"/> <circle stroke="#4b541f" r="44"/> </g> <g fill="#66899a" stroke="white"> <circle fill="#80a3cf" r="13"/> <circle cy="-44" r="9"/> <circle cx="-40" cy="18" r="9"/> <circle cx="40" cy="18" r="9"/> </g> </svg> ``` is rendered by the SVG morph as follows (note the missing centering): ![Atom SVG in Squeak](https://user-images.githubusercontent.com/49586507/167571062-ce4298ff-2d6e-4c93-85c2-b0252dc72979.png) The expected appearance, however, would be the following: ![Atom SVG](https://user-images.githubusercontent.com/49586507/167571203-7bc3f1fa-cd4b-474c-814f-b63de24c4370.png) ### Why does one want to do this? So that SVGs are displayed correctly. ### Additional context For further graphic artifacts, see #12.
defect
svg bounding boxes who wants to do what as a user i want the entire svg graphic to be displayed currently the graphic is cut off the svg file with the code svg is rendered by the svg morph as follows note the missing centering the expected appearance however would be the following why does one want to do this so that svgs are displayed correctly additional context for further graphic artifacts see
1
135,831
12,692,057,805
IssuesEvent
2020-06-21 20:17:20
thephpleague/commonmark
https://api.github.com/repos/thephpleague/commonmark
closed
[1.5] Release Goals & Tasks
documentation pinned
Creating this issue to track a few odds and ends - [x] Ensure changes to [1.5 docs](https://github.com/thephpleague/commonmark/commits/1.5/docs/1.5) are replicated into 2.0 docs - [x] Ask @markcarver to help test the 1.5 branch before tagging - [x] Modify 2.0 docs to show migration path from 1.5 -> 2.0 instead of 1.4 -> 2.0
1.0
[1.5] Release Goals & Tasks - Creating this issue to track a few odds and ends - [x] Ensure changes to [1.5 docs](https://github.com/thephpleague/commonmark/commits/1.5/docs/1.5) are replicated into 2.0 docs - [x] Ask @markcarver to help test the 1.5 branch before tagging - [x] Modify 2.0 docs to show migration path from 1.5 -> 2.0 instead of 1.4 -> 2.0
non_defect
release goals tasks creating this issue to track a few odds and ends ensure changes to are replicated into docs ask markcarver to help test the branch before tagging modify docs to show migration path from instead of
0
290,462
25,069,201,309
IssuesEvent
2022-11-07 10:45:08
crispindeity/issue-tracker
https://api.github.com/repos/crispindeity/issue-tracker
opened
Add Integration Test
📬 API BE ✅ Test
# Description - Run integration tests using the @SpringBootTest annotation # Progress - [ ] Milestone Integration Test - [ ] Label Integration Test - [ ] Comment Integration Test - [ ] Oauth Integration Test - [ ] Issue Integration Test
1.0
Add Integration Test - # Description - Run integration tests using the @SpringBootTest annotation # Progress - [ ] Milestone Integration Test - [ ] Label Integration Test - [ ] Comment Integration Test - [ ] Oauth Integration Test - [ ] Issue Integration Test
non_defect
add integration test description run integration tests using the springboottest annotation progress milestone integration test label integration test comment integration test oauth integration test issue integration test
0
361,468
25,341,070,251
IssuesEvent
2022-11-18 21:41:21
dawidkomorowski/geisha
https://api.github.com/repos/dawidkomorowski/geisha
closed
Use DocFX and prepare API reference documentation pages.
documentation
Use DocFX and prepare API reference documentation pages. https://dotnet.github.io/docfx/index.html Acceptance criteria: - [x] DocFX project is created and configured for generating basic API documentation. - [x] Build script is prepared for simple update/regeneration of final documentation web pages. - [x] API reference is generated for Geisha.Engine. - [x] API reference is generated for other projects generating documentation file. - [x] Main page has some description and link to repository. - [x] Create GitHub Action building and deploying documentation. - [x] Generated documentation website is deployed as GitHub Pages of engine repository. - [x] GitHub README in engine repository has a link to documentation site. - [x] Update Release Activities template with action point about generating documentation for final code. - [x] Review documentation and unify/improve if necessary at this stage.
1.0
Use DocFX and prepare API reference documentation pages. - Use DocFX and prepare API reference documentation pages. https://dotnet.github.io/docfx/index.html Acceptance criteria: - [x] DocFX project is created and configured for generating basic API documentation. - [x] Build script is prepared for simple update/regeneration of final documentation web pages. - [x] API reference is generated for Geisha.Engine. - [x] API reference is generated for other projects generating documentation file. - [x] Main page has some description and link to repository. - [x] Create GitHub Action building and deploying documentation. - [x] Generated documentation website is deployed as GitHub Pages of engine repository. - [x] GitHub README in engine repository has a link to documentation site. - [x] Update Release Activities template with action point about generating documentation for final code. - [x] Review documentation and unify/improve if necessary at this stage.
non_defect
use docfx and prepare api reference documentation pages use docfx and prepare api reference documentation pages acceptance criteria docfx project is created and configured for generating basic api documentation build script is prepared for simple update regeneration of final documentation web pages api reference is generated for geisha engine api reference is generated for other projects generating documentation file main page has some description and link to repository create github action building and deploying documentation generated documentation website is deployed as github pages of engine repository github readme in engine repository has a link to documentation site update release activities template with action point about generating documentation for final code review documentation and unify improve if necessary at this stage
0
76,059
26,220,556,079
IssuesEvent
2023-01-04 14:32:13
idaholab/HERON
https://api.github.com/repos/idaholab/HERON
closed
[DEFECT] failure in windows test
defect
-------- Defect Description -------- **Describe the defect** ##### What did you expect to see happen? https://github.com/idaholab/HERON/actions/runs/3659256063 to pass ##### What did you see instead? This error: ```-------------------------------------------------- There were 2 warnings during the simulation run: (1 time) <class 'DeprecationWarning'> "variables" node inputted but has been deprecated! Please list variables in the "inputs" and "outputs" nodes instead. This Warning will result in an error in RAVEN 3.0! (1 time) Nothing to write to CSV! Checking metadata ... -------------------------------------------------- ( 15.31 sec) SIMULATION : DEBUG -> Fri Dec 9 10:28:53 2022 ( 15.32 sec) SIMULATION : Message -> Run complete! C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\sklearn\base.py:338: UserWarning: Trying to unpickle estimator KMeans from version 0.24.2 when using version 1.0.2. This might lead to breaking code or invalid results. Use at your own risk. 
For more info please refer to: https://scikit-learn.org/stable/modules/model_persistence.html#security-maintainability-limitations UserWarning, C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\pyutilib\misc\import_file.py:11: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\threading.py", line 890, in _bootstrap self._bootstrap_inner() File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\threading.py", line 926, in _bootstrap_inner self.run() File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Runners\SharedMemoryRunner.py", line 115, in <lambda> self.thread = InterruptibleThread(target = lambda q, *arg : q.append(self.functionToRun(*arg)), File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\EnsembleModel.py", line 515, in evaluateSample returnValue = (Input,self._externalRun(Input, jobHandler)) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\EnsembleModel.py", line 683, in _externalRun iterationCount, jobHandler) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\EnsembleModel.py", line 744, in __advanceModel evaluation = modelToExecute['Instance'].evaluateSample.original_function(modelToExecute['Instance'], origInputList, samplerType, inputKwargs) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\ExternalModel.py", line 324, in evaluateSample result,instSelf = self._externalRun(inRun,) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\ExternalModel.py", 
line 266, in _externalRun self.sim.run(externalSelf, InputDict) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\DispatchManager.py", line 725, in run runner.load_heron_lib(path) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\DispatchManager.py", line 76, in load_heron_lib case, components, sources = SerializationManager.load_heron_lib(path, retry=6) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\SerializationManager.py", line 24, in load_heron_lib case, components, sources = pk.load(lib) File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\dill\_dill.py", line 373, in load return Unpickler(file, ignore=ignore, **kwds).load() File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\dill\_dill.py", line 646, in load obj = StockUnpickler.load(self) File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\dill\_dill.py", line 636, in find_class return StockUnpickler.find_class(self, module, name) File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\Cases.py", line 22, in <module> from HERON.src.validators.Factory import known as known_validators File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File 
"C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\validators\Factory.py", line 16, in <module> raven_path = hutils.get_raven_loc() File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\dispatch\..\_utils.py", line 47, in get_raven_loc traceback.print_stack() C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\sklearn\base.py:338: UserWarning: Trying to unpickle estimator KMeans from version 0.24.2 when using version 1.0.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to: https://scikit-learn.org/stable/modules/model_persistence.html#security-maintainability-limitations UserWarning, C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\plugins\TEAL\src\main.py:450: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. projCf[decomissionMask] += lifeCf[-1] * taxMult * np.power(inflRate, -1*years[decomissionMask]) C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\plugins\TEAL\src\main.py:450: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. projCf[decomissionMask] += lifeCf[-1] * taxMult * np.power(inflRate, -1*years[decomissionMask]) C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\plugins\TEAL\src\main.py:450: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. 
projCf[decomissionMask] += lifeCf[-1] * taxMult * np.power(inflRate, -1*years[decomissionMask]) ``` ##### Do you have a suggested fix for the development team? **Describe how to Reproduce** Steps to reproduce the behavior: 1. 2. 3. 4. **Screenshots and Input Files** Please attach the input file(s) that generate this error. The simpler the input, the faster we can find the issue. **Platform (please complete the following information):** - OS: [e.g. iOS] - Version: [e.g. 22] - Dependencies Installation: [CONDA or PIP] ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or task? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) ------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [x] 1. If the issue is a defect, is the defect fixed? - [x] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [x] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [x] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)? - [x] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
1.0
[DEFECT] failure in windows test - -------- Defect Description -------- **Describe the defect** ##### What did you expect to see happen? https://github.com/idaholab/HERON/actions/runs/3659256063 to pass ##### What did you see instead? This error: ```-------------------------------------------------- There were 2 warnings during the simulation run: (1 time) <class 'DeprecationWarning'> "variables" node inputted but has been deprecated! Please list variables in the "inputs" and "outputs" nodes instead. This Warning will result in an error in RAVEN 3.0! (1 time) Nothing to write to CSV! Checking metadata ... -------------------------------------------------- ( 15.31 sec) SIMULATION : DEBUG -> Fri Dec 9 10:28:53 2022 ( 15.32 sec) SIMULATION : Message -> Run complete! C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\sklearn\base.py:338: UserWarning: Trying to unpickle estimator KMeans from version 0.24.2 when using version 1.0.2. This might lead to breaking code or invalid results. Use at your own risk. 
For more info please refer to: https://scikit-learn.org/stable/modules/model_persistence.html#security-maintainability-limitations UserWarning, C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\pyutilib\misc\import_file.py:11: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\threading.py", line 890, in _bootstrap self._bootstrap_inner() File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\threading.py", line 926, in _bootstrap_inner self.run() File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Runners\SharedMemoryRunner.py", line 115, in <lambda> self.thread = InterruptibleThread(target = lambda q, *arg : q.append(self.functionToRun(*arg)), File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\EnsembleModel.py", line 515, in evaluateSample returnValue = (Input,self._externalRun(Input, jobHandler)) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\EnsembleModel.py", line 683, in _externalRun iterationCount, jobHandler) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\EnsembleModel.py", line 744, in __advanceModel evaluation = modelToExecute['Instance'].evaluateSample.original_function(modelToExecute['Instance'], origInputList, samplerType, inputKwargs) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\ExternalModel.py", line 324, in evaluateSample result,instSelf = self._externalRun(inRun,) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\ravenframework\Models\ExternalModel.py", 
line 266, in _externalRun self.sim.run(externalSelf, InputDict) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\DispatchManager.py", line 725, in run runner.load_heron_lib(path) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\DispatchManager.py", line 76, in load_heron_lib case, components, sources = SerializationManager.load_heron_lib(path, retry=6) File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\SerializationManager.py", line 24, in load_heron_lib case, components, sources = pk.load(lib) File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\dill\_dill.py", line 373, in load return Unpickler(file, ignore=ignore, **kwds).load() File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\dill\_dill.py", line 646, in load obj = StockUnpickler.load(self) File "C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\dill\_dill.py", line 636, in find_class return StockUnpickler.find_class(self, module, name) File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\Cases.py", line 22, in <module> from HERON.src.validators.Factory import known as known_validators File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File 
"C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\validators\Factory.py", line 16, in <module> raven_path = hutils.get_raven_loc() File "C:\Users\cogljj\actions-runner-heron-main\_work\HERON\HERON\src\dispatch\..\_utils.py", line 47, in get_raven_loc traceback.print_stack() C:\Users\cogljj\Miniconda3\envs\raven_libraries_actions-runner-heron-main\lib\site-packages\sklearn\base.py:338: UserWarning: Trying to unpickle estimator KMeans from version 0.24.2 when using version 1.0.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to: https://scikit-learn.org/stable/modules/model_persistence.html#security-maintainability-limitations UserWarning, C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\plugins\TEAL\src\main.py:450: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. projCf[decomissionMask] += lifeCf[-1] * taxMult * np.power(inflRate, -1*years[decomissionMask]) C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\plugins\TEAL\src\main.py:450: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. projCf[decomissionMask] += lifeCf[-1] * taxMult * np.power(inflRate, -1*years[decomissionMask]) C:\Users\cogljj\actions-runner-heron-main\_work\HERON\raven\plugins\TEAL\src\main.py:450: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. 
projCf[decomissionMask] += lifeCf[-1] * taxMult * np.power(inflRate, -1*years[decomissionMask]) ``` ##### Do you have a suggested fix for the development team? **Describe how to Reproduce** Steps to reproduce the behavior: 1. 2. 3. 4. **Screenshots and Input Files** Please attach the input file(s) that generate this error. The simpler the input, the faster we can find the issue. **Platform (please complete the following information):** - OS: [e.g. iOS] - Version: [e.g. 22] - Dependencies Installation: [CONDA or PIP] ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or task? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) ------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [x] 1. If the issue is a defect, is the defect fixed? - [x] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [x] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [x] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)? - [x] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
defect
failure in windows test defect description describe the defect what did you expect to see happen to pass what did you see instead this error there were warnings during the simulation run time variables node inputted but has been deprecated please list variables in the inputs and outputs nodes instead this warning will result in an error in raven time nothing to write to csv checking metadata sec simulation debug fri dec sec simulation message run complete c users cogljj envs raven libraries actions runner heron main lib site packages sklearn base py userwarning trying to unpickle estimator kmeans from version when using version this might lead to breaking code or invalid results use at your own risk for more info please refer to userwarning c users cogljj envs raven libraries actions runner heron main lib site packages pyutilib misc import file py deprecationwarning the imp module is deprecated in favour of importlib see the module s documentation for alternative uses import imp file c users cogljj envs raven libraries actions runner heron main lib threading py line in bootstrap self bootstrap inner file c users cogljj envs raven libraries actions runner heron main lib threading py line in bootstrap inner self run file c users cogljj envs raven libraries actions runner heron main lib threading py line in run self target self args self kwargs file c users cogljj actions runner heron main work heron raven ravenframework runners sharedmemoryrunner py line in self thread interruptiblethread target lambda q arg q append self functiontorun arg file c users cogljj actions runner heron main work heron raven ravenframework models ensemblemodel py line in evaluatesample returnvalue input self externalrun input jobhandler file c users cogljj actions runner heron main work heron raven ravenframework models ensemblemodel py line in externalrun iterationcount jobhandler file c users cogljj actions runner heron main work heron raven ravenframework models ensemblemodel py line in 
advancemodel evaluation modeltoexecute evaluatesample original function modeltoexecute originputlist samplertype inputkwargs file c users cogljj actions runner heron main work heron raven ravenframework models externalmodel py line in evaluatesample result instself self externalrun inrun file c users cogljj actions runner heron main work heron raven ravenframework models externalmodel py line in externalrun self sim run externalself inputdict file c users cogljj actions runner heron main work heron heron src dispatchmanager py line in run runner load heron lib path file c users cogljj actions runner heron main work heron heron src dispatchmanager py line in load heron lib case components sources serializationmanager load heron lib path retry file c users cogljj actions runner heron main work heron heron src serializationmanager py line in load heron lib case components sources pk load lib file c users cogljj envs raven libraries actions runner heron main lib site packages dill dill py line in load return unpickler file ignore ignore kwds load file c users cogljj envs raven libraries actions runner heron main lib site packages dill dill py line in load obj stockunpickler load self file c users cogljj envs raven libraries actions runner heron main lib site packages dill dill py line in find class return stockunpickler find class self module name file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file c users cogljj actions runner heron main work heron heron src cases py line in from heron src validators factory import known as known validators file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file c users cogljj actions runner heron main work heron heron src validators factory py line in raven path hutils get raven loc file c users cogljj actions runner heron main work heron heron src dispatch utils py line in get raven loc traceback print stack c users cogljj envs raven libraries actions runner heron main lib site packages sklearn base py userwarning trying to unpickle estimator kmeans from version when using version this might lead to breaking code or invalid results use at your own risk for more info please refer to userwarning c users cogljj actions runner heron main work heron raven plugins teal src main py futurewarning using a non tuple sequence for multidimensional indexing is deprecated use arr instead of arr in the future this will be interpreted as an array index arr which will result either in an error or a different result projcf lifecf taxmult np power inflrate years c users cogljj actions runner heron main work heron raven plugins teal src main py futurewarning using a non tuple sequence for multidimensional indexing is deprecated use arr instead of arr in the future this will be interpreted as an array index arr which will result either in an error or a different result projcf lifecf taxmult np power inflrate years c users cogljj actions runner heron main work heron raven plugins teal src main py futurewarning using a non tuple sequence for multidimensional indexing is deprecated use arr instead of arr in the future this will be interpreted as an array index arr which will result either in an error or a different result projcf lifecf taxmult np power inflrate years do you have a suggested fix for the development team describe how to reproduce steps to reproduce the behavior screenshots and input files please attach the input file s that generate this error the simpler the input the faster we can find the issue platform please complete the following information os version dependencies installation for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
1
346,300
10,410,424,271
IssuesEvent
2019-09-13 11:21:51
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.twitch.tv - video or audio doesn't play
browser-firefox-mobile engine-gecko priority-critical
<!-- @browser: Firefox Mobile 67.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:67.0) Gecko/67.0 Firefox/67.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://www.twitch.tv/videos/478649005?filter=archives&sort=time **Browser / Version**: Firefox Mobile 67.0 **Operating System**: Android **Tested Another Browser**: Yes **Problem type**: Video or audio doesn't play **Description**: hurenscheisse **Steps to Reproduce**: nix <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190218094427</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.twitch.tv - video or audio doesn't play - <!-- @browser: Firefox Mobile 67.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:67.0) Gecko/67.0 Firefox/67.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://www.twitch.tv/videos/478649005?filter=archives&sort=time **Browser / Version**: Firefox Mobile 67.0 **Operating System**: Android **Tested Another Browser**: Yes **Problem type**: Video or audio doesn't play **Description**: hurenscheisse **Steps to Reproduce**: nix <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190218094427</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes problem type video or audio doesn t play description hurenscheisse steps to reproduce nix browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly from with ❤️
0
6,563
6,534,216,611
IssuesEvent
2017-08-31 09:49:06
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
reopened
General low level primitive for ciphers (AES-GCM being the first)
area-System.Security
# Rationale There is a general need for a number of ciphers for encryption. Todays mix of interfaces and classes has become a little disjointed. Also there is no support for AEAD style ciphers as they need the ability to provide extra authentication information. The current designs are also prone to allocation and these are hard to avoid due to the returning arrays. # Proposed API A general purpose abstract base class that will be implemented by concrete classes. This will allow for expansion and also by having a class rather than static methods we have the ability to make extension methods as well as hold state between calls. The API should allow for recycling of the class to allow for lower allocations (not needing a new instance each time, and to catch say unmanaged keys). Due to the often unmanaged nature of resources that are tracked the class should implement IDisposable ``` csharp public abstract class Cipher : IDisposable { public virtual int TagSize { get; } public virtual int IVSize { get; } public virtual int BlockSize { get; } public virtual bool SupportsAssociatedData { get; } public abstract void Init(ReadOnlySpan<byte> key, ReadOnlySpan<byte> iv); public abstract void Init(ReadOnlySpan<byte> iv); public abstract int Update(ReadOnlySpan<byte> input, Span<byte> output); public abstract int Finish(ReadOnlySpan<byte> input, Span<byte> output); public abstract void AddAssociatedData(ReadOnlySpan<byte> associatedData); public abstract int GetTag(Span<byte> span); public abstract void SetTag(ReadOnlySpan<byte> tagSpan); } ``` # Example Usage (the input/output source is a mythical span based stream like IO source) ``` csharp using (var cipher = new AesGcmCipher(bitsize: 256)) { cipher.Init(myKey, nonce); while (!inputSource.EOF) { var inputSpan = inputSource.ReadSpan(cipher.BlockSize); cipher.Update(inputSpan); outputSource.Write(inputSpan); } cipher.AddAssociatedData(extraInformation); cipher.Finish(finalBlockData); cipher.GetTag(tagData); } ``` # API Behaviour 1. If get tag is called before finish a [exception type?] should be thrown and the internal state should be set to invalid 1. If the tag is invalid on finish for decrypt it should be an exception thrown 1. Once finished is called, a call to anything other than one of the Init methods will throw 1. Once Init is called, a second call without "finishing" will throw 1. If the type expects an key supplied (a straight "new'd" up instance) if the Initial "Init" call only has an IV it will throw 1. If the type was generated say from a store based key and you attempt to change the key via Init and not just the IV it will throw 1. If get tag is not called before dispose or Init should an exception be thrown? To stop the user not collecting the tag by accident? Reference #7023 # Updates 1. Changed nonce to IV. 1. Added behaviour section 1. Removed the single input/output span cases from finish and update, they can just be extension methods 1. Changed a number of spans to readonlyspan as suggested by @bartonjs 1. Removed Reset, Init with IV should be used instead
True
General low level primitive for ciphers (AES-GCM being the first) - # Rationale There is a general need for a number of ciphers for encryption. Todays mix of interfaces and classes has become a little disjointed. Also there is no support for AEAD style ciphers as they need the ability to provide extra authentication information. The current designs are also prone to allocation and these are hard to avoid due to the returning arrays. # Proposed API A general purpose abstract base class that will be implemented by concrete classes. This will allow for expansion and also by having a class rather than static methods we have the ability to make extension methods as well as hold state between calls. The API should allow for recycling of the class to allow for lower allocations (not needing a new instance each time, and to catch say unmanaged keys). Due to the often unmanaged nature of resources that are tracked the class should implement IDisposable ``` csharp public abstract class Cipher : IDisposable { public virtual int TagSize { get; } public virtual int IVSize { get; } public virtual int BlockSize { get; } public virtual bool SupportsAssociatedData { get; } public abstract void Init(ReadOnlySpan<byte> key, ReadOnlySpan<byte> iv); public abstract void Init(ReadOnlySpan<byte> iv); public abstract int Update(ReadOnlySpan<byte> input, Span<byte> output); public abstract int Finish(ReadOnlySpan<byte> input, Span<byte> output); public abstract void AddAssociatedData(ReadOnlySpan<byte> associatedData); public abstract int GetTag(Span<byte> span); public abstract void SetTag(ReadOnlySpan<byte> tagSpan); } ``` # Example Usage (the input/output source is a mythical span based stream like IO source) ``` csharp using (var cipher = new AesGcmCipher(bitsize: 256)) { cipher.Init(myKey, nonce); while (!inputSource.EOF) { var inputSpan = inputSource.ReadSpan(cipher.BlockSize); cipher.Update(inputSpan); outputSource.Write(inputSpan); } cipher.AddAssociatedData(extraInformation); cipher.Finish(finalBlockData); cipher.GetTag(tagData); } ``` # API Behaviour 1. If get tag is called before finish a [exception type?] should be thrown and the internal state should be set to invalid 1. If the tag is invalid on finish for decrypt it should be an exception thrown 1. Once finished is called, a call to anything other than one of the Init methods will throw 1. Once Init is called, a second call without "finishing" will throw 1. If the type expects an key supplied (a straight "new'd" up instance) if the Initial "Init" call only has an IV it will throw 1. If the type was generated say from a store based key and you attempt to change the key via Init and not just the IV it will throw 1. If get tag is not called before dispose or Init should an exception be thrown? To stop the user not collecting the tag by accident? Reference #7023 # Updates 1. Changed nonce to IV. 1. Added behaviour section 1. Removed the single input/output span cases from finish and update, they can just be extension methods 1. Changed a number of spans to readonlyspan as suggested by @bartonjs 1. Removed Reset, Init with IV should be used instead
non_defect
general low level primitive for ciphers aes gcm being the first rationale there is a general need for a number of ciphers for encryption todays mix of interfaces and classes has become a little disjointed also there is no support for aead style ciphers as they need the ability to provide extra authentication information the current designs are also prone to allocation and these are hard to avoid due to the returning arrays proposed api a general purpose abstract base class that will be implemented by concrete classes this will allow for expansion and also by having a class rather than static methods we have the ability to make extension methods as well as hold state between calls the api should allow for recycling of the class to allow for lower allocations not needing a new instance each time and to catch say unmanaged keys due to the often unmanaged nature of resources that are tracked the class should implement idisposable csharp public abstract class cipher idisposable public virtual int tagsize get public virtual int ivsize get public virtual int blocksize get public virtual bool supportsassociateddata get public abstract void init readonlyspan key readonlyspan iv public abstract void init readonlyspan iv public abstract int update readonlyspan input span output public abstract int finish readonlyspan input span output public abstract void addassociateddata readonlyspan associateddata public abstract int gettag span span public abstract void settag readonlyspan tagspan example usage the input output source is a mythical span based stream like io source csharp using var cipher new aesgcmcipher bitsize cipher init mykey nonce while inputsource eof var inputspan inputsource readspan cipher blocksize cipher update inputspan outputsource write inputspan cipher addassociateddata extrainformation cipher finish finalblockdata cipher gettag tagdata api behaviour if get tag is called before finish a should be thrown and the internal state should be set to invalid if the tag is invalid on finish for decrypt it should be an exception thrown once finished is called a call to anything other than one of the init methods will throw once init is called a second call without finishing will throw if the type expects an key supplied a straight new d up instance if the initial init call only has an iv it will throw if the type was generated say from a store based key and you attempt to change the key via init and not just the iv it will throw if get tag is not called before dispose or init should an exception be thrown to stop the user not collecting the tag by accident reference updates changed nonce to iv added behaviour section removed the single input output span cases from finish and update they can just be extension methods changed a number of spans to readonlyspan as suggested by bartonjs removed reset init with iv should be used instead
0
450,815
31,994,006,137
IssuesEvent
2023-09-21 08:03:41
hammerdirt-analyst/qualite-deau
https://api.github.com/repos/hammerdirt-analyst/qualite-deau
closed
Invalid count values for 2022 samples: "nd"
documentation
Hello, There are missing values for Vernex in 2022. See the results of the data processing module: https://hammerdirt-analyst.github.io/qualite-deau/data_standardization.html Do not attach the correct values to this issue. Send the data to analyst@hammerdirt.ch If the rows can be excluded let us know here. Thanks.
1.0
Invalid count values for 2022 samples: "nd" - Hello, There are missing values for Vernex in 2022. See the results of the data processing module: https://hammerdirt-analyst.github.io/qualite-deau/data_standardization.html Do not attach the correct values to this issue. Send the data to analyst@hammerdirt.ch If the rows can be excluded let us know here. Thanks.
non_defect
invalid count values for samples nd hello there are missing values for vernex in see the results of the data processing module do not attach the correct values to this issue send the data to analyst hammerdirt ch if the rows can be excluded let us know here thanks
0
29,570
5,723,484,625
IssuesEvent
2017-04-20 12:23:17
nci-ats/fs-middlelayer-api
https://api.github.com/repos/nci-ats/fs-middlelayer-api
closed
Update error message for issue getting application from DB
Defect Subtask
Should just respond with a 404, shouldn't include message that application not found in DB.
1.0
Update error message for issue getting application from DB - Should just respond with a 404, shouldn't include message that application not found in DB.
defect
update error message for issue getting application from db should just respond with a shouldn t include message that application not found in db
1
2,315
2,607,871,575
IssuesEvent
2015-02-26 00:00:44
chrsmithdemos/google-styleguide
https://api.github.com/repos/chrsmithdemos/google-styleguide
opened
intellij-java-google-style.xml puts static imports last
auto-migrated Priority-Medium Type-Defect
``` According to http://google-styleguide.googlecode.com/svn/trunk/javaguide.html#s3.3.3-import-o rdering-and-spacing "Import statements are divided into the following groups, in this order, with each group separated by a single blank line: 1. All static imports in a single group ..." static imports should come first. But in intellij-java-google-style.xml they are listed as last: https://code.google.com/p/google-styleguide/source/browse/trunk/intellij-java-go ogle-style.xml#251 Steps to reproduce 1. Open the attached Test.java in IntelliJ 2. In IntelliJ IDEA, do Reformat Code (CTRL+ALT+L on Windows) 3. Check "Organize Import" and "Rearrange Entries" 4. Click OK What is the expected output? package test; import static java.lang.System.out; import java.text.NumberFormat; public class Test { public void test() { out.println(NumberFormat.getInstance().format(Math.PI)); } } What do you see instead? package test; import java.text.NumberFormat; import static java.lang.System.out; public class Test { public void test() { out.println(NumberFormat.getInstance().format(Math.PI)); } } What version of the product are you using? * r134 of intellij-java-google-style.xml * IntelliJ IDEA 13.03 * JRE 1.7.0_40-b43 x86 On what operating system? * Windows 7 Enterprise (V 6.1 SP1) Please provide any additional information below. I attached the corrected intellij-java-google-style.xml to places the static imports first as per the code style. ``` ----- Original issue reported on code.google.com by `florian....@gmail.com` on 8 Jul 2014 at 5:39 Attachments: * [Test.java](https://storage.googleapis.com/google-code-attachments/google-styleguide/issue-26/comment-0/Test.java) * [intellij-java-google-style.xml](https://storage.googleapis.com/google-code-attachments/google-styleguide/issue-26/comment-0/intellij-java-google-style.xml)
1.0
intellij-java-google-style.xml puts static imports last - ``` According to http://google-styleguide.googlecode.com/svn/trunk/javaguide.html#s3.3.3-import-o rdering-and-spacing "Import statements are divided into the following groups, in this order, with each group separated by a single blank line: 1. All static imports in a single group ..." static imports should come first. But in intellij-java-google-style.xml they are listed as last: https://code.google.com/p/google-styleguide/source/browse/trunk/intellij-java-go ogle-style.xml#251 Steps to reproduce 1. Open the attached Test.java in IntelliJ 2. In IntelliJ IDEA, do Reformat Code (CTRL+ALT+L on Windows) 3. Check "Organize Import" and "Rearrange Entries" 4. Click OK What is the expected output? package test; import static java.lang.System.out; import java.text.NumberFormat; public class Test { public void test() { out.println(NumberFormat.getInstance().format(Math.PI)); } } What do you see instead? package test; import java.text.NumberFormat; import static java.lang.System.out; public class Test { public void test() { out.println(NumberFormat.getInstance().format(Math.PI)); } } What version of the product are you using? * r134 of intellij-java-google-style.xml * IntelliJ IDEA 13.03 * JRE 1.7.0_40-b43 x86 On what operating system? * Windows 7 Enterprise (V 6.1 SP1) Please provide any additional information below. I attached the corrected intellij-java-google-style.xml to places the static imports first as per the code style. ``` ----- Original issue reported on code.google.com by `florian....@gmail.com` on 8 Jul 2014 at 5:39 Attachments: * [Test.java](https://storage.googleapis.com/google-code-attachments/google-styleguide/issue-26/comment-0/Test.java) * [intellij-java-google-style.xml](https://storage.googleapis.com/google-code-attachments/google-styleguide/issue-26/comment-0/intellij-java-google-style.xml)
defect
intellij java google style xml puts static imports last according to rdering and spacing import statements are divided into the following groups in this order with each group separated by a single blank line all static imports in a single group static imports should come first but in intellij java google style xml they are listed as last ogle style xml steps to reproduce open the attached test java in intellij in intellij idea do reformat code ctrl alt l on windows check organize import and rearrange entries click ok what is the expected output package test import static java lang system out import java text numberformat public class test public void test out println numberformat getinstance format math pi what do you see instead package test import java text numberformat import static java lang system out public class test public void test out println numberformat getinstance format math pi what version of the product are you using of intellij java google style xml intellij idea jre on what operating system windows enterprise v please provide any additional information below i attached the corrected intellij java google style xml to places the static imports first as per the code style original issue reported on code google com by florian gmail com on jul at attachments
1
18,595
3,697,882,224
IssuesEvent
2016-02-27 23:41:34
agda/agda
https://api.github.com/repos/agda/agda
closed
test-suite fails on Cabal sandbox
enhancement test-suite
``` $ cabal sandbox init $ cabal install alex $ cabal install happy $ make install-bin $ make install-fix-agda-whitespace $ make test ... ====================================================================== ===================== Suite of successfull tests ===================== ====================================================================== make[1]: Entering directory `/tmp/agda-master/test/Common' Bool.agda make[1]: /tmp/agda-master/dist-2.5.0/build/agda/agda: Command not found make[1]: *** [Bool.test] Error 127 make[1]: Leaving directory `/tmp/agda-master/test/Common' make: *** [succeed] Error 2 ```
1.0
test-suite fails on Cabal sandbox - ``` $ cabal sandbox init $ cabal install alex $ cabal install happy $ make install-bin $ make install-fix-agda-whitespace $ make test ... ====================================================================== ===================== Suite of successfull tests ===================== ====================================================================== make[1]: Entering directory `/tmp/agda-master/test/Common' Bool.agda make[1]: /tmp/agda-master/dist-2.5.0/build/agda/agda: Command not found make[1]: *** [Bool.test] Error 127 make[1]: Leaving directory `/tmp/agda-master/test/Common' make: *** [succeed] Error 2 ```
non_defect
test suite fails on cabal sandbox cabal sandbox init cabal install alex cabal install happy make install bin make install fix agda whitespace make test suite of successfull tests make entering directory tmp agda master test common bool agda make tmp agda master dist build agda agda command not found make error make leaving directory tmp agda master test common make error
0
32,793
6,942,530,368
IssuesEvent
2017-12-05 00:28:40
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
opened
dd(Type::buildAll()) timeout error
Defect
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: latest master ### What you did Debugging type building (static methods of Type class): ```php dd(Type::build('sth')) dd(Type::buildAll()) ``` ### What happened Runtime timeout after many minutes of nothing happening (probably some inception issue). ### What you expected to happen Output normal dd() debug output
1.0
dd(Type::buildAll()) timeout error - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: latest master ### What you did Debugging type building (static methods of Type class): ```php dd(Type::build('sth')) dd(Type::buildAll()) ``` ### What happened Runtime timeout after many minutes of nothing happening (probably some inception issue). ### What you expected to happen Output normal dd() debug output
defect
dd type buildall timeout error this is a multiple allowed bug enhancement feature discussion rfc cakephp version latest master what you did debugging type building static methods of type class php dd type build sth dd type buildall what happened runtime timeout after many minutes of nothing happening probably some inception issue what you expected to happen output normal dd debug output
1
31,248
25,480,398,187
IssuesEvent
2022-11-25 19:51:57
Matico-Platform/matico
https://api.github.com/repos/Matico-Platform/matico
closed
Setup husky + Prettier / cargo fmt for the monorepo
infrastructure
What I have used in the past which has worked pretty well is - [husky](https://typicode.github.io/husky/#/) to manage the pre-commit hooks. - [Prettier](https://prettier.io/) to format typescript. - cargo fmt to format rust code. Though it looks like there is a [community plugin](https://github.com/jinxdash/prettier-plugin-rust) for Prettier to allow it to work with rust as well, so perhaps we use that for both While we are thinking about dev tooling as well, it might be worth considering using something akin to [comittizen](https://github.com/commitizen/cz-cli) to standardize commit messages and make it easier to produce release notes in future.
1.0
Setup husky + Prettier / cargo fmt for the monorepo - What I have used in the past which has worked pretty well is - [husky](https://typicode.github.io/husky/#/) to manage the pre-commit hooks. - [Prettier](https://prettier.io/) to format typescript. - cargo fmt to format rust code. Though it looks like there is a [community plugin](https://github.com/jinxdash/prettier-plugin-rust) for Prettier to allow it to work with rust as well, so perhaps we use that for both While we are thinking about dev tooling as well, it might be worth considering using something akin to [comittizen](https://github.com/commitizen/cz-cli) to standardize commit messages and make it easier to produce release notes in future.
non_defect
setup husky prettier cargo fmt for the monorepo what i have used in the past which has worked pretty well is to manage the pre commit hooks to format typescript cargo fmt to format rust code though it looks like there is a for prettier to allow it to work with rust as well so perhaps we use that for both while we are thinking about dev tooling as well it might be worth considering using something akin to to standardize commit messages and make it easier to produce release notes in future
0
188,602
6,778,111,204
IssuesEvent
2017-10-28 05:54:28
spring-projects/spring-boot
https://api.github.com/repos/spring-projects/spring-boot
closed
Tie the versions in the docs to those defined in spring-boot-dependencies
priority: normal type: documentation
We sometimes reference versions in the documentation that should match those defined in `spring-boot-dependencies` but it's a manual process to keep them in sync. We should see if there's a way to keep them in sync automatically. One possibility would be a custom Asciidoctor extension that exposes all of the version properties from `spring-boot-dependencies` as Asciidoctor attributes that could then be referenced, for example `{{spring.version}}`.
1.0
Tie the versions in the docs to those defined in spring-boot-dependencies - We sometimes reference versions in the documentation that should match those defined in `spring-boot-dependencies` but it's a manual process to keep them in sync. We should see if there's a way to keep them in sync automatically. One possibility would be a custom Asciidoctor extension that exposes all of the version properties from `spring-boot-dependencies` as Asciidoctor attributes that could then be referenced, for example `{{spring.version}}`.
non_defect
tie the versions in the docs to those defined in spring boot dependencies we sometimes reference versions in the documentation that should match those defined in spring boot dependencies but it s a manual process to keep them in sync we should see if there s a way to keep them in sync automatically one possibility would be a custom asciidoctor extension that exposes all of the version properties from spring boot dependencies as asciidoctor attributes that could then be referenced for example spring version
0
19,487
13,255,331,719
IssuesEvent
2020-08-20 10:44:57
skypyproject/skypy
https://api.github.com/repos/skypyproject/skypy
closed
Can we release SkyPy also on conda?
infrastructure installation release v0.2 hack
The following page seems to give guidance for doing this? https://docs.conda.io/projects/conda-build/en/latest/user-guide/tutorials/build-pkgs-skeleton.html Duncan Macleod is our local expert for this (Ian Harrison also knows him well) so would be a good point to ask if this doesn't go easily! .... A future improvement to all of this would be tie this into Travis and have it all automated. We managed to set Travis up (in our "PyCBC" repository) so that when a new release is tagged it automatically gets uploaded to pypi, and then something does something and it ends up on conda as well.
1.0
Can we release SkyPy also on conda? - The following page seems to give guidance for doing this? https://docs.conda.io/projects/conda-build/en/latest/user-guide/tutorials/build-pkgs-skeleton.html Duncan Macleod is our local expert for this (Ian Harrison also knows him well) so would be a good point to ask if this doesn't go easily! .... A future improvement to all of this would be tie this into Travis and have it all automated. We managed to set Travis up (in our "PyCBC" repository) so that when a new release is tagged it automatically gets uploaded to pypi, and then something does something and it ends up on conda as well.
non_defect
can we release skypy also on conda the following page seems to give guidance for doing this duncan macleod is our local expert for this ian harrison also knows him well so would be a good point to ask if this doesn t go easily a future improvement to all of this would be tie this into travis and have it all automated we managed to set travis up in our pycbc repository so that when a new release is tagged it automatically gets uploaded to pypi and then something does something and it ends up on conda as well
0
591,030
17,793,365,267
IssuesEvent
2021-08-31 18:56:11
airqo-platform/AirQo-frontend
https://api.github.com/repos/airqo-platform/AirQo-frontend
closed
Map nodes do not link to registry
priority-high
**Is your feature request related to a problem? Please describe.** cursor changes to a pointer when passing over a device but cannot click. THis should be linked to device registry. Better still Single click creates a popup window which shows key data. By clicking on a device a brief summary of the current status can be seen Device id Description Sensor correlation Time until next maintenance Time since last downtime Link to device registry: takes the user to the device entry in the registry Link to location registry: takes the user to the location entry in the registry
1.0
Map nodes do not link to registry - **Is your feature request related to a problem? Please describe.** cursor changes to a pointer when passing over a device but cannot click. THis should be linked to device registry. Better still Single click creates a popup window which shows key data. By clicking on a device a brief summary of the current status can be seen Device id Description Sensor correlation Time until next maintenance Time since last downtime Link to device registry: takes the user to the device entry in the registry Link to location registry: takes the user to the location entry in the registry
non_defect
map nodes do not link to registry is your feature request related to a problem please describe cursor changes to a pointer when passing over a device but cannot click this should be linked to device registry better still single click creates a popup window which shows key data by clicking on a device a brief summary of the current status can be seen device id description sensor correlation time until next maintenance time since last downtime link to device registry takes the user to the device entry in the registry link to location registry takes the user to the location entry in the registry
0
57,350
15,731,261,198
IssuesEvent
2021-03-29 16:51:41
danmar/testissues
https://api.github.com/repos/danmar/testissues
opened
false positive:: division by zero (Trac #364)
False positive Incomplete Migration Migrated from Trac aggro80 defect
Migrated from https://trac.cppcheck.net/ticket/364 ```json { "status": "closed", "changetime": "2009-06-05T21:39:03", "description": "{{{\n#include <stdio.h>\n#define SR_MULT (11*12)\n#define A(x) (x) ? (SR_MULT/x) : 0\nstatic const unsigned char sr_adc_mult_table[] = {\n A(2), A(2), A(12), A(12), A(0), A(0), A(3), A(1),\n A(2), A(2), A(11), A(11), A(0), A(0), A(0), A(1)\n};\n\nint main()\n{\n\tunsigned int ui = 0;\n\tfor( ui = 0; ui < 16; ui++)\n\t\tprintf(\"%i \",sr_adc_mult_table[ui]);\n\treturn 0;\n}\n\n\n}}}\n\n\ncppcheck says:\n\n[test.c:5]: (error) Division by zero\n[test.c:6]: (error) Division by zero\n\n\nthis is from a bugreport of the linux kernel:\n\nhttp://bugzilla.kernel.org/show_bug.cgi?id=13445\n\nBest regards\n\nMartin\n", "reporter": "ettlmartin", "cc": "", "resolution": "fixed", "_ts": "1244237943000000", "component": "False positive", "summary": "false positive:: division by zero", "priority": "", "keywords": "", "time": "2009-06-05T19:25:06", "milestone": "1.33", "owner": "aggro80", "type": "defect" } ```
1.0
false positive:: division by zero (Trac #364) - Migrated from https://trac.cppcheck.net/ticket/364 ```json { "status": "closed", "changetime": "2009-06-05T21:39:03", "description": "{{{\n#include <stdio.h>\n#define SR_MULT (11*12)\n#define A(x) (x) ? (SR_MULT/x) : 0\nstatic const unsigned char sr_adc_mult_table[] = {\n A(2), A(2), A(12), A(12), A(0), A(0), A(3), A(1),\n A(2), A(2), A(11), A(11), A(0), A(0), A(0), A(1)\n};\n\nint main()\n{\n\tunsigned int ui = 0;\n\tfor( ui = 0; ui < 16; ui++)\n\t\tprintf(\"%i \",sr_adc_mult_table[ui]);\n\treturn 0;\n}\n\n\n}}}\n\n\ncppcheck says:\n\n[test.c:5]: (error) Division by zero\n[test.c:6]: (error) Division by zero\n\n\nthis is from a bugreport of the linux kernel:\n\nhttp://bugzilla.kernel.org/show_bug.cgi?id=13445\n\nBest regards\n\nMartin\n", "reporter": "ettlmartin", "cc": "", "resolution": "fixed", "_ts": "1244237943000000", "component": "False positive", "summary": "false positive:: division by zero", "priority": "", "keywords": "", "time": "2009-06-05T19:25:06", "milestone": "1.33", "owner": "aggro80", "type": "defect" } ```
defect
false positive division by zero trac migrated from json status closed changetime description n include n define sr mult n define a x x sr mult x nstatic const unsigned char sr adc mult table n a a a a a a a a n a a a a a a a a n n nint main n n tunsigned int ui n tfor ui ui ui n t tprintf i sr adc mult table n treturn n n n n n n ncppcheck says n n error division by zero n error division by zero n n nthis is from a bugreport of the linux kernel n n regards n nmartin n reporter ettlmartin cc resolution fixed ts component false positive summary false positive division by zero priority keywords time milestone owner type defect
1
87,130
10,532,715,671
IssuesEvent
2019-10-01 11:25:59
shivammathur/setup-php
https://api.github.com/repos/shivammathur/setup-php
opened
Add examples of projects using this GitHub Action.
Hacktoberfest documentation help wanted
Add example section in the [README](https://github.com/shivammathur/setup-php/blob/master/README.md) and workflow file for the example you are adding in example directory. Create it you are the first one to add. To add example workflow for Laravel, add in README ## Example - [Laravel](examples/laravel.yaml)
1.0
Add examples of projects using this GitHub Action. - Add example section in the [README](https://github.com/shivammathur/setup-php/blob/master/README.md) and workflow file for the example you are adding in example directory. Create it you are the first one to add. To add example workflow for Laravel, add in README ## Example - [Laravel](examples/laravel.yaml)
non_defect
add examples of projects using this github action add example section in the and workflow file for the example you are adding in example directory create it you are the first one to add to add example workflow for laravel add in readme example examples laravel yaml
0
191,156
14,593,420,463
IssuesEvent
2020-12-19 22:43:31
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
alexcrownus/fabcar: vendor/github.com/google/certificate-transparency-go/fixchain/fix_and_log_test.go; 19 LoC
fresh small test vendored
Found a possible issue in [alexcrownus/fabcar](https://www.github.com/alexcrownus/fabcar) at [vendor/github.com/google/certificate-transparency-go/fixchain/fix_and_log_test.go](https://github.com/alexcrownus/fabcar/blob/8d90c87788a72b2a497f2df6f651543827aaf69b/vendor/github.com/google/certificate-transparency-go/fixchain/fix_and_log_test.go#L326-L344) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to test at line 341 may start a goroutine [Click here to see the code in its original context.](https://github.com/alexcrownus/fabcar/blob/8d90c87788a72b2a497f2df6f651543827aaf69b/vendor/github.com/google/certificate-transparency-go/fixchain/fix_and_log_test.go#L326-L344) <details> <summary>Click here to show the 19 line(s) of Go which triggered the analyzer.</summary> ```go for i, test := range fixAndLogQueueTests { f := &Fixer{toFix: make(chan *toFix)} c := &http.Client{Transport: &testRoundTripper{}} logClient, err := client.New(test.url, c, jsonclient.Options{}) if err != nil { t.Fatalf("failed to create LogClient: %v", err) } l := &Logger{ ctx: ctx, client: logClient, postCertCache: newLockedMap(), } fl := &FixAndLog{fixer: f, chains: make(chan []*x509.Certificate), logger: l, done: newLockedMap()} fl.wg.Add(1) go testFixAndLogQueueChain(t, i, &test, fl) fl.QueueChain(extractTestChain(t, i, test.chain)) fl.Wait() } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 8d90c87788a72b2a497f2df6f651543827aaf69b
1.0
alexcrownus/fabcar: vendor/github.com/google/certificate-transparency-go/fixchain/fix_and_log_test.go; 19 LoC - Found a possible issue in [alexcrownus/fabcar](https://www.github.com/alexcrownus/fabcar) at [vendor/github.com/google/certificate-transparency-go/fixchain/fix_and_log_test.go](https://github.com/alexcrownus/fabcar/blob/8d90c87788a72b2a497f2df6f651543827aaf69b/vendor/github.com/google/certificate-transparency-go/fixchain/fix_and_log_test.go#L326-L344) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to test at line 341 may start a goroutine [Click here to see the code in its original context.](https://github.com/alexcrownus/fabcar/blob/8d90c87788a72b2a497f2df6f651543827aaf69b/vendor/github.com/google/certificate-transparency-go/fixchain/fix_and_log_test.go#L326-L344) <details> <summary>Click here to show the 19 line(s) of Go which triggered the analyzer.</summary> ```go for i, test := range fixAndLogQueueTests { f := &Fixer{toFix: make(chan *toFix)} c := &http.Client{Transport: &testRoundTripper{}} logClient, err := client.New(test.url, c, jsonclient.Options{}) if err != nil { t.Fatalf("failed to create LogClient: %v", err) } l := &Logger{ ctx: ctx, client: logClient, postCertCache: newLockedMap(), } fl := &FixAndLog{fixer: f, chains: make(chan []*x509.Certificate), logger: l, done: newLockedMap()} fl.wg.Add(1) go testFixAndLogQueueChain(t, i, &test, fl) fl.QueueChain(extractTestChain(t, i, test.chain)) fl.Wait() } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. 
commit ID: 8d90c87788a72b2a497f2df6f651543827aaf69b
non_defect
alexcrownus fabcar vendor github com google certificate transparency go fixchain fix and log test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to test at line may start a goroutine click here to show the line s of go which triggered the analyzer go for i test range fixandlogqueuetests f fixer tofix make chan tofix c http client transport testroundtripper logclient err client new test url c jsonclient options if err nil t fatalf failed to create logclient v err l logger ctx ctx client logclient postcertcache newlockedmap fl fixandlog fixer f chains make chan certificate logger l done newlockedmap fl wg add go testfixandlogqueuechain t i test fl fl queuechain extracttestchain t i test chain fl wait leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
1,196
3,697,266,488
IssuesEvent
2016-02-27 15:17:29
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
enforce_autocommit_on_reads
ADMIN CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR
## WHY Since 3449ab0f598db7bd537f1065983a0594927122d8 ( #438 ) ProxySQL tracks the value of autocommit and enforces it in database connection. This implementation had the drawback that if a client sends `set autocommit=0` , and read/write split is implemented too, transactions are opened on slaves as pointed in #469 , and therefore the feature was reverted. Now ProxySQL doesn't track anymore the value of autocommit, and this means that statements could be executed in autocommit mode when the client assumes that autocommit is OFF . This is problematic for application and libraries that do not explicitly starts transactions. See: - http://docs.sqlalchemy.org/en/latest/core/connections.html#understanding-autocommit - https://www.python.org/dev/peps/pep-0249/#commit ## WHAT * [ ] add a new variable `mysql-enforce_autocommit_on_reads` * [ ] if the variable is set to `false` (default) and a `SELECT` (not `FOR UPDATE`) is executed, the client value of `autocommit` is not enforced into the database connection * [ ] in all the other cases, the client value of `autocommit`enforced into the database connection
1.0
enforce_autocommit_on_reads - ## WHY Since 3449ab0f598db7bd537f1065983a0594927122d8 ( #438 ) ProxySQL tracks the value of autocommit and enforces it in database connection. This implementation had the drawback that if a client sends `set autocommit=0` , and read/write split is implemented too, transactions are opened on slaves as pointed in #469 , and therefore the feature was reverted. Now ProxySQL doesn't track anymore the value of autocommit, and this means that statements could be executed in autocommit mode when the client assumes that autocommit is OFF . This is problematic for application and libraries that do not explicitly starts transactions. See: - http://docs.sqlalchemy.org/en/latest/core/connections.html#understanding-autocommit - https://www.python.org/dev/peps/pep-0249/#commit ## WHAT * [ ] add a new variable `mysql-enforce_autocommit_on_reads` * [ ] if the variable is set to `false` (default) and a `SELECT` (not `FOR UPDATE`) is executed, the client value of `autocommit` is not enforced into the database connection * [ ] in all the other cases, the client value of `autocommit`enforced into the database connection
non_defect
enforce autocommit on reads why since proxysql tracks the value of autocommit and enforces it in database connection this implementation had the drawback that if a client sends set autocommit and read write split is implemented too transactions are opened on slaves as pointed in and therefore the feature was reverted now proxysql doesn t track anymore the value of autocommit and this means that statements could be executed in autocommit mode when the client assumes that autocommit is off this is problematic for application and libraries that do not explicitly starts transactions see what add a new variable mysql enforce autocommit on reads if the variable is set to false default and a select not for update is executed the client value of autocommit is not enforced into the database connection in all the other cases the client value of autocommit enforced into the database connection
0
38,124
8,664,508,998
IssuesEvent
2018-11-28 20:23:23
HelenOS/helenos
https://api.github.com/repos/HelenOS/helenos
closed
HelenOS/ppc32 non-debug does not boot (in Qemu)
C: helenos/kernel/ppc32 P: major defect
**Reported by jermar on 29 Jul 2009 15:19 UTC** HelenOS/ppc32 non-debug does not boot (in Qemu). The last displayed message is the loader's: Booting the kernle... The debug version boots fine.
1.0
HelenOS/ppc32 non-debug does not boot (in Qemu) - **Reported by jermar on 29 Jul 2009 15:19 UTC** HelenOS/ppc32 non-debug does not boot (in Qemu). The last displayed message is the loader's: Booting the kernle... The debug version boots fine.
defect
helenos non debug does not boot in qemu reported by jermar on jul utc helenos non debug does not boot in qemu the last displayed message is the loader s booting the kernle the debug version boots fine
1
140,559
18,903,135,287
IssuesEvent
2021-11-16 05:04:18
three11/react-template-ts
https://api.github.com/repos/three11/react-template-ts
closed
CVE-2021-3918 (High) detected in json-schema-0.3.0.tgz
security vulnerability
## CVE-2021-3918 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.3.0.tgz</b></p></summary> <p>JSON Schema validation and specifications</p> <p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.3.0.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.3.0.tgz</a></p> <p>Path to dependency file: react-template-ts/package.json</p> <p>Path to vulnerable library: react-template-ts/node_modules/json-schema/package.json</p> <p> Dependency Hierarchy: - workbox-cli-6.4.1.tgz (Root Library) - workbox-build-6.4.1.tgz - better-ajv-errors-0.2.7.tgz - :x: **json-schema-0.3.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/three11/react-template-ts/commit/7104b5d87996951a39a8f2b4fccf087c04e8cf0a">7104b5d87996951a39a8f2b4fccf087c04e8cf0a</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') <p>Publish Date: 2021-11-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3918 (High) detected in json-schema-0.3.0.tgz - ## CVE-2021-3918 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.3.0.tgz</b></p></summary> <p>JSON Schema validation and specifications</p> <p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.3.0.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.3.0.tgz</a></p> <p>Path to dependency file: react-template-ts/package.json</p> <p>Path to vulnerable library: react-template-ts/node_modules/json-schema/package.json</p> <p> Dependency Hierarchy: - workbox-cli-6.4.1.tgz (Root Library) - workbox-build-6.4.1.tgz - better-ajv-errors-0.2.7.tgz - :x: **json-schema-0.3.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/three11/react-template-ts/commit/7104b5d87996951a39a8f2b4fccf087c04e8cf0a">7104b5d87996951a39a8f2b4fccf087c04e8cf0a</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') <p>Publish Date: 2021-11-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a 
href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in json schema tgz cve high severity vulnerability vulnerable library json schema tgz json schema validation and specifications library home page a href path to dependency file react template ts package json path to vulnerable library react template ts node modules json schema package json dependency hierarchy workbox cli tgz root library workbox build tgz better ajv errors tgz x json schema tgz vulnerable library found in head commit a href vulnerability details json schema is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource
0
77,622
27,081,072,946
IssuesEvent
2023-02-14 14:04:31
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Messages stuck or very slow at "Encrypting your message..."
T-Defect X-Regression S-Major A-E2EE O-Frequent
### Steps to reproduce 1. Send a message ### Outcome #### What did you expect? The message to send instantanously #### What happened instead? It's very slow, above the 2mn mark! ![Screenshot 2023-01-04 at 10 23 10](https://user-images.githubusercontent.com/769871/210534781-1b155e20-a945-4d70-8e30-a696aef444b1.png) ### Operating system _No response_ ### Application version _No response_ ### How did you install the app? _No response_ ### Homeserver _No response_ ### Will you send logs? Yes
1.0
Messages stuck or very slow at "Encrypting your message..." - ### Steps to reproduce 1. Send a message ### Outcome #### What did you expect? The message to send instantanously #### What happened instead? It's very slow, above the 2mn mark! ![Screenshot 2023-01-04 at 10 23 10](https://user-images.githubusercontent.com/769871/210534781-1b155e20-a945-4d70-8e30-a696aef444b1.png) ### Operating system _No response_ ### Application version _No response_ ### How did you install the app? _No response_ ### Homeserver _No response_ ### Will you send logs? Yes
defect
messages stuck or very slow at encrypting your message steps to reproduce send a message outcome what did you expect the message to send instantanously what happened instead it s very slow above the mark operating system no response application version no response how did you install the app no response homeserver no response will you send logs yes
1
137,191
5,299,462,161
IssuesEvent
2017-02-10 00:14:42
Lombiq/DotNest-Support
https://api.github.com/repos/Lombiq/DotNest-Support
opened
Our own (setup) recipes
Priority: Minor Type: New Feature
DotNest-specific useful recipes for various cases, primarily (but not necessarily) for setup: - Ecommerce - Associativy package - Common package
1.0
Our own (setup) recipes - DotNest-specific useful recipes for various cases, primarily (but not necessarily) for setup: - Ecommerce - Associativy package - Common package
non_defect
our own setup recipes dotnest specific useful recipes for various cases primarily but not necessarily for setup ecommerce associativy package common package
0
7,623
2,603,738,634
IssuesEvent
2015-02-24 17:40:28
chrsmith/bwapi
https://api.github.com/repos/chrsmith/bwapi
closed
Map-Specific BO
auto-migrated Priority-Medium Type-Enhancement
``` Needs to choose a build order based on specific maps; so First priority: Map-specific Second priority: Matchup-specific(race) Third priority: Matchup-unknown Should also detect more than one opponent, and maybe allies. ``` ----- Original issue reported on code.google.com by `AHeinerm` on 13 Oct 2008 at 5:03
1.0
Map-Specific BO - ``` Needs to choose a build order based on specific maps; so First priority: Map-specific Second priority: Matchup-specific(race) Third priority: Matchup-unknown Should also detect more than one opponent, and maybe allies. ``` ----- Original issue reported on code.google.com by `AHeinerm` on 13 Oct 2008 at 5:03
non_defect
map specific bo needs to choose a build order based on specific maps so first priority map specific second priority matchup specific race third priority matchup unknown should also detect more than one opponent and maybe allies original issue reported on code google com by aheinerm on oct at
0
7,198
2,610,357,414
IssuesEvent
2015-02-26 19:55:45
chrsmith/scribefire-chrome
https://api.github.com/repos/chrsmith/scribefire-chrome
opened
Post published with main content
auto-migrated Priority-Medium Type-Defect
``` What's the problem? Published an article to my wordpress blog (self-hosted) and everything I had filled in was correctly published except the content itself (that is the main body) What browser are you using? Firefox 7.0.1 What version of ScribeFire are you running? Scribefire 4 ``` ----- Original issue reported on code.google.com by `just2...@gmail.com` on 6 Nov 2011 at 3:17
1.0
Post published with main content - ``` What's the problem? Published an article to my wordpress blog (self-hosted) and everything I had filled in was correctly published except the content itself (that is the main body) What browser are you using? Firefox 7.0.1 What version of ScribeFire are you running? Scribefire 4 ``` ----- Original issue reported on code.google.com by `just2...@gmail.com` on 6 Nov 2011 at 3:17
defect
post published with main content what s the problem published an article to my wordpress blog self hosted and everything i had filled in was correctly published except the content itself that is the main body what browser are you using firefox what version of scribefire are you running scribefire original issue reported on code google com by gmail com on nov at
1
531,842
15,526,407,476
IssuesEvent
2021-03-13 01:07:12
PMEAL/OpenPNM
https://api.github.com/repos/PMEAL/OpenPNM
opened
Should we test the output of examples notebooks?
discussion low priority maintenance
Every now and then, we're sprinkling the notebooks with `# NBVAL_IGNORE_OUTPUT`. It makes me wonder (as @jgostick once suggested) that maybe we shouldn't really check the output of examples. As long as they don't fail, it should be good. The only downside is that we lose those "few" notebooks that indeed help us catch inconsistencies (like those related to effective properties, etc.). However, we could also add them as integration tests. What do you guys think?
1.0
Should we test the output of examples notebooks? - Every now and then, we're sprinkling the notebooks with `# NBVAL_IGNORE_OUTPUT`. It makes me wonder (as @jgostick once suggested) that maybe we shouldn't really check the output of examples. As long as they don't fail, it should be good. The only downside is that we lose those "few" notebooks that indeed help us catch inconsistencies (like those related to effective properties, etc.). However, we could also add them as integration tests. What do you guys think?
non_defect
should we test the output of examples notebooks every now and then we re sprinkling the notebooks with nbval ignore output it makes me wonder as jgostick once suggested that maybe we shouldn t really check the output of examples as long as they don t fail it should be good the only downside is that we lose those few notebooks that indeed help us catch inconsistencies like those related to effective properties etc however we could also add them as integration tests what do you guys think
0
47,987
13,067,362,619
IssuesEvent
2020-07-31 00:13:11
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
MuonGun Track.Harvest bug (Trac #1583)
Migrated from Trac combo simulation defect
Energy deposited at some point along the track sometimes returns nonsensical negative energy. File below (located at DESY Zeuthen) contains I3MCTree and MMCTrackList for several example events. /lustre/fs11/group/icecube/blotsd/muon_gun/muon_gun_testcase.i3.bz2 Relevant information about the file: **Generation** model.flux.max_multiplicity = 1 spectrum = MuonGun.OffsetPowerLaw(5,100*I3Units.TeV, 1*I3Units.GeV,1*I3Units.TeV) outer = MuonGun.Cylinder(1600*I3Units.m, 800*I3Units.m) model = MuonGun.load_model('GaisserH4a_atmod12_SIBYLL') generator = MuonGun.Floodlight(outer, spectrum, 0.99,1.0) ***Propagation*** muprop = I3PropagatorServicePROPOSAL(type=dataclasses.I3Particle.MuMinus, cylinderHeight=1600, cylinderRadius=800) If I try to get the deposited energy in the volume as below sometimes its negative. for track in MuonGun.Track.harvest(frame['I3MCTree'], frame['MMCTrackList']): # Find distance to entrance and exit from sampling volume intersections = outer.intersection(track.pos, track.dir) # Get the corresponding energies e0, e1 = track.get_energy(intersections.first)/I3Units.GeV, track.get_energy(intersections.second)/I3Units.GeV Sometimes e1 < 0 and that means that e0 - e1 > 1 TeV (maximum generated energy) Migrated from https://code.icecube.wisc.edu/ticket/1583 ```json { "status": "closed", "changetime": "2019-02-13T14:13:10", "description": "Energy deposited at some point along the track sometimes returns nonsensical negative energy. File below (located at DESY Zeuthen) contains I3MCTree and MMCTrackList for several example events. 
\n\n/lustre/fs11/group/icecube/blotsd/muon_gun/muon_gun_testcase.i3.bz2\n\nRelevant information about the file:\n\n**Generation**\nmodel.flux.max_multiplicity = 1\nspectrum = MuonGun.OffsetPowerLaw(5,100*I3Units.TeV, 1*I3Units.GeV,1*I3Units.TeV)\nouter = MuonGun.Cylinder(1600*I3Units.m, 800*I3Units.m)\nmodel = MuonGun.load_model('GaisserH4a_atmod12_SIBYLL')\ngenerator = MuonGun.Floodlight(outer, spectrum, 0.99,1.0)\n\n***Propagation***\nmuprop = I3PropagatorServicePROPOSAL(type=dataclasses.I3Particle.MuMinus, cylinderHeight=1600, cylinderRadius=800)\n\n\nIf I try to get the deposited energy in the volume as below sometimes its negative.\n\nfor track in MuonGun.Track.harvest(frame['I3MCTree'], frame['MMCTrackList']):\n # Find distance to entrance and exit from sampling volume\n intersections = outer.intersection(track.pos, track.dir)\n # Get the corresponding energies\n e0, e1 = track.get_energy(intersections.first)/I3Units.GeV, track.get_energy(intersections.second)/I3Units.GeV\n\nSometimes e1 < 0 \nand that means that e0 - e1 > 1 TeV (maximum generated energy)\n", "reporter": "summer.blot", "cc": "thomas.kintscher@desy.de", "resolution": "fixed", "_ts": "1550067190995086", "component": "combo simulation", "summary": "MuonGun Track.Harvest bug", "priority": "normal", "keywords": "MuonGun", "time": "2016-03-10T13:44:14", "milestone": "", "owner": "jvansanten", "type": "defect" } ```
1.0
MuonGun Track.Harvest bug (Trac #1583) - Energy deposited at some point along the track sometimes returns nonsensical negative energy. File below (located at DESY Zeuthen) contains I3MCTree and MMCTrackList for several example events. /lustre/fs11/group/icecube/blotsd/muon_gun/muon_gun_testcase.i3.bz2 Relevant information about the file: **Generation** model.flux.max_multiplicity = 1 spectrum = MuonGun.OffsetPowerLaw(5,100*I3Units.TeV, 1*I3Units.GeV,1*I3Units.TeV) outer = MuonGun.Cylinder(1600*I3Units.m, 800*I3Units.m) model = MuonGun.load_model('GaisserH4a_atmod12_SIBYLL') generator = MuonGun.Floodlight(outer, spectrum, 0.99,1.0) ***Propagation*** muprop = I3PropagatorServicePROPOSAL(type=dataclasses.I3Particle.MuMinus, cylinderHeight=1600, cylinderRadius=800) If I try to get the deposited energy in the volume as below sometimes its negative. for track in MuonGun.Track.harvest(frame['I3MCTree'], frame['MMCTrackList']): # Find distance to entrance and exit from sampling volume intersections = outer.intersection(track.pos, track.dir) # Get the corresponding energies e0, e1 = track.get_energy(intersections.first)/I3Units.GeV, track.get_energy(intersections.second)/I3Units.GeV Sometimes e1 < 0 and that means that e0 - e1 > 1 TeV (maximum generated energy) Migrated from https://code.icecube.wisc.edu/ticket/1583 ```json { "status": "closed", "changetime": "2019-02-13T14:13:10", "description": "Energy deposited at some point along the track sometimes returns nonsensical negative energy. File below (located at DESY Zeuthen) contains I3MCTree and MMCTrackList for several example events. 
\n\n/lustre/fs11/group/icecube/blotsd/muon_gun/muon_gun_testcase.i3.bz2\n\nRelevant information about the file:\n\n**Generation**\nmodel.flux.max_multiplicity = 1\nspectrum = MuonGun.OffsetPowerLaw(5,100*I3Units.TeV, 1*I3Units.GeV,1*I3Units.TeV)\nouter = MuonGun.Cylinder(1600*I3Units.m, 800*I3Units.m)\nmodel = MuonGun.load_model('GaisserH4a_atmod12_SIBYLL')\ngenerator = MuonGun.Floodlight(outer, spectrum, 0.99,1.0)\n\n***Propagation***\nmuprop = I3PropagatorServicePROPOSAL(type=dataclasses.I3Particle.MuMinus, cylinderHeight=1600, cylinderRadius=800)\n\n\nIf I try to get the deposited energy in the volume as below sometimes its negative.\n\nfor track in MuonGun.Track.harvest(frame['I3MCTree'], frame['MMCTrackList']):\n # Find distance to entrance and exit from sampling volume\n intersections = outer.intersection(track.pos, track.dir)\n # Get the corresponding energies\n e0, e1 = track.get_energy(intersections.first)/I3Units.GeV, track.get_energy(intersections.second)/I3Units.GeV\n\nSometimes e1 < 0 \nand that means that e0 - e1 > 1 TeV (maximum generated energy)\n", "reporter": "summer.blot", "cc": "thomas.kintscher@desy.de", "resolution": "fixed", "_ts": "1550067190995086", "component": "combo simulation", "summary": "MuonGun Track.Harvest bug", "priority": "normal", "keywords": "MuonGun", "time": "2016-03-10T13:44:14", "milestone": "", "owner": "jvansanten", "type": "defect" } ```
defect
muongun track harvest bug trac energy deposited at some point along the track sometimes returns nonsensical negative energy file below located at desy zeuthen contains and mmctracklist for several example events lustre group icecube blotsd muon gun muon gun testcase relevant information about the file generation model flux max multiplicity spectrum muongun offsetpowerlaw tev gev tev outer muongun cylinder m m model muongun load model sibyll generator muongun floodlight outer spectrum propagation muprop type dataclasses muminus cylinderheight cylinderradius if i try to get the deposited energy in the volume as below sometimes its negative for track in muongun track harvest frame frame find distance to entrance and exit from sampling volume intersections outer intersection track pos track dir get the corresponding energies track get energy intersections first gev track get energy intersections second gev sometimes and that means that tev maximum generated energy migrated from json status closed changetime description energy deposited at some point along the track sometimes returns nonsensical negative energy file below located at desy zeuthen contains and mmctracklist for several example events n n lustre group icecube blotsd muon gun muon gun testcase n nrelevant information about the file n n generation nmodel flux max multiplicity nspectrum muongun offsetpowerlaw tev gev tev nouter muongun cylinder m m nmodel muongun load model sibyll ngenerator muongun floodlight outer spectrum n n propagation nmuprop type dataclasses muminus cylinderheight cylinderradius n n nif i try to get the deposited energy in the volume as below sometimes its negative n nfor track in muongun track harvest frame frame n find distance to entrance and exit from sampling volume n intersections outer intersection track pos track dir n get the corresponding energies n track get energy intersections first gev track get energy intersections second gev n nsometimes tev maximum generated energy n 
reporter summer blot cc thomas kintscher desy de resolution fixed ts component combo simulation summary muongun track harvest bug priority normal keywords muongun time milestone owner jvansanten type defect
1
38,988
19,662,248,726
IssuesEvent
2022-01-10 18:15:15
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
closed
Remove Typing notification in the Room list
T-Enhancement A-Performance Z-Papercuts Z-WTF
Let's remove the typing notification from the room list view on Android... It's distracting and can cause performance issues - it doesn't add a great deal of value so let's remove it! Leave your thoughts in the comments...
True
Remove Typing notification in the Room list - Let's remove the typing notification from the room list view on Android... It's distracting and can cause performance issues - it doesn't add a great deal of value so let's remove it! Leave your thoughts in the comments...
non_defect
remove typing notification in the room list let s remove the typing notification from the room list view on android it s distracting and can cause performance issues it doesn t add a great deal of value so let s remove it leave your thoughts in the comments
0
145,223
19,339,194,481
IssuesEvent
2021-12-15 01:05:03
billmcchesney1/pacbot
https://api.github.com/repos/billmcchesney1/pacbot
opened
WS-2021-0152 (High) detected in color-string-0.3.0.tgz
security vulnerability
## WS-2021-0152 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>color-string-0.3.0.tgz</b></p></summary> <p>Parser and generator for CSS color strings</p> <p>Library home page: <a href="https://registry.npmjs.org/color-string/-/color-string-0.3.0.tgz">https://registry.npmjs.org/color-string/-/color-string-0.3.0.tgz</a></p> <p>Path to dependency file: pacbot/webapp/package.json</p> <p>Path to vulnerable library: pacbot/webapp/node_modules/color-string/package.json</p> <p> Dependency Hierarchy: - cli-1.6.8.tgz (Root Library) - cssnano-3.10.0.tgz - postcss-colormin-2.2.2.tgz - colormin-1.1.2.tgz - color-0.11.4.tgz - :x: **color-string-0.3.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Regular Expression Denial of Service (ReDoS) was found in color-string before 1.5.5. <p>Publish Date: 2021-03-12 <p>URL: <a href=https://github.com/Qix-/color-string/commit/0789e21284c33d89ebc4ab4ca6f759b9375ac9d3>WS-2021-0152</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/Qix-/color-string/releases/tag/1.5.5">https://github.com/Qix-/color-string/releases/tag/1.5.5</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: color-string - 1.5.5</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"color-string","packageVersion":"0.3.0","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular/cli:1.6.8;cssnano:3.10.0;postcss-colormin:2.2.2;colormin:1.1.2;color:0.11.4;color-string:0.3.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"color-string - 1.5.5","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2021-0152","vulnerabilityDetails":"Regular Expression Denial of Service (ReDoS) was found in color-string before 1.5.5.","vulnerabilityUrl":"https://github.com/Qix-/color-string/commit/0789e21284c33d89ebc4ab4ca6f759b9375ac9d3","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
WS-2021-0152 (High) detected in color-string-0.3.0.tgz - ## WS-2021-0152 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>color-string-0.3.0.tgz</b></p></summary> <p>Parser and generator for CSS color strings</p> <p>Library home page: <a href="https://registry.npmjs.org/color-string/-/color-string-0.3.0.tgz">https://registry.npmjs.org/color-string/-/color-string-0.3.0.tgz</a></p> <p>Path to dependency file: pacbot/webapp/package.json</p> <p>Path to vulnerable library: pacbot/webapp/node_modules/color-string/package.json</p> <p> Dependency Hierarchy: - cli-1.6.8.tgz (Root Library) - cssnano-3.10.0.tgz - postcss-colormin-2.2.2.tgz - colormin-1.1.2.tgz - color-0.11.4.tgz - :x: **color-string-0.3.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Regular Expression Denial of Service (ReDoS) was found in color-string before 1.5.5. <p>Publish Date: 2021-03-12 <p>URL: <a href=https://github.com/Qix-/color-string/commit/0789e21284c33d89ebc4ab4ca6f759b9375ac9d3>WS-2021-0152</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/Qix-/color-string/releases/tag/1.5.5">https://github.com/Qix-/color-string/releases/tag/1.5.5</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: color-string - 1.5.5</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"color-string","packageVersion":"0.3.0","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular/cli:1.6.8;cssnano:3.10.0;postcss-colormin:2.2.2;colormin:1.1.2;color:0.11.4;color-string:0.3.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"color-string - 1.5.5","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2021-0152","vulnerabilityDetails":"Regular Expression Denial of Service (ReDoS) was found in color-string before 1.5.5.","vulnerabilityUrl":"https://github.com/Qix-/color-string/commit/0789e21284c33d89ebc4ab4ca6f759b9375ac9d3","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_defect
ws high detected in color string tgz ws high severity vulnerability vulnerable library color string tgz parser and generator for css color strings library home page a href path to dependency file pacbot webapp package json path to vulnerable library pacbot webapp node modules color string package json dependency hierarchy cli tgz root library cssnano tgz postcss colormin tgz colormin tgz color tgz x color string tgz vulnerable library found in base branch master vulnerability details regular expression denial of service redos was found in color string before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution color string isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree angular cli cssnano postcss colormin colormin color color string isminimumfixversionavailable true minimumfixversion color string isbinary false basebranches vulnerabilityidentifier ws vulnerabilitydetails regular expression denial of service redos was found in color string before vulnerabilityurl
0
1,122
2,595,183,592
IssuesEvent
2015-02-20 12:14:05
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
[TEST-FAILURE] ExecutorServiceTest.testManagedPostregisteredExecutionCallbackCompletableFuture
Team: Core Type: Defect
``` java.lang.AssertionError: CountDownLatch failed to complete within 120 seconds , count left: 1 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:194) at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:187) at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:179) at com.hazelcast.executor.ExecutorServiceTest.testManagedPostregisteredExecutionCallbackCompletableFuture(ExecutorServiceTest.java:1239) ``` https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.maintenance-OracleJDK1.7/com.hazelcast$hazelcast/10/testReport/junit/com.hazelcast.executor/ExecutorServiceTest/testManagedPostregisteredExecutionCallbackCompletableFuture/
1.0
[TEST-FAILURE] ExecutorServiceTest.testManagedPostregisteredExecutionCallbackCompletableFuture - ``` java.lang.AssertionError: CountDownLatch failed to complete within 120 seconds , count left: 1 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:194) at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:187) at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:179) at com.hazelcast.executor.ExecutorServiceTest.testManagedPostregisteredExecutionCallbackCompletableFuture(ExecutorServiceTest.java:1239) ``` https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.maintenance-OracleJDK1.7/com.hazelcast$hazelcast/10/testReport/junit/com.hazelcast.executor/ExecutorServiceTest/testManagedPostregisteredExecutionCallbackCompletableFuture/
defect
executorservicetest testmanagedpostregisteredexecutioncallbackcompletablefuture java lang assertionerror countdownlatch failed to complete within seconds count left at org junit assert fail assert java at org junit assert asserttrue assert java at com hazelcast test hazelcasttestsupport assertopeneventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport assertopeneventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport assertopeneventually hazelcasttestsupport java at com hazelcast executor executorservicetest testmanagedpostregisteredexecutioncallbackcompletablefuture executorservicetest java
1
203,632
23,159,025,140
IssuesEvent
2022-07-29 15:38:53
jgeraigery/pega-tracerviewer
https://api.github.com/repos/jgeraigery/pega-tracerviewer
opened
CVE-2019-10782 (Medium) detected in checkstyle-8.14.jar
security vulnerability
## CVE-2019-10782 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>checkstyle-8.14.jar</b></p></summary> <p>Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard</p> <p>Library home page: <a href="http://checkstyle.sourceforge.net/">http://checkstyle.sourceforge.net/</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /hes/modules-2/files-2.1/com.puppycrawl.tools/checkstyle/8.14/2dd444e6ab62d93542c2b9d8701c50d9551cee20/checkstyle-8.14.jar</p> <p> Dependency Hierarchy: - :x: **checkstyle-8.14.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/pega-tracerviewer/commit/b3bd0cc294cf5546f9c1f6d88e9b7b7c89940d2a">b3bd0cc294cf5546f9c1f6d88e9b7b7c89940d2a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of com.puppycrawl.tools:checkstyle before 8.29 are vulnerable to XML External Entity (XXE) Injection due to an incomplete fix for CVE-2019-9658. <p>Publish Date: 2020-01-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10782>CVE-2019-10782</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10782">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10782</a></p> <p>Release Date: 2020-02-10</p> <p>Fix Resolution: 8.29</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2019-10782 (Medium) detected in checkstyle-8.14.jar - ## CVE-2019-10782 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>checkstyle-8.14.jar</b></p></summary> <p>Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard</p> <p>Library home page: <a href="http://checkstyle.sourceforge.net/">http://checkstyle.sourceforge.net/</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /hes/modules-2/files-2.1/com.puppycrawl.tools/checkstyle/8.14/2dd444e6ab62d93542c2b9d8701c50d9551cee20/checkstyle-8.14.jar</p> <p> Dependency Hierarchy: - :x: **checkstyle-8.14.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/pega-tracerviewer/commit/b3bd0cc294cf5546f9c1f6d88e9b7b7c89940d2a">b3bd0cc294cf5546f9c1f6d88e9b7b7c89940d2a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of com.puppycrawl.tools:checkstyle before 8.29 are vulnerable to XML External Entity (XXE) Injection due to an incomplete fix for CVE-2019-9658. 
<p>Publish Date: 2020-01-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10782>CVE-2019-10782</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10782">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10782</a></p> <p>Release Date: 2020-02-10</p> <p>Fix Resolution: 8.29</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_defect
cve medium detected in checkstyle jar cve medium severity vulnerability vulnerable library checkstyle jar checkstyle is a development tool to help programmers write java code that adheres to a coding standard library home page a href path to dependency file build gradle path to vulnerable library hes modules files com puppycrawl tools checkstyle checkstyle jar dependency hierarchy x checkstyle jar vulnerable library found in head commit a href found in base branch master vulnerability details all versions of com puppycrawl tools checkstyle before are vulnerable to xml external entity xxe injection due to an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue
0
63,722
17,870,157,397
IssuesEvent
2021-09-06 14:27:24
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
FOR JSON emulation must use RETURNING CLOB in Oracle
T: Defect C: Functionality C: DB: Oracle P: Medium E: Professional Edition E: Enterprise Edition
Just like the JSON based `MULTISET` emulation, the `FOR JSON` emulation must use `RETURNING CLOB` when emulating the SQL Server syntax in Oracle. This test currently fails: ```java public void testForJSONPathLarge() throws Exception { Table<B> ta = TBook().as("a"); Table<B> tb = TBook().as("b"); Table<B> tc = TBook().as("c"); Table<B> td = TBook().as("d"); assertJSONEquals(json( Seq.crossJoin( Seq.rangeClosed(1, 4), Seq.rangeClosed(1, 4), Seq.rangeClosed(1, 4), Seq.rangeClosed(1, 4)) .map(t -> "{'a_id':" + t.v1 + ",'b_id':" + t.v2 + ",'c_id':" + t.v3 + ",'d_id':" + t.v4 + "}") .collect(joining(",", "[", "]")) ), create().select( ta.field(TBook_ID()).as("a_id"), tb.field(TBook_ID()).as("b_id"), tc.field(TBook_ID()).as("c_id"), td.field(TBook_ID()).as("d_id")) .from(ta, tb, tc, td) .orderBy(1, 2, 3, 4) .forJSON().path() .fetchSingle() .value1()); } ``` The resulting error is: ``` org.jooq.exception.DataAccessException: SQL [select json_arrayagg(json_object(key 'a_id' value "a_id", key 'b_id' value "b_id", key 'c_id' value "c_id", key 'd_id' value "d_id" absent on null) format json) from (select "a"."ID" "a_id", "b"."ID" "b_id", "c"."ID" "c_id", "d"."ID" "d_id" from "TEST"."T_BOOK" "a", "TEST"."T_BOOK" "b", "TEST"."T_BOOK" "c", "TEST"."T_BOOK" "d" order by 1, 2, 3, 4) "t" having count(*) = count(*) -- SQL rendered with a free trial version of jOOQ 3.16.0-SNAPSHOT]; ORA-40478: Ausgabewert zu groß (Höchstwert: 4000) at org.jooq_3.16.0-SNAPSHOT.ORACLE18C.debug(Unknown Source) at org.jooq.impl.Tools.translate(Tools.java:3057) at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:639) at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349) at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:295) at org.jooq.impl.AbstractResultQuery.fetchLazyNonAutoClosing(AbstractResultQuery.java:316) at org.jooq.impl.SelectImpl.fetchLazyNonAutoClosing(SelectImpl.java:2849) at 
org.jooq.impl.ResultQueryTrait.fetchSingle(ResultQueryTrait.java:601) at org.jooq.test.all.testcases.ForJSONTests.testForJSONPathLarge(ForJSONTests.java:282) at org.jooq.test.jOOQAbstractTest.testForJSONPathLarge(jOOQAbstractTest.java:7828) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:93) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:40) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:529) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:756) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:452) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:210) Caused by: java.sql.SQLException: ORA-40478: Ausgabewert zu groß (Höchstwert: 4000) at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:630) at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:564) at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1231) at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:772) at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:299) at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:512) at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:163) at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:1010) at oracle.jdbc.driver.OracleStatement.prepareDefineBufferAndExecute(OracleStatement.java:1271) at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1149) at oracle.jdbc.driver.OracleStatement.executeSQLSelect(OracleStatement.java:1661) at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1470) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3761) at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:4136) at 
oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1014) at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:219) at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:219) at org.jooq.impl.Tools.executeStatementAndGetFirstResultSet(Tools.java:4198) at org.jooq.impl.AbstractResultQuery.execute(AbstractResultQuery.java:230) at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:335) ... 37 more Caused by: Error : 40478, Position : 253, Sql = select json_arrayagg(json_object(key 'a_id' value "a_id", key 'b_id' value "b_id", key 'c_id' value "c_id", key 'd_id' value "d_id" absent on null) format json) from (select "a"."ID" "a_id", "b"."ID" "b_id", "c"."ID" "c_id", "d"."ID" "d_id" from "TEST"."T_BOOK" "a", "TEST"."T_BOOK" "b", "TEST"."T_BOOK" "c", "TEST"."T_BOOK" "d" order by 1, 2, 3, 4) "t" having count(*) = count(*) -- SQL rendered with a free trial version of jOOQ 3.16.0-SNAPSHOT, OriginalSql = select json_arrayagg(json_object(key 'a_id' value "a_id", key 'b_id' value "b_id", key 'c_id' value "c_id", key 'd_id' value "d_id" absent on null) format json) from (select "a"."ID" "a_id", "b"."ID" "b_id", "c"."ID" "c_id", "d"."ID" "d_id" from "TEST"."T_BOOK" "a", "TEST"."T_BOOK" "b", "TEST"."T_BOOK" "c", "TEST"."T_BOOK" "d" order by 1, 2, 3, 4) "t" having count(*) = count(*) -- SQL rendered with a free trial version of jOOQ 3.16.0-SNAPSHOT, Error Msg = ORA-40478: Ausgabewert zu groß (Höchstwert: 4000) at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:637) ... 56 more ```
1.0
FOR JSON emulation must use RETURNING CLOB in Oracle - Just like the JSON based `MULTISET` emulation, the `FOR JSON` emulation must use `RETURNING CLOB` when emulating the SQL Server syntax in Oracle. This test currently fails: ```java public void testForJSONPathLarge() throws Exception { Table<B> ta = TBook().as("a"); Table<B> tb = TBook().as("b"); Table<B> tc = TBook().as("c"); Table<B> td = TBook().as("d"); assertJSONEquals(json( Seq.crossJoin( Seq.rangeClosed(1, 4), Seq.rangeClosed(1, 4), Seq.rangeClosed(1, 4), Seq.rangeClosed(1, 4)) .map(t -> "{'a_id':" + t.v1 + ",'b_id':" + t.v2 + ",'c_id':" + t.v3 + ",'d_id':" + t.v4 + "}") .collect(joining(",", "[", "]")) ), create().select( ta.field(TBook_ID()).as("a_id"), tb.field(TBook_ID()).as("b_id"), tc.field(TBook_ID()).as("c_id"), td.field(TBook_ID()).as("d_id")) .from(ta, tb, tc, td) .orderBy(1, 2, 3, 4) .forJSON().path() .fetchSingle() .value1()); } ``` The resulting error is: ``` org.jooq.exception.DataAccessException: SQL [select json_arrayagg(json_object(key 'a_id' value "a_id", key 'b_id' value "b_id", key 'c_id' value "c_id", key 'd_id' value "d_id" absent on null) format json) from (select "a"."ID" "a_id", "b"."ID" "b_id", "c"."ID" "c_id", "d"."ID" "d_id" from "TEST"."T_BOOK" "a", "TEST"."T_BOOK" "b", "TEST"."T_BOOK" "c", "TEST"."T_BOOK" "d" order by 1, 2, 3, 4) "t" having count(*) = count(*) -- SQL rendered with a free trial version of jOOQ 3.16.0-SNAPSHOT]; ORA-40478: Ausgabewert zu groß (Höchstwert: 4000) at org.jooq_3.16.0-SNAPSHOT.ORACLE18C.debug(Unknown Source) at org.jooq.impl.Tools.translate(Tools.java:3057) at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:639) at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:349) at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:295) at org.jooq.impl.AbstractResultQuery.fetchLazyNonAutoClosing(AbstractResultQuery.java:316) at org.jooq.impl.SelectImpl.fetchLazyNonAutoClosing(SelectImpl.java:2849) at 
org.jooq.impl.ResultQueryTrait.fetchSingle(ResultQueryTrait.java:601) at org.jooq.test.all.testcases.ForJSONTests.testForJSONPathLarge(ForJSONTests.java:282) at org.jooq.test.jOOQAbstractTest.testForJSONPathLarge(jOOQAbstractTest.java:7828) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:93) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:40) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:529) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:756) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:452) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:210) Caused by: java.sql.SQLException: ORA-40478: Ausgabewert zu groß (Höchstwert: 4000) at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:630) at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:564) at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1231) at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:772) at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:299) at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:512) at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:163) at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:1010) at oracle.jdbc.driver.OracleStatement.prepareDefineBufferAndExecute(OracleStatement.java:1271) at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1149) at oracle.jdbc.driver.OracleStatement.executeSQLSelect(OracleStatement.java:1661) at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1470) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3761) at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:4136) at 
oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1014) at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:219) at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:219) at org.jooq.impl.Tools.executeStatementAndGetFirstResultSet(Tools.java:4198) at org.jooq.impl.AbstractResultQuery.execute(AbstractResultQuery.java:230) at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:335) ... 37 more Caused by: Error : 40478, Position : 253, Sql = select json_arrayagg(json_object(key 'a_id' value "a_id", key 'b_id' value "b_id", key 'c_id' value "c_id", key 'd_id' value "d_id" absent on null) format json) from (select "a"."ID" "a_id", "b"."ID" "b_id", "c"."ID" "c_id", "d"."ID" "d_id" from "TEST"."T_BOOK" "a", "TEST"."T_BOOK" "b", "TEST"."T_BOOK" "c", "TEST"."T_BOOK" "d" order by 1, 2, 3, 4) "t" having count(*) = count(*) -- SQL rendered with a free trial version of jOOQ 3.16.0-SNAPSHOT, OriginalSql = select json_arrayagg(json_object(key 'a_id' value "a_id", key 'b_id' value "b_id", key 'c_id' value "c_id", key 'd_id' value "d_id" absent on null) format json) from (select "a"."ID" "a_id", "b"."ID" "b_id", "c"."ID" "c_id", "d"."ID" "d_id" from "TEST"."T_BOOK" "a", "TEST"."T_BOOK" "b", "TEST"."T_BOOK" "c", "TEST"."T_BOOK" "d" order by 1, 2, 3, 4) "t" having count(*) = count(*) -- SQL rendered with a free trial version of jOOQ 3.16.0-SNAPSHOT, Error Msg = ORA-40478: Ausgabewert zu groß (Höchstwert: 4000) at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:637) ... 56 more ```
defect
for json emulation must use returning clob in oracle just like the json based multiset emulation the for json emulation must use returning clob when emulating the sql server syntax in oracle this test currently fails java public void testforjsonpathlarge throws exception table ta tbook as a table tb tbook as b table tc tbook as c table td tbook as d assertjsonequals json seq crossjoin seq rangeclosed seq rangeclosed seq rangeclosed seq rangeclosed map t a id t b id t c id t d id t collect joining create select ta field tbook id as a id tb field tbook id as b id tc field tbook id as c id td field tbook id as d id from ta tb tc td orderby forjson path fetchsingle the resulting error is org jooq exception dataaccessexception sql ora ausgabewert zu groß höchstwert at org jooq snapshot debug unknown source at org jooq impl tools translate tools java at org jooq impl defaultexecutecontext sqlexception defaultexecutecontext java at org jooq impl abstractquery execute abstractquery java at org jooq impl abstractresultquery fetchlazy abstractresultquery java at org jooq impl abstractresultquery fetchlazynonautoclosing abstractresultquery java at org jooq impl selectimpl fetchlazynonautoclosing selectimpl java at org jooq impl resultquerytrait fetchsingle resultquerytrait java at org jooq test all testcases forjsontests testforjsonpathlarge forjsontests java at org jooq test jooqabstracttest testforjsonpathlarge jooqabstracttest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod 
invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit rules testwatcher evaluate testwatcher java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org eclipse jdt internal runner run java at org eclipse jdt internal junit runner testexecution run testexecution java at org eclipse jdt internal junit runner remotetestrunner runtests remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner runtests remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner run remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner main remotetestrunner java caused by java sql sqlexception ora ausgabewert zu groß höchstwert at oracle jdbc driver processerror java at oracle jdbc driver processerror java at oracle jdbc driver processerror java at oracle jdbc driver receive java at oracle jdbc driver dorpc java at oracle jdbc driver dooall java at oracle jdbc driver java at oracle jdbc driver executefordescribe java at oracle jdbc driver oraclestatement 
preparedefinebufferandexecute oraclestatement java at oracle jdbc driver oraclestatement executemaybedescribe oraclestatement java at oracle jdbc driver oraclestatement executesqlselect oraclestatement java at oracle jdbc driver oraclestatement doexecutewithtimeout oraclestatement java at oracle jdbc driver oraclepreparedstatement executeinternal oraclepreparedstatement java at oracle jdbc driver oraclepreparedstatement execute oraclepreparedstatement java at oracle jdbc driver oraclepreparedstatementwrapper execute oraclepreparedstatementwrapper java at org jooq tools jdbc defaultpreparedstatement execute defaultpreparedstatement java at org jooq tools jdbc defaultpreparedstatement execute defaultpreparedstatement java at org jooq impl tools executestatementandgetfirstresultset tools java at org jooq impl abstractresultquery execute abstractresultquery java at org jooq impl abstractquery execute abstractquery java more caused by error position sql select json arrayagg json object key a id value a id key b id value b id key c id value c id key d id value d id absent on null format json from select a id a id b id b id c id c id d id d id from test t book a test t book b test t book c test t book d order by t having count count sql rendered with a free trial version of jooq snapshot originalsql select json arrayagg json object key a id value a id key b id value b id key c id value c id key d id value d id absent on null format json from select a id a id b id b id c id c id d id d id from test t book a test t book b test t book c test t book d order by t having count count sql rendered with a free trial version of jooq snapshot error msg ora ausgabewert zu groß höchstwert at oracle jdbc driver processerror java more
1
31,955
6,668,106,906
IssuesEvent
2017-10-03 14:47:36
hazelcast/hazelcast-csharp-client
https://api.github.com/repos/hazelcast/hazelcast-csharp-client
closed
HazelcastSerializationException is thrown when doing operations in parallel
Type: Defect
When running a test which loads data in parallel, the following exception is thrown: ``` System.AggregateException : One or more errors occurred. ----> Hazelcast.IO.Serialization.HazelcastSerializationException : Class-id: 106 is already registered! at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions) at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken) at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally) at System.Threading.Tasks.Parallel.ForEachWorker[TSource,TLocal](IEnumerable`1 source, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Action`3 bodyWithStateAndIndex, Func`4 bodyWithStateAndLocal, Func`5 bodyWithEverything, Func`1 localInit, Action`1 localFinally) at System.Threading.Tasks.Parallel.ForEach[TSource](IEnumerable`1 source, Action`1 body) --HazelcastSerializationException at Hazelcast.IO.Serialization.SerializationService.ToObject[T](Object object) at Hazelcast.Client.Proxy.ClientMapProxy`2.Get(Object key) ``` When it is changed the parallel client down to be single threaded this issue does not arise.
1.0
HazelcastSerializationException is thrown when doing operations in parallel - When running a test which loads data in parallel, the following exception is thrown: ``` System.AggregateException : One or more errors occurred. ----> Hazelcast.IO.Serialization.HazelcastSerializationException : Class-id: 106 is already registered! at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions) at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken) at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally) at System.Threading.Tasks.Parallel.ForEachWorker[TSource,TLocal](IEnumerable`1 source, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Action`3 bodyWithStateAndIndex, Func`4 bodyWithStateAndLocal, Func`5 bodyWithEverything, Func`1 localInit, Action`1 localFinally) at System.Threading.Tasks.Parallel.ForEach[TSource](IEnumerable`1 source, Action`1 body) --HazelcastSerializationException at Hazelcast.IO.Serialization.SerializationService.ToObject[T](Object object) at Hazelcast.Client.Proxy.ClientMapProxy`2.Get(Object key) ``` When it is changed the parallel client down to be single threaded this issue does not arise.
defect
hazelcastserializationexception is thrown when doing operations in parallel when running a test which loads data in parallel the following exception is thrown system aggregateexception one or more errors occurred hazelcast io serialization hazelcastserializationexception class id is already registered at system threading tasks task throwifexceptional boolean includetaskcanceledexceptions at system threading tasks task wait millisecondstimeout cancellationtoken cancellationtoken at system threading tasks parallel forworker frominclusive toexclusive paralleloptions paralleloptions action body action bodywithstate func bodywithlocal func localinit action localfinally at system threading tasks parallel foreachworker ienumerable source paralleloptions paralleloptions action body action bodywithstate action bodywithstateandindex func bodywithstateandlocal func bodywitheverything func localinit action localfinally at system threading tasks parallel foreach ienumerable source action body hazelcastserializationexception at hazelcast io serialization serializationservice toobject object object at hazelcast client proxy clientmapproxy get object key when it is changed the parallel client down to be single threaded this issue does not arise
1
63,167
17,402,411,648
IssuesEvent
2021-08-02 21:52:02
unascribed/Fabrication
https://api.github.com/repos/unascribed/Fabrication
closed
Right-clicking with fire aspect does not properly emulate a flint-and-steel right-click.
k: Defect n: Fabric
With flint and steel, you can right click on creepers or tnt to directly ignite them, however with this tweak enabled, that is not possible. It does not do anything when clicking on a creeper and just places fire down as if on any other block when clicking on tnt.
1.0
Right-clicking with fire aspect does not properly emulate a flint-and-steel right-click. - With flint and steel, you can right click on creepers or tnt to directly ignite them, however with this tweak enabled, that is not possible. It does not do anything when clicking on a creeper and just places fire down as if on any other block when clicking on tnt.
defect
right clicking with fire aspect does not properly emulate a flint and steel right click with flint and steel you can right click on creepers or tnt to directly ignite them however with this tweak enabled that is not possible it does not do anything when clicking on a creeper and just places fire down as if on any other block when clicking on tnt
1
45,707
13,041,439,771
IssuesEvent
2020-07-28 20:23:27
Rise-Vision/rise-vision-apps
https://api.github.com/repos/Rise-Vision/rise-vision-apps
closed
[App Homepage] Shadow persists after scrolling to the right/left
visual defect
**Problem:** Shadow persists after scrolling to the right/left. See video: https://www.loom.com/share/6356ce718f364136b187d553ba8ed158
1.0
[App Homepage] Shadow persists after scrolling to the right/left - **Problem:** Shadow persists after scrolling to the right/left. See video: https://www.loom.com/share/6356ce718f364136b187d553ba8ed158
defect
shadow persists after scrolling to the right left problem shadow persists after scrolling to the right left see video
1
14,458
2,812,403,263
IssuesEvent
2015-05-18 08:28:42
steve8x8/geotoad
https://api.github.com/repos/steve8x8/geotoad
closed
Cannot install deb on Ubuntu 14.04
auto-migrated Priority-Low Type-Defect
``` When I try to install geotoad on Ubuntu 14.04, I get dpkg: dependency problems prevent configuration of geotoad: geotoad depends on libopenssl-ruby; however: Package libopenssl-ruby is not installed. Seems like this packet is not available anymore. ``` Original issue reported on code.google.com by `minich...@googlemail.com` on 28 Nov 2014 at 8:16
1.0
Cannot install deb on Ubuntu 14.04 - ``` When I try to install geotoad on Ubuntu 14.04, I get dpkg: dependency problems prevent configuration of geotoad: geotoad depends on libopenssl-ruby; however: Package libopenssl-ruby is not installed. Seems like this packet is not available anymore. ``` Original issue reported on code.google.com by `minich...@googlemail.com` on 28 Nov 2014 at 8:16
defect
cannot install deb on ubuntu when i try to install geotoad on ubuntu i get dpkg dependency problems prevent configuration of geotoad geotoad depends on libopenssl ruby however package libopenssl ruby is not installed seems like this packet is not available anymore original issue reported on code google com by minich googlemail com on nov at
1
25,203
4,233,642,109
IssuesEvent
2016-07-05 08:45:43
arkayenro/arkinventory
https://api.github.com/repos/arkayenro/arkinventory
closed
Crafting items not stacking when transferred to bank
auto-migrated Priority-Medium Type-Defect
``` Downloaded from > Curse What steps will reproduce the problem? 1. It seems to happen pretty consistently, but not always. Try adding items to existing stacks in the bank, especially right-clicking to add them. 2. I *think* it started happening more as my reagent bag started getting more filled up. It wasn't happening at first, now it's happening all the time. What is the expected output? What do you see instead? When adding items to the bank, they are frequently not stacking with ones already there. I've noticed it when: (a) I use the cleanup button to transfer all crafting items to the bank, and (b) when I right-click an item to transfer from bags to bank. I've also noticed that clicking the cleanup button again after everything is transferred no longer re-stacks consistently (and not at all if there are no open slots in the reagent bag). I frequently have to go back and manually add newly transferred items to the existing stacks. What version of the product are you using? On what operating system? v3.04.21 on Mac OSX 10.10.1 Please provide any additional information below. ``` Original issue reported on code.google.com by `Katherin...@gmail.com` on 22 Jan 2015 at 6:18
1.0
Crafting items not stacking when transferred to bank - ``` Downloaded from > Curse What steps will reproduce the problem? 1. It seems to happen pretty consistently, but not always. Try adding items to existing stacks in the bank, especially right-clicking to add them. 2. I *think* it started happening more as my reagent bag started getting more filled up. It wasn't happening at first, now it's happening all the time. What is the expected output? What do you see instead? When adding items to the bank, they are frequently not stacking with ones already there. I've noticed it when: (a) I use the cleanup button to transfer all crafting items to the bank, and (b) when I right-click an item to transfer from bags to bank. I've also noticed that clicking the cleanup button again after everything is transferred no longer re-stacks consistently (and not at all if there are no open slots in the reagent bag). I frequently have to go back and manually add newly transferred items to the existing stacks. What version of the product are you using? On what operating system? v3.04.21 on Mac OSX 10.10.1 Please provide any additional information below. ``` Original issue reported on code.google.com by `Katherin...@gmail.com` on 22 Jan 2015 at 6:18
defect
crafting items not stacking when transferred to bank downloaded from curse what steps will reproduce the problem it seems to happen pretty consistently but not always try adding items to existing stacks in the bank especially right clicking to add them i think it started happening more as my reagent bag started getting more filled up it wasn t happening at first now it s happening all the time what is the expected output what do you see instead when adding items to the bank they are frequently not stacking with ones already there i ve noticed it when a i use the cleanup button to transfer all crafting items to the bank and b when i right click an item to transfer from bags to bank i ve also noticed that clicking the cleanup button again after everything is transferred no longer re stacks consistently and not at all if there are no open slots in the reagent bag i frequently have to go back and manually add newly transferred items to the existing stacks what version of the product are you using on what operating system on mac osx please provide any additional information below original issue reported on code google com by katherin gmail com on jan at
1
158,679
6,033,335,657
IssuesEvent
2017-06-09 08:00:20
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
[k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}
area/workload-api/deployment kind/flake priority/backlog priority/P3 sig/apps
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/1830/ Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite} ``` /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92 Expected error: <*errors.errorString | 0xc42073e800>: { s: "error waiting for deployment test-adopted-deployment (got 1 / gcr.io/google_containers/nginx-slim:0.7) and new RS test-adopted-controller (got 1 / gcr.io/google_containers/nginx-slim:0.7) revision and image to match expectation (expected 1 / gcr.io/google_containers/nginx-slim:0.7): timed out waiting for the condition", } error waiting for deployment test-adopted-deployment (got 1 / gcr.io/google_containers/nginx-slim:0.7) and new RS test-adopted-controller (got 1 / gcr.io/google_containers/nginx-slim:0.7) revision and image to match expectation (expected 1 / gcr.io/google_containers/nginx-slim:0.7): timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:961 ``` Previous issues for this test: #29629 #36270
2.0
[k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite} - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-etcd3/1830/ Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite} ``` /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92 Expected error: <*errors.errorString | 0xc42073e800>: { s: "error waiting for deployment test-adopted-deployment (got 1 / gcr.io/google_containers/nginx-slim:0.7) and new RS test-adopted-controller (got 1 / gcr.io/google_containers/nginx-slim:0.7) revision and image to match expectation (expected 1 / gcr.io/google_containers/nginx-slim:0.7): timed out waiting for the condition", } error waiting for deployment test-adopted-deployment (got 1 / gcr.io/google_containers/nginx-slim:0.7) and new RS test-adopted-controller (got 1 / gcr.io/google_containers/nginx-slim:0.7) revision and image to match expectation (expected 1 / gcr.io/google_containers/nginx-slim:0.7): timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:961 ``` Previous issues for this test: #29629 #36270
non_defect
deployment deployment should label adopted rss and pods kubernetes suite failed deployment deployment should label adopted rss and pods kubernetes suite go src io kubernetes output dockerized go src io kubernetes test deployment go expected error s error waiting for deployment test adopted deployment got gcr io google containers nginx slim and new rs test adopted controller got gcr io google containers nginx slim revision and image to match expectation expected gcr io google containers nginx slim timed out waiting for the condition error waiting for deployment test adopted deployment got gcr io google containers nginx slim and new rs test adopted controller got gcr io google containers nginx slim revision and image to match expectation expected gcr io google containers nginx slim timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test deployment go previous issues for this test
0
52,479
13,224,774,434
IssuesEvent
2020-08-17 19:49:15
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
regression tests (Trac #2288)
Incomplete Migration Migrated from Trac csky defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2288">https://code.icecube.wisc.edu/projects/icecube/ticket/2288</a>, reported by richman</summary> <p> ```json { "status": "closed", "changetime": "2020-07-14T16:09:30", "_ts": "1594742970159437", "description": "csky needs regression tests that can be run on some of the buildbots. For now, these can be similar to those of skylab, which essentially run a handful of analyses end-to-end with specific expected numerical results. Much of the library can be covered simply by converting material from the tutorials.\n\nProper unit tests, probing individual components, will take much longer to construct and warrant their own ticket.", "reporter": "richman", "cc": "", "resolution": "wontfix", "time": "2019-05-19T15:25:03", "component": "csky", "summary": "regression tests", "priority": "major", "keywords": "", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
1.0
regression tests (Trac #2288) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2288">https://code.icecube.wisc.edu/projects/icecube/ticket/2288</a>, reported by richman</summary> <p> ```json { "status": "closed", "changetime": "2020-07-14T16:09:30", "_ts": "1594742970159437", "description": "csky needs regression tests that can be run on some of the buildbots. For now, these can be similar to those of skylab, which essentially run a handful of analyses end-to-end with specific expected numerical results. Much of the library can be covered simply by converting material from the tutorials.\n\nProper unit tests, probing individual components, will take much longer to construct and warrant their own ticket.", "reporter": "richman", "cc": "", "resolution": "wontfix", "time": "2019-05-19T15:25:03", "component": "csky", "summary": "regression tests", "priority": "major", "keywords": "", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
defect
regression tests trac migrated from json status closed changetime ts description csky needs regression tests that can be run on some of the buildbots for now these can be similar to those of skylab which essentially run a handful of analyses end to end with specific expected numerical results much of the library can be covered simply by converting material from the tutorials n nproper unit tests probing individual components will take much longer to construct and warrant their own ticket reporter richman cc resolution wontfix time component csky summary regression tests priority major keywords milestone owner type defect
1
6,390
2,610,242,039
IssuesEvent
2015-02-26 19:17:04
chrsmith/jsjsj122
https://api.github.com/repos/chrsmith/jsjsj122
opened
台州割包皮包茎哪家医院好
auto-migrated Priority-Medium Type-Defect
``` 台州割包皮包茎哪家医院好【台州五洲生殖医院】24小时健康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、112、901、902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,遗精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免费咨询,拥有专业完善的男科检查治疗设备,严格按照国家标准收费。尖端医疗设备,与世界同步。权威专家,成就专业典范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 31 May 2014 at 1:09
1.0
台州割包皮包茎哪家医院好 - ``` 台州割包皮包茎哪家医院好【台州五洲生殖医院】24小时健康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、112、901、902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,遗精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免费咨询,拥有专业完善的男科检查治疗设备,严格按照国家标准收费。尖端医疗设备,与世界同步。权威专家,成就专业典范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 31 May 2014 at 1:09
defect
台州割包皮包茎哪家医院好 台州割包皮包茎哪家医院好【台州五洲生殖医院】 咨询热线 微信号tzwzszyy 医院地址 台 (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,遗精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免费咨询,拥有专业完善的男科检查治疗设备,严格按照国家标准收费。尖端医疗设备,与世界同步。权威专家,成就专业典范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
1
42,319
10,962,180,467
IssuesEvent
2019-11-27 16:44:20
STEllAR-GROUP/hpx
https://api.github.com/repos/STEllAR-GROUP/hpx
closed
PAPI performance counters are inaccessible
category: performance counters type: defect
## Expected Behavior I have built HPX with support for the PAPI performance counters. The resulting collection of HPX libraries includes `libhpx_papi_counters.so`. According to the documentation: > For a full list of available PAPI events and their (short) description > use the --hpx:list-counters and --papi-event-info=all command line options. When using these options, I expect to see a list of performance counters including the names of PAPI performance counters. ## Actual Behavior The latter option is not recognized though: ``` my_exe --hpx:list-counters --papi-event-info=all > out runtime_support::load_components: command line processing: unrecognised option '--papi-event-info=all' ``` A list of performance counters is printed to the screen, but does not include the names of PAPI performance counters. ## Steps to Reproduce the Problem 1. I use CMake's FetchContent module to incorporate HPX in my project and I don't perform an install step. My project's targets and HPX's targets are put in the same bin and lib build directories. See below for the HPX configuration. 1. 
Pick an HPX exe and ask for the performance counters, including the PAPI ones ## Specifications ``` {config}: HPX_WITH_AGAS_DUMP_REFCNT_ENTRIES=OFF HPX_WITH_APEX=OFF HPX_WITH_ATTACH_DEBUGGER_ON_TEST_FAILURE=OFF HPX_WITH_AUTOMATIC_SERIALIZATION_REGISTRATION=ON HPX_WITH_CXX14_RETURN_TYPE_DEDUCTION=TRUE HPX_WITH_DEPRECATION_WARNINGS=ON HPX_WITH_GOOGLE_PERFTOOLS=ON HPX_WITH_HPXMP=OFF HPX_WITH_IO_COUNTERS=ON HPX_WITH_IO_POOL=ON HPX_WITH_ITTNOTIFY=OFF HPX_WITH_LOGGING=ON HPX_WITH_MORE_THAN_64_THREADS=ON HPX_WITH_NATIVE_TLS=ON HPX_WITH_NETWORKING=ON HPX_WITH_PAPI=ON HPX_WITH_PARCELPORT_ACTION_COUNTERS=ON HPX_WITH_PARCELPORT_LIBFABRIC=OFF HPX_WITH_PARCELPORT_MPI=ON HPX_WITH_PARCELPORT_MPI_MULTITHREADED=ON HPX_WITH_PARCELPORT_TCP=ON HPX_WITH_PARCELPORT_VERBS=OFF HPX_WITH_PARCEL_COALESCING=ON HPX_WITH_PARCEL_PROFILING=OFF HPX_WITH_SANITIZERS=OFF HPX_WITH_SCHEDULER_LOCAL_STORAGE=OFF HPX_WITH_SPINLOCK_DEADLOCK_DETECTION=OFF HPX_WITH_STACKTRACES=ON HPX_WITH_SWAP_CONTEXT_EMULATION=OFF HPX_WITH_TESTS_DEBUG_LOG=OFF HPX_WITH_THREAD_BACKTRACE_ON_SUSPENSION=OFF HPX_WITH_THREAD_CREATION_AND_CLEANUP_RATES=OFF HPX_WITH_THREAD_CUMULATIVE_COUNTS=ON HPX_WITH_THREAD_DEBUG_INFO=OFF HPX_WITH_THREAD_DESCRIPTION_FULL=OFF HPX_WITH_THREAD_GUARD_PAGE=ON HPX_WITH_THREAD_IDLE_RATES=ON HPX_WITH_THREAD_LOCAL_STORAGE=OFF HPX_WITH_THREAD_MANAGER_IDLE_BACKOFF=ON HPX_WITH_THREAD_QUEUE_WAITTIME=OFF HPX_WITH_THREAD_STACK_MMAP=ON HPX_WITH_THREAD_STEALING_COUNTS=OFF HPX_WITH_THREAD_TARGET_ADDRESS=OFF HPX_WITH_TIMER_POOL=ON HPX_WITH_TUPLE_RVALUE_SWAP=ON HPX_WITH_VALGRIND=OFF HPX_WITH_VERIFY_LOCKS=OFF HPX_WITH_VERIFY_LOCKS_BACKTRACE=OFF HPX_WITH_VERIFY_LOCKS_GLOBALLY=OFF HPX_PARCEL_MAX_CONNECTIONS=512 HPX_PARCEL_MAX_CONNECTIONS_PER_LOCALITY=4 HPX_AGAS_LOCAL_CACHE_SIZE=4096 HPX_HAVE_MALLOC=tcmalloc HPX_PREFIX (configured)= HPX_PREFIX=/quanta1/home/jong0137/development/objects/RelWithDebInfo/lue {version}: V1.3.0 (AGAS: V3.0), Git: a943fd2c5d {boost}: V1.65.1 {build-type}: release {date}: Sep 24 2019 15:35:21 
{platform}: linux {compiler}: GNU C++ version 7.2.0 {stdlib}: GNU libstdc++ version 20170814 ```
1.0
PAPI performance counters are inaccessible - ## Expected Behavior I have built HPX with support for the PAPI performance counters. The resulting collection of HPX libraries includes `libhpx_papi_counters.so`. According to the documentation: > For a full list of available PAPI events and their (short) description > use the --hpx:list-counters and --papi-event-info=all command line options. When using these options, I expect to see a list of performance counters including the names of PAPI performance counters. ## Actual Behavior The latter option is not recognized though: ``` my_exe --hpx:list-counters --papi-event-info=all > out runtime_support::load_components: command line processing: unrecognised option '--papi-event-info=all' ``` A list of performance counters is printed to the screen, but does not include the names of PAPI performance counters. ## Steps to Reproduce the Problem 1. I use CMake's FetchContent module to incorporate HPX in my project and I don't perform an install step. My project's targets and HPX's targets are put in the same bin and lib build directories. See below for the HPX configuration. 1. 
Pick an HPX exe and ask for the performance counters, including the PAPI ones ## Specifications ``` {config}: HPX_WITH_AGAS_DUMP_REFCNT_ENTRIES=OFF HPX_WITH_APEX=OFF HPX_WITH_ATTACH_DEBUGGER_ON_TEST_FAILURE=OFF HPX_WITH_AUTOMATIC_SERIALIZATION_REGISTRATION=ON HPX_WITH_CXX14_RETURN_TYPE_DEDUCTION=TRUE HPX_WITH_DEPRECATION_WARNINGS=ON HPX_WITH_GOOGLE_PERFTOOLS=ON HPX_WITH_HPXMP=OFF HPX_WITH_IO_COUNTERS=ON HPX_WITH_IO_POOL=ON HPX_WITH_ITTNOTIFY=OFF HPX_WITH_LOGGING=ON HPX_WITH_MORE_THAN_64_THREADS=ON HPX_WITH_NATIVE_TLS=ON HPX_WITH_NETWORKING=ON HPX_WITH_PAPI=ON HPX_WITH_PARCELPORT_ACTION_COUNTERS=ON HPX_WITH_PARCELPORT_LIBFABRIC=OFF HPX_WITH_PARCELPORT_MPI=ON HPX_WITH_PARCELPORT_MPI_MULTITHREADED=ON HPX_WITH_PARCELPORT_TCP=ON HPX_WITH_PARCELPORT_VERBS=OFF HPX_WITH_PARCEL_COALESCING=ON HPX_WITH_PARCEL_PROFILING=OFF HPX_WITH_SANITIZERS=OFF HPX_WITH_SCHEDULER_LOCAL_STORAGE=OFF HPX_WITH_SPINLOCK_DEADLOCK_DETECTION=OFF HPX_WITH_STACKTRACES=ON HPX_WITH_SWAP_CONTEXT_EMULATION=OFF HPX_WITH_TESTS_DEBUG_LOG=OFF HPX_WITH_THREAD_BACKTRACE_ON_SUSPENSION=OFF HPX_WITH_THREAD_CREATION_AND_CLEANUP_RATES=OFF HPX_WITH_THREAD_CUMULATIVE_COUNTS=ON HPX_WITH_THREAD_DEBUG_INFO=OFF HPX_WITH_THREAD_DESCRIPTION_FULL=OFF HPX_WITH_THREAD_GUARD_PAGE=ON HPX_WITH_THREAD_IDLE_RATES=ON HPX_WITH_THREAD_LOCAL_STORAGE=OFF HPX_WITH_THREAD_MANAGER_IDLE_BACKOFF=ON HPX_WITH_THREAD_QUEUE_WAITTIME=OFF HPX_WITH_THREAD_STACK_MMAP=ON HPX_WITH_THREAD_STEALING_COUNTS=OFF HPX_WITH_THREAD_TARGET_ADDRESS=OFF HPX_WITH_TIMER_POOL=ON HPX_WITH_TUPLE_RVALUE_SWAP=ON HPX_WITH_VALGRIND=OFF HPX_WITH_VERIFY_LOCKS=OFF HPX_WITH_VERIFY_LOCKS_BACKTRACE=OFF HPX_WITH_VERIFY_LOCKS_GLOBALLY=OFF HPX_PARCEL_MAX_CONNECTIONS=512 HPX_PARCEL_MAX_CONNECTIONS_PER_LOCALITY=4 HPX_AGAS_LOCAL_CACHE_SIZE=4096 HPX_HAVE_MALLOC=tcmalloc HPX_PREFIX (configured)= HPX_PREFIX=/quanta1/home/jong0137/development/objects/RelWithDebInfo/lue {version}: V1.3.0 (AGAS: V3.0), Git: a943fd2c5d {boost}: V1.65.1 {build-type}: release {date}: Sep 24 2019 15:35:21 
{platform}: linux {compiler}: GNU C++ version 7.2.0 {stdlib}: GNU libstdc++ version 20170814 ```
defect
papi performance counters are inaccessible expected behavior i have built hpx with support for the papi performance counters the resulting collection of hpx libraries includes libhpx papi counters so according to the documentation for a full list of available papi events and their short description use the hpx list counters and papi event info all command line options when using these options i expect to see a list of performance counters including the names of papi performance counters actual behavior the latter option is not recognized though my exe hpx list counters papi event info all out runtime support load components command line processing unrecognised option papi event info all a list of performance counters is printed to the screen but does not include the names of papi performance counters steps to reproduce the problem i use cmake s fetchcontent module to incorporate hpx in my project and i don t perform an install step my project s targets and hpx s targets are put in the same bin and lib build directories see below for the hpx configuration pick an hpx exe and ask for the performance counters including the papi ones specifications config hpx with agas dump refcnt entries off hpx with apex off hpx with attach debugger on test failure off hpx with automatic serialization registration on hpx with return type deduction true hpx with deprecation warnings on hpx with google perftools on hpx with hpxmp off hpx with io counters on hpx with io pool on hpx with ittnotify off hpx with logging on hpx with more than threads on hpx with native tls on hpx with networking on hpx with papi on hpx with parcelport action counters on hpx with parcelport libfabric off hpx with parcelport mpi on hpx with parcelport mpi multithreaded on hpx with parcelport tcp on hpx with parcelport verbs off hpx with parcel coalescing on hpx with parcel profiling off hpx with sanitizers off hpx with scheduler local storage off hpx with spinlock deadlock detection off hpx with stacktraces 
on hpx with swap context emulation off hpx with tests debug log off hpx with thread backtrace on suspension off hpx with thread creation and cleanup rates off hpx with thread cumulative counts on hpx with thread debug info off hpx with thread description full off hpx with thread guard page on hpx with thread idle rates on hpx with thread local storage off hpx with thread manager idle backoff on hpx with thread queue waittime off hpx with thread stack mmap on hpx with thread stealing counts off hpx with thread target address off hpx with timer pool on hpx with tuple rvalue swap on hpx with valgrind off hpx with verify locks off hpx with verify locks backtrace off hpx with verify locks globally off hpx parcel max connections hpx parcel max connections per locality hpx agas local cache size hpx have malloc tcmalloc hpx prefix configured hpx prefix home development objects relwithdebinfo lue version agas git boost build type release date sep platform linux compiler gnu c version stdlib gnu libstdc version
1
127,467
12,323,638,155
IssuesEvent
2020-05-13 12:30:53
daredloco/csharp-collection
https://api.github.com/repos/daredloco/csharp-collection
reopened
Update the Readme file
documentation
Add Documentation for: - RoWa.Functions.DegreesToRadians(double degrees) - RoWa.Functions.GetDistance(Coordinates c1, Coordinates c2) - RoWa.Functions.GetDistance(decimal lat1, decimal long1, decimal lat2, decimal long2) - RoWa.Xamarin.Functions.DegreesToRadians(double degrees) - RoWa.Xamarin.Functions.GetDistance(Coordinates c1, Coordinates c2) - RoWa.Xamarin.Functions.GetDistance(decimal lat1, decimal long1, decimal lat2, decimal long2)
1.0
Update the Readme file - Add Documentation for: - RoWa.Functions.DegreesToRadians(double degrees) - RoWa.Functions.GetDistance(Coordinates c1, Coordinates c2) - RoWa.Functions.GetDistance(decimal lat1, decimal long1, decimal lat2, decimal long2) - RoWa.Xamarin.Functions.DegreesToRadians(double degrees) - RoWa.Xamarin.Functions.GetDistance(Coordinates c1, Coordinates c2) - RoWa.Xamarin.Functions.GetDistance(decimal lat1, decimal long1, decimal lat2, decimal long2)
non_defect
update the readme file add documentation for rowa functions degreestoradians double degrees rowa functions getdistance coordinates coordinates rowa functions getdistance decimal decimal decimal decimal rowa xamarin functions degreestoradians double degrees rowa xamarin functions getdistance coordinates coordinates rowa xamarin functions getdistance decimal decimal decimal decimal
0
169,717
13,158,534,594
IssuesEvent
2020-08-10 14:28:29
astropy/astropy
https://api.github.com/repos/astropy/astropy
opened
TST: Unpin matplotlib in pyinstaller cron job
Upstream Fix Required testing visualization
When pyinstaller supports `matplotlib>=3.3`, pinning done in #10637 should be undone. cc @astrofrog https://github.com/astropy/astropy/blob/d39544138f8afb833ecb3923d49c9c1297b05036/tox.ini#L165-L166 Also see pyinstaller/pyinstaller#5004
1.0
TST: Unpin matplotlib in pyinstaller cron job - When pyinstaller supports `matplotlib>=3.3`, pinning done in #10637 should be undone. cc @astrofrog https://github.com/astropy/astropy/blob/d39544138f8afb833ecb3923d49c9c1297b05036/tox.ini#L165-L166 Also see pyinstaller/pyinstaller#5004
non_defect
tst unpin matplotlib in pyinstaller cron job when pyinstaller supports matplotlib pinning done in should be undone cc astrofrog also see pyinstaller pyinstaller
0
76,022
26,203,917,938
IssuesEvent
2023-01-03 20:21:18
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Chat export has a "view settings" link on join rules, but there's no settings to show
T-Defect
### Steps to reproduce 1. Create a room 2. Export the room timeline ### Outcome #### What did you expect? No settings links. #### What happened instead? ![image](https://user-images.githubusercontent.com/1190097/210435100-01ff16fd-ce82-47f1-92b8-a72b4940387d.png) The link does not work, and doesn't go anywhere. ### Operating system _No response_ ### Browser information _No response_ ### URL for webapp _No response_ ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? No
1.0
Chat export has a "view settings" link on join rules, but there's no settings to show - ### Steps to reproduce 1. Create a room 2. Export the room timeline ### Outcome #### What did you expect? No settings links. #### What happened instead? ![image](https://user-images.githubusercontent.com/1190097/210435100-01ff16fd-ce82-47f1-92b8-a72b4940387d.png) The link does not work, and doesn't go anywhere. ### Operating system _No response_ ### Browser information _No response_ ### URL for webapp _No response_ ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? No
defect
chat export has a view settings link on join rules but there s no settings to show steps to reproduce create a room export the room timeline outcome what did you expect no settings links what happened instead the link does not work and doesn t go anywhere operating system no response browser information no response url for webapp no response application version no response homeserver no response will you send logs no
1
57,100
15,687,979,328
IssuesEvent
2021-03-25 14:14:00
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
Race between appending AIO DIO and fallocate
Status: Triage Needed Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Distribution Version | Linux Kernel | 5.11 Architecture | ZFS Version | 2.0.0-rc1_466_g8a915ba1f66e SPL Version | 2.0.0-rc1_466_g8a915ba1f66e <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ### Describe the problem you're observing This problem was detected with generic/586, from the fstest suite. The test description is: > Race an appending aio dio write to the second block of a file while > simultaneously fallocating to the first block. Make sure that we end up > with a two-block file. There's also the [mailing-list thread](https://lore.kernel.org/linux-xfs/20191029100342.GA41131@bfoster/T) for the XFS fix for this race. After looking at the code and running some debug code, I think that `zpl_fallocate_common()` races against `zpl_iter_write()`: - `zpl_iter_write()` calls `zfs_write()` - `zpl_fallocate_common()` starts executing and gets 0 for olen (`olen = i_size_read(ip);`). - However, in `zfs_freesp()`, after the sa_lookup, `zp->z_size` will be already updated from the `zfs_write()` above. I tried to come up with a fix, but I'm sure anyone familiar with this code will have that fix ready a few months before I do, so I decided to create an issue instead :-) ### Describe how to reproduce the problem Simply run test generic/586 from the fstest suite. 
An easier alternative that doesn't require hacking the fstests to run with ZFS, is to manually execute: ` <fstest-dir>/src/aio-dio-regress/aio-dio-append-write-fallocate-race /storage/fs0/testfile 1 ` ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
1.0
Race between appending AIO DIO and fallocate - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Distribution Version | Linux Kernel | 5.11 Architecture | ZFS Version | 2.0.0-rc1_466_g8a915ba1f66e SPL Version | 2.0.0-rc1_466_g8a915ba1f66e <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ### Describe the problem you're observing This problem was detected with generic/586, from the fstest suite. The test description is: > Race an appending aio dio write to the second block of a file while > simultaneously fallocating to the first block. Make sure that we end up > with a two-block file. There's also the [mailing-list thread](https://lore.kernel.org/linux-xfs/20191029100342.GA41131@bfoster/T) for the XFS fix for this race. After looking at the code and running some debug code, I think that `zpl_fallocate_common()` races against `zpl_iter_write()`: - `zpl_iter_write()` calls `zfs_write()` - `zpl_fallocate_common()` starts executing and gets 0 for olen (`olen = i_size_read(ip);`). - However, in `zfs_freesp()`, after the sa_lookup, `zp->z_size` will be already updated from the `zfs_write()` above. I tried to come up with a fix, but I'm sure anyone familiar with this code will have that fix ready a few months before I do, so I decided to create an issue instead :-) ### Describe how to reproduce the problem Simply run test generic/586 from the fstest suite. 
An easier alternative that doesn't require hacking the fstests to run with ZFS, is to manually execute: ` <fstest-dir>/src/aio-dio-regress/aio-dio-append-write-fallocate-race /storage/fs0/testfile 1 ` ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
defect
race between appending aio dio and fallocate thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name distribution version linux kernel architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing this problem was detected with generic from the fstest suite the test description is race an appending aio dio write to the second block of a file while simultaneously fallocating to the first block make sure that we end up with a two block file there s also the for the xfs fix for this race after looking at the code and running some debug code i think that zpl fallocate common races against zpl iter write zpl iter write calls zfs write zpl fallocate common starts executing and gets for olen olen i size read ip however in zfs freesp after the sa lookup zp z size will be already updated from the zfs write above i tried to come up with a fix but i m sure anyone familiar with this code will have that fix ready a few months before i do so i decided to create an issue instead describe how to reproduce the problem simply run test generic from the fstest suite an easier alternative that doesn t require hacking the fstests to run with zfs is to manually execute src aio dio regress aio dio append write fallocate race storage testfile include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
1
101,429
12,683,607,777
IssuesEvent
2020-06-19 20:10:55
Subscribie/subscribie
https://api.github.com/repos/Subscribie/subscribie
opened
As a Shop Owner I understand how to fill out item details
needs-design needs-marketing needs-qa needs-user-story
**Is your feature request related to a problem? Please describe.** Some of the input boxes arent clear eg do I need to type £ in price box or not **Describe the solution you'd like** Have a £ before the input box so this is clear **Describe alternatives you've considered** The form would not save if a £ was enter as it is not numeric, forcing shop owner to re-enter details - [ ] Display a £ before the price input box - [ ] Check other input boxes for similar issues & fix
1.0
As a Shop Owner I understand how to fill out item details - **Is your feature request related to a problem? Please describe.** Some of the input boxes arent clear eg do I need to type £ in price box or not **Describe the solution you'd like** Have a £ before the input box so this is clear **Describe alternatives you've considered** The form would not save if a £ was enter as it is not numeric, forcing shop owner to re-enter details - [ ] Display a £ before the price input box - [ ] Check other input boxes for similar issues & fix
non_defect
as a shop owner i understand how to fill out item details is your feature request related to a problem please describe some of the input boxes arent clear eg do i need to type £ in price box or not describe the solution you d like have a £ before the input box so this is clear describe alternatives you ve considered the form would not save if a £ was enter as it is not numeric forcing shop owner to re enter details display a £ before the price input box check other input boxes for similar issues fix
0
52,227
13,211,409,406
IssuesEvent
2020-08-15 22:56:26
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
OpenCL not found on OS X (Trac #1880)
Incomplete Migration Migrated from Trac combo simulation defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1880">https://code.icecube.wisc.edu/projects/icecube/ticket/1880</a>, reported by jbeattyand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2016-10-04T16:09:27", "_ts": "1475597367663170", "description": "Standard build of combo on OS X does not make necessary simlink to find libOpenCL.dylib\n\nTo fix, \n\nln -sf /System/Library/Frameworks/OpenCL.framework/OpenCL $I3_BUILD/lib/libOpenCL.dylib\n", "reporter": "jbeatty", "cc": "", "resolution": "worksforme", "time": "2016-10-02T12:05:46", "component": "combo simulation", "summary": "OpenCL not found on OS X", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
OpenCL not found on OS X (Trac #1880) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1880">https://code.icecube.wisc.edu/projects/icecube/ticket/1880</a>, reported by jbeattyand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2016-10-04T16:09:27", "_ts": "1475597367663170", "description": "Standard build of combo on OS X does not make necessary simlink to find libOpenCL.dylib\n\nTo fix, \n\nln -sf /System/Library/Frameworks/OpenCL.framework/OpenCL $I3_BUILD/lib/libOpenCL.dylib\n", "reporter": "jbeatty", "cc": "", "resolution": "worksforme", "time": "2016-10-02T12:05:46", "component": "combo simulation", "summary": "OpenCL not found on OS X", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
opencl not found on os x trac migrated from json status closed changetime ts description standard build of combo on os x does not make necessary simlink to find libopencl dylib n nto fix n nln sf system library frameworks opencl framework opencl build lib libopencl dylib n reporter jbeatty cc resolution worksforme time component combo simulation summary opencl not found on os x priority normal keywords milestone owner nega type defect
1
81,955
31,833,076,769
IssuesEvent
2023-09-14 11:55:47
nats-io/nats.java
https://api.github.com/repos/nats-io/nats.java
closed
Wrong "if clause" on ConsumerConfigurationComparer
defect
## Defect It's supposed to compare "ratelimit" but it's comparing "startSequence": `if (rateLimit != null && !rateLimit.equals(serverCcc.getStartSequence())) { changes.add("rateLimit"); };` #### Versions of `io.nats:jnats` and `nats-server`: io.nats:jnats:2.16.12
1.0
Wrong "if clause" on ConsumerConfigurationComparer - ## Defect It's supposed to compare "ratelimit" but it's comparing "startSequence": `if (rateLimit != null && !rateLimit.equals(serverCcc.getStartSequence())) { changes.add("rateLimit"); };` #### Versions of `io.nats:jnats` and `nats-server`: io.nats:jnats:2.16.12
defect
wrong if clause on consumerconfigurationcomparer defect it s supposed to compare ratelimit but it s comparing startsequence if ratelimit null ratelimit equals serverccc getstartsequence changes add ratelimit versions of io nats jnats and nats server io nats jnats
1
680,871
23,287,983,174
IssuesEvent
2022-08-05 18:45:47
vmware-tanzu/tanzu-framework
https://api.github.com/repos/vmware-tanzu/tanzu-framework
closed
Support "tanzu cluster get" on TKGS clusters.
area/cli priority/critical-urgent kind/bug not-core-cli
**Bug description** Once the CAPI core is bumped to v1beta1, `tanzu cluster get` on TKGS will be broken. This is due to that fact that the API types supported by TKGm and TKGS are diverging. TKGm is moving to `v1beta1` while TKGS still remains on `v1alpha3`. Since CAPI v1.0.0 still supports querying clusters using the v1alpha3 API type we were able to branch out and have separate logic to manage TKGm and TKGS clusters. Get Machine Deployments for TKGS - https://github.com/saimanoj01/tanzu-framework/blob/capi_v1beta1/pkg/v1/tkg/client/client.go#L155 Get Machine deployments for TKGm - https://github.com/saimanoj01/tanzu-framework/blob/capi_v1beta1/pkg/v1/tkg/client/client.go#L152 But this won't work for the `tanzu get cluster` command because we rely on the DescribeCluster API from the upstream CAPI v1.0.0. The problem with that API is that, it always queries for v1beta1 objects and there is no provision to query v1alpha3 based clusters and hence we can't use the same API for both TKGm and TKG clusters. https://github.com/kubernetes-sigs/cluster-api/blob/release-1.0/cmd/clusterctl/client/tree/discovery.go#L50 The error we are seeing when we execute the `tanzu cluster get` command TKGS clusters. ``` $ ./artifacts/darwin/amd64/cli/cluster/v0.6.0-dev/tanzu-cluster-darwin_amd64 get tkgs-wc-one -n test-gc-e2e-demo-ns I1103 21:46:53.705532 29683 request.go:665] Waited for 1.12895738s due to client-side throttling, not priority and fairness, request: GET:https://192.168.123.2:6443/apis/nsx.vmware.com/v1alpha1?timeout=32s Error: no matches for kind "Cluster" in version "cluster.x-k8s.io/v1beta1" ``` **Affected product area (please put an X in all that apply)** - [ ] APIs - [ ] Addons - [ ] CLI - [ ] Docs - [ ] IAM - [ ] Installation - [ ] Plugin - [ ] Security - [ ] Test and Release - [ ] User Experience **Expected behavior** `tanzu cluster get` should work for TKGS clusters. 
**Steps to reproduce the bug** **Version** (include the SHA if the version is not obvious) **Environment where the bug was observed (cloud, OS, etc)** **Relevant Debug Output (Logs, manifests, etc)**
1.0
Support "tanzu cluster get" on TKGS clusters. - **Bug description** Once the CAPI core is bumped to v1beta1, `tanzu cluster get` on TKGS will be broken. This is due to that fact that the API types supported by TKGm and TKGS are diverging. TKGm is moving to `v1beta1` while TKGS still remains on `v1alpha3`. Since CAPI v1.0.0 still supports querying clusters using the v1alpha3 API type we were able to branch out and have separate logic to manage TKGm and TKGS clusters. Get Machine Deployments for TKGS - https://github.com/saimanoj01/tanzu-framework/blob/capi_v1beta1/pkg/v1/tkg/client/client.go#L155 Get Machine deployments for TKGm - https://github.com/saimanoj01/tanzu-framework/blob/capi_v1beta1/pkg/v1/tkg/client/client.go#L152 But this won't work for the `tanzu get cluster` command because we rely on the DescribeCluster API from the upstream CAPI v1.0.0. The problem with that API is that, it always queries for v1beta1 objects and there is no provision to query v1alpha3 based clusters and hence we can't use the same API for both TKGm and TKG clusters. https://github.com/kubernetes-sigs/cluster-api/blob/release-1.0/cmd/clusterctl/client/tree/discovery.go#L50 The error we are seeing when we execute the `tanzu cluster get` command TKGS clusters. ``` $ ./artifacts/darwin/amd64/cli/cluster/v0.6.0-dev/tanzu-cluster-darwin_amd64 get tkgs-wc-one -n test-gc-e2e-demo-ns I1103 21:46:53.705532 29683 request.go:665] Waited for 1.12895738s due to client-side throttling, not priority and fairness, request: GET:https://192.168.123.2:6443/apis/nsx.vmware.com/v1alpha1?timeout=32s Error: no matches for kind "Cluster" in version "cluster.x-k8s.io/v1beta1" ``` **Affected product area (please put an X in all that apply)** - [ ] APIs - [ ] Addons - [ ] CLI - [ ] Docs - [ ] IAM - [ ] Installation - [ ] Plugin - [ ] Security - [ ] Test and Release - [ ] User Experience **Expected behavior** `tanzu cluster get` should work for TKGS clusters. 
**Steps to reproduce the bug** **Version** (include the SHA if the version is not obvious) **Environment where the bug was observed (cloud, OS, etc)** **Relevant Debug Output (Logs, manifests, etc)**
non_defect
support tanzu cluster get on tkgs clusters bug description once the capi core is bumped to tanzu cluster get on tkgs will be broken this is due to that fact that the api types supported by tkgm and tkgs are diverging tkgm is moving to while tkgs still remains on since capi still supports querying clusters using the api type we were able to branch out and have separate logic to manage tkgm and tkgs clusters get machine deployments for tkgs get machine deployments for tkgm but this won t work for the tanzu get cluster command because we rely on the describecluster api from the upstream capi the problem with that api is that it always queries for objects and there is no provision to query based clusters and hence we can t use the same api for both tkgm and tkg clusters the error we are seeing when we execute the tanzu cluster get command tkgs clusters artifacts darwin cli cluster dev tanzu cluster darwin get tkgs wc one n test gc demo ns request go waited for due to client side throttling not priority and fairness request get error no matches for kind cluster in version cluster x io affected product area please put an x in all that apply apis addons cli docs iam installation plugin security test and release user experience expected behavior tanzu cluster get should work for tkgs clusters steps to reproduce the bug version include the sha if the version is not obvious environment where the bug was observed cloud os etc relevant debug output logs manifests etc
0
51,291
13,207,423,438
IssuesEvent
2020-08-14 23:02:50
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
Test ticket. This is not a bug. (Trac #150)
Incomplete Migration Migrated from Trac component1 defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/150">https://code.icecube.wisc.edu/projects/icecube/ticket/150</a>, reported by blaufussand owned by blaufuss</em></summary> <p> ```json { "status": "closed", "changetime": "2008-11-19T02:06:22", "_ts": "1227060382000000", "description": "It's a pigeon\n\nwill need to add components from icecube trac to icetray trac\n\nadd type: rumor-allegation\nwill need to add milestone: offline-v3\nversion : trunk\npriority normal\n\ncomponent list needs updating\n", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "time": "2008-11-19T02:00:40", "component": "component1", "summary": "Test ticket. This is not a bug.", "priority": "trivial", "keywords": "", "milestone": "", "owner": "blaufuss", "type": "defect" } ``` </p> </details>
1.0
Test ticket. This is not a bug. (Trac #150) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/150">https://code.icecube.wisc.edu/projects/icecube/ticket/150</a>, reported by blaufussand owned by blaufuss</em></summary> <p> ```json { "status": "closed", "changetime": "2008-11-19T02:06:22", "_ts": "1227060382000000", "description": "It's a pigeon\n\nwill need to add components from icecube trac to icetray trac\n\nadd type: rumor-allegation\nwill need to add milestone: offline-v3\nversion : trunk\npriority normal\n\ncomponent list needs updating\n", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "time": "2008-11-19T02:00:40", "component": "component1", "summary": "Test ticket. This is not a bug.", "priority": "trivial", "keywords": "", "milestone": "", "owner": "blaufuss", "type": "defect" } ``` </p> </details>
defect
test ticket this is not a bug trac migrated from json status closed changetime ts description it s a pigeon n nwill need to add components from icecube trac to icetray trac n nadd type rumor allegation nwill need to add milestone offline nversion trunk npriority normal n ncomponent list needs updating n reporter blaufuss cc resolution fixed time component summary test ticket this is not a bug priority trivial keywords milestone owner blaufuss type defect
1
314,846
27,026,491,113
IssuesEvent
2023-02-11 17:17:19
acikkaynak/deprem-yardim-frontend
https://api.github.com/repos/acikkaynak/deprem-yardim-frontend
closed
bug: making notification selection persistent
bug approved p0 test-failed later
## Bug Definition The page resets when i change notification type and f5 ** discord username: @ss.ibrahimbas#4990 ** ## Bug environment Web environment: rc.afetharita.com rc.afetharita.com afetharita.com ## Describe how you are producing the bug step by step 1. Click to 'Bildirimler/Notifications' dropdown 2. choose 'Erzak Yardımı / Vittles Support' option 3. F5 4. Data is resetted ## Expected Behaviour My selection stay with me on page refresh Sayfayı yenilediğimde tercihlerimin benimle kalması ## Screenshots ![image](https://user-images.githubusercontent.com/76786120/218168817-ee79c322-89bf-4f4e-862c-574ce6b1f3a6.png) ## Desktop Information - Operating System : [Windows 11] - Browser [chrome] - Version[unknown]
1.0
bug: making notification selection persistent - ## Bug Definition The page resets when i change notification type and f5 ** discord username: @ss.ibrahimbas#4990 ** ## Bug environment Web environment: rc.afetharita.com rc.afetharita.com afetharita.com ## Describe how you are producing the bug step by step 1. Click to 'Bildirimler/Notifications' dropdown 2. choose 'Erzak Yardımı / Vittles Support' option 3. F5 4. Data is resetted ## Expected Behaviour My selection stay with me on page refresh Sayfayı yenilediğimde tercihlerimin benimle kalması ## Screenshots ![image](https://user-images.githubusercontent.com/76786120/218168817-ee79c322-89bf-4f4e-862c-574ce6b1f3a6.png) ## Desktop Information - Operating System : [Windows 11] - Browser [chrome] - Version[unknown]
non_defect
bug making notification selection persistent bug definition the page resets when i change notification type and discord username ss ibrahimbas  bug environment web environment rc afetharita com rc afetharita com afetharita com describe how you are producing the bug step by step click to bildirimler notifications dropdown choose erzak yardımı vittles support option data is resetted expected behaviour my selection stay with me on page refresh sayfayı yenilediğimde tercihlerimin benimle kalması screenshots desktop information operating system browser version
0
405,860
11,883,628,004
IssuesEvent
2020-03-27 16:15:31
idaholab/raven
https://api.github.com/repos/idaholab/raven
closed
[TASK] SRAW Plugin
FutureRAVENv2.0 priority_minor task
-------- Issue Description -------- The features of SRAW plugin are suitable for a submodule. Addition of System Risk Analysis Workflows RAVEN plugin. SRAW as a plugin enables RAVEN to perform stochastic analysis of nuclear power plant asset management. The primary function of SRAW is to generate the complex RAVEN workflows necessary to optimize asset management under various scenarios. **Describe the solution you'd like** N/A **Describe alternatives you've considered** N/A ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or task? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) ------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [x] 1. If the issue is a defect, is the defect fixed? - [x] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [x] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [x] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)? - [x] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
1.0
[TASK] SRAW Plugin - -------- Issue Description -------- The features of SRAW plugin are suitable for a submodule. Addition of System Risk Analysis Workflows RAVEN plugin. SRAW as a plugin enables RAVEN to perform stochastic analysis of nuclear power plant asset management. The primary function of SRAW is to generate the complex RAVEN workflows necessary to optimize asset management under various scenarios. **Describe the solution you'd like** N/A **Describe alternatives you've considered** N/A ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or task? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) ------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [x] 1. If the issue is a defect, is the defect fixed? - [x] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [x] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [x] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)? - [x] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
non_defect
sraw plugin issue description the features of sraw plugin are suitable for a submodule addition of system risk analysis workflows raven plugin sraw as a plugin enables raven to perform stochastic analysis of nuclear power plant asset management the primary function of sraw is to generate the complex raven workflows necessary to optimize asset management under various scenarios describe the solution you d like n a describe alternatives you ve considered n a for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
0
66,674
20,512,661,921
IssuesEvent
2022-03-01 08:33:30
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
After reloading, clicking a space bounces around to the wrong space
T-Defect X-Regression S-Major A-Spaces Z-t3chguy O-Frequent
The steps to reproduce this are difficult, but extremely common in my daily usage: 1. Visit a room within a space which is included in another space. In my case, the Maunium space's Telegram bridge room 2. Visit a different space/room combination to assist in the bug 3. Reload/restart element 4. Click on the space from step 1 - this should bounce you into the space, then out of it again when the room change is detected 5. Repeat for all available spaces. Does not occur after the space has been visited once. For added context, the t2bot.io space has the Telegram bridge room also listed, and the t2bot.io room is higher on the panel than the Maunium space. I suspect what's happening is a store somewhere is detecting a room change (something -> telegram bridge) and forgetting/ignoring that a space change is also happening and so is trying to find the most relevant space to switch the user to. This is additionally problematic when trying to visit community-specific spaces that get overruled by the Matrix Community, Core, and Corp spaces as they are *massive* and have all sorts of rooms.
1.0
After reloading, clicking a space bounces around to the wrong space - The steps to reproduce this are difficult, but extremely common in my daily usage: 1. Visit a room within a space which is included in another space. In my case, the Maunium space's Telegram bridge room 2. Visit a different space/room combination to assist in the bug 3. Reload/restart element 4. Click on the space from step 1 - this should bounce you into the space, then out of it again when the room change is detected 5. Repeat for all available spaces. Does not occur after the space has been visited once. For added context, the t2bot.io space has the Telegram bridge room also listed, and the t2bot.io room is higher on the panel than the Maunium space. I suspect what's happening is a store somewhere is detecting a room change (something -> telegram bridge) and forgetting/ignoring that a space change is also happening and so is trying to find the most relevant space to switch the user to. This is additionally problematic when trying to visit community-specific spaces that get overruled by the Matrix Community, Core, and Corp spaces as they are *massive* and have all sorts of rooms.
defect
after reloading clicking a space bounces around to the wrong space the steps to reproduce this are difficult but extremely common in my daily usage visit a room within a space which is included in another space in my case the maunium space s telegram bridge room visit a different space room combination to assist in the bug reload restart element click on the space from step this should bounce you into the space then out of it again when the room change is detected repeat for all available spaces does not occur after the space has been visited once for added context the io space has the telegram bridge room also listed and the io room is higher on the panel than the maunium space i suspect what s happening is a store somewhere is detecting a room change something telegram bridge and forgetting ignoring that a space change is also happening and so is trying to find the most relevant space to switch the user to this is additionally problematic when trying to visit community specific spaces that get overruled by the matrix community core and corp spaces as they are massive and have all sorts of rooms
1
61,965
17,023,820,609
IssuesEvent
2021-07-03 04:01:40
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Long list of roads/buildings/features appearing under the "parentage" of a suburb rather than the town
Component: nominatim Priority: minor Resolution: invalid Type: defect
**[Submitted to the original trac issue database at 9.27am, Friday, 7th September 2012]** Pretty much the entirety of central Croydon (Greater London, UK) is listed as being within "Thornton Heath" and also "SE25". Croydon is a large town; Thornton Heath is a suburb lying to the north, and SE25 is the postal code of South Norwood, a different suburb lying to the north-east. Looking at the Nominatim results for Thornton Heath (http://nominatim.openstreetmap.org/details.php?place_id=4358752) and Croydon (http://nominatim.openstreetmap.org/details.php?place_id=113996), the only possible problem I can see is that Croydon is listed as "Rank: Village/Hamlet" but I'm not sure what this means in practice, especially as it also has "Admin Level: 15" which is (I think?) correct... As a specific instance, Surrey Street (http://nominatim.openstreetmap.org/details.php?place_id=37838695) is a street in central Croydon which has the misattribution described above.
1.0
Long list of roads/buildings/features appearing under the "parentage" of a suburb rather than the town - **[Submitted to the original trac issue database at 9.27am, Friday, 7th September 2012]** Pretty much the entirety of central Croydon (Greater London, UK) is listed as being within "Thornton Heath" and also "SE25". Croydon is a large town; Thornton Heath is a suburb lying to the north, and SE25 is the postal code of South Norwood, a different suburb lying to the north-east. Looking at the Nominatim results for Thornton Heath (http://nominatim.openstreetmap.org/details.php?place_id=4358752) and Croydon (http://nominatim.openstreetmap.org/details.php?place_id=113996), the only possible problem I can see is that Croydon is listed as "Rank: Village/Hamlet" but I'm not sure what this means in practice, especially as it also has "Admin Level: 15" which is (I think?) correct... As a specific instance, Surrey Street (http://nominatim.openstreetmap.org/details.php?place_id=37838695) is a street in central Croydon which has the misattribution described above.
defect
long list of roads buildings features appearing under the parentage of a suburb rather than the town pretty much the entirety of central croydon greater london uk is listed as being within thornton heath and also croydon is a large town thornton heath is a suburb lying to the north and is the postal code of south norwood a different suburb lying to the north east looking at the nominatim results for thornton heath and croydon the only possible problem i can see is that croydon is listed as rank village hamlet but i m not sure what this means in practice especially as it also has admin level which is i think correct as a specific instance surrey street is a street in central croydon which has the misattribution described above
1
123,877
17,772,364,697
IssuesEvent
2021-08-30 15:00:38
kapseliboi/spotlight
https://api.github.com/repos/kapseliboi/spotlight
opened
CVE-2020-28500 (Medium) detected in lodash-4.17.4.tgz
security vulnerability
## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.4.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz</a></p> <p>Path to dependency file: spotlight/package.json</p> <p>Path to vulnerable library: spotlight/node_modules/callsite-record/node_modules/lodash/package.json,spotlight/node_modules/inquirer/node_modules/lodash/package.json,spotlight/node_modules/depcheck/node_modules/lodash/package.json,spotlight/node_modules/babel-traverse/node_modules/lodash/package.json,spotlight/node_modules/npm-check/node_modules/lodash/package.json,spotlight/node_modules/babel-types/node_modules/lodash/package.json,spotlight/node_modules/globule/node_modules/lodash/package.json,spotlight/node_modules/sass-graph/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - npm-check-5.5.2.tgz (Root Library) - :x: **lodash-4.17.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/spotlight/commit/efa8ebd4395408150b8ea1a18eec77751d13827b">efa8ebd4395408150b8ea1a18eec77751d13827b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. 
<p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash-4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-28500 (Medium) detected in lodash-4.17.4.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.4.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz</a></p> <p>Path to dependency file: spotlight/package.json</p> <p>Path to vulnerable library: spotlight/node_modules/callsite-record/node_modules/lodash/package.json,spotlight/node_modules/inquirer/node_modules/lodash/package.json,spotlight/node_modules/depcheck/node_modules/lodash/package.json,spotlight/node_modules/babel-traverse/node_modules/lodash/package.json,spotlight/node_modules/npm-check/node_modules/lodash/package.json,spotlight/node_modules/babel-types/node_modules/lodash/package.json,spotlight/node_modules/globule/node_modules/lodash/package.json,spotlight/node_modules/sass-graph/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - npm-check-5.5.2.tgz (Root Library) - :x: **lodash-4.17.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/spotlight/commit/efa8ebd4395408150b8ea1a18eec77751d13827b">efa8ebd4395408150b8ea1a18eec77751d13827b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. 
<p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash-4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file spotlight package json path to vulnerable library spotlight node modules callsite record node modules lodash package json spotlight node modules inquirer node modules lodash package json spotlight node modules depcheck node modules lodash package json spotlight node modules babel traverse node modules lodash package json spotlight node modules npm check node modules lodash package json spotlight node modules babel types node modules lodash package json spotlight node modules globule node modules lodash package json spotlight node modules sass graph node modules lodash package json dependency hierarchy npm check tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
0
21,560
14,633,120,126
IssuesEvent
2020-12-24 00:44:45
sigp/lighthouse
https://api.github.com/repos/sigp/lighthouse
closed
MacOS binary
A0 t Infrastructure CI CD Docker Directories
I'm trying to run the [Running Bird](https://github.com/sigp/lighthouse/releases/tag/v0.2.12) release for Spadina. It is the first time I try the precompiled binaries. I tried to execute the binary "x86_64-unknown-linux-gnu: AMD/Intel 64-bit processors (most desktops, laptops, servers)" (both the portable and non-portable versions) and get a "cannot execute binary file" error: ``` Justins-MBP:Downloads justin$ ./lighthouse --help -bash: ./lighthouse: cannot execute binary file ``` ![Screenshot 2020-09-27 at 18 14 08](https://user-images.githubusercontent.com/731743/94371288-47654380-00ed-11eb-81fd-e978f2f5c6c5.png) Are the precompiled binaries meant to work for MacOS?
1.0
MacOS binary - I'm trying to run the [Running Bird](https://github.com/sigp/lighthouse/releases/tag/v0.2.12) release for Spadina. It is the first time I try the precompiled binaries. I tried to execute the binary "x86_64-unknown-linux-gnu: AMD/Intel 64-bit processors (most desktops, laptops, servers)" (both the portable and non-portable versions) and get a "cannot execute binary file" error: ``` Justins-MBP:Downloads justin$ ./lighthouse --help -bash: ./lighthouse: cannot execute binary file ``` ![Screenshot 2020-09-27 at 18 14 08](https://user-images.githubusercontent.com/731743/94371288-47654380-00ed-11eb-81fd-e978f2f5c6c5.png) Are the precompiled binaries meant to work for MacOS?
non_defect
macos binary i m trying to run the release for spadina it is the first time i try the precompiled binaries i tried to execute the binary unknown linux gnu amd intel bit processors most desktops laptops servers both the portable and non portable versions and get a cannot execute binary file error justins mbp downloads justin lighthouse help bash lighthouse cannot execute binary file are the precompiled binaries meant to work for macos
0
38,207
8,697,056,877
IssuesEvent
2018-12-04 19:12:20
primefaces/primereact
https://api.github.com/repos/primefaces/primereact
closed
Today cell is not highlighed in Calendar when selected
defect
In free themes like nova, selecting today does not highlight it. The style for today overrides it.
1.0
Today cell is not highlighed in Calendar when selected - In free themes like nova, selecting today does not highlight it. The style for today overrides it.
defect
today cell is not highlighed in calendar when selected in free themes like nova selecting today does not highlight it the style for today overrides it
1
158,062
24,777,879,573
IssuesEvent
2022-10-23 23:38:03
Ingressive-for-Good/I4G-OPENSOURCE-FRONTEND-PROJECT-2022
https://api.github.com/repos/Ingressive-for-Good/I4G-OPENSOURCE-FRONTEND-PROJECT-2022
opened
Admin Design - Improved UX writing on Authentication Screen (Reset Password)
documentation enhancement hacktoberfest-accepted hacktoberfest design
- Improve UX writing on the reset password screens. - The improvement should be done on all screens (desktop, tablet and mobile) - Ensure you use the colors and typography on the style guide to ensure consistency
1.0
Admin Design - Improved UX writing on Authentication Screen (Reset Password) - - Improve UX writing on the reset password screens. - The improvement should be done on all screens (desktop, tablet and mobile) - Ensure you use the colors and typography on the style guide to ensure consistency
non_defect
admin design improved ux writing on authentication screen reset password improve ux writing on the reset password screens the improvement should be done on all screens desktop tablet and mobile ensure you use the colors and typography on the style guide to ensure consistency
0
186,921
15,087,817,172
IssuesEvent
2021-02-05 22:59:51
Leibniz-HBI/Social-Media-Observatory
https://api.github.com/repos/Leibniz-HBI/Social-Media-Observatory
opened
Update installation procedure in wiki for WSL and pipenv
documentation important
- installation should be done while logged in with admin account - user must check for any outstanding updates (!) - make sure to execute kernel update as admin (it might not prompt for that) - update with more detailed instructions how to install pipenv (with `pip3 install` not `apt install`) - maybe also add installation of pyenv (not done yet, so has to be figured out by assignee) - add installation instructions for jupyter lab - point to possibility to install python with chocolatey instead of installing WSL, but WSL is preferred solution when deployment of scripts to (most likely Linux-)servers is planned
1.0
Update installation procedure in wiki for WSL and pipenv - - installation should be done while logged in with admin account - user must check for any outstanding updates (!) - make sure to execute kernel update as admin (it might not prompt for that) - update with more detailed instructions how to install pipenv (with `pip3 install` not `apt install`) - maybe also add installation of pyenv (not done yet, so has to be figured out by assignee) - add installation instructions for jupyter lab - point to possibility to install python with chocolatey instead of installing WSL, but WSL is preferred solution when deployment of scripts to (most likely Linux-)servers is planned
non_defect
update installation procedure in wiki for wsl and pipenv installation should be done while logged in with admin account user must check for any outstanding updates make sure to execute kernel update as admin it might not prompt for that update with more detailed instructions how to install pipenv with install not apt install maybe also add installation of pyenv not done yet so has to be figured out by assignee add installation instructions for jupyter lab point to possibility to install python with chocolatey instead of installing wsl but wsl is preferred solution when deployment of scripts to most likely linux servers is planned
0
244,915
7,880,693,580
IssuesEvent
2018-06-26 16:39:22
aowen87/FOO
https://api.github.com/repos/aowen87/FOO
closed
Uintah requires lapack
Likelihood: 3 - Occasional OS: All Priority: Normal Severity: 2 - Minor Irritation Support Group: Any Target Version: 2.13.1 bug version: 2.12.3
From Joe Hennessey (ARL) Hello, I have noticed that there is now a requirement for lapack, in order to build Uintah with the lastest visit, but lapack is not built as part of the visit build script. Can lapack be added to the build script with a --lapack flag so that Uintah will build correctly. Thanks, Joe
1.0
Uintah requires lapack - From Joe Hennessey (ARL) Hello, I have noticed that there is now a requirement for lapack, in order to build Uintah with the lastest visit, but lapack is not built as part of the visit build script. Can lapack be added to the build script with a --lapack flag so that Uintah will build correctly. Thanks, Joe
non_defect
uintah requires lapack from joe hennessey arl hello i have noticed that there is now a requirement for lapack in order to build uintah with the lastest visit but lapack is not built as part of the visit build script can lapack be added to the build script with a lapack flag so that uintah will build correctly thanks joe
0
31,525
2,733,515,361
IssuesEvent
2015-04-17 14:26:31
dpaukov/combinatoricslib
https://api.github.com/repos/dpaukov/combinatoricslib
closed
factorial function should change
enhancement imported Priority-Medium
_From [xuhengb...@gmail.com](https://code.google.com/u/116344366755502081790/) on March 13, 2014 05:05:11_ factorial must change to ==> public static BigDecimal factorial(long x) { BigDecimal result = BigDecimal.valueOf(1); for (long i = 2; i <= x; i++) { result=result.multiply(BigDecimal.valueOf(i)); } return result; } becos is x is big enough,result will bigger than Long.maxVal() _Original issue: http://code.google.com/p/combinatoricslib/issues/detail?id=10_
1.0
factorial function should change - _From [xuhengb...@gmail.com](https://code.google.com/u/116344366755502081790/) on March 13, 2014 05:05:11_ factorial must change to ==> public static BigDecimal factorial(long x) { BigDecimal result = BigDecimal.valueOf(1); for (long i = 2; i <= x; i++) { result=result.multiply(BigDecimal.valueOf(i)); } return result; } becos is x is big enough,result will bigger than Long.maxVal() _Original issue: http://code.google.com/p/combinatoricslib/issues/detail?id=10_
non_defect
factorial function should change from on march factorial must change to public static bigdecimal factorial long x bigdecimal result bigdecimal valueof for long i i x i result result multiply bigdecimal valueof i return result becos is x is big enough result will bigger than long maxval original issue
0
50,948
13,187,992,194
IssuesEvent
2020-08-13 05:14:34
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
[steamshovel] Test failure (Trac #1718)
Migrated from Trac combo core defect
```text Start 353: steamshovel::test_shovelscripts.py 6/6 Test #352: steamshovel::test_shovelscripts.py ...............***Failed 12.77 sec ...F. ====================================================================== FAIL: test_particles_artist_draws_tracks (__main__.TestShovelScripts) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/olivas/icecube/combo/src/steamshovel/resources/test/test_shovelscripts.py", line 79, in test_particles_artist_draws_tracks self.assertImage( name + ".png", 35 ) File "/home/olivas/icecube/combo/src/steamshovel/resources/test/test_shovelscripts.py", line 46, in assertImage self.assertGreater(psnr, threshold) AssertionError: 29.780892595095874 not greater than 35 ---------------------------------------------------------------------- ``` <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1718">https://code.icecube.wisc.edu/ticket/1718</a>, reported by olivas and owned by hdembinski</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:13:10", "description": "{{{\n Start 353: steamshovel::test_shovelscripts.py\n6/6 Test #353: steamshovel::test_shovelscripts.py ...............***Failed 12.77 sec\n...F.\n======================================================================\nFAIL: test_particles_artist_draws_tracks (__main__.TestShovelScripts)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"/home/olivas/icecube/combo/src/steamshovel/resources/test/test_shovelscripts.py\", line 79, in test_particles_artist_draws_tracks\n self.assertImage( name + \".png\", 35 )\n File \"/home/olivas/icecube/combo/src/steamshovel/resources/test/test_shovelscripts.py\", line 46, in assertImage\n self.assertGreater(psnr, threshold)\nAssertionError: 29.780892595095874 not greater than 35\n\n----------------------------------------------------------------------\n}}}", 
"reporter": "olivas", "cc": "david.schultz", "resolution": "fixed", "_ts": "1550067190995086", "component": "combo core", "summary": "[steamshovel] Test failure", "priority": "major", "keywords": "steamshovel X11", "time": "2016-05-31T17:49:41", "milestone": "", "owner": "hdembinski", "type": "defect" } ``` </p> </details>
1.0
[steamshovel] Test failure (Trac #1718) - ```text Start 353: steamshovel::test_shovelscripts.py 6/6 Test #352: steamshovel::test_shovelscripts.py ...............***Failed 12.77 sec ...F. ====================================================================== FAIL: test_particles_artist_draws_tracks (__main__.TestShovelScripts) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/olivas/icecube/combo/src/steamshovel/resources/test/test_shovelscripts.py", line 79, in test_particles_artist_draws_tracks self.assertImage( name + ".png", 35 ) File "/home/olivas/icecube/combo/src/steamshovel/resources/test/test_shovelscripts.py", line 46, in assertImage self.assertGreater(psnr, threshold) AssertionError: 29.780892595095874 not greater than 35 ---------------------------------------------------------------------- ``` <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1718">https://code.icecube.wisc.edu/ticket/1718</a>, reported by olivas and owned by hdembinski</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:13:10", "description": "{{{\n Start 353: steamshovel::test_shovelscripts.py\n6/6 Test #353: steamshovel::test_shovelscripts.py ...............***Failed 12.77 sec\n...F.\n======================================================================\nFAIL: test_particles_artist_draws_tracks (__main__.TestShovelScripts)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"/home/olivas/icecube/combo/src/steamshovel/resources/test/test_shovelscripts.py\", line 79, in test_particles_artist_draws_tracks\n self.assertImage( name + \".png\", 35 )\n File \"/home/olivas/icecube/combo/src/steamshovel/resources/test/test_shovelscripts.py\", line 46, in assertImage\n self.assertGreater(psnr, threshold)\nAssertionError: 29.780892595095874 not greater than 
35\n\n----------------------------------------------------------------------\n}}}", "reporter": "olivas", "cc": "david.schultz", "resolution": "fixed", "_ts": "1550067190995086", "component": "combo core", "summary": "[steamshovel] Test failure", "priority": "major", "keywords": "steamshovel X11", "time": "2016-05-31T17:49:41", "milestone": "", "owner": "hdembinski", "type": "defect" } ``` </p> </details>
defect
test failure trac text start steamshovel test shovelscripts py test steamshovel test shovelscripts py failed sec f fail test particles artist draws tracks main testshovelscripts traceback most recent call last file home olivas icecube combo src steamshovel resources test test shovelscripts py line in test particles artist draws tracks self assertimage name png file home olivas icecube combo src steamshovel resources test test shovelscripts py line in assertimage self assertgreater psnr threshold assertionerror not greater than migrated from json status closed changetime description n start steamshovel test shovelscripts py test steamshovel test shovelscripts py failed sec n f n nfail test particles artist draws tracks main testshovelscripts n ntraceback most recent call last n file home olivas icecube combo src steamshovel resources test test shovelscripts py line in test particles artist draws tracks n self assertimage name png n file home olivas icecube combo src steamshovel resources test test shovelscripts py line in assertimage n self assertgreater psnr threshold nassertionerror not greater than n n n reporter olivas cc david schultz resolution fixed ts component combo core summary test failure priority major keywords steamshovel time milestone owner hdembinski type defect
1
48,111
13,067,454,448
IssuesEvent
2020-07-31 00:30:25
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
[icetray] uninitialized value (Trac #1799)
Migrated from Trac combo core defect
found by static analysis: http://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-581265.html#EndPath Migrated from https://code.icecube.wisc.edu/ticket/1799 ```json { "status": "closed", "changetime": "2019-02-13T14:12:38", "description": "found by static analysis: http://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-581265.html#EndPath", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "_ts": "1550067158057333", "component": "combo core", "summary": "[icetray] uninitialized value", "priority": "normal", "keywords": "", "time": "2016-07-27T08:01:06", "milestone": "", "owner": "olivas", "type": "defect" } ```
1.0
[icetray] uninitialized value (Trac #1799) - found by static analysis: http://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-581265.html#EndPath Migrated from https://code.icecube.wisc.edu/ticket/1799 ```json { "status": "closed", "changetime": "2019-02-13T14:12:38", "description": "found by static analysis: http://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-581265.html#EndPath", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "_ts": "1550067158057333", "component": "combo core", "summary": "[icetray] uninitialized value", "priority": "normal", "keywords": "", "time": "2016-07-27T08:01:06", "milestone": "", "owner": "olivas", "type": "defect" } ```
defect
uninitialized value trac found by static analysis migrated from json status closed changetime description found by static analysis reporter kjmeagher cc resolution fixed ts component combo core summary uninitialized value priority normal keywords time milestone owner olivas type defect
1
70,728
23,294,342,903
IssuesEvent
2022-08-06 10:15:13
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Read position is lost after exit program
T-Defect
### Steps to reproduce View a group chat. Scroll up. Exit the program. Start again then the position is lost. ### Outcome #### What did you expect? Position should be not lost. #### What happened instead? But it losts. ### Operating system win7 x64 ### Application version 1.11.2 ### How did you install the app? _No response_ ### Homeserver _No response_ ### Will you send logs? No
1.0
Read position is lost after exit program - ### Steps to reproduce View a group chat. Scroll up. Exit the program. Start again then the position is lost. ### Outcome #### What did you expect? Position should be not lost. #### What happened instead? But it losts. ### Operating system win7 x64 ### Application version 1.11.2 ### How did you install the app? _No response_ ### Homeserver _No response_ ### Will you send logs? No
defect
read position is lost after exit program steps to reproduce view a group chat scroll up exit the program start again then the position is lost outcome what did you expect position should be not lost what happened instead but it losts operating system application version how did you install the app no response homeserver no response will you send logs no
1
31,124
6,423,855,633
IssuesEvent
2017-08-09 12:10:56
wooowooo/phpsocks5
https://api.github.com/repos/wooowooo/phpsocks5
closed
请问“修改为PHP虚拟主机提供的数据库配置”是什么
auto-migrated Priority-Medium Type-Defect
``` 您好, 请问使用方法中 1、修改socks5.php前5行代码的数据库配置,修改为PHP虚拟主机提供的数据库配置。这里的“PHP虚拟主机提供的数据库配置”是指什么,怎么得到这些数据,是自己新建一个MySQL数据库供phpsocks5代理使用吗? 谢谢 ``` Original issue reported on code.google.com by `yong...@gmail.com` on 27 Feb 2011 at 12:19
1.0
请问“修改为PHP虚拟主机提供的数据库配置”是什么 - ``` 您好, 请问使用方法中 1、修改socks5.php前5行代码的数据库配置,修改为PHP虚拟主机提供的数据库配置。这里的“PHP虚拟主机提供的数据库配置”是指什么,怎么得到这些数据,是自己新建一个MySQL数据库供phpsocks5代理使用吗? 谢谢 ``` Original issue reported on code.google.com by `yong...@gmail.com` on 27 Feb 2011 at 12:19
defect
请问“修改为php虚拟主机提供的数据库配置”是什么 您好, 请问使用方法中 、 ,修改为php虚拟主机提供的数据库配置。这里的“php虚拟主机提供的数据库配置”是指什么,怎么得到这些数据,是自己新建一个mysql数据库供phpsocks5代理使用吗 ? 谢谢 original issue reported on code google com by yong gmail com on feb at
1
39,117
6,721,301,069
IssuesEvent
2017-10-16 11:08:59
yaseminalpay/motive
https://api.github.com/repos/yaseminalpay/motive
opened
Prepare requirement document
documentation good first issue requirement
Using google docs. Source for description document: interest-based-communities.pdf
1.0
Prepare requirement document - Using google docs. Source for description document: interest-based-communities.pdf
non_defect
prepare requirement document using google docs source for description document interest based communities pdf
0
74,023
24,908,738,120
IssuesEvent
2022-10-29 15:45:12
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
RIP: 0010:nfsd4_process_open2+0x1201/0x14c0 [nfsd] after upgrading to zfs-dkms 2.0.5-1
Type: Defect Status: Stale Status: Triage Needed
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Archlinux Distribution Version | Linux Kernel | 5.10.46-1-lts Architecture | x86_64 ZFS Version | 2.0.5-1 SPL Version | 2.0.5-1 <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ### Describe the problem you're observing juin 28 18:13:53 smicro kernel: ------------[ cut here ]------------ juin 28 18:13:53 smicro kernel: WARNING: CPU: 0 PID: 1955 at fs/nfsd/nfs4state.c:4973 nfsd4_process_open2+0x1201/0x14c0 [nfsd] juin 28 18:13:53 smicro kernel: Modules linked in: rpcrdma rdma_cm iw_cm ib_cm ib_core nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_limit nft_counter nft_ct nf_conntrack nf_defrag_ipv6 nf_defrag> juin 28 18:13:53 smicro kernel: CPU: 0 PID: 1955 Comm: nfsd Tainted: P OE 5.10.46-1-lts #1 juin 28 18:13:53 smicro kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.5 11/25/2013 juin 28 18:13:53 smicro kernel: RIP: 0010:nfsd4_process_open2+0x1201/0x14c0 [nfsd] juin 28 18:13:53 smicro kernel: Code: f9 ff ff 4c 89 cf e8 7e 2e fe ff 49 89 c1 e9 39 f9 ff ff 49 83 7f 70 00 0f 84 01 01 00 00 41 83 47 78 01 31 d2 e9 8e f9 ff ff <0f> 0b e9 c5 f3 ff ff 0f b6 d4 89 d2 48 0f a3> juin 28 18:13:53 smicro kernel: RSP: 0018:ffffad50c94fbcf0 EFLAGS: 00010246 juin 28 18:13:53 smicro kernel: RAX: 0000000000000000 RBX: ffff95b5e77511e0 RCX: ffff95a7398137b0 juin 28 18:13:53 smicro kernel: RDX: 0000000000000001 RSI: ffff95a739929648 RDI: ffff95a7398da5a4 juin 28 18:13:53 smicro kernel: RBP: ffffad50c94fbdc8 R08: ffff95a7398da5a8 
R09: 0000000000000000 juin 28 18:13:53 smicro kernel: R10: ffff95a739875688 R11: ffff95a739918200 R12: ffff95a739929648 juin 28 18:13:53 smicro kernel: R13: ffff95a7398da5a0 R14: ffff95a7398137b0 R15: ffff95a7398da5a0 juin 28 18:13:53 smicro kernel: FS: 0000000000000000(0000) GS:ffff95b55fc00000(0000) knlGS:0000000000000000 juin 28 18:13:53 smicro kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 juin 28 18:13:53 smicro kernel: CR2: 00007ff832ee28d4 CR3: 0000000374e10000 CR4: 00000000000006f0 juin 28 18:13:53 smicro kernel: Call Trace: juin 28 18:13:53 smicro kernel: ? nfsd_permission+0x63/0xe0 [nfsd] juin 28 18:13:53 smicro kernel: ? fh_verify+0x1d5/0x630 [nfsd] juin 28 18:13:53 smicro kernel: nfsd4_open+0x3be/0x750 [nfsd] juin 28 18:13:53 smicro kernel: nfsd4_proc_compound+0x494/0x6d0 [nfsd] juin 28 18:13:53 smicro kernel: nfsd_dispatch+0xd3/0x180 [nfsd] juin 28 18:13:53 smicro kernel: svc_process_common+0x3d7/0x6d0 [sunrpc] juin 28 18:13:53 smicro kernel: ? nfsd_svc+0x310/0x310 [nfsd] juin 28 18:13:53 smicro kernel: svc_process+0xb7/0xf0 [sunrpc] juin 28 18:13:53 smicro kernel: nfsd+0xe8/0x140 [nfsd] juin 28 18:13:53 smicro kernel: ? nfsd_destroy+0x60/0x60 [nfsd] juin 28 18:13:53 smicro kernel: kthread+0x11b/0x140 juin 28 18:13:53 smicro kernel: ? kthread_associate_blkcg+0xa0/0xa0 juin 28 18:13:53 smicro kernel: ret_from_fork+0x22/0x30 juin 28 18:13:53 smicro kernel: ---[ end trace 299b1c52e1b7ef74 ]--- ### Describe how to reproduce the problem using nfs4, we have home directories and a shared Documents hierarchy for aarch64 linux clients. samba too. Never seen this before. Linux-lts was upgraded the 26/06 and zfs-{dkms,util} today. After reboot, I immediately came across the dump in dmesg/journalctl. It can only be related to the zfs upgrade from 2.0.4 to 2.0.5. ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. 
An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
1.0
RIP: 0010:nfsd4_process_open2+0x1201/0x14c0 [nfsd] after upgrading to zfs-dkms 2.0.5-1 - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Archlinux Distribution Version | Linux Kernel | 5.10.46-1-lts Architecture | x86_64 ZFS Version | 2.0.5-1 SPL Version | 2.0.5-1 <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ### Describe the problem you're observing juin 28 18:13:53 smicro kernel: ------------[ cut here ]------------ juin 28 18:13:53 smicro kernel: WARNING: CPU: 0 PID: 1955 at fs/nfsd/nfs4state.c:4973 nfsd4_process_open2+0x1201/0x14c0 [nfsd] juin 28 18:13:53 smicro kernel: Modules linked in: rpcrdma rdma_cm iw_cm ib_cm ib_core nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_limit nft_counter nft_ct nf_conntrack nf_defrag_ipv6 nf_defrag> juin 28 18:13:53 smicro kernel: CPU: 0 PID: 1955 Comm: nfsd Tainted: P OE 5.10.46-1-lts #1 juin 28 18:13:53 smicro kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.5 11/25/2013 juin 28 18:13:53 smicro kernel: RIP: 0010:nfsd4_process_open2+0x1201/0x14c0 [nfsd] juin 28 18:13:53 smicro kernel: Code: f9 ff ff 4c 89 cf e8 7e 2e fe ff 49 89 c1 e9 39 f9 ff ff 49 83 7f 70 00 0f 84 01 01 00 00 41 83 47 78 01 31 d2 e9 8e f9 ff ff <0f> 0b e9 c5 f3 ff ff 0f b6 d4 89 d2 48 0f a3> juin 28 18:13:53 smicro kernel: RSP: 0018:ffffad50c94fbcf0 EFLAGS: 00010246 juin 28 18:13:53 smicro kernel: RAX: 0000000000000000 RBX: ffff95b5e77511e0 RCX: ffff95a7398137b0 juin 28 18:13:53 smicro kernel: RDX: 0000000000000001 RSI: ffff95a739929648 RDI: 
ffff95a7398da5a4 juin 28 18:13:53 smicro kernel: RBP: ffffad50c94fbdc8 R08: ffff95a7398da5a8 R09: 0000000000000000 juin 28 18:13:53 smicro kernel: R10: ffff95a739875688 R11: ffff95a739918200 R12: ffff95a739929648 juin 28 18:13:53 smicro kernel: R13: ffff95a7398da5a0 R14: ffff95a7398137b0 R15: ffff95a7398da5a0 juin 28 18:13:53 smicro kernel: FS: 0000000000000000(0000) GS:ffff95b55fc00000(0000) knlGS:0000000000000000 juin 28 18:13:53 smicro kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 juin 28 18:13:53 smicro kernel: CR2: 00007ff832ee28d4 CR3: 0000000374e10000 CR4: 00000000000006f0 juin 28 18:13:53 smicro kernel: Call Trace: juin 28 18:13:53 smicro kernel: ? nfsd_permission+0x63/0xe0 [nfsd] juin 28 18:13:53 smicro kernel: ? fh_verify+0x1d5/0x630 [nfsd] juin 28 18:13:53 smicro kernel: nfsd4_open+0x3be/0x750 [nfsd] juin 28 18:13:53 smicro kernel: nfsd4_proc_compound+0x494/0x6d0 [nfsd] juin 28 18:13:53 smicro kernel: nfsd_dispatch+0xd3/0x180 [nfsd] juin 28 18:13:53 smicro kernel: svc_process_common+0x3d7/0x6d0 [sunrpc] juin 28 18:13:53 smicro kernel: ? nfsd_svc+0x310/0x310 [nfsd] juin 28 18:13:53 smicro kernel: svc_process+0xb7/0xf0 [sunrpc] juin 28 18:13:53 smicro kernel: nfsd+0xe8/0x140 [nfsd] juin 28 18:13:53 smicro kernel: ? nfsd_destroy+0x60/0x60 [nfsd] juin 28 18:13:53 smicro kernel: kthread+0x11b/0x140 juin 28 18:13:53 smicro kernel: ? kthread_associate_blkcg+0xa0/0xa0 juin 28 18:13:53 smicro kernel: ret_from_fork+0x22/0x30 juin 28 18:13:53 smicro kernel: ---[ end trace 299b1c52e1b7ef74 ]--- ### Describe how to reproduce the problem using nfs4, we have home directories and a shared Documents hierarchy for aarch64 linux clients. samba too. Never seen this before. Linux-lts was upgraded the 26/06 and zfs-{dkms,util} today. After reboot, I immediately came across the dump in dmesg/journalctl. It can only be related to the zfs upgrade from 2.0.4 to 2.0.5. 
### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
defect
rip process after upgrading to zfs dkms thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name archlinux distribution version linux kernel lts architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing juin smicro kernel juin smicro kernel warning cpu pid at fs nfsd c process juin smicro kernel modules linked in rpcrdma rdma cm iw cm ib cm ib core nft reject inet nf reject nf reject nft reject nft limit nft counter nft ct nf conntrack nf defrag nf defrag juin smicro kernel cpu pid comm nfsd tainted p oe lts juin smicro kernel hardware name supermicro bios juin smicro kernel rip process juin smicro kernel code ff ff cf fe ff ff ff ff ff ff ff juin smicro kernel rsp eflags juin smicro kernel rax rbx rcx juin smicro kernel rdx rsi rdi juin smicro kernel rbp juin smicro kernel juin smicro kernel juin smicro kernel fs gs knlgs juin smicro kernel cs ds es juin smicro kernel juin smicro kernel call trace juin smicro kernel nfsd permission juin smicro kernel fh verify juin smicro kernel open juin smicro kernel proc compound juin smicro kernel nfsd dispatch juin smicro kernel svc process common juin smicro kernel nfsd svc juin smicro kernel svc process juin smicro kernel nfsd juin smicro kernel nfsd destroy juin smicro kernel kthread juin smicro kernel kthread associate blkcg juin smicro kernel ret from fork juin smicro kernel describe how to reproduce the problem using we have home directories and a shared documents hierarchy for linux clients samba too never seen this before linux lts was upgraded the and zfs dkms util today after reboot i immediately came across the dump in dmesg journalctl it can only be related to the 
zfs upgrade from to include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
1
101,459
16,511,655,273
IssuesEvent
2021-05-26 05:25:24
kijunb33/test
https://api.github.com/repos/kijunb33/test
opened
CVE-2019-0221 (Medium) detected in tomcat-embed-core-7.0.90.jar
security vulnerability
## CVE-2019-0221 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-7.0.90.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p> <p>Path to vulnerable library: test/tomcat-embed-core-7.0.90.jar</p> <p> Dependency Hierarchy: - :x: **tomcat-embed-core-7.0.90.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/kijunb33/test/commits/8df5c209ab0589b3f881b1e4a6c004c81ae3d659">8df5c209ab0589b3f881b1e4a6c004c81ae3d659</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The SSI printenv command in Apache Tomcat 9.0.0.M1 to 9.0.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 echoes user provided data without escaping and is, therefore, vulnerable to XSS. SSI is disabled by default. The printenv command is intended for debugging and is unlikely to be present in a production website. <p>Publish Date: 2019-05-28 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221>CVE-2019-0221</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221</a></p> <p>Release Date: 2019-05-28</p> <p>Fix Resolution: 9.0.0.18,8.5.40,7.0.94</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-0221 (Medium) detected in tomcat-embed-core-7.0.90.jar - ## CVE-2019-0221 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-7.0.90.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p> <p>Path to vulnerable library: test/tomcat-embed-core-7.0.90.jar</p> <p> Dependency Hierarchy: - :x: **tomcat-embed-core-7.0.90.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/kijunb33/test/commits/8df5c209ab0589b3f881b1e4a6c004c81ae3d659">8df5c209ab0589b3f881b1e4a6c004c81ae3d659</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The SSI printenv command in Apache Tomcat 9.0.0.M1 to 9.0.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 echoes user provided data without escaping and is, therefore, vulnerable to XSS. SSI is disabled by default. The printenv command is intended for debugging and is unlikely to be present in a production website. 
<p>Publish Date: 2019-05-28 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221>CVE-2019-0221</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221</a></p> <p>Release Date: 2019-05-28</p> <p>Fix Resolution: 9.0.0.18,8.5.40,7.0.94</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in tomcat embed core jar cve medium severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to vulnerable library test tomcat embed core jar dependency hierarchy x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details the ssi printenv command in apache tomcat to to and to echoes user provided data without escaping and is therefore vulnerable to xss ssi is disabled by default the printenv command is intended for debugging and is unlikely to be present in a production website publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
665,305
22,308,107,349
IssuesEvent
2022-06-13 14:39:01
phetsims/bending-light
https://api.github.com/repos/phetsims/bending-light
closed
Suspicions devDependencies in package.json
priority:2-high dev:typescript
Why does bending-light have these `devDependencies` in package.json? I don't see anything like this in package.json for other sims. ``` "@typescript-eslint/eslint-plugin": "^5.0.0", "@typescript-eslint/parser": "^5.0.0", "eslint": "^8.0.1", ... "typescript": "^4.4.4" ```
1.0
Suspicions devDependencies in package.json - Why does bending-light have these `devDependencies` in package.json? I don't see anything like this in package.json for other sims. ``` "@typescript-eslint/eslint-plugin": "^5.0.0", "@typescript-eslint/parser": "^5.0.0", "eslint": "^8.0.1", ... "typescript": "^4.4.4" ```
non_defect
suspicions devdependencies in package json why does bending light have these devdependencies in package json i don t see anything like this in package json for other sims typescript eslint eslint plugin typescript eslint parser eslint typescript
0
19,573
2,622,153,908
IssuesEvent
2015-03-04 00:07:17
byzhang/terrastore
https://api.github.com/repos/byzhang/terrastore
closed
Improve Terrastore server behavior on network failures
auto-migrated Milestone-0.4 Priority-High Type-Enhancement
``` Currently, the Terrastore server will freeze if a network failure happens preventing proper communication toward the master. As discussed in the following mailing list thread, http://groups.google.com/group/terrastore-discussions/t/c8b58d0cbbbefab, this is not a good default behavior. We have two options: 1) Enabling automatic server-to-master reconnection. 2) Let the server stop itself (and properly log reasons). The best would be, obviously, to implement both and let the user choose between the two. ``` Original issue reported on code.google.com by `sergio.b...@gmail.com` on 14 Jan 2010 at 9:24
1.0
Improve Terrastore server behavior on network failures - ``` Currently, the Terrastore server will freeze if a network failure happens preventing proper communication toward the master. As discussed in the following mailing list thread, http://groups.google.com/group/terrastore-discussions/t/c8b58d0cbbbefab, this is not a good default behavior. We have two options: 1) Enabling automatic server-to-master reconnection. 2) Let the server stop itself (and properly log reasons). The best would be, obviously, to implement both and let the user choose between the two. ``` Original issue reported on code.google.com by `sergio.b...@gmail.com` on 14 Jan 2010 at 9:24
non_defect
improve terrastore server behavior on network failures currently the terrastore server will freeze if a network failure happens preventing proper communication toward the master as discussed in the following mailing list thread this is not a good default behavior we have two options enabling automatic server to master reconnection let the server stop itself and properly log reasons the best would be obviously to implement both and let the user choose between the two original issue reported on code google com by sergio b gmail com on jan at
0
444,568
31,075,652,699
IssuesEvent
2023-08-12 12:55:01
spring-projects/spring-framework
https://api.github.com/repos/spring-projects/spring-framework
closed
Propagation REQUIRES_NEW may cause connection pool deadlock
in: data status: backported type: documentation
A method that opens a transaction calls another method that starts a new transaction,if all connections are exhausted before the last new transaction method is executed,then all threads in the process will block,this process will fail. Pseudo code: ``` @Transactional methodA(){ // All the threads that started the transaction were executed here, but the connection was exhausted. // The latter method execution will get a new connection,but it will never get it. @Transactional(propagation=Propagation.REQUIRES_NEW) methodB(){ } } ```
1.0
Propagation REQUIRES_NEW may cause connection pool deadlock - A method that opens a transaction calls another method that starts a new transaction,if all connections are exhausted before the last new transaction method is executed,then all threads in the process will block,this process will fail. Pseudo code: ``` @Transactional methodA(){ // All the threads that started the transaction were executed here, but the connection was exhausted. // The latter method execution will get a new connection,but it will never get it. @Transactional(propagation=Propagation.REQUIRES_NEW) methodB(){ } } ```
non_defect
propagation requires new may cause connection pool deadlock a method that opens a transaction calls another method that starts a new transaction,if all connections are exhausted before the last new transaction method is executed then all threads in the process will block,this process will fail pseudo code: transactional methoda all the threads that started the transaction were executed here but the connection was exhausted the latter method execution will get a new connection but it will never get it transactional propagation propagation requires new methodb
0