Schema (column dtypes and observed ranges, from the dataset viewer):

- Unnamed: 0: int64, 0 to 832k
- id: float64, 2.49B to 32.1B
- type: string, 1 class
- created_at: string, length 19
- repo: string, lengths 7 to 112
- repo_url: string, lengths 36 to 141
- action: string, 3 classes
- title: string, lengths 1 to 744
- labels: string, lengths 4 to 574
- body: string, lengths 9 to 211k
- index: string, 10 classes
- text_combine: string, lengths 96 to 211k
- label: string, 2 classes
- text: string, lengths 96 to 188k
- binary_label: int64, 0 to 1

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
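Each subsequent row is one GitHub issue event. A minimal sketch of how such rows might be filtered once loaded (the two stand-in rows below are copied from the dump; everything else, including the in-memory representation, is an assumption, not part of the dataset's own tooling):

```python
# Two stand-in rows copied from the dump; in practice they would be
# loaded from the exported dataset file (its location is not given here).
rows = [
    {"repo": "bluesky/hklpy", "label": "non_process", "binary_label": 0},
    {"repo": "metabase/metabase", "label": "process", "binary_label": 1},
]

# binary_label mirrors the string label column: 1 for "process", 0 for "non_process".
process_rows = [r for r in rows if r["binary_label"] == 1]

assert [r["repo"] for r in process_rows] == ["metabase/metabase"]
assert all(r["label"] == "process" for r in process_rows)
```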
296,047
| 22,286,915,931
|
IssuesEvent
|
2022-06-11 19:32:18
|
bluesky/hklpy
|
https://api.github.com/repos/bluesky/hklpy
|
opened
|
DOC note about add_reflection(h,k,l) will use current positions
|
documentation
|
Note in the documentation that `add_reflection(h,k,l)` will use the current positions if `positions=None` (the default): https://github.com/bluesky/hklpy/blob/906f00e7044449d6bcee6a5347f70efcf628003b/hkl/sample.py#L301-L320
|
1.0
|
DOC note about add_reflection(h,k,l) will use current positions - Note in the documentation that `add_reflection(h,k,l)` will use the current positions if `positions=None` (the default): https://github.com/bluesky/hklpy/blob/906f00e7044449d6bcee6a5347f70efcf628003b/hkl/sample.py#L301-L320
|
non_process
|
doc note about add reflection h k l will use current positions note in the documentation that add reflection h k l will use the current positions if positions none the default
| 0
|
7,381
| 10,514,634,654
|
IssuesEvent
|
2019-09-28 02:15:12
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
SparkSQL does not properly compile datetime filters with :default bucketing
|
.Backend Database/Spark Priority:P3 Query Processor Type:Bug
|
In Spark SQL, for whatever strange reason, a date and a timestamp can be compared against one another, but the comparison does not work correctly. For example,
```sql
SELECT date '2019-09-27' < timestamp '2019-09-27 00:00:00'
-- true
```
is `true` in Spark SQL, for some reason I am not able to figure out. Casting the `date` to a timestamp works as expected:
```sql
SELECT CAST(date '2019-09-27' AS timestamp) < timestamp '2019-09-27 00:00:00'
```
`:default` bucketing is mostly used internally and isn't exposed in the UI AFAIK so marking this as a p3.
|
1.0
|
SparkSQL does not properly compile datetime filters with :default bucketing - In Spark SQL, for whatever strange reason, a date and a timestamp can be compared against one another, but the comparison does not work correctly. For example,
```sql
SELECT date '2019-09-27' < timestamp '2019-09-27 00:00:00'
-- true
```
is `true` in Spark SQL, for some reason I am not able to figure out. Casting the `date` to a timestamp works as expected:
```sql
SELECT CAST(date '2019-09-27' AS timestamp) < timestamp '2019-09-27 00:00:00'
```
`:default` bucketing is mostly used internally and isn't exposed in the UI AFAIK so marking this as a p3.
|
process
|
sparksql does not properly compile datetime filters with default bucketing in spark sql for whatever strange reason a datetime and timestamp can be compared against another but it doesn t work correctly for example sql select date timestamp true is true in spark sql for some reason i am not able to figure out casting the date to a timestamp works as expected sql select cast date as timestamp timestamp default bucketing is mostly used internally and isn t exposed in the ui afaik so marking this as a
| 1
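The workaround quoted in that row (casting the DATE to a TIMESTAMP before comparing) can be illustrated outside Spark; a sketch in plain Python, which is only an analogy for the SQL cast, not Spark's own semantics:

```python
from datetime import date, datetime, time

d = date(2019, 9, 27)
ts = datetime(2019, 9, 27, 0, 0, 0)

# Promote the date to a timestamp at midnight, mirroring
# CAST(date '2019-09-27' AS timestamp) from the report.
d_as_ts = datetime.combine(d, time.min)

assert d_as_ts == ts        # same instant
assert not (d_as_ts < ts)   # so strictly-less is False, unlike the buggy Spark result
```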
|
19,636
| 26,004,727,970
|
IssuesEvent
|
2022-12-20 18:13:08
|
daoanhhuy26012001/pacific-hotel
|
https://api.github.com/repos/daoanhhuy26012001/pacific-hotel
|
closed
|
setup structure project
|
in-process
|
- [x] update README.md
- [x] create new file.html
- [x] create new file.css
- [x] create new file.js
- [x] add images
- [x] add bootstrap
- [x] add jquery
- [x] add fonts
|
1.0
|
setup structure project - - [x] update README.md
- [x] create new file.html
- [x] create new file.css
- [x] create new file.js
- [x] add images
- [x] add bootstrap
- [x] add jquery
- [x] add fonts
|
process
|
setup structure project update readme md create new file html create new file css create new file js add images add bootstrap add jquery add fonts
| 1
|
20,987
| 16,389,590,648
|
IssuesEvent
|
2021-05-17 14:35:47
|
hochschule-darmstadt/openartbrowser
|
https://api.github.com/repos/hochschule-darmstadt/openartbrowser
|
opened
|
Mobile Overhaul: Tab layout
|
User Interface make ready usability improvement
|
**Reason (Why?)**
In the current mobile version of the openArtBrowser, the labels for the tabs are aligned vertically at full width.
We should investigate how to display them in a more compact way, because in some places they take up to half of the screen.
Additionally, the selected tab indication via the background is looking weird without the border at the bottom.
To showcase a few occurrences, here are several screenshots from the mobile frontend:
[Artwork Page](https://cai-artbrowserstaging.fbi.h-da.de/de/artwork/Q463392) | [Artist Page](https://cai-artbrowserstaging.fbi.h-da.de/de/artist/Q5582) | [Movement Page](https://cai-artbrowserstaging.fbi.h-da.de/de/movement/Q1404472)
:----:|:----:|:----:
(screenshot) | (screenshot) | (screenshot)
**Solution (What?)**
We should try different alternatives for displaying the tab labels. Maybe one of the following alternatives is suitable for our use case:
- **Horizontal Scroll:** Display the tab labels horizontally in one row. If there are too many labels and they don't fit the screen width, the overflowing ones should be reachable via a horizontal scrollbar (this might not be clear to all users, because the scrollbar is usually hidden on mobile browsers).
- **Select Box**: Display the tab labels as a [select box](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/select).
**Acceptance criteria**
_- as described in the solution -_
|
True
|
Mobile Overhaul: Tab layout - **Reason (Why?)**
In the current mobile version of the openArtBrowser, the labels for the tabs are aligned vertically at full width.
We should investigate how to display them in a more compact way, because in some places they take up to half of the screen.
Additionally, the selected tab indication via the background is looking weird without the border at the bottom.
To showcase a few occurrences, here are several screenshots from the mobile frontend:
[Artwork Page](https://cai-artbrowserstaging.fbi.h-da.de/de/artwork/Q463392) | [Artist Page](https://cai-artbrowserstaging.fbi.h-da.de/de/artist/Q5582) | [Movement Page](https://cai-artbrowserstaging.fbi.h-da.de/de/movement/Q1404472)
:----:|:----:|:----:
(screenshot) | (screenshot) | (screenshot)
**Solution (What?)**
We should try different alternatives for displaying the tab labels. Maybe one of the following alternatives is suitable for our use case:
- **Horizontal Scroll:** Display the tab labels horizontally in one row. If there are too many labels and they don't fit the screen width, the overflowing ones should be reachable via a horizontal scrollbar (this might not be clear to all users, because the scrollbar is usually hidden on mobile browsers).
- **Select Box**: Display the tab labels as a [select box](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/select).
**Acceptance criteria**
_- as described in the solution -_
|
non_process
|
mobile overhaul tab layout reason why in the current mobile version of the openartbrowser the labels for the tabs are aligned vertically at full width we should investigate how to display them in a more compact way because in some places they take up to half of the screen additionally the selected tab indication via the background is looking weird without the border at the bottom to showcase a few occurrences here are several screenshots from the mobile frontend solution what we should try different alternatives for displaying the tab labels maybe one of the following alternatives is suitable for our use case horizontal scroll display the tab labels horizontally in one row if there are too many labels and they don t fit the screen width the overflowing ones should be reachable via a horizontal scrollbar this might not be clear to all users because the scrollbar is usually hidden on mobile browsers select box display the tab labels as a acceptance criteria as described in the solution
| 0
|
769,801
| 27,018,825,275
|
IssuesEvent
|
2023-02-10 22:25:32
|
googleapis/nodejs-automl
|
https://api.github.com/repos/googleapis/nodejs-automl
|
closed
|
Automl Video Object Tracking Create Dataset Test: "after all" hook: delete created dataset for "should create a dataset" failed
|
type: bug priority: p1 api: automl flakybot: issue
|
Note: #678 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 97411a2bb514b9921bb3932543a2d895c452d5c6
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/111acef3-99d9-40ee-ba64-bedce26a8918), [Sponge](http://sponge2/111acef3-99d9-40ee-ba64-bedce26a8918)
status: failed
<details><summary>Test output</summary><br><pre>Cannot read property 'toString' of undefined
TypeError: Cannot read property 'toString' of undefined
at PathTemplate.render (/workspace/node_modules/google-gax/build/src/pathTemplate.js:114:37)
-> /workspace/node_modules/google-gax/src/pathTemplate.ts:144:31
at AutoMlClient.datasetPath (/workspace/build/src/v1beta1/auto_ml_client.js:1699:55)
-> /workspace/src/v1beta1/auto_ml_client.ts:4108:51
at Context.<anonymous> (test/video-object-tracking-create-dataset.beta.test.js:49:20)</pre></details>
|
1.0
|
Automl Video Object Tracking Create Dataset Test: "after all" hook: delete created dataset for "should create a dataset" failed - Note: #678 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 97411a2bb514b9921bb3932543a2d895c452d5c6
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/111acef3-99d9-40ee-ba64-bedce26a8918), [Sponge](http://sponge2/111acef3-99d9-40ee-ba64-bedce26a8918)
status: failed
<details><summary>Test output</summary><br><pre>Cannot read property 'toString' of undefined
TypeError: Cannot read property 'toString' of undefined
at PathTemplate.render (/workspace/node_modules/google-gax/build/src/pathTemplate.js:114:37)
-> /workspace/node_modules/google-gax/src/pathTemplate.ts:144:31
at AutoMlClient.datasetPath (/workspace/build/src/v1beta1/auto_ml_client.js:1699:55)
-> /workspace/src/v1beta1/auto_ml_client.ts:4108:51
at Context.<anonymous> (test/video-object-tracking-create-dataset.beta.test.js:49:20)</pre></details>
|
non_process
|
automl video object tracking create dataset test after all hook delete created dataset for should create a dataset failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output cannot read property tostring of undefined typeerror cannot read property tostring of undefined at pathtemplate render workspace node modules google gax build src pathtemplate js workspace node modules google gax src pathtemplate ts at automlclient datasetpath workspace build src auto ml client js workspace src auto ml client ts at context test video object tracking create dataset beta test js
| 0
|
18,666
| 24,583,050,828
|
IssuesEvent
|
2022-10-13 17:10:40
|
keras-team/keras-cv
|
https://api.github.com/repos/keras-team/keras-cv
|
closed
|
Add RandomRain preprocessing layer
|
preprocessing
|
## Weather Augmentation
One of the real-world scenarios that pose challenges for training the neural networks of autonomous vehicles.

Impl. Ref.
- https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomRain
|
1.0
|
Add RandomRain preprocessing layer - ## Weather Augmentation
One of the real-world scenarios that pose challenges for training the neural networks of autonomous vehicles.

Impl. Ref.
- https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomRain
|
process
|
add randomrain preprocessing layer weather augmentation one of the real world scenarios that pose challenges for training neural networks of autonomous vehicles impl ref
| 1
|
1,445
| 4,017,459,061
|
IssuesEvent
|
2016-05-16 04:15:01
|
inasafe/inasafe
|
https://api.github.com/repos/inasafe/inasafe
|
closed
|
Feature update: Softcode postprocessors
|
Aggregation Postprocessing
|
# Problem
The postprocessors are built around the OSM data structures and naming conventions. This makes any non-conforming building or road exposure produce false zeros and 'No Data' values.
# Proposed solution
Softcode postprocessors' building and road types based on exposure layer.
# Related issues:
- #2318
- #2320
|
1.0
|
Feature update: Softcode postprocessors - # Problem
The postprocessors are built around the OSM data structures and naming conventions. This makes any non-conforming building or road exposure produce false zeros and 'No Data' values.
# Proposed solution
Softcode postprocessors' building and road types based on exposure layer.
# Related issues:
- #2318
- #2320
|
process
|
feature update softcode postprocessors problem the postprocessors are built around the osm data structures and naming conventions this causes any non conforming building or road exposure to cause false zeros and no data proposed solution softcode postprocessors building and road types based on exposure layer related issues
| 1
|
12,037
| 3,250,687,281
|
IssuesEvent
|
2015-10-19 03:19:47
|
kumulsoft/Fixed-Assets
|
https://api.github.com/repos/kumulsoft/Fixed-Assets
|
closed
|
SETUP >> Manage Staff. Small Adjustments to the entry screen
|
bug enhancement Fixed Ready for testing UI
|
1. Rename section 'Contact Information' to 'Staff Information'
2. Rename label 'Employee Name' to 'Staff Name'
3. Contact Type must be defaulted to 'Staff' and Read Only
4. Arrange Centre and Location to be Side by Side (like Division and Section)
5. Remove/Hide the Address section
6. Position Field too is missing, put it back

|
1.0
|
SETUP >> Manage Staff. Small Adjustments to the entry screen - 1. Rename section 'Contact Information' to 'Staff Information'
2. Rename label 'Employee Name' to 'Staff Name'
3. Contact Type must be defaulted to 'Staff' and Read Only
4. Arrange Centre and Location to be Side by Side (like Division and Section)
5. Remove/Hide the Address section
6. Position Field too is missing, put it back

|
non_process
|
setup manage staff small adjustments to the entry screen rename section contact information to staff information rename label employee name to staff name contact type must be defaulted to staff and read only arrange centre and location to be side by side like division and section remove hide the address section position field too is missing put it back
| 0
|
14,033
| 8,445,858,422
|
IssuesEvent
|
2018-10-18 23:16:09
|
letsencrypt/boulder
|
https://api.github.com/repos/letsencrypt/boulder
|
opened
|
Update go-gorp and use Context
|
area/sa kind/performance layer/storage
|
Gorp [added a dbMap.WithContext function](https://github.com/go-gorp/gorp/commit/fe96e856d4ed65f604a6c1564df37c005bbd048f) that we can use to plumb through contexts. We should update our vendored dep and use it.
|
True
|
Update go-gorp and use Context - Gorp [added a dbMap.WithContext function](https://github.com/go-gorp/gorp/commit/fe96e856d4ed65f604a6c1564df37c005bbd048f) that we can use to plumb through contexts. We should update our vendored dep and use it.
|
non_process
|
update go gorp and use context gorp that we can use to plumb through contexts we should update our vendored dep and use it
| 0
|
191,878
| 14,596,909,580
|
IssuesEvent
|
2020-12-20 17:49:14
|
rancher/dashboard
|
https://api.github.com/repos/rancher/dashboard
|
closed
|
Drop-downs are not the correct size
|
[zube]: To Test bug
|
Some of the drop-downs are smaller than the other drop-downs and all the other fields:
Examples:
On deployment create, in the container tab: Pull Policy is the only drop-down that is correct. All the other drop-downs are 2 or 3 pixels too short. Here are some examples:
(screenshots omitted)
Comments:
This seems to be happening all over the Cluster Explorer and the other apps I install.
|
1.0
|
Drop-downs are not the correct size - Some of the drop-downs are smaller than the other drop-downs and all the other fields:
Examples:
On deployment create, in the container tab: Pull Policy is the only drop-down that is correct. All the other drop-downs are 2 or 3 pixels too short. Here are some examples:
(screenshots omitted)
Comments:
This seems to be happening all over the Cluster Explorer and the other apps I install.
|
non_process
|
drop down are not the correct size some of the drop downs are smaller than the other drop downs and all the other fields examples on deployment create in the container tab pull policy is the only drop down that is correct all the other drop downs are or pixels too short here are some examples comments this seems to be happening all over the cluster explorer and the other apps i install
| 0
|
19,484
| 25,794,165,440
|
IssuesEvent
|
2022-12-10 11:16:50
|
dealii/dealii
|
https://api.github.com/repos/dealii/dealii
|
closed
|
New postprocessing functions?
|
Discussion Post-processing
|
With PR #11804 merged, we will have an easy way to evaluate solution vectors at arbitrary (distributed) point.
Although the intention for writing this function was completely different, I have used locally this function for the following two post-processing tasks:
1) evaluate the solution along a line with points positioned equidistantly and write the result into a text file (only by the root rank) - see also: https://github.com/dealii/dealii/pull/11804/files#diff-c64adc9b1ec4bb22e54afbc87f625e97c8843aa279ab338e8649da5d841ba7a2
2) instead of writing 3D data on a very fine mesh with `DataOut`, I could write the data on a slice (2D) or a coarser (non-matching) 3D mesh - see also: https://github.com/dealii/dealii/pull/11804/files#diff-dd3e88a18dc69bf7b9ccf6444ad6fbe43d1deb33ffae7ab5bb399dca357cdf58
Both of these postprocessing tasks are different; however, both need the ability to evaluate the solution at some points. In the second case, these points correspond to the support points.
My question is whether such a postprocessing function would be useful in the library? If yes, what would be the correct place, and what could appropriate interfaces look like?
Furthermore, I am not sure how to approach the second case? At the moment, I am doing the following: 1) create a new triangulation, 2) create a new `DoFHandler`, 3) collect the support points, 4) call the new function (point -> value), 5) write the result into a global vector, and 6) finally use `DataOut` to output the result. This works for now locally but I feel it is a bit clumsy. Wouldn't it be possible to write a new `DataOut` class (maybe `DataOutSampling`): it would be used as usual but in the `attach_triangulation()` function users could attach a triangulation object that does not match the vectors added for post processing? I am absolutely not an expert in `DataOut` and the interaction with the base class but I think this way one might be able to skip the creation of the `DoFHandler` since the patches could be directly filled.
Is there interest? Any ideas?
@tjhei This topic might also be interesting for you!?
|
1.0
|
New postprocessing functions? - With PR #11804 merged, we will have an easy way to evaluate solution vectors at arbitrary (distributed) point.
Although the intention for writing this function was completely different, I have used locally this function for the following two post-processing tasks:
1) evaluate the solution along a line with points positioned equidistantly and write the result into a text file (only by the root rank) - see also: https://github.com/dealii/dealii/pull/11804/files#diff-c64adc9b1ec4bb22e54afbc87f625e97c8843aa279ab338e8649da5d841ba7a2
2) instead of writing 3D data on a very fine mesh with `DataOut`, I could write the data on a slice (2D) or a coarser (non-matching) 3D mesh - see also: https://github.com/dealii/dealii/pull/11804/files#diff-dd3e88a18dc69bf7b9ccf6444ad6fbe43d1deb33ffae7ab5bb399dca357cdf58
Both of these postprocessing tasks are different; however, both need the ability to evaluate the solution at some points. In the second case, these points correspond to the support points.
My question is whether such a postprocessing function would be useful in the library? If yes, what would be the correct place, and what could appropriate interfaces look like?
Furthermore, I am not sure how to approach the second case? At the moment, I am doing the following: 1) create a new triangulation, 2) create a new `DoFHandler`, 3) collect the support points, 4) call the new function (point -> value), 5) write the result into a global vector, and 6) finally use `DataOut` to output the result. This works for now locally but I feel it is a bit clumsy. Wouldn't it be possible to write a new `DataOut` class (maybe `DataOutSampling`): it would be used as usual but in the `attach_triangulation()` function users could attach a triangulation object that does not match the vectors added for post processing? I am absolutely not an expert in `DataOut` and the interaction with the base class but I think this way one might be able to skip the creation of the `DoFHandler` since the patches could be directly filled.
Is there interest? Any ideas?
@tjhei This topic might also be interesting for you!?
|
process
|
new postprocessing functions with pr merged we will have an easy way to evaluate solution vectors at arbitrary distributed point although the intention for writing this function was completely different i have used locally this function for the following two post processing tasks evaluate the solution along a line with points positioned equidistantly and write the result into a text file only by the root rank see also instead of writing data on a very fine mesh with dataout i could write the data on a slice or an coarser non matching mesh see also both of these postprocessing tasks are different however need somewhere the ability to evaluate the solution at some points in the second case these points correspond to the support points my question would be if such postprocessing function would be useful in the library if yes what would be the correct place and how could appropriate interfaces look like furthermore i am not sure how to approach the second case at the moment i am doing the following create a new triangulation create a new dofhandler collect the support points call the new function point value write the result into a global vector and finally use dataout to output the result this works for now locally but i feel it is a bit clumsy wouldn t it be possible to write a new dataout class maybe dataoutsampling it would be used as usual but in the attach triangulation function users could attach a triangulation object that does not match the vectors added for post processing i am absolutely not an expert in dataout and the interaction with the base class but i think this way one might be able to skip the creation of the dofhandler since the patches could be directly filled is there interest any ideas tjhei this topic might be also interesting for you
| 1
|
20,225
| 26,820,485,347
|
IssuesEvent
|
2023-02-02 09:08:00
|
X-Sharp/XSharpPublic
|
https://api.github.com/repos/X-Sharp/XSharpPublic
|
closed
|
Preprocessor problem with nested square brackets
|
bug Preprocessor
|
With the SUM UDC defined in dbcmd.xh (and identical in VO's STD.UDC), the preprocessor throws an error XS9002: Parser: unexpected input '0':
```
#command SUM [ <x1> [, <xn>] TO <v1> [, <vn>] ] ;
[FOR <lfor>] ;
[WHILE <lwhile>] ;
[NEXT <nnext>] ;
[RECORD <rec>] ;
[<rest:REST>] ;
[<noopt: NOOPTIMIZE>] ;
[ALL] ;
;
=> <v1> := [ <vn> := ] 0 ;
; DbEval( ;
{|| <v1> += <x1> [, <vn> += <xn> ]}, ;
<{lfor}>, <{lwhile}>, <nnext>, <rec>, <.rest.>, <.noopt.>;
)
FUNCTION Start() AS VOID
LOCAL uSum1,uSum2 AS USUAL
SUM 10 TO uSum1 WHILE somealias->T_DATA < 100 FOR somealias->T_CATEG == "45"
? uSum1
SUM 100,200 TO uSum1 , uSum2 WHILE somealias->T_DATA < 100 FOR somealias->T_CATEG == "45"
? uSum1, uSum2
FUNCTION DbEval(cb)
Eval(cb)
RETURN NIL
```
The problem is the stray "0" as shown in the .ppo:
```
LOCAL uSum1,uSum2 AS USUAL
:= 0 ; DbEval( {|| += 10TOuSum1 }, , {||somealias->T_DATA<100} , , , .F. , .F. )somealias->T_CATEG=="45"
uSum1 := uSum2 :=somealias->T_CATEG=="45" 0 ; DbEval( {|| uSum1 += 100 , uSum2 += 200somealias->T_CATEG=="45" }, , {||somealias->T_DATA<100} , , , .F. , .F. )somealias->T_CATEG=="45"
```
I think the problem is caused by the nested brackets in the first line of the #command. By removing the outer brackets, the code compiles with no errors and runs as expected.
|
1.0
|
Preprocessor problem with nested square brackets - With the SUM UDC defined in dbcmd.xh (and identical in VO's STD.UDC), the preprocessor throws an error XS9002: Parser: unexpected input '0':
```
#command SUM [ <x1> [, <xn>] TO <v1> [, <vn>] ] ;
[FOR <lfor>] ;
[WHILE <lwhile>] ;
[NEXT <nnext>] ;
[RECORD <rec>] ;
[<rest:REST>] ;
[<noopt: NOOPTIMIZE>] ;
[ALL] ;
;
=> <v1> := [ <vn> := ] 0 ;
; DbEval( ;
{|| <v1> += <x1> [, <vn> += <xn> ]}, ;
<{lfor}>, <{lwhile}>, <nnext>, <rec>, <.rest.>, <.noopt.>;
)
FUNCTION Start() AS VOID
LOCAL uSum1,uSum2 AS USUAL
SUM 10 TO uSum1 WHILE somealias->T_DATA < 100 FOR somealias->T_CATEG == "45"
? uSum1
SUM 100,200 TO uSum1 , uSum2 WHILE somealias->T_DATA < 100 FOR somealias->T_CATEG == "45"
? uSum1, uSum2
FUNCTION DbEval(cb)
Eval(cb)
RETURN NIL
```
The problem is the stray "0" as shown in the .ppo:
```
LOCAL uSum1,uSum2 AS USUAL
:= 0 ; DbEval( {|| += 10TOuSum1 }, , {||somealias->T_DATA<100} , , , .F. , .F. )somealias->T_CATEG=="45"
uSum1 := uSum2 :=somealias->T_CATEG=="45" 0 ; DbEval( {|| uSum1 += 100 , uSum2 += 200somealias->T_CATEG=="45" }, , {||somealias->T_DATA<100} , , , .F. , .F. )somealias->T_CATEG=="45"
```
I think the problem is caused by the nested brackets in the first line of the #command. By removing the outer brackets, the code compiles with no errors and runs as expected.
|
process
|
preprocessor problem with nested square brackets with the sum udc defined in dbcmd xh and identical in vo s std udc the preprocessor throws an error parser unexpected input command sum to dbeval function start as void local as usual sum to while somealias t data t categ sum to while somealias t data t categ function dbeval cb eval cb return nil the problem is the stray as shown in the ppo local as usual dbeval somealias t data t categ somealias t categ dbeval t categ somealias t data t categ i think the problem is caused by the nested brackets in the first line of the command by removing the outer brackets the code compiles with no errors and runs as expected
| 1
|
247
| 2,669,107,399
|
IssuesEvent
|
2015-03-23 13:48:07
|
FrustratedGameDev/Papers
|
https://api.github.com/repos/FrustratedGameDev/Papers
|
closed
|
README identifying process?
|
Our Process
|
Should there be a README file identifying the process to go through the papers?
|
1.0
|
README identifying process? - Should there be a README file identifying the process to go through the papers?
|
process
|
readme identifying process should there be a readme file identifying the process to go through the papers
| 1
|
250,285
| 7,974,655,155
|
IssuesEvent
|
2018-07-17 06:40:11
|
octavian-paraschiv/protone-suite
|
https://api.github.com/repos/octavian-paraschiv/protone-suite
|
opened
|
Track info list enhancements for Deezer
|
Category-Player OS-All Priority-P1 ReportSource-EndUser Type-Story
|
It should be possible to display the Deezer track properties in the Track Info screen.
We should be able to see here the same properties as those shown in the tool tips, when we're on the Playlist screen:
- Artist
- Title
- Album
- Duration
TODO: Investigate whether we can add more info:
- Year
- Copyright notice
- Multiple artists
|
1.0
|
Track info list enhancements for Deezer - It should be possible to display the Deezer track properties in the Track Info screen.
We should be able to see here the same properties as those shown in the tool tips, when we're on the Playlist screen:
- Artist
- Title
- Album
- Duration
TODO: Investigate whether we can add more info:
- Year
- Copyright notice
- Multiple artists
|
non_process
|
track info list enhancements for deezer should be possible to display the deezer track properties in the track info screen we should be able to see here the same properties as those shown in the tool tips when we re on the playlist screen artist title album duration todo investigate whether we can add more info year copyright notice multiple artists
| 0
|
10,208
| 14,876,783,415
|
IssuesEvent
|
2021-01-20 01:38:20
|
DualSaturn/wi21-cse110-lab3
|
https://api.github.com/repos/DualSaturn/wi21-cse110-lab3
|
closed
|
Need colors and background-color to be used
|
requirement
|
- rgb(r, g, b), rgba(r, g, b, a)
- #FFF, #FFFFFF
- hsl(h, s, l), hsla(h, s, l, a)
- Color name (‘green’)
- background-color
|
1.0
|
Need colors and background-color to be used - - rgb(r, g, b), rgba(r, g, b, a)
- #FFF, #FFFFFF
- hsl(h, s, l), hsla(h, s, l, a)
- Color name (‘green’)
- background-color
|
non_process
|
need colors and background color to be used rgb r g b rgba r g b a fff ffffff hsl h s l hsla h s l a color name ‘green’ background color
| 0
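The color notations listed in that row are interchangeable; a short sketch (the helper name is mine, not part of the lab assignment) expanding 3- and 6-digit hex colors into rgb() components:

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """Convert '#FFF' or '#FFFFFF' to an (r, g, b) tuple of ints."""
    h = hex_color.lstrip("#")
    if len(h) == 3:                        # expand shorthand: 'abc' -> 'aabbcc'
        h = "".join(c * 2 for c in h)
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

assert hex_to_rgb("#FFF") == (255, 255, 255)
assert hex_to_rgb("#008000") == (0, 128, 0)   # CSS named color 'green'
```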
|
50,018
| 6,049,713,432
|
IssuesEvent
|
2017-06-12 19:23:42
|
numbbo/coco
|
https://api.github.com/repos/numbbo/coco
|
closed
|
activate test_suites
|
bug Priority-Critical Tests & CI
|
I factored out the regression tests of the suites in `do.py` from `test_python()` into `test_suites()`.
What needs to be done now is to initiate this test automatically. It can be done by
```
python do.py test-suites 2 10 20
```
where the numbers are optional and refer to a data set with the respective number of solutions tested per problem, default is 2 (for a very quick test), it can be all three to test 32 solutions (for overnight test). The numbers refer to the respective test data which must be present under `test/regression-test/data`.
|
1.0
|
activate test_suites - I factored out the regression tests of the suites in `do.py` from `test_python()` into `test_suites()`.
What needs to be done now is to initiate this test automatically. It can be done by
```
python do.py test-suites 2 10 20
```
where the numbers are optional and refer to a data set with the respective number of solutions tested per problem, default is 2 (for a very quick test), it can be all three to test 32 solutions (for overnight test). The numbers refer to the respective test data which must be present under `test/regression-test/data`.
|
non_process
|
activate test suites i factored out the regression tests of the suites in do py from test python into test suites what needs to be done now is to initiate this test automatically it can be done by python do py test suites where the numbers are optional and refer to a data set with the respective number of solutions tested per problem default is for a very quick test it can be all three to test solutions for overnight test the numbers refer to the respective test data which must be present under test regression test data
| 0
|
12,391
| 14,908,806,269
|
IssuesEvent
|
2021-01-22 06:44:14
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
PM>add user>Select study>Sites not displayed when selected
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
**Describe the bug**
The number of sites when Study is selected is to be displayed even in add new user
**To Reproduce**
Steps to reproduce the behavior:
1. Go to PM
2. Click on User>add new user
3. select study
4. The number of sites selected under study is not displayed next to the study for add user
Note: this is implemented only in edit user and the same should be in add user
**Expected behavior**
The number of sites selected under study should be displayed next to the study for add user
**Screenshot**

|
3.0
|
PM>add user>Select study>Sites not displayed when selected - **Describe the bug**
When a study is selected, the number of sites should be displayed even in add new user
**To Reproduce**
Steps to reproduce the behavior:
1. Go to PM
2. Click on User>add new user
3. select study
4. The number of sites selected under study is not displayed next to the study for add user
Note: this is implemented only in edit user and the same should be in add user
**Expected behavior**
The number of sites selected under study should be displayed next to the study for add user
**Screenshot**

|
process
|
pm add user select study sites not displayed when selected describe the bug the number of sites when study is selected is to be displayed even in add new user to reproduce steps to reproduce the behavior go to pm click on user add new user select study the number of sites selected under study is not displayed next to the study for add user note this is implemented only in edit user and the same should be in add user expected behavior the number of sites selected under study should be displayed next to the study for add user screenshot
| 1
|
22,200
| 3,618,508,450
|
IssuesEvent
|
2016-02-08 11:59:04
|
Threesixty/aufo-jde-ppst
|
https://api.github.com/repos/Threesixty/aufo-jde-ppst
|
closed
|
Explanation - Alert on creating a user account and/or company
|
auto-migrated Priority-Medium Type-Defect
|
```
Error handling
name field: do not erase everything that had already been typed
Autofill (does not work for Vincent)
```
Original issue reported on code.google.com by `delegati...@gmail.com` on 31 Jul 2014 at 1:16
|
1.0
|
Explanation - Alert on creating a user account and/or company - ```
Error handling
name field: do not erase everything that had already been typed
Autofill (does not work for Vincent)
```
Original issue reported on code.google.com by `delegati...@gmail.com` on 31 Jul 2014 at 1:16
|
non_process
|
explanation alert on creating a user account and or company error handling name field do not erase everything that had already been typed autofill does not work for vincent original issue reported on code google com by delegati gmail com on jul at
| 0
|
10,015
| 13,043,900,985
|
IssuesEvent
|
2020-07-29 02:59:56
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `Substring3Args` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `Substring3Args` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
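Before porting, it helps to pin down the MySQL semantics that `Substring3Args` must reproduce: positions are 1-based, a negative position counts from the end, and a non-positive length yields an empty string. A Python sketch of the expected behaviour (illustrative only; the actual port is Rust):

```python
def substring_3args(s: str, pos: int, length: int) -> str:
    """MySQL-style SUBSTRING(str, pos, len): positions are 1-based,
    a negative pos counts from the end of the string, and pos == 0
    or len <= 0 yields the empty string."""
    if pos == 0 or length <= 0:
        return ""
    if pos < 0:
        pos += len(s) + 1
        if pos < 1:
            return ""
    return s[pos - 1 : pos - 1 + length]
```

For example, `substring_3args("hello", -3, 2)` gives `"ll"`, matching MySQL's `SUBSTRING('hello', -3, 2)`.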
|
2.0
|
UCP: Migrate scalar function `Substring3Args` from TiDB -
## Description
Port the scalar function `Substring3Args` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function from tidb description port the scalar function from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
283,184
| 8,717,638,752
|
IssuesEvent
|
2018-12-07 17:46:59
|
cilium/cilium
|
https://api.github.com/repos/cilium/cilium
|
closed
|
Better cilium-health status output if HTTP check fails
|
help-wanted kind/enhancement kind/microtask priority/medium
|
Adjust `cilium-health status` output to indicate that an HTTP connectivity failure implies the cilium agent cannot be reached, explaining the difference from the ICMP health check.
|
1.0
|
Better cilium-health status output if HTTP check fails - Adjust `cilium-health status` output to indicate that an HTTP connectivity failure implies the cilium agent cannot be reached, explaining the difference from the ICMP health check.
|
non_process
|
better cilium health status output if http check fails adjust cilium health status output to indicate that failure to http connectivity implies that the cilium agent cannot be reached to explain difference to icmp health check
| 0
|
346,172
| 24,886,602,482
|
IssuesEvent
|
2022-10-28 08:19:42
|
Aishwarya-Hariharan-Iyer/ped
|
https://api.github.com/repos/Aishwarya-Hariharan-Iyer/ped
|
opened
|
User Guide expected outputs are not shown
|
type.DocumentationBug severity.Low
|
The UG does not contain either sample outputs or descriptions of system response (for example, "NotionUS shows a message saying --" for this command) for the commands which makes it hard to understand which messages, especially error messages, are a part of design and which are a potential flaw
<!--session: 1666943918601-d0c00464-9b52-4e7e-9af0-4ad89ec55563-->
<!--Version: Web v3.4.4-->
|
1.0
|
User Guide expected outputs are not shown - The UG does not contain either sample outputs or descriptions of system response (for example, "NotionUS shows a message saying --" for this command) for the commands which makes it hard to understand which messages, especially error messages, are a part of design and which are a potential flaw
<!--session: 1666943918601-d0c00464-9b52-4e7e-9af0-4ad89ec55563-->
<!--Version: Web v3.4.4-->
|
non_process
|
user guide expected outputs are not shown the ug does not contain either sample outputs or descriptions of system response for example notionus shows a message saying for this command for the commands which makes it hard to understand which messages especially error messages are a part of design and which are a potential flaw
| 0
|
280,958
| 30,865,618,726
|
IssuesEvent
|
2023-08-03 07:50:29
|
BogdanOrg/WebGoat-DEV
|
https://api.github.com/repos/BogdanOrg/WebGoat-DEV
|
opened
|
CVE-2021-29505 (High) detected in xstream-1.4.5.jar
|
Mend: dependency security vulnerability
|
## CVE-2021-29505 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://codehaus.org/xstream-parent/xstream/">http://codehaus.org/xstream-parent/xstream/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/BogdanOrg/WebGoat-DEV/commit/dfb449e5aba1a4b466b34364256f42943ca18d4e">dfb449e5aba1a4b466b34364256f42943ca18d4e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is software for serializing Java objects to XML and back again. A vulnerability in XStream versions prior to 1.4.17 may allow a remote attacker has sufficient rights to execute commands of the host only by manipulating the processed input stream. No user who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types is affected. The vulnerability is patched in version 1.4.17.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-29505>CVE-2021-29505</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-7chv-rrw6-w6fc">https://github.com/advisories/GHSA-7chv-rrw6-w6fc</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: 1.4.17</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2021-29505 (High) detected in xstream-1.4.5.jar - ## CVE-2021-29505 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://codehaus.org/xstream-parent/xstream/">http://codehaus.org/xstream-parent/xstream/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/BogdanOrg/WebGoat-DEV/commit/dfb449e5aba1a4b466b34364256f42943ca18d4e">dfb449e5aba1a4b466b34364256f42943ca18d4e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is software for serializing Java objects to XML and back again. A vulnerability in XStream versions prior to 1.4.17 may allow a remote attacker has sufficient rights to execute commands of the host only by manipulating the processed input stream. No user who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types is affected. The vulnerability is patched in version 1.4.17.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-29505>CVE-2021-29505</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-7chv-rrw6-w6fc">https://github.com/advisories/GHSA-7chv-rrw6-w6fc</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: 1.4.17</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_process
|
cve high detected in xstream jar cve high severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com thoughtworks xstream xstream xstream jar dependency hierarchy x xstream jar vulnerable library found in head commit a href found in base branch main vulnerability details xstream is software for serializing java objects to xml and back again a vulnerability in xstream versions prior to may allow a remote attacker has sufficient rights to execute commands of the host only by manipulating the processed input stream no user who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types is affected the vulnerability is patched in version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr
| 0
|
29,377
| 13,102,172,058
|
IssuesEvent
|
2020-08-04 06:04:19
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
Webapp: issue while deploying zip
|
Service Attention Web Apps
|
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az webapp deployment source config-zip`
**Errors:**
```
('Connection aborted.', OSError("(10054, 'WSAECONNRESET')",))
Traceback (most recent call last):
urllib3\urllib3\contrib\pyopenssl.py, ln 320, in _send_until_done
pip-install-_1mlo87m\pyOpenSSL\OpenSSL\SSL.py, ln 1757, in send
pip-install-_1mlo87m\pyOpenSSL\OpenSSL\SSL.py, ln 1663, in _raise_ssl_error
OpenSSL.SSL.SysCallError: (10054, 'WSAECONNRESET')
...
pip-install-_1mlo87m\requests\requests\sessions.py, ln 643, in send
pip-install-_1mlo87m\requests\requests\adapters.py, ln 498, in send
requests.exceptions.ConnectionError: ('Connection aborted.', OSError("(10054, 'WSAECONNRESET')",))
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az webapp deployment source config-zip --resource-group {} --name {} --src {}`
## Expected Behavior
## Environment Summary
```
Windows-10-10.0.18362-SP0
Python 3.6.6
Installer: MSI
azure-cli 2.3.1
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
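Resets like `WSAECONNRESET` during a zip upload are often transient. A hedged sketch of a client-side retry wrapper (a hypothetical helper, not part of the Azure CLI) that a caller could wrap around the flaky request:

```python
import time

def with_retries(fn, attempts=3, delay=0.0,
                 retriable=(ConnectionError, OSError)):
    """Call fn(), retrying on transient network errors such as a
    connection reset; re-raise after the final attempt."""
    for i in range(attempts):
        try:
            return fn()
        except retriable:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```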
|
1.0
|
Webapp: issue while deploying zip -
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az webapp deployment source config-zip`
**Errors:**
```
('Connection aborted.', OSError("(10054, 'WSAECONNRESET')",))
Traceback (most recent call last):
urllib3\urllib3\contrib\pyopenssl.py, ln 320, in _send_until_done
pip-install-_1mlo87m\pyOpenSSL\OpenSSL\SSL.py, ln 1757, in send
pip-install-_1mlo87m\pyOpenSSL\OpenSSL\SSL.py, ln 1663, in _raise_ssl_error
OpenSSL.SSL.SysCallError: (10054, 'WSAECONNRESET')
...
pip-install-_1mlo87m\requests\requests\sessions.py, ln 643, in send
pip-install-_1mlo87m\requests\requests\adapters.py, ln 498, in send
requests.exceptions.ConnectionError: ('Connection aborted.', OSError("(10054, 'WSAECONNRESET')",))
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az webapp deployment source config-zip --resource-group {} --name {} --src {}`
## Expected Behavior
## Environment Summary
```
Windows-10-10.0.18362-SP0
Python 3.6.6
Installer: MSI
azure-cli 2.3.1
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
|
non_process
|
webapp issui while deploying zip this is autogenerated please review and update as needed describe the bug command name az webapp deployment source config zip errors connection aborted oserror wsaeconnreset traceback most recent call last contrib pyopenssl py ln in send until done pip install pyopenssl openssl ssl py ln in send pip install pyopenssl openssl ssl py ln in raise ssl error openssl ssl syscallerror wsaeconnreset pip install requests requests sessions py ln in send pip install requests requests adapters py ln in send requests exceptions connectionerror connection aborted oserror wsaeconnreset to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az webapp deployment source config zip resource group name src expected behavior environment summary windows python installer msi azure cli additional context
| 0
|
16,596
| 21,651,220,886
|
IssuesEvent
|
2022-05-06 09:32:58
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Dependency convergence error
|
kind/bug team/process-automation area/project
|
**Describe the bug**
When building `zeebe-process-test` with 8.1.0-alpha1, we get the following error.
```
Dependency convergence error for org.camunda.bpm.model:camunda-dmn-model:jar:7.16.0:compile paths to dependency are:
+-io.camunda:zeebe-process-test-engine:jar:8.0.1-SNAPSHOT
+-io.camunda:zeebe-workflow-engine:jar:8.1.0-alpha1:compile
+-io.camunda:zeebe-dmn:jar:8.1.0-alpha1:compile
+-org.camunda.bpm.extension.dmn.scala:dmn-engine:jar:1.7.1:compile
+-org.camunda.bpm.model:camunda-dmn-model:jar:7.16.0:compile
and
+-io.camunda:zeebe-process-test-engine:jar:8.0.1-SNAPSHOT
+-io.camunda:zeebe-workflow-engine:jar:8.1.0-alpha1:compile
+-io.camunda:zeebe-dmn:jar:8.1.0-alpha1:compile
+-org.camunda.bpm.model:camunda-dmn-model:jar:7.17.0:compile
```
zeebe-workflow-engine pulls in `camunda-dmn-model` twice with different versions. However, zeebe builds fine. We can temporarily fix it in zeebe-process-test, but the issue seems to be in zeebe itself.
**Expected behavior**
No conflicting versions in published artifacts.
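The conflict above can be detected mechanically: collect each group:artifact with every version it is pulled in at and flag those with more than one. A small illustration (not the Maven enforcer itself):

```python
from collections import defaultdict

def find_convergence_conflicts(deps):
    """deps: iterable of 'group:artifact:version' coordinates.
    Returns {(group, artifact): sorted versions} for every
    artifact that appears with more than one version."""
    seen = defaultdict(set)
    for coord in deps:
        group, artifact, version = coord.rsplit(":", 2)
        seen[(group, artifact)].add(version)
    return {k: sorted(v) for k, v in seen.items() if len(v) > 1}
```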
|
1.0
|
Dependency convergence error - **Describe the bug**
When building `zeebe-process-test` with 8.1.0-alpha1, we get the following error.
```
Dependency convergence error for org.camunda.bpm.model:camunda-dmn-model:jar:7.16.0:compile paths to dependency are:
+-io.camunda:zeebe-process-test-engine:jar:8.0.1-SNAPSHOT
+-io.camunda:zeebe-workflow-engine:jar:8.1.0-alpha1:compile
+-io.camunda:zeebe-dmn:jar:8.1.0-alpha1:compile
+-org.camunda.bpm.extension.dmn.scala:dmn-engine:jar:1.7.1:compile
+-org.camunda.bpm.model:camunda-dmn-model:jar:7.16.0:compile
and
+-io.camunda:zeebe-process-test-engine:jar:8.0.1-SNAPSHOT
+-io.camunda:zeebe-workflow-engine:jar:8.1.0-alpha1:compile
+-io.camunda:zeebe-dmn:jar:8.1.0-alpha1:compile
+-org.camunda.bpm.model:camunda-dmn-model:jar:7.17.0:compile
```
zeebe-workflow-engine pulls in `camunda-dmn-model` twice with different versions. However, zeebe builds fine. We can temporarily fix it in zeebe-process-test, but the issue seems to be in zeebe itself.
**Expected behavior**
No conflicting versions in published artifacts.
|
process
|
dependency convergence error describe the bug when building zeebe process test with we get the following error dependency convergence error for org camunda bpm model camunda dmn model jar compile paths to dependency are io camunda zeebe process test engine jar snapshot io camunda zeebe workflow engine jar compile io camunda zeebe dmn jar compile org camunda bpm extension dmn scala dmn engine jar compile org camunda bpm model camunda dmn model jar compile and io camunda zeebe process test engine jar snapshot io camunda zeebe workflow engine jar compile io camunda zeebe dmn jar compile org camunda bpm model camunda dmn model jar compile zeebe worlflow engine pulls in the camunda dmn model times with different versions however zeebe builds fine we can temporarily fix it in zeebe process test but the issue seems to be in zeebe itself expected behavior no conflicting versions in published artifacts
| 1
|
71,449
| 7,245,092,466
|
IssuesEvent
|
2018-02-14 16:57:40
|
EyeSeeTea/pictureapp
|
https://api.github.com/repos/EyeSeeTea/pictureapp
|
closed
|
During a soft login, clear & logout doesn't toggle simple/advance switch
|
complexity - low (1hr) eReferrals priority - critical testing type - bug
|
If during a soft login you click on clear & logout you're taken to the full login screen collapsed (so simple) but the simple/advance switch says "Simple", so we're in a Simple/Advance switch inversion situation
|
1.0
|
During a soft login, clear & logout doesn't toggle simple/advance switch - If during a soft login you click on clear & logout you're taken to the full login screen collapsed (so simple) but the simple/advance switch says "Simple", so we're in a Simple/Advance switch inversion situation
|
non_process
|
during a soft login clear logout doesn t toggle simple advance switch if during a soft login you click on clear logout you re taken to the full login screen collapsed so simple but the simple advance switch says simple so we re in a simple advance switch inversion situation
| 0
|
294,908
| 25,414,048,500
|
IssuesEvent
|
2022-11-22 21:49:33
|
LIBCAS/INDIHU-Exhibition
|
https://api.github.com/repos/LIBCAS/INDIHU-Exhibition
|
opened
|
Dokresli
|
waiting for test
|
Same problem with the name and the assignment: it does not draw but erases instead, the colors and line thicknesses from the planned functionality are missing, and it also does not move on to the next screen
|
1.0
|
Dokresli - Same problem with the name and the assignment: it does not draw but erases instead, the colors and line thicknesses from the planned functionality are missing, and it also does not move on to the next screen
|
non_process
|
dokresli same problem with the name and the assignment it does not draw but erases instead the colors and line thicknesses from the planned functionality are missing and it also does not move on to the next screen
| 0
|
15,160
| 18,912,278,321
|
IssuesEvent
|
2021-11-16 15:12:07
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
[Process] defaultEnv not properly generated in PHP build-in webserver
|
Bug Process Status: Needs Review
|
### Symfony version(s) affected
process 5.3.7
### Description
The php-doc says
https://github.com/symfony/symfony/blob/f7d70f1ab4be3bd98432cf9eccc4fa53f9fd36c4/src/Symfony/Component/Process/Process.php#L137
With `$env = null` it will take the environment of the current process. This is not always true.
Using the [PHP Build-In Webserver](https://www.php.net/manual/en/features.commandline.webserver.php) (`php -S localhost:80`) `$_ENV` is empty (but `getenv()` is not), so
https://github.com/symfony/symfony/blob/f7d70f1ab4be3bd98432cf9eccc4fa53f9fd36c4/src/Symfony/Component/Process/Process.php#L1672
gives unexpected result. Especially the PATH variable remains unset, which leads to unfound binaries
### How to reproduce
Use the PHP built-in webserver (see above) and run e.g.
```php
$p = new Process(['pdflatex'], __DIR__, null)
```
(set `$env = null`)
-> PATH Variable is not set in `$_ENV` only in `getenv()`
### Possible Solution
Use `getEnv()` instead of `$_ENV` or check for
```php
if (php_sapi_name() === 'cli-server') {
```
### Additional Context
`php -S` is (often) used in local development environments. Missing PATH Variables can be a big issue there.
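The proposed fallback amounts to merging the full `getenv()` view under the explicit `$_ENV` entries; a Python analogue of that merge (illustrative only, the real fix would live in the PHP `Process` class):

```python
def default_env(dollar_env, getenv_all):
    """Start from getenv()'s complete view of the environment and let
    explicit $_ENV entries win, so PATH survives under `php -S`
    where $_ENV is empty."""
    merged = dict(getenv_all)
    merged.update(dollar_env)
    return merged
```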
|
1.0
|
[Process] defaultEnv not properly generated in PHP build-in webserver - ### Symfony version(s) affected
process 5.3.7
### Description
The php-doc says
https://github.com/symfony/symfony/blob/f7d70f1ab4be3bd98432cf9eccc4fa53f9fd36c4/src/Symfony/Component/Process/Process.php#L137
With `$env = null` it will take the environment of the current process. This is not always true.
Using the [PHP Build-In Webserver](https://www.php.net/manual/en/features.commandline.webserver.php) (`php -S localhost:80`) `$_ENV` is empty (but `getenv()` is not), so
https://github.com/symfony/symfony/blob/f7d70f1ab4be3bd98432cf9eccc4fa53f9fd36c4/src/Symfony/Component/Process/Process.php#L1672
gives unexpected result. Especially the PATH variable remains unset, which leads to unfound binaries
### How to reproduce
Use the PHP built-in webserver (see above) and run e.g.
```php
$p = new Process(['pdflatex'], __DIR__, null)
```
(set `$env = null`)
-> PATH Variable is not set in `$_ENV` only in `getenv()`
### Possible Solution
Use `getEnv()` instead of `$_ENV` or check for
```php
if (php_sapi_name() === 'cli-server') {
```
### Additional Context
`php -S` is (often) used in local development environments. Missing PATH Variables can be a big issue there.
|
process
|
defaultenv not properly generated in php build in webserver symfony version s affected process description the php doc says with env null it will take the enviroment of the current process this is not allways true using the php s localhost env is empty but getenv is not so gives unexpected result especially the path variable remains unset which leads to unfound binaries how to reproduce use php build in webserver see above and use e g php p new process dir null set env null path variable is not set in env only in getenv possible solution use getenv instead of env or check for php if php sapi name cli server additional context php s is often used in local development environments missing path variables can be a big issue there
| 1
|
13,303
| 15,777,983,511
|
IssuesEvent
|
2021-04-01 07:06:22
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
closed
|
Consider introducing general `label` field to processors (and other components?)
|
enhancement processors v4
|
Currently the label for a component is either the path (`pipeline.processors.0.1.processors.0`) of the component or, if it's a resource, the name (`resources.processors.foo`). This means that if a user wants nice labels in their logs and metrics they need to configure them as resources.
It would be possible to add a general purpose `label` field to components that overrides the default label assigned to it which removes that restriction. However, this is likely to need breaking changes in the logging and metrics set up for components and needs some further consideration.
For users it would look something like this:
```yaml
pipeline:
processors:
- bloblang: 'root = this.foo.bar'
label: 'my_foo_mapping'
```
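At lookup time the override is a one-liner: prefer the user-supplied `label`, otherwise fall back to the positional path. Sketched in Python for illustration (Benthos itself is Go):

```python
def component_label(path, explicit_label=None):
    """Label used in logs and metrics: the configured `label` if the
    user set one, otherwise the positional component path."""
    return explicit_label if explicit_label else path
```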
|
1.0
|
Consider introducing general `label` field to processors (and other components?) - Currently the label for a component is either the path (`pipeline.processors.0.1.processors.0`) of the component or, if it's a resource, the name (`resources.processors.foo`). This means that if a user wants nice labels in their logs and metrics they need to configure them as resources.
It would be possible to add a general purpose `label` field to components that overrides the default label assigned to it which removes that restriction. However, this is likely to need breaking changes in the logging and metrics set up for components and needs some further consideration.
For users it would look something like this:
```yaml
pipeline:
processors:
- bloblang: 'root = this.foo.bar'
label: 'my_foo_mapping'
```
|
process
|
consider introducing general label field to processors and other components currently the labels for components is either the path pipeline processors processors of the component or if it s a resource the name resources processors foo this means if a user wants nice labels in their logs and metrics they need to configure them as resources it would be possible to add a general purpose label field to components that overrides the default label assigned to it which removes that restriction however this is likely to need breaking changes in the logging and metrics set up for components and needs some further consideration for users it would look something like this yaml pipeline processors bloblang root this foo bar label my foo mapping
| 1
|
204,632
| 7,089,566,636
|
IssuesEvent
|
2018-01-12 03:34:41
|
dmwm/WMCore
|
https://api.github.com/repos/dmwm/WMCore
|
closed
|
Workflow Summary Makeover
|
Medium Priority WMAgent WMStats
|
In a nutshell, improve document structure, move visualization to WMStats and improve visualization.
The needed features are:
- User friendly performance histograms with API for retrieval
- Error display, organized by sites, exit codes.
- Missing lumis per output dataset with API for JSON retrieval
- Input description (including run information)
- Output description
- DQMHarvest information, run list of what was harvested.
This needs also more discussion on what else is required by Ops.
This will include the following issues:
#3967 #4438 #4315 #4219
This covers all that is contained in:
https://twiki.cern.ch/twiki/bin/view/CMSPublic/CompOpsWorkflowWMAgentFixesRequestSummary
Most of it is closed already anyway.
|
1.0
|
Workflow Summary Makeover - In a nutshell, improve document structure, move visualization to WMStats and improve visualization.
The needed features are:
- User friendly performance histograms with API for retrieval
- Error display, organized by sites, exit codes.
- Missing lumis per output dataset with API for JSON retrieval
- Input description (including run information)
- Output description
- DQMHarvest information, run list of what was harvested.
This needs also more discussion on what else is required by Ops.
This will include the following issues:
#3967 #4438 #4315 #4219
This covers all that is contained in:
https://twiki.cern.ch/twiki/bin/view/CMSPublic/CompOpsWorkflowWMAgentFixesRequestSummary
Most of it is closed already anyway.
|
non_process
|
workflow summary makeover in a nutshell improve document structure move visualization to wmstats and improve visualization the needed features are user friendly performance histograms with api for retrieval error display organized by sites exit codes missing lumis per output dataset with api for json retrieval input description including run information output description dqmharvest information run list of what was harvested this needs also more discussion on what else is required by ops this will include the following issues this covers all that is contained in most of it is closed already anyway
| 0
|
20,473
| 27,131,557,685
|
IssuesEvent
|
2023-02-16 10:08:39
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Improve python shell integration test coverage
|
P4 type: process team-Rules-Python stale
|
Namely, OSS some internal-only basic "hello world" tests demonstrating py_binary, py_library, runfiles access, and choosing the Python version.
|
1.0
|
Improve python shell integration test coverage - Namely, OSS some internal-only basic "hello world" tests demonstrating py_binary, py_library, runfiles access, and choosing the Python version.
|
process
|
improve python shell integration test coverage namely oss some internal only basic hello world tests demonstrating py binary py library runfiles access and choosing the python version
| 1
|
8,832
| 10,781,259,811
|
IssuesEvent
|
2019-11-04 14:35:56
|
Yoast/wordpress-seo
|
https://api.github.com/repos/Yoast/wordpress-seo
|
closed
|
High hosting CPU usage with Yoast SEO and WooCommerce
|
backlog compatibility component: performance type: bug
|
Copied from https://github.com/Yoast/wordpress-seo/issues/9170#issuecomment-373472498
---
high hosting cpu usage
after yoast seo update cpu usage when I click woo–products–all is above 90-95%
and loading time (by query monitor) is above 8 seconds.
also query monitor detects duplicate queries
SELECT ID
FROM wpx1_posts AS posts
LEFT JOIN wpx1_yoast_seo_meta AS yoast_meta
ON yoast_meta.object_id = posts.ID
WHERE posts.post_status = 'publish'
AND posts.post_type IN ( 'post', 'page', 'product' )
AND yoast_meta.internal_link_count IS NULL
LIMIT 1
27 WPSEO_Link_Query::has_unprocessed_posts()
27 calls
Plugin: wordpress-seo
27 calls
get_column_headers()
1 call
WP_List_Table->get_default_primary_column_name()
26 calls
it's just 20 records (products) per page.
with 100 products per page (woo–products-all) cpu usage is above 97%, loading time is about 40 seconds, and there are over 100 duplicate queries
SELECT ID
FROM wpx1_posts AS posts
LEFT JOIN wpx1_yoast_seo_meta AS yoast_meta
ON yoast_meta.object_id = posts.ID
WHERE posts.post_status = “publish”
AND posts.post_type IN ( “post”, “page”, “product” )
AND yoast_meta.internal_link_count IS NULL
LIMIT 1
107 WPSEO_Link_Query::has_unprocessed_posts()
107 calls
Plugin: wordpress-seo
107 calls
get_column_headers()
1 call
WP_List_Table->get_default_primary_column_name()
106 calls
with the deactivated yoast seo plugin is just below 25-30% and 2 seconds.
wordpress 4.9.4
yoast seo 7.0.3
woocommerce 3.3.3
products 1500
jetpack
cloudflare
wp super cache
metaslider
wc checkout field editor
wc pdf invoices
wc products per page
wc table rate shipping
newsletter
query monitor
/////////
looks like yoast seo 7.0.1-9170-beta doesn’t work for me.
loading time decreased to 3-4 seconds against 8 seconds but cpu usage still 60-95%.
|
True
|
High hosting CPU usage with Yoast SEO and WooCommerce - Copied from https://github.com/Yoast/wordpress-seo/issues/9170#issuecomment-373472498
---
high hosting cpu usage
after yoast seo update cpu usage when I click woo–products–all is above 90-95%
and loading time (by query monitor) is above 8 seconds.
also query monitor detects duplicate queries
SELECT ID
FROM wpx1_posts AS posts
LEFT JOIN wpx1_yoast_seo_meta AS yoast_meta
ON yoast_meta.object_id = posts.ID
WHERE posts.post_status = “publish”
AND posts.post_type IN ( “post”, “page”, “product” )
AND yoast_meta.internal_link_count IS NULL
LIMIT 1
27 WPSEO_Link_Query::has_unprocessed_posts()
27 calls
Plugin: wordpress-seo
27 calls
get_column_headers()
1 call
WP_List_Table->get_default_primary_column_name()
26 calls
its just 20 records (products) per page.
with a 100 products per page (woo–products-all) cpu usage is above 97% loading time about 40 seconds and over a 100 duplicate queries
SELECT ID
FROM wpx1_posts AS posts
LEFT JOIN wpx1_yoast_seo_meta AS yoast_meta
ON yoast_meta.object_id = posts.ID
WHERE posts.post_status = “publish”
AND posts.post_type IN ( “post”, “page”, “product” )
AND yoast_meta.internal_link_count IS NULL
LIMIT 1
107 WPSEO_Link_Query::has_unprocessed_posts()
107 calls
Plugin: wordpress-seo
107 calls
get_column_headers()
1 call
WP_List_Table->get_default_primary_column_name()
106 calls
with the deactivated yoast seo plugin is just below 25-30% and 2 seconds.
wordpress 4.9.4
yoast seo 7.0.3
woocommerce 3.3.3
products 1500
jetpack
cloudflare
wp super cache
metaslider
wc checkout field editor
wc pdf invoices
wc products per page
wc table rate shipping
newsletter
query monitor
/////////
looks like yoast seo 7.0.1-9170-beta doesn’t work for me.
loading time decreased to 3-4 seconds against 8 seconds but cpu usage still 60-95%.
|
non_process
|
high hosting cpu usage with yoast seo and woocommerce copied from high hosting cpu usage after yoast seo update cpu usage when i click woo–products–all is above and loading time by query monitor is above seconds also query monitor detects duplicate queries select id from posts as posts left join yoast seo meta as yoast meta on yoast meta object id posts id where posts post status “publish” and posts post type in “post” “page” “product” and yoast meta internal link count is null limit wpseo link query has unprocessed posts calls plugin wordpress seo calls get column headers call wp list table get default primary column name calls its just records products per page with a products per page woo–products all cpu usage is above loading time about seconds and over a duplicate queries select id from posts as posts left join yoast seo meta as yoast meta on yoast meta object id posts id where posts post status “publish” and posts post type in “post” “page” “product” and yoast meta internal link count is null limit wpseo link query has unprocessed posts calls plugin wordpress seo calls get column headers call wp list table get default primary column name calls with the deactivated yoast seo plugin is just below and seconds wordpress yoast seo woocommerce products jetpack cloudflare wp super cache metaslider wc checkout field editor wc pdf invoices wc products per page wc table rate shipping newsletter query monitor looks like yoast seo beta doesn’t works for me loading time decrease to seconds against seconds but cpu usage still
| 0
|
9,462
| 12,440,515,559
|
IssuesEvent
|
2020-05-26 12:08:44
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
[prisma-format] should eliminate excessive extra lines
|
kind/regression process/candidate
|
Recently `prisma-fmt` allowed newlines to reset the formatter. This is awesome, but it's incomplete:
Given this **unformatted** model:
```prisma
model User {
email String @unique
name String?
}
```
**Actual (after formatting)**
```prisma
model User {
email String @unique
name String?
}
```
**Expected (after formatting)**
```prisma
model User {
email String @unique
name String?
}
```
|
1.0
|
[prisma-format] should eliminate excessive extra lines - Recently `prisma-fmt` allowed newlines to reset the formatter. This is awesome, but it's incomplete:
Given this **unformatted** model:
```prisma
model User {
email String @unique
name String?
}
```
**Actual (after formatting)**
```prisma
model User {
email String @unique
name String?
}
```
**Expected (after formatting)**
```prisma
model User {
email String @unique
name String?
}
```
|
process
|
should eliminate excessive extra lines recently prisma fmt allowed newlines to reset the formatter this is awesome but it s incomplete given this unformatted model prisma model user email string unique name string actual after formatting prisma model user email string unique name string expected after formatting prisma model user email string unique name string
| 1
|
6,368
| 9,418,934,311
|
IssuesEvent
|
2019-04-10 20:31:41
|
mick-warehime/sixth_corp
|
https://api.github.com/repos/mick-warehime/sixth_corp
|
opened
|
If no moves are available, AI should return None
|
ai development process
|
This will likely occur in certain situations.
|
1.0
|
If no moves are available, AI should return None - This will likely occur in certain situations.
|
process
|
if no moves are available ai should return none this will likely occur in certain situations
| 1
|
650,233
| 21,343,609,454
|
IssuesEvent
|
2022-04-19 00:05:59
|
cyrusae/highlighter-public
|
https://api.github.com/repos/cyrusae/highlighter-public
|
opened
|
Let new codes check against existing ones when entered
|
enhancement frontend Priority: + QOL
|
Probably onBlur but I need to get the list of codes back into Colormaker component first
|
1.0
|
Let new codes check against existing ones when entered - Probably onBlur but I need to get the list of codes back into Colormaker component first
|
non_process
|
let new codes check against existing ones when entered probably onblur but i need to get the list of codes back into colormaker component first
| 0
|
8,361
| 11,516,123,331
|
IssuesEvent
|
2020-02-14 03:46:12
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Adding questionId to query remark for embedded reports and dashboard
|
Querying/Processor Type:New Feature
|
**Is your feature request related to a problem? Please describe.**
Currently, we add user id and query hash to query remark when a user executes a query. But for a query that is executed from embedded dashboard or report, there is no information. Debugging gets difficult if we have multiple reports with similar looking query.
**Describe the solution you'd like**
It would be a good idea to at least add the question Id [[ and dashboard id if available ]] to the query remark
**Describe alternatives you've considered**
None
**How important is this feature to you?**
Good to have. Time saver while debugging.
**Additional context**
Related to #2386
|
1.0
|
Adding questionId to query remark for embedded reports and dashboard - **Is your feature request related to a problem? Please describe.**
Currently, we add user id and query hash to query remark when a user executes a query. But for a query that is executed from embedded dashboard or report, there is no information. Debugging gets difficult if we have multiple reports with similar looking query.
**Describe the solution you'd like**
It would be a good idea to at least add the question Id [[ and dashboard id if available ]] to the query remark
**Describe alternatives you've considered**
None
**How important is this feature to you?**
Good to have. Time saver while debugging.
**Additional context**
Related to #2386
|
process
|
adding questionid to query remark for embedded reports and dashboard is your feature request related to a problem please describe currently we add user id and query hash to query remark when a user executes a query but for a query that is executed from embedded dashboard or report there is no information debugging gets difficult if we have multiple reports with similar looking query describe the solution you d like it will be good idea to at least add question id to query remark describe alternatives you ve considered none how important is this feature to you good to have time saver while debugging additional context related to
| 1
|
198,791
| 14,997,497,018
|
IssuesEvent
|
2021-01-29 16:59:12
|
tracim/tracim
|
https://api.github.com/repos/tracim/tracim
|
closed
|
Bug: JS Error in the activity feed page
|
activity-feed add to changelog frontend manually tested
|
## Description and expectations
When going to the activity feed page, we sometimes get the following JS error:
```
Uncaught (in promise) TypeError: activityParams is null
activityIndex activity.js:183
addMessageToActivityList activity.js:183
_temp withActivity.jsx:90
```
### How to reproduce
1. Login as administrator
2. Visit the activity feed
3. in another browser login as another administrator
4. create a new user
## Diagnostic
TLMs related to users are not handled by the activity feed component. The function handling TLMs, `getActivityParams`, therefore (correctly) returns null. We did not notice that this case was not explicitly handled because there are no exhaustive matches in JavaScript.
`addMessageToActivityList` does not check for null when using `getActivityParams`. We did not notice because the language does not enforce checking for the presence of a result as TypeScript would.
The solution is to check for null there.
### Version information
- Tracim version: 3.5 before release
|
1.0
|
Bug: JS Error in the activity feed page - ## Description and expectations
When going to the activity feed page, we sometimes get the following JS error:
```
Uncaught (in promise) TypeError: activityParams is null
activityIndex activity.js:183
addMessageToActivityList activity.js:183
_temp withActivity.jsx:90
```
### How to reproduce
1. Login as administrator
2. Visit the activity feed
3. in another browser login as another administrator
4. create a new user
## Diagnostic
TLMs related to users are not handled by the activity feed component. The function handling TLMs, `getActivityParams`, therefore (correctly) returns null. We did not notice that this case was not explicitly handled because there are no exhaustive matches in JavaScript.
`addMessageToActivityList` does not check for null when using `getActivityParams`. We did not notice because the language does not enforce checking for the presence of a result as TypeScript would.
The solution is to check for null there.
### Version information
- Tracim version: 3.5 before release
|
non_process
|
bug js error in the activity feed page description and expectations when going we sometimes get the following js error uncaught in promise typeerror activityparams is null activityindex activity js addmessagetoactivitylist activity js temp withactivity jsx how to reproduce login as administrator visit the activity feed in another browser login as another administrator create a new user diagnostic tlm related to users are not handled by the activity feed component the function handling tlms getactivityparams therefore correctly returns null we did not notice that the case was not explicitly handled because there are no exhaustive matches in javascript addmessagetoactivitylist does not check for null when using getactivityparams we did not notice because the language does not enfore checking for the presence of a result as typescript would the solution is to check for null there version information tracim version before release
| 0
|
674,447
| 23,051,064,166
|
IssuesEvent
|
2022-07-24 16:31:47
|
xmlew/UniMod
|
https://api.github.com/repos/xmlew/UniMod
|
closed
|
Milestone 1 changelog: 23/5/2022 - 30/5/2022
|
High priority
|
Backend:
Creation of MongoDB database using MongoDB compass with University Module Dat
Registration function and module subscription level functions developed and integrated with database
Backend API using Node.js, available in server folder.
Frontend:
Login page development using HTML and CSS
Figma files designed
UI design flow developed
|
1.0
|
Milestone 1 changelog: 23/5/2022 - 30/5/2022 - Backend:
Creation of MongoDB database using MongoDB compass with University Module Dat
Registration function and module subscription level functions developed and integrated with database
Backend API using Node.js, available in server folder.
Frontend:
Login page development using HTML and CSS
Figma files designed
UI design flow developed
|
non_process
|
milestone changelog backend creation of mongodb database using mongodb compass with university module dat registration function and module subscription level functions developed and integrated with database backend api using node js available in server folder frontend login page development using html and css figma files designed ui design flow developed
| 0
|
280,519
| 21,280,119,961
|
IssuesEvent
|
2022-04-14 00:14:10
|
google/multispecies-whale-detection
|
https://api.github.com/repos/google/multispecies-whale-detection
|
opened
|
Flesh out examplegen usage documentation
|
documentation
|
User testing ran into a pitfall:
It is not obvious from the docs that examplegen reads all audio files and CSV files within a whole directory tree (recursively) and converts them all to examples.
Running without realizing this can cause problems downstream, like overlap between two runs that were intended to be separate for train and validation or multiplication of labeled examples if, for example, multiple versions of label CSV are within the same directory tree.
The documentation should elaborate on a suggested directory structure
> examplegen_train_run/
> input/
> output/
>
> examplegen_validation_run/
> input/
> output/
and make clear that unlabeled sections of audio are treated as implicit negatives.
(depends on issue #14 since the documentation should also explain the treatment of absolute and relative paths in labels CSV as detailed in #14)
|
1.0
|
Flesh out examplegen usage documentation - User testing ran into a pitfall:
It is not obvious from the docs that examplegen reads all audio files and CSV files within a whole directory tree (recursively) and converts them all to examples.
Running without realizing this can cause problems downstream, like overlap between two runs that were intended to be separate for train and validation or multiplication of labeled examples if, for example, multiple versions of label CSV are within the same directory tree.
The documentation should elaborate on a suggested directory structure
> examplegen_train_run/
> input/
> output/
>
> examplegen_validation_run/
> input/
> output/
and make clear that unlabeled sections of audio are treated as implicit negatives.
(depends on issue #14 since the documentation should also explain the treatment of absolute and relative paths in labels CSV as detailed in #14)
|
non_process
|
flesh out examplegen usage documentation user testing ran into a pitfall it is not obvious from the docs that examplgen reads all audio files and csv files within a whole directory tree recursive and converts them all to examples running without realizing this can cause problems downstream like overlap between two runs that were intended to be separate for train and validation or multiplication of labeled examples if for example multiple versions of label csv are within the same directory tree the documentation should elaborate on a suggested directory structure examplegen train run input output examplegen validation run input output and make clear that unlabeled sections of audio are treated as implicit negatives depends on issue since the documentation should also explain the treatment of absolute and relative paths in labels csv as detailed in
| 0
|
8,758
| 11,879,942,445
|
IssuesEvent
|
2020-03-27 09:46:00
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
opened
|
Update Prisma 1 docs to link to Prisma 2 docs
|
kind/docs process/candidate
|
The existing Prisma 1 docs need an entry "Prisma 2" in the dropdown.
At the same time it would make sense to replace "Prisma" with "Prisma 1" in the normal docs content.
|
1.0
|
Update Prisma 1 docs to link to Prisma 2 docs - The existing Prisma 1 docs need an entry "Prisma 2" in the dropdown.
At the same time it would make sense to replace "Prisma" with "Prisma 1" in the normal docs content.
|
process
|
update prisma docs to link to prisma docs the existing prisma docs need an entry prisma in the dropdown at the same time it would make sense to replace prisma with prisma in the normal docs content
| 1
|
13,613
| 16,195,303,442
|
IssuesEvent
|
2021-05-04 13:55:49
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Wrong Documentation: Runtime parameters are not supported in Azure DevOps Server 2019
|
Pri2 devops-cicd-process/tech devops/prod doc-bug
|
Runtime parameters are not supported in Azure DevOps Server 2019.
But the documentation is not correct: this section (https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops-2019&tabs=script#use-parameters-in-pipelines) applies only to the Cloud version, not to Azure DevOps Server 2019.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Wrong Documentation: Runtime parameters are not supported in Azure DevOps Server 2019 - Runtime parameters are not supported in Azure DevOps Server 2019.
But the documentation is not correct: this section (https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops-2019&tabs=script#use-parameters-in-pipelines) applies only to the Cloud version, not to Azure DevOps Server 2019.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
wrong documentation runtime parameters are not supported in azure devops server runtime parameters are not supported in azure devops server but the documentation is not correct this section must be applied only for cloud version not for azure devops server document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
2,155
| 5,005,968,002
|
IssuesEvent
|
2016-12-12 12:35:02
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
...positive regulation of argininosuccinate synthase act...
|
auto-migrated multiorganism processes New term request toxin Uniprot
|
Hi,
I will need this new term:
Process: envenomation resulting in positive regulation of argininosuccinate synthase activity in other organism; GO:new.
Def:
A process that begins with venom being forced into an organism by the bite or sting of another organism,
and ends with the activation of the cytosolic argininosuccinate synthase in the bitten organism.
PubMed=19491403
Is the child of
GO:0035738; envenomation resulting in modification of morphology or physiology of other organism
Best,
Florence
Reported by: fjungo
Original Ticket: [geneontology/ontology-requests/9998](https://sourceforge.net/p/geneontology/ontology-requests/9998)
|
1.0
|
...positive regulation of argininosuccinate synthase act... - Hi,
I will need this new term:
Process: envenomation resulting in positive regulation of argininosuccinate synthase activity in other organism; GO:new.
Def:
A process that begins with venom being forced into an organism by the bite or sting of another organism,
and ends with the activation of the cytosolic argininosuccinate synthase in the bitten organism.
PubMed=19491403
Is the child of
GO:0035738; envenomation resulting in modification of morphology or physiology of other organism
Best,
Florence
Reported by: fjungo
Original Ticket: [geneontology/ontology-requests/9998](https://sourceforge.net/p/geneontology/ontology-requests/9998)
|
process
|
positive regulation of argininosuccinate synthase act hi i will need this new term process envenomation resulting in positive regulation of argininosuccinate synthase activity in other organism go new def a process that begins with venom being forced into an organism by the bite or sting of another organism and ends with the activation of the cytosolic argininosuccinate synthase in the bitten organism pubmed is the child of go envenomation resulting in modification of morphology or physiology of other organism best florence reported by fjungo original ticket
| 1
|
13,171
| 15,595,609,061
|
IssuesEvent
|
2021-03-18 15:03:33
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Add tests for custom engine env vars like `PRISMA_QUERY_ENGINE_BINARY`
|
process/candidate team/client tech/typescript topic: binary topic: cli topic: internal topic: tests
|
## Problem
These env vars can be used to point to a custom engine location:
```
PRISMA_QUERY_ENGINE_BINARY
PRISMA_MIGRATION_ENGINE_BINARY
PRISMA_INTROSPECTION_ENGINE_BINARY
PRISMA_FMT_BINARY
```
Currently, they work but are not covered by tests.
## Suggested solution
Add a test for the "happy path", maybe move binaries to a random location and set the env vars to that location?
## Alternatives
Add a test in e2e-tests
## Additional context
The mapping to env vars is done in 2 places
https://github.com/prisma/prisma/blob/a949fdb477c1f3b8431c4064bc8ea89b4648ea02/src/packages/fetch-engine/src/download.ts#L58-L64
https://github.com/prisma/prisma/blob/a949fdb477c1f3b8431c4064bc8ea89b4648ea02/src/packages/sdk/src/resolveBinary.ts#L19-L24
We have 2 (currently skipped?!) tests here testing error handling with `PRISMA_QUERY_ENGINE_BINARY` in fetch-engine
https://github.com/prisma/prisma/blob/9aed14fd21dfa46f4dcf3803f8b01a85419e3f10/src/packages/fetch-engine/src/__tests__/download.test.ts#L173-L212
|
1.0
|
Add tests for custom engine env vars like `PRISMA_QUERY_ENGINE_BINARY` - ## Problem
These env vars can be used to point to a custom engine location:
```
PRISMA_QUERY_ENGINE_BINARY
PRISMA_MIGRATION_ENGINE_BINARY
PRISMA_INTROSPECTION_ENGINE_BINARY
PRISMA_FMT_BINARY
```
Currently, they work but are not covered by tests.
## Suggested solution
Add a test for the "happy path", maybe move binaries to a random location and set the env vars to that location?
## Alternatives
Add a test in e2e-tests
## Additional context
The mapping to env vars is done in 2 places
https://github.com/prisma/prisma/blob/a949fdb477c1f3b8431c4064bc8ea89b4648ea02/src/packages/fetch-engine/src/download.ts#L58-L64
https://github.com/prisma/prisma/blob/a949fdb477c1f3b8431c4064bc8ea89b4648ea02/src/packages/sdk/src/resolveBinary.ts#L19-L24
We have 2 (currently skipped?!) tests here testing error handling with `PRISMA_QUERY_ENGINE_BINARY` in fetch-engine
https://github.com/prisma/prisma/blob/9aed14fd21dfa46f4dcf3803f8b01a85419e3f10/src/packages/fetch-engine/src/__tests__/download.test.ts#L173-L212
|
process
|
add tests for custom engine env vars like prisma query engine binary problem these env vars can be used to have to point to a custom engine location prisma query engine binary prisma migration engine binary prisma introspection engine binary prisma fmt binary currently they work but are not covered by tests suggested solution add a test for the happy path maybe move binaries to a random location and set the env vars to that location alternatives add a test in tests additional context the mapping to env vars is done in places we have currently skipped tests here testing error handling with prisma query engine binary in fetch engine
| 1
|
585,825
| 17,535,662,180
|
IssuesEvent
|
2021-08-12 06:01:33
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
docs.google.com - site is not usable
|
status-needsinfo browser-firefox priority-critical os-linux engine-gecko
|
<!-- @browser: Firefox 90.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:90.0) Gecko/20100101 Firefox/90.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: https://docs.google.com/forms/d/e/1FAIpQLSefcr9Kj_Pz8MNjn0YvKBI3kMjOFCf3gsX9VPZ4tNTfIVyj7w/viewform?pli=1
**Browser / Version**: Firefox 90.0
**Operating System**: Ubuntu
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Unable to login
**Steps to Reproduce**:
When clicking login, the page is not redirected to the Google login page. This is also reproducible in Firefox for Android but not in Google Chrome.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/8/fc6c74c3-5fb8-4a85-8c89-d526c64ec131.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
docs.google.com - site is not usable - <!-- @browser: Firefox 90.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:90.0) Gecko/20100101 Firefox/90.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: https://docs.google.com/forms/d/e/1FAIpQLSefcr9Kj_Pz8MNjn0YvKBI3kMjOFCf3gsX9VPZ4tNTfIVyj7w/viewform?pli=1
**Browser / Version**: Firefox 90.0
**Operating System**: Ubuntu
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Unable to login
**Steps to Reproduce**:
When clicking login, the page is not redirected to the Google login page. This is also reproducible in Firefox for Android but not in Google Chrome.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/8/fc6c74c3-5fb8-4a85-8c89-d526c64ec131.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
docs google com site is not usable url browser version firefox operating system ubuntu tested another browser yes chrome problem type site is not usable description unable to login steps to reproduce when click login page is not redirected to google login page this is also reproduce in firefox for android but not in google chrome view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
20,724
| 27,424,771,250
|
IssuesEvent
|
2023-03-01 19:22:55
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR: [egg deposition]
|
New term request organism-level process
|
Please provide as much information as you can:
* **Suggested term label:**
egg deposition
* **Definition (free text)**
The multicellular organismal reproductive process that results in the movement of an egg from within an organism into the external environment.
* **Reference, in format PMID:#######**
PMID:31164023
PMID:18050396
* **Gene product name and ID to be annotated to this term**
unc-54 (WBGene00006789)
* **Parent term(s)**
is_a multicellular organismal reproductive process (GO:0048609), part_of oviposition/egg-laying behavior (GO:0018991)
* **Children terms (if applicable)** Should any existing terms that should be moved underneath this new proposed term?
* **Synonyms (please specify, EXACT, BROAD, NARROW or RELATED)**
* **Cross-references**
* For enzymes, please provide RHEA and/or EC numbers.
* Can also provide MetaCyc, KEGG, Wikipedia, and other links.
* **Any other information**
Proposed to delineate nervous system processes involved egg-laying behavior (GO:0018991) from the mechanics of egg deposition.
|
1.0
|
NTR: [egg deposition] - Please provide as much information as you can:
* **Suggested term label:**
egg deposition
* **Definition (free text)**
The multicellular organismal reproductive process that results in the movement of an egg from within an organism into the external environment.
* **Reference, in format PMID:#######**
PMID:31164023
PMID:18050396
* **Gene product name and ID to be annotated to this term**
unc-54 (WBGene00006789)
* **Parent term(s)**
is_a multicellular organismal reproductive process (GO:0048609), part_of oviposition/egg-laying behavior (GO:0018991)
* **Children terms (if applicable)** Should any existing terms that should be moved underneath this new proposed term?
* **Synonyms (please specify, EXACT, BROAD, NARROW or RELATED)**
* **Cross-references**
* For enzymes, please provide RHEA and/or EC numbers.
* Can also provide MetaCyc, KEGG, Wikipedia, and other links.
* **Any other information**
Proposed to delineate nervous system processes involved egg-laying behavior (GO:0018991) from the mechanics of egg deposition.
|
process
|
ntr please provide as much information as you can suggested term label egg deposition definition free text the multicellular organismal reproductive process that results in the movement of an egg from within an organism into the external environment reference in format pmid pmid pmid gene product name and id to be annotated to this term unc parent term s is a multicellular organismal reproductive process go part of oviposition egg laying behavior go children terms if applicable should any existing terms that should be moved underneath this new proposed term synonyms please specify exact broad narrow or related cross references for enzymes please provide rhea and or ec numbers can also provide metacyc kegg wikipedia and other links any other information proposed to delineate nervous system processes involved egg laying behavior go from the mechanics of egg deposition
| 1
|
51,875
| 10,731,881,023
|
IssuesEvent
|
2019-10-28 20:33:04
|
dictyBase/Dicty-Stock-Center
|
https://api.github.com/repos/dictyBase/Dicty-Stock-Center
|
closed
|
Fix "method_lines" issue in src/components/Stocks/CatalogPageItems/AppBar/AppBarRightMenu.js
|
code climate
|
Function `AppBarRightMenu` has 33 lines of code (exceeds 25 allowed). Consider refactoring.
https://codeclimate.com/github/dictyBase/Dicty-Stock-Center/src/components/Stocks/CatalogPageItems/AppBar/AppBarRightMenu.js#issue_5db0e02e409aff0001000198
|
1.0
|
Fix "method_lines" issue in src/components/Stocks/CatalogPageItems/AppBar/AppBarRightMenu.js - Function `AppBarRightMenu` has 33 lines of code (exceeds 25 allowed). Consider refactoring.
https://codeclimate.com/github/dictyBase/Dicty-Stock-Center/src/components/Stocks/CatalogPageItems/AppBar/AppBarRightMenu.js#issue_5db0e02e409aff0001000198
|
non_process
|
fix method lines issue in src components stocks catalogpageitems appbar appbarrightmenu js function appbarrightmenu has lines of code exceeds allowed consider refactoring
| 0
|
639,709
| 20,762,606,203
|
IssuesEvent
|
2022-03-15 17:29:27
|
JeffreyCHChan/SOEN390
|
https://api.github.com/repos/JeffreyCHChan/SOEN390
|
closed
|
Appointment Bookings - Add success Message
|
bug Priority 1 Front End
|
Alex, make sure when the user submits ANY time slots and submits, makes sure its saved and that a **success message** pops up. Lastly, make sure these notifications are appearing on doc dashboard #115
|
1.0
|
Appointment Bookings - Add success Message - Alex, make sure when the user submits ANY time slots and submits, makes sure its saved and that a **success message** pops up. Lastly, make sure these notifications are appearing on doc dashboard #115
|
non_process
|
appointment bookings add success message alex make sure when the user submits any time slots and submits makes sure its saved and that a success message pops up lastly make sure these notifications are appearing on doc dashboard
| 0
|
12,230
| 7,812,150,461
|
IssuesEvent
|
2018-06-12 12:34:24
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
stability: investigate performance difference: DO vs Azure
|
C-investigation C-performance
|
Placeholder issue:
`cyan` (azure) and `adriatic` (DO) have similar specs, at least according to the machine types docs and linux system info:
`cyan`:
- 6 nodes
- 4 Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
- 14GiB RAM
- 200GiB SSD
`adriatic`:
- 6 nodes
- 4 Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
- 8GiB RAM
- 80GiB SSD
Both are running with cockroach versions from today. Slight change in ENV on `cyan`, but the discrepancy existed before.
Running block writer on each node against localhost yield ~3K QPS on cyan, but ~1.5K QPS on adriatic.
More poking required.
|
True
|
stability: investigate performance difference: DO vs Azure - Placeholder issue:
`cyan` (azure) and `adriatic` (DO) have similar specs, at least according to the machine types docs and linux system info:
`cyan`:
- 6 nodes
- 4 Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
- 14GiB RAM
- 200GiB SSD
`adriatic`:
- 6 nodes
- 4 Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
- 8GiB RAM
- 80GiB SSD
Both are running with cockroach versions from today. Slight change in ENV on `cyan`, but the discrepancy existed before.
Running block writer on each node against localhost yield ~3K QPS on cyan, but ~1.5K QPS on adriatic.
More poking required.
|
non_process
|
stability investigate performance difference do vs azure placeholder issue cyan azure and adriatic do have similar specs at least according to the machine types docs and linux system info cyan nodes intel r xeon r cpu ram ssd adriatic nodes intel r xeon r cpu ram ssd both are running with cockroach versions from today slight change in env on cyan but the discrepancy existed before running block writer on each node against localhost yield qps on cyan but qps on adriatic more poking required
| 0
|
4,371
| 11,009,105,509
|
IssuesEvent
|
2019-12-04 11:56:06
|
the-tale/the-tale
|
https://api.github.com/repos/the-tale/the-tale
|
closed
|
Удалить поддержку библиотеки dext
|
comp_architecture est_medium type_refactoring
|
В dext остаётся часть устаревшего кода и часть кода фреймворка, надо влить его в репозиторий игры. так как больше ни для чего он использоваться не будет.
|
1.0
|
Удалить поддержку библиотеки dext - В dext остаётся часть устаревшего кода и часть кода фреймворка, надо влить его в репозиторий игры. так как больше ни для чего он использоваться не будет.
|
non_process
|
удалить поддержку библиотеки dext в dext остаётся часть устаревшего кода и часть кода фреймворка надо влить его в репозиторий игры так как больше ни для чего он использоваться не будет
| 0
|
602
| 3,074,431,219
|
IssuesEvent
|
2015-08-20 07:12:43
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Merge move-meta-entries and mappull preprocessing targets
|
feature P2 preprocess
|
Merge `move-meta-entries` and `mappull` preprocessing targets as they as part of the same processing phase and one shouldn't be ran without the other. The functional code for `mappull` is moved into `move-meta-entries` and the empty target for `mappull` is kept for backwards compatibilty. The implementations are kept the same for now, but this will allow easier refactoring in the future.
|
1.0
|
Merge move-meta-entries and mappull preprocessing targets - Merge `move-meta-entries` and `mappull` preprocessing targets as they as part of the same processing phase and one shouldn't be ran without the other. The functional code for `mappull` is moved into `move-meta-entries` and the empty target for `mappull` is kept for backwards compatibilty. The implementations are kept the same for now, but this will allow easier refactoring in the future.
|
process
|
merge move meta entries and mappull preprocessing targets merge move meta entries and mappull preprocessing targets as they as part of the same processing phase and one shouldn t be ran without the other the functional code for mappull is moved into move meta entries and the empty target for mappull is kept for backwards compatibilty the implementations are kept the same for now but this will allow easier refactoring in the future
| 1
|
19,528
| 25,839,413,923
|
IssuesEvent
|
2022-12-12 22:34:49
|
PyCQA/pylint
|
https://api.github.com/repos/PyCQA/pylint
|
closed
|
Hangs forever after a parallel job gets killed
|
Bug :beetle: topic-multiprocessing
|
This is similar to issue #1495 -- there is apparently some giant memory leak in pylint which makes it eat ginormous amounts of RAM. This issue is about mishandling parallel workers with `-j` -- once one worker dies, pylint runs into a deadlock.
Steps to reproduce
1. `podman run -it --rm --memory=2G fedora:33`
This issues is not specific to podman -- you can use docker, or a VM, or anything where you can control the amount of RAM.
2. Inside the container, prepare:
```
dnf install -y python3-pip git
pip install pylint
git clone --depth=1 https://github.com/rhinstaller/anaconda
cd anaconda
```
3. Run pylint in parallel; my `nproc` is 4, thus `-j0` selects 4, but let's select `-j4` explicitly:
```
pylint -j4 pyanaconda/
```
### Current behavior
As per issue #1495, the 5 pylint processes pile up more and more RAM usage, until one gets killed because of OOM:
Memory cgroup out of memory: Killed process 89155 (pylint) total-vm:573588kB, anon-rss:564804kB, file-rss:0kB, shmem-rss:8kB, UID:1000 pgtables:1164kB oom_score_adj:0
I still get some remaining output, then pylint hangs forever:
```
[...]
pyanaconda/payload/dnf/payload.py:41:0: C0411: standard import "from fnmatch import fnmatch" should be placed before "import dnf" (wrong-import-order)
pyanaconda/payload/dnf/payload.py:42:0: C0411: standard import "from glob import glob" should be placed before "import dnf" (wrong-import-order)
pyanaconda/payload/dnf/payload.py:46:0: C0411: third party import "from pykickstart.constants import GROUP_ALL, GROUP_DEFAULT, KS_MISSING_IGNORE, KS_BROKEN_IGNORE, GROUP_REQUIRED" should be placed before "import pyanaconda.localization" (wrong-import-order)
pyanaconda/payload/dnf/payload.py:48:0: C0411: third party import "from pykickstart.parser import Group" should be placed before "import pyanaconda.localization" (wrong-import-order)
pyanaconda/payload/dnf/payload.py:50:0: C0412: Imports from package pyanaconda are not grouped (ungrouped-imports)
```
The other processes are still around:
```
martin 89152 5.1 0.2 266584 40680 pts/0 Sl+ 17:23 0:24 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
martin 89153 26.9 2.8 472352 465536 pts/0 S+ 17:23 2:09 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
martin 89154 27.4 3.3 541924 536164 pts/0 S+ 17:23 2:12 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
martin 89156 28.4 3.2 529740 522980 pts/0 S+ 17:23 2:16 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
martin 89586 19.2 0.8 348184 135620 pts/0 S+ 17:30 0:21 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
```
The first three and the fifth wait in a futex:
```
❱❱❱ strace -p 89152
strace: Process 89152 attached
futex(0x55d4edd66550, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY^Cstrace: Process 89152 detached
<detached ...>
```
and the fourth is apparently the controller, which waits in a read():
```
❱❱❱ strace -p 89156
strace: Process 89156 attached
read(3,
```
After pressing Control-C, I get
```
Process ForkPoolWorker-1:
Process ForkPoolWorker-2:
Process ForkPoolWorker-4:
Process ForkPoolWorker-5:
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib64/python3.9/multiprocessing/queues.py", line 365, in get
with self._rlock:
File "/usr/lib64/python3.9/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
KeyboardInterrupt
File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib64/python3.9/multiprocessing/queues.py", line 365, in get
with self._rlock:
File "/usr/lib64/python3.9/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib64/python3.9/multiprocessing/queues.py", line 365, in get
with self._rlock:
File "/usr/lib64/python3.9/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib64/python3.9/multiprocessing/queues.py", line 366, in get
res = self._reader.recv_bytes()
File "/usr/lib64/python3.9/multiprocessing/connection.py", line 221, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/lib64/python3.9/multiprocessing/connection.py", line 419, in _recv_bytes
buf = self._recv(4)
File "/usr/lib64/python3.9/multiprocessing/connection.py", line 384, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
```
### Expected behavior
pylint should notice that one worker died, and immediately abort, instead of hanging forever (which also blocks CI systems until their timeout, which is usually quite long).
### pylint --version output
```
pylint 2.6.0
astroid 2.4.2
Python 3.9.0 (default, Oct 6 2020, 00:00:00)
[GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
```
I tried `pip install pylint astroid --pre -U` but that doesn't install anything new.
|
1.0
|
Hangs forever after a parallel job gets killed - This is similar to issue #1495 -- there is apparently some giant memory leak in pylint which makes it eat ginormous amounts of RAM. This issue is about mishandling parallel workers with `-j` -- once one worker dies, pylint runs into a deadlock.
Steps to reproduce
1. `podman run -it --rm --memory=2G fedora:33`
This issues is not specific to podman -- you can use docker, or a VM, or anything where you can control the amount of RAM.
2. Inside the container, prepare:
```
dnf install -y python3-pip git
pip install pylint
git clone --depth=1 https://github.com/rhinstaller/anaconda
cd anaconda
```
3. Run pylint in parallel; my `nproc` is 4, thus `-j0` selects 4, but let's select `-j4` explicitly:
```
pylint -j4 pyanaconda/
```
### Current behavior
As per issue #1495, the 5 pylint processes pile up more and more RAM usage, until one gets killed because of OOM:
Memory cgroup out of memory: Killed process 89155 (pylint) total-vm:573588kB, anon-rss:564804kB, file-rss:0kB, shmem-rss:8kB, UID:1000 pgtables:1164kB oom_score_adj:0
I still get some remaining output, then pylint hangs forever:
```
[...]
pyanaconda/payload/dnf/payload.py:41:0: C0411: standard import "from fnmatch import fnmatch" should be placed before "import dnf" (wrong-import-order)
pyanaconda/payload/dnf/payload.py:42:0: C0411: standard import "from glob import glob" should be placed before "import dnf" (wrong-import-order)
pyanaconda/payload/dnf/payload.py:46:0: C0411: third party import "from pykickstart.constants import GROUP_ALL, GROUP_DEFAULT, KS_MISSING_IGNORE, KS_BROKEN_IGNORE, GROUP_REQUIRED" should be placed before "import pyanaconda.localization" (wrong-import-order)
pyanaconda/payload/dnf/payload.py:48:0: C0411: third party import "from pykickstart.parser import Group" should be placed before "import pyanaconda.localization" (wrong-import-order)
pyanaconda/payload/dnf/payload.py:50:0: C0412: Imports from package pyanaconda are not grouped (ungrouped-imports)
```
The other processes are still around:
```
martin 89152 5.1 0.2 266584 40680 pts/0 Sl+ 17:23 0:24 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
martin 89153 26.9 2.8 472352 465536 pts/0 S+ 17:23 2:09 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
martin 89154 27.4 3.3 541924 536164 pts/0 S+ 17:23 2:12 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
martin 89156 28.4 3.2 529740 522980 pts/0 S+ 17:23 2:16 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
martin 89586 19.2 0.8 348184 135620 pts/0 S+ 17:30 0:21 /usr/bin/python3 /usr/local/bin/pylint -j0 pyanaconda/
```
The first three and the fifth wait in a futex:
```
❱❱❱ strace -p 89152
strace: Process 89152 attached
futex(0x55d4edd66550, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY^Cstrace: Process 89152 detached
<detached ...>
```
and the fourth is apparently the controller, which waits in a read():
```
❱❱❱ strace -p 89156
strace: Process 89156 attached
read(3,
```
After pressing Control-C, I get
```
Process ForkPoolWorker-1:
Process ForkPoolWorker-2:
Process ForkPoolWorker-4:
Process ForkPoolWorker-5:
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib64/python3.9/multiprocessing/queues.py", line 365, in get
with self._rlock:
File "/usr/lib64/python3.9/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
KeyboardInterrupt
File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib64/python3.9/multiprocessing/queues.py", line 365, in get
with self._rlock:
File "/usr/lib64/python3.9/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib64/python3.9/multiprocessing/queues.py", line 365, in get
with self._rlock:
File "/usr/lib64/python3.9/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib64/python3.9/multiprocessing/queues.py", line 366, in get
res = self._reader.recv_bytes()
File "/usr/lib64/python3.9/multiprocessing/connection.py", line 221, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/lib64/python3.9/multiprocessing/connection.py", line 419, in _recv_bytes
buf = self._recv(4)
File "/usr/lib64/python3.9/multiprocessing/connection.py", line 384, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
```
### Expected behavior
pylint should notice that one worker died, and immediately abort, instead of hanging forever (which also blocks CI systems until their timeout, which is usually quite long).
### pylint --version output
```
pylint 2.6.0
astroid 2.4.2
Python 3.9.0 (default, Oct 6 2020, 00:00:00)
[GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
```
I tried `pip install pylint astroid --pre -U` but that doesn't install anything new.
|
process
|
hangs forever after a parallel job gets killed this is similar to issue there is apparently some giant memory leak in pylint which makes it eat ginormous amounts of ram this issue is about mishandling parallel workers with j once one worker dies pylint runs into a deadlock steps to reproduce podman run it rm memory fedora this issues is not specific to podman you can use docker or a vm or anything where you can control the amount of ram inside the container prepare dnf install y pip git pip install pylint git clone depth cd anaconda run pylint in parallel my nproc is thus selects but let s select explicitly pylint pyanaconda current behavior as per issue the pylint processes pile up more and more ram usage until one gets killed because of oom memory cgroup out of memory killed process pylint total vm anon rss file rss shmem rss uid pgtables oom score adj i still get some remaining output then pylint hangs forever pyanaconda payload dnf payload py standard import from fnmatch import fnmatch should be placed before import dnf wrong import order pyanaconda payload dnf payload py standard import from glob import glob should be placed before import dnf wrong import order pyanaconda payload dnf payload py third party import from pykickstart constants import group all group default ks missing ignore ks broken ignore group required should be placed before import pyanaconda localization wrong import order pyanaconda payload dnf payload py third party import from pykickstart parser import group should be placed before import pyanaconda localization wrong import order pyanaconda payload dnf payload py imports from package pyanaconda are not grouped ungrouped imports the other processes are still around martin pts sl usr bin usr local bin pylint pyanaconda martin pts s usr bin usr local bin pylint pyanaconda martin pts s usr bin usr local bin pylint pyanaconda martin pts s usr bin usr local bin pylint pyanaconda martin pts s usr bin usr local bin pylint pyanaconda the first three and the fifth wait in a futex ❱❱❱ strace p strace process attached futex futex wait bitset private futex clock realtime null futex bitset match any cstrace process detached and the fourth is apparently the controller which waits in a read ❱❱❱ strace p strace process attached read after pressing control c i get process forkpoolworker process forkpoolworker process forkpoolworker process forkpoolworker traceback most recent call last traceback most recent call last file usr multiprocessing process py line in bootstrap self run file usr multiprocessing process py line in run self target self args self kwargs file usr multiprocessing pool py line in worker task get file usr multiprocessing queues py line in get with self rlock file usr multiprocessing synchronize py line in enter return self semlock enter file usr multiprocessing process py line in bootstrap self run keyboardinterrupt file usr multiprocessing process py line in run self target self args self kwargs file usr multiprocessing pool py line in worker task get file usr multiprocessing queues py line in get with self rlock file usr multiprocessing synchronize py line in enter return self semlock enter keyboardinterrupt traceback most recent call last file usr multiprocessing process py line in bootstrap self run file usr multiprocessing process py line in run self target self args self kwargs file usr multiprocessing pool py line in worker task get file usr multiprocessing queues py line in get with self rlock file usr multiprocessing synchronize py line in enter return self semlock enter keyboardinterrupt traceback most recent call last file usr multiprocessing process py line in bootstrap self run file usr multiprocessing process py line in run self target self args self kwargs file usr multiprocessing pool py line in worker task get file usr multiprocessing queues py line in get res self reader recv bytes file usr multiprocessing connection py line in recv bytes buf self recv bytes maxlength file usr multiprocessing connection py line in recv bytes buf self recv file usr multiprocessing connection py line in recv chunk read handle remaining keyboardinterrupt expected behavior pylint should notice that one worker died and immediately abort instead of hanging forever which also blocks ci systems until their timeout which is usually quite long pylint version output pylint astroid python default oct i tried pip install pylint astroid pre u but that doesn t install anything new
| 1
|
35,491
| 2,789,928,798
|
IssuesEvent
|
2015-05-08 22:29:12
|
google/google-visualization-api-issues
|
https://api.github.com/repos/google/google-visualization-api-issues
|
opened
|
Provide a method to convert charts to downloadable image
|
Priority-Low Type-Enhancement
|
Original [issue 544](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=544) created by orwant on 2011-03-09T06:59:07.000Z:
<b>What would you like to see us add to this API?</b>
Please add a method that will convert the interactive charts into a downloadable image.
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
PieChart, LineChart, BarChart, ColumnChart
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
|
1.0
|
Provide a method to convert charts to downloadable image - Original [issue 544](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=544) created by orwant on 2011-03-09T06:59:07.000Z:
<b>What would you like to see us add to this API?</b>
Please add a method that will convert the interactive charts into a downloadable image.
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
PieChart, LineChart, BarChart, ColumnChart
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
|
non_process
|
provide a method to convert charts to downloadable image original created by orwant on what would you like to see us add to this api please add a method that will convert the interactive charts into a downloadable image what component is this issue related to piechart linechart datatable query etc piechart linechart barchart columnchart for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
| 0
|
21,142
| 28,114,027,989
|
IssuesEvent
|
2023-03-31 09:22:22
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
reopened
|
[MLv2] `shared.ns` tools should preserve metadata from original var
|
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
`potemkin/import-vars` doesn't work for ClojureScript, but we have a basic replacement with `shared.ns/import-function(s)`. This does not copy important metadata like `^:export`, `^:arglists`, or `^:doc` from the original functions.
We can use `cljs.analyzer` to access the referenced namespace, then `resolve` the imported var and access its metadata in the macroexpansions.
|
1.0
|
[MLv2] `shared.ns` tools should preserve metadata from original var - `potemkin/import-vars` doesn't work for ClojureScript, but we have a basic replacement with `shared.ns/import-function(s)`. This does not copy important metadata like `^:export`, `^:arglists`, or `^:doc` from the original functions.
We can use `cljs.analyzer` to access the referenced namespace, then `resolve` the imported var and access its metadata in the macroexpansions.
|
process
|
shared ns tools should preserve metadata from original var potemkin import vars doesn t work for clojurescript but we have a basic replacement with shared ns import function s this does not copy important metadata like export arglists or doc from the original functions we can use cljs analyzer to access the referenced namespace then resolve the imported var and access its metadata in the macroexpansions
| 1
|
132,245
| 28,127,531,066
|
IssuesEvent
|
2023-03-31 19:08:38
|
creativecommons/cc-resource-archive
|
https://api.github.com/repos/creativecommons/cc-resource-archive
|
closed
|
[Feature] Add Explore CC button and expanding menu on the Navigation Bar
|
🟩 priority: low 🏁 status: ready for work ✨ goal: improvement 💻 aspect: code
|
## Description
Adds a button to allow the visitor to navigate to other CC websites
## Alternatives
An alternative could be implementing a suitable footer to the website but adding the Explore CC button will improve the UX of the website.
## Additional context
It should work in a similar manner as on the CC opensource site.
## Implementation
- [x] I would be interested in implementing this feature.
|
1.0
|
[Feature] Add Explore CC button and expanding menu on the Navigation Bar - ## Description
Adds a button to allow the visitor to navigate to other CC websites
## Alternatives
An alternative could be implementing a suitable footer to the website but adding the Explore CC button will improve the UX of the website.
## Additional context
It should work in a similar manner as on the CC opensource site.
## Implementation
- [x] I would be interested in implementing this feature.
|
non_process
|
add explore cc button and expanding menu on the navigation bar description adds a button to allow the visitor to navigate to other cc websites alternatives an alternative could be implementing a suitable footer to the website but adding the explore cc button will improve the ux of the website additional context it should work in a similar manner as on the cc opensource site implementation i would be interested in implementing this feature
| 0
|
492,702
| 14,218,024,686
|
IssuesEvent
|
2020-11-17 11:11:11
|
threefoldtech/0-fs
|
https://api.github.com/repos/threefoldtech/0-fs
|
closed
|
0-fs panic with a nil pointer exception
|
priority_major type_bug
|
This has happened randomly before. (version 2.0.5)
```
goroutine 650422 [running]:
github.com/threefoldtech/0-fs/meta.(*Dir).Name(0x0, 0xc0033f8980, 0x5)
/gopath/src/github.com/threefoldtech/0-fs/meta/dir.go:31 +0x22
github.com/threefoldtech/0-fs/meta.(*sqlStore).get(0xc000176a80, 0xc001d26eb0, 0x42, 0x10, 0x964300, 0xc0010e5c01, 0xc0033d1340)
/gopath/src/github.com/threefoldtech/0-fs/meta/store.go:253 +0x1dd
github.com/threefoldtech/0-fs/meta.(*sqlStore).Get(0xc000176a80, 0xc001d26eb0, 0x42, 0xc0033d1340, 0x1, 0x1)
/gopath/src/github.com/threefoldtech/0-fs/meta/store.go:267 +0x3f
github.com/threefoldtech/0-fs/rofs.(*filesystem).GetAttr(0xc000168bc0, 0xc001d26eb0, 0x42, 0xc002d84858, 0x0, 0x0)
/gopath/src/github.com/threefoldtech/0-fs/rofs/rofs.go:69 +0x159
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/pathfs.(*readonlyFileSystem).GetAttr(0xc0001ba240, 0xc001d26eb0, 0x42, 0xc002d84858, 0xc0039ac000, 0xc0010e5e78)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/pathfs/readonlyfs.go:26 +0x51
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/pathfs.(*pathInode).GetAttr(0xc00070d2f0, 0xc002d847c0, 0x0, 0x0, 0xc002d84858, 0x481a5c)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/pathfs/pathfs.go:603 +0x7c
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/nodefs.(*rawBridge).GetAttr(0xc000176db0, 0xc002d84840, 0xc002d847b0, 0x958b20)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/nodefs/fsops.go:139 +0x88
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse.doGetAttr(0xc0001d2000, 0xc002d846c0)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/opcode.go:246 +0x64
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse.(*Server).handleRequest(0xc0001d2000, 0xc002d846c0, 0xc002d846c0)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/server.go:404 +0x30c
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse.(*Server).loop(0xc0001d2000, 0xc002c59c01)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/server.go:376 +0x14d
created by github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse.(*Server).readRequest
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/server.go:284 +0x332
```
|
1.0
|
0-fs panic with a nil pointer exception - This has happened randomly before. (version 2.0.5)
```
goroutine 650422 [running]:
github.com/threefoldtech/0-fs/meta.(*Dir).Name(0x0, 0xc0033f8980, 0x5)
/gopath/src/github.com/threefoldtech/0-fs/meta/dir.go:31 +0x22
github.com/threefoldtech/0-fs/meta.(*sqlStore).get(0xc000176a80, 0xc001d26eb0, 0x42, 0x10, 0x964300, 0xc0010e5c01, 0xc0033d1340)
/gopath/src/github.com/threefoldtech/0-fs/meta/store.go:253 +0x1dd
github.com/threefoldtech/0-fs/meta.(*sqlStore).Get(0xc000176a80, 0xc001d26eb0, 0x42, 0xc0033d1340, 0x1, 0x1)
/gopath/src/github.com/threefoldtech/0-fs/meta/store.go:267 +0x3f
github.com/threefoldtech/0-fs/rofs.(*filesystem).GetAttr(0xc000168bc0, 0xc001d26eb0, 0x42, 0xc002d84858, 0x0, 0x0)
/gopath/src/github.com/threefoldtech/0-fs/rofs/rofs.go:69 +0x159
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/pathfs.(*readonlyFileSystem).GetAttr(0xc0001ba240, 0xc001d26eb0, 0x42, 0xc002d84858, 0xc0039ac000, 0xc0010e5e78)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/pathfs/readonlyfs.go:26 +0x51
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/pathfs.(*pathInode).GetAttr(0xc00070d2f0, 0xc002d847c0, 0x0, 0x0, 0xc002d84858, 0x481a5c)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/pathfs/pathfs.go:603 +0x7c
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/nodefs.(*rawBridge).GetAttr(0xc000176db0, 0xc002d84840, 0xc002d847b0, 0x958b20)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/nodefs/fsops.go:139 +0x88
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse.doGetAttr(0xc0001d2000, 0xc002d846c0)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/opcode.go:246 +0x64
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse.(*Server).handleRequest(0xc0001d2000, 0xc002d846c0, 0xc002d846c0)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/server.go:404 +0x30c
github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse.(*Server).loop(0xc0001d2000, 0xc002c59c01)
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/server.go:376 +0x14d
created by github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse.(*Server).readRequest
/gopath/src/github.com/threefoldtech/0-fs/vendor/github.com/hanwen/go-fuse/fuse/server.go:284 +0x332
```
|
non_process
|
fs panic with a nil pointer exception this has happened randomly before version goroutine github com threefoldtech fs meta dir name gopath src github com threefoldtech fs meta dir go github com threefoldtech fs meta sqlstore get gopath src github com threefoldtech fs meta store go github com threefoldtech fs meta sqlstore get gopath src github com threefoldtech fs meta store go github com threefoldtech fs rofs filesystem getattr gopath src github com threefoldtech fs rofs rofs go github com threefoldtech fs vendor github com hanwen go fuse fuse pathfs readonlyfilesystem getattr gopath src github com threefoldtech fs vendor github com hanwen go fuse fuse pathfs readonlyfs go github com threefoldtech fs vendor github com hanwen go fuse fuse pathfs pathinode getattr gopath src github com threefoldtech fs vendor github com hanwen go fuse fuse pathfs pathfs go github com threefoldtech fs vendor github com hanwen go fuse fuse nodefs rawbridge getattr gopath src github com threefoldtech fs vendor github com hanwen go fuse fuse nodefs fsops go github com threefoldtech fs vendor github com hanwen go fuse fuse dogetattr gopath src github com threefoldtech fs vendor github com hanwen go fuse fuse opcode go github com threefoldtech fs vendor github com hanwen go fuse fuse server handlerequest gopath src github com threefoldtech fs vendor github com hanwen go fuse fuse server go github com threefoldtech fs vendor github com hanwen go fuse fuse server loop gopath src github com threefoldtech fs vendor github com hanwen go fuse fuse server go created by github com threefoldtech fs vendor github com hanwen go fuse fuse server readrequest gopath src github com threefoldtech fs vendor github com hanwen go fuse fuse server go
| 0
|
14,539
| 17,651,648,334
|
IssuesEvent
|
2021-08-20 13:58:24
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
bazel build invalid external libary on windows
|
more data needed type: support / not a bug (process)
|
I try to build mediapipe on Windows system.It uses babel build system.When I installed the relevant dependencies and started to build, an error occurred during the link.
`avx512f_ukernels.lib(vrndz-avx512f-x16.obj) : error LNK2001: unresolved external symbol _cvtu32_mask16`
`....`
I thought it was a problem with the msvc compiler at first.But I wrote a deme test _cvtu32_mask16 and found that my msvc supports AVX256. It works.The error appeared in the lib of XNNPACK, so I downloaded the code of XNNPACK and compiled it with msvc, the compilation was successful.
The lib in error is avx512f_ukernels.lib.I checked the link parameters that generated it.Then use the same parameters to find the relevant obj files in the successfully compiled XNNPACK project, and then link them
It is found that the new avx512f_ukernels.lib has 2000kb. The one generated by bazel build is only a few hundred kb. After replacing it, the error disappears
I set the bazel compiler to msvc, I don’t know why the size of lib compiled by the same compiler is so different
|
1.0
|
bazel build invalid external libary on windows - I try to build mediapipe on Windows system.It uses babel build system.When I installed the relevant dependencies and started to build, an error occurred during the link.
`avx512f_ukernels.lib(vrndz-avx512f-x16.obj) : error LNK2001: unresolved external symbol _cvtu32_mask16`
`....`
I thought it was a problem with the msvc compiler at first.But I wrote a deme test _cvtu32_mask16 and found that my msvc supports AVX256. It works.The error appeared in the lib of XNNPACK, so I downloaded the code of XNNPACK and compiled it with msvc, the compilation was successful.
The lib in error is avx512f_ukernels.lib.I checked the link parameters that generated it.Then use the same parameters to find the relevant obj files in the successfully compiled XNNPACK project, and then link them
It is found that the new avx512f_ukernels.lib has 2000kb. The one generated by bazel build is only a few hundred kb. After replacing it, the error disappears
I set the bazel compiler to msvc, I don’t know why the size of lib compiled by the same compiler is so different
|
process
|
bazel build invalid external libary on windows i try to build mediapipe on windows system it uses babel build system when i installed the relevant dependencies and started to build an error occurred during the link ukernels lib vrndz obj error unresolved external symbol i thought it was a problem with the msvc compiler at first but i wrote a deme test and found that my msvc supports it works the error appeared in the lib of xnnpack so i downloaded the code of xnnpack and compiled it with msvc the compilation was successful the lib in error is ukernels lib i checked the link parameters that generated it then use the same parameters to find the relevant obj files in the successfully compiled xnnpack project and then link them it is found that the new ukernels lib has the one generated by bazel build is only a few hundred kb after replacing it the error disappears i set the bazel compiler to msvc i don’t know why the size of lib compiled by the same compiler is so different
| 1
|
79,940
| 23,073,612,321
|
IssuesEvent
|
2022-07-25 20:39:13
|
WGBH-MLA/ov_wag
|
https://api.github.com/repos/WGBH-MLA/ov_wag
|
opened
|
Rename repo from `ov_wag` => `ov-wag`
|
documentation :scroll: CD :building_construction: maintenance :wrench: breaking :broken_heart:
|
# Because
With [ov-deploy #23](https://github.com/WGBH-MLA/ov-deploy/issues/23):
We need to rename this repository from `ov_wag` to `ov-wag` because the deploy script expects names to be identical and Rancher does not allow `_` in namespaces
Also because standardizing to `ov-*` will simplify many other issues going forward.
# Done when
- [ ] This repo is renamed `ov-wag`
- [ ] Local git urls are changed across all machines
- [ ] Docker hub repository created for ov-wag
- [ ] New ov-wag Images are pushed to Docker hub
- [ ] Links across repositories are amended (optional)
|
1.0
|
Rename repo from `ov_wag` => `ov-wag` - # Because
With [ov-deploy #23](https://github.com/WGBH-MLA/ov-deploy/issues/23):
We need to rename this repository from `ov_wag` to `ov-wag` because the deploy script expects names to be identical and Rancher does not allow `_` in namespaces
Also because standardizing to `ov-*` will simplify many other issues going forward.
# Done when
- [ ] This repo is renamed `ov-wag`
- [ ] Local git urls are changed across all machines
- [ ] Docker hub repository created for ov-wag
- [ ] New ov-wag Images are pushed to Docker hub
- [ ] Links across repositories are amended (optional)
|
non_process
|
rename repo from ov wag ov wag because with we need to rename this repository from ov wag to ov wag because the deploy script expects names to be identical and rancher does not allow in namespaces also because standardizing to ov will simplify many other issues going forward done when this repo is renamed ov wag local git urls are changed across all machines docker hub repository created for ov wag new ov wag images are pushed to docker hub links across repositories are amended optional
| 0
|
266,114
| 23,223,391,664
|
IssuesEvent
|
2022-08-02 20:36:57
|
envasquez/SABC
|
https://api.github.com/repos/envasquez/SABC
|
closed
|
Enable unit testing and code coverage
|
tests
|
Use django-nose and coverage to enable tracking unit test code coverage.
|
1.0
|
Enable unit testing and code coverage - Use django-nose and coverage to enable tracking unit test code coverage.
|
non_process
|
enable unit testing and code coverage use django nose and coverage to enable tracking unit test code coverage
| 0
|
8,410
| 2,611,496,397
|
IssuesEvent
|
2015-02-27 05:35:50
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Team is added with an existing color
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Add one team - it's added with red clan color
2. Some guy adds another team - it's also added with red clan color
What is the expected output? What do you see instead?
Colors are assigned one after another, like in 0.9.17
What version of the product are you using? On what operating system?
0.9.18-7740-4ba77e6178cd / win32
Please provide any additional information below.
Screenshot: http://img.vos.uz/yafk.jpg
Screenshot: http://img.vos.uz/gjwf0.jpg
```
Original issue reported on code.google.com by `v...@vos.uz` on 26 Oct 2012 at 6:42
|
1.0
|
Team is added with an existing color - ```
What steps will reproduce the problem?
1. Add one team - it's added with red clan color
2. Some guy adds another team - it's also added with red clan color
What is the expected output? What do you see instead?
Colors are assigned one after another, like in 0.9.17
What version of the product are you using? On what operating system?
0.9.18-7740-4ba77e6178cd / win32
Please provide any additional information below.
Screenshot: http://img.vos.uz/yafk.jpg
Screenshot: http://img.vos.uz/gjwf0.jpg
```
Original issue reported on code.google.com by `v...@vos.uz` on 26 Oct 2012 at 6:42
|
non_process
|
team is added with an existing color what steps will reproduce the problem add one team it s added with red clan color some guy adds another team it s also added with red clan color what is the expected output what do you see instead colors are assigned one after another like in what version of the product are you using on what operating system please provide any additional information below screenshot screenshot original issue reported on code google com by v vos uz on oct at
| 0
|
266,415
| 8,367,272,496
|
IssuesEvent
|
2018-10-04 11:43:27
|
openbankingspace/tpp-issues
|
https://api.github.com/repos/openbankingspace/tpp-issues
|
closed
|
LBG: PSU Standing Order contains an invalid "Frequency" definition.
|
aspsp:lbg env:live issue:bug priority:high type:aisp1
|
According to the [Account and Transaction API v1.1 spec](https://openbanking.atlassian.net/wiki/spaces/DZ/pages/5785171/Account+and+Transaction+API+Specification+-+v1.1.0), The Standing Order "Frequency" field (`OBReadStandingOrder1/Data/StandingOrder/Frequency`)
#### Frequency Regex
`^(EvryDay)$|^(EvryWorkgDay)$|^(IntrvlWkDay:0[1-9]:0[1-7])$|^(WkInMnthDay:0[1-5]:0[1-7])$|^(IntrvlMnthDay:(0[1-6]|12|24):(-0[1-5]|0[1-9]|[12][0-9]|3[01]))$|^(QtrDay:(ENGLISH|SCOTTISH|RECEIVED))$`
(This is perhaps better visualised as below)

However, Halifax returned a "Frequency" of `IntrvlWkDay:01:00` which is invalid. As the second pair of digits corresponds with the day of the week of the payment (01-07), a value of `00` is non-sensical.
## Impact
High. An unknown cohort of PSUs cannot access account information services that parse standing order information, as the information that is presented by the API violates the spec and has no "meaning" that we can assign as a workaround on our side. Repeated attempts will not change the customer outcome, as the data remains immutable between consent requests.
## Remediation
Change API endpoint to only return Frequencies aligned to the frequency regex.
---
|
1.0
|
LBG: PSU Standing Order contains an invalid "Frequency" definition. - According to the [Account and Transaction API v1.1 spec](https://openbanking.atlassian.net/wiki/spaces/DZ/pages/5785171/Account+and+Transaction+API+Specification+-+v1.1.0), The Standing Order "Frequency" field (`OBReadStandingOrder1/Data/StandingOrder/Frequency`)
#### Frequency Regex
`^(EvryDay)$|^(EvryWorkgDay)$|^(IntrvlWkDay:0[1-9]:0[1-7])$|^(WkInMnthDay:0[1-5]:0[1-7])$|^(IntrvlMnthDay:(0[1-6]|12|24):(-0[1-5]|0[1-9]|[12][0-9]|3[01]))$|^(QtrDay:(ENGLISH|SCOTTISH|RECEIVED))$`
(This is perhaps better visualised as below)

However, Halifax returned a "Frequency" of `IntrvlWkDay:01:00` which is invalid. As the second pair of digits corresponds with the day of the week of the payment (01-07), a value of `00` is non-sensical.
## Impact
High. An unknown cohort of PSUs cannot access account information services that parse standing order information, as the information that is presented by the API violates the spec and has no "meaning" that we can assign as a workaround on our side. Repeated attempts will not change the customer outcome, as the data remains immutable between consent requests.
## Remediation
Change API endpoint to only return Frequencies aligned to the frequency regex.
---
|
non_process
|
lbg psu standing order contains an invalid frequency definition according to the the standing order frequency field data standingorder frequency frequency regex evryday evryworkgday intrvlwkday wkinmnthday intrvlmnthday qtrday english scottish received this is perhaps better visualised as below however halifax returned a frequency of intrvlwkday which is invalid as the second pair of digits corresponds with the day of the week of the payment a value of is non sensical impact high an unknown cohort of psus cannot access account information services that parse standing order information as the information that is presented by the api violates the spec and has no meaning that we can assign as a workaround on our side repeated attempts will not change the customer outcome as the data remains immutable between consent requests remediation change api endpoint to only return frequencies aligned to the frequency regex
| 0
|
14,605
| 17,703,632,001
|
IssuesEvent
|
2021-08-25 03:26:28
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - originalNameUsage
|
Term - change Class - Taxon non-normative Process - complete
|
## Term change
* Submitter: Quentin Groom
* Efficacy Justification (why is this change necessary?): To improve clarity of the term usage, particularly to distinguish the different terms that can hold a scientific Latin name
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): This is largely for people and organizations publishing Darwin Core files to avoid repeated questions that keep cropping up. The issue #28 highlighted that the definitions of `scientificName`, `acceptedNameUsage `and `originalNameUsage` are all similar to one another, however, their intended usage is quite distinct, even though they are not clearly documented. The intension of this suggested change is to add to the comments of the term to help users understand the use of the terms more easily. The suggested explanations were given by @deepreef in #28, but are only preliminary.
* Stability Justification (what concerns are there that this might affect existing implementations?): The intension is that the comments would reinforce the existing definition and thus improve stability.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No implication
Current Term definition: https://dwc.tdwg.org/list/#dwc_originalNameUsage
Proposed attributes of the new term:
* Usage comments (recommendations regarding content, etc., not normative): **The full scientific name, with authorship and date information if known, of the name usage in which the terminal element of the scientificName was originally established under the rules of the associated nomenclaturalCode. For example, for names governed by the ICNafp, this term would indicate the basionym of a record representing a subsequent combination. Unlike basionyms, however, this term can apply to scientific names at all ranks.**
|
1.0
|
Change term - originalNameUsage - ## Term change
* Submitter: Quentin Groom
* Efficacy Justification (why is this change necessary?): To improve clarity of the term usage, particularly to distinguish the different terms that can hold a scientific Latin name
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): This is largely for people and organizations publishing Darwin Core files to avoid repeated questions that keep cropping up. The issue #28 highlighted that the definitions of `scientificName`, `acceptedNameUsage `and `originalNameUsage` are all similar to one another, however, their intended usage is quite distinct, even though they are not clearly documented. The intension of this suggested change is to add to the comments of the term to help users understand the use of the terms more easily. The suggested explanations were given by @deepreef in #28, but are only preliminary.
* Stability Justification (what concerns are there that this might affect existing implementations?): The intension is that the comments would reinforce the existing definition and thus improve stability.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No implication
Current Term definition: https://dwc.tdwg.org/list/#dwc_originalNameUsage
Proposed attributes of the new term:
* Usage comments (recommendations regarding content, etc., not normative): **The full scientific name, with authorship and date information if known, of the name usage in which the terminal element of the scientificName was originally established under the rules of the associated nomenclaturalCode. For example, for names governed by the ICNafp, this term would indicate the basionym of a record representing a subsequent combination. Unlike basionyms, however, this term can apply to scientific names at all ranks.**
|
process
|
change term originalnameusage term change submitter quentin groom efficacy justification why is this change necessary to improve clarity of the term usage particularly to distinguish the different terms that can hold a scientific latin name demand justification if the change is semantic in nature name at least two organizations that independently need this term this is largely for people and organizations publishing darwin core files to avoid repeated questions that keep cropping up the issue highlighted that the definitions of scientificname acceptednameusage and originalnameusage are all similar to one another however their intended usage is quite distinct even though they are not clearly documented the intension of this suggested change is to add to the comments of the term to help users understand the use of the terms more easily the suggested explanations were given by deepreef in but are only preliminary stability justification what concerns are there that this might affect existing implementations the intension is that the comments would reinforce the existing definition and thus improve stability implications for dwciri namespace does this change affect a dwciri term version no implication current term definition proposed attributes of the new term usage comments recommendations regarding content etc not normative the full scientific name with authorship and date information if known of the name usage in which the terminal element of the scientificname was originally established under the rules of the associated nomenclaturalcode for example for names governed by the icnafp this term would indicate the basionym of a record representing a subsequent combination unlike basionyms however this term can apply to scientific names at all ranks
| 1
|
14,391
| 17,403,913,134
|
IssuesEvent
|
2021-08-03 01:14:35
|
googleapis/python-recaptcha-enterprise
|
https://api.github.com/repos/googleapis/python-recaptcha-enterprise
|
closed
|
Release as GA
|
api: recaptchaenterprise type: process
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface. See [release history](https://github.com/googleapis/python-recaptcha-enterprise/releases).
- [x] Server API is GA. See [API Release Notes](https://cloud.google.com/recaptcha-enterprise/docs/release-notes#January_10_2020).
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
Release as GA - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface. See [release history](https://github.com/googleapis/python-recaptcha-enterprise/releases).
- [x] Server API is GA. See [API Release Notes](https://cloud.google.com/recaptcha-enterprise/docs/release-notes#January_10_2020).
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
release as ga required days elapsed since last beta release with new api surface see server api is ga see package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
62,982
| 14,656,654,277
|
IssuesEvent
|
2020-12-28 13:54:34
|
fu1771695yongxie/node
|
https://api.github.com/repos/fu1771695yongxie/node
|
opened
|
CVE-2015-9251 (Medium) detected in jquery-1.8.1.min.js
|
security vulnerability
|
## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: node/deps/npm/node_modules/tap/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: node/deps/npm/node_modules/tap/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/node/commit/e8880d405b1130ab16334d72a55c6c02d0575609">e8880d405b1130ab16334d72a55c6c02d0575609</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-9251 (Medium) detected in jquery-1.8.1.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: node/deps/npm/node_modules/tap/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: node/deps/npm/node_modules/tap/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/node/commit/e8880d405b1130ab16334d72a55c6c02d0575609">e8880d405b1130ab16334d72a55c6c02d0575609</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file node deps npm node modules tap node modules redeyed examples browser index html path to vulnerable library node deps npm node modules tap node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
4,525
| 7,371,096,275
|
IssuesEvent
|
2018-03-13 10:35:52
|
fablabbcn/fablabs.io
|
https://api.github.com/repos/fablabbcn/fablabs.io
|
opened
|
Users cannot update their Lab during the approval process
|
Approval Process bug
|
It has been reported that users cannot edit their `Lab` when it is in the approval process, they get:
_"Access Denied. You do not have permission to access this resource."_
Let's check this, because if it is true, they cannot update the information of the `Lab`, that Referees might ask them!
|
1.0
|
Users cannot update their Lab during the approval process - It has been reported that users cannot edit their `Lab` when it is in the approval process, they get:
_"Access Denied. You do not have permission to access this resource."_
Let's check this, because if it is true, they cannot update the information of the `Lab`, that Referees might ask them!
|
process
|
users cannot update their lab during the approval process it has been reported that users cannot edit their lab when it is in the approval process they get access denied you do not have permission to access this resource let s check this because if it is true they cannot update the information of the lab that referees might ask them
| 1
|
20,322
| 26,962,169,765
|
IssuesEvent
|
2023-02-08 19:05:17
|
GoogleCloudPlatform/nodejs-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/nodejs-docs-samples
|
reopened
|
Dependency Dashboard
|
type: process samples
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/node-fetch-3.x -->[chore(deps): update dependency node-fetch to v3](../pull/2323)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/node-14.x -->[chore(deps): update node.js to v14](../pull/2184)
- [ ] <!-- recreate-branch=renovate/node-16.x -->[chore(deps): update node.js to v16](../pull/2233)
---
- [x] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/node-fetch-3.x -->[chore(deps): update dependency node-fetch to v3](../pull/2323)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/node-14.x -->[chore(deps): update node.js to v14](../pull/2184)
- [ ] <!-- recreate-branch=renovate/node-16.x -->[chore(deps): update node.js to v16](../pull/2233)
---
- [x] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any pull ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull check this box to trigger a request for renovate to run again on this repository
| 1
|
28,537
| 4,106,458,438
|
IssuesEvent
|
2016-06-06 08:53:01
|
Roy2014Kimi/JI3U47XFP7X25PCEAD3LMVXQ
|
https://api.github.com/repos/Roy2014Kimi/JI3U47XFP7X25PCEAD3LMVXQ
|
closed
|
Znh8K8D6WDaO1vmDAIS1rOObhIXKcLiLuvXOqkvfeXRDEkAxhC/sXQdcHIRUFh/jpim5S6UbhFcAr0qiV59PH7fofncyNY0l2gYySZZiB3ZBAATo9728MW5TtoSrRKLdPIN0gAQSrpe84983K1qqICWOV0SIX7S7VyJRJwszsLk=
|
design
|
caWrJXEGh9LfesliA8L6mz4SCk5CvKkBOZ6VjKYlmcPBhBjIFTlmSNGoY7TYkyEbKOZNCgyMxrdMsyAfoMUWnUsWJKvRwmLjl29sGWdsbj3ocqZnxtpBU6uP/oGRF7TiJThQ/MlozILwvPibVFSNsbVYlHYDXvM7iGA+4IIlqe1wolcMEQifKNuQBwHfdmQbh/CHqWLbiVR6iAbeYIb2dXFr5hdUW4/DoHxL7YMK/jFFJQxyOubnrn/COTFWclgIlDBnVCjqf+XKlCSLeVX1oosTLeIXRLjXeokhhL11Vv5mHB33T39IgXpzNDHKJwz/+EXvk8zTqzzfLSNg98L8a/JCwJtyI45iR74oF07aDtYs742+fyBZZPwFfQ+SxRDUhbqc7OQBd8LGFunEWpD2lzTGY0Z46SPdrIdGqRnYJi5FaY0OvTvSyHyG6Daza/WHT3etqRyWNO7TYp0WWL1qOw/vvBUBqdsjlKMc2W2F/xonoyw0Cck1cB829TViMGCW7XG+2E+/E3OuxuWze+/7VIsCsTPMcmTWozz2+cKlqAxjc6QxFHv6GNQajPfgk7Kt6JFiw9QiZyJtNBrgLVoAIPJ3GzkV4wwTbRxDxu3AdVYwws/aocjiaD9ffQGGqnoYa2ijLDw+P9jim5WEXxcz1yrqkCvU2cR1ivRDOd+T8vD79hChGiRtTryKeJh8qLm4NrWqtV34xsgxH+RwunDYrD0avToXuMpA5lhZDkhjVLEIPnbRSmvHkyldXquo7btm6mznPxws5UsQf3FdZgxAaARD5K19fvuThVGpL6yIDZ3Fht8Jpwu0FL0Yd1CXUtyfscyR6BFL0vj3a/LvHX+9FHXK4lIeriX+MmcrxydpBYo58k67pEm3m4HqI08e6mTNhHdN7c0vY7bS1c4QhXP+4tijvDB1UmmZzHsivxdW2bVROeqYirqwU9cJ6ImcXyGimbIRBE1Nq8eVadV7fFNwF4sgfAf//gaYneDe/RcMNRSyYlN68BAHA9qNzEgey4Ol
|
1.0
|
Znh8K8D6WDaO1vmDAIS1rOObhIXKcLiLuvXOqkvfeXRDEkAxhC/sXQdcHIRUFh/jpim5S6UbhFcAr0qiV59PH7fofncyNY0l2gYySZZiB3ZBAATo9728MW5TtoSrRKLdPIN0gAQSrpe84983K1qqICWOV0SIX7S7VyJRJwszsLk= - caWrJXEGh9LfesliA8L6mz4SCk5CvKkBOZ6VjKYlmcPBhBjIFTlmSNGoY7TYkyEbKOZNCgyMxrdMsyAfoMUWnUsWJKvRwmLjl29sGWdsbj3ocqZnxtpBU6uP/oGRF7TiJThQ/MlozILwvPibVFSNsbVYlHYDXvM7iGA+4IIlqe1wolcMEQifKNuQBwHfdmQbh/CHqWLbiVR6iAbeYIb2dXFr5hdUW4/DoHxL7YMK/jFFJQxyOubnrn/COTFWclgIlDBnVCjqf+XKlCSLeVX1oosTLeIXRLjXeokhhL11Vv5mHB33T39IgXpzNDHKJwz/+EXvk8zTqzzfLSNg98L8a/JCwJtyI45iR74oF07aDtYs742+fyBZZPwFfQ+SxRDUhbqc7OQBd8LGFunEWpD2lzTGY0Z46SPdrIdGqRnYJi5FaY0OvTvSyHyG6Daza/WHT3etqRyWNO7TYp0WWL1qOw/vvBUBqdsjlKMc2W2F/xonoyw0Cck1cB829TViMGCW7XG+2E+/E3OuxuWze+/7VIsCsTPMcmTWozz2+cKlqAxjc6QxFHv6GNQajPfgk7Kt6JFiw9QiZyJtNBrgLVoAIPJ3GzkV4wwTbRxDxu3AdVYwws/aocjiaD9ffQGGqnoYa2ijLDw+P9jim5WEXxcz1yrqkCvU2cR1ivRDOd+T8vD79hChGiRtTryKeJh8qLm4NrWqtV34xsgxH+RwunDYrD0avToXuMpA5lhZDkhjVLEIPnbRSmvHkyldXquo7btm6mznPxws5UsQf3FdZgxAaARD5K19fvuThVGpL6yIDZ3Fht8Jpwu0FL0Yd1CXUtyfscyR6BFL0vj3a/LvHX+9FHXK4lIeriX+MmcrxydpBYo58k67pEm3m4HqI08e6mTNhHdN7c0vY7bS1c4QhXP+4tijvDB1UmmZzHsivxdW2bVROeqYirqwU9cJ6ImcXyGimbIRBE1Nq8eVadV7fFNwF4sgfAf//gaYneDe/RcMNRSyYlN68BAHA9qNzEgey4Ol
|
non_process
|
sxqdchirufh jffjqxyoubnrn cotfwclgildbnvcjqf fybzzpwffq lvhx gaynede
| 0
|
831,505
| 32,051,258,008
|
IssuesEvent
|
2023-09-23 15:28:18
|
ubiquity/ubiquibot
|
https://api.github.com/repos/ubiquity/ubiquibot
|
closed
|
Debits Table Schema
|
Priority: 3 (High) Time: <2 Hours Price: 112.5 USD
|
## **Debits**
Represents deductions "penalties" made to assignees when an issue is re-opened.
| Field | Type | Description | Example Value |
|------------|----------|-------------------------------------------------------|-----------------------------|
| `id` | int8 | Unique identifier for the debit entry. | 1 |
| `created` | timestamptz | Timestamp when the debit was created. | 2023-09-15T10:30:45.012Z |
| `updated` | timestamptz | Timestamp when the debit was last updated. | 2023-09-15T10:35:20.100Z |
| `amount` | int8 | Amount deducted. | 500 |
|
1.0
|
Debits Table Schema - ## **Debits**
Represents deductions "penalties" made to assignees when an issue is re-opened.
| Field | Type | Description | Example Value |
|------------|----------|-------------------------------------------------------|-----------------------------|
| `id` | int8 | Unique identifier for the debit entry. | 1 |
| `created` | timestamptz | Timestamp when the debit was created. | 2023-09-15T10:30:45.012Z |
| `updated` | timestamptz | Timestamp when the debit was last updated. | 2023-09-15T10:35:20.100Z |
| `amount` | int8 | Amount deducted. | 500 |
|
non_process
|
debits table schema debits represents deductions penalties made to assignees when an issue is re opened field type description example value id unique identifier for the debit entry created timestamptz timestamp when the debit was created updated timestamptz timestamp when the debit was last updated amount amount deducted
| 0
|
16,060
| 20,201,787,342
|
IssuesEvent
|
2022-02-11 15:56:17
|
neuropoly/axondeepseg
|
https://api.github.com/repos/neuropoly/axondeepseg
|
opened
|
Fix weird filenames of data_axondeepseg_wakehealth_training
|
processing discussion
|
There is a problem with the filenames of this dataset. It will not pass the bids-validator and is inconsistent with the *data_axondeepseg_bf_training* dataset, which is very similar in terms of structure. When this dataset was originally created last summer, the ivadomed implementation into ADS was still in progress and the microscopy BEP as well. In order to train a model as fast as possible, some dirty fixes were used.
The problem originates from the fact that we obtained ROIs by manually cropping some samples and partial corrections from the *data_axondeepseg_wakehealth_source* dataset (*datasets/wakehealth* on git-annex). These files are named something like this:
> sub-42_sample-42_chunk-1_BF.tif
As we can see, the `chunk` entity is already used. Then, when we split this file in 4 sub-images, we need to identify these sub-files with an entity. Last summer, as a temporary solution, we went for something like this:
> sub-42_sample-42_chunk-1_desc-crop1_BF.tif
Here, we use the `desc` entity to assign an id to the cropped images. However, this entity is only available for derivative files or, as we figured out, for a derivative dataset. For more information about this, see https://github.com/ivadomed/ivadomed/issues/860. For now, the *data_axondeepseg_wakehealth_training* dataset type is set as derivative. I was able to train models on this data using this convention, but bids-validator will still fail here.
The problem is that we would ideally want to use the `chunk` entity to enumerate these cropped files, but by doing so, we are losing information about where the files came from because we have to overwrite the `chunk` field. Also, the `acq` entity might be more appropriate than the `desc` entity, like in the *data_axondeepseg_bf_training* dataset. A version of the wakehealth training set consistent with this aforementioned bf dataset is available on the `ac/consistent_with_bf_dataset` branch, which also uses the `acq` entity. On this branch, I overwrote the `chunk` entity, but the chunk information is still present in the `sourcedata` folder where I included the original files along with a README for more clarity. Note that this version passes the bids-validator test.
This issue was raised in #603
|
1.0
|
Fix weird filenames of data_axondeepseg_wakehealth_training - There is a problem with the filenames of this dataset. It will not pass the bids-validator and is inconsistent with the *data_axondeepseg_bf_training* dataset, which is very similar in terms of structure. When this dataset was originally created last summer, the ivadomed implementation into ADS was still in progress and the microscopy BEP as well. In order to train a model as fast as possible, some dirty fixes were used.
The problem originates from the fact that we obtained ROIs by manually cropping some samples and partial corrections from the *data_axondeepseg_wakehealth_source* dataset (*datasets/wakehealth* on git-annex). These files are named something like this:
> sub-42_sample-42_chunk-1_BF.tif
As we can see, the `chunk` entity is already used. Then, when we split this file in 4 sub-images, we need to identify these sub-files with an entity. Last summer, as a temporary solution, we went for something like this:
> sub-42_sample-42_chunk-1_desc-crop1_BF.tif
Here, we use the `desc` entity to assign an id to the cropped images. However, this entity is only available for derivative files or, as we figured out, for a derivative dataset. For more information about this, see https://github.com/ivadomed/ivadomed/issues/860. For now, the *data_axondeepseg_wakehealth_training* dataset type is set as derivative. I was able to train models on this data using this convention, but bids-validator will still fail here.
The problem is that we would ideally want to use the `chunk` entity to enumerate these cropped files, but by doing so, we are losing information about where the files came from because we have to overwrite the `chunk` field. Also, the `acq` entity might be more appropriate than the `desc` entity, like in the *data_axondeepseg_bf_training* dataset. A version of the wakehealth training set consistent with this aforementioned bf dataset is available on the `ac/consistent_with_bf_dataset` branch, which also uses the `acq` entity. On this branch, I overwrote the `chunk` entity, but the chunk information is still present in the `sourcedata` folder where I included the original files along with a README for more clarity. Note that this version passes the bids-validator test.
This issue was raised in #603
|
process
|
fix weird filenames of data axondeepseg wakehealth training there is a problem with the filenames of this dataset it will not pass the bids validator and is inconsistent with the data axondeepseg bf training dataset which is very similar in terms of structure when this dataset was originally created last summer the ivadomed implementation into ads was still in progress and the microscopy bep as well in order to train a model as fast as possible some dirty fixes were used the problem originates from the fact that we obtained rois by manually cropping some samples and partial corrections from the data axondeepseg wakehealth source dataset datasets wakehealth on git annex these files are named something like this sub sample chunk bf tif as we can see the chunk entity is already used then when we split this file in sub images we need to identify these sub files with an entity last summer as a temporary solution we went for something like this sub sample chunk desc bf tif here we use the desc entity to assign an id to the cropped images however this entity is only available for derivative files or as we figured out for a derivative dataset for more information about this see for now the data axondeepseg wakehealth training dataset type is set as derivative i was able to train models on this data using this convention but bids validator will still fail here the problem is that we would ideally want to use the chunk entity to enumerate these cropped files but by doing so we are losing information about where the files came from because we have to overwrite the chunk field also the acq entity might be more appropriate than the desc entity like in the data axondeepseg bf training dataset a version of the wakehealth training set consistent with this aforementioned bf dataset is available on the ac consistent with bf dataset branch which also uses the acq entity on this branch i overwrote the chunk entity but the chunk information is still present in the sourcedata folder where i included the original files along with a readme for more clarity note that this version passes the bids validator test this issue was raised in
| 1
|
21,719
| 30,220,978,335
|
IssuesEvent
|
2023-07-05 19:22:32
|
googleapis/proto-plus-python
|
https://api.github.com/repos/googleapis/proto-plus-python
|
closed
|
Dependency Dashboard
|
type: process
|
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Rate-Limited
These updates are currently rate-limited. Click on a checkbox below to force their creation now.
- [ ] <!-- unlimit-branch=renovate/googleapis-common-protos-1.x -->chore(deps): update dependency googleapis-common-protos to v1.59.1
- [ ] <!-- unlimit-branch=renovate/importlib-metadata-5.x -->chore(deps): update dependency importlib-metadata to v5.2.0
- [ ] <!-- unlimit-branch=renovate/keyring-23.x -->chore(deps): update dependency keyring to v23.13.1
- [ ] <!-- unlimit-branch=renovate/more-itertools-9.x -->chore(deps): update dependency more-itertools to v9.1.0
- [ ] <!-- unlimit-branch=renovate/pkginfo-1.x -->chore(deps): update dependency pkginfo to v1.9.6
- [ ] <!-- unlimit-branch=renovate/platformdirs-2.x -->chore(deps): update dependency platformdirs to v2.6.2
- [ ] <!-- unlimit-branch=renovate/pyasn1-0.x -->chore(deps): update dependency pyasn1 to v0.5.0
- [ ] <!-- unlimit-branch=renovate/pyasn1-modules-0.x -->chore(deps): update dependency pyasn1-modules to v0.3.0
- [ ] <!-- unlimit-branch=renovate/pygments-2.x -->chore(deps): update dependency pygments to v2.15.1
- [ ] <!-- unlimit-branch=renovate/pyjwt-2.x -->chore(deps): update dependency pyjwt to v2.7.0
- [ ] <!-- unlimit-branch=renovate/pyparsing-3.x -->chore(deps): update dependency pyparsing to v3.1.0
- [ ] <!-- unlimit-branch=renovate/setuptools-65.x -->chore(deps): update dependency setuptools to v65.7.0
- [ ] <!-- unlimit-branch=renovate/typing-extensions-4.x -->chore(deps): update dependency typing-extensions to v4.7.1
- [ ] <!-- unlimit-branch=renovate/virtualenv-20.x -->chore(deps): update dependency virtualenv to v20.23.1
- [ ] <!-- unlimit-branch=renovate/wheel-0.x -->chore(deps): update dependency wheel to v0.40.0
- [ ] <!-- unlimit-branch=renovate/zipp-3.x -->chore(deps): update dependency zipp to v3.15.0
- [ ] <!-- unlimit-branch=renovate/argcomplete-3.x -->chore(deps): update dependency argcomplete to v3
- [ ] <!-- unlimit-branch=renovate/attrs-23.x -->chore(deps): update dependency attrs to v23
- [ ] <!-- unlimit-branch=renovate/bleach-6.x -->chore(deps): update dependency bleach to v6
- [ ] <!-- unlimit-branch=renovate/certifi-2023.x -->chore(deps): update dependency certifi to v2023
- [ ] <!-- unlimit-branch=renovate/charset-normalizer-3.x -->chore(deps): update dependency charset-normalizer to v3
- [ ] <!-- unlimit-branch=renovate/importlib-metadata-6.x -->chore(deps): update dependency importlib-metadata to v6
- [ ] <!-- unlimit-branch=renovate/keyring-24.x -->chore(deps): update dependency keyring to v24
- [ ] <!-- unlimit-branch=renovate/nox-2023.x -->chore(deps): update dependency nox to v2023
- [ ] <!-- unlimit-branch=renovate/packaging-23.x -->chore(deps): update dependency packaging to v23
- [ ] <!-- unlimit-branch=renovate/platformdirs-3.x -->chore(deps): update dependency platformdirs to v3
- [ ] <!-- unlimit-branch=renovate/readme-renderer-40.x -->chore(deps): update dependency readme-renderer to v40
- [ ] <!-- unlimit-branch=renovate/requests-toolbelt-1.x -->chore(deps): update dependency requests-toolbelt to v1
- [ ] <!-- unlimit-branch=renovate/rich-13.x -->chore(deps): update dependency rich to v13
- [ ] <!-- unlimit-branch=renovate/setuptools-68.x -->chore(deps): update dependency setuptools to v68
- [ ] <!-- unlimit-branch=renovate/urllib3-2.x -->chore(deps): update dependency urllib3 to v2
- [ ] <!-- create-all-rate-limited-prs -->🔐 **Create all rate-limited PRs at once** 🔐
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/attrs-22.x -->[chore(deps): update dependency attrs to v22.2.0](../pull/363)
- [ ] <!-- rebase-branch=renovate/cachetools-5.x -->[chore(deps): update dependency cachetools to v5.3.1](../pull/364)
- [ ] <!-- rebase-branch=renovate/click-8.x -->[chore(deps): update dependency click to v8.1.3](../pull/366)
- [ ] <!-- rebase-branch=renovate/docutils-0.x -->[chore(deps): update dependency docutils to v0.20.1](../pull/367)
- [ ] <!-- rebase-branch=renovate/filelock-3.x -->[chore(deps): update dependency filelock to v3.12.2](../pull/368)
- [ ] <!-- rebase-branch=renovate/gcp-releasetool-1.x -->[chore(deps): update dependency gcp-releasetool to v1.15.0](../pull/369)
- [ ] <!-- rebase-branch=renovate/google-api-core-2.x -->[chore(deps): update dependency google-api-core to v2.11.1](../pull/370)
- [ ] <!-- rebase-branch=renovate/google-auth-2.x -->[chore(deps): update dependency google-auth to v2.21.0](../pull/371)
- [ ] <!-- rebase-branch=renovate/google-cloud-storage-2.x -->[chore(deps): update dependency google-cloud-storage to v2.10.0](../pull/372)
- [ ] <!-- rebase-branch=renovate/google-resumable-media-2.x -->[chore(deps): update dependency google-resumable-media to v2.5.0](../pull/373)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/cryptography-41.x -->[chore(deps): update dependency cryptography to v41.0.1](../pull/356)
- [ ] <!-- recreate-branch=renovate/gcp-docuploader-0.x -->[chore(deps): update dependency gcp-docuploader to v0.6.5](../pull/357)
- [ ] <!-- recreate-branch=renovate/markupsafe-2.x -->[chore(deps): update dependency markupsafe to v2.1.3](../pull/359)
- [ ] <!-- recreate-branch=renovate/twine-4.x -->[chore(deps): update dependency twine to v4.0.2](../pull/360)
- [ ] <!-- recreate-branch=renovate/urllib3-1.x -->[chore(deps): update dependency urllib3 to v1.26.16](../pull/361)
- [ ] <!-- recreate-branch=renovate/argcomplete-2.x -->[chore(deps): update dependency argcomplete to v2.1.2](../pull/362)
- [ ] <!-- recreate-branch=renovate/actions-checkout-3.x -->[chore(deps): update actions/checkout action to v3](../pull/308)
- [ ] <!-- recreate-branch=renovate/protobuf-4.x -->[chore(deps): update dependency protobuf to v4](../pull/318)
## Detected dependencies
<details><summary>dockerfile</summary>
<blockquote>
<details><summary>.kokoro/docker/docs/Dockerfile</summary>
- `ubuntu 22.04`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/tests.yml</summary>
- `actions/checkout v2`
- `actions/setup-python v4`
- `actions/checkout v2`
- `actions/setup-python v4`
- `actions/checkout v2`
- `actions/setup-python v4`
- `actions/upload-artifact v3`
- `actions/checkout v3`
- `actions/setup-python v4`
- `actions/download-artifact v3`
</details>
</blockquote>
</details>
<details><summary>pip_requirements</summary>
<blockquote>
<details><summary>.kokoro/requirements.txt</summary>
- `argcomplete ==2.0.0`
- `attrs ==22.1.0`
- `bleach ==5.0.1`
- `cachetools ==5.2.0`
- `certifi ==2022.12.7`
- `cffi ==1.15.1`
- `charset-normalizer ==2.1.1`
- `click ==8.0.4`
- `colorlog ==6.7.0`
- `commonmark ==0.9.1`
- `cryptography ==41.0.0`
- `distlib ==0.3.6`
- `docutils ==0.19`
- `filelock ==3.8.0`
- `gcp-docuploader ==0.6.4`
- `gcp-releasetool ==1.10.5`
- `google-api-core ==2.10.2`
- `google-auth ==2.14.1`
- `google-cloud-core ==2.3.2`
- `google-cloud-storage ==2.6.0`
- `google-crc32c ==1.5.0`
- `google-resumable-media ==2.4.0`
- `googleapis-common-protos ==1.57.0`
- `idna ==3.4`
- `importlib-metadata ==5.0.0`
- `jaraco-classes ==3.2.3`
- `jeepney ==0.8.0`
- `jinja2 ==3.1.2`
- `keyring ==23.11.0`
- `markupsafe ==2.1.1`
- `more-itertools ==9.0.0`
- `nox ==2022.11.21`
- `packaging ==21.3`
- `pkginfo ==1.8.3`
- `platformdirs ==2.5.4`
- `protobuf ==3.20.3`
- `pyasn1 ==0.4.8`
- `pyasn1-modules ==0.2.8`
- `pycparser ==2.21`
- `pygments ==2.13.0`
- `pyjwt ==2.6.0`
- `pyparsing ==3.0.9`
- `pyperclip ==1.8.2`
- `python-dateutil ==2.8.2`
- `readme-renderer ==37.3`
- `requests ==2.31.0`
- `requests-toolbelt ==0.10.1`
- `rfc3986 ==2.0.0`
- `rich ==12.6.0`
- `rsa ==4.9`
- `secretstorage ==3.3.3`
- `six ==1.16.0`
- `twine ==4.0.1`
- `typing-extensions ==4.4.0`
- `urllib3 ==1.26.12`
- `virtualenv ==20.16.7`
- `webencodings ==0.5.1`
- `wheel ==0.38.4`
- `zipp ==3.10.0`
- `setuptools ==65.5.1`
</details>
</blockquote>
</details>
<details><summary>pip_setup</summary>
<blockquote>
<details><summary>setup.py</summary>
- `protobuf >= 3.19.0, <5.0.0dev`
- `google-api-core >= 1.31.5`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Rate-Limited
These updates are currently rate-limited. Click on a checkbox below to force their creation now.
- [ ] <!-- unlimit-branch=renovate/googleapis-common-protos-1.x -->chore(deps): update dependency googleapis-common-protos to v1.59.1
- [ ] <!-- unlimit-branch=renovate/importlib-metadata-5.x -->chore(deps): update dependency importlib-metadata to v5.2.0
- [ ] <!-- unlimit-branch=renovate/keyring-23.x -->chore(deps): update dependency keyring to v23.13.1
- [ ] <!-- unlimit-branch=renovate/more-itertools-9.x -->chore(deps): update dependency more-itertools to v9.1.0
- [ ] <!-- unlimit-branch=renovate/pkginfo-1.x -->chore(deps): update dependency pkginfo to v1.9.6
- [ ] <!-- unlimit-branch=renovate/platformdirs-2.x -->chore(deps): update dependency platformdirs to v2.6.2
- [ ] <!-- unlimit-branch=renovate/pyasn1-0.x -->chore(deps): update dependency pyasn1 to v0.5.0
- [ ] <!-- unlimit-branch=renovate/pyasn1-modules-0.x -->chore(deps): update dependency pyasn1-modules to v0.3.0
- [ ] <!-- unlimit-branch=renovate/pygments-2.x -->chore(deps): update dependency pygments to v2.15.1
- [ ] <!-- unlimit-branch=renovate/pyjwt-2.x -->chore(deps): update dependency pyjwt to v2.7.0
- [ ] <!-- unlimit-branch=renovate/pyparsing-3.x -->chore(deps): update dependency pyparsing to v3.1.0
- [ ] <!-- unlimit-branch=renovate/setuptools-65.x -->chore(deps): update dependency setuptools to v65.7.0
- [ ] <!-- unlimit-branch=renovate/typing-extensions-4.x -->chore(deps): update dependency typing-extensions to v4.7.1
- [ ] <!-- unlimit-branch=renovate/virtualenv-20.x -->chore(deps): update dependency virtualenv to v20.23.1
- [ ] <!-- unlimit-branch=renovate/wheel-0.x -->chore(deps): update dependency wheel to v0.40.0
- [ ] <!-- unlimit-branch=renovate/zipp-3.x -->chore(deps): update dependency zipp to v3.15.0
- [ ] <!-- unlimit-branch=renovate/argcomplete-3.x -->chore(deps): update dependency argcomplete to v3
- [ ] <!-- unlimit-branch=renovate/attrs-23.x -->chore(deps): update dependency attrs to v23
- [ ] <!-- unlimit-branch=renovate/bleach-6.x -->chore(deps): update dependency bleach to v6
- [ ] <!-- unlimit-branch=renovate/certifi-2023.x -->chore(deps): update dependency certifi to v2023
- [ ] <!-- unlimit-branch=renovate/charset-normalizer-3.x -->chore(deps): update dependency charset-normalizer to v3
- [ ] <!-- unlimit-branch=renovate/importlib-metadata-6.x -->chore(deps): update dependency importlib-metadata to v6
- [ ] <!-- unlimit-branch=renovate/keyring-24.x -->chore(deps): update dependency keyring to v24
- [ ] <!-- unlimit-branch=renovate/nox-2023.x -->chore(deps): update dependency nox to v2023
- [ ] <!-- unlimit-branch=renovate/packaging-23.x -->chore(deps): update dependency packaging to v23
- [ ] <!-- unlimit-branch=renovate/platformdirs-3.x -->chore(deps): update dependency platformdirs to v3
- [ ] <!-- unlimit-branch=renovate/readme-renderer-40.x -->chore(deps): update dependency readme-renderer to v40
- [ ] <!-- unlimit-branch=renovate/requests-toolbelt-1.x -->chore(deps): update dependency requests-toolbelt to v1
- [ ] <!-- unlimit-branch=renovate/rich-13.x -->chore(deps): update dependency rich to v13
- [ ] <!-- unlimit-branch=renovate/setuptools-68.x -->chore(deps): update dependency setuptools to v68
- [ ] <!-- unlimit-branch=renovate/urllib3-2.x -->chore(deps): update dependency urllib3 to v2
- [ ] <!-- create-all-rate-limited-prs -->🔐 **Create all rate-limited PRs at once** 🔐
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/attrs-22.x -->[chore(deps): update dependency attrs to v22.2.0](../pull/363)
- [ ] <!-- rebase-branch=renovate/cachetools-5.x -->[chore(deps): update dependency cachetools to v5.3.1](../pull/364)
- [ ] <!-- rebase-branch=renovate/click-8.x -->[chore(deps): update dependency click to v8.1.3](../pull/366)
- [ ] <!-- rebase-branch=renovate/docutils-0.x -->[chore(deps): update dependency docutils to v0.20.1](../pull/367)
- [ ] <!-- rebase-branch=renovate/filelock-3.x -->[chore(deps): update dependency filelock to v3.12.2](../pull/368)
- [ ] <!-- rebase-branch=renovate/gcp-releasetool-1.x -->[chore(deps): update dependency gcp-releasetool to v1.15.0](../pull/369)
- [ ] <!-- rebase-branch=renovate/google-api-core-2.x -->[chore(deps): update dependency google-api-core to v2.11.1](../pull/370)
- [ ] <!-- rebase-branch=renovate/google-auth-2.x -->[chore(deps): update dependency google-auth to v2.21.0](../pull/371)
- [ ] <!-- rebase-branch=renovate/google-cloud-storage-2.x -->[chore(deps): update dependency google-cloud-storage to v2.10.0](../pull/372)
- [ ] <!-- rebase-branch=renovate/google-resumable-media-2.x -->[chore(deps): update dependency google-resumable-media to v2.5.0](../pull/373)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/cryptography-41.x -->[chore(deps): update dependency cryptography to v41.0.1](../pull/356)
- [ ] <!-- recreate-branch=renovate/gcp-docuploader-0.x -->[chore(deps): update dependency gcp-docuploader to v0.6.5](../pull/357)
- [ ] <!-- recreate-branch=renovate/markupsafe-2.x -->[chore(deps): update dependency markupsafe to v2.1.3](../pull/359)
- [ ] <!-- recreate-branch=renovate/twine-4.x -->[chore(deps): update dependency twine to v4.0.2](../pull/360)
- [ ] <!-- recreate-branch=renovate/urllib3-1.x -->[chore(deps): update dependency urllib3 to v1.26.16](../pull/361)
- [ ] <!-- recreate-branch=renovate/argcomplete-2.x -->[chore(deps): update dependency argcomplete to v2.1.2](../pull/362)
- [ ] <!-- recreate-branch=renovate/actions-checkout-3.x -->[chore(deps): update actions/checkout action to v3](../pull/308)
- [ ] <!-- recreate-branch=renovate/protobuf-4.x -->[chore(deps): update dependency protobuf to v4](../pull/318)
## Detected dependencies
<details><summary>dockerfile</summary>
<blockquote>
<details><summary>.kokoro/docker/docs/Dockerfile</summary>
- `ubuntu 22.04`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/tests.yml</summary>
- `actions/checkout v2`
- `actions/setup-python v4`
- `actions/checkout v2`
- `actions/setup-python v4`
- `actions/checkout v2`
- `actions/setup-python v4`
- `actions/upload-artifact v3`
- `actions/checkout v3`
- `actions/setup-python v4`
- `actions/download-artifact v3`
</details>
</blockquote>
</details>
<details><summary>pip_requirements</summary>
<blockquote>
<details><summary>.kokoro/requirements.txt</summary>
- `argcomplete ==2.0.0`
- `attrs ==22.1.0`
- `bleach ==5.0.1`
- `cachetools ==5.2.0`
- `certifi ==2022.12.7`
- `cffi ==1.15.1`
- `charset-normalizer ==2.1.1`
- `click ==8.0.4`
- `colorlog ==6.7.0`
- `commonmark ==0.9.1`
- `cryptography ==41.0.0`
- `distlib ==0.3.6`
- `docutils ==0.19`
- `filelock ==3.8.0`
- `gcp-docuploader ==0.6.4`
- `gcp-releasetool ==1.10.5`
- `google-api-core ==2.10.2`
- `google-auth ==2.14.1`
- `google-cloud-core ==2.3.2`
- `google-cloud-storage ==2.6.0`
- `google-crc32c ==1.5.0`
- `google-resumable-media ==2.4.0`
- `googleapis-common-protos ==1.57.0`
- `idna ==3.4`
- `importlib-metadata ==5.0.0`
- `jaraco-classes ==3.2.3`
- `jeepney ==0.8.0`
- `jinja2 ==3.1.2`
- `keyring ==23.11.0`
- `markupsafe ==2.1.1`
- `more-itertools ==9.0.0`
- `nox ==2022.11.21`
- `packaging ==21.3`
- `pkginfo ==1.8.3`
- `platformdirs ==2.5.4`
- `protobuf ==3.20.3`
- `pyasn1 ==0.4.8`
- `pyasn1-modules ==0.2.8`
- `pycparser ==2.21`
- `pygments ==2.13.0`
- `pyjwt ==2.6.0`
- `pyparsing ==3.0.9`
- `pyperclip ==1.8.2`
- `python-dateutil ==2.8.2`
- `readme-renderer ==37.3`
- `requests ==2.31.0`
- `requests-toolbelt ==0.10.1`
- `rfc3986 ==2.0.0`
- `rich ==12.6.0`
- `rsa ==4.9`
- `secretstorage ==3.3.3`
- `six ==1.16.0`
- `twine ==4.0.1`
- `typing-extensions ==4.4.0`
- `urllib3 ==1.26.12`
- `virtualenv ==20.16.7`
- `webencodings ==0.5.1`
- `wheel ==0.38.4`
- `zipp ==3.10.0`
- `setuptools ==65.5.1`
</details>
</blockquote>
</details>
<details><summary>pip_setup</summary>
<blockquote>
<details><summary>setup.py</summary>
- `protobuf >= 3.19.0, <5.0.0dev`
- `google-api-core >= 1.31.5`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more rate limited these updates are currently rate limited click on a checkbox below to force their creation now chore deps update dependency googleapis common protos to chore deps update dependency importlib metadata to chore deps update dependency keyring to chore deps update dependency more itertools to chore deps update dependency pkginfo to chore deps update dependency platformdirs to chore deps update dependency to chore deps update dependency modules to chore deps update dependency pygments to chore deps update dependency pyjwt to chore deps update dependency pyparsing to chore deps update dependency setuptools to chore deps update dependency typing extensions to chore deps update dependency virtualenv to chore deps update dependency wheel to chore deps update dependency zipp to chore deps update dependency argcomplete to chore deps update dependency attrs to chore deps update dependency bleach to chore deps update dependency certifi to chore deps update dependency charset normalizer to chore deps update dependency importlib metadata to chore deps update dependency keyring to chore deps update dependency nox to chore deps update dependency packaging to chore deps update dependency platformdirs to chore deps update dependency readme renderer to chore deps update dependency requests toolbelt to chore deps update dependency rich to chore deps update dependency setuptools to chore deps update dependency to 🔐 create all rate limited prs at once 🔐 edited blocked these updates have been manually edited so renovate will no longer make changes to discard all commits and start over click on a checkbox pull pull pull pull pull pull pull pull pull pull ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull pull pull pull pull pull pull detected dependencies dockerfile kokoro docker docs dockerfile ubuntu github actions github workflows tests yml actions checkout actions setup python actions checkout actions setup python actions checkout actions setup python actions upload artifact actions checkout actions setup python actions download artifact pip requirements kokoro requirements txt argcomplete attrs bleach cachetools certifi cffi charset normalizer click colorlog commonmark cryptography distlib docutils filelock gcp docuploader gcp releasetool google api core google auth google cloud core google cloud storage google google resumable media googleapis common protos idna importlib metadata jaraco classes jeepney keyring markupsafe more itertools nox packaging pkginfo platformdirs protobuf modules pycparser pygments pyjwt pyparsing pyperclip python dateutil readme renderer requests requests toolbelt rich rsa secretstorage six twine typing extensions virtualenv webencodings wheel zipp setuptools pip setup setup py protobuf google api core check this box to trigger a request for renovate to run again on this repository
| 1
|
42,835
| 11,299,752,020
|
IssuesEvent
|
2020-01-17 11:59:06
|
primefaces/primereact
|
https://api.github.com/repos/primefaces/primereact
|
closed
|
Dropdown cannot open the panel after double clicking an option
|
defect
|
Hello,
As the title, I cannot open the panel after double clicking an option in dropdown component.
And the examples in [here](https://www.primefaces.org/primereact/#/dropdown) have the same problem.
However, it is weird if you give the editable property to dropdown, it works fine after entering some value.
How can I resolve it? Thank you.
|
1.0
|
Dropdown cannot open the panel after double clicking an option - Hello,
As the title, I cannot open the panel after double clicking an option in dropdown component.
And the examples in [here](https://www.primefaces.org/primereact/#/dropdown) have the same problem.
However, oddly, if you give the editable property to the dropdown, it works fine after entering some value.
How can I resolve it? Thank you.
|
non_process
|
dropdown cannot open the panel after double clicking an option hello as the title i cannot open the panel after double clicking an option in dropdown component and the examples in have the same problem however it is weird if you give the editable property to dropdown it works fine after entering some value how can i resolve it thank you
| 0
|
826
| 3,295,617,087
|
IssuesEvent
|
2015-11-01 03:50:54
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
closed
|
add a uniform scale parameter to the feedback module
|
enhancement video processing
|
it's useful to be able to easily scale x and y to the same value
|
1.0
|
add a uniform scale parameter to the feedback module - it's useful to be able to easily scale x and y to the same value
|
process
|
add a uniform scale parameter to the feedback module it s useful to be able to easily scale x and y to the same value
| 1
|
817,848
| 30,659,292,109
|
IssuesEvent
|
2023-07-25 14:04:57
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Improve the flow for manually adding an NFT
|
priority/P4 QA/Yes release-notes/include feature/web3/wallet OS/Android
|
## Description
Adding an NFT on Android is a cumbersome operation, as the button to reach the dialog is a bit hidden in the UI.
### Android - iOS Comparison
https://github.com/brave/brave-browser/assets/3308503/cde9b008-b6bc-4dc6-85f1-deb66da41a82
The main difference is that on Android the button is inside the screen to edit the visible assets.
We should adopt the iOS solution: the plus icon should be repositioned in the NFT screen.
⚠️ Note: because the current Android implementation is relying on a dialog and is sharing the logic with the screen behind, we should implement [27070](https://github.com/brave/brave-browser/issues/27070) first.
|
1.0
|
Improve the flow for manually adding an NFT - ## Description
Adding an NFT on Android is a cumbersome operation, as the button to reach the dialog is a bit hidden in the UI.
### Android - iOS Comparison
https://github.com/brave/brave-browser/assets/3308503/cde9b008-b6bc-4dc6-85f1-deb66da41a82
The main difference is that on Android the button is inside the screen to edit the visible assets.
We should adopt the iOS solution: the plus icon should be repositioned in the NFT screen.
⚠️ Note: because the current Android implementation is relying on a dialog and is sharing the logic with the screen behind, we should implement [27070](https://github.com/brave/brave-browser/issues/27070) first.
|
non_process
|
improve the flow for manually adding an nft description adding an nft on android is a cumbersome operation as the button to reach the dialog is a bit hidden in the ui android ios comparison the main difference is that on android the button is inside the screen to edit the visible assets we should go in favor of the ios solution the plus icon should be repositioned in the nft screen ⚠️ note because the current android implementation is relying on a dialog and is sharing the logic with the screen behind we should implement first
| 0
|
680,232
| 23,263,353,292
|
IssuesEvent
|
2022-08-04 15:10:13
|
netdata/netdata-cloud
|
https://api.github.com/repos/netdata/netdata-cloud
|
closed
|
[Bug]: Cannot delete agent from cloud dashboard
|
bug priority/medium mgmt-navigation-team
|
### Bug description
I can't seem to delete a number of offline clients.
I couldn't upgrade some nodes and so had to uninstall-reinstall. This led to duplicates in the cloud dashboard, which I've had before; however, this time when I deleted the 'old' nodes from the dashboard's "nodes" view I got an internal server error.
andrewm4894 passed the info on to the backend team and soon I could delete without any errors.
However, after a few seconds the deleted node reappears.
I've tried deleting the cache via the developer tools and then doing a hard refresh (ctrl + F5) as well as a different browser.
Note that the agents are showing as Unreachable, not as stale etc.
Let me know what you need :)
### Expected behavior
Nodes should be deleted.
### Steps to reproduce
1.
https://app.netdata.cloud/spaces/home-office-space/rooms/all-nodes/nodes
2. Click delete/trash icon for node.
3. Also happens via the node tab under the settings for the room.
### Screenshots
_No response_
### Error Logs
Dev tools console shows two errors:
with "the server responded with a status of 499". Attached
[app.netdata.cloud-1656953801417.log](https://github.com/netdata/netdata-cloud/files/9041179/app.netdata.cloud-1656953801417.log)
[app.netdata.cloud-1656953793862.log](https://github.com/netdata/netdata-cloud/files/9041181/app.netdata.cloud-1656953793862.log)
### Desktop
OS: Windows 10 and 11
Browser: Edge and Chrome
Browser Version: 103.0.1264.44 and 103.0.5060.66
### Additional context
These clients had been intermittently on and off over a few months and had been failing to auto-update.
When I tried to update them via the kick-start script (which is how they were installed in the first place) it kept failing complaining that it could not establish the type of the existing installation.
So I uninstalled completely using the uninstall script and did fresh installs.
|
1.0
|
[Bug]: Cannot delete agent from cloud dashboard - ### Bug description
I can't seem to delete a number of offline clients.
I couldn't upgrade some nodes and so had to uninstall-reinstall. This led to duplicates in the cloud dashboard, which I've had before; however, this time when I deleted the 'old' nodes from the dashboard's "nodes" view I got an internal server error.
andrewm4894 passed the info on to the backend team and soon I could delete without any errors.
However, after a few seconds the deleted node reappears.
I've tried deleting the cache via the developer tools and then doing a hard refresh (ctrl + F5) as well as a different browser.
Note that the agents are showing as Unreachable, not as stale etc.
Let me know what you need :)
### Expected behavior
Nodes should be deleted.
### Steps to reproduce
1.
https://app.netdata.cloud/spaces/home-office-space/rooms/all-nodes/nodes
2. Click delete/trash icon for node.
3. Also happens via the node tab under the settings for the room.
### Screenshots
_No response_
### Error Logs
Dev tools console shows two errors:
with "the server responded with a status of 499". Attached
[app.netdata.cloud-1656953801417.log](https://github.com/netdata/netdata-cloud/files/9041179/app.netdata.cloud-1656953801417.log)
[app.netdata.cloud-1656953793862.log](https://github.com/netdata/netdata-cloud/files/9041181/app.netdata.cloud-1656953793862.log)
### Desktop
OS: Windows 10 and 11
Browser: Edge and Chrome
Browser Version: 103.0.1264.44 and 103.0.5060.66
### Additional context
These clients had been intermittently on and off over a few months and had been failing to auto-update.
When I tried to update them via the kick-start script (which is how they were installed in the first place) it kept failing complaining that it could not establish the type of the existing installation.
So I uninstalled completely using the uninstall script and did fresh installs.
|
non_process
|
cannot delete agent from cloud dashboard bug description i can t seem to delete a number of offline clients i coulddn t upgrade some nodes and so had to uninstall reinstall this lead to duplicates in the cloud dashboard which i ve had before however this time when i deleted the old nodes from the dashboard s nodes view i got an internal server error passed the info on to the backend team and soon i could delete without any errors however after a few seconds the deleted node reappears i ve tried deleting the cache via the developer tools and then doing a hard refresh ctrl as well as a different browser note that the agents are showing as unreachable not as stale etc let me know what you need expected behavior nodes should be deleted steps to reproduce click delete trash icon for node also happens via the node tab under the settings for the room screenshots no response error logs dev tools console shows two errors with the server responded with a status of attached desktop os windows and browser edge and chrome browser version and additional context these clients had been intermittently on and off over a few months and had been failing to auto update when i tried to update them via the kick start script which is how they were installed in the first place it kept failing complaining that it could not establish the type of the existing installation so i uninstalled completely using the uninstall script and did fresh installs
| 0
|
163,427
| 6,198,188,595
|
IssuesEvent
|
2017-07-05 18:34:46
|
GoogleCloudPlatform/google-cloud-eclipse
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse
|
closed
|
Import Java 8 project
|
App Engine Standard high priority task
|
See what happens when we import an existing Java 8 app engine standard project (`<runtime>java8</runtime>`). Make sure the facets (Java and Dynamic Web project) are set appropriately without further action.
|
1.0
|
Import Java 8 project - See what happens when we import an existing Java 8 app engine standard project (`<runtime>java8</runtime>`). Make sure the facets (Java and Dynamic Web project) are set appropriately without further action.
|
non_process
|
import java project see what happens when we import an existing java app engine standard project make sure the facets java and dynamic web project are set appropriately without further action
| 0
|
4,365
| 7,260,514,615
|
IssuesEvent
|
2018-02-18 10:53:40
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] Optimised points along geometry algorithm
|
Automatic new feature Processing
|
Original commit: https://github.com/qgis/QGIS/commit/8db9284cb304618c91c22c380711a0d4dd02f707 by nyalldawson
Supports also polygon geometries, handles null geometries,
and records the original line angle along with the distance
for each point.
|
1.0
|
[FEATURE][processing] Optimised points along geometry algorithm - Original commit: https://github.com/qgis/QGIS/commit/8db9284cb304618c91c22c380711a0d4dd02f707 by nyalldawson
Supports also polygon geometries, handles null geometries,
and records the original line angle along with the distance
for each point.
|
process
|
optimised points along geometry algorithm original commit by nyalldawson supports also polygon geometries handles null geometries and records the original line angle along with the distance for each point
| 1
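The QGIS record above describes a points-along-geometry algorithm that samples a line at fixed distances and records the original segment angle alongside each distance. As a minimal illustration only (this is not the QGIS implementation; the function name, tuple layout, and flat-polyline assumption are mine), a pure-Python sketch could look like:

```python
import math

def points_along_line(coords, step):
    """Sample a polyline at fixed intervals.

    coords: list of (x, y) vertices; step: spacing between samples.
    Returns (x, y, angle, distance) tuples, where angle is the direction
    (radians) of the segment each sample falls on, mirroring how the issue
    says the original line angle is recorded with the distance.
    """
    out, travelled, next_mark = [], 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(coords, coords[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0:  # skip duplicate vertices
            continue
        angle = math.atan2(y1 - y0, x1 - x0)
        # emit every sample whose distance falls inside this segment
        while next_mark <= travelled + seg:
            t = (next_mark - travelled) / seg
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), angle, next_mark))
            next_mark += step
        travelled += seg
    return out
```

A closed ring (a polygon boundary, which the commit also supports) can be fed in the same way by repeating the first vertex at the end of `coords`.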
|
10,084
| 13,044,161,986
|
IssuesEvent
|
2020-07-29 03:47:28
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `ConvertTz` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `ConvertTz` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `ConvertTz` from TiDB -
## Description
Port the scalar function `ConvertTz` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function converttz from tidb description port the scalar function converttz from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
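For context on what the TiKV record above ports: MySQL's `CONVERT_TZ(dt, from_tz, to_tz)` reinterprets a naive datetime as being in `from_tz` and renders it in `to_tz`, returning NULL when a zone is unknown. A rough Python model restricted to `'+HH:MM'`-style offset zones (the offset-only restriction and the helper names are my simplification, not the TiDB/TiKV behaviour) might be:

```python
from datetime import datetime, timedelta, timezone

def _parse_offset(tz):
    """Parse a '+08:00' / '-05:30' style offset; return None if malformed."""
    try:
        if tz[0] not in "+-":
            return None
        sign = 1 if tz[0] == "+" else -1
        hours, minutes = tz[1:].split(":")
        return timezone(sign * timedelta(hours=int(hours), minutes=int(minutes)))
    except (ValueError, IndexError):
        return None

def convert_tz(dt, from_tz, to_tz):
    """Model of CONVERT_TZ: treat the naive dt as from_tz, express it in to_tz."""
    src, dst = _parse_offset(from_tz), _parse_offset(to_tz)
    if src is None or dst is None:
        return None  # SQL returns NULL for an unknown time zone
    return dt.replace(tzinfo=src).astimezone(dst).replace(tzinfo=None)
```

Named zones such as `'Asia/Shanghai'` would additionally need a tz database lookup, which is the harder part of the real port.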
|
95,040
| 3,933,560,996
|
IssuesEvent
|
2016-04-25 19:33:48
|
ghutchis/avogadro
|
https://api.github.com/repos/ghutchis/avogadro
|
closed
|
Crash viewing properties after optimisation
|
auto-migrated Commands / Extensions high priority v_1.1.0
|
Create/load a molecule.
Select: Extensions->Optimize Geometry.
Select: View -> Properties -> Molecule Properties.
This generates the following seg fault:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libstdc++.6.dylib 0x00007fff8ba29bde std::ostream::sentry::sentry(std::ostream&) + 24
1 libstdc++.6.dylib 0x00007fff8ba2b5bf std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long) + 39
2 libstdc++.6.dylib 0x00007fff8ba2b7ea std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*) + 68
3 libopenbabel.4.dylib 0x000000010b6e3177 OpenBabel::OBForceField::PrintTypes() + 55 (forcefield.h:1013)
4 libopenbabel.4.dylib 0x000000010b6e3500 OpenBabel::OBForceField::Setup(OpenBabel::OBMol&) + 378 (forcefield.cpp:890)
5 libavogadro.1.dylib 0x000000010b947939 Avogadro::Molecule::dipoleMoment(bool*) const + 145 (molecule.cpp:782)
6 molecularpropextension.so 0x0000000110b02b77 Avogadro::MolecularPropertiesExtension::update() + 797 (molecularpropextension.cpp:160)
7 molecularpropextension.so 0x0000000110b03307 Avogadro::MolecularPropertiesExtension::performAction(QAction*, Avogadro::GLWidget*) + 607 (molecularpropextension.cpp:128)
8 net.sourceforge 0x000000010a2731fc Avogadro::MainWindow::actionTriggered() + 84
9 net.sourceforge 0x000000010a270c8e Avogadro::MainWindow::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) + 1442
10 QtCore 0x000000010b47b22e QMetaObject::activate(QObject*, QMetaObject const*, int, void**) + 1566
11 QtGui 0x000000010a5182c1 QAction::triggered(bool) + 49
12 QtGui 0x000000010a519654 QAction::activate(QAction::ActionEvent) + 180
13 QtGui 0x000000010a4cbf0a -[QCocoaMenuLoader qtDispatcherToQAction:] + 106
14 com.apple.CoreFoundation 0x00007fff86c9475d -[NSObject performSelector:withObject:] + 61
15 com.apple.AppKit 0x00007fff8d31ecb2 -[NSApplication sendAction:to:from:] + 139
16 com.apple.AppKit 0x00007fff8d40bfe7 -[NSMenuItem _corePerformAction] + 399
17 com.apple.AppKit 0x00007fff8d40bd1e -[NSCarbonMenuImpl performActionWithHighlightingForItemAtIndex:] + 125
18 com.apple.AppKit 0x00007fff8d6a9dd4 -[NSMenu _internalPerformActionForItemAtIndex:] + 38
19 com.apple.AppKit 0x00007fff8d53a3a9 -[NSCarbonMenuImpl _carbonCommandProcessEvent:handlerCallRef:] + 138
20 com.apple.AppKit 0x00007fff8d385b4b NSSLMMenuEventHandler + 339
21 com.apple.HIToolbox 0x00007fff8a514294 _ZL23DispatchEventToHandlersP14EventTargetRecP14OpaqueEventRefP14HandlerCallRec + 1263
22 com.apple.HIToolbox 0x00007fff8a5138a0 _ZL30SendEventToEventTargetInternalP14OpaqueEventRefP20OpaqueEventTargetRefP14HandlerCallRec + 446
23 com.apple.HIToolbox 0x00007fff8a52a677 SendEventToEventTarget + 76
24 com.apple.HIToolbox 0x00007fff8a5706c1 _ZL18SendHICommandEventjPK9HICommandjjhPKvP20OpaqueEventTargetRefS5_PP14OpaqueEventRef + 398
25 com.apple.HIToolbox 0x00007fff8a657c59 SendMenuCommandWithContextAndModifiers + 56
26 com.apple.HIToolbox 0x00007fff8a69e73d SendMenuItemSelectedEvent + 253
27 com.apple.HIToolbox 0x00007fff8a5697bb _ZL19FinishMenuSelectionP13SelectionDataP10MenuResultS2_ + 101
28 com.apple.HIToolbox 0x00007fff8a560f01 _ZL14MenuSelectCoreP8MenuData5PointdjPP13OpaqueMenuRefPt + 600
29 com.apple.HIToolbox 0x00007fff8a5604ca _HandleMenuSelection2 + 580
30 com.apple.AppKit 0x00007fff8d284fb2 _NSHandleCarbonMenuEvent + 250
31 com.apple.AppKit 0x00007fff8d21a4ad _DPSNextEvent + 2019
32 com.apple.AppKit 0x00007fff8d219861 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 135
33 com.apple.AppKit 0x00007fff8d21619d -[NSApplication run] + 470
34 QtGui 0x000000010a4d7900 QEventDispatcherMac::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 1824
35 QtCore 0x000000010b462094 QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 68
36 QtCore 0x000000010b462444 QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 324
37 QtCore 0x000000010b464b2c QCoreApplication::exec() + 188
38 net.sourceforge 0x000000010a26f38a main + 5786
39 net.sourceforge 0x000000010a262564 start + 52
This on:
Model: MacBookAir4,2, BootROM MBA41.0077.B0F, 2 processors, Intel Core i5, 1.7 GHz, 4 GB, SMC 1.73f63
OS Version: Mac OS X 10.7.3 (11D50b)
Using Qt 4.8.1 latest version of source as built with avogadro_squared
Reported by: jensthomas
|
1.0
|
Crash viewing properties after optimisation - Create/load a molecule.
Select: Extensions->Optimize Geometry.
Select: View -> Properties -> Molecule Properties.
This generates the following seg fault:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libstdc++.6.dylib 0x00007fff8ba29bde std::ostream::sentry::sentry(std::ostream&) + 24
1 libstdc++.6.dylib 0x00007fff8ba2b5bf std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long) + 39
2 libstdc++.6.dylib 0x00007fff8ba2b7ea std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*) + 68
3 libopenbabel.4.dylib 0x000000010b6e3177 OpenBabel::OBForceField::PrintTypes() + 55 (forcefield.h:1013)
4 libopenbabel.4.dylib 0x000000010b6e3500 OpenBabel::OBForceField::Setup(OpenBabel::OBMol&) + 378 (forcefield.cpp:890)
5 libavogadro.1.dylib 0x000000010b947939 Avogadro::Molecule::dipoleMoment(bool*) const + 145 (molecule.cpp:782)
6 molecularpropextension.so 0x0000000110b02b77 Avogadro::MolecularPropertiesExtension::update() + 797 (molecularpropextension.cpp:160)
7 molecularpropextension.so 0x0000000110b03307 Avogadro::MolecularPropertiesExtension::performAction(QAction*, Avogadro::GLWidget*) + 607 (molecularpropextension.cpp:128)
8 net.sourceforge 0x000000010a2731fc Avogadro::MainWindow::actionTriggered() + 84
9 net.sourceforge 0x000000010a270c8e Avogadro::MainWindow::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) + 1442
10 QtCore 0x000000010b47b22e QMetaObject::activate(QObject*, QMetaObject const*, int, void**) + 1566
11 QtGui 0x000000010a5182c1 QAction::triggered(bool) + 49
12 QtGui 0x000000010a519654 QAction::activate(QAction::ActionEvent) + 180
13 QtGui 0x000000010a4cbf0a -[QCocoaMenuLoader qtDispatcherToQAction:] + 106
14 com.apple.CoreFoundation 0x00007fff86c9475d -[NSObject performSelector:withObject:] + 61
15 com.apple.AppKit 0x00007fff8d31ecb2 -[NSApplication sendAction:to:from:] + 139
16 com.apple.AppKit 0x00007fff8d40bfe7 -[NSMenuItem _corePerformAction] + 399
17 com.apple.AppKit 0x00007fff8d40bd1e -[NSCarbonMenuImpl performActionWithHighlightingForItemAtIndex:] + 125
18 com.apple.AppKit 0x00007fff8d6a9dd4 -[NSMenu _internalPerformActionForItemAtIndex:] + 38
19 com.apple.AppKit 0x00007fff8d53a3a9 -[NSCarbonMenuImpl _carbonCommandProcessEvent:handlerCallRef:] + 138
20 com.apple.AppKit 0x00007fff8d385b4b NSSLMMenuEventHandler + 339
21 com.apple.HIToolbox 0x00007fff8a514294 _ZL23DispatchEventToHandlersP14EventTargetRecP14OpaqueEventRefP14HandlerCallRec + 1263
22 com.apple.HIToolbox 0x00007fff8a5138a0 _ZL30SendEventToEventTargetInternalP14OpaqueEventRefP20OpaqueEventTargetRefP14HandlerCallRec + 446
23 com.apple.HIToolbox 0x00007fff8a52a677 SendEventToEventTarget + 76
24 com.apple.HIToolbox 0x00007fff8a5706c1 _ZL18SendHICommandEventjPK9HICommandjjhPKvP20OpaqueEventTargetRefS5_PP14OpaqueEventRef + 398
25 com.apple.HIToolbox 0x00007fff8a657c59 SendMenuCommandWithContextAndModifiers + 56
26 com.apple.HIToolbox 0x00007fff8a69e73d SendMenuItemSelectedEvent + 253
27 com.apple.HIToolbox 0x00007fff8a5697bb _ZL19FinishMenuSelectionP13SelectionDataP10MenuResultS2_ + 101
28 com.apple.HIToolbox 0x00007fff8a560f01 _ZL14MenuSelectCoreP8MenuData5PointdjPP13OpaqueMenuRefPt + 600
29 com.apple.HIToolbox 0x00007fff8a5604ca _HandleMenuSelection2 + 580
30 com.apple.AppKit 0x00007fff8d284fb2 _NSHandleCarbonMenuEvent + 250
31 com.apple.AppKit 0x00007fff8d21a4ad _DPSNextEvent + 2019
32 com.apple.AppKit 0x00007fff8d219861 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 135
33 com.apple.AppKit 0x00007fff8d21619d -[NSApplication run] + 470
34 QtGui 0x000000010a4d7900 QEventDispatcherMac::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 1824
35 QtCore 0x000000010b462094 QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 68
36 QtCore 0x000000010b462444 QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 324
37 QtCore 0x000000010b464b2c QCoreApplication::exec() + 188
38 net.sourceforge 0x000000010a26f38a main + 5786
39 net.sourceforge 0x000000010a262564 start + 52
This on:
Model: MacBookAir4,2, BootROM MBA41.0077.B0F, 2 processors, Intel Core i5, 1.7 GHz, 4 GB, SMC 1.73f63
OS Version: Mac OS X 10.7.3 (11D50b)
Using Qt 4.8.1 latest version of source as built with avogadro_squared
Reported by: jensthomas
|
non_process
|
crash viewing properties after optimisation create load a molecule select extensions optimize geometry select view properties molecule properties this generates the following seg faut thread crashed dispatch queue com apple main thread libstdc dylib std ostream sentry sentry std ostream libstdc dylib std basic ostream std ostream insert std basic ostream char const long libstdc dylib std basic ostream std operator std basic ostream char const libopenbabel dylib openbabel obforcefield printtypes forcefield h libopenbabel dylib openbabel obforcefield setup openbabel obmol forcefield cpp libavogadro dylib avogadro molecule dipolemoment bool const molecule cpp molecularpropextension so avogadro molecularpropertiesextension update molecularpropextension cpp molecularpropextension so avogadro molecularpropertiesextension performaction qaction avogadro glwidget molecularpropextension cpp net sourceforge avogadro mainwindow actiontriggered net sourceforge avogadro mainwindow qt static metacall qobject qmetaobject call int void qtcore qmetaobject activate qobject qmetaobject const int void qtgui qaction triggered bool qtgui qaction activate qaction actionevent qtgui com apple corefoundation com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit nsslmmenueventhandler com apple hitoolbox com apple hitoolbox com apple hitoolbox sendeventtoeventtarget com apple hitoolbox com apple hitoolbox sendmenucommandwithcontextandmodifiers com apple hitoolbox sendmenuitemselectedevent com apple hitoolbox com apple hitoolbox com apple hitoolbox com apple appkit nshandlecarbonmenuevent com apple appkit dpsnextevent com apple appkit com apple appkit qtgui qeventdispatchermac processevents qflags qtcore qeventloop processevents qflags qtcore qeventloop exec qflags qtcore qcoreapplication exec net sourceforge main net sourceforge start this on model bootrom processors intel core ghz gb smc os version mac os x using qt latest version of source as 
built with avogadro squared reported by jensthomas
| 0
|
3,807
| 6,793,673,461
|
IssuesEvent
|
2017-11-01 08:46:17
|
gaocegege/Processing.R
|
https://api.github.com/repos/gaocegege/Processing.R
|
closed
|
Support Table in R
|
community/processing difficulty/low priority/p1 size/small status/to-be-claimed type/enhancement
|
From @jeremydouglass
`Table` might be good to import from the core as well for working with large sets of 2D or 3D coordinates -- loadTable and saveTable are already present, but they wouldn't work without Table. At least, I think it is not already included, because it is not listed in the reference....
- https://processing-r.github.io/reference/loadTable.html
- https://processing-r.github.io/reference/Table.html
|
1.0
|
Support Table in R - From @jeremydouglass
`Table` might be good to import from the core as well for working with large sets of 2D or 3D coordinates -- loadTable and saveTable are already present, but they wouldn't work without Table. At least, I think it is not already included, because it is not listed in the reference....
- https://processing-r.github.io/reference/loadTable.html
- https://processing-r.github.io/reference/Table.html
|
process
|
support table in r from jeremydouglass table might be good to import from the core as well for working with large sets of or coordinates loadtable and savetable are already present but they wouldn t work without table at least i think it is not already included because it is not listed in the reference
| 1
|
271,191
| 23,589,024,854
|
IssuesEvent
|
2022-08-23 13:53:00
|
O-market/O-market
|
https://api.github.com/repos/O-market/O-market
|
closed
|
Implement NetworkService test code
|
test
|
### Background
Implement NetworkService test code to test whether it fetches data correctly
### Checklist
- [ ] Implement NetworkService tests
|
1.0
|
Implement NetworkService test code - ### Background
Implement NetworkService test code to test whether it fetches data correctly
### Checklist
- [ ] Implement NetworkService tests
|
non_process
|
implement networkservice test code background implement networkservice test code to test whether it fetches data correctly checklist implement networkservice tests
| 0
|
459,470
| 13,193,840,880
|
IssuesEvent
|
2020-08-13 15:50:19
|
google/ground-platform
|
https://api.github.com/repos/google/ground-platform
|
closed
|
[Layer list] Order of layers changes on each reload
|
CUJ: layers/ forms priority: p1 type: bug
|
This is due to index not being implemented in Layer.
Blocked by #308.
Fyi @gyarill, @parulraheja98
|
1.0
|
[Layer list] Order of layers changes on each reload - This is due to index not being implemented in Layer.
Blocked by #308.
Fyi @gyarill, @parulraheja98
|
non_process
|
order of layers changes on each reload this is due to index not being implemented in layer blocked by fyi gyarill
| 0
|
314,975
| 9,605,275,529
|
IssuesEvent
|
2019-05-10 23:09:45
|
INN/link-roundups
|
https://api.github.com/repos/INN/link-roundups
|
opened
|
[roundup_block] shortcode: has anyone ever used it?
|
priority: low type: question
|
PR https://github.com/INN/link-roundups/pull/117/ was opened for 0.4, after the WordPress MailChimp Tools submodule was created. It was never merged; the docs were never added.
- [ ] can we remove the `[roundup_block]` shortcode and its associated JS?
|
1.0
|
[roundup_block] shortcode: has anyone ever used it? - PR https://github.com/INN/link-roundups/pull/117/ was opened for 0.4, after the WordPress MailChimp Tools submodule was created. It was never merged; the docs were never added.
- [ ] can we remove the `[roundup_block]` shortcode and its associated JS?
|
non_process
|
shortcode has anyone ever used it pr was opened for after the wordpress mailchimp tools submodule was created it was never merged the docs were never added can we remove the shortcode and its associated js
| 0
|
302,663
| 26,158,923,945
|
IssuesEvent
|
2022-12-31 07:09:09
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: cdc/initial-scan failed
|
C-test-failure O-robot O-roachtest release-blocker branch-release-22.1
|
roachtest.cdc/initial-scan [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=8147270&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=8147270&tab=artifacts#/cdc/initial-scan) on release-22.1 @ [000c9624b56b09d5fbd06557c559b2f910142a9c](https://github.com/cockroachdb/cockroach/commits/000c9624b56b09d5fbd06557c559b2f910142a9c):
```
| | github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2083
| | github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Run
| | github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:661
| | github.com/cockroachdb/cockroach/pkg/roachprod.Run
| | github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:384
| | main.execCmdEx
| | main/pkg/cmd/roachtest/cluster.go:341
| | main.execCmd
| | main/pkg/cmd/roachtest/cluster.go:229
| | main.(*clusterImpl).RunE
| | main/pkg/cmd/roachtest/cluster.go:1954
| | main.(*clusterImpl).Run
| | main/pkg/cmd/roachtest/cluster.go:1932
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccWorkload).run
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:1578
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest.func1
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:186
| | main.(*monitorImpl).Go.func1
| | main/pkg/cmd/roachtest/monitor.go:105
| | golang.org/x/sync/errgroup.(*Group).Go.func1
| | golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:57
| | runtime.goexit
| | GOROOT/src/runtime/asm_amd64.s:1581
| Wraps: (2) one or more parallel execution failure
| Error types: (1) *withstack.withStack (2) *errutil.leafError
Wraps: (5) context canceled
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) *secondary.withSecondaryError (5) *errors.errorString
monitor.go:127,cdc.go:296,cdc.go:727,test_runner.go:883: monitor failure: monitor task failed: pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:296
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerCDC.func3
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:727
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (4) monitor task failed
Wraps: (5) pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *pq.Error
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #92510 roachtest: cdc/initial-scan-only failed [C-test-failure O-roachtest O-robot T-cdc branch-master release-blocker]
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/initial-scan.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: cdc/initial-scan failed - roachtest.cdc/initial-scan [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=8147270&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=8147270&tab=artifacts#/cdc/initial-scan) on release-22.1 @ [000c9624b56b09d5fbd06557c559b2f910142a9c](https://github.com/cockroachdb/cockroach/commits/000c9624b56b09d5fbd06557c559b2f910142a9c):
```
| | github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2083
| | github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Run
| | github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:661
| | github.com/cockroachdb/cockroach/pkg/roachprod.Run
| | github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:384
| | main.execCmdEx
| | main/pkg/cmd/roachtest/cluster.go:341
| | main.execCmd
| | main/pkg/cmd/roachtest/cluster.go:229
| | main.(*clusterImpl).RunE
| | main/pkg/cmd/roachtest/cluster.go:1954
| | main.(*clusterImpl).Run
| | main/pkg/cmd/roachtest/cluster.go:1932
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccWorkload).run
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:1578
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest.func1
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:186
| | main.(*monitorImpl).Go.func1
| | main/pkg/cmd/roachtest/monitor.go:105
| | golang.org/x/sync/errgroup.(*Group).Go.func1
| | golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:57
| | runtime.goexit
| | GOROOT/src/runtime/asm_amd64.s:1581
| Wraps: (2) one or more parallel execution failure
| Error types: (1) *withstack.withStack (2) *errutil.leafError
Wraps: (5) context canceled
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) *secondary.withSecondaryError (5) *errors.errorString
monitor.go:127,cdc.go:296,cdc.go:727,test_runner.go:883: monitor failure: monitor task failed: pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:296
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerCDC.func3
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:727
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (4) monitor task failed
Wraps: (5) pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *pq.Error
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #92510 roachtest: cdc/initial-scan-only failed [C-test-failure O-roachtest O-robot T-cdc branch-master release-blocker]
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/initial-scan.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_process
|
roachtest cdc initial scan failed roachtest cdc initial scan with on release github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod install syncedcluster run github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod run github com cockroachdb cockroach pkg roachprod roachprod go main execcmdex main pkg cmd roachtest cluster go main execcmd main pkg cmd roachtest cluster go main clusterimpl rune main pkg cmd roachtest cluster go main clusterimpl run main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests tpccworkload run github com cockroachdb cockroach pkg cmd roachtest tests cdc go github com cockroachdb cockroach pkg cmd roachtest tests cdcbasictest github com cockroachdb cockroach pkg cmd roachtest tests cdc go main monitorimpl go main pkg cmd roachtest monitor go golang org x sync errgroup group go golang org x sync errgroup external org golang x sync errgroup errgroup go runtime goexit goroot src runtime asm s wraps one or more parallel execution failure error types withstack withstack errutil leaferror wraps context canceled error types withstack withstack errutil withprefix cluster withcommanddetails secondary withsecondaryerror errors errorstring monitor go cdc go cdc go test runner go monitor failure monitor task failed pq use of changefeed requires an enterprise license your evaluation license expired on december if you re interested in getting a new license please contact subscriptions cockroachlabs com and we can help you out attached stack trace stack trace main monitorimpl waite main pkg cmd roachtest monitor go main monitorimpl wait main pkg cmd roachtest monitor go github com cockroachdb cockroach pkg cmd roachtest tests cdcbasictest github com cockroachdb cockroach pkg cmd roachtest tests cdc go github com cockroachdb cockroach pkg cmd roachtest tests registercdc github com cockroachdb 
cockroach pkg cmd roachtest tests cdc go wraps monitor failure wraps attached stack trace stack trace main monitorimpl wait main pkg cmd roachtest monitor go runtime goexit goroot src runtime asm s wraps monitor task failed wraps pq use of changefeed requires an enterprise license your evaluation license expired on december if you re interested in getting a new license please contact subscriptions cockroachlabs com and we can help you out error types withstack withstack errutil withprefix withstack withstack errutil withprefix pq error help see see same failure on other branches roachtest cdc initial scan only failed cc cockroachdb cdc
| 0
|
17,684
| 23,525,249,859
|
IssuesEvent
|
2022-08-19 10:09:11
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
incompatible_use_toolchain_resolution_for_java_rules: use toolchain resolution for Java rules
|
P1 type: process team-Rules-Java incompatible-change area-java-toolchains
|
**Flag:** `--incompatible_use_toolchain_resolution_for_java_rules`
**Available since:** 5.0.0
**Will be flipped in:** 5.0.0
**Tracking issue:** #4592
### **Motivation**
Currently, Java rules find their Java toolchain and JDK using the `--javabase` / `--java_toolchain` / `--host_javabase` / `--host_java_toolchain` command line options. This will be changed to use [platform-based toolchain resolution](https://docs.bazel.build/versions/master/toolchains.html) so as to be consistent with the rest of Bazel and to support multiple platforms more easily.
### **Migration notes**
For `--javabase` with old values:
- `@local_jdk://jdk`
- `@remotejdk11_{linux,window,darwin}_{cpu}//:jdk`
- `@remotejdk14_{linux,window,darwin}//:jdk`
- `@remotejdk15_{linux,window,darwin}.*//:jdk`
Replace the flag with `--java_runtime_version={local_jdk,remotejdk_14,remotejdk_15}`.
For `--java_toolchain` with old values:
- `@bazel_tools//tools/jdk:toolchain`,
- `@bazel_tools_hostjdk8`,
- `@bazel_tools//jdk:legacy_toolchain`,
- `@bazel_tools//tools/jdk:remote_toolchain`,
- `@bazel_tools//tools/jdk:toolchain_java_{ver}`,
- `@remote_java_tools_xxx//:toolchain`,
- `@remote_java_tools_xxx//:toolchain_jdk_11`,
- `@remote_java_tools_xxx//:toolchain_jdk_14`,
- `@remote_java_tools_xxx//:toolchain_jdk_15`
Replace the flag with `--java_language_version={8,...,15}`
### **Migration of more advanced cases**
For `--javabase=@bazel_tools//tools/jdk:absolute_javabase`, use `local_java_repository` in the `WORKSPACE` file.
For custom `--javabase` labels, do the following:
1. replace `http_archive` with `remote_java_repository`:
```py
remote_java_repository(
name = ...
sha256 = ...
strip_prefix = ...
urls = ...
prefix = "myjdk",
version = "11",
exec_compatible_with = ["@platforms//cpu:arm", "@platforms//os:linux"]
)
```
2. replace the old flag with `--java_runtime_version` with the specified `version` or `prefix_version` value (for example `--java_runtime_version=myjdk_11`).
For old `--java_toolchain` values:
- `@bazel_tools//tools/jdk:toolchain_vanilla`
- `@remote_java_tools_xxx//:prebuilt_toolchain`
- custom label
1. add custom toolchain definition to a `BUILD` file (or replace custom target):
```py
default_java_toolchain(
name = "mytoolchain",
configuration = "PREBUILT_TOOLCHAIN_CONFIGURATION"
#or "VANILLA_TOOLCHAIN_CONFIGURATION"
...
)
```
2. register custom toolchain in the `WORKSPACE` or use configuration flag `--extra_toolchains`.
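A minimal sketch of step 2, assuming the `default_java_toolchain` target above lives in a package named `//toolchains` and generates a companion `toolchain()` target with a `_definition` suffix (both the package path and the suffix are illustrative assumptions, not taken from this text):
```py
# WORKSPACE (illustrative sketch only)
# Assumes default_java_toolchain(name = "mytoolchain", ...) in //toolchains
# also produces a registrable toolchain() target "mytoolchain_definition".
register_toolchains("//toolchains:mytoolchain_definition")
```
Equivalently, passing `--extra_toolchains=//toolchains:mytoolchain_definition` on the command line or in `.bazelrc` registers the same toolchain without editing the `WORKSPACE` file.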
### *RBE migration*
1. Update the version of bazel_toolchains.
2. Add following flags to `.bazelrc` affecting remote configuration:
```
build:remote --java_runtime_version=rbe_jdk # Uses JDK installed on docker, configured by bazel_toolchains
build:remote --tool_java_runtime_version=rbe_jdk
build:remote --extra_toolchains=@rbe_ubuntu1804//java:all # Optional: uses JDK installed on docker to compile
```
3. In case the sources are not Java 8, also add:
```
build --java_language_version=11
build --tool_java_language_version=11
```
4. Once Bazel 4.1.0 is released and used on RBE, remove `--{,host}javabase` and `--{,host}_javatoolchain` flags.
|
1.0
|
incompatible_use_toolchain_resolution_for_java_rules: use toolchain resolution for Java rules - **Flag:** `--incompatible_use_toolchain_resolution_for_java_rules`
**Available since:** 5.0.0
**Will be flipped in:** 5.0.0
**Tracking issue:** #4592
### **Motivation**
Currently, Java rules find their Java toolchain and JDK using the `--javabase` / `--java_toolchain` / `--host_javabase` / `--host_java_toolchain` command line options. This will be changed to use [platform-based toolchain resolution](https://docs.bazel.build/versions/master/toolchains.html) so as to be consistent with the rest of Bazel and to support multiple platforms more easily.
### **Migration notes**
For `--javabase` with old values:
- `@local_jdk://jdk`
- `@remotejdk11_{linux,window,darwin}_{cpu}//:jdk`
- `@remotejdk14_{linux,window,darwin}//:jdk`
- `@remotejdk15_{linux,window,darwin}.*//:jdk`
Replace the flag with `--java_runtime_version={local_jdk,remotejdk_14,remotejdk_15}`.
For `--java_toolchain` with old values:
- `@bazel_tools//tools/jdk:toolchain`,
- `@bazel_tools_hostjdk8`,
- `@bazel_tools//jdk:legacy_toolchain`,
- `@bazel_tools//tools/jdk:remote_toolchain`,
- `@bazel_tools//tools/jdk:toolchain_java_{ver}`,
- `@remote_java_tools_xxx//:toolchain`,
- `@remote_java_tools_xxx//:toolchain_jdk_11`,
- `@remote_java_tools_xxx//:toolchain_jdk_14`,
- `@remote_java_tools_xxx//:toolchain_jdk_15`
Replace the flag with `--java_language_version={8,...,15}`
### **Migration of more advanced cases**
For `--javabase=@bazel_tools//tools/jdk:absolute_javabase`, use `local_java_repository` in the `WORKSPACE` file.
For custom `--javabase` labels, do the following:
1. replace `http_archive` with `remote_java_repository`:
```py
remote_java_repository(
name = ...
sha256 = ...
strip_prefix = ...
urls = ...
prefix = "myjdk",
version = "11",
exec_compatible_with = ["@platforms//cpu:arm", "@platforms//os:linux"]
)
```
2. replace the old flag with `--java_runtime_version` with the specified `version` or `prefix_version` value (for example `--java_runtime_version=myjdk_11`).
For old `--java_toolchain` values:
- `@bazel_tools//tools/jdk:toolchain_vanilla`
- `@remote_java_tools_xxx//:prebuilt_toolchain`
- custom label
1. add custom toolchain definition to a `BUILD` file (or replace custom target):
```py
default_java_toolchain(
name = "mytoolchain",
configuration = "PREBUILT_TOOLCHAIN_CONFIGURATION"
#or "VANILLA_TOOLCHAIN_CONFIGURATION"
...
)
```
2. register custom toolchain in the `WORKSPACE` or use configuration flag `--extra_toolchains`.
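A minimal sketch of step 2, assuming the `default_java_toolchain` target above lives in a package named `//toolchains` and generates a companion `toolchain()` target with a `_definition` suffix (both the package path and the suffix are illustrative assumptions, not taken from this text):
```py
# WORKSPACE (illustrative sketch only)
# Assumes default_java_toolchain(name = "mytoolchain", ...) in //toolchains
# also produces a registrable toolchain() target "mytoolchain_definition".
register_toolchains("//toolchains:mytoolchain_definition")
```
Equivalently, passing `--extra_toolchains=//toolchains:mytoolchain_definition` on the command line or in `.bazelrc` registers the same toolchain without editing the `WORKSPACE` file.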
### *RBE migration*
1. Update the version of bazel_toolchains.
2. Add following flags to `.bazelrc` affecting remote configuration:
```
build:remote --java_runtime_version=rbe_jdk # Uses JDK installed on docker, configured by bazel_toolchains
build:remote --tool_java_runtime_version=rbe_jdk
build:remote --extra_toolchains=@rbe_ubuntu1804//java:all # Optional: uses JDK installed on docker to compile
```
3. In case the sources are not Java 8, also add:
```
build --java_language_version=11
build --tool_java_language_version=11
```
4. Once Bazel 4.1.0 is released and used on RBE, remove `--{,host}javabase` and `--{,host}_javatoolchain` flags.
|
process
|
incompatible use toolchain resolution for java rules use toolchain resolution for java rules flag incompatible use toolchain resolution for java rules available since will be flipped in tracking issue motivation currently java rules find their java toolchain and jdk using the javabase java toolchain host javabase host java toolchain command line options this will be changed to use so as to be consistent with the rest of bazel and to support multiple platforms more easily migration notes for javabase with old values local jdk jdk linux window darwin cpu jdk linux window darwin jdk linux window darwin jdk replace the flag with java runtime version local jdk remotejdk remotejdk for java toolchain with old values bazel tools tools jdk toolchain bazel tools bazel tools jdk legacy toolchain bazel tools tools jdk remote toolchain bazel tools tools jdk toolchain java ver remote java tools xxx toolchain remote java tools xxx toolchain jdk remote java tools xxx toolchain jdk remote java tools xxx toolchain jdk replace the flag with java language version migration of more advanced cases for javabase bazel tools tools jdk absolute javabase use local java repositoy in the workspace file for custom javabase labels do the following replace http archive with remote java repository py remote java repository name strip prefix urls prefix myjdk version exec compatible with replace the old flag with java runtime version with the specified version or prefix version value for example java runtime version myjdk for old java toolchain values bazel tools tools jdk toolchain vanilla remote java tools xxx prebuilt toolchain custom label add custom toolchain definition to a build file or replace custom target py default java toolchain name mytoolchain configuration prebuilt toolchain configuration or vanilla toolchain configuration register custom toolchain in the workspace or use configuration flag extra toolchains rbe migration update the version of bazel toolchains add following flags to 
bazelrc affecting remote configuration build remote java runtime version rbe jdk uses jdk installed on docker configured by bazel toolchains build remote tool java runtime version rbe jdk build remote extra toolchains rbe java all optional uses jdk installed on docker to compile in case the sources are not java also add build java language version build tool java language version once bazel is released and used on rbe remove host javabase and host javatoolchain flags
| 1
|
295,507
| 25,480,121,908
|
IssuesEvent
|
2022-11-25 19:22:49
|
vegaprotocol/vega
|
https://api.github.com/repos/vegaprotocol/vega
|
closed
|
Unstable CI: `TestRestoringFromDifferentHeightsWithFullHistory`
|
tests ci dehistory
|
# Task Overview
The test `TestRestoringFromDifferentHeightsWithFullHistory` has been seen to sometimes fail in three separate ways, the first being (`TestRestoreFromFullHistorySnapshotAndProcessEvents` has also been seen to fail this way):
```
[2022-11-10T14:04:25.289Z] === RUN TestRestoringFromDifferentHeightsWithFullHistory
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.368Z INFO snapshot schema creation v3@v3.6.1/migration.go:38 OK 0001_initial.sql
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.382Z INFO snapshot schema creation v3@v3.6.1/up.go:210 goose: no migrations to run. current version: 1
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.382Z INFO root dehistory/service_test.go:447 fetching history for segment id:QmNdrVHMKW1KtyNJeZteHZJ1ka3ejY1rtT66mW3EGTWGVw
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.388Z INFO root dehistory/service_test.go:447 fetched history:{HeightFrom:1 HeightTo:1000 PreviousHistorySegmentID:}
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.418Z INFO root dehistory/service_test.go:287 creating database
[2022-11-10T14:04:25.289Z] service_test.go:288:
[2022-11-10T14:04:25.289Z] Error Trace: /jenkins/workspace/vega_develop/vega/datanode/dehistory/service_test.go:288
[2022-11-10T14:04:25.289Z] Error: Received unexpected error:
[2022-11-10T14:04:25.289Z] failed to create vega database: failed to drop database:unable to drop existing database:conn closed
[2022-11-10T14:04:25.289Z] Test: TestRestoringFromDifferentHeightsWithFullHistory
[2022-11-10T14:04:25.289Z] --- FAIL: TestRestoringFromDifferentHeightsWithFullHistory (50.69s)
```
the second:
```
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.415Z INFO snapshot schema creation v3@v3.6.1/migration.go:38 OK 0001_initial.sql
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.419Z INFO snapshot schema creation v3@v3.6.1/up.go:210 goose: no migrations to run. current version: 1
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.420Z INFO root dehistory/service_test.go:447 fetching history for segment id:QmdUrVBM71aSp8jqnAh3ttS59zGXBFstog64u3VdnC776F
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.437Z INFO root dehistory/service_test.go:447 fetched history:{HeightFrom:2001 HeightTo:3000 PreviousHistorySegmentID:QmbG5YMPbhnVQUyaFXsVMwTWbbBoL88ahTuAVLTYr5jJAJ}
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.498Z INFO root dehistory/service_test.go:287 creating database
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:15.002Z INFO root dehistory/service_test.go:287 creating schema
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:18.997Z INFO snapshot schema creation v3@v3.6.1/migration.go:38 OK 0001_initial.sql
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:19.005Z INFO snapshot schema creation v3@v3.6.1/up.go:210 goose: no migrations to run. current version: 1
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:19.151Z INFO root dehistory/service.go:174 preparing for bulk load
[2022-11-08T18:32:17.654Z] service_test.go:288:
[2022-11-08T18:32:17.654Z] Error Trace: /jenkins/workspace/vega_master/vega/datanode/dehistory/service_test.go:288
[2022-11-08T18:32:17.654Z] Error: Received unexpected error:
[2022-11-08T18:32:17.654Z] failed to load snapshot data:failed to prepare database for bulk load: failed to execute drop constrain ALTER TABLE public.reward_scores DROP CONSTRAINT reward_scores_pkey;: unexpected EOF
[2022-11-08T18:32:17.654Z] Test: TestRestoringFromDifferentHeightsWithFullHistory
[2022-11-08T18:32:17.654Z] --- FAIL: TestRestoringFromDifferentHeightsWithFullHistory (42.17s)
```
the third:
```
[2022-11-10T12:24:06.692Z] 2022-11-10T12:16:43.185Z INFO root snapshot/service_load_snapshot.go:54 copying data into database {"database version": 1}
[2022-11-10T12:24:06.692Z] 2022-11-10T12:16:43.286Z INFO root snapshot/service_load_snapshot.go:58 copying testnet-1-1000-historysnapshot into database
[2022-11-10T12:24:06.692Z] service_test.go:288:
[2022-11-10T12:24:06.692Z] Error Trace: /jenkins/workspace/vega_PR-6774/vega/datanode/dehistory/service_test.go:288
[2022-11-10T12:24:06.692Z] Error: Received unexpected error:
[2022-11-10T12:24:06.692Z] failed to load snapshot data:failed to load history snapshot {History Snapshot for Chain ID:testnet Height From:1 Height To:1000}: failed to copy uncompressed data into the database testnet-1-1000-historysnapshot : failed to disable triggers, setting session replication role to replica failed:write failed: write unix @->/tmp/84f41c65-f7f3-4c69-9b15-37b5c835799b3005738789/sqlstore/.s.PGSQL.5432: write: broken pipe
[2022-11-10T12:24:06.692Z] Test: TestRestoringFromDifferentHeightsWithFullHistory
[2022-11-10T12:24:06.692Z] --- FAIL: TestRestoringFromDifferentHeightsWithFullHistory (5.90s)
```
## Specs
- [Link](xyz) to spec or milestone document info for the feature
# Acceptance Criteria
How do we know when this technical task is complete:
- It is possible to...
- Vega is able to...
# Test Scenarios
Detailed scenarios (1-3!) that can be executed as feature tests to verify that the feature has been implemented as expected.
GIVEN (setup/context)
WHEN (action)
THEN (assertion)
See [here](https://github.com/vegaprotocol/vega/tree/develop/integration/) for more format information and examples.
# Dependencies
Links to any tickets that have a dependent relationship with this task.
# Additional Details (optional)
Any additional information including known dependencies, impacted components.
# Examples (optional)
Code snippets, API calls that could be used on dependent tasks.
# Definition of Done
>ℹ️ Not every issue will need every item checked, however, every item on this list should be properly considered and actioned to meet the [DoD](https://github.com/vegaprotocol/vega/blob/develop/DEFINITION_OF_DONE.md).
**Before Merging**
- [ ] Create relevant for [system-test](https://github.com/vegaprotocol/system-tests/issues) tickets with feature labels
- [ ] Code refactored to meet SOLID and other code design principles
- [ ] Code is compilation error, warning, and hint free
- [ ] Carry out a basic happy path end-to-end check of the new code
- [ ] All acceptance criteria confirmed to be met, or, reasons why not discussed with the engineering leadership team
- [ ] All APIs are documented so auto-generated documentation is created
- [ ] All Unit, Integration and BVT tests are passing
- [ ] Implementation is peer reviewed (coding standards, meeting acceptance criteria, code/design quality)
- [ ] Create [front end](https://github.com/vegaprotocol/token-frontend/issues) or [console](https://github.com/vegaprotocol/console/issues) tickets with feature labels (should be done when starting the work if dependencies known i.e. API changes)
**After Merging**
- [ ] Move development ticket to `Done` if there is **NO** requirement for new system-tests
- [ ] Resolve any issues with broken system-tests
- [ ] Create [documentation](https://github.com/vegaprotocol/documentation/issues) tickets with feature labels if functionality has changed, or is a new feature
|
1.0
|
Unstable CI: `TestRestoringFromDifferentHeightsWithFullHistory` - # Task Overview
The test `TestRestoringFromDifferentHeightsWithFullHistory` has been seen to sometimes fail in three separate ways, the first being (`TestRestoreFromFullHistorySnapshotAndProcessEvents` has also been seen to fail this way):
```
[2022-11-10T14:04:25.289Z] === RUN TestRestoringFromDifferentHeightsWithFullHistory
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.368Z INFO snapshot schema creation v3@v3.6.1/migration.go:38 OK 0001_initial.sql
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.382Z INFO snapshot schema creation v3@v3.6.1/up.go:210 goose: no migrations to run. current version: 1
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.382Z INFO root dehistory/service_test.go:447 fetching history for segment id:QmNdrVHMKW1KtyNJeZteHZJ1ka3ejY1rtT66mW3EGTWGVw
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.388Z INFO root dehistory/service_test.go:447 fetched history:{HeightFrom:1 HeightTo:1000 PreviousHistorySegmentID:}
[2022-11-10T14:04:25.289Z] 2022-11-10T13:54:01.418Z INFO root dehistory/service_test.go:287 creating database
[2022-11-10T14:04:25.289Z] service_test.go:288:
[2022-11-10T14:04:25.289Z] Error Trace: /jenkins/workspace/vega_develop/vega/datanode/dehistory/service_test.go:288
[2022-11-10T14:04:25.289Z] Error: Received unexpected error:
[2022-11-10T14:04:25.289Z] failed to create vega database: failed to drop database:unable to drop existing database:conn closed
[2022-11-10T14:04:25.289Z] Test: TestRestoringFromDifferentHeightsWithFullHistory
[2022-11-10T14:04:25.289Z] --- FAIL: TestRestoringFromDifferentHeightsWithFullHistory (50.69s)
```
the second:
```
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.415Z INFO snapshot schema creation v3@v3.6.1/migration.go:38 OK 0001_initial.sql
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.419Z INFO snapshot schema creation v3@v3.6.1/up.go:210 goose: no migrations to run. current version: 1
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.420Z INFO root dehistory/service_test.go:447 fetching history for segment id:QmdUrVBM71aSp8jqnAh3ttS59zGXBFstog64u3VdnC776F
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.437Z INFO root dehistory/service_test.go:447 fetched history:{HeightFrom:2001 HeightTo:3000 PreviousHistorySegmentID:QmbG5YMPbhnVQUyaFXsVMwTWbbBoL88ahTuAVLTYr5jJAJ}
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:14.498Z INFO root dehistory/service_test.go:287 creating database
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:15.002Z INFO root dehistory/service_test.go:287 creating schema
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:18.997Z INFO snapshot schema creation v3@v3.6.1/migration.go:38 OK 0001_initial.sql
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:19.005Z INFO snapshot schema creation v3@v3.6.1/up.go:210 goose: no migrations to run. current version: 1
[2022-11-08T18:32:17.654Z] 2022-11-08T18:22:19.151Z INFO root dehistory/service.go:174 preparing for bulk load
[2022-11-08T18:32:17.654Z] service_test.go:288:
[2022-11-08T18:32:17.654Z] Error Trace: /jenkins/workspace/vega_master/vega/datanode/dehistory/service_test.go:288
[2022-11-08T18:32:17.654Z] Error: Received unexpected error:
[2022-11-08T18:32:17.654Z] failed to load snapshot data:failed to prepare database for bulk load: failed to execute drop constrain ALTER TABLE public.reward_scores DROP CONSTRAINT reward_scores_pkey;: unexpected EOF
[2022-11-08T18:32:17.654Z] Test: TestRestoringFromDifferentHeightsWithFullHistory
[2022-11-08T18:32:17.654Z] --- FAIL: TestRestoringFromDifferentHeightsWithFullHistory (42.17s)
```
the third:
```
[2022-11-10T12:24:06.692Z] 2022-11-10T12:16:43.185Z INFO root snapshot/service_load_snapshot.go:54 copying data into database {"database version": 1}
[2022-11-10T12:24:06.692Z] 2022-11-10T12:16:43.286Z INFO root snapshot/service_load_snapshot.go:58 copying testnet-1-1000-historysnapshot into database
[2022-11-10T12:24:06.692Z] service_test.go:288:
[2022-11-10T12:24:06.692Z] Error Trace: /jenkins/workspace/vega_PR-6774/vega/datanode/dehistory/service_test.go:288
[2022-11-10T12:24:06.692Z] Error: Received unexpected error:
[2022-11-10T12:24:06.692Z] failed to load snapshot data:failed to load history snapshot {History Snapshot for Chain ID:testnet Height From:1 Height To:1000}: failed to copy uncompressed data into the database testnet-1-1000-historysnapshot : failed to disable triggers, setting session replication role to replica failed:write failed: write unix @->/tmp/84f41c65-f7f3-4c69-9b15-37b5c835799b3005738789/sqlstore/.s.PGSQL.5432: write: broken pipe
[2022-11-10T12:24:06.692Z] Test: TestRestoringFromDifferentHeightsWithFullHistory
[2022-11-10T12:24:06.692Z] --- FAIL: TestRestoringFromDifferentHeightsWithFullHistory (5.90s)
```
## Specs
- [Link](xyz) to spec or milestone document info for the feature
# Acceptance Criteria
How do we know when this technical task is complete:
- It is possible to...
- Vega is able to...
# Test Scenarios
Detailed scenarios (1-3!) that can be executed as feature tests to verify that the feature has been implemented as expected.
GIVEN (setup/context)
WHEN (action)
THEN (assertion)
See [here](https://github.com/vegaprotocol/vega/tree/develop/integration/) for more format information and examples.
# Dependencies
Links to any tickets that have a dependent relationship with this task.
# Additional Details (optional)
Any additional information including known dependencies, impacted components.
# Examples (optional)
Code snippets, API calls that could be used on dependent tasks.
# Definition of Done
>ℹ️ Not every issue will need every item checked, however, every item on this list should be properly considered and actioned to meet the [DoD](https://github.com/vegaprotocol/vega/blob/develop/DEFINITION_OF_DONE.md).
**Before Merging**
- [ ] Create relevant for [system-test](https://github.com/vegaprotocol/system-tests/issues) tickets with feature labels
- [ ] Code refactored to meet SOLID and other code design principles
- [ ] Code is compilation error, warning, and hint free
- [ ] Carry out a basic happy path end-to-end check of the new code
- [ ] All acceptance criteria confirmed to be met, or, reasons why not discussed with the engineering leadership team
- [ ] All APIs are documented so auto-generated documentation is created
- [ ] All Unit, Integration and BVT tests are passing
- [ ] Implementation is peer reviewed (coding standards, meeting acceptance criteria, code/design quality)
- [ ] Create [front end](https://github.com/vegaprotocol/token-frontend/issues) or [console](https://github.com/vegaprotocol/console/issues) tickets with feature labels (should be done when starting the work if dependencies known i.e. API changes)
**After Merging**
- [ ] Move development ticket to `Done` if there is **NO** requirement for new system-tests
- [ ] Resolve any issues with broken system-tests
- [ ] Create [documentation](https://github.com/vegaprotocol/documentation/issues) tickets with feature labels if functionality has changed, or is a new feature
|
non_process
|
unstable ci testrestoringfromdifferentheightswithfullhistory task overview the test testrestoringfromdifferentheightswithfullhistory has been seen to sometimes fail in three separate ways the first being testrestorefromfullhistorysnapshotandprocessevents has also been seen to fail this way run testrestoringfromdifferentheightswithfullhistory info snapshot schema creation migration go ok initial sql info snapshot schema creation up go goose no migrations to run current version info root dehistory service test go fetching history for segment id info root dehistory service test go fetched history heightfrom heightto previoushistorysegmentid info root dehistory service test go creating database service test go error trace jenkins workspace vega develop vega datanode dehistory service test go error received unexpected error failed to create vega database failed to drop database unable to drop existing database conn closed test testrestoringfromdifferentheightswithfullhistory fail testrestoringfromdifferentheightswithfullhistory the second info snapshot schema creation migration go ok initial sql info snapshot schema creation up go goose no migrations to run current version info root dehistory service test go fetching history for segment id info root dehistory service test go fetched history heightfrom heightto previoushistorysegmentid info root dehistory service test go creating database info root dehistory service test go creating schema info snapshot schema creation migration go ok initial sql info snapshot schema creation up go goose no migrations to run current version info root dehistory service go preparing for bulk load service test go error trace jenkins workspace vega master vega datanode dehistory service test go error received unexpected error failed to load snapshot data failed to prepare database for bulk load failed to execute drop constrain alter table public reward scores drop constraint reward scores pkey unexpected eof test testrestoringfromdifferentheightswithfullhistory fail testrestoringfromdifferentheightswithfullhistory the third info root snapshot service load snapshot go copying data into database database version info root snapshot service load snapshot go copying testnet historysnapshot into database service test go error trace jenkins workspace vega pr vega datanode dehistory service test go error received unexpected error failed to load snapshot data failed to load history snapshot history snapshot for chain id testnet height from height to failed to copy uncompressed data into the database testnet historysnapshot failed to disable triggers setting session replication role to replica failed write failed write unix tmp sqlstore s pgsql write broken pipe test testrestoringfromdifferentheightswithfullhistory fail testrestoringfromdifferentheightswithfullhistory specs xyz to spec or milestone document info for the feature acceptance criteria how do we know when this technical task is complete it is possible to vega is able to test scenarios detailed scenarios that can be executed as feature tests to verify that the feature has been implemented as expected given setup context when action then assertion see for more format information and examples dependencies links to any tickets that have a dependant relationship witht his task additional details optional any additional information including known dependencies impacted components examples optional code snippets api calls that could be used on dependant tasks definition of done ℹ️ not every issue will need every item checked however every item on this list should be properly considered and actioned to meet the before merging create relevant for tickets with feature labels code refactored to meet solid and other code design principles code is compilation error warning and hint free carry out a basic happy path end to end check of the new code all acceptance criteria confirmed to be met or reasons why not discussed with the engineering leadership team all apis are documented so auto generated documentation is created all unit integration and bvt tests are passing implementation is peer reviewed coding standards meeting acceptance criteria code design quality create or tickets with feature labels should be done when starting the work if dependencies known i e api changes after merging move development ticket to done if there is no requirement for new system tests resolve any issues with broken system tests create tickets with feature labels if functionality has changed or is a new feature
| 0
|
10,498
| 13,259,944,972
|
IssuesEvent
|
2020-08-20 17:27:14
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE] Zoom in/out and fit items to view actions for the modeler (#3939)
|
3.0 Automatic new feature Graphical modeler Processing ToDocOrNotToDoc?
|
Original commit: https://github.com/qgis/QGIS/commit/56d5a375a1a6def850753d241647e12c675961c7 by web-flow
Unfortunately this naughty coder did not write a description... :-(
|
1.0
|
[FEATURE] Zoom in/out and fit items to view actions for the modeler (#3939) - Original commit: https://github.com/qgis/QGIS/commit/56d5a375a1a6def850753d241647e12c675961c7 by web-flow
Unfortunately this naughty coder did not write a description... :-(
|
process
|
zoom in out and fit items to view actions for the modeler original commit by web flow unfortunately this naughty coder did not write a description
| 1
|
17,419
| 23,240,774,572
|
IssuesEvent
|
2022-08-03 15:24:04
|
dtcenter/MET
|
https://api.github.com/repos/dtcenter/MET
|
closed
|
Add "station_ob" to metadata_map as a message_type metadata variable for ioda2nc
|
type: enhancement priority: medium requestor: Community MET: PreProcessing Tools (Point)
|
*Replace italics below with details for this issue.*
## Describe the Enhancement ##
*Provide a description of the enhancement request here.*
From https://github.com/dtcenter/METplus/discussions/1705
The IODA input has "station_ob@MetaData" variable instead of msg_tyoe@MetaData variable which contains the message types.
The name of the message type variable is configured at metadata_map ("message_type" key). The current setting is "msg_type". It will be convenient adding "station_ob" into metadata_map for message_type.
The sample IODA file exists at seneca:/d1/personal/kalb/ioda/raob_all_v1_20201215T1200Z.nc4
### Time Estimate ###
*Estimate the amount of work required here.*
*Issues should represent approximately 1 to 3 days of work.*
4 hours
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
- [ ] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
2799992 - NRL
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
- [ ] Select **requestor(s)**
### Projects and Milestone ###
- [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [ ] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
1.0
|
Add "station_ob" to metadata_map as a message_type metadata variable for ioda2nc - *Replace italics below with details for this issue.*
## Describe the Enhancement ##
*Provide a description of the enhancement request here.*
From https://github.com/dtcenter/METplus/discussions/1705
The IODA input has "station_ob@MetaData" variable instead of msg_tyoe@MetaData variable which contains the message types.
The name of the message type variable is configured at metadata_map ("message_type" key). The current setting is "msg_type". It will be convenient adding "station_ob" into metadata_map for message_type.
The sample IODA file exists at seneca:/d1/personal/kalb/ioda/raob_all_v1_20201215T1200Z.nc4
### Time Estimate ###
*Estimate the amount of work required here.*
*Issues should represent approximately 1 to 3 days of work.*
4 hours
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
- [ ] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
2799992 - NRL
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
- [ ] Select **requestor(s)**
### Projects and Milestone ###
- [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [ ] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
process
|
add station ob to metadata map as a message type metadata variable for replace italics below with details for this issue describe the enhancement provide a description of the enhancement request here from the ioda input has station ob metadata variable instead of msg tyoe metadata variable which contains the message types the name of the message type variable is configured at metadata map message type key the current setting is msg type it will be convenient adding station ob into metadata map for message type the sample ioda file exists at seneca personal kalb ioda raob all time estimate estimate the amount of work required here issues should represent approximately to days of work hours sub issues consider breaking the enhancement down into sub issues add a checkbox for each sub issue here relevant deadlines list relevant project deadlines here or state none funding source nrl define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
| 1
|
109,584
| 4,390,265,786
|
IssuesEvent
|
2016-08-09 02:17:09
|
aodn/aatams
|
https://api.github.com/repos/aodn/aatams
|
opened
|
Error when sorting the Activity list
|
bug medium priority
|
Observed on the production web app
<h3> Steps to reproduce <h3>
Go to the [Animal Tracking web app] (animaltracking.aodn.org.au) and log in.
Then click on the activity button on the top right corner of the page or use [this link] (https://animaltracking.aodn.org.au/auditLogEvent/list) for direct access. Once the list is displayed, click on any of the first three column headers.
<h3> What happens? <h3>
The web app returns the error below.

<h3> What should happen? <h3>
No error should be returned. It's worth noting that clicking on the last column header to sort by `User` doesn't return an error.
|
1.0
|
Error when sorting the Activity list - Observed on the production web app
<h3> Steps to reproduce <h3>
Go to the [Animal Tracking web app] (animaltracking.aodn.org.au) and log in.
Then click on the activity button on the top right corner of the page or use [this link] (https://animaltracking.aodn.org.au/auditLogEvent/list) for direct access. Once the list is displayed, click on any of the first three column headers.
<h3> What happens? <h3>
The web app returns the error below.

<h3> What should happen? <h3>
No error should be returned. It's worth noting that clicking on the last column header to sort by `User` doesn't return an error.
|
non_process
|
error when sorting the activity list observed on the production web app steps to reproduce go to the animaltracking aodn org au and log in then click on the activity button on the top right corner of the page or use for direct access once the list is displayed click on any of the first three column headers what happens the web app returns the error below what should happen no error should be returned it s worth noting that clicking on the last column header to sort by user doesn t return an error
| 0
|
204,743
| 15,949,199,782
|
IssuesEvent
|
2021-04-15 07:05:43
|
fangwei123456/spikingjelly
|
https://api.github.com/repos/fangwei123456/spikingjelly
|
closed
|
How to train Neuromorphic Datasets?
|
documentation enhancement good first issue
|
Will spikingjelly have a tutorial train with Neuromorphic Datasets like N-MNIST, special about DVS128 Gesture,...?
Thanks for your time!
|
1.0
|
How to train Neuromorphic Datasets? - Will spikingjelly have a tutorial train with Neuromorphic Datasets like N-MNIST, special about DVS128 Gesture,...?
Thanks for your time!
|
non_process
|
how to train neuromorphic datasets will spikingjelly have a tutorial train with neuromorphic datasets like n mnist special about gesture thanks for your time
| 0
|
474,439
| 13,670,138,386
|
IssuesEvent
|
2020-09-29 03:55:19
|
sirmammingtonham/smartrider
|
https://api.github.com/repos/sirmammingtonham/smartrider
|
opened
|
💡 Feature Request: Welcome Tour
|
enhancement low-priority
|
**Is your feature request related to a problem? Please describe.**
App features aren't always clear on first use.
**Describe the solution you'd like**
Create a material app tour on first sign-in.
See https://material.io/archive/guidelines/growth-communications/feature-discovery.html#
Walk through all features like calling safe ride, seeing schedules, tapping on schedule to zoom in on the map, profile page, settings, etc, and allow users to skip.
Also prompt users to allow location access. Most people just click cancel on any popups if they aren't ready for it lol
**Additional context**
Low priority
|
1.0
|
💡 Feature Request: Welcome Tour - **Is your feature request related to a problem? Please describe.**
App features aren't always clear on first use.
**Describe the solution you'd like**
Create a material app tour on first sign-in.
See https://material.io/archive/guidelines/growth-communications/feature-discovery.html#
Walk through all features like calling safe ride, seeing schedules, tapping on schedule to zoom in on the map, profile page, settings, etc, and allow users to skip.
Also prompt users to allow location access. Most people just click cancel on any popups if they aren't ready for it lol
**Additional context**
Low priority
|
non_process
|
💡 feature request welcome tour is your feature request related to a problem please describe app features aren t always clear on first use describe the solution you d like create a material app tour on first sign in see walk through all features like calling safe ride seeing schedules tapping on schedule to zoom in on the map profile page settings etc and allow users to skip also prompt users to allow location access most people just click cancel on any popups if they aren t ready for it lol additional context low priority
| 0
|
287,487
| 8,816,033,865
|
IssuesEvent
|
2018-12-30 03:52:16
|
PCSX2/pcsx2
|
https://api.github.com/repos/PCSX2/pcsx2
|
opened
|
GSdx: Custom resolution discussion (drop or keep)
|
GSdx: Hardware High Priority Question / Discussion
|
Currently custom resolution is in a bad state, aside from the improper aliasing it causes because of improper scaling there are some major features that don't work with custom resolution.
Major features being such as Depth, Channel Shuffle, 8bit shader, all of which rely on upscaling multiplier to do the math in the shaders. This was less of an issue in the past because d3d11 was less accurate and we also had d3d9, but with the current fancy new features d3d11 got from gl and the drop of d3d9 this renders custom resolution basically useless. Some games will suffer more while others less from the issues however it's becoming quite an issue.
My proposal is we remove the gui option from Windows just like Gregory did on Linux. We can leave the ini options hoping someone will fix it in the future however I don't want 1.6 to be released with a broken feature like that that causes some major issues.
|
1.0
|
GSdx: Custom resolution discussion (drop or keep) - Currently custom resolution is in a bad state, aside from the improper aliasing it causes because of improper scaling there are some major features that don't work with custom resolution.
Major features being such as Depth, Channel Shuffle, 8bit shader, all of which rely on upscaling multiplier to do the math in the shaders. This was less of an issue in the past because d3d11 was less accurate and we also had d3d9, but with the current fancy new features d3d11 got from gl and the drop of d3d9 this renders custom resolution basically useless. Some games will suffer more while others less from the issues however it's becoming quite an issue.
My proposal is we remove the gui option from Windows just like Gregory did on Linux. We can leave the ini options hoping someone will fix it in the future however I don't want 1.6 to be released with a broken feature like that that causes some major issues.
|
non_process
|
gsdx custom resolution discussion drop or keep currently custom resolution is in a bad state aside from the improper aliasing it causes because of improper scaling there are some major features that don t work with custom resolution major features being such as depth channel shuffle shader all of which rely on upscaling multiplier to do the math in the shaders this was less of an issue in the past because was less accurate and we also had but with the current fancy new features got from gl and the drop of this renders custom resolution basically useless some games will suffer more while others less from the issues however it s becoming quite an issue my proposal is we remove the gui option from windows just like gregory did on linux we can leave the ini options hoping someone will fix it in the future however i don t want to be released with a broken feature like that that causes some major issues
| 0
|
3,086
| 6,101,721,305
|
IssuesEvent
|
2017-06-20 15:07:26
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Keyref text resolution not using link text unless on link element
|
bug P1 preprocess/keyref
|
Based on clarifications to the keyref processing rules in DITA 1.3 (content was there in 1.2 but difficult to interpret):
http://docs.oasis-open.org/dita/dita/v1.3/errata01/os/complete/part1-base/archSpec/base/processing-keyref-for-text.html#processing_key_references
For _any element_, text can be pulled from `<linktext>`, which makes that a good default way to specify text with a key (number five in the list of rules). Otherwise (rule 6), normal rules apply as they would for xref, such as pulling from `<navtitle>`.
I've tested this with traditionally non-linking elements `<cite>` and `<keyword>`, then with `<xref>` and `<link>`. I tried keys that place text in `<keyword>`, `<linktext>`>, `@navtitle`, and `<navtitle>`. Results:
- For non-linking elements, text is only pulled from `<keyword>`. If a link target is added, references to the other three will use the URI as link text.
- For `<xref>`, text is pulled from the first three but not from `<navtitle>`. If a link target is added, the URI is used for that case.
- For `<link>`, cases without a link target are removed. With a link target, results match `<xref>`.
Including `test.ditamap` and `cit-test.dita`:
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN"
"map.dtd">
<map xml:lang="en-us">
<keydef keys="citkeyword"><topicmeta><keywords><keyword>text in keyword</keyword></keywords></topicmeta></keydef>
<keydef keys="citlinktext"><topicmeta><linktext>text in linktext</linktext></topicmeta></keydef>
<keydef keys="citnavatt" navtitle="navtitle attribute"/>
<keydef keys="citnavel"><topicmeta><navtitle>navtitle element</navtitle></topicmeta></keydef>
<keydef keys="xrefkeyword" href="http://www.dita-ot.org" format="html" scope="external">
<topicmeta><keywords><keyword>text in keyword</keyword></keywords></topicmeta>
</keydef>
<keydef keys="xreflinktext" href="http://www.dita-ot.org" format="html" scope="external">
<topicmeta><linktext>text in linktext</linktext></topicmeta>
</keydef>
<keydef keys="xrefnavatt" navtitle="navtitle attribute" href="http://www.dita-ot.org" format="html" scope="external"/>
<keydef keys="xrefnavel" href="http://www.dita-ot.org" format="html" scope="external">
<topicmeta><navtitle>navtitle element</navtitle></topicmeta>
</keydef>
<topicref href="cit-test.dita"/>
</map>
```
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN"
"task.dtd">
<task id="cites" xml:lang="en-us"><?Pub Caret?>
<title>testing some citations</title>
<taskbody>
<context>
<ul>
<li><cite keyref="citkeyword"/></li>
<li><cite keyref="citlinktext"/></li>
<li><cite keyref="citnavatt"/></li>
<li><cite keyref="citnavel"/></li>
</ul>
<p>Repeat as citations that have an external link target, dita-ot.org:</p>
<ul>
<li><cite keyref="xrefkeyword"/></li>
<li><cite keyref="xreflinktext"/></li>
<li><cite keyref="xrefnavatt"/></li>
<li><cite keyref="xrefnavel"/></li>
</ul>
<p>Now try with keyword</p>
<ul>
<li><keyword keyref="citkeyword"/></li>
<li><keyword keyref="citlinktext"/></li>
<li><keyword keyref="citnavatt"/></li>
<li><keyword keyref="citnavel"/></li>
</ul>
<p>Repeat as keywords that have an external link target, dita-ot.org:</p>
<ul>
<li><keyword keyref="xrefkeyword"/></li>
<li><keyword keyref="xreflinktext"/></li>
<li><keyword keyref="xrefnavatt"/></li>
<li><keyword keyref="xrefnavel"/></li>
</ul>
<p>Repeat as cross references:</p>
<ul>
<li><xref keyref="citkeyword"/></li>
<li><xref keyref="citlinktext"/></li>
<li><xref keyref="citnavatt"/></li>
<li><xref keyref="citnavel"/></li>
</ul>
<p>Repeat as citations that have an external link target, dita-ot.org:</p>
<ul>
<li><xref keyref="xrefkeyword"/></li>
<li><xref keyref="xreflinktext"/></li>
<li><xref keyref="xrefnavatt"/></li>
<li><xref keyref="xrefnavel"/></li>
</ul>
</context>
</taskbody>
<related-links>
<link keyref="citkeyword"/>
<link keyref="citlinktext"/>
<link keyref="citnavatt"/>
<link keyref="citnavel"/>
<link keyref="xrefkeyword"/>
<link keyref="xreflinktext"/>
<link keyref="xrefnavatt"/>
<link keyref="xrefnavel"/>
</related-links>
</task>
```
|
1.0
|
Keyref text resolution not using link text unless on link element - Based on clarifications to the keyref processing rules in DITA 1.3 (content was there in 1.2 but difficult to interpret):
http://docs.oasis-open.org/dita/dita/v1.3/errata01/os/complete/part1-base/archSpec/base/processing-keyref-for-text.html#processing_key_references
For _any element_, text can be pulled from `<linktext>`, which makes that a good default way to specify text with a key (number five in the list of rules). Otherwise (rule 6), normal rules apply as they would for xref, such as pulling from `<navtitle>`.
I've tested this with traditionally non-linking elements `<cite>` and `<keyword>`, then with `<xref>` and `<link>`. I tried keys that place text in `<keyword>`, `<linktext>`>, `@navtitle`, and `<navtitle>`. Results:
- For non-linking elements, text is only pulled from `<keyword>`. If a link target is added, references to the other three will use the URI as link text.
- For `<xref>`, text is pulled from the first three but not from `<navtitle>`. If a link target is added, the URI is used for that case.
- For `<link>`, cases without a link target are removed. With a link target, results match `<xref>`.
Including `test.ditamap` and `cit-test.dita`:
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN"
"map.dtd">
<map xml:lang="en-us">
<keydef keys="citkeyword"><topicmeta><keywords><keyword>text in keyword</keyword></keywords></topicmeta></keydef>
<keydef keys="citlinktext"><topicmeta><linktext>text in linktext</linktext></topicmeta></keydef>
<keydef keys="citnavatt" navtitle="navtitle attribute"/>
<keydef keys="citnavel"><topicmeta><navtitle>navtitle element</navtitle></topicmeta></keydef>
<keydef keys="xrefkeyword" href="http://www.dita-ot.org" format="html" scope="external">
<topicmeta><keywords><keyword>text in keyword</keyword></keywords></topicmeta>
</keydef>
<keydef keys="xreflinktext" href="http://www.dita-ot.org" format="html" scope="external">
<topicmeta><linktext>text in linktext</linktext></topicmeta>
</keydef>
<keydef keys="xrefnavatt" navtitle="navtitle attribute" href="http://www.dita-ot.org" format="html" scope="external"/>
<keydef keys="xrefnavel" href="http://www.dita-ot.org" format="html" scope="external">
<topicmeta><navtitle>navtitle element</navtitle></topicmeta>
</keydef>
<topicref href="cit-test.dita"/>
</map>
```
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN"
"task.dtd">
<task id="cites" xml:lang="en-us"><?Pub Caret?>
<title>testing some citations</title>
<taskbody>
<context>
<ul>
<li><cite keyref="citkeyword"/></li>
<li><cite keyref="citlinktext"/></li>
<li><cite keyref="citnavatt"/></li>
<li><cite keyref="citnavel"/></li>
</ul>
<p>Repeat as citations that have an external link target, dita-ot.org:</p>
<ul>
<li><cite keyref="xrefkeyword"/></li>
<li><cite keyref="xreflinktext"/></li>
<li><cite keyref="xrefnavatt"/></li>
<li><cite keyref="xrefnavel"/></li>
</ul>
<p>Now try with keyword</p>
<ul>
<li><keyword keyref="citkeyword"/></li>
<li><keyword keyref="citlinktext"/></li>
<li><keyword keyref="citnavatt"/></li>
<li><keyword keyref="citnavel"/></li>
</ul>
<p>Repeat as keywords that have an external link target, dita-ot.org:</p>
<ul>
<li><keyword keyref="xrefkeyword"/></li>
<li><keyword keyref="xreflinktext"/></li>
<li><keyword keyref="xrefnavatt"/></li>
<li><keyword keyref="xrefnavel"/></li>
</ul>
<p>Repeat as cross references:</p>
<ul>
<li><xref keyref="citkeyword"/></li>
<li><xref keyref="citlinktext"/></li>
<li><xref keyref="citnavatt"/></li>
<li><xref keyref="citnavel"/></li>
</ul>
<p>Repeat as citations that have an external link target, dita-ot.org:</p>
<ul>
<li><xref keyref="xrefkeyword"/></li>
<li><xref keyref="xreflinktext"/></li>
<li><xref keyref="xrefnavatt"/></li>
<li><xref keyref="xrefnavel"/></li>
</ul>
</context>
</taskbody>
<related-links>
<link keyref="citkeyword"/>
<link keyref="citlinktext"/>
<link keyref="citnavatt"/>
<link keyref="citnavel"/>
<link keyref="xrefkeyword"/>
<link keyref="xreflinktext"/>
<link keyref="xrefnavatt"/>
<link keyref="xrefnavel"/>
</related-links>
</task>
```
|
process
|
keyref text resolution not using link text unless on link element based on clarifications to the keyref processing rules in dita content was there in but difficult to interpret for any element text can be pulled from which makes that a good default way to specify text with a key number five in the list of rules otherwise rule normal rules apply as they would for xref such as pulling from i ve tested this with traditionally non linking elements and then with and i tried keys that place text in navtitle and results for non linking elements text is only pulled from if a link target is added references to the other three will use the uri as link text for text is pulled from the first three but not from if a link target is added the uri is used for that case for cases without a link target are removed with a link target results match including test ditamap and cit test dita xml doctype map public oasis dtd dita map en map dtd text in keyword text in linktext navtitle element text in keyword text in linktext navtitle element xml doctype task public oasis dtd dita task en task dtd testing some citations repeat as citations that have an external link target dita ot org now try with keyword repeat as keywords that have an external link target dita ot org repeat as cross references repeat as citations that have an external link target dita ot org
| 1
|
123,200
| 12,194,851,181
|
IssuesEvent
|
2020-04-29 16:25:15
|
kythe/kythe
|
https://api.github.com/repos/kythe/kythe
|
opened
|
extracting //kythe/cxx/... fails
|
C++ documentation extraction
|
In https://kythe.io/examples/#extracting-the-kythe-repository we tell people they can run `bazel build -k \
--experimental_action_listener=@io_kythe//kythe/extractors:extract_kzip_java \
--experimental_action_listener=@io_kythe//kythe/extractors:extract_kzip_cxx \
--experimental_extra_action_top_level_only \
//kythe/cxx/... //kythe/java/...` but this doesn't work any longer.
For example:
```
$ bazel build --experimental_action_listener=@io_kythe//kythe/extractors:extract_kzip_cxx --experimental_extra_action_top_level_only //kythe/cxx/indexer/cxx/testdata/proto/...
INFO: Build option --experimental_action_listener has changed, discarding analysis cache.
INFO: Analyzed 24 targets (0 packages loaded, 13818 targets configured).
INFO: Found 24 targets...
ERROR: .../kythe/kythe/testdata/indexers/proto/BUILD:39:1: C++ compilation of rule '//kythe/testdata/indexers/proto:testdata4a_proto' failed (Exit 1) clang failed: error executing command /usr/bin/clang -U_FORTIFY_SOURCE -fstack-protector -Wall -Wthread-safety -Wself-assign -fcolor-diagnostics -fno-omit-frame-pointer '-std=c++17' -MD -MF ... (remaining 33 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
In file included from bazel-out/k8-fastbuild/bin/kythe/testdata/indexers/proto/testdata4a.pb.cc:4:
bazel-out/k8-fastbuild/bin/kythe/testdata/indexers/proto/testdata4a.pb.h:35:10: fatal error: 'kythe/testdata/indexers/proto/testdata4c.pb.h' file not found
#include "kythe/testdata/indexers/proto/testdata4c.pb.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
ERROR: .../kythe/kythe/testdata/indexers/proto/BUILD:39:1 C++ compilation of rule '//kythe/testdata/indexers/proto:testdata4a_proto' failed (Exit 1) clang failed: error executing command /usr/bin/clang -U_FORTIFY_SOURCE -fstack-protector -Wall -Wthread-safety -Wself-assign -fcolor-diagnostics -fno-omit-frame-pointer '-std=c++17' -MD -MF ... (remaining 33 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
INFO: Elapsed time: 1.958s, Critical Path: 0.75s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
```
|
1.0
|
extracting //kythe/cxx/... fails - In https://kythe.io/examples/#extracting-the-kythe-repository we tell people they can run `bazel build -k \
--experimental_action_listener=@io_kythe//kythe/extractors:extract_kzip_java \
--experimental_action_listener=@io_kythe//kythe/extractors:extract_kzip_cxx \
--experimental_extra_action_top_level_only \
//kythe/cxx/... //kythe/java/...` but this doesn't work any longer.
For example:
```
$ bazel build --experimental_action_listener=@io_kythe//kythe/extractors:extract_kzip_cxx --experimental_extra_action_top_level_only //kythe/cxx/indexer/cxx/testdata/proto/...
INFO: Build option --experimental_action_listener has changed, discarding analysis cache.
INFO: Analyzed 24 targets (0 packages loaded, 13818 targets configured).
INFO: Found 24 targets...
ERROR: .../kythe/kythe/testdata/indexers/proto/BUILD:39:1: C++ compilation of rule '//kythe/testdata/indexers/proto:testdata4a_proto' failed (Exit 1) clang failed: error executing command /usr/bin/clang -U_FORTIFY_SOURCE -fstack-protector -Wall -Wthread-safety -Wself-assign -fcolor-diagnostics -fno-omit-frame-pointer '-std=c++17' -MD -MF ... (remaining 33 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
In file included from bazel-out/k8-fastbuild/bin/kythe/testdata/indexers/proto/testdata4a.pb.cc:4:
bazel-out/k8-fastbuild/bin/kythe/testdata/indexers/proto/testdata4a.pb.h:35:10: fatal error: 'kythe/testdata/indexers/proto/testdata4c.pb.h' file not found
#include "kythe/testdata/indexers/proto/testdata4c.pb.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
ERROR: .../kythe/kythe/testdata/indexers/proto/BUILD:39:1 C++ compilation of rule '//kythe/testdata/indexers/proto:testdata4a_proto' failed (Exit 1) clang failed: error executing command /usr/bin/clang -U_FORTIFY_SOURCE -fstack-protector -Wall -Wthread-safety -Wself-assign -fcolor-diagnostics -fno-omit-frame-pointer '-std=c++17' -MD -MF ... (remaining 33 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
INFO: Elapsed time: 1.958s, Critical Path: 0.75s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
```
|
non_process
|
extracting kythe cxx fails in we tell people they can run bazel build k experimental action listener io kythe kythe extractors extract kzip java experimental action listener io kythe kythe extractors extract kzip cxx experimental extra action top level only kythe cxx kythe java but this doesn t work any longer for example bazel build experimental action listener io kythe kythe extractors extract kzip cxx experimental extra action top level only kythe cxx indexer cxx testdata proto info build option experimental action listener has changed discarding analysis cache info analyzed targets packages loaded targets configured info found targets error kythe kythe testdata indexers proto build c compilation of rule kythe testdata indexers proto proto failed exit clang failed error executing command usr bin clang u fortify source fstack protector wall wthread safety wself assign fcolor diagnostics fno omit frame pointer std c md mf remaining argument s skipped use sandbox debug to see verbose messages from the sandbox in file included from bazel out fastbuild bin kythe testdata indexers proto pb cc bazel out fastbuild bin kythe testdata indexers proto pb h fatal error kythe testdata indexers proto pb h file not found include kythe testdata indexers proto pb h error generated error kythe kythe testdata indexers proto build c compilation of rule kythe testdata indexers proto proto failed exit clang failed error executing command usr bin clang u fortify source fstack protector wall wthread safety wself assign fcolor diagnostics fno omit frame pointer std c md mf remaining argument s skipped use sandbox debug to see verbose messages from the sandbox info elapsed time critical path info processes failed build did not complete successfully
| 0
|
8,284
| 11,448,776,078
|
IssuesEvent
|
2020-02-06 04:48:28
|
kubeflow/kubeflow
|
https://api.github.com/repos/kubeflow/kubeflow
|
closed
|
v0.7.0 Release Checklist
|
area/engprod kind/process lifecycle/stale priority/p0
|
P0 issues list: https://github.com/issues?utf8=%E2%9C%93&q=org%3Akubeflow+label%3Apriority%2Fp0+project%3Akubeflow%2F22+is%3Aopen+
- [x] App instance manifests (https://github.com/kubeflow/manifests/issues/489)
- [x] Cut release branch for kubeflow/manifests
- [x] Merge Kfctl upgrades PR
- [x] Migrate kftcl to kubeflow/kfctl
- [x] Cut release branch for kubeflow/kubeflow
- [x] Cut release branch for kubeflow/kfctl
- [x] Make sure periodic tests are running against release branches
- [x] Create initial RC
|
1.0
|
v0.7.0 Release Checklist - P0 issues list: https://github.com/issues?utf8=%E2%9C%93&q=org%3Akubeflow+label%3Apriority%2Fp0+project%3Akubeflow%2F22+is%3Aopen+
- [x] App instance manifests (https://github.com/kubeflow/manifests/issues/489)
- [x] Cut release branch for kubeflow/manifests
- [x] Merge Kfctl upgrades PR
- [x] Migrate kftcl to kubeflow/kfctl
- [x] Cut release branch for kubeflow/kubeflow
- [x] Cut release branch for kubeflow/kfctl
- [x] Make sure periodic tests are running against release branches
- [x] Create initial RC
|
process
|
release checklist issues list app instance manifests cut release branch for kubeflow manifests merge kfctl upgrades pr migrate kftcl to kubeflow kfctl cut release branch for kubeflow kubeflow cut release branch for kubeflow kfctl make sure periodic tests are running against release branches create initial rc
| 1
|
14,040
| 16,849,214,143
|
IssuesEvent
|
2021-06-20 06:24:48
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] Add action to model designer to delete all selected components
|
3.14 Automatic new feature Graphical modeler Processing
|
Original commit: https://github.com/qgis/QGIS/commit/1e67ee42fa95d7ea3078f346f28a5d29b77c2177 by nyalldawson
Unfortunately this naughty coder did not write a description... :-(
|
1.0
|
[FEATURE][processing] Add action to model designer to delete all selected components - Original commit: https://github.com/qgis/QGIS/commit/1e67ee42fa95d7ea3078f346f28a5d29b77c2177 by nyalldawson
Unfortunately this naughty coder did not write a description... :-(
|
process
|
add action to model designer to delete all selected components original commit by nyalldawson unfortunately this naughty coder did not write a description
| 1
|
13,721
| 16,484,813,532
|
IssuesEvent
|
2021-05-24 16:22:41
|
DSpace/dspace-angular
|
https://api.github.com/repos/DSpace/dspace-angular
|
closed
|
Cannot Import Metadata (CSV) as an Admin
|
bug e/1 high priority testathon tools: import tools:processes
|
**Describe the bug**
Cannot import a Metadata CSV while logged in as an Administrator. Seems like the same problem as reported in https://github.com/DSpace/dspace-angular/issues/1132
**To Reproduce**
Steps to reproduce the behavior:
1. Login as an Admin
2. Create a CSV to import (preferably something small. It could be one line starting with a "+" to test creating a metadata import Item)
3. In the Admin Sidebar, select "Import" -> "Metadata" and drag & drop your CSV
4. A Process kicks off, but it immediately fails with this error:
```
2021-04-27 22:12:06.662 ERROR metadata-import - 2 @ Failed to parse the arguments given to the script with name: metadata-import and args: [-e, dspacedemo+admin@gmail.com, -f, 10673-2.csv]
2021-04-27 22:12:06.663 ERROR metadata-import - 2 @ org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: -e
at org.apache.commons.cli.DefaultParser.handleUnknownToken(DefaultParser.java:360)
at org.apache.commons.cli.DefaultParser.handleShortAndLongOption(DefaultParser.java:497)
at org.apache.commons.cli.DefaultParser.handleToken(DefaultParser.java:243)
at org.apache.commons.cli.DefaultParser.parse(DefaultParser.java:120)
at org.apache.commons.cli.DefaultParser.parse(DefaultParser.java:76)
at org.apache.commons.cli.DefaultParser.parse(DefaultParser.java:60)
at org.dspace.scripts.DSpaceRunnable.parse(DSpaceRunnable.java:85)
at org.dspace.scripts.DSpaceRunnable.initialize(DSpaceRunnable.java:75)
at org.dspace.app.rest.repository.ScriptRestRepository.runDSpaceScript(ScriptRestRepository.java:149)
at org.dspace.app.rest.repository.ScriptRestRepository.startProcess(ScriptRestRepository.java:114)
at org.dspace.app.rest.repository.ScriptRestRepository$$FastClassBySpringCGLIB$$952d0402.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:687)
at org.dspace.app.rest.repository.ScriptRestRepository$$EnhancerBySpringCGLIB$$a9f8c134.startProcess(<generated>)
at org.dspace.app.rest.ScriptProcessesController.startProcess(ScriptProcessesController.java:70)
at org.dspace.app.rest.ScriptProcessesController$$FastClassBySpringCGLIB$$7be45783.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691)
at org.dspace.app.rest.ScriptProcessesController$$EnhancerBySpringCGLIB$$172a7457.startProcess(<generated>)
```
**Expected behavior**
Obviously, Import of Metadata CSV should work.
**Related work**
This seems like a very similar problem to https://github.com/DSpace/dspace-angular/issues/1132 and can likely be tackled by the same person
|
1.0
|
Cannot Import Metadata (CSV) as an Admin - **Describe the bug**
Cannot import a Metadata CSV while logged in as an Administrator. Seems like the same problem as reported in https://github.com/DSpace/dspace-angular/issues/1132
**To Reproduce**
Steps to reproduce the behavior:
1. Login as an Admin
2. Create a CSV to import (preferably something small. It could be one line starting with a "+" to test creating a metadata import Item)
3. In the Admin Sidebar, select "Import" -> "Metadata" and drag & drop your CSV
4. A Process kicks off, but it immediately fails with this error:
```
2021-04-27 22:12:06.662 ERROR metadata-import - 2 @ Failed to parse the arguments given to the script with name: metadata-import and args: [-e, dspacedemo+admin@gmail.com, -f, 10673-2.csv]
2021-04-27 22:12:06.663 ERROR metadata-import - 2 @ org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: -e
at org.apache.commons.cli.DefaultParser.handleUnknownToken(DefaultParser.java:360)
at org.apache.commons.cli.DefaultParser.handleShortAndLongOption(DefaultParser.java:497)
at org.apache.commons.cli.DefaultParser.handleToken(DefaultParser.java:243)
at org.apache.commons.cli.DefaultParser.parse(DefaultParser.java:120)
at org.apache.commons.cli.DefaultParser.parse(DefaultParser.java:76)
at org.apache.commons.cli.DefaultParser.parse(DefaultParser.java:60)
at org.dspace.scripts.DSpaceRunnable.parse(DSpaceRunnable.java:85)
at org.dspace.scripts.DSpaceRunnable.initialize(DSpaceRunnable.java:75)
at org.dspace.app.rest.repository.ScriptRestRepository.runDSpaceScript(ScriptRestRepository.java:149)
at org.dspace.app.rest.repository.ScriptRestRepository.startProcess(ScriptRestRepository.java:114)
at org.dspace.app.rest.repository.ScriptRestRepository$$FastClassBySpringCGLIB$$952d0402.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:687)
at org.dspace.app.rest.repository.ScriptRestRepository$$EnhancerBySpringCGLIB$$a9f8c134.startProcess(<generated>)
at org.dspace.app.rest.ScriptProcessesController.startProcess(ScriptProcessesController.java:70)
at org.dspace.app.rest.ScriptProcessesController$$FastClassBySpringCGLIB$$7be45783.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:771)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691)
at org.dspace.app.rest.ScriptProcessesController$$EnhancerBySpringCGLIB$$172a7457.startProcess(<generated>)
```
**Expected behavior**
Obviously, Import of Metadata CSV should work.
**Related work**
This seems like a very similar problem to https://github.com/DSpace/dspace-angular/issues/1132 and can likely be tackled by the same person
|
process
|
cannot import metadata csv as an admin describe the bug cannot import a metadata csv while logged in as an administrator seems like the same problem as reported in to reproduce steps to reproduce the behavior login as an admin create a csv to import preferably something small it could be one line starting with a to test creating a metadata import item in the admin sidebar select import metadata and drag drop your csv a process kicks off but it immediately fails with this error error metadata import failed to parse the arguments given to the script with name metadata import and args error metadata import org apache commons cli unrecognizedoptionexception unrecognized option e at org apache commons cli defaultparser handleunknowntoken defaultparser java at org apache commons cli defaultparser handleshortandlongoption defaultparser java at org apache commons cli defaultparser handletoken defaultparser java at org apache commons cli defaultparser parse defaultparser java at org apache commons cli defaultparser parse defaultparser java at org apache commons cli defaultparser parse defaultparser java at org dspace scripts dspacerunnable parse dspacerunnable java at org dspace scripts dspacerunnable initialize dspacerunnable java at org dspace app rest repository scriptrestrepository rundspacescript scriptrestrepository java at org dspace app rest repository scriptrestrepository startprocess scriptrestrepository java at org dspace app rest repository scriptrestrepository fastclassbyspringcglib invoke at org springframework cglib proxy methodproxy invoke methodproxy java at org springframework aop framework cglibaopproxy dynamicadvisedinterceptor intercept cglibaopproxy java at org dspace app rest repository scriptrestrepository enhancerbyspringcglib startprocess at org dspace app rest scriptprocessescontroller startprocess scriptprocessescontroller java at org dspace app rest scriptprocessescontroller fastclassbyspringcglib invoke at org springframework cglib proxy 
methodproxy invoke methodproxy java at org springframework aop framework cglibaopproxy cglibmethodinvocation invokejoinpoint cglibaopproxy java at org springframework aop framework reflectivemethodinvocation proceed reflectivemethodinvocation java at org springframework aop framework cglibaopproxy cglibmethodinvocation proceed cglibaopproxy java at org springframework security access intercept aopalliance methodsecurityinterceptor invoke methodsecurityinterceptor java at org springframework aop framework reflectivemethodinvocation proceed reflectivemethodinvocation java at org springframework aop framework cglibaopproxy cglibmethodinvocation proceed cglibaopproxy java at org springframework aop framework cglibaopproxy dynamicadvisedinterceptor intercept cglibaopproxy java at org dspace app rest scriptprocessescontroller enhancerbyspringcglib startprocess expected behavior obviously import of metadata csv should work related work this seems like a very similar problem to and can likely be tackled by the same person
| 1
|
284,869
| 21,474,757,533
|
IssuesEvent
|
2022-04-26 12:47:50
|
SDdylan/API-BileMo
|
https://api.github.com/repos/SDdylan/API-BileMo
|
closed
|
Documentation
|
documentation
|
Create documentation that will help the user in their use of the API. (Swagger, OpenApi)
Estimated time: 2 days.
|
1.0
|
Documentation - Create documentation that will help the user in their use of the API. (Swagger, OpenApi)
Estimated time: 2 days.
|
non_process
|
documentation create documentation that will help the user in their use of the api swagger openapi estimated time days
| 0
|
64,009
| 7,758,622,926
|
IssuesEvent
|
2018-05-31 20:13:57
|
phetsims/equality-explorer
|
https://api.github.com/repos/phetsims/equality-explorer
|
opened
|
range of pickers on Lab screen
|
design:general meeting:design
|
The range of the pickers on the Lab screen is currently [0,20]. Zero has potential for confusion, since the objects here are physical. Do we want to change to [1,20]?
|
2.0
|
range of pickers on Lab screen - The range of the pickers on the Lab screen is currently [0,20]. Zero has potential for confusion, since the objects here are physical. Do we want to change to [1,20]?
|
non_process
|
range of pickers on lab screen the range of the pickers on the lab screen is currently zero has potential for confusion since the objects here are physical do we want to change to
| 0
|
13,063
| 15,395,200,963
|
IssuesEvent
|
2021-03-03 18:54:25
|
googleapis/nodejs-speech
|
https://api.github.com/repos/googleapis/nodejs-speech
|
closed
|
Language code en-GB not working on infinite streaming example
|
api: speech priority: p2 type: bug type: process
|
Hi,
Running both MicrophoneStream and infiniteStreaming work fine after cloning the repository. Changing the language to en-GB on the MicrophoneStream sample works, albeit with several seconds of latency. However infiniteStreaming with en-GB just sits there not finalising the translation. I have tried the language from the command line and by changing the default in the code. Am I missing something here? This is the code out of the box, nothing changed other than trying to switch language.
#### Environment details
- OS: MacOS 10.15.6
- Node.js version: v14.11.0
- npm version: 6.14.8
- `@google-cloud/speech` version: 4.1.3
#### Steps to reproduce
1. clone the repository
2. setup export (API credentials, etc...)
3. Run node samples/infiniteStreaming.js infiniteStream -l en-GB
|
1.0
|
Language code en-GB not working on infinite streaming example - Hi,
Running both MicrophoneStream and infiniteStreaming work fine after cloning the repository. Changing the language to en-GB on the MicrophoneStream sample works, albeit with several seconds of latency. However infiniteStreaming with en-GB just sits there not finalising the translation. I have tried the language from the command line and by changing the default in the code. Am I missing something here? This is the code out of the box, nothing changed other than trying to switch language.
#### Environment details
- OS: MacOS 10.15.6
- Node.js version: v14.11.0
- npm version: 6.14.8
- `@google-cloud/speech` version: 4.1.3
#### Steps to reproduce
1. clone the repository
2. setup export (API credentials, etc...)
3. Run node samples/infiniteStreaming.js infiniteStream -l en-GB
|
process
|
language code en gb not working on infinite streaming example hi running both microphonestream and infinitestreaming work fine after cloning the repository changing the language to en gb on the microphonestream sample works albeit with several seconds of latency however infinitestreaming with en gb just sits there not finalising the translation i have tried the language from the command line and by changing the default in the code am i missing something here this is the code out of the box nothing changed other than trying to switch language environment details os macos node js version npm version google cloud speech version steps to reproduce clone the repository setup export api credentials etc run node samples infinitestreaming js infinitestream l en gb
| 1
|
591,347
| 17,837,585,505
|
IssuesEvent
|
2021-09-03 05:01:26
|
DelvUI/DelvUI
|
https://api.github.com/repos/DelvUI/DelvUI
|
closed
|
Context Menu
|
High Priority Work in Progress Infra
|
- [ ] Add clip logic to focus bar
- [ ] Add clip logic to target of target bar
- [ ] Add clip logic for text on frames as well?
|
1.0
|
Context Menu - - [ ] Add clip logic to focus bar
- [ ] Add clip logic to target of target bar
- [ ] Add clip logic for text on frames as well?
|
non_process
|
context menu add clip logic to focus bar add clip logic to target of target bar add clip logic for text on frames as well
| 0
|
3,642
| 6,677,416,515
|
IssuesEvent
|
2017-10-05 10:21:41
|
our-city-app/oca-backend
|
https://api.github.com/repos/our-city-app/oca-backend
|
closed
|
qr-code with URL
|
process_wontfix type_feature
|
@ this moment it is not possible in a good way to create QR-codes with a URL behind it.
There is a superfluous question when you scan the qr-code with the URL.
This is not user friendly @ all.
It would be great to scan and open immediately.


|
1.0
|
qr-code with URL - @ this moment it is not possible in a good way to create QR-codes with a URL behind it.
There is a superfluous question when you scan the qr-code with the URL.
This is not user friendly @ all.
It would be great to scan and open immediately.


|
process
|
qr code with url this moment it is not possible in a good way to create qr codes with a url behind it there is a superfluous question when you scan the qr code with the url this is not user friendly all it would be great to scan and open immediately
| 1
|
3,357
| 6,487,658,430
|
IssuesEvent
|
2017-08-20 10:05:14
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
blockScraper, if the files are locked, proceeds anyway without really showing a useful message.
|
apps-blockScrape status-inprocess type-bug
|
To reproduce, create .lck file and run.
|
1.0
|
blockScraper, if the files are locked, proceeds anyway without really showing a useful message. - To reproduce, create .lck file and run.
|
process
|
blockscraper if the files are locked proceeds anyway without really showing a useful message to reproduce create lck file and run
| 1
|