Dataset preview: one record per GitHub `IssuesEvent`, labeled `design` / `non_design`. Column summary (dtype, classes, and observed lengths/ranges):

| column | dtype | summary |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class (`IssuesEvent`) |
| created_at | string | length 19 |
| repo | string | length 4 – 112 |
| repo_url | string | length 33 – 141 |
| action | string | 3 classes (incl. `opened`, `closed`) |
| title | string | length 1 – 957 |
| labels | string | length 4 – 1.11k |
| body | string | length 1 – 261k |
| index | string | 11 classes |
| text_combine | string | length 95 – 261k |
| label | string | 2 classes (`design`, `non_design`) |
| text | string | length 96 – 250k |
| binary_label | int64 | 0 or 1 (`design` → 1, `non_design` → 0) |

Sample records follow; fields are separated by `|` lines.
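The `text` column looks like a normalized form of `text_combine`: lowercased, with URLs, markup, digits, and punctuation stripped and whitespace collapsed. A minimal sketch of such a cleaning step, assuming this pipeline (the dataset's actual preprocessing code is not shown here):

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the text_combine -> text cleaning observed in the rows:
    lowercase, drop URLs/HTML, keep only a-z words, collapse whitespace."""
    t = text_combine.lower()
    t = re.sub(r"https?://\S+", " ", t)    # strip bare URLs
    t = re.sub(r"<[^>]+>", " ", t)         # strip HTML tags
    t = re.sub(r"[^a-z\s]", " ", t)        # drop digits and punctuation
    return re.sub(r"\s+", " ", t).strip()  # collapse runs of whitespace
```

For example, `normalize("Redis backend - we don't have one. :)")` yields `"redis backend we don t have one"`, matching the style of the `text` column.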
77,978
| 9,648,695,657
|
IssuesEvent
|
2019-05-17 16:58:50
|
participedia/usersnaps
|
https://api.github.com/repos/participedia/usersnaps
|
closed
|
Usersnap Feedback - Will the links field auto generate a url if i just type jesicarson.com rat[...]
|
bug design-changes
|
**Sender**: jesi.carson@gmail.com
**Comment**: Will the links field auto generate a url if i just type jesicarson.com rather than http://jesicarson.com? it probably should.
design backlog
[Open #226 in Usersnap Dashboard](https://usersnap.com/a/#/participedia-dev/p/participedia-201-a37e4cf6/226)
<a href='https://usersnappublic.s3.amazonaws.com/2019-05-16/00-56/b512cd33-8dc1-499f-8968-8c9c847689f6.png'></a>
<a href='https://usersnappublic.s3.amazonaws.com/2019-05-16/00-56/b512cd33-8dc1-499f-8968-8c9c847689f6.png'>Download original image</a>
**Browser**: Chrome 74 (macOS Mojave)
**Referer**: [http://ppedia-stage.herokuapp.com/case/new](http://ppedia-stage.herokuapp.com/case/new)
**Screen size**: 1680 x 1050 **Browser size**: 1680 x 948
Powered by [usersnap.com](https://usersnap.com/?utm_source=github_entry&utm_medium=web&utm_campaign=product)
|
1.0
|
Usersnap Feedback - Will the links field auto generate a url if i just type jesicarson.com rat[...] - **Sender**: jesi.carson@gmail.com
**Comment**: Will the links field auto generate a url if i just type jesicarson.com rather than http://jesicarson.com? it probably should.
design backlog
[Open #226 in Usersnap Dashboard](https://usersnap.com/a/#/participedia-dev/p/participedia-201-a37e4cf6/226)
<a href='https://usersnappublic.s3.amazonaws.com/2019-05-16/00-56/b512cd33-8dc1-499f-8968-8c9c847689f6.png'></a>
<a href='https://usersnappublic.s3.amazonaws.com/2019-05-16/00-56/b512cd33-8dc1-499f-8968-8c9c847689f6.png'>Download original image</a>
**Browser**: Chrome 74 (macOS Mojave)
**Referer**: [http://ppedia-stage.herokuapp.com/case/new](http://ppedia-stage.herokuapp.com/case/new)
**Screen size**: 1680 x 1050 **Browser size**: 1680 x 948
Powered by [usersnap.com](https://usersnap.com/?utm_source=github_entry&utm_medium=web&utm_campaign=product)
|
design
|
usersnap feedback will the links field auto generate a url if i just type jesicarson com rat sender jesi carson gmail com comment will the links field auto generate a url if i just type jesicarson com rather than it probably should design backlog a href browser chrome macos mojave referer screen size x browser size x powered by
| 1
|
93,046
| 11,736,385,112
|
IssuesEvent
|
2020-03-11 12:59:07
|
simplabs/simplabs.github.io
|
https://api.github.com/repos/simplabs/simplabs.github.io
|
closed
|
Replace Trainline quote with Qonto quote
|
design
|
We just added the Trainline quote but I realized we have a pretty good one from Qonto that we could use instead since we're referring to Trainline in quite a few other places already. We can simply replace the `<ShapeQuoteTrainline>` component with a `<ShapeQuoteQonto>` component and update all usages of the component respectively.
### TODO
- [ ] create image for the quote component
- [ ] rename `<ShapeQuoteTrainline>` to `<ShapeQuoteQonto>` and update usages
- [ ] replace quote from Carl with the quote from Marc-Antoine below
### Resources
* Quote: _"simplabs are well known as the Ember.js experts and they absolutely live up to the expectations. They had an immediate as well as significant positive impact on both our velocity and quality of output. - Marc-Antoine Lacroix, Qonto CTO"_
* Image: we could use something from their [media kit](https://projects.invisionapp.com/boards/9T3UDRXEHSN) maybe?
|
1.0
|
Replace Trainline quote with Qonto quote - We just added the Trainline quote but I realized we have a pretty good one from Qonto that we could use instead since we're referring to Trainline in quite a few other places already. We can simply replace the `<ShapeQuoteTrainline>` component with a `<ShapeQuoteQonto>` component and update all usages of the component respectively.
### TODO
- [ ] create image for the quote component
- [ ] rename `<ShapeQuoteTrainline>` to `<ShapeQuoteQonto>` and update usages
- [ ] replace quote from Carl with the quote from Marc-Antoine below
### Resources
* Quote: _"simplabs are well known as the Ember.js experts and they absolutely live up to the expectations. They had an immediate as well as significant positive impact on both our velocity and quality of output. - Marc-Antoine Lacroix, Qonto CTO"_
* Image: we could use something from their [media kit](https://projects.invisionapp.com/boards/9T3UDRXEHSN) maybe?
|
design
|
replace trainline quote with qonto quote we just added the trainline quote but i realized we have a pretty good one from qonto that we could use instead since we re referring to trainline in quite a few other places already we can simply replace the component with a component and update all usages of the component respectively todo create image for the quote component rename to and update usages replace quote from carl with the quote from marc antoine below resources quote simplabs are well known as the ember js experts and they absolutely live up to the expectations they had an immediate as well as significant positive impact on both our velocity and quality of output marc antoine lacroix qonto cto image we could use something from their maybe
| 1
|
146,956
| 23,143,704,229
|
IssuesEvent
|
2022-07-28 21:17:45
|
MozillaFoundation/Design
|
https://api.github.com/repos/MozillaFoundation/Design
|
closed
|
[PNI Holiday Marketing] Header graphic for blog post "Amazon Echo, Google Nest, or Apple Homepod: Which is the least creepy/invasive?"
|
design
|
[Draft Doc](https://docs.google.com/document/d/1-V_k7w2CAp3LNePzIQkO-Y36dFRBna6r4BIF_Bi_FAo/edit#)
Drive Folder
|
1.0
|
[PNI Holiday Marketing] Header graphic for blog post "Amazon Echo, Google Nest, or Apple Homepod: Which is the least creepy/invasive?" - [Draft Doc](https://docs.google.com/document/d/1-V_k7w2CAp3LNePzIQkO-Y36dFRBna6r4BIF_Bi_FAo/edit#)
Drive Folder
|
design
|
header graphic for blog post amazon echo google nest or apple homepod which is the least creepy invasive drive folder
| 1
|
31,257
| 4,240,017,250
|
IssuesEvent
|
2016-07-06 11:52:32
|
cayleygraph/cayley
|
https://api.github.com/repos/cayleygraph/cayley
|
closed
|
Redis backend
|
Feature Design New Backends
|
There seems to be a lot of interest in a Redis backend, this issue is to give people a place to track that. As of yet we don't have one. :)
|
1.0
|
Redis backend - There seems to be a lot of interest in a Redis backend, this issue is to give people a place to track that. As of yet we don't have one. :)
|
design
|
redis backend there seems to be a lot of interest in a redis backend this issue is to give people a place to track that as of yet we don t have one
| 1
|
124,548
| 16,613,785,462
|
IssuesEvent
|
2021-06-02 14:28:05
|
dotnet/upgrade-assistant
|
https://api.github.com/repos/dotnet/upgrade-assistant
|
opened
|
There should be more control over diagnostic verbosity than a single -v switch
|
design-proposal
|
There's enough of a difference between verbose and non-verbose diagnostic output, that it would probably be useful to have additional verbosity levels at this point.
Maybe there could be three levels:
- Normal (same as current non-verbose)
- Verbose (-v) which includes debug diagnostics, but not trace ones)
- Very Verbose (-vv) which is the same as current verbose
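The three levels above map naturally onto a repeated `-v` flag. A minimal sketch of that mapping, assuming a count-style flag (upgrade-assistant itself is a .NET tool, so this Python snippet only illustrates the scheme, not its implementation):

```python
import argparse
import logging

# Proposed mapping: Normal -> info, -v -> debug (no trace), -vv -> everything.
LEVELS = {
    0: logging.INFO,    # Normal: same as the current non-verbose output
    1: logging.DEBUG,   # Verbose (-v): debug diagnostics, but not trace
    2: logging.NOTSET,  # Very Verbose (-vv): same as the current verbose
}

def diagnostic_level(argv):
    """Count repeated -v flags and clamp to the most verbose level."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-v", "--verbose", action="count", default=0)
    args = parser.parse_args(argv)
    return LEVELS[min(args.verbose, 2)]
```

With this, plain invocation stays at the current non-verbose output, `-v` adds debug diagnostics, and `-vv` (or more) enables everything.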
|
1.0
|
There should be more control over diagnostic verbosity than a single -v switch - There's enough of a difference between verbose and non-verbose diagnostic output, that it would probably be useful to have additional verbosity levels at this point.
Maybe there could be three levels:
- Normal (same as current non-verbose)
- Verbose (-v) which includes debug diagnostics, but not trace ones)
- Very Verbose (-vv) which is the same as current verbose
|
design
|
there should be more control over diagnostic verbosity than a single v switch there s enough of a difference between verbose and non verbose diagnostic output that it would probably be useful to have additional verbosity levels at this point maybe there could be three levels normal same as current non verbose verbose v which includes debug diagnostics but not trace ones very verbose vv which is the same as current verbose
| 1
|
40,998
| 6,886,966,095
|
IssuesEvent
|
2017-11-21 21:27:13
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Can't click on clickable anchors inside the documentation
|
bug documentation topic:editor
|
**Operating system or device, Godot version, GPU Model and driver (if graphics related):**
Godot 3.0 master https://github.com/godotengine/godot/commit/7715a261d5f387b7769bb8149735e4131ea97757
**Issue description:**
<!-- What happened, and what was expected. -->

It seems to only happen inside the Members and Public Methods blocks
This doesn't happen in Godot 3.0 alpha2 so it must be something new
**Steps to reproduce:**
1. Open some documentation
2. Try to click any anchor inside Members or Public Methods
|
1.0
|
Can't click on clickable anchors inside the documentation - **Operating system or device, Godot version, GPU Model and driver (if graphics related):**
Godot 3.0 master https://github.com/godotengine/godot/commit/7715a261d5f387b7769bb8149735e4131ea97757
**Issue description:**
<!-- What happened, and what was expected. -->

It seems to only happen inside the Members and Public Methods blocks
This doesn't happen in Godot 3.0 alpha2 so it must be something new
**Steps to reproduce:**
1. Open some documentation
2. Try to click any anchor inside Members or Public Methods
|
non_design
|
can t click on clickable anchors inside the documentation operating system or device godot version gpu model and driver if graphics related godot master issue description it seems to only happen inside the members and public methods blocks this doesn t happen in godot so it must be something new steps to reproduce open some documentation try to click any anchor inside members or public methods
| 0
|
411,443
| 12,018,146,622
|
IssuesEvent
|
2020-04-10 20:06:07
|
GeyserMC/Geyser
|
https://api.github.com/repos/GeyserMC/Geyser
|
closed
|
Bricks slab and trap door can't be placed properly
|
Confirmed Bug Priority: Medium
|
**Describe the bug**
When a player places a brick slab or a trap door, it just falls down.
More information in the video
[video](https://streamable.com/zptei)
**To Reproduce**
Steps to reproduce the behavior:
1. Setup server with geyser
2. Place a brick slab or a trap door
3. See error
**Expected behavior**
Brick slab and trap door stay where they are and do not fall down
**Screenshots**
See video above
**Server version**
paper 1.15 latest
**Geyser version**
Jenkins inventory branch latest
**Bedrock version**
latest
|
1.0
|
Bricks slab and trap door can't be placed properly - **Describe the bug**
When a player places a brick slab or a trap door, it just falls down.
More information in the video
[video](https://streamable.com/zptei)
**To Reproduce**
Steps to reproduce the behavior:
1. Setup server with geyser
2. Place a brick slab or a trap door
3. See error
**Expected behavior**
Brick slab and trap door stay where they are and do not fall down
**Screenshots**
See video above
**Server version**
paper 1.15 latest
**Geyser version**
Jenkins inventory branch latest
**Bedrock version**
latest
|
non_design
|
bricks slab and trap door can t be placed properly describe the bug when player place a bricks slab or a trap door it just fall down more information in the video to reproduce steps to reproduce the behavior setup server with geyser place a brick slab or a trap door see error expected behavior brick slab and trap door can stay where they are and not falling down screenshots see video above server version paper latest geyser version jenkins inventory branch latest bedrock version latest
| 0
|
547,657
| 16,044,519,921
|
IssuesEvent
|
2021-04-22 12:09:41
|
teamforus/general
|
https://api.github.com/repos/teamforus/general
|
closed
|
Providers: "Last seen"
|
Priority: Could have Scope: Too Big (should split) Status: On hold
|
Learn more about change requests here: https://bit.ly/39CWeEE
### Requested by:
Internal
### Change description
We would like to keep the offering in the webshops up to date, and to avoid providers that signed up a couple of years ago remaining visible while they are no longer active. One of the ways we achieve this is by manually renewing the fund each year, but in practice this is a big hassle for the sponsor, the providers and for us. Therefore we have this cr: https://github.com/teamforus/general/issues/252
The proposal of this change is to introduce "last seen" for the providers, so that the sponsor can see when the provider was last active, and so that they can take action (contact the provider / remove them from the webshop)
|
1.0
|
Providers: "Last seen" - Learn more about change requests here: https://bit.ly/39CWeEE
### Requested by:
Internal
### Change description
We would like to keep the offering in the webshops up to date, and to avoid providers that signed up a couple of years ago remaining visible while they are no longer active. One of the ways we achieve this is by manually renewing the fund each year, but in practice this is a big hassle for the sponsor, the providers and for us. Therefore we have this cr: https://github.com/teamforus/general/issues/252
The proposal of this change is to introduce "last seen" for the providers, so that the sponsor can see when the provider was last active, and so that they can take action (contact the provider / remove them from the webshop)
|
non_design
|
providers last seen learn more about change requests here requested by intenal change description we would like to have an actualised offering of offers in the webshops and avoid providers that signed up a couple years ago from still being visible while they are not active anymore one of the ways we achieve this is by manually renewing the fund each year but actually this is a big hassle for the sponsor providers and for us therefore we have this cr the proposal of this change is to introduce last seen for the providers so that the sponsor can see when the provider was last active and so that they can take action contact the provider remove them from the webshop
| 0
|
421,779
| 12,261,298,799
|
IssuesEvent
|
2020-05-06 19:50:38
|
syntax-prosody-ot/main
|
https://api.github.com/repos/syntax-prosody-ot/main
|
closed
|
Custom Align (SP & PS)
|
enhancement low priority
|
Let's add Custom Align to the interface. Exactly the same options as with Custom Match.
|
1.0
|
Custom Align (SP & PS) - Let's add Custom Align to the interface. Exactly the same options as with Custom Match.
|
non_design
|
custom align sp ps let s add custom align to the interface exactly the same options as with custom match
| 0
|
759,637
| 26,604,141,112
|
IssuesEvent
|
2023-01-23 17:55:01
|
inverse-inc/packetfence
|
https://api.github.com/repos/inverse-inc/packetfence
|
closed
|
Security events save don't update all the fields
|
Type: Bug Priority: High
|
**Describe the bug**
When you create/save a security event, some fields are not saved correctly in the configuration file.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a security event
2. Change the window value and you will see that it's not saved.
3. Change the window value and set a unit, it's saved.
**Expected behavior**
In the api call the unit is not defined even if the default one is there (like second)
|
1.0
|
Security events save don't update all the fields - **Describe the bug**
When you create/save a security event, some fields are not saved correctly in the configuration file.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a security event
2. Change the window value and you will see that it's not saved.
3. Change the window value and set a unit, it's saved.
**Expected behavior**
In the api call the unit is not defined even if the default one is there (like second)
|
non_design
|
security events save don t update all the fields describe the bug when you create save a security events some fields are not saved correctly in the configuration file to reproduce steps to reproduce the behavior create a security event change the window value and you will see that it s not saved change the windowd value and set a unit it s saved expected behavior in the api call the unit is not defined even if the default one is there like second
| 0
|
6,507
| 9,594,890,829
|
IssuesEvent
|
2019-05-09 14:54:15
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[needs-docs][processing] Rename "remove duplicates by attribute" to
"delete duplicates by attribute"
|
Automatic new feature Easy Processing Alg
|
Original commit: https://github.com/qgis/QGIS/commit/d79cee1fe1108965a1e62922809ff4082dcad018 by nyalldawson
for consistency with "delete duplicate geometries". Also add some
tags to delete duplicate geometries algorithm.
|
1.0
|
[needs-docs][processing] Rename "remove duplicates by attribute" to
"delete duplicates by attribute" - Original commit: https://github.com/qgis/QGIS/commit/d79cee1fe1108965a1e62922809ff4082dcad018 by nyalldawson
for consistency with "delete duplicate geometries". Also add some
tags to delete duplicate geometries algorithm.
|
non_design
|
rename remove duplicates by attribute to delete duplicates by attribute original commit by nyalldawson for consistency with delete duplicate geometries also add some tags to delete duplicate geometries algorithm
| 0
|
179,748
| 30,293,810,505
|
IssuesEvent
|
2023-07-09 15:55:34
|
Team-Umbba/Umbba-Android
|
https://api.github.com/repos/Team-Umbba/Umbba-Android
|
closed
|
[Design] Main view dialog screen UI implementation
|
design ์ฐ์ง
|
## Issue
Create the dialogs
## To do
- [x] Implement the InviteCodeFragmentDialog screen UI
- [x] Implement the ConfirmAnswerFragmentDialog screen UI
|
1.0
|
[Design] Main view dialog screen UI implementation - ## Issue
Create the dialogs
## To do
- [x] Implement the InviteCodeFragmentDialog screen UI
- [x] Implement the ConfirmAnswerFragmentDialog screen UI
|
design
|
main view dialog screen ui implementation issue create the dialogs to do invitecodefragmentdialog screen ui implementation confirmanswerfragmentdialog screen ui implementation
| 1
|
101,402
| 12,682,066,265
|
IssuesEvent
|
2020-06-19 16:36:58
|
paritytech/ink
|
https://api.github.com/repos/paritytech/ink
|
closed
|
Safe Math Types
|
A-ink_core B-design B-enhancement
|
We found out that enabling `overflow-checks` for an entire binary results in increased Wasm binary size. This is unfortunate since one of the main goals of ink! is to make sure that binary sizes are kept as small as possible.
For this we could try to move to another strategy by only having "safe maths", e.g. overflow checked math for the user provided ink! parts. E.g. only within those parts of ink! that are actually written by users and to not have overflow checks enabled for all the other generated parts that should generally be assumed to behave "correctly".
## Potential Solution
One potential work-loaded solution is to provide some custom primitive integer types such as `ink::{U8, U16, ..., U128, I8, I16, ... I128}` that should be usable just like their primitive counterparts with the only difference of always being overflow-checked.
As their primitive counterparts they should also provide APIs to allow for overflow-enabled arithmetic, e.g. `wrapped_add` etc.
This solution would require lots of work on our side to provide those wrappers and also make it clear through documentation and tutorial efforts for users to make use of them instead of Rust's primitive types. Maybe we might be able to even assert at compile time that users do not use Rust's primitive types if we have good reasons to do so.
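A wrapper of the kind proposed can be sketched in a few lines. This is an illustrative model only (ink! is Rust; the `U8` name and API below mirror the proposal but are assumptions, not the crate's actual types):

```python
class U8:
    """Toy overflow-checked 8-bit unsigned integer, modeled on the proposed
    ink::U8: arithmetic raises on overflow unless wrapping is requested."""
    MAX = 0xFF

    def __init__(self, value: int):
        if not 0 <= value <= self.MAX:
            raise OverflowError(f"{value} is out of range for U8")
        self.value = value

    def __add__(self, other: "U8") -> "U8":
        # Checked addition: constructing the result re-runs the range check.
        return U8(self.value + other.value)

    def wrapping_add(self, other: "U8") -> "U8":
        # Opt-in wrap-around, analogous to Rust's wrapping_add.
        return U8((self.value + other.value) & self.MAX)
```

So `U8(200) + U8(100)` raises `OverflowError`, while `U8(200).wrapping_add(U8(100))` wraps to `U8(44)`.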
|
1.0
|
Safe Math Types - We found out that enabling `overflow-checks` for an entire binary results in increased Wasm binary size. This is unfortunate since one of the main goals of ink! is to make sure that binary sizes are kept as small as possible.
For this we could try to move to another strategy by only having "safe maths", e.g. overflow checked math for the user provided ink! parts. E.g. only within those parts of ink! that are actually written by users and to not have overflow checks enabled for all the other generated parts that should generally be assumed to behave "correctly".
## Potential Solution
One potential work-loaded solution is to provide some custom primitive integer types such as `ink::{U8, U16, ..., U128, I8, I16, ... I128}` that should be usable just like their primitive counterparts with the only difference of always being overflow-checked.
As their primitive counterparts they should also provide APIs to allow for overflow-enabled arithmetic, e.g. `wrapped_add` etc.
This solution would require lots of work on our side to provide those wrappers and also make it clear through documentation and tutorial efforts for users to make use of them instead of Rust's primitive types. Maybe we might be able to even assert at compile time that users do not use Rust's primitive types if we have good reasons to do so.
|
design
|
safe math types we found out that enabling overflow checks for an entire binary results in increased wasm binary size this is unfortunate since one of the main goals of ink is to make sure that binary sizes are kept as small as possible for this we could try to move to another strategy by only having safe maths e g overflow checked math for the user provided ink parts e g only within those parts of ink that are actually written by users and to not have overflow checks enabled for all the other generated parts that should generally be assumed to behave correctly potential solution one potential work loaded solution is to provide some custom primitive integer types such as ink that should be usable just like their primitive counterparts with the only difference of always being overflow checked as their primitive counterparts they should also provide apis to allow for overflow enabled arithmetic e g wrapped add etc this solution would require lots of work on our side to provide those wrappers and also make it clear through documentation and tutorial efforts for users to make use of them instead of rust s primitive types maybe we might be able to even assert at compile time that users do not use rust s primitive types if we have good reasons to do so
| 1
|
10,525
| 13,307,371,871
|
IssuesEvent
|
2020-08-25 22:02:42
|
GoogleCloudPlatform/cloud-ops-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
|
closed
|
7/9 provisioning tests fail/error on manual run instructions
|
priority: p2 type: process
|
- [x] Tests are passing on e2e
But while manually running provisioning tests on Mac/unix, 7/9 tests fail on various errors likely due to the missing .config volumes that persist credentials
Steps to reproduce:
1. Provision sandbox on personal account
2. docker run [per instructions](https://github.com/GoogleCloudPlatform/cloud-ops-sandbox/tree/master/tests/provisioning)
[Full error log output](https://docs.google.com/document/d/1pA6bS30_S8VWhfS5_OyUxJnnX6RhE3Zt9P_U5eiH9ho/edit?hl=en)
**2/9 tests passing:**
```
to select an already authenticated account to use.
testNodeMachineType (__main__.TestGKECluster)
Test if the machine type for the nodes is as specified ... ERROR
testNumberOfNode (__main__.TestGKECluster)
Test if the number of nodes in the node pool is as specified ... ERROR
testReachOfHipsterShop (__main__.TestGKECluster)
Test if querying hipster shop returns 200 ... ERROR
testStatusOfServices (__main__.TestGKECluster)
Test if all the service deployments are ready ... ERROR
testDifferentZone (__main__.TestLoadGenerator)
Test if load generator is in a different zone from the GKE cluster ... ok
testNumberOfLoadgen (__main__.TestLoadGenerator)
Test if there's only one load generator instance ... ok
testReachOfLoadgen (__main__.TestLoadGenerator)
Test if querying load generator returns 200 ... ERROR
testAPIEnabled (__main__.TestProjectResources)
Test if all APIs requested are enabled ... FAIL
testErrorReporting (__main__.TestProjectResources)
Test if we can report error using Error Reporting API ... ERROR
```
Note: I have an active, credentialed GCloud user account, though some errors could still be setup issues on my end.
|
1.0
|
7/9 provisioning tests fail/error on manual run instructions - - [x] Tests are passing on e2e
But while manually running provisioning tests on Mac/unix, 7/9 tests fail on various errors likely due to the missing .config volumes that persist credentials
Steps to reproduce:
1. Provision sandbox on personal account
2. docker run [per instructions](https://github.com/GoogleCloudPlatform/cloud-ops-sandbox/tree/master/tests/provisioning)
[Full error log output](https://docs.google.com/document/d/1pA6bS30_S8VWhfS5_OyUxJnnX6RhE3Zt9P_U5eiH9ho/edit?hl=en)
**2/9 tests passing:**
```
to select an already authenticated account to use.
testNodeMachineType (__main__.TestGKECluster)
Test if the machine type for the nodes is as specified ... ERROR
testNumberOfNode (__main__.TestGKECluster)
Test if the number of nodes in the node pool is as specified ... ERROR
testReachOfHipsterShop (__main__.TestGKECluster)
Test if querying hipster shop returns 200 ... ERROR
testStatusOfServices (__main__.TestGKECluster)
Test if all the service deployments are ready ... ERROR
testDifferentZone (__main__.TestLoadGenerator)
Test if load generator is in a different zone from the GKE cluster ... ok
testNumberOfLoadgen (__main__.TestLoadGenerator)
Test if there's only one load generator instance ... ok
testReachOfLoadgen (__main__.TestLoadGenerator)
Test if querying load generator returns 200 ... ERROR
testAPIEnabled (__main__.TestProjectResources)
Test if all APIs requested are enabled ... FAIL
testErrorReporting (__main__.TestProjectResources)
Test if we can report error using Error Reporting API ... ERROR
```
Note: I have an active, credentialed GCloud user account, though some errors could still be setup issues on my end.
|
non_design
|
provisioning tests fail error on manual run instructions tests are passing on but while manually running provisioning tests on mac unix tests fail on various errors likely due to the missing config volumes that persist credentials steps to reproduce provision sandbox on personal account docker run tests passing to select an already authenticated account to use testnodemachinetype main testgkecluster test if the machine type for the nodes is as specified error testnumberofnode main testgkecluster test if the number of nodes in the node pool is as specified error testreachofhipstershop main testgkecluster test if querying hipster shop returns error teststatusofservices main testgkecluster test if all the service deployments are ready error testdifferentzone main testloadgenerator test if load generator is in a different zone from the gke cluster ok testnumberofloadgen main testloadgenerator test if there s only one load generator instance ok testreachofloadgen main testloadgenerator test if querying load generator returns error testapienabled main testprojectresources test if all apis requested are enabled fail testerrorreporting main testprojectresources test if we can report error using error reporting api error note i have an active credentialed gcloud user account though some errors could still be set up issues on my end
| 0
|
520,872
| 15,096,211,387
|
IssuesEvent
|
2021-02-07 14:15:02
|
holixon/axon-testing
|
https://api.github.com/repos/holixon/axon-testing
|
closed
|
Prepare for release to central.
|
Priority: MUST in progress
|
Put everything needed to `pom.xml`
Try out to avoid explicit settings manipulation (use `setupjava`)
|
1.0
|
Prepare for release to central. - Put everything needed to `pom.xml`
Try out to avoid explicit settings manipulation (use `setupjava`)
|
non_design
|
prepare for release to central put everything needed to pom xml try out to avoid explicit settings manipulation use setupjava
| 0
|
61,698
| 7,495,240,600
|
IssuesEvent
|
2018-04-07 18:41:36
|
NYU-Shopcarts/shopcarts
|
https://api.github.com/repos/NYU-Shopcarts/shopcarts
|
closed
|
Create the design for all CRUD routes
|
design
|
**As a** developer
**I need** to know what to name the API routes
**So that** I can create a standardized API
**Assumptions:**
* We can create and publish a documentation for the Shopcarts API
**Acceptance Criteria:**
```
Given that I'm a user of the Shopcarts API
When my client calls the CRUD routes on the Shopcarts API
Then my products gets created, updated, deleted, or fetched from the Shopcarts DB
```
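A standardized layout for those CRUD routes might look like the table below. The exact paths are not stated in the issue, so this mapping is a hypothetical sketch:

```python
# Hypothetical route table for a Shopcarts API; the team's actual
# route names are what this issue was opened to decide.
ROUTES = {
    ("POST",   "/shopcarts"):           "create_shopcart",
    ("GET",    "/shopcarts"):           "list_shopcarts",
    ("GET",    "/shopcarts/<cart_id>"): "get_shopcart",
    ("PUT",    "/shopcarts/<cart_id>"): "update_shopcart",
    ("DELETE", "/shopcarts/<cart_id>"): "delete_shopcart",
}

def resolve(method: str, path: str) -> str:
    """Return the handler name a (method, path template) pair maps to."""
    return ROUTES[(method.upper(), path)]
```

Keeping the collection path (`/shopcarts`) and the item path (`/shopcarts/<cart_id>`) consistent across all verbs is what makes the API feel standardized.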
|
1.0
|
Create the design for all CRUD routes - **As a** developler
**I need** to know the what to name the API routes
**So that** I can create a standardized API
**Assumptions:**
* We can create and publish a documentation for the Shopcarts API
**Acceptance Criteria:**
```
Given that I'm a user of the Shopcarts API
When my client calls the CRUD routes on the Shopcarts API
Then my products gets created, updated, deleted, or fetched from the Shopcarts DB
```
|
design
|
create the design for all crud routes as a developler i need to know the what to name the api routes so that i can create a standardized api assumptions we can create and publish a documentation for the shopcarts api acceptance criteria given that i m a user of the shopcarts api when my client calls the crud routes on the shopcarts api then my products gets created updated deleted or fetched from the shopcarts db
| 1
|
465,351
| 13,383,569,446
|
IssuesEvent
|
2020-09-02 10:32:29
|
vacuumlabs/adalite
|
https://api.github.com/repos/vacuumlabs/adalite
|
closed
|
Refactor transaction signing so we don't need to reparse signed transaction to update utxo cache
|
low priority
|
Currently TxAux are prepared in different function that just returns signed transaction which needs to be parsed to get back the txAux data for local utxo cache
|
1.0
|
Refactor transaction signing so we don't need to reparse signed transaction to update utxo cache - Currently TxAux are prepared in different function that just returns signed transaction which needs to be parsed to get back the txAux data for local utxo cache
|
non_design
|
refactor transaction signing so we dont need to reparse signed transaction to update utxo cache currently txaux are prepared in different function that just returns signed transaction which needs to be parsed to get back the txaux data for local utxo cache
| 0
|
106,513
| 13,305,559,744
|
IssuesEvent
|
2020-08-25 18:44:44
|
flexion/ef-cms
|
https://api.github.com/repos/flexion/ef-cms
|
closed
|
Docket Clerk: Strike a Document
|
(3) DD Workflow - External Light/No Design
|
As a Docket Clerk, in order to manage the docket record, I need the ability to strike an item from the record.
## Pre-Conditions
## Acceptance Criteria
* Docket Clerk can "strike" [this is a term of art at the Court] an item from the docket record
* When the item has been stricken, it remains on the docket record with the date and notice of proceedings crossed out and "(STRICKEN)" in parens at the end
* Once an item is stricken, the document is still viewable to all Court personnel (hyperlink still active), but is not visible to parties or public (no hyperlink)
* User cannot unstrike a document
## Notes
* Similar to #5506 from data migration
## Mobile Design/Considerations
## Security Considerations
- [ ] Does this work make you nervous about privacy or security?
- [ ] Does this work make major changes to the system?
- [ ] Does this work implement new authentication or security controls?
- [ ] Does this work create new methods of authentication, modify existing security controls, or explicitly implement any security or privacy features?
## Tasks
- [x] Strike entry link and modal (Jojo #6197)
- [x] Strike proxy, lambda, interactor, and persistence (Jojo #6199)
- [x] Display success message after document is stricken (Jojo #6199)
- [x] Show stricken info (user, date) on edit docket record meta view (Jojo #6203)
- [x] Public-facing docket record should show stricken docket entries (Jojo #6203)
- [x] Add `(STRICKEN)` to Document View (list and in document title) if a document is stricken (Jojo)
## Definition of Done (Updated 7-2-20)
**Product Owner**
- [x] Acceptance criteria have been met
**UX**
- [x] Business test scenarios to meet all acceptance criteria have been written
- [x] Usability has been validated
- [x] Wiki has been updated (if applicable)
- [x] Story has been tested on a mobile device (for external users only)
**Engineering**
- [x] Automated test scripts have been written (Jojo #6210)
- [x] Field level and page level validation errors (front-end and server-side) integrated and functioning
- [x] Paired w/Mark on data migration work (if applicable)
- [x] Verify that language for docket record for internal users and external users is identical
- [x] New screens have been added to pa11y scripts
- [x] All new functionality verified to work with keyboard and macOS voiceover https://www.apple.com/voiceover/info/guide/_1124.html
- [x] READMEs, other appropriate docs, JSDocs and swagger/APIs fully updated
- [x] UI should be touch optimized and responsive for external only (functions on supported mobile devices and optimized for screen sizes as required)
- [x] Module dependencies are up-to-date and are at the latest resolvable version (npm update)
- [x] Errors in Sonarcloud are fixed https://sonarcloud.io/organizations/flexion-github/projects
- [x] Lambdas include CloudWatch logging of users, inputs and outputs
- [x] Interactors should validate entities before calling persistence methods
- [x] Code refactored for clarity and to remove any known technical debt
- [x] Rebuild entity documentation
- [x] Acceptance criteria for the story has been met
- [x] Deployed to the dev environment
- [x] Deployed to the stage environment
|
1.0
|
Docket Clerk: Strike a Document - As a Docket Clerk, in order to manage the docket record, I need the ability to strike an item from the record.
## Pre-Conditions
## Acceptance Criteria
* Docket Clerk can "strike" [this is a term of art at the Court] an item from the docket record
* When the item has been stricken, it remains on the docket record with the date and notice of proceedings crossed out and (STRICKEN)" in parens is at the end
* Once an item is stricken, the document is still viewable to all Court personnel (hyperlink still active), but is not visible to parties or public (no hyperlink)
* User cannot unstrike a document
## Notes
* Similar to #5506 from data migration
## Mobile Design/Considerations
## Security Considerations
- [ ] Does this work make you nervous about privacy or security?
- [ ] Does this work make major changes to the system?
- [ ] Does this work implement new authentication or security controls?
- [ ] Does this work create new methods of authentication, modify existing security controls, or explicitly implement any security or privacy features?
## Tasks
- [x] Strike entry link and modal (Jojo #6197)
- [x] Strike proxy, lambda, interactor, and persistence (Jojo #6199)
- [x] Display success message after document is stricken (Jojo #6199)
- [x] Show stricken info (user, date) on edit docket record meta view (Jojo #6203)
- [x] Public-facing docket record should show stricken docket entries (Jojo #6203)
- [x] Add `(STRICKEN)` to Document View (list and in document title) if a document is stricken (Jojo)
## Definition of Done (Updated 7-2-20)
**Product Owner**
- [x] Acceptance criteria have been met
**UX**
- [x] Business test scenarios to meet all acceptance criteria have been written
- [x] Usability has been validated
- [x] Wiki has been updated (if applicable)
- [x] Story has been tested on a mobile device (for external users only)
**Engineering**
- [x] Automated test scripts have been written (Jojo #6210)
- [x] Field level and page level validation errors (front-end and server-side) integrated and functioning
- [x] Paired w/Mark on data migration work (if applicable)
- [x] Verify that language for docket record for internal users and external users is identical
- [x] New screens have been added to pa11y scripts
- [x] All new functionality verified to work with keyboard and macOS voiceover https://www.apple.com/voiceover/info/guide/_1124.html
- [x] READMEs, other appropriate docs, JSDocs and swagger/APIs fully updated
- [x] UI should be touch optimized and responsive for external only (functions on supported mobile devices and optimized for screen sizes as required)
- [x] Module dependencies are up-to-date and are at the latest resolvable version (npm update)
- [x] Errors in Sonarcloud are fixed https://sonarcloud.io/organizations/flexion-github/projects
- [x] Lambdas include CloudWatch logging of users, inputs and outputs
- [x] Interactors should validate entities before calling persistence methods
- [x] Code refactored for clarity and to remove any known technical debt
- [x] Rebuild entity documentation
- [x] Acceptance criteria for the story has been met
- [x] Deployed to the dev environment
- [x] Deployed to the stage environment
|
design
|
docket clerk strike a document as a docket clerk in order to manage the docket record i need the ability to strike an item from the record pre conditions acceptance criteria docket clerk can strike an item from the docket record when the item has been stricken it remains on the docket record with the date and notice of proceedings crossed out and stricken in parens is at the end once an item is stricken the document is still viewable to all court personnel hyperlink still active but is not visable to parties or public no hyperlink user cannot unstrike a document notes similar to from data migration mobile design considerations security considerations does this work make you nervous about privacy or security does this work make major changes to the system does this work implement new authentication or security controls does this work create new methods of authentication modify existing security controls or explicitly implement any security or privacy features tasks strike entry link and modal jojo strike proxy lambda interactor and persistence jojo display success message after document is stricken jojo show stricken info user date on edit docket record meta view jojo public facing docket record should show stricken docket entries jojo add stricken to document view list and in document title if a document is stricken jojo definition of done updated product owner acceptance criteria have been met ux business test scenarios to meet all acceptance criteria have been written usability has been validated wiki has been updated if applicable story has been tested on a mobile device for external users only engineering automated test scripts have been written jojo field level and page level validation errors front end and server side integrated and functioning paired w mark on data migration work if applicable verify that language for docket record for internal users and external users is identical new screens have been added to scripts all new functionality verified to work 
with keyboard and macos voiceover readmes other appropriate docs jsdocs and swagger apis fully updated ui should be touch optimized and responsive for external only functions on supported mobile devices and optimized for screen sizes as required module dependencies are up to date and are at the latest resolvable version npm update errors in sonarcloud are fixed lambdas include cloudwatch logging of users inputs and outputs interactors should validate entities before calling persistence methods code refactored for clarity and to remove any known technical debt rebuild entity documentation acceptance criteria for the story has been met deployed to the dev environment deployed to the stage environment
| 1
|
323,637
| 23,958,826,217
|
IssuesEvent
|
2022-09-12 17:09:15
|
LinuxCNC/linuxcnc
|
https://api.github.com/repos/LinuxCNC/linuxcnc
|
closed
|
new asciidoc syntax problem
|
documentation
|
An asciidoc syntax problem was introduced in @smoe 's commit fb3e8fbc31f871eb94356e7a758a05dd8f75e444:

The commit before is wrong, but in a different way:

I'm not sure what the correct asciidoc syntax should be here, but it's neither of these two ;-)
I'm also not sure if any other new bugs were introduced among all the fixes in fb3e8fbc31f871eb94356e7a758a05dd8f75e444.
|
1.0
|
new asciidoc syntax problem - An asciidoc syntax problem was introduced in @smoe 's commit fb3e8fbc31f871eb94356e7a758a05dd8f75e444:

The commit before is wrong, but in a different way:

I'm not sure what the correct asciidoc syntax should be here, but it's neither of these two ;-)
I'm also not sure if any other new bugs were introduced among all the fixes in fb3e8fbc31f871eb94356e7a758a05dd8f75e444.
|
non_design
|
new asciidoc syntax problem an asciidoc syntax problem was introduced in smoe s commit the commit before is wrong but in a different way i m not sure what the correct asciidoc syntax should be here but it s neither of these two i m also not sure if any other new bugs were introduced among all the fixes in
| 0
|
160,584
| 6,100,393,314
|
IssuesEvent
|
2017-06-20 12:30:40
|
javaee/glassfish
|
https://api.github.com/repos/javaee/glassfish
|
closed
|
Not creating junit report file for all results files in security devtests.
|
Component: security Priority: Major Type: Bug
|
There are two result files getting generated after running security devtests. One of which(security-gtest-results.xml) is getting ignored while generating junit report file.
|
1.0
|
Not creating junit report file for all results files in security devtests. - There are two result files getting generated after running security devtests. One of which(security-gtest-results.xml) is getting ignored while generating junit report file.
|
non_design
|
not creating junit report file for all results files in security devtests there are two result files getting generated after running security devtests one of which security gtest results xml is getting ignored while generating junit report file
| 0
|
114,425
| 14,578,124,671
|
IssuesEvent
|
2020-12-18 03:55:59
|
naher94/rehanbutt.com
|
https://api.github.com/repos/naher94/rehanbutt.com
|
closed
|
Button Animations
|
design development note
|
Fizzy Button:
https://codepen.io/webLeister/pen/XwGENz?editors=1100
Playful Button:
https://codepen.io/aaroniker/pen/OJPqPMR
8-Bit Button:
https://codepen.io/tstoik/pen/EjMzRZ
Ink Style Button:
https://codemyui.com/smoke-liquid-button-animation-effects/
|
1.0
|
Button Animations - Fizzy Button:
https://codepen.io/webLeister/pen/XwGENz?editors=1100
Playful Button:
https://codepen.io/aaroniker/pen/OJPqPMR
8-Bit Button:
https://codepen.io/tstoik/pen/EjMzRZ
Ink Style Button:
https://codemyui.com/smoke-liquid-button-animation-effects/
|
design
|
button animations fizzy button playful button bit button ink style button
| 1
|
14,452
| 3,399,423,580
|
IssuesEvent
|
2015-12-02 10:48:57
|
blackwatchint/blackwatchint
|
https://api.github.com/repos/blackwatchint/blackwatchint
|
opened
|
ACE v3.4.0
|
Arma 1.54 Modpack Needs Testing Update Urgent Priority
|
ACE have released version 3.4.0. This version is expected to fix issues #10 and #112.
The changelog and download can be found on the ACE 3 Github: https://github.com/acemod/ACE3/releases
|
1.0
|
ACE v3.4.0 - ACE have released version 3.4.0. This version is expected to fix issues #10 and #112.
The changelog and download can be found on the ACE 3 Github: https://github.com/acemod/ACE3/releases
|
non_design
|
ace ace have released version this version is expected to fix issues and the changelog and download can be found on the ace github
| 0
|
162,428
| 25,536,201,565
|
IssuesEvent
|
2022-11-29 12:11:09
|
Energinet-DataHub/greenforce-frontend
|
https://api.github.com/repos/Energinet-DataHub/greenforce-frontend
|
opened
|
From date in date range picker gets reset when used inside tabs
|
Frontend Mighty Ducks Bug Design system (Watt)
|
**Description:**
In a very specific scenario, where a date range picker is used in tabs, inside a drawer, the from date gets reset.
"/charges/prices" route has search results displayed in a table. Clicking a row, opens a drawer with tabs in it. Each tab has a date range picker which must display the same time period across all tabs. This is achieved by saving the selected time period in a service that is shared between all tabs.

**The issue only appears when the dates are adjusted manually.**
To reproduce the issue:
- navigate to "/charges/prices" and make sure results are displayed in the table (e.g. by serving the app in mocked state)
- click on a row to open a drawer
- open the "Network" tab in DevTools (to observe outgoing requests)
- in the date range picker under the "Prices" tab, manually edit the "from" date (e.g. by changing the month)
- a new request is sent to the BFF with the new date
- click on the "Messages" tabs in the drawer
- a new request is sent again with the new date
- in the date range picker manually edit the "to" date (e.g. by changing the month)
- a new request is sent but this time the "from" date is reset back to the initial date when the drawer was first opened

**AC:**
- [ ] the date range picker values should stay consistent when used inside tabs
**Definition of Ready:**
- [ ] The issue is correctly estimated
- [ ] The issue is adequately described
- [ ] Possible dependencies are defined and aligned
- [ ] We have the necessary skills to complete this issue
- [ ] The issue can be completed within 1 iteration
- [ ] The issue has acceptance criteria defined
- [ ] The issue has adequate Definition Of Done described
**Definition of Done:**
- [ ] Acceptance Criteria have been met
- [ ] The product has been demo'ed for relevant stakeholders
- [ ] Dependencies are handled
- [ ] The work has been documented
- [ ] The issue has been handed over and reviewed
- [ ] The PO has accepted the product
|
1.0
|
From date in date range picker gets reset when used inside tabs - **Description:**
In a very specific scenario, where a date range picker is used in tabs, inside a drawer, the from date gets reset.
"/charges/prices" route has search results displayed in a table. Clicking a row, opens a drawer with tabs in it. Each tab has a date range picker which must display the same time period across all tabs. This is achieved by saving the selected time period in a service that is shared between all tabs.

**The issue only appears when the dates are adjusted manually.**
To reproduce the issue:
- navigate to "/charges/prices" and make sure results are displayed in the table (e.g. by serving the app in mocked state)
- click on a row to open a drawer
- open the "Network" tab in DevTools (to observe outgoing requests)
- in the date range picker under the "Prices" tab, manually edit the "from" date (e.g. by changing the month)
- a new request is sent to the BFF with the new date
- click on the "Messages" tabs in the drawer
- a new request is sent again with the new date
- in the date range picker manually edit the "to" date (e.g. by changing the month)
- a new request is sent but this time the "from" date is reset back to the initial date when the drawer was first opened

**AC:**
- [ ] the date range picker values should stay consistent when used inside tabs
**Definition of Ready:**
- [ ] The issue is correctly estimated
- [ ] The issue is adequately described
- [ ] Possible dependencies are defined and aligned
- [ ] We have the necessary skills to complete this issue
- [ ] The issue can be completed within 1 iteration
- [ ] The issue has acceptance criteria defined
- [ ] The issue has adequate Definition Of Done described
**Definition of Done:**
- [ ] Acceptance Criteria have been met
- [ ] The product has been demo'ed for relevant stakeholders
- [ ] Dependencies are handled
- [ ] The work has been documented
- [ ] The issue has been handed over and reviewed
- [ ] The PO has accepted the product
|
design
|
from date in date range picker gets reset when used inside tabs description in a very specific scenario where a date range picker is used in tabs inside a drawer the from date gets reset charges prices route has search results displayed in a table clicking a row opens a drawer with tabs in it each tab has a date range picker which must display the same time period across all tabs this is achieved by saving the selected time period in a service that is shared between all tabs the issue only appears when the dates are adjusted manually to reproduce the issue navigate to charges prices and make sure results are displayed in the table e g by serving the app in mocked state click on a row to open a drawer open the network tab in devtools to observe outgoing requests in the date range picker under the prices tab manually edit the from date e g by changing the month a new request is sent to the bff with the new date click on the messages tabs in the drawer a new request is sent again with the new date in the date range picker manually edit the to date e g by changing the month a new request is sent but this time the from date is reset back to the initial date when the drawer was first opened ac the date range picker values should stay consistent when used inside tabs definition of ready the issue is correctly estimated the issue is adequately described possible dependencies are defined and aligned we have the necessary skills to complete this issue the issue can be completed withing iteration the issue has acceptance criteria defined the issue has adequate definition of done described definition of done acceptance criteria have been met the product has been demoโed for relevant stakeholders dependencies are handled the work has been documented the issue has been handed over and reviewed the po has accepted the product
| 1
|
20,495
| 6,893,809,652
|
IssuesEvent
|
2017-11-23 06:59:51
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
closed
|
Unable to build boost with error "Archive was empty"
|
build-error unreproducible
|
After successfully installing boost 1.63.0 with GCC 5.4.0, I failed to build it against Intel compiler 16.0.3. The error message says "Archive was empty for boost". However, I am pretty sure that the boost package has been downloaded and I can unarchive it manually.
Any advice on further digging into this issue? Thank you.
```
$ spack install boost %intel@16.0.3
==> Installing boost
==> bzip2 is already installed in /lustre/spack/sandybridge/linux-centos7-x86_64/intel-16.0.3/bzip2-1.0.6-qz2i6vx23esuzvr3jntzoavyewf5aucw
==> zlib is already installed in /lustre/spack/sandybridge/linux-centos7-x86_64/intel-16.0.3/zlib-1.2.10-exoivhx3ti4qedcevvrr6bcv7s7mjz5c
==> Fetching http://downloads.sourceforge.net/project/boost/boost/1.63.0/boost_1_63_0.tar.bz2
######################################################################## 100.0%
==> Already staged boost-1.63.0-dlne5jb6p33roxb3bn5edke4a3ya6b2h in /home/rpm/spack/var/spack/stage/boost-1.63.0-dlne5jb6p33roxb3bn5edke4a3ya6b2h
==> Error: StageError: Archive was empty for boost-1.63.0-dlne5jb6p33roxb3bn5edke4a3ya6b2h
/home/rpm/spack/lib/spack/spack/package.py:965, in do_stage:
957 def do_stage(self, mirror_only=False):
958 """Unpacks the fetched tarball, then changes into the expanded tarball
959 directory."""
960 if not self.spec.concrete:
961 raise ValueError("Can only stage concrete packages.")
962
963 self.do_fetch(mirror_only)
964 self.stage.expand_archive()
>> 965 self.stage.chdir_to_source()
$ ls -alh /home/rpm/spack/var/spack/stage/boost-1.63.0-dlne5jb6p33roxb3bn5edke4a3ya6b2h/
total 79M
drwx------ 3 rpm rpm 80 Feb 3 23:40 .
drwxrwxr-x 3 rpm rpm 60 Feb 3 23:33 ..
-rw-rw-r-- 1 rpm rpm 79M Feb 3 23:40 boost_1_63_0.tar.bz2
drwxrwxr-x 2 rpm rpm 40 Feb 3 23:33 spack-expanded-archive
```
|
1.0
|
Unable to build boost with error "Archive was empty" - After successfully installing boost 1.63.0 with GCC 5.4.0, I failed to build it against Intel compiler 16.0.3. The error message says "Archive was empty for boost". However, I am pretty sure that the boost package has been downloaded and I can unarchive it manually.
Any advice on further digging into this issue? Thank you.
```
$ spack install boost %intel@16.0.3
==> Installing boost
==> bzip2 is already installed in /lustre/spack/sandybridge/linux-centos7-x86_64/intel-16.0.3/bzip2-1.0.6-qz2i6vx23esuzvr3jntzoavyewf5aucw
==> zlib is already installed in /lustre/spack/sandybridge/linux-centos7-x86_64/intel-16.0.3/zlib-1.2.10-exoivhx3ti4qedcevvrr6bcv7s7mjz5c
==> Fetching http://downloads.sourceforge.net/project/boost/boost/1.63.0/boost_1_63_0.tar.bz2
######################################################################## 100.0%
==> Already staged boost-1.63.0-dlne5jb6p33roxb3bn5edke4a3ya6b2h in /home/rpm/spack/var/spack/stage/boost-1.63.0-dlne5jb6p33roxb3bn5edke4a3ya6b2h
==> Error: StageError: Archive was empty for boost-1.63.0-dlne5jb6p33roxb3bn5edke4a3ya6b2h
/home/rpm/spack/lib/spack/spack/package.py:965, in do_stage:
957 def do_stage(self, mirror_only=False):
958 """Unpacks the fetched tarball, then changes into the expanded tarball
959 directory."""
960 if not self.spec.concrete:
961 raise ValueError("Can only stage concrete packages.")
962
963 self.do_fetch(mirror_only)
964 self.stage.expand_archive()
>> 965 self.stage.chdir_to_source()
$ ls -alh /home/rpm/spack/var/spack/stage/boost-1.63.0-dlne5jb6p33roxb3bn5edke4a3ya6b2h/
total 79M
drwx------ 3 rpm rpm 80 Feb 3 23:40 .
drwxrwxr-x 3 rpm rpm 60 Feb 3 23:33 ..
-rw-rw-r-- 1 rpm rpm 79M Feb 3 23:40 boost_1_63_0.tar.bz2
drwxrwxr-x 2 rpm rpm 40 Feb 3 23:33 spack-expanded-archive
```
|
non_design
|
unable to build boost with error archive was empty after successfully installing boost with gcc i failed to buid it against intel compiler the erorro message says archive was empty for boost however i am pretty sure that the boost package has been downloaded and i can unarchive it manually any advice on further digging into this issue thank you spack install boost intel installing boost is already installed in lustre spack sandybridge linux intel zlib is already installed in lustre spack sandybridge linux intel zlib fetching already staged boost in home rpm spack var spack stage boost error stageerror archive was empty for boost home rpm spack lib spack spack package py in do stage def do stage self mirror only false unpacks the fetched tarball then changes into the expanded tarball directory if not self spec concrete raise valueerror can only stage concrete packages self do fetch mirror only self stage expand archive self stage chdir to source ls alh home rpm spack var spack stage boost total drwx rpm rpm feb drwxrwxr x rpm rpm feb rw rw r rpm rpm feb boost tar drwxrwxr x rpm rpm feb spack expanded archive
| 0
|
52,447
| 6,625,054,099
|
IssuesEvent
|
2017-09-22 14:08:12
|
enow-dev/enow
|
https://api.github.com/repos/enow-dev/enow
|
opened
|
ใใญใใฃใผใซ็ป้ขใซใณใ
|
Design
|
<!-- Feature request template -->
# Overview
# Purpose
# Proposal details
<!-- Break this down into small tasks and write them out -->
# Tasks
- [ ]
<!-- Bug report template -->
# Overview
# Reproduction steps
<!-- Write the likely cause if known -->
# Cause
# Proposed fix
|
1.0
|
ใใญใใฃใผใซ็ป้ขใซใณใ - <!-- Feature request template -->
# Overview
# Purpose
# Proposal details
<!-- Break this down into small tasks and write them out -->
# Tasks
- [ ]
<!-- Bug report template -->
# Overview
# Reproduction steps
<!-- Write the likely cause if known -->
# Cause
# Proposed fix
|
design
|
ใใญใใฃใผใซ็ป้ขใซใณใ overview purpose proposal details tasks overview reproduction steps cause proposed fix
| 1
|
169,334
| 26,782,444,815
|
IssuesEvent
|
2023-01-31 22:30:16
|
quantumlib/Cirq
|
https://api.github.com/repos/quantumlib/Cirq
|
closed
|
cirq.M and cirq.R as aliases for measure and reset
|
kind/design-issue
|
<!--
Note: this is an open ended discussion that may or may not become a feature.
If you are blocked by this, please raise a feature request instead.
-->
**Is your design idea/issue related to a use case or problem? Please describe.**
I was watching someone construct a cirq Circuit, and they tried to type `cirq.M` and `cirq.R` for measure and reset, which didn't work. However, this seems like it might be a reasonable thing to support, since M and R are often in used in Circuit diagrams in similar manner as H and X?
**Describe your design idea/issue**
Make cirq.M and cirq.R aliases for `cirq.measure` and `cirq.reset`.
|
1.0
|
cirq.M and cirq.R as aliases for measure and reset - <!--
Note: this is an open ended discussion that may or may not become a feature.
If you are blocked by this, please raise a feature request instead.
-->
**Is your design idea/issue related to a use case or problem? Please describe.**
I was watching someone construct a cirq Circuit, and they tried to type `cirq.M` and `cirq.R` for measure and reset, which didn't work. However, this seems like it might be a reasonable thing to support, since M and R are often in used in Circuit diagrams in similar manner as H and X?
**Describe your design idea/issue**
Make cirq.M and cirq.R aliases for `cirq.measure` and `cirq.reset`.
|
design
|
cirq m and cirq r as aliases for measure and reset note this is an open ended discussion that may or may not become a feature if you are blocked by this please raise a feature request instead is your design idea issue related to a use case or problem please describe i was watching someone construct a cirq circuit and they tried to type cirq m and cirq r for measure and reset which didn t work however this seems like it might be a reasonable thing to support since m and r are often in used in circuit diagrams in similar manner as h and x describe your design idea issue make cirq m and cirq r aliases for cirq measure and cirq reset
| 1
|
286,349
| 21,572,855,880
|
IssuesEvent
|
2022-05-02 10:21:55
|
pytorch/fairseq
|
https://api.github.com/repos/pytorch/fairseq
|
closed
|
BART pretraining instructions
|
documentation stale
|
Hi, Is there any pretrained BART model for Japanese? If not, could you please explain the procedure to train new BART model for Japanese data from scratch?
|
1.0
|
BART pretraining instructions - Hi, Is there any pretrained BART model for Japanese? If not, could you please explain the procedure to train new BART model for Japanese data from scratch?
|
non_design
|
bart pretraining instructions hi is there any pretrained bart model for japanese if not could you please explain the procedure to train new bart model for japanese data from scratch
| 0
|
44,192
| 12,034,793,085
|
IssuesEvent
|
2020-04-13 16:41:57
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
SAML logged-in user getting logged out and work lost.
|
Defect DevOps Needs grooming โญ๏ธ Infrastructure
|
## Background
[See Slack message](https://dsva.slack.com/archives/CDHBKAL9W/p1582235071113200?thread_ts=1582235071113200&cid=CDHBKAL9W) from @jenniferlee-dsva on February 20.
To do
*
Questions to look at
* session length
* where data is stored
* audit how many users are logging in with PIV vs drupal user/password
* enforce PIV?
Possible solutions, depending on the problem
* Patch drupal module?
* move data storage?
|
1.0
|
SAML logged-in user getting logged out and work lost. - ## Background
[See Slack message](https://dsva.slack.com/archives/CDHBKAL9W/p1582235071113200?thread_ts=1582235071113200&cid=CDHBKAL9W) from @jenniferlee-dsva on February 20.
To do
*
Questions to look at
* session length
* where data is stored
* audit how many users are logging in with PIV vs drupal user/password
* enforce PIV?
Possible solutions, depending on the problem
* Patch drupal module?
* move data storage?
|
non_design
|
saml logged in user getting logged out and work lost background from jenniferlee dsva on february to do questions to look at session length where data is stored audit how many users are logging in with piv vs drupal user password enforce piv possible solutions depending on the problem patch drupal module move data storage
| 0
|
158,905
| 24,915,047,114
|
IssuesEvent
|
2022-10-30 09:51:03
|
zeeguu/browser-extension
|
https://api.github.com/repos/zeeguu/browser-extension
|
opened
|
remember pronunciation preference
|
design-decision
|
if a user turns on pronunciation together with translation, save this option in the localstorage and enable it next time. for me, at least, if I read Danish, I always like to hear the translations
<img width="796" alt="image" src="https://user-images.githubusercontent.com/464519/198872592-b99234a2-7a3f-4f69-94e8-5b8d527d316f.png">
|
1.0
|
remember pronunciation preference - if a user turns on pronunciation together with translation, save this option in the localstorage and enable it next time. for me, at least, if I read Danish, I always like to hear the translations
<img width="796" alt="image" src="https://user-images.githubusercontent.com/464519/198872592-b99234a2-7a3f-4f69-94e8-5b8d527d316f.png">
|
design
|
remember pronunciation preference if a user turns on pronunciation together with translation save this option in the localstorage and enable it next time for me at least if i read danish i always like to hear the translations img width alt image src
| 1
|
72,193
| 8,710,892,801
|
IssuesEvent
|
2018-12-06 17:36:11
|
SpineEventEngine/SpineEventEngine.github.io
|
https://api.github.com/repos/SpineEventEngine/SpineEventEngine.github.io
|
closed
|
Change the icon for the feature card
|
design
|
Go to the `about` page and change current icon for the "Choice of Storage and Deployment Platforms" to this https://fontawesome.com/icons/hexagon?style=regular
|
1.0
|
Change the icon for the feature card - Go to the `about` page and change current icon for the "Choice of Storage and Deployment Platforms" to this https://fontawesome.com/icons/hexagon?style=regular
|
design
|
change the icon for the feature card go to the about page and change current icon for the choice of storage and deployment platforms to this
| 1
|
155,271
| 13,616,963,248
|
IssuesEvent
|
2020-09-23 16:18:55
|
metal3-io/cluster-api-provider-metal3
|
https://api.github.com/repos/metal3-io/cluster-api-provider-metal3
|
closed
|
Node.Spec.ProviderID not updated after Move
|
documentation good first issue help wanted kind/bug kind/documentation
|
After a clusterctl move from say a bootstrap cluster to a target cluster, the K.Node.Spec.ProviderID still contains the UID of the old BMH object.
Simply updating the Node.Spec.ProviderID is not currently allowed as validation checks prevent Node.Spec.ProviderID to be changed after it has been set to a non-nil value (see https://github.com/kubernetes/kubernetes/pull/51761)
```Forbidden: node updates may not change providerID except from "" to valid, []: ```
Possible Solutions:
1) Provide an option to have ProviderID be something other than UID of BMH -- for example: BMH 'Namespace/Name' may be unique for most environments.
2) Don't use BMH.UID which is generated automatically during object creation but instead create a new field (say BMH.ProviderID) that we can populate with a random id. This field would be copied over as part of the move.
3) Open an issue with K/K to allow updates to Node.Spec.ProviderID -- ie. Relax the validation checks.
/kind bug
xref: https://github.com/metal3-io/baremetal-operator/issues/431
@kashifest @maelk
|
2.0
|
Node.Spec.ProviderID not updated after Move - After a clusterctl move from say a bootstrap cluster to a target cluster, the K.Node.Spec.ProviderID still contains the UID of the old BMH object.
Simply updating the Node.Spec.ProviderID is not currently allowed as validation checks prevent Node.Spec.ProviderID to be changed after it has been set to a non-nil value (see https://github.com/kubernetes/kubernetes/pull/51761)
```Forbidden: node updates may not change providerID except from "" to valid, []: ```
Possible Solutions:
1) Provide an option to have ProviderID be something other than UID of BMH -- for example: BMH 'Namespace/Name' may be unique for most environments.
2) Don't use BMH.UID which is generated automatically during object creation but instead create a new field (say BMH.ProviderID) that we can populate with a random id. This field would be copied over as part of the move.
3) Open an issue with K/K to allow updates to Node.Spec.ProviderID -- i.e. relax the validation checks.
/kind bug
xref: https://github.com/metal3-io/baremetal-operator/issues/431
@kashifest @maelk
|
non_design
|
node spec providerid not updated after move after a clusterctl move from say a bootstrap cluster to a target cluster the k node spec providerid still contains the uid of the old bmh object simply updating the node spec providerid is not currently allowed as validation checks prevent node spec providerid to be changed after it has been set to a non nil value see forbidden node updates may not change providerid except from to valid possible solutions provide an option to have providerid be something other than uid of bmh for example bmh namespace name may be unique for most environments don t use bmh uid which is generated automatically during object creation but instead create a new field say bmh providerid that we can populate with a random id this field would be copied over as part of the move open an issue with k k to allow updates to node spec providerid ie relax the validation checks kind bug xref kashifest maelk
| 0
|
176,904
| 28,292,554,102
|
IssuesEvent
|
2023-04-09 11:55:58
|
kihyeoon/life-chart
|
https://api.github.com/repos/kihyeoon/life-chart
|
closed
|
Define components with storybook
|
๐ design โจ feat
|
## Purpose
> Please briefly describe the purpose of this feature work
Define the small components to be used as building blocks first, atomically.
<br><br>
## Details
> Please describe the details of the feature to be implemented
<br><br>
## Notes
> If there is anything worth referencing, please share it
<br><br>
|
1.0
|
Define components with storybook - ## Purpose
> Please briefly describe the purpose of this feature work
Define the small components to be used as building blocks first, atomically.
<br><br>
## Details
> Please describe the details of the feature to be implemented
<br><br>
## Notes
> If there is anything worth referencing, please share it
<br><br>
|
design
|
define components with storybook purpose please briefly describe the purpose of this feature work define the small components to be used as building blocks first atomically details please describe the details of the feature to be implemented notes if there is anything worth referencing please share it
| 1
|
423,371
| 12,295,078,460
|
IssuesEvent
|
2020-05-11 02:39:40
|
UQ-RCC/nimrodg
|
https://api.github.com/repos/UQ-RCC/nimrodg
|
closed
|
Resources can be deleted when assigned.
|
bug priority
|
This was intended behaviour if assigned experiments were stopped, but not when they're running.
|
1.0
|
Resources can be deleted when assigned. - This was intended behaviour if assigned experiments were stopped, but not when they're running.
|
non_design
|
resources can be deleted when assigned this was intended behaviour if assigned experiments were stopped but not when they re running
| 0
|
78,480
| 27,549,072,829
|
IssuesEvent
|
2023-03-07 13:50:35
|
hazelcast/hazelcast-cpp-client
|
https://api.github.com/repos/hazelcast/hazelcast-cpp-client
|
closed
|
Nested exception is not accessible via `std::rethrow_if_nested`
|
Type: Defect to-jira
|
C++ compiler version: GCC 11.3.0
Hazelcast Cpp client version: 5.0.0
Hazelcast server version: -
Number of the clients: -
Cluster size, i.e. the number of Hazelcast cluster members:
OS version (Windows/Linux/OSX):
Ubuntu 22.04 (Boost version is 1.76.0)
Please attach relevant logs and files for client and server side.
#### Expected behaviour
Nested exception should be accessible via `std::rethrow_if_nested`.
#### Actual behaviour
`std::rethrow_if_nested` doesn't throw the inner exception.
#### Steps to reproduce the behaviour
Take a look at [here](https://github.com/hazelcast/hazelcast-cpp-client/blob/master/hazelcast/test/src/HazelcastTests7.cpp#L1868-L1874)
Run `cloud_discovery_test.token_should_not_be_leaked` and `std::rethrow_if_nested` doesn't throw anything.
|
1.0
|
Nested exception is not accessible via `std::rethrow_if_nested` - C++ compiler version: GCC 11.3.0
Hazelcast Cpp client version: 5.0.0
Hazelcast server version: -
Number of the clients: -
Cluster size, i.e. the number of Hazelcast cluster members:
OS version (Windows/Linux/OSX):
Ubuntu 22.04 (Boost version is 1.76.0)
Please attach relevant logs and files for client and server side.
#### Expected behaviour
Nested exception should be accessible via `std::rethrow_if_nested`.
#### Actual behaviour
`std::rethrow_if_nested` doesn't throw the inner exception.
#### Steps to reproduce the behaviour
Take a look at [here](https://github.com/hazelcast/hazelcast-cpp-client/blob/master/hazelcast/test/src/HazelcastTests7.cpp#L1868-L1874)
Run `cloud_discovery_test.token_should_not_be_leaked` and `std::rethrow_if_nested` doesn't throw anything.
|
non_design
|
nested exception is not accessible via std rethrow if nested c compiler version gcc hazelcast cpp client version hazelcast server version number of the clients cluster size i e the number of hazelcast cluster members os version windows linux osx ubuntu boost version is please attach relevant logs and files for client and server side expected behaviour nested exception should be accessible via std rethrow if nested actual behaviour std rethrow if nested doesn t throw the inner exception steps to reproduce the behaviour take a look at run cloud discovery test token should not be leaked and std rethrow if nested doesn t throw anything
| 0
|
10,734
| 6,898,485,840
|
IssuesEvent
|
2017-11-24 09:43:07
|
surveyjs/editor
|
https://api.github.com/repos/surveyjs/editor
|
closed
|
Focus the inplace editor in the property grid immediately on selecting the property
|
Implemented usability issue
|
It requires making two clicks to start editing the property. However, it should be done in one click.
|
True
|
Focus the inplace editor in the property grid immediately on selecting the property - It requires making two clicks to start editing the property. However, it should be done in one click.
|
non_design
|
focus the inplace editor in the property grid immediately on selecting the property it requires making two clicks to start editing the property however it should be done in one click
| 0
|
594,027
| 18,022,188,294
|
IssuesEvent
|
2021-09-16 21:00:53
|
coyim/coyim
|
https://api.github.com/repos/coyim/coyim
|
closed
|
Bottom of conversation view should scroll down when new messages arrive
|
basic im feature MUC Priority: Now Improvements Estimate - small State: Done pre-release 0.4
|
The auto-scroll should only work when you are seeing the last message sent, and when you send a new message.
|
1.0
|
Bottom of conversation view should scroll down when new messages arrive - The auto-scroll should only work when you are seeing the last message sent, and when you send a new message.
|
non_design
|
bottom of conversation view should scroll down when new messages arrive the auto scroll should only work when you are seeing the last message sent and when you send a new message
| 0
|
408,487
| 11,947,884,626
|
IssuesEvent
|
2020-04-03 10:45:08
|
geosolutions-it/geonode
|
https://api.github.com/repos/geosolutions-it/geonode
|
opened
|
Monitoring Collect metrics jobs pile up until total consumption of memory
|
Priority: Blocker analytics investigation monitoring
|
This has been reported both for Malawi and UNESCO.
In the latter case the report is self explanatory: https://app.zenhub.com/workspaces/support-kanban-board-5cb856a728ffb631f619c6de/issues/geosolutions-it/support/493
The spawned collect_metrics commands pile up and consume all the available RAM.
|
1.0
|
Monitoring Collect metrics jobs pile up until total consumption of memory - This has been reported both for Malawi and UNESCO.
In the latter case the report is self explanatory: https://app.zenhub.com/workspaces/support-kanban-board-5cb856a728ffb631f619c6de/issues/geosolutions-it/support/493
The spawned collect_metrics commands pile up and consume all the available RAM.
|
non_design
|
monitoring collect metrics jobs pile up until total consumption of memory this has been reported both for malawi and unesco in the latter case the report is self explanatory the spawned collect metrics commands pile up and consume all the available ram
| 0
|
273,801
| 8,552,912,054
|
IssuesEvent
|
2018-11-07 22:35:42
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
closed
|
Allow deferring clicks on elements in amp-story-grid-layer
|
Category: AMP Story P1: High Priority Type: Feature Request
|
When an element is clicked in `amp-story-grid-layer`, we would want to defer its action until later (after the user has provided a confirmation that they would like to take the action).
|
1.0
|
Allow deferring clicks on elements in amp-story-grid-layer - When an element is clicked in `amp-story-grid-layer`, we would want to defer its action until later (after the user has provided a confirmation that they would like to take the action).
|
non_design
|
allow deferring clicks on elements in amp story grid layer when an element is clicked in amp story grid layer we would want to defer its action until later after the user has provided a confirmation that they would like to take the action
| 0
|
343,317
| 10,328,039,705
|
IssuesEvent
|
2019-09-02 08:35:41
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
outlook.live.com - see bug description
|
browser-focus-geckoview engine-gecko priority-critical
|
<!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://outlook.live.com/owa/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: always disconnected.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
outlook.live.com - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://outlook.live.com/owa/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: always disconnected.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_design
|
outlook live com see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description always disconnected steps to reproduce browser configuration none from with ❤️
| 0
|
599,401
| 18,272,725,942
|
IssuesEvent
|
2021-10-04 15:19:16
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Fix Missing/Broken icons in New Wallet
|
priority/P3 QA/No release-notes/exclude feature/wallet OS/Desktop
|
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
1) Currently the `ETH` icon is not showing up for the `Portfolio` subview page.
2) Currently token icons are broken on the `Accounts` subview page.


|
1.0
|
Fix Missing/Broken icons in New Wallet - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
1) Currently the `ETH` icon is not showing up for the `Portfolio` subview page.
2) Currently token icons are broken on the `Accounts` subview page.


|
non_design
|
fix missing broken icons in new wallet have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description currently the eth icon is not showing up for the portfolio subview page currently token icons are broken on the accounts subview page
| 0
|
188,545
| 22,046,610,510
|
IssuesEvent
|
2022-05-30 02:59:10
|
sshivananda/ts-sqs-consumer
|
https://api.github.com/repos/sshivananda/ts-sqs-consumer
|
closed
|
CVE-2020-8203 (High) detected in lodash-4.17.15.tgz - autoclosed
|
security vulnerability
|
## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/ts-sqs-consumer/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/ts-sqs-consumer/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- cucumber-6.0.5.tgz (Root Library)
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sshivananda/ts-sqs-consumer/commit/8e86a2adfdf841f4ff57d761e7ba0359998b420d">8e86a2adfdf841f4ff57d761e7ba0359998b420d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash <= 4.17.15.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-8203 (High) detected in lodash-4.17.15.tgz - autoclosed - ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/ts-sqs-consumer/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/ts-sqs-consumer/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- cucumber-6.0.5.tgz (Root Library)
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sshivananda/ts-sqs-consumer/commit/8e86a2adfdf841f4ff57d761e7ba0359998b420d">8e86a2adfdf841f4ff57d761e7ba0359998b420d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash <= 4.17.15.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_design
|
cve high detected in lodash tgz autoclosed cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file tmp ws scm ts sqs consumer package json path to vulnerable library tmp ws scm ts sqs consumer node modules lodash package json dependency hierarchy cucumber tgz root library x lodash tgz vulnerable library found in head commit a href vulnerability details prototype pollution attack when using zipobjectdeep in lodash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
301,749
| 26,094,094,762
|
IssuesEvent
|
2022-12-26 16:10:30
|
red-hat-storage/ocs-ci
|
https://api.github.com/repos/red-hat-storage/ocs-ci
|
closed
|
test_bidirectional_bucket_replication failed with AssertionError
|
TestCase failing Squad/Red
|
Run details:
URL: https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#OCS/launches/362/7015/292448/292491/292492/log
Run ID: 1670007275
Test Case: test_bidirectional_bucket_replication
ODF Build: 4.12.0-120
OCP Version: 4.12
Job name: AZURE IPI 3AZ RHCOS 3M 3W tier1 or tier_after_upgrade post upgrade
Jenkins job: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/6280/
Logs URL: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-004zi3c33-uba/j-004zi3c33-uba_20221202T083924/logs/
Failure Details:
```
Message: AssertionError
Type: None
Text:
self = <ocs_ci.ocs.resources.mcg.MCG object at 0x7f4c2e182e50>
backingstore_name = 'azure-backingstore-8cebfd182d164482a96bd'
desired_state = 'OPTIMAL', timeout = 600
def check_backingstore_state(self, backingstore_name, desired_state, timeout=600):
"""
Checks whether the backing store reached a specific state
Args:
backingstore_name (str): Name of the backing store to be checked
desired_state (str): The desired state of the backing store
timeout (int): Number of seconds for timeout which will be used
in the checks used in this function.
Returns:
bool: Whether the backing store has reached the desired state
"""
def _check_state():
sysinfo = self.read_system()
for pool in sysinfo.get("pools"):
if pool.get("name") in backingstore_name:
current_state = pool.get("mode")
logger.info(
f"Current state of backingstore {backingstore_name} "
f"is {current_state}"
)
if current_state == desired_state:
return True
return False
try:
> for reached_state in TimeoutSampler(timeout, 10, _check_state):
ocs_ci/ocs/resources/mcg.py:830:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ocs_ci.utility.utils.TimeoutSampler object at 0x7f4c01d02b80>
def __iter__(self):
if self.start_time is None:
self.start_time = time.time()
while True:
self.last_sample_time = time.time()
if self.timeout <= (self.last_sample_time - self.start_time):
> raise self.timeout_exc_cls(*self.timeout_exc_args)
E ocs_ci.ocs.exceptions.TimeoutExpiredError: Timed out after 600s running _check_state()
ocs_ci/utility/utils.py:1173: TimeoutExpiredError
During handling of the above exception, another exception occurred:
self = <tests.manage.mcg.test_bucket_replication.TestReplication object at 0x7f4bdfb73d90>
awscli_pod_session = <ocs_ci.ocs.resources.pod.Pod object at 0x7f4c05a02490>
mcg_obj_session = <ocs_ci.ocs.resources.mcg.MCG object at 0x7f4bdc8caf70>
bucket_factory = <function bucket_factory_fixture.<locals>._create_buckets at 0x7f4bda131670>
first_bucketclass = {'backingstore_dict': {'aws': [(1, 'eu-central-1')]}, 'interface': 'OC'}
second_bucketclass = {'backingstore_dict': {'azure': [(1, None)]}, 'interface': 'OC'}
test_directory_setup = SetupDirs(origin_dir='test_bidirectional_bucket_replication[AWStoAZURE-BS-OC]/origin', result_dir='test_bidirectional_bucket_replication[AWStoAZURE-BS-OC]/result')
@pytest.mark.parametrize(
argnames=["first_bucketclass", "second_bucketclass"],
argvalues=[
pytest.param(
{
"interface": "OC",
"backingstore_dict": {"aws": [(1, "eu-central-1")]},
},
{"interface": "OC", "backingstore_dict": {"azure": [(1, None)]}},
marks=[tier1, pytest.mark.polarion_id("OCS-2683")],
),
],
ids=[
"AWStoAZURE-BS-OC",
],
)
def test_bidirectional_bucket_replication(
self,
awscli_pod_session,
mcg_obj_session,
bucket_factory,
first_bucketclass,
second_bucketclass,
test_directory_setup,
):
"""
Test bidirectional bucket replication using CLI and YAML
"""
first_bucket_name = bucket_factory(bucketclass=first_bucketclass)[0].name
replication_policy = ("basic-replication-rule", first_bucket_name, None)
> second_bucket_name = bucket_factory(
1, bucketclass=second_bucketclass, replication_policy=replication_policy
)[0].name
tests/manage/mcg/test_bucket_replication.py:252:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/conftest.py:2557: in _create_buckets
bucketclass if bucketclass is None else bucket_class_factory(bucketclass)
ocs_ci/ocs/resources/bucketclass.py:169: in _create_bucket_class
for backingstore in backingstore_factory(
ocs_ci/ocs/resources/backingstore.py:356: in _create_backingstore
mcg_obj.check_backingstore_state(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ocs_ci.ocs.resources.mcg.MCG object at 0x7f4c2e182e50>
backingstore_name = 'azure-backingstore-8cebfd182d164482a96bd'
desired_state = 'OPTIMAL', timeout = 600
def check_backingstore_state(self, backingstore_name, desired_state, timeout=600):
"""
Checks whether the backing store reached a specific state
Args:
backingstore_name (str): Name of the backing store to be checked
desired_state (str): The desired state of the backing store
timeout (int): Number of seconds for timeout which will be used
in the checks used in this function.
Returns:
bool: Whether the backing store has reached the desired state
"""
def _check_state():
sysinfo = self.read_system()
for pool in sysinfo.get("pools"):
if pool.get("name") in backingstore_name:
current_state = pool.get("mode")
logger.info(
f"Current state of backingstore {backingstore_name} "
f"is {current_state}"
)
if current_state == desired_state:
return True
return False
try:
for reached_state in TimeoutSampler(timeout, 10, _check_state):
if reached_state:
logger.info(
f"BackingStore {backingstore_name} reached state "
f"{desired_state}."
)
return True
else:
logger.info(
f"Waiting for BackingStore {backingstore_name} to "
f"reach state {desired_state}..."
)
except TimeoutExpiredError:
logger.error(
f"The BackingStore did not reach the desired state "
f"{desired_state} within the time limit."
)
> assert False
E AssertionError
ocs_ci/ocs/resources/mcg.py:847: AssertionError
```
|
1.0
|
test_bidirectional_bucket_replication failed with AssertionError - Run details:
URL: https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#OCS/launches/362/7015/292448/292491/292492/log
Run ID: 1670007275
Test Case: test_bidirectional_bucket_replication
ODF Build: 4.12.0-120
OCP Version: 4.12
Job name: AZURE IPI 3AZ RHCOS 3M 3W tier1 or tier_after_upgrade post upgrade
Jenkins job: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/6280/
Logs URL: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-004zi3c33-uba/j-004zi3c33-uba_20221202T083924/logs/
Failure Details:
```
Message: AssertionError
Type: None
Text:
self = <ocs_ci.ocs.resources.mcg.MCG object at 0x7f4c2e182e50>
backingstore_name = 'azure-backingstore-8cebfd182d164482a96bd'
desired_state = 'OPTIMAL', timeout = 600
def check_backingstore_state(self, backingstore_name, desired_state, timeout=600):
"""
Checks whether the backing store reached a specific state
Args:
backingstore_name (str): Name of the backing store to be checked
desired_state (str): The desired state of the backing store
timeout (int): Number of seconds for timeout which will be used
in the checks used in this function.
Returns:
bool: Whether the backing store has reached the desired state
"""
def _check_state():
sysinfo = self.read_system()
for pool in sysinfo.get("pools"):
if pool.get("name") in backingstore_name:
current_state = pool.get("mode")
logger.info(
f"Current state of backingstore {backingstore_name} "
f"is {current_state}"
)
if current_state == desired_state:
return True
return False
try:
> for reached_state in TimeoutSampler(timeout, 10, _check_state):
ocs_ci/ocs/resources/mcg.py:830:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ocs_ci.utility.utils.TimeoutSampler object at 0x7f4c01d02b80>
def __iter__(self):
if self.start_time is None:
self.start_time = time.time()
while True:
self.last_sample_time = time.time()
if self.timeout <= (self.last_sample_time - self.start_time):
> raise self.timeout_exc_cls(*self.timeout_exc_args)
E ocs_ci.ocs.exceptions.TimeoutExpiredError: Timed out after 600s running _check_state()
ocs_ci/utility/utils.py:1173: TimeoutExpiredError
During handling of the above exception, another exception occurred:
self = <tests.manage.mcg.test_bucket_replication.TestReplication object at 0x7f4bdfb73d90>
awscli_pod_session = <ocs_ci.ocs.resources.pod.Pod object at 0x7f4c05a02490>
mcg_obj_session = <ocs_ci.ocs.resources.mcg.MCG object at 0x7f4bdc8caf70>
bucket_factory = <function bucket_factory_fixture.<locals>._create_buckets at 0x7f4bda131670>
first_bucketclass = {'backingstore_dict': {'aws': [(1, 'eu-central-1')]}, 'interface': 'OC'}
second_bucketclass = {'backingstore_dict': {'azure': [(1, None)]}, 'interface': 'OC'}
test_directory_setup = SetupDirs(origin_dir='test_bidirectional_bucket_replication[AWStoAZURE-BS-OC]/origin', result_dir='test_bidirectional_bucket_replication[AWStoAZURE-BS-OC]/result')
@pytest.mark.parametrize(
argnames=["first_bucketclass", "second_bucketclass"],
argvalues=[
pytest.param(
{
"interface": "OC",
"backingstore_dict": {"aws": [(1, "eu-central-1")]},
},
{"interface": "OC", "backingstore_dict": {"azure": [(1, None)]}},
marks=[tier1, pytest.mark.polarion_id("OCS-2683")],
),
],
ids=[
"AWStoAZURE-BS-OC",
],
)
def test_bidirectional_bucket_replication(
self,
awscli_pod_session,
mcg_obj_session,
bucket_factory,
first_bucketclass,
second_bucketclass,
test_directory_setup,
):
"""
Test bidirectional bucket replication using CLI and YAML
"""
first_bucket_name = bucket_factory(bucketclass=first_bucketclass)[0].name
replication_policy = ("basic-replication-rule", first_bucket_name, None)
> second_bucket_name = bucket_factory(
1, bucketclass=second_bucketclass, replication_policy=replication_policy
)[0].name
tests/manage/mcg/test_bucket_replication.py:252:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/conftest.py:2557: in _create_buckets
bucketclass if bucketclass is None else bucket_class_factory(bucketclass)
ocs_ci/ocs/resources/bucketclass.py:169: in _create_bucket_class
for backingstore in backingstore_factory(
ocs_ci/ocs/resources/backingstore.py:356: in _create_backingstore
mcg_obj.check_backingstore_state(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ocs_ci.ocs.resources.mcg.MCG object at 0x7f4c2e182e50>
backingstore_name = 'azure-backingstore-8cebfd182d164482a96bd'
desired_state = 'OPTIMAL', timeout = 600
def check_backingstore_state(self, backingstore_name, desired_state, timeout=600):
"""
Checks whether the backing store reached a specific state
Args:
backingstore_name (str): Name of the backing store to be checked
desired_state (str): The desired state of the backing store
timeout (int): Number of seconds for timeout which will be used
in the checks used in this function.
Returns:
bool: Whether the backing store has reached the desired state
"""
def _check_state():
sysinfo = self.read_system()
for pool in sysinfo.get("pools"):
if pool.get("name") in backingstore_name:
current_state = pool.get("mode")
logger.info(
f"Current state of backingstore {backingstore_name} "
f"is {current_state}"
)
if current_state == desired_state:
return True
return False
try:
for reached_state in TimeoutSampler(timeout, 10, _check_state):
if reached_state:
logger.info(
f"BackingStore {backingstore_name} reached state "
f"{desired_state}."
)
return True
else:
logger.info(
f"Waiting for BackingStore {backingstore_name} to "
f"reach state {desired_state}..."
)
except TimeoutExpiredError:
logger.error(
f"The BackingStore did not reach the desired state "
f"{desired_state} within the time limit."
)
> assert False
E AssertionError
ocs_ci/ocs/resources/mcg.py:847: AssertionError
```
|
non_design
|
test bidirectional bucket replication failed with assertionerror run details url run id test case test bidirectional bucket replication odf build ocp version job name azure ipi rhcos or tier after upgrade post upgrade jenkins job logs url failure details message assertionerror type none text self backingstore name azure backingstore desired state optimal timeout def check backingstore state self backingstore name desired state timeout checks whether the backing store reached a specific state args backingstore name str name of the backing store to be checked desired state str the desired state of the backing store timeout int number of seconds for timeout which will be used in the checks used in this function returns bool whether the backing store has reached the desired state def check state sysinfo self read system for pool in sysinfo get pools if pool get name in backingstore name current state pool get mode logger info f current state of backingstore backingstore name f is current state if current state desired state return true return false try for reached state in timeoutsampler timeout check state ocs ci ocs resources mcg py self def iter self if self start time is none self start time time time while true self last sample time time time if self timeout self last sample time self start time raise self timeout exc cls self timeout exc args e ocs ci ocs exceptions timeoutexpirederror timed out after running check state ocs ci utility utils py timeoutexpirederror during handling of the above exception another exception occurred self awscli pod session mcg obj session bucket factory create buckets at first bucketclass backingstore dict aws interface oc second bucketclass backingstore dict azure interface oc test directory setup setupdirs origin dir test bidirectional bucket replication origin result dir test bidirectional bucket replication result pytest mark parametrize argnames argvalues pytest param interface oc backingstore dict aws interface oc backingstore 
dict azure marks ids awstoazure bs oc def test bidirectional bucket replication self awscli pod session mcg obj session bucket factory first bucketclass second bucketclass test directory setup test bidirectional bucket replication using cli and yaml first bucket name bucket factory bucketclass first bucketclass name replication policy basic replication rule first bucket name none second bucket name bucket factory bucketclass second bucketclass replication policy replication policy name tests manage mcg test bucket replication py tests conftest py in create buckets bucketclass if bucketclass is none else bucket class factory bucketclass ocs ci ocs resources bucketclass py in create bucket class for backingstore in backingstore factory ocs ci ocs resources backingstore py in create backingstore mcg obj check backingstore state self backingstore name azure backingstore desired state optimal timeout def check backingstore state self backingstore name desired state timeout checks whether the backing store reached a specific state args backingstore name str name of the backing store to be checked desired state str the desired state of the backing store timeout int number of seconds for timeout which will be used in the checks used in this function returns bool whether the backing store has reached the desired state def check state sysinfo self read system for pool in sysinfo get pools if pool get name in backingstore name current state pool get mode logger info f current state of backingstore backingstore name f is current state if current state desired state return true return false try for reached state in timeoutsampler timeout check state if reached state logger info f backingstore backingstore name reached state f desired state return true else logger info f waiting for backingstore backingstore name to f reach state desired state except timeoutexpirederror logger error f the backingstore did not reach the desired state f desired state within the time limit assert 
false e assertionerror ocs ci ocs resources mcg py assertionerror
| 0
|
422,119
| 28,373,439,723
|
IssuesEvent
|
2023-04-12 18:49:36
|
opendatahub-io/odh-dashboard
|
https://api.github.com/repos/opendatahub-io/odh-dashboard
|
opened
|
[Feature Request]: Create ODH UX Team Group & Add them to the PR Template
|
kind/documentation untriaged
|
### Feature description
Currently in our [PR Template](https://raw.githubusercontent.com/opendatahub-io/odh-dashboard/main/.github/pull_request_template.md) we ask you to tag the UX team. How will anyone in the open-source community know who is on the UX team?
Using https://github.com/opendatahub-io/org-management, we should create a group (@LaVLaS is setting it up). Then we should update our template to reference this group name.
### Describe alternatives you've considered
_No response_
### Anything else?
_No response_
|
1.0
|
[Feature Request]: Create ODH UX Team Group & Add them to the PR Template - ### Feature description
Currently in our [PR Template](https://raw.githubusercontent.com/opendatahub-io/odh-dashboard/main/.github/pull_request_template.md) we ask you to tag the UX team. How will anyone in the open-source community know who is on the UX team?
Using https://github.com/opendatahub-io/org-management, we should create a group (@LaVLaS is setting it up). Then we should update our template to reference this group name.
### Describe alternatives you've considered
_No response_
### Anything else?
_No response_
|
non_design
|
create odh ux team group add them to the pr template feature description currently in our we ask you to tag the ux team how will anyone open source know who is on the ux team using we should create a group lavlas is setting it up then we should update our template to reference this group name describe alternatives you ve considered no response anything else no response
| 0
|
227,752
| 25,118,980,117
|
IssuesEvent
|
2022-11-09 06:07:15
|
nidhi7598/linux-3.0.35_CVE-2018-13405
|
https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2018-13405
|
opened
|
CVE-2013-7269 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2013-7269 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The nr_recvmsg function in net/netrom/af_netrom.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call.
<p>Publish Date: 2014-01-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-7269>CVE-2013-7269</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2013-7269">https://www.linuxkernelcves.com/cves/CVE-2013-7269</a></p>
<p>Release Date: 2014-01-06</p>
<p>Fix Resolution: v3.13-rc1,v3.12.4,v3.2.54</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2013-7269 (Medium) detected in multiple libraries - ## CVE-2013-7269 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.40</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b>, <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The nr_recvmsg function in net/netrom/af_netrom.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call.
<p>Publish Date: 2014-01-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-7269>CVE-2013-7269</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2013-7269">https://www.linuxkernelcves.com/cves/CVE-2013-7269</a></p>
<p>Release Date: 2014-01-06</p>
<p>Fix Resolution: v3.13-rc1,v3.12.4,v3.2.54</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_design
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries linuxlinux linux stable linuxlinux linuxlinux linux stable linux stable linux stable linux stable linuxlinux linux stable linux stable linuxlinux linux stable linuxlinux linuxlinux linuxlinux linuxlinux linux stable linuxlinux linux stable linuxlinux linux stable linux stable linux stable linux stable linux stable vulnerability details the nr recvmsg function in net netrom af netrom c in the linux kernel before updates a certain length value without ensuring that an associated data structure has been initialized which allows local users to obtain sensitive information from kernel memory via a recvfrom recvmmsg or recvmsg system call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
21,126
| 28,092,826,695
|
IssuesEvent
|
2023-03-30 14:04:13
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
cumulativetodelta: initial data points are included for monotonic counters
|
bug help wanted Stale priority:p2 processor/cumulativetodelta
|
### Component(s)
processor/cumulativetodelta
### What happened?
The cumulative-to-delta processor includes the initial delta from zero to the current value as a data point:
https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/34b31da629f65c195125e8db07480574b181477f/processor/cumulativetodeltaprocessor/internal/tracking/tracker.go#L92-L104
Even for monotonic counters, this is counterintuitive and noisy. For example, here is what that looks like in real production data when the collector is restarted:
<img width="868" alt="Screen Shot 2023-01-26 at 1 03 13 PM" src="https://user-images.githubusercontent.com/102976597/214830848-8bb75e25-c4e0-454c-9e04-2abe302db114.png">
The processor should drop the first data point regardless of whether the dataset is monotonic or not, because the first data point cannot be guaranteed to be a delta.
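The requested fix can be illustrated with a small, self-contained sketch (plain Python, not the collector's actual Go implementation; the series name is made up): dropping each series' first sample means only true deltas are emitted, so a collector restart no longer produces the spike shown above.

```python
# Illustrative sketch (plain Python, not the collector's Go code) of a
# cumulative-to-delta pass that drops each series' first sample, since
# there is no earlier value to diff against.
def cumulative_to_delta(samples):
    last = {}    # series name -> last cumulative value seen
    deltas = []
    for series, ts, value in samples:
        prev = last.get(series)
        last[series] = value
        if prev is None:
            continue  # first sample: a "delta from zero" would be noise
        deltas.append((series, ts, value - prev))
    return deltas

points = [("rx_bytes_total", 0, 1000),
          ("rx_bytes_total", 5, 1040),
          ("rx_bytes_total", 10, 1090)]
print(cumulative_to_delta(points))  # two deltas; the initial jump to 1000 is dropped
```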
### Collector version
0.70.0
### Environment information
## Environment
OS: Ubuntu 22.04
### OpenTelemetry Collector configuration
```yaml
<snip>
processors:
cumulativetodelta:
include:
metrics:
- "bytes_total\\z"
match_type: regexp
metricstransform:
transforms:
- include: "(.*)_bytes_total\\z"
action: insert
new_name: "$${1}_bitrate"
match_type: regexp
operations:
- action: experimental_scale_value
# The starting unit is bytes per 5s. 0.2 * 8 = 1.6
experimental_scale: 1.6
```
### Log output
_No response_
### Additional context
_No response_
|
1.0
|
cumulativetodelta: initial data points are included for monotonic counters - ### Component(s)
processor/cumulativetodelta
### What happened?
The cumulative-to-delta processor includes the initial delta from zero to the current value as a data point:
https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/34b31da629f65c195125e8db07480574b181477f/processor/cumulativetodeltaprocessor/internal/tracking/tracker.go#L92-L104
Even for monotonic counters, this is counterintuitive and noisy. For example, here is what that looks like in real production data when the collector is restarted:
<img width="868" alt="Screen Shot 2023-01-26 at 1 03 13 PM" src="https://user-images.githubusercontent.com/102976597/214830848-8bb75e25-c4e0-454c-9e04-2abe302db114.png">
The processor should drop the first data point regardless of whether the dataset is monotonic or not, because the first data point cannot be guaranteed to be a delta.
### Collector version
0.70.0
### Environment information
## Environment
OS: Ubuntu 22.04
### OpenTelemetry Collector configuration
```yaml
<snip>
processors:
cumulativetodelta:
include:
metrics:
- "bytes_total\\z"
match_type: regexp
metricstransform:
transforms:
- include: "(.*)_bytes_total\\z"
action: insert
new_name: "$${1}_bitrate"
match_type: regexp
operations:
- action: experimental_scale_value
# The starting unit is bytes per 5s. 0.2 * 8 = 1.6
experimental_scale: 1.6
```
### Log output
_No response_
### Additional context
_No response_
|
non_design
|
cumulativetodelta initial data points are included for monotonic counters component s processor cumulativetodelta what happened the cumulative to delta processor includes the initial delta from zero to the current value as a data point even for monotonic counters this is counterintuitive and noisy for example here is what that looks like in real production data when the collector is restarted img width alt screen shot at pm src the processor should drop the first data point regardless of whether the dataset is monotonic or not because the first data point can not be guaranteed to be a delta collector version environment information environment os ubuntu opentelemetry collector configuration yaml processors cumulativetodelta include metrics bytes total z match type regexp metricstransform transforms include bytes total z action insert new name bitrate match type regexp operations action experimental scale value the starting unit is bytes per experimental scale log output no response additional context no response
| 0
|
247,980
| 26,771,135,847
|
IssuesEvent
|
2023-01-31 14:12:34
|
TreyM-WSS/whitesource-demo-1
|
https://api.github.com/repos/TreyM-WSS/whitesource-demo-1
|
closed
|
CVE-2022-23529 (High) detected in jsonwebtoken-8.5.1.tgz - autoclosed
|
security vulnerability
|
## CVE-2022-23529 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonwebtoken-8.5.1.tgz</b></p></summary>
<p>JSON Web Token implementation (symmetric and asymmetric)</p>
<p>Library home page: <a href="https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-8.5.1.tgz">https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-8.5.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/jsonwebtoken/package.json</p>
<p>
Dependency Hierarchy:
- firebase-tools-7.1.0.tgz (Root Library)
- :x: **jsonwebtoken-8.5.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TreyM-WSS/whitesource-demo-1/commit/afe1334984105dcff7dbeba0cbcb6b5f49444b16">afe1334984105dcff7dbeba0cbcb6b5f49444b16</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
node-jsonwebtoken is a JsonWebToken implementation for node.js. For versions `<= 8.5.1` of the `jsonwebtoken` library, if a malicious actor has the ability to modify the key retrieval parameter (referring to the `secretOrPublicKey` argument from the readme link of the `jwt.verify()` function), they can write arbitrary files on the host machine. Users are affected only if untrusted entities are allowed to modify the key retrieval parameter of `jwt.verify()` on a host that you control. This issue has been fixed, please update to version 9.0.0.
<p>Publish Date: 2022-12-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23529>CVE-2022-23529</a></p>
</p>
</details>
<p></p>
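The root cause above — letting request-controlled input choose the verification key — is independent of JWTs. A stdlib-only Python sketch (not the node library's API; the secret value is invented) shows the safe shape: the verification secret comes from server configuration, never from the caller.

```python
import base64
import hashlib
import hmac

# Hypothetical pinned secret, loaded from server config in a real system.
SERVER_SECRET = b"app-signing-secret"

def sign(payload: bytes) -> str:
    # HMAC-SHA256 tag over the payload, base64-encoded for transport.
    return base64.urlsafe_b64encode(
        hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    ).decode()

def verify(payload: bytes, tag: str) -> bool:
    # The key is fixed server-side; the caller supplies only payload + tag.
    return hmac.compare_digest(sign(payload), tag)

token = sign(b'{"sub":"alice"}')
print(verify(b'{"sub":"alice"}', token))    # genuine payload verifies
print(verify(b'{"sub":"mallory"}', token))  # tampered payload fails
```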
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/auth0/node-jsonwebtoken/security/advisories/GHSA-27h2-hvpr-p74q">https://github.com/auth0/node-jsonwebtoken/security/advisories/GHSA-27h2-hvpr-p74q</a></p>
<p>Release Date: 2022-12-21</p>
<p>Fix Resolution: jsonwebtoken - 9.0.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-23529 (High) detected in jsonwebtoken-8.5.1.tgz - autoclosed - ## CVE-2022-23529 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonwebtoken-8.5.1.tgz</b></p></summary>
<p>JSON Web Token implementation (symmetric and asymmetric)</p>
<p>Library home page: <a href="https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-8.5.1.tgz">https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-8.5.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/jsonwebtoken/package.json</p>
<p>
Dependency Hierarchy:
- firebase-tools-7.1.0.tgz (Root Library)
- :x: **jsonwebtoken-8.5.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TreyM-WSS/whitesource-demo-1/commit/afe1334984105dcff7dbeba0cbcb6b5f49444b16">afe1334984105dcff7dbeba0cbcb6b5f49444b16</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
node-jsonwebtoken is a JsonWebToken implementation for node.js. For versions `<= 8.5.1` of the `jsonwebtoken` library, if a malicious actor has the ability to modify the key retrieval parameter (referring to the `secretOrPublicKey` argument from the readme link of the `jwt.verify()` function), they can write arbitrary files on the host machine. Users are affected only if untrusted entities are allowed to modify the key retrieval parameter of `jwt.verify()` on a host that you control. This issue has been fixed, please update to version 9.0.0.
<p>Publish Date: 2022-12-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23529>CVE-2022-23529</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/auth0/node-jsonwebtoken/security/advisories/GHSA-27h2-hvpr-p74q">https://github.com/auth0/node-jsonwebtoken/security/advisories/GHSA-27h2-hvpr-p74q</a></p>
<p>Release Date: 2022-12-21</p>
<p>Fix Resolution: jsonwebtoken - 9.0.0</p>
</p>
</details>
<p></p>
|
non_design
|
cve high detected in jsonwebtoken tgz autoclosed cve high severity vulnerability vulnerable library jsonwebtoken tgz json web token implementation symmetric and asymmetric library home page a href path to dependency file package json path to vulnerable library node modules jsonwebtoken package json dependency hierarchy firebase tools tgz root library x jsonwebtoken tgz vulnerable library found in head commit a href found in base branch master vulnerability details node jsonwebtoken is a jsonwebtoken implementation for node js for versions of jsonwebtoken library if a malicious actor has the ability to modify the key retrieval parameter referring to the secretorpublickey argument from the readme link of the jwt verify function they can write arbitrary files on the host machine users are affected only if untrusted entities are allowed to modify the key retrieval parameter of the jwt verify on a host that you control this issue has been fixed please update to version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jsonwebtoken
| 0
|
148,908
| 23,396,066,404
|
IssuesEvent
|
2022-08-11 23:47:37
|
pulumi/pulumi-azure-native
|
https://api.github.com/repos/pulumi/pulumi-azure-native
|
closed
|
Creating KeyVault Secret fails because "ParameterNotSpecified" and no more information given
|
kind/bug resolution/by-design
|
### What happened?
Trying to push a secret into KeyVault
```fsharp
let vaultPostgresPassword =
Pulumi.AzureNative.KeyVault.Secret(namer "postgresPwd",
Pulumi.AzureNative.KeyVault.SecretArgs(
Properties = Pulumi.AzureNative.KeyVault.Inputs.SecretPropertiesArgs(
Value = io postgresPwd.Result,
Attributes = input (Pulumi.AzureNative.KeyVault.Inputs.SecretAttributesArgs(Enabled = input true)),
ContentType = input "text/plain"
),
ResourceGroupName = io resourceGroup.Name,
SecretName = input "postgresPassword",
VaultName = io keyVault.Name
))
```
I get the response
> azure-native:keyvault:Secret (postgresPwd-shared-dev-uk):
error: autorest/azure: Service returned an error. Status=400 Code="ParameterNotSpecified" Message="The parameter value is not specified."
### Steps to reproduce
```fsharp
let vaultPostgresPassword =
Pulumi.AzureNative.KeyVault.Secret(namer "postgresPwd",
Pulumi.AzureNative.KeyVault.SecretArgs(
Properties = Pulumi.AzureNative.KeyVault.Inputs.SecretPropertiesArgs(
Value = io postgresPwd.Result,
Attributes = input (Pulumi.AzureNative.KeyVault.Inputs.SecretAttributesArgs(Enabled = input true)),
ContentType = input "text/plain"
),
ResourceGroupName = io resourceGroup.Name,
SecretName = input "postgresPassword",
VaultName = io keyVault.Name
))
```
### Expected Behavior
To work - or give more detailed output
### Actual Behavior
Failed with
> azure-native:keyvault:Secret (postgresPwd-shared-dev-uk):
error: autorest/azure: Service returned an error. Status=400 Code="ParameterNotSpecified" Message="The parameter value is not specified."
even when running
```
pulumi up --yes --refresh -d --skip-preview -v 3
```
### Versions used
v3.37.2
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
|
1.0
|
Creating KeyVault Secret fails because "ParameterNotSpecified" and no more information given - ### What happened?
Trying to push a secret into KeyVault
```fsharp
let vaultPostgresPassword =
Pulumi.AzureNative.KeyVault.Secret(namer "postgresPwd",
Pulumi.AzureNative.KeyVault.SecretArgs(
Properties = Pulumi.AzureNative.KeyVault.Inputs.SecretPropertiesArgs(
Value = io postgresPwd.Result,
Attributes = input (Pulumi.AzureNative.KeyVault.Inputs.SecretAttributesArgs(Enabled = input true)),
ContentType = input "text/plain"
),
ResourceGroupName = io resourceGroup.Name,
SecretName = input "postgresPassword",
VaultName = io keyVault.Name
))
```
I get the response
> azure-native:keyvault:Secret (postgresPwd-shared-dev-uk):
error: autorest/azure: Service returned an error. Status=400 Code="ParameterNotSpecified" Message="The parameter value is not specified."
### Steps to reproduce
```fsharp
let vaultPostgresPassword =
Pulumi.AzureNative.KeyVault.Secret(namer "postgresPwd",
Pulumi.AzureNative.KeyVault.SecretArgs(
Properties = Pulumi.AzureNative.KeyVault.Inputs.SecretPropertiesArgs(
Value = io postgresPwd.Result,
Attributes = input (Pulumi.AzureNative.KeyVault.Inputs.SecretAttributesArgs(Enabled = input true)),
ContentType = input "text/plain"
),
ResourceGroupName = io resourceGroup.Name,
SecretName = input "postgresPassword",
VaultName = io keyVault.Name
))
```
### Expected Behavior
To work - or give more detailed output
### Actual Behavior
Failed with
> azure-native:keyvault:Secret (postgresPwd-shared-dev-uk):
error: autorest/azure: Service returned an error. Status=400 Code="ParameterNotSpecified" Message="The parameter value is not specified."
even when running
```
pulumi up --yes --refresh -d --skip-preview -v 3
```
### Versions used
v3.37.2
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
|
design
|
creating keyvault secret fails because parameternotspecified and no more information given what happened trying to push a secret into keyvault fsharp let vaultpostgrespassword pulumi azurenative keyvault secret namer postgrespwd pulumi azurenative keyvault secretargs properties pulumi azurenative keyvault inputs secretpropertiesargs value io postgrespwd result attributes input pulumi azurenative keyvault inputs secretattributesargs enabled input true contenttype input text plain resourcegroupname io resourcegroup name secretname input postgrespassword vaultname io keyvault name i get the response azure native keyvault secret postgrespwd shared dev uk error autorest azure service returned an error status code parameternotspecified message the parameter value is not specified steps to reproduce fsharp let vaultpostgrespassword pulumi azurenative keyvault secret namer postgrespwd pulumi azurenative keyvault secretargs properties pulumi azurenative keyvault inputs secretpropertiesargs value io postgrespwd result attributes input pulumi azurenative keyvault inputs secretattributesargs enabled input true contenttype input text plain resourcegroupname io resourcegroup name secretname input postgrespassword vaultname io keyvault name expected behavior to work or give more detailed output actual behavior failed with azure native keyvault secret postgrespwd shared dev uk error autorest azure service returned an error status code parameternotspecified message the parameter value is not specified even when running pulumi up yes refresh d skip preview v versions used additional context no response contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already
| 1
|
327,278
| 28,051,607,072
|
IssuesEvent
|
2023-03-29 06:20:25
|
prgrms-web-devcourse/Team-DarkNight-Kkini-BE
|
https://api.github.com/repos/prgrms-web-devcourse/Team-DarkNight-Kkini-BE
|
closed
|
Automatic gathering status change process
|
Feat
Test
|
### Purpose
- Currently, a gathering's status in our service must be changed manually. This is a very cumbersome process, and we judge it to be poor from a user-experience (UX) standpoint. Therefore, following the decision made in the team meeting, in this sprint we want to add logic that automatically changes the status of records whose appointment time has passed to "gathering closed".
### Tasks
- Introduce a scheduler that automatically cleans up records whose appointment time has passed
- Using Spring Scheduler, convert all records whose appointment time has passed to "gathering closed" in one batch at midnight
### Done criteria
- Tests pass
- Verify that it actually works
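A minimal conceptual sketch of the nightly close-out this issue requests (the field and status names here are invented for illustration; the real service would implement it as a Spring Scheduler job doing a bulk update):

```python
from datetime import datetime, timezone

# Conceptual sketch only: close every in-progress gathering whose meeting
# time has already passed. "IN_PROGRESS"/"CLOSED" are hypothetical names.
def close_expired(gatherings, now):
    closed = 0
    for g in gatherings:
        if g["status"] == "IN_PROGRESS" and g["meet_time"] < now:
            g["status"] = "CLOSED"
            closed += 1
    return closed

gatherings = [
    {"status": "IN_PROGRESS", "meet_time": datetime(2023, 3, 1, tzinfo=timezone.utc)},
    {"status": "IN_PROGRESS", "meet_time": datetime(2023, 5, 1, tzinfo=timezone.utc)},
]
# Run "at midnight": only the gathering whose time has passed gets closed.
print(close_expired(gatherings, datetime(2023, 4, 1, tzinfo=timezone.utc)))
```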
|
1.0
|
Automatic gathering status change process - ### Purpose
- Currently, a gathering's status in our service must be changed manually. This is a very cumbersome process, and we judge it to be poor from a user-experience (UX) standpoint. Therefore, following the decision made in the team meeting, in this sprint we want to add logic that automatically changes the status of records whose appointment time has passed to "gathering closed".
### Tasks
- Introduce a scheduler that automatically cleans up records whose appointment time has passed
- Using Spring Scheduler, convert all records whose appointment time has passed to "gathering closed" in one batch at midnight
### Done criteria
- Tests pass
- Verify that it actually works
|
non_design
|
automatic gathering status change process purpose currently a gathering s status in our service must be changed manually this is a very cumbersome process and we judge it to be poor from a user experience ux standpoint therefore following the decision made in the team meeting in this sprint we want to add logic that automatically changes the status of records whose appointment time has passed to gathering closed tasks introduce a scheduler that automatically cleans up records whose appointment time has passed using spring scheduler convert all records whose appointment time has passed to gathering closed in one batch at midnight done criteria tests pass verify that it actually works
| 0
|
6,355
| 2,839,333,674
|
IssuesEvent
|
2015-05-27 13:15:50
|
dojo/loader
|
https://api.github.com/repos/dojo/loader
|
opened
|
Test loader plugins
|
tests
|
## Task
Write tests for the text loader plugin. Also write a test plugin to exercise the plugin API.
## Loader Functional Testing Pattern
Write loader tests as functional tests to provide a clean environment in which the loader can operate. Whenever possible use the pattern outlined in [this wiki](../wiki/Loader Functional Testing Pattern) for inspecting test results.
|
1.0
|
Test loader plugins - ## Task
Write tests for the text loader plugin. Also write a test plugin to exercise the plugin API.
## Loader Functional Testing Pattern
Write loader tests as functional tests to provide a clean environment in which the loader can operate. Whenever possible use the pattern outlined in [this wiki](../wiki/Loader Functional Testing Pattern) for inspecting test results.
|
non_design
|
test loader plugins task write tests for text loader plugin also write a test plugin to test the plugin api loader functional testing pattern write loader tests as functional tests to provide a clean environment in which the loader can operate whenever possible use the pattern outlined in wiki loader functional testing pattern for inspecting test results
| 0
|
328,984
| 10,010,768,026
|
IssuesEvent
|
2019-07-15 08:54:15
|
IATI/ckanext-iati
|
https://api.github.com/repos/IATI/ckanext-iati
|
closed
|
Many error messages on the Registry
|
High priority bug
|
Hi,
I tried to purge the Registry today but received a timeout error, and it didn't complete.
This data file is also showing a fair few error messages. Not sure if it's related to the purging or not? https://iatiregistry.org/dataset/ec-fpi-88

|
1.0
|
Many error messages on the Registry - Hi,
I tried to purge the Registry today but received a timeout error, and it didn't complete.
This data file is also showing a fair few error messages. Not sure if it's related to the purging or not? https://iatiregistry.org/dataset/ec-fpi-88

|
non_design
|
many error messages on the registry hi i tried to purge the registry today but received a timeout error and it didn t complete this data file is also showing a fair few error messages not sure if it s related to the purging or not
| 0
|
24,893
| 24,449,175,677
|
IssuesEvent
|
2022-10-06 20:52:39
|
PurpleI2P/i2pd
|
https://api.github.com/repos/PurpleI2P/i2pd
|
closed
|
Enable Docker users to access i2pd.conf
|
question docs & usability
|
I see that the directory /home/i2pd/data is used, but I cannot find any file named i2pd.conf in the docker image. Where is the config for docker image? A note for how to edit config options in docker should be in the docs.
|
True
|
Enable Docker users to access i2pd.conf - I see that the directory /home/i2pd/data is used, but I cannot find any file named i2pd.conf in the docker image. Where is the config for docker image? A note for how to edit config options in docker should be in the docs.
|
non_design
|
enable docker users to access conf i see that the directory home data is used but i cannot find any file named conf in the docker image where is the config for docker image a note for how to edit config options in docker should be in the docs
| 0
|
131,406
| 18,281,070,314
|
IssuesEvent
|
2021-10-05 03:28:39
|
pandas-dev/pandas
|
https://api.github.com/repos/pandas-dev/pandas
|
opened
|
API/DISC: engine="numba" (computational API) location in reductions functions or constructors
|
API Design Needs Discussion numba
|
Currently, `engine="numba"` is enabled for `rolling` reduction functions (e.g. mean). I am making efforts to enable numba execution for `groupby` and regular `DataFrame/Series` reduction functions as well.
For `rolling`, the `engine` keyword is currently defined on the reduction functions e.g.
```
df.rolling(2).mean(engine="numba", engine_kwargs={...})
df.groupby(key).mean(engine="numba", engine_kwargs={...}) # planned
df.mean(engine="numba", engine_kwargs={...}) # planned
```
https://github.com/pandas-dev/pandas/pull/43731#discussion_r715931881 suggested possibly putting engine in the "constructor" e.g.
```
df.rolling(2, engine="numba", engine_kwargs={...}).mean()
df.groupby(key, engine="numba", engine_kwargs={...}).mean()
DataFrame(..., engine="numba", engine_kwargs={...}).mean() ?
```
Pros of "in the constructor"
1. Less verbose e.g.
```
roll = df.rolling(2, engine="numba", engine_kwargs={...})
roll.mean() # numba
roll.median() # numba
roll.unsupported_agg() # raise NotImplementedError
```
Cons of "in the constructor"
1. Less flexibility in switching engines (probably minor) e.g.
```
gb = df.groupby(key, engine="numba", engine_kwargs={...})
gb.mean() # numba
# Reconstruct to switch engines
gb = df.groupby(key, engine="cython")
gb.cython_only_agg() # cython
```
2. Increased complexity with "chained constructors" e.g.
```
gb_roll = df.groupby(key, engine="numba").rolling(2, engine="numba").mean()
```
I prefer having `engine` in the reduction function, but opening this issue to solicit thoughts.
|
1.0
|
API/DISC: engine="numba" (computational API) location in reductions functions or constructors - Currently, `engine="numba"` is enabled for `rolling` reduction functions (e.g. mean). I am making efforts to enable numba execution for `groupby` and regular `DataFrame/Series` reduction functions as well.
For `rolling`, the `engine` keyword is currently defined on the reduction functions e.g.
```
df.rolling(2).mean(engine="numba", engine_kwargs={...})
df.groupby(key).mean(engine="numba", engine_kwargs={...}) # planned
df.mean(engine="numba", engine_kwargs={...}) # planned
```
https://github.com/pandas-dev/pandas/pull/43731#discussion_r715931881 suggested possibly putting engine in the "constructor" e.g.
```
df.rolling(2, engine="numba", engine_kwargs={...}).mean()
df.groupby(key, engine="numba", engine_kwargs={...}).mean()
DataFrame(..., engine="numba", engine_kwargs={...}).mean() ?
```
Pros of "in the constructor"
1. Less verbose e.g.
```
roll = df.rolling(2, engine="numba", engine_kwargs={...})
roll.mean() # numba
roll.median() # numba
roll.unsupported_agg() # raise NotImplementedError
```
Cons of "in the constructor"
1. Less flexibility in switching engines (probably minor) e.g.
```
gb = df.groupby(key, engine="numba", engine_kwargs={...})
gb.mean() # numba
# Reconstruct to switch engines
gb = df.groupby(key, engine="cython")
gb.cython_only_agg() # cython
```
2. Increased complexity with "chained constructors" e.g.
```
gb_roll = df.groupby(key, engine="numba").rolling(2, engine="numba").mean()
```
I prefer having `engine` in the reduction function, but opening this issue to solicit thoughts.
|
design
|
api disc engine numba computational api location in reductions functions or constructors currently engine numba is enabled for rolling reduction functions e g mean i am making efforts to enable numba execution for groupby and regular dataframe series reduction functions as well for rolling the engine keyword is currently defined on the reduction functions e g df rolling mean engine numba engine kwargs df groupby key mean engine numba engine kwargs planned df mean engine numba engine kwargs planned suggested possibly putting engine in the constructor e g df rolling engine numba engine kwargs mean df groupby key engine numba engine kwargs mean dataframe engine numba engine kwargs mean pros of in the constructor less verbose e g roll df rolling engine numba engine kwargs roll mean numba roll median numba roll unsupported agg raise notimplementederror cons of in the constructor less flexibility in switching engines probably minor e g gb df groupby key engine numba engine kwargs gb mean numba reconstruct to switch engines gb df groupby key engine cython gb cython only agg cython increased complexity with chained constructors e g gb roll df groupby key engine numba rolling engine numba mean i prefer having engine in the reduction function but opening this issue to solicit thoughts
| 1
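The pandas record above weighs putting an `engine` keyword on the reduction methods versus on the constructor. The trade-off is generic; a small Python sketch of the two dispatch styles (the `Roller` class and engine names are illustrative, not the pandas implementation):

```python
class Roller:
    """Toy rolling-window object showing the two places an
    `engine` choice can live."""

    def __init__(self, values, window, engine="cython"):
        self.values = list(values)
        self.window = window
        self.engine = engine  # constructor-level default

    def mean(self, engine=None):
        # Method-level keyword overrides the constructor default,
        # giving the per-call flexibility the issue prefers.
        chosen = engine or self.engine
        out = []
        for i in range(self.window - 1, len(self.values)):
            win = self.values[i - self.window + 1 : i + 1]
            out.append(sum(win) / self.window)
        return chosen, out
```

With only the constructor form, switching engines requires rebuilding the object; with the method-level keyword, each call can pick its own engine, which is the flexibility argued for at the end of the record.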
|
25,613
| 3,946,634,486
|
IssuesEvent
|
2016-04-28 06:00:09
|
az-webdevs/azwebdevs.org
|
https://api.github.com/repos/az-webdevs/azwebdevs.org
|
closed
|
Add standard style for links
|
bug Design in progress
|
Links look like normal text except for the cursor when mouseover. The base style for `<a>` needs to look like a clickable link differentiated by color and/or underline.
|
1.0
|
Add standard style for links - Links look like normal text except for the cursor when mouseover. The base style for `<a>` needs to look like a clickable link differentiated by color and/or underline.
|
design
|
add standard style for links links look like normal text except for the cursor when mouseover the base style for needs to look like a clickable link differentiated by color and or underline
| 1
|
129,526
| 5,098,039,933
|
IssuesEvent
|
2017-01-03 23:40:55
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Use docker labels
|
area/kubelet priority/awaiting-more-evidence sig/node
|
https://github.com/docker/docker/pull/9882
Once that lands, we can start to think about how to use it best - maybe we can stop jamming all sorts of junk into our docker names :)
|
1.0
|
Use docker labels - https://github.com/docker/docker/pull/9882
Once that lands, we can start to think about how to use it best - maybe we can stop jamming all sorts of junk into our docker names :)
|
non_design
|
use docker labels once that lands we can start to think about how to use it best maybe we can stop jamming all sorts of junk into our docker names
| 0
|
62,597
| 7,612,317,967
|
IssuesEvent
|
2018-05-01 17:06:48
|
Opentrons/opentrons
|
https://api.github.com/repos/Opentrons/opentrons
|
opened
|
Add well selection instructions to well selection modal
|
feature protocol designer
|
As a user I'd like to know how to select and de-select wells
##Acceptance criteria
-Add lines of text from design to well selection modal
##Designs
On the way
|
1.0
|
Add well selection instructions to well selection modal - As a user I'd like to know how to select and de-select wells
##Acceptance criteria
-Add lines of text from design to well selection modal
##Designs
On the way
|
design
|
add well selection instructions to well selection modal as a user i d like to know how to select and de select wells acceptance criteria add lines of text from design to well selection modal designs on the way
| 1
|
18,238
| 10,918,463,726
|
IssuesEvent
|
2019-11-21 16:56:37
|
terraform-providers/terraform-provider-aws
|
https://api.github.com/repos/terraform-providers/terraform-provider-aws
|
closed
|
EBS snapshot: Add tags while copying
|
enhancement service/ec2
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
As of [AWS SDK v1.25.36](https://github.com/aws/aws-sdk-go/releases/tag/v1.25.36), you can now add tags while copying EBS snapshots.
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* [`aws_ebs_snapshot_copy`](https://www.terraform.io/docs/providers/aws/r/ebs_snapshot_copy.html)
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_ebs_snapshot_copy" "example" {
tags = {
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-amazon-ec2-fleet/
--->
[Announcement](https://aws.amazon.com/about-aws/whats-new/2019/11/copy-snapshot-api-supports-adding-tags-while-copying-snapshots/).
Requires:
* #10900
|
1.0
|
EBS snapshot: Add tags while copying - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
As of [AWS SDK v1.25.36](https://github.com/aws/aws-sdk-go/releases/tag/v1.25.36), you can now add tags while copying EBS snapshots.
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* [`aws_ebs_snapshot_copy`](https://www.terraform.io/docs/providers/aws/r/ebs_snapshot_copy.html)
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_ebs_snapshot_copy" "example" {
tags = {
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-amazon-ec2-fleet/
--->
[Announcement](https://aws.amazon.com/about-aws/whats-new/2019/11/copy-snapshot-api-supports-adding-tags-while-copying-snapshots/).
Requires:
* #10900
|
non_design
|
ebs snapshot add tags while copying community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description as of you can now add tags while copying ebs snapshots new or affected resource s potential terraform configuration hcl resource aws ebs snapshot copy example tags references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example requires
| 0
|
262,826
| 8,272,518,885
|
IssuesEvent
|
2018-09-16 21:01:39
|
javaee/glassfish
|
https://api.github.com/repos/javaee/glassfish
|
closed
|
'stop-everything' CLI command
|
Component: admin ERR: Assignee Priority: Minor Type: New Feature
|
This request came out of a GF 3.1 training review where it would be nice to have one shutdown command to use after a lab exercise. It would be a time saver to have one command that iterates through the clusters to shut them down, followed by the DAS.
#### Affected Versions
[3.1]
|
1.0
|
'stop-everything' CLI command - This request came out of a GF 3.1 training review where it would be nice to have one shutdown command to use after a lab exercise. It would be a time saver to have one command that iterates through the clusters to shut them down, followed by the DAS.
#### Affected Versions
[3.1]
|
non_design
|
stop everything cli command this request came out of a gf training review where it would be nice to have one shutdown command to use after a lab exercise it would be a time saver to have one command that iterates through the clusters to shut them down followed by the das affected versions
| 0
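The GlassFish record above asks for one command that stops every cluster and then the DAS. The essential part is the shutdown ordering; a small Python sketch (the cluster names, the `"DAS"` sentinel, and the `stop` callable are hypothetical stand-ins for the real asadmin invocations):

```python
def stop_everything(clusters, stop):
    """Stop each cluster first, then the DAS last, mirroring the
    shutdown order requested in the record above. `stop` is a
    callable standing in for the real per-target shutdown command."""
    order = []
    for cluster in clusters:
        stop(cluster)
        order.append(cluster)
    stop("DAS")  # the admin server goes down only after its clusters
    order.append("DAS")
    return order
```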
|
49,265
| 20,718,727,692
|
IssuesEvent
|
2022-03-13 02:57:37
|
Edd-wordd/monaTech
|
https://api.github.com/repos/Edd-wordd/monaTech
|
closed
|
Content Writing Section
|
Service Offered
|
Content Writing Section/btn of the services offered section
- build out page and components from wireframe prototype
- make modular in order to use for all the other services offered
- use props to make it modular
|
1.0
|
Content Writing Section - Content Writing Section/btn of the services offered section
- build out page and components from wireframe prototype
- make modular in order to use for all the other services offered
- use props to make it modular
|
non_design
|
content writing section content writing section btn of the services offered section build out page and components from wireframe prototype make modular in order to use for all the other services offered use props to make it modular
| 0
|
155,644
| 24,494,157,727
|
IssuesEvent
|
2022-10-10 07:03:29
|
maxime841/second-life-front
|
https://api.github.com/repos/maxime841/second-life-front
|
closed
|
page home public
|
bug dev design
|
- [x] ajouter search bar dans la page home en mode mobile
- [x] rectifier responsive des image en mode mobile (in-line doit etre column)
|
1.0
|
page home public - - [x] ajouter search bar dans la page home en mode mobile
- [x] rectifier responsive des image en mode mobile (in-line doit etre column)
|
design
|
page home public ajouter search bar dans la page home en mode mobile rectifier responsive des image en mode mobile in line doit etre column
| 1
|
3,631
| 2,695,057,432
|
IssuesEvent
|
2015-04-02 00:44:23
|
piwik/piwik
|
https://api.github.com/repos/piwik/piwik
|
opened
|
Update icons in Piwik
|
c: Design / UI Enhancement RFC
|
Currently icons in Piwik [are not consistent and are implemented as images](https://github.com/piwik/piwik/tree/master/plugins/Morpheus/images).
With the WIP of the redesign, we could take the opportunity to move towards font icons. Not doing this would mean either not using icons at all in redesigned parts, or using existing icons which don't fit well in it.
So we need a set of font icons.
The obvious solution IMO would be [Bootstrap](http://getbootstrap.com/components/#glyphicons) or even better, [Font Awesome](http://fortawesome.github.io/Font-Awesome/icons/) but it requires Bootstrap…
Thoughts?
|
1.0
|
Update icons in Piwik - Currently icons in Piwik [are not consistent and are implemented as images](https://github.com/piwik/piwik/tree/master/plugins/Morpheus/images).
With the WIP of the redesign, we could take the opportunity to move towards font icons. Not doing this would mean either not using icons at all in redesigned parts, or using existing icons which don't fit well in it.
So we need a set of font icons.
The obvious solution IMO would be [Bootstrap](http://getbootstrap.com/components/#glyphicons) or even better, [Font Awesome](http://fortawesome.github.io/Font-Awesome/icons/) but it requires Bootstrap…
Thoughts?
|
design
|
update icons in piwik currently icons in piwik with the wip of the redesign we could take the opportunity to move towards font icons not doing this would mean either not using icons at all in redesigned parts or using existing icons which don t fit well in it so we need a set of font icons the obvious solution imo would be or even better but it requires bootstrap thoughts
| 1
|
85,747
| 10,679,661,622
|
IssuesEvent
|
2019-10-21 19:44:37
|
Quaver/Quaver
|
https://api.github.com/repos/Quaver/Quaver
|
closed
|
[redesign] Right-click on empty place in difficulty selection doesn't return to the song selection
|
Input UI Redesign
|
**Is your feature request related to a problem? Please describe.**
Just like with the modifiers panel thing, it just feels unnatural and against natural intuition, considering many are used to the current Quaver behavior and/or other rhythm games.
|
1.0
|
[redesign] Right-click on empty place in difficulty selection doesn't return to the song selection - **Is your feature request related to a problem? Please describe.**
Just like with the modifiers panel thing, it just feels unnatural and against natural intuition, considering many are used to the current Quaver behavior and/or other rhythm games.
|
design
|
right click on empty place in difficulty selection doesn t return to the song selection is your feature request related to a problem please describe just like with the modifiers panel thing it just feels unnatural and against natural intuition considering many are used to the current quaver behavior and or other rhythm games
| 1
|
102,551
| 12,807,311,990
|
IssuesEvent
|
2020-07-03 11:12:56
|
UniversityOfHelsinkiCS/mobvita
|
https://api.github.com/repos/UniversityOfHelsinkiCS/mobvita
|
closed
|
exercise history changes
|
UI design feature
|
- [x] Use time window selected in progress page for exercise history
- [x] ? Color changes ?
|
1.0
|
exercise history changes - - [x] Use time window selected in progress page for exercise history
- [x] ? Color changes ?
|
design
|
exercise history changes use time window selected in progress page for exercise history color changes
| 1
|
65,913
| 7,930,594,573
|
IssuesEvent
|
2018-07-06 19:32:32
|
project-koku/koku-ui
|
https://api.github.com/repos/project-koku/koku-ui
|
opened
|
Design: refine dashboard content to provide more details/context when looking at cost
|
design
|
## User Story
As a user, I want to see more details (breakdowns) of the various summary information that is shown on the dashboard, so that I can readily drill down through a filtered path instead of having to navigate to a list before filtering down.
## Impacts
- API: maybe
- UI: definitely
- Docs: maybe
## Role
- User is an Admin/Operator of AWS accounts for the company they are managing
## Assumptions
- The user is in troubleshooting mode where they don't know what account or service that might be going over a set budget/limit
## UI Details
- refine the dashboard/overview page
- will use PF4 styling
## Acceptance Criteria
- [ ] Review with HCCM stakeholders
- [ ] Review with UXD Mgmt (includes Insights) stakeholders
- [ ] Designs are posted in invision and is ready to be reviewed by internal customers
|
1.0
|
Design: refine dashboard content to provide more details/context when looking at cost - ## User Story
As a user, I want to see more details (breakdowns) of the various summary information that is shown on the dashboard, so that I can readily drill down through a filtered path instead of having to navigate to a list before filtering down.
## Impacts
- API: maybe
- UI: definitely
- Docs: maybe
## Role
- User is an Admin/Operator of AWS accounts for the company they are managing
## Assumptions
- The user is in troubleshooting mode where they don't know what account or service that might be going over a set budget/limit
## UI Details
- refine the dashboard/overview page
- will use PF4 styling
## Acceptance Criteria
- [ ] Review with HCCM stakeholders
- [ ] Review with UXD Mgmt (includes Insights) stakeholders
- [ ] Designs are posted in invision and is ready to be reviewed by internal customers
|
design
|
design refine dashboard content to provide more details context when looking at cost user story as a user i want to see more details breakdowns of the various summary information that is shown on the dashboard so that i can readily drill down through a filtered path instead of having to navigate to a list before filtering down impacts api maybe ui definitely docs maybe role user is an admin operator of aws accounts for the company they are managing assumptions the user is in troubleshooting mode where they don t know what account or service that might be going over a set budget limit ui details refine the dashboard overview page will use styling acceptance criteria review with hccm stakeholders review with uxd mgmt includes insights stakeholders designs are posted in invision and is ready to be reviewed by internal customers
| 1
|
54,432
| 6,820,954,613
|
IssuesEvent
|
2017-11-07 15:26:47
|
OpenLiberty/open-liberty
|
https://api.github.com/repos/OpenLiberty/open-liberty
|
closed
|
Design Issue: Repeatable annotations for Tags, SecurityScheme and others
|
design in:MicroProfile/OpenAPI
|
Java 8 added support for Repeatable annotation:
https://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Repeatable.html
allows to specify annotation more then once at same location
Eg
```
@Tag(name="tag1")
@Tag(name="tag2")
@GET
public Response getDoc(){
}
```
|
1.0
|
Design Issue: Repeatable annotations for Tags, SecurityScheme and others - Java 8 added support for Repeatable annotation:
https://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Repeatable.html
allows to specify annotation more then once at same location
Eg
```
@Tag(name="tag1")
@Tag(name="tag2")
@GET
public Response getDoc(){
}
```
|
design
|
design issue repeatable annotations for tags securityscheme and others java added support for repeatable annotation allows to specify annotation more then once at same location eg tag name tag name get public response getdoc
| 1
|
180,131
| 30,446,925,589
|
IssuesEvent
|
2023-07-15 19:52:28
|
Trip-To-Travel/TTT-front
|
https://api.github.com/repos/Trip-To-Travel/TTT-front
|
closed
|
design: 'Theme Ranking' page
|
design
|
# Overview
Through the 'Theme Ranking' feature, the user can browse each theme's places sorted by number of visits.
# Details
Display the place name (+ address), the number of visits, the average star rating given, etc.
Note: theme categories visible on Kakao Map
- restaurants
- cafes
- convenience stores
- bookmarked popular spots
- parking lots
- gas stations
- EV charging stations
Note: Kakao Map also seems to have something called a 'theme map'
- [x] Check whether there is an API that can fetch this information
  - It can be fetched.
<img width="40%" src="https://github.com/Trip-To-Travel/TTT-front/assets/53112143/5be934ef-feee-4df0-805c-2408bb56ecf6" />
|
1.0
|
design: 'Theme Ranking' page - # Overview
Through the 'Theme Ranking' feature, the user can browse each theme's places sorted by number of visits.
# Details
Display the place name (+ address), the number of visits, the average star rating given, etc.
Note: theme categories visible on Kakao Map
- restaurants
- cafes
- convenience stores
- bookmarked popular spots
- parking lots
- gas stations
- EV charging stations
Note: Kakao Map also seems to have something called a 'theme map'
- [x] Check whether there is an API that can fetch this information
  - It can be fetched.
<img width="40%" src="https://github.com/Trip-To-Travel/TTT-front/assets/53112143/5be934ef-feee-4df0-805c-2408bb56ecf6" />
|
design
|
design theme ranking page overview through the theme ranking feature the user can browse each theme s places sorted by number of visits details display the place name address the number of visits the average star rating given etc note theme categories visible on kakao map restaurants cafes convenience stores bookmarked popular spots parking lots gas stations ev charging stations note kakao map also seems to have a theme map check whether there is an api that can fetch this information it can be fetched
| 1
|
138,060
| 20,322,293,312
|
IssuesEvent
|
2022-02-18 00:22:03
|
ZcashFoundation/zebra
|
https://api.github.com/repos/ZcashFoundation/zebra
|
closed
|
Design: mempool transaction handling
|
A-docs C-design A-rust S-needs-design
|
This is not strictly needed for 'Sync and Validate Mainnet' but is needed for 'behaving like a friendly node on the network', so probably before our first major release.
### Specification
This design should include the [ZIP-200](https://zips.z.cash/zip-0200) network upgrade mechanism for the mempool (#2374):
- [x] Reject transactions that are invalid for the current network upgrade
- [x] Clear mempool each time the network upgrade changes
- [x] Allow rollback and multiple activations of the same network upgrade
- resolved by clearing the mempool every time the best chain changes
### Security
This design should resist the following attack:
1. A malicious peer sends lots of small "dust" transactions to Zebra
- unlike fake valid blocks, fake transactions are cheap to generate and send
- resolved using ZIP-401
2. Zebra broadcasts all those transactions to its ~50 peers, making them unready, and causing Zebra to open more peer connections
- resolved by only broadcasting transaction IDs after validation
- resolved by broadcasting transaction IDs (Zebra never broadcasts transactions)
3. Zebra's mempool becomes full, using excessive memory, or crowding out non-dust transactions
- resolved using ZIP-401
|
2.0
|
Design: mempool transaction handling - This is not strictly needed for 'Sync and Validate Mainnet' but is needed for 'behaving like a friendly node on the network', so probably before our first major release.
### Specification
This design should include the [ZIP-200](https://zips.z.cash/zip-0200) network upgrade mechanism for the mempool (#2374):
- [x] Reject transactions that are invalid for the current network upgrade
- [x] Clear mempool each time the network upgrade changes
- [x] Allow rollback and multiple activations of the same network upgrade
- resolved by clearing the mempool every time the best chain changes
### Security
This design should resist the following attack:
1. A malicious peer sends lots of small "dust" transactions to Zebra
- unlike fake valid blocks, fake transactions are cheap to generate and send
- resolved using ZIP-401
2. Zebra broadcasts all those transactions to its ~50 peers, making them unready, and causing Zebra to open more peer connections
- resolved by only broadcasting transaction IDs after validation
- resolved by broadcasting transaction IDs (Zebra never broadcasts transactions)
3. Zebra's mempool becomes full, using excessive memory, or crowding out non-dust transactions
- resolved using ZIP-401
|
design
|
design mempool transaction handling this is not strictly needed for sync and validate mainnet but is needed for behaving like a friendly node on the network so probably before our first major release specification this design should include the network upgrade mechanism for the mempool reject transactions that are invalid for the current network upgrade clear mempool each time the network upgrade changes allow rollback and multiple activations of the same network upgrade resolved by clearing the mempool every time the best chain changes security this design should resist the following attack a malicious peer sends lots of small dust transactions to zebra unlike fake valid blocks fake transactions are cheap to generate and send resolved using zip zebra broadcasts all those transactions to its peers making them unready and causing zebra to open more peer connections resolved by only broadcasting transaction ids after validation resolved by broadcasting transaction ids zebra never broadcasts transactions zebra s mempool becomes full using excessive memory or crowding out non dust transactions resolved using zip
| 1
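The Zebra record above specifies clearing the mempool whenever the network upgrade (or best chain) changes, which also covers rollbacks and re-activations. A minimal sketch of that invariant in Python (the `Mempool` class and upgrade names are illustrative, not Zebra's Rust implementation):

```python
class Mempool:
    """Toy mempool enforcing the ZIP-200-style rule from the design
    above: all queued transactions are dropped whenever the active
    network upgrade changes."""

    def __init__(self, upgrade):
        self.upgrade = upgrade
        self.txs = set()

    def insert(self, tx):
        self.txs.add(tx)

    def on_upgrade(self, new_upgrade):
        # Clearing on every change also handles rollbacks and
        # multiple activations of the same upgrade, as the design
        # resolves by clearing whenever the best chain changes.
        if new_upgrade != self.upgrade:
            self.txs.clear()
            self.upgrade = new_upgrade
```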
|
387,595
| 11,463,398,637
|
IssuesEvent
|
2020-02-07 15:56:11
|
canonical-web-and-design/tutorials.ubuntu.com
|
https://api.github.com/repos/canonical-web-and-design/tutorials.ubuntu.com
|
closed
|
ubuntu-image -o is deprecated
|
Bug ๐ Priority: Medium Tutorials Content
|
In https://tutorials.ubuntu.com/tutorial/create-your-own-core-image#5 there's a step that asks to run:
sudo ubuntu-image -o joule.img -c beta joule.model
The -o flag in ubuntu-image is deprecated. It should be replaced with -O.
|
1.0
|
ubuntu-image -o is deprecated - In https://tutorials.ubuntu.com/tutorial/create-your-own-core-image#5 there's a step that asks to run:
sudo ubuntu-image -o joule.img -c beta joule.model
The -o flag in ubuntu-image is deprecated. It should be replaced with -O.
|
non_design
|
ubuntu image o is deprecated in there s a step that asks to run sudo ubuntu image o joule img c beta joule model the o flag in ubuntu image is deprecated it should be replaced with o
| 0
|
11,218
| 13,998,914,218
|
IssuesEvent
|
2020-10-28 10:06:06
|
bisq-network/proposals
|
https://api.github.com/repos/bisq-network/proposals
|
closed
|
BSQ trading fee update for Cycle 15
|
re:compensation re:processes was:approved
|
This proposal keeps a record of the [process](https://bisq.wiki/Updating_BSQ_trading_fees) to keep the BSQ trading fee at 50% discount to BTC trading fee. It will remain open until we need to update BSQ trading fees again.
Link to last issue: #202
## Cycle 15
**Parameters**
* USD/BTC price: 9380USD
* USD/BSQ price: 0.61USD
* Current BSQ discount: 57.2%
There's a need to **update BSQ trading fee for Cycle 15.** The 15% cap for BSQ trading fee increase has been used.

<!--
If it's under 45%, use: "There's a need to **update BSQ trading fee for Cycle .**"
If it's over 55% you might need to add "The 15% cap for BSQ trading fee increase has be reached."
-->
### New BSQ trading fees
A change parameter request for BSQ trading fees will be submitted to DAO voting:
New BSQ maker fee: `7.6 BSQ`
New BSQ taker fee: `53.23BSQ`
|
1.0
|
BSQ trading fee update for Cycle 15 - This proposal keeps a record of the [process](https://bisq.wiki/Updating_BSQ_trading_fees) to keep the BSQ trading fee at 50% discount to BTC trading fee. It will remain open until we need to update BSQ trading fees again.
Link to last issue: #202
## Cycle 15
**Parameters**
* USD/BTC price: 9380USD
* USD/BSQ price: 0.61USD
* Current BSQ discount: 57.2%
There's a need to **update BSQ trading fee for Cycle 15.** The 15% cap for BSQ trading fee increase has been used.

<!--
If it's under 45%, use: "There's a need to **update BSQ trading fee for Cycle .**"
If it's over 55% you might need to add "The 15% cap for BSQ trading fee increase has be reached."
-->
### New BSQ trading fees
A change parameter request for BSQ trading fees will be submitted to DAO voting:
New BSQ maker fee: `7.6 BSQ`
New BSQ taker fee: `53.23BSQ`
|
non_design
|
bsq trading fee update for cycle this proposal keeps a record of the to keep the bsq trading fee at discount to btc trading fee it will remain open until we need to update bsq trading fees again link to last issue cycle parameters usd btc price usd bsq price current bsq discount there s a need to update bsq trading fee for cycle the cap for bsq trading fee increase has been used if it s under use there s a need to update bsq trading fee for cycle if it s over you might need to add the cap for bsq trading fee increase has be reached new bsq trading fees a change parameter request for bsq trading fees will be submitted to dao voting new bsq maker fee bsq new bsq taker fee
| 0
|
174,511
| 14,484,724,573
|
IssuesEvent
|
2020-12-10 16:42:03
|
econ-ark/HARK
|
https://api.github.com/repos/econ-ark/HARK
|
closed
|
“Journey into HARK for a 1st year PhD student” updates
|
Tag: Documentation
|
Read over the Journey notebooks:
https://hark.readthedocs.io/en/latest/example_notebooks/Journey_1_PhD.html
- Proofread
- Correct for anything that has changed in HARK usage.
|
1.0
|
“Journey into HARK for a 1st year PhD student” updates - Read over the Journey notebooks:
https://hark.readthedocs.io/en/latest/example_notebooks/Journey_1_PhD.html
- Proofread
- Correct for anything that has changed in HARK usage.
|
non_design
|
journey into hark for a year phd student updates read over the journey notebooks proofread correct for anything that has changed in hark usage
| 0
|
485,599
| 13,995,883,454
|
IssuesEvent
|
2020-10-28 04:25:46
|
AY2021S1-CS2103T-W12-1/tp
|
https://api.github.com/repos/AY2021S1-CS2103T-W12-1/tp
|
closed
|
Bug: UI for list view still grey
|
Priority.Low Status.Bugged
|
The background colour of VBox (?) for list view is still black. Try calling a search command that returns a list with only 1 result and you will see it. Not sure which part hasn't changed colour yet.
|
1.0
|
Bug: UI for list view still grey - The background colour of VBox (?) for list view is still black. Try calling a search command that returns a list with only 1 result and you will see it. Not sure which part hasn't changed colour yet.
|
non_design
|
bug ui for list view still grey the background colour of vbox for list view is still black try calling search commands which returns a list with only result and you see it not sure which part hasn t changed colour yet
| 0
|
297,470
| 25,735,358,926
|
IssuesEvent
|
2022-12-08 00:04:50
|
tricia-holmes/rancid-tomatillos
|
https://api.github.com/repos/tricia-holmes/rancid-tomatillos
|
opened
|
Add propTypes & defaultProp tests
|
testing
|
### **User Story:**
N/A
### **Acceptance Criteria:**
Have passing tests for any case where `propTypes` or `defaultProp` is used.
---
### **Definition of Done:**
- [ ] Add test for `Banner propTypes`
- [ ] Add test for `Banner defaultProp`
- [ ] Add test for `Movies propTypes`
- [ ] Add test for `Movie propTypes`
- [ ] Add test for `Movie Details propTypes`
|
1.0
|
Add propTypes & defaultProp tests - ### **User Story:**
N/A
### **Acceptance Criteria:**
Have passing tests for any case where `propTypes` or `defaultProp` is used.
---
### **Definition of Done:**
- [ ] Add test for `Banner propTypes`
- [ ] Add test for `Banner defaultProp`
- [ ] Add test for `Movies propTypes`
- [ ] Add test for `Movie propTypes`
- [ ] Add test for `Movie Details propTypes`
|
non_design
|
add proptypes defaultprop tests user story n a acceptance criteria have passing tests for any case where proptypes or defaultprop is used definition of done add test for banner proptypes add test for banner defaultprop add test for movies proptypes add test for movie proptypes add test for movie details proptypes
| 0
|
177,995
| 29,477,831,751
|
IssuesEvent
|
2023-06-02 00:57:30
|
microsoft/rushstack
|
https://api.github.com/repos/microsoft/rushstack
|
closed
|
[heft] Design Proposal: Alignment with Rush phased commands
|
design proposal
|
This is a proposal for aligning the architecture of Heft to be more compatible with Rush "phased" commands in the interests of improving parallelism, customizability for other tools (esbuild, swc, etc.), reducing Heft aggregate boot time, and optimizing multi-project watching.
# Goal 1: Increased Parallelism and Configurability
## Current state
Today `heft test` runs a sequence of hardcoded pipeline stages:
```
[Clean] (if --clean) -> [Build] (unless --no-build) -> [Test]
```
Where the `Build` stage is further subdivided into hardcoded sub-stages:
```
[Pre-Compile] -> [Compile] -> [Bundle] -> [Post-Build]
```
This limits the ability of Rush to exploit task parallelism to running `heft build --clean` and `heft test --no-build` for each project, i.e. if:
```
A <-(depends on)- B
```
Then the `test` phase for `A` can run concurrently with the `build` phase for `B`.
The `heft.json` file provides event actions and plugins to inject build steps at various points within this pipeline, but the pipeline itself is not particularly customizable.
When run from the command line, Heft loads a single `HeftConfiguration` object and creates a `HeftSession` that corresponds to the command line session.
## Desired state
In future build rigs that exploit the `isolatedModules` contract to allow transpilation of each and every module from TypeScript -> JavaScript to be an independent operation, we instead have stages more like the following, each of which handles cleaning internally:
- **Compile**: Converts TypeScript -> ECMAScript.
- Dependencies: none
- **SASS**: Convert SASS -> CSS, emit .d.ts files.
- Dependencies: none
- **Analyze**: Type Check, Lint, emit .d.ts files.
- Dependencies: Analyze in dependency projects, SASS in current project
- **Test**: Run unit tests.
- Dependencies: Compile in self and dependency projects
- **Bundle**: Combine ECMAScript/CSS/etc. into more compact bundled form.
- Dependencies: Compile in self and dependency projects. Potentially bundle in dependency projects.
Custom rigs may require more or fewer stages to accommodate other build steps, and importantly, may alter the dependency relationship between the stages. For example a rig may opt to run its tests on bundled output, and therefore have the "test" stage depend on the "bundle" stage.
# Goal 2: Reduce time booting Heft repeatedly in a large Rush monorepo
## Current state
The initialization time of a Heft process is currently measured in seconds. In a monorepo with 600 projects, even 1 second of overhead is 10 minutes of CPU-time, since for each operation on each project, Rush boots Heft and its CLI parser in a fresh process.
## Desired state
Since Heft is designed to scope state to `HeftSession` objects and closures in plugin taps, it should be possible to reuse a single `Heft` process across multiple operations on multiple projects.
# Goal 3: Multi-project watch
## Current state
Custom watch-mode commands in Rush rely on the underlying command-line script to support efficient incremental execution and are unable to preserve a running process across build passes. Some tools, such as TypeScript or Webpack 5, have support for this model, but others, such as Jest, do not.
## Desired state
Using IPC or stdin/stdout, a Heft (or other compatible tool) process can communicate with Rush to receive a notification of changed inputs and to report the result of the command.
# Design Spec
Instead of a hardcoded pipeline definition, `heft.json` gains the ability to define a list of stages, their dependencies on other stages, and the event actions and plugins required to implement the functionality for each.
## Heft.json
```jsonc
{
/**
* Command line aliases to run a set of stages, so that developers can continue to run `heft build` or similar
*/
"actions": [
{
"name": "build",
"stages": [
"compile",
"analyze",
"bundle"
]
},
{
"name": "test",
"stages": [
"compile",
// "analyze" and "bundle" are omitted since they are not necessary for "test" to run
"test"
]
}
],
/**
* Individual build steps defined for this project (or rig). Projects will typically inherit from `@rushstack/heft-web-rig` or `@rushstack/heft-node-rig`,
* but custom rigs or even individual projects may need different stages or different plugins in each stage.
*/
"stages": [
{
"name": "compile",
/**
* This build rig uses isolatedModules, so emitting ECMAScript does not depend on typings for other file types.
*/
"dependsOn": [],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["lib/**/*.js", "lib/**/*.js.map", "lib-commonjs/**/*.js", "lib-commonjs/**/*.js.map"]
}
],
"plugins": [
{
/**
* Plugin that uses TypeScript's transpileModule() API to bulk convert TypeScript -> ECMAScript.
* Could use a SWC or Babel-based plugin instead.
*/
"plugin": "@rushstack/heft-typescript-plugin/lib/TranspileOnlyPlugin"
}
]
},
{
"name": "sass",
/**
* Compiling SASS does not depend on other stages
*/
"dependsOn": [],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["lib/**/*.css", "temp/sass-ts"]
}
],
"plugins": [
{
/**
* Plugin that uses TypeScript to type check and emit declaration files, but not transpile to ECMAScript
*/
"plugin": "@rushstack/heft-typescript-plugin/lib/DeclarationOnlyPlugin"
}
]
},
{
"name": "analyze",
/**
* Type checking and Linting can be done in parallel with other stages, but depend on the generated .scss.d.ts files
*/
"dependsOn": ["sass"],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["lib/**/*.d.ts", "lib/**/*.d.ts.map"]
}
],
"plugins": [
{
/**
* Plugin that uses TypeScript to type check and emit declaration files, but not transpile to ECMAScript
*/
"plugin": "@rushstack/heft-typescript-plugin/lib/DeclarationOnlyPlugin"
}
]
},
{
"name": "bundle",
/**
* The bundler needs the compiled ECMAScript and CSS
*/
"dependsOn": ["compile", "sass"],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["dist"]
}
],
"plugins": [
{
"plugin": "@rushstack/heft-webpack5-plugin"
}
]
},
{
"name": "test",
/**
* Jest needs compiled ECMAScript
*/
"dependsOn": ["compile"],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["temp/jest"]
}
],
"plugins": [
{
"plugin": "@rushstack/heft-jest-plugin"
}
]
}
]
}
```
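Executing the stages defined above means running them in dependency order while parallelizing wherever `dependsOn` allows. A minimal sketch of that scheduling (a Kahn-style ordering; this is illustrative, not Heft's actual runner):

```typescript
interface StageDef {
  name: string;
  dependsOn: string[];
}

// Kahn-style topological ordering: returns "waves" of stage names;
// every stage within a wave can run in parallel with the others.
function executionWaves(stages: StageDef[]): string[][] {
  const remaining = new Map<string, Set<string>>(
    stages.map(s => [s.name, new Set(s.dependsOn)] as [string, Set<string>])
  );
  const waves: string[][] = [];
  while (remaining.size > 0) {
    // Stages whose remaining dependencies have all been satisfied.
    const ready = Array.from(remaining.entries())
      .filter(([, deps]) => deps.size === 0)
      .map(([name]) => name);
    if (ready.length === 0) {
      throw new Error("cycle in stage graph");
    }
    for (const name of ready) {
      remaining.delete(name);
    }
    for (const deps of Array.from(remaining.values())) {
      for (const name of ready) {
        deps.delete(name);
      }
    }
    waves.push(ready);
  }
  return waves;
}

// The stages from the heft.json sketch above:
const waves = executionWaves([
  { name: "compile", dependsOn: [] },
  { name: "sass", dependsOn: [] },
  { name: "analyze", dependsOn: ["sass"] },
  { name: "bundle", dependsOn: ["compile", "sass"] },
  { name: "test", dependsOn: ["compile"] },
]);
console.log(waves); // two waves: compile+sass first, then analyze+bundle+test
```

Note how "analyze", "bundle", and "test" end up in the same wave, which is exactly the extra parallelism that the hardcoded `[Pre-Compile] -> [Compile] -> [Bundle] -> [Post-Build]` pipeline cannot express.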
## HeftServer
The `HeftServer` is a new component in Heft that is responsible for handling requests to execute a specific stage in a specific project. Upon receiving a request it will either locate an existing `HeftSession` that corresponds to a prior issuance of that request, or else create a fresh `HeftSession`, then execute the `clean (optional), beforeRun, run, afterRun` hooks in order. The request may also contain an input state object and/or a hint to indicate that the stage will likely be re-executed in the future (for watch mode). When the `HeftServer` has finished executing the stage, it will report back to the caller with a list of warnings/errors, the success/failure of the stage, and potentially additional metadata. It may also pipe logs.
Heft plugins that need to communicate with other Heft plugins--for example to customize the webpack configuration used by `@rushstack/heft-webpack4-plugin`--should use the Plugin accessor mechanism that has already been implemented.
A separate CLI executable will be defined that creates a `HeftServer` and waits for IPC messages.
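The request/response contract described above might be modeled roughly as follows; all field names here are illustrative stand-ins, not the real Heft IPC protocol:

```typescript
// Illustrative shapes for the HeftServer IPC contract described above.
interface IStageRequest {
  project: string;      // project folder the stage runs in
  stage: string;        // stage name from heft.json
  clean: boolean;       // whether to run the optional "clean" hook first
  watchHint: boolean;   // hint: stage will likely be re-executed (watch mode)
  inputState?: unknown; // opaque input state forwarded to plugins
}

interface IStageResult {
  success: boolean;
  warnings: string[];
  errors: string[];
  metadata?: Record<string, unknown>;
}

// A toy handler that walks the hooks in the documented order:
// clean (optional), beforeRun, run, afterRun.
function runStage(request: IStageRequest): IStageResult {
  const hooksRun: string[] = [];
  if (request.clean) {
    hooksRun.push("clean");
  }
  hooksRun.push("beforeRun", "run", "afterRun");
  return { success: true, warnings: [], errors: [], metadata: { hooksRun } };
}

const result = runStage({
  project: "my-app",
  stage: "compile",
  clean: true,
  watchHint: false,
});
console.log(result.metadata); // hooksRun: clean, beforeRun, run, afterRun
```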
## Heft CLI
The Heft CLI process reads `heft.json`, identifies the requested action and uses `HeftServer` instances to execute the relevant stages in topological order. If running in `--debug` mode or if the stage topology does not contain any parallelism, the Heft CLI will load the `HeftServer` in the current process, otherwise it may boot multiple external `HeftServer` processes, or potentially be instructed to connect to an existing `HeftServer` process.
Edit 2/11/2022:
### CLI parsing and custom parameters
In order to support custom parameters defined by plugins, the Heft CLI will introduce a synthetic "CLI Validation" stage at the very beginning of the pipeline for each action. This stage will apply all plugins from all stages used by that action (for optimization, plugins may have a flag in the plugin manifest that indicates that the plugin does not affect the CLI and does not need to be loaded during this stage), then run the CLI parser. No other hooks (clean, pre, run, post) will get run during this synthetic stage.
Once the command line has been parsed and validated, Heft will use runtime metadata about which plugins registered each parameter to extract the set of parameters that should be forwarded to each of the defined stages. If multiple plugin instances register the same parameter, as long as the definitions are compatible (exact meaning TBD), Heft will simply forward the parameter to all of them.
Each executing stage will receive a scoped command line and run the aggregate parser derived from the plugins for that stage. This avoids global state in the system to keep stage execution compartmentalized and thereby portable.
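The parameter-forwarding step can be pictured as a filter over the registration metadata; the plugin and parameter names below are invented for illustration:

```typescript
// Which plugin registered which CLI parameters (runtime metadata, per the text).
// These parameter names are hypothetical examples.
const parameterOwners: Record<string, string[]> = {
  "--production": ["heft-webpack5-plugin"],
  "--update-snapshots": ["heft-jest-plugin"],
};

// Which plugins each stage loads (derived from heft.json).
const stagePlugins: Record<string, string[]> = {
  bundle: ["heft-webpack5-plugin"],
  test: ["heft-jest-plugin"],
};

// Forward to a stage only the parameters registered by that stage's plugins,
// producing the "scoped command line" each stage receives.
function scopedParameters(stage: string, argv: string[]): string[] {
  const plugins = new Set(stagePlugins[stage] ?? []);
  return argv.filter(arg => {
    const owners = parameterOwners[arg];
    return owners !== undefined && owners.some(p => plugins.has(p));
  });
}

console.log(scopedParameters("test", ["--production", "--update-snapshots"]));
// only --update-snapshots is forwarded to the "test" stage
```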
## @rushstack/rush-heft-operation-runner-plugin
The `@rushstack/rush-heft-operation-runner-plugin` is a Rush plugin that provides an implementation of the `IOperationRunner` contract (responsible for executing Rush Operations, i.e. a specific phase in a specific Rush project) that executes each Heft stage in the Operation (usually 1) by checking out a `HeftServer` instance from a pool maintained by the plugin and issuing an IPC request. The pool will maintain an affinity mapping of the last `HeftServer` used by each `Operation` identity, such that watch mode execution can re-use the same `HeftServer` process for subsequent build passes when the watcher detects changes. The mapping between `Operation` and Heft `stages` should be defined in an extension of the `rush-project.json` file to prevent Rush from needing to load additional files.
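The affinity mapping in the pool reduces to a map keyed by operation identity; `HeftServerHandle` below is a stand-in for a real child-process handle, and actual spawning is elided:

```typescript
// Stand-in for a handle to a spawned HeftServer child process.
class HeftServerHandle {
  public constructor(public readonly id: number) {}
}

// Pool with affinity: the same Operation gets the same server on later passes,
// so watch-mode state inside that server's HeftSession survives between builds.
class HeftServerPool {
  private _nextId: number = 0;
  private _byOperation: Map<string, HeftServerHandle> = new Map();

  public checkout(operationKey: string): HeftServerHandle {
    let server = this._byOperation.get(operationKey);
    if (server === undefined) {
      server = new HeftServerHandle(this._nextId++); // real spawn happens here
      this._byOperation.set(operationKey, server);
    }
    return server;
  }
}

const pool = new HeftServerPool();
const first = pool.checkout("project-a/test");
const second = pool.checkout("project-a/test"); // later watch-mode pass
console.log(first === second); // true
```

A real implementation would also need check-in, idle eviction, and crash handling, but the affinity property shown here is what enables incremental re-execution across build passes.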
|
1.0
|
[heft] Design Proposal: Alignment with Rush phased commands - This is a proposal for aligning the architecture of Heft to be more compatible with Rush "phased" commands in the interests of improving parallelism, customizability for other tools (esbuild, swc, etc.), reducing Heft aggregate boot time, and optimizing multi-project watching.
# Goal 1: Increased Parallelism and Configurability
## Current state
Today `heft test` runs a sequence of hardcoded pipeline stages:
```
[Clean] (if --clean) -> [Build] (unless --no-build) -> [Test]
```
Where the `Build` stage is further subdivided into hardcoded sub-stages:
```
[Pre-Compile] -> [Compile] -> [Bundle] -> [Post-Build]
```
This limits the ability of Rush to exploit task parallelism to running `heft build --clean` and `heft test --no-build` for each project, i.e. if:
```
A <-(depends on)- B
```
Then the `test` phase for `A` can run concurrently with the `build` phase for `B`.
The `heft.json` file provides event actions and plugins to inject build steps at various points within this pipeline, but the pipeline itself is not particularly customizable.
When run from the command line, Heft loads a single `HeftConfiguration` object and creates a `HeftSession` that corresponds to the command line session.
## Desired state
In future build rigs that exploit the `isolatedModules` contract to allow transpilation of each and every module from TypeScript -> JavaScript to be an independent operation, we instead have stages more like the following, each of which handles cleaning internally:
- **Compile**: Converts TypeScript -> ECMAScript.
- Dependencies: none
- **SASS**: Convert SASS -> CSS, emit .d.ts files.
- Dependencies: none
- **Analyze**: Type Check, Lint, emit .d.ts files.
- Dependencies: Analyze in dependency projects, SASS in current project
- **Test**: Run unit tests.
- Dependencies: Compile in self and dependency projects
- **Bundle**: Combine ECMAScript/CSS/etc. into more compact bundled form.
- Dependencies: Compile in self and dependency projects. Potentially bundle in dependency projects.
Custom rigs may require more or fewer stages to accommodate other build steps, and importantly, may alter the dependency relationship between the stages. For example a rig may opt to run its tests on bundled output, and therefore have the "test" stage depend on the "bundle" stage.
# Goal 2: Reduce time booting Heft repeatedly in a large Rush monorepo
## Current state
The initialization time of a Heft process is currently measured in seconds. In a monorepo with 600 projects, even 1 second of overhead is 10 minutes of CPU-time, since for each operation on each project, Rush boots Heft and its CLI parser in a fresh process.
## Desired state
Since Heft is designed to scope state to `HeftSession` objects and closures in plugin taps, it should be possible to reuse a single `Heft` process across multiple operations on multiple projects.
# Goal 3: Multi-project watch
## Current state
Custom watch-mode commands in Rush rely on the underlying command-line script to support efficient incremental execution and are unable to preserve a running process across build passes. Some tools, such as TypeScript or Webpack 5, have support for this model, but others, such as Jest, do not.
## Desired state
Using IPC or stdin/stdout, a Heft (or other compatible tool) process can communicate with Rush to receive a notification of changed inputs and to report the result of the command.
# Design Spec
Instead of a hardcoded pipeline definition, `heft.json` gains the ability to define a list of stages, their dependencies on other stages, and the event actions and plugins required to implement the functionality for each.
## Heft.json
```jsonc
{
/**
* Command line aliases to run a set of stages, so that developers can continue to run `heft build` or similar
*/
"actions": [
{
"name": "build",
"stages": [
"compile",
"analyze",
"bundle"
]
},
{
"name": "test",
"stages": [
"compile",
// "analyze" and "bundle" are omitted since they are not necessary for "test" to run
"test"
]
}
],
/**
* Individual build steps defined for this project (or rig). Projects will typically inherit from `@rushstack/heft-web-rig` or `@rushstack/heft-node-rig`,
* but custom rigs or even individual projects may need different stages or different plugins in each stage.
*/
"stages": [
{
"name": "compile",
/**
* This build rig uses isolatedModules, so emitting ECMAScript does not depend on typings for other file types.
*/
"dependsOn": [],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["lib/**/*.js", "lib/**/*.js.map", "lib-commonjs/**/*.js", "lib-commonjs/**/*.js.map"]
}
],
"plugins": [
{
/**
* Plugin that uses TypeScript's transpileModule() API to bulk convert TypeScript -> ECMAScript.
* Could use a SWC or Babel-based plugin instead.
*/
"plugin": "@rushstack/heft-typescript-plugin/lib/TranspileOnlyPlugin"
}
]
},
{
"name": "sass",
/**
* Compiling SASS does not depend on other stages
*/
"dependsOn": [],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["lib/**/*.css", "temp/sass-ts"]
}
],
"plugins": [
{
/**
* Plugin that uses TypeScript to type check and emit declaration files, but not transpile to ECMAScript
*/
"plugin": "@rushstack/heft-typescript-plugin/lib/DeclarationOnlyPlugin"
}
]
},
{
"name": "analyze",
/**
* Type checking and Linting can be done in parallel with other stages, but depend on the generated .scss.d.ts files
*/
"dependsOn": ["sass"],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["lib/**/*.d.ts", "lib/**/*.d.ts.map"]
}
],
"plugins": [
{
/**
* Plugin that uses TypeScript to type check and emit declaration files, but not transpile to ECMAScript
*/
"plugin": "@rushstack/heft-typescript-plugin/lib/DeclarationOnlyPlugin"
}
]
},
{
"name": "bundle",
/**
* The bundler needs the compiled ECMAScript and CSS
*/
"dependsOn": ["compile", "sass"],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["dist"]
}
],
"plugins": [
{
"plugin": "@rushstack/heft-webpack5-plugin"
}
]
},
{
"name": "test",
/**
* Jest needs compiled ECMAScript
*/
"dependsOn": ["compile"],
"eventActions": [
{
/**
* The kind of built-in operation that should be performed.
* The "deleteGlobs" action deletes files or folders that match the
* specified glob patterns.
*/
"actionKind": "deleteGlobs",
/**
* The stage of the Heft run during which this action should occur. One of "clean", "beforeRun", "run", "afterRun"
*/
"heftEvent": "clean",
"actionId": "defaultClean",
/**
* Glob patterns to be deleted. The paths are resolved relative to the project folder.
*/
"globsToDelete": ["temp/jest"]
}
],
"plugins": [
{
"plugin": "@rushstack/heft-jest-plugin"
}
]
}
]
}
```
## HeftServer
The `HeftServer` is a new component in Heft that is responsible for handling requests to execute a specific stage in a specific project. Upon receiving a request it will either locate an existing `HeftSession` that corresponds to a prior issuance of that request, or else create a fresh `HeftSession`, then execute the `clean (optional), beforeRun, run, afterRun` hooks in order. The request may also contain an input state object and/or a hint to indicate that the stage will likely be re-executed in the future (for watch mode). When the `HeftServer` has finished executing the stage, it will report back to the caller with a list of warnings/errors, the success/failure of the stage, and potentially additional metadata. It may also pipe logs.
Heft plugins that need to communicate with other Heft plugins--for example to customize the webpack configuration used by `@rushstack/heft-webpack4-plugin`--should use the Plugin accessor mechanism that has already been implemented.
A separate CLI executable will be defined that creates a `HeftServer` and waits for IPC messages.
## Heft CLI
The Heft CLI process reads `heft.json`, identifies the requested action and uses `HeftServer` instances to execute the relevant stages in topological order. If running in `--debug` mode or if the stage topology does not contain any parallelism, the Heft CLI will load the `HeftServer` in the current process, otherwise it may boot multiple external `HeftServer` processes, or potentially be instructed to connect to an existing `HeftServer` process.
Edit 2/11/2022:
### CLI parsing and custom parameters
In order to support custom parameters defined by plugins, the Heft CLI will introduce a synthetic "CLI Validation" stage at the very beginning of the pipeline for each action. This stage will apply all plugins from all stages used by that action (for optimization, plugins may have a flag in the plugin manifest that indicates that the plugin does not affect the CLI and does not need to be loaded during this stage), then run the CLI parser. No other hooks (clean, pre, run, post) will get run during this synthetic stage.
Once the command line has been parsed and validated, Heft will use runtime metadata about which plugins registered each parameter to extract the set of parameters that should be forwarded to each of the defined stages. If multiple plugin instances register the same parameter, as long as the definitions are compatible (exact meaning TBD), Heft will simply forward the parameter to all of them.
Each executing stage will receive a scoped command line and run the aggregate parser derived from the plugins for that stage. This avoids global state in the system to keep stage execution compartmentalized and thereby portable.
## @rushstack/rush-heft-operation-runner-plugin
The `@rushstack/rush-heft-operation-runner-plugin` is a Rush plugin that provides an implementation of the `IOperationRunner` contract (responsible for executing Rush Operations, i.e. a specific phase in a specific Rush project) that executes each Heft stage in the Operation (usually 1) by checking out a `HeftServer` instance from a pool maintained by the plugin and issuing an IPC request. The pool will maintain an affinity mapping of the last `HeftServer` used by each `Operation` identity, such that watch mode execution can re-use the same `HeftServer` process for subsequent build passes when the watcher detects changes. The mapping between `Operation` and Heft `stages` should be defined in an extension of the `rush-project.json` file to prevent Rush from needing to load additional files.
|
design
|
design proposal alignment with rush phased commands this is a proposal for aligning the architecture of heft to be more compatible with rush phased commands in the interests of improving parallelism customizability for other tools esbuild swc etc reducing heft aggregate boot time and optimizing multi project watching goal increased parallelism and configurability current state today heft test runs a sequence of hardcoded pipeline stages if clean unless no build where the build stage is further subdivided into hardcoded sub stages this limits the ability of rush to exploit task parallelism to running heft build clean and heft test no build for each project i e if a depends on b then the test phase for a can run concurrently with the build phase for b the heft json file provides event actions and plugins to inject build steps at various points within this pipeline but the pipeline itself is not particularly customizable when run from the command line heft loads a single heftconfiguration object and creates a heftsession that corresponds to the command line session desired state in future build rigs that exploit the isolatedmodules contract to allow transpilation of each and every module from typescript javascript to be an independent operation we instead have stages more like the following each of which handles cleaning internally compile converts typescript ecmascript dependencies none sass convert sass css emit d ts files dependencies none analyze type check lint emit d ts files dependencies analyze in dependency projects sass in current project test run unit tests dependencies compile in self and dependency projects bundle combine ecmascript css etc into more compact bundled form dependencies compile in self and dependency projects potentially bundle in dependency projects custom rigs may require more or fewer stages to accommodate other build steps and importantly may alter the dependency relationship between the stages for example a rig may opt to run its tests 
on bundled output and therefore have the test stage depend on the bundle stage goal reduce time booting heft repeatedly in a large rush monorepo current state the initialization time of a heft process is currently measured in seconds in a monorepo with projects even second of overhead is minutes of cpu time since for each operation on each project rush boots heft and its cli parser in a fresh process desired state since heft is designed to scope state to heftsession objects and closures in plugin taps it should be possible to reuse a single heft process across multiple operations on multiple projects goal multi project watch current state custom watch mode commands in rush rely on the underlying command line script to support efficient incremental execution and are unable to preserve a running process across build passes some tools such as typescript or webpack have support for this model but others such as jest do not desired state using ipc or stdin stdout a heft or other compatible tool process can communicate with rush to receive a notification of changed inputs and to report the result of the command design spec instead of a hardcoded pipeline definition heft json gains the ability to define a list of stages their dependencies on other stages and the event actions and plugins required to implement the functionality for each heft json jsonc command line aliases to run a set of stages so that developers can continue to run heft build or similar actions name build stages compile analyze bundle name test stages compile analyze and bundle are omitted since they are not necessary for test to run test individual build steps defined for this project or rig projects will typically inherit from rushstack heft web rig or rushstack heft node rig but custom rigs or even individual projects may need different stages or different plugins in each stage stages name compile this build rig uses isolatedmodules so emitting ecmascript does not depend on typings for other file 
types dependson eventactions the kind of built in operation that should be performed the deleteglobs action deletes files or folders that match the specified glob patterns actionkind deleteglobs the stage of the heft run during which this action should occur one of clean beforerun run afterrun heftevent clean actionid defaultclean glob patterns to be deleted the paths are resolved relative to the project folder globstodelete plugins plugin that uses typescript s transpilemodule api to bulk convert typescript ecmascript could use a swc or babel based plugin instead plugin rushstack heft typescript plugin lib transpileonlyplugin name sass compiling sass does not depend on other stages dependson eventactions the kind of built in operation that should be performed the deleteglobs action deletes files or folders that match the specified glob patterns actionkind deleteglobs the stage of the heft run during which this action should occur one of clean beforerun run afterrun heftevent clean actionid defaultclean glob patterns to be deleted the paths are resolved relative to the project folder globstodelete plugins plugin that uses typescript to type check and emit declaration files but not transpile to ecmascript plugin rushstack heft typescript plugin lib declarationonlyplugin name analyze type checking and linting can be done in parallel with other stages but depend on the generated scss d ts files dependson eventactions the kind of built in operation that should be performed the deleteglobs action deletes files or folders that match the specified glob patterns actionkind deleteglobs the stage of the heft run during which this action should occur one of clean beforerun run afterrun heftevent clean actionid defaultclean glob patterns to be deleted the paths are resolved relative to the project folder globstodelete plugins plugin that uses typescript to type check and emit declaration files but not transpile to ecmascript plugin rushstack heft typescript plugin lib 
declarationonlyplugin name bundle the bundler needs the compiled ecmascript and css dependson eventactions the kind of built in operation that should be performed the deleteglobs action deletes files or folders that match the specified glob patterns actionkind deleteglobs the stage of the heft run during which this action should occur one of clean beforerun run afterrun heftevent clean actionid defaultclean glob patterns to be deleted the paths are resolved relative to the project folder globstodelete plugins plugin rushstack heft plugin name test jest needs compiled ecmascript dependson eventactions the kind of built in operation that should be performed the deleteglobs action deletes files or folders that match the specified glob patterns actionkind deleteglobs the stage of the heft run during which this action should occur one of clean beforerun run afterrun heftevent clean actionid defaultclean glob patterns to be deleted the paths are resolved relative to the project folder globstodelete plugins plugin rushstack heft jest plugin heftserver the heftserver is a new component in heft that is responsible for handling requests to execute a specific stage in a specific project upon receiving a request it will either locate an existing heftsession that corresponds to a prior issuance of that request or else create a fresh heftsession then execute the clean optional beforerun run afterrun hooks in order the request may also contain an input state object and or a hint to indicate that the stage will likely be re executed in the future for watch mode when the heftserver has finished executing the stage it will report back to the caller with a list of warnings errors the success failure of the stage and potentially additional metadata it may also pipe logs heft plugins that need to communicate with other heft plugins for example to customize the webpack configuration used by rushstack heft plugin should use the plugin accessor mechanism that has already been implemented 
a separate cli executable will be defined that creates a heftserver and waits for ipc messages heft cli the heft cli process reads heft json identifies the requested action and uses heftserver instances to execute the relevant stages in topological order if running in debug mode or if the stage topology does not contain any parallelism the heft cli will load the heftserver in the current process otherwise it may boot multiple external heftserver processes or potentially be instructed to connect to an existing heftserver process edit cli parsing and custom parameters in order to support custom parameters defined by plugins the heft cli will introduce a synthetic cli validation stage at the very beginning of the pipeline for each action this stage will apply all plugins from all stages used by that action for optimization plugins may have a flag in the plugin manifest that indicates that the plugin does not affect the cli and does not need to be loaded during this stage then run the cli parser no other hooks clean pre run post will get run during this synthetic stage once the command line has been parsed and validated heft will use runtime metadata about which plugins registered each parameter to extract the set of parameters that should be forwarded to each of the defined stages if multiple plugin instances register the same parameter as long as the definitions are compatible exact meaning tbd heft will simply forward the parameter to all of them each executing stage will receive a scoped command line and run the aggregate parser derived from the plugins for that stage this avoids global state in the system to keep stage execution compartmentalized and thereby portable rushstack rush heft operation runner plugin the rushstack rush heft operation runner plugin is a rush plugin that provides an implementation of the ioperationrunner contract responsible for executing rush operations i e a specific phase in a specific rush project that executes each heft stage in the 
operation usually by checking out a heftserver instance from a pool maintained by the plugin and issuing an ipc request the pool will maintain an affinity mapping of the last heftserver used by each operation identity such that watch mode execution can re use the same heftserver process for subsequent build passes when the watcher detects changes the mapping between operation and heft stages should be defined in an extension of the rush project json file to prevent rush from needing to load additional files
| 1
|
14,871
| 11,207,889,680
|
IssuesEvent
|
2020-01-06 05:49:21
|
APSIMInitiative/ApsimX
|
https://api.github.com/repos/APSIMInitiative/ApsimX
|
closed
|
Scope caching issues
|
bug interface/infrastructure
|
Adding or removing models from a simulation doesn't cause the scope cache to be updated. This is easy enough to reproduce:
1. Open up any file with a simulation which doesn't contain a crop (e.g. factorial example)
2. Add fertilise at sowing script
3. Click on fertilise at sowing script - crop drop-down is empty (as it should be)
4. Add a crop to the paddock
5. Click on fertilise at sowing script - crop drop-down is still empty (it shouldn't be!)
The only workaround currently is to close the file and reopen it. I'm not sure what the best solution would be. If the user has clicked on every component in the file, then updating the cache for each component could be time-consuming, even if we only update those models in scope of the newly-added model.
A simpler option might be to erase the cache whenever we add/remove a model. This doesn't sound ideal but it would be faster than the first option so I think I will go with this option unless anyone has a better idea.
|
1.0
|
Scope caching issues - Adding or removing models from a simulation doesn't cause the scope cache to be updated. This is easy enough to reproduce:
1. Open up any file with a simulation which doesn't contain a crop (e.g. factorial example)
2. Add fertilise at sowing script
3. Click on fertilise at sowing script - crop drop-down is empty (as it should be)
4. Add a crop to the paddock
5. Click on fertilise at sowing script - crop drop-down is still empty (it shouldn't be!)
The only workaround currently is to close the file and reopen it. I'm not sure what the best solution would be. If the user has clicked on every component in the file, then updating the cache for each component could be time-consuming, even if we only update those models in scope of the newly-added model.
A simpler option might be to erase the cache whenever we add/remove a model. This doesn't sound ideal but it would be faster than the first option so I think I will go with this option unless anyone has a better idea.
|
non_design
|
scope caching issues adding or removing models from a simulation doesn t cause the scope cache to be updated this is easy enough to reproduce open up any file with a simulation which doesn t contain a crop e g factorial example add fertilise at sowing script click on fertilise at sowing script crop drop down is empty as it should be add a crop to the paddock click on fertilise at sowing script crop drop down is still empty it shouldn t be the only workaround currently is to close the file and reopen it i m not sure what the best solution would be if the user has clicked on every component in the file then updating the cache for each component could be time consuming even if we only update those models in scope of the newly added model a simpler option might be to erase the cache whenever we add remove a model this doesn t sound ideal but it would be faster than the first option so i think i will go with this option unless anyone has a better idea
| 0
|
27,495
| 5,031,637,165
|
IssuesEvent
|
2016-12-16 08:06:22
|
TNGSB/eWallet
|
https://api.github.com/repos/TNGSB/eWallet
|
closed
|
eWallet_FinancialReports_In the Transaction History - User suggests to use better terminology for the columns in the Transaction History # Live_030
|
ABL Defect - Medium (Sev-3) Live Environment
|
The term used in the column is misleading – Expenditure & Income (Need to change to something else that is more meaningful)
|
1.0
|
eWallet_FinancialReports_In the Transaction History - User suggests to use better terminology for the columns in the Transaction History # Live_030 - The term used in the column is misleading – Expenditure & Income (Need to change to something else that is more meaningful)
|
non_design
|
ewallet financialreports in the transaction history user suggests to use better terminology for the columns in the transaction history live the term used in the column is misleading expenditure income need to change to something else that is more meaningful
| 0
|
106,608
| 13,326,169,336
|
IssuesEvent
|
2020-08-27 11:10:47
|
litmuschaos/litmus
|
https://api.github.com/repos/litmuschaos/litmus
|
closed
|
Add functionality in the workflow table.
|
area/litmus-portal state/design-in-progress
|
The following functionalities are required in the workflow table:
* Sorting
* Searching
* Filtering
|
1.0
|
Add functionality in the workflow table. - The following functionalities are required in the workflow table:
* Sorting
* Searching
* Filtering
|
design
|
add functionality in the workflow table the following functionalities are required in the workflow table sorting searching filtering
| 1
|
71,392
| 8,651,015,917
|
IssuesEvent
|
2018-11-27 01:05:29
|
elastic/eui
|
https://api.github.com/repos/elastic/eui
|
closed
|
Request for an email icon
|
assign:designer icons
|
Strangely we haven't got a classic `email` icon in Eui yet. I'm reworking the cloud support page and am in need of one.
something like:

|
1.0
|
Request for an email icon - Strangely we haven't got a classic `email` icon in Eui yet. I'm reworking the cloud support page and am in need of one.
something like:

|
design
|
request for an email icon strangely we haven t got a classic email icon in eui yet i m reworking the cloud support page and am in need of one something like
| 1
|
92,290
| 11,622,906,096
|
IssuesEvent
|
2020-02-27 07:46:27
|
PiotrGrochowski/DMCAsansserif-entity
|
https://api.github.com/repos/PiotrGrochowski/DMCAsansserif-entity
|
opened
|
Diverse Arabic letterforms
|
DesignFeatReq
|
It appears problematic, that the initial/median/final forms are duplicated rather than having the proper diverse letterforms. This could be added in both the Unicode characters, and an OpenType feature.
|
1.0
|
Diverse Arabic letterforms - It appears problematic, that the initial/median/final forms are duplicated rather than having the proper diverse letterforms. This could be added in both the Unicode characters, and an OpenType feature.
|
design
|
diverse arabic letterforms it appears problematic that the initial median final forms are duplicated rather than having the proper diverse letterforms this could be added in both the unicode characters and an opentype feature
| 1
|
17,018
| 10,591,809,893
|
IssuesEvent
|
2019-10-09 11:45:56
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
Support for SemVer2 build metadata
|
Container Registry Feature Request Service Attention
|
**Describe the bug**
Unable to push helm packages with build metadata (e.g. mypack-1.0.0+1.tgz).
`az acr helm push mypack-1.0.0+1.tgz` results in:
`ERROR: Invalid helm package name 'mypack-1.0.0+1.tgz'. Is it a '*.tgz' or '*.tgz.prov' file?`
**To Reproduce**
1. `helm package --version 1.0.0+1 charts/mypack`
1. `az acr helm push mypack-1.0.0+1.tgz`
1. Behold
**Expected behavior**
Should push successfully.
**Environment summary**
Helm v2.11.0
Azure CLI v2.0.49
**Additional context**
I'm able to push helm packages with either pre-release version data or with just the base MAJOR.MINOR.PATCH format, but not with build metadata.
Build metadata is valid SemVer2 as described here: [https://semver.org/#spec-item-10](https://semver.org/#spec-item-10)
|
1.0
|
Support for SemVer2 build metadata - **Describe the bug**
Unable to push helm packages with build metadata (e.g. mypack-1.0.0+1.tgz).
`az acr helm push mypack-1.0.0+1.tgz` results in:
`ERROR: Invalid helm package name 'mypack-1.0.0+1.tgz'. Is it a '*.tgz' or '*.tgz.prov' file?`
**To Reproduce**
1. `helm package --version 1.0.0+1 charts/mypack`
1. `az acr helm push mypack-1.0.0+1.tgz`
1. Behold
**Expected behavior**
Should push successfully.
**Environment summary**
Helm v2.11.0
Azure CLI v2.0.49
**Additional context**
I'm able to push helm packages with either pre-release version data or with just the base MAJOR.MINOR.PATCH format, but not with build metadata.
Build metadata is valid SemVer2 as described here: [https://semver.org/#spec-item-10](https://semver.org/#spec-item-10)
|
non_design
|
support for build metadata describe the bug unable to push helm packages with build metadata e g mypack tgz az acr helm push mypack tgz results in error invalid helm package name mypack tgz is it a tgz or tgz prov file to reproduce helm package version charts mypack az acr helm push mypack tgz behold expected behavior should push successfully environment summary helm azure cli additional context i m able to push helm packages with either pre release version data or with just the base major minor patch format but not with build metadata build metadata is valid as described here
| 0
|
452,481
| 13,051,163,050
|
IssuesEvent
|
2020-07-29 16:37:23
|
unfoldingWord/tc-create-app
|
https://api.github.com/repos/unfoldingWord/tc-create-app
|
closed
|
Changing Rows Per page with Filter On sends the app to First page
|
Priority/Low QA/Passed bug demoed
|
Changing Rows Per page with Filter On sends the app to First page and removes the filter.
- Set the filter to a later chapter (Say chapter 4).
- Select Rows per page to 50 or 100.
- The filter is removed and the app lands on Page 1.
|
1.0
|
Changing Rows Per page with Filter On sends the app to First page - Changing Rows Per page with Filter On sends the app to First page and removes the filter.
- Set the filter to a later chapter (Say chapter 4).
- Select Rows per page to 50 or 100.
- The filter is removed and the app lands on Page 1.
|
non_design
|
changing rows per page with filter on sends the app to first page changing rows per page with filter on sends the app to first page and removes the filter set the filter to a later chapter say chapter select rows per page to or the filter is removed and the app lands on page
| 0
|
250,196
| 27,054,184,214
|
IssuesEvent
|
2023-02-13 15:08:52
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[Security Solution] Importing Rule with Exceptions without indexing a document, displays the Exception entries incorrectly
|
bug impact:medium Team:Detections and Resp Team: SecuritySolution Feature:Rule Exceptions Team:Security Solution Platform
|
**Describe the bug:**
In the Rules page under the Security Solution.
When the user import a Rule with some exceptions without indexing any document satisfies the Exception Entry Mapping, the Exception Flyout will render the exception entity incorrectly as shown below
**Kibana/Elasticsearch Stack version:**
latest Kibana version
**Screenshots (if relevant):**
https://user-images.githubusercontent.com/12671903/187926917-78d571e2-64e7-4a16-8a40-241f0f77016d.mov
|
True
|
[Security Solution] Importing Rule with Exceptions without indexing a document, displays the Exception entries incorrectly - **Describe the bug:**
In the Rules page under the Security Solution.
When the user import a Rule with some exceptions without indexing any document satisfies the Exception Entry Mapping, the Exception Flyout will render the exception entity incorrectly as shown below
**Kibana/Elasticsearch Stack version:**
latest Kibana version
**Screenshots (if relevant):**
https://user-images.githubusercontent.com/12671903/187926917-78d571e2-64e7-4a16-8a40-241f0f77016d.mov
|
non_design
|
importing rule with exceptions without indexing a document displays the exception entries incorrectly describe the bug in the rules page under the security solution when the user import a rule with some exceptions without indexing any document satisfies the exception entry mapping the exception flyout will render the exception entity incorrectly as shown below kibana elasticsearch stack version latest kibana version screenshots if relevant
| 0
|
183,705
| 31,722,015,012
|
IssuesEvent
|
2023-09-10 14:15:36
|
OpenRefine/OpenRefine
|
https://api.github.com/repos/OpenRefine/OpenRefine
|
closed
|
Reconciling without names
|
reconciliation reconciliation API design
|
Sometimes, you want to reconcile with objects no matter what their name (label) in Wikidata is. I have therefore created an empty column and tried to reconcile to it:
<img width="603" alt="Bez názvu" src="https://user-images.githubusercontent.com/21329813/76943122-21b9df00-68ff-11ea-8cc6-d4dea83059b3.png">
However, the reconciliation process fails because it is probably dependent on a non-empty string in the main field. Is it possible to somehow only reconcile based on coordinates?
|
1.0
|
Reconciling without names - Sometimes, you want to reconcile with objects no matter what their name (label) in Wikidata is. I have therefore created an empty column and tried to reconcile to it:
<img width="603" alt="Bez názvu" src="https://user-images.githubusercontent.com/21329813/76943122-21b9df00-68ff-11ea-8cc6-d4dea83059b3.png">
However, the reconciliation process fails because it is probably dependent on a non-empty string in the main field. Is it possible to somehow only reconcile based on coordinates?
|
design
|
reconciling without names sometimes you want to reconcile with objects no matter what their name label in wikidata is i have therefore created an empty column and tried to reconcile to it img width alt bez názvu src however the reconciliation process fails because it is probably dependent on a non empty string in the main field is it possible to somehow only reconcile based on coordinates
| 1
|
389,071
| 11,497,008,831
|
IssuesEvent
|
2020-02-12 09:14:13
|
Materials-Consortia/optimade-python-tools
|
https://api.github.com/repos/Materials-Consortia/optimade-python-tools
|
closed
|
Possibly add CORS middleware
|
enhancement priority/medium
|
Considering this issue Materials-Consortia/OPTiMaDe#249 and PR Materials-Consortia/OPTiMaDe#105 in the spec repo, it may be good to know that `Starlette` have a [`CORSMiddleware` class](https://www.starlette.io/middleware/#corsmiddleware).
Edit: `FastAPI` also has its own [tutorial](https://fastapi.tiangolo.com/tutorial/cors/) on this.
|
1.0
|
Possibly add CORS middleware - Considering this issue Materials-Consortia/OPTiMaDe#249 and PR Materials-Consortia/OPTiMaDe#105 in the spec repo, it may be good to know that `Starlette` have a [`CORSMiddleware` class](https://www.starlette.io/middleware/#corsmiddleware).
Edit: `FastAPI` also has its own [tutorial](https://fastapi.tiangolo.com/tutorial/cors/) on this.
|
non_design
|
possibly add cors middleware considering this issue materials consortia optimade and pr materials consortia optimade in the spec repo it may be good to know that starlette have a edit fastapi also has its own on this
| 0
|
153,458
| 24,131,362,405
|
IssuesEvent
|
2022-09-21 07:43:39
|
dotnet/SqlClient
|
https://api.github.com/repos/dotnet/SqlClient
|
closed
|
CommandTimeout doesn't work for insert with EF Core 6
|
By Design
|
### Describe the bug
Sample project: Sample project: https://github.com/janseris/EFCoreTest
Database tables:

[CancellationTokenTestDB.sql.txt](https://github.com/dotnet/SqlClient/files/9590553/CancellationTokenTestDB.sql.txt)
(sorry, GitHub "doesn't support" files with names ending with `.sql`)
When inserting a 100 MB file, from EF Core 6 log I can see the following:
```
Executed DbCommand (151ms) [Parameters=[@p0='?' (Size = 50)], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [IMAGE] ([Name])
VALUES (@p0);
SELECT [ID]
FROM [IMAGE]
WHERE @@ROWCOUNT = 1 AND [ID] = scope_identity();
...
dbug: 17.09.2022 00:09:03.728 RelationalEventId.CommandExecuting[20100] (Microsoft.EntityFrameworkCore.Database.Command)
Executing DbCommand [Parameters=[@p1='?' (DbType = Int32), @p2='?' (Size = -1) (DbType = Binary)], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [IMAGE_DATA] ([ImageID], [Data])
VALUES (@p1, @p2);
SELECT [FS_GUID]
FROM [IMAGE_DATA]
WHERE @@ROWCOUNT = 1 AND [ImageID] = @p1;
```
Because of my slow internet connection, after 5 minutes, the insert db call finishes.
It is not supposed to because default timeout for SqlCommand is 30 seconds. After this period, it should fail.
```
info: 17.09.2022 00:14:20.314 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
Executed DbCommand (316,587ms) [Parameters=[@p1='?' (DbType = Int32), @p2='?' (Size = -1) (DbType = Binary)], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [IMAGE_DATA] ([ImageID], [Data])
VALUES (@p1, @p2);
SELECT [FS_GUID]
FROM [IMAGE_DATA]
WHERE @@ROWCOUNT = 1 AND [ImageID] = @p1;
dbug: 17.09.2022 00:14:20.325 RelationalEventId.DataReaderDisposing[20300] (Microsoft.EntityFrameworkCore.Database.Command)
A data reader was disposed.
dbug: 17.09.2022 00:14:20.335 RelationalEventId.TransactionCommitting[20210] (Microsoft.EntityFrameworkCore.Database.Transaction)
Committing transaction.
dbug: 17.09.2022 00:14:20.382 RelationalEventId.TransactionCommitted[20202] (Microsoft.EntityFrameworkCore.Database.Transaction)
Committed transaction.
dbug: 17.09.2022 00:14:20.393 RelationalEventId.ConnectionClosing[20002] (Microsoft.EntityFrameworkCore.Database.Connection)
Closing connection to database 'devRemin' on server 'myServer'.
dbug: 17.09.2022 00:14:20.408 RelationalEventId.ConnectionClosed[20003] (Microsoft.EntityFrameworkCore.Database.Connection)
Closed connection to database 'devRemin' on server 'myServer'.
dbug: 17.09.2022 00:14:20.419 RelationalEventId.TransactionDisposed[20204] (Microsoft.EntityFrameworkCore.Database.Transaction)
Disposing transaction.
dbug: 17.09.2022 00:14:20.434 CoreEventId.StateChanged[10807] (Microsoft.EntityFrameworkCore.ChangeTracking)
An entity of type 'IMAGE' tracked by 'CancellationTokenTestContext' changed state from 'Added' to 'Unchanged'. Consider using 'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see key values.
dbug: 17.09.2022 00:14:20.443 CoreEventId.StateChanged[10807] (Microsoft.EntityFrameworkCore.ChangeTracking)
An entity of type 'IMAGE_DATA' tracked by 'CancellationTokenTestContext' changed state from 'Added' to 'Unchanged'. Consider using 'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see key values.
dbug: 17.09.2022 00:14:20.451 CoreEventId.SaveChangesCompleted[10005] (Microsoft.EntityFrameworkCore.Update)
SaveChanges completed for 'CancellationTokenTestContext' with 2 entities written to the database.
dbug: 17.09.2022 00:14:20.457 CoreEventId.ContextDisposed[10407] (Microsoft.EntityFrameworkCore.Infrastructure)
'CancellationTokenTestContext' disposed.
Inserted
```
### To reproduce
Set up a SQL Server with a slow internet connection to be able to simulate this.
I am using a SQL Server 2012 with latest patch.
### Expected behavior
Command fails after 30 seconds which is the default timeout.
### Further technical details
"Microsoft.EntityFrameworkCore.SqlServer" Version="6.0.9"
Microsoft Windows 10 Home 10.0.19043 N/A Build 19043

|
1.0
|
CommandTimeout doesn't work for insert with EF Core 6 - ### Describe the bug
Sample project: Sample project: https://github.com/janseris/EFCoreTest
Database tables:

[CancellationTokenTestDB.sql.txt](https://github.com/dotnet/SqlClient/files/9590553/CancellationTokenTestDB.sql.txt)
(sorry, GitHub "doesn't support" files with names ending with `.sql`)
When inserting a 100 MB file, from EF Core 6 log I can see the following:
```
Executed DbCommand (151ms) [Parameters=[@p0='?' (Size = 50)], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [IMAGE] ([Name])
VALUES (@p0);
SELECT [ID]
FROM [IMAGE]
WHERE @@ROWCOUNT = 1 AND [ID] = scope_identity();
...
dbug: 17.09.2022 00:09:03.728 RelationalEventId.CommandExecuting[20100] (Microsoft.EntityFrameworkCore.Database.Command)
Executing DbCommand [Parameters=[@p1='?' (DbType = Int32), @p2='?' (Size = -1) (DbType = Binary)], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [IMAGE_DATA] ([ImageID], [Data])
VALUES (@p1, @p2);
SELECT [FS_GUID]
FROM [IMAGE_DATA]
WHERE @@ROWCOUNT = 1 AND [ImageID] = @p1;
```
Because of my slow internet connection, after 5 minutes, the insert db call finishes.
It is not supposed to because default timeout for SqlCommand is 30 seconds. After this period, it should fail.
```
info: 17.09.2022 00:14:20.314 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
Executed DbCommand (316,587ms) [Parameters=[@p1='?' (DbType = Int32), @p2='?' (Size = -1) (DbType = Binary)], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
INSERT INTO [IMAGE_DATA] ([ImageID], [Data])
VALUES (@p1, @p2);
SELECT [FS_GUID]
FROM [IMAGE_DATA]
WHERE @@ROWCOUNT = 1 AND [ImageID] = @p1;
dbug: 17.09.2022 00:14:20.325 RelationalEventId.DataReaderDisposing[20300] (Microsoft.EntityFrameworkCore.Database.Command)
A data reader was disposed.
dbug: 17.09.2022 00:14:20.335 RelationalEventId.TransactionCommitting[20210] (Microsoft.EntityFrameworkCore.Database.Transaction)
Committing transaction.
dbug: 17.09.2022 00:14:20.382 RelationalEventId.TransactionCommitted[20202] (Microsoft.EntityFrameworkCore.Database.Transaction)
Committed transaction.
dbug: 17.09.2022 00:14:20.393 RelationalEventId.ConnectionClosing[20002] (Microsoft.EntityFrameworkCore.Database.Connection)
Closing connection to database 'devRemin' on server 'myServer'.
dbug: 17.09.2022 00:14:20.408 RelationalEventId.ConnectionClosed[20003] (Microsoft.EntityFrameworkCore.Database.Connection)
Closed connection to database 'devRemin' on server 'myServer'.
dbug: 17.09.2022 00:14:20.419 RelationalEventId.TransactionDisposed[20204] (Microsoft.EntityFrameworkCore.Database.Transaction)
Disposing transaction.
dbug: 17.09.2022 00:14:20.434 CoreEventId.StateChanged[10807] (Microsoft.EntityFrameworkCore.ChangeTracking)
An entity of type 'IMAGE' tracked by 'CancellationTokenTestContext' changed state from 'Added' to 'Unchanged'. Consider using 'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see key values.
dbug: 17.09.2022 00:14:20.443 CoreEventId.StateChanged[10807] (Microsoft.EntityFrameworkCore.ChangeTracking)
An entity of type 'IMAGE_DATA' tracked by 'CancellationTokenTestContext' changed state from 'Added' to 'Unchanged'. Consider using 'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see key values.
dbug: 17.09.2022 00:14:20.451 CoreEventId.SaveChangesCompleted[10005] (Microsoft.EntityFrameworkCore.Update)
SaveChanges completed for 'CancellationTokenTestContext' with 2 entities written to the database.
dbug: 17.09.2022 00:14:20.457 CoreEventId.ContextDisposed[10407] (Microsoft.EntityFrameworkCore.Infrastructure)
'CancellationTokenTestContext' disposed.
Inserted
```
### To reproduce
Set up a SQL Server with a slow internet connection to be able to simulate this.
I am using a SQL Server 2012 with latest patch.
### Expected behavior
Command fails after 30 seconds which is the default timeout.
### Further technical details
"Microsoft.EntityFrameworkCore.SqlServer" Version="6.0.9"
Microsoft Windows 10 Home 10.0.19043 N/A Build 19043

|
design
|
commandtimeout doesn t work for insert with ef core describe the bug sample project sample project database tables sorry github doesn t support files with names ending with sql when inserting a mb file from ef core log i can see the following executed dbcommand commandtype text commandtimeout set nocount on insert into values select from where rowcount and scope identity dbug relationaleventid commandexecuting microsoft entityframeworkcore database command executing dbcommand commandtype text commandtimeout set nocount on insert into values select from where rowcount and because of my slow internet connection after minutes the insert db call finishes it is not supposed to because default timeout for sqlcommand is seconds after this period it should fail info relationaleventid commandexecuted microsoft entityframeworkcore database command executed dbcommand commandtype text commandtimeout set nocount on insert into values select from where rowcount and dbug relationaleventid datareaderdisposing microsoft entityframeworkcore database command a data reader was disposed dbug relationaleventid transactioncommitting microsoft entityframeworkcore database transaction committing transaction dbug relationaleventid transactioncommitted microsoft entityframeworkcore database transaction committed transaction dbug relationaleventid connectionclosing microsoft entityframeworkcore database connection closing connection to database devremin on server myserver dbug relationaleventid connectionclosed microsoft entityframeworkcore database connection closed connection to database devremin on server myserver dbug relationaleventid transactiondisposed microsoft entityframeworkcore database transaction disposing transaction dbug coreeventid statechanged microsoft entityframeworkcore changetracking an entity of type image tracked by cancellationtokentestcontext changed state from added to unchanged consider using dbcontextoptionsbuilder enablesensitivedatalogging to see key values dbug 
coreeventid statechanged microsoft entityframeworkcore changetracking an entity of type image data tracked by cancellationtokentestcontext changed state from added to unchanged consider using dbcontextoptionsbuilder enablesensitivedatalogging to see key values dbug coreeventid savechangescompleted microsoft entityframeworkcore update savechanges completed for cancellationtokentestcontext with entities written to the database dbug coreeventid contextdisposed microsoft entityframeworkcore infrastructure cancellationtokentestcontext disposed inserted to reproduce set up a sql server with a slow internet connection to be able to simulate this i am using a sql server with latest patch expected behavior command fails after seconds which is the default timeout further technical details microsoft entityframeworkcore sqlserver version microsoft windows home n a build
| 1
|
254,698
| 19,268,059,290
|
IssuesEvent
|
2021-12-10 00:06:20
|
mwouts/jupytext
|
https://api.github.com/repos/mwouts/jupytext
|
closed
|
Extension not loading in jupyterlab 3
|
documentation
|
In Jupyterlab 3.0.4 with Jupytext 1.9.1 on Ubuntu 20.04.1 LTS the filemanager does not appear to be loading. Specifically the log line that says the jupytext file manager loads in not showing up. I can use the console commands to pair a notebook but no additional files are saved. The GUI also does not show previously paired files as notebooks either.
I can use the most recent version of jupytext with jupyter notebook just fine. I can also downgrade to lab 2.2 and the old plugin and it works fine as well.
I have tried all variations of enabling and configuration files listed in the install documentation and nothing has worked. I am fairly sure there is a bug here but there's no error message so it is a bit hard to say.
Let me know if there's any more information I can provide.
Here's a little bit of additional debug info:
```
g@g-ThinkPad-P51:~$ jupyter labextension list
JupyterLab v3.0.4
/home/g/.local/share/jupyter/labextensions
jupyterlab-jupytext v1.3.0 enabled OK (python, jupytext)
@jupyter-widgets/jupyterlab-manager v3.0.0 enabled OK (python, jupyterlab_widgets)
Other labextensions (built into JupyterLab)
app dir: /home/g/.local/share/jupyter/lab
```
Contents of .jupyter/jupyter_notebook_config.json:
```
{
"NotebookApp": {
"nbserver_extensions": {
"jupytext": true
}
}
}
```
|
1.0
|
Extension not loading in jupyterlab 3 - In Jupyterlab 3.0.4 with Jupytext 1.9.1 on Ubuntu 20.04.1 LTS the filemanager does not appear to be loading. Specifically the log line that says the jupytext file manager loads in not showing up. I can use the console commands to pair a notebook but no additional files are saved. The GUI also does not show previously paired files as notebooks either.
I can use the most recent version of jupytext with jupyter notebook just fine. I can also downgrade to lab 2.2 and the old plugin and it works fine as well.
I have tried all variations of enabling and configuration files listed in the install documentation and nothing has worked. I am fairly sure there is a bug here but there's no error message so it is a bit hard to say.
Let me know if there's any more information I can provide.
Here's a little bit of additional debug info:
```
g@g-ThinkPad-P51:~$ jupyter labextension list
JupyterLab v3.0.4
/home/g/.local/share/jupyter/labextensions
jupyterlab-jupytext v1.3.0 enabled OK (python, jupytext)
@jupyter-widgets/jupyterlab-manager v3.0.0 enabled OK (python, jupyterlab_widgets)
Other labextensions (built into JupyterLab)
app dir: /home/g/.local/share/jupyter/lab
```
Contents of .jupyter/jupyter_notebook_config.json:
```
{
"NotebookApp": {
"nbserver_extensions": {
"jupytext": true
}
}
}
```
|
non_design
|
extension not loading in jupyterlab in jupyterlab with jupytext on ubuntu lts the filemanager does not appear to be loading specifically the log line that says the jupytext file manager loads in not showing up i can use the console commands to pair a notebook but no additional files are saved the gui also does not show previously paired files as notebooks either i can use the most recent version of jupytext with jupyter notebook just fine i can also downgrade to lab and the old plugin and it works fine as well i have tried all variations of enabling and configuration files listed in the install documentation and nothing has worked i am fairly sure there is a bug here but there s no error message so it is a bit hard to say let me know if there s any more information i can provide here s a little bit of additional debug info g g thinkpad jupyter labextension list jupyterlab home g local share jupyter labextensions jupyterlab jupytext enabled ok python jupytext jupyter widgets jupyterlab manager enabled ok python jupyterlab widgets other labextensions built into jupyterlab app dir home g local share jupyter lab contents of jupyter jupyter notebook config json notebookapp nbserver extensions jupytext true
| 0
|
3,000
| 2,653,519,179
|
IssuesEvent
|
2015-03-17 00:10:54
|
CECS343Project/Farm
|
https://api.github.com/repos/CECS343Project/Farm
|
closed
|
Make application responsive
|
Design Development enhancement
|
Position of main panel needs to move corresponding to the screen width
|
1.0
|
Make application responsive - Position of main panel needs to move corresponding to the screen width
|
design
|
make application responsive position of main panel needs to move corresponding to the screen width
| 1
|
18,626
| 3,392,469,365
|
IssuesEvent
|
2015-11-30 19:44:38
|
M-Zuber/VirtualGabbai
|
https://api.github.com/repos/M-Zuber/VirtualGabbai
|
closed
|
Delete methods in the DAL
|
DAL Design Question
|
The delete methods pull the entity from the database per id of the object pulled in.
Should there be a check to make sure that the other fields match to make sure that the object being deleted is the one we actually want?
@DvirSh what do you think?
|
1.0
|
Delete methods in the DAL - The delete methods pull the entity from the database per id of the object pulled in.
Should there be a check to make sure that the other fields match to make sure that the object being deleted is the one we actually want?
@DvirSh what do you think?
|
design
|
delete methods in the dal the delete methods pull the entity from the database per id of the object pulled in should there be a check to make sure that the other fields match to make sure that the object being deleted is the one we actually want dvirsh what do you think
| 1
|
575,389
| 17,029,345,020
|
IssuesEvent
|
2021-07-04 08:23:59
|
emmamei/cdkey
|
https://api.github.com/repos/emmamei/cdkey
|
closed
|
poseChannel doesn't propogate properly
|
bug priority
|
This results in not having poses that work: the `Poses...` menu results in text in open chat and no action.
|
1.0
|
poseChannel doesn't propogate properly - This results in not having poses that work: the `Poses...` menu results in text in open chat and no action.
|
non_design
|
posechannel doesn t propogate properly this results in not having poses that work the poses menu results in text in open chat and no action
| 0
|
169,363
| 26,787,658,696
|
IssuesEvent
|
2023-02-01 05:07:42
|
ronin-rb/ronin-rb.github.io
|
https://api.github.com/repos/ronin-rb/ronin-rb.github.io
|
closed
|
Switch to a dark mode color scheme
|
design
|
Switch the color scheme to white text on black / dark gray background. Example: https://runlet.app/
|
1.0
|
Switch to a dark mode color scheme - Switch the color scheme to white text on black / dark gray background. Example: https://runlet.app/
|
design
|
switch to a dark mode color scheme switch the color scheme to white text on black dark gray background example
| 1
|
104,487
| 16,616,840,167
|
IssuesEvent
|
2021-06-02 17:50:01
|
Dima2021/t-vault
|
https://api.github.com/repos/Dima2021/t-vault
|
opened
|
CVE-2018-14042 (Medium) detected in bootstrap-3.3.4.min.js
|
security vulnerability
|
## CVE-2018-14042 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.4.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.4/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.4/js/bootstrap.min.js</a></p>
<p>Path to dependency file: t-vault/tvaultui/bower_components/ng-table/docs/template/index.template.html</p>
<p>Path to vulnerable library: t-vault/tvaultui/bower_components/ng-table/docs/template/index.template.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/t-vault/commit/259885b704776a5554c5d008b51b19c9b0ea9fd5">259885b704776a5554c5d008b51b19c9b0ea9fd5</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042>CVE-2018-14042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.3.4","packageFilePaths":["/tvaultui/bower_components/ng-table/docs/template/index.template.html"],"isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.3.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0"}],"baseBranches":["dev"],"vulnerabilityIdentifier":"CVE-2018-14042","vulnerabilityDetails":"In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2018-14042 (Medium) detected in bootstrap-3.3.4.min.js - ## CVE-2018-14042 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.4.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.4/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.4/js/bootstrap.min.js</a></p>
<p>Path to dependency file: t-vault/tvaultui/bower_components/ng-table/docs/template/index.template.html</p>
<p>Path to vulnerable library: t-vault/tvaultui/bower_components/ng-table/docs/template/index.template.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/t-vault/commit/259885b704776a5554c5d008b51b19c9b0ea9fd5">259885b704776a5554c5d008b51b19c9b0ea9fd5</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip.
<p>Publish Date: 2018-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042>CVE-2018-14042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p>
<p>Release Date: 2018-07-13</p>
<p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.3.4","packageFilePaths":["/tvaultui/bower_components/ng-table/docs/template/index.template.html"],"isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.3.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0"}],"baseBranches":["dev"],"vulnerabilityIdentifier":"CVE-2018-14042","vulnerabilityDetails":"In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_design
|
cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file t vault tvaultui bower components ng table docs template index template html path to vulnerable library t vault tvaultui bower components ng table docs template index template html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch dev vulnerability details in bootstrap before xss is possible in the data container property of tooltip publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org webjars npm bootstrap org webjars bootstrap isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree twitter bootstrap isminimumfixversionavailable true minimumfixversion org webjars npm bootstrap org webjars bootstrap basebranches vulnerabilityidentifier cve vulnerabilitydetails in bootstrap before xss is possible in the data container property of tooltip vulnerabilityurl
| 0
|
26,470
| 2,684,556,245
|
IssuesEvent
|
2015-03-29 03:31:33
|
gtcasl/gpuocelot
|
https://api.github.com/repos/gtcasl/gpuocelot
|
opened
|
Add support for CUDA 5 features: dynamic parallelism etc..
|
enhancement imported Priority-High
|
_From [rtf...@gmail.com](https://code.google.com/u/113306493658803437520/) on May 17, 2012 09:59:33_
Only a remainder of features added to cuda 5.0 and that would be good to have in gpuocelot:
*SM_30 and SM_35 PTX instrinsics support
*Dynamic parallelism
object linking? don't know if that makes sense here..
_Original issue: http://code.google.com/p/gpuocelot/issues/detail?id=68_
|
1.0
|
Add support for CUDA 5 features: dynamic parallelism etc.. - _From [rtf...@gmail.com](https://code.google.com/u/113306493658803437520/) on May 17, 2012 09:59:33_
Only a remainder of features added to cuda 5.0 and that would be good to have in gpuocelot:
*SM_30 and SM_35 PTX instrinsics support
*Dynamic parallelism
object linking? don't know if that makes sense here..
_Original issue: http://code.google.com/p/gpuocelot/issues/detail?id=68_
|
non_design
|
add support for cuda features dynamic parallelism etc from on may only a remainder of features added to cuda and that would be good to have in gpuocelot sm and sm ptx instrinsics support dynamic parallelism object linking don t know if that makes sense here original issue
| 0
|
213,638
| 16,529,905,233
|
IssuesEvent
|
2021-05-27 03:35:10
|
xinbailu/DripLoader
|
https://api.github.com/repos/xinbailu/DripLoader
|
closed
|
can u add more documentation about the first steps
|
documentation question
|
so i have a problem, i download it , compiled it, i fired notepad as a process to get its pid, and here is what happened:


so it needed around 1800 min ?!
what did i do wrong, and where to put my shellcode, i dont know a lot of cpp :(
seeking ur help !
|
1.0
|
can u add more documentation about the first steps - so i have a problem, i download it , compiled it, i fired notepad as a process to get its pid, and here is what happened:


so it needed around 1800 min ?!
what did i do wrong, and where to put my shellcode, i dont know a lot of cpp :(
seeking ur help !
|
non_design
|
can u add more documentation about the first steps so i have a problem i download it compiled it i fired notepad as a process to get its pid and here is what happened so it needed around min what did i do wrong and where to put my shellcode i dont know a lot of cpp seeking ur help
| 0
|
123,630
| 16,517,617,300
|
IssuesEvent
|
2021-05-26 11:27:02
|
EscolaDeSaudePublica/FeliciLab
|
https://api.github.com/repos/EscolaDeSaudePublica/FeliciLab
|
closed
|
[25/05] 2º Episódio Mulheres na TI
|
Comunicação Externa DesignLab Instagram Narrativas Redes sociais
|
## **Objetivo**
**Como** externo das redes sociais
**Quero** saber como as mulheres estão atuando na área de TI
**Para** saber como as mulheres estão atuando na área de TI
## **Contexto**
Vamos construir a ideia de fazer um especial "Mulheres na TI", serão 5 posts, sempre as terças-feiras, começando dia 18/05. Fernanda é nossa primeira personagem. Ela vai nos contar sobre a sua formação, se teve dificuldade para entrar na área, a importância das mulheres na TI, e já que ela mora no interior, vamos explorar também essa ideia, se ela tem dificuldade na carreira por conta disso, se o ensino e o trabalho são acessíveis e se ainda existem preconceitos.
## **PubliciLab**
- [x] Falar com Fernanda para escrever o seu perfil
- [x] Montar o texto com um contexto comum
- [x] Revisar o conteúdo
- [x] Pegar uma foto da Fernanda
## **DesignLab**
- [x] Montar uma arte incluindo a foto
- [x] Entregar até segunda (24/05)
## **Critérios de Aceitação**
- [ ] Visualizar conteúdo sobre o especial
**Dado** que sou uma usuária da internet
**Quando** visito o blog e instagram do felicilab
**Então** eu visualizo e interajo com a publicação
## Observações
https://docs.google.com/document/d/1PQiAi3B5kLrbw1uuXrQuIAc3duvvYLnmVbv_4TfFpQU/edit?usp=sharing
|
1.0
|
[25/05] 2º Episódio Mulheres na TI - ## **Objetivo**
**Como** externo das redes sociais
**Quero** saber como as mulheres estão atuando na área de TI
**Para** saber como as mulheres estão atuando na área de TI
## **Contexto**
Vamos construir a ideia de fazer um especial "Mulheres na TI", serão 5 posts, sempre as terças-feiras, começando dia 18/05. Fernanda é nossa primeira personagem. Ela vai nos contar sobre a sua formação, se teve dificuldade para entrar na área, a importância das mulheres na TI, e já que ela mora no interior, vamos explorar também essa ideia, se ela tem dificuldade na carreira por conta disso, se o ensino e o trabalho são acessíveis e se ainda existem preconceitos.
## **PubliciLab**
- [x] Falar com Fernanda para escrever o seu perfil
- [x] Montar o texto com um contexto comum
- [x] Revisar o conteúdo
- [x] Pegar uma foto da Fernanda
## **DesignLab**
- [x] Montar uma arte incluindo a foto
- [x] Entregar até segunda (24/05)
## **Critérios de Aceitação**
- [ ] Visualizar conteúdo sobre o especial
**Dado** que sou uma usuária da internet
**Quando** visito o blog e instagram do felicilab
**Então** eu visualizo e interajo com a publicação
## Observações
https://docs.google.com/document/d/1PQiAi3B5kLrbw1uuXrQuIAc3duvvYLnmVbv_4TfFpQU/edit?usp=sharing
|
design
|
episódio mulheres na ti objetivo como externo das redes sociais quero saber como as mulheres estão atuando na área de ti para saber como as mulheres estão atuando na área de ti contexto vamos construir a ideia de fazer um especial mulheres na ti serão posts sempre as terças feiras começando dia fernanda é nossa primeira personagem ela vai nos contar sobre a sua formação se teve dificuldade para entrar na área a importância das mulheres na ti e já que ela mora no interior vamos explorar também essa ideia se ela tem dificuldade na carreira por conta disso se o ensino e o trabalho são acessíveis e se ainda existem preconceitos publicilab falar com fernanda para escrever o seu perfil montar o texto com um contexto comum revisar o conteúdo pegar uma foto da fernanda designlab montar uma arte incluindo a foto entregar até segunda critérios de aceitação visualizar conteúdo sobre o especial dado que sou uma usuária da internet quando visito o blog e instagram do felicilab então eu visualizo e interajo com a publicação observações
| 1
|
85,597
| 15,755,091,038
|
IssuesEvent
|
2021-03-31 01:10:04
|
jgeraigery/beaker-notebook
|
https://api.github.com/repos/jgeraigery/beaker-notebook
|
opened
|
CVE-2016-1000339 (Medium) detected in bcprov-jdk14-1.38.jar
|
security vulnerability
|
## CVE-2016-1000339 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk14-1.38.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.4.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: beaker-notebook/plugin/groovy/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/bouncycastle/bcprov-jdk14/138/de366c3243a586eb3c0e2bcde1ed9bb1bfb985ff/bcprov-jdk14-138.jar,/root/.gradle/caches/modules-2/files-2.1/bouncycastle/bcprov-jdk14/138/de366c3243a586eb3c0e2bcde1ed9bb1bfb985ff/bcprov-jdk14-138.jar,/root/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk14/1.38/de366c3243a586eb3c0e2bcde1ed9bb1bfb985ff/bcprov-jdk14-1.38.jar,/root/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk14/1.38/de366c3243a586eb3c0e2bcde1ed9bb1bfb985ff/bcprov-jdk14-1.38.jar</p>
<p>
Dependency Hierarchy:
- itext-2.1.7.jar (Root Library)
- :x: **bcprov-jdk14-1.38.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Bouncy Castle JCE Provider version 1.55 and earlier the primary engine class used for AES was AESFastEngine. Due to the highly table driven approach used in the algorithm it turns out that if the data channel on the CPU can be monitored the lookup table accesses are sufficient to leak information on the AES key being used. There was also a leak in AESEngine although it was substantially less. AESEngine has been modified to remove any signs of leakage (testing carried out on Intel X86-64) and is now the primary AES class for the BC JCE provider from 1.56. Use of AESFastEngine is now only recommended where otherwise deemed appropriate.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000339>CVE-2016-1000339</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000339">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000339</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: org.bouncycastle:bcprov-debug-jdk15on:1.56,org.bouncycastle:bcprov-debug-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk15on:1.56,org.bouncycastle:bcprov-jdk14:1.56,org.bouncycastle:bcprov-jdk15on:1.56,org.bouncycastle:bcprov-ext-debug-jdk15on:1.56</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk14","packageVersion":"1.38","packageFilePaths":["/plugin/groovy/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.lowagie:itext:2.1.7;org.bouncycastle:bcprov-jdk14:1.38","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-debug-jdk15on:1.56,org.bouncycastle:bcprov-debug-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk15on:1.56,org.bouncycastle:bcprov-jdk14:1.56,org.bouncycastle:bcprov-jdk15on:1.56,org.bouncycastle:bcprov-ext-debug-jdk15on:1.56"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2016-1000339","vulnerabilityDetails":"In the Bouncy Castle JCE Provider version 1.55 and earlier the primary engine class used for AES was AESFastEngine. Due to the highly table driven approach used in the algorithm it turns out that if the data channel on the CPU can be monitored the lookup table accesses are sufficient to leak information on the AES key being used. There was also a leak in AESEngine although it was substantially less. AESEngine has been modified to remove any signs of leakage (testing carried out on Intel X86-64) and is now the primary AES class for the BC JCE provider from 1.56. Use of AESFastEngine is now only recommended where otherwise deemed appropriate.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000339","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2016-1000339 (Medium) detected in bcprov-jdk14-1.38.jar - ## CVE-2016-1000339 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk14-1.38.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.4.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: beaker-notebook/plugin/groovy/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/bouncycastle/bcprov-jdk14/138/de366c3243a586eb3c0e2bcde1ed9bb1bfb985ff/bcprov-jdk14-138.jar,/root/.gradle/caches/modules-2/files-2.1/bouncycastle/bcprov-jdk14/138/de366c3243a586eb3c0e2bcde1ed9bb1bfb985ff/bcprov-jdk14-138.jar,/root/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk14/1.38/de366c3243a586eb3c0e2bcde1ed9bb1bfb985ff/bcprov-jdk14-1.38.jar,/root/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk14/1.38/de366c3243a586eb3c0e2bcde1ed9bb1bfb985ff/bcprov-jdk14-1.38.jar</p>
<p>
Dependency Hierarchy:
- itext-2.1.7.jar (Root Library)
- :x: **bcprov-jdk14-1.38.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Bouncy Castle JCE Provider version 1.55 and earlier the primary engine class used for AES was AESFastEngine. Due to the highly table driven approach used in the algorithm it turns out that if the data channel on the CPU can be monitored the lookup table accesses are sufficient to leak information on the AES key being used. There was also a leak in AESEngine although it was substantially less. AESEngine has been modified to remove any signs of leakage (testing carried out on Intel X86-64) and is now the primary AES class for the BC JCE provider from 1.56. Use of AESFastEngine is now only recommended where otherwise deemed appropriate.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000339>CVE-2016-1000339</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000339">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000339</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: org.bouncycastle:bcprov-debug-jdk15on:1.56,org.bouncycastle:bcprov-debug-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk15on:1.56,org.bouncycastle:bcprov-jdk14:1.56,org.bouncycastle:bcprov-jdk15on:1.56,org.bouncycastle:bcprov-ext-debug-jdk15on:1.56</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk14","packageVersion":"1.38","packageFilePaths":["/plugin/groovy/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.lowagie:itext:2.1.7;org.bouncycastle:bcprov-jdk14:1.38","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-debug-jdk15on:1.56,org.bouncycastle:bcprov-debug-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk15on:1.56,org.bouncycastle:bcprov-jdk14:1.56,org.bouncycastle:bcprov-jdk15on:1.56,org.bouncycastle:bcprov-ext-debug-jdk15on:1.56"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2016-1000339","vulnerabilityDetails":"In the Bouncy Castle JCE Provider version 1.55 and earlier the primary engine class used for AES was AESFastEngine. Due to the highly table driven approach used in the algorithm it turns out that if the data channel on the CPU can be monitored the lookup table accesses are sufficient to leak information on the AES key being used. There was also a leak in AESEngine although it was substantially less. AESEngine has been modified to remove any signs of leakage (testing carried out on Intel X86-64) and is now the primary AES class for the BC JCE provider from 1.56. Use of AESFastEngine is now only recommended where otherwise deemed appropriate.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000339","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_design
|
cve medium detected in bcprov jar cve medium severity vulnerability vulnerable library bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk library home page a href path to dependency file beaker notebook plugin groovy build gradle path to vulnerable library root gradle caches modules files bouncycastle bcprov bcprov jar root gradle caches modules files bouncycastle bcprov bcprov jar root gradle caches modules files org bouncycastle bcprov bcprov jar root gradle caches modules files org bouncycastle bcprov bcprov jar dependency hierarchy itext jar root library x bcprov jar vulnerable library vulnerability details in the bouncy castle jce provider version and earlier the primary engine class used for aes was aesfastengine due to the highly table driven approach used in the algorithm it turns out that if the data channel on the cpu can be monitored the lookup table accesses are sufficient to leak information on the aes key being used there was also a leak in aesengine although it was substantially less aesengine has been modified to remove any signs of leakage testing carried out on intel and is now the primary aes class for the bc jce provider from use of aesfastengine is now only recommended where otherwise deemed appropriate publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org bouncycastle bcprov debug org bouncycastle bcprov debug org bouncycastle bcprov ext org bouncycastle bcprov ext org bouncycastle bcprov org bouncycastle bcprov org bouncycastle bcprov ext debug 
isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com lowagie itext org bouncycastle bcprov isminimumfixversionavailable true minimumfixversion org bouncycastle bcprov debug org bouncycastle bcprov debug org bouncycastle bcprov ext org bouncycastle bcprov ext org bouncycastle bcprov org bouncycastle bcprov org bouncycastle bcprov ext debug basebranches vulnerabilityidentifier cve vulnerabilitydetails in the bouncy castle jce provider version and earlier the primary engine class used for aes was aesfastengine due to the highly table driven approach used in the algorithm it turns out that if the data channel on the cpu can be monitored the lookup table accesses are sufficient to leak information on the aes key being used there was also a leak in aesengine although it was substantially less aesengine has been modified to remove any signs of leakage testing carried out on intel and is now the primary aes class for the bc jce provider from use of aesfastengine is now only recommended where otherwise deemed appropriate vulnerabilityurl
| 0
|
60,330
| 7,331,789,394
|
IssuesEvent
|
2018-03-05 14:36:03
|
fabric8-ui/fabric8-ux
|
https://api.github.com/repos/fabric8-ui/fabric8-ux
|
opened
|
VISUALS: Import Application Flow Gaps
|
work-type/visual design
|
VCs:
- Refer to Laura's wireframes/current implementation for import flow
- Create any visual design mockups for screens where we have gaps in the existing screens for the existing flows.
|
1.0
|
VISUALS: Import Application Flow Gaps - VCs:
- Refer to Laura's wireframes/current implementation for import flow
- Create any visual design mockups for screens where we have gaps in the existing screens for the existing flows.
|
design
|
visuals import application flow gaps vcs refer to laura s wireframes current implementation for import flow create any visual design mockups for screens where we have gaps in the existing screens for the existing flows
| 1
|
170,957
| 27,036,861,978
|
IssuesEvent
|
2023-02-12 21:47:02
|
posomo/posomo-scrapper
|
https://api.github.com/repos/posomo/posomo-scrapper
|
closed
|
[feat] Crawler abstraction
|
crawler design pattern
|
## 📝 Overview
After separating the script that parses the internal HTML with the beautiful soup library
from the scrapper class that batch-processes scripts with the selenium library,
apply an abstract class
to minimize duplicate code and dependence on external libraries in subclasses.
## 👩‍💻 Feature description
### ✅ Notes
<!-- Please add anything to share, screenshots, etc. -->
- Anything that needs additional sharing goes in a Comment
|
1.0
|
[feat] Crawler abstraction - ## 📝 Overview
After separating the script that parses the internal HTML with the beautiful soup library
from the scrapper class that batch-processes scripts with the selenium library,
apply an abstract class
to minimize duplicate code and dependence on external libraries in subclasses.
## 👩‍💻 Feature description
### ✅ Notes
<!-- Please add anything to share, screenshots, etc. -->
- Anything that needs additional sharing goes in a Comment
|
design
|
crawler abstraction 📝 overview after separating the script that parses the internal html with the beautiful soup library from the scrapper class that batch processes scripts with the selenium library apply an abstract class to minimize duplicate code and dependence on external libraries in subclasses 👩‍💻 feature description
✅
notes anything that needs additional sharing goes in a comment
| 1
|
273,468
| 29,820,314,837
|
IssuesEvent
|
2023-06-17 01:26:08
|
pazhanivel07/frameworks_base_2021-0970
|
https://api.github.com/repos/pazhanivel07/frameworks_base_2021-0970
|
closed
|
CVE-2022-20354 (High) detected in baseandroid-10.0.0_r44 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2022-20354 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-10.0.0_r44</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/services/core/java/com/android/server/connectivity/Vpn.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In onDefaultNetworkChanged of Vpn.java, there is a possible way to disable VPN due to a logic error in the code. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-11 Android-12 Android-12LAndroid ID: A-219546241
<p>Publish Date: 2022-08-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-20354>CVE-2022-20354</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-20354 (High) detected in baseandroid-10.0.0_r44 - autoclosed - ## CVE-2022-20354 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-10.0.0_r44</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/services/core/java/com/android/server/connectivity/Vpn.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In onDefaultNetworkChanged of Vpn.java, there is a possible way to disable VPN due to a logic error in the code. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-11 Android-12 Android-12LAndroid ID: A-219546241
<p>Publish Date: 2022-08-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-20354>CVE-2022-20354</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_design
|
cve high detected in baseandroid autoclosed cve high severity vulnerability vulnerable library baseandroid android framework classes and services library home page a href found in base branch master vulnerable source files services core java com android server connectivity vpn java vulnerability details in ondefaultnetworkchanged of vpn java there is a possible way to disable vpn due to a logic error in the code this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with mend
| 0
|
43,687
| 11,287,246,876
|
IssuesEvent
|
2020-01-16 03:40:16
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
CAFFE2_API macros missing for DecodeMultipleClipsFromVideo and FreeDecodedData function
|
caffe2 high priority module: build module: vision triage review triaged
|
## 🐛 Bug
While **DecodeMultipleClipsFromVideo** and **FreeDecodedData** was moved from **caffe2/video/video_io.h** to **caffe2/video/video_decoder.h** in d2ceab27661b6f16e0b0d9e958b0d880793ba564, CAFFE2_API macros was removed from functions signature which were added in c2a75926cac36b770c9afcca894d07feb307592e.
Now building PyTorch with enabled USE_OPENCV=1 and USE_FFMPEG=1 flags fails with
```
[ 95%] Linking CXX executable ../../../../bin/torch_shm_manager
/home/user/PyTorch/build/lib/libtorch_cuda.so: undefined reference to `caffe2::DecodeMultipleClipsFromVideo(char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, caffe2::Params const&, int, int, std::vector<int, std::allocator<int> > const&, bool, int&, int&, std::vector<unsigned char*, std::allocator<unsigned char*> >&)'
caffe2/torch/lib/libshm/CMakeFiles/torch_shm_manager.dir/build.make:102: recipe for target 'bin/torch_shm_manager' failed
collect2: error: ld returned 1 exit status
```
Adding back CAFFE2_API macros to function declarations (probably enough to add only to DecodeMultipleClipsFromVideo) solves the problem.
## To Reproduce
Build current master from source using command:
`USE_OPENCV=1 USE_FFMPEG=1 python3 setup.py install`
cc @ezyang @gchanan @zou3519 @fmassa
|
1.0
|
CAFFE2_API macros missing for DecodeMultipleClipsFromVideo and FreeDecodedData function - ## 🐛 Bug
While **DecodeMultipleClipsFromVideo** and **FreeDecodedData** was moved from **caffe2/video/video_io.h** to **caffe2/video/video_decoder.h** in d2ceab27661b6f16e0b0d9e958b0d880793ba564, CAFFE2_API macros was removed from functions signature which were added in c2a75926cac36b770c9afcca894d07feb307592e.
Now building PyTorch with enabled USE_OPENCV=1 and USE_FFMPEG=1 flags fails with
```
[ 95%] Linking CXX executable ../../../../bin/torch_shm_manager
/home/user/PyTorch/build/lib/libtorch_cuda.so: undefined reference to `caffe2::DecodeMultipleClipsFromVideo(char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, caffe2::Params const&, int, int, std::vector<int, std::allocator<int> > const&, bool, int&, int&, std::vector<unsigned char*, std::allocator<unsigned char*> >&)'
caffe2/torch/lib/libshm/CMakeFiles/torch_shm_manager.dir/build.make:102: recipe for target 'bin/torch_shm_manager' failed
collect2: error: ld returned 1 exit status
```
Adding back CAFFE2_API macros to function declarations (probably enough to add only to DecodeMultipleClipsFromVideo) solves the problem.
## To Reproduce
Build current master from source using command:
`USE_OPENCV=1 USE_FFMPEG=1 python3 setup.py install`
cc @ezyang @gchanan @zou3519 @fmassa
|
non_design
|
api macros missing for decodemultipleclipsfromvideo and freedecodeddata function 🐛 bug while decodemultipleclipsfromvideo and freedecodeddata was moved from video video io h to video video decoder h in api macros was removed from functions signature which were added in now building pytorch with enabled use opencv and use ffmpeg flags fails with linking cxx executable bin torch shm manager home user pytorch build lib libtorch cuda so undefined reference to decodemultipleclipsfromvideo char const std basic string std allocator const int params const int int std vector const bool int int std vector torch lib libshm cmakefiles torch shm manager dir build make recipe for target bin torch shm manager failed error ld returned exit status adding back api macros to function declarations probably enough to add only to decodemultipleclipsfromvideo solves the problem to reproduce build current master from source using command use opencv use ffmpeg setup py install cc ezyang gchanan fmassa
| 0
|
416,795
| 12,151,630,162
|
IssuesEvent
|
2020-04-24 20:21:54
|
imazen/imageflow
|
https://api.github.com/repos/imazen/imageflow
|
closed
|
Transparent png show black background
|
priority-high
|
Hi,
Thanks for your amazing software, it works really well!!
We are having a weird problem with these images:
## Original

(by the way, its the same image of #153, different problem)
I optimized the original image using [Tinypng](https://tinypng.com):
## Optimized

In both cases the image has a transparent background.
Everything look fine if you see the image directly.
But when asked through imageflow appears with a different background:
## Served by imageflow

In the optimization the image size decrease 75%, which is great for our home page.
It is something about the Tinypng optimization? or perhaps is a profile mishandling in imageflow?
Thanks for all your help!!
|
1.0
|
Transparent png show black background - Hi,
Thanks for your amazing software, it works really well!!
We are having a weird problem with these images:
## Original

(by the way, its the same image of #153, different problem)
I optimized the original image using [Tinypng](https://tinypng.com):
## Optimized

In both cases the image has a transparent background.
Everything look fine if you see the image directly.
But when asked through imageflow appears with a different background:
## Served by imageflow

In the optimization the image size decrease 75%, which is great for our home page.
It is something about the Tinypng optimization? or perhaps is a profile mishandling in imageflow?
Thanks for all your help!!
|
non_design
|
transparent png show black background hi thanks for your amazing software it works really well we are having a weird problem with these images original by the way its the same image of different problem i optimized the original image using optimized in both cases the image has a transparent background everything look fine if you see the image directly but when asked through imageflow appears with a different background served by imageflow in the optimization the image size decrease which is great for our home page it is something about the tinypng optimization or perhaps is a profile mishandling in imageflow thanks for all your help
| 0
|
38,469
| 5,188,424,863
|
IssuesEvent
|
2017-01-20 19:54:16
|
googleapis/toolkit
|
https://api.github.com/repos/googleapis/toolkit
|
opened
|
Support ClientStreaming test generation
|
Java NodeJS Test
|
Unit test generation for ClientStreaming methods is not yet supported in these languages (which otherwise have unit test generation for gRPC streaming methods):
- [ ] Java
- [ ] NodeJS
|
1.0
|
Support ClientStreaming test generation - Unit test generation for ClientStreaming methods is not yet supported in these languages (which otherwise have unit test generation for gRPC streaming methods):
- [ ] Java
- [ ] NodeJS
|
non_design
|
support clientstreaming test generation unit test generation for clientstreaming methods is not yet supported in these languages which otherwise have unit test generation for grpc streaming methods java nodejs
| 0
|
55,167
| 23,399,635,124
|
IssuesEvent
|
2022-08-12 06:22:40
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
"az sf cluster node add" command fails
|
Service Attention Service Fabric
|
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az sf cluster node add`
**Errors:**
```
'VirtualMachineScaleSetExtension' object has no attribute 'type1'
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 233, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 659, in execute
raise ex
cli/core/commands/__init__.py, ln 722, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
...
cli/command_modules/servicefabric/custom.py, ln 1236, in <listcomp>
if ext.type1 is not None and (ext.type1.lower() == SERVICE_FABRIC_WINDOWS_NODE_EXT_NAME or ext.type1.lower() == SERVICE_FABRIC_LINUX_NODE_EXT_NAME)]
AttributeError: 'VirtualMachineScaleSetExtension' object has no attribute 'type1'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az sf cluster node add -c {} --node-type {} --nodes-to-add {} -g {} --verbose --subscription {}`
## Expected Behavior
## Environment Summary
```
Linux-4.4.0-19041-Microsoft-x86_64-with-debian-buster-sid
Python 3.6.10
Installer: DEB
azure-cli 2.18.0
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
|
2.0
|
"az sf cluster node add" command fails -
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az sf cluster node add`
**Errors:**
```
'VirtualMachineScaleSetExtension' object has no attribute 'type1'
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 233, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 659, in execute
raise ex
cli/core/commands/__init__.py, ln 722, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
...
cli/command_modules/servicefabric/custom.py, ln 1236, in <listcomp>
if ext.type1 is not None and (ext.type1.lower() == SERVICE_FABRIC_WINDOWS_NODE_EXT_NAME or ext.type1.lower() == SERVICE_FABRIC_LINUX_NODE_EXT_NAME)]
AttributeError: 'VirtualMachineScaleSetExtension' object has no attribute 'type1'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az sf cluster node add -c {} --node-type {} --nodes-to-add {} -g {} --verbose --subscription {}`
## Expected Behavior
## Environment Summary
```
Linux-4.4.0-19041-Microsoft-x86_64-with-debian-buster-sid
Python 3.6.10
Installer: DEB
azure-cli 2.18.0
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
|
non_design
|
az sf cluster node add command fails this is autogenerated please review and update as needed describe the bug command name az sf cluster node add errors virtualmachinescalesetextension object has no attribute traceback most recent call last site packages knack cli py ln in invoke cmd result self invocation execute args cli core commands init py ln in execute raise ex cli core commands init py ln in run jobs serially results append self run job expanded arg cmd copy cli command modules servicefabric custom py ln in if ext is not none and ext lower service fabric windows node ext name or ext lower service fabric linux node ext name attributeerror virtualmachinescalesetextension object has no attribute to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az sf cluster node add c node type nodes to add g verbose subscription expected behavior environment summary linux microsoft with debian buster sid python installer deb azure cli additional context
| 0
|