Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1
value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3
values | title stringlengths 1 1k | labels stringlengths 4 1.38k | body stringlengths 1 262k | index stringclasses 16
values | text_combine stringlengths 96 262k | label stringclasses 2
values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
471,783 | 13,610,407,937 | IssuesEvent | 2020-09-23 07:19:09 | HackYourFuture-CPH/chattie | https://api.github.com/repos/HackYourFuture-CPH/chattie | closed | List of message components | High priority User story | ## User story
**Who:** **As a** user
**What:** **I want to** see all messages on a conversation
**Why:** **so that we can** see who has created the message, what the message says and the date and time of the message
## Implementation details
- There are two distinct components (MessageList and Message). There could be more, but these two are the minimum
- This story is a very central component, so you will be talking with lots of people about how to use this component
- See mockup 2 for the mockup of the message
- Should be able to have two states (from user or from other); see either the grey or blue background. The message should be positioned to the right or the left based on the state.
- Remember to talk to the person making the API endpoint to make the interface clear!
- How do we get new messages? Maybe a setInterval that checks if there are new messages every 2 seconds!? | 1.0 | List of message components - ## User story
**Who:** **As a** user
**What:** **I want to** see all messages on a conversation
**Why:** **so that we can** see who has created the message, what the message says and the date and time of the message
## Implementation details
- There are two distinct components (MessageList and Message). There could be more, but these two are the minimum
- This story is a very central component, so you will be talking with lots of people about how to use this component
- See mockup 2 for the mockup of the message
- Should be able to have two states (from user or from other); see either the grey or blue background. The message should be positioned to the right or the left based on the state.
- Remember to talk to the person making the API endpoint to make the interface clear!
- How do we get new messages? Maybe a setInterval that checks if there are new messages every 2 seconds!? | priority | list of message components user story who as a user what i want to see all messages on a conversation why so that we can see who has created the message what the message says and the date and time of the message implementation details there are two distinct components messagelist and message there could be more but these two are the minimum this story is a very central component so you will be talking with lots of people about how to use this component see mockup for the mockup of the message should be able to have two states from user or from other see either the grey or blue background the message should be positioned to the right or the left based on the state remember to talk to the person making the api endpoint to make the interface clear how do we get new messages maybe a setinterval that checks if there are new messages every seconds | 1 |
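The "check for new messages every 2 seconds" idea from the implementation notes above could be sketched roughly as follows. This is a hypothetical sketch, not the chattie implementation: `fetchMessages`, the callback shape, and the assumption that messages carry a numeric, sorted `id` are all illustrative.

```javascript
// Keep only messages that have not been rendered yet.
// Assumes each message has a numeric id and the list is sorted by id.
function diffNewMessages(messages, lastSeenId) {
  return messages.filter((m) => m.id > lastSeenId);
}

// Poll on an interval (default 2s, as suggested in the issue) and hand any
// fresh messages to the caller. Returns a stopper for component unmount.
function startMessagePolling(fetchMessages, onNewMessages, intervalMs = 2000) {
  let lastSeenId = 0;
  const timer = setInterval(async () => {
    const fresh = diffNewMessages(await fetchMessages(), lastSeenId);
    if (fresh.length > 0) {
      lastSeenId = fresh[fresh.length - 1].id;
      onNewMessages(fresh);
    }
  }, intervalMs);
  return () => clearInterval(timer);
}
```

A MessageList component could call `startMessagePolling` on mount and invoke the returned stopper on unmount, though a push mechanism (e.g. WebSockets) would avoid the polling traffic entirely.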
3,255 | 4,288,559,431 | IssuesEvent | 2016-07-17 14:54:49 | iszwnc/eapisy.js | https://api.github.com/repos/iszwnc/eapisy.js | opened | [Snyk] Build failed while vulnerability tests run | enhancement security | I follow the documentation that https://snyk.io/docs/ provides, but for some reason I cannot get my tests to run during the building process. Here is the thing that I know so far:
- If you run `snyk protect`, it will look for the `.snyk` configuration and run our patches before testing.
### What are patches?
Patches are, in simple words, local modifications of the dependency that you depend on. This usually happens when *Snyk* cannot automatically update the dependency, so it asks you to create a **Patch**.
---
- After the `snyk protect` is run, we can normally start our tests using the flag `--dev`. This flag is going to test both our `dependencies` and `devDependencies`.
> If you do not use `--dev`, by default *Snyk* will look straight into your `dependencies`, completely ignoring your `devDependencies` configuration.


> The strange thing is that when the build starts I get the first image, and when I run the tests locally with `snyk test --dev` I get the second one. | True | [Snyk] Build failed while vulnerability tests run - I follow the documentation that https://snyk.io/docs/ provides, but for some reason I cannot get my tests to run during the building process. Here is the thing that I know so far:
- If you run `snyk protect`, it will look for the `.snyk` configuration and run our patches before testing.
### What are patches?
Patches are, in simple words, local modifications of the dependency that you depend on. This usually happens when *Snyk* cannot automatically update the dependency, so it asks you to create a **Patch**.
---
- After the `snyk protect` is run, we can normally start our tests using the flag `--dev`. This flag is going to test both our `dependencies` and `devDependencies`.
> If you do not use `--dev`, by default *Snyk* will look straight into your `dependencies`, completely ignoring your `devDependencies` configuration.


> The strange thing is that when the build starts I get the first image, and when I run the tests locally with `snyk test --dev` I get the second one. | non_priority | build failed while vulnerability tests run i follow the documentation that provides but for some reason i cannot get my tests to run during the building process here is the thing that i know so far if you run snyk protect it will look for the snyk configuration and run our patches before testing what are patches patches are in simple words local modifications of the dependency that you depend on this usually happens when snyk cannot automatically update the dependency so it asks you to create a patch after the snyk protect is run we can normally start our tests using the flag dev this flag is going to test both our dependencies and devdependencies if you do not use dev by default snyk will look straight into your dependencies completely ignoring your devdependencies configuration the strange thing is that when the build starts i get the first image and when i run the tests locally snyk test dev i get the second one | 0 |
785,345 | 27,610,170,501 | IssuesEvent | 2023-03-09 15:29:17 | bats-core/bats-core | https://api.github.com/repos/bats-core/bats-core | closed | `setup_[suite|file]` errors are not reported (tests are skipped) making them hard to debug | Type: Bug Priority: NeedsTriage | **Describe the bug**
Errors in these functions are not reported to callers. It would be nice to see an error message. `-x` doesn't reveal them (though it does for `setup`).
**To Reproduce**
```
function setup_file(){
false
}
@test blah {
true
}
```
`blah` will be skipped, but without suggestion as to why.
**Expected behavior**
It would be nice to see that `false` failed. I appreciate this might be problematic for some output formatters.
**Environment (please complete the following information):**
- Bats version: 1.8.2
- OS: MacOS
- Bash version: 3.2.57
| 1.0 | `setup_[suite|file]` errors are not reported (tests are skipped) making them hard to debug - **Describe the bug**
Errors in these functions are not reported to callers. It would be nice to see an error message. `-x` doesn't reveal them (though it does for `setup`).
**To Reproduce**
```
function setup_file(){
false
}
@test blah {
true
}
```
`blah` will be skipped, but without suggestion as to why.
**Expected behavior**
It would be nice to see that `false` failed. I appreciate this might be problematic for some output formatters.
**Environment (please complete the following information):**
- Bats version: 1.8.2
- OS: MacOS
- Bash version: 3.2.57
| priority | setup errors are not reported tests are skipped making them hard to debug describe the bug errors in these functions are not reported to callers it would be nice to see an error message x doesn t reveal them though it does for setup to reproduce function setup file false test blah true blah will be skipped but without suggestion as to why expected behavior it would be nice to see that false failed i appreciate this might be problematic for some output formatters environment please complete the following information macos | 1 |
519,341 | 15,049,340,074 | IssuesEvent | 2021-02-03 11:21:21 | assemblee-virtuelle/semapps | https://api.github.com/repos/assemblee-virtuelle/semapps | opened | Persist the order of OrderedCollection items with rdf:Seq | activitypub low priority | **Problem**
Currently, to handle `OrderedCollection`s, the items are ordered by a predicate (for example `as:published`). But normally the order of the items should be persisted.
**Proposal**
- Use [rdf:Seq](https://ontola.io/blog/ordered-data-in-rdf/) to persist the order of the items.
- This library could (or could not) be used: https://js.rdf.dev/modules/_rdfdev_collections
- When attaching an item to a collection, it should be possible to specify whether it goes at the beginning or at the end of the collection.
- Remove the `sort` parameter from `activitypub.collection.get`, which no longer serves any purpose.
**Affected components**
Service `activitypub.collection` | 1.0 | Persist the order of OrderedCollection items with rdf:Seq - **Problem**
Currently, to handle `OrderedCollection`s, the items are ordered by a predicate (for example `as:published`). But normally the order of the items should be persisted.
**Proposal**
- Use [rdf:Seq](https://ontola.io/blog/ordered-data-in-rdf/) to persist the order of the items.
- This library could (or could not) be used: https://js.rdf.dev/modules/_rdfdev_collections
- When attaching an item to a collection, it should be possible to specify whether it goes at the beginning or at the end of the collection.
- Remove the `sort` parameter from `activitypub.collection.get`, which no longer serves any purpose.
**Affected components**
Service `activitypub.collection` | priority | persist the order of orderedcollection items with rdf seq problem currently to handle orderedcollection the items are ordered by a predicate for example as published but normally the order of the items should be persisted proposal use rdf seq to persist the order of the items this library could or could not be used when attaching an item to a collection it should be possible to specify whether it goes at the beginning or at the end of the collection remove the sort parameter from activitypub collection get which no longer serves any purpose affected components service activitypub collection | 1 |
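The rdf:Seq proposal in the issue above could be sketched as plain triples. This is only an illustration: the `toSeqTriples` helper and the `[subject, predicate, object]` array representation are assumptions, not the actual `activitypub.collection` service, which would write to its triple store instead.

```javascript
const RDF = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';

// Encode an ordered collection as rdf:Seq triples. The standard container
// membership properties rdf:_1, rdf:_2, ... persist the item order.
function toSeqTriples(collectionUri, itemUris) {
  const triples = [[collectionUri, RDF + 'type', RDF + 'Seq']];
  itemUris.forEach((item, i) => {
    triples.push([collectionUri, RDF + '_' + (i + 1), item]);
  });
  return triples;
}
```

Note the trade-off the proposal hints at: appending an item at the end only adds one `rdf:_(n+1)` triple, while inserting at the beginning means renumbering every existing membership property.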
317,027 | 9,659,977,457 | IssuesEvent | 2019-05-20 14:34:03 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | TreeList pager does not refresh when All pageSize is selected | Bug C: Pager C: TreeList Kendo1 Priority 1 SEV: Medium | ### Bug report
Reported in ticket with ID 1389358
### Reproduction of the problem
- page the TreeList next page/any page
- select "All" from the pageSizes dropdown
[Dojo](https://dojo.telerik.com/@bubblemaster/UNOgAzuz)
### Current behavior

Possibly related to #4144
### Expected/desired behavior
Pager input and current page should show 1.
### Workaround
[Dojo](https://dojo.telerik.com/@bubblemaster/eQogAcIb)
```
var treelist = $("#treelist").data("kendoTreeList");
treelist.one("dataBound", function(e){
var pager = e.sender.pager;
pager.dataSource.bind("change", function(ev){
// All pages selected
if(ev.items.length === ev.sender.data().length){
$("[title='More pages']").click();
}
});
});
```
### Environment
* **Kendo UI version:** 2019.1.220
| 1.0 | TreeList pager does not refresh when All pageSize is selected - ### Bug report
Reported in ticket with ID 1389358
### Reproduction of the problem
- page the TreeList next page/any page
- select "All" from the pageSizes dropdown
[Dojo](https://dojo.telerik.com/@bubblemaster/UNOgAzuz)
### Current behavior

Possibly related to #4144
### Expected/desired behavior
Pager input and current page should show 1.
### Workaround
[Dojo](https://dojo.telerik.com/@bubblemaster/eQogAcIb)
```
var treelist = $("#treelist").data("kendoTreeList");
treelist.one("dataBound", function(e){
var pager = e.sender.pager;
pager.dataSource.bind("change", function(ev){
// All pages selected
if(ev.items.length === ev.sender.data().length){
$("[title='More pages']").click();
}
});
});
```
### Environment
* **Kendo UI version:** 2019.1.220
| priority | treelist pager does not refresh when all pagesize is selected bug report reported in ticket with id reproduction of the problem page the treelist next page any page select all from the pagesizes dropdown current behavior possibly related with expected desired behavior pager input and current page should show workaround var treelist treelist data kendotreelist treelist one databound function e var pager e sender pager pager datasource bind change function ev all pages selected if ev items length ev sender data length click environment kendo ui version | 1 |
172,391 | 6,502,490,796 | IssuesEvent | 2017-08-23 13:49:47 | wordpress-mobile/AztecEditor-Android | https://api.github.com/repos/wordpress-mobile/AztecEditor-Android | opened | Add support for [wpvideo] shortcode | medium priority new feature | `[wpvideo] is another video shortcode that must be implemented. It does not contain a full path to the video, only a reference to videopress.com file.
More information: https://en.support.wordpress.com/videopress/ | 1.0 | Add support for [wpvideo] shortcode - `[wpvideo]` is another video shortcode that must be implemented. It does not contain a full path to the video, only a reference to a videopress.com file.
More information: https://en.support.wordpress.com/videopress/ | priority | add support for shortcode is another video shortcode that must be implemented it does not contain a full path to the video only a reference to videopress com file more information | 1 |
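Parsing the shortcode described in that issue could look roughly like this. The regex, the `AbCdEfGh`-style id format, and the `parseWpVideo` name are assumptions for illustration, not the Aztec editor's actual parser.

```javascript
// Hypothetical matcher for a [wpvideo XXXXXXXX] shortcode, which carries
// only a VideoPress identifier rather than a full video URL.
const WPVIDEO_RE = /\[wpvideo\s+([A-Za-z0-9]+)[^\]]*\]/;

function parseWpVideo(text) {
  const match = WPVIDEO_RE.exec(text);
  if (!match) return null;
  // The id would still need to be resolved against videopress.com
  return { videoPressId: match[1] };
}
```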
325,025 | 27,840,941,230 | IssuesEvent | 2023-03-20 12:43:58 | FuelLabs/fuels-ts | https://api.github.com/repos/FuelLabs/fuels-ts | closed | Improve confidence of Typegen tests | bug refactor tests | Sometimes the Typegen tests lose track of stuff and give errors such as this:
```
ENOENT: no such file or directory, open '/fuels-ts/packages/abi-typegen/test/fixtures/out/abis/minimal.bin'
``` | 1.0 | Improve confidence of Typegen tests - Sometimes the Typegen tests lose track of stuff and give errors such as this:
```
ENOENT: no such file or directory, open '/fuels-ts/packages/abi-typegen/test/fixtures/out/abis/minimal.bin'
``` | non_priority | improve confidence of typegen tests sometimes the typegen tests lose track of stuff and give errors such as this enoent no such file or directory open fuels ts packages abi typegen test fixtures out abis minimal bin | 0 |
510,770 | 14,815,951,472 | IssuesEvent | 2021-01-14 08:16:12 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | opened | Remove file name (pcap) from the menu layout | low-priority bug | <img width="1180" alt="Screenshot 2021-01-14 at 09 15 39" src="https://user-images.githubusercontent.com/4493366/104562785-17649080-5649-11eb-9847-dd520952f85e.png">
| 1.0 | Remove file name (pcap) from the menu layout - <img width="1180" alt="Screenshot 2021-01-14 at 09 15 39" src="https://user-images.githubusercontent.com/4493366/104562785-17649080-5649-11eb-9847-dd520952f85e.png">
| priority | remove file name pcap from the menu layout img width alt screenshot at src | 1 |
156,003 | 12,290,104,315 | IssuesEvent | 2020-05-10 01:36:38 | input-output-hk/cardano-ledger-specs | https://api.github.com/repos/input-output-hk/cardano-ledger-specs | opened | Return only the first validation error | priority medium shelley testnet | The shelley ledger is currently returning all the validation errors when a block/transaction is invalid. This can be confusing, though, in cases where one error is really the cause of the others. For example, if you supply a bad transaction input, the `ValueNotConservedUTxO` error is almost certainly also going to trigger.
We can start to make this clearer by reporting only the first error. | 1.0 | Return only the first validation error - The shelley ledger is currently returning all the validation errors when a block/transaction is invalid. This can be confusing, though, in cases where one error is really the cause of the others. For example, if you supply a bad transaction input, the `ValueNotConservedUTxO` error is almost certainly also going to trigger.
We can start to make this clearer by reporting only the first error. | non_priority | return only the first validation error the shelley ledger is currently returning all the validation errors when a block transaction is invalid this can be confusing though in cases where one error is really the cause of the others for example if you supply a bad transaction input the valuenotconservedutxo error is almost certainly also going to trigger we can start to make this clearer by reporting only the first error | 0 |
23,763 | 7,373,920,070 | IssuesEvent | 2018-03-13 18:40:12 | TerabyteQbt/meta | https://api.github.com/repos/TerabyteQbt/meta | opened | Reproducibility Project - make standard java build process bit-for-bit reproducible | discussion enhancement help wanted standard java build process | This will require stripping out or otherwise dealing with file timestamps, etc. The end condition here is that if two different computers both execute a build of a java package (like meta_tools.release), with the exact same JDK version, then they should produce identical output (sha1s match). | 1.0 | Reproducibility Project - make standard java build process bit-for-bit reproducible - This will require stripping out or otherwise dealing with file timestamps, etc. The end condition here is that if two different computers both execute a build of a java package (like meta_tools.release), with the exact same JDK version, then they should produce identical output (sha1s match). | non_priority | reproducibility project make standard java build process bit for bit reproducible this will require stripping out or otherwise dealing with file timestamps etc the end condition here is that if two different computers both execute a build of a java package like meta tools release with the exact same jdk version then they should produce identical output match | 0 |
3,422 | 2,538,379,335 | IssuesEvent | 2015-01-27 05:51:45 | GoogleCloudPlatform/kubernetes | https://api.github.com/repos/GoogleCloudPlatform/kubernetes | closed | etcd on master died due to running out of memory on day-old cluster with very little activity | area/introspection priority/P1 | Yesterday I spun up a cluster on GCE around 2pm using the 0.5.2 release. I scheduled two nginx pods on it with a replication controller, then ran kube-push using the 0.5.3 release to update the master. I then started 3 nginx pods on a different replication controller. Everything seemed to be working.
This morning I tried listing the pods and got an error from kubecfg:
F1202 11:11:14.120573 22165 kubecfg.go:428] Got request error: 501: All the given peers are not reachable (Tried to connect to each peer twice and failed) [0]
On the master, etcd is no longer running, and the etcd logs indicate that it ran out of memory. Over night, it looks like snapshots were taken about every half hour, but nothing else is in the logs other than the stack traces:
[etcd] Dec 1 22:13:06.623 INFO | kubernetes-master: state changed from 'follower' to 'leader'.
[etcd] Dec 1 22:13:06.623 INFO | kubernetes-master: leader changed from '' to 'kubernetes-master'.
[etcd] Dec 1 23:33:51.960 INFO | kubernetes-master: snapshot of 10006 events at index 10006 completed
[etcd] Dec 2 00:42:22.390 INFO | kubernetes-master: snapshot of 10011 events at index 20017 completed
[etcd] Dec 2 01:41:25.973 INFO | kubernetes-master: snapshot of 10001 events at index 30018 completed
[etcd] Dec 2 02:33:53.816 INFO | kubernetes-master: snapshot of 10012 events at index 40030 completed
[etcd] Dec 2 03:21:15.942 INFO | kubernetes-master: snapshot of 10005 events at index 50035 completed
[etcd] Dec 2 04:04:56.331 INFO | kubernetes-master: snapshot of 10001 events at index 60036 completed
[etcd] Dec 2 04:45:31.110 INFO | kubernetes-master: snapshot of 10004 events at index 70040 completed
[etcd] Dec 2 05:23:48.165 INFO | kubernetes-master: snapshot of 10006 events at index 80046 completed
[etcd] Dec 2 06:00:05.639 INFO | kubernetes-master: snapshot of 10015 events at index 90061 completed
[etcd] Dec 2 06:34:32.590 INFO | kubernetes-master: snapshot of 10005 events at index 100066 completed
[etcd] Dec 2 07:07:29.809 INFO | kubernetes-master: snapshot of 10002 events at index 110068 completed
[etcd] Dec 2 07:39:15.418 INFO | kubernetes-master: snapshot of 10009 events at index 120077 completed
[etcd] Dec 2 08:09:46.621 INFO | kubernetes-master: snapshot of 10007 events at index 130084 completed
[etcd] Dec 2 08:39:15.081 INFO | kubernetes-master: snapshot of 10011 events at index 140095 completed
[etcd] Dec 2 09:07:46.872 INFO | kubernetes-master: snapshot of 10010 events at index 150105 completed
[etcd] Dec 2 09:35:28.577 INFO | kubernetes-master: snapshot of 10013 events at index 160118 completed
[etcd] Dec 2 10:02:19.479 INFO | kubernetes-master: snapshot of 10007 events at index 170125 completed
[etcd] Dec 2 10:28:28.815 INFO | kubernetes-master: snapshot of 10009 events at index 180134 completed
[etcd] Dec 2 10:53:59.705 INFO | kubernetes-master: snapshot of 10016 events at index 190150 completed
[etcd] Dec 2 11:18:49.189 INFO | kubernetes-master: snapshot of 10017 events at index 200167 completed
[etcd] Dec 2 11:43:06.186 INFO | kubernetes-master: snapshot of 10020 events at index 210187 completed
[etcd] Dec 2 12:06:44.526 INFO | kubernetes-master: snapshot of 10009 events at index 220196 completed
[etcd] Dec 2 12:29:58.950 INFO | kubernetes-master: snapshot of 10003 events at index 230199 completed
[etcd] Dec 2 12:52:47.210 INFO | kubernetes-master: snapshot of 10023 events at index 240222 completed
[etcd] Dec 2 13:15:14.640 INFO | kubernetes-master: snapshot of 10025 events at index 250247 completed
[etcd] Dec 2 13:37:15.713 INFO | kubernetes-master: snapshot of 10006 events at index 260253 completed
[etcd] Dec 2 13:58:56.031 INFO | kubernetes-master: snapshot of 10018 events at index 270271 completed
[etcd] Dec 2 14:20:37.346 INFO | kubernetes-master: snapshot of 10001 events at index 280272 completed
[etcd] Dec 2 14:42:04.366 INFO | kubernetes-master: snapshot of 10021 events at index 290293 completed
[etcd] Dec 2 15:03:07.422 INFO | kubernetes-master: snapshot of 10003 events at index 300296 completed
[etcd] Dec 2 15:23:50.334 INFO | kubernetes-master: snapshot of 10016 events at index 310312 completed
[etcd] Dec 2 15:44:24.786 INFO | kubernetes-master: snapshot of 10009 events at index 320321 completed
[etcd] Dec 2 16:04:47.581 INFO | kubernetes-master: snapshot of 10010 events at index 330331 completed
[etcd] Dec 2 16:24:52.737 INFO | kubernetes-master: snapshot of 10022 events at index 340353 completed
[etcd] Dec 2 16:44:43.265 INFO | kubernetes-master: snapshot of 10005 events at index 350358 completed
fatal error: runtime: out of memory | 1.0 | etcd on master died due to running out of memory on day-old cluster with very little activity - Yesterday I spun up a cluster on GCE around 2pm using the 0.5.2 release. I scheduled two nginx pods on it with a replication controller, then ran kube-push using the 0.5.3 release to update the master. I then started 3 nginx pods on a different replication controller. Everything seemed to be working.
This morning I tried listing the pods and got an error from kubecfg:
F1202 11:11:14.120573 22165 kubecfg.go:428] Got request error: 501: All the given peers are not reachable (Tried to connect to each peer twice and failed) [0]
On the master, etcd is no longer running, and the etcd logs indicate that it ran out of memory. Over night, it looks like snapshots were taken about every half hour, but nothing else is in the logs other than the stack traces:
[etcd] Dec 1 22:13:06.623 INFO | kubernetes-master: state changed from 'follower' to 'leader'.
[etcd] Dec 1 22:13:06.623 INFO | kubernetes-master: leader changed from '' to 'kubernetes-master'.
[etcd] Dec 1 23:33:51.960 INFO | kubernetes-master: snapshot of 10006 events at index 10006 completed
[etcd] Dec 2 00:42:22.390 INFO | kubernetes-master: snapshot of 10011 events at index 20017 completed
[etcd] Dec 2 01:41:25.973 INFO | kubernetes-master: snapshot of 10001 events at index 30018 completed
[etcd] Dec 2 02:33:53.816 INFO | kubernetes-master: snapshot of 10012 events at index 40030 completed
[etcd] Dec 2 03:21:15.942 INFO | kubernetes-master: snapshot of 10005 events at index 50035 completed
[etcd] Dec 2 04:04:56.331 INFO | kubernetes-master: snapshot of 10001 events at index 60036 completed
[etcd] Dec 2 04:45:31.110 INFO | kubernetes-master: snapshot of 10004 events at index 70040 completed
[etcd] Dec 2 05:23:48.165 INFO | kubernetes-master: snapshot of 10006 events at index 80046 completed
[etcd] Dec 2 06:00:05.639 INFO | kubernetes-master: snapshot of 10015 events at index 90061 completed
[etcd] Dec 2 06:34:32.590 INFO | kubernetes-master: snapshot of 10005 events at index 100066 completed
[etcd] Dec 2 07:07:29.809 INFO | kubernetes-master: snapshot of 10002 events at index 110068 completed
[etcd] Dec 2 07:39:15.418 INFO | kubernetes-master: snapshot of 10009 events at index 120077 completed
[etcd] Dec 2 08:09:46.621 INFO | kubernetes-master: snapshot of 10007 events at index 130084 completed
[etcd] Dec 2 08:39:15.081 INFO | kubernetes-master: snapshot of 10011 events at index 140095 completed
[etcd] Dec 2 09:07:46.872 INFO | kubernetes-master: snapshot of 10010 events at index 150105 completed
[etcd] Dec 2 09:35:28.577 INFO | kubernetes-master: snapshot of 10013 events at index 160118 completed
[etcd] Dec 2 10:02:19.479 INFO | kubernetes-master: snapshot of 10007 events at index 170125 completed
[etcd] Dec 2 10:28:28.815 INFO | kubernetes-master: snapshot of 10009 events at index 180134 completed
[etcd] Dec 2 10:53:59.705 INFO | kubernetes-master: snapshot of 10016 events at index 190150 completed
[etcd] Dec 2 11:18:49.189 INFO | kubernetes-master: snapshot of 10017 events at index 200167 completed
[etcd] Dec 2 11:43:06.186 INFO | kubernetes-master: snapshot of 10020 events at index 210187 completed
[etcd] Dec 2 12:06:44.526 INFO | kubernetes-master: snapshot of 10009 events at index 220196 completed
[etcd] Dec 2 12:29:58.950 INFO | kubernetes-master: snapshot of 10003 events at index 230199 completed
[etcd] Dec 2 12:52:47.210 INFO | kubernetes-master: snapshot of 10023 events at index 240222 completed
[etcd] Dec 2 13:15:14.640 INFO | kubernetes-master: snapshot of 10025 events at index 250247 completed
[etcd] Dec 2 13:37:15.713 INFO | kubernetes-master: snapshot of 10006 events at index 260253 completed
[etcd] Dec 2 13:58:56.031 INFO | kubernetes-master: snapshot of 10018 events at index 270271 completed
[etcd] Dec 2 14:20:37.346 INFO | kubernetes-master: snapshot of 10001 events at index 280272 completed
[etcd] Dec 2 14:42:04.366 INFO | kubernetes-master: snapshot of 10021 events at index 290293 completed
[etcd] Dec 2 15:03:07.422 INFO | kubernetes-master: snapshot of 10003 events at index 300296 completed
[etcd] Dec 2 15:23:50.334 INFO | kubernetes-master: snapshot of 10016 events at index 310312 completed
[etcd] Dec 2 15:44:24.786 INFO | kubernetes-master: snapshot of 10009 events at index 320321 completed
[etcd] Dec 2 16:04:47.581 INFO | kubernetes-master: snapshot of 10010 events at index 330331 completed
[etcd] Dec 2 16:24:52.737 INFO | kubernetes-master: snapshot of 10022 events at index 340353 completed
[etcd] Dec 2 16:44:43.265 INFO | kubernetes-master: snapshot of 10005 events at index 350358 completed
fatal error: runtime: out of memory | priority | etcd on master died due to running out of memory on day old cluster with very little activity yesterday i spun up a cluster on gce around using the release i scheduled two nginx pods on it with a replication controller then ran kube push using the release to update the master i then started nginx pods on a different replication controller everything seemed to be working this morning i tried listing the pods and got an error from kubecfg kubecfg go got request error all the given peers are not reachable tried to connect to each peer twice and failed on the master etcd is no longer running and the etcd logs indicate that it ran out of memory over night it looks like snapshots were taken about every half hour but nothing else is in the logs other than the stack traces dec info kubernetes master state changed from follower to leader dec info kubernetes master leader changed from to kubernetes master dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed 
dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed dec info kubernetes master snapshot of events at index completed fatal error runtime out of memory | 1 |
315,238 | 27,057,959,832 | IssuesEvent | 2023-02-13 17:26:45 | inaturalist/iNaturalistAndroid | https://api.github.com/repos/inaturalist/iNaturalistAndroid | closed | Test new obs from camera flow | Test | I'd like to have an instrumented test that does the following:
1. Opens the app to My Observations (`ObservationsListActivity`)
1. Taps the FAB
1. Taps the "Take a Photo" button
1. Takes a photo and confirms the photo (or stubs this somehow)
1. Asserts that the photo exists in the photos row at the top of the obs editor
1. Asserts that GPS has retrieved coordinates (or stubs it)
1. Taps the FAB in the bottom toolbar to save the obs
1. Asserts that there is an unsynced obs row on My Observations
@eberryaetna, I was trying to just write one that just tapped the FAB, tapped the "No Media" button, and asserted that it launched the intent for the Obs Editor, but even that was a bit beyond me, so if you have any advice on whether a complicated test like this is possible and how one might go about it, I'd appreciate it. | 1.0 | Test new obs from camera flow - I'd like to have an instrumented test that does the following:
1. Opens the app to My Observations (`ObservationsListActivity`)
1. Taps the FAB
1. Taps the "Take a Photo" button
1. Takes a photo and confirms the photo (or stubs this somehow)
1. Asserts that the photo exists in the photos row at the top of the obs editor
1. Asserts that GPS has retrieved coordinates (or stubs it)
1. Taps the FAB in the bottom toolbar to save the obs
1. Asserts that there is an unsynced obs row on My Observations
@eberryaetna, I was trying to just write one that just tapped the FAB, tapped the "No Media" button, and asserted that it launched the intent for the Obs Editor, but evne that was a bit beyond me, so if you have any advice on whether a complicated test like this is possible and how one might go about it, I'd appreciated it. | non_priority | test new obs from camera flow i d like to have an instrumented test that does the following opens the app to my observations observationslistactivity taps the fab taps the take a photo button takes a photo and confirms the photo or stubs this somehow asserts that the photo exists in the photos row at the top of the obs editor asserts that gps has retrieved coordinates or stubs it taps the fab in the bottom toolbar to save the obs asserts that there is an unsynced obs row on my observations eberryaetna i was trying to just write one that just tapped the fab tapped the no media button and asserted that it launched the intent for the obs editor but evne that was a bit beyond me so if you have any advice on whether a complicated test like this is possible and how one might go about it i d appreciated it | 0 |
8,241 | 7,309,704,921 | IssuesEvent | 2018-02-28 12:46:21 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | opened | Build of host ia32 dart in ARMv7 cross-compilation fails - result segfaults - on Debian | area-infrastructure area-vm | Building ARMv7 32-bit dart sdk on a Debian based linux fails because the x86 dart executable it builds (to generate the snapshots for the ARM sdk) crashes.
After upgrading to our local Debian distribution, and adding toolchain support with
sudo apt-get install g++-multilib
sudo apt-get install g++-arm-linux-gnueabihf # For 32-bit ARM (ARMv7)
sudo apt-get install g++-aarch64-linux-gnu # For 64-bit ARM
the build of --arch=arm fails at the end, when the host executable at
out/ReleaseXARM/x86/dart crashes. It is needed to generate the snapshots for
the ARMv7 SDK.
A build of --arch=ia32 succeeds, and if that dart binary is copied from out/ReleaseIA32/dart to out/ReleaseXARM/x86/dart, then the build of --arch=arm succeeds.
The equivalent situation with ARMv8 64-bit builds does not have a problem. The dart executable at out/ReleaseXARM64/x64/dart does not crash, and both ARMv8 and X64 builds succeed.
| 1.0 | Build of host ia32 dart in ARMv7 cross-compilation fails - result segfaults - on Debian - Building ARMv7 32-bit dart sdk on a Debian based linux fails because the x86 dart executable it builds (to generate the snapshots for the ARM sdk) crashes.
After upgrading to our local Debian distribution, and adding toolchain support with
sudo apt-get install g++-multilib
sudo apt-get install g++-arm-linux-gnueabihf # For 32-bit ARM (ARMv7)
sudo apt-get install g++-aarch64-linux-gnu # For 64-bit ARM
the build of --arch=arm fails at the end, when the host executable at
out/ReleaseXARM/x86/dart crashes. It is needed to generate the snapshots for
the ARMv7 SDK.
A build of --arch=ia32 succeeds, and if that dart binary is copied from out/ReleaseIA32/dart to out/ReleaseXARM/x86/dart, then the build of --arch=arm succeeds.
The equivalent situation with ARMv8 64-bit builds does not have a problem. The dart executable at out/ReleaseXARM64/x64/dart does not crash, and both ARMv8 and X64 builds succeed.
| non_priority | build of host dart in cross compilation fails result segfaults on debian building bit dart sdk on a debian based linux fails because the dart executable it builds to generate the snapshots for the arm sdk crashes after upgrading to our local debian distribution and adding toolchain support with sudo apt get install g multilib sudo apt get install g arm linux gnueabihf for bit arm sudo apt get install g linux gnu for bit arm the build of arch arm fails at the end when the host executable at out releasexarm dart crashes it is needed to generate the snapshots for the sdk a build of arch succeeds and if the copy of dart is copied from out dart to dart then the build of arch arm succeeds the equivalent situation with bit builds does not have a problem the dart executable at out dart does not crash and both and builds succeed | 0 |
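The workaround above (reusing the separately built IA32 host binary for snapshot generation) can be scripted. A minimal sketch, assuming the standard `out/` layout named in the issue; exact paths may differ per checkout:

```python
import shutil
from pathlib import Path

def stage_host_dart(out_dir: str) -> Path:
    """Copy the known-good IA32 host `dart` binary over the crashing one
    that the XARM build produced, so ARM snapshot generation can proceed."""
    src = Path(out_dir) / "ReleaseIA32" / "dart"
    dst = Path(out_dir) / "ReleaseXARM" / "x86" / "dart"
    if not src.is_file():
        raise FileNotFoundError(f"run the --arch=ia32 build first: {src} is missing")
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)  # copy2 preserves the executable permission bit
    return dst
```

After staging the binary, rerunning the --arch=arm build should pick up the working host executable.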
477,699 | 13,766,659,981 | IssuesEvent | 2020-10-07 14:49:28 | NCIOCPL/cgov-digital-platform | https://api.github.com/repos/NCIOCPL/cgov-digital-platform | closed | Move "apply" button on popups | Component: Admin UX Improvement Low priority change request | Can the apply button be moved to the right, so it's more clearly connected to the search/filters on popups?
Want to avoid confusion thinking that "apply" is the go button, instead of "select"

| 1.0 | Move "apply" button on popups - Can the apply button be moved to the right, so it's more clearly connected to the search/filters on popups?
Want to avoid confusion thinking that "apply" is the go button, instead of "select"

| priority | move apply button on popups can the apply button be moved to the right so it s more clearly connected to the search filters on popups want to avoid confusion thinking that apply is the go button instead of select | 1 |
226,726 | 24,996,535,291 | IssuesEvent | 2022-11-03 01:13:01 | mcaj-git/gatsby2 | https://api.github.com/repos/mcaj-git/gatsby2 | opened | CVE-2022-2421 (High) detected in socket.io-parser-4.0.4.tgz | security vulnerability | ## CVE-2022-2421 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>socket.io-parser-4.0.4.tgz</b></p></summary>
<p>socket.io protocol parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.0.4.tgz">https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.0.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/socket.io-parser/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-4.3.0.tgz (Root Library)
- socket.io-client-3.1.3.tgz
- :x: **socket.io-parser-4.0.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mcaj-git/gatsby2/commit/0d161595e6574675ae8197397c7776c5e90a691b">0d161595e6574675ae8197397c7776c5e90a691b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Due to improper type validation in attachment parsing the Socket.io js library, it is possible to overwrite the _placeholder object which allows an attacker to place references to functions at arbitrary places in the resulting query object.
<p>Publish Date: 2022-10-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2421>CVE-2022-2421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://csirt.divd.nl/cases/DIVD-2022-00045/">https://csirt.divd.nl/cases/DIVD-2022-00045/</a></p>
<p>Release Date: 2022-10-26</p>
<p>Fix Resolution: socket.io-parser - 4.0.5,4.2.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-2421 (High) detected in socket.io-parser-4.0.4.tgz - ## CVE-2022-2421 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>socket.io-parser-4.0.4.tgz</b></p></summary>
<p>socket.io protocol parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.0.4.tgz">https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.0.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/socket.io-parser/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-4.3.0.tgz (Root Library)
- socket.io-client-3.1.3.tgz
- :x: **socket.io-parser-4.0.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mcaj-git/gatsby2/commit/0d161595e6574675ae8197397c7776c5e90a691b">0d161595e6574675ae8197397c7776c5e90a691b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Due to improper type validation in attachment parsing the Socket.io js library, it is possible to overwrite the _placeholder object which allows an attacker to place references to functions at arbitrary places in the resulting query object.
<p>Publish Date: 2022-10-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2421>CVE-2022-2421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://csirt.divd.nl/cases/DIVD-2022-00045/">https://csirt.divd.nl/cases/DIVD-2022-00045/</a></p>
<p>Release Date: 2022-10-26</p>
<p>Fix Resolution: socket.io-parser - 4.0.5,4.2.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in socket io parser tgz cve high severity vulnerability vulnerable library socket io parser tgz socket io protocol parser library home page a href path to dependency file package json path to vulnerable library node modules socket io parser package json dependency hierarchy gatsby tgz root library socket io client tgz x socket io parser tgz vulnerable library found in head commit a href found in base branch main vulnerability details due to improper type validation in attachment parsing the socket io js library it is possible to overwrite the placeholder object which allows an attacker to place references to functions at arbitrary places in the resulting query object publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution socket io parser step up your open source security game with mend | 0 |
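When auditing a lockfile by hand, the advisory's fix versions (4.0.5 and 4.2.1) imply the affected ranges used in the sketch below. The ranges are inferred, not quoted from the advisory, so confirm them against the CVE entry before relying on this:

```python
def parse_version(v: str) -> tuple:
    """'4.0.4' -> (4, 0, 4); any pre-release suffix after '-' is ignored."""
    return tuple(int(p) for p in v.split("-")[0].split("."))

def socketio_parser_vulnerable(version: str) -> bool:
    """CVE-2022-2421: the issue lists fix resolutions 4.0.5 and 4.2.1.
    Assumed affected ranges: < 4.0.5, and >= 4.1.0 but < 4.2.1."""
    v = parse_version(version)
    if v < (4, 0, 5):
        return True
    return (4, 1, 0) <= v < (4, 2, 1)
```

For the dependency pinned here (socket.io-parser 4.0.4), the check flags it as vulnerable, matching the report.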
447,727 | 12,892,395,769 | IssuesEvent | 2020-07-13 19:32:33 | PREreview/rapid-prereview | https://api.github.com/repos/PREreview/rapid-prereview | closed | Searching for already added preprints by DOI doesn't yield any results | :dollar: Funded on Issuehunt COVID-19 Mozilla 2020 Sprints OASPA priority 1 bug priority | <!-- Issuehunt Badges -->
[<img alt="Issuehunt badges" src="https://img.shields.io/badge/IssueHunt-%24100%20Funded-%2300A156.svg" />](https://issuehunt.io/r/PREreview/rapid-prereview/issues/114)
<!-- /Issuehunt Badges -->
Happening both on my local build and on the deployed OSrPRE site. Can someone else corroborate?
<!-- Issuehunt content -->
---
<details>
<summary>
<b>IssueHunt Summary</b>
</summary>
### Backers (Total: $100.00)
- [<img src='https://avatars1.githubusercontent.com/u/39268982?v=4' alt='prereview' width=24 height=24> prereview](https://issuehunt.io/u/prereview) ($100.00)
### Submitted pull Requests
- [#142 fix: Searching for already added preprints by DOI doesn't yield any results](https://issuehunt.io/r/PREreview/rapid-prereview/pull/142)
---
#### [Become a backer now!](https://issuehunt.io/r/PREreview/rapid-prereview/issues/114)
#### [Or submit a pull request to get the deposits!](https://issuehunt.io/r/PREreview/rapid-prereview/issues/114)
### Tips
- Checkout the [Issuehunt explorer](https://issuehunt.io/r/PREreview/rapid-prereview/) to discover more funded issues.
- Need some help from other developers? [Add your repositories](https://issuehunt.io/r/new) on IssueHunt to raise funds.
</details>
<!-- /Issuehunt content--> | 2.0 | Searching for already added preprints by DOI doesn't yield any results - <!-- Issuehunt Badges -->
[<img alt="Issuehunt badges" src="https://img.shields.io/badge/IssueHunt-%24100%20Funded-%2300A156.svg" />](https://issuehunt.io/r/PREreview/rapid-prereview/issues/114)
<!-- /Issuehunt Badges -->
Happening both on my local build and on the deployed OSrPRE site. Can someone else corroborate?
<!-- Issuehunt content -->
---
<details>
<summary>
<b>IssueHunt Summary</b>
</summary>
### Backers (Total: $100.00)
- [<img src='https://avatars1.githubusercontent.com/u/39268982?v=4' alt='prereview' width=24 height=24> prereview](https://issuehunt.io/u/prereview) ($100.00)
### Submitted pull Requests
- [#142 fix: Searching for already added preprints by DOI doesn't yield any results](https://issuehunt.io/r/PREreview/rapid-prereview/pull/142)
---
#### [Become a backer now!](https://issuehunt.io/r/PREreview/rapid-prereview/issues/114)
#### [Or submit a pull request to get the deposits!](https://issuehunt.io/r/PREreview/rapid-prereview/issues/114)
### Tips
- Checkout the [Issuehunt explorer](https://issuehunt.io/r/PREreview/rapid-prereview/) to discover more funded issues.
- Need some help from other developers? [Add your repositories](https://issuehunt.io/r/new) on IssueHunt to raise funds.
</details>
<!-- /Issuehunt content--> | priority | searching for already added preprints by doi doesn t yield any results happening both on my local build and on the deployed osrpre site can someone else corroborate issuehunt summary backers total submitted pull requests tips checkout the to discover more funded issues need some help from other developers on issuehunt to raise funds | 1 |
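One common cause for an exact-identifier search missing records that are already present is inconsistent DOI normalization between indexing and querying. A hypothetical sketch of such a normalizer (this is illustrative, not PREreview's actual code):

```python
def normalize_doi(raw: str) -> str:
    """Reduce the many ways users paste a DOI to one canonical key.
    DOIs are case-insensitive and often arrive with a resolver prefix."""
    doi = raw.strip().lower()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
            break
    return doi
```

If the same function is applied both when a preprint is added and when a search query is parsed, "https://doi.org/10.1101/X" and "10.1101/x" resolve to the same key.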
5,753 | 2,966,810,263 | IssuesEvent | 2015-07-12 08:39:02 | nanoc/nanoc.ws | https://api.github.com/repos/nanoc/nanoc.ws | closed | Suggestion: In conversion guide, indicate "gem install nanoc --pre" | documentation work in progress | Otherwise it's hard to install the new stuff. | 1.0 | Suggestion: In conversion guide, indicate "gem install nanoc --pre" - Otherwise it's hard to install the new stuff. | non_priority | suggestion in conversion guide indicate gem install nanoc pre otherwise it s hard to install the new stuff | 0 |
687,687 | 23,535,456,803 | IssuesEvent | 2022-08-19 20:08:18 | trimble-oss/modus-web-components | https://api.github.com/repos/trimble-oss/modus-web-components | closed | [1] Add Data Table component (barebones) | priority:medium new-component data-table | https://docs.google.com/document/d/1JwHy6Hqrj5EJY6deM5382STvM4InvV3OTdci4f1b9Ac/edit#
We may want to wrap a third party table library. Implementation details are TBD | 1.0 | [1] Add Data Table component (barebones) - https://docs.google.com/document/d/1JwHy6Hqrj5EJY6deM5382STvM4InvV3OTdci4f1b9Ac/edit#
We may want to wrap a third party table library. Implementation details are TBD | priority | add data table component barebones we may want to wrap a third party table library implementation details are tbd | 1 |
196,176 | 6,925,406,319 | IssuesEvent | 2017-11-30 15:51:12 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | Checklist Sorting Issue - Wrong Checklist Deleted | help wanted priority: important section: Task Page | At least on Firefox, there is a checklist sorting bug that occurs when you move a checklist item up, and then, without saving, delete an item that used to be above it but is now below it. We've had a report that this causes
Here is how to replicate:
Add items A, B, C, D, E to checklist (in that order).
Move item E to the second slot, so that you have the sequence AEBCD.
Delete item C
You should then have AEBD, but instead C remains and B has been deleted. (We also received a report that this sometimes caused E to jump back down to the bottom, so that you get ABDE.) | 1.0 | Checklist Sorting Issue - Wrong Checklist Deleted - At least on Firefox, there is a checklist sorting bug that occurs when you move a checklist item up, and then, without saving, delete an item that used to be above it but is now below it. We've had a report that this causes
Here is how to replicate:
Add items A, B, C, D, E to checklist (in that order).
Move item E to the second slot, so that you have the sequence AEBCD.
Delete item C
You should then have AEBD, but instead C remains and B has been deleted. (We also received a report that this sometimes caused E to jump back down to the bottom, so that you get ABDE.) | priority | checklist sorting issue wrong checklist deleted at least on firefox there is a checklist sorting bug that occurs when you move a checklist item up and then without saving delete an item that used to be above it but is now below it we ve had a report that this causes here is how to replicate add items a b c d e to checklist in that order move item e to the second slot so that you have the sequence aebcd delete item c you should then have aebd but instead c remains and b has been deleted we also received a report that this sometimes caused e to jump back down to the bottom so that you get abde | 1 |
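The behavior described above (expected AEBD; actual result keeps C and loses B) is exactly what happens if the delete uses an item index captured before the unsaved reorder rather than the item's identity at delete time. A minimal sketch of that failure mode (hypothetical, not Habitica's actual code):

```python
checklist = ["A", "B", "C", "D", "E"]

# User moves E into the second slot (unsaved reorder).
checklist.insert(1, checklist.pop(4))  # -> A E B C D

# Buggy delete: uses C's index from *before* the reorder (position 2),
# which now points at B.
stale_index_of_c = 2
buggy = list(checklist)
del buggy[stale_index_of_c]            # -> A E C D  (B lost, C kept)

# Correct delete: look the item up by identity when the delete happens.
fixed = list(checklist)
fixed.remove("C")                      # -> A E B D
```

The buggy branch reproduces the reported AECD result, which suggests checking whether the client sends positional indices computed against the pre-reorder checklist.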
40,566 | 2,868,928,465 | IssuesEvent | 2015-06-05 22:01:00 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Help text for using a package is wrong | bug Fixed Priority-Medium | <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#5899_
----
It looks like: "Apps should use any as the Dart Editor):"
which is all kinds of wrong. | 1.0 | Help text for using a package is wrong - <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#5899_
----
It looks like: "Apps should use any as the Dart Editor):"
which is all kinds of wrong. | priority | help text for using a package is wrong issue by originally opened as dart lang sdk it looks like quot apps should use any as the dart editor quot which is all kinds of wrong | 1 |
378,968 | 11,211,552,968 | IssuesEvent | 2020-01-06 15:40:46 | matthiaskoenig/pkdb | https://api.github.com/repos/matthiaskoenig/pkdb | closed | update django to django 3.0 | backend dependencies priority | I am waiting for django-elasticsearch-dsl to support django 3.x.
[[related issue](https://github.com/sabricot/django-elasticsearch-dsl/issues/232)]
| 1.0 | update django to django 3.0 - I am waiting for django-elasticsearch-dsl to support django 3.x.
[[related issue](https://github.com/sabricot/django-elasticsearch-dsl/issues/232)]
| priority | update django to django i am waiting for django elasticsearch dsl to support django x | 1 |
21,102 | 3,870,122,697 | IssuesEvent | 2016-04-11 00:25:09 | jmeas/finance-app | https://api.github.com/repos/jmeas/finance-app | closed | Add babel plugin rewire | enhancement test | This will make it much easier to test some internal algorithms I don't want to export from any file, as well as test the stuff that is getting exported | 1.0 | Add babel plugin rewire - This will make it much easier to test some internal algorithms I don't want to export from any file, as well as test the stuff that is getting exported | non_priority | add babel plugin rewire this will make it much easier to test some internal algorithms i don t want to export from any file as well as test the stuff that is getting exported | 0 |
434,428 | 30,452,901,758 | IssuesEvent | 2023-07-16 14:20:33 | JasonDsouza212/free-hit | https://api.github.com/repos/JasonDsouza212/free-hit | closed | [Docs]: Adding a small Licenses section to the readme and a MIT Licenses badge | documentation level1 gssoc23 no-issue-activity | ### what's wrong in the documentation?
The README file of the repo should have a License section to make it more professional.
### Add screenshots

### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1.0 | [Docs]: Adding a small Licenses section to the readme and a MIT Licenses badge - ### what's wrong in the documentation?
The README file of the repo should have a License section to make it more professional.
### Add screenshots

### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | non_priority | adding a small licenses section to the readme and a mit licenses badge what s wrong in the documentation the readme file of the repo should have a licenses section to make it more professional add screenshots code of conduct i agree to follow this project s code of conduct | 0 |
256,080 | 8,126,836,269 | IssuesEvent | 2018-08-17 05:01:29 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | visit_utils.encoding: update ffmpeg flags | Bug Likelihood: 3 - Occasional Priority: Normal Severity: 2 - Minor Irritation | Change +4mv+aic to +mv4+aic, to comply with a parser change in new versions of ffmpeg.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1228
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: visit_utils.encoding: update ffmpeg flags
Assigned to: Cyrus Harrison
Category:
Target version: 2.6
Author: Cyrus Harrison
Start: 11/06/2012
Due date:
% Done: 0
Estimated time:
Created: 11/06/2012 12:52 pm
Updated: 11/06/2012 12:53 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: trunk
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Change +4mv+aic to +mv4+aic, to comply with a parser change in new versions of ffmpeg.
Comments:
Hi Everyone, I updated the encoding flags used in visit_utils.encoding for a few output types. Newer versions of ffmpeg require this. The ffmpeg command line parser was fixed a bit back. The old incantation, although a widespread part of the ffmpeg lore, wasn't parsed as expected but didn't cause any issues before the fix. With this update, we should avoid complaints from newer versions of ffmpeg.
RC:
Sending visitpy/visit_utils/src/encoding.py
Sending visitpy/visit_utils/tests/test_encoding.py
Transmitting file data ..
Committed revision 19509.
Trunk:
Sending visitpy/visit_utils/src/encoding.py
Sending visitpy/visit_utils/tests/test_encoding.py
Transmitting file data ..
Committed revision 19512.
-Cyrus
| 1.0 | visit_utils.encoding: update ffmpeg flags - Change +4mv+aic to +mv4+aic, to comply with a parser change in new versions of ffmpeg.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1228
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: visit_utils.encoding: update ffmpeg flags
Assigned to: Cyrus Harrison
Category:
Target version: 2.6
Author: Cyrus Harrison
Start: 11/06/2012
Due date:
% Done: 0
Estimated time:
Created: 11/06/2012 12:52 pm
Updated: 11/06/2012 12:53 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: trunk
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Change +4mv+aic to +mv4+aic, to comply with a parser change in new versions of ffmpeg.
Comments:
Hi Everyone, I updated the encoding flags used in visit_utils.encoding for a few output types. Newer versions of ffmpeg require this. The ffmpeg command line parser was fixed a bit back. The old incantation, although a widespread part of the ffmpeg lore, wasn't parsed as expected but didn't cause any issues before the fix. With this update, we should avoid complaints from newer versions of ffmpeg.
RC:
Sending visitpy/visit_utils/src/encoding.py
Sending visitpy/visit_utils/tests/test_encoding.py
Transmitting file data ..
Committed revision 19509.
Trunk:
Sending visitpy/visit_utils/src/encoding.py
Sending visitpy/visit_utils/tests/test_encoding.py
Transmitting file data ..
Committed revision 19512.
-Cyrus
| priority | visit utils encoding update ffmpeg flags change aic to aic to comply with a parser change in new versions of ffmpeg redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority normal subject visit utils encoding update ffmpeg flags assigned to cyrus harrison category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood occasional severity minor irritation found in version trunk impact expected use os all support group any description change aic to aic to comply with a parser change in new versions of ffmpeg comments hi everyone i updated the encoding flags used in visit utils encoding for a few output types newer versions of ffmpeg require this the ffmpeg command line parser was fixed a bit back the old incantation although a wide spread part of the ffmpeg lore wasn t parsed as expected but didn t case any issues before the fix with this update we should avoid complaints from newer versions of ffmpeg rc sending visitpy visit utils src encoding pysending visitpy visit utils tests test encoding pytransmitting file data committed revision trunk sending visitpy visit utils src encoding pysending visitpy visit utils tests test encoding pytransmitting file data committed revision cyrus | 1 |
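The fix is a one-token change wherever the encoder flags are assembled. Sketched below with a hypothetical argument list (the real flag string lives in visitpy/visit_utils/src/encoding.py, and the surrounding ffmpeg options are illustrative):

```python
def fix_mv4_flag(args: list) -> list:
    """Rewrite the legacy '+4mv' token to '+mv4', which newer ffmpeg
    parsers require (the old spelling was silently mis-parsed)."""
    return [a.replace("+4mv", "+mv4") for a in args]

# Hypothetical encoder invocation using the old flag spelling:
old_args = ["ffmpeg", "-i", "frame.%04d.png", "-flags", "+4mv+aic", "out.mov"]
new_args = fix_mv4_flag(old_args)  # the '-flags' value becomes '+mv4+aic'
```

Only the flag token changes; everything else in the command line is left alone, which keeps the edit safe across output types.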
159,243 | 6,042,567,555 | IssuesEvent | 2017-06-11 14:13:30 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | API server is not accessible via VirtualBox/docker-machine after getting started instructions | area/apiserver priority/backlog sig/docs | The (awesome) getting started guide for docker includes support for running on VirtualBox and suggests using docker-machine to manage that. While the provided commands do get the kubernetes cluster up and running /within/ the VirtualBox VM, the cluster's API server is not accessible from /outside/ the VM without further hackery.
The `master.json` file configures the API server to listen on 127.0.0.1:
``` json
{
"name": "apiserver",
"image": "gcr.io/google_containers/hyperkube:VERSION",
"command": [
"/hyperkube",
"apiserver",
"--portal-net=10.0.0.1/24",
"--address=127.0.0.1",
"--etcd-servers=http://127.0.0.1:4001",
"--cluster-name=kubernetes",
"--v=2"
]
},
```
https://github.com/kubernetes/kubernetes/blob/dd5f970679e81c4c4338788f0458f0f17b6a1095/cluster/images/hyperkube/master.json#L25
The following commands instead reconfigure kubelet to create API servers listening on all addresses (assuming the kubelet container is named `kubelet`):
``` console
docker exec kubelet perl -pi -e 's/address=127.0.0.1/address=0.0.0.0/' /etc/kubernetes/manifests/master.json
docker restart kubelet
```
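The same manifest edit can be done structurally instead of with a regex. A sketch that rewrites the apiserver's `--address` flag in master.json; the traversal is deliberately shape-agnostic because the manifest layout varies across releases, and this is illustrative rather than project tooling (you would copy the file out of the container, or run this inside it):

```python
import json

def open_apiserver_address(manifest_path: str) -> None:
    """Rewrite --address=127.0.0.1 to --address=0.0.0.0 on the apiserver
    container, wherever it sits in the manifest."""
    with open(manifest_path) as f:
        doc = json.load(f)

    def visit(node):
        if isinstance(node, dict):
            if node.get("name") == "apiserver" and isinstance(node.get("command"), list):
                node["command"] = [
                    "--address=0.0.0.0" if arg == "--address=127.0.0.1" else arg
                    for arg in node["command"]
                ]
            for value in node.values():
                visit(value)
        elif isinstance(node, list):
            for item in node:
                visit(item)

    visit(doc)
    with open(manifest_path, "w") as f:
        json.dump(doc, f, indent=2)
```

Restart the kubelet container afterwards, as above, so it re-reads the manifest.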
With this adjustment, everything works as expected and a user can connect to the API server on the VirtualBox's IP address:
``` console
$ kubectl -s $(docker-machine ip dev):8080 get nodes
NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```
Based on the above, I'd suggest modifying `master.json`'s API server configuration to listen on 0.0.0.0. I can imagine that this might feel icky from a security perspective, but I would hope users are not simply running these images in production without modification. Further, the other containers described in the guide listen on 0.0.0.0 (including etcd), so it seems no less secure (and more useful) if the API server does as well.
Thanks!
Reference:
https://github.com/kubernetes/kubernetes/blob/8de12acc3d3ac391f28c4854f557421dac5cbcea/docs/getting-started-guides/docker.md#step-two-run-the-master
``` console
$ docker version
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): darwin/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64
```
``` console
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.1", GitCommit:"6a5c06e3d1eb27a6310a09270e4a5fb1afa93e74", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.1", GitCommit:"6a5c06e3d1eb27a6310a09270e4a5fb1afa93e74", GitTreeState:"clean"}
```
``` console
$ uname -a
Darwin radioactivity.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64
```
``` console
$ VBoxManage -v
4.3.28r100309
```
``` console
$ docker-machine --version
docker-machine version 0.4.0-dev (HEAD)
```
| 1.0 | priority | 1 |
1,656 | 2,613,420,953 | IssuesEvent | 2015-02-27 20:52:57 | EasyFarm/EasyFarm | https://api.github.com/repos/EasyFarm/EasyFarm | closed | Update FAQ to include solution for attacking wrong targets. | documentation | Update the FAQ section to inform users to disable auto-target before use. This feature messes up with EF's targeting system and can get the player killed. | 1.0 | non_priority | 0 |
427,040 | 29,738,154,686 | IssuesEvent | 2023-06-14 03:57:33 | Pradumnasaraf/LinkFree-CLI | https://api.github.com/repos/Pradumnasaraf/LinkFree-CLI | opened | [DOCS] New demo GIF | documentation | ### Description
Add a new demo GIF to the README
### Screenshots
_No response_ | 1.0 | non_priority | 0 |
95,266 | 11,964,729,146 | IssuesEvent | 2020-04-05 20:40:40 | COVID19Tracking/website | https://api.github.com/repos/COVID19Tracking/website | closed | fix(design): Tighten header line heights | DESIGN | We need to redo line heights for headers; H2 in the body especially has way too much leading between lines. I think this is a typography.js config fix.

| 1.0 | non_priority | 0 |
65,523 | 3,231,736,892 | IssuesEvent | 2015-10-13 00:15:30 | aic-collections/aicdams-lakeshore | https://api.github.com/repos/aic-collections/aicdams-lakeshore | opened | AssetPresenter returns hash for status instead of ListItem | bug LOW priority | A minor problem, probably buried in Hydra::Presenter, perhaps when to_model is called. We can dig into this later if time permits. Note: the model still resolves status as a ListItem and the object of the triple is unaffected. It only affects GenericFile (i.e. assets). | 1.0 | priority | 1 |
306,046 | 9,380,012,325 | IssuesEvent | 2019-04-04 16:03:46 | mozilla/addons-server | https://api.github.com/repos/mozilla/addons-server | reopened | [Reviewer Tools] Anonymising Reviewer Emails | component: reviewer tools priority: p2 | It is a fact that developers are often not happy if their extensions are rejected during a review. While it should not cause any further complications, in reality that is not always the case.
There are instances where dissatisfied developers resort to targeting the reviewer via other AMO systems as a consequence.
There are even situations where a developer blames the add-on reviewer for issues not even related to the review or the reviewer.
AMO systems such as the reviewer's addon user-review, addon support & addon Abuse Reports are then targeted for harassment and retaliation.
For example:
```
Harassment of reviewer via user-review system:
https://github.com/mozilla/addons/issues/962
https://github.com/mozilla/addons-server/issues/9686
Harassment of reviewer via add-on support pages:
https://github.com/erosman/support/issues/47
https://github.com/erosman/support/issues/51
Harassment of reviewer via add-on Abuse Reports (since 2015)
https://reviewers.addons.mozilla.org/en-US/reviewers/abuse-reports/firemonkey
Harassment of reviewer via discourse.mozilla.org
(there have been instances in the past)
```
IMHO, review emails could be generalised and anonymised (similar to auto-approval) in order to avoid such occurrences and to enable reviewers to review add-ons without the fear of becoming a victim of harassment or reprisal from unhappy developers.
@caitmuenster @wagnerand @kewisch | 1.0 | priority | 1 |
69,802 | 17,861,874,901 | IssuesEvent | 2021-09-06 02:40:57 | Vitzual/Automa | https://api.github.com/repos/Vitzual/Automa | opened | Add assemblers | Building System Alpha 1 | **High Priority**
- [x] Add smelter model
- [ ] Inherit all constructor logic
- [x] Add ability to construct items with 2 inputs / 1 output
| 1.0 | non_priority | 0 |
181,781 | 30,742,600,585 | IssuesEvent | 2023-07-28 12:45:06 | ncosd/food-pantry-app | https://api.github.com/repos/ncosd/food-pantry-app | closed | Change default time for a volunteer window | help wanted Needs Design volunteer-portal | # Feature Description
<!-- Describe how the feature should work, and what problem it solves. -->
- [x] Default time should be 9am - 12pm
- [x] Add target numbers 6,7,8,9,10+
| 1.0 | non_priority | 0 |
151,938 | 23,894,355,320 | IssuesEvent | 2022-09-08 13:48:16 | carbon-design-system/carbon-platform | https://api.github.com/repos/carbon-design-system/carbon-platform | closed | Update framework icons | role: design 🎨 | We need to update the framework icons for resource and mini cards to the IBM grid so that they all have the same visual weight.
Icons are needed for the dashboard as well, sized like status indicators. | 1.0 | non_priority | 0 |
156,322 | 5,967,348,799 | IssuesEvent | 2017-05-30 15:46:43 | DCS-LCSR/SignStream3 | https://api.github.com/repos/DCS-LCSR/SignStream3 | opened | Text events disappear when new segment tier created | bug priority | Use case:
Either on clicking new segment tier, or after it, the text events in a ST disappear.
(TODO: verify, etc) | 1.0 | priority | 1 |
545,074 | 15,935,508,284 | IssuesEvent | 2021-04-14 09:56:48 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Fatal error occuring with the recent update Version 1.0.76.11 | NEXT UPDATE [Priority: HIGH] bug | Fatal error: Uncaught Error: Undefined constant "AIOSEO_VERSION" in C:\xampp\htdocs\sravan\wp-content\plugins\accelerated-mobile-pages\templates\features.php:3333
an error occurring when All in One SEO Pro is not activated.
Ref: https://secure.helpscout.net/conversation/1483458039/190934?folderId=2632030
issue occurring in local as well.

| 1.0 | priority | 1 |
259,327 | 8,197,320,343 | IssuesEvent | 2018-08-31 13:03:48 | CSCfi/pebbles | https://api.github.com/repos/CSCfi/pebbles | closed | Clear group users after the course ends | backend feature high-priority | After a course has passed 3-6 months (period can be decided), there should be a button to clear all the users of a group EXCEPT the owner and the manager of the group.
This should be done to accommodate new iterations of a group. This feature is desirable because there are a lot of groups and people being added every day. | 1.0 | priority | 1 |
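The clearing rule described in this row (remove everyone except the owner and the manager) can be sketched in Python; the role names and data shape here are hypothetical, not the actual Pebbles schema:

```python
def clear_group_users(users):
    """Remove every user from a group except its owner and manager.

    `users` is a list of (username, role) pairs; the role names and the
    data shape are hypothetical, not the actual Pebbles schema.
    """
    kept_roles = {"owner", "manager"}
    return [(name, role) for name, role in users if role in kept_roles]
```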
190,107 | 6,808,756,835 | IssuesEvent | 2017-11-04 07:59:56 | spring-projects/spring-boot | https://api.github.com/repos/spring-projects/spring-boot | closed | Log context path at startup | priority: low type: enhancement | When starting a web application, there is a nice log message stating which port your server is running on:
`2017-05-24 09:32:50,586 - INFO : MESSAGE=[Jetty started on port(s) 7000 (http/1.1)] TID=[main] CLASS=[org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainer]`
I've run into several occasions, though, where the context path is also relevant to know. Since you can set it with Spring Boot via server.contextpath, I was thinking it might be a good idea to print this out as well to give developers a more complete picture of where their application is running.
Is this something that could be done in Spring Boot, or would that need to be suggested at the Jetty team?
| 1.0 | priority | 1 |
754,689 | 26,398,506,919 | IssuesEvent | 2023-01-12 21:56:20 | B4-Group/swe_b4 | https://api.github.com/repos/B4-Group/swe_b4 | closed | Sounds | bug Priority | # What is going on
Music doesn't restart
If the game-over sound plays, the music doesn't start again after restarting the game
If you restart before you die, the music doesn't start from the beginning
# What should happen
Music needs to start from the beginning after a restart
# Steps to reproduce
Die
# Platform
- OS: [e.g. iOS]
- In Editor: Both
- Version v1.0.0
| 1.0 | priority | 1 |
489,029 | 14,100,268,200 | IssuesEvent | 2020-11-06 03:41:02 | PMEAL/OpenPNM | https://api.github.com/repos/PMEAL/OpenPNM | closed | Add health/consistency checks as pore-scale models? | discussion enhancement low priority proposal | As I am investigating issue #1500, I am writing several functions that inspect pore and throat properties, such as finding throat diameters that are larger than their neighboring pores. This returns an Nt-by-1 array of True/False. It occurs to me that this could be a pore-scale model that returns Trues in locations where there is a problem.
I can also imagine several other places, EVEN ON algorithms! This is exciting to me because we've never put models on algorithms, but we have the ability to. The models could do things like check for nans, check for invasion pressure < entry pressure (as is the case in issue #1500). | 1.0 | priority | 1 |
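A boolean check like the one this row describes can be sketched with NumPy; `conns` as an Nt-by-2 array of pore indices per throat is an assumption about the network layout, not OpenPNM's actual model API:

```python
import numpy as np

def oversized_throats(throat_diam, pore_diam, conns):
    """Return an Nt-long boolean array, True where a throat is wider
    than at least one of its two neighboring pores.

    `conns` is an Nt-by-2 array of pore indices per throat -- an
    assumption about the network layout, not OpenPNM's actual API.
    """
    neighbor_diams = pore_diam[conns]               # shape (Nt, 2)
    return throat_diam > neighbor_diams.min(axis=1)
```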
101,817 | 4,141,484,401 | IssuesEvent | 2016-06-14 05:43:49 | Apollo-Community/ApolloStation | https://api.github.com/repos/Apollo-Community/ApolloStation | closed | Observation Screens don't work. | 0.3 mapping oversight priority: medium | This includes the observ. screen of the tribunal, and the toxin launching room. There may be more. | 1.0 | priority | 1 |
542,401 | 15,859,604,116 | IssuesEvent | 2021-04-08 08:12:55 | mapbox/mapbox-navigation-ios | https://api.github.com/repos/mapbox/mapbox-navigation-ios | closed | High CPU when camera is not moving | - bug - performance High Priority Injection Work archived | Steps to reproduce:
1. Run example-swift app
1. Turn off navigation simulation
1. Set the simulator location to `Apple` (so it does not move)
1. Start navigation
### Expected:
CPU is near 0% because the camera is not moving.
### Actual:
CPU is near 100%.
If this line https://github.com/mapbox/mapbox-navigation-ios/blob/5ff994a1c6445cef690d10f6758724763a5e5510/MapboxNavigation/NavigationMapView.swift#L336
is changed simply to `setCamera(newCamera, animate: true)`, CPU drops to 0%. If any setCamera function involves a completion handler, CPU jumps to 100%
/cc @mapbox/navigation-ios @mapbox/ios | 1.0 | priority | 1 |
142,196 | 13,017,880,130 | IssuesEvent | 2020-07-26 14:41:47 | semi-technologies/semi-website | https://api.github.com/repos/semi-technologies/semi-website | opened | Add contributor guide | documentation | The guides are available [here](https://github.com/semi-technologies/semi-website/tree/feature/contributor-guide/_documentation/weaviate/current/contributor-guide); adding MD files will add them to the menu as well.
When it's done, you can create a PR (or already open one). | 1.0 | non_priority | 0 |
102,164 | 4,151,345,950 | IssuesEvent | 2016-06-15 20:18:39 | w3c/webpayments | https://api.github.com/repos/w3c/webpayments | opened | Should the browser pass user data it has collected (email etc) to the payment app? | Priority: Medium Proposal: Payment Apps question | Migrated from https://github.com/w3c/browser-payment-api/issues/194
@mattsaxon said:
>Now that the payment options field can include the request of the email and phone number information, we need to consider how that information might be made securely available to payment applications such that for payment applications that need this information, the experience can be optimised.
>Example payment methods that have requirements for this data are UnionPay. This method can ask for a one-time password which is delivered via SMS and hence phone number (usually mobile/cell) may be needed.
>Other examples are a large number of payment methods that use the email address as the payer unique identifier (e.g. PayPal).
>Whilst it is true that the contact information may be different from the identity information (multiple phone numbers or email addresses in use by the Payee), we should consider how, when they are the same (the usual case, I would assert), the information can be shared with the appropriate consent.
>At the moment, whilst it is not actually documented in the specification text, I believe it is the WGs' expectation that the Payment Options field is only available to the Mediator; this prevents the payment apps from offering these contact details as pre-population options data that they may need.
@rsolomakhin said:
>I don't think that the user agent should provide user's email address to the payment app. The payment app should ask for an email address when the user signs up with or logs into the payment app. This authentication step is necessary anyway, because a payment app will need to access user's payment information in some way. For example, connecting to user's bank account or entering user's credit card information.
@ianbjacobs said:
> +1 to distinguishing the data that the user will provide to different parties for different reasons:
- The merchant for that relationship. (This could further be broken down, e.g.,
shipping agent, but I wouldn't advocate for that in v1.)
- The payment app for the relationship with the payment app provider.
> @mattsaxon, it seems to me that if I install a China UnionPay app, I will give it my email address when I initialize it.
@burdges said:
>Any information provided by the browser like this would violate security properties of some payment apps. | 1.0 | priority | 1 |
347,821 | 10,434,754,548 | IssuesEvent | 2019-09-17 15:49:16 | prysmaticlabs/prysm | https://api.github.com/repos/prysmaticlabs/prysm | closed | Multinode is currently Broken | Bug Priority: Critical | Blocks have different signing roots between peers. If one peer broadcasts a block and that block has a root of `0x123` for that peer, the other peer would get the same block and determine the root as `0x456`.
```
[2019-09-17 16:01:20] ERROR sync: Failed to handle p2p pubsub error=signature did not verify
could not verify block signature
github.com/prysmaticlabs/prysm/beacon-chain/core/blocks.ProcessBlockHeader
beacon-chain/core/blocks/block_operations.go:193
github.com/prysmaticlabs/prysm/beacon-chain/core/state.ProcessBlock
``` | 1.0 | Multinode is currently Broken - Blocks have different signing roots between peers. If one peer broadcasts a block and that block has a root of `0x123` for that peer, the other peer would get the same block and determine the root as `0x456`.
```
[2019-09-17 16:01:20] ERROR sync: Failed to handle p2p pubsub error=signature did not verify
could not verify block signature
github.com/prysmaticlabs/prysm/beacon-chain/core/blocks.ProcessBlockHeader
beacon-chain/core/blocks/block_operations.go:193
github.com/prysmaticlabs/prysm/beacon-chain/core/state.ProcessBlock
``` | priority | multinode is currently broken blocks have different signing roots between peers if one peer broadcasts a block and that block has a root of for that peer the other peer would get the same block and determine the root as error sync failed to handle pubsub error signature did not verify could not verify block signature github com prysmaticlabs prysm beacon chain core blocks processblockheader beacon chain core blocks block operations go github com prysmaticlabs prysm beacon chain core state processblock | 1 |
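The failure mode in the row above — two peers deriving different signing roots for the same block, so signature verification fails — can be sketched in a few lines. This is a hedged, hypothetical illustration, not Prysm's actual SSZ/BLS code; every name here is invented:

```python
import hashlib
import json

def signing_root(block: dict) -> bytes:
    # Toy stand-in for a hash-tree-root: hash one canonical serialization.
    # If two peers serialize the "same" block differently, the roots differ.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).digest()

def toy_sign(root: bytes, key: bytes) -> bytes:
    # Hypothetical placeholder for a real signature over the signing root.
    return hashlib.sha256(key + root).digest()

def toy_verify(root: bytes, sig: bytes, key: bytes) -> bool:
    return toy_sign(root, key) == sig

key = b"validator-secret"
block = {"slot": 7, "parent": "0xabc"}

# The broadcasting peer signs the root it computed.
sig = toy_sign(signing_root(block), key)

# A receiving peer that derives a different serialization of the same block
# (simulated here with an extra field) computes a different root, so
# verification fails -- the equivalent of "signature did not verify".
block_as_seen_by_peer_b = {"slot": 7, "parent": "0xabc", "extra": None}

print(toy_verify(signing_root(block), sig, key))                    # True
print(toy_verify(signing_root(block_as_seen_by_peer_b), sig, key))  # False
```

The point of the sketch is that the signature itself is fine; it is the locally recomputed root that disagrees, which matches the `ProcessBlockHeader` error in the log.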
10,210 | 8,851,399,561 | IssuesEvent | 2019-01-08 15:40:41 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | image_picker: add multiple images picking | p: first party p: image_picker p: self service plugin severe: new feature would be a good package | Hi guys.
One more option as a valuable update in image_picker plugin.
It would be cool to have a multiple image picking option.
For now, on long press we have selection of an image and a selected count in the AppBar. This looks like default behavior, but multiple image picking is disabled.
Please let me know if it is possible.
Thanks | 1.0 | image_picker: add multiple images picking - Hi guys.
One more option as a valuable update in image_picker plugin.
It would be cool to have a multiple image picking option.
For now, on long press we have selection of an image and a selected count in the AppBar. This looks like default behavior, but multiple image picking is disabled.
Please let me know if it is possible.
Thanks | non_priority | image picker add multiple images picking hi guys one more option as a valuable update in image picker plugin it would be cool to have multiple image picking option for now on long press we have selection of image and selected count in appbar looks like default behavior but multiple image picking disabled please let me know if it is possible thanks | 0 |
7,471 | 3,090,086,542 | IssuesEvent | 2015-08-26 02:39:07 | thoughtbot/neat | https://api.github.com/repos/thoughtbot/neat | closed | Remove span-column(12) from table-grid example | documentation | `@include span-columns(12);` shouldn't have a right-margin. We shouldn't have to include omega every time we want to use a full width column. I would have made the update myself and submitted a PR but I couldn't figure out how to update the code accordingly.
I'm guessing this hasn't been an issue before now because the margin doesn't break anything; however, it does flow outside of the outer container.

| 1.0 | Remove span-column(12) from table-grid example - `@include span-columns(12);` shouldn't have a right-margin. We shouldn't have to include omega every time we want to use a full width column. I would have made the update myself and submitted a PR but I couldn't figure out how to update the code accordingly.
I'm guessing this hasn't been an issue before now because the margin doesn't break anything, however it does flow outside of the outer container.

| non_priority | remove span column from table grid example include span columns shouldn t have a right margin we shouldn t have to include omega every time we want to use a full width column i would have made the update myself and submitted a pr but i couldn t figure out how to update the code accordingly i m guessing this hasn t been an issue before now because the margin doesn t break anything however it does flow outside of the outer container | 0 |
407,693 | 11,935,909,189 | IssuesEvent | 2020-04-02 09:24:05 | deora-earth/tealgarden | https://api.github.com/repos/deora-earth/tealgarden | opened | Research about community interaction/ integration | 02 Medium Priority | <!--
# Simple Summary
This policy allows writing out rewards for completing required tasks. Completed tasks are paid by the deora council to the claiming member.
# How to create a new bounty?
1. To start you'll have to fill out the bounty form below.
- If the bounty spans across multiple repositories, consider splitting it into smaller per-repo bounties if possible.
- If the bounty is larger than M, then the best known expert in the bounty matter should be consulted and included in an
"Expert" field in the bounty description.
2. Communicate the bounty to the organisation by submitting the following form:
https://forms.gle/STSNjTBGygNtTUwLA
- The bounty will get published on the deora communication channel.
# Bounty sizes
XS / 50 to 200 / DAI
S / 200 to 350 / DAI
M / 350 to 550 / DAI
L / 550 to 900 / DAI
XL / 900 to 1400 / DAI
You can specify the range individually under #Roles
# Pair programming
If 2 people claim the bounty together, the payout increases by 1.5x.
# Bounty Challenge
Once a bounty is assigned, the worker is asked to start working immediately on the issue.
If the worker feels blocked in execution, he/she has to communicate the tensions to the gardener.
Only if tensions are not reported and the bounty gets no further attention can anyone challenge the bounty or take it over.
Bounties should be delivered on time, even if work is left to be performed. Leftover work can be tackled by submitting a new bounty with support from the organisation.
Bounty forking: a bounty whose complexity was undersized can be forked out via a new bounty submission.
**START DESCRIBING YOUR BOUNTY HERE:**
-->
# Bounty
We need more ways for users to interact with each other on and about teal garden. Someone should do research about **tools/ways** we can integrate into the site to achieve our goal of a thriving community.
## Scope
- find a solution for "the community problem"
- propose a way or a tool for having community interactions on the page (or as close to the page as possible)
## Deliverables
text with a proposal
## Gain for the project
thriving community
## Roles
bounty gardener: 10% / share
bounty worker: name / share
bounty reviewer: name / share
| 1.0 | Research about community interaction/ integration - <!--
# Simple Summary
This policy allows writing out rewards for completing required tasks. Completed tasks are paid by the deora council to the claiming member.
# How to create a new bounty?
1. To start you'll have to fill out the bounty form below.
- If the bounty spans across multiple repositories, consider splitting it into smaller per-repo bounties if possible.
- If the bounty is larger than M, then the best known expert in the bounty matter should be consulted and included in an
"Expert" field in the bounty description.
2. Communicate the bounty to the organisation by submitting the following form:
https://forms.gle/STSNjTBGygNtTUwLA
- The bounty will get published on the deora communication channel.
# Bounty sizes
XS / 50 to 200 / DAI
S / 200 to 350 / DAI
M / 350 to 550 / DAI
L / 550 to 900 / DAI
XL / 900 to 1400 / DAI
You can specify the range individually under #Roles
# Pair programming
If 2 people claim the bounty together, the payout increases by 1.5x.
# Bounty Challenge
Once a bounty is assigned, the worker is asked to start working immediately on the issue.
If the worker feels blocked in execution, he/she has to communicate the tensions to the gardener.
Only if tensions are not reported and the bounty gets no further attention can anyone challenge the bounty or take it over.
Bounties should be delivered on time, even if work is left to be performed. Leftover work can be tackled by submitting a new bounty with support from the organisation.
Bounty forking: a bounty whose complexity was undersized can be forked out via a new bounty submission.
**START DESCRIBING YOUR BOUNTY HERE:**
-->
# Bounty
We need more ways for users to interact with each other on and about teal garden. Someone should do research about **tools/ways** we can integrate into the site to achieve our goal of a thriving community.
## Scope
- find a solution for "the community problem"
- propose a way or a tool for having community interactions on the page (or as close to the page as possible)
## Deliverables
text with a proposal
## Gain for the project
thriving community
## Roles
bounty gardener: 10% / share
bounty worker: name / share
bounty reviewer: name / share
| priority | research about community interaction integration simple summary this policy allows to write out rewards to complete required tasks completed tasks are payed by the deora council to the claiming member how to create a new bounty to start you ll have to fill out the bounty form below if the bounty spans across multiple repositories consider splitting it in a smaller per repo bounties if possible if the bounty is larger than m then the best known expert in the bounty matter should be consulted and included in an expert field in the bounty description communicate the bounty to the organisation by submitting the following form the bounty will get published on the deora communication channel bounty sizes xs to dai s to dai m to dai l to dai xl to dai you can specify the range individually under roles pair programming if people claim the bounty together the payout increases by bounty challenge once a bounty is assigned the worker is asked to start working immediately on the issue if the worker feels blocked in execution he she has to communicate the tensions to the gardener only if tensions are not reported and the bounty get s no further attention anyone can challenge the bounty or takeover bounties should be delivered within time even if work is left to be performed leftover work can be tackled by submitting a new bounty with support by the organisation bounty forking complexity of bounties that has been undersized can be forked out by a new bounty submission start describing your bounty here bounty we need more ways how users can interact with each other on and about teal garden someone should do research about tools ways on how we can integrate something to the side to achive our goal of a thriving community scope find a solution for the community problem propose a way or a tool on how we can have community interactions on the page or as close to the page as possible deliverables text with a proposal gain for the project thriving community roles bounty 
gardener share bounty worker name share bounty reviewer name share | 1 |
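The size table and the 1.5x pair-programming rule in the bounty template above amount to a small lookup-and-multiply. A minimal sketch, assuming a DAI payout pinned inside the chosen size range (the table values come from the template; the function and constant names are mine, not part of the policy):

```python
# Bounty ranges from the template, in DAI; the 1.5x multiplier applies
# when two people claim a bounty together ("pair programming").
BOUNTY_RANGES_DAI = {
    "XS": (50, 200),
    "S": (200, 350),
    "M": (350, 550),
    "L": (550, 900),
    "XL": (900, 1400),
}

def payout(size: str, amount: int, pair: bool = False) -> float:
    """Return the payout for a bounty of the given size, validating the range."""
    lo, hi = BOUNTY_RANGES_DAI[size]
    if not lo <= amount <= hi:
        raise ValueError(f"{amount} DAI is outside the {size} range {lo}-{hi}")
    return amount * 1.5 if pair else float(amount)

print(payout("M", 400))             # 400.0
print(payout("M", 400, pair=True))  # 600.0
```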
179,587 | 21,573,344,315 | IssuesEvent | 2022-05-02 11:00:26 | tabac-ws/JAVA-Demo-2022 | https://api.github.com/repos/tabac-ws/JAVA-Demo-2022 | opened | CVE-2017-3586 (Medium) detected in mysql-connector-java-5.1.25.jar | security vulnerability | ## CVE-2017-3586 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /sitory/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tabac-ws/JAVA-Demo-2022/commit/cae4074695a254f5a29044a8bdd7f1e05c684e89">cae4074695a254f5a29044a8bdd7f1e05c684e89</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 6.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586>CVE-2017-3586</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1444406">https://bugzilla.redhat.com/show_bug.cgi?id=1444406</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.42</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.25","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.25","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.42","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2017-3586","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily \"exploitable\" vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 6.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586","cvss3Severity":"medium","cvss3Score":"6.4","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Changed","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2017-3586 (Medium) detected in mysql-connector-java-5.1.25.jar - ## CVE-2017-3586 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /sitory/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tabac-ws/JAVA-Demo-2022/commit/cae4074695a254f5a29044a8bdd7f1e05c684e89">cae4074695a254f5a29044a8bdd7f1e05c684e89</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 6.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586>CVE-2017-3586</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1444406">https://bugzilla.redhat.com/show_bug.cgi?id=1444406</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.42</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.25","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.25","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.42","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2017-3586","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily \"exploitable\" vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 6.4 (Confidentiality and Integrity impacts). 
CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586","cvss3Severity":"medium","cvss3Score":"6.4","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Changed","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in mysql connector java jar cve medium severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library sitory mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch main vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data cvss base score confidentiality and integrity impacts cvss vector cvss av n ac l pr l ui n s c c l i l a n publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true 
ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree mysql mysql connector java isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data cvss base score confidentiality and integrity impacts cvss vector cvss av n ac l pr l ui n s c c l i l a n vulnerabilityurl | 0 |
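The suggested fix in the CVE row above is a version bump (5.1.42 or later). As an illustration of the kind of check such a remediation bot performs, here is a hedged sketch that parses a Maven pom and flags an older mysql-connector-java pin; the pom fragment and function names are invented for the example, not taken from the scanned project:

```python
import xml.etree.ElementTree as ET

POM_NS = "{http://maven.apache.org/POM/4.0.0}"

# Illustrative pom fragment pinning the vulnerable version from the row above.
POM = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <dependencies>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.25</version>
    </dependency>
  </dependencies>
</project>"""

def version_tuple(v: str):
    # Compare dotted versions numerically, e.g. "5.1.25" -> (5, 1, 25).
    return tuple(int(p) for p in v.split("."))

def vulnerable_mysql_connector(pom_xml: str, fixed: str = "5.1.42") -> bool:
    """True if the pom pins mysql-connector-java below the fixed version."""
    root = ET.fromstring(pom_xml)
    for dep in root.iter(f"{POM_NS}dependency"):
        gid = dep.findtext(f"{POM_NS}groupId")
        aid = dep.findtext(f"{POM_NS}artifactId")
        ver = dep.findtext(f"{POM_NS}version")
        if gid == "mysql" and aid == "mysql-connector-java" and ver:
            return version_tuple(ver) < version_tuple(fixed)
    return False

print(vulnerable_mysql_connector(POM))  # True: 5.1.25 is below the 5.1.42 fix
```

Bumping the `<version>` element to 5.1.42, as the remediation metadata suggests, makes the check pass.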
2,902 | 4,055,170,829 | IssuesEvent | 2016-05-24 14:44:06 | modxcms/revolution | https://api.github.com/repos/modxcms/revolution | closed | Require and force a back-end user to change password first | area-security feature | bertoost created Redmine issue ID 2748
This one would be very cool. We always create a customer account with a simple password for that user. It would be very nice if we could say that the user needs to change the password when first logging in. Until they do so, the manager is blocked for that user.
If this one exists; I can't find it. | True | Require and force a back-end user to change password first - bertoost created Redmine issue ID 2748
This one would be very cool. We always create a customer account with a simple password for that user. It would be very nice if we could say that the user needs to change the password when first logging in. Until they do so, the manager is blocked for that user.
If this one exists; I can't find it. | non_priority | require and force a back end user to change password first bertoost created redmine issue id this one should be very cool we always create a customer account and we have a simple password for that user it will be very nice if we could say that the user needs to change the password at first when logging in untill they don t do it the manager is blocked for that user if this one exists i can t find it | 0 |
37,501 | 8,407,650,346 | IssuesEvent | 2018-10-11 21:39:59 | riksanyal/et | https://api.github.com/repos/riksanyal/et | closed | Superfluous error message (Trac #54) | Migrated from Trac SimFactory defect mthomas | I submitted a job for a simulation which already existed. This led to the following output, where there is an error message about "mkdir". Even though there is an error, simfactory proceeds (it probably shouldn't). In this case, however, I assume that the error is benign because the directory existed before, and thus the error message should not be shown at all.
$ ./simfactory/sim submit par/static_tov.par --procs=24 --walltime=4:0:0 --num-threads=4
sim: manage and submit cactus jobs
defs: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.ini
defs.local: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.local.ini
Cactus Directory: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg
SimEnvironment.COMMAND: submit
Executing command: submit
Parfile: par/static_tov.par
[log] Assigned restart_id of: 0001
[log] Found the following restart_ids: [0]
[log] Maximum restart id determined to be: 0000
[log] Determined submit restart id: 1
writing to internalDir: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY
writing to: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript
mkdir: cannot create directory `/lustre/scratch/@USER@': Permission denied
Executing submit command: /opt/torque/2.3.5/bin/qsub /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript
Submit finished, job id is 788131.nid00016
Migrated from https://trac.einsteintoolkit.org/ticket/54
```json
{
"status": "closed",
"changetime": "2010-10-20T16:14:27",
"description": "I submitted a job for a simulation which already existed. This lead to the following output, where there is an error message about \"mkdir\". Even though there is an error, simfactory proceeds (it probably shouldn't). In this case, however, I assume that the error is benign because the directory existed before, and thus the error message should not be shown at all.\n\n\n\n$ ./simfactory/sim submit par/static_tov.par --procs=24 --walltime=4:0:0 --num-threads=4\nsim: manage and submit cactus jobs\n\ndefs: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.ini\ndefs.local: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.local.ini\n\nCactus Directory: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg\nSimEnvironment.COMMAND: submit\nExecuting command: submit\nParfile: par/static_tov.par\n[log] Assigned restart_id of: 0001\n[log] Found the following restart_ids: [0]\n[log] Maximum restart id determined to be: 0000\n[log] Determined submit restart id: 1 \nwriting to internalDir: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY\nwriting to: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript\nmkdir: cannot create directory `/lustre/scratch/@USER@': Permission denied\nExecuting submit command: /opt/torque/2.3.5/bin/qsub /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript\nSubmit finished, job id is 788131.nid00016\n",
"reporter": "eschnett",
"cc": "",
"resolution": "fixed",
"_ts": "1287591267709359",
"component": "SimFactory",
"summary": "Superfluous error message",
"priority": "minor",
"keywords": "",
"version": "",
"time": "2010-10-19T02:40:57",
"milestone": "",
"owner": "mthomas",
"type": "defect"
}
```
| 1.0 | Superfluous error message (Trac #54) - I submitted a job for a simulation which already existed. This led to the following output, where there is an error message about "mkdir". Even though there is an error, simfactory proceeds (it probably shouldn't). In this case, however, I assume that the error is benign because the directory existed before, and thus the error message should not be shown at all.
$ ./simfactory/sim submit par/static_tov.par --procs=24 --walltime=4:0:0 --num-threads=4
sim: manage and submit cactus jobs
defs: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.ini
defs.local: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.local.ini
Cactus Directory: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg
SimEnvironment.COMMAND: submit
Executing command: submit
Parfile: par/static_tov.par
[log] Assigned restart_id of: 0001
[log] Found the following restart_ids: [0]
[log] Maximum restart id determined to be: 0000
[log] Determined submit restart id: 1
writing to internalDir: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY
writing to: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript
mkdir: cannot create directory `/lustre/scratch/@USER@': Permission denied
Executing submit command: /opt/torque/2.3.5/bin/qsub /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript
Submit finished, job id is 788131.nid00016
Migrated from https://trac.einsteintoolkit.org/ticket/54
```json
{
"status": "closed",
"changetime": "2010-10-20T16:14:27",
"description": "I submitted a job for a simulation which already existed. This lead to the following output, where there is an error message about \"mkdir\". Even though there is an error, simfactory proceeds (it probably shouldn't). In this case, however, I assume that the error is benign because the directory existed before, and thus the error message should not be shown at all.\n\n\n\n$ ./simfactory/sim submit par/static_tov.par --procs=24 --walltime=4:0:0 --num-threads=4\nsim: manage and submit cactus jobs\n\ndefs: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.ini\ndefs.local: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.local.ini\n\nCactus Directory: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg\nSimEnvironment.COMMAND: submit\nExecuting command: submit\nParfile: par/static_tov.par\n[log] Assigned restart_id of: 0001\n[log] Found the following restart_ids: [0]\n[log] Maximum restart id determined to be: 0000\n[log] Determined submit restart id: 1 \nwriting to internalDir: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY\nwriting to: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript\nmkdir: cannot create directory `/lustre/scratch/@USER@': Permission denied\nExecuting submit command: /opt/torque/2.3.5/bin/qsub /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript\nSubmit finished, job id is 788131.nid00016\n",
"reporter": "eschnett",
"cc": "",
"resolution": "fixed",
"_ts": "1287591267709359",
"component": "SimFactory",
"summary": "Superfluous error message",
"priority": "minor",
"keywords": "",
"version": "",
"time": "2010-10-19T02:40:57",
"milestone": "",
"owner": "mthomas",
"type": "defect"
}
```
| non_priority | superfluous error message trac i submitted a job for a simulation which already existed this lead to the following output where there is an error message about mkdir even though there is an error simfactory proceeds it probably shouldn t in this case however i assume that the error is benign because the directory existed before and thus the error message should not be shown at all simfactory sim submit par static tov par procs walltime num threads sim manage and submit cactus jobs defs nics a proj cactus eschnett einsteintoolkit hg simfactory etc defs ini defs local nics a proj cactus eschnett einsteintoolkit hg simfactory etc defs local ini cactus directory nics a proj cactus eschnett einsteintoolkit hg simenvironment command submit executing command submit parfile par static tov par assigned restart id of found the following restart ids maximum restart id determined to be determined submit restart id writing to internaldir lustre scratch eschnett simulations static tov output simfactory writing to lustre scratch eschnett simulations static tov output simfactory preparedsubmitscript mkdir cannot create directory lustre scratch user permission denied executing submit command opt torque bin qsub lustre scratch eschnett simulations static tov output simfactory preparedsubmitscript submit finished job id is migrated from json status closed changetime description i submitted a job for a simulation which already existed this lead to the following output where there is an error message about mkdir even though there is an error simfactory proceeds it probably shouldn t in this case however i assume that the error is benign because the directory existed before and thus the error message should not be shown at all n n n n simfactory sim submit par static tov par procs walltime num threads nsim manage and submit cactus jobs n ndefs nics a proj cactus eschnett einsteintoolkit hg simfactory etc defs ini ndefs local nics a proj cactus eschnett einsteintoolkit 
hg simfactory etc defs local ini n ncactus directory nics a proj cactus eschnett einsteintoolkit hg nsimenvironment command submit nexecuting command submit nparfile par static tov par n assigned restart id of n found the following restart ids n maximum restart id determined to be n determined submit restart id nwriting to internaldir lustre scratch eschnett simulations static tov output simfactory nwriting to lustre scratch eschnett simulations static tov output simfactory preparedsubmitscript nmkdir cannot create directory lustre scratch user permission denied nexecuting submit command opt torque bin qsub lustre scratch eschnett simulations static tov output simfactory preparedsubmitscript nsubmit finished job id is n reporter eschnett cc resolution fixed ts component simfactory summary superfluous error message priority minor keywords version time milestone owner mthomas type defect | 0 |
48,949 | 13,185,168,892 | IssuesEvent | 2020-08-12 20:51:32 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | icetray development email list (Trac #512) | Incomplete Migration Migrated from Trac defect tools/ports | <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/512
, reported by blaufuss and owned by cgils</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2009-01-22T18:45:35",
"description": "create a email list for icetray development.\n\nicetray-dev or something like that\n\nLet's use umdgrb's mailman and avoid the collective at UW",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1232649935000000",
"component": "tools/ports",
"summary": "icetray development email list",
"priority": "normal",
"keywords": "",
"time": "2009-01-09T21:14:57",
"milestone": "",
"owner": "cgils",
"type": "defect"
}
```
</p>
</details>
| 1.0 | icetray development email list (Trac #512) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/512
, reported by blaufuss and owned by cgils</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2009-01-22T18:45:35",
"description": "create a email list for icetray development.\n\nicetray-dev or something like that\n\nLet's use umdgrb's mailman and avoid the collective at UW",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1232649935000000",
"component": "tools/ports",
"summary": "icetray development email list",
"priority": "normal",
"keywords": "",
"time": "2009-01-09T21:14:57",
"milestone": "",
"owner": "cgils",
"type": "defect"
}
```
</p>
</details>
| non_priority | icetray development email list trac migrated from reported by blaufuss and owned by cgils json status closed changetime description create a email list for icetray development n nicetray dev or something like that n nlet s use umdgrb s mailman and avoid the collective at uw reporter blaufuss cc resolution fixed ts component tools ports summary icetray development email list priority normal keywords time milestone owner cgils type defect | 0 |
353,818 | 10,559,289,496 | IssuesEvent | 2019-10-04 11:11:20 | nationalarchives/front-end-development-guide | https://api.github.com/repos/nationalarchives/front-end-development-guide | closed | Update development guide to explain decorative images should be CSS backgrounds, not img tags | high-priority | The DAC report has identified an instance where a decorative image has been implemented using an `<img>` tag. We should be explicitly about this in the front-end-development-guide so that everyone (including new starters) and 3rd parties are aware. | 1.0 | Update development guide to explain decorative images should be CSS backgrounds, not img tags - The DAC report has identified an instance where a decorative image has been implemented using an `<img>` tag. We should be explicitly about this in the front-end-development-guide so that everyone (including new starters) and 3rd parties are aware. | priority | update development guide to explain decorative images should be css backgrounds not img tags the dac report has identified an instance where a decorative image has been implemented using an tag we should be explicitly about this in the front end development guide so that everyone including new starters and parties are aware | 1 |
299,557 | 22,613,164,586 | IssuesEvent | 2022-06-29 19:07:04 | SandraScherer/EntertainmentInfothek | https://api.github.com/repos/SandraScherer/EntertainmentInfothek | opened | Introduce factory to all Entry-derived classes | documentation enhancement program | - [ ] Update implementation
- [ ] Update tests
- [ ] Update documentation
- [ ] EntertainmentInfothek_EntertainmentDB.dll.vpp
- [ ] Doxygen | 1.0 | Introduce factory to all Entry-derived classes - - [ ] Update implementation
- [ ] Update tests
- [ ] Update documentation
- [ ] EntertainmentInfothek_EntertainmentDB.dll.vpp
- [ ] Doxygen | non_priority | introduce factory to all entry derived classes update implementation update tests update documentation entertainmentinfothek entertainmentdb dll vpp doxygen | 0 |
759,768 | 26,609,879,671 | IssuesEvent | 2023-01-23 22:50:25 | redhat-developer/odo | https://api.github.com/repos/redhat-developer/odo | closed | `odo delete component --running-in` | priority/Medium kind/user-story | /kind user-story
## User Story
- As an odo user
- I want to delete just `dev` resources from the cluster and keep `deploy` or just `deploy` resources and keep `dev` resources.
- So I can deploy/undeploy applications without touching `dev` instance or delete just `dev` instance without affecting `deployed` app. This is useful in situations where for example I deployed an application (using `odo deploy`) and executed `odo dev`, but `odo dev` crashed and left the "dev" resources on the cluster.
## Acceptance Criteria
## Acceptance Criteria
- [ ] `odo delete component --running-in dev` should delete all resources from the cluster related to the current component **deployed by `odo dev`**
- [ ] `odo delete component --running-in deploy` should delete all resources from the cluster related to the current component **deployed by `odo deploy`**
/kind user-story
/priority medium
| 1.0 | `odo delete component --running-in` - /kind user-story
## User Story
- As an odo user
- I want to delete just `dev` resources from the cluster and keep `deploy` or just `deploy` resources and keep `dev` resources.
- So I can deploy/undeploy applications without touching `dev` instance or delete just `dev` instance without affecting `deployed` app. This is useful in situations where for example I deployed an application (using `odo deploy`) and executed `odo dev`, but `odo dev` crashed and left the "dev" resources on the cluster.
## Acceptance Criteria
- [ ] `odo delete component` should delete all resources from the cluster related to the current component (both deployed by `odo dev` and `odo deploy`
- [ ] `odo delete component --running-in dev` should delete all resources from the cluster related to the current component **deployed by `odo dev`**
- [ ] `odo delete component --running-in deploy` should delete all resources from the cluster related to the current component **deployed by `odo deploy`**
/kind user-story
/priority medium
| priority | odo delete component running in kind user story user story as an odo user i want to delete just dev resources from the cluster and keep deploy or just deploy resources and keep dev resources so i can deploy undeploy applications without touching dev instance or delete just dev instance without affecting deployed app this is useful in situations where for example i deployed an application using odo deploy and executed odo dev but odo dev crashed and left the dev resources on the cluster acceptance criteria acceptance criteria odo delete component should delete all resources from the cluster related to the current component both deployed by odo dev and odo deploy odo delete component running in dev should delete all resources from the cluster related to the current component deployed by odo dev odo delete component running in deploy should delete all resources from the cluster related to the current component deployed by odo deploy kind user story priority medium | 1 |
49,363 | 10,341,918,041 | IssuesEvent | 2019-09-04 04:20:30 | ssm-deepcove/deepcove-website | https://api.github.com/repos/ssm-deepcove/deepcove-website | opened | Override GetHashCode on CmsButton and TextComponent | improve code | Since we have overriden the Equals() method on these classes, we should also override GetHashCode(), so that the classes behave appropriately in hash tables.
Putting this as low priority as I do not believe that it will likely affect us, but is good practice. | 1.0 | Override GetHashCode on CmsButton and TextComponent - Since we have overriden the Equals() method on these classes, we should also override GetHashCode(), so that the classes behave appropriately in hash tables.
Putting this as low priority as I do not believe that it will likely affect us, but is good practice. | non_priority | override gethashcode on cmsbutton and textcomponent since we have overriden the equals method on these classes we should also override gethashcode so that the classes behave appropriately in hash tables putting this as low priority as i do not believe that it will likely affect us but is good practice | 0 |
354,731 | 10,571,537,843 | IssuesEvent | 2019-10-07 07:25:28 | LightXEthan/uwahs-campus-map | https://api.github.com/repos/LightXEthan/uwahs-campus-map | closed | Improve password strength of Firebase Authentication | admin-front enhancement low priority | - firebase default password requirement is 6 characters min.
- research if password strength is configurable in firebase or implement our own. | 1.0 | Improve password strength of Firebase Authentication - - firebase default password requirement is 6 characters min.
- research if password strength is configurable in firebase or implement our own. | priority | improve password strength of firebase authentication firebase default password requirement is characters min research if password strength is configurable in firebase or implement our own | 1 |
615,584 | 19,268,980,137 | IssuesEvent | 2021-12-10 01:30:22 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | Feature Request - make "show number of entries" consistent throughout the working session in bulkloader browse-and-edit | Priority-Normal (Not urgent) Enhancement | Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html
**Is your feature request related to a problem? Please describe.**
In Bulkloader browse-and-edit and when working with accessions where n>10 and using SQL or other tools, on every page reload, number of shown entries reverts to 10 (default). this is very cumbersome for many accessions where 10<n<100.
**Describe what you're trying to accomplish**
Once changed to 50 or 100, or other value, number of shown entries should stick throughout the session (or even profile setting?)
**Describe the solution you'd like**
Number of entries should default to 10 or 25 but be consistent once changed, similar to customizing columns shown in SearchResults.cfm
**Describe alternatives you've considered**
An alternative would be to set default to 50, a number that's still small but avoids scrolling through pages on small accessions.
Thanks!
| 1.0 | Feature Request - make "show number of entries" consistent throughout the working session in bulkloader browse-and-edit - Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html
**Is your feature request related to a problem? Please describe.**
In Bulkloader browse-and-edit and when working with accessions where n>10 and using SQL or other tools, on every page reload, number of shown entries reverts to 10 (default). this is very cumbersome for many accessions where 10<n<100.
**Describe what you're trying to accomplish**
Once changed to 50 or 100, or other value, number of shown entries should stick throughout the session (or even profile setting?)
**Describe the solution you'd like**
Number of entries should default to 10 or 25 but be consistent once changed, similar to customizing columns shown in SearchResults.cfm
**Describe alternatives you've considered**
An alternative would be to set default to 50, a number that's still small but avoids scrolling through pages on small accessions.
Thanks!
| priority | feature request make show number of entries consistent throughout the working session in bulkloader browse and edit issue documentation is is your feature request related to a problem please describe in bulkloader browse and edit and when working with accessions where n and using sql or other tools on every page reload number of shown entries reverts to default this is very cumbersome for many accessions where n describe what you re trying to accomplish once changed to or or other value number of shown entries should stick throughout the session or even profile setting describe the solution you d like number of entries should default to or but be consistent once changed similar to customizing columns shown in searchresults cfm describe alternatives you ve considered an alternative would be to set default to a number that s still small but avoids scrolling through pages on small accessions thanks | 1 |
67,821 | 28,056,588,911 | IssuesEvent | 2023-03-29 09:46:24 | elastic/integrations | https://api.github.com/repos/elastic/integrations | opened | [RabbitMQ] Support metrics like message publish and deliver rate for queues | Team:Service-Integrations | Currently both beats and integrations module for RabbitMQ doesn't support all the metrics for queue data stream.
There is an ask for metrics like rabbitmq.queue.messages.publish.rate and rabbitmq.queue.messages.deliver.rate in Discuss forum.
It would be good to consider any such metrics which is currently missing and can be added to support user requirements better.
Discuss Thread: https://discuss.elastic.co/t/how-to-monitor-publish-and-delivery-rate-of-messages-in-rabbitmq-queues-using-metricbeat/328354 | 1.0 | [RabbitMQ] Support metrics like message publish and deliver rate for queues - Currently both beats and integrations module for RabbitMQ doesn't support all the metrics for queue data stream.
There is an ask for metrics like rabbitmq.queue.messages.publish.rate and rabbitmq.queue.messages.deliver.rate in Discuss forum.
It would be good to consider any such metrics which is currently missing and can be added to support user requirements better.
Discuss Thread: https://discuss.elastic.co/t/how-to-monitor-publish-and-delivery-rate-of-messages-in-rabbitmq-queues-using-metricbeat/328354 | non_priority | support metrics like message publish and deliver rate for queues currently both beats and integrations module for rabbitmq doesn t support all the metrics for queue data stream there is an ask for metrics like rabbitmq queue messages publish rate and rabbitmq queue messages deliver rate in discuss forum it would be good to consider any such metrics which is currently missing and can be added to support user requirements better discuss thread | 0 |
543,326 | 15,879,976,872 | IssuesEvent | 2021-04-09 13:09:37 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Version-specific documentation links broken | component:ui priority:medium state:needs_devel type:bug | ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
- UI
##### SUMMARY
When clicking the button Key from Search you'll get a series of informations, the link that send to Ansible Tower is not aligned with the AWX version.
##### ENVIRONMENT
<!--
* AWX version: X.Y.Z
* AWX install method: openshift, minishift, docker on linux, docker for mac, boot2docker
* Ansible version: X.Y.Z
* Operating System:
* Web Browser:
-->
* Docker on Linux
* All browsers
##### STEPS TO REPRODUCE
Within search box, pressing the button Key you will be presented with options and with a link that redirect you to Ansible Tower documentation.
##### EXPECTED RESULTS
<!-- For bug reports, what did you expect to happen when running the steps
above? -->
http://docs.ansible.com/ansible-tower/latest/html/userguide/search_sort.html
##### ACTUAL RESULTS
<!-- For bug reports, what actually happened? -->
http://docs.ansible.com/ansible-tower/1.0.1.0/html/userguide/search_sort.html
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
Even tho is correct that AWX is at release 1.0.1.0 the documentation is not correctly aligned. | 1.0 | Version-specific documentation links broken - ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
- UI
##### SUMMARY
When clicking the button Key from Search you'll get a series of informations, the link that send to Ansible Tower is not aligned with the AWX version.
##### ENVIRONMENT
<!--
* AWX version: X.Y.Z
* AWX install method: openshift, minishift, docker on linux, docker for mac, boot2docker
* Ansible version: X.Y.Z
* Operating System:
* Web Browser:
-->
* Docker on Linux
* All browsers
##### STEPS TO REPRODUCE
Within search box, pressing the button Key you will be presented with options and with a link that redirect you to Ansible Tower documentation.
##### EXPECTED RESULTS
<!-- For bug reports, what did you expect to happen when running the steps
above? -->
http://docs.ansible.com/ansible-tower/latest/html/userguide/search_sort.html
##### ACTUAL RESULTS
<!-- For bug reports, what actually happened? -->
http://docs.ansible.com/ansible-tower/1.0.1.0/html/userguide/search_sort.html
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
Even tho is correct that AWX is at release 1.0.1.0 the documentation is not correctly aligned. | priority | version specific documentation links broken issue type bug report component name ui summary when clicking the button key from search you ll get a series of informations the link that send to ansible tower is not aligned with the awx version environment awx version x y z awx install method openshift minishift docker on linux docker for mac ansible version x y z operating system web browser docker on linux all browsers steps to reproduce within search box pressing the button key you will be presented with options and with a link that redirect you to ansible tower documentation expected results for bug reports what did you expect to happen when running the steps above actual results additional information include any links to sosreport database dumps screenshots or other information even tho is correct that awx is at release the documentation is not correctly aligned | 1 |
165,044 | 20,574,097,154 | IssuesEvent | 2022-03-04 01:20:00 | JMD60260/kitkatclub | https://api.github.com/repos/JMD60260/kitkatclub | opened | WS-2022-0089 (High) detected in nokogiri-1.10.8.gem | security vulnerability | ## WS-2022-0089 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nokogiri-1.10.8.gem</b></p></summary>
<p>Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser. Among
Nokogiri's many features is the ability to search documents via XPath
or CSS3 selectors.</p>
<p>Library home page: <a href="https://rubygems.org/gems/nokogiri-1.10.8.gem">https://rubygems.org/gems/nokogiri-1.10.8.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /var/lib/gems/2.5.0/cache/nokogiri-1.10.8.gem</p>
<p>
Dependency Hierarchy:
- coffee-rails-4.2.2.gem (Root Library)
- railties-5.2.3.gem
- actionpack-5.2.3.gem
- rails-dom-testing-2.0.3.gem
- :x: **nokogiri-1.10.8.gem** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Nokogiri before version 1.13.2 is vulnerable.
<p>Publish Date: 2022-03-01
<p>URL: <a href=https://github.com/sparklemotion/nokogiri/commit/472913378794b8cae21751b0777205e7c0606a95>WS-2022-0089</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-fq42-c5rg-92c2">https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-fq42-c5rg-92c2</a></p>
<p>Release Date: 2022-03-01</p>
<p>Fix Resolution: nokogiri - v1.13.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2022-0089 (High) detected in nokogiri-1.10.8.gem - ## WS-2022-0089 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nokogiri-1.10.8.gem</b></p></summary>
<p>Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser. Among
Nokogiri's many features is the ability to search documents via XPath
or CSS3 selectors.</p>
<p>Library home page: <a href="https://rubygems.org/gems/nokogiri-1.10.8.gem">https://rubygems.org/gems/nokogiri-1.10.8.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /var/lib/gems/2.5.0/cache/nokogiri-1.10.8.gem</p>
<p>
Dependency Hierarchy:
- coffee-rails-4.2.2.gem (Root Library)
- railties-5.2.3.gem
- actionpack-5.2.3.gem
- rails-dom-testing-2.0.3.gem
- :x: **nokogiri-1.10.8.gem** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Nokogiri before version 1.13.2 is vulnerable.
<p>Publish Date: 2022-03-01
<p>URL: <a href=https://github.com/sparklemotion/nokogiri/commit/472913378794b8cae21751b0777205e7c0606a95>WS-2022-0089</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-fq42-c5rg-92c2">https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-fq42-c5rg-92c2</a></p>
<p>Release Date: 2022-03-01</p>
<p>Fix Resolution: nokogiri - v1.13.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | ws high detected in nokogiri gem ws high severity vulnerability vulnerable library nokogiri gem nokogiri 鋸 is an html xml sax and reader parser among nokogiri s many features is the ability to search documents via xpath or selectors library home page a href path to dependency file gemfile lock path to vulnerable library var lib gems cache nokogiri gem dependency hierarchy coffee rails gem root library railties gem actionpack gem rails dom testing gem x nokogiri gem vulnerable library vulnerability details nokogiri before version is vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nokogiri step up your open source security game with whitesource | 0 |
63,506 | 14,656,733,240 | IssuesEvent | 2020-12-28 14:04:41 | fu1771695yongxie/next.js | https://api.github.com/repos/fu1771695yongxie/next.js | opened | CVE-2020-7774 (High) detected in y18n-4.0.0.tgz | security vulnerability | ## CVE-2020-7774 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p>
<p>Path to dependency file: next.js/node_modules/y18n/package.json</p>
<p>Path to vulnerable library: next.js/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- lerna-3.14.1.tgz (Root Library)
- cli-3.13.0.tgz
- yargs-12.0.5.tgz
- :x: **y18n-4.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/next.js/commit/7da96cb602f4b841f912ded99ee8ea2109a96f0e">7da96cb602f4b841f912ded99ee8ea2109a96f0e</a></p>
<p>Found in base branch: <b>canary</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package y18n before 4.0.1 and 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true
<p>Publish Date: 2020-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774</a></p>
<p>Release Date: 2020-11-17</p>
<p>Fix Resolution: 5.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7774 (High) detected in y18n-4.0.0.tgz - ## CVE-2020-7774 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p>
<p>Path to dependency file: next.js/node_modules/y18n/package.json</p>
<p>Path to vulnerable library: next.js/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- lerna-3.14.1.tgz (Root Library)
- cli-3.13.0.tgz
- yargs-12.0.5.tgz
- :x: **y18n-4.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/next.js/commit/7da96cb602f4b841f912ded99ee8ea2109a96f0e">7da96cb602f4b841f912ded99ee8ea2109a96f0e</a></p>
<p>Found in base branch: <b>canary</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package y18n before 4.0.1 and 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true
<p>Publish Date: 2020-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774</a></p>
<p>Release Date: 2020-11-17</p>
<p>Fix Resolution: 5.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in tgz cve high severity vulnerability vulnerable library tgz the bare bones internationalization library used by yargs library home page a href path to dependency file next js node modules package json path to vulnerable library next js node modules package json dependency hierarchy lerna tgz root library cli tgz yargs tgz x tgz vulnerable library found in head commit a href found in base branch canary vulnerability details this affects the package before and poc by const require setlocale proto updatelocale polluted true console log polluted true publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
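The record above quotes a CVSS 3 base score of 7.3 alongside its component metrics. As a cross-check, the CVSS v3.0 base-score formula can be applied to those metrics (AV:N / AC:L / PR:N / UI:N / S:U / C:L / I:L / A:L). The sketch below hardcodes the scope-unchanged weights from the CVSS v3.0 specification; it is only an illustration, not part of the WhiteSource tooling:

```python
import math

# CVSS v3.0 metric weights (scope unchanged), per the FIRST CVSS v3.0 spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}              # C/I/A impact

def roundup(x):
    # CVSS "round up to one decimal place".
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    # Scope: Unchanged only, matching the metrics in the record above.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N / AC:L / PR:N / UI:N / S:U / C:L / I:L / A:L
print(base_score("N", "L", "N", "N", "L", "L", "L"))  # -> 7.3
```

Running it reproduces the 7.3 reported in the record.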
693,854 | 23,792,221,093 | IssuesEvent | 2022-09-02 15:32:51 | lowRISC/opentitan | https://api.github.com/repos/lowRISC/opentitan | closed | [fusesoc] Compile DPI modules into shared libraries and link them afterwards | Component:Tooling Good First Issue Priority:P2 Type:Enhancement | Currently we compile all DPI code together with the Verilator-generated C++ code. We need to extend fusesoc to first compile the DPI modules into shared libraries, and then pass those libraries to the Verilator makefile to link all components together into one simulation.
This allows us to actually compile C code with a C compiler, as opposed to a C++ compiler (that's what we do right now). | 1.0 | [fusesoc] Compile DPI modules into shared libraries and link them afterwards - Currently we compile all DPI code together with the Verilator-generated C++ code. We need to extend fusesoc to first compile the DPI modules into shared libraries, and then pass those libraries to the Verilator makefile to link all components together into one simulation.
This allows us to actually compile C code with a C compiler, as opposed to a C++ compiler (that's what we do right now). | priority | compile dpi modules into shared libraries and link them afterwards currently we compile all dpi code together with the verilator generated c code we need to extend fusesoc to first compile the dpi modules into shared libraries and then pass those libraries to the verilator makefile to link all components together into one simulation this allows us to actually compile c code with a c compiler as opposed to a c compiler that s what we do right now | 1 |
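The opentitan record above proposes compiling DPI C sources into shared libraries with a C compiler, then linking those libraries into the Verilator build. A minimal sketch of how such a two-step build could be expressed, assuming a gcc-style toolchain — the compiler name, flags, and file names are illustrative assumptions, not fusesoc's actual implementation:

```python
# Hypothetical sketch: build one DPI module as a shared library so it can
# later be handed to the Verilator makefile for linking. The toolchain
# details here are assumptions for illustration only.

def dpi_shared_lib_cmds(c_sources, lib_name, cc="gcc"):
    """Return (compile_cmds, link_cmd) for one DPI module."""
    objs = [src.rsplit(".", 1)[0] + ".o" for src in c_sources]
    compile_cmds = [
        [cc, "-c", "-fPIC", src, "-o", obj]   # a plain C compiler, not C++
        for src, obj in zip(c_sources, objs)
    ]
    link_cmd = [cc, "-shared", "-o", f"lib{lib_name}.so", *objs]
    return compile_cmds, link_cmd

compiles, link = dpi_shared_lib_cmds(["uartdpi.c"], "uartdpi")
print(compiles)  # [['gcc', '-c', '-fPIC', 'uartdpi.c', '-o', 'uartdpi.o']]
print(link)      # ['gcc', '-shared', '-o', 'libuartdpi.so', 'uartdpi.o']
```

The resulting `.so` paths would then be passed to the simulation link step instead of compiling the DPI sources together with the Verilator-generated C++.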
78,111 | 10,040,212,747 | IssuesEvent | 2019-07-18 19:22:31 | macports/macports-webapp | https://api.github.com/repos/macports/macports-webapp | closed | README.md does not describe what the code does | documentation | The top level README.md explains how to run the webapp but not what it is. A few paragraphs of explanation at the beginning would be in order. | 1.0 | README.md does not describe what the code does - The top level README.md explains how to run the webapp but not what it is. A few paragraphs of explanation at the beginning would be in order. | non_priority | readme md does not describe what the code does the top level readme md explains how to run the webapp but not what it is a few paragraphs of explanation at the beginning would be in order | 0 |
2,289 | 2,715,840,628 | IssuesEvent | 2015-04-10 15:27:12 | Gouga34/TERM1_Poker | https://api.github.com/repos/Gouga34/TERM1_Poker | closed | Add a method that performs a bet | Code improvement | The miser, suivre and relancer methods need to be factored by writing one method that does:
- setMiseCourante
- setMiseTotale
- setMisePlusHaute (with a check)
- Increment the pot
- Decrement
And use that do-it-all method inside miser, relancer and suivre! | 1.0 | Add a method that performs a bet - The miser, suivre and relancer methods need to be factored by writing one method that does:
- setMiseCourante
- setMiseTotale
- setMisePlusHaute (with a check)
- Increment the pot
- Decrement
And use that do-it-all method inside miser, relancer and suivre! | non_priority | add a method that performs a bet the miser suivre and relancer methods need to be factored by writing one method that does setmisecourante setmisetotale setmiseplushaute with a check increment the pot decrement and use that do it all method inside miser relancer and suivre | 0 |
457,954 | 13,165,676,425 | IssuesEvent | 2020-08-11 07:07:46 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.gst.gov.in - design is broken | browser-firefox engine-gecko priority-normal | <!-- @browser: Firefox 80.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:80.0) Gecko/20100101 Firefox/80.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/56457 -->
**URL**: https://www.gst.gov.in/
**Browser / Version**: Firefox 80.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/8/f1c7dc11-eef6-441d-8588-a2db658ea782.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@DD`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.gst.gov.in - design is broken - <!-- @browser: Firefox 80.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:80.0) Gecko/20100101 Firefox/80.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/56457 -->
**URL**: https://www.gst.gov.in/
**Browser / Version**: Firefox 80.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/8/f1c7dc11-eef6-441d-8588-a2db658ea782.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@DD`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | design is broken url browser version firefox operating system windows tested another browser yes chrome problem type design is broken description images not loaded steps to reproduce view the screenshot img alt screenshot src browser configuration none submitted in the name of dd from with ❤️ | 1 |
5,801 | 2,974,709,166 | IssuesEvent | 2015-07-15 03:31:22 | stan-dev/cmdstan | https://api.github.com/repos/stan-dev/cmdstan | closed | update doc for L-BFGS default, pointers to Stan language doc | bug documentation enhancement | * [ ] update doc everywhere indicating L-BFGS is default optimizer
* [ ] remove all the algorithm description and replace with pointers to Stan language model; see https://github.com/stan-dev/stan/issues/786#issuecomment-50775550 | 1.0 | update doc for L-BFGS default, pointers to Stan language doc - * [ ] update doc everywhere indicating L-BFGS is default optimizer
* [ ] remove all the algorithm description and replace with pointers to Stan language model; see https://github.com/stan-dev/stan/issues/786#issuecomment-50775550 | non_priority | update doc for l bfgs default pointers to stan language doc update doc everywhere indicating l bfgs is default optimizer remove all the algorithm description and replace with pointers to stan language model see | 0 |
28,359 | 6,988,186,138 | IssuesEvent | 2017-12-14 11:57:41 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Problem saving article in frontend | No Code Attached Yet | ### Steps to reproduce the issue
Try to disable recaptcha plugin
go to frontend and login with an admin account
edit an article
try to save
### Expected result
saved article
### Actual result
PHP Fatal error: Call to a member function checkAnswer() on null in /home/uaug0z2o/domains/icfontanellatoefontevivo.gov.it/public_html/libraries/src/Form/Rule/CaptchaRule.php on line 63
### System information (as much as possible)
Joomla 3.8.3
Php 5.6
### Additional comments
If recaptcha is enable there's no problem | 1.0 | Problem saving article in frontend - ### Steps to reproduce the issue
Try to disable recaptcha plugin
go to frontend and login with an admin account
edit an article
try to save
### Expected result
saved article
### Actual result
PHP Fatal error: Call to a member function checkAnswer() on null in /home/uaug0z2o/domains/icfontanellatoefontevivo.gov.it/public_html/libraries/src/Form/Rule/CaptchaRule.php on line 63
### System information (as much as possible)
Joomla 3.8.3
Php 5.6
### Additional comments
If recaptcha is enable there's no problem | non_priority | problem saving article in frontend steps to reproduce the issue try to disable recaptcha plugin go to frontend and login with an admin account edit an article try to save expected result saved article actual result php fatal error call to a member function checkanswer on null in home domains icfontanellatoefontevivo gov it public html libraries src form rule captcharule php on line system information as much as possible joomla php additional comments if recaptcha is enable there s no problem | 0 |
7,091 | 9,375,814,553 | IssuesEvent | 2019-04-04 05:55:26 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | reopened | incompatible_windows_style_arg_escaping: enables correct subprocess argument escaping on Windows | bazel 1.0 breaking-change-0.25 incompatible-change migration-0.24 team-Windows | # Description
The option `--incompatible_windows_style_arg_escaping` enables correct subprocess argument escaping on Windows. This flag has NO effect on other platforms.
When enabled, [WindowsSubprocessFactory will use ShellUtils.windowsEscapeArg to escape command line arguments](https://github.com/bazelbuild/bazel/blob/1da48619d94ccc9fda59c114c6cd14904dbc6e34/src/main/java/com/google/devtools/build/lib/windows/WindowsSubprocessFactory.java#L77). This is correct, as [verified by tests](https://github.com/bazelbuild/bazel/blob/1da48619d94ccc9fda59c114c6cd14904dbc6e34/src/test/java/com/google/devtools/build/lib/windows/WindowsSubprocessTest.java#L230).
When disabled, [WindowsSubprocessFactory will use ShellUtils.quoteCommandLine](https://github.com/bazelbuild/bazel/blob/1da48619d94ccc9fda59c114c6cd14904dbc6e34/src/main/java/com/google/devtools/build/lib/windows/WindowsSubprocessFactory.java#L90). This is buggy, as shown by https://github.com/bazelbuild/bazel/issues/7122.
# Migration recipe
None, as of 2019-02-18.
We don't expect any breakages when this flag is enabled. However if it breaks your build, please let us know so we can help fixing it and provide a migration recipe.
# Rollout plan
- Bazel 0.23.0 will not support this flag.
- Bazel 0.24.0 is expected to support this flag, with default value being `false`.
- Bazel 0.25.0 is expected to flip this flag to true. | True | incompatible_windows_style_arg_escaping: enables correct subprocess argument escaping on Windows - # Description
The option `--incompatible_windows_style_arg_escaping` enables correct subprocess argument escaping on Windows. This flag has NO effect on other platforms.
When enabled, [WindowsSubprocessFactory will use ShellUtils.windowsEscapeArg to escape command line arguments](https://github.com/bazelbuild/bazel/blob/1da48619d94ccc9fda59c114c6cd14904dbc6e34/src/main/java/com/google/devtools/build/lib/windows/WindowsSubprocessFactory.java#L77). This is correct, as [verified by tests](https://github.com/bazelbuild/bazel/blob/1da48619d94ccc9fda59c114c6cd14904dbc6e34/src/test/java/com/google/devtools/build/lib/windows/WindowsSubprocessTest.java#L230).
When disabled, [WindowsSubprocessFactory will use ShellUtils.quoteCommandLine](https://github.com/bazelbuild/bazel/blob/1da48619d94ccc9fda59c114c6cd14904dbc6e34/src/main/java/com/google/devtools/build/lib/windows/WindowsSubprocessFactory.java#L90). This is buggy, as shown by https://github.com/bazelbuild/bazel/issues/7122.
# Migration recipe
None, as of 2019-02-18.
We don't expect any breakages when this flag is enabled. However if it breaks your build, please let us know so we can help fixing it and provide a migration recipe.
# Rollout plan
- Bazel 0.23.0 will not support this flag.
- Bazel 0.24.0 is expected to support this flag, with default value being `false`.
- Bazel 0.25.0 is expected to flip this flag to true. | non_priority | incompatible windows style arg escaping enables correct subprocess argument escaping on windows description the option incompatible windows style arg escaping enables correct subprocess argument escaping on windows this flag has no effect on other platforms when enabled this is correct as when disabled this is buggy as shown by migration recipe none as of we don t expect any breakages when this flag is enabled however if it breaks your build please let us know so we can help fixing it and provide a migration recipe rollout plan bazel will not support this flag bazel is expected to support this flag with default value being false bazel is expected to flip this flag to true | 0 |
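The escaping behaviour the bazel record enables is the standard Windows (`CommandLineToArgvW`) quoting convention: backslashes are literal unless they precede a double quote, backslashes before a quote are doubled plus one, and a closing quote doubles any trailing backslashes. A sketch of that convention — it mirrors the rule `ShellUtils.windowsEscapeArg` implements, but is not Bazel's actual Java code:

```python
def windows_escape_arg(arg):
    """Quote one argv element per the Windows CommandLineToArgvW rules."""
    if arg and not any(c in arg for c in ' \t"'):
        return arg                          # nothing to escape
    out = ['"']
    backslashes = 0
    for c in arg:
        if c == "\\":
            backslashes += 1                # defer: meaning depends on next char
        elif c == '"':
            # Backslashes before a quote are doubled, plus one to escape it.
            out.append("\\" * (backslashes * 2 + 1) + '"')
            backslashes = 0
        else:
            out.append("\\" * backslashes + c)
            backslashes = 0
    # Trailing backslashes are doubled so the closing quote stays a delimiter.
    out.append("\\" * (backslashes * 2) + '"')
    return "".join(out)

print(windows_escape_arg('a b'))    # "a b"
print(windows_escape_arg('a"b'))    # "a\"b"
```

Under the old `quoteCommandLine` path, arguments like these could be mis-escaped, which is the class of bug issue #7122 describes.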
387,990 | 26,747,603,952 | IssuesEvent | 2023-01-30 17:03:03 | Glassait/Computer_craft | https://api.github.com/repos/Glassait/Computer_craft | closed | [UPDATE] Refactor gamemode selection with strategy pattern | documentation update | ## **Update**
**Describe the solution you'd like**
Create a new class to handle multiple gamemodes | 1.0 | [UPDATE] Refactor gamemode selection with strategy pattern - ## **Update**
**Describe the solution you'd like**
Create a new class to handle multiple gamemodes | non_priority | refactor gamemode selection with strategy pattern update describe the solution you d like create a new class to handle multiple gamemodes | 0 |
169,602 | 6,412,818,350 | IssuesEvent | 2017-08-08 05:13:58 | GoogleCloudPlatform/google-cloud-python | https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python | closed | Exception Handling Design | core priority: p2+ status: acknowledged type: enhancement | Created from https://github.com/GoogleCloudPlatform/google-cloud-python/pull/2936.
@dhermes I wasn't 100% sure if you had this all planned out or not, but if there are pieces to it or design questions we could talk about it here instead of in PRs.
The idea (to the best of my knowledge) is to better handle exceptions between our services that handle multiple transports. There is also a differentiation between exceptions that are raised from the upstream service and just normal transport exceptions.
We should have a clear distinction between service related exceptions and transport related exceptions and manage those consistently across all services. | 1.0 | Exception Handling Design - Created from https://github.com/GoogleCloudPlatform/google-cloud-python/pull/2936.
@dhermes I wasn't 100% sure if you had this all planned out or not, but if there are pieces to it or design questions we could talk about it here instead of in PRs.
The idea (to the best of my knowledge) is to better handle exceptions between our services that handle multiple transports. There is also a differentiation between exceptions that are raised from the upstream service and just normal transport exceptions.
We should have a clear distinction between service related exceptions and transport related exceptions and manage those consistently across all services. | priority | exception handling design created from dhermes i wasn t sure if you had this all planned out or not but if there are pieces to it or design questions we could talk about it here instead of in prs the idea to the best of my knowledge is to better handle exceptions between our services that handle multiple transports there is also a differentiation between exceptions that are raised from the upstream service and just normal transport exceptions we should have a clear distinction between service related exceptions and transport related exceptions and manage those consistently across all services | 1 |
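One way to express the service-vs-transport split the google-cloud-python record asks for is a small exception hierarchy plus a factory that maps status codes onto service errors. All class and function names below are invented for illustration and are not the library's actual API:

```python
class CloudError(Exception):
    """Root for everything the client library raises."""

class TransportError(CloudError):
    """Connection resets, TLS failures, timeouts -- the wire, not the service."""

class ServiceError(CloudError):
    """The upstream service answered with an error payload."""
    def __init__(self, code, message):
        super().__init__(f"{code}: {message}")
        self.code = code

class NotFound(ServiceError):
    pass

class PermissionDenied(ServiceError):
    pass

_BY_CODE = {403: PermissionDenied, 404: NotFound}

def from_http_status(code, message):
    """Map an HTTP status from any transport onto one service exception."""
    return _BY_CODE.get(code, ServiceError)(code, message)

err = from_http_status(404, "bucket does not exist")
print(type(err).__name__, isinstance(err, CloudError))  # NotFound True
```

Because every transport funnels through one factory, callers can catch `ServiceError` vs `TransportError` consistently regardless of whether the request went over HTTP or gRPC.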
534,183 | 15,611,593,533 | IssuesEvent | 2021-03-19 14:29:14 | sopra-fs21-group-01/client | https://api.github.com/repos/sopra-fs21-group-01/client | opened | #4 Pressing the UNO button sends notification to other players | medium priority task | The notification should be kinda "alarming"
Time: 45min
Part of: #4 | 1.0 | #4 Pressing the UNO button sends notification to other players - The notification should be kinda "alarming"
Time: 45min
Part of: #4 | priority | pressing the uno button sends notification to other players the notification should be kinda alarming time part of | 1 |
223,825 | 7,461,071,079 | IssuesEvent | 2018-03-30 23:04:25 | Motoxpro/WorldCupStatsSite | https://api.github.com/repos/Motoxpro/WorldCupStatsSite | closed | Get correct date for qualifying for all Roots and Rain data. | Data/Backend Low Priority Data Issue | Roots and Rain date is for the finals only, so qualifying needs to be corrected. | 1.0 | Get correct date for qualifying for all Roots and Rain data. - Roots and Rain date is for the finals only, so qualifying needs to be corrected. | priority | get correct date for qualifying for all roots and rain data roots and rain date is for the finals only so qualifying needs to be corrected | 1 |
282,992 | 21,316,008,135 | IssuesEvent | 2022-04-16 09:32:47 | FTang21/pe | https://api.github.com/repos/FTang21/pe | opened | Overlapping lines in class diagrams make it a little confusing | severity.Low type.DocumentationBug | 
For some, there is a line coming both in and out around the same location, making the diagram a bit harder to read.
I suggest spreading it out a little bit more and suggest maybe using different colors to differentiate?
<!--session: 1650096032233-d65dfecb-8fa3-4774-9bdf-a138f1813d03-->
<!--Version: Web v3.4.2--> | 1.0 | Overlapping lines in class diagrams make it a little confusing - 
For some, there is a line coming both in and out around the same location, making the diagram a bit harder to read.
I suggest spreading it out a little bit more and suggest maybe using different colors to differentiate?
<!--session: 1650096032233-d65dfecb-8fa3-4774-9bdf-a138f1813d03-->
<!--Version: Web v3.4.2--> | non_priority | overlapping lines in class diagrams make it a little confusing for some there is a line coming both in and out around the same location making the diagram a bit harder to read i suggest spreading it out a little bit more and suggest maybe using different colors to differentiate | 0 |
50,003 | 12,452,587,626 | IssuesEvent | 2020-05-27 12:33:14 | drupal-celebrations/celebrate-drupal-9 | https://api.github.com/repos/drupal-celebrations/celebrate-drupal-9 | closed | Group facets filters | backend sitebuild | Filters for videos and images (possible filters: category, country #37, celebration) are in the content region and there is no way to group/style them easily and thus make use of e.g. flex.
There is an investigation to make use of layout builder #36 but for the mvp we could start with a more simple approach (e.g. region or possibly https://www.drupal.org/project/facets_block) | 1.0 | Group facets filters - Filters for videos and images (possible filters: category, country #37, celebration) are in the content region and there is no way to group/style them easily and thus make use of e.g. flex.
There is an investigation to make use of layout builder #36 but for the mvp we could start with a more simple approach (e.g. region or possibly https://www.drupal.org/project/facets_block) | non_priority | group facets filters filters for videos and images possible filters category country celebration are in the content region and there is no way to group style them easily and thus make use of e g flex there is an investigation to make use of layout builder but for the mvp we could start with a more simple approach e g region or possibly | 0 |
89,077 | 11,195,254,114 | IssuesEvent | 2020-01-03 05:31:28 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | API Designer less files should be revamped | Area/Tooling Component/APIDesigner Component/Composer Type/Improvement Type/UX | **Description:**
This issue is to track the API Designer styles and less file review improvements noted on #13305. | 1.0 | API Designer less files should be revamped - **Description:**
This issue is to track the API Designer styles and less file review improvements noted on #13305. | non_priority | api designer less files should be revamped description this issue is to track the api designer styles and less file review improvements noted on | 0 |
37,832 | 8,529,987,021 | IssuesEvent | 2018-11-03 17:49:45 | jadrake75/stamp-imageparsing | https://api.github.com/repos/jadrake75/stamp-imageparsing | closed | Unable to create files in folders with non A-Z characters such as ö | Defect | If you have a folder or country name like "Grössbachen" it will show up as a square box | 1.0 | Unable to create files in folders with non A-Z characters such as ö - If you have a folder or country name like "Grössbachen" it will show up as a square box | non_priority | unable to create files in folders with non a z characters such as ö if you have a folder or country name like grössbachen it will show up as a square box | 0 |
83,641 | 3,638,064,985 | IssuesEvent | 2016-02-12 14:07:21 | molgenis/molgenis | https://api.github.com/repos/molgenis/molgenis | closed | Charts won't plot TypeTest ID column versus TypeTest ID column | bug molgenis-dataexplorer priority-later | ## Reproduce
Select Dataexplorer, Select TypeTest, Select charts
Create scatter plot for ID versus ID column.
## Expected
I can see the chart
## Actual
I get a somewhat obscure error:
```
18:32:36.221 [ajp-bio-8009-exec-161] ERROR org.molgenis.charts.ChartController - null
org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [query_fetch], all shards failed; shardFailures {[3rAobd3GTcOEsqtR2YsSGQ][molgenis][0]: QueryPhaseExecutionException[[molgenis][0]: query[ConstantScore(cache(+_type:org_molgenis_test_TypeTest +org.elasticsearch.index.search.nested.NonNestedDocsFilter@7b0ccb35))],from[0],size[1000],sort[<custom:"id": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@64be0481>,<custom:"id": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@73957d5e>]: Query Failed [Failed to execute main query]]; nested: ElasticsearchException[java.lang.NumberFormatException: Invalid shift value in prefixCoded bytes (is encoded value really an INT?)]; nested: UncheckedExecutionException[java.lang.NumberFormatException: Invalid shift value in prefixCoded bytes (is encoded value really an INT?)]; nested: NumberFormatException[Invalid shift value in prefixCoded bytes (is encoded value really an INT?)]; }
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:233) ~[elasticsearch-1.4.4.jar:na]
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.java:179) ~[elasticsearch-1.4.4.jar:na]
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:565) ~[elasticsearch-1.4.4.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_31]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_31]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_31]
``` | 1.0 | Charts won't plot TypeTest ID column versus TypeTest ID column - ## Reproduce
Select Dataexplorer, Select TypeTest, Select charts
Create scatter plot for ID versus ID column.
## Expected
I can see the chart
## Actual
I get a somewhat obscure error:
```
18:32:36.221 [ajp-bio-8009-exec-161] ERROR org.molgenis.charts.ChartController - null
org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [query_fetch], all shards failed; shardFailures {[3rAobd3GTcOEsqtR2YsSGQ][molgenis][0]: QueryPhaseExecutionException[[molgenis][0]: query[ConstantScore(cache(+_type:org_molgenis_test_TypeTest +org.elasticsearch.index.search.nested.NonNestedDocsFilter@7b0ccb35))],from[0],size[1000],sort[<custom:"id": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@64be0481>,<custom:"id": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@73957d5e>]: Query Failed [Failed to execute main query]]; nested: ElasticsearchException[java.lang.NumberFormatException: Invalid shift value in prefixCoded bytes (is encoded value really an INT?)]; nested: UncheckedExecutionException[java.lang.NumberFormatException: Invalid shift value in prefixCoded bytes (is encoded value really an INT?)]; nested: NumberFormatException[Invalid shift value in prefixCoded bytes (is encoded value really an INT?)]; }
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:233) ~[elasticsearch-1.4.4.jar:na]
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.java:179) ~[elasticsearch-1.4.4.jar:na]
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:565) ~[elasticsearch-1.4.4.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_31]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_31]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_31]
``` | priority | charts won t plot typetest id column versus typetest id column reproduce select dataexplorer select typetest select charts create scatter plot for id versus id column expected i can see the chart actual i get a somewhat obscure error error org molgenis charts chartcontroller null org elasticsearch action search searchphaseexecutionexception failed to execute phase all shards failed shardfailures queryphaseexecutionexception query from size sort query failed nested elasticsearchexception nested uncheckedexecutionexception nested numberformatexception at org elasticsearch action search type transportsearchtypeaction baseasyncaction onfirstphaseresult transportsearchtypeaction java at org elasticsearch action search type transportsearchtypeaction baseasyncaction onfailure transportsearchtypeaction java at org elasticsearch search action searchservicetransportaction run searchservicetransportaction java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java | 1 |
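The failed query in the molgenis record sorts on the same field (`id`) twice. Purely as an illustration of one possible client-side guard — not the project's actual fix, and with no guarantee it addresses the underlying type mismatch in the Elasticsearch mapping — duplicate sort clauses could be dropped before issuing the request:

```python
# Hypothetical guard: keep only the first sort clause per field before
# building the Elasticsearch request body.

def dedupe_sort(sort_clauses):
    """Keep the first occurrence of each sort field, preserving order."""
    seen = set()
    result = []
    for clause in sort_clauses:
        field = clause["field"]
        if field not in seen:
            seen.add(field)
            result.append(clause)
    return result

print(dedupe_sort([{"field": "id", "order": "asc"},
                   {"field": "id", "order": "asc"}]))
# [{'field': 'id', 'order': 'asc'}]
```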
243,549 | 20,423,873,580 | IssuesEvent | 2022-02-24 00:17:54 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | reopened | Frequent test failures of `TestMultiNode/serial/StopMultiNode` | priority/backlog kind/failing-test | This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[KVM_Linux_crio](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=KVM_Linux_crio&test=TestMultiNode/serial/StopMultiNode)|75.94| | 1.0 | Frequent test failures of `TestMultiNode/serial/StopMultiNode` - This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[KVM_Linux_crio](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=KVM_Linux_crio&test=TestMultiNode/serial/StopMultiNode)|75.94| | non_priority | frequent test failures of testmultinode serial stopmultinode this test has high flake rates for the following environments environment flake rate | 0 |
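Flake-rate tables like the one in the minikube record are typically computed as the percentage of recorded runs in which the test failed. A sketch under that assumption — the run data here is made up, not taken from the linked chart:

```python
def flake_rate(results):
    """results: list of booleans, True = test passed on that run."""
    failures = sum(1 for passed in results if not passed)
    return round(100.0 * failures / len(results), 2)

runs = [False] * 3 + [True]   # hypothetical: 3 failing runs out of 4
print(flake_rate(runs))  # 75.0
```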
74,635 | 9,794,728,361 | IssuesEvent | 2019-06-11 00:17:18 | arduino/Arduino | https://api.github.com/repos/arduino/Arduino | opened | Wire.requestFrom with 5 params is undocumented & exists only on AVR & SAM, not SAMD & megaAVR | Component: Documentation | Looks like a requestFrom() function taking 5 parameters is undocumented, and only implemented in 2 of the 4 platforms Arduino supports.
Here are the definitions in the AVR and SAM platforms:
https://github.com/arduino/ArduinoCore-avr/blob/master/libraries/Wire/src/Wire.h#L63
https://github.com/arduino/ArduinoCore-sam/blob/master/libraries/Wire/src/Wire.h#L49
However, the SAMD and megaAVR platforms only have the 2 and 3 parameter requestFrom() functions in their Wire libs:
https://github.com/arduino/ArduinoCore-samd/blob/master/libraries/Wire/Wire.h#L45
https://github.com/arduino/ArduinoCore-megaavr/blob/master/libraries/Wire/src/Wire.h#L60
The Wire library requestFrom() reference page also only documents the 2 and 3 parameter functions:
https://www.arduino.cc/en/Reference/WireRequestFrom
This 5 parameter function should be properly documented, and should be consistently implemented on all platforms. Perhaps issues need to be opened on the repositories for the SAMD and megaAVR platforms? | 1.0 | Wire.requestFrom with 5 params is undocumented & exists only on AVR & SAM, not SAMD & megaAVR - Looks like a requestFrom() function taking 5 parameters is undocumented, and only implemented in 2 of the 4 platforms Arduino supports.
Here are the definitions in the AVR and SAM platforms:
https://github.com/arduino/ArduinoCore-avr/blob/master/libraries/Wire/src/Wire.h#L63
https://github.com/arduino/ArduinoCore-sam/blob/master/libraries/Wire/src/Wire.h#L49
However, the SAMD and megaAVR platforms only have the 2 and 3 parameter requestFrom() functions in their Wire libs:
https://github.com/arduino/ArduinoCore-samd/blob/master/libraries/Wire/Wire.h#L45
https://github.com/arduino/ArduinoCore-megaavr/blob/master/libraries/Wire/src/Wire.h#L60
The Wire library requestFrom() reference page also only documents the 2 and 3 parameter functions:
https://www.arduino.cc/en/Reference/WireRequestFrom
This 5 parameter function should be properly documented, and should be consistently implemented on all platforms. Perhaps issues need to be opened on the repositories for the SAMD and megaAVR platforms? | non_priority | wire requestfrom with params is undocumented exists only on avr sam not samd megaavr looks like a requestfrom function taking parameters is undocumented and only implemented in of the platforms arduino supports here are the definitions in the avr and sam platforms however the samd and megaavr platforms only have the and parameter requestfrom functions in their wire libs the wire library requestfrom reference page also only documents the and parameter functions this parameter function should be properly documented and should be consistently implemented on all platforms perhaps issues need to be opened on the repositories for the samd and megaavr platforms | 0 |
246,666 | 7,895,590,299 | IssuesEvent | 2018-06-29 04:22:08 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Double clicking on a silo file on windows brings up visit, but doesn't open the file. | Likelihood: 3 - Occasional OS: All Priority: Normal Severity: 3 - Major Irritation Support Group: Any bug version: 2.5.0 | Al Nichols showed me where he double clicked on an ALE3D file on his Windows system and it brought up visit, but it didn't open the file or print an error or anything. He then went to the open button and successfully opened the file and created a mesh plot.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 07/03/2012 07:17 pm
Original update: 07/16/2012 05:03 pm
Ticket number: 1118 | 1.0 | Double clicking on a silo file on windows brings up visit, but doesn't open the file. - Al Nichols showed me where he double clicked on an ALE3D file on his window system and it brought up visit, but it didn't open the file or print an error or anything. He then went to the open button and successfully opened the file and created a mesh plot.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 07/03/2012 07:17 pm
Original update: 07/16/2012 05:03 pm
Ticket number: 1118 | priority | double clicking on a silo file on windows brings up visit but doesn t open the file al nichols showed me where he double clicked on an file on his window system and it brought up visit but it didn t open the file or print an error or anything he then went to the open button and successfully opened the file and created a mesh plot redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author eric brugger original creation pm original update pm ticket number | 1 |
585,326 | 17,485,001,819 | IssuesEvent | 2021-08-09 09:50:28 | nimblehq/nimble-medium-ios | https://api.github.com/repos/nimblehq/nimble-medium-ios | opened | As a user, I can see the signup screen from the left menu | type : feature type: integration priority: medium | ## Why
For the new users of the application, they should be able to see the signup screen and create a new account for being able to explore and use more features from the application.
## Acceptance Criteria
- [ ] When the users tap on the `Signup` option from the left menu of the `Home` screen, present the the `Signup` screen modally.
- [ ] When the users tap on the `Close` button in the `Signup` screen, dismiss it modally.
## Resources
- Sample UX Flow:
https://user-images.githubusercontent.com/70877098/128686372-0910abcb-37c6-4227-9ace-9c476da21fd2.mov
| 1.0 | As a user, I can see the signup screen from the left menu - ## Why
For the new users of the application, they should be able to see the signup screen and create a new account for being able to explore and use more features from the application.
## Acceptance Criteria
- [ ] When the users tap on the `Signup` option from the left menu of the `Home` screen, present the the `Signup` screen modally.
- [ ] When the users tap on the `Close` button in the `Signup` screen, dismiss it modally.
## Resources
- Sample UX Flow:
https://user-images.githubusercontent.com/70877098/128686372-0910abcb-37c6-4227-9ace-9c476da21fd2.mov
| priority | as a user i can see the signup screen from the left menu why for the new users of the application they should be able to see the signup screen and create a new account for being able to explore and use more features from the application acceptance criteria when the users tap on the signup option from the left menu of the home screen present the the signup screen modally when the users tap on the close button in the signup screen dismiss it modally resources sample ux flow | 1 |
4,531 | 3,870,846,366 | IssuesEvent | 2016-04-11 07:03:19 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 22817061: Apple Watch Screenshot | classification:ui/usability reproducible:always status:open | #### Description
Summary:
The action to take screenshots on the Apple watch is far too accessible to the point that my photo stream is littered by those photos.
Steps to Reproduce:
Simultaneously (and accidentally) tap moth the crown button and the people selector button.
-
Product Version: 1.0.1
Created: 2015-09-23T09:45:14.848410
Originated: 2015-09-23T10:45:00
Open Radar Link: http://www.openradar.me/22817061 | True | 22817061: Apple Watch Screenshot - #### Description
Summary:
The action to take screenshots on the Apple watch is far too accessible to the point that my photo stream is littered by those photos.
Steps to Reproduce:
Simultaneously (and accidentally) tap moth the crown button and the people selector button.
-
Product Version: 1.0.1
Created: 2015-09-23T09:45:14.848410
Originated: 2015-09-23T10:45:00
Open Radar Link: http://www.openradar.me/22817061 | non_priority | apple watch screenshot description summary the action to take screenshots on the apple watch is far too accessible to the point that my photo stream is littered by those photos steps to reproduce simultaneously and accidentally tap moth the crown button and the people selector button product version created originated open radar link | 0 |
802,706 | 29,044,544,160 | IssuesEvent | 2023-05-13 11:41:03 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | opened | automl.tables.batch_predict_test: test_batch_predict failed | priority: p1 type: bug flakybot: issue | Note: #8811 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: a11f7e84da5b2cc13214a0fd8b77ddb4e06e3784
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/b4fb3f2f-f911-4bcf-a4b4-988b1a966d81), [Sponge](http://sponge2/b4fb3f2f-f911-4bcf-a4b4-988b1a966d81)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/retry.py", line 191, in retry_target
return target()
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py", line 120, in _done_or_raise
raise _OperationNotComplete()
google.api_core.future.polling._OperationNotComplete
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py", line 137, in _blocking_poll
polling(self._done_or_raise)(retry=retry)
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/retry.py", line 349, in retry_wrapped_func
return retry_target(
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/retry.py", line 207, in retry_target
raise exceptions.RetryError(
google.api_core.exceptions.RetryError: Deadline of 900.0s exceeded while calling target function, last exception:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/workspace/automl/tables/batch_predict_test.py", line 42, in test_batch_predict
automl_tables_predict.batch_predict(
File "/workspace/automl/tables/automl_tables_predict.py", line 167, in batch_predict
response.result()
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py", line 256, in result
self._blocking_poll(timeout=timeout, retry=retry, polling=polling)
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py", line 139, in _blocking_poll
raise concurrent.futures.TimeoutError(
concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout of 900 seconds.</pre></details> | 1.0 | automl.tables.batch_predict_test: test_batch_predict failed - Note: #8811 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: a11f7e84da5b2cc13214a0fd8b77ddb4e06e3784
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/b4fb3f2f-f911-4bcf-a4b4-988b1a966d81), [Sponge](http://sponge2/b4fb3f2f-f911-4bcf-a4b4-988b1a966d81)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/retry.py", line 191, in retry_target
return target()
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py", line 120, in _done_or_raise
raise _OperationNotComplete()
google.api_core.future.polling._OperationNotComplete
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py", line 137, in _blocking_poll
polling(self._done_or_raise)(retry=retry)
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/retry.py", line 349, in retry_wrapped_func
return retry_target(
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/retry.py", line 207, in retry_target
raise exceptions.RetryError(
google.api_core.exceptions.RetryError: Deadline of 900.0s exceeded while calling target function, last exception:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/workspace/automl/tables/batch_predict_test.py", line 42, in test_batch_predict
automl_tables_predict.batch_predict(
File "/workspace/automl/tables/automl_tables_predict.py", line 167, in batch_predict
response.result()
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py", line 256, in result
self._blocking_poll(timeout=timeout, retry=retry, polling=polling)
File "/workspace/automl/tables/.nox/py-3-8/lib/python3.8/site-packages/google/api_core/future/polling.py", line 139, in _blocking_poll
raise concurrent.futures.TimeoutError(
concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout of 900 seconds.</pre></details> | priority | automl tables batch predict test test batch predict failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output traceback most recent call last file workspace automl tables nox py lib site packages google api core retry py line in retry target return target file workspace automl tables nox py lib site packages google api core future polling py line in done or raise raise operationnotcomplete google api core future polling operationnotcomplete the above exception was the direct cause of the following exception traceback most recent call last file workspace automl tables nox py lib site packages google api core future polling py line in blocking poll polling self done or raise retry retry file workspace automl tables nox py lib site packages google api core retry py line in retry wrapped func return retry target file workspace automl tables nox py lib site packages google api core retry py line in retry target raise exceptions retryerror google api core exceptions retryerror deadline of exceeded while calling target function last exception during handling of the above exception another exception occurred traceback most recent call last file workspace automl tables batch predict test py line in test batch predict automl tables predict batch predict file workspace automl tables automl tables predict py line in batch predict response result file workspace automl tables nox py lib site packages google api core future polling py line in result self blocking poll timeout timeout retry retry polling polling file workspace automl tables nox py lib site packages google api core future polling py line in blocking poll raise concurrent futures timeouterror concurrent futures base timeouterror operation did not complete within the designated timeout of seconds | 
1 |
212,785 | 7,242,687,469 | IssuesEvent | 2018-02-14 08:56:09 | wso2/message-broker | https://api.github.com/repos/wso2/message-broker | closed | Keeping a limited number of message in memory for durable queues | Module/broker-core Priority/High Severity/Major Type/Improvement | ### Description
Currently, all the messages published to a queue is kept in the memory. We need to remove new messages from memory if queue size exceeds a certain limit and fetch them from DB if in-memory message count decreases. Otherwise the broker can go out of memory. | 1.0 | Keeping a limited number of message in memory for durable queues - ### Description
Currently, all the messages published to a queue is kept in the memory. We need to remove new messages from memory if queue size exceeds a certain limit and fetch them from DB if in-memory message count decreases. Otherwise the broker can go out of memory. | priority | keeping a limited number of message in memory for durable queues description currently all the messages published to a queue is kept in the memory we need to remove new messages from memory if queue size exceeds a certain limit and fetch them from db if in memory message count decreases otherwise the broker can go out of memory | 1 |
3,420 | 5,832,087,832 | IssuesEvent | 2017-05-08 20:56:37 | LBAB-Humboldt/BioModelos.v2 | https://api.github.com/repos/LBAB-Humboldt/BioModelos.v2 | closed | Hacer que las estadísticas correspondan a datos reales para cada solicitud | A3 b1-gathering-requirements b3-sprint-candidates enhancement | Bug: las estadísticas especies en agenda y modelos aprobados no son dinámicas (i.e. están quemadas) | 1.0 | Hacer que las estadísticas correspondan a datos reales para cada solicitud - Bug: las estadísticas especies en agenda y modelos aprobados no son dinámicas (i.e. están quemadas) | non_priority | hacer que las estadísticas correspondan a datos reales para cada solicitud bug las estadísticas especies en agenda y modelos aprobados no son dinámicas i e están quemadas | 0 |
406,637 | 27,574,938,713 | IssuesEvent | 2023-03-08 12:17:48 | aws/aws-sdk-js-v3 | https://api.github.com/repos/aws/aws-sdk-js-v3 | opened | PITR configuration is not supported in DynamoDB CreateTableCommand docs | documentation needs-triage | ### Describe the issue
It seems that Point in time recovery (PITR) feature is not documented on the DynamoDB Client CreateTableCommand docs.
### Links
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/createtablecommand.html | 1.0 | PITR configuration is not supported in DynamoDB CreateTableCommand docs - ### Describe the issue
It seems that Point in time recovery (PITR) feature is not documented on the DynamoDB Client CreateTableCommand docs.
### Links
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/classes/createtablecommand.html | non_priority | pitr configuration is not supported in dynamodb createtablecommand docs describe the issue it seems that point in time recovery pitr feature is not documented on the dynamodb client createtablecommand docs links | 0 |
79,842 | 3,547,883,077 | IssuesEvent | 2016-01-20 11:52:39 | pombase/pombase-chado | https://api.github.com/repos/pombase/pombase-chado | closed | exported PHAF is suspiciously tiny | bug high priority | BIG problem: The PHAFs from the last couple of builds are only 1.4K zipped, or about 116 lines unzipped.
Sorry, I should've noticed this a lot sooner -- the last sensibly sized PHAF is in pombase-build-2016-01-04-v2-l1 (pombase-build-2016-01-04-v2-l1.phaf.gz 2016-01-06 21:15 513K). | 1.0 | exported PHAF is suspiciously tiny - BIG problem: The PHAFs from the last couple of builds are only 1.4K zipped, or about 116 lines unzipped.
Sorry, I should've noticed this a lot sooner -- the last sensibly sized PHAF is in pombase-build-2016-01-04-v2-l1 (pombase-build-2016-01-04-v2-l1.phaf.gz 2016-01-06 21:15 513K). | priority | exported phaf is suspiciously tiny big problem the phafs from the last couple of builds are only zipped or about lines unzipped sorry i should ve noticed this a lot sooner the last sensibly sized phaf is in pombase build pombase build phaf gz | 1 |
174,285 | 13,465,815,181 | IssuesEvent | 2020-09-09 21:34:07 | npdarrington/Whats-Cookin | https://api.github.com/repos/npdarrington/Whats-Cookin | closed | Pantry Class Ingredients Stock TDD | iteration: 2️⃣ type: test 🔮 | As a user, I want to be able to see when I don't have enough ingredients to cook a given meal
- [ ] Create a method that will show a user when they don't have enough ingredients to cook a meal they have selected.
- [ ] Happy/Sad path if applicable. | 1.0 | Pantry Class Ingredients Stock TDD - As a user, I want to be able to see when I don't have enough ingredients to cook a given meal
- [ ] Create a method that will show a user when they don't have enough ingredients to cook a meal they have selected.
- [ ] Happy/Sad path if applicable. | non_priority | pantry class ingredients stock tdd as a user i want to be able to see when i don t have enough ingredients to cook a given meal create a method that will show a user when they don t have enough ingredients to cook a meal they have selected happy sad path if applicable | 0 |
383,709 | 26,562,171,932 | IssuesEvent | 2023-01-20 16:45:23 | adbar/simplemma | https://api.github.com/repos/adbar/simplemma | closed | Documentation: Please elaborate on the `greedy` parameter | documentation | Hi @adbar ,
could you please in you documentation go into detail on what the `greedy` parameter actually does. In the documentation, it is mentioned as if it would be self-explanatory. However, I really cannot estimate its potential/dangers.
Thanks! :smiley: | 1.0 | Documentation: Please elaborate on the `greedy` parameter - Hi @adbar ,
could you please in you documentation go into detail on what the `greedy` parameter actually does. In the documentation, it is mentioned as if it would be self-explanatory. However, I really cannot estimate its potential/dangers.
Thanks! :smiley: | non_priority | documentation please elaborate on the greedy parameter hi adbar could you please in you documentation go into detail on what the greedy parameter actually does in the documentation it is mentioned as if it would be self explanatory however i really cannot estimate its potential dangers thanks smiley | 0 |
70,865 | 8,584,772,787 | IssuesEvent | 2018-11-14 00:08:47 | mozilla/voice-web | https://api.github.com/repos/mozilla/voice-web | closed | Contribution banner alignments | Design Type: Enhancement | @Gregoor I have two pixel pushes for you on the contribution banner:
- [ ] move the `x` for dismissal to the far right and vertically center?
- [ ] move `Help report bugs` to right align with header content
- [ ] `Take a look` should sit in the middle of the space between the text and `Help report bugs`
 | 1.0 | Contribution banner alignments - @Gregoor I have two pixel pushes for you on the contribution banner:
- [ ] move the `x` for dismissal to the far right and vertically center?
- [ ] move `Help report bugs` to right align with header content
- [ ] `Take a look` should sit in the middle of the space between the text and `Help report bugs`
 | non_priority | contribution banner alignments gregoor i have two pixel pushes for you on the contribution banner move the x for dismissal to the far right and vertically center move help report bugs to right align with header content take a look should sit in the middle of the space between the text and help report bugs | 0 |
394,406 | 11,643,289,198 | IssuesEvent | 2020-02-29 12:39:26 | cybertec-postgresql/pg_timetable | https://api.github.com/repos/cybertec-postgresql/pg_timetable | closed | Add command line option --no-shell-tasks | enhancement priority | Option to disable executing of `SHELL` tasks due to security reasons
| 1.0 | Add command line option --no-shell-tasks - Option to disable executing of `SHELL` tasks due to security reasons
| priority | add command line option no shell tasks option to disable executing of shell tasks due to security reasons | 1 |
2,138 | 4,974,637,360 | IssuesEvent | 2016-12-06 07:35:08 | opentrials/opentrials | https://api.github.com/repos/opentrials/opentrials | closed | clinicaltrials.gov - data not verified recently flag | 0. Blocked Processors | clinicaltrials.gov have a "the recruitment status of this trial has not been verified recently" flag (e.g. [this trial](https://clinicaltrials.gov/ct2/show/NCT00564096)
do we collect this? if not, I think we probably need to, and then we need to change the recruitment status of these trials to "unknown" or at least find a way of flagging that it might not be true (in the same way that ct.gov does) because when we present the status right now e.g. [same trial](http://explorer.opentrials.net/trials/fd3dc841-5db3-415c-8917-c739fb1db7eb) we are still presenting as "recruiting"
| 1.0 | clinicaltrials.gov - data not verified recently flag - clinicaltrials.gov have a "the recruitment status of this trial has not been verified recently" flag (e.g. [this trial](https://clinicaltrials.gov/ct2/show/NCT00564096)
do we collect this? if not, I think we probably need to, and then we need to change the recruitment status of these trials to "unknown" or at least find a way of flagging that it might not be true (in the same way that ct.gov does) because when we present the status right now e.g. [same trial](http://explorer.opentrials.net/trials/fd3dc841-5db3-415c-8917-c739fb1db7eb) we are still presenting as "recruiting"
| non_priority | clinicaltrials gov data not verified recently flag clinicaltrials gov have a the recruitment status of this trial has not been verified recently flag e g do we collect this if not i think we probably need to and then we need to change the recruitment status of these trials to unknown or at least find a way of flagging that it might not be true in the same way that ct gov does because when we present the status right now e g we are still presenting as recruiting | 0 |
93,213 | 10,764,593,320 | IssuesEvent | 2019-11-01 08:45:41 | chanjunren/ped | https://api.github.com/repos/chanjunren/ped | opened | Example given in UG for add health command misleading | severity.Low type.DocumentationBug | 
Parameter inputs in example given does not make sense and does not correspond to it's explanation.
| 1.0 | Example given in UG for add health command misleading - 
Parameter inputs in example given does not make sense and does not correspond to it's explanation.
| non_priority | example given in ug for add health command misleading parameter inputs in example given does not make sense and does not correspond to it s explanation | 0 |
86,216 | 10,477,526,694 | IssuesEvent | 2019-09-23 21:06:34 | MDR-EMT/CC55-MitsubishiVoicedArm | https://api.github.com/repos/MDR-EMT/CC55-MitsubishiVoicedArm | opened | Pruebas de caja Blanca | documentation | Las pruebas de caja blanca se colocaran en la Wiki
- [ ] Realizar los grafos
- [ ] Calcular complejidades ciclomáticas de las pruebas
| 1.0 | Pruebas de caja Blanca - Las pruebas de caja blanca se colocaran en la Wiki
- [ ] Realizar los grafos
- [ ] Calcular complejidades ciclomáticas de las pruebas
| non_priority | pruebas de caja blanca las pruebas de caja blanca se colocaran en la wiki realizar los grafos calcular complejidades ciclomáticas de las pruebas | 0 |
1,547 | 2,644,322,270 | IssuesEvent | 2015-03-12 16:20:40 | pgmasters/backrest | https://api.github.com/repos/pgmasters/backrest | closed | archive::path != backup::path | bug (code) | Make sure that archive::path != backup::path if backup is local. | 1.0 | archive::path != backup::path - Make sure that archive::path != backup::path if backup is local. | non_priority | archive path backup path make sure that archive path backup path if backup is local | 0 |
167,337 | 6,336,671,651 | IssuesEvent | 2017-07-26 21:38:18 | derekparker/delve | https://api.github.com/repos/derekparker/delve | closed | Can't set breakpoint on certain valid lines | area/proc kind/bug priority/P0 | After running:
`thread XXXXX`
to switch the another goroutine, the command
`break file.go:xx`
produces the following error:
`Command failed: no code at /path/to/file.go:xx`
| 1.0 | Can't set breakpoint on certain valid lines - After running:
`thread XXXXX`
to switch the another goroutine, the command
`break file.go:xx`
produces the following error:
`Command failed: no code at /path/to/file.go:xx`
| priority | can t set breakpoint on certain valid lines after running thread xxxxx to switch the another goroutine the command break file go xx produces the following error command failed no code at path to file go xx | 1 |
289,935 | 25,025,065,403 | IssuesEvent | 2022-11-04 06:59:04 | wpeventmanager/wp-event-manager | https://api.github.com/repos/wpeventmanager/wp-event-manager | closed | Backend : Date & Time format - Few event are not diaply | In Testing | Backend perform below setting.

Date&Timeformat Dateformat
01-15-2022 --> d-m-Y - 2 event are display. - Which events has no end date.
01.15.2022 --> d-m-Y - 2 event are display. - Which events has no end date.
Test Event4 -location - Online events - 28-10-2022 & **No End date**
Test Event5 -location - Ahemdabad - 28-10-2022 & **No End date**
Test Event1 - Location - Ahemdabad - 28-10-2022 & 28-02-2023
Test Event2 - location - Surat - 30-10-2022 & 30-11-2022
Test Event3A -location- Online events - 28-10-2022 & 30-11-2022
Test Event3 - location - Online events - 01-11-2022 & 30-11-2022

There are six events availabel in the application.
| 1.0 | Backend : Date & Time format - Few event are not diaply - Backend perform below setting.

Date&Timeformat Dateformat
01-15-2022 --> d-m-Y - 2 event are display. - Which events has no end date.
01.15.2022 --> d-m-Y - 2 event are display. - Which events has no end date.
Test Event4 -location - Online events - 28-10-2022 & **No End date**
Test Event5 -location - Ahemdabad - 28-10-2022 & **No End date**
Test Event1 - Location - Ahemdabad - 28-10-2022 & 28-02-2023
Test Event2 - location - Surat - 30-10-2022 & 30-11-2022
Test Event3A -location- Online events - 28-10-2022 & 30-11-2022
Test Event3 - location - Online events - 01-11-2022 & 30-11-2022

There are six events availabel in the application.
| non_priority | backend date time format few event are not diaply backend perform below setting date timeformat dateformat d m y event are display which events has no end date d m y event are display which events has no end date test location online events no end date test location ahemdabad no end date test location ahemdabad test location surat test location online events test location online events there are six events availabel in the application | 0 |
59,460 | 11,965,955,319 | IssuesEvent | 2020-04-06 01:34:42 | Pokecube-Development/Pokecube-Issues-and-Wiki | https://api.github.com/repos/Pokecube-Development/Pokecube-Issues-and-Wiki | closed | Hunger threshold | 1.14.x 1.15.2 Bug - Code Fixed enhancement | **Is your feature request related to a problem? Please describe.**
Pokemons stop using the moves I want them to despite they having a lot of food on their inventories
**Describe the solution you'd like**
Making them eat their berries sooner
**Describe alternatives you've considered**
Removing the "too hungry to use XXX" thing completely
| 1.0 | Hunger threshold - **Is your feature request related to a problem? Please describe.**
Pokemons stop using the moves I want them to despite they having a lot of food on their inventories
**Describe the solution you'd like**
Making them eat their berries sooner
**Describe alternatives you've considered**
Removing the "too hungry to use XXX" thing completely
| non_priority | hunger threshold is your feature request related to a problem please describe pokemons stop using the moves i want them to despite they having a lot of food on their inventories describe the solution you d like making them eat their berries sooner describe alternatives you ve considered removing the too hungry to use xxx thing completely | 0 |
164,367 | 6,225,003,277 | IssuesEvent | 2017-07-10 15:19:35 | emfoundation/asset-manager | https://api.github.com/repos/emfoundation/asset-manager | closed | Admin - move Folder | bug priority-3 user-story | As an Admin, I want to be able to move a Folder so I can reorganise Assets. | 1.0 | Admin - move Folder - As an Admin, I want to be able to move a Folder so I can reorganise Assets. | priority | admin move folder as an admin i want to be able to move a folder so i can reorganise assets | 1 |
74,905 | 9,811,557,322 | IssuesEvent | 2019-06-13 00:17:39 | aces/Loris | https://api.github.com/repos/aces/Loris | opened | [Dashboard] Across all sites access candidate profiles does not propagate reports tab | 21.0.0 Testing Documentation | 13. Verify that if a user has 'Across all sites access candidate profiles' permission, the reports works and even more important to check without this permission. [Automate Test]
When the permission 'Across all sites access candidate profiles' is selected it does not make the reports tab viewable, the candidate tab becomes viewable. Maybe another permission is required here. The test plan needs to be updated to represent the permissions needed. | 1.0 | [Dashboard] Across all sites access candidate profiles does not propagate reports tab - 13. Verify that if a user has 'Across all sites access candidate profiles' permission, the reports works and even more important to check without this permission. [Automate Test]
When the permission 'Across all sites access candidate profiles' is selected it does not make the reports tab viewable, the candidate tab becomes viewable. Maybe another permission is required here. The test plan needs to be updated to represent the permissions needed. | non_priority | across all sites access candidate profiles does not propagate reports tab verify that if a user has across all sites access candidate profiles permission the reports works and even more important to check without this permission when the permission across all sites access candidate profiles is selected it does not make the reports tab viewable the candidate tab becomes viewable maybe another permission is required here the test plan needs to be updated to represent the permissions needed | 0 |