| Column | dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 5 – 112 |
| repo_url | string | length 34 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 957 |
| labels | string | length 4 – 795 |
| body | string | length 1 – 259k |
| index | string | 12 classes |
| text_combine | string | length 96 – 259k |
| label | string | 2 classes |
| text | string | length 96 – 252k |
| binary_label | int64 | 0 – 1 |
99,828 | 4,068,919,919 | IssuesEvent | 2016-05-27 00:17:18 | flashxyz/BookMe | https://api.github.com/repos/flashxyz/BookMe | opened | BUG: reading of room's time restrictions not working well | AdminSettingsTeam bug Development priority : medium | reading of fromTime and toTime in room options showing blank fields. | 1.0 | priority | 1 |
387,412 | 11,460,986,929 | IssuesEvent | 2020-02-07 10:53:15 | project-koku/koku | https://api.github.com/repos/project-koku/koku | closed | Missing cluster name | bug priority - medium | ## Background
We recently removed an implicit group by cluster from the API for project/node which means that we don't have a single cluster to return here.
## Proposed Solution
- We can convert to return an array of clusters. This way if the project exists in multiple clusters we can still deliver this information in a single row in the details page / API resultset.
- See https://docs.djangoproject.com/en/3.0/ref/contrib/postgres/aggregates/#arrayagg for Django use of Postgres specific array_agg method. We could use ArrayAgg to package up cluster ids and cluster aliases in our annotations and aggregates.
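The array-of-clusters shape the proposal describes can be illustrated without Django at all. The sketch below uses plain Python to collapse (project, cluster) rows into one row per project with a list of cluster ids; the project and cluster names are made up for illustration, and the real fix would use `ArrayAgg` in the ORM query instead:

```python
from itertools import groupby
from operator import itemgetter

def aggregate_clusters(rows):
    """Collapse (project, cluster_id) rows into one entry per project
    with a list of cluster ids, the shape ArrayAgg would produce."""
    rows = sorted(rows, key=itemgetter(0))
    return {
        project: [cluster for _, cluster in group]
        for project, group in groupby(rows, key=itemgetter(0))
    }

# Hypothetical data: one project spanning two clusters.
rows = [
    ("billing", "ocp-cluster-1"),
    ("billing", "ocp-cluster-2"),
    ("web", "ocp-cluster-1"),
]
print(aggregate_clusters(rows))
# -> {'billing': ['ocp-cluster-1', 'ocp-cluster-2'], 'web': ['ocp-cluster-1']}
```

This keeps a single row per project in the details resultset even when the project exists in multiple clusters.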
**Describe the bug**
As a user, I want to see the cluster associated with each OCP and OCP-on-AWS project. The UI had provided this information on the details page via expandable table rows.
Unfortunately, the cost reports no longer appear to provide the cluster. As a result, the cluster is undefined and the UI no longer shows cluster names for each project.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'OpenShift on cloud details' or 'OpenShift details'
2. Click on 'Group cost by' project
3. Expand any table row
4. See missing cluster name
**Expected behavior**
We need the cluster name associated with each project
**Screenshots**
This is the API used in the OpenShift details:
/v1/reports/openshift/costs/?delta=cost&filter[limit]=10&filter[offset]=0&filter[resolution]=monthly&filter[time_scope_units]=month&filter[time_scope_value]=-1&group_by[project]=*&order_by[cost]=desc

This is the API used in the OpenShift on cloud details:
/v1/reports/openshift/infrastructures/aws/costs/?delta=cost&filter[limit]=10&filter[offset]=0&filter[resolution]=monthly&filter[time_scope_units]=month&filter[time_scope_value]=-1&group_by[project]=*&order_by[cost]=desc
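Decoded with Python's standard library, the query strings above amount to the following parameters (illustration only):

```python
from urllib.parse import parse_qs

# The query string shared by both details-page API calls.
qs = ("delta=cost&filter[limit]=10&filter[offset]=0&filter[resolution]=monthly"
      "&filter[time_scope_units]=month&filter[time_scope_value]=-1"
      "&group_by[project]=*&order_by[cost]=desc")
params = parse_qs(qs)
print(params["group_by[project]"])    # -> ['*']
print(params["filter[resolution]"])   # -> ['monthly']
print(params["order_by[cost]"])       # -> ['desc']
```

The `group_by[project]=*` parameter is the grouping whose response rows are now missing the cluster field.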

| 1.0 | priority | 1 |
57,784 | 3,083,786,163 | IssuesEvent | 2015-08-24 11:18:46 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | closed | DOM XSS detection | Priority-Medium Type-Enhancement | ```
The current XSS detection only really works for server side issues.
For DOM XSS we really need to launch a browser, attack that.
```
Original issue reported on code.google.com by `psiinon` on 2014-10-17 09:38:50 | 1.0 | priority | 1 |
108,886 | 4,357,337,586 | IssuesEvent | 2016-08-02 01:14:31 | pombase/canto | https://api.github.com/repos/pombase/canto | closed | Add option to export all publication data with canto_export.pl | medium priority | For pombase/pombase-chado#67 we need to get the publication triage data into Chado.
Add it to the JSON output as an option. | 1.0 | priority | 1 |
494,697 | 14,263,526,980 | IssuesEvent | 2020-11-20 14:33:36 | ooni/backend | https://api.github.com/repos/ooni/backend | opened | Review runbooks for on-call readiness | ooni/backend priority/medium | This is about going through some scenarios of what might break while on-call and ensuring I have all the knowledge (and tools) to be able to handle incidents while on-call. | 1.0 | priority | 1 |
584,075 | 17,405,423,141 | IssuesEvent | 2021-08-03 04:47:19 | Vurv78/ExpressionScript | https://api.github.com/repos/Vurv78/ExpressionScript | opened | Fix indexing operator | bug parser priority.medium | For some reason even though we accept a comma token it won't move past there. It will instead stay at the comma and not be able to accept a type token.
Repro
```golo
E[1,number]
``` | 1.0 | priority | 1 |
606,678 | 18,767,459,476 | IssuesEvent | 2021-11-06 06:56:27 | argosp/trialdash | https://api.github.com/repos/argosp/trialdash | closed | Cloning trial keeps previous cloned trial name | enhancement Priority Medium done | When cloning a trial, the name of the new trial is similar to the name of the last cloned one.
Please change to [trial name] clone.
| 1.0 | priority | 1 |
746,836 | 26,048,162,110 | IssuesEvent | 2022-12-22 16:05:37 | mi6/ic-ui-kit | https://api.github.com/repos/mi6/ic-ui-kit | closed | [ic-ui-kit dx] UI Kit storybook will not run on node version >= 17 | type: bug 🐛 dependencies priority: medium type: needs investigation development dx | ## Summary of the bug
The command `npm run storybook` fails when run on any node version greater than 16.18.1 (also working on v16.13.0 LTS). Worth noting that `npm run build` actually succeeds, it's just Storybook that's having the issue.
The error returned is:
```
@ukic/react: Error: error:0308010C:digital envelope routines::unsupported
@ukic/react: at new Hash (node:internal/crypto/hash:67:19)
@ukic/react: at Object.createHash (node:crypto:135:10)
@ukic/react: at module.exports (C:\Users\Administrator\dev\ic-ui-kit\packages\react\node_modules\webpack\lib\util\createHash.js:135:53)
@ukic/react: at NormalModule._initBuildHash (C:\Users\Administrator\dev\ic-ui-kit\packages\react\node_modules\webpack\lib\NormalModule.js:417:16)
@ukic/react: at handleParseError (C:\Users\Administrator\dev\ic-ui-kit\packages\react\node_modules\webpack\lib\NormalModule.js:471:10)
@ukic/react: at C:\Users\Administrator\dev\ic-ui-kit\packages\react\node_modules\webpack\lib\NormalModule.js:503:5
@ukic/react: at C:\Users\Administrator\dev\ic-ui-kit\packages\react\node_modules\webpack\lib\NormalModule.js:358:12
@ukic/react: at C:\Users\Administrator\dev\ic-ui-kit\packages\react\node_modules\loader-runner\lib\LoaderRunner.js:373:3
@ukic/react: at iterateNormalLoaders (C:\Users\Administrator\dev\ic-ui-kit\packages\react\node_modules\loader-runner\lib\LoaderRunner.js:214:10)
@ukic/react: at C:\Users\Administrator\dev\ic-ui-kit\packages\react\node_modules\loader-runner\lib\LoaderRunner.js:205:4 {
@ukic/react: opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
@ukic/react: library: 'digital envelope routines',
@ukic/react: reason: 'unsupported',
@ukic/react: code: 'ERR_OSSL_EVP_UNSUPPORTED'
@ukic/react: }
```
## How to reproduce
Tell us the steps to reproduce the problem:
1. Use one of the failing node versions (below), say v17.9.1.
### Failing node versions
These versions are tested to fail:
- v17.9.1
- v18.12.1
- v19.2.0
### OK node versions
Version 16.13.0 and 16.18.1 are tested to work.
## 🧐 Expected behaviour
Storybook should launch without the `ERR_OSSL_EVP_UNSUPPORTED` error, as it does on Node v16.18.1
## 🖥 Desktop
Tested on Windows Server 2022 and macOS Monterey 12.6.
## Possible fix
This is reported in Storybook: https://github.com/storybookjs/storybook/issues/19692, looks like changing to Webpack 5 might fix this.
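Until a Webpack 5 upgrade lands, a workaround commonly used for this error class (not project-endorsed; only relevant on Node 17+) is to re-enable OpenSSL's legacy provider for the dev session:

```shell
# Work around ERR_OSSL_EVP_UNSUPPORTED on Node >= 17: webpack 4 hashes
# with md4, which OpenSSL 3 disabled by default. This flag restores the
# legacy provider for processes started in this shell.
export NODE_OPTIONS=--openssl-legacy-provider
# then run as usual: npm run storybook
```

Pinning the repo to Node 16 (e.g. via an `.nvmrc`) avoids needing the flag at all.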
| 1.0 | priority | 1 |
89,937 | 3,807,039,631 | IssuesEvent | 2016-03-25 04:27:24 | TheValarProject/TheValarProjectWebsite | https://api.github.com/repos/TheValarProject/TheValarProjectWebsite | opened | Make themed video player | enhancement priority-medium | Make video player with custom themed controls. This will be used later to show videos that showcase the server/mod. This should use the HTML5 video tags. | 1.0 | priority | 1 |
289,651 | 8,873,572,942 | IssuesEvent | 2019-01-11 18:34:16 | Coders-After-Dark/Reskillable | https://api.github.com/repos/Coders-After-Dark/Reskillable | closed | Is the "none" config option working? | Medium Priority bug | I have tried:
tp=reskillable:mining|3
tp:cooked_apple:*="none"
and I'm left with a cooked apple requiring mining 3. What am I doing wrong? Thanks. | 1.0 | priority | 1 |
490,047 | 14,114,980,522 | IssuesEvent | 2020-11-07 18:32:50 | apowers313/nhai | https://api.github.com/repos/apowers313/nhai | opened | Shell: run / step | priority:medium 🛠 type:tools | Create shell commands to `run` until the next break point or `step` until the next event | 1.0 | priority | 1 |
326,310 | 9,954,961,279 | IssuesEvent | 2019-07-05 09:43:59 | Baystation12/Baystation12 | https://api.github.com/repos/Baystation12/Baystation12 | closed | [Master] APLU Classic modkit has no sprite | Priority: Medium Sprites | When you apply the Classic modkit to a Ripley, then enter and eject from it, it then has no sprite. If you reenter it, it once again has a sprite. Basically, the sprite for when no one is in it is missing.
| 1.0 | priority | 1 |
666,960 | 22,393,412,241 | IssuesEvent | 2022-06-17 09:56:45 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | After clicking any office it should show user lists but it takes time and shows no user lists | Priority: medium | steps:
1. log in as Sys admin
2. make your network slow a little bit
3. Search Ibombo office
https://www.loom.com/share/25882c44fb4c4b67816039b3567872b3 | 1.0 | priority | 1 |
409,932 | 11,980,607,984 | IssuesEvent | 2020-04-07 09:37:45 | numbersprotocol/lifebox | https://api.github.com/repos/numbersprotocol/lifebox | opened | [Feature] One-time input data should be collected separately | medium priority | Android: 8.0.0
Mobile: Exodus 1s
App version: v0.1.11
One-time input data, such as height, home location, gender, and date of birth (used to derive age), should be entered on a separate, dedicated page the first time the user opens the app. | 1.0 | priority | 1 |
309,041 | 9,460,641,893 | IssuesEvent | 2019-04-17 11:32:45 | Fabian-Sommer/HeroesLounge | https://api.github.com/repos/Fabian-Sommer/HeroesLounge | closed | Rework time formats for NA users | high priority medium | Times displayed (upcoming matches, schedule matches) should be in AM/PM for sloths with NA as their region. Scheduling should also be in this format for them. | 1.0 | priority | 1 |
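The region-conditional formatting requested above can be sketched as follows (illustrative only; the site's actual stack and region codes are not shown in the issue):

```python
from datetime import datetime

def format_match_time(dt, region):
    """Render a match time as 12-hour AM/PM for NA users, 24-hour otherwise."""
    if region == "NA":
        # %I zero-pads the hour ("07"), so strip the leading zero.
        return dt.strftime("%I:%M %p").lstrip("0")
    return dt.strftime("%H:%M")

kickoff = datetime(2019, 4, 17, 19, 30)
print(format_match_time(kickoff, "NA"))  # -> 7:30 PM
print(format_match_time(kickoff, "EU"))  # -> 19:30
```

The same branch would also drive the scheduling inputs so NA users both read and enter times in AM/PM.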
777,101 | 27,268,369,855 | IssuesEvent | 2023-02-22 20:00:52 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] Strict inequality filter combined with IN condition gives extra rows | kind/bug area/ysql priority/medium status/awaiting-triage | Strict inequality filter combined with IN condition gives extra rows. With hybrid scans we push both the inequality filter and IN query as a part of scan options. However, when there is a strict inequality and an IN condition on the query, the strict inequality ends up not being adhered to completely.
Jira Link: [DB-5465](https://yugabyte.atlassian.net/browse/DB-5465)
### Description
```
./bin/ysqlsh
ysqlsh (11.2-YB-2.17.2.0-b0)
Type "help" for help.
create table test(r1 int, r2 int, primary key(r1 asc, r2 asc));
insert into test select i/5, i%5 from generate_series(1,20) i;
select * from test;
r1 | r2
----+----
0 | 1
0 | 2
0 | 3
0 | 4
1 | 0
1 | 1
1 | 2
1 | 3
1 | 4
2 | 0
2 | 1
2 | 2
2 | 3
2 | 4
3 | 0
3 | 1
3 | 2
3 | 3
3 | 4
4 | 0
(20 rows)
// Case where extra row is present -- INCORRECT
select * from test where r1 in (1, 3) and r2 > 2;
r1 | r2
----+----
1 | 3
1 | 4
3 | 2 <-- extra row
3 | 3
3 | 4
(5 rows)
// Case where extra row is present -- INCORRECT
select * from test where r1 in (0, 1, 3) and r2 > 2;
r1 | r2
----+----
0 | 3
0 | 4
1 | 3
1 | 4
3 | 2 <-- extra row
3 | 3
3 | 4
(7 rows)
```
An extra row is given for queries with both an IN condition and a range filter. For the examples shown above, the extra row `3, 2` is being printed. This issue does not persist when `yb_bypass_cond_recheck` is set to false.
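For reference, the expected result of the failing query can be reproduced against SQLite (an illustration of correct semantics only; the bug itself lives in YugabyteDB's hybrid-scan path):

```python
import sqlite3

# Same table and data as the repro above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (r1 INTEGER, r2 INTEGER, PRIMARY KEY (r1, r2))")
conn.executemany(
    "INSERT INTO test VALUES (?, ?)",
    [(i // 5, i % 5) for i in range(1, 21)],
)
rows = conn.execute(
    "SELECT r1, r2 FROM test WHERE r1 IN (1, 3) AND r2 > 2 ORDER BY r1, r2"
).fetchall()
print(rows)  # (3, 2) must NOT appear
# -> [(1, 3), (1, 4), (3, 3), (3, 4)]
```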
[DB-5465]: https://yugabyte.atlassian.net/browse/DB-5465?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | priority | 1 |
552,506 | 16,241,826,142 | IssuesEvent | 2021-05-07 10:26:54 | edwisely-ai/Marketing | https://api.github.com/repos/edwisely-ai/Marketing | closed | Social Media - Rabindranath Tagore Post on May 7th | Criticality Medium Priority Low | "The highest education is that which does not merely give us information but makes our life in harmony with all existence."
Post Content :
In 1913, Rabindranath Tagore was the first non-European to win a Nobel Prize in Literature for his poetry collection titled, 'Gitanjali' which is originally written in Bengali and later translated into English. He is also referred to as "the Bard of Bengal".
Happy Rabindranath Tagore Jayanti !!
#literature #poetry #education #highereducation #Tagore #polymath
#highered #college #teaching #students #university #nobelprize
#pandemic #staysafe


| 1.0 | priority | 1 |
222,179 | 7,430,483,234 | IssuesEvent | 2018-03-25 02:20:09 | cuappdev/podcast-ios | https://api.github.com/repos/cuappdev/podcast-ios | closed | Fix facebook friends endpoint to reflect new backend | Priority: Medium Type: Maintenance | This will fix facebook suggestions on Feed to show up to 20 friends you are not following
https://github.com/cuappdev/podcast-backend/pull/212 | 1.0 | priority | 1 |
636,205 | 20,594,960,225 | IssuesEvent | 2022-03-05 10:43:21 | AY2122S2-CS2103-W17-3/tp | https://api.github.com/repos/AY2122S2-CS2103-W17-3/tp | closed | Add skeleton to UG | type.Story priority.Medium | User story: As a user I can have an updated and useful user guide to teach me how to use the application.
Outline the rough skeleton for other teammates to refine later on. | 1.0 | priority | 1 |
149,899 | 5,730,852,303 | IssuesEvent | 2017-04-21 10:33:39 | status-im/status-react | https://api.github.com/repos/status-im/status-react | opened | Tap near options button should work as tap on the button | bug intermediate medium-priority | ### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Bug

| 1.0 | priority | 1 |
388,462 | 11,488,066,085 | IssuesEvent | 2020-02-11 13:14:46 | DigitalCampus/django-oppia | https://api.github.com/repos/DigitalCampus/django-oppia | closed | Media API, also return the individual elements/params | enhancement medium priority | Required for generating Oppia export package in ORB - since this won't be processed via Moodle | 1.0 | priority | 1 |
57,187 | 3,081,247,157 | IssuesEvent | 2015-08-22 14:38:02 | bitfighter/bitfighter | https://api.github.com/repos/bitfighter/bitfighter | closed | checkArgList failure prints empty string | 020 bug imported Priority-Medium | _From [buckyballreaction](https://code.google.com/u/buckyballreaction/) on March 11, 2014 22:56:36_
What steps will reproduce the problem? 1. call an API method that internally uses checkArgList()
2. use an invalid argument
3. see the stacktrace? it doesn't print the error
For example, do this in a levelgen, in main():
bf:subscribe(Event.ShipEnteredTheTwilightZone)
That will fail the args check and trigger a stack trace... which prints empty strings for the stack.
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=411_ | 1.0 | checkArgList failure prints empty string - _From [buckyballreaction](https://code.google.com/u/buckyballreaction/) on March 11, 2014 22:56:36_
What steps will reproduce the problem? 1. call an API method that internally uses checkArgList()
2. use an invalid argument
3. see the stacktrace? it doesn't print the error
For example, do this in a levelgen, in main():
bf:subscribe(Event.ShipEnteredTheTwilightZone)
That will fail the args check and trigger a stack trace... which prints empty strings for the stack.
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=411_ | priority | checkarglist failure prints empty string from on march what steps will reproduce the problem call an api method that internally uses checkarglist use an invalid argument see the stacktrace it doesn t print the error for example do this in a levelgen in main bf subscribe event shipenteredthetwilightzone that will fail the args check and trigger a stack trace which prints empty strings for the stack original issue | 1 |
165,817 | 6,286,842,042 | IssuesEvent | 2017-07-19 13:51:12 | FezVrasta/popper.js | https://api.github.com/repos/FezVrasta/popper.js | closed | include typescript definitions in npm package | # ENHANCEMENT DIFFICULTY: medium PRIORITY: low TARGETS: core | @FezVrasta would you consider publishing a `.d.ts` file in the NPM package for typescript consumers? [`@types/popper.js`](https://www.npmjs.com/package/@types/popper.js) is already a thing, but including types in the source package itself provides a more consistent experience. it also allows libraries to depend on your interfaces without introducing an `@types` dependency that is unnecessary for JS consumers.
i've been improving the `@types` definitions and could be ready with a PR today or tomorrow? | 1.0 | include typescript definitions in npm package - @FezVrasta would you consider publishing a `.d.ts` file in the NPM package for typescript consumers? [`@types/popper.js`](https://www.npmjs.com/package/@types/popper.js) is already a thing, but including types in the source package itself provides a more consistent experience. it also allows libraries to depend on your interfaces without introducing an `@types` dependency that is unnecessary for JS consumers.
i've been improving the `@types` definitions and could be ready with a PR today or tomorrow? | priority | include typescript definitions in npm package fezvrasta would you consider publishing a d ts file in the npm package for typescript consumers is already a thing but including types in the source package itself provides a more consistent experience it also allows libraries to depend on your interfaces without introducing an types dependency that is unnecessary for js consumers i ve been improving the types definitions and could be ready with a pr today or tomorrow | 1 |
703,371 | 24,155,869,594 | IssuesEvent | 2022-09-22 07:38:18 | enviroCar/enviroCar-app | https://api.github.com/repos/enviroCar/enviroCar-app | closed | App crash when disabling auto connect | bug 3 - Done Priority - 2 - Medium | **Description**
The app crashes when you try to deactivate the "Auto Connect" setting.
**Branches**
master, develop
**How to reproduce**
Go to the settings screen, enable "Auto Connect" and try to disable it again.
**How to fix**
TBD | 1.0 | App crash when disabling auto connect - **Description**
The app crashes when you try to deactivate the "Auto Connect" setting.
**Branches**
master, develop
**How to reproduce**
Go to the settings screen, enable "Auto Connect" and try to disable it again.
**How to fix**
TBD | priority | app crash when disabling auto connect description the app crashes when you try to deactivate the auto connect setting branches master develop how to reproduce go to the settings screen enable auto connect and try to disable it again how to fix tbd | 1 |
40,791 | 2,868,942,355 | IssuesEvent | 2015-06-05 22:06:00 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | update the uploader on bignum | bug Done Priority-Medium | <a href="https://github.com/financeCoding"><img src="https://avatars.githubusercontent.com/u/654526?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [financeCoding](https://github.com/financeCoding)**
_Originally opened as dart-lang/sdk#8533_
----
The uploader email address is no longer valid, can someone update it to financeCoding@gmail.com
http://pub.dartlang.org/packages/bignum
Thanks! | 1.0 | update the uploader on bignum - <a href="https://github.com/financeCoding"><img src="https://avatars.githubusercontent.com/u/654526?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [financeCoding](https://github.com/financeCoding)**
_Originally opened as dart-lang/sdk#8533_
----
The uploader email address is no longer valid, can someone update it to financeCoding@gmail.com
http://pub.dartlang.org/packages/bignum
Thanks! | priority | update the uploader on bignum issue by originally opened as dart lang sdk the uploader email address is no longer valid can someone update it to financecoding gmail com thanks | 1 |
781,053 | 27,420,457,848 | IssuesEvent | 2023-03-01 16:22:38 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] tserver failed starting PG during server restart | kind/bug area/ysql priority/medium | Jira Link: [DB-2626](https://yugabyte.atlassian.net/browse/DB-2626)
### Description
The tserver failed to start PG with logs:
```
...
Failed when waiting for PostgreSQL server to exit: Illegal state (yb/util/subprocess.cc:444): DoWait called on a process that is not running, waiting a bit
<repeat>
...
```
~~This issue might be related to https://github.com/yugabyte/yugabyte-db/issues/12784.~~ The log in this issue is from server. We should locate the actual in PG to tell us more details.
Ref:
discussion thread: https://yugabyte.slack.com/archives/CAR5BCH29/p1655144419422059 | 1.0 | [YSQL] tserver failed starting PG during server restart - Jira Link: [DB-2626](https://yugabyte.atlassian.net/browse/DB-2626)
### Description
The tserver failed to start PG with logs:
```
...
Failed when waiting for PostgreSQL server to exit: Illegal state (yb/util/subprocess.cc:444): DoWait called on a process that is not running, waiting a bit
<repeat>
...
```
~~This issue might be related to https://github.com/yugabyte/yugabyte-db/issues/12784.~~ The log in this issue is from server. We should locate the actual in PG to tell us more details.
Ref:
discussion thread: https://yugabyte.slack.com/archives/CAR5BCH29/p1655144419422059 | priority | tserver failed starting pg during server restart jira link description the tserver failed to start pg with logs failed when waiting for postgresql server to exit illegal state yb util subprocess cc dowait called on a process that is not running waiting a bit this issue might be related to the log in this issue is from server we should locate the actual in pg to tell us more details ref discussion thread | 1 |
71,738 | 3,367,617,952 | IssuesEvent | 2015-11-22 10:19:05 | music-encoding/music-encoding | https://api.github.com/repos/music-encoding/music-encoding | closed | Allow <head> in more places with meiHead and text-containing elements | Component: Core Schema Priority: Medium Status: Needs Patch Type: Enhancement | _From [pd...@virginia.edu](https://code.google.com/u/103686026181985548448/) on January 28, 2015 11:29:01_
Allowing \<head> in more elements that occur in the header (not just \<projectDesc> as covered by issue `#187` ) will permit the header to better capture existing (printed) thematic catalogs. In addition, wherever head is allowed, it should be allowed to occur multiple times in order to capture subheadings.
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=221_ | 1.0 | Allow <head> in more places with meiHead and text-containing elements - _From [pd...@virginia.edu](https://code.google.com/u/103686026181985548448/) on January 28, 2015 11:29:01_
Allowing \<head> in more elements that occur in the header (not just \<projectDesc> as covered by issue `#187` ) will permit the header to better capture existing (printed) thematic catalogs. In addition, wherever head is allowed, it should be allowed to occur multiple times in order to capture subheadings.
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=221_ | priority | allow in more places with meihead and text containing elements from on january allowing in more elements that occur in the header not just as covered by issue will permit the header to better capture existing printed thematic catalogs in addition wherever head is allowed it should be allowed to occur multiple times in order to capture subheadings original issue | 1 |
162,191 | 6,148,687,004 | IssuesEvent | 2017-06-27 18:23:19 | cloudius-systems/osv | https://api.github.com/repos/cloudius-systems/osv | closed | TCP cork implementation | enhancement medium priority | Postpone tcp writes in order to batch data in existing packets and reduce exists
| 1.0 | TCP cork implementation - Postpone tcp writes in order to batch data in existing packets and reduce exists
| priority | tcp cork implementation postpone tcp writes in order to batch data in existing packets and reduce exists | 1 |
509,443 | 14,731,099,039 | IssuesEvent | 2021-01-06 14:13:47 | TeamChocoQuest/ChocolateQuestRepoured | https://api.github.com/repos/TeamChocoQuest/ChocolateQuestRepoured | closed | Possible memory leaks | Priority: Medium Status: Reproduced Type: Bug Waiting for feedback | **Common sense Info**
- I play...
- [X] With a large modpack
- [ ] Only with CQR and it's dependencies
- The issue occurs in...
- [X] Singleplayer
- [X] Multiplayer
- [X] I have searched for this or a similar issue before reporting and it was either (1) not previously reported, or (2) previously fixed and I'm having the same problem.
- [X] I am using the latest version of the mod (all versions can be found on github under releases)
- [X] I read through the FAQ and i could not find something helpful ([FAQ](https://wiki.cq-repoured.net/index.php?title=FAQ))
- [ ] I reproduced the bug without any other mod's except forge, cqr and it's dependencies
- [X] The game crashes because of this bug
**Versions**
Chocolate Quest Repoured: Latest
Forge: 2854
Minecraft: 1.12.2
**Describe the bug**
Generate world without protections. CQR generates protection_region files anyways in the thousands which are continuously loaded and written to on every world save causing a massive memory leak. A world without these files or CQR data on a modpack will boot using 1GB of ram, with these files over 6-8GB on boot up. And the ram keeps increasing until it runs out of memory and deadlocks/crashes.
When updating the mod, it spams that all of the files are from an older version, even when the conversion from old to new versions is enabled in the config. Combined that with the above problem it is impossible to use a world generated or created on a newer version of CQR.
I have seen CQR itself use up to 28GB of ram on my server, no matter how much memory you give it, it will use it all up.
Every world save takes 30-60+ seconds, and in that time nothing ticks, everything's deadlocked.
This has been confirmed to be a reproducible issue by multiple server owners/people i know.
**To Reproduce**
Steps to reproduce the behavior:
This error is produced on the Tekxit 3.14 (PI) modpack by Slayer5934
1) Create world and pregenerate to spawn many dungeons.
2) Watch as the world size generated increases, the ram required to run the server sky rockets.
3) Update CQR
4) Your console now has no purpose, it only passes CQR errors. And maybe butter.
5) Enable conversion of old to new files in config, the version errors dont vanish until the structures.dat file is deleted, the entire CQR folder in the world folder needs to be purged to fix the errors.
6) Every world save will take 60 seconds or longer, basically deadlocking the server AND client (world was moved to single player, same issues occured).
**Expected behavior**
Not to generate protection_region files if protection is disabled in config.
Not to continuously load files into memory when the world is saved.
Not to cause an infinite memory leak.
Not to cause world saves to take longer then 1 second. (On modpacks with over 300 mods and a 25000x25000 world a world save takes less than a second).
| 1.0 | Possible memory leaks - **Common sense Info**
- I play...
- [X] With a large modpack
- [ ] Only with CQR and it's dependencies
- The issue occurs in...
- [X] Singleplayer
- [X] Multiplayer
- [X] I have searched for this or a similar issue before reporting and it was either (1) not previously reported, or (2) previously fixed and I'm having the same problem.
- [X] I am using the latest version of the mod (all versions can be found on github under releases)
- [X] I read through the FAQ and i could not find something helpful ([FAQ](https://wiki.cq-repoured.net/index.php?title=FAQ))
- [ ] I reproduced the bug without any other mod's except forge, cqr and it's dependencies
- [X] The game crashes because of this bug
**Versions**
Chocolate Quest Repoured: Latest
Forge: 2854
Minecraft: 1.12.2
**Describe the bug**
Generate world without protections. CQR generates protection_region files anyways in the thousands which are continuously loaded and written to on every world save causing a massive memory leak. A world without these files or CQR data on a modpack will boot using 1GB of ram, with these files over 6-8GB on boot up. And the ram keeps increasing until it runs out of memory and deadlocks/crashes.
When updating the mod, it spams that all of the files are from an older version, even when the conversion from old to new versions is enabled in the config. Combined that with the above problem it is impossible to use a world generated or created on a newer version of CQR.
I have seen CQR itself use up to 28GB of ram on my server, no matter how much memory you give it, it will use it all up.
Every world save takes 30-60+ seconds, and in that time nothing ticks, everything's deadlocked.
This has been confirmed to be a reproducible issue by multiple server owners/people i know.
**To Reproduce**
Steps to reproduce the behavior:
This error is produced on the Tekxit 3.14 (PI) modpack by Slayer5934
1) Create world and pregenerate to spawn many dungeons.
2) Watch as the world size generated increases, the ram required to run the server sky rockets.
3) Update CQR
4) Your console now has no purpose, it only passes CQR errors. And maybe butter.
5) Enable conversion of old to new files in config, the version errors dont vanish until the structures.dat file is deleted, the entire CQR folder in the world folder needs to be purged to fix the errors.
6) Every world save will take 60 seconds or longer, basically deadlocking the server AND client (world was moved to single player, same issues occured).
**Expected behavior**
Not to generate protection_region files if protection is disabled in config.
Not to continuously load files into memory when the world is saved.
Not to cause an infinite memory leak.
Not to cause world saves to take longer then 1 second. (On modpacks with over 300 mods and a 25000x25000 world a world save takes less than a second).
 | priority | possible memory leaks common sense info i play with a large modpack only with cqr and it s dependencies the issue occurs in singleplayer multiplayer i have searched for this or a similar issue before reporting and it was either not previously reported or previously fixed and i m having the same problem i am using the latest version of the mod all versions can be found on github under releases i read through the faq and i could not find something helpful i reproduced the bug without any other mod s except forge cqr and it s dependencies the game crashes because of this bug versions chocolate quest repoured latest forge minecraft describe the bug generate world without protections cqr generates protection region files anyways in the thousands which are continuously loaded and written to on every world save causing a massive memory leak a world without these files or cqr data on a modpack will boot using of ram with these files over on boot up and the ram keeps increasing until it runs out of memory and deadlocks crashes when updating the mod it spams that all of the files are from an older version even when the conversion from old to new versions is enabled in the config combined that with the above problem it is impossible to use a world generated or created on a newer version of cqr i have seen cqr itself use up to of ram on my server no matter how much memory you give it it will use it all up every world save takes seconds and in that time nothing ticks everything s deadlocked this has been confirmed to be a reproducible issue by multiple server owners people i know to reproduce steps to reproduce the behavior this error is produced on the tekxit pi modpack by create world and pregenerate to spawn many dungeons watch as the world size generated increases the ram required to run the server sky rockets update cqr your console now has no purpose it only passes cqr errors and maybe butter enable conversion of old to new files in config the version errors dont vanish until the structures dat file is deleted the entire cqr folder in the world folder needs to be purged to fix the errors every world save will take seconds or longer basically deadlocking the server and client world was moved to single player same issues occured expected behavior not to generate protection region files if protection is disabled in config not to continuously load files into memory when the world is saved not to cause an infinite memory leak not to cause world saves to take longer then second on modpacks with over mods and a world a world save takes less than a second | 1 |
251,628 | 8,019,665,305 | IssuesEvent | 2018-07-26 00:02:11 | MARKETProtocol/dApp | https://api.github.com/repos/MARKETProtocol/dApp | closed | [Deploy Contract] Retry in deploy contract throws error | Priority: Medium Status: Review Needed Type: Bug | ### Description
*Type*: Bug
### Current Behavior
Clicking retry button after cancelling transaction in Metamask throws below error --
```
Unhandled Rejection (TypeError): Cannot read property 'priceFloor' of undefined
```
### Expected Behavior
Clicking retry button after cancelling transaction in Metamask should start contract deployment flow with the provided user input | 1.0 | [Deploy Contract] Retry in deploy contract throws error - ### Description
*Type*: Bug
### Current Behavior
Clicking retry button after cancelling transaction in Metamask throws below error --
```
Unhandled Rejection (TypeError): Cannot read property 'priceFloor' of undefined
```
### Expected Behavior
Clicking retry button after cancelling transaction in Metamask should start contract deployment flow with the provided user input | priority | retry in deploy contract throws error description type bug current behavior clicking retry button after cancelling transaction in metamask throws below error unhandled rejection typeerror cannot read property pricefloor of undefined expected behavior clicking retry button after cancelling transaction in metamask should start contract deployment flow with the provided user input | 1 |
26,239 | 2,684,260,116 | IssuesEvent | 2015-03-28 20:18:12 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | post-update command is not executed | 1 star bug imported Priority-Medium | _From [sim....@gmail.com](https://code.google.com/u/105258257765487351754/) on January 09, 2013 10:56:59_
Required information! OS version: Win7 SP1 x86 ConEmu version: 130108
Far version (if you are using Far Manager): 3.0.3067
The post-update command is not executed after an automatic update *Steps to reproduction* 1. I made a .bat file that deletes extra files from the ConMan plugins and copies my Background.xml file
2. I received a notification about a new ConEmu version and started the update
3. after the update my .bat file did not run.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=877_ | 1.0 | post-update command is not executed - _From [sim....@gmail.com](https://code.google.com/u/105258257765487351754/) on January 09, 2013 10:56:59_
Required information! OS version: Win7 SP1 x86 ConEmu version: 130108
Far version (if you are using Far Manager): 3.0.3067
The post-update command is not executed after an automatic update *Steps to reproduction* 1. I made a .bat file that deletes extra files from the ConMan plugins and copies my Background.xml file
2. I received a notification about a new ConEmu version and started the update
3. after the update my .bat file did not run.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=877_ | priority | post update command is not executed from on january required information os version conemu version far version if you are using far manager the post update command is not executed after an automatic update steps to reproduction i made a bat file that deletes extra files from the conman plugins and copies my background xml file i received a notification about a new conemu version and started the update after the update my bat file did not run original issue | 1 |
22,789 | 2,650,925,099 | IssuesEvent | 2015-03-16 06:49:23 | grepper/tovid | https://api.github.com/repos/grepper/tovid | closed | mplayer renders OSD | bug imported Priority-Medium wontfix | _From [zaar...@gmail.com](https://code.google.com/u/116363330358804917839/) on January 14, 2008 16:46:05_
if /etc/mplayer/mplayer.conf is configured to always show the OSD (osdlevel
option) it will be rendered into converted videos by tovid.
either try to check for that and warn the user,
or override it with the command line parameter.
_Original issue: http://code.google.com/p/tovid/issues/detail?id=27_ | 1.0 | mplayer renders OSD - _From [zaar...@gmail.com](https://code.google.com/u/116363330358804917839/) on January 14, 2008 16:46:05_
if /etc/mplayer/mplayer.conf is configured to always show the OSD (osdlevel
option) it will be rendered into converted videos by tovid.
either try to check for that and warn the user,
or override it with the command line parameter.
_Original issue: http://code.google.com/p/tovid/issues/detail?id=27_ | priority | mplayer renders osd from on january if etc mplayer mplayer conf is configured to always show the osd osdlevel option it will be rendered into converted videos by tovid either try to check for that and warn the user or override it with the command line parameter original issue | 1 |
609,285 | 18,870,252,631 | IssuesEvent | 2021-11-13 03:21:02 | scilus/fibernavigator | https://api.github.com/repos/scilus/fibernavigator | closed | Equalize button disabled for specific T1 file | bug imported Priority-Medium OpSys-All Component-UI | _Original author: Jean.Chr...@gmail.com (October 18, 2011 19:58:22)_
<b>What steps will reproduce the problem?</b>
1. Load a specific T1 file.
The equalize button is disabled, when it should be enabled.
The cause is that, for an unknown reason, the nifti file is encoded as a pure 3D data file. The number of bands data is not set, and therefore is set to 0 by default by the nifti library. When updating the property sizer, the check fails.
Suggested fix: when the m_bands member is set to 0, fix it to 1 (anyway, this is logical). This is related to Issue #40.
_Original issue: http://code.google.com/p/fibernavigator/issues/detail?id=41_
| 1.0 | Equalize button disabled for specific T1 file - _Original author: Jean.Chr...@gmail.com (October 18, 2011 19:58:22)_
<b>What steps will reproduce the problem?</b>
1. Load a specific T1 file.
The equalize button is disabled, when it should be enabled.
The cause is that, for an unknown reason, the nifti file is encoded as a pure 3D data file. The number of bands data is not set, and therefore is set to 0 by default by the nifti library. When updating the property sizer, the check fails.
Suggested fix: when the m_bands member is set to 0, fix it to 1 (anyway, this is logical). This is related to Issue #40.
_Original issue: http://code.google.com/p/fibernavigator/issues/detail?id=41_
| priority | equalize button disabled for specific file original author jean chr gmail com october what steps will reproduce the problem load a specific file the equalize button is disabled when it should be enabled the cause is that for an unknown reason the nifti file is encoded as a pure data file the number of bands data is not set and therefore is set to by default by the nifti library when updating the property sizer the check fails suggested fix when the m bands member is set to fix it to anyway this is logical this is related to issue original issue | 1 |
615,021 | 19,211,357,751 | IssuesEvent | 2021-12-07 02:33:38 | MTWGA/thoughtworks-code-review-tools | https://api.github.com/repos/MTWGA/thoughtworks-code-review-tools | closed | Sort the member selection list alphabetically | Medium Priority | **Is your feature request related to a problem? Please describe.**
Picking a name is too cumbersome; sorting the list alphabetically would greatly narrow the range to search through.
**Describe the solution you'd like**
Sort the member selection list alphabetically.
 | 1.0 | Sort the member selection list alphabetically - **Is your feature request related to a problem? Please describe.**
Picking a name is too cumbersome; sorting the list alphabetically would greatly narrow the range to search through.
**Describe the solution you'd like**
Sort the member selection list alphabetically.
 | priority | sort the member selection list alphabetically is your feature request related to a problem please describe picking a name is too cumbersome sorting the list alphabetically would greatly narrow the range to search through describe the solution you d like sort the member selection list alphabetically | 1 |
585,006 | 17,468,620,464 | IssuesEvent | 2021-08-06 21:08:50 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | Status app is not responding when right click -> Quit the app after sleep | bug macos crash general priority 2: medium | **Steps:**
1. install latest master and create account
2. click yellow button Hide to collapse the app to dock
3. run any other app on foreground
4. lock the laptop and move it to sleep (i just close laptop)
5. wait for a while , usually several minutes
6. open the laptop and unlock
7. right click the status app in dock -> quit
**As a result**, the app can't be closed and becomes not responsive
https://user-images.githubusercontent.com/82375995/120779456-c1c54d80-c52f-11eb-9820-6cbc94f8851e.mov
| 1.0 | Status app is not responding when right click -> Quit the app after sleep - **Steps:**
1. install latest master and create account
2. click yellow button Hide to collapse the app to dock
3. run any other app on foreground
4. lock the laptop and move it to sleep (i just close laptop)
5. wait for a while , usually several minutes
6. open the laptop and unlock
7. right click the status app in dock -> quit
**As a result**, the app can't be closed and becomes not responsive
https://user-images.githubusercontent.com/82375995/120779456-c1c54d80-c52f-11eb-9820-6cbc94f8851e.mov
| priority | status app is not responding when right click quit the app after sleep steps install latest master and create account click yellow button hide to collapse the app to dock run any other app on foreground lock the laptop and move it to sleep i just close laptop wait for a while usually several minutes open the laptop and unlock right click the status app in dock quit as a result the app can t be closed and becomes not responsive | 1 |
764,313 | 26,794,482,078 | IssuesEvent | 2023-02-01 10:51:37 | ooni/probe | https://api.github.com/repos/ooni/probe | closed | Investigate feasibility of using Dart for the CLI | priority/medium research prototype ooni/probe-cli ooni/probe-engine | This issue is about exploring a potential future development direction where we try to integrate tightly Android, iOS, Desktop, and CLI by using Dart and Flutter and by using FFI to access the OONI engine's functionality.
We are [already evaluating](https://github.com/aanorbel/probe-shared) whether to use Flutter for all the user-facing applications to increase code reuse. If we determine that this is doable and desirable, then we'll end up in a situation like the following one:

With this proposal, we hope to consolidate lots of algorithms inside a common library (called “UI Library”). We do not mean to modify the way in which we generate mobile bindings (that is, [go mobile](https://github.com/golang/mobile)). This means that the UI Library will need to implement three distinct drivers for interacting with the OONI Engine:
1. We need a CLI driver for the desktop app that executes [ooniprobe](https://github.com/ooni/probe-cli/) and parses its output.
2. We also need an Android driver for interfacing with Java code generated using go mobile.
3. We also need an iOS driver for interfacing with Objective-C code generated using go mobile.
While this restructuring would certainly reduce the overall complexity, observing this consolidation also begs the question of whether we could further reduce complexity. Let’s explore this possibility.
Maybe a possibility would be to stop using go mobile and to get rid of the Go implementation of `ooniprobe`. We could instead use FFI and a C API to make the same API and data ABI available to all Flutter/Dart clients.
Here's how it would look like:

The objective of the work described in this issue is thus to evaluate this alternative design. We need to understand what API we can expose from Go and which strategy to use to auto-generate messages parsers and serializers.
We have already experimented with this concept in https://github.com/ooni/probe-cli/pull/849, https://github.com/ooni/probe-cli/pull/850, https://github.com/ooni/probe-cli/pull/851. So, far, it seems a good idea to use protobuf. We should continue discussing these prototypes and move a bit forward by trying this code out inside https://github.com/aanorbel/probe-shared. | 1.0 | Investigate feasibility of using Dart for the CLI - This issue is about exploring a potential future development direction where we try to integrate tightly Android, iOS, Desktop, and CLI by using Dart and Flutter and by using FFI to access the OONI engine's functionality.
We are [already evaluating](https://github.com/aanorbel/probe-shared) whether to use Flutter for all the user-facing applications to increase code reuse. If we determine that this is doable and desirable, then we'll end up in a situation like the following one:

With this proposal, we hope to consolidate lots of algorithms inside a common library (called “UI Library”). We do not mean to modify the way in which we generate mobile bindings (that is, [go mobile](https://github.com/golang/mobile)). This means that the UI Library will need to implement three distinct drivers for interacting with the OONI Engine:
1. We need a CLI driver for the desktop app that executes [ooniprobe](https://github.com/ooni/probe-cli/) and parses its output.
2. We also need an Android driver for interfacing with Java code generated using go mobile.
3. We also need an iOS driver for interfacing with Objective-C code generated using go mobile.
While this restructuring would certainly reduce the overall complexity, observing this consolidation also begs the question of whether we could further reduce complexity. Let’s explore this possibility.
Maybe a possibility would be to stop using go mobile and to get rid of the Go implementation of `ooniprobe`. We could instead use FFI and a C API to make the same API and data ABI available to all Flutter/Dart clients.
Here's how it would look like:

The objective of the work described in this issue is thus to evaluate this alternative design. We need to understand what API we can expose from Go and which strategy to use to auto-generate messages parsers and serializers.
We have already experimented with this concept in https://github.com/ooni/probe-cli/pull/849, https://github.com/ooni/probe-cli/pull/850, https://github.com/ooni/probe-cli/pull/851. So far, it seems a good idea to use protobuf. We should continue discussing these prototypes and move a bit forward by trying this code out inside https://github.com/aanorbel/probe-shared.
which strategy to use to auto generate messages parsers and serializers we have already experimented with this concept in so far it seems a good idea to use protobuf we should continue discussing these prototypes and move a bit forward by trying this code out inside | 1 |
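The "single entry point exchanging serialized messages" design the prototypes explore can be sketched briefly. The following is a hypothetical Python stand-in, not OONI's actual API: `engine_call` is an invented name for the one C-ABI function, and JSON stands in for the protobuf used in the linked pull requests.

```python
import json

def engine_call(request_bytes: bytes) -> bytes:
    """Hypothetical single entry point: every client (Dart via FFI, the CLI,
    tests) sends a serialized request and receives a serialized reply, so no
    per-platform bindings are needed -- only a shared message schema."""
    request = json.loads(request_bytes)  # the prototypes use protobuf here
    if request.get("method") == "version":
        reply = {"ok": True, "version": "0.0.0-sketch"}
    else:
        reply = {"ok": False, "error": "unknown method"}
    return json.dumps(reply).encode()
```

A Dart client would serialize its request with the same schema, invoke the function through FFI, and deserialize the reply; the parsers and serializers on both sides would be auto-generated from the shared message definitions.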
429,355 | 12,423,169,537 | IssuesEvent | 2020-05-24 03:34:48 | scprogramming/Python-Security-Analysis-Toolkit | https://api.github.com/repos/scprogramming/Python-Security-Analysis-Toolkit | opened | Consider GUI/User Interface in General | Analysis Medium Priority | Django is not my forte, so I should consider how I want user interaction with the application to look going forward | 1.0 | Consider GUI/User Interface in General - Django is not my forte, so I should consider how I want user interaction with the application to look going forward | priority | consider gui user interface in general django is not my forte so i should consider how i want user interaction with the application to look going forward | 1 |
27,702 | 2,695,255,959 | IssuesEvent | 2015-04-02 03:05:30 | LK/nullpomino | https://api.github.com/repos/LK/nullpomino | closed | Netadmin login should be disabled on default | auto-migrated Priority-Medium Type-Enhancement | ```
Not like netadmin can do anything really dangerous, but it is not good practice
to have remote admin login with default password in default config.
```
Original issue reported on code.google.com by `w.kowa...@gmail.com` on 21 Jan 2012 at 11:38 | 1.0 | Netadmin login should be disabled on default - ```
Not like netadmin can do anything really dangerous, but it is not good practice
to have remote admin login with default password in default config.
```
Original issue reported on code.google.com by `w.kowa...@gmail.com` on 21 Jan 2012 at 11:38 | priority | netadmin login should be disabled on default not like netadmin can do anything really dangerous but it is not good practice to have remote admin login with default password in default config original issue reported on code google com by w kowa gmail com on jan at | 1 |
37,041 | 2,814,466,234 | IssuesEvent | 2015-05-18 20:12:31 | geoffhumphrey/brewcompetitiononlineentry | https://api.github.com/repos/geoffhumphrey/brewcompetitiononlineentry | closed | Contact form on website broken? | auto-migrated Priority-Medium Type-Other | ```
What steps will reproduce the problem?
1. Send a message using the contact form at http://brewcompetition.com/contact
2. Never receive reply.
Just curious if you're getting those messages or not... I've sent several over
the last few months and have not received any response.
```
Original issue reported on code.google.com by `br...@brewdrinkrepeat.com` on 14 Apr 2015 at 6:34 | 1.0 | Contact form on website broken? - ```
What steps will reproduce the problem?
1. Send a message using the contact form at http://brewcompetition.com/contact
2. Never receive reply.
Just curious if you're getting those messages or not... I've sent several over
the last few months and have not received any response.
```
Original issue reported on code.google.com by `br...@brewdrinkrepeat.com` on 14 Apr 2015 at 6:34 | priority | contact form on website broken what steps will reproduce the problem send a message using the contact form at never receive reply just curious if you re getting those messages or not i ve sent several over the last few months and have not received any response original issue reported on code google com by br brewdrinkrepeat com on apr at | 1 |
627,367 | 19,902,886,167 | IssuesEvent | 2022-01-25 09:48:44 | canonical-web-and-design/ubuntu.com | https://api.github.com/repos/canonical-web-and-design/ubuntu.com | closed | ubuntu.com is vulnerable to click jacking | Priority: Medium | Hello,
We got an RT (135497) telling us that ubuntu.com was vulnerable to "click jacking", basically because we don't send an X-Frame-Options header (https://cheatsheetseries.owasp.org/cheatsheets/Clickjacking_Defense_Cheat_Sheet.html)
Could you please take a look and fix, if appropriate ?
Thanks ! | 1.0 | ubuntu.com is vulnerable to click jacking - Hello,
We got an RT (135497) telling us that ubuntu.com was vulnerable to "click jacking", basically because we don't send an X-Frame-Options header (https://cheatsheetseries.owasp.org/cheatsheets/Clickjacking_Defense_Cheat_Sheet.html)
Could you please take a look and fix, if appropriate ?
Thanks ! | priority | ubuntu com is vulnerable to click jacking hello we got an rt telling us that ubuntu com was vulnerable to click jacking basically because we don t send an x frame options header could you please take a look and fix if appropriate thanks | 1 |
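For context, the OWASP defense linked in the report amounts to sending an `X-Frame-Options` header (and, for modern browsers, a `frame-ancestors` CSP directive) with every response. A minimal sketch as generic WSGI middleware — illustrative only, not ubuntu.com's actual stack:

```python
def add_frame_headers(app):
    """Wrap a WSGI app so every response carries anti-clickjacking headers."""
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            # Append the headers to whatever the inner app already set.
            headers = list(headers) + [
                ("X-Frame-Options", "DENY"),
                ("Content-Security-Policy", "frame-ancestors 'none'"),
            ]
            return start_response(status, headers, exc_info)
        return app(environ, start)
    return wrapped
```

Wrapping the site's WSGI application with `add_frame_headers(app)` would make every response refuse framing, whatever framework serves it.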
220,740 | 7,370,347,098 | IssuesEvent | 2018-03-13 08:05:23 | teamforus/research-and-development | https://api.github.com/repos/teamforus/research-and-development | closed | POC: state/payment channels | fill-template priority-medium proposal | ## poc-state-and-payment-channels
### Background / Context
**Goal/user story:**
**More:**
- Perform safe transactions off-chain, backed by a transaction on-chain.
### Hypothesis:
### Method
*documentation/code*
### Result
*present findings*
### Recommendation
*write recommendation*
| 1.0 | POC: state/payment channels - ## poc-state-and-payment-channels
### Background / Context
**Goal/user story:**
**More:**
- Perform safe transactions off-chain, backed by a transaction on-chain.
### Hypothesis:
### Method
*documentation/code*
### Result
*present findings*
### Recommendation
*write recommendation*
| priority | poc state payment channels poc state and payment channels background context goal user story more perform safe transactions off chain backed by a transaction on chain hypothesis method documentation code result present findings recommendation write recomendation | 1 |
25,933 | 2,684,049,386 | IssuesEvent | 2015-03-28 16:13:53 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Debug builds complain about "Version not available" | 1 star bug imported Priority-Medium | _From [thecybershadow](https://code.google.com/u/thecybershadow/) on March 02, 2012 10:52:32_
"ConEmu latest version location info" points to:
file://T:\VCProject\FarPlugin\ ConEmu \Maximus5\version.ini
I assume this is only the case in debug builds. (Do the alpha releases now contain only debug builds?) The debug build is the first one in the list available via the "Latest version" link.
It is not entirely obvious where this error message comes from.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=498_ | 1.0 | Debug builds complain about "Version not available" - _From [thecybershadow](https://code.google.com/u/thecybershadow/) on March 02, 2012 10:52:32_
"ConEmu latest version location info" points to:
file://T:\VCProject\FarPlugin\ ConEmu \Maximus5\version.ini
I assume this is only the case in debug builds. (Do the alpha releases now contain only debug builds?) The debug build is the first one in the list available via the "Latest version" link.
It is not entirely obvious where this error message comes from.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=498_ | priority | отладочные версии ругаются насчет version not available from on march conemu latest version location info указывает на file t vcproject farplugin conemu version ini предполагаю что это так только в отладочных версиях в альфа релизах теперь только отладочные сборки oтладочная сборка первая во списке доступным по ссылке latest version не совсем очевидно откуда это сообщение об ошибке original issue | 1 |
523,316 | 15,178,153,350 | IssuesEvent | 2021-02-14 14:19:24 | wevote/WebApp | https://api.github.com/repos/wevote/WebApp | closed | Add interface for uploading your own Profile Photo and Profile Banner | Difficulty: Medium Priority: 1 | Please implement the React interface for these on the "Settings" > "General Settings" page:
1. uploading your own profile photo
2. your own profile banner
3. Choosing which profile photo you would like to be displayed
4. Choose which profile banner you would like to have displayed
(There is no need to implement the API calls.)
Please add to: http://localhost:3000/settings/profile
src/js/components/Settings/SettingsProfile.jsx
I would recommend implementing the Profile picture interface and the Profile Banner interface in their own components.
PLEASE NOTE: In these mockups, the photos are round, but in We Vote all voter photos are square (we reserve round photos for candidates)
NOTE 2: Given time (not part of this issue), I would like us to find a package that allows drag-and-drop upload of a photo on Desktop
NOTE 3: Given time (not part of this issue), I would like to find a react tool that lets us crop and resize photos before we submit them to the API server.
DESKTOP

MOBILE

We have code that lets you upload a photo on the "Logo & Sharing" page -- there is good example code there:
http://localhost:3000/settings/sharing
See: src/js/components/Settings/SettingsSharing.jsx

| 1.0 | Add interface for uploading your own Profile Photo and Profile Banner - Please implement the React interface for these on the "Settings" > "General Settings" page:
1. uploading your own profile photo
2. your own profile banner
3. Choosing which profile photo you would like to be displayed
4. Choose which profile banner you would like to have displayed
(There is no need to implement the API calls.)
Please add to: http://localhost:3000/settings/profile
src/js/components/Settings/SettingsProfile.jsx
I would recommend implementing the Profile picture interface and the Profile Banner interface in their own components.
PLEASE NOTE: In these mockups, the photos are round, but in We Vote all voter photos are square (we reserve round photos for candidates)
NOTE 2: Given time (not part of this issue), I would like us to find a package that allows drag-and-drop upload of a photo on Desktop
NOTE 3: Given time (not part of this issue), I would like to find a react tool that lets us crop and resize photos before we submit them to the API server.
DESKTOP

MOBILE

We have code that lets you upload a photo on the "Logo & Sharing" page -- there is good example code there:
http://localhost:3000/settings/sharing
See: src/js/components/Settings/SettingsSharing.jsx

| priority | add interface for uploading your own profile photo and profile banner please implement the react interface for these on the settings general settings page uploading your own profile photo your own profile banner choosing which profile photo you would like to be displayed choose with profile banner you would like to have displayed there is no need to implement the api calls please add to src js components settings settingsprofile jsx i would recommend implementing the profile picture interface and the profile banner interface in their own components please note in these mockups the photos are round but in we vote all voter photos are square we reserve round photos for candidates note given time not part of this issue i would like us to find a package that allows drag and drop upload of a photo on desktop note given time not part of this issue i would like to find a react tool that lets us crop and resize photos before we submit them to the api server desktop mobile we have code that lets you upload a photo on the logo sharing page there is good example code there see src js components settings settingssharing jsx | 1 |
124,191 | 4,893,190,877 | IssuesEvent | 2016-11-18 22:13:55 | Microsoft/msbuild | https://api.github.com/repos/Microsoft/msbuild | closed | GetItemProvenance should not unescape arguments | Cost-Medium (2 days - 2 weeks) Feature - Globbing Needs Review Priority 1 | GetItemProvenance should not unescape the strings it gets. The following examples show what should match and what should not match. With the current implementation all the cases match:
Include=`”%61b%63”`
Should match: GetItemProvenance(“abc”)
Should not Match: GetItemProvenance(“ab%63”)
Include=`”a?c”`
Should match: GetItemProvenance(“abc”)
Should not match: GetItemProvenance(“a%62c”)
Include=`”a?%63”`
Should match: GetItemProvenance(“abc”)
Should not match: GetItemProvenance(“a%62c”)
Include=”a*c”
Should match: GetItemProvenance(“abcdec”)
Should match due to `*` glob: GetItemProvenance(“a%62c”)
Weird cases:
Include=`”%62”` // %62 is b
Should match: GetItemProvenance(“b”)
Should match because the string is the same: GetItemProvenance(“%62”) | 1.0 | GetItemProvenance should not unescape arguments - GetItemProvenance should not unescape the strings it gets. The following examples show what should match and what should not match. With the current implementation all the cases match:
Include=`”%61b%63”`
Should match: GetItemProvenance(“abc”)
Should not Match: GetItemProvenance(“ab%63”)
Include=`”a?c”`
Should match: GetItemProvenance(“abc”)
Should not match: GetItemProvenance(“a%62c”)
Include=`”a?%63”`
Should match: GetItemProvenance(“abc”)
Should not match: GetItemProvenance(“a%62c”)
Include=”a*c”
Should match: GetItemProvenance(“abcdec”)
Should match due to `*` glob: GetItemProvenance(“a%62c”)
Weird cases:
Include=`”%62”` // %62 is b
Should match: GetItemProvenance(“b”)
Should match because the string is the same: GetItemProvenance(“%62”) | priority | getitemprovenance should not unescape arguments getitemprovenance should not unescape the strings it gets the following examples show what should match and what should not match with the current implementation all the cases match include ” ” should match getitemprovenance “abc” should not match getitemprovenance “ab ” include ”a c” should match getitemprovenance “abc” should not match getitemprovenance “a ” include ”a ” should match getitemprovenance “abc” should not match getitemprovenance “a ” include ”a c” should match getitemprovenance “abcdec” should match due to glob getitemprovenance “a ” weird cases include ” ” is b should match getitemprovenance “b” should match because the string is the same getitemprovenance “ ” | 1 |
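One way to state the intended semantics: a query matches only if it is byte-identical to the include spec, or if the spec — with its own `%XX` escapes decoded but the query's escapes left intact — glob-matches the raw query string. The following is a hypothetical Python sketch of that rule (not MSBuild's implementation); it reproduces every example above.

```python
import fnmatch
import re

def decode_escapes(s):
    # Replace MSBuild-style %XX hex escapes (e.g. "%62" -> "b") with their
    # literal character. Only applied to the include spec, never the query.
    return re.sub(r"%([0-9A-Fa-f]{2})", lambda m: chr(int(m.group(1), 16)), s)

def provenance_matches(include_spec, item):
    # Identical raw strings always match (Include="%62" queried as "%62").
    if include_spec == item:
        return True
    # Otherwise decode escapes in the include spec only and glob-match the
    # decoded spec against the raw, still-escaped item string.
    return fnmatch.fnmatchcase(item, decode_escapes(include_spec))
```

Every case in the issue falls out of this rule: `a*c` still matches `a%62c` because the `*` glob spans the raw escape sequence, while `a?c` does not, since `?` matches exactly one raw character.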
182,668 | 6,672,381,207 | IssuesEvent | 2017-10-04 11:23:11 | R-and-LaTeX/CorsoDiLatex | https://api.github.com/repos/R-and-LaTeX/CorsoDiLatex | closed | Exercises for the `Comandi base 1` slides | in progress priority:medium type:content | **Description**
- [x] Exercise describing the desired output for lists
- [x] Exercise showing the list output
- [x] Exercise describing the desired output for text formatting
- [x] Exercise showing the formatting output
- [x] Exercise describing the desired output for page orientation
- [x] Exercise showing the page-orientation output
 | 1.0 | Exercises for the `Comandi base 1` slides - **Description**
- [x] Exercise describing the desired output for lists
- [x] Exercise showing the list output
- [x] Exercise describing the desired output for text formatting
- [x] Exercise showing the formatting output
- [x] Exercise describing the desired output for page orientation
- [x] Exercise showing the page-orientation output
| priority | esercizi per le slide comandi base description esercizio in cui si descrive cosa si vuole in output liste esercizio in cui si fa vedere output liste esercizio in cui si descrive cosa si vuole in output formattazione esercizio in cui si fa vedere output formattazione esercizio in cui si descrive cosa si vuole in output orientamento pagine esercizio in cui si fa vedere output orientamento pagine | 1 |
697,278 | 23,933,617,846 | IssuesEvent | 2022-09-10 23:02:43 | chaotic-aur/packages | https://api.github.com/repos/chaotic-aur/packages | closed | [Request] thorium-bin | request:new-pkg priority:medium | ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/thorium-bin
### Utility this package has for you
ebook reader
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
No, but for a few.
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [ ] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_ | 1.0 | [Request] thorium-bin - ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/thorium-bin
### Utility this package has for you
ebook reader
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
No, but for a few.
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [ ] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_ | priority | thorium bin link to the package s in the aur utility this package has for you ebook reader do you consider the package s to be useful for every chaotic aur user no but for a few do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information no response | 1 |
777,217 | 27,271,715,791 | IssuesEvent | 2023-02-22 23:05:29 | 1T57H3F0X/loki | https://api.github.com/repos/1T57H3F0X/loki | closed | Slow run time. | priority: medium status: approved type: bug | Benchmarking showed that it was taking up to _16s_ on some systems to run.

| 1.0 | Slow run time. - Benchmarking showed that it was taking up to _16s_ on some systems to run.

| priority | slow run time benchmarking showed that it was taking up to on some systems to run | 1 |
250,341 | 7,975,595,344 | IssuesEvent | 2018-07-17 09:50:13 | xAPI-vle/moodle-logstore_xapi | https://api.github.com/repos/xAPI-vle/moodle-logstore_xapi | closed | SCORM statements not passing activity name or completed status | priority:medium status:unconfirmed type:bug | **Description**
- {{Brief description of your bug}}
**Version**
- {{branch}} at {{commit}} on {{version - found in your copy of the VERSION file}}
**Steps to reproduce the bug**
1. {{steps}}
**Expected behaviour**
- {{feature}} should be {{expectedResult}} because {{reason}}.
**Actual behaviour**
- {{feature}} is {{actualResult}}.
**Server information**
- {{database}} with {{authentication}}.
**Client information**
- OS: {{operatingSystem}}
- Browser: {{browser}} {{version}}
**Additional information**
- {{additionalInfo}})
| 1.0 | SCORM statements not passing activity name or completed status - **Description**
- {{Brief description of your bug}}
**Version**
- {{branch}} at {{commit}} on {{version - found in your copy of the VERSION file}}
**Steps to reproduce the bug**
1. {{steps}}
**Expected behaviour**
- {{feature}} should be {{expectedResult}} because {{reason}}.
**Actual behaviour**
- {{feature}} is {{actualResult}}.
**Server information**
- {{database}} with {{authentication}}.
**Client information**
- OS: {{operatingSystem}}
- Browser: {{browser}} {{version}}
**Additional information**
- {{additionalInfo}})
| priority | scorm statements not passing activity name or completed status description brief description of your bug version branch at commit on version found in your copy of the version file steps to reproduce the bug steps expected behaviour feature should be expectedresult because reason actual behaviour feature is actualresult server information database with authentication client information os operatingsystem browser browser version additional information additionalinfo | 1 |
693,572 | 23,782,293,919 | IssuesEvent | 2022-09-02 06:41:36 | ansible-collections/azure | https://api.github.com/repos/ansible-collections/azure | closed | ERROR: azure_rm_keyvault fails when enableSoftDelete is False | medium_priority work in | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Using the `enable_soft_delete` parameter of the `azure_rm_keyvault` module for an entirely new key vault results in the following error:
```error
"msg": "Error creating the Key Vault instance: Azure Error: BadRequest\nMessage: The property \"enableSoftDelete\" can be set to false only for creating new vault. Enabling the 'soft delete' functionality is an irreversible action."
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`azure_rm_keyvault.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.8
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
```
I have also gotten this error when testing on Ansible v2.9.13.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /root/.ansible/.vault_password
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
New Azure environment containing only a resource group and virtual network.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: azure
# NOTE: Make certain prerequisites and azcollection modules
# are installed per the documentation on Ansible Galaxy:
# https://galaxy.ansible.com/azure/azcollection
collections:
- azure.azcollection
gather_facts: no
vars:
client_abbreviation: acme
client_environment: tst
deployment: "{{ client_abbreviation }}{{ client_environment }}"
azure:
location: northcentralus
resource_group: "{{ deployment }}"
storage_account: "{{ deployment }}sadeploy"
keyvaults:
- name: acmetstkvdemo
enable_soft_delete: no
tags:
override: virtualnetwork
access_policy:
- object_id: "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
application_id: "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
certificates:
- get
- list
- delete
- create
- import
- update
- managecontacts
- getissuers
- listissuers
- setissuers
- deleteissuers
- manageissuers
- recover
- purge
keys:
- encrypt
- decrypt
- wrapkey
- unwrapkey
- sign
- verify
- get
- list
- create
- update
- import
- delete
- backup
- restore
- recover
- purge
secrets:
- get
- list
- set
- delete
- backup
- restore
- recover
- purge
tasks:
- name: "{{ azure.keyvaults[0].name }} : Key Vault"
azure_rm_keyvault:
resource_group: "{{ azure.keyvaults[0].resource_group | default(azure.resource_group) }}"
vault_name: "{{ azure.keyvaults[0].name }}"
enabled_for_deployment: "{{ azure.keyvaults[0].enabled_for_deployment | default(false) }}"
enabled_for_disk_encryption: "{{ azure.keyvaults[0].enabled_for_disk_encryption | default(false) }}"
enabled_for_template_deployment: "{{ azure.keyvaults[0].enabled_for_template_deployment | default(false) }}"
enable_soft_delete: "{{ azure.keyvaults[0].enable_soft_delete | default(true) }}"
vault_tenant: "{{ azure.keyvaults[0].vault_tenant | default(lookup('env', 'AZURE_TENANT')) }}"
sku:
family: "{{ (azure.keyvaults[0].sku.family) if (azure.keyvaults[0].sku.family is defined)
else (false) | default(omit, true) }}"
name: "{{ (azure.keyvaults[0].sku.name | lower) if (azure.keyvaults[0].sku.name is defined)
else ('standard') }}"
access_policies: "{{ access_policy }}"
...
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expected results, depending on status of environment:
* Creation of a new key vault when one does not exist.
* Ignoring of the `enable_soft_delete` parameter when the key vault does exist.
In other words, even though the Azure API does not allow `enableSoftDelete` to be set when updating an existing key vault, as an Ansible user I would expect the key vault module to handle that for me. I would not expect to see an error, either on creation or on subsequent runs.
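One way the module could deliver that behavior is to strip the flag from the request body whenever the vault already exists. The sketch below is a hypothetical helper illustrating the idea — `build_vault_properties` is an invented name, not code from the collection:

```python
def build_vault_properties(desired, existing=None):
    """Build the create-or-update properties for a key vault.

    On updates (existing is not None), drop enableSoftDelete entirely: the
    Azure API rejects the property on an existing vault, so keeping the
    server-side value silently avoids the BadRequest error above.
    """
    props = dict(desired)
    if existing is not None:
        props.pop("enableSoftDelete", None)
    return props
```

On first creation the flag passes through unchanged, so `enable_soft_delete: no` still works for brand-new vaults; on every later run the module would simply stop sending it.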
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`ansible-playbook` output:
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [acmetstkvdemo : Key Vault] ************************************************************************************************************************************************************
task path: /root/deploy/kv.yml:68
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248 && echo ansible-tmp-1600107793.8676393-1789-257390481661248="` echo /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/azure/azcollection/plugins/modules/azure_rm_keyvault.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1782dgag9rfy/tmp5voje51j TO /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/AnsiballZ_azure_rm_keyvault.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/ /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/AnsiballZ_azure_rm_keyvault.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/AnsiballZ_azure_rm_keyvault.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_azure_rm_keyvault_payload_gdb_x4tm/ansible_azure_rm_keyvault_payload.zip/ansible_collections/azure/azcollection/plugins/modules/azure_rm_keyvault.py", line 451, in create_update_keyvault
File "/usr/local/lib/python3.6/dist-packages/azure/mgmt/keyvault/v2018_02_14/operations/vaults_operations.py", line 127, in create_or_update
**operation_config
File "/usr/local/lib/python3.6/dist-packages/azure/mgmt/keyvault/v2018_02_14/operations/vaults_operations.py", line 81, in _create_or_update_initial
raise exp
[WARNING]: Azure API profile latest does not define an entry for KeyVaultManagementClient
fatal: [azure]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"access_policies": [
{
"application_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"object_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"permissions": {
"certificates": [
"get",
"list",
"delete",
"create",
"import",
"update",
"managecontacts",
"getissuers",
"listissuers",
"setissuers",
"deleteissuers",
"manageissuers",
"recover",
"purge"
],
"keys": [
"encrypt",
"decrypt",
"wrapkey",
"unwrapkey",
"sign",
"verify",
"get",
"list",
"create",
"update",
"import",
"delete",
"backup",
"restore",
"recover",
"purge"
],
"secrets": [
"get",
"list",
"set",
"delete",
"backup",
"restore",
"recover",
"purge"
],
"storage": null
},
"tenant_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
],
"ad_user": null,
"adfs_authority_url": null,
"api_profile": "latest",
"append_tags": true,
"auth_source": "auto",
"cert_validation_mode": null,
"client_id": null,
"cloud_environment": "AzureCloud",
"enable_soft_delete": false,
"enabled_for_deployment": false,
"enabled_for_disk_encryption": false,
"enabled_for_template_deployment": false,
"location": null,
"password": null,
"profile": null,
"recover_mode": null,
"resource_group": "acmetst",
"secret": null,
"sku": {
"name": "standard"
},
"state": "present",
"subscription_id": null,
"tags": null,
"tenant": null,
"vault_name": "acmetstkvdemo",
"vault_tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
},
"msg": "Error creating the Key Vault instance: Azure Error: BadRequest\nMessage: The property \"enableSoftDelete\" can be set to false only for creating new vault. Enabling the 'soft delete' functionality is an irreversible action."
}
```
| 1.0 | ERROR: azure_rm_keyvault fails when enableSoftDelete is False - <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Using the `enable_soft_delete` parameter of the `azure_rm_keyvault` module for an entirely new key vault results in the following error:
```error
"msg": "Error creating the Key Vault instance: Azure Error: BadRequest\nMessage: The property \"enableSoftDelete\" can be set to false only for creating new vault. Enabling the 'soft delete' functionality is an irreversible action."
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`azure_rm_keyvault.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.8
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
```
I have also gotten this error when testing on Ansible v2.9.13.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /root/.ansible/.vault_password
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
New Azure environment containing only a resource group and virtual network.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: azure
# NOTE: Make certain prerequisites and azcollection modules
# are installed per the documentation on Ansible Galaxy:
# https://galaxy.ansible.com/azure/azcollection
collections:
- azure.azcollection
gather_facts: no
vars:
client_abbreviation: acme
client_environment: tst
deployment: "{{ client_abbreviation }}{{ client_environment }}"
azure:
location: northcentralus
resource_group: "{{ deployment }}"
storage_account: "{{ deployment }}sadeploy"
keyvaults:
- name: acmetstkvdemo
enable_soft_delete: no
tags:
override: virtualnetwork
access_policy:
- object_id: "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
application_id: "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
certificates:
- get
- list
- delete
- create
- import
- update
- managecontacts
- getissuers
- listissuers
- setissuers
- deleteissuers
- manageissuers
- recover
- purge
keys:
- encrypt
- decrypt
- wrapkey
- unwrapkey
- sign
- verify
- get
- list
- create
- update
- import
- delete
- backup
- restore
- recover
- purge
secrets:
- get
- list
- set
- delete
- backup
- restore
- recover
- purge
tasks:
- name: "{{ azure.keyvaults[0].name }} : Key Vault"
azure_rm_keyvault:
resource_group: "{{ azure.keyvaults[0].resource_group | default(azure.resource_group) }}"
vault_name: "{{ azure.keyvaults[0].name }}"
enabled_for_deployment: "{{ azure.keyvaults[0].enabled_for_deployment | default(false) }}"
enabled_for_disk_encryption: "{{ azure.keyvaults[0].enabled_for_disk_encryption | default(false) }}"
enabled_for_template_deployment: "{{ azure.keyvaults[0].enabled_for_template_deployment | default(false) }}"
enable_soft_delete: "{{ azure.keyvaults[0].enable_soft_delete | default(true) }}"
vault_tenant: "{{ azure.keyvaults[0].vault_tenant | default(lookup('env', 'AZURE_TENANT')) }}"
sku:
family: "{{ (azure.keyvaults[0].sku.family) if (azure.keyvaults[0].sku.family is defined)
else (false) | default(omit, true) }}"
name: "{{ (azure.keyvaults[0].sku.name | lower) if (azure.keyvaults[0].sku.name is defined)
else ('standard') }}"
access_policies: "{{ access_policy }}"
...
```
<!--- HINT: You can paste gist.github.com links for larger files -->
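A note on the Jinja2 `default(omit, true)` idiom used in the playbook above: the boolean second argument makes the filter fall back on any falsey value, not only on undefined ones, and `omit` tells Ansible to leave the module parameter unset entirely. A minimal pure-Python sketch of that behavior (the `OMIT` sentinel and `jinja_default` helper are illustrative stand-ins, not Ansible code):

```python
OMIT = object()  # stand-in for Ansible's special `omit` placeholder

def jinja_default(value, fallback, boolean=False):
    """Mimic Jinja2's default() filter as used in the playbook above.

    None stands in for an undefined template variable. With
    boolean=True, any falsey value (False, '', 0) also triggers the
    fallback, which is what `default(omit, true)` relies on.
    """
    if value is None:
        return fallback
    if boolean and not value:
        return fallback
    return value

# `sku.family` undefined -> the expression yields False -> default(omit, true) -> OMIT
family = jinja_default(False, OMIT, boolean=True)
# `sku.name` defined -> the value passes through unchanged
name = jinja_default("standard", OMIT, boolean=True)
```

So when `sku.family` is not defined, the task hands Ansible the `omit` placeholder and the parameter is simply not sent to the module.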
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expected results, depending on status of environment:
* Creation of a new key vault when one does not exist.
* Ignoring of the `enable_soft_delete` parameter when the key vault does exist.
In other words, even though the Azure API does not allow `enableSoftDelete` to be set when updating an existing key vault, as an Ansible user I would expect the key vault module to handle that for me. I would not expect to see an error, either on creation or on subsequent runs.
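One way the module could meet this expectation is to drop `enable_soft_delete` from the request body whenever the vault already exists, since the Azure API only accepts `enableSoftDelete` at creation time. A hedged sketch of that logic (the helper name and parameter shape are hypothetical, not the module's actual code):

```python
def build_vault_properties(desired, vault_exists):
    """Decide which properties to send on a create-or-update call.

    `enableSoftDelete` is only legal when creating a brand-new vault,
    so on updates we omit it instead of letting the API answer with
    BadRequest.
    """
    props = dict(desired)  # don't mutate the caller's dict
    if vault_exists:
        props.pop("enable_soft_delete", None)
    return props

create_props = build_vault_properties(
    {"enable_soft_delete": False, "sku": "standard"}, vault_exists=False)
update_props = build_vault_properties(
    {"enable_soft_delete": False, "sku": "standard"}, vault_exists=True)
```

With logic like this, the parameter is honored on creation and silently ignored on subsequent idempotent runs.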
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
`ansible-playbook` output:
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [acmetstkvdemo : Key Vault] ************************************************************************************************************************************************************
task path: /root/deploy/kv.yml:68
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248 && echo ansible-tmp-1600107793.8676393-1789-257390481661248="` echo /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/azure/azcollection/plugins/modules/azure_rm_keyvault.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1782dgag9rfy/tmp5voje51j TO /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/AnsiballZ_azure_rm_keyvault.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/ /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/AnsiballZ_azure_rm_keyvault.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/AnsiballZ_azure_rm_keyvault.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1600107793.8676393-1789-257390481661248/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_azure_rm_keyvault_payload_gdb_x4tm/ansible_azure_rm_keyvault_payload.zip/ansible_collections/azure/azcollection/plugins/modules/azure_rm_keyvault.py", line 451, in create_update_keyvault
File "/usr/local/lib/python3.6/dist-packages/azure/mgmt/keyvault/v2018_02_14/operations/vaults_operations.py", line 127, in create_or_update
**operation_config
File "/usr/local/lib/python3.6/dist-packages/azure/mgmt/keyvault/v2018_02_14/operations/vaults_operations.py", line 81, in _create_or_update_initial
raise exp
[WARNING]: Azure API profile latest does not define an entry for KeyVaultManagementClient
fatal: [azure]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"access_policies": [
{
"application_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"object_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"permissions": {
"certificates": [
"get",
"list",
"delete",
"create",
"import",
"update",
"managecontacts",
"getissuers",
"listissuers",
"setissuers",
"deleteissuers",
"manageissuers",
"recover",
"purge"
],
"keys": [
"encrypt",
"decrypt",
"wrapkey",
"unwrapkey",
"sign",
"verify",
"get",
"list",
"create",
"update",
"import",
"delete",
"backup",
"restore",
"recover",
"purge"
],
"secrets": [
"get",
"list",
"set",
"delete",
"backup",
"restore",
"recover",
"purge"
],
"storage": null
},
"tenant_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
],
"ad_user": null,
"adfs_authority_url": null,
"api_profile": "latest",
"append_tags": true,
"auth_source": "auto",
"cert_validation_mode": null,
"client_id": null,
"cloud_environment": "AzureCloud",
"enable_soft_delete": false,
"enabled_for_deployment": false,
"enabled_for_disk_encryption": false,
"enabled_for_template_deployment": false,
"location": null,
"password": null,
"profile": null,
"recover_mode": null,
"resource_group": "acmetst",
"secret": null,
"sku": {
"name": "standard"
},
"state": "present",
"subscription_id": null,
"tags": null,
"tenant": null,
"vault_name": "acmetstkvdemo",
"vault_tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
},
"msg": "Error creating the Key Vault instance: Azure Error: BadRequest\nMessage: The property \"enableSoftDelete\" can be set to false only for creating new vault. Enabling the 'soft delete' functionality is an irreversible action."
}
```
| priority | error azure rm keyvault fails when enablesoftdelete is false summary using the enable soft delete parameter to the azure rm keyvault module for a entirely new key vault results in the following error error msg error creating the key vault instance azure error badrequest nmessage the property enablesoftdelete can be set to false only for creating new vault enabling the soft delete functionality is an irreversible action issue type bug report component name azure rm keyvaultsecret py ansible version paste below ansible config file none configured module search path ansible python module location usr local lib dist packages ansible executable location usr local bin ansible python version default jul i have also gotten this error when testing on ansible configuration paste below default vault password file env ansible vault password file root ansible vault password host key checking env ansible host key checking false os environment new azure environment containing only a resource group and virtual network steps to reproduce yaml hosts azure note make certain prerequisites and azcollection modules are installed per the documentation on ansible galaxy collections azure azcollection gather facts no vars client abbreviation acme client environment tst deployment client abbreviation client environment azure location northcentralus resource group deployment storage account deployment sadeploy keyvaults name acmetstkvdemo enable soft delete no tags override virtualnetwork access policy object id xxxxxxxx xxxx xxxx xxxx xxxxxxxxxxxx application id xxxxxxxx xxxx xxxx xxxx xxxxxxxxxxxx certificates get list delete create import update managecontacts getissuers listissuers setissuers deleteissuers manageissuers recover purge keys encrypt decrypt wrapkey unwrapkey sign verify get list create update import delete backup restore recover purge secrets get list set delete backup restore recover purge tasks name azure keyvaults name key vault azure rm keyvault resource 
group azure keyvaults resource group default azure resource group vault name azure keyvaults name enabled for deployment azure keyvaults enabled for deployment default false enabled for disk encryption azure keyvaults enabled for disk encryption default false enabled for template deployment azure keyvaults enabled for template deployment default false enable soft delete azure keyvaults enable soft delete default true vault tenant azure keyvaults vault tenant default lookup env azure tenant sku family azure keyvaults sku family if azure keyvaults sku family is defined else false default omit true name azure keyvaults sku name lower if azure keyvaults sku name is defined else standard access policies access policy expected results expected results depending on status of environment creation of a new key vault when one does not exist ignoring of the enable soft delete parameter when the key vault does exist in other words even though the azure api does not allow for the use of the enablesoftdelete when updating an existing key vault as an ansible use i would expect the key vault module to handle that for me i would not expect to see an error either on creation or on subsequent runs actual results ansible playbook output paste below task task path root deploy kv yml establish local connection for user root exec bin sh c echo root sleep exec bin sh c umask mkdir p echo root ansible tmp mkdir root ansible tmp ansible tmp echo ansible tmp echo root ansible tmp ansible tmp sleep using module file root ansible collections ansible collections azure azcollection plugins modules azure rm keyvault py put root ansible tmp ansible local to root ansible tmp ansible tmp ansiballz azure rm keyvault py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp ansiballz azure rm keyvault py sleep exec bin sh c usr bin root ansible tmp ansible tmp ansiballz azure rm keyvault py sleep exec bin sh c rm f r root ansible tmp ansible tmp dev null sleep the full 
traceback is file tmp ansible azure rm keyvault payload gdb ansible azure rm keyvault payload zip ansible collections azure azcollection plugins modules azure rm keyvault py line in create update keyvault file usr local lib dist packages azure mgmt keyvault operations vaults operations py line in create or update operation config file usr local lib dist packages azure mgmt keyvault operations vaults operations py line in create or update initial raise exp azure api profile latest does not define an entry for keyvaultmanagementclient fatal failed changed false invocation module args access policies application id xxxxxxxx xxxx xxxx xxxx xxxxxxxxxxxx object id xxxxxxxx xxxx xxxx xxxx xxxxxxxxxxxx permissions certificates get list delete create import update managecontacts getissuers listissuers setissuers deleteissuers manageissuers recover purge keys encrypt decrypt wrapkey unwrapkey sign verify get list create update import delete backup restore recover purge secrets get list set delete backup restore recover purge storage null tenant id xxxxxxxx xxxx xxxx xxxx xxxxxxxxxxxx ad user null adfs authority url null api profile latest append tags true auth source auto cert validation mode null client id null cloud environment azurecloud enable soft delete false enabled for deployment false enabled for disk encryption false enabled for template deployment false location null password null profile null recover mode null resource group acmetst secret null sku name standard state present subscription id null tags null tenant null vault name acmetstkvdemo vault tenant xxxxxxxx xxxx xxxx xxxx xxxxxxxxxxxx msg error creating the key vault instance azure error badrequest nmessage the property enablesoftdelete can be set to false only for creating new vault enabling the soft delete functionality is an irreversible action | 1 |
614,736 | 19,188,859,123 | IssuesEvent | 2021-12-05 17:09:27 | linaism/SOEN341 | https://api.github.com/repos/linaism/SOEN341 | closed | As a user, I want to see the questions asked and answers voted on in the profile page | user story medium priority | A user should have the ability to see the questions they submitted as well as the answers that they voted on to keep track of their activity.
Acceptance Test:
- [ ] Step 1: User should be able to land on the Profile page
- [ ] Step 2: User should be able to view the questions they submitted
- [ ] Step 3: User should be able to view the answers they voted on
Story points: 3
Risk: medium
Priority: Medium | 1.0 | As a user, I want to see the questions asked and answers voted on in the profile page - A user should have the ability to see the questions they submitted as well as the answers that they voted on to keep track of their activity.
Acceptance Test:
- [ ] Step 1: User should be able to land on the Profile page
- [ ] Step 2: User should be able to view the questions they submitted
- [ ] Step 3: User should be able to view the answers they voted on
Story points: 3
Risk: medium
Priority: Medium | priority | as a user i want to see the questions asked and answers voted on in the profile page a user should have the ability to see the questions they submitted as well as the answers that they voted on to keep track of their activity acceptance test step user should be able to land on the profile page step user should be able to view the questions they submitted step user should be able to view the answers they voted on story points risk medium priority medium | 1 |
275,265 | 8,575,531,480 | IssuesEvent | 2018-11-12 17:32:06 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | Should update to CGNS 3.1.x, as 64-bit support has been added. | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal | Menno Deij submitted changes to CGNS reader in support of 64-bit, which is available with version 3.1 of the library.
VisIt on Windows already uses 3.1.3; we should update the other platforms as well.
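To illustrate why the 64-bit support matters (the grid size below is hypothetical, chosen only to show the overflow): a structured grid with more than 2^31 - 1 nodes cannot be addressed by the 32-bit `cgsize_t` index that CGNS used before the 3.1 series.

```python
INT32_MAX = 2**31 - 1  # largest node count a 32-bit index can address

# Hypothetical 1300^3 structured grid, large but plausible for modern CFD:
dims = (1300, 1300, 1300)
node_count = dims[0] * dims[1] * dims[2]

needs_64bit = node_count > INT32_MAX  # needs the 64-bit cgsize_t of CGNS >= 3.1
```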
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1040
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Should update to CGNS 3.1.x, as 64-bit support has been added.
Assigned to: Kathleen Biagas
Category:
Target version: 2.8
Author: Kathleen Biagas
Start: 05/02/2012
Due date:
% Done: 0
Estimated time:
Created: 05/02/2012 11:35 am
Updated: 07/24/2014 12:45 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Menno Deij submitted changes to CGNS reader in support of 64-bit, which is available with version 3.1 of the library.
VisIt on Windows already uses 3.1.3, we should update other platforms as well.
Comments:
SVN revision 23582-3
| 1.0 | Should update to CGNS 3.1.x, as 64-bit support has been added. - Menno Deij submitted changes to CGNS reader in support of 64-bit, which is available with version 3.1 of the library.
VisIt on Windows already uses 3.1.3, we should update other platforms as well.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1040
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Should update to CGNS 3.1.x, as 64-bit support has been added.
Assigned to: Kathleen Biagas
Category:
Target version: 2.8
Author: Kathleen Biagas
Start: 05/02/2012
Due date:
% Done: 0
Estimated time:
Created: 05/02/2012 11:35 am
Updated: 07/24/2014 12:45 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Menno Deij submitted changes to CGNS reader in support of 64-bit, which is available with version 3.1 of the library.
VisIt on Windows already uses 3.1.3, we should update other platforms as well.
Comments:
SVN revision 23582-3
| priority | should update to cgns x as bit support has been added menno deij submitted changes to cgns reader in support of bit which is available with version of the library visit on windows already uses we should update other platforms as well redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker feature priority normal subject should update to cgns x as bit support has been added assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created am updated pm likelihood severity found in version impact medium expected use occasional os all support group any description menno deij submitted changes to cgns reader in support of bit which is available with version of the library visit on windows already uses we should update other platforms as well comments svn revision | 1 |
32,147 | 2,744,139,511 | IssuesEvent | 2015-04-22 04:09:15 | reingart/prueba | https://api.github.com/repos/reingart/prueba | closed | WSLPG: Error AutorizarLiquidacion() takes exactly 1 argument (2 given) usando delphi7. | auto-migrated Priority-Medium Type-Other | ```
Obtengo el siguiente error al quere autorizar una liquidacion (modo
homologacion), probe distintos cambios a los valores de las variables al crear
la liquidacion pero no hay ninguna diferencia al llamar a AutorizarLiquidacion.
Obtengo el siguiente error:
File "wslpg.pyo", line 294, in capturar_errores_wrapper
TypeError: AutorizarLiquidacion() takes exactly 1 argument (2 given)
Salida completa de seguimiento:
C:\ARCHIV~1\WSLPG
appserver status OK
dbserver status OK
authserver status OK
Ultimo numero de orden es:0
Ahora completo parametros y obtengo coe:
No salio el COE:Traceback (most recent call last):
File "wslpg.pyo", line 294, in capturar_errores_wrapper
TypeError: AutorizarLiquidacion() takes exactly 1 argument (2 given)
Estoy usando delphi7 y winXP.
Dejo el codigo en delphi por si alguien encuentra el error, y por si a alguien
le sirve el aporte. Cualquier respuesta bienvenida.
//CODIGO DELPHI
begin
CoInitialize(nil);
// Crear objeto interface Web Service Autenticación y Autorización
WSAA := CreateOleObject('WSAA') ;
tra := WSAA.CreateTRA('wslpg',3600);
WriteLn(tra);
path := GetCurrentDir + '\';
Certificado := 'privada.crt'; // certificado de prueba
ClavePrivada := 'privada.key'; // clave privada de prueba' +
cms := WSAA.SignTRA(tra, Path + Certificado, Path + ClavePrivada);
WriteLn(cms);
// Llamar al web service para autenticar:
ta := WSAA.CallWSAA(cms, 'https://wsaahomo.afip.gov.ar/ws/services/LoginCms?wsdl'); // Homologación
// Imprimir el ticket de acceso, ToKen y Sign de autorización
WriteLn(ta);
WriteLn('Token:' + WSAA.Token);
WriteLn('Sign:' + WSAA.Sign);
// Crear objeto interface Web Service de Factura Electrónica
WSLPG := CreateOleObject('WSLPG');
WriteLn(WSLPG.Version);
WriteLn(WSLPG.InstallDir);
// Setear tocken y sing de autorización (pasos previos)
WSLPG.Token := WSAA.Token;
WSLPG.Sign := WSAA.Sign;
// CUIT del emisor (debe estar registrado en la AFIP)
WSLPG.Cuit := 'xxxx';
// Conectar al Servicio Web de Facturación de primarias
ok := WSLPG.Conectar('','',''); // homologación
// WriteLn('Estado al conectar ' + WSLPG.);
if not (ok) then WriteLn('Error en conexion:' + WSLPG.TRACEBACK) ;
// else begin
ok:=WSLPG.Dummy;
if not (ok) then WriteLn('Error con DUMMY, servicios caidos:') ;
WriteLn('appserver status ' + WSLPG.AppServerStatus);
WriteLn('dbserver status ' + WSLPG.DbServerStatus);
WriteLn('authserver status ' + WSLPG.AuthServerStatus);
//*****************************************************************************
// Recupera último número de secuencia ID
tipo_cbte := 1; punto_vta := 1;
// con esto obtengo el ultimo numero!!!! tipo de comprobante no me lo toma
ok := WSLPG.ConsultarUltNroOrden(punto_vta);
If ok then
begin
LastId:= WSLPG.NroOrden;
Nro_orden:=LastId + 1;
WriteLn('Ultimo numero de orden es: ' + inttostr(LastId));
end
else
WriteLn('Error con numero de orden' + WSLPG.TRACEBACK + WSLPG.ERRMsg) ;
//***********************************OBTENER COE******************************
WriteLn('Ahora completo parametros y obtengo coe:');
Nro_orden:=1;
pto_emision := 1; // agregado v1.1
cuit_comprador := '30xxxx'; // Exportador
nro_act_comprador := 40;
nro_ing_bruto_comprador := '30xxxx';
cod_tipo_operacion := 1;
es_liquidacion_propia := 'N';
es_canje := 'N';
cod_puerto := 14;
des_puerto_localidad := 'DETALLE PUERTO';
cod_grano := 31;
cuit_vendedor := '30646xxxx';
nro_ing_bruto_vendedor := '90xxxx';
actua_corredor := 'S';
liquida_corredor := 'S';
cuit_corredor := '30xxxxx';
comision_corredor := 1;
nro_ing_bruto_corredor := '90xxxx';
fecha_precio_operacion := '2013-11-18';
precio_ref_tn := 1000;
cod_grado_ref := 'G1';
cod_grado_ent := 'G1';
factor_ent := 98;
precio_flete_tn := 10;
cont_proteico := 20;
alic_iva_operacion := 10.5;
campania_ppal := 1213;
cod_localidad_procedencia := 3;
cod_provincia_procedencia := 1; // added in v1.1
datos_adicionales := 'DATOS ADICIONALES';
// set an additional parameter (before calling CrearLiquidacion)
// new WSLPGv1.1 parameters:
ok := WSLPG.SetParametro('peso_neto_sin_certificado', 1000);
// new WSLPGv1.3 parameters:
ok := WSLPG.SetParametro('cod_prov_procedencia_sin_certificado', 12);
ok := WSLPG.SetParametro('cod_localidad_procedencia_sin_certificado', 5544);
ok := WSLPG.CrearLiquidacion(nro_orden, cuit_comprador,
nro_act_comprador, nro_ing_bruto_comprador,
cod_tipo_operacion,
es_liquidacion_propia, es_canje,
cod_puerto, des_puerto_localidad, cod_grano,
cuit_vendedor, nro_ing_bruto_vendedor,
actua_corredor, liquida_corredor, cuit_corredor,
comision_corredor, nro_ing_bruto_corredor,
fecha_precio_operacion,
precio_ref_tn, cod_grado_ref, cod_grado_ent,
factor_ent, precio_flete_tn, cont_proteico,
alic_iva_operacion, campania_ppal,
cod_localidad_procedencia,
datos_adicionales,
pto_emision, cod_provincia_procedencia);
// call the webservice with the loaded data:
ok := WSLPG.AutorizarLiquidacion();
If ok Then
begin
// display the results returned by the webservice:
WriteLn('COE', WSLPG.COE);
WriteLn('COEAjustado', WSLPG.COEAjustado);
WriteLn('TotalDeduccion', WSLPG.TotalDeduccion);
WriteLn('TotalRetencion', WSLPG.TotalRetencion);
WriteLn('TotalRetencionAfip', WSLPG.TotalRetencionAfip);
WriteLn('TotalOtrasRetenciones', WSLPG.TotalOtrasRetenciones) ;
WriteLn('TotalNetoAPagar', WSLPG.TotalNetoAPagar);
WriteLn('TotalIvaRg2300_07', WSLPG.TotalIvaRg2300_07) ;
WriteLn('TotalPagoSegunCondicion', WSLPG.TotalPagoSegunCondicion);
end
else
WriteLn('No salio el COE :' + WSLPG.TRACEBACK) ;
WriteLn('Presione Enter para terminar');
ReadLn;
CoUninitialize;
// END OF CODE
Thanks.
Maximiliano Martin Duarte.
```
Original issue reported on code.google.com by `elmartin...@gmail.com` on 21 Nov 2013 at 1:54 | 1.0 | WSLPG: Error AutorizarLiquidacion() takes exactly 1 argument (2 given) usando delphi7. - ```
I get the following error when trying to authorize a liquidation (homologation
mode). I tried various changes to the values of the variables when creating
the liquidation, but it makes no difference when calling AutorizarLiquidacion.
I get the following error:
File "wslpg.pyo", line 294, in capturar_errores_wrapper
TypeError: AutorizarLiquidacion() takes exactly 1 argument (2 given)
Full trace output:
C:\ARCHIV~1\WSLPG
appserver status OK
dbserver status OK
authserver status OK
Ultimo numero de orden es:0
Ahora completo parametros y obtengo coe:
No salio el COE:Traceback (most recent call last):
File "wslpg.pyo", line 294, in capturar_errores_wrapper
TypeError: AutorizarLiquidacion() takes exactly 1 argument (2 given)
I am using Delphi 7 and WinXP.
I am leaving the Delphi code here in case someone finds the error, and in case
this contribution is useful to someone. Any reply is welcome.
// DELPHI CODE
begin
CoInitialize(nil);
// Crear objeto interface Web Service Autenticación y Autorización
WSAA := CreateOleObject('WSAA') ;
tra := WSAA.CreateTRA('wslpg',3600);
WriteLn(tra);
path := GetCurrentDir + '\';
Certificado := 'privada.crt'; // certificado de prueba
ClavePrivada := 'privada.key'; // clave privada de prueba' +
cms := WSAA.SignTRA(tra, Path + Certificado, Path + ClavePrivada);
WriteLn(cms);
// Llamar al web service para autenticar:
ta := WSAA.CallWSAA(cms, 'https://wsaahomo.afip.gov.ar/ws/services/LoginCms?wsdl'); // Homologación
// Imprimir el ticket de acceso, ToKen y Sign de autorización
WriteLn(ta);
WriteLn('Token:' + WSAA.Token);
WriteLn('Sign:' + WSAA.Sign);
// Crear objeto interface Web Service de Factura Electrónica
WSLPG := CreateOleObject('WSLPG');
WriteLn(WSLPG.Version);
WriteLn(WSLPG.InstallDir);
// Setear tocken y sing de autorización (pasos previos)
WSLPG.Token := WSAA.Token;
WSLPG.Sign := WSAA.Sign;
// CUIT del emisor (debe estar registrado en la AFIP)
WSLPG.Cuit := 'xxxx';
// Conectar al Servicio Web de Facturación de primarias
ok := WSLPG.Conectar('','',''); // homologación
// WriteLn('Estado al conectar ' + WSLPG.);
if not (ok) then WriteLn('Error en conexion:' + WSLPG.TRACEBACK) ;
// else begin
ok:=WSLPG.Dummy;
if not (ok) then WriteLn('Error con DUMMY, servicios caidos:') ;
WriteLn('appserver status ' + WSLPG.AppServerStatus);
WriteLn('dbserver status ' + WSLPG.DbServerStatus);
WriteLn('authserver status ' + WSLPG.AuthServerStatus);
//*****************************************************************************
// Recupera último número de secuencia ID
tipo_cbte := 1; punto_vta := 1;
// con esto obtengo el ultimo numero!!!! tipo de comprobante no me lo toma
ok := WSLPG.ConsultarUltNroOrden(punto_vta);
If ok then
begin
LastId:= WSLPG.NroOrden;
Nro_orden:=LastId + 1;
WriteLn('Ultimo numero de orden es: ' + inttostr(LastId));
end
else
WriteLn('Error con numero de orden' + WSLPG.TRACEBACK + WSLPG.ERRMsg) ;
//***********************************OBTENER COE******************************
WriteLn('Ahora completo parametros y obtengo coe:');
Nro_orden:=1;
pto_emision := 1; // agregado v1.1
cuit_comprador := '30xxxx'; // Exportador
nro_act_comprador := 40;
nro_ing_bruto_comprador := '30xxxx';
cod_tipo_operacion := 1;
es_liquidacion_propia := 'N';
es_canje := 'N';
cod_puerto := 14;
des_puerto_localidad := 'DETALLE PUERTO';
cod_grano := 31;
cuit_vendedor := '30646xxxx';
nro_ing_bruto_vendedor := '90xxxx';
actua_corredor := 'S';
liquida_corredor := 'S';
cuit_corredor := '30xxxxx';
comision_corredor := 1;
nro_ing_bruto_corredor := '90xxxx';
fecha_precio_operacion := '2013-11-18';
precio_ref_tn := 1000;
cod_grado_ref := 'G1';
cod_grado_ent := 'G1';
factor_ent := 98;
precio_flete_tn := 10;
cont_proteico := 20;
alic_iva_operacion := 10.5;
campania_ppal := 1213;
cod_localidad_procedencia := 3;
cod_provincia_procedencia := 1; // agregado v1.1
datos_adicionales := 'DATOS ADICIONALES';
// establezco un parámetro adicional (antes de llamar a CrearLiquidacion)
// nuevos parámetros WSLPGv1.1:
ok := WSLPG.SetParametro('peso_neto_sin_certificado', 1000);
// nuevos parámetros WSLPGv1.3:
ok := WSLPG.SetParametro('cod_prov_procedencia_sin_certificado', 12);
ok := WSLPG.SetParametro('cod_localidad_procedencia_sin_certificado', 5544);
ok := WSLPG.CrearLiquidacion(nro_orden, cuit_comprador,
nro_act_comprador, nro_ing_bruto_comprador,
cod_tipo_operacion,
es_liquidacion_propia, es_canje,
cod_puerto, des_puerto_localidad, cod_grano,
cuit_vendedor, nro_ing_bruto_vendedor,
actua_corredor, liquida_corredor, cuit_corredor,
comision_corredor, nro_ing_bruto_corredor,
fecha_precio_operacion,
precio_ref_tn, cod_grado_ref, cod_grado_ent,
factor_ent, precio_flete_tn, cont_proteico,
alic_iva_operacion, campania_ppal,
cod_localidad_procedencia,
datos_adicionales,
pto_emision, cod_provincia_procedencia);
// llamo al webservice con los datos cargados:
}
ok := WSLPG.AutorizarLiquidacion();
If ok Then
begin
// muestro los resultados devueltos por el webservice:
WriteLn('COE', WSLPG.COE);
WriteLn('COEAjustado', WSLPG.COEAjustado);
WriteLn('TootalDeduccion', WSLPG.TotalDeduccion);
WriteLn('TotalRetencion', WSLPG.TotalRetencion);
WriteLn('TotalRetencionAfip', WSLPG.TotalRetencionAfip);
WriteLn('TotalOtrasRetenciones', WSLPG.TotalOtrasRetenciones) ;
WriteLn('TotalNetoAPagar', WSLPG.TotalNetoAPagar);
WriteLn('TotalIvaRg2300_07', WSLPG.TotalIvaRg2300_07) ;
WriteLn('TotalPagoSegunCondicion', WSLPG.TotalPagoSegunCondicion);
end
else
WriteLn('No salio el COE :' + WSLPG.TRACEBACK) ;
WriteLn('Presione Enter para terminar');
ReadLn;
CoUninitialize;
// END OF CODE
Thanks.
Maximiliano Martin Duarte.
```
Original issue reported on code.google.com by `elmartin...@gmail.com` on 21 Nov 2013 at 1:54 | priority | wslpg error autorizarliquidacion takes exactly argument given usando obtengo el siguiente error al quere autorizar una liquidacion modo homologacion probe distintos cambios a los valores de las variables al crear la liquidacion pero no hay ninguna diferencia al llamar a autorizarliquidacion obtengo el siguiente error file wslpg pyo line in capturar errores wrapper typeerror autorizarliquidacion takes exactly argument given salida completa de seguimiento c archiv wslpg appserver status ok dbserver status ok authserver status ok ultimo numero de orden es ahora completo parametros y obtengo coe no salio el coe traceback most recent call last file wslpg pyo line in capturar errores wrapper typeerror autorizarliquidacion takes exactly argument given estoy usando y winxp dejo el codigo en delphi por si alguien encuentra el error y por si a alguien le sirve el aporte cualquier respuesta bienvenida codigo delphi begin coinitialize nil crear objeto interface web service autenticación y autorización wsaa createoleobject wsaa tra wsaa createtra wslpg writeln tra path getcurrentdir certificado privada crt certificado de prueba claveprivada privada key clave privada de prueba cms wsaa signtra tra path certificado path claveprivada writeln cms llamar al web service para autenticar ta wsaa callwsaa cms homologación imprimir el ticket de acceso token y sign de autorización writeln ta writeln token wsaa token writeln sign wsaa sign crear objeto interface web service de factura electrónica wslpg createoleobject wslpg writeln wslpg version writeln wslpg installdir setear tocken y sing de autorización pasos previos wslpg token wsaa token wslpg sign wsaa sign cuit del emisor debe estar registrado en la afip wslpg cuit xxxx conectar al servicio web de facturación de primarias ok wslpg conectar homologación writeln estado al conectar wslpg if not ok then writeln error en conexion 
wslpg traceback else begin ok wslpg dummy if not ok then writeln error con dummy servicios caidos writeln appserver status wslpg appserverstatus writeln dbserver status wslpg dbserverstatus writeln authserver status wslpg authserverstatus recupera último número de secuencia id tipo cbte punto vta con esto obtengo el ultimo numero tipo de comprobante no me lo toma ok wslpg consultarultnroorden punto vta if ok then begin lastid wslpg nroorden nro orden lastid writeln ultimo numero de orden es inttostr lastid end else writeln error con numero de orden wslpg traceback wslpg errmsg obtener coe writeln ahora completo parametros y obtengo coe nro orden pto emision agregado cuit comprador exportador nro act comprador nro ing bruto comprador cod tipo operacion es liquidacion propia n es canje n cod puerto des puerto localidad detalle puerto cod grano cuit vendedor nro ing bruto vendedor actua corredor s liquida corredor s cuit corredor comision corredor nro ing bruto corredor fecha precio operacion precio ref tn cod grado ref cod grado ent factor ent precio flete tn cont proteico alic iva operacion campania ppal cod localidad procedencia cod provincia procedencia agregado datos adicionales datos adicionales establezco un parámetro adicional antes de llamar a crearliquidacion nuevos parámetros ok wslpg setparametro peso neto sin certificado nuevos parámetros ok wslpg setparametro cod prov procedencia sin certificado ok wslpg setparametro cod localidad procedencia sin certificado ok wslpg crearliquidacion nro orden cuit comprador nro act comprador nro ing bruto comprador cod tipo operacion es liquidacion propia es canje cod puerto des puerto localidad cod grano cuit vendedor nro ing bruto vendedor actua corredor liquida corredor cuit corredor comision corredor nro ing bruto corredor fecha precio operacion precio ref tn cod grado ref cod grado ent factor ent precio flete tn cont proteico alic iva operacion campania ppal cod localidad procedencia datos adicionales pto emision 
cod provincia procedencia llamo al webservice con los datos cargados ok wslpg autorizarliquidacion if ok then begin muestro los resultados devueltos por el webservice writeln coe wslpg coe writeln coeajustado wslpg coeajustado writeln tootaldeduccion wslpg totaldeduccion writeln totalretencion wslpg totalretencion writeln totalretencionafip wslpg totalretencionafip writeln totalotrasretenciones wslpg totalotrasretenciones writeln totalnetoapagar wslpg totalnetoapagar writeln wslpg writeln totalpagoseguncondicion wslpg totalpagoseguncondicion end else writeln no salio el coe wslpg traceback writeln presione enter para terminar readln couninitialize fin codigo gracias maximiliano martin duarte original issue reported on code google com by elmartin gmail com on nov at | 1 |
40,887 | 2,868,948,380 | IssuesEvent | 2015-06-05 22:08:14 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Running pub update causes crash and no update) | AssumedStale bug Priority-Medium | <a href="https://github.com/efortuna"><img src="https://avatars.githubusercontent.com/u/2112792?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [efortuna](https://github.com/efortuna)**
_Originally opened as dart-lang/sdk#10400_
----
see attached
______
**Attachment:**
[trace](https://storage.googleapis.com/google-code-attachments/dart/issue-10400/comment-0/trace) (250.54 KB) | 1.0 | Running pub update causes crash and no update) - <a href="https://github.com/efortuna"><img src="https://avatars.githubusercontent.com/u/2112792?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [efortuna](https://github.com/efortuna)**
_Originally opened as dart-lang/sdk#10400_
----
see attached
______
**Attachment:**
[trace](https://storage.googleapis.com/google-code-attachments/dart/issue-10400/comment-0/trace) (250.54 KB) | priority | running pub update causes crash and no update issue by originally opened as dart lang sdk see attached attachment kb | 1 |
325,868 | 9,936,968,487 | IssuesEvent | 2019-07-02 20:34:07 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | Insufficient test coverage for LcmDrivenLoop | priority: medium team: manipulation | There are no stress tests for LcmDrivenLoop to catch any threading related bugs.
A way to do so is keeping a reasonably long log in master, and write tests against it.
Any comments on how to proceed would be very welcome here. | 1.0 | Insufficient test coverage for LcmDrivenLoop - There are no stress tests for LcmDrivenLoop to catch any threading related bugs.
A way to do so is keeping a reasonably long log in master, and write tests against it.
Any comments on how to proceed would be very welcome here. | priority | insufficient test coverage for lcmdrivenloop there are no stress tests for lcmdrivenloop to catch any threading related bugs a way to do so is keeping a reasonably long log in master and write tests against it any comments on how to proceed would be very welcome here | 1 |
467,437 | 13,448,459,984 | IssuesEvent | 2020-09-08 15:29:49 | ansible-collections/community.okd | https://api.github.com/repos/ansible-collections/community.okd | closed | Add Probot/stale bot to mark issues as stale and close after certain period | has_pr priority/medium | ##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Implement a Probot/stale bot consistent with ansible-collections/community.kubernetes#53.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Organization / maintenance
##### ADDITIONAL INFORMATION
N/A | 1.0 | Add Probot/stale bot to mark issues as stale and close after certain period - ##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Implement a Probot/stale bot consistent with ansible-collections/community.kubernetes#53.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Organization / maintenance
##### ADDITIONAL INFORMATION
N/A | priority | add probot stale bot to mark issues as stale and close after certain period summary implement a probot stale bot consistent with ansible collections community kubernetes issue type feature idea component name organization maintenance additional information n a | 1 |
667,690 | 22,496,916,058 | IssuesEvent | 2022-06-23 08:24:32 | hackforla/expunge-assist | https://api.github.com/repos/hackforla/expunge-assist | opened | Review word choice and phrasing for user clarity [from usability testing] | priority: medium role: UX content writing feature: figma content writing | ### Overview
Review word choice and phrasing across the product.
Examples from usability testing:
Some word choices on the “Involvement” page was unclear, another user didn’t know what "Recovery" referred to.
### Action Items
- [ ] Review across platform
- [ ] Consider changing "recovery" to a more clear term i.e. from substance abuse
- [ ] Iterate new copy
- [ ] Review in Content
- [ ] Handover to Dev
### Resources/Instructions
| 1.0 | Review word choice and phrasing for user clarity [from usability testing] - ### Overview
Review word choice and phrasing across the product.
Examples from usability testing:
Some word choices on the “Involvement” page was unclear, another user didn’t know what "Recovery" referred to.
### Action Items
- [ ] Review across platform
- [ ] Consider changing "recovery" to a more clear term i.e. from substance abuse
- [ ] Iterate new copy
- [ ] Review in Content
- [ ] Handover to Dev
### Resources/Instructions
| priority | review word choice and phrasing for user clarity overview review word choice and phrasing across the product examples from usability testing some word choices on the “involvement” page was unclear another user didn’t know what recovery referred to action items review across platform consider changing recovery to a more clear term i e from substance abuse iterate new copy review in content handover to dev resources instructions | 1 |
106,064 | 4,259,177,252 | IssuesEvent | 2016-07-11 10:02:49 | Financial-Times/origami-image-service | https://api.github.com/repos/Financial-Times/origami-image-service | opened | Add in support for SVG tinting | priority: medium type: enhancement | We have a few options for this:
1. _Don't_ add support for SVG tinting. Instead we would add build scripts to each of the repositories that have SVGs in them, creating SVGs in all of the allowed colours. They would then be references using something like `fticon:cross/pink`.
2. Try to use a third party. This won't be possible with Imgix, but Cloudinary is adding a way to tint images in the same way as our v1 Image Service does. If this supports SVGs in the same way then we could rely on that.
3. Post-process SVGs in the new image service, adding in the style block to tint them. This adds a layer of complexity to the service which we might not want, but it's likely the closest we can get to the existing behaviour. | 1.0 | Add in support for SVG tinting - We have a few options for this:
1. _Don't_ add support for SVG tinting. Instead we would add build scripts to each of the repositories that have SVGs in them, creating SVGs in all of the allowed colours. They would then be references using something like `fticon:cross/pink`.
2. Try to use a third party. This won't be possible with Imgix, but Cloudinary is adding a way to tint images in the same way as our v1 Image Service does. If this supports SVGs in the same way then we could rely on that.
3. Post-process SVGs in the new image service, adding in the style block to tint them. This adds a layer of complexity to the service which we might not want, but it's likely the closest we can get to the existing behaviour. | priority | add in support for svg tinting we have a few options for this don t add support for svg tinting instead we would add build scripts to each of the repositories that have svgs in them creating svgs in all of the allowed colours they would then be references using something like fticon cross pink try to use a third party this won t be possible with imgix but cloudinary is adding a way to tint images in the same way as our image service does if this supports svgs in the same way then we could rely on that post process svgs in the new image service adding in the style block to tint them this adds a layer of complexity to the service which we might not want but it s likely the closest we can get to the existing behaviour | 1 |
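Option 3 in the record above (post-processing SVGs by injecting a style block) can be sketched in a few lines. This is an illustrative assumption, not Origami's actual implementation: the regex-based `<svg>` detection, the helper name, and the colour value are all made up.

```python
import re

def tint_svg(svg_text: str, color: str) -> str:
    """Insert a <style> block right after the opening <svg> tag so every
    element inherits the requested fill."""
    style = f"<style>* {{ fill: {color}; }}</style>"
    # Splice the style block in immediately after the first <svg ...> tag.
    return re.sub(r"(<svg\b[^>]*>)", r"\1" + style, svg_text, count=1)

icon = '<svg xmlns="http://www.w3.org/2000/svg"><path d="M0 0h8v8H0z"/></svg>'
print(tint_svg(icon, "#ff69b4"))
```

A production version would parse the SVG with a real XML parser rather than a regex, and validate the colour against the service's allowed palette.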
485,126 | 13,961,590,791 | IssuesEvent | 2020-10-25 04:22:31 | AY2021S1-CS2103T-W17-3/tp | https://api.github.com/repos/AY2021S1-CS2103T-W17-3/tp | opened | History and ModelManager may have cyclic dependencies | priority.Medium severity.Low type.Bug | Sounds like a problem, maybe its not.
Can be easily 'fixed' (or alleviated?) if we change `History`'s `save` method to take in a `Model` instead of `ModelManager` (i.e. use the interface). | 1.0 | History and ModelManager may have cyclic dependencies - Sounds like a problem, maybe its not.
Can be easily 'fixed' (or alleviated?) if we change `History`'s `save` method to take in a `Model` instead of `ModelManager` (i.e. use the interface). | priority | history and modelmanager may have cyclic dependencies sounds like a problem maybe its not can be easily fixed or alleviated if we change history s save method to take in a model instead of modelmanager i e use the interface | 1 |
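The suggested fix (have `save` accept the `Model` interface rather than the concrete `ModelManager`) is the standard way to break such a cycle. A minimal sketch in Python, with invented class contents since the project itself is Java:

```python
from abc import ABC, abstractmethod

class Model(ABC):
    """The abstraction History should depend on."""
    @abstractmethod
    def snapshot(self) -> dict: ...

class History:
    def __init__(self) -> None:
        self.states = []

    def save(self, model: Model) -> None:
        # Depending on the abstract Model keeps History unaware of
        # ModelManager, so the ModelManager -> History -> ModelManager
        # reference cycle disappears.
        self.states.append(model.snapshot())

class ModelManager(Model):
    def __init__(self, data: dict) -> None:
        self.data = data

    def snapshot(self) -> dict:
        return dict(self.data)

history = History()
history.save(ModelManager({"person": "Alice"}))
print(len(history.states))  # → 1
```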
712,210 | 24,487,069,374 | IssuesEvent | 2022-10-09 15:27:38 | saesrpg/saesrpg | https://api.github.com/repos/saesrpg/saesrpg | closed | SF Bank Issue | Type: Bug Priority: Medium Status: Done Status: Accepted | Not an issue with SF bank particularly I guess, but a general one with the bankrob system.
I'll tell steps to reproduce by giving an example;
AA robs SF bank at 16:00
The bank is planned to become able again at 20:00
A group of criminals start PBR at 19:55
PBR is planned to reset at 20:05
While the PBR is still going on, at 20:00 the bank becomes able for gang BR again.
First door object and gang BR marker shows up.
PBR markers stop working, PBR is no more going on.
Everybody behind door 1 gets stuck in the area.
At 20:05, as of PBR being reset, gang BR's first door marker also disappears and never comes back.
This I guess is how the bug occurs, my personal idea, you might want to pause gang BR's become-able-timer while there's a PBR going on if possible.
| 1.0 | SF Bank Issue - Not an issue with SF bank particularly I guess, but a general one with the bankrob system.
I'll tell steps to reproduce by giving an example;
AA robs SF bank at 16:00
The bank is planned to become able again at 20:00
A group of criminals start PBR at 19:55
PBR is planned to reset at 20:05
While the PBR is still going on, at 20:00 the bank becomes able for gang BR again.
First door object and gang BR marker shows up.
PBR markers stop working, PBR is no more going on.
Everybody behind door 1 gets stuck in the area.
At 20:05, as of PBR being reset, gang BR's first door marker also disappears and never comes back.
This I guess is how the bug occurs, my personal idea, you might want to pause gang BR's become-able-timer while there's a PBR going on if possible.
| priority | sf bank issue not an issue with sf bank particularly i guess but a general one with the bankrob system i ll tell steps to reproduce by giving an example aa robs sf bank at the bank is planned to become able again at a group of criminals start pbr at pbr is planned to reset at while the pbr is still going on at the bank becomes able for gang br again first door object and gang br marker shows up pbr markers stop working pbr is no more going on everybody behind door gets stuck in the area at as of pbr being reset gang br s first door marker also disappears and never comes back this i guess is how the bug occurs my personal idea you might want to pause gang br s become able timer while there s a pbr going on if possible | 1 |
637,353 | 20,626,136,099 | IssuesEvent | 2022-03-07 22:48:22 | mito-ds/monorepo | https://api.github.com/repos/mito-ds/monorepo | closed | Add startswith and endswith string filter conditions | type: mitosheet effort: 3 priority: medium | **Is your feature request related to a problem? Please describe.**
In filter options, it would be very useful to have a "Where" option to find strings that `beginswith` or `endswith` a pattern. I often need to filter for strings that end with "-Prod" exactly, but I have some rows that contain "-Productivity" in the middle of the string. A contains filter will include those, so I'd have to add a second filter that does not contain "-Productivity" to get the desired result. There may be other use cases where beginswith and endswith would be more useful.
| 1.0 | Add startswith and endswith string filter conditions - **Is your feature request related to a problem? Please describe.**
In filter options, it would be very useful to have a "Where" option to find strings that `beginswith` or `endswith` a pattern. I often need to filter for strings that end with "-Prod" exactly, but I have some rows that contain "-Productivity" in the middle of the string. A contains filter will include those, so I'd have to add a second filter that does not contain "-Productivity" to get the desired result. There may be other use cases where beginswith and endswith would be more useful.
| priority | add startswith and endswith string filter conditions is your feature request related to a problem please describe in filter options it would be very useful to have a where option to find strings that beginswith or endswith a pattern i often need to filter for strings that end with prod exactly but i have some rows that contain productivity in the middle of the string a contains filter will include those so i d have to add a second filter that does not contain productivity to get the desired result there may be other use cases where beginswith and endswith would be more useful | 1 |
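The filter requested above maps directly onto pandas string predicates, which is presumably what a Mito `endswith` condition would generate; the column name and values below are made up to reproduce the "-Prod" vs "-Productivity" problem:

```python
import pandas as pd

df = pd.DataFrame({"team": ["A-Prod", "B-Productivity", "C-Prod"]})

# A "contains" filter also matches "-Productivity", which is the
# problem described in the request.
contains = df[df["team"].str.contains("-Prod", regex=False)]

# An "endswith" filter keeps only strings terminating in "-Prod" exactly.
ends = df[df["team"].str.endswith("-Prod")]

print(contains["team"].tolist())  # → ['A-Prod', 'B-Productivity', 'C-Prod']
print(ends["team"].tolist())      # → ['A-Prod', 'C-Prod']
```

`Series.str.startswith` covers the `beginswith` half of the request in the same way.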
357,429 | 10,606,303,941 | IssuesEvent | 2019-10-10 22:52:31 | minio/minio-dotnet | https://api.github.com/repos/minio/minio-dotnet | closed | Minio not working when I have special characters in access key. | community priority: medium | Hello,
I noticed that minio client would send me an invalid presigned url if I have special characters in my access key. After a little bit of research I found that there was [a PR to fix this](https://github.com/minio/minio-dotnet/pull/221). But the changes from this PR got removed with the very next commit.
I can upload files to minio but when trying to access them with a presigned url I get this error :
**The access key ID you provided does not exist in our records.** | 1.0 | Minio not working when I have special characters in access key. - Hello,
I noticed that minio client would send me an invalid presigned url if I have special characters in my access key. After a little bit of research I found that there was [a PR to fix this](https://github.com/minio/minio-dotnet/pull/221). But the changes from this PR got removed with the very next commit.
I can upload files to minio but when trying to access them with a presigned url I get this error :
**The access key ID you provided does not exist in our records.** | priority | minio not working when i have special characters in access key hello i noticed that minio client would send me an invalid presigned url if i have special characters in my access key after a little bit of research i found that there was but the changes from this pr got removed with the very next commit i can upload files to minio but when trying to access them with a presigned url i get this error the access key id you provided does not exist in our records | 1 |
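The actual fix lives in the referenced PR, but the general failure mode (reserved characters in a credential embedded unescaped in the presigned query string) can be illustrated with the standard library. The key value and the `X-Amz-Credential` parameter are used purely for illustration:

```python
from urllib.parse import quote, urlencode

access_key = "AKIA/with+special=chars"  # made-up key with reserved characters

# Naive interpolation leaves '/', '+' and '=' intact, so the server may
# parse a different credential than the one used to sign the request.
naive = f"X-Amz-Credential={access_key}"

# Percent-encoding (quote with an empty safe set, via urlencode)
# round-trips the key byte-for-byte.
encoded = urlencode({"X-Amz-Credential": access_key}, quote_via=quote)

print(naive)
print(encoded)  # → X-Amz-Credential=AKIA%2Fwith%2Bspecial%3Dchars
```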
724,213 | 24,920,720,117 | IssuesEvent | 2022-10-30 22:59:25 | docker-mailserver/docker-mailserver | https://api.github.com/repos/docker-mailserver/docker-mailserver | opened | [BUG] './setup.sh email list' does not display aliases correctly | kind/bug meta/needs triage priority/medium | ### Miscellaneous first checks
- [X] I checked that all ports are open and not blocked by my ISP / hosting provider.
- [X] I know that SSL errors are likely the result of a wrong setup on the user side and not caused by DMS itself. I'm confident my setup is correct.
### Affected Component(s)
./setup.sh email list
### What happened and when does this occur?
```Markdown
On a fresh setup:
./setup.sh email add socialmedia@example.com
./setup.sh alias add facebook@example.com socialmedia@example.com
./setup.sh email list
* media@example.com
[ aliases -> facebook@example.com ]
* socialmedia@example.com
[ aliases -> facebook@example.com ]
```
```
### What did you expect to happen?
```Markdown
facebook@example.com should only be listed for the socialmedia@example.com mail account.
```
### How do we replicate the issue?
```Markdown
.
```
### DMS version
v11.2.0
### What operating system is DMS running on?
Linux
### Which operating system version?
Debian 11
### What instruction set architecture is DMS running on?
x86_64 / AMD64
### What container orchestration tool are you using?
Docker
### docker-compose.yml
_No response_
### Relevant log output
_No response_
### Other relevant information
_No response_
### What level of experience do you have with Docker and mail servers?
- [ ] I am inexperienced with docker
- [ ] I am inexperienced with mail servers
- [ ] I am uncomfortable with the CLI
### Code of conduct
- [X] I have read this project's [Code of Conduct](https://github.com/docker-mailserver/docker-mailserver/blob/master/CODE_OF_CONDUCT.md) and I agree
- [X] I have read the [README](https://github.com/docker-mailserver/docker-mailserver/blob/master/README.md) and the [documentation](https://docker-mailserver.github.io/docker-mailserver/edge/) and I searched the [issue tracker](https://github.com/docker-mailserver/docker-mailserver/issues?q=is%3Aissue) but could not find a solution
### Improvements to this form?
_No response_ | 1.0 | [BUG] './setup.sh email list' does not display aliases correctly - ### Miscellaneous first checks
- [X] I checked that all ports are open and not blocked by my ISP / hosting provider.
- [X] I know that SSL errors are likely the result of a wrong setup on the user side and not caused by DMS itself. I'm confident my setup is correct.
### Affected Component(s)
./setup.sh email list
### What happened and when does this occur?
```Markdown
On a fresh setup:
./setup.sh email add socialmedia@example.com
./setup.sh alias add facebook@example.com socialmedia@example.com
./setup.sh email list
* media@example.com
[ aliases -> facebook@example.com ]
* socialmedia@example.com
[ aliases -> facebook@example.com ]
```
```
### What did you expect to happen?
```Markdown
facebook@example.com should only be listed for the socialmedia@example.com mail account.
```
### How do we replicate the issue?
```Markdown
.
```
### DMS version
v11.2.0
### What operating system is DMS running on?
Linux
### Which operating system version?
Debian 11
### What instruction set architecture is DMS running on?
x86_64 / AMD64
### What container orchestration tool are you using?
Docker
### docker-compose.yml
_No response_
### Relevant log output
_No response_
### Other relevant information
_No response_
### What level of experience do you have with Docker and mail servers?
- [ ] I am inexperienced with docker
- [ ] I am inexperienced with mail servers
- [ ] I am uncomfortable with the CLI
### Code of conduct
- [X] I have read this project's [Code of Conduct](https://github.com/docker-mailserver/docker-mailserver/blob/master/CODE_OF_CONDUCT.md) and I agree
- [X] I have read the [README](https://github.com/docker-mailserver/docker-mailserver/blob/master/README.md) and the [documentation](https://docker-mailserver.github.io/docker-mailserver/edge/) and I searched the [issue tracker](https://github.com/docker-mailserver/docker-mailserver/issues?q=is%3Aissue) but could not find a solution
### Improvements to this form?
_No response_ | priority | setup sh email list does not display aliases correctly miscellaneous first checks i checked that all ports are open and not blocked by my isp hosting provider i know that ssl errors are likely the result of a wrong setup on the user side and not caused by dms itself i m confident my setup is correct affected component s setup sh email list what happened and when does this occur markdown on a fresh setup setup sh email add socialmedia example com setup sh alias add facebook example com socialmedia example com setup sh email list media example com socialmedia example com what did you expect to happen markdown facebook example com should only be listed for the socialmedia example com mail account how do we replicate the issue markdown dms version what operating system is dms running on linux which operating system version debian what instruction set architecture is dms running on what container orchestration tool are you using docker docker compose yml no response relevant log output no response other relevant information no response what level of experience do you have with docker and mail servers i am inexperienced with docker i am inexperienced with mail servers i am uncomfortable with the cli code of conduct i have read this project s and i agree i have read the and the and i searched the but could not find a solution improvements to this form no response | 1 |
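A plausible cause for the listing bug above: `media@example.com` is a substring of `socialmedia@example.com`, so a substring match over the virtual-alias file credits the alias to both accounts. A field-exact comparison avoids that; the postfix-style "alias destination" file format below is an assumption about DMS internals:

```python
virtual_aliases = [
    "facebook@example.com socialmedia@example.com",
]

def aliases_for_buggy(account: str) -> list:
    # Substring matching: "media@example.com" also hits the
    # socialmedia@example.com line, reproducing the reported bug.
    return [line.split()[0] for line in virtual_aliases if account in line]

def aliases_for(account: str) -> list:
    # Compare the destination field exactly instead.
    found = []
    for line in virtual_aliases:
        alias, _, destination = line.partition(" ")
        if destination == account:
            found.append(alias)
    return found

print(aliases_for_buggy("media@example.com"))  # → ['facebook@example.com'] (wrong)
print(aliases_for("media@example.com"))        # → [] (correct)
```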
605,318 | 18,733,504,804 | IssuesEvent | 2021-11-04 02:25:04 | AY2122S1-CS2103T-W13-2/tp | https://api.github.com/repos/AY2122S1-CS2103T-W13-2/tp | closed | [PE-D] Inconsistent format of commands | type.Bug priority.High severity.Medium docs | Under the commands `add` and `edit`, the parameters are expected to be encapsulated with quotation marks as shown in the examples (highlighted in orange).

However, in `delete` and `find`, quotation marks are omitted from the parameters (highlighted in yellow). It would be better if the group can adopt a consistent format (i.e.: decide if quotation marks are required or not) so that the reader will not be confused.


<!--session: 1635494331267-38517f32-21de-436c-8367-a94de66a554e--><!--Version: Web v3.4.1-->
-------------
Labels: `severity.High` `type.DocumentationBug`
original: stanley-1/ped#10 | 1.0 | [PE-D] Inconsistent format of commands - Under the commands `add` and `edit`, the parameters are expected to be encapsulated with quotation marks as shown in the examples (highlighted in orange).

However, in `delete` and `find`, quotation marks are omitted from the parameters (highlighted in yellow). It would be better if the group can adopt a consistent format (i.e.: decide if quotation marks are required or not) so that the reader will not be confused.


<!--session: 1635494331267-38517f32-21de-436c-8367-a94de66a554e--><!--Version: Web v3.4.1-->
-------------
Labels: `severity.High` `type.DocumentationBug`
original: stanley-1/ped#10 | priority | inconsistent format of commands under the commands add and edit the parameters are expected to be encapsulated with quotation marks as shown in the examples highlighted in orange however in delete and find quotation marks are omitted from the parameters highlighted in yellow it would be better if the group can adopt a consistent format i e decide if quotation marks are required or not so that the reader will not be confused labels severity high type documentationbug original stanley ped | 1 |
650,589 | 21,410,058,964 | IssuesEvent | 2022-04-22 04:16:40 | sunpy/sunpy | https://api.github.com/repos/sunpy/sunpy | closed | Map HTML repr crashes when data array is a Dask array | Bug Package Novice map Priority Medium Effort Medium | If your map is backed by a Dask array (which `Map` does *nominally* support), the nifty HTML repr freaks out.
```python
import sunpy.data.sample
import sunpy.map
import dask.array
m = sunpy.map.Map(sunpy.data.sample.AIA_171_IMAGE)
m_dask = sunpy.map.Map(dask.array.from_array(m.data),m.meta)
m_dask._repr_html_()
```
gives
```python traceback
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-17-89689641455e> in <module>
----> 1 m_dask._repr_html_()
~/Documents/codes/sunpy/sunpy/map/mapbase.py in _repr_html_(self)
356 FigureCanvasBase(fig)
357 ax = fig.subplots()
--> 358 values, bins, patches = ax.hist(finite_data.ravel(), bins=100)
359 norm_centers = norm(0.5 * (bins[:-1] + bins[1:])).data
360 for c, p in zip(norm_centers, patches):
~/miniconda3/envs/sunpy-dev/lib/python3.8/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
1445 def inner(ax, *args, data=None, **kwargs):
1446 if data is None:
-> 1447 return func(ax, *map(sanitize_sequence, args), **kwargs)
1448
1449 bound = new_sig.bind(ax, *args, **kwargs)
~/miniconda3/envs/sunpy-dev/lib/python3.8/site-packages/matplotlib/axes/_axes.py in hist(self, x, bins, range, density, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, **kwargs)
6569
6570 # Massage 'x' for processing.
-> 6571 x = cbook._reshape_2D(x, 'x')
6572 nx = len(x) # number of datasets
6573
~/miniconda3/envs/sunpy-dev/lib/python3.8/site-packages/matplotlib/cbook/__init__.py in _reshape_2D(X, name)
1373
1374 # Iterate over list of iterables.
-> 1375 if len(X) == 0:
1376 return [[]]
1377
TypeError: 'float' object cannot be interpreted as an integer
``` | 1.0 | Map HTML repr crashes when data array is a Dask array - If your map is backed by a Dask array (which `Map` does *nominally* support), the nifty HTML repr freaks out.
```python
import sunpy.data.sample
import sunpy.map
import dask.array
m = sunpy.map.Map(sunpy.data.sample.AIA_171_IMAGE)
m_dask = sunpy.map.Map(dask.array.from_array(m.data),m.meta)
m_dask._repr_html_()
```
gives
```python traceback
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-17-89689641455e> in <module>
----> 1 m_dask._repr_html_()
~/Documents/codes/sunpy/sunpy/map/mapbase.py in _repr_html_(self)
356 FigureCanvasBase(fig)
357 ax = fig.subplots()
--> 358 values, bins, patches = ax.hist(finite_data.ravel(), bins=100)
359 norm_centers = norm(0.5 * (bins[:-1] + bins[1:])).data
360 for c, p in zip(norm_centers, patches):
~/miniconda3/envs/sunpy-dev/lib/python3.8/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
1445 def inner(ax, *args, data=None, **kwargs):
1446 if data is None:
-> 1447 return func(ax, *map(sanitize_sequence, args), **kwargs)
1448
1449 bound = new_sig.bind(ax, *args, **kwargs)
~/miniconda3/envs/sunpy-dev/lib/python3.8/site-packages/matplotlib/axes/_axes.py in hist(self, x, bins, range, density, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, **kwargs)
6569
6570 # Massage 'x' for processing.
-> 6571 x = cbook._reshape_2D(x, 'x')
6572 nx = len(x) # number of datasets
6573
~/miniconda3/envs/sunpy-dev/lib/python3.8/site-packages/matplotlib/cbook/__init__.py in _reshape_2D(X, name)
1373
1374 # Iterate over list of iterables.
-> 1375 if len(X) == 0:
1376 return [[]]
1377
TypeError: 'float' object cannot be interpreted as an integer
``` | priority | map html repr crashes when data array is a dask array if your map is backed by a dask array which map does nominally support the nifty html repr freaks out python import sunpy data sample import sunpy map import dask array m sunpy map map sunpy data sample aia image m dask sunpy map map dask array from array m data m meta m dask repr html gives python traceback typeerror traceback most recent call last in m dask repr html documents codes sunpy sunpy map mapbase py in repr html self figurecanvasbase fig ax fig subplots values bins patches ax hist finite data ravel bins norm centers norm bins bins data for c p in zip norm centers patches envs sunpy dev lib site packages matplotlib init py in inner ax data args kwargs def inner ax args data none kwargs if data is none return func ax map sanitize sequence args kwargs bound new sig bind ax args kwargs envs sunpy dev lib site packages matplotlib axes axes py in hist self x bins range density weights cumulative bottom histtype align orientation rwidth log color label stacked kwargs massage x for processing x cbook reshape x x nx len x number of datasets envs sunpy dev lib site packages matplotlib cbook init py in reshape x name iterate over list of iterables if len x return typeerror float object cannot be interpreted as an integer | 1 |
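The traceback above comes from handing Matplotlib's `hist` a lazy Dask array, whose `len()` and indexing semantics differ from a NumPy array's. One hedged workaround (not necessarily the fix sunpy shipped) is to materialise the data before plotting:

```python
import numpy as np
import dask.array as da

# A small Dask-backed "map data" array (stand-in for m_dask.data).
data = da.from_array(np.arange(12.0).reshape(3, 4), chunks=(2, 2))

# .ravel() on a Dask array is still lazy; Matplotlib's hist() internals
# choke on it, producing the TypeError shown in the traceback.
lazy_flat = data.ravel()

# Converting to a concrete ndarray first gives hist() input it can digest.
concrete = np.asarray(lazy_flat)
print(type(concrete).__name__, concrete.shape)  # → ndarray (12,)
```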
101,943 | 4,148,473,068 | IssuesEvent | 2016-06-15 11:08:39 | GreatEmerald/RGIC1601 | https://api.github.com/repos/GreatEmerald/RGIC1601 | opened | PlotResult module | Priority 4: Medium | A module that takes the same inputs as the ExportToFile module, neatly plots the resulting vector and point layers, and exports the image into a static image file (PNG). | 1.0 | PlotResult module - A module that takes the same inputs as the ExportToFile module, neatly plots the resulting vector and point layers, and exports the image into a static image file (PNG). | priority | plotresult module a module that takes the same inputs as the exporttofile module neatly plots the resulting vector and point layers and exports the image into a static image file png | 1 |
289,166 | 8,855,484,285 | IssuesEvent | 2019-01-09 06:42:17 | onaio/onadata | https://api.github.com/repos/onaio/onadata | closed | Don't recreate the same dataview | API Good First Issue Module: Datasets Priority: Medium Refactor Size: Small (≤1) | We should prevent the creation of exactly the same dataview for the same form.
When receiving a request to create a new dataview, we should check if the same exact dataview already exists, and if it does, we should not recreate it.
Related: https://github.com/onaio/onadata/issues/1497 | 1.0 | Don't recreate the same dataview - We should prevent the creation of exactly the same dataview for the same form.
When receiving a request to create a new dataview, we should check if the same exact dataview already exists, and if it does, we should not recreate it.
Related: https://github.com/onaio/onadata/issues/1497 | priority | don t recreate the same dataview we should prevent the creation of exactly the same dataview for the same form when receiving a request to create a new dataview we should check if the same exact dataview already exists and if it does we should not recreate it related | 1 |
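The check-before-create logic requested above is straightforward; below is a minimal in-memory sketch. The field names are assumptions, not onadata's actual `DataView` model, which would express the same check as a Django queryset filter:

```python
def get_or_create_dataview(store, xform_id, name, columns, query):
    """Create a dataview only if no existing one has the same definition."""
    candidate = {"xform_id": xform_id, "name": name,
                 "columns": sorted(columns), "query": query}
    for dv in store:
        if (dv["xform_id"], dv["columns"], dv["query"]) == \
           (candidate["xform_id"], candidate["columns"], candidate["query"]):
            return dv, False  # identical definition already exists
    store.append(candidate)
    return candidate, True

store = []
_, first_created = get_or_create_dataview(store, 7, "june", ["age", "name"], {"q": 1})
# Same form, same columns (order-insensitive), same query: no new row.
_, second_created = get_or_create_dataview(store, 7, "june-copy", ["name", "age"], {"q": 1})
print(first_created, second_created, len(store))  # → True False 1
```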
431,159 | 12,475,895,013 | IssuesEvent | 2020-05-29 12:31:10 | containrrr/watchtower | https://api.github.com/repos/containrrr/watchtower | closed | Download new image but don't update? | Priority: Medium Status: Available Status: Help Needed Status: Stale Type: Question | Hello,
I am wondering if it is possible to have watchtower automatically detect and download new images, but NOT restart or kill the running containers?
WATCHTOWER_NO_RESTART seems to pull and update the images but stops the running containers
WATCHTOWER_MONITOR_ONLY seems to be meant for notifications.
We will be using docker container to run a server for a '3d printer', so having a container restart in the middle of the print is unacceptable.
Ideally the new images would be automatically downloaded and a new container would be started on the next boot or when a user shutsdown the app.
Regards,
| 1.0 | Download new image but don't update? - Hello,
I am wondering if it is possible to have watchtower automatically detect and download new images, but NOT restart or kill the running containers?
WATCHTOWER_NO_RESTART seems to pull and update the images but stops the running containers
WATCHTOWER_MONITOR_ONLY seems to be meant for notifications.
We will be using docker container to run a server for a '3d printer', so having a container restart in the middle of the print is unacceptable.
Ideally the new images would be automatically downloaded and a new container would be started on the next boot or when a user shutsdown the app.
Regards,
| priority | download new image but don t update hello i am wondering if it is possible to have watchtower automatically detect and download new images but not restart or kill the running containers watchtower no restart seems to pull and update the images but stops the running containers watchtower monitor only seems to be meant for notifications we will be using docker container to run a server for a printer so having a container restart in the middle of the print is unacceptable ideally the new images would be automatically downloaded and a new container would be started on the next boot or when a user shutsdown the app regards | 1 |
166,467 | 6,305,258,388 | IssuesEvent | 2017-07-21 17:55:55 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | can't delete account if you use Facebook or Google authentication | priority: medium status: issue: in progress type: medium level coding | The delete account button now requires that you enter your password.
Players who use only Facebook authentication don't have a password. The delete feature needs to have an exception for them where they need to enter only "DELETE" (localised) as previously.
| 1.0 | can't delete account if you use Facebook or Google authentication - The delete account button now requires that you enter your password.
Players who use only Facebook authentication don't have a password. The delete feature needs to have an exception for them where they need to enter only "DELETE" (localised) as previously.
| priority | can t delete account if you use facebook or google authentication the delete account button now requires that you enter your password players who use only facebook authentication don t have a password the delete feature needs to have an exception for them where they need to enter only delete localised as previously | 1 |
198,230 | 6,971,400,054 | IssuesEvent | 2017-12-11 13:54:24 | canonical-websites/www.ubuntu.com | https://api.github.com/repos/canonical-websites/www.ubuntu.com | closed | Small screen scrolling tables have some very narrow columns. | Priority: Medium Type: Enhancement | ## Summary
Whilst reviewing issue #1771 I noticed the small screen horizontally scrolling tables have some very narrow columns. Perhaps these can be wider so the content is easier to read, especially as we are already scrolling the table.
## Process
- Go to the demo - http://www.ubuntu.com-1792-service-desc-update.demo.haus/legal/terms-and-policies/privacy-policy
- Reduce the screensize
- View the 'purpose' column in the 'Cookies we use' table.
## Screenshot

| 1.0 | Small screen scrolling tables have some very narrow columns. - ## Summary
Whilst reviewing issue #1771 I noticed the small screen horizontally scrolling tables have some very narrow columns. Perhaps these can be wider so the content is easier to read, especially as we are already scrolling the table.
## Process
- Go to the demo - http://www.ubuntu.com-1792-service-desc-update.demo.haus/legal/terms-and-policies/privacy-policy
- Reduce the screensize
- View the 'purpose' column in the 'Cookies we use' table.
## Screenshot

| priority | small screen scrolling tables have some very narrow columns summary whilst reviewing issue i noticed the small screen horizontally scrolling tables have some very narrow columns perhaps these can be wider so the content is easier to read especially as we are already scrolling the table process go to the demo reduce the screensize view the purpose column in the cookies we use table screenshot | 1 |
445,390 | 12,830,060,455 | IssuesEvent | 2020-07-07 00:57:40 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Only "Public" Profile Updates Shown on Activity Feed, Even Among Connected Users | bug priority: medium | **Describe the bug**
Profile field updates are only visible in Activity Feed when the profile field has been chosen for "public" visiblity. If two users are connected and the field is chosen for visibility to "connections", the updates do not appear in the Activity Feed of the connected user. Likewise, changes to profile fields chosen for visibility to "all members" do not appear in the Activity Feed of other users.
**To Reproduce**
Steps to reproduce the behavior:
1. Activate "member updates their profile details" in the Posts in Activity Feed BuddyBoss settings on the backend.
2. Connect User A and User B on the site.
3. Change a profile field in User A's profile with the profile field visibility selected for "connections".
4. Visit User B's Activity Feed to see that User A's profile field change is not displayed on User B's Activity Feed.
**Expected behavior**
Users would see an update in their Activity Feed when a connected user edits a profile field with visibility to "connections".
**Screenshots**
Screenshots would likely not be very helpful. The bug is a lack of an update on the Activity Feed.
**Support ticket links**
80293
| 1.0 | Only "Public" Profile Updates Shown on Activity Feed, Even Among Connected Users - **Describe the bug**
Profile field updates are only visible in Activity Feed when the profile field has been chosen for "public" visiblity. If two users are connected and the field is chosen for visibility to "connections", the updates do not appear in the Activity Feed of the connected user. Likewise, changes to profile fields chosen for visibility to "all members" do not appear in the Activity Feed of other users.
**To Reproduce**
Steps to reproduce the behavior:
1. Activate "member updates their profile details" in the Posts in Activity Feed BuddyBoss settings on the backend.
2. Connect User A and User B on the site.
3. Change a profile field in User A's profile with the profile field visibility selected for "connections".
4. Visit User B's Activity Feed to see that User A's profile field change is not displayed on User B's Activity Feed.
**Expected behavior**
Users would see an update in their Activity Feed when a connected user edits a profile field with visibility to "connections".
**Screenshots**
Screenshots would likely not be very helpful. The bug is a lack of an update on the Activity Feed.
**Support ticket links**
80293
| priority | only public profile updates shown on activity feed even among connected users describe the bug profile field updates are only visible in activity feed when the profile field has been chosen for public visiblity if two users are connected and the field is chosen for visibility to connections the updates do not appear in the activity feed of the connected user likewise changes to profile fields chosen for visibility to all members do not appear in the activity feed of other users to reproduce steps to reproduce the behavior activate member updates their profile details in the posts in activity feed buddyboss settings on the backend connect user a and user b on the site change a profile field in user a s profile with the profile field visibility selected for connections visit user b s activity feed to see that user a s profile field change is not displayed on user b s activity feed expected behavior users would see an update in their activity feed when a connected user edits a profile field with visibility to connections screenshots screenshots would likely not be very helpful the bug is a lack of an update on the activity feed support ticket links | 1 |
479,775 | 13,805,678,337 | IssuesEvent | 2020-10-11 14:40:35 | concrete5/concrete5 | https://api.github.com/repos/concrete5/concrete5 | closed | Form File/Image Upload Duplicate Files | Affects:Visitors Bug Priority:Medium Product Areas:Express Product Areas:File Manager Status:Available Type:Bug | On the form block, when using the image/file field. There is the option to choose where the uploaded files go within the file manager, under the options tab when editing the form.

When the selection is set to the default option (file manager), and a file is uploaded through the form, it creates two versions of the file, which are seemingly identical. When you choose to put the uploaded file into a specific folder in your file manager (Such as the "Test Folder" option I have in the above screencap), the file uploads to the folder, and a duplicate of the file appears in the default "File Manager" location as well. This only happens when uploading a file through the form, and it works just fine uploading a file any other way.
I've tried this with a couple of different websites, so I'm pretty sure there is something wrong with concrete5 itself. I'm using the latest version of concrete5 so far with both websites (8.5.2), so I'm not sure if this was always happening or if it was a recent regression.
| 1.0 | Form File/Image Upload Duplicate Files - On the form block, when using the image/file field. There is the option to choose where the uploaded files go within the file manager, under the options tab when editing the form.

When the selection is set to the default option (file manager), and a file is uploaded through the form, it creates two versions of the file, which are seemingly identical. When you choose to put the uploaded file into a specific folder in your file manager (Such as the "Test Folder" option I have in the above screencap), the file uploads to the folder, and a duplicate of the file appears in the default "File Manager" location as well. This only happens when uploading a file through the form, and it works just fine uploading a file any other way.
I've tried this with a couple of different websites, so I'm pretty sure there is something wrong with concrete5 itself. I'm using the latest version of concrete5 so far with both websites (8.5.2), so I'm not sure if this was always happening or if it was a recent regression.
| priority | form file image upload duplicate files on the form block when using the image file field there is the option to choose where the uploaded files go within the file manager under the options tab when editing the form when the selection is set to the default option file manager and a file is uploaded through the form it creates two versions of the file which are seemingly identical when you choose to put the uploaded file into a specific folder in your file manager such as the test folder option i have in the above screencap the file uploads to the folder and a duplicate of the file appears in the default file manager location as well this only happens when uploading a file through the form and it works just fine uploading a file any other way i ve tried this with a couple of different websites so i m pretty sure there is something wrong with itself i m using the latest version of so far with both websites so i m not sure if this was always happening or if it was a recent regression | 1 |
229,178 | 7,572,255,443 | IssuesEvent | 2018-04-23 14:30:41 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | There is no redirection or success notification when using SQL databases. | priority: medium status: confirmed 👍 type: bug 🐛 | <!-- ⚠️ Before writing your issue make sure you are using :-->
<!-- Node 9.x.x -->
<!-- npm 5.x.x -->
<!-- The latest version of Strapi -->
**Informations**
- **Node.js version**: 9.6.1
- **npm version**: 5.6.0
- **Strapi version**: 3.0.0-alpha.11.2
- **Database**: MySQL
- **Operating system**: Macintosh
**What is the current behavior?**
When I'm using a SQL database, and when I tried to save something a new CT or a new entry, I'm not redirected to the previous page or I cannot see any success notification.
**Steps to reproduce the problem**
- Create new project
- Use `strapi-knex` and `strapi-bookshelf` with a MySQL database.
- Go to the administration panel and try to save a new CT. Nothing happens in the front-end. However, the server has restarted and the job has been done.
**What is the expected behavior?**
I should be redirected to the previous page and I should receive a success notification.
**Suggested solutions**
We certainly need to update the `request` helper which is involved in the redirection process.
- [x] I'm sure that this issue hasn't already been referenced
| 1.0 | There is no redirection or success notification when using SQL databases. - <!-- ⚠️ Before writing your issue make sure you are using :-->
<!-- Node 9.x.x -->
<!-- npm 5.x.x -->
<!-- The latest version of Strapi -->
**Informations**
- **Node.js version**: 9.6.1
- **npm version**: 5.6.0
- **Strapi version**: 3.0.0-alpha.11.2
- **Database**: MySQL
- **Operating system**: Macintosh
**What is the current behavior?**
When I'm using a SQL database, and when I tried to save something a new CT or a new entry, I'm not redirected to the previous page or I cannot see any success notification.
**Steps to reproduce the problem**
- Create new project
- Use `strapi-knex` and `strapi-bookshelf` with a MySQL database.
- Go to the administration panel and try to save a new CT. Nothing happens in the front-end. However, the server has restarted and the job has been done.
**What is the expected behavior?**
I should be redirected to the previous page and I should receive a success notification.
**Suggested solutions**
We certainly need to update the `request` helper which is involved in the redirection process.
- [x] I'm sure that this issue hasn't already been referenced
| priority | there is no redirection or success notification when using sql databases informations node js version npm version strapi version alpha database mysql operating system macintosh what is the current behavior when i m using a sql database and when i tried to save something a new ct or a new entry i m not redirected to the previous page or i cannot see any success notification steps to reproduce the problem create new project use strapi knex and strapi bookshelf with a mysql database go to the administration panel and try to save a new ct nothing happens in the front end however the server has restarted and the job has been done what is the expected behavior i should be redirected to the previous page and i should receive a success notification suggested solutions we certainly need to update the request helper which is involved in the redirection process i m sure that this issue hasn t already been referenced | 1 |
250,535 | 7,978,454,346 | IssuesEvent | 2018-07-17 18:22:14 | EsotericSoftware/yamlbeans | https://api.github.com/repos/EsotericSoftware/yamlbeans | closed | Class tags containing '_' won't deserialize. | Priority-Medium bug imported | _From [jim.ferr...@gmail.com](https://code.google.com/u/106187084600841482336/) on March 12, 2011 17:11:47_
To reproduce the problem?
1. Create a Bean class with an embedded '_' in the name, eg com.example.yaml.Foo_Bar
2. Serialize an instance of this class to YAML.
3. Deserialize the YAML back to an instance.
Alternatively you can configure a class tag with the _:
```
config.setClassTag("Foo_Bar", Station.class);
```
I expect to get a deserialized Bean (or an error message about using '_'), but see a stack trace from Tokenizer instead:
java.lang.RuntimeException: Couldn't deserialize DTO from YAML String: Error tokenizing YAML.
at com.mot.tod.dto.utils.YAMLBeansSerializer.fromYAML(YAMLBeansSerializer.java:94)
at com.mot.tod.dto.utils.YAMLBeansSerializer.deepCopy(YAMLBeansSerializer.java:130)
at com.mot.tod.dto.DTOTest.testDTO(DTOTest.java:3032)
at com.mot.tod.dto.DTOTest.testLineupMap_Station(DTOTest.java:2662)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
Caused by: com.esotericsoftware.yamlbeans.YamlException: Error tokenizing YAML.
at com.esotericsoftware.yamlbeans.YamlReader.read(YamlReader.java:110)
at com.esotericsoftware.yamlbeans.YamlReader.read(YamlReader.java:91)
at com.mot.tod.dto.utils.YAMLBeansSerializer.fromYAML(YAMLBeansSerializer.java:90)
... 27 more
Caused by: com.esotericsoftware.yamlbeans.tokenizer.Tokenizer$TokenizerException: Line 0, column 4: While scanning a tag, expected ' ' but found: '_' (95)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.scanTag(Tokenizer.java:705)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.fetchTag(Tokenizer.java:486)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.fetchMoreTokens(Tokenizer.java:306)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.peekNextToken(Tokenizer.java:122)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.peekNextTokenType(Tokenizer.java:127)
at com.esotericsoftware.yamlbeans.parser.Parser$4.produce(Parser.java:133)
at com.esotericsoftware.yamlbeans.parser.Parser.getNextEvent(Parser.java:82)
at com.esotericsoftware.yamlbeans.YamlReader.read(YamlReader.java:101)
... 29 more
<b>What version of the product are you using? On what operating system?</b>
1.0.6 on Windows XP
<b>Please provide any additional information below.</b>
_Original issue: http://code.google.com/p/yamlbeans/issues/detail?id=12_
| 1.0 | Class tags containing '_' won't deserialize. - _From [jim.ferr...@gmail.com](https://code.google.com/u/106187084600841482336/) on March 12, 2011 17:11:47_
To reproduce the problem?
1. Create a Bean class with an embedded '_' in the name, eg com.example.yaml.Foo_Bar
2. Serialize an instance of this class to YAML.
3. Deserialize the YAML back to an instance.
Alternatively you can configure a class tag with the _:
```
config.setClassTag("Foo_Bar", Station.class);
```
I expect to get a deserialized Bean (or an error message about using '_'), but see a stack trace from Tokenizer instead:
java.lang.RuntimeException: Couldn't deserialize DTO from YAML String: Error tokenizing YAML.
at com.mot.tod.dto.utils.YAMLBeansSerializer.fromYAML(YAMLBeansSerializer.java:94)
at com.mot.tod.dto.utils.YAMLBeansSerializer.deepCopy(YAMLBeansSerializer.java:130)
at com.mot.tod.dto.DTOTest.testDTO(DTOTest.java:3032)
at com.mot.tod.dto.DTOTest.testLineupMap_Station(DTOTest.java:2662)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
Caused by: com.esotericsoftware.yamlbeans.YamlException: Error tokenizing YAML.
at com.esotericsoftware.yamlbeans.YamlReader.read(YamlReader.java:110)
at com.esotericsoftware.yamlbeans.YamlReader.read(YamlReader.java:91)
at com.mot.tod.dto.utils.YAMLBeansSerializer.fromYAML(YAMLBeansSerializer.java:90)
... 27 more
Caused by: com.esotericsoftware.yamlbeans.tokenizer.Tokenizer$TokenizerException: Line 0, column 4: While scanning a tag, expected ' ' but found: '_' (95)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.scanTag(Tokenizer.java:705)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.fetchTag(Tokenizer.java:486)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.fetchMoreTokens(Tokenizer.java:306)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.peekNextToken(Tokenizer.java:122)
at com.esotericsoftware.yamlbeans.tokenizer.Tokenizer.peekNextTokenType(Tokenizer.java:127)
at com.esotericsoftware.yamlbeans.parser.Parser$4.produce(Parser.java:133)
at com.esotericsoftware.yamlbeans.parser.Parser.getNextEvent(Parser.java:82)
at com.esotericsoftware.yamlbeans.YamlReader.read(YamlReader.java:101)
... 29 more
<b>What version of the product are you using? On what operating system?</b>
1.0.6 on Windows XP
<b>Please provide any additional information below.</b>
_Original issue: http://code.google.com/p/yamlbeans/issues/detail?id=12_
| priority | class tags containing won t deserialize from on march to reproduce the problem create a bean class with an embedded in the name eg com example yaml foo bar serialize an instance of this class to yaml deserialize the yaml back to an instance alternatively you can configure a class tag with the config setclasstag foo bar station class i expect to get a deserialized bean or an error message about using but see a stack trace from tokenizer instead java lang runtimeexception couldn t deserialize dto from yaml string error tokenizing yaml at com mot tod dto utils yamlbeansserializer fromyaml yamlbeansserializer java at com mot tod dto utils yamlbeansserializer deepcopy yamlbeansserializer java at com mot tod dto dtotest testdto dtotest java at com mot tod dto dtotest testlineupmap station dtotest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner run parentrunner java at org eclipse jdt internal runner run java at org eclipse jdt internal junit runner testexecution run testexecution java at org eclipse jdt internal junit runner remotetestrunner runtests remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner runtests remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner run remotetestrunner java at org eclipse jdt internal junit runner remotetestrunner main remotetestrunner java caused by com esotericsoftware yamlbeans yamlexception error tokenizing yaml at com esotericsoftware yamlbeans yamlreader read yamlreader java at com esotericsoftware yamlbeans yamlreader read yamlreader java at com mot tod dto utils yamlbeansserializer fromyaml yamlbeansserializer java more caused by com esotericsoftware yamlbeans tokenizer tokenizer tokenizerexception line column while scanning a tag expected but found at com esotericsoftware yamlbeans tokenizer tokenizer scantag tokenizer java at com esotericsoftware yamlbeans tokenizer tokenizer fetchtag tokenizer java at com esotericsoftware yamlbeans tokenizer tokenizer fetchmoretokens tokenizer java at com esotericsoftware yamlbeans tokenizer tokenizer peeknexttoken tokenizer java at com esotericsoftware yamlbeans tokenizer tokenizer peeknexttokentype tokenizer java at com esotericsoftware yamlbeans parser parser produce parser java at com esotericsoftware yamlbeans parser parser getnextevent parser java at com esotericsoftware yamlbeans yamlreader read yamlreader java more what version of the product are you using on what operating system on windows xp please provide any additional information below original issue | 1 |
316,757 | 9,654,638,420 | IssuesEvent | 2019-05-19 15:35:41 | dotkom/onlineweb-frontend | https://api.github.com/repos/dotkom/onlineweb-frontend | opened | Let users add money to their account/saldo | App: Profile Priority: Medium Type: Feature | Users should be able to add money to their saldo.
This should be done using [Stripe Elements](https://stripe.com/payments/elements).
The best way to do this would be using [Ract Stripe Elements](https://github.com/stripe/react-stripe-elements)
Depends on the API from https://github.com/dotkom/onlineweb4/pull/2266 | 1.0 | Let users add money to their account/saldo - Users should be able to add money to their saldo.
This should be done using [Stripe Elements](https://stripe.com/payments/elements).
The best way to do this would be using [Ract Stripe Elements](https://github.com/stripe/react-stripe-elements)
Depends on the API from https://github.com/dotkom/onlineweb4/pull/2266 | priority | let users add money to their account saldo users should be able to add money to their saldo this should be done using the best way to do this would be using depends on the api from | 1 |
168,373 | 6,370,557,799 | IssuesEvent | 2017-08-01 14:25:14 | tardis-sn/tardis | https://api.github.com/repos/tardis-sn/tardis | closed | Segfault during tardis cmontecarlo run | c-montecarlo priority - medium ready | This is what appears to cause the issue.
```
[26] last_line = 1822360143409613.250000
[26] next_line = 1822353496842714.500000
[26] ERROR: Comoving nu less than nu_line!
[26] comov_nu = -58783631258883757404258453611703772493428311659382084935097109061384145004566790513369500832759808.000000
[26] nu_line = 1822355712359626.750000
[26] (comov_nu - nu_line) / nu_line = -32256946797049520942182563511351622606622620138438708712185411031008847627110318080.000000
[26] r = 999999999999999967336168804116691273849533185806555472917961779471295845921727862608739868455469056.000000
[26] mu = -0.522198
[26] nu = -3489787318773531.500000
[26] doppler_factor = 16844473857376206614733575835140392265551777909465454558809109268882602250184163328.000000
[26] cur_zone_id = 1
```
I'll update later as I gather more information.
### SETUP:
```
dalek 1ac519cbb9f91941fbc4d51165e1942d9afc1e95
tardis 3370d8d67a9abf4102d9f9e1fb7fbcff8bcede18
```
`run_failed.py` is a simple script that reproduces the problem. You have to run with OMP_NUM_THREADS=8. The way I use it, I'm exporting that environment variable before running the file (nthreads = 0 in config allows for dynamically changing the number of threads)
[packet.zip](https://github.com/tardis-sn/tardis/files/261436/packet.zip)
| 1.0 | Segfault during tardis cmontecarlo run - This is what appears to cause the issue.
```
[26] last_line = 1822360143409613.250000
[26] next_line = 1822353496842714.500000
[26] ERROR: Comoving nu less than nu_line!
[26] comov_nu = -58783631258883757404258453611703772493428311659382084935097109061384145004566790513369500832759808.000000
[26] nu_line = 1822355712359626.750000
[26] (comov_nu - nu_line) / nu_line = -32256946797049520942182563511351622606622620138438708712185411031008847627110318080.000000
[26] r = 999999999999999967336168804116691273849533185806555472917961779471295845921727862608739868455469056.000000
[26] mu = -0.522198
[26] nu = -3489787318773531.500000
[26] doppler_factor = 16844473857376206614733575835140392265551777909465454558809109268882602250184163328.000000
[26] cur_zone_id = 1
```
I'll update later as I gather more information.
### SETUP:
```
dalek 1ac519cbb9f91941fbc4d51165e1942d9afc1e95
tardis 3370d8d67a9abf4102d9f9e1fb7fbcff8bcede18
```
`run_failed.py` is a simple script that reproduces the problem. You have to run with OMP_NUM_THREADS=8. The way I use it, I'm exporting that environment variable before running the file (nthreads = 0 in config allows for dynamically changing the number of threads)
[packet.zip](https://github.com/tardis-sn/tardis/files/261436/packet.zip)
| priority | segfault during tardis cmontecarlo run this is what appears to cause the issue last line next line error comoving nu less than nu line comov nu nu line comov nu nu line nu line r mu nu doppler factor cur zone id i ll update later as i gather more information setup dalek tardis run failed py is a simple script that reproduces the problem you have to run with omp num threads the way i use it i m exporting that environment variable before running the file nthreads in config allows for dynamically changing the number of threads | 1 |
472,182 | 13,617,760,888 | IssuesEvent | 2020-09-23 17:29:32 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | opened | Update Gifting Modal Design | help wanted priority: medium type: medium level coding | We have updated designs for the Gifting gems ready for implementation

Anyone wanting to work on this can ping @Tressley and will be given access to the Zeplin designs | 1.0 | Update Gifting Modal Design - We have updated designs for the Gifting gems ready for implementation

Anyone wanting to work on this can ping @Tressley and will be given access to the Zeplin designs | priority | update gifting modal design we have updated designs for the gifting gems ready for implementation anyone wanting to work on this can ping tressley and will be given access to the zeplin designs | 1 |
170,780 | 6,471,447,126 | IssuesEvent | 2017-08-17 11:39:54 | geosolutions-it/decat_geonode | https://api.github.com/repos/geosolutions-it/decat_geonode | closed | Impact Assessments: add promoted stauts | backend enhancement in progress Priority: Medium | Want to be able to quickly understand from the GUI if a COP has been created from this IA or not. | 1.0 | Impact Assessments: add promoted stauts - Want to be able to quickly understand from the GUI if a COP has been created from this IA or not. | priority | impact assessments add promoted stauts want to be able to quickly understand from the gui if a cop has been created from this ia or not | 1 |
861 | 2,504,222,830 | IssuesEvent | 2015-01-10 00:48:59 | ubc/acj-versus | https://api.github.com/repos/ubc/acj-versus | closed | Filter list of courses in reporting screen | bug medium priority | Currently, the list of courses used is a list of all the courses the user is enroled in. This is fine in most cases. However, when a "student" is a TA/instructor in a course and a student in another course, a problem will occur. The student will be able to not only download reports from the course they are teaching in but also the course they are enroled in as a student. | 1.0 | Filter list of courses in reporting screen - Currently, the list of courses used is a list of all the courses the user is enroled in. This is fine in most cases. However, when a "student" is a TA/instructor in a course and a student in another course, a problem will occur. The student will be able to not only download reports from the course they are teaching in but also the course they are enroled in as a student. | priority | filter list of courses in reporting screen currently the list of courses used is a list of all the courses the user is enroled in this is fine in most cases however when a student is a ta instructor in a course and a student in another course a problem will occur the student will be able to not only download reports from the course they are teaching in but also the course they are enroled in as a student | 1 |
342,386 | 10,316,144,182 | IssuesEvent | 2019-08-30 09:18:30 | geosolutions-it/geonode | https://api.github.com/repos/geosolutions-it/geonode | opened | [Proposal] API for calendar heat map component | Priority: Medium analytics enhancement | This is a proposal to optimize requests for create a calendar heat map.
### Visualization of publications in Calendar
`resource_type` can be one of `layers`, `maps` or `documents`
Request:
`/api/:resource_type/calendar/?date__gte=2019-01-01+00:00:00&date__lte=2019-12-31+23:59:59&date_type=publication`
query params:
`date__gte`: filter greater and equal to date
`date__lte`: filter less and equal to date
`date_type`: filter only a specific date_type
Response
```
{
"geonode_version": "2.10.1",
"meta": {
"max_count": 7,
"min_count": 1,
"date_type": "publication"
},
"objects": [
{
"date": "2019-01-01",
"count": 7
},
{
"date": "2019-01-02",
"count": 1
},
{
"date": "2019-06-10",
"count": 6
},
{
"date": "2019-08-29",
"count": 1
}
]
}
```
the calendar property contains list of days in chronological order with the count of resources. If a day in the time range has count equal to 0 it will not be listed in the response.
Example of visualization:

### List objects in calendar
`resource_type` can be one of `layers`, `maps` or `documents`
Request:
`/api/:resource_type/calendar/?date=2019-08-29&date_type=publication&list_objects=compact`
query params:
`date`: filter equal to date
`date_type`: filter only a specific date_type
`list_objects`: show list of objects for every day in response, one of `compact` or `full`
- `compact`: returns only following property of the object date, id, name;
- `full`: returns the complete object as in standard `/api/:resource_type` requests;
```
{
"geonode_version": "2.10.1",
"meta": {
"max_count": 7,
"min_count": 1,
"date_type": "publication"
},
"objects": [
{
"date": "2019-08-29",
"count": 1,
"objects": [
{
"date": "2019-08-29T08:03:05.498747",
"id": 78,
"name": "name",
"owner__username": "username",
"owner_name": "owner name",
"title": "Title"
}
]
}
]
}
```
Example of visualization:

| 1.0 | [Proposal] API for calendar heat map component - This is a proposal to optimize requests for create a calendar heat map.
### Visualization of publications in Calendar
`resource_type` can be one of `layers`, `maps` or `documents`
Request:
`/api/:resource_type/calendar/?date__gte=2019-01-01+00:00:00&date__lte=2019-12-31+23:59:59&date_type=publication`
query params:
`date__gte`: filter greater and equal to date
`date__lte`: filter less and equal to date
`date_type`: filter only a specific date_type
Response
```
{
"geonode_version": "2.10.1",
"meta": {
"max_count": 7,
"min_count": 1,
"date_type": "publication"
},
"objects": [
{
"date": "2019-01-01",
"count": 7
},
{
"date": "2019-01-02",
"count": 1
},
{
"date": "2019-06-10",
"count": 6
},
{
"date": "2019-08-29",
"count": 1
}
]
}
```
the calendar property contains list of days in chronological order with the count of resources. If a day in the time range has count equal to 0 it will not be listed in the response.
Example of visualization:

### List objects in calendar
`resource_type` can be one of `layers`, `maps` or `documents`
Request:
`/api/:resource_type/calendar/?date=2019-08-29&date_type=publication&list_objects=compact`
query params:
`date`: filter equal to date
`date_type`: filter only a specific date_type
`list_objects`: show list of objects for every day in response, one of `compact` or `full`
- `compact`: returns only following property of the object date, id, name;
- `full`: returns the complete object as in standard `/api/:resource_type` requests;
```
{
"geonode_version": "2.10.1",
"meta": {
"max_count": 7,
"min_count": 1,
"date_type": "publication"
},
"objects": [
{
"date": "2019-08-29",
"count": 1,
"objects": [
{
"date": "2019-08-29T08:03:05.498747",
"id": 78,
"name": "name",
"owner__username": "username",
"owner_name": "owner name",
"title": "Title"
}
]
}
]
}
```
Example of visualization:

| priority | api for calendar heat map component this is a proposal to optimize requests for create a calendar heat map visualization of publications in calendar resource type can be one of layers maps or documents request api resource type calendar date gte date lte date type publication query params date gte filter greater and equal to date date lte filter less and equal to date date type filter only a specific date type response geonode version meta max count min count date type publication objects date count date count date count date count the calendar property contains list of days in chronological order with the count of resources if a day in the time range has count equal to it will not be listed in the response example of visualization list objects in calendar resource type can be one of layers maps or documents request api resource type calendar date date type publication list objects compact query params date filter equal to date date type filter only a specific date type list objects show list of objects for every day in response one of compact or full compact returns only following property of the object date id name full returns the complete object as in standard api resource type requests geonode version meta max count min count date type publication objects date count objects date id name name owner username username owner name owner name title title example of visualization | 1 |
701,089 | 24,085,579,914 | IssuesEvent | 2022-09-19 10:36:51 | ChainSafe/forest | https://api.github.com/repos/ChainSafe/forest | closed | Move away from async_std mannerism | Priority: 3 - Medium Maintenance Status: Needs Triage Ready | **Issue summary**
<!-- A clear and concise description of what the task is. -->
- Use `std::sync::Arc` instead of `async_std::sync::Arc`. The async_std version is just a re-export.
- Use `std::future::Future` instead of `async_std::future::Future`. The async_std version is just a re-export.
- Use `std::pin::Pin` instead of `async_std::pin::Pin`. The async_std version is just a re-export.
- Use `std::task::{Context, Poll}` instead of `async_std::task::{Context, Poll}`. The async_std version is just a re-export.
- Use `tokio::sync::RwLock` instead of `async_std::sync::RwLock`. We can use the tokio version from an async_std context.
- Use `tokio::timer::Interval` instead of `async_std::stream::Interval`. We can use the tokio version from an async_std context.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 --> | 1.0 | Move away from async_std mannerism - **Issue summary**
<!-- A clear and concise description of what the task is. -->
- Use `std::sync::Arc` instead of `async_std::sync::Arc`. The async_std version is just a re-export.
- Use `std::future::Future` instead of `async_std::future::Future`. The async_std version is just a re-export.
- Use `std::pin::Pin` instead of `async_std::pin::Pin`. The async_std version is just a re-export.
- Use `std::task::{Context, Poll}` instead of `async_std::task::{Context, Poll}`. The async_std version is just a re-export.
- Use `tokio::sync::RwLock` instead of `async_std::sync::RwLock`. We can use the tokio version from an async_std context.
- Use `tokio::timer::Interval` instead of `async_std::stream::Interval`. We can use the tokio version from an async_std context.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 --> | priority | move away from async std mannerism issue summary use std sync arc instead of async std sync arc the async std version is just a re export use std future future instead of async std future future the async std version is just a re export use std pin pin instead of async std pin pin the async std version is just a re export use std task context poll instead of async std task context poll the async std version is just a re export use tokio sync rwlock instead of async std sync rwlock we can use the tokio version from an async std context use tokio timer interval instead of async std stream interval we can use the tokio version from an async std context other information and links | 1 |
314,160 | 9,593,434,013 | IssuesEvent | 2019-05-09 11:32:08 | Porkins97/DinoNuggetsGame | https://api.github.com/repos/Porkins97/DinoNuggetsGame | closed | Pickup Multiple Item Bug | Fixed Priority Medium bug | **Describe the bug**
When the collider attached to the character is touching multiple gameobjects at once and the player triggers the pick up function, both objects are picked up in the same hand.
**Version**
Test Scene
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Test Scene
2. Play Game
3. Move character until it is touching two objects at once
4. Press either Mouse0 or Mouse1
**Expected behaviour**
Only one object is picked up.
**Desktop System:**
- Windows 10
**Additional context**
This is a problem with the pickup code. When the item is held in the hand, it toggles a boolean that prevents you from being able to pick up items if you are holding something already. When you aren't and you are touching two items, it doesn't know to prioritise one over the other. | 1.0 | Pickup Multiple Item Bug - **Describe the bug**
When the collider attached to the character is touching multiple gameobjects at once and the player triggers the pick up function, both objects are picked up in the same hand.
**Version**
Test Scene
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Test Scene
2. Play Game
3. Move character until it is touching two objects at once
4. Press either Mouse0 or Mouse1
**Expected behaviour**
Only one object is picked up.
**Desktop System:**
- Windows 10
**Additional context**
This is a problem with the pickup code. When the item is held in the hand, it toggles a boolean that prevents you from being able to pick up items if you are holding something already. When you aren't and you are touching two items, it doesn't know to prioritise one over the other. | priority | pickup multiple item bug describe the bug when the collider attached to the character is touching multiple gameobjects at once and the player triggers the pick up function both objects are picked up in the same hand version test scene to reproduce steps to reproduce the behavior go to test scene play game move character until it is touching two objects at once press either or expected behaviour only one object is picked up desktop system windows additional context this is a problem with the pickup code when the item is held in the hand it toggles a boolean that prevents you from being able to pick up items if you are holding something already when you aren t and you are touching two items it doesn t know to prioritise one over the other | 1 |
359,835 | 10,681,640,490 | IssuesEvent | 2019-10-22 01:43:56 | AY1920S1-CS2103T-F13-2/main | https://api.github.com/repos/AY1920S1-CS2103T-F13-2/main | closed | MakeSort should allow you to clear the comparator as well. | priority.Medium status.Ongoing | Currently, you are not allowed to override the comparator with an empty one.
A function should be given to do so. | 1.0 | MakeSort should allow you to clear the comparator as well. - Currently, you are not allowed to override the comparator with an empty one.
A function should be given to do so. | priority | makesort should allow you to clear the comparator as well currently you are not allowed to override the comparator with an empty one a function should be given to do so | 1 |
438,713 | 12,643,481,147 | IssuesEvent | 2020-06-16 09:52:49 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1595] Notfication panel: section displacement | Category: UI Priority: Medium Status: Fixed Week Task | 1.
1) Notificztion are closed
2) One pesron create a contract, other person finish it. Few times
3) Person who created the contracts open panel

Everything is moved to the left, there is no any margins, should ve like that

2. Also the person, who was a contractor had broken notifications at some point. Couldn't find particular steps.

UPD: just created a world and open the panel

| 1.0 | [0.9.0 staging-1595] Notfication panel: section displacement - 1.
1) Notificztion are closed
2) One pesron create a contract, other person finish it. Few times
3) Person who created the contracts open panel

Everything is moved to the left, there is no any margins, should ve like that

2. Also the person, who was a contractor had broken notifications at some point. Couldn't find particular steps.

UPD: just created a world and open the panel

| priority | notfication panel section displacement notificztion are closed one pesron create a contract other person finish it few times person who created the contracts open panel everything is moved to the left there is no any margins should ve like that also the person who was a contractor had broken notifications at some point couldn t find particular steps upd just created a world and open the panel | 1 |
316,735 | 9,654,215,251 | IssuesEvent | 2019-05-19 12:19:15 | busy-beaver-dev/busy-beaver | https://api.github.com/repos/busy-beaver-dev/busy-beaver | closed | Migrate DM commands to slash commands | effort medium enhancement good first issue priority medium | As a user, I want to be able to register for busy-beaver by typing in `/busybeaver connect` Currently, we have to DM BusyBeaver. Which is not quite as obvious.
With the new Slash command endpoint, this is easy to do. Should look into how to document this. How are other apps with their user experience? | 1.0 | Migrate DM commands to slash commands - As a user, I want to be able to register for busy-beaver by typing in `/busybeaver connect` Currently, we have to DM BusyBeaver. Which is not quite as obvious.
With the new Slash command endpoint, this is easy to do. Should look into how to document this. How are other apps with their user experience? | priority | migrate dm commands to slash commands as a user i want to be able to register for busy beaver by typing in busybeaver connect currently we have to dm busybeaver which is not quite as obvious with the new slash command endpoint this is easy to do should look into how to document this how are other apps with their user experience | 1 |
77,180 | 3,506,269,234 | IssuesEvent | 2016-01-08 05:09:05 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Talents (BB #241) | Category: Miscellaneous migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 28.07.2010 04:48:14 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/241
<hr>
when player want To get Talent's click on talents for sample Mortal strike 0/5 > 1/5 > 2/5 > 3/5 > 4/5 > 5/5 > And here Again 0/5 player again can get mortal strike | 1.0 | Talents (BB #241) - This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 28.07.2010 04:48:14 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/241
<hr>
when player want To get Talent's click on talents for sample Mortal strike 0/5 > 1/5 > 2/5 > 3/5 > 4/5 > 5/5 > And here Again 0/5 player again can get mortal strike | priority | talents bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state invalid direct link when player want to get talent s click on talents for sample mortal strike and here again player again can get mortal strike | 1 |
699,213 | 24,009,183,570 | IssuesEvent | 2022-09-14 17:12:24 | ooni/probe | https://api.github.com/repos/ooni/probe | opened | hotfix: remove _probe_engine_sanitize_test_keys | bug hotfix priority/medium ooni/probe-engine | I've modified the code to always sanitize unconditionally. The new code is simpler but it always set the `_probe_engine_sanitize_test_keys` annotation. This does not make any sense. We should remove this annotation. | 1.0 | hotfix: remove _probe_engine_sanitize_test_keys - I've modified the code to always sanitize unconditionally. The new code is simpler but it always set the `_probe_engine_sanitize_test_keys` annotation. This does not make any sense. We should remove this annotation. | priority | hotfix remove probe engine sanitize test keys i ve modified the code to always sanitize unconditionally the new code is simpler but it always set the probe engine sanitize test keys annotation this does not make any sense we should remove this annotation | 1 |
305,552 | 9,371,035,589 | IssuesEvent | 2019-04-03 14:38:04 | netdata/netdata | https://api.github.com/repos/netdata/netdata | closed | support filtering disk naming by id | area/collectors feature request priority/medium | Following https://github.com/netdata/netdata/pull/3084 and https://github.com/netdata/netdata/issues/3081, I have set
```
[plugin:proc:/proc/diskstats]
name disks by id = yes
```
This has the effect of setting my disk names to names which look like
`wwn-0x5000c5xxxxxxxxxx`
instead of (what I would prefer)
`ata-CT250MX500SSD1_xxxxxxxxxx` or `md-name-any:PV1`
Can a prefix filter be added to the `name disks by id` option to filter for the desired name format? | 1.0 | support filtering disk naming by id - Following https://github.com/netdata/netdata/pull/3084 and https://github.com/netdata/netdata/issues/3081, I have set
```
[plugin:proc:/proc/diskstats]
name disks by id = yes
```
This has the effect of setting my disk names to names which look like
`wwn-0x5000c5xxxxxxxxxx`
instead of (what I would prefer)
`ata-CT250MX500SSD1_xxxxxxxxxx` or `md-name-any:PV1`
Can a prefix filter be added to the `name disks by id` option to filter for the desired name format? | priority | support filtering disk naming by id following and i have set name disks by id yes this has the effect of setting my disk names to names which look like wwn instead of what i would prefer ata xxxxxxxxxx or md name any can a prefix filter be added to the name disks by id option to filter for the desired name format | 1 |
273,231 | 8,528,126,566 | IssuesEvent | 2018-11-02 22:06:38 | PushTracker/EvalApp | https://api.github.com/repos/PushTracker/EvalApp | opened | translate eval list view page | bug medium-priority | right now the translation service is not being used for the evaluation list view template. | 1.0 | translate eval list view page - right now the translation service is not being used for the evaluation list view template. | priority | translate eval list view page right now the translation service is not being used for the evaluation list view template | 1 |
496,744 | 14,353,593,085 | IssuesEvent | 2020-11-30 07:12:00 | art-community/art-java | https://api.github.com/repos/art-community/art-java | closed | HTTP Communicators handlers | HTTP client enhancement medium priority | For now, HTTP communicator (sync/async modes) not handle 4xx error codes.
In this issue needs to change logic of parsing Apache HTTP Response and add checks on HTTP statuses.
Check must be configurable:
* CHECKED mode - we are checking status code and throws an HttpCommunicationResponseException with HTTP Response as field
* UNCHECKED - we are just parsing HTTP Response ignoring statuses | 1.0 | HTTP Communicators handlers - For now, HTTP communicator (sync/async modes) not handle 4xx error codes.
In this issue needs to change logic of parsing Apache HTTP Response and add checks on HTTP statuses.
Check must be configurable:
* CHECKED mode - we are checking status code and throws an HttpCommunicationResponseException with HTTP Response as field
* UNCHECKED - we are just parsing HTTP Response ignoring statuses | priority | http communicators handlers for now http communicator sync async modes not handle error codes in this issue needs to change logic of parsing apache http response and add checks on http statuses check must be configurable checked mode we are checking status code and throws an httpcommunicationresponseexception with http response as field unchecked we are just parsing http response ignoring statuses | 1 |
165,171 | 6,264,836,799 | IssuesEvent | 2017-07-16 12:14:20 | carambalabs/xcodeproj | https://api.github.com/repos/carambalabs/xcodeproj | opened | Cleanup comment generation | difficulty:moderate priority:medium status:ready-development type:enhancement | ## Context
There's some duplication in the way comments are generated for `ProjectElement`s. Some of them implement their own private function that generates the comment, finding other elements in the `PBXProj` model. [Here](https://github.com/carambalabs/xcodeproj/blob/master/Sources/xcodeproj/XCConfigurationList.swift#L100) is an example of that kind of functions.
## What
Extract these methods, as [we already did](https://github.com/carambalabs/xcodeproj/blob/master/Sources/xcodeproj/PBXProj.swift#L152), and update the models to use these methods instead.
## Proposal
- Check which models are providing their own implementation when the plist dictionary is generated.
- Move the implementation to the `PBXProj` model extension.
- Check out if there's already a similar method that we can reuse.
| 1.0 | Cleanup comment generation - ## Context
There's some duplication in the way comments are generated for `ProjectElement`s. Some of them implement their own private function that generates the comment, finding other elements in the `PBXProj` model. [Here](https://github.com/carambalabs/xcodeproj/blob/master/Sources/xcodeproj/XCConfigurationList.swift#L100) is an example of that kind of functions.
## What
Extract these methods, as [we already did](https://github.com/carambalabs/xcodeproj/blob/master/Sources/xcodeproj/PBXProj.swift#L152), and update the models to use these methods instead.
## Proposal
- Check which models are providing their own implementation when the plist dictionary is generated.
- Move the implementation to the `PBXProj` model extension.
- Check out if there's already a similar method that we can reuse.
| priority | cleanup comment generation context there s some duplication in the way comments are generated for projectelement s some of them implement their own private function that generates the comment finding other elements in the pbxproj model is an example of that kind of functions what extract these methods as and update the models to use these methods instead proposal check which models are providing their own implementation when the plist dictionary is generated move the implementation to the pbxproj model extension check out if there s already a similar method that we can reuse | 1 |
22,329 | 2,648,764,350 | IssuesEvent | 2015-03-14 07:14:18 | pyroscope/pyroscope | https://api.github.com/repos/pyroscope/pyroscope | closed | Stats / RRD data aquisition | auto-migrated Milestone-Torque Priority-Medium Type-Task | ```
Poll relevant data in >= 1 sec intervals (probably different sets of data
at different intervals) and put them into a RRD.
```
Original issue reported on code.google.com by `pyroscope.project` on 11 Jun 2009 at 7:12 | 1.0 | Stats / RRD data aquisition - ```
Poll relevant data in >= 1 sec intervals (probably different sets of data
at different intervals) and put them into a RRD.
```
Original issue reported on code.google.com by `pyroscope.project` on 11 Jun 2009 at 7:12 | priority | stats rrd data aquisition poll relevant data in sec intervals probably different sets of data at different intervals and put them into a rrd original issue reported on code google com by pyroscope project on jun at | 1 |
333,270 | 10,119,809,660 | IssuesEvent | 2019-07-31 12:25:12 | firecracker-microvm/firecracker | https://api.github.com/repos/firecracker-microvm/firecracker | closed | Unify timing functionalities | Contribute: Good First Issue Priority: Medium Quality: Improvement | Firecracker is currently using 3 different implementations for getting the time: the `time` crate, the `chrono` crate, and a wrapper over `libc::clock_gettime` (for CPU time). These should be unified under a (set of) wrapper(s) in a separate crate. | 1.0 | Unify timing functionalities - Firecracker is currently using 3 different implementations for getting the time: the `time` crate, the `chrono` crate, and a wrapper over `libc::clock_gettime` (for CPU time). These should be unified under a (set of) wrapper(s) in a separate crate. | priority | unify timing functionalities firecracker is currently using different implementations for getting the time the time crate the chrono crate and a wrapper over libc clock gettime for cpu time these should be unified under a set of wrapper s in a separate crate | 1 |
353,637 | 10,555,280,124 | IssuesEvent | 2019-10-03 21:27:45 | OpenSRP/opensrp-client-chw | https://api.github.com/repos/OpenSRP/opensrp-client-chw | closed | WASH task should not default sorted to the top in the Activity tab | bug medium priority | - [ ] The WASH task, like any other task, should be sorted in reverse chronological order, just like all of the other tasks in the Activity tab. Right now, it remains fixed to the top of the list.

Thanks @rkodev For bringing this to our attention. | 1.0 | WASH task should not default sorted to the top in the Activity tab - - [ ] The WASH task, like any other task, should be sorted in reverse chronological order, just like all of the other tasks in the Activity tab. Right now, it remains fixed to the top of the list.

Thanks @rkodev For bringing this to our attention. | priority | wash task should not default sorted to the top in the activity tab the wash task like any other task should be sorted in reverse chronological order just like all of the other tasks in the activity tab right now it remains fixed to the top of the list thanks rkodev for bringing this to our attention | 1 |
96,657 | 3,971,590,059 | IssuesEvent | 2016-05-04 12:35:41 | googlei18n/libphonenumber | https://api.github.com/repos/googlei18n/libphonenumber | closed | Add geocoding for ML | bug priority-medium | Imported from [Google Code issue #148](https://code.google.com/p/libphonenumber/issues/detail?id=148) created by [cfkoch](https://code.google.com/u/108438119205599918600/) on 2012-06-16T02:33:59.000Z:
----
Data does not exist.
Numbering Plan as of 19.V.2010 contains it.
http://www.itu.int/oth/T0202000083/en
| 1.0 | Add geocoding for ML - Imported from [Google Code issue #148](https://code.google.com/p/libphonenumber/issues/detail?id=148) created by [cfkoch](https://code.google.com/u/108438119205599918600/) on 2012-06-16T02:33:59.000Z:
----
Data does not exist.
Numbering Plan as of 19.V.2010 contains it.
http://www.itu.int/oth/T0202000083/en
| priority | add geocoding for ml imported from created by on data does not exist numbering plan as of v contains it | 1 |
728,235 | 25,072,358,344 | IssuesEvent | 2022-11-07 13:09:14 | AY2223S1-CS2103T-T09-2/tp | https://api.github.com/repos/AY2223S1-CS2103T-T09-2/tp | closed | [PE-D][Tester D] List components clickability | priority.Medium PED.GoodToFix | List components seem to be clickable, but does nothing when clicked, it only shows a blue highlight of the box

Perhaps you can solve this by using the command `mouseTransparent="false" ` for your component in mainWindow.fxml (😊: hope this helps!)
<!--session: 1666944873059-ed908e70-adf6-4ff0-85a9-18e7e7baad97-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: loyhongshenggg/ped#7 | 1.0 | [PE-D][Tester D] List components clickability - List components seem to be clickable, but does nothing when clicked, it only shows a blue highlight of the box

Perhaps you can solve this by using the command `mouseTransparent="false" ` for your component in mainWindow.fxml (😊: hope this helps!)
<!--session: 1666944873059-ed908e70-adf6-4ff0-85a9-18e7e7baad97-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: loyhongshenggg/ped#7 | priority | list components clickability list components seem to be clickable but does nothing when clicked it only shows a blue highlight of the box perhaps you can solve this by using the command mousetransparent false for your component in mainwindow fxml 😊 hope this helps labels severity medium type functionalitybug original loyhongshenggg ped | 1 |
336,899 | 10,198,975,665 | IssuesEvent | 2019-08-13 07:17:22 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | SQL Reserach: collections support | Estimation Required Module: SQL Research Priority: Medium Source: Internal Team: Core Type: Enhancement | The SQL standard defines two collection types: arrays and multisets. We must support them. This includes:
1. Define the set of functions to be supported as per the standard
2. Define converters for incoming collections and arrays
3. Implement the required functions | 1.0 | SQL Reserach: collections support - The SQL standard defines two collection types: arrays and multisets. We must support them. This includes:
1. Define the set of functions to be supported as per the standard
2. Define converters for incoming collections and arrays
3. Implement the required functions | priority | sql reserach collections support the sql standard defines two collection types arrays and multisets we must support them this includes define the set of functions to be supported as per the standard define converters for incoming collections and arrays implement the required functions | 1 |
563,503 | 16,687,081,519 | IssuesEvent | 2021-06-08 09:08:06 | stackabletech/product-config | https://api.github.com/repos/stackabletech/product-config | opened | Rename Conf in PropertyNameKind to File | good first issue priority/medium | The `PropertyNameKind` enum consists of three entries:
Conf(String),
Env,
Cli,
All of them offer config options. "Conf" in the enum refers to a config file (and file name via String).
For consistency we therefore should rename "Conf" to "File". | 1.0 | Rename Conf in PropertyNameKind to File - The `PropertyNameKind` enum consists of three entries:
Conf(String),
Env,
Cli,
All of them offer config options. "Conf" in the enum refers to a config file (and file name via String).
For consistency we therefore should rename "Conf" to "File". | priority | rename conf in propertynamekind to file the propertynamekind enum consists of three entries conf string env cli all of them offer config options conf in the enum refers to a config file and file name via string for consistency we therefore should rename conf to file | 1 |
77,150 | 3,506,265,947 | IssuesEvent | 2016-01-08 05:06:53 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Talent inspection (BB #211) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** apple
**Original Date:** 04.07.2010 19:23:39 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/211
<hr>
Still bugged, I've got the newest core, removed cache and shit : / | 1.0 | Talent inspection (BB #211) - This issue was migrated from bitbucket.
**Original Reporter:** apple
**Original Date:** 04.07.2010 19:23:39 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/211
<hr>
Still bugged, I've got the newest core, removed cache and shit : / | priority | talent inspection bb this issue was migrated from bitbucket original reporter apple original date gmt original priority major original type bug original state resolved direct link still bugged i ve got the newest core removed cache and shit | 1 |
238,603 | 7,781,215,937 | IssuesEvent | 2018-06-05 22:59:17 | SellBirdHQ/theme | https://api.github.com/repos/SellBirdHQ/theme | closed | Order filters | EDD has PR priority: medium priorty: high | The filters on the Orders tab of the dashboard need to be made functional.
Previously: https://github.com/SellBirdHQ/sellbird/issues/9 | 1.0 | Order filters - The filters on the Orders tab of the dashboard need to be made functional.
Previously: https://github.com/SellBirdHQ/sellbird/issues/9 | priority | order filters the filters on the orders tab of the dashboard need to be made functional previously | 1 |
824,118 | 31,141,637,198 | IssuesEvent | 2023-08-16 00:50:48 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | Dark simulacrum holy shock interaction | Class: Death Knight Spell Priority: Medium PVP: Mechanic | [//]: # (REMBEMBER! Add links to things related to the bug using for example:)
[//]: # (http://wowhead.com/)
[//]: # (cata-twinhead.twinstar.cz)
**Description:**
Using holy shock while under dark simulacrum should remove the buff and award to spell to dk, here it doesnt.
Why is this important?
Because you wanna give a "bad spell" to DK instead of a "good one", so it impact pvp a lot
**How to reproduce:**
Use holy shock while under effect of dark sim.
**How it should work:**
Holy shock among other spells should be copyable, as far as im aware most of mana costing spells should be copyable
**Database links:**
Source from 2011
https://www.dual-boxing.com/threads/37454-Dark-Simulacrum | 1.0 | Dark simulacrum holy shock interaction - [//]: # (REMBEMBER! Add links to things related to the bug using for example:)
[//]: # (http://wowhead.com/)
[//]: # (cata-twinhead.twinstar.cz)
**Description:**
Using holy shock while under dark simulacrum should remove the buff and award to spell to dk, here it doesnt.
Why is this important?
Because you wanna give a "bad spell" to DK instead of a "good one", so it impact pvp a lot
**How to reproduce:**
Use holy shock while under effect of dark sim.
**How it should work:**
Holy shock among other spells should be copyable, as far as im aware most of mana costing spells should be copyable
**Database links:**
Source from 2011
https://www.dual-boxing.com/threads/37454-Dark-Simulacrum | priority | dark simulacrum holy shock interaction rembember add links to things related to the bug using for example cata twinhead twinstar cz description using holy shock while under dark simulacrum should remove the buff and award to spell to dk here it doesnt why is this important because you wanna give a bad spell to dk instead of a good one so it impact pvp a lot how to reproduce use holy shock while under effect of dark sim how it should work holy shock among other spells should be copyable as far as im aware most of mana costing spells should be copyable database links source from | 1 |