Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
48,666 | 2,999,433,987 | IssuesEvent | 2015-07-23 19:00:36 | jayway/powermock | https://api.github.com/repos/jayway/powermock | closed | Rename invocation to invoke in Mockito API | enhancement imported Milestone-Release1.3 Priority-High | _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on September 29, 2009 19:16:10_
do it
_Original issue: http://code.google.com/p/powermock/issues/detail?id=173_ | 1.0 | Rename invocation to invoke in Mockito API - _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on September 29, 2009 19:16:10_
do it
_Original issue: http://code.google.com/p/powermock/issues/detail?id=173_ | priority | rename invocation to invoke in mockito api from on september do it original issue | 1 |
739,410 | 25,595,513,772 | IssuesEvent | 2022-12-01 15:55:33 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | closed | Mobile: Backend Connection of Create Learning Space | priority-high Status: Completed mobile back-connection | ### Issue Description
since the screen for creating a learning space is ready, it should also be connected to the backend of our application.
### Step Details
Steps that will be performed:
- [x] implement request model for post learning space
- [x] connect request to button
- [x] implement response model
- [x] match response model with inner model to transfer to other pages
- [x] implement request model for patch/edit learning space
- [x] connect edit request to appropriate use case
- [ ] implement unit tests for the implemented functionality
### Final Actions
once all the steps have been completed, a pr will be opened to get the review of mobile team members.
### Deadline of the Issue
30.11.2022
### Reviewer
Egemen Atik
### Deadline for the Review
03.12.2022 | 1.0 | Mobile: Backend Connection of Create Learning Space - ### Issue Description
since the screen for creating a learning space is ready, it should also be connected to the backend of our application.
### Step Details
Steps that will be performed:
- [x] implement request model for post learning space
- [x] connect request to button
- [x] implement response model
- [x] match response model with inner model to transfer to other pages
- [x] implement request model for patch/edit learning space
- [x] connect edit request to appropriate use case
- [ ] implement unit tests for the implemented functionality
### Final Actions
once all the steps have been completed, a pr will be opened to get the review of mobile team members.
### Deadline of the Issue
30.11.2022
### Reviewer
Egemen Atik
### Deadline for the Review
03.12.2022 | priority | mobile backend connection of create learning space issue description since the screen for creating a learning space is ready it should also be connected to the backend of our application step details steps that will be performed implement request model for post learning space connect request to button implement response model match response model with inner model to transfer to other pages implement request model for patch edit learning space connect edit request to appropriate use case implement unit tests for the implemented functionality final actions once all the steps have been completed a pr will be opened to get the review of mobile team members deadline of the issue reviewer egemen atik deadline for the review | 1 |
441,230 | 12,709,645,135 | IssuesEvent | 2020-06-23 12:42:33 | Kameldrengene/cdioFinal_f2020 | https://api.github.com/repos/Kameldrengene/cdioFinal_f2020 | closed | General HTML | Front end HTML Heavy High Priority | - [ ] Always be able to see who you are logged in as? (2 lines to implement in all .html body files)
- [x] Log out button | 1.0 | General HTML - - [ ] Always be able to see who you are logged in as? (2 lines to implement in all .html body files)
- [x] Log out button | priority | general html always be able to see who you are logged in as lines to implement in all html body files log out button | 1 |
776,150 | 27,248,565,425 | IssuesEvent | 2023-02-22 05:34:33 | cbgaindia/district-dashboard | https://api.github.com/repos/cbgaindia/district-dashboard | closed | Change colour of download data tab | bug high priority | The colour is still from the constituency dashboard
<img width="843" alt="Screenshot 2023-02-21 at 4 40 09 PM" src="https://user-images.githubusercontent.com/67921422/220329418-0af00116-02aa-43df-a2ee-e1e679ecde82.png">
<img width="887" alt="Screenshot 2023-02-21 at 4 39 56 PM" src="https://user-images.githubusercontent.com/67921422/220329431-2e3b3835-4dd2-4ea8-8495-fb8fb015705d.png">
| 1.0 | Change colour of download data tab - The colour is still from the constituency dashboard
<img width="843" alt="Screenshot 2023-02-21 at 4 40 09 PM" src="https://user-images.githubusercontent.com/67921422/220329418-0af00116-02aa-43df-a2ee-e1e679ecde82.png">
<img width="887" alt="Screenshot 2023-02-21 at 4 39 56 PM" src="https://user-images.githubusercontent.com/67921422/220329431-2e3b3835-4dd2-4ea8-8495-fb8fb015705d.png">
| priority | change colour of download data tab the colour is still from the constituency dashboard img width alt screenshot at pm src img width alt screenshot at pm src | 1 |
189,489 | 6,798,145,699 | IssuesEvent | 2017-11-02 03:24:05 | OpenKore/openkore | https://api.github.com/repos/OpenKore/openkore | closed | xKore 1 and xKore 3 do not support encryptMessageID | bug help wanted network/poseidon priority: high | ###### Summary:
OpenKore xKore 1 and xKore 3 do not support encryptMessageID
###### Affected configuration(s)/ file(s):
network\xkore
network\xkoreProxy
###### Impact:
no one can connect to a server that needs encryptMessageID
###### Expected Behavior:
connect to servers using encryptMessageID
###### Actual Behavior:
Kore does not work with encryptMessageID when we connect to a server that needs it.
###### Steps to Reproduce:
Just try to use xKore 1 or 3 on a server that has crypt keys
References:
#1174
#1160
#1133
#977
#540
initial support:
https://github.com/alisonrag/openkore/commit/3034f398f7aed74835c24ec24b171d2ee8a0ba88
bug in map with players at login, kore lost itself in keys or maybe in packet '-' | 1.0 | xKore 1 and xKore 3 do not support encryptMessageID - ###### Summary:
OpenKore xKore 1 and xKore 3 do not support encryptMessageID
###### Affected configuration(s)/ file(s):
network\xkore
network\xkoreProxy
###### Impact:
no one can connect to a server that needs encryptMessageID
###### Expected Behavior:
connect to servers using encryptMessageID
###### Actual Behavior:
Kore does not work with encryptMessageID when we connect to a server that needs it.
###### Steps to Reproduce:
Just try to use xKore 1 or 3 on a server that has crypt keys
References:
#1174
#1160
#1133
#977
#540
initial support:
https://github.com/alisonrag/openkore/commit/3034f398f7aed74835c24ec24b171d2ee8a0ba88
bug in map with players when login, kore lost itself in keys or maybe in packet '-' | priority | xkore and xkore not have suport to encryptmessageid summary openkore xkore and xkore not have suport to encryptmessageid affected configuration s file s network xkore network xkoreproxy impact no one can connect to a server that need encryptmessageid expected behavior connect to servers using encryptmessageid actual behavior kore not work with encryptmessageid when we connect to a server that need steps to reproduce just try to use xkore or in a server that have crypt keys references innitial support bug in map with players when login kore lost itself in keys or maybe in packet | 1 |
659,806 | 21,941,995,948 | IssuesEvent | 2022-05-23 19:09:08 | LDSSA/wiki | https://api.github.com/repos/LDSSA/wiki | closed | Batch 6 Calendar proposal - Capstone | priority:high open discussion Batch 6 | This is a continuation of issue #346, limited to the capstone calendar only.
## The proposal
The main objectives are:
- Line up the calendar with the academic year, starting in September (suggested in #345)
- Adjust capstone deadlines in accordance with #345
I propose the following calendar, available in a google sheet and accessible through your LDSA account [here](https://docs.google.com/spreadsheets/d/1XtpbVT8YHB1mSvJxP8rvWFXqQTdXDPQTiV5or8uzLwI/edit?usp=sharing)
Here are the most important dates for the capstone, in the format of tables:
| Description | Date |
|-------------------------------------------------------------|------------------------------------------------------------------|
| Kick off | 2023-04-03 |
| Capstone Clarification email | 2023-04-09 |
| Trial round of requests | 2023-04-23 |
| Deadline Provisory report 1 and app launched | 2023-04-30 , 23h59 UTC |
| First round of requests | 2023-05-01 to 2023-05-07 |
| Comments to report 1 made by instructors | 2023-05-07 23h59 UTC |
| Deadline report 2 + redeploy + address comments to report 1 | 2023-05-28, 23h59 UTC |
| Second round of requests | 2023-05-29 to 2023-06-02 |
| Comments to report 2 made by instructors | 2023-06-04 , 23h59 UTC |
| Deadline address comments to report 2 | 2023-06-11, 23h59 UTC |
| Graduates announced | 2023-06-19 |
| 1.0 | Batch 6 Calendar proposal - Capstone - This is a continuation of issue #346, limited to the capstone calendar only.
## The proposal
The main objectives are:
- Line up the calendar with the academic year, starting in September (suggested in #345)
- Adjust capstone deadlines in accordance with #345
I propose the following calendar, available in a google sheet and accessible through your LDSA account [here](https://docs.google.com/spreadsheets/d/1XtpbVT8YHB1mSvJxP8rvWFXqQTdXDPQTiV5or8uzLwI/edit?usp=sharing)
Here are the most important dates for the capstone, in the format of tables:
| Description | Date |
|-------------------------------------------------------------|------------------------------------------------------------------|
| Kick off | 2023-04-03 |
| Capstone Clarification email | 2023-04-09 |
| Trial round of requests | 2023-04-23 |
| Deadline Provisory report 1 and app launched | 2023-04-30 , 23h59 UTC |
| First round of requests | 2023-05-01 to 2023-05-07 |
| Comments to report 1 made by instructors | 2023-05-07 23h59 UTC |
| Deadline report 2 + redeploy + address comments to report 1 | 2023-05-28, 23h59 UTC |
| Second round of requests | 2023-05-29 to 2023-06-02 |
| Comments to report 2 made by instructors | 2023-06-04 , 23h59 UTC |
| Deadline address comments to report 2 | 2023-06-11, 23h59 UTC |
| Graduates announced | 2023-06-19 |
| priority | batch calendar proposal capstone this is a continuation of issue limited to the capstone calendar only the proposal the main objectives are line up the calendar with the academic year starting in september suggested in adjust capstone deadlines in accordance with i propose the following calendar available in a google sheet and accessible through your ldsa account here are the most important dates for the capstone in the format of tables description date kick off capstone clarification email trial round of requests deadline provisory report and app launched utc first round of requests to comments to report made by instructors utc deadline report redeploy address comments to report utc second round of requests to comments to report made by instructors utc deadline address comments to report utc graduates announced | 1 |
594,181 | 18,026,014,551 | IssuesEvent | 2021-09-17 04:43:02 | input-output-hk/cardano-graphql | https://api.github.com/repos/input-output-hk/cardano-graphql | closed | Circulating and total supply computation | BUG PRIORITY:HIGH SEVERITY:MEDIUM | ### Environment
[Release 4.0.0 ](https://github.com/input-output-hk/cardano-graphql/commit/058873f7cfa6d5d287f859a58bf511b128da6494)
#### Platform
Linux / Other
**Platform version**: NixOS 21.05
### Steps to reproduce the bug
Query this:
```
query{
ada{
supply{
circulating
max
total
}
}
}
```
### What is the expected behavior?
Get correct number for circulating supply.
Get correct number for total supply.
1. Circulating supply:
I have a strong suspicion that circulating supply [calculation](https://github.com/input-output-hk/cardano-graphql/blob/master/packages/api-cardano-db-hasura/src/HasuraClient.ts#L136) might be off by ~billion.
The current calculation consists of the sum `utxos + (rewards - withdrawals)`, but if you look closely, `rewards - withdrawals` is in fact negative.
Querying values directly from [db sync (9.0.0)](https://github.com/input-output-hk/cardano-db-sync/commit/0b054e18aec3b1e387a1c7b713c562ade37417a3).
Utxos:
```
SELECT SUM(txo.value)
FROM tx_out txo
LEFT JOIN tx_in txi ON (txo.tx_id = txi.tx_out_id)
AND (txo.index = txi.tx_out_index)
WHERE txi IS NULL;
-------------------
32247431809402515
(1 row)
```
We can also confirm this by querying graphql:
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ utxos_aggregate { aggregate {sum {value}} } }"}' http://localhost:3002/graphql
{
"data": { "utxos_aggregate": { "aggregate": { "sum": { "value": "32247431809402516" } } } }
}
```
Next, we calculate rewards-withdrawals in dbsync:
```
SELECT
(SELECT COALESCE(SUM(amount), 0) FROM reward)
-
(SELECT SUM(amount) FROM withdrawal);
------------------
-159211187454300
(1 row)
```
Adding it up to utxos, we get: `32088220621948215`.
This corresponds to what can be seen in graphql as circulating supply:
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ ada{ supply{ circulating max total } } }"}' http://localhost:3002/graphql
{
"data": {
"ada": {
"supply": {
"circulating": "32088220621948216",
"max": "45000000000000000",
"total": "32913667440643408"
}
}
}
}
```
However, this will yield circulating supply that is even lower than utxos and doesn't look right to me. `withdrawal` table also takes `treasury` and `reserve` into account, so I think the correct formula for calculating circulating supply should be `utxos + reward + treasury + reserve - withdrawal`
or represented as query in db sync:
```
SELECT
(SELECT COALESCE(SUM(amount), 0) FROM reward)
-
(SELECT COALESCE(SUM(amount),0) FROM withdrawal)
+
(SELECT COALESCE(SUM(amount),0) FROM reserve )
+
(SELECT COALESCE(SUM(amount),0) FROM treasury);
------------------
443141922376394
```
which yields a positive number of all withdrawals from all possibly available funds.
I have considered all the possibilities and I have reached the conclusion that the number from graphql might be wrong. Please, correct me if I'm at fault here, but it seems that circulating supply should be closer to `32690573731778909` than to the current value of `32088220621948216`.
Note: The outputs were collected a few hours ago, so they are not entirely up to date.
Note 2: COALESCE is needed especially for testnet, where the `reserve` table is empty.
2. Total supply (testnet)
Querying total supply on **testnet** in the current epoch doesn't look right as the circulating supply is higher than total supply.
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ ada{ supply{ circulating max total } } }"}' http://localhost:3002/graphql
{
"data": {
"ada": {
"supply": {
"circulating": "42050393097894181",
"max": "45000000000000000",
"total": "40233883596865181"
}
}
}
}
```
Note: Edited (replaced data from stuck public dandelion instance with local synced data).
However, if I got this correctly, total supply is calculated as the `max supply` - `reserves` and `reserves` is taken from the ledger state at the epoch boundary. This means there will be a similar issue in the current epoch, as the current reserves are `4766116403134819` but querying utxos in db sync yield a number higher than total supply (`40233883596865181`):
```
SELECT SUM(txo.value)
FROM tx_out txo
LEFT JOIN tx_in txi ON (txo.tx_id = txi.tx_out_id)
AND (txo.index = txi.tx_out_index)
WHERE txi IS NULL;
-------------------
42016202067224430
(1 row)
```
Again, querying graphql confirms the utxos value used in the computation is the one above:
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ utxos_aggregate { aggregate {sum {value}} } }"}' http://localhost:3002/graphql
{"data":{"utxos_aggregate":{"aggregate":{"sum":{"value":"42016202067224430"}}}}}
```
Sorry for the trouble, I'm not sure if I'm doing something wrong so I'd appreciate your help and clarification if I'm mistaken. Thanks a million. | 1.0 | Circulating and total supply computation - ### Environment
[Release 4.0.0 ](https://github.com/input-output-hk/cardano-graphql/commit/058873f7cfa6d5d287f859a58bf511b128da6494)
#### Platform
Linux / Other
**Platform version**: NixOS 21.05
### Steps to reproduce the bug
Query this:
```
query{
ada{
supply{
circulating
max
total
}
}
}
```
### What is the expected behavior?
Get correct number for circulating supply.
Get correct number for total supply.
1. Circulating supply:
I have a strong suspicion that circulating supply [calculation](https://github.com/input-output-hk/cardano-graphql/blob/master/packages/api-cardano-db-hasura/src/HasuraClient.ts#L136) might be off by ~billion.
The current calculation consists of the sum `utxos + (rewards - withdrawals)`, but if you look closely, `rewards - withdrawals` is in fact negative.
Querying values directly from [db sync (9.0.0)](https://github.com/input-output-hk/cardano-db-sync/commit/0b054e18aec3b1e387a1c7b713c562ade37417a3).
Utxos:
```
SELECT SUM(txo.value)
FROM tx_out txo
LEFT JOIN tx_in txi ON (txo.tx_id = txi.tx_out_id)
AND (txo.index = txi.tx_out_index)
WHERE txi IS NULL;
-------------------
32247431809402515
(1 row)
```
We can also confirm this by querying graphql:
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ utxos_aggregate { aggregate {sum {value}} } }"}' http://localhost:3002/graphql
{
"data": { "utxos_aggregate": { "aggregate": { "sum": { "value": "32247431809402516" } } } }
}
```
Next, we calculate rewards-withdrawals in dbsync:
```
SELECT
(SELECT COALESCE(SUM(amount), 0) FROM reward)
-
(SELECT SUM(amount) FROM withdrawal);
------------------
-159211187454300
(1 row)
```
Adding it up to utxos, we get: `32088220621948215`.
This corresponds to what can be seen in graphql as circulating supply:
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ ada{ supply{ circulating max total } } }"}' http://localhost:3002/graphql
{
"data": {
"ada": {
"supply": {
"circulating": "32088220621948216",
"max": "45000000000000000",
"total": "32913667440643408"
}
}
}
}
```
However, this will yield circulating supply that is even lower than utxos and doesn't look right to me. `withdrawal` table also takes `treasury` and `reserve` into account, so I think the correct formula for calculating circulating supply should be `utxos + reward + treasury + reserve - withdrawal`
or represented as query in db sync:
```
SELECT
(SELECT COALESCE(SUM(amount), 0) FROM reward)
-
(SELECT COALESCE(SUM(amount),0) FROM withdrawal)
+
(SELECT COALESCE(SUM(amount),0) FROM reserve )
+
(SELECT COALESCE(SUM(amount),0) FROM treasury);
------------------
443141922376394
```
which yields a positive number of all withdrawals from all possibly available funds.
I have considered all the possibilities and I have reached the conclusion that the number from graphql might be wrong. Please, correct me if I'm at fault here, but it seems that circulating supply should be closer to `32690573731778909` than to the current value of `32088220621948216`.
Note: The outputs were collected a few hours ago, so they are not entirely up to date.
Note 2: COALESCE is needed especially for testnet, where the `reserve` table is empty.
2. Total supply (testnet)
Querying total supply on **testnet** in the current epoch doesn't look right as the circulating supply is higher than total supply.
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ ada{ supply{ circulating max total } } }"}' http://localhost:3002/graphql
{
"data": {
"ada": {
"supply": {
"circulating": "42050393097894181",
"max": "45000000000000000",
"total": "40233883596865181"
}
}
}
}
```
Note: Edited (replaced data from stuck public dandelion instance with local synced data).
However, if I got this correctly, total supply is calculated as the `max supply` - `reserves` and `reserves` is taken from the ledger state at the epoch boundary. This means there will be a similar issue in the current epoch, as the current reserves are `4766116403134819` but querying utxos in db sync yield a number higher than total supply (`40233883596865181`):
```
SELECT SUM(txo.value)
FROM tx_out txo
LEFT JOIN tx_in txi ON (txo.tx_id = txi.tx_out_id)
AND (txo.index = txi.tx_out_index)
WHERE txi IS NULL;
-------------------
42016202067224430
(1 row)
```
Again, querying graphql confirms the utxos value used in the computation is the one above:
```
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ utxos_aggregate { aggregate {sum {value}} } }"}' http://localhost:3002/graphql
{"data":{"utxos_aggregate":{"aggregate":{"sum":{"value":"42016202067224430"}}}}}
```
Sorry for the trouble, I'm not sure if I'm doing something wrong so I'd appreciate your help and clarification if I'm mistaken. Thanks a million. | priority | circulating and total supply computation environment platform linux other platform version nixos steps to reproduce the bug query this query ada supply circulating max total what is the expected behavior get correct number for circulating supply get correct number for total supply circulating supply i have a strong suspicion that circulating supply might be off by billion the current calculation consists of sum of utxos rewards withdrawals but if you look closely rewards withdrawals are in fact negative querying directly values from utxos select sum txo value from tx out txo left join tx in txi on txo tx id txi tx out id and txo index txi tx out index where txi is null row we can also confirm this by querying graphql curl x post h content type application json d query utxos aggregate aggregate sum value data utxos aggregate aggregate sum value next we calculate rewards withdrawals in dbsync select select coalesce sum amount from reward select sum amount from withdrawal row adding it up to utxos we get this corresponds to what can be seen in graphql as circulating supply curl x post h content type application json d query ada supply circulating max total data ada supply circulating max total however this will yield circulating supply that is even lower than utxos and doesn t look right to me withdrawal table also takes treasury and reserve into account so i think the correct formula for calculating circulating supply should be utxos reward treasury reserve withdrawal or represented as query in db sync select select coalesce sum amount from reward select coalesce sum amount from withdrawal select coalesce sum amount from reserve select coalesce sum amount from treasury which yields a positive number of all withdrawals from all possibly available funds i have considered all the possibilities and i have reached 
the conclusion that the number from graphql might be wrong please correct me if i m at fault here but it seems that circulating supply should be closer to than to the current value of note the outputs were collected few hours ago so they are not entirely up to date coalesce is needed especially for testnet where there is empty reserve table total supply testnet querying total supply on testnet in the current epoch doesn t look right as the circulating supply is higher than total supply curl x post h content type application json d query ada supply circulating max total data ada supply circulating max total note edited replaced data from stuck public dandelion instance with local synced data however if i got this correctly total supply is calculated as the max supply reserves and reserves is taken from the ledger state at the epoch boundary this means there will be a similar issue in the current epoch as the current reserves are but querying utxos in db sync yield a number higher than total supply select sum txo value from tx out txo left join tx in txi on txo tx id txi tx out id and txo index txi tx out index where txi is null row again querying graphql confirms the utxos value used in the computation is the one above curl x post h content type application json d query utxos aggregate aggregate sum value data utxos aggregate aggregate sum value sorry for the trouble i m not sure if i m doing something wrong so i d appreciate your help and clarification if i m mistaken thanks a million | 1 |
714,568 | 24,566,692,817 | IssuesEvent | 2022-10-13 04:11:52 | encorelab/ck-board | https://api.github.com/repos/encorelab/ck-board | opened | Create "Learner Model" UI | enhancement high priority | The Learner Model UI displays graphs of student data (some data manually entered by the teacher and some gathered from the CK Board).
1. For instance, we may add the Learner Model UI as part of the CK Student Monitor UI (below the task monitoring tools)
<img width="212" alt="Screen Shot 2022-10-12 at 11 29 13 PM" src="https://user-images.githubusercontent.com/6416247/195492500-b9e2390c-5f3d-43b4-8ea2-49478ad2156d.png">
2. By selecting either View by Content or View by SEL, the teacher gets an overview of all students by that metric
<img width="794" alt="Screen Shot 2022-10-12 at 11 30 01 PM" src="https://user-images.githubusercontent.com/6416247/195492582-dd373af5-fe5c-4763-91c6-1794250c18ff.png">
3. By selecting a student's name, the teacher can view or modify data for each student. Displayed data includes: (1) Content knowledge - entered by the teacher based on diagnostic and formative assessments, (2) Social-emotional learning (SEL) data - entered by the teacher based on diagnostic and re-administration of SEL survey, and (3) dynamic system data - including goals set by the teacher
<img width="795" alt="Screen Shot 2022-10-12 at 11 32 19 PM" src="https://user-images.githubusercontent.com/6416247/195492850-3df71e94-26eb-40a2-8d76-5970e47fbaee.png">
4. TBD
To see demo code of the javascript for the above visualizations or to explore other interactive demos, please explore the resources below:
- [Highcharts demo code.zip](https://github.com/encorelab/ck-board/files/9770363/Highcharts.demo.code.zip)
- https://www.highcharts.com/demo/gauge-activity | 1.0 | Create "Learner Model" UI - The Learner Model UI displays graphs of student data (some data manually entered by the teacher and some gathered from the CK Board).
1. For instance, we may add the Learner Model UI as part of the CK Student Monitor UI (below the task monitoring tools)
<img width="212" alt="Screen Shot 2022-10-12 at 11 29 13 PM" src="https://user-images.githubusercontent.com/6416247/195492500-b9e2390c-5f3d-43b4-8ea2-49478ad2156d.png">
2. By selecting either View by Content or View by SEL, the teacher gets an overview of all students by that metric
<img width="794" alt="Screen Shot 2022-10-12 at 11 30 01 PM" src="https://user-images.githubusercontent.com/6416247/195492582-dd373af5-fe5c-4763-91c6-1794250c18ff.png">
3. By selecting a student's name, the teacher can view or modify data for each student. Displayed data includes: (1) Content knowledge - entered by the teacher based on diagnostic and formative assessments, (2) Social-emotional learning (SEL) data - entered by the teacher based on diagnostic and re-administration of SEL survey, and (3) dynamic system data - including goals set by the teacher
<img width="795" alt="Screen Shot 2022-10-12 at 11 32 19 PM" src="https://user-images.githubusercontent.com/6416247/195492850-3df71e94-26eb-40a2-8d76-5970e47fbaee.png">
4. TBD
To see demo code of the javascript for the above visualizations or to explore other interactive demos, please explore the resources below:
- [Highcharts demo code.zip](https://github.com/encorelab/ck-board/files/9770363/Highcharts.demo.code.zip)
- https://www.highcharts.com/demo/gauge-activity | priority | create learner model ui the learner model ui displays graphs of student data some data manually entered by the teacher and some gathered from the ck board for instance we may add the learner model ui as part of the ck student monitor ui below the task monitoring tools img width alt screen shot at pm src by selecting either view by content or view by sel the teacher gets an overview of all students by that metric img width alt screen shot at pm src by selecting a student s name the teacher can view or modify data for each student displayed data includes content knowledge entered by the teacher based on diagnostic and formative assessments social emotional learning sel data entered by the teacher based on diagnostic and re administration of sel survey and dynamic system data including goals set by the teacher img width alt screen shot at pm src tbd to see demo code of the javascript for the above visualizations or to explore other interactive demos please explore the resources below | 1 |
788,298 | 27,750,177,609 | IssuesEvent | 2023-03-15 20:04:39 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | [CAL-1161] Events not being added to Outlook calendar | 🐛 bug High priority | If the email associated with [cal.com](http://cal.com) account isn't outlook, destination calendar for an event type set to outlook doesn't add it.
Created via Threads. See full discussion: [https://threads.com/34452351117](https://threads.com/34452351117)
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-1161](https://linear.app/calcom/issue/CAL-1161/events-not-being-added-to-outlook-calendar)</sub> | 1.0 | [CAL-1161] Events not being added to Outlook calendar - If the email associated with the [cal.com](http://cal.com) account isn't an Outlook address, the destination calendar for an event type set to Outlook doesn't add the event.
Created via Threads. See full discussion: [https://threads.com/34452351117](https://threads.com/34452351117)
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-1161](https://linear.app/calcom/issue/CAL-1161/events-not-being-added-to-outlook-calendar)</sub> | priority | events not being added to outlook calendar if the email associated with account isn t outlook destination calendar for an event type set to outlook doesn t add it created via threads see full discussion from | 1 |
434,991 | 12,530,190,958 | IssuesEvent | 2020-06-04 12:40:58 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | closed | [AP] (Alliance Dashboard) Add Data Information for the QA process | Priority - High Type - Enhancement | Tonya will send the narrative for this process
- [ ] Include "Data and information is quality assured (for 2017 and 2018) b) Quality assurance is ongoing for 2019 data." in the dashboard.
**Move to Closed when:** The text is on the dashboard.
| 1.0 | [AP] (Alliance Dashboard) Add Data Information for the QA process - Tonya will send the narrative for this process
- [ ] Include "Data and information is quality assured (for 2017 and 2018) b) Quality assurance is ongoing for 2019 data." in the dashboard.
**Move to Closed when:** The text is on the dashboard.
| priority | alliance dashboard add data information for the qa process tonya will send the narrative for this process include data and information is quality assured for and b quality assurance is ongoing for data in the dashboard move to closed when the text is on the dashboard | 1 |
104,717 | 4,217,699,352 | IssuesEvent | 2016-06-30 13:55:51 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | ReqMgr2: status update via web UI is broken | High Priority | It returns a status OK; however, no changes were made to the request itself. The reason seems to be that you always _update_ the request priority as well; this is what is sent downstream from web interactions:
`{"RequestStatus":"aborted","RequestPriority":"250000"}`
which then does not end in a statusUpdate in Request.py, e.g.
https://github.com/dmwm/WMCore/blob/master/src/python/WMCore/ReqMgr/Service/Request.py#L508
I'm still checking on how to fix it.
| 1.0 | ReqMgr2: status update via web UI is broken - It returns a status Ok; however, no changes were made to the request itself. The reason seems to be that you always _update_ the request priority as well; this is what is sent downstream from web interactions:
`{"RequestStatus":"aborted","RequestPriority":"250000"}`
which then does not end in a statusUpdate in Request.py, e.g.
https://github.com/dmwm/WMCore/blob/master/src/python/WMCore/ReqMgr/Service/Request.py#L508
I'm still checking on how to fix it.
| priority | status update via web ui is broken it returns a status ok however no changes were made to the request itself the reason seems to be coming from the fact that you always update the request priority as well this is what is sent downstream from web interactions requeststatus aborted requestpriority which then does not end in an statusupdate in request py e g i m still checking on how to fix it | 1 |
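The routing problem the WMCore report describes can be sketched in a few lines (hypothetical Python, not WMCore's actual code): a payload that carries anything besides `RequestStatus` — here the always-included `RequestPriority` — no longer qualifies as a pure status update.

```python
def classify_update(payload):
    """Route a request-update payload: a payload whose only key is
    'RequestStatus' is a pure status update; anything else (for example
    an extra 'RequestPriority' field) takes the general-update path."""
    extra_keys = set(payload) - {"RequestStatus"}
    if "RequestStatus" in payload and not extra_keys:
        return "statusUpdate"
    return "generalUpdate"

# The payload the web UI sends never hits the statusUpdate path:
print(classify_update({"RequestStatus": "aborted", "RequestPriority": "250000"}))  # generalUpdate
print(classify_update({"RequestStatus": "aborted"}))  # statusUpdate
```

Under this reading, the fix is either to stop sending `RequestPriority` on pure status changes or to let the routing tolerate it.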
255,034 | 8,102,699,709 | IssuesEvent | 2018-08-13 03:39:44 | mesg-foundation/core | https://api.github.com/repos/mesg-foundation/core | opened | logs command doesn't work anymore | bug high priority | I think because of the refactoring of the container package, the `logs` command doesn't work anymore and I'm pretty sure many other features might have the same problem.
I still have to investigate more but it seems that we are using `context.WithTimeout` and when we have streams of data from docker then the timeout is reached and the context is terminated.
We should use `context.Background` in these cases
This issue might occur in:
- build the docker image
- log the container
Let's make sure to test again all the different features using the `./dev-core` and `./dev-cli` to do our manual tests
@ilgooz can you have a look and confirm my thoughts ? | 1.0 | logs command doesn't work anymore - I think because of the refactoring of the container package, the `logs` command doesn't work anymore and I'm pretty sure many other features might have the same problem.
I still have to investigate more but it seems that we are using `context.WithTimeout` and when we have streams of data from docker then the timeout is reached and the context is terminated.
We should use `context.Background` in these cases
This issue might occur in:
- build the docker image
- log the container
Let's make sure to test again all the different features using the `./dev-core` and `./dev-cli` to do our manual tests
@ilgooz can you have a look and confirm my thoughts ? | priority | logs command doesn t work anymore i think because of the refactoring of the container package the logs command doesn t work anymore and i m pretty sure many other features might have the same problem i still have to investigate more but it seems that we are using context withtimeout and when we have streams of data from docker then the timeout is reached and the context is terminated we should use context background in these cases this issue might occurs in build the docker image log the container let s make sure to test again all the different features using the dev core and dev cli to do our manual tests ilgooz can you have a look and confirm my thoughts | 1 |
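The distinction the mesg issue draws — a deadline-bound context versus an unbounded one for long-lived streams — can be sketched outside Go (illustrative Python with an injectable clock; the project itself uses Go's `context.WithTimeout` vs `context.Background`):

```python
import time

def consume_stream(chunks, deadline=None, clock=time.monotonic):
    """Drain a stream of chunks. With a finite deadline (the analogue of
    context.WithTimeout) a long-lived stream is aborted mid-flight; with
    deadline=None (the analogue of context.Background) it runs to
    completion. `clock` is injectable so the behaviour is testable
    without real sleeping."""
    start = clock()
    received = []
    for chunk in chunks:
        if deadline is not None and clock() - start > deadline:
            raise TimeoutError("stream outlived the deadline")
        received.append(chunk)
    return received
```

This mirrors why a per-call timeout is wrong for streaming Docker logs or long image builds: the operation is expected to outlive any fixed deadline.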
688,921 | 23,600,070,670 | IssuesEvent | 2022-08-24 00:24:15 | nanoframework/Home | https://api.github.com/repos/nanoframework/Home | closed | Starting, Stopping and Starting BLE Advertisement cause System Panic | Type: Bug Status: In progress Priority: High | ### Library/API/IoT binding
nanoFramework.Device.Bluetooth
### Visual Studio version
VS2022 v 17.2.5
### .NET nanoFramework extension version
2022.2.0.23
### Target name(s)
ESP32_BLE_REV0
### Firmware version
1.8.0.362
### Device capabilities
_No response_
### Description
System panics after Starting -> Stopping -> Starting BLE Advertisement.
### How to reproduce
```
GattServiceProviderResult result = GattServiceProvider.Create(serviceUuid);
if (result.Error != BluetoothError.Success)
{
return;
}
GattServiceProvider serviceProvider = result.ServiceProvider;
// Get created Primary service from provider
GattLocalService service = serviceProvider.Service;
DataWriter sw = new DataWriter();
sw.WriteString("This is Bluetooth sample 3");
GattLocalCharacteristicResult characteristicResult = service.CreateCharacteristic(readStaticCharUuid,
new GattLocalCharacteristicParameters()
{
CharacteristicProperties = GattCharacteristicProperties.Read,
UserDescription = "My Static Characteristic",
//ReadProtectionLevel = GattProtectionLevel.EncryptionRequired,
StaticValue = sw.DetachBuffer()
});
if (characteristicResult.Error != BluetoothError.Success)
{
// An error occurred.
return;
}
DeviceInformationServiceService DifService = new DeviceInformationServiceService(
serviceProvider,
"BLE Sensor",
"Model-1",
"989898", // no serial number
"v1.0",
SystemInfo.Version.ToString(),
"");
BatteryService BatService = new BatteryService(serviceProvider);
// Update the Battery service the current battery level regularly. In this case 94%
BatService.BatteryLevel = 94;
Debug.WriteLine("Before Advertise");
serviceProvider.StartAdvertising(new GattServiceProviderAdvertisingParameters()
{
DeviceName = "BLE Sensor",
IsConnectable = true,
IsDiscoverable = true,
});
for (int i = 1; i < 5; i++)
{
Debug.WriteLine("Loop: " + i);
Thread.Sleep(2_000);
if (serviceProvider.AdvertisementStatus == GattServiceProviderAdvertisementStatus.Stopped)
{
Debug.WriteLine("Starting");
serviceProvider.StartAdvertising();
}
else
{
Debug.WriteLine("Stopping");
serviceProvider.StopAdvertising();
}
}
```
### Expected behaviour
Do not crash the system when stopping and starting BLE Advertisement.
### Screenshots
_No response_
### Sample project or code
_No response_
### Additional information
_No response_ | 1.0 | Starting, Stopping and Starting BLE Advertisement cause System Panic - ### Library/API/IoT binding
nanoFramework.Device.Bluetooth
### Visual Studio version
VS2022 v 17.2.5
### .NET nanoFramework extension version
2022.2.0.23
### Target name(s)
ESP32_BLE_REV0
### Firmware version
1.8.0.362
### Device capabilities
_No response_
### Description
System panics after Starting -> Stopping -> Starting BLE Advertisement.
### How to reproduce
```
GattServiceProviderResult result = GattServiceProvider.Create(serviceUuid);
if (result.Error != BluetoothError.Success)
{
return;
}
GattServiceProvider serviceProvider = result.ServiceProvider;
// Get created Primary service from provider
GattLocalService service = serviceProvider.Service;
DataWriter sw = new DataWriter();
sw.WriteString("This is Bluetooth sample 3");
GattLocalCharacteristicResult characteristicResult = service.CreateCharacteristic(readStaticCharUuid,
new GattLocalCharacteristicParameters()
{
CharacteristicProperties = GattCharacteristicProperties.Read,
UserDescription = "My Static Characteristic",
//ReadProtectionLevel = GattProtectionLevel.EncryptionRequired,
StaticValue = sw.DetachBuffer()
});
if (characteristicResult.Error != BluetoothError.Success)
{
// An error occurred.
return;
}
DeviceInformationServiceService DifService = new DeviceInformationServiceService(
serviceProvider,
"BLE Sensor",
"Model-1",
"989898", // no serial number
"v1.0",
SystemInfo.Version.ToString(),
"");
BatteryService BatService = new BatteryService(serviceProvider);
// Update the Battery service the current battery level regularly. In this case 94%
BatService.BatteryLevel = 94;
Debug.WriteLine("Before Advertise");
serviceProvider.StartAdvertising(new GattServiceProviderAdvertisingParameters()
{
DeviceName = "BLE Sensor",
IsConnectable = true,
IsDiscoverable = true,
});
for (int i = 1; i < 5; i++)
{
Debug.WriteLine("Loop: " + i);
Thread.Sleep(2_000);
if (serviceProvider.AdvertisementStatus == GattServiceProviderAdvertisementStatus.Stopped)
{
Debug.WriteLine("Starting");
serviceProvider.StartAdvertising();
}
else
{
Debug.WriteLine("Stopping");
serviceProvider.StopAdvertising();
}
}
```
### Expected behaviour
Do not crash the system when stopping and starting BLE Advertisement.
### Screenshots
_No response_
### Sample project or code
_No response_
### Additional information
_No response_ | priority | starting stopping and starting ble advertisement cause system panic library api iot binding nanoframework device bluetooth visual studio version v net nanoframework extension version target name s ble firmware version device capabilities no response description system panics after starting stopping starting ble advertisment how to reproduce gattserviceproviderresult result gattserviceprovider create serviceuuid if result error bluetootherror success return gattserviceprovider serviceprovider result serviceprovider get created primary service from provider gattlocalservice service serviceprovider service datawriter sw new datawriter sw writestring this is bluetooth sample gattlocalcharacteristicresult characteristicresult service createcharacteristic readstaticcharuuid new gattlocalcharacteristicparameters characteristicproperties gattcharacteristicproperties read userdescription my static characteristic readprotectionlevel gattprotectionlevel encryptionrequired staticvalue sw detachbuffer if characteristicresult error bluetootherror success an error occurred return deviceinformationserviceservice difservice new deviceinformationserviceservice serviceprovider ble sensor model no serial number systeminfo version tostring batteryservice batservice new batteryservice serviceprovider update the battery service the current battery level regularly in this case batservice batterylevel debug writeline before advertise serviceprovider startadvertising new gattserviceprovideradvertisingparameters devicename ble sensor isconnectable true isdiscoverable true for int i i i debug writeline loop i thread sleep if serviceprovider advertisementstatus gattserviceprovideradvertisementstatus stopped debug writeline starting serviceprovider startadvertising else debug writeline stopping serviceprovider stopadvertising expected behaviour do not crash the system when stopping and starting ble advertisment screenshots no response sample project or code no response aditional information no response | 1 |
315,596 | 9,629,234,032 | IssuesEvent | 2019-05-15 09:09:09 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | [NPC/SPELL] Baron Silverlaine - id 3887 -Shadowfang keep | Confirmed Dungeon/Raid Fixed in Dev Priority-High | **Links:**
NPC -https://www.wowhead.com/npc=3887/baron-silverlaine
SPELL -https://www.wowhead.com/spell=93857/summon-worgen-spirit
from WoWHead or our Armory
**What is happening:**
when the NPC/boss uses the ability, the spirit appears, forming one of the worgen spirits, but instead of using the abilities described for that spirit form, they just explode and wipe the party.
This spell has a random chance of calling one of the pre-Cata worgen bosses back as a simple-to-kill add for this boss.
Summonable Worgens:
Rethilgore
Razorclaw the Butcher
Odo the Blindwatcher
Wolf Master Nandos with three Lupine Spectres
**What should happen:**
The spirits summoned should use the abilities described in the tooltip in ingame dungeon guide not exploding and causing mass damage
| 1.0 | [NPC/SPELL] Baron Silverlaine - id 3887 -Shadowfang keep - **Links:**
NPC -https://www.wowhead.com/npc=3887/baron-silverlaine
SPELL -https://www.wowhead.com/spell=93857/summon-worgen-spirit
from WoWHead or our Armory
**What is happening:**
when the NPC/boss uses the ability, the spirit appears, forming one of the worgen spirits, but instead of using the abilities described for that spirit form, they just explode and wipe the party.
This spell has a random chance of calling one of the pre-Cata worgen bosses back as a simple-to-kill add for this boss.
Summonable Worgens:
Rethilgore
Razorclaw the Butcher
Odo the Blindwatcher
Wolf Master Nandos with three Lupine Spectres
**What should happen:**
The spirits summoned should use the abilities described in the tooltip in ingame dungeon guide not exploding and causing mass damage
| priority | baron silverlaine id shadowfang keep links npc spell from wowhead or our armory what is happening when npc boss is using the ability the spirit appears forming one of the worgen spirirts but instead of it using the abilities describe for that spirit form they are just exploding and wiping the party this spell has a random chance of calling on of the pre cata worgen bosses back as a simple to kill add for this boss summonable worgens rethilgore razorclaw the butcher odo the blindwatcher wolf master nandos with three lupine spectres what should happen the spirits summoned should use the abilities described in the tooltip in ingame dungeon guide not exploding and causing mass damage | 1 |
568,502 | 16,981,598,904 | IssuesEvent | 2021-06-30 09:32:38 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | opened | Stop Dynamic Kubo Toyabe fitting crashing if bin width outside acceptable range | High Priority ISIS Team: Spectroscopy Muon | ### Expected behavior
When using the Dynamic Kubo Toyabe fitting on MUSR, if you enter a bin width <0.001 or >0.100, an exception should be thrown and an error message provided to the user.
### Actual behavior
Mantid is crashing.
### Steps to reproduce the behavior
1. Open Muon Analysis GUI
2. Load a run
3. Click on fitting tab and add the Dynamic Kubo Toyabe fit function
4. Change the bin width to 0.5
### Platforms affected
Windows 10 (but likely to be all) | 1.0 | Stop Dynamic Kubo Toyabe fitting crashing if bin width outside acceptable range - ### Expected behavior
When using the Dynamic Kubo Toyabe fitting on MUSR, if you enter a bin width <0.001 or >0.100, an exception should be thrown and an error message provided to the user.
### Actual behavior
Mantid is crashing.
### Steps to reproduce the behavior
1. Open Muon Analysis GUI
2. Load a run
3. Click on fitting tab and add the Dynamic Kubo Toyabe fit function
4. Change the bin width to 0.5
### Platforms affected
Windows 10 (but likely to be all) | priority | stop dynamic kubo toyabe fitting crashing if bin width outside acceptable range expected behavior when using the dynamic kubo toyabe fitting on musr if you enter a bin width an exception should be thrown and error message provided for user actual behavior mantid is crashing steps to reproduce the behavior open muon analysis gui load a run click on fitting tab and add the dynamic kubo toyabe fit function change the bin width to platforms affected windows but likely to be all | 1 |
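The acceptable range quoted in the Mantid report (0.001–0.100) suggests a simple guard. A hedged Python sketch of the kind of validation the fix calls for — the real fix lives in Mantid's fitting code, and these bounds are taken from the issue text:

```python
def validate_bin_width(width, lower=0.001, upper=0.100):
    """Raise a clear error instead of crashing when the Dynamic Kubo
    Toyabe bin width falls outside the acceptable range."""
    if not (lower <= width <= upper):
        raise ValueError(
            f"BinWidth must be between {lower} and {upper}, got {width}"
        )
    return width
```

With such a guard, entering 0.5 in the GUI would surface a `ValueError` message rather than bringing the application down.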
795,154 | 28,063,869,416 | IssuesEvent | 2023-03-29 14:13:56 | xsuite/xsuite | https://api.github.com/repos/xsuite/xsuite | opened | Kernel for random number generator initialization not reused | bug High priority | ```python
In [4]: p = xp.Particles(); p._init_random_number_generator()
Compiling ContextCpu kernels...
ld: warning: -pie being ignored. It is only used when linking a main executable
Done compiling ContextCpu kernels.
In [5]: p = xp.Particles(); p._init_random_number_generator()
Compiling ContextCpu kernels...
ld: warning: -pie being ignored. It is only used when linking a main executable
Done compiling ContextCpu kernels.
In [6]: p = xp.Particles(); p._init_random_number_generator()
Compiling ContextCpu kernels...
ld: warning: -pie being ignored. It is only used when linking a main executable
Done compiling ContextCpu kernels.
``` | 1.0 | Kernel for random number generator initialization not reused - ```python
In [4]: p = xp.Particles(); p._init_random_number_generator()
Compiling ContextCpu kernels...
ld: warning: -pie being ignored. It is only used when linking a main executable
Done compiling ContextCpu kernels.
In [5]: p = xp.Particles(); p._init_random_number_generator()
Compiling ContextCpu kernels...
ld: warning: -pie being ignored. It is only used when linking a main executable
Done compiling ContextCpu kernels.
In [6]: p = xp.Particles(); p._init_random_number_generator()
Compiling ContextCpu kernels...
ld: warning: -pie being ignored. It is only used when linking a main executable
Done compiling ContextCpu kernels.
``` | priority | kernel for random number generator initialization not reused python in p xp particles p init random number generator compiling contextcpu kernels ld warning pie being ignored it is only used when linking a main executable done compiling contextcpu kernels in p xp particles p init random number generator compiling contextcpu kernels ld warning pie being ignored it is only used when linking a main executable done compiling contextcpu kernels in p xp particles p init random number generator compiling contextcpu kernels ld warning pie being ignored it is only used when linking a main executable done compiling contextcpu kernels | 1 |
120,818 | 4,794,766,962 | IssuesEvent | 2016-10-31 22:04:24 | Microsoft/TypeScript | https://api.github.com/repos/Microsoft/TypeScript | closed | Error introduced into 10/28 @next build | Bug High Priority | Some change between `typescript@2.1.0-dev.20161027` and `typescript@2.1.0-dev.20161028` introduced this error into my build. This issue also exists in `typescript@2.1.0-dev.20161029`
```
/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:23590
result.declarations = symbol.declarations.slice(0);
^
TypeError: Cannot read property 'slice' of undefined
at cloneSymbol (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:23590:54)
at mergeModuleAugmentation (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:23683:90)
at initializeTypeChecker (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:41037:25)
at Object.createTypeChecker (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:23527:9)
at getDiagnosticsProducingTypeChecker (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:60181:93)
at Object.getGlobalDiagnostics (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:60490:41)
at Tsifier.checkSemantics (/Users/ntilwalli/dev/spotlight/node_modules/tsify/lib/Tsifier.js:219:37)
at Tsifier.compile (/Users/ntilwalli/dev/spotlight/node_modules/tsify/lib/Tsifier.js:179:34)
at Tsifier.generateCache (/Users/ntilwalli/dev/spotlight/node_modules/tsify/lib/Tsifier.js:156:8)
at DestroyableTransform.flush [as _flush] (/Users/ntilwalli/dev/spotlight/node_modules/tsify/index.js:73:12)
```
| 1.0 | Error introduced into 10/28 @next build - Some change between `typescript@2.1.0-dev.20161027` and `typescript@2.1.0-dev.20161028` introduced this error into my build. This issue also exists in `typescript@2.1.0-dev.20161029`
```
/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:23590
result.declarations = symbol.declarations.slice(0);
^
TypeError: Cannot read property 'slice' of undefined
at cloneSymbol (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:23590:54)
at mergeModuleAugmentation (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:23683:90)
at initializeTypeChecker (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:41037:25)
at Object.createTypeChecker (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:23527:9)
at getDiagnosticsProducingTypeChecker (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:60181:93)
at Object.getGlobalDiagnostics (/Users/ntilwalli/dev/spotlight/node_modules/typescript/lib/typescript.js:60490:41)
at Tsifier.checkSemantics (/Users/ntilwalli/dev/spotlight/node_modules/tsify/lib/Tsifier.js:219:37)
at Tsifier.compile (/Users/ntilwalli/dev/spotlight/node_modules/tsify/lib/Tsifier.js:179:34)
at Tsifier.generateCache (/Users/ntilwalli/dev/spotlight/node_modules/tsify/lib/Tsifier.js:156:8)
at DestroyableTransform.flush [as _flush] (/Users/ntilwalli/dev/spotlight/node_modules/tsify/index.js:73:12)
```
| priority | error introduced into next build some change between typescript dev and typescript dev introduced this error into my build this issue also exists in typescript dev users ntilwalli dev spotlight node modules typescript lib typescript js result declarations symbol declarations slice typeerror cannot read property slice of undefined at clonesymbol users ntilwalli dev spotlight node modules typescript lib typescript js at mergemoduleaugmentation users ntilwalli dev spotlight node modules typescript lib typescript js at initializetypechecker users ntilwalli dev spotlight node modules typescript lib typescript js at object createtypechecker users ntilwalli dev spotlight node modules typescript lib typescript js at getdiagnosticsproducingtypechecker users ntilwalli dev spotlight node modules typescript lib typescript js at object getglobaldiagnostics users ntilwalli dev spotlight node modules typescript lib typescript js at tsifier checksemantics users ntilwalli dev spotlight node modules tsify lib tsifier js at tsifier compile users ntilwalli dev spotlight node modules tsify lib tsifier js at tsifier generatecache users ntilwalli dev spotlight node modules tsify lib tsifier js at destroyabletransform flush users ntilwalli dev spotlight node modules tsify index js | 1 |
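The TypeScript crash above is a missing-property guard: `symbol.declarations` can be `undefined` when `cloneSymbol` tries to `.slice()` it. A Python analogy of the defensive pattern (not the actual TypeScript fix):

```python
def clone_symbol(symbol):
    """Shallow-copy a symbol dict, tolerating an absent 'declarations'
    list instead of crashing on `.slice()` of undefined."""
    clone = dict(symbol)
    # `or []` covers both a missing key and an explicit None value.
    clone["declarations"] = list(symbol.get("declarations") or [])
    return clone
```

The copy also guarantees the clone's list is independent of the original, which is the point of the `slice(0)` in the TypeScript source.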
186,072 | 6,733,292,433 | IssuesEvent | 2017-10-18 14:25:31 | luccafort/otogi_novels | https://api.github.com/repos/luccafort/otogi_novels | closed | Preface/afterword feature for each story in a novel | priority:high somebody:soon | ## Summary:
The "digest" feature refers to the preface/afterword.
A feature that displays a summary of the previous installment before the story body, or is used for announcements such as next-episode previews.
- [x] CRUD for prefaces
- [x] CRUD for afterwords
- [x] Display of prefaces/afterwords
Not a required field.
The author can choose to display it before or after the text.
No image embedding. | 1.0 | Preface/afterword feature for each story in a novel - ## Summary:
The "digest" feature refers to the preface/afterword.
A feature that displays a summary of the previous installment before the story body, or is used for announcements such as next-episode previews.
- [x] CRUD for prefaces
- [x] CRUD for afterwords
- [x] Display of prefaces/afterwords
Not a required field.
The author can choose to display it before or after the text.
No image embedding. | priority | preface afterword feature for each story in a novel summary the digest feature refers to the preface afterword a feature that displays a summary of the previous installment before the story body or is used for announcements such as next episode previews crud for prefaces crud for afterwords display of prefaces afterwords not a required field the author can choose to display it before or after the text no image embedding | 1 |
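The feature spec above (an optional preface/afterword per story, with create/read/update/delete and no image support) could be modelled minimally like this (hypothetical Python sketch; the class and method names are invented, not the project's API):

```python
class Story:
    """A story body with an optional preface and afterword (text only)."""

    def __init__(self, body):
        self.body = body
        self.preface = None    # optional, shown before the body
        self.afterword = None  # optional, shown after the body

    def set_preface(self, text):
        self.preface = text

    def delete_preface(self):
        self.preface = None

    def render(self):
        # Skip the optional parts when they are unset.
        parts = [p for p in (self.preface, self.body, self.afterword) if p]
        return "\n\n".join(parts)
```

Keeping the fields nullable matches the spec's "not a required field" note.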
41,168 | 2,868,982,057 | IssuesEvent | 2015-06-05 22:21:42 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | opened | Easily allow the installation and usage of any web component | enhancement Priority-High | <a href="https://github.com/sethladd"><img src="https://avatars.githubusercontent.com/u/5479?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [sethladd](https://github.com/sethladd)**
_Originally opened as dart-lang/sdk#20323_
----
There are a lot of great JavaScript web components out there. As a Dart developer, I would like to use all web components in my project. For example, the Google Web Components: https://github.com/GoogleWebComponents | 1.0 | Easily allow the installation and usage of any web component - <a href="https://github.com/sethladd"><img src="https://avatars.githubusercontent.com/u/5479?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [sethladd](https://github.com/sethladd)**
_Originally opened as dart-lang/sdk#20323_
----
There are a lot of great JavaScript web components out there. As a Dart developer, I would like to use all web components in my project. For example, the Google Web Components: https://github.com/GoogleWebComponents | priority | easily allow the installation and usage of any web component issue by originally opened as dart lang sdk there are a lot of great javascript web components out there as a dart developer i would like to use all web components in my project for example the google web components | 1 |
598,868 | 18,257,551,565 | IssuesEvent | 2021-10-03 09:38:24 | AY2122S1-CS2103T-T11-3/tp | https://api.github.com/repos/AY2122S1-CS2103T-T11-3/tp | opened | As a clinic receptionist, I can add a doctor’s personal details so I can better understand them and have a way of easily contacting them | type.Story priority.High | Create a command to add Doctors to the doctor records in plannerMD.
To be done after #42 and #43 | 1.0 | As a clinic receptionist, I can add a doctor’s personal details so I can better understand them and have a way of easily contacting them - Create a command to add Doctors to the doctor records in plannerMD.
To be done after #42 and #43 | priority | as a clinic receptionist i can add a doctor’s personal details so i can better understand them and have a way of easily contacting them create a command to add doctors to the doctor records in plannermd to be done after and | 1 |
376,658 | 11,149,893,653 | IssuesEvent | 2019-12-23 20:21:45 | bounswe/bounswe2019group8 | https://api.github.com/repos/bounswe/bounswe2019group8 | opened | Article Show Page Add Like and Dislike | Effort: High Mobile New feature Platform: Mobile Priority: High Status: Done | **Actions:**
1. Add like and dislike to show article page.
1. Connect with backend.
**Notes:**
- [x] Add like and dislike to show article page.
- [x] Connect with backend.
**Deadline:** 24.10.2019 - 18.43 | 1.0 | Article Show Page Add Like and Dislike - **Actions:**
1. Add like and dislike to show article page.
1. Connect with backend.
**Notes:**
- [x] Add like and dislike to show article page.
- [x] Connect with backend.
**Deadline:** 24.10.2019 - 18.43 | priority | article show page add like and dislike actions add like and dislike to show article page connect with backend notes add like and dislike to show article page connect with backend deadline | 1 |
550,041 | 16,103,849,580 | IssuesEvent | 2021-04-27 12:50:39 | georchestra/mapstore2-georchestra | https://api.github.com/repos/georchestra/mapstore2-georchestra | closed | Error opening the Embed map in geOrchestra GeoSolutions DEV | Priority: High bug | There is an error when loading a saved map in embed mode in GeoSolutions geOrchestra DEV.
Try for example this map.
https://georchestra.geo-solutions.it/mapstore/embedded.html#/3023
There is a "Not Found" error while loading the following:
https://georchestra.geo-solutions.it/mapstore/dist/geOrchestra-embedded.js
@offtherailz probably it is due to the latest update we did for the embedded part and you forgot some configuration. Please confirm.
| 1.0 | Error opening the Embed map in geOrchestra GeoSolutions DEV - There is an error when loading a saved map in embed mode in GeoSolutions geOrchestra DEV.
Try for example this map.
https://georchestra.geo-solutions.it/mapstore/embedded.html#/3023
There is a "Not Found" error while loading the following:
https://georchestra.geo-solutions.it/mapstore/dist/geOrchestra-embedded.js
@offtherailz probably it is due to the latest update we did for the embedded part and you forgot some configuration. Please confirm.
| priority | error opening the embed map in georchestra geosolutions dev there is an error when loading a saved map in embed mode in geosolutions georchestra dev try for example this map there is a not fount error while loading the following offtherailz probably it is due to the latest update we did for the embedded part and you forgot some configuration please confirm | 1 |
809,464 | 30,193,848,269 | IssuesEvent | 2023-07-04 18:12:54 | wizeline/wize-q-remix | https://api.github.com/repos/wizeline/wize-q-remix | closed | WizeQ is indexed by google | high priority | The WizeQ app is indexed by Google, which means it can be found via web search. | 1.0 | WizeQ is indexed by google - The WizeQ app is indexed by Google, which means it can be found via web search. | priority | wizeq is indexed by google wizeq app is indexed by google it means that it can be search by the browser | 1 |
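A common remedy for unintended indexing is to answer every page with an `X-Robots-Tag` header (or a `robots.txt` disallow rule). A minimal sketch of the header approach (hypothetical; the actual fix depends on how the Remix app builds its responses):

```python
def with_noindex(headers):
    """Return a copy of the response headers that tells crawlers
    not to index the app's pages or follow its links."""
    headers = dict(headers)  # leave the caller's mapping untouched
    headers["X-Robots-Tag"] = "noindex, nofollow"
    return headers
```

Serving this header on every HTML response is enough for well-behaved crawlers to drop the pages from their index.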
103,431 | 4,173,004,224 | IssuesEvent | 2016-06-21 08:58:11 | bazingatechnologies/FSharp.Data.GraphQL | https://api.github.com/repos/bazingatechnologies/FSharp.Data.GraphQL | closed | Fix parser | bug graphql-spec high-priority | The current parser suffers from some issues, the most notable being its fragility to newlines/whitespace in front of the query.
The issue is recreated by *parser should parse kitchen sink* test.
Known issues of the current parser:
- Problems with parsing queries having new lines in them, especially at the beginning of the query.
- Problems with parsing complex inputs as arguments. | 1.0 | Fix parser - The current parser suffers from some issues, the most notable being its fragility to newlines/whitespace in front of the query.
The issue is recreated by *parser should parse kitchen sink* test.
Known issues of the current parser:
- Problems with parsing queries having new lines in them, especially at the beginning of the query.
- Problems with parsing complex inputs as arguments. | priority | fix parser current parser suffers from some issues where most notable is being fragile on the newline whitespaces in front of the query the issue is recreated by parser should parse kitchen sink test known issues of the current parser problems with parsing queries having new lines in them especially at the beginning of the query problems with parsing complex inputs as an arguments | 1 |
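One common mitigation for the leading-newline fragility described in the parser issue is to normalize the input before handing it to the parser (illustrative Python; the project's actual parser is F#, so this only sketches the idea):

```python
def normalize_query(source):
    """Strip a BOM and leading/trailing whitespace so the parser always
    sees the query starting at its first meaningful token."""
    return source.lstrip("\ufeff").strip()
```

The deeper fix is a tokenizer that treats whitespace and newlines as insignificant everywhere, but pre-normalization alone already covers the "newline in front of the query" failure mode.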
364,036 | 10,758,070,522 | IssuesEvent | 2019-10-31 14:23:44 | AY1920S1-CS2113T-F09-2/main | https://api.github.com/repos/AY1920S1-CS2113T-F09-2/main | closed | Reminder: customized by stock/date | priority.High type.Story | Create a feature that allows users to customize the reminders to their convenience | 1.0 | Reminder: customized by stock/date - Create a feature that allows users to customize the reminders to their convenience | priority | reminder customized by stock date create a feature that allows users to customize the reminders to their convenience | 1 |
253,792 | 8,066,023,476 | IssuesEvent | 2018-08-04 10:10:58 | AeneasPlatform/Aeneas | https://api.github.com/repos/AeneasPlatform/Aeneas | closed | Pre-mining setup fail handling | Aeneas:Core Priority 1 : High bug | **There is no fail handling for the mining timeout error.**
**To Reproduce**
Steps to reproduce the behavior:
1. Launch the node for the first time.
2. Wait until the pre-mining setup ends.
3. If the setup value is bigger than it should be, you will see an error when a mining timeout happens.
**Expected behavior**
It should get a new block from the network, apply it to history, and start a new iteration.
**Desktop (please complete the following information):**
- OS: OS X El Capitan
- Device: Macbook Pro 2015 Early
- Version : commit № [839c3ed45c32687d14fc8af8178eb1138bda1ca1](https://github.com/AeneasPlatform/Aeneas/commit/839c3ed45c32687d14fc8af8178eb1138bda1ca1)
| 1.0 | Pre-mining setup fail handling - **There is no fail handling for the mining timeout error.**
**To Reproduce**
Steps to reproduce the behavior:
1. Launch the node for the first time.
2. Wait until the pre-mining setup ends.
3. If the setup value is bigger than it should be, you will see an error when a mining timeout happens.
**Expected behavior**
It should get a new block from the network, apply it to history, and start a new iteration.
**Desktop (please complete the following information):**
- OS: OS X El Capitan
- Device: Macbook Pro 2015 Early
- Version : commit № [839c3ed45c32687d14fc8af8178eb1138bda1ca1](https://github.com/AeneasPlatform/Aeneas/commit/839c3ed45c32687d14fc8af8178eb1138bda1ca1)
| priority | pre mining setup fail handling it is not any fail handling of mining timeout error to reproduce steps to reproduce the behavior launch node at the first time wait till pre mining setup will end if setup value is bigger than it should be you will see an error if mining timeout happens expected behavior it should get new block from the network apply it to history and start new iteration desktop please complete the following information os os x el capitan device macbook pro early version commit № | 1 |
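The expected behaviour in the Aeneas report — on a mining timeout, pull the network's block, apply it to history, and start a new iteration — can be sketched as a recovery loop (illustrative Python with invented function names; the node itself is not Python):

```python
def mining_loop(mine_once, fetch_network_block, apply_to_history, max_rounds=5):
    """Keep mining; on a timeout, adopt the network's block and retry
    instead of failing the node outright."""
    for _ in range(max_rounds):
        try:
            return mine_once()
        except TimeoutError:
            # Another node won this round: sync its block, then retry.
            apply_to_history(fetch_network_block())
    raise RuntimeError("gave up after repeated mining timeouts")
```

The key point is that a timeout is treated as an expected race outcome, not a fatal error.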
698,286 | 23,972,571,529 | IssuesEvent | 2022-09-13 08:58:01 | PaddlePaddle/Paddle | https://api.github.com/repos/PaddlePaddle/Paddle | closed | [Feature High priority] SPR 2+2+8 models AMX enabling and deployment | high priority Intel int8 | **Newest high priority model list from ligang, Green is highest priority.**

How to get the models:
**Detection model: Retinanet, faster-rcnn-r50-fpn**
#Clas
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x1_0_infer.tar
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_infer.tar
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/SwinTransformer_tiny_patch4_window7_224_infer.tar
# OCR
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar
# Seg
# Human Pose 512x512
wget https://paddleseg.bj.bcebos.com/dygraph/humanseg/export/deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax.zip
# Human Pose 192x192
https://paddleseg.bj.bcebos.com/dygraph/humanseg/export/fcn_hrnetw18_small_v1_humanseg_192x192_with_softmax.zip
# Matting model export
https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.3/contrib/Matting/README_CN.md
# Detection
You need to set the export resolution ratio (i.e., the output resolution).
You can use the scripts at the links below in PaddleDetection: set the resolution, then export.
https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/EXPORT_MODEL_en.md
Picodet:https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/picodet/README_cn.md
PP-YOLOv2:https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.3/configs/ppyolo/README_cn.md
| 1.0 | [Feature High priority] SPR 2+2+8 models AMX enabling and deployment - **Newest high-priority model list from ligang; green is the highest priority.**

How to get the models:
**Detection model: Retinanet, faster-rcnn-r50-fpn**
#Clas
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x1_0_infer.tar
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_infer.tar
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/SwinTransformer_tiny_patch4_window7_224_infer.tar
# OCR
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar
# Seg
# Human Pose 512x512
wget https://paddleseg.bj.bcebos.com/dygraph/humanseg/export/deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax.zip
# Human Pose 192x192
https://paddleseg.bj.bcebos.com/dygraph/humanseg/export/fcn_hrnetw18_small_v1_humanseg_192x192_with_softmax.zip
# Matting model export
https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.3/contrib/Matting/README_CN.md
# Detection
You need to set the export resolution ratio (i.e., the output resolution).
You can use the scripts at the links below in PaddleDetection: set the resolution, then export.
https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/EXPORT_MODEL_en.md
Picodet:https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/picodet/README_cn.md
PP-YOLOv2:https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.3/configs/ppyolo/README_cn.md
| priority | spr models amx enabling and deployment newest high priority model list from ligang green is highest priority how to get the models: detection model retinanet faster rcnn fpn clas wget wget wget ocr wget wget seg human pose wget human pose matting model export detection need to set export resolution ratio 需要设置输出分辨率 you can use these scripts and set exporting resolution ratio and export 可以在paddledetection中用下面连接中的脚本,设置分辨率后导出 picodet: pp : | 1 |
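The wget commands above can equally be scripted with the Python standard library; a rough equivalent is shown below (a sketch only — the URLs are the ones listed in the issue, and the helper names are made up):

```python
import os
import tarfile
import urllib.request

# Two of the classification models listed above; extend as needed.
MODEL_URLS = [
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/PPLCNet_x1_0_infer.tar",
    "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV3_large_x1_0_infer.tar",
]

def fetch(url, dest_dir="models"):
    """Download `url` into `dest_dir`, skipping files already present."""
    os.makedirs(dest_dir, exist_ok=True)
    path = os.path.join(dest_dir, url.rsplit("/", 1)[-1])
    if not os.path.exists(path):
        urllib.request.urlretrieve(url, path)
    return path

def extract(tar_path, dest_dir):
    """Unpack one of the downloaded .tar model archives into `dest_dir`."""
    with tarfile.open(tar_path) as tf:
        tf.extractall(dest_dir)
```

Calling `extract(fetch(url))` for each entry in `MODEL_URLS` reproduces the wget-then-untar workflow.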
353,292 | 10,551,121,026 | IssuesEvent | 2019-10-03 12:42:27 | boi123212321/porn-manager | https://api.github.com/repos/boi123212321/porn-manager | reopened | Tag aliases + Tags view | enhancement high priority | - [x] Allow user to create/delete/view/change new tags with optional aliases
- [ ] Automatically find tags in imported file names
- [ ] Optional tag image | 1.0 | Tag aliases + Tags view - - [x] Allow user to create/delete/view/change new tags with optional aliases
- [ ] Automatically find tags in imported file names
- [ ] Optional tag image | priority | tag aliases tags view allow user to create delete view change new tags with optional aliases automatically find tags in imported file names optional tag image | 1 |
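For the "automatically find tags in imported file names" item above, alias matching could work roughly like this (a minimal sketch; the tag names and aliases here are invented examples):

```python
import re

def build_alias_index(tags):
    """Map every tag name and alias (lowercased) to its canonical tag.

    `tags` is a dict like {"outdoor": ["outside"]}.
    """
    index = {}
    for tag, aliases in tags.items():
        index[tag.lower()] = tag
        for alias in aliases:
            index[alias.lower()] = tag
    return index

def find_tags(filename, index):
    """Return the set of canonical tags whose name or alias appears
    as a whole word in the imported file name."""
    words = re.findall(r"[a-z0-9]+", filename.lower())
    return {index[w] for w in words if w in index}
```

Because the index maps aliases back to canonical tags, the importer only ever stores one tag per concept regardless of which alias the file name used.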
212,314 | 7,235,730,960 | IssuesEvent | 2018-02-13 02:23:11 | City-Bureau/city-scrapers | https://api.github.com/repos/City-Bureau/city-scrapers | closed | Mapzen is shutting down :( | priority: high (must have) | https://mapzen.com/blog/shutdown/
There are [some alternatives](https://mapzen.com/blog/migration/#mapzen-search) listed on their [migration guide](https://mapzen.com/blog/migration/). We'll need to find a replacement that provides lat/long as well as community area.
cc @r-wei | 1.0 | Mapzen is shutting down :( - https://mapzen.com/blog/shutdown/
There are [some alternatives](https://mapzen.com/blog/migration/#mapzen-search) listed on their [migration guide](https://mapzen.com/blog/migration/). We'll need to find a replacement that provides lat/long as well as community area.
cc @r-wei | priority | mapzen is shutting down there are listed on their we ll need to find a replacement that provides lat long as well as community area cc r wei | 1 |
798,728 | 28,293,168,995 | IssuesEvent | 2023-04-09 13:17:21 | bounswe/bounswe2023group6 | https://api.github.com/repos/bounswe/bounswe2023group6 | closed | Creation of the Sequence Diagram for Creating and Editing Comment | priority: high type: task area: wiki area: milestone status: review-needed | ### Problem
The sequence diagram for creating and editing a comment should be implemented, as decided at the 13th meeting.
### Solution
_No response_
### Documentation
_No response_
### Additional notes
_No response_
### Reviewers
Ömer Bahadıroğlu
### Deadline
08.04.2023 - Saturday - 21:00 | 1.0 | Creation of the Sequence Diagram for Creating and Editing Comment - ### Problem
The sequence diagram for creating and editing a comment should be implemented, as decided at the 13th meeting.
### Solution
_No response_
### Documentation
_No response_
### Additional notes
_No response_
### Reviewers
Ömer Bahadıroğlu
### Deadline
08.04.2023 - Saturday - 21:00 | priority | creation of the sequence diagram for creating and editing comment problem the sequence diagram for the creating and editing comment should be implemented as decided on the meeting solution no response documentation no response additional notes no response reviewers ömer bahadıroğlu deadline saturday | 1 |
325,515 | 9,932,057,717 | IssuesEvent | 2019-07-02 08:58:57 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | A tif file from Model Library cannot be opened in GAMA Image Viewer | > Bug Concerns Data Persistence Concerns Interface Concerns Models Library Priority High Version Git | **Describe the bug**
In the model Library, Data / Data Importation / includes, there is the file `land-cover.tif`.
When double-clicking on it, it should be opened in the Image Viewer view.
Instead, I get a white / empty Image Viewer, and an Exception in the Eclipse Console:
```
org.eclipse.swt.SWTException: Unsupported color depth
at org.eclipse.swt.SWT.error(SWT.java:4699)
at org.eclipse.swt.SWT.error(SWT.java:4614)
at org.eclipse.swt.SWT.error(SWT.java:4585)
at org.eclipse.swt.graphics.Image.createRepresentation(Image.java:994)
at org.eclipse.swt.graphics.Image.init(Image.java:1471)
at org.eclipse.swt.graphics.Image.<init>(Image.java:565)
at ummisco.gama.ui.viewers.image.ImageViewer.showImage(ImageViewer.java:371)
at ummisco.gama.ui.viewers.image.ImageViewer$2.lambda$0(ImageViewer.java:319)
at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:40)
at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:185)
at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4102)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3769)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$5.run(PartRenderingEngine.java:1173)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:338)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1062)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:155)
at org.eclipse.ui.internal.Workbench.lambda$3(Workbench.java:644)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:338)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:566)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)
at msi.gama.application.Application.start(Application.java:126)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:203)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:137)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:107)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:661)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:597)
at org.eclipse.equinox.launcher.Main.run(Main.java:1476)
at org.eclipse.equinox.launcher.Main.main(Main.java:1449)
```
Notice that there is another TIFF file, `bogota_grid.tif` that can be opened properly in the Image Viewer.
**Desktop (please complete the following information):**
- OS: macOS Mojave
- GAMA version: git
| 1.0 | A tif file from Model Library cannot be opened in GAMA Image Viewer - **Describe the bug**
In the model Library, Data / Data Importation / includes, there is the file `land-cover.tif`.
When double-clicking on it, it should be opened in the Image Viewer view.
Instead, I get a white / empty Image Viewer, and an Exception in the Eclipse Console:
```
org.eclipse.swt.SWTException: Unsupported color depth
at org.eclipse.swt.SWT.error(SWT.java:4699)
at org.eclipse.swt.SWT.error(SWT.java:4614)
at org.eclipse.swt.SWT.error(SWT.java:4585)
at org.eclipse.swt.graphics.Image.createRepresentation(Image.java:994)
at org.eclipse.swt.graphics.Image.init(Image.java:1471)
at org.eclipse.swt.graphics.Image.<init>(Image.java:565)
at ummisco.gama.ui.viewers.image.ImageViewer.showImage(ImageViewer.java:371)
at ummisco.gama.ui.viewers.image.ImageViewer$2.lambda$0(ImageViewer.java:319)
at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:40)
at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:185)
at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4102)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3769)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$5.run(PartRenderingEngine.java:1173)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:338)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1062)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:155)
at org.eclipse.ui.internal.Workbench.lambda$3(Workbench.java:644)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:338)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:566)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)
at msi.gama.application.Application.start(Application.java:126)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:203)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:137)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:107)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:661)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:597)
at org.eclipse.equinox.launcher.Main.run(Main.java:1476)
at org.eclipse.equinox.launcher.Main.main(Main.java:1449)
```
Notice that there is another TIFF file, `bogota_grid.tif` that can be opened properly in the Image Viewer.
**Desktop (please complete the following information):**
- OS: macOS Mojave
- GAMA version: git
| priority | a tif file from model library cannot be opened in gama image viewer describe the bug in the model library data data importation includes there is the file land cover tif when double clicking on it it should be opened in the image viewer view instead i get a white empty image viewer and an exception in the eclipse console org eclipse swt swtexception unsupported color depth at org eclipse swt swt error swt java at org eclipse swt swt error swt java at org eclipse swt swt error swt java at org eclipse swt graphics image createrepresentation image java at org eclipse swt graphics image init image java at org eclipse swt graphics image image java at ummisco gama ui viewers image imageviewer showimage imageviewer java at ummisco gama ui viewers image imageviewer lambda imageviewer java at org eclipse swt widgets runnablelock run runnablelock java at org eclipse swt widgets synchronizer runasyncmessages synchronizer java at org eclipse swt widgets display runasyncmessages display java at org eclipse swt widgets display readanddispatch display java at org eclipse ui internal workbench swt partrenderingengine run partrenderingengine java at org eclipse core databinding observable realm runwithdefault realm java at org eclipse ui internal workbench swt partrenderingengine run partrenderingengine java at org eclipse ui internal workbench createandrunui java at org eclipse ui internal workbench lambda workbench java at org eclipse core databinding observable realm runwithdefault realm java at org eclipse ui internal workbench createandrunworkbench workbench java at org eclipse ui platformui createandrunworkbench platformui java at msi gama application application start application java at org eclipse equinox internal app eclipseapphandle run eclipseapphandle java at org eclipse core runtime internal adaptor eclipseapplauncher runapplication eclipseapplauncher java at org eclipse core runtime internal adaptor eclipseapplauncher start eclipseapplauncher java at org 
eclipse core runtime adaptor eclipsestarter run eclipsestarter java at org eclipse core runtime adaptor eclipsestarter run eclipsestarter java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org eclipse equinox launcher main invokeframework main java at org eclipse equinox launcher main basicrun main java at org eclipse equinox launcher main run main java at org eclipse equinox launcher main main main java notice that there is another tiff file bogota grid tif that can be opened properly in the image viewer desktop please complete the following information os macos mojave gama version git | 1 |
187,592 | 6,759,402,714 | IssuesEvent | 2017-10-24 16:59:18 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | DAL rework: Errors with violations | Priority: High Type: Bug | ERROR log statements when triggering a violation from the admin UI to a node
`Oct 24 11:38:09 pancakeFence1 httpd_admin: httpd.admin(2478) ERROR: [mac:unknown] Database query failed with non retryable error: Column 'vid' in field list is ambiguous (errno: 1052) [SELECT `id`, `mac`, `vid`, `start_date`, `release_date`, `status`, `ticket_ref`, `notes` FROM violation INNER JOIN class ON ( `violation`.`vid` = `class`.`vid` ) WHERE ( ( `mac` = ? AND `status` = ? AND `vid` = ? ) ) LIMIT ? OFFSET ?]{de:ad:be:ef:42:42, open, 1100001, 1, 0} (pf::dal::db_execute)
Oct 24 11:38:09 pancakeFence1 httpd_admin: httpd.admin(2478) ERROR: [mac:unknown] Database query failed with non retryable error: Column 'vid' in field list is ambiguous (errno: 1052) [SELECT `id`, `mac`, `vid`, `start_date`, `release_date`, `status`, `ticket_ref`, `notes` FROM violation INNER JOIN class ON ( `violation`.`vid` = `class`.`vid` ) WHERE ( ( `mac` = ? AND `status` = ? ) ) ORDER BY `start_date` DESC]{de:ad:be:ef:42:42, open} (pf::dal::db_execute)`
Also, unable to release the violation from the admin UI (Error! An error occurred while closing the violation.)
`Oct 24 11:40:36 pancakeFence1 httpd_admin: httpd.admin(2478) ERROR: [mac:unknown] Database query failed with non retryable error: Column 'vid' in field list is ambiguous (errno: 1052) [SELECT `id`, `mac`, `vid`, `start_date`, `release_date`, `status`, `ticket_ref`, `notes` FROM violation INNER JOIN class ON ( `violation`.`vid` = `class`.`vid` ) WHERE ( `id` = ? ) LIMIT ? OFFSET ?]{1, 1, 0} (pf::dal::db_execute)` | 1.0 | DAL rework: Errors with violations - ERROR log statements when triggering a violation from the admin UI to a node
`Oct 24 11:38:09 pancakeFence1 httpd_admin: httpd.admin(2478) ERROR: [mac:unknown] Database query failed with non retryable error: Column 'vid' in field list is ambiguous (errno: 1052) [SELECT `id`, `mac`, `vid`, `start_date`, `release_date`, `status`, `ticket_ref`, `notes` FROM violation INNER JOIN class ON ( `violation`.`vid` = `class`.`vid` ) WHERE ( ( `mac` = ? AND `status` = ? AND `vid` = ? ) ) LIMIT ? OFFSET ?]{de:ad:be:ef:42:42, open, 1100001, 1, 0} (pf::dal::db_execute)
Oct 24 11:38:09 pancakeFence1 httpd_admin: httpd.admin(2478) ERROR: [mac:unknown] Database query failed with non retryable error: Column 'vid' in field list is ambiguous (errno: 1052) [SELECT `id`, `mac`, `vid`, `start_date`, `release_date`, `status`, `ticket_ref`, `notes` FROM violation INNER JOIN class ON ( `violation`.`vid` = `class`.`vid` ) WHERE ( ( `mac` = ? AND `status` = ? ) ) ORDER BY `start_date` DESC]{de:ad:be:ef:42:42, open} (pf::dal::db_execute)`
Also, unable to release the violation from the admin UI (Error! An error occurred while closing the violation.)
`Oct 24 11:40:36 pancakeFence1 httpd_admin: httpd.admin(2478) ERROR: [mac:unknown] Database query failed with non retryable error: Column 'vid' in field list is ambiguous (errno: 1052) [SELECT `id`, `mac`, `vid`, `start_date`, `release_date`, `status`, `ticket_ref`, `notes` FROM violation INNER JOIN class ON ( `violation`.`vid` = `class`.`vid` ) WHERE ( `id` = ? ) LIMIT ? OFFSET ?]{1, 1, 0} (pf::dal::db_execute)` | priority | dal rework errors with violations error log statements when triggering a violation from the admin ui to a node oct httpd admin httpd admin error database query failed with non retryable error column vid in field list is ambiguous errno de ad be ef open pf dal db execute oct httpd admin httpd admin error database query failed with non retryable error column vid in field list is ambiguous errno de ad be ef open pf dal db execute also unable to release the violation from the admin ui error an error occurred while closing the violation oct httpd admin httpd admin error database query failed with non retryable error column vid in field list is ambiguous errno pf dal db execute | 1 |
111,902 | 4,494,573,920 | IssuesEvent | 2016-08-31 06:53:22 | thommoboy/There-are-no-brakes | https://api.github.com/repos/thommoboy/There-are-no-brakes | closed | Camera switch on ancients level | Ancients help wanted Priority High | We want the player to be able to press a button so that a chosen screen becomes the main screen and is enlarged.
Help:
Which button should trigger this, and which script handles it? | 1.0 | Camera switch on ancients level - We want the player to be able to press a button so that a chosen screen becomes the main screen and is enlarged.
Help:
Which button should do it? and which script is doing it? | priority | camera switch on ancients level want player can press some button then set a screen become main screen and be bigger help which button should do it and which script is doing it | 1 |
388,267 | 11,485,625,495 | IssuesEvent | 2020-02-11 08:08:36 | mpusz/units | https://api.github.com/repos/mpusz/units | closed | gcc-10 optimizer issue | bug help wanted high priority | The following code https://godbolt.org/z/7pqFV7 compiles and runs fine on both Debug and Release on gcc-9.2. It also runs fine without optimizations on gcc-10. However, trying to enable any, even -O1, makes units symbols in the library to be optimized away (https://godbolt.org/z/5ZhQs5).
It is most probably a bug in gcc rather than in the units library. I already spoke about this problem with gcc developers but they need a self-contained example with repro of this bug in order to start working on the issue. If anyone could help in creating such a code sample it would be great! | 1.0 | gcc-10 optimizer issue - The following code https://godbolt.org/z/7pqFV7 compiles and runs fine on both Debug and Release on gcc-9.2. It also runs fine without optimizations on gcc-10. However, trying to enable any, even -O1, makes units symbols in the library to be optimized away (https://godbolt.org/z/5ZhQs5).
It is most probably a bug in gcc rather than in the units library. I already spoke about this problem with gcc developers but they need a self-contained example with repro of this bug in order to start working on the issue. If anyone could help in creating such a code sample it would be great! | priority | gcc optimizer issue the following code compiles and runs fine on both debug and release on gcc it also runs fine without optimizations on gcc however trying to enable any even makes units symbols in the library to be optimized away it is most probably a bug in gcc rather than in the units library i already spoke about this problem with gcc developers but they need a self contained example with repro of this bug in order to start working on the issue if anyone could help in creating such a code sample it would be great | 1 |
282,959 | 8,712,269,331 | IssuesEvent | 2018-12-06 21:42:18 | openstax/bit | https://api.github.com/repos/openstax/bit | closed | SEO: Redirect from openstax.org// to openstax.org | Change Request Epic priority1-high | ### Description
Create a 301 redirect from openstax.org// to openstax.org
### Acceptance
openstax.org// redirects to openstax.org
### Impact
* **Severity**: [ ] High [ ] Medium [ ] Low | 1.0 | SEO: Redirect from openstax.org// to openstax.org - ### Description
Create a 301 redirect from openstax.org// to openstax.org
### Acceptance
openstax.org// redirects to openstax.org
### Impact
* **Severity**: [ ] High [ ] Medium [ ] Low | priority | seo redirect from openstax org to openstax org description create a redirect from openstax org and to openstax org acceptance openstax org redirects to openstax org impact severity high medium low | 1 |
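One hedged way to satisfy the acceptance criterion is to collapse runs of slashes and answer with a 301. The sketch below is an illustrative Python/WSGI middleware, not the actual openstax.org stack (which may implement this at the CDN or web-server layer instead):

```python
import re

def canonical_location(host, path):
    """Collapse runs of slashes; return the 301 target URL, or None
    when the path is already canonical (e.g. "//" -> "/")."""
    collapsed = re.sub(r"/{2,}", "/", path) or "/"
    if collapsed == path:
        return None
    return "https://%s%s" % (host, collapsed)

def slash_redirect_middleware(app):
    """WSGI middleware issuing the 301 described above."""
    def wrapper(environ, start_response):
        target = canonical_location(environ.get("HTTP_HOST", "openstax.org"),
                                    environ.get("PATH_INFO", "/"))
        if target is not None:
            start_response("301 Moved Permanently", [("Location", target)])
            return [b""]
        return app(environ, start_response)
    return wrapper
```

With this in place, a request for `openstax.org//` receives `301 Moved Permanently` with `Location: https://openstax.org/`, and canonical paths pass through untouched.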
319,656 | 9,747,754,197 | IssuesEvent | 2019-06-03 15:00:27 | openbankingspace/tpp-issues | https://api.github.com/repos/openbankingspace/tpp-issues | opened | Barclays - Fetching Standing Orders, Direct Debits, and Beneficiaries returns 400 error response | aspsp:barclays env:live issue:down priority:high type:aisp1 | As of the start of today (3rd of June), we've been having intermittent issues fetching Standing Orders, Direct Debits, and Beneficiaries from Barclays. The following error response is returned, with a 400 status code:
```
<html><head><title>JBWEB000065: HTTP Status 400 - </title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>JBWEB000065: HTTP Status 400 - </h1><HR size="1" noshade="noshade"><p><b>JBWEB000309: type</b> JBWEB000067: Status report</p><p><b>JBWEB000068: message</b> <u></u></p><p><b>JBWEB000069: description</b> <u>JBWEB000120: The request sent by the client was syntactically incorrect.</u></p><HR size="1" noshade="noshade"></body></html>
```
This issue seems to be PSU specific, as PSUs have attempted to share their Standing Orders, Direct Debits, and Beneficiaries multiple times, and the same 400 response is returned. It doesn't seem to affect all Barclays PSUs, however we are unsure of the cohort of users impacted. Could Barclays provide information on this?
## Debug Information
_Sent separately to Barclays._
## Impact
High. This affects an unknown cohort of PSUs and prevents them from using services that require Standing Orders, Direct Debits, and Beneficiaries information. | 1.0 | Barclays - Fetching Standing Orders, Direct Debits, and Beneficiaries returns 400 error response - As of the start of today (3rd of June), we've been having intermittent issues fetching Standing Orders, Direct Debits, and Beneficiaries from Barclays. The following error response is returned, with a 400 status code:
```
<html><head><title>JBWEB000065: HTTP Status 400 - </title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>JBWEB000065: HTTP Status 400 - </h1><HR size="1" noshade="noshade"><p><b>JBWEB000309: type</b> JBWEB000067: Status report</p><p><b>JBWEB000068: message</b> <u></u></p><p><b>JBWEB000069: description</b> <u>JBWEB000120: The request sent by the client was syntactically incorrect.</u></p><HR size="1" noshade="noshade"></body></html>
```
This issue seems to be PSU specific, as PSUs have attempted to share their Standing Orders, Direct Debits, and Beneficiaries multiple times, and the same 400 response is returned. It doesn't seem to affect all Barclays PSUs, however we are unsure of the cohort of users impacted. Could Barclays provide information on this?
## Debug Information
_Sent separately to Barclays._
## Impact
High. This affects an unknown cohort of PSUs and prevents them from using services that require Standing Orders, Direct Debits, and Beneficiaries information. | priority | barclays fetching standing orders direct debits and beneficiaries returns error response as of the start of today of june we ve been having intermittent issues fetching standing orders direct debits and beneficiaries from barclays the following error response is returned with a status code http status http status type status report message description the request sent by the client was syntactically incorrect this issue seems to be psu specific as psus have attempted to share their standing orders direct debits and beneficiaries multiple times and the same response is returned it doesn t seem to affect all barclays psus however we are unsure of the cohort of users impacted could barclays provide information on this debug information sent separately to barclays impact high this affects an unknown cohort of psus and prevents them from using services that require standing orders direct debits and beneficiaries information | 1 |
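On the TPP side, a defensive check that tells this gateway-level HTML failure apart from an ordinary API error could look like the following. These are hypothetical helper names for illustration, not part of any Barclays or Open Banking SDK:

```python
def is_jbweb_error(status_code, body):
    """Detect the HTML 'JBWEB000065' page shown above, which is returned
    in place of a normal Open Banking JSON error response."""
    return status_code == 400 and "JBWEB000065" in body

def classify_response(status_code, body):
    """Rough triage for account-information responses."""
    if is_jbweb_error(status_code, body):
        # Gateway-level rejection: it affects specific PSUs regardless of
        # retries, so report it to the ASPSP instead of retrying forever.
        return "aspsp-gateway-error"
    if 200 <= status_code < 300:
        return "ok"
    return "api-error"
```

Classifying the response this way lets the TPP surface a targeted report to the ASPSP (as this issue does) rather than treating the failure as a transient error and retrying indefinitely.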
284,337 | 8,737,375,678 | IssuesEvent | 2018-12-11 22:22:31 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | error plotting relaxer nodes | bug crash likelihood medium priority reviewed severity high wrong results | Josh Kallman has some simulations where the relaxer nodes (not the labels, but the actual nodes) that can be shown in visit don't show up. He gets a couple of weird errors: 1. If there are no nodes in the set, VisIt dies. I don't know if this is a visit or something we're doing to output things incorrectly. Maybe Dan Laney or Cyrus Harrison would know. 2. Some of the nodes say they can't be displayed because of a 2D/3D plot incompatibility. Maybe this is a symptom of not having a set output too, but seems like something on our end. The files work in VisIt 2.8, but not 2.10. Cyrus has example data.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2674
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: error plotting relaxer nodes
Assigned to: Mark Miller
Category:
Target version: 2.12.2
Author: Cyrus Harrison
Start: 08/30/2016
Due date:
% Done: 0
Estimated time:
Created: 08/30/2016 06:33 pm
Updated: 01/30/2017 02:52 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Josh Kallman has some simulations where the relaxer nodes (not the labels, but the actual nodes) that can be shown in visit don't show up. He gets a couple of weird errors: 1. If there are no nodes in the set, VisIt dies. I don't know if this is a visit or something we're doing to output things incorrectly. Maybe Dan Laney or Cyrus Harrison would know. 2. Some of the nodes say they can't be displayed because of a 2D/3D plot incompatibility. Maybe this is a symptom of not having a set output too, but seems like something on our end. The files work in VisIt 2.8, but not 2.10. Cyrus has example data.
Comments:
Gave Mark some vlog files from Josh; we may need to ask for data. I looked at the vlogs. There are some oddities. For example, VisIt is seeing the mmesh option on many multiblock vars but the mesh specified doesn't exist. So, it is falling back to fuzzy logic to match multimesh to multi-var. But that logic appears to be doing the right thing. They should fix their files, but I don't think this is the cause of the problem. There are four objects that are all empty. That is fine too. But any plots involving them will obviously be empty too. I have asked Josh for data. It's strange they say it worked in 2.8; they must have been getting lucky with the old logic in the plugin, b/c it sounds like there are some that should be fixed. Ok, just got some new data files from Josh and displayed them with VisIt 2.10.3 and 2.12.0. I tried to mesh plot all the meshes listed under the relax node menu. They all plot without error save 2 that generate the empty plot warning and indeed are empty. Will send Josh an email. I have a fix for the crash here. It needs to get merged. That said, the files contain other issues with DBOPT_MMESH options. I added detection logic for point, quad, ucd, csg empty blocks and return null from GetVar calls in this case.
| 1.0 | error plotting relaxer nodes - Josh Kallman has some simulations where the relaxer nodes (not the labels, but the actual nodes) that can be shown in visit don't show up. He gets a couple of weird errors: 1. If there are no nodes in the set, VisIt dies. I don't know if this is a visit or something we're doing to output things incorrectly. Maybe Dan Laney or Cyrus Harrison would know. 2. Some of the nodes say they can't be displayed because of a 2D/3D plot incompatibility. Maybe this is a symptom of not having a set output too, but seems like something on our end. The files work in VisIt 2.8, but not 2.10. Cyrus has example data.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2674
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: error plotting relaxer nodes
Assigned to: Mark Miller
Category:
Target version: 2.12.2
Author: Cyrus Harrison
Start: 08/30/2016
Due date:
% Done: 0
Estimated time:
Created: 08/30/2016 06:33 pm
Updated: 01/30/2017 02:52 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Josh Kallman has some simulations where the relaxer nodes (not the labels, but the actual nodes) that can be shown in visit don't show up. He gets a couple of weird errors: 1. If there are no nodes in the set, VisIt dies. I don't know if this is a visit or something we're doing to output things incorrectly. Maybe Dan Laney or Cyrus Harrison would know. 2. Some of the nodes say they can't be displayed because of a 2D/3D plot incompatibility. Maybe this is a symptom of not having a set output too, but seems like something on our end. The files work in VisIt 2.8, but not 2.10. Cyrus has example data.
Comments:
gave mark some vlog files from josh, we may need to ask for data. I looked at the vlogs. There are some oddities. For example, VisIt is seeing the mmesh option on many multiblock vars but the mesh specified doesn't exist. So, it is falling back to fuzzy logic to match multimesh to multi-var. But, that logic appears to be doing the right thing. They should fix their files but I don't think this is the cause of the problem. There are four objects that are all empty. That is fine too. But any plots involving them will obviously be empty too. I have asked Josh for data. It's strange they say it worked in 2.8; they must have been getting lucky with the old logic in the plugin, b/c it sounds like there are some that should be fixed. Ok, just got some new data files from Josh and displayed them with VisIt 2.10.3 and 2.12.0. I tried to mesh plot all the meshes listed under the relax node menu. They all plot without error save 2 that generate the empty plot warning and indeed are empty. Will send Josh email. I have a fix for the crash here. It needs to get merged. That said, the files contain other issues with DBOPT_MMESH options. I added detection logic for point, quad, ucd, csg empty blocks and return null from GetVar calls in this case
| priority | error plotting relaxer nodes josh kallman has some simulations where the relaxer nodes not the labels but the actual nodes that can be shown in visit don t show up he gets a couple of weird errors if there are no nodes in the set visit dies i don t know if this is a visit or something we re doing to output things incorrectly maybe dan laney or cyrus harrison would know some of the nodes say they can t be displayed because of a plot incompatibility maybe this is a symptom of not having a set output too but seems like something on our end the files work in visit but not cyrus has example data redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject error plotting relaxer nodes assigned to mark miller category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description josh kallman has some simulations where the relaxer nodes not the labels but the actual nodes that can be shown in visit don t show up he gets a couple of weird errors if there are no nodes in the set visit dies i don t know if this is a visit or something we re doing to output things incorrectly maybe dan laney or cyrus harrison would know some of the nodes say they can t be displayed because of a plot incompatibility maybe this is a symptom of not having a set output too but seems like something on our end the files work in visit but not cyrus has example data comments gave mark some vlog files from josh we may need to ask for data i looked at the vlogs there are some oddities for example visit is seeing the mmesh option on many multiblock vars but the mesh specified doesn t exist so it is falling back to fuzzy logic to match 
multimesh to multi var but that logic appears to be doing the right thing they should fix their files but i don t think this is the cause of the problem there are four objects that are all empty that is fine too but any plots involving them will obviously be empty too i have asked josh for data its strange they say it worked in they must have been getting luck with the old logic in the plugin b c it sounds like there are some that should be fixed ok just got some new data files from josh and displayed them with visit and i tried to mesh plot all the meshes listed under the relax node menu the all plot without error save that generate the empty plot warning and indeed are empty will send josh email i have a fix for the crash here it needs to get merged that said the files contain other issues with dbopt mmesh options i added detection logic for point quad ucd csg emtpy blocks and return null from getvar calls in this case | 1 |
828,592 | 31,835,726,427 | IssuesEvent | 2023-09-14 13:25:16 | opendatahub-io/odh-dashboard | https://api.github.com/repos/opendatahub-io/odh-dashboard | closed | [Bug]: Partial Cluster Storage Sizes is Poorly Supported | kind/bug priority/high feature/ds-projects field-priority | ### Solution
- Implement dropdown for size option -- https://github.com/opendatahub-io/odh-dashboard/issues/1480#issuecomment-1689380585 for mocks
- reflect value in number field in that unit
- do not auto-convert value when unit changes
- do not allow decimal values (whole numbers only in the unit of their choice)
- Apply this change to Workbench PVC & PVC Creation/Edit Modal
---
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Deploy type
Directly installing the Dashboard at the latest (eg. `odh-dashboard/main`)
### Version
v2.11.0
### Current Behavior
When creating a workbench, if you supply a partial value (1.5 for instance) K8s will convert that to Mi, instead of keeping the decimal Gi format.
The UI renders the value fine for the visual representation on the table for cluster storages... but when you edit the cluster storage in question, we don't see that difference and treat the value as Gi -- which turns 1.5 gigs into 1536 gigs (instead of Mi)
### Expected Behavior
We support units (let K8s store it as it sees fit). Or we convert back by dividing by 1024.
UX should make the call that makes the most sense.
Alternatively (and less desired) we just stop partial values.
### Steps To Reproduce
1. create a DS Project
2. click "Create workbench" button
3. insert your workbench details
4. in the "Persistent Storage Size" field, insert a decimal value (e.g., 1.5)
5. Submit by clicking "Create workbench" button
6. check the cluster storage details in the table
7. wait until workbench gets "Running" state
8. edit the cluster storage in question
### Workaround (if any)
Use whole numbers
### What browsers are you seeing the problem on?
_No response_
### Anything else
_No response_ | 2.0 | [Bug]: Partial Cluster Storage Sizes is Poorly Supported - ### Solution
- Implement dropdown for size option -- https://github.com/opendatahub-io/odh-dashboard/issues/1480#issuecomment-1689380585 for mocks
- reflect value in number field in that unit
- do not auto-convert value when unit changes
- do not allow decimal values (whole numbers only in the unit of their choice)
- Apply this change to Workbench PVC & PVC Creation/Edit Modal
---
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Deploy type
Directly installing the Dashboard at the latest (eg. `odh-dashboard/main`)
### Version
v2.11.0
### Current Behavior
When creating a workbench, if you supply a partial value (1.5 for instance) K8s will convert that to Mi, instead of keeping the decimal Gi format.
The UI renders the value fine for the visual representation on the table for cluster storages... but when you edit the cluster storage in question, we don't see that difference and treat the value as Gi -- which turns 1.5 gigs into 1536 gigs (instead of Mi)
### Expected Behavior
We support units (let K8s store it as it sees fit). Or we convert back by dividing by 1024.
UX should make the call that makes the most sense.
Alternatively (and less desired) we just stop partial values.
### Steps To Reproduce
1. create a DS Project
2. click "Create workbench" button
3. insert your workbench details
4. in the "Persistent Storage Size" field, insert a decimal value (e.g., 1.5)
5. Submit by clicking "Create workbench" button
6. check the cluster storage details in the table
7. wait until workbench gets "Running" state
8. edit the cluster storage in question
### Workaround (if any)
Use whole numbers
### What browsers are you seeing the problem on?
_No response_
### Anything else
_No response_ | priority | partial cluster storage sizes is poorly supported solution implement dropdown for size option for mocks reflect value in number field in that unit do not auto convert value when unit changes do not allow decimal values whole numbers only in the unit of their choice apply this change to workbench pvc pvc creation edit modal is there an existing issue for this i have searched the existing issues deploy type directly installing the dashboard at the latest eg odh dashboard main version current behavior when creating a workbench if you supply a partial value for instance will convert that to mi instead of keeping the decimal gi format the ui renders the value fine for the visual representation on the table for cluster storages but when you edit the cluster storage in question we don t see that difference and treat the value as gi which turns gigs into gigs instead of mi expected behavior we support units let store it as it sees fit or we convert back by dividing by ux should make the call that makes the most sense alternatively and less desired we just stop partial values steps to reproduce create a ds project click create workbench button insert your workbench details in the persistent storage size field insert a decimal value e g submit by clicking create workbench button check the cluster storage details in the table wait until workbench gets running state edit the cluster storage in question workaround if any use whole numbers what browsers are you seeing the problem on no response anything else no response | 1 |
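The conversion at the heart of the storage-size bug above is 1024-based: Kubernetes normalizes a fractional `Gi` quantity (1.5) to a whole number of `Mi` (1536), and the UI must not read the stored value back as `Gi`. A minimal sketch of that round-trip in plain JavaScript, with invented helper names (this is not the odh-dashboard's actual code):

```javascript
// Hypothetical helpers (not from the odh-dashboard codebase): binary
// unit conversion as Kubernetes applies it to fractional quantities.
function giToMi(gi) {
  return gi * 1024; // 1 Gi = 1024 Mi, so 1.5 Gi -> 1536 Mi
}

// When reading the stored value back, only display Gi if the Mi value
// divides evenly; otherwise keep Mi instead of misreading 1536 Mi as 1536 Gi.
function displayQuantity(mi) {
  return mi % 1024 === 0 ? `${mi / 1024}Gi` : `${mi}Mi`;
}

console.log(giToMi(1.5));           // 1536
console.log(displayQuantity(1536)); // "1536Mi"
console.log(displayQuantity(2048)); // "2Gi"
```

The dropdown-plus-whole-numbers fix in the issue sidesteps the fractional case entirely, so no conversion ever needs to happen on read-back.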
778,848 | 27,331,579,633 | IssuesEvent | 2023-02-25 17:49:26 | AUBGTheHUB/spa-website-2022 | https://api.github.com/repos/AUBGTheHUB/spa-website-2022 | opened | HackAUBG Admin Panel Sections | enhancement high priority frontend ADMIN PANEL | ## Brief description:
Create all the necessary sections in the admin panel for the hackaubg team management
## How to achieve it:
Pages:
- `/admin/hackathon/teams` : should display all hackathon teams with info about the number of current members, name of the team, and the predominant school (e.g. if there are three people from AUBG, one from FMI - you write AUBG on the card) (cards should have see members button which will return `/admin/hackathon/teams/members`)
- `/admin/hackathon/teams/members` : cards with members containing all of their info (similar to the `/admin/members` )
- `/admin/hackathon/teams/emails` : which fetches all of the participants' emails and renders them on the page as follows (OR just the emails separated by commas)
| 1.0 | HackAUBG Admin Panel Sections - ## Brief description:
Create all the necessary sections in the admin panel for the hackaubg team management
## How to achieve it:
Pages:
- `/admin/hackathon/teams` : should display all hackathon teams with info about the number of current members, name of the team, and the predominant school (e.g. if there are three people from AUBG, one from FMI - you write AUBG on the card) (cards should have see members button which will return `/admin/hackathon/teams/members`)
- `/admin/hackathon/teams/members` : cards with members containing all of their info (similar to the `/admin/members` )
- `/admin/hackathon/teams/emails` : which fetches all of the participants' emails and renders them on the page as follows (OR just the emails separated by commas)
| priority | hackaubg admin panel sections brief description create all the necessary sections in the admin panel for the hackaubg team management how to achieve it pages admin hackathon teams should display all hackathon teams with info about the number of current members name of the team and the predominant school e g if there are three people from aubg one from fmi you write aubg on the card cards should have see members button which will return admin hackathon teams members admin hackathon teams members cards with members containing all of their info similar to the admin members admin hackathon teams emails which fetches all of the participants emails and renders them on the page as follows or just the emails separated by the commas | 1 |
434,176 | 12,515,285,964 | IssuesEvent | 2020-06-03 07:23:10 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | External file/url references to parameters and responses are not being resolved in the Resource UI | 2.6.0 3.x.x Priority/Highest commitment | **Description:**
When creating an API resource containing External file/url reference to parameters and responses, these references do not get resolved when displaying in the Resource UI(API definition under API design tab)
**Suggested Labels:**
APIM 2.5.0
**Affected Product Version:**
API-M 2.5.0
**Steps to reproduce:**
1. Create an API resource containing parameters and responses with external references.
2. Navigate to the Resource UI
_Outcome_: The parameter configuration will not be displayed in the Resource UI | 1.0 | External file/url references to parameters and responses are not being resolved in the Resource UI - **Description:**
When creating an API resource containing External file/url reference to parameters and responses, these references do not get resolved when displaying in the Resource UI(API definition under API design tab)
**Suggested Labels:**
APIM 2.5.0
**Affected Product Version:**
API-M 2.5.0
**Steps to reproduce:**
1. Create an API resource containing parameters and responses with external references.
2. Navigate to the Resource UI
_Outcome_: The parameter configuration will not be displayed in the Resource UI | priority | external file url references to parameters and responses are not being resolved in the resource ui description when creating an api resource containing external file url reference to parameters and responses these references do not get resolved when displaying in the resource ui api definition under api design tab suggested labels apim affected product version api m steps to reproduce create an api resource containing parameters and responses with edit references navigate to the resource ui outcome the parameter configuration will not be displayed in the resource ui | 1 |
636,596 | 20,603,861,527 | IssuesEvent | 2022-03-06 17:31:44 | RoboJackets/apiary-mobile | https://api.github.com/repos/RoboJackets/apiary-mobile | opened | Crash when logging out | type / bug priority / high | Sentry isn't catching this, but:
1. Login
2. Record an attendance record by any method
3. Logout
and the app crashes | 1.0 | Crash when logging out - Sentry isn't catching this, but:
1. Login
2. Record an attendance record by any method
3. Logout
and the app crashes | priority | crash when logging out sentry isn t catching this but login record an attendance record by any method logout and the app crashes | 1 |
596,387 | 18,104,225,679 | IssuesEvent | 2021-09-22 17:19:29 | NOAA-GSL/VxLegacyIngest | https://api.github.com/repos/NOAA-GSL/VxLegacyIngest | closed | Add FV3_GFS_EMC vx | Status: Doing Type: Task Priority: High | ---
Author Name: **molly.b.smith** (molly.b.smith)
Original Redmine Issue: 51264, https://vlab.ncep.noaa.gov/redmine/issues/51264
Original Date: 2018-06-05
Original Assignee: jeffrey.a.hamilton
---
Stan has requested that we add FV3_GFS_EMC vx data to all apps that currently verify the GFS.
| 1.0 | Add FV3_GFS_EMC vx - ---
Author Name: **molly.b.smith** (molly.b.smith)
Original Redmine Issue: 51264, https://vlab.ncep.noaa.gov/redmine/issues/51264
Original Date: 2018-06-05
Original Assignee: jeffrey.a.hamilton
---
Stan has requested that we add FV3_GFS_EMC vx data to all apps that currently verify the GFS.
| priority | add gfs emc vx author name molly b smith molly b smith original redmine issue original date original assignee jeffrey a hamilton stan has requested that we add gfs emc vx data to all apps that currently verify the gfs | 1 |
186,181 | 6,734,157,681 | IssuesEvent | 2017-10-18 17:02:54 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | k_queue_poll returns NULL with K_FOREVER | area: Kernel bug priority: high | When running with https://github.com/zephyrproject-rtos/zephyr/pull/4350, we still see instances on Cortex-M4 (nRF52832) where k_queue_poll returns NULL, traced down to sys_slist_get() returning NULL, something that should never happen. We can reproduce relatively easily, although it seems to be a race condition. | 1.0 | k_queue_poll returns NULL with K_FOREVER - When running with https://github.com/zephyrproject-rtos/zephyr/pull/4350, we still see instances on Cortex-M4 (nRF52832) where k_queue_poll returns NULL, traced down to sys_slist_get() returning NULL, something that should never happen. We can reproduce relatively easily, although it seems to be a race condition. | priority | k queue poll returns null with k forever when running with we still see instances on cortex where k queue poll returns null traced down to sys slist get returning null something that should never happen we can reproduce relatively easily although it seems to be a race condition | 1 |
502,428 | 14,546,127,411 | IssuesEvent | 2020-12-15 20:44:17 | ansible/awx | https://api.github.com/repos/ansible/awx | opened | Add proper validation to CodeMirrorField component | component:ui_next priority:high state:needs_devel type:feature | ##### ISSUE TYPE
- Feature Idea
##### SUMMARY
It doesn't look like this was ever really implemented. There's a prop on the component where you _could_ pass in a validate function but we don't actually do that in any of the places.
The idea here is that we want to validate the field for malformed yaml/json and _at a minimum_ show an error beneath the field.
The reason why I think this is going to be difficult is because we have to deal with formik. By default, formik is going to run validation on _change_ and on _blur_. In the case of these codemirror fields we definitely don't want to validate on change (each keystroke) since this would show errors constantly. We probably only want to validate on blur. This can be configured at the _form_ level with a prop passed to `<Formik ... ` but that would mean _all_ the fields in the form would only validate on blur. We may have to come up with a clever solution here to only validate these codemirror fields on blur but validate the rest of the form on change and blur.
This is particularly important for places with prompts. We manipulate/combine extra vars and survey values later on during the process and having unvalidated data makes error handling difficult.
This may also tie in to https://github.com/ansible/awx/issues/8905
| 1.0 | Add proper validation to CodeMirrorField component - ##### ISSUE TYPE
- Feature Idea
##### SUMMARY
It doesn't look like this was ever really implemented. There's a prop on the component where you _could_ pass in a validate function but we don't actually do that in any of the places.
The idea here is that we want to validate the field for malformed yaml/json and _at a minimum_ show an error beneath the field.
The reason why I think this is going to be difficult is because we have to deal with formik. By default, formik is going to run validation on _change_ and on _blur_. In the case of these codemirror fields we definitely don't want to validate on change (each keystroke) since this would show errors constantly. We probably only want to validate on blur. This can be configured at the _form_ level with a prop passed to `<Formik ... ` but that would mean _all_ the fields in the form would only validate on blur. We may have to come up with a clever solution here to only validate these codemirror fields on blur but validate the rest of the form on change and blur.
This is particularly important for places with prompts. We manipulate/combine extra vars and survey values later on during the process and having unvalidated data makes error handling difficult.
This may also tie in to https://github.com/ansible/awx/issues/8905
| priority | add proper validation to codemirrorfield component issue type feature idea summary it doesn t look like this was ever really implemented there s a prop on the component where you could pass in a validate function but we don t actually do that in any of the places the idea here is that we want to validate the field for malformed yaml json and at a minimum show an error beneath the field the reason why i think this is going to be difficult is because we have to deal with formik by default formik is going to run validation on change and on blur in the case of these codemirror fields we definitely don t want to validate on change each keystroke since this would show errors constantly we probably only want to validate on blur this can be configured at the form level with a prop passed to formik but that would mean all the fields in the form would only validate on blur we may have to come up with a clever solution here to only validate these codemirror fields on blur but validate the rest of the form on change and blur this is particularly important for places with prompts we manipulate combine extra vars and survey values later on during the process and having unvalidated data makes error handling difficult this may also tie in to | 1 |
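One way to get the per-field behavior the AWX issue above describes (validate a CodeMirror field only after blur, while the rest of the form still validates on change) is to gate the validator yourself. The sketch below is plain JavaScript with invented names; it is not the actual Formik or AWX API:

```javascript
// Sketch (assumed names, not the AWX/Formik implementation): gate a
// field's validator so it only fires once the field has been blurred,
// while other fields may still validate on every change.
function makeBlurGatedValidator(validate) {
  let blurred = false;
  return {
    onBlur(value) {
      blurred = true;
      return validate(value);
    },
    onChange(value) {
      // Skip validation until the first blur, so the user is not
      // shown "malformed JSON" errors on every keystroke.
      return blurred ? validate(value) : undefined;
    },
  };
}

// Example validator: reject malformed JSON, return undefined when valid.
const jsonValidator = (text) => {
  try { JSON.parse(text); return undefined; }
  catch { return "Malformed JSON"; }
};

const field = makeBlurGatedValidator(jsonValidator);
field.onChange("{ not json"); // undefined: no error while typing
field.onBlur("{ not json");   // "Malformed JSON"
field.onChange("{}");         // undefined: valid after blur
```

The same gating idea can live in a custom field component, leaving the form-level `validateOnChange`/`validateOnBlur` settings untouched for the other fields.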
398,082 | 11,737,689,241 | IssuesEvent | 2020-03-11 14:59:27 | fac18/safe-space | https://api.github.com/repos/fac18/safe-space | closed | Questions page - multiple choice and radio buttons | E1 high priority user story | - [ ] As a user I want to be guided blah blah | 1.0 | Questions page - multiple choice and radio buttons - - [ ] As a user I want to be guided blah blah | priority | questions page multiple choice and radio buttons as a user i want to be guided blah blah | 1 |
484,785 | 13,957,543,862 | IssuesEvent | 2020-10-24 07:23:15 | AY2021S1-CS2113T-F11-4/tp | https://api.github.com/repos/AY2021S1-CS2113T-F11-4/tp | closed | Improvements for Version 2 | priority.High severity.Medium type.Task | - Optimize task access using `Hashtable` for `ProjectBacklog` (Refer to #68)
- Rename sprint action commands for the following
- Delete task
- Add task
- `ProjectBacklog` add view command for all tasks
- Change parameters to accept proper data type
- Move the parsing of the parameters to parser or other relevant functions | 1.0 | Improvements for Version 2 - - Optimize task access using `Hashtable` for `ProjectBacklog` (Refer to #68)
- Rename sprint action commands for the following
- Delete task
- Add task
- `ProjectBacklog` add view command for all tasks
- Change parameters to accept proper data type
- Move the parsing of the parameters to parser or other relevant functions | priority | improvements for version optimize task access using hashtable for projectbacklog refer to rename sprint action commands for the following delete task add task projectbacklog add view command for all tasks change parameters to accept proper data type move the parsing of the parameters to parser or other relevant functions | 1 |
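The first item in the list above (hash-based task access for `ProjectBacklog`, per #68) amounts to indexing tasks by id instead of scanning a list. A rough sketch in plain JavaScript with invented names (the actual project is Java and would use `Hashtable`):

```javascript
// Rough sketch (invented names): index backlog tasks by id so lookup
// is O(1) instead of a linear scan over the whole backlog.
class BacklogIndex {
  constructor() {
    this.tasksById = new Map(); // id -> task
  }
  addTask(task) {
    this.tasksById.set(task.id, task);
  }
  getTask(id) {
    return this.tasksById.get(id); // constant-time access
  }
  deleteTask(id) {
    return this.tasksById.delete(id); // true if the task existed
  }
}

const backlog = new BacklogIndex();
backlog.addTask({ id: 1, title: "Rename sprint action commands" });
backlog.addTask({ id: 2, title: "Add view command for all tasks" });
console.log(backlog.getTask(2).title); // "Add view command for all tasks"
```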
326,303 | 9,954,898,866 | IssuesEvent | 2019-07-05 09:34:45 | DigitalCampus/moodle-block_oppia_mobile_export | https://api.github.com/repos/DigitalCampus/moodle-block_oppia_mobile_export | closed | In Moodle version 3.5+ (Build: 20180531), the DB installation XML causes an error with the PATH attribute | bug good-first-issue high priority | (Version is taken from version.php)
The current PATH in the XML installation is the following:
`<XMLDB PATH="blocks/oppia_mobile_export" VERSION="2013111401" COMMENT="XMLDB file for Moodle block/oppia_mobile_export">`
Which causes an error in the process of activating the block, rendering the activation impossible. The problem seems to be solved when changing the line to the following:
`<XMLDB PATH="blocks/oppia_mobile_export/db" VERSION="2013111401" COMMENT="XMLDB file for Moodle block/oppia_mobile_export">`
This can be due to changes in how Moodle interprets the PATH attribute in the newer version, as the former is reported to work in Moodle 3.4.3 | 1.0 | In Moodle version 3.5+ (Build: 20180531), the DB installation XML causes an error with the PATH attribute - (Version is taken from version.php)
The current PATH in the XML installation is the following:
`<XMLDB PATH="blocks/oppia_mobile_export" VERSION="2013111401" COMMENT="XMLDB file for Moodle block/oppia_mobile_export">`
Which causes an error in the process of activating the block, rendering the activation impossible. The problem seems to be solved when changing the line to the following:
`<XMLDB PATH="blocks/oppia_mobile_export/db" VERSION="2013111401" COMMENT="XMLDB file for Moodle block/oppia_mobile_export">`
This can be due to changes in how Moodle interprets the PATH attribute in the newer version, as the former is reported to work in Moodle 3.4.3 | priority | in moodle version build the db installation xml causes an error with the path attribute version is taken from version php the current path in the xml installation is the following which causes an error in the process of activating the block rendering the activation impossible the problem seems to be solved when changing the line to the following this can be due to changes in how moodle interprets the path attribute in the newer version as the former is reported to work in moodle | 1 |
234,653 | 7,724,380,324 | IssuesEvent | 2018-05-24 14:56:36 | agda/agda | https://api.github.com/repos/agda/agda | closed | Preserve ellipsis (...) when case splitting in interaction | case-splitting interaction priority: high type: enhancement | ``` agda
f (m e (s (s y))) with t
... | x = {! x !}
```
case splitting on x will replace ellipsis (...) by `f (m e (s (s y)))`.
Ellipsis was probably intended by user, should be preserved.
There could be a separate command for `expandEllipsis`, if needed.
Original issue reported on code.google.com by `andreas....@gmail.com` on 21 Feb 2012 at 5:26
| 1.0 | Preserve ellipsis (...) when case splitting in interaction - ``` agda
f (m e (s (s y))) with t
... | x = {! x !}
```
case splitting on x will replace ellipsis (...) by `f (m e (s (s y)))`.
Ellipsis was probably intended by user, should be preserved.
There could be a separate command for `expandEllipsis`, if needed.
Original issue reported on code.google.com by `andreas....@gmail.com` on 21 Feb 2012 at 5:26
| priority | preserve ellipsis when case splitting in interaction agda f m e s s y with t x x case splitting on x will replace ellipsis by f m e s s y ellipsis was probably intended by user should be preserved there could be a separate command for expandellipsis if needed original issue reported on code google com by andreas gmail com on feb at | 1 |
495,100 | 14,272,116,815 | IssuesEvent | 2020-11-21 15:37:45 | UC-Davis-molecular-computing/scadnano | https://api.github.com/repos/UC-Davis-molecular-computing/scadnano | closed | append "5'-", "3'-", or "internal-" to modification key when automatically using IDT text as id for JSON key | bug closed in dev high priority | Set the IDT text to be the same for a 5' and 3' modification. Both cannot then be written to the modifications field of the JSON file, which will cause an error when reading back out.
Fix: append "5'"/"3'" to the key when using the IDT text as the modification's key in the JSON. | 1.0 | append "5'-", "3'-", or "internal-" to modification key when automatically using IDT text as id for JSON key - Set the IDT text to be the same for a 5' and 3' modification. Both cannot then be written to the modifications field of the JSON file, which will cause an error when reading back out.
Fix: append "5'"/"3'" to the key when using the IDT text as the modification's key in the JSON. | priority | append or internal to modification key when automatically using idt text as id for json key set the idt text to be the same for a and modification both cannot then be written to the modifications field of the json file which will cause an error when reading back out fix append to the key when using the idt text as the modification s key in the json | 1 |
90,047 | 3,808,567,255 | IssuesEvent | 2016-03-25 15:39:59 | chocolatey/choco | https://api.github.com/repos/chocolatey/choco | closed | Sign the powershell scripts and assemblies | 3 - Done Enhancement Priority_HIGH Security | One of the things we'll want to do for added security and for companies that need all scripts signed is to sign the PowerShell scripts.
https://groups.google.com/d/msgid/chocolatey/a476ca1e-85b0-4c53-816e-5621ef22ca9e%40googlegroups.com | 1.0 | Sign the powershell scripts and assemblies - One of the things we'll want to do for added security and for companies that need all scripts signed is to sign the PowerShell scripts.
https://groups.google.com/d/msgid/chocolatey/a476ca1e-85b0-4c53-816e-5621ef22ca9e%40googlegroups.com | priority | sign the powershell scripts and assemblies one of the things we ll want to do for added security and for companies that need all scripts signed is to sign the powershell scripts | 1 |
192,018 | 6,845,761,863 | IssuesEvent | 2017-11-13 09:33:33 | metasfresh/metasfresh-webui-frontend | https://api.github.com/repos/metasfresh/metasfresh-webui-frontend | closed | view: Edit field value is broken | branch:master branch:release priority:high type:bug | ### Is this a bug or feature request?
bug
### What is the current behavior?
#### Which are the steps to reproduce?
* open Product Prices: https://w101.metasfresh.com:8443/window/540325
* right click on a numeric cell, e.g. Price List
* click on Edit field value
=> NOK: nothing happens

### What is the expected or desired behavior?
I should be able to edit that value directly
| 1.0 | view: Edit field value is broken - ### Is this a bug or feature request?
bug
### What is the current behavior?
#### Which are the steps to reproduce?
* open Product Prices: https://w101.metasfresh.com:8443/window/540325
* right click on an numeric cell, e.g. Price List
* click on Edit field value
=> NOK: nothing happens

### What is the expected or desired behavior?
I should be able to edit that value directly
| priority | view edit field value is broken is this a bug or feature request bug what is the current behavior which are the steps to reproduce open product prices right click on an numeric cell e g price list click on edit field value nok nothing happens what is the expected or desired behavior i should be able to edit that value directly | 1 |
478,936 | 13,788,722,649 | IssuesEvent | 2020-10-09 07:43:38 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | RVL-1248 - Coverage targets not updating in assign jurisdictions - Namibia | Priority: High wontfix | On Namibia Production:
Once you have input the coverage targets in the jurisdiction metadata section of the Web UI, it does not update the structure counts for that OA in the assign jurisdiction section.
**Steps to reproduce:**
1. In jurisdiction metadata, upload a csv. with targets for Onguta - 600 and Omatunda - 600
2. Download the target data from jurisdiction metadata to check the target was successfully assigned

3. Go to the planning tool and create a new test plan, save this in draft form
4. Click the plan to assign jurisdictions
5. Check the structure counts for Onguta and Omatunda


6. The number of structures in the assign jurisdiction section does not match those from the target set in jurisdiction metadata
| 1.0 | RVL-1248 - Coverage targets not updating in assign jurisdictions - Namibia - On Namibia Production:
Once you have input the coverage targets in the jurisdiction metadata section of the Web UI, it does not update the structure counts for that OA in the assign jurisdiction section.
**Steps to reproduce:**
1. In jurisdiction metadata, upload a csv. with targets for Onguta - 600 and Omatunda - 600
2. Download the target data from jurisdiction metadata to check the target was successfully assigned

3. Go to the planning tool and create a new test plan, save this in draft form
4. Click the plan to assign jurisdictions
5. Check the structure counts for Onguta and Omatunda


6. The number of structures in the assign jurisdiction section does not match those from the target set in jurisdiction metadata
| priority | rvl coverage targets not updating in assign jurisdictions namibia on namibia production once you have input the coverage targets in the jurisdiction metadata section of the web ui it does not update the structure counts for that oa in the assign jurisdiction section steps to reproduce in jurisdiction metadata upload a csv with targets for onguta and omatunda download the target data from jurisdiction metadata to check the target was successfully assigned go to the planning tool and create a new test plan save this in draft form click the plan to assign jurisdictions check the structure counts for onguta and omatunda the number of structures in the assign jurisdiction section does not match those from the target set in jurisdiction metadata | 1 |
333,154 | 10,118,103,779 | IssuesEvent | 2019-07-31 08:18:20 | cognitedata/cognitesdk-js | https://api.github.com/repos/cognitedata/cognitesdk-js | closed | Add classes Asset & AssetList | high priority | Create a new class Asset:
```js
class Asset {
...
delete()
parent()
subtree()
children()
timeSeries()
events()
files()
}
```
```js
const assetList = [Asset, Asset, Asset];
assetList.delete = () => {...}
assetList.timeSeries = () => {...}
``` | 1.0 | Add classes Asset & AssetList - Create a new class Asset:
```js
class Asset {
...
delete()
parent()
subtree()
children()
timeSeries()
events()
files()
}
```
```js
const assetList = [Asset, Asset, Asset];
assetList.delete = () => {...}
assetList.timeSeries = () => {...}
``` | priority | add classes asset assetlist create a new class asset js class asset delete parent subtree children timeseries events files js const assetlist assetlist delete assetlist timeseries | 1 |
550,490 | 16,113,781,806 | IssuesEvent | 2021-04-28 03:09:02 | TestCentric/testcentric-gui | https://api.github.com/repos/TestCentric/testcentric-gui | closed | Agent Redesign | Feature High Priority | Based on the work in the agent-spike branch, this will include
* A new agent.core assembly containing Types only used by agents
* Elimination of DriverService
* Removal of the ExtensionService from engine.core, along with all support for Services
* New test package properties to indicate the test framework being used.
After completion of this issue, frameworks other than NUnit 3 will no longer be supported. This is temporary and nunit v2 support will be added back along with added extension points to replace the lost functionality before 2.0 is released. | 1.0 | Agent Redesign - Based on the work in the agent-spike branch, this will include
* A new agent.core assembly containing Types only used by agents
* Elimination of DriverService
* Removal of the ExtensionService from engine.core, along with all support for Services
* New test package properties to indicate the test framework being used.
After completion of this issue, frameworks other than NUnit 3 will no longer be supported. This is temporary and nunit v2 support will be added back along with added extension points to replace the lost functionality before 2.0 is released. | priority | agent redesign based on the work in the agent spike branch this will include a new agent core assembly containing types only used by agents elimination of driverservice removal of the extensionservice from engine core along with all support for services new test package properties to indicate the test framework being used after completion of this issue frameworks other than nunit will no longer be supported this is temporary and nunit support will be added back along with added extension points to replace the lost functionality before is released | 1 |
785,676 | 27,622,232,113 | IssuesEvent | 2023-03-10 01:47:26 | NucciTheBoss/cleantest | https://api.github.com/repos/NucciTheBoss/cleantest | closed | `passwd` utility needed for user and group management | Priority: High Type: Enhancement | A `passwd` utility will allow end-users to create test users and groups inside test environment instances. This will be ideal for testing applications such as Apptainer where they are used by non-privileged users. | 1.0 | `passwd` utility needed for user and group management - A `passwd` utility will allow end-users to create test users and groups inside test environment instances. This will be ideal for testing applications such as Apptainer where they are used by non-privileged users. | priority | passwd utility needed for user and group management a passwd utility will allow end users to create test users and groups inside test environment instances this will be ideal for testing applications such as apptainer where they are used by non privileged users | 1 |
257,799 | 8,142,177,488 | IssuesEvent | 2018-08-21 06:33:24 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | Intermittent issue - Creating Che workspace fails - configured service account doesn't have access | SEV2-high area/che issue/intermittent priority/P4 status/pending-qa-verification team/che/osio type/bug | Seeing this problem intermittently when creating a new workspace. Performing an env reset and then retrying seems to resolve this.
Steps to recreate issue:
* Create new OSIO space
* Create new Che workspace

| 1.0 | Intermittent issue - Creating Che workspace fails - configured service account doesn't have access - Seeing this problem intermittently when creating a new workspace. Performing an env reset and then retrying seems to resolve this.
Steps to recreate issue:
* Create new OSIO space
* Create new Che workspace

| priority | intermittent issue creating che workspace fails configured service account doesn t have access seeing this problem intermittently when creating a new workspace performing an env reset and then retrying seems to resolve this steps to recreate issue create new osio space create new che workspace | 1 |
749,016 | 26,147,706,986 | IssuesEvent | 2022-12-30 08:30:47 | SuddenDevelopment/StopMotion | https://api.github.com/repos/SuddenDevelopment/StopMotion | closed | UI Controls | enhancement Priority High | **Controls:**
Controls can be enabled and disabled in different parts of the interface.
> 1. N-panel.
> 2. Floating menu Viewport, Dopesheet & Graph Editor.
> Left, Right, Top, Bottom.
> 3. Pie Menu.
>
Mock ups need to be done?
| 1.0 | UI Controls - **Controls:**
Controls can be enabled and disabled in different parts of the interface.
> 1. N-panel.
> 2. Floating menu Viewport, Dopesheet & Graph Editor.
> Left, Right, Top, Bottom.
> 3. Pie Menu.
>
Mock ups need to be done?
| priority | ui controls controls controls can be enabled and disabled in different parts of the interface n panel floating menu viewport dopesheet graph editor left right top bottom pie menu mock ups need to be done | 1 |
803,709 | 29,187,124,280 | IssuesEvent | 2023-05-19 16:22:14 | stratosphererl/stratosphere | https://api.github.com/repos/stratosphererl/stratosphere | closed | Make reference to number of users and replays dynamic in home.tsx and about.tsx | priority: high | # Acceptance Criteria
- [ ] When a user views either the home or about page, have the references in the text referring to the number of users and replays on our site display the actual number present on our site
# Estimation of Work
- TBA
# Tasks
For both home.tsx and replay.tsx:
- [ ] Call stats service for number of users and number of replays
- [ ] Dynamically present the two numbers in the text of each page
# Risks
None
# Notes
n/a | 1.0 | Make reference to number of users and replays dynamic in home.tsx and about.tsx - # Acceptance Criteria
- [ ] When a user views either the home or about page, have the references in the text referring to the number of users and replays on our site display the actual number present on our site
# Estimation of Work
- TBA
# Tasks
For both home.tsx and replay.tsx:
- [ ] Call stats service for number of users and number of replays
- [ ] Dynamically present the two numbers in the text of each page
# Risks
None
# Notes
n/a | priority | make reference to number of users and replays dynamic in home tsx and about tsx acceptance criteria when a user views either the home or about page have the references in the text referring to the number of users and replays on our site display the actual number present on our site estimation of work tba tasks for both home tsx and replay tsx call stats service for number of users and number of replays dynamically present the two numbers in the text of each page risks none notes n a | 1 |
715,729 | 24,607,918,169 | IssuesEvent | 2022-10-14 18:07:15 | IBMa/equal-access | https://api.github.com/repos/IBMa/equal-access | opened | [BUG]: Stored scans is creating two inputs for each scan | Bug priority-1 (high) | ### Project
a11y checker extension
### Browser
Chrome
### Operating system
MacOS
### Description
When user is storing scans, each scan is showing two.
Also the scan starts with the number No.2 vs No.1. in addition, when we see all scan, we lost the scrolling option to see more scans. Please review attachments for more details.
### Steps to reproduce
1. Install [3.1.39](https://github.com/IBMa/equal-access/actions/runs/3251376559)
2. Navigate to https://able-test.mybluemix.net/able/toolkit/design/overview
3. Bring the checker UI and conduct a scan.
4. Select the start storing scans option
5. Conduct a few more scans.
Observation: Each scan is storing as two scans.
<img width="964" alt="Screen Shot 2022-10-14 at 12 50 06 PM" src="https://user-images.githubusercontent.com/62436670/195912581-27ecebe2-da8f-4064-8129-22f751a5e1d0.png">
<img width="848" alt="Screen Shot 2022-10-14 at 12 50 24 PM" src="https://user-images.githubusercontent.com/62436670/195912586-57bdc080-315e-4e9d-b2cb-da4f04d3daa6.png">
<img width="1206" alt="Screen Shot 2022-10-14 at 12 51 47 PM" src="https://user-images.githubusercontent.com/62436670/195912589-3c17e989-c474-4c94-b513-9a1fcc046186.png">
[Accessibility_Report-Design – IBM Accessibility.xlsx.zip](https://github.com/IBMa/equal-access/files/9788986/Accessibility_Report-Design.IBM.Accessibility.xlsx.zip)
| 1.0 | [BUG]: Stored scans is creating two inputs for each scan - ### Project
a11y checker extension
### Browser
Chrome
### Operating system
MacOS
### Description
When user is storing scans, each scan is showing two.
Also the scan starts with the number No.2 vs No.1. in addition, when we see all scan, we lost the scrolling option to see more scans. Please review attachments for more details.
### Steps to reproduce
1. Install [3.1.39](https://github.com/IBMa/equal-access/actions/runs/3251376559)
2. Navigate to https://able-test.mybluemix.net/able/toolkit/design/overview
3. Bring the checker UI and conduct a scan.
4. Select the start storing scans option
5. Conduct a few more scans.
Observation: Each scan is storing as two scans.
<img width="964" alt="Screen Shot 2022-10-14 at 12 50 06 PM" src="https://user-images.githubusercontent.com/62436670/195912581-27ecebe2-da8f-4064-8129-22f751a5e1d0.png">
<img width="848" alt="Screen Shot 2022-10-14 at 12 50 24 PM" src="https://user-images.githubusercontent.com/62436670/195912586-57bdc080-315e-4e9d-b2cb-da4f04d3daa6.png">
<img width="1206" alt="Screen Shot 2022-10-14 at 12 51 47 PM" src="https://user-images.githubusercontent.com/62436670/195912589-3c17e989-c474-4c94-b513-9a1fcc046186.png">
[Accessibility_Report-Design – IBM Accessibility.xlsx.zip](https://github.com/IBMa/equal-access/files/9788986/Accessibility_Report-Design.IBM.Accessibility.xlsx.zip)
| priority | stored scans is creating two inputs for each scan project checker extension browser chrome operating system macos description when user is storing scans each scan is showing two also the scan starts with the number no vs no in addition when we see all scan we lost the scrolling option to see more scans please review attachments for more details steps to reproduce install navigate to bring the checker ui and conduct a scan select the start storing scans option conduct a few more scans observation each scan is storing as two scans img width alt screen shot at pm src img width alt screen shot at pm src img width alt screen shot at pm src | 1 |
474,723 | 13,674,978,410 | IssuesEvent | 2020-09-29 12:03:55 | material-docs/material-docs | https://api.github.com/repos/material-docs/material-docs | closed | Fix bug in header | bug high priority invalid | Header object does not shows locale data.
```
<H1><Locale path={"pages/FirstPage/header1"}/></H1>
```
#### Locale:
```
export default {
name: "en-us",
label: "English",
locale: {
pages: {
FirstPage: {
header1: "My name is Danil Andreev",
header2: "This is a page about my history.",
text: "Hello, my name is __Danil Andreev__, I am a programmer from Kiev, Ukraine.",
header3: "I will show you a piece of code",
},
SecondPage: {
header1: "This is a feature page",
header2: "Table",
redirect: "Redirect to Page About Me",
}
}
}
}
``` | 1.0 | Fix bug in header - Header object does not shows locale data.
```
<H1><Locale path={"pages/FirstPage/header1"}/></H1>
```
#### Locale:
```
export default {
name: "en-us",
label: "English",
locale: {
pages: {
FirstPage: {
header1: "My name is Danil Andreev",
header2: "This is a page about my history.",
text: "Hello, my name is __Danil Andreev__, I am a programmer from Kiev, Ukraine.",
header3: "I will show you a piece of code",
},
SecondPage: {
header1: "This is a feature page",
header2: "Table",
redirect: "Redirect to Page About Me",
}
}
}
}
``` | priority | fix bug in header header object does not shows locale data locale export default name en us label english locale pages firstpage my name is danil andreev this is a page about my history text hello my name is danil andreev i am a programmer from kiev ukraine i will show you a piece of code secondpage this is a feature page table redirect redirect to page about me | 1 |
149,062 | 5,706,813,404 | IssuesEvent | 2017-04-18 12:24:13 | eolexe/zenhubext | https://api.github.com/repos/eolexe/zenhubext | opened | Create Account [backend] | Priority - 1 High Type - New Feature | <!-- Issues Template v1.0 -->
## Issue Workflow
<!-- This section is to help team to ensure the issue is properly documented before handing off to development. -->
- [x] New Issue is ready to be reviewed by repo owner or one of the core team members (@eolexe)
- [ ] Issue clarified and all sections are fulfiled.
- [ ] Issue estimated. (using zenhub)
- [ ] Issue approval & prioritized
<!-- Feel free to delete sections that are not applicable -->
## Feature. User Story
As a user, I want to be able to register a Senstone account, to personalise my experience and protect my notes from unauthorised access
## Annotated Wireframes / Screenshots
## Tasks
- [ ] Create API `POST /users`
- [ ] Document on apiary
| 1.0 | Create Account [backend] - <!-- Issues Template v1.0 -->
## Issue Workflow
<!-- This section is to help team to ensure the issue is properly documented before handing off to development. -->
- [x] New Issue is ready to be reviewed by repo owner or one of the core team members (@eolexe)
- [ ] Issue clarified and all sections are fulfiled.
- [ ] Issue estimated. (using zenhub)
- [ ] Issue approval & prioritized
<!-- Feel free to delete sections that are not applicable -->
## Feature. User Story
As a user, I want to be able to register a Senstone account, to personalise my experience and protect my notes from unauthorised access
## Annotated Wireframes / Screenshots
## Tasks
- [ ] Create API `POST /users`
- [ ] Document on apiary
| priority | create account issue workflow new issue is ready to be reviewed by repo owner or one of the core team members eolexe issue clarified and all sections are fulfiled issue estimated using zenhub issue approval prioritized feature user story as a user i want to be able to register a senstone account to personalise my experience and protect my notes from unauthorised access annotated wireframes screenshots tasks create api post users document on apiary | 1 |
203,362 | 7,060,323,536 | IssuesEvent | 2018-01-05 08:09:39 | wso2-incubator/testgrid | https://api.github.com/repos/wso2-incubator/testgrid | closed | Jenkins pipeline fails to connect to host without disabling HostkeyChecking | Priority/High Severity/Major Type/Task | **Description:**
Jenkins pipeline fails to connect to a remote host without adding -o StrictHostKeyChecking=no option.
this should be avoided.
| 1.0 | Jenkins pipeline fails to connect to host without disabling HostkeyChecking - **Description:**
Jenkins pipeline fails to connect to a remote host without adding -o StrictHostKeyChecking=no option.
this should be avoided.
| priority | jenkins pipeline fails to connect to host without disabling hostkeychecking description jenkins pipeline fails to connect to a remote host without adding o stricthostkeychecking no option this should be avoided | 1 |
247,700 | 7,922,552,851 | IssuesEvent | 2018-07-05 11:13:56 | status-im/status-react | https://api.github.com/repos/status-im/status-react | opened | Messages containing only emojis are not displayed in desktop | chat desktop high-priority high-severity |
### Description
*Type*: Bug
*Summary*:
#### Expected behavior
can see message with emoji
#### Actual behavior
message is not displayed in status-desktop

### Reproduction
*Prerequisites:* user A (mobile) and User B (desktop) are joined to same public chat (i.e., #tocheck)
- Open Status
- User A: send issue with emoji only to #tocheck
- User B: check channel
### Additional Information
* Status version: [desktop 05/07/2018](https://jenkins.status.im/job/status-react/job/desktop/job/manual/11/)
* Operating System: MacOS High Sierra
| 1.0 | Messages containing only emojis are not displayed in desktop -
### Description
*Type*: Bug
*Summary*:
#### Expected behavior
can see message with emoji
#### Actual behavior
message is not displayed in status-desktop

### Reproduction
*Prerequisites:* user A (mobile) and User B (desktop) are joined to same public chat (i.e., #tocheck)
- Open Status
- User A: send issue with emoji only to #tocheck
- User B: check channel
### Additional Information
* Status version: [desktop 05/07/2018](https://jenkins.status.im/job/status-react/job/desktop/job/manual/11/)
* Operating System: MacOS High Sierra
| priority | messages containing only emojis are not displayed in desktop description type bug summary expected behavior can see message with emoji actual behavior message is not displayed in status desktop reproduction prerequisites user a mobile and user b desktop are joined to same public chat i e tocheck open status user a send issue with emoji only to tocheck user b check channel additional information status version operating system macos high sierra | 1 |
205,748 | 7,105,687,369 | IssuesEvent | 2018-01-16 14:29:23 | Neovici/cosmoz-omnitable | https://api.github.com/repos/Neovici/cosmoz-omnitable | closed | XLSX shows faulty date and lacks article-ID | high priority | Downloaded XLSX (from omnitable) shows weird date format and lacks article-ID. | 1.0 | XLSX shows faulty date and lacks article-ID - Downloaded XLSX (from omnitable) shows weird date format and lacks article-ID. | priority | xlsx shows faulty date and lacks article id downloaded xlsx from omnitable shows weird date format and lacks article id | 1 |
729,937 | 25,151,417,655 | IssuesEvent | 2022-11-10 10:17:16 | fkie-cad/dewolf | https://api.github.com/repos/fkie-cad/dewolf | closed | AttributeError: 'GlobalVariable' object has no attribute 'operands' in expressionpropagationfunctioncalls | bug priority-high | ### What happened?
The decompiler crashes with an AttributeError during the dataflowanalysis in expressionpropagationfunctioncall.
```python
Traceback (most recent call last):
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompile.py", line 80, in <module>
main(Decompiler)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/util/commandline.py", line 65, in main
task = decompiler.decompile(function_name, options)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompile.py", line 55, in decompile
pipeline.run(task)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/pipeline.py", line 97, in run
instance.run(task)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/dataflowanalysis/expressionpropagationfunctioncall.py", line 22, in run
super().run(task)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/commons/expressionpropagationcommons.py", line 47, in run
while self.perform(task.graph, iteration):
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/dataflowanalysis/expressionpropagationfunctioncall.py", line 42, in perform
if self._definition_can_be_propagated_into_target(var_definition, instruction):
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/dataflowanalysis/expressionpropagationfunctioncall.py", line 86, in _definition_can_be_propagated_into_target
and self._is_call_value_used_exactly_once(definition)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/dataflowanalysis/expressionpropagationfunctioncall.py", line 63, in _is_call_value_used_exactly_once
if len(return_values := definition.destination.operands) == 1:
AttributeError: 'GlobalVariable' object has no attribute 'operands'
```
### How to reproduce?
Decompile the main function in one of the samples given below.
[expressionpropagationfunctioncall_error.zip](https://github.com/fkie-cad/dewolf/files/9979638/expressionpropagationfunctioncall_error.zip)
### Affected Binary Ninja Version(s)
3.2.3814 | 1.0 | AttributeError: 'GlobalVariable' object has no attribute 'operands' in expressionpropagationfunctioncalls - ### What happened?
The decompiler crashes with an AttributeError during the dataflowanalysis in expressionpropagationfunctioncall.
```python
Traceback (most recent call last):
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompile.py", line 80, in <module>
main(Decompiler)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/util/commandline.py", line 65, in main
task = decompiler.decompile(function_name, options)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompile.py", line 55, in decompile
pipeline.run(task)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/pipeline.py", line 97, in run
instance.run(task)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/dataflowanalysis/expressionpropagationfunctioncall.py", line 22, in run
super().run(task)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/commons/expressionpropagationcommons.py", line 47, in run
while self.perform(task.graph, iteration):
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/dataflowanalysis/expressionpropagationfunctioncall.py", line 42, in perform
if self._definition_can_be_propagated_into_target(var_definition, instruction):
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/dataflowanalysis/expressionpropagationfunctioncall.py", line 86, in _definition_can_be_propagated_into_target
and self._is_call_value_used_exactly_once(definition)
File "/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/dataflowanalysis/expressionpropagationfunctioncall.py", line 63, in _is_call_value_used_exactly_once
if len(return_values := definition.destination.operands) == 1:
AttributeError: 'GlobalVariable' object has no attribute 'operands'
```
### How to reproduce?
Decompile the main function in one of the samples given below.
[expressionpropagationfunctioncall_error.zip](https://github.com/fkie-cad/dewolf/files/9979638/expressionpropagationfunctioncall_error.zip)
### Affected Binary Ninja Version(s)
3.2.3814 | priority | attributeerror globalvariable object has no attribute operands in expressionpropagationfunctioncalls what happened the decompiler crashes with an attributeerror during the dataflowanalysis in expressionpropagationfunctioncall python traceback most recent call last file home ubuntu binaryninja plugins dewolf decompile py line in main decompiler file home ubuntu binaryninja plugins dewolf decompiler util commandline py line in main task decompiler decompile function name options file home ubuntu binaryninja plugins dewolf decompile py line in decompile pipeline run task file home ubuntu binaryninja plugins dewolf decompiler pipeline pipeline py line in run instance run task file home ubuntu binaryninja plugins dewolf decompiler pipeline dataflowanalysis expressionpropagationfunctioncall py line in run super run task file home ubuntu binaryninja plugins dewolf decompiler pipeline commons expressionpropagationcommons py line in run while self perform task graph iteration file home ubuntu binaryninja plugins dewolf decompiler pipeline dataflowanalysis expressionpropagationfunctioncall py line in perform if self definition can be propagated into target var definition instruction file home ubuntu binaryninja plugins dewolf decompiler pipeline dataflowanalysis expressionpropagationfunctioncall py line in definition can be propagated into target and self is call value used exactly once definition file home ubuntu binaryninja plugins dewolf decompiler pipeline dataflowanalysis expressionpropagationfunctioncall py line in is call value used exactly once if len return values definition destination operands attributeerror globalvariable object has no attribute operands how to reproduce decompile the main function in one of the samples given below affected binary ninja version s | 1 |
246,577 | 7,895,402,688 | IssuesEvent | 2018-06-29 03:03:00 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Have ability to have curves interpreted as r-theta coordinates instead of x-y. | Expected Use: 3 - Occasional Feature Impact: 3 - Medium OS: All Priority: High Support Group: Any | Al Nichols stopped by and mentioned that he would like to have visit interpret his ultra files as r-theta data. He mentioned having a control that would specify that they are r-theta. I see a few ways to do this. Add something to file format, add a database option, or add a control to the curve plot. He mentions that now they convert them to x-y which is sometimes an issue, since in r-theta space the curve is monotonically increasing, but in x-y it is not and results in strange curves, since we enforce that.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 09/09/2013 12:16 pm
Original update: 09/12/2013 04:05 pm
Ticket number: 1596 | 1.0 | Have ability to have curves interpreted as r-theta coordinates instead of x-y. - Al Nichols stopped by and mentioned that he would like to have visit interpret his ultra files as r-theta data. He mentioned having a control that would specify that they are r-theta. I see a few ways to do this. Add something to file format, add a database option, or add a control to the curve plot. He mentions that now they convert them to x-y which is sometimes an issue, since in r-theta space the curve is monotonically increasing, but in x-y it is not and results in strange curves, since we enforce that.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 09/09/2013 12:16 pm
Original update: 09/12/2013 04:05 pm
Ticket number: 1596 | priority | have ability to have curves interpreted as r theta coordinates instead of x y al nichols stopped by and mentioned that he would like to have visit interpret his ultra files as r theta data he mentioned having a control that would specify that they are r theta i see a few ways to do this add something to file format add a database option or add a control to the curve plot he mentions that now they convert them to x y which is sometimes an issue since in r theta space the curve is monotonically increasing but in x y it is not and results in strange curves since we enforce that redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author eric brugger original creation pm original update pm ticket number | 1 |
83,263 | 3,632,744,154 | IssuesEvent | 2016-02-11 11:24:45 | Kunstmaan/KunstmaanBundlesCMS | https://api.github.com/repos/Kunstmaan/KunstmaanBundlesCMS | closed | Media bundle: pdf preview on => error | Priority: High Profile: Backend Target audience: Developers | When pdf preview is enabled, the container can not be built because FileHandler does not implement the MimeTypeGuesserFactoryInterface:
```
[Symfony\Component\Debug\Exception\ContextErrorException]
Catchable Fatal Error: Argument 2 passed to Kunstmaan\MediaBundle\Helper\File\FileHandler::__construct() must implement interface Kunstmaan\MediaBundle\Helper\MimeTypeGuesserFactoryInterface, none given, called in /app/cache/dev/appDevDebugProjectContainer.php on line 3728 and defined
```
A test should also be added. | 1.0 | Media bundle: pdf preview on => error - When pdf preview is enabled, the container can not be built because FileHandler does not implement the MimeTypeGuesserFactoryInterface:
```
[Symfony\Component\Debug\Exception\ContextErrorException]
Catchable Fatal Error: Argument 2 passed to Kunstmaan\MediaBundle\Helper\File\FileHandler::__construct() must implement interface Kunstmaan\MediaBundle\Helper\MimeTypeGuesserFactoryInterface, none given, called in /app/cache/dev/appDevDebugProjectContainer.php on line 3728 and defined
```
A test should also be added. | priority | media bundle pdf preview on error when pdf preview is enabled the container can not be built because filehandler does not implement the mimetypeguesserfactoryinterface catchable fatal error argument passed to kunstmaan mediabundle helper file filehandler construct must implement interface kunstmaan mediabundle helper mimetypeguesserfactoryinterface none given called in app cache dev appdevdebugprojectcontainer php on line and defined a test should also be added | 1 |
497,598 | 14,381,657,238 | IssuesEvent | 2020-12-02 05:55:36 | Automattic/abacus | https://api.github.com/repos/Automattic/abacus | opened | Clicking on some metrics causes an error | [!priority] high [component] experimenter interface [type] bug | The error is: "Can't read property 'bottom' of undefined"
See p1606825441001400-slack-G01BQ5WMR8Q | 1.0 | Clicking on some metrics causes an error - The error is: "Can't read property 'bottom' of undefined"
See p1606825441001400-slack-G01BQ5WMR8Q | priority | clicking on some metrics causes an error the error is can t read property bottom of undefined see slack | 1 |
370,613 | 10,934,713,368 | IssuesEvent | 2019-11-24 13:40:36 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | Eco broken on Catalina | High Priority | Eco doesn't work on MacOS Catalina. Users report they got the warning in prior versions that MacOS will no longer support 32-Bit Software, with Catalina Eco ceased to start. | 1.0 | Eco broken on Catalina - Eco doesn't work on MacOS Catalina. Users report they got the warning in prior versions that MacOS will no longer support 32-Bit Software, with Catalina Eco ceased to start. | priority | eco broken on catalina eco doesn t work on macos catalina users report they got the warning in prior versions that macos will no longer support bit software with catalina eco ceased to start | 1 |
772,373 | 27,118,967,556 | IssuesEvent | 2023-02-15 21:01:52 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | [ geography request ]: Add the Quba-Khachmaz Region in Azerbaijan | Priority-High (Needed for work) Function-Locality/Event/Georeferencing Help wanted | # Carefully read the documentation at https://handbook.arctosdb.org/documentation/higher-geography.html before submitting a request.
### Explain what geography needs created.
Request to add the Quba District of Azerbaijan
Location information: https://en.wikipedia.org/wiki/Quba_District_(Azerbaijan)
https://gadm.org/maps/AZE/quba-khachmaz/quba.html
or it may make more sense to create the Quba-Khachmaz Region in Azerbaijan, since Quba is a District within that region
https://gadm.org/maps/AZE/quba-khachmaz.html
Either would be very helpful. I have a record that comes from Quba, Quba, Azerbaijan that I am trying to bulkload.
| 1.0 | [ geography request ]: Add the Quba-Khachmaz Region in Azerbaijan - # Carefully read the documentation at https://handbook.arctosdb.org/documentation/higher-geography.html before submitting a request.
### Explain what geography needs created.
Request to add the Quba District of Azerbaijan
Location information: https://en.wikipedia.org/wiki/Quba_District_(Azerbaijan)
https://gadm.org/maps/AZE/quba-khachmaz/quba.html
or it may make more sense to create the Quba-Khachmaz Region in Azerbaijan, since Quba is a District within that region
https://gadm.org/maps/AZE/quba-khachmaz.html
Either would be very helpful. I have a record that comes from Quba, Quba, Azerbaijan that I am trying to bulkload.
| priority | add the quba khachmaz region in azerbaijan carefully read the documentation at before submitting a request explain what geography needs created request to add the quba district of azerbaijan location information or it may make more sense to create the quba khachmaz region in azerbaijan since quba is a district within that region either would be very helpful i have a record that comes from quba quba azerbaijan that i am trying to bulkload | 1 |
524,995 | 15,227,153,597 | IssuesEvent | 2021-02-18 09:50:14 | VSemenovykh/AutoAdsApp | https://api.github.com/repos/VSemenovykh/AutoAdsApp | closed | Create data model UML diagram | high priority | Create a data model UML diagram for the project.
Show all data model classes and dependencies between them. | 1.0 | Create data model UML diagram - Create a data model UML diagram for the project.
Show all data model classes and dependencies between them. | priority | create data model uml diagram create a data model uml diagram for the project show all data model classes and dependencies between them | 1 |
565,100 | 16,748,959,289 | IssuesEvent | 2021-06-11 19:36:51 | tristantheb/translated-content | https://api.github.com/repos/tristantheb/translated-content | closed | UPDATE: Update HTML section - Part 3 | priority:high ❗ type:html update 🗓 | ### 🎫 Related issues / pull request
mdn/translated-content#706
### 👀 Observed issues
Pages in the Elements section are out of date, contain deprecated elements or are missing. This area is one of the most visited areas on the HTML side along with the Attributes (next work for HTML).
### 📂 Origin page/folder
https://developer.mozilla.org/fr/docs/Web/HTML/Attributes
### 🔁 Changes in commits
Addition of the page updates in alphabetical order. | 1.0 | UPDATE: Update HTML section - Part 3 - ### 🎫 Related issues / pull request
mdn/translated-content#706
### 👀 Observed issues
Pages in the Elements section are out of date, contain deprecated elements or are missing. This area is one of the most visited areas on the HTML side along with the Attributes (next work for HTML).
### 📂 Origin page/folder
https://developer.mozilla.org/fr/docs/Web/HTML/Attributes
### 🔁 Changes in commits
Addition by aplabetic order of the page update. | priority | update update html section part 🎫 related issues pull request mdn translated content 👀 observed issues pages in the elements section are out of date contain deprecated elements or are missing this area is one of the most visited areas on the html side along with the attributes next work for html 📂 origin page folder 🔁 changes in commits addition by aplabetic order of the page update | 1 |
107,938 | 4,322,351,663 | IssuesEvent | 2016-07-25 13:50:25 | Financial-Times/origami-image-service | https://api.github.com/repos/Financial-Times/origami-image-service | closed | Add in support for custom schemes | priority: high type: enhancement | We need to support the custom schemes that version 1 supports: `ftcms`, `fticon`, `fthead`, `ftsocial`, `ftpodcast`, `ftlogo`.
These map to various Git repositories, and there are a couple of options for doing this:
1. On deploy, all of the required repositories are installed on the server and custom schemes map to the created folder of images. This is similar to way v1 of the Image Service works. This method is complicated by the fact that we need to apply transforms through proxying to a third-party – we'd need to publicly expose the raw images and then hit the image service twice to load them:
```
> https://image-service/v2/images/raw/fticon:cross?format=png
[ expands to ]
> https://image-service/v2/images/raw/https%3A%2F%2Fimage-service%2Fv2%2Fimages%2Fsource%2Ffticon%2Fcross.svg?format=png
[ proxies to ]
> https://third-party/image/https%3A%2F%2Fimage-service%2Fv2%2Fimages%2Fsource%2Ffticon%2Fcross.svg?format=png
[ third-party hits the image service again to request the icon ]
```
2. On request, the URL is mapped to a _different_ publicly hosted version of the images. This is very similar to option 1 except that the image service itself wouldn't be responsible for serving the images as well as proxying them:
```
> https://image-service/v2/images/raw/fticon:cross?format=png
[ expands to ]
> https://image-service/v2/images/raw/https%3A%2F%2Fother-service%2Ffticon%2Fcross.svg?format=png
[ proxies to ]
> https://third-party/image/https%3A%2F%2Fother-service%2Ffticon%2Fcross.svg?format=png
[ third-party hits the other service to request the icon ]
```
3. On request, the URL is mapped to an image that has been uploaded to the third party already. This would require us to build an image uploader as [outlined in the original proposal](https://docs.google.com/drawings/d/1fC07fJ_2Flwd5Qugyltb_5mU2ALqtssaPqMdy_5-xN4). This ties us further into a specific third party and could be complicated to build, but it offers the simplest/most efficient request process:
```
> https://image-service/v2/images/raw/fticon:cross?format=png
[ proxies to ]
> https://third-party/uploaded/fticon/cross?format=png
```
As well as reimplementing what we already have in Image Service v1, we also need to allow versioning of image sets. Currently we have no ability to remove images as it would result in dependent sites breaking. We'd like to support a version in the custom scheme like this:
```
fticon-v4:cross
fticon-v5:cross
```
This complicates all of the above options, probably equally. | 1.0 | Add in support for custom schemes - We need to support the custom schemes that version 1 supports: `ftcms`, `fticon`, `fthead`, `ftsocial`, `ftpodcast`, `ftlogo`.
These map to various Git repositories, and there are a couple of options for doing this:
1. On deploy, all of the required repositories are installed on the server and custom schemes map to the created folder of images. This is similar to way v1 of the Image Service works. This method is complicated by the fact that we need to apply transforms through proxying to a third-party – we'd need to publicly expose the raw images and then hit the image service twice to load them:
```
> https://image-service/v2/images/raw/fticon:cross?format=png
[ expands to ]
> https://image-service/v2/images/raw/https%3A%2F%2Fimage-service%2Fv2%2Fimages%2Fsource%2Ffticon%2Fcross.svg?format=png
[ proxies to ]
> https://third-party/image/https%3A%2F%2Fimage-service%2Fv2%2Fimages%2Fsource%2Ffticon%2Fcross.svg?format=png
[ third-party hits the image service again to request the icon ]
```
2. On request, the URL is mapped to a _different_ publicly hosted version of the images. This is very similar to option 1 except that the image service itself wouldn't be responsible for serving the images as well as proxying them:
```
> https://image-service/v2/images/raw/fticon:cross?format=png
[ expands to ]
> https://image-service/v2/images/raw/https%3A%2F%2Fother-service%2Ffticon%2Fcross.svg?format=png
[ proxies to ]
> https://third-party/image/https%3A%2F%2Fother-service%2Ffticon%2Fcross.svg?format=png
[ third-party hits the other service to request the icon ]
```
3. On request, the URL is mapped to an image that has been uploaded to the third party already. This would require us to build an image uploader as [outlined in the original proposal](https://docs.google.com/drawings/d/1fC07fJ_2Flwd5Qugyltb_5mU2ALqtssaPqMdy_5-xN4). This ties us further into a specific third party and could be complicated to build, but it offers the simplest/most efficient request process:
```
> https://image-service/v2/images/raw/fticon:cross?format=png
[ proxies to ]
> https://third-party/uploaded/fticon/cross?format=png
```
As well as reimplementing what we already have in Image Service v1, we also need to allow versioning of image sets. Currently we have no ability to remove images as it would result in dependent sites breaking. We'd like to support a version in the custom scheme like this:
```
fticon-v4:cross
fticon-v5:cross
```
This complicates all of the above options, probably equally. | priority | add in support for custom schemes we need to support the custom schemes that version supports ftcms fticon fthead ftsocial ftpodcast ftlogo these map to various git repositories and there are a couple of options for doing this on deploy all of the required repositories are installed on the server and custom schemes map to the created folder of images this is similar to way of the image service works this method is complicated by the fact that we need to apply transforms through proxying to a third party – we d need to publicly expose the raw images and then hit the image service twice to load them on request the url is mapped to a different publicly hosted version of the images this is very similar to option except that the image service itself wouldn t be responsible for serving the images as well as proxying them on request the url is mapped to an image that has been uploaded to the third party already this would require us to build an image uploader as this ties us further into a specific third party and could be complicated to build but it offers the simplest most efficient request process as well as reimplementing what we already have in image service we also need to allow versioning of image sets currently we have no ability to remove images as it would result in dependent sites breaking we d like to support a version in the custom scheme like this fticon cross fticon cross this complicates all of the above options probably equally | 1 |
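The URL expansions in option 1 above hinge on percent-encoding the nested source URL so it survives as a single path segment. A small Python sketch of that expansion (the scheme-to-source mapping is illustrative; the host names just follow the examples in the record):

```python
from urllib.parse import quote

IMAGE_SERVICE = "https://image-service/v2/images"
# Hypothetical mapping from custom scheme to the public location of its images.
SCHEME_SOURCES = {"fticon": f"{IMAGE_SERVICE}/source/fticon"}

def expand_custom_scheme(uri: str, fmt: str = "png") -> str:
    """Expand e.g. 'fticon:cross' into the raw-proxy URL shape from option 1."""
    scheme, _, name = uri.partition(":")
    # Versioned schemes like 'fticon-v4' would be split off here; the plain
    # scheme is treated as the unversioned default in this sketch.
    source_url = f"{SCHEME_SOURCES[scheme]}/{name}.svg"
    # safe='' makes quote() encode '/' and ':' too, so the nested URL
    # becomes one opaque path segment.
    return f"{IMAGE_SERVICE}/raw/{quote(source_url, safe='')}?format={fmt}"
```

For `fticon:cross` this reproduces exactly the double-encoded URL shown in the option 1 example.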
591,405 | 17,839,364,060 | IssuesEvent | 2021-09-03 08:02:35 | cyntaria/UniPal-Backend | https://api.github.com/repos/cyntaria/UniPal-Backend | reopened | As a student, I should be able to get details of a hobby, so that I can understand what information it represents | Status: In Progress Status: Review Needed Priority: High user story Type: Feature | ### Summary
As a `student`, I should be able to **get details of a hobby**, so that I can **understand what information it represents**.
### Acceptance Criteria
**GIVEN** an `student` is *requesting details of a hobby* in the app
**WHEN** the app hits the `hobbies/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`, the unique id of the entity for which the details are needed.
**THEN** the app should receive a status `200`
**AND** in the response, the following information should be returned:
- headers
- hobby details
Sample Request/Sample Response
```
headers: {
error: 0,
message: "..."
}
body: {
hobby_id: 2,
hobby: "painting"
}
```
### Resources
- Development URL: {Here goes a URL to the feature on development API}
- Production URL: {Here goes a URL to the feature on production API}
### Dev Notes
This endpoint is also accessible by the admin, and it serves the admin in the same way.
### Testing Notes
##### Scenario 1: GET request is successful
**GIVEN** a `student` is *requesting details of a hobby* in the app
**WHEN** the app hits the `hobbies/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`
**THEN** the app should receive a status `200`
**AND** the `{id}` in the body should be same as the `:id` in the path parameter
##### Scenario 2: GET request is unsuccessful
**GIVEN** a `student` is *requesting details of a hobby* in the app
**WHEN** the app hits the `hobbies/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`, **a non-existent id**
**THEN** the app should receive a status `404`
**AND** the response headers' `code` parameter should contain "**_NotFoundException_**"
#### Scenario 3: GET request is forbidden
**GIVEN** a `student` is *requesting all possible hobbies* in the app
**WHEN** the app hits the `/hobbies` endpoint with a valid GET request
**AND** the request contains no **authorization token**
**THEN** the app should receive a status `401`
**AND** the response headers' `code` parameter should contain "**_TokenMissingException_**" | 1.0 | As a student, I should be able to get details of a hobby, so that I can understand what information it represents - ### Summary
As a `student`, I should be able to **get details of a hobby**, so that I can **understand what information it represents**.
### Acceptance Criteria
**GIVEN** an `student` is *requesting details of a hobby* in the app
**WHEN** the app hits the `hobbies/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`, the unique id of the entity for which the details are needed.
**THEN** the app should receive a status `200`
**AND** in the response, the following information should be returned:
- headers
- hobby details
Sample Request/Sample Response
```
headers: {
error: 0,
message: "..."
}
body: {
hobby_id: 2,
hobby: "painting"
}
```
### Resources
- Development URL: {Here goes a URL to the feature on development API}
- Production URL: {Here goes a URL to the feature on production API}
### Dev Notes
This endpoint is also accessible by the admin, and it serves the admin in the same way.
### Testing Notes
##### Scenario 1: GET request is successful
**GIVEN** a `student` is *requesting details of a hobby* in the app
**WHEN** the app hits the `hobbies/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`
**THEN** the app should receive a status `200`
**AND** the `{id}` in the body should be same as the `:id` in the path parameter
##### Scenario 2: GET request is unsuccessful
**GIVEN** a `student` is *requesting details of a hobby* in the app
**WHEN** the app hits the `hobbies/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`, **a non-existent id**
**THEN** the app should receive a status `404`
**AND** the response headers' `code` parameter should contain "**_NotFoundException_**"
#### Scenario 3: GET request is forbidden
**GIVEN** a `student` is *requesting all possible hobbies* in the app
**WHEN** the app hits the `/hobbies` endpoint with a valid GET request
**AND** the request contains no **authorization token**
**THEN** the app should receive a status `401`
**AND** the response headers' `code` parameter should contain "**_TokenMissingException_**" | priority | as a student i should be able to get details of a hobby so that i can understand what information it represents summary as a student i should be able to get details of a hobby so that i can understand what information it represents acceptance criteria given an student is requesting details of a hobby in the app when the app hits the hobbies id endpoint with a valid get request containing the path parameter id the unique id of the entity for which the details are needed then the app should receive a status and in the response the following information should be returned headers hobby details sample request sample response headers error message body hobby id hobby painting resources development url here goes a url to the feature on development api production url here goes a url to the feature on production api dev notes this endpoint is accessible by and serves the admin in the same way testing notes scenario get request is successful given a student is requesting details of a hobby in the app when the app hits the hobbies id endpoint with a valid get request containing the path parameter id then the app should receive a status and the id in the body should be same as the id in the path parameter scenario get request is unsuccessful given a student is requesting details of a hobby in the app when the app hits the hobbies id endpoint with a valid get request containing the path parameter id a non existent id then the app should receive a status and the response headers code parameter should contain notfoundexception scenario get request is forbidden given a student is requesting all possible hobbies in the app when the app hits the hobbies endpoint with a valid get request and the request contains no authorization token then the app should receive a status and the response headers code parameter should contain tokenmissingexception | 1 |
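The three test scenarios above (200 with the hobby, 404 with NotFoundException, 401 with TokenMissingException) can be captured in a tiny framework-free sketch. The in-memory store and function shape are illustrative, not the actual backend code:

```python
from typing import Optional, Tuple

HOBBIES = {2: "painting"}  # toy in-memory store

def get_hobby(hobby_id: int, token: Optional[str]) -> Tuple[int, dict]:
    """Mirror the acceptance criteria: 401 without a token, 404 for an
    unknown id, otherwise 200 with the headers/body shape shown above."""
    if token is None:
        return 401, {"headers": {"error": 1, "code": "TokenMissingException"}}
    hobby = HOBBIES.get(hobby_id)
    if hobby is None:
        return 404, {"headers": {"error": 1, "code": "NotFoundException"}}
    return 200, {
        "headers": {"error": 0, "message": "Hobby found"},
        "body": {"hobby_id": hobby_id, "hobby": hobby},
    }
```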
7,171 | 2,598,069,737 | IssuesEvent | 2015-02-22 03:41:23 | OpenConceptLab/oclapi | https://api.github.com/repos/OpenConceptLab/oclapi | closed | Move mappings to live under sources instead of concepts | enhancement high-priority | Change mappings to live under sources, similar to concepts - meaning, they will have both a top-level resource for searching only (`/mappings`) like they currently have, and a source-level resource (e.g. `/orgs/WHO/sources/ICD-10/mappings/`). Mappings must have UUIDs (which we already have) and should be accessible directly through: `GET /.../sources/[source]/mappings/[mapping_id]`. Mappings should be saved with source versions. Note that neither the "from_concept" nor the "to_concept" needs to be in the same source as the mapping.
FYI mappings will support 2 types of "to_concepts": (1) concept stored in OCL (to_concept_url provided), (2) source defined in OCL but concept not (to_source_url and to_concept_code provided). OCL will not support the case in which neither the to_source nor to_concept are defined in OCL (to_concept_code and to_source_code provided) -- Note that this is the current behavior, just reiterating.
"Direct mappings" must continue to be returned in the detailed version of a concept. "Inverse mappings" should be returned as well if the "includeInverseMappings" parameter is set to true (as described in the wiki on the mappings page`).
Estimated at 4 hours of work. | 1.0 | Move mappings to live under sources instead of concepts - Change mappings to live under sources, similar to concepts - meaning, they will have both a top-level resource for searching only (`/mappings`) like they currently have, and a source-level resource (e.g. `/orgs/WHO/sources/ICD-10/mappings/`). Mappings must have UUIDs (which we already have) and should be accessible directly through: `GET /.../sources/[source]/mappings/[mapping_id]`. Mappings should be saved with source versions. Note that neither the "from_concept" nor the "to_concept" needs to be in the same source as the mapping.
FYI mappings will support 2 types of "to_concepts": (1) concept stored in OCL (to_concept_url provided), (2) source defined in OCL but concept not (to_source_url and to_concept_code provided). OCL will not support the case in which neither the to_source nor to_concept are defined in OCL (to_concept_code and to_source_code provided) -- Note that this is the current behavior, just reiterating.
"Direct mappings" must continue to be returned in the detailed version of a concept. "Inverse mappings" should be returned as well if the "includeInverseMappings" parameter is set to true (as described in the wiki on the mappings page`).
Estimated at 4 hours of work. | priority | move mappings to live under sources instead of concepts change mappings to live under sources similar to concepts meaning they will have both a top level resource for searching only mappings like they currently have and a source level resource e g orgs who sources icd mappings mappings must have uuids which we already have and should be accessible directly through get sources mappings mappings should be saved with source versions note that neither the from concept or the to concept needs to be in the same source as the mapping fyi mappings will support types of to concepts concept stored in ocl to concept url provided source defined in ocl but concept not to source url and to concept code provided ocl will not support the case in which neither the to source nor to concept are defined in ocl to concept code and to source code provided note that this is the current behavior just reiterating direct mappings must continue to be returned in the detailed version of a concept inverse mappings should be returned as well if the includeinversemappings parameter is set to true as described in the wiki on the mappings page estimated at hours of work | 1 |
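The two supported target cases in the record above translate directly into a validation rule: a mapping must carry either a `to_concept_url` (case 1) or a `to_source_url` plus `to_concept_code` (case 2). A hedged Python sketch of that check (the dict fields follow the issue text; this is not the actual OCL API code):

```python
def classify_to_concept(mapping: dict) -> str:
    """Return which supported case a mapping's target falls under,
    or raise for the unsupported 'neither defined in OCL' case."""
    if mapping.get("to_concept_url"):
        return "concept-in-ocl"            # case 1: concept stored in OCL
    if mapping.get("to_source_url") and mapping.get("to_concept_code"):
        return "source-in-ocl"             # case 2: source in OCL, concept not
    raise ValueError("unsupported: neither to_source nor to_concept defined in OCL")
```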
197,806 | 6,964,188,371 | IssuesEvent | 2017-12-08 20:32:43 | hackcambridge/hack-cambridge-website | https://api.github.com/repos/hackcambridge/hack-cambridge-website | closed | It is difficult to determine what Hack Cambridge is from /apply | Epic: Applications Priority: High | This year, many ads for Hack Cambridge are linking straight to `/apply`, to reduce friction in the process.
However, if you land on it without knowing what Hack Cambridge is, or even what a hackathon is, the page does not provide much information:

Maybe we should add some of the content on the home page, or at least a more compelling reason to visit the home page?
A useful follow-up to this may be to look at the analytics and see if this has affected application rates.
| 1.0 | It is difficult to determine what Hack Cambridge is from /apply - This year, many ads for Hack Cambridge are linking straight to `/apply`, to reduce friction in the process.
However, if you land on it without knowing what Hack Cambridge is, or even what a hackathon is, the page does not provide much information:

Maybe we should add some of the content on the home page, or at least a more compelling reason to visit the home page?
A useful follow-up to this may be to look at the analytics and see if this has affected application rates.
| priority | it is difficult to determine what hack cambridge is from apply this year many ads for hack cambridge are linking straight to apply to reduce friction in the process however landing on it if you don t know what hack cambridge or even what a hackathon is it does not provide much information maybe we should add some of the content on the home page or at least a more compelling reason to visit the home page a useful follow up to this may be to look at the analytics and see if this has affected application rates | 1 |
230,593 | 7,611,956,068 | IssuesEvent | 2018-05-01 15:47:01 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | SSO: Since upgrade of RADIUS lib, timeouts aren't occurring anymore | Priority: High Type: Bug | Since we upgraded layeh.com/radius, it uses the context to cancel out the request (i.e. timing out) so we need to set a deadline to the context when performing RADIUS SSO, otherwise connections stay open indefinitely until a response comes back which may never happen | 1.0 | SSO: Since upgrade of RADIUS lib, timeouts aren't occurring anymore - Since we upgraded layeh.com/radius, it uses the context to cancel out the request (i.e. timing out) so we need to set a deadline to the context when performing RADIUS SSO, otherwise connections stay open indefinitely until a response comes back which may never happen | priority | sso since upgrade of radius lib timeouts aren t occurring anymore since we upgraded layeh com radius it uses the context to cancel out the request i e timing out so we need to set a deadline to the context when performing radius sso otherwise connections stay open indefinitely until a response comes back which may never happen | 1 |
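The fix described in the record above is to attach a deadline so an unanswered RADIUS SSO request is abandoned instead of holding its connection open indefinitely. A language-neutral illustration in Python using a socket timeout (addresses and payload are placeholders; the real change sets a deadline on the Go request context):

```python
import socket

def radius_sso_with_deadline(payload: bytes,
                             addr=("127.0.0.1", 1813),
                             timeout_s: float = 0.2):
    """Send a datagram and wait at most timeout_s for a reply.
    Returns the reply bytes, or None when no response arrives in time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)  # the deadline; without it recvfrom() can block forever
    try:
        sock.sendto(payload, addr)
        data, _ = sock.recvfrom(4096)
        return data
    except OSError:
        # socket.timeout is an OSError subclass: give up and free the slot
        # rather than waiting for a response that may never come.
        return None
    finally:
        sock.close()
```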
741,852 | 25,824,662,847 | IssuesEvent | 2022-12-12 12:05:44 | dmwm/CRABServer | https://api.github.com/repos/dmwm/CRABServer | closed | list CMSSW releases and archs that we need to support | Status: Done Priority: High Area: CI/CD | Looking at what CMSSW releases users are running in CRAB in the last month [1]
I think we can make a list of all releases to test like below, picking for each major release
only the latest one; changes inside each release series are (mostly) only CMSSW internals
and will not affect the dependencies we rely on.
**NOTE** list to be reviewed/refined in light of https://cmssdt.cern.ch/SDT/cgi-bin/ReleasesXML and possibly input from CMSSW people
release|arch | inputDataset
---------|-------|------------------
CMSSW_12_0_1| slc7_amd64_gcc900 | /GenericTTbar/HC-CMSSW_9_2_6_91X_mcRun1_realistic_v2-v2/AODSIM
CMSSW_11_3_4| slc7_amd64_gcc900 | /GenericTTbar/HC-CMSSW_9_2_6_91X_mcRun1_realistic_v2-v2/AODSIM
CMSSW_10_6_26 | slc7_amd64_gcc700 | /GenericTTbar/HC-CMSSW_9_2_6_91X_mcRun1_realistic_v2-v2/AODSIM
CMSSW_9_4_21 | slc7_amd64_gcc630 | /GenericTTbar/HC-CMSSW_9_2_6_91X_mcRun1_realistic_v2-v2/AODSIM
CMSSW_8_0_36 | slc7_amd64_gcc530 | /GenericTTbar/HC-CMSSW_7_0_4_START70_V7-v1/GEN-SIM-RECO
CMSSW_7_6_7 | slc6_amd64_gcc493 | /GenericTTbar/HC-CMSSW_5_3_1_START53_V5-v1/GEN-SIM-RECO
CMSSW_7_1_29 | slc6_amd64_gcc481 | /GenericTTbar/HC-CMSSW_5_3_1_START53_V5-v1/GEN-SIM-RECO
The top of the series for CMSSW_11 may evolve, other series should hopefully be stable now.
Of course running SL6 releases is not trivial and will require an ad hoc container where to run CRAB Client.
[1]
https://monit-grafana.cern.ch/goto/dE0MGpR7z

| 1.0 | list CMSSW releases and archs that we need to support - Looking at what CMSSW releases users are running in CRAB in the last month [1]
I think we can make a list of all releases to test like below, picking for each major release
only the latest one; changes inside each release series are (mostly) only CMSSW internals
and will not affect the dependencies we rely on.
**NOTE** list to be reviewed/refined in light of https://cmssdt.cern.ch/SDT/cgi-bin/ReleasesXML and possibly input from CMSSW people
release|arch | inputDataset
---------|-------|------------------
CMSSW_12_0_1| slc7_amd64_gcc900 | /GenericTTbar/HC-CMSSW_9_2_6_91X_mcRun1_realistic_v2-v2/AODSIM
CMSSW_11_3_4| slc7_amd64_gcc900 | /GenericTTbar/HC-CMSSW_9_2_6_91X_mcRun1_realistic_v2-v2/AODSIM
CMSSW_10_6_26 | slc7_amd64_gcc700 | /GenericTTbar/HC-CMSSW_9_2_6_91X_mcRun1_realistic_v2-v2/AODSIM
CMSSW_9_4_21 | slc7_amd64_gcc630 | /GenericTTbar/HC-CMSSW_9_2_6_91X_mcRun1_realistic_v2-v2/AODSIM
CMSSW_8_0_36 | slc7_amd64_gcc530 | /GenericTTbar/HC-CMSSW_7_0_4_START70_V7-v1/GEN-SIM-RECO
CMSSW_7_6_7 | slc6_amd64_gcc493 | /GenericTTbar/HC-CMSSW_5_3_1_START53_V5-v1/GEN-SIM-RECO
CMSSW_7_1_29 | slc6_amd64_gcc481 | /GenericTTbar/HC-CMSSW_5_3_1_START53_V5-v1/GEN-SIM-RECO
The top of the series for CMSSW_11 may evolve, other series should hopefully be stable now.
Of course running SL6 releases is not trivial and will require an ad hoc container where to run CRAB Client.
[1]
https://monit-grafana.cern.ch/goto/dE0MGpR7z

| priority | list cmssw releases and archs that we need to support looking at what cmssw releases users are running in crab in last mont i think we can make a list of all releases to test like below picking for each major release only the latest one changes inside each release series are mostly only cmssw internals and will not affect dependencies which we rely on note list to be reviewed refined in light of and possibly input from cmssw people release arch inputdataset cmssw genericttbar hc cmssw realistic aodsim cmssw genericttbar hc cmssw realistic aodsim cmssw genericttbar hc cmssw realistic aodsim cmssw genericttbar hc cmssw realistic aodsim cmssw genericttbar hc cmssw gen sim reco cmssw genericttbar hc cmssw gen sim reco cmssw genericttbar hc cmssw gen sim reco the top of the series for cmssw may evolve other series should hopefully be stable now of course running releases is not trivial and will require an ad hoc container where to run crab client | 1 |
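The selection rule in the record above (for each major series, keep only the latest release) is easy to automate. A small sketch that picks the highest release per series from a list of version strings; the input list is whatever a site pulls from ReleasesXML:

```python
import re

def latest_per_series(releases):
    """Map major series number -> highest CMSSW_X_Y_Z release seen."""
    best = {}
    for rel in releases:
        m = re.fullmatch(r"CMSSW_(\d+)_(\d+)_(\d+)", rel)
        if not m:
            continue  # skip pre-releases / patch builds in this sketch
        major, minor, patch = map(int, m.groups())
        if major not in best or (minor, patch) > best[major][0]:
            best[major] = ((minor, patch), rel)
    return {major: rel for major, (_, rel) in best.items()}
```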
263,872 | 8,302,805,637 | IssuesEvent | 2018-09-21 15:31:38 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | TODO: Create a Stand-In for Director's IK Trajectory Planner | priority: high team: manipulation type: feature request | # Problem Definition
Few of us are able to compile, let alone _run_, the custom version of Director that we're using to generate the joint trajectories for the IIWA robot. Thus, to facilitate development, it would be helpful to have an application that can be a lightweight stand-in for it.
# Action Items
Create a simple application that waits for a key stroke. When the user hits a key, the application should:
1. Create an IK trajectory by calling [GenerateIKDemoTrajectory()](https://github.com/naveenoid/drake/blob/dev/kukaIKControlDemo/drake/examples/kuka_iiwa_arm/iiwa_simulation.h#L83) to obtain a sample IK trajectory.
2. Save the trajectory in a [robot_plan_t](https://github.com/sammy-tri/drake/blob/director_ik/drake/lcmtypes/lcmt_robot_plan_t.lcm) LCM message. Specifically, in the "plan" member variable within that LCM message, which is an array of [bot_core.robot_state_t](https://github.com/mwoehlke-kitware/bot_core_lcmtypes/blob/0434090f6f54228c35514f7c9216e3bf1e003ea3/lcmtypes/bot_core_robot_state_t.lcm) objects.
3. Publish the message on LCM channel "COMMITTED_ROBOT_PLAN".
# Notes
## Dependencies on bot_core_lcmtypes
LCM message `drake-distro/drake/lcmtypes/lcmt_robot_plan_t.lcm` relies on `bot_core.robot_state_t`, which is defined in `drake-distro/externals/bot_core_lcmtypes/lcmtypes/bot_core_robot_state_t.lcm`. This requires fiddling around with various CMakeLists.txt to allow it to find the generated `bot_core_robot_state_t.h` header file.
| 1.0 | TODO: Create a Stand-In for Director's IK Trajectory Planner - # Problem Definition
Few of us are able to compile, let alone _run_, the custom version of Director that we're using to generate the joint trajectories for the IIWA robot. Thus, to facilitate development, it would be helpful to have an application that can be a lightweight stand-in for it.
# Action Items
Create a simple application that waits for a key stroke. When the user hits a key, the application should:
1. Create an IK trajectory by calling [GenerateIKDemoTrajectory()](https://github.com/naveenoid/drake/blob/dev/kukaIKControlDemo/drake/examples/kuka_iiwa_arm/iiwa_simulation.h#L83) to obtain a sample IK trajectory.
2. Save the trajectory in a [robot_plan_t](https://github.com/sammy-tri/drake/blob/director_ik/drake/lcmtypes/lcmt_robot_plan_t.lcm) LCM message. Specifically, in the "plan" member variable within that LCM message, which is an array of [bot_core.robot_state_t](https://github.com/mwoehlke-kitware/bot_core_lcmtypes/blob/0434090f6f54228c35514f7c9216e3bf1e003ea3/lcmtypes/bot_core_robot_state_t.lcm) objects.
3. Publish the message on LCM channel "COMMITTED_ROBOT_PLAN".
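The three steps above are language-agnostic; as a rough illustration, here is a Python sketch of the stand-in's core flow. Everything here is stubbed: `make_demo_plan` and `publish_plan` are hypothetical names standing in for `GenerateIKDemoTrajectory()` and an `lcm.LCM().publish()` call with an encoded `lcmt_robot_plan_t`, not Drake or LCM APIs.

```python
# Hypothetical stand-in tool: build a canned plan and publish it on
# COMMITTED_ROBOT_PLAN. The publisher is injected as a plain callable so this
# runs without the lcm package or generated message bindings.

def make_demo_plan(num_knots=5):
    # Stand-in for GenerateIKDemoTrajectory(): fake robot_state_t-like entries.
    return [{"utime": i * 100_000, "joint_position": [0.0] * 7}
            for i in range(num_knots)]

def publish_plan(publish_fn, plan, channel="COMMITTED_ROBOT_PLAN"):
    # Wrap the plan in a robot_plan_t-shaped dict and hand it to the publisher.
    msg = {"num_states": len(plan), "plan": plan}
    publish_fn(channel, msg)
    return msg

if __name__ == "__main__":
    sent = []
    # A real tool would block on input() per key stroke before each publish.
    publish_plan(lambda ch, m: sent.append((ch, m)), make_demo_plan())
    print(sent[0][0], sent[0][1]["num_states"])  # COMMITTED_ROBOT_PLAN 5
```

In the real application the lambda would be replaced by the LCM handle's publish method and the dicts by the generated message classes.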
# Notes
## Dependencies on bot_core_lcmtypes
LCM message `drake-distro/drake/lcmtypes/lcmt_robot_plan_t.lcm` relies on `bot_core.robot_state_t`, which is defined in `drake-distr/externals/bot_core_lcmtypes/lcmtypes/bot_core_robot_state_t.lcm`. This requires fiddling around with various CMakeLists.txt to allow it to find the generated `bot_core_robot_state_t.h` header file.
| priority | todo create a stand in for director s ik trajectory planner problem definition few of us are able to compile not to mention run the custom version of director that we re using to generate the joint trajectories for the iiwa robot thus to facilitate development it would be helpful to have an application that can be a lightweight stand in for it action items create a simple application that waits for a key stroke when the user hits a key the application should create an ik trajectory by calling to obtain a sample ik trajectory save the trajectory in a lcm message specifically in the plan member variable within that lcm message which is an array of objects publish the message on lcm channel committed robot plan notes dependencies on bot core lcmtypes lcm message drake distro drake lcmtypes lcmt robot plan t lcm relies on bot core robot state t which is defined in drake distr externals bot core lcmtypes lcmtypes bot core robot state t lcm this requires fiddling around with various cmakelists txt to allow it to find the generated bot core robot state t h header file | 1 |
64,750 | 3,218,322,250 | IssuesEvent | 2015-10-08 00:18:36 | angular/material | https://api.github.com/repos/angular/material | closed | mdDialog renders `<p>` as text | priority: high reported by Googler | When the mdDialog code goes to wrap the content in a `<p>` element, it is instead wrapping the *text* content with `<p>` ... `</p>`
Reproduction: http://codepen.io/anon/pen/gaMeLj | 1.0 | mdDialog renders `<p>` as text - When the mdDialog code goes to wrap the content in a `<p>` element, it is instead wrapping the *text* content with `<p>` ... `</p>`
Reproduction: http://codepen.io/anon/pen/gaMeLj | priority | mddialog renders as text when the mddialog code goes to wrap the content in a element it is instead wrapping the text content with reproduction | 1 |
305,440 | 9,369,379,725 | IssuesEvent | 2019-04-03 10:57:00 | metwork-framework/mfdata | https://api.github.com/repos/metwork-framework/mfdata | closed | error in switch plugin with a specific file | Priority: High Status: In Progress Type: Bug backport-to-0.5 backport-to-0.6 | ```
Traceback (most recent call last):
File "/opt/metwork-mfdata-0.5/opt/python3/lib/python3.5/site-packages/acquisition-0.0.0-py3.5.egg/acquisition/step.py", line 119, in _exception_safe_call
return func(*args, **kwargs)
File "/home/mfdata/var/plugins/switch/main.py", line 161, in process
system_magic = get_magic(xaf)
File "/home/mfdata/var/plugins/switch/main.py", line 25, in get_magic
tag_magic = magic.from_file(xaf_file.filepath)
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 92, in from_file
return self._handle509Bug(e)
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 101, in _handle509Bug
raise e
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 90, in from_file
return maybe_decode(magic_file(self.cookie, filename))
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 247, in magic_file
return _magic_file(cookie, coerce_filename(filename))
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 188, in errorcheck_null
raise MagicException(err)
magic.MagicException: b'Macintosh HFS Extended version 9220 data (mounted) vasprintf failed (Invalid or incomplete multibyte or wide character)'
``` | 1.0 | error in switch plugin with a specific file - ```
Traceback (most recent call last):
File "/opt/metwork-mfdata-0.5/opt/python3/lib/python3.5/site-packages/acquisition-0.0.0-py3.5.egg/acquisition/step.py", line 119, in _exception_safe_call
return func(*args, **kwargs)
File "/home/mfdata/var/plugins/switch/main.py", line 161, in process
system_magic = get_magic(xaf)
File "/home/mfdata/var/plugins/switch/main.py", line 25, in get_magic
tag_magic = magic.from_file(xaf_file.filepath)
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 92, in from_file
return self._handle509Bug(e)
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 101, in _handle509Bug
raise e
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 90, in from_file
return maybe_decode(magic_file(self.cookie, filename))
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 247, in magic_file
return _magic_file(cookie, coerce_filename(filename))
File "/home/mfdata/var/plugins/switch/local/lib/python3.5/site-packages/magic.py", line 188, in errorcheck_null
raise MagicException(err)
magic.MagicException: b'Macintosh HFS Extended version 9220 data (mounted) vasprintf failed (Invalid or incomplete multibyte or wide character)'
``` | priority | error in switch plugin with a specific file traceback most recent call last file opt metwork mfdata opt lib site packages acquisition egg acquisition step py line in exception safe call return func args kwargs file home mfdata var plugins switch main py line in process system magic get magic xaf file home mfdata var plugins switch main py line in get magic tag magic magic from file xaf file filepath file home mfdata var plugins switch local lib site packages magic py line in from file return self e file home mfdata var plugins switch local lib site packages magic py line in raise e file home mfdata var plugins switch local lib site packages magic py line in from file return maybe decode magic file self cookie filename file home mfdata var plugins switch local lib site packages magic py line in magic file return magic file cookie coerce filename filename file home mfdata var plugins switch local lib site packages magic py line in errorcheck null raise magicexception err magic magicexception b macintosh hfs extended version data mounted vasprintf failed invalid or incomplete multibyte or wide character | 1 |
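A defensive pattern for this kind of failure is to downgrade detector exceptions to a fallback type instead of letting them abort the whole step. This is only a sketch: the detector is injected as a plain callable so nothing here depends on python-magic; in the plugin it would be `magic.from_file`.

```python
def safe_magic(detect, path, fallback="application/octet-stream"):
    """Run the detector, turning any exception (e.g. magic.MagicException on
    odd files like the HFS image above) into a fallback type."""
    try:
        return detect(path)
    except Exception as exc:
        print(f"magic failed on {path}: {exc}")
        return fallback

def broken_detector(path):
    # Mimics the failure above without needing python-magic installed.
    raise ValueError("vasprintf failed (invalid or incomplete multibyte)")

print(safe_magic(lambda p: "text/plain", "ok.txt"))  # text/plain
print(safe_magic(broken_detector, "weird.hfs"))      # application/octet-stream
```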
453,543 | 13,081,761,716 | IssuesEvent | 2020-08-01 12:13:46 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | closed | Spirate and Civilian AI is broken on latest master | Feature: Entities Priority: 1-high | After you spawn them they don't do anything, they just stand there. Xeno AI is still functional. | 1.0 | Spirate and Civilian AI is broken on latest master - After you spawn them they don't do anything, they just stand there. Xeno AI is still functional. | priority | spirate and civilian ai is broken on latest master after you spawn them they don t do anything they just stand there xeno ai is still functional | 1 |
295,513 | 9,087,439,895 | IssuesEvent | 2019-02-18 13:45:50 | EricssonResearch/scott-eu | https://api.github.com/repos/EricssonResearch/scott-eu | closed | Trouble modelling certain classes | Comp: Model project Priority: High Status: Abandoned Type: Bug Upstream: Lyo | I have trouble representing extra properties on the class definitions, such as:
<img width="747" alt="screen shot 2018-02-17 at 01 51 35" src="https://user-images.githubusercontent.com/64734/36336123-562ec4c0-1385-11e8-852e-2afc956b1221.png">
and
<img width="479" alt="screen shot 2018-02-17 at 01 51 43" src="https://user-images.githubusercontent.com/64734/36336124-5c37157a-1385-11e8-8a55-07afea58e93b.png">
---
@jadelkhoury, what do you think will be the best way to proceed? | 1.0 | Trouble modelling certain classes - I have trouble representing extra properties on the class definitions, such as:
<img width="747" alt="screen shot 2018-02-17 at 01 51 35" src="https://user-images.githubusercontent.com/64734/36336123-562ec4c0-1385-11e8-852e-2afc956b1221.png">
and
<img width="479" alt="screen shot 2018-02-17 at 01 51 43" src="https://user-images.githubusercontent.com/64734/36336124-5c37157a-1385-11e8-8a55-07afea58e93b.png">
---
@jadelkhoury, what do you think will be the best way to proceed? | priority | trouble modelling certain classes i have a trouble representing extra properties on the class definitions such as img width alt screen shot at src and img width alt screen shot at src jadelkhoury what do you think will be the best way to proceed | 1 |
318,267 | 9,684,261,964 | IssuesEvent | 2019-05-23 13:25:07 | BottleneckStudio/keepmotivat.in | https://api.github.com/repos/BottleneckStudio/keepmotivat.in | opened | [Release1] Add middleware. | bug good first issue high priority | We should add useful middleware to our application. Namely:
- [ ] Gzip
- [ ] Logging
- [ ] Auth
We can add some middlewares later. But the middleware mentioned above should be the priority. Anything I've missed from the mentioned middleware above @kenkoii? CC @jkbicbic. | 1.0 | [Release1] Add middleware. - We should add useful middleware to our application. Namely:
- [ ] Gzip
- [ ] Logging
- [ ] Auth
We can add some middlewares later. But the middleware mentioned above should be the priority. Anything I've missed from the mentioned middleware above @kenkoii? CC @jkbicbic. | priority | add middleware we should add useful middleware to our application namely gzip logging auth we can add some middlewares later but the middleware mentioned above should be the priority anything i ve missed from the mentioned middleware above kenkoii cc jkbicbic | 1 |
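As a language-agnostic illustration of chaining the three middlewares named above, here is a small Python sketch. The handler and middleware shapes are invented for illustration, not the project's actual API: each middleware wraps the next handler and returns a new handler, and ordering matters (auth should reject before gzip touches the response).

```python
def logging_mw(next_handler):
    def handler(req):
        print(f"-> {req}")          # stand-in for real request logging
        return next_handler(req)
    return handler

def auth_mw(next_handler):
    def handler(req):
        if not req.get("token"):    # reject unauthenticated requests early
            return {"status": 401}
        return next_handler(req)
    return handler

def gzip_mw(next_handler):
    def handler(req):
        resp = next_handler(req)
        resp["encoding"] = "gzip"   # stand-in for compressing the body
        return resp
    return handler

def chain(handler, *middlewares):
    for mw in reversed(middlewares):   # first listed ends up outermost
        handler = mw(handler)
    return handler

app = chain(lambda req: {"status": 200}, logging_mw, auth_mw, gzip_mw)
print(app({"token": "abc"}))  # {'status': 200, 'encoding': 'gzip'}
print(app({}))                # {'status': 401}
```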
754,301 | 26,380,783,205 | IssuesEvent | 2023-01-12 08:27:54 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | [Death Knight ZONE][Quest][A Special Surprise] | Class: Death Knight Quest - Cataclysm (1-60) Priority: High Status: Confirmed | [//]: # (REMEMBER! Add links to things related to the bug using for example:)
[//]: # (http://wowhead.com/)
[//]: # (cata-twinhead.twinstar.cz)
**Description:**
RP heavy quest where you eventually kill Kug Ironjaw
**How to reproduce:**
Accept the quest, go to Kug and watch the RP
He will die without being touched rendering the quest incomplete.
It seems like he dies on a timer
As witnessed by Sylah
**How it should work:**
https://www.youtube.com/watch?v=9lzb-Gz5bD4
**Database links:**
https://cata-twinhead.twinstar.cz/?quest=12739 | 1.0 | [Death Knight ZONE][Quest][A Special Surprise] - [//]: # (REMBEMBER! Add links to things related to the bug using for example:)
[//]: # (http://wowhead.com/)
[//]: # (cata-twinhead.twinstar.cz)
**Description:**
RP heavy quest where you eventually kill Kug Ironjaw
**How to reproduce:**
Accep the quest, go to Kug and watch the rp
He will die without being touched rendering the quest incomplete.
It seems like he dies on a timer
As witnessed by Sylah
**How it should work:**
https://www.youtube.com/watch?v=9lzb-Gz5bD4
**Database links:**
https://cata-twinhead.twinstar.cz/?quest=12739 | priority | rembember add links to things related to the bug using for example cata twinhead twinstar cz description rp heavy quest where you eventually kill kug ironjaw how to reproduce accep the quest go to kug and watch the rp he will die without being touched rendering the quest incomplete it seems like he dies on a timer as witnessed by sylah how it should work database links | 1 |
790,245 | 27,820,244,076 | IssuesEvent | 2023-03-19 06:01:34 | AY2223S2-CS2103T-T17-3/tp | https://api.github.com/repos/AY2223S2-CS2103T-T17-3/tp | closed | As a user, I need a way to group my contacts easily. | type.Task priority.High | So that when I want to find a group of people in my contact list, I can do it easily. | 1.0 | As a user, I need a way to group my contacts easily. - So that when I want to find a group of people in my contact list, I can do it easily. | priority | as a user i need a way to group my contacts easily so that when i want to find a group of people in my contact list i can do it easily | 1 |
24,581 | 2,669,238,027 | IssuesEvent | 2015-03-23 14:31:34 | tomatocart/TomatoCart-v1 | https://api.github.com/repos/tomatocart/TomatoCart-v1 | opened | [Bootstrap]Variants specials aren't displayed in special page | Priority: High Type: Bug | I use TomatoCart 1.1.8.6 with bootstrap template and i did some specials on variants of my articles. Everything is working...when i choose a variant the price is changing and i see the price discount. But when i check the specials page /products.php?specials they are not present. Only a text like: There are no products in this category...but i would like to see the variant specials on that special page.
How can i fix that? I want the variants also show up as specials on that page and in the bootstrap template footer... | 1.0 | [Bootstrap]Variants specials aren't displayed in special page - I use TomatoCart 1.1.8.6 with bootstrap template and i did some specials on variants of my articles. Everything is working...when i choose a variant the price is changing and i see the price discount. But when i check the specials page /products.php?specials they are not present. Only a text like: There are no products in this category...but i would like to see the variant specials on that special page.
How can i fix that? I want the variants also show up as specials on that page and in the bootstrap template footer... | priority | variants specials aren t displayed in special page i use tomatocart with bootstrap template and i did some specials on variants of my articles everything is working when i choose a vaiant the price is changing and i see the price discount but when i check the specials page products php specials they are not present only a text like there are no products in this category but i would like to see the variant specials on that special page how can i fix that i want the variants also show up as specials on that page and in the bootstrap template footer | 1 |
99,243 | 4,049,811,221 | IssuesEvent | 2016-05-23 16:00:13 | BNHM/berkeleymapper | https://api.github.com/repos/BNHM/berkeleymapper | opened | If there are MANY points they just don't show | Priority-High | Michael Black reports that calls to BerkeleyMapper with many points just drops data --- that is, these are map calls with greater than 50,000 points. This should be tested | 1.0 | If there are MANY points they just don't show - Michael Black reports that calls to BerkeleyMapper with many points just drops data --- that is, these are map calls with greater than 50,000 points. This should be tested | priority | if there are many points they just don t show michael black reports that calls to berkeleymapper with many points just drops data that is these are map calls with greater than points this should be tested | 1 |
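One hypothetical way to test and work around the ~50,000-point ceiling described above is to split the point list into batches under the limit instead of issuing one oversized map call. The limit constant and batching helper below are illustrative, not BerkeleyMapper API:

```python
MAX_POINTS = 50_000  # assumed ceiling from the report above

def batches(points, limit=MAX_POINTS):
    # Slice the point list into consecutive chunks no larger than `limit`.
    return [points[i:i + limit] for i in range(0, len(points), limit)]

chunk_sizes = [len(c) for c in batches(list(range(120_000)))]
print(chunk_sizes)  # [50000, 50000, 20000]
```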
352,282 | 10,534,423,805 | IssuesEvent | 2019-10-01 14:48:10 | BlueCodeSystems/smartcerv | https://api.github.com/repos/BlueCodeSystems/smartcerv | closed | Refactor Widgets | High priority enhancement | ### Functional requirements:
- [ ] Widgets should retain data captured on a previous submission; otherwise, fill in default entries | 1.0 | Refactor Widgets - ### Functional requirements:
- [ ] Widgets should retain data captured on a previous submission; otherwise, fill in default entries | priority | refactor widgets functional requirements widgets should retain data captured on previously submission otherwise fill in default entries | 1
193,980 | 6,890,217,421 | IssuesEvent | 2017-11-22 13:16:50 | PathwayCommons/app-ui | https://api.github.com/repos/PathwayCommons/app-ui | closed | Tooltip does not work in Firefox Quantum | 1 high priority bug | The metadata tooltip is not displayed when the right click event is triggered in the latest version of Firefox.
Right clicking properly triggers the tooltip generation function, but does not result in a visible tooltip being displayed.
| 1.0 | Tooltip does not work in Firefox Quantum - The metadata tooltip is not displayed when the right click event is triggered in the latest version of Firefox.
Right clicking properly triggers the tooltip generation function, but does not result in a visible tooltip being displayed.
| priority | tooltip does not work in firefox quantum the metadata tooltip is not displayed when the right click event is triggered in the latest version of firefox right clicking properly triggers the tooltip generation function but does not result in a visible tooltip being displayed | 1 |
762,846 | 26,733,266,040 | IssuesEvent | 2023-01-30 07:17:14 | Reyder95/Project-Capybara | https://api.github.com/repos/Reyder95/Project-Capybara | opened | Figure Out How to Implement Level Backgrounds | technological in progress high priority | This will probably be intertwined with us figuring out how to do tile sets. I also think we are going to need a "parallax effect" for certain backgrounds such as the horizon seen on Meditation Beach aka Level 1. | 1.0 | Figure Out How to Implement Level Backgrounds - This will probably be intertwined with us figuring out how to do tile sets. I also think we are going to need a "parallax effect" for certain backgrounds such as the horizon seen on Meditation Beach aka Level 1. | priority | figure out how to implement level backgrounds this will probably be intertwined with us figuring out how to do tile sets i also think we are going need a parallax effect for certain backgrounds such as the horizon seen on meditation beach aka level | 1
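The parallax effect mentioned in that issue reduces to one formula: each background layer scrolls by a fraction of the camera movement. A minimal sketch (names and factors are illustrative, not the project's code):

```python
def layer_screen_x(world_x, camera_x, parallax_factor):
    # Factor 0 pins the layer in place (a fixed horizon);
    # factor 1 locks it to the world, i.e. full camera speed.
    return world_x - camera_x * parallax_factor

print(layer_screen_x(0, 100, 0.0))  # 0.0    -> horizon never moves
print(layer_screen_x(0, 100, 0.5))  # -50.0  -> mid layer at half speed
print(layer_screen_x(0, 100, 1.0))  # -100.0 -> foreground at full speed
```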
584,191 | 17,408,383,832 | IssuesEvent | 2021-08-03 09:06:37 | tantivy-search/tantivy | https://api.github.com/repos/tantivy-search/tantivy | closed | Panic in merge thread | bug high priority | Running on 0.15.3, the merge thread panics while trying to list segment positions. I couldn't find a small reproducible example that triggers the issue yet, and this seems to only happen in my project after a few days of usage with probably about a hundred mutations per day. After trying to reproduce the issue with random mutations in my quest of finding a small reproducing example, I have the feeling that it's the result of some corruption after many merges.
<details>
<summary>Backtrace</summary>
```
thread 'merge_thread0' panicked at 'attempt to add with overflow', /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/postings/segment_postings.rs:258:17
stack backtrace:
0: 0x55f464fc1b50 - std::backtrace_rs::backtrace::libunwind::trace::ha5edb8ba5c6b7a6c
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/../../backtrace/src/backtrace/libunwind.rs:90:5
1: 0x55f464fc1b50 - std::backtrace_rs::backtrace::trace_unsynchronized::h0de86d320a827db2
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x55f464fc1b50 - std::sys_common::backtrace::_print_fmt::h97b9ad6f0a1380ff
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:67:5
3: 0x55f464fc1b50 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h14be7eb08f97fe80
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:46:22
4: 0x55f464fec0ef - core::fmt::write::h2ca8877d3e0e52de
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/fmt/mod.rs:1094:17
5: 0x55f464fba6b5 - std::io::Write::write_fmt::h64f5987220b618f4
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/io/mod.rs:1584:15
6: 0x55f464fc405b - std::sys_common::backtrace::_print::h7f1a4097308f2e0a
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:49:5
7: 0x55f464fc405b - std::sys_common::backtrace::print::h1f799fc2ca7f5035
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:36:9
8: 0x55f464fc405b - std::panicking::default_hook::{{closure}}::hf38436e8a3ce1071
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:208:50
9: 0x55f464fc3b2d - std::panicking::default_hook::he2f8f3fae11ed1dd
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:225:9
10: 0x55f464fc47dd - std::panicking::rust_panic_with_hook::h79a18548bd90c7d4
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:591:17
11: 0x55f464fc4317 - std::panicking::begin_panic_handler::{{closure}}::h212a72cc08e25126
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:495:13
12: 0x55f464fc1fec - std::sys_common::backtrace::__rust_end_short_backtrace::hbd6897dd42bc0fcd
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:141:18
13: 0x55f464fc42a9 - rust_begin_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:493:5
14: 0x55f464fe9771 - core::panicking::panic_fmt::h77ecd04e9b1dd84d
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:92:14
15: 0x55f464fe96bd - core::panicking::panic::h60569d8a39169222
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:50:5
16: 0x55f463b57ee9 - <tantivy::postings::segment_postings::SegmentPostings as tantivy::postings::postings::Postings>::positions_with_offset::h4123bf66e4e00f68
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/postings/segment_postings.rs:258:17
17: 0x55f463b57479 - tantivy::postings::postings::Postings::positions::hbd39bc6e49c80976
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/postings/postings.rs:24:9
18: 0x55f463a91743 - tantivy::indexer::merger::IndexMerger::write_postings_for_field::h57e445b57d077cf9
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/merger.rs:922:25
19: 0x55f463a9297d - tantivy::indexer::merger::IndexMerger::write_postings::h0fbf4e4079e39341
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/merger.rs:970:53
20: 0x55f463a94cf7 - tantivy::indexer::merger::IndexMerger::write::h7c6ea7dfcc8f779d
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/merger.rs:1071:33
21: 0x55f4639481e2 - tantivy::indexer::segment_updater::merge::h36b25a9851a827ec
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/segment_updater.rs:142:20
22: 0x55f46394d432 - tantivy::indexer::segment_updater::SegmentUpdater::start_merge::{{closure}}::h8d47bf6a2d5d64b7
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/segment_updater.rs:498:19
23: 0x55f463b7dc2c - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::h8a901b06a716ed55
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80:19
24: 0x55f464945269 - <futures_task::future_obj::LocalFutureObj<T> as core::future::future::Future>::poll::h089823b4a04b8205
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-task-0.3.16/src/future_obj.rs:84:18
25: 0x55f4649451e1 - <futures_task::future_obj::FutureObj<T> as core::future::future::Future>::poll::h122379c8c36bd25a
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-task-0.3.16/src/future_obj.rs:127:9
26: 0x55f464944c3c - futures_util::future::future::FutureExt::poll_unpin::h992a41cd45e9a023
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.16/src/future/future/mod.rs:562:9
27: 0x55f464946a75 - futures_executor::thread_pool::Task::run::h67abfdd088d67ee6
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.16/src/thread_pool.rs:322:27
28: 0x55f4649458e7 - futures_executor::thread_pool::PoolState::work::habcdb6f78988cc91
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.16/src/thread_pool.rs:154:39
29: 0x55f4649468fb - futures_executor::thread_pool::ThreadPoolBuilder::create::{{closure}}::h62dba45c12cd055b
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.16/src/thread_pool.rs:284:42
30: 0x55f464958edc - std::sys_common::backtrace::__rust_begin_short_backtrace::h5af3b7a895a3f7bc
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/sys_common/backtrace.rs:125:18
31: 0x55f46495a741 - std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}}::hfb907fbc1f766a9a
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:481:17
32: 0x55f464958eb1 - <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once::ha4715ad11f2a71a4
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:344:9
33: 0x55f464948373 - std::panicking::try::do_call::hcdd310d3de191f62
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:379:40
34: 0x55f46494876d - __rust_try
35: 0x55f4649482b1 - std::panicking::try::h0354e8d9732084ca
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:343:19
36: 0x55f464959b91 - std::panic::catch_unwind::h4b478974cca8c20c
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:431:14
37: 0x55f46495a53d - std::thread::Builder::spawn_unchecked::{{closure}}::h65ed724ce8ee9f8a
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:480:30
38: 0x55f46495a8bf - core::ops::function::FnOnce::call_once{{vtable.shim}}::h496b9f8e92515106
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227:5
39: 0x55f464fcb3da - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h75c2ca1daad47228
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/alloc/src/boxed.rs:1546:9
40: 0x55f464fcb3da - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hdf9f8afc9d34e311
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/alloc/src/boxed.rs:1546:9
41: 0x55f464fcb3da - std::sys::unix::thread::Thread::new::thread_start::hc238bac7748b195d
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys/unix/thread.rs:71:17
42: 0x7fb5f6eaf609 - start_thread
43: 0x7fb5f6c7f293 - clone
44: 0x0 - <unknown>
```
</details>
After adding some debugging right before the panic point, it seems that the position deltas explode in value; they get summed in a loop and the running total eventually overflows the u32.
**Debugging code**
```
diff --git a/src/postings/segment_postings.rs b/src/postings/segment_postings.rs
index 67c3aa48a..78b958f72 100644
--- a/src/postings/segment_postings.rs
+++ b/src/postings/segment_postings.rs
@@ -255,6 +255,9 @@ impl Postings for SegmentPostings {
position_reader.read(read_offset, &mut output[..]);
let mut cum = offset;
for output_mut in output.iter_mut() {
+ if cum > 100000 {
+ println!("tf={} out={} cum={}", term_freq, *output_mut, cum);
+ }
cum += *output_mut;
*output_mut = cum;
}
```
**Outputs**
```
tf=17 out=537053 cum=214909
tf=17 out=1564834 cum=751962
tf=17 out=4131190 cum=2316796
tf=17 out=10060076 cum=6447986
tf=17 out=22897404 cum=16508062
tf=17 out=49205305 cum=39405466
tf=17 out=100625901 cum=88610771
tf=17 out=197071398 cum=189236672
tf=17 out=371523100 cum=386308070
tf=17 out=677080003 cum=757831170
tf=17 out=1197093556 cum=1434911173
tf=17 out=2059465165 cum=2632004729
...
thread 'merge_thread0' panicked at 'attempt to add with overflow', /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/postings/segment_postings.rs:258:17
...
```
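To make the failure mode concrete, here is a small model (in Python, not the actual Rust) of the cumulative-sum decoding shown in the patch above, with an explicit u32 bound check standing in for Rust's debug-mode overflow panic:

```python
U32_MAX = 2**32 - 1

def decode_positions(deltas, offset=0):
    # Mirror of the loop in positions_with_offset: positions are stored as
    # deltas and reconstructed by a running sum starting at `offset`.
    out, cum = [], offset
    for d in deltas:
        if cum + d > U32_MAX:  # where Rust's checked add would panic
            raise OverflowError(f"u32 overflow: cum={cum}, delta={d}")
        cum += d
        out.append(cum)
    return out

print(decode_positions([3, 1, 4], offset=10))  # [13, 14, 18] -- healthy data
try:
    # Deltas roughly doubling each step, like the corrupt output above.
    decode_positions([537053 * 2**i for i in range(14)])
except OverflowError as exc:
    print("panic-equivalent:", exc)
```

With sane deltas the running sum stays small; with the roughly doubling deltas from the corrupt segment it blows through `u32::MAX` within a dozen terms, matching the `attempt to add with overflow` panic.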
Let me know if there is anything I could provide to help. Even though my code that uses Tantivy is [open source](https://github.com/appaquet/exocore/tree/master/store/src/local/mutation_index), it's probably too complex to reproduce easily on your side and would require my corrupted index that contains private data. Feel free to contact me privately if it can help debugging (my gh username @ gmail.com) | 1.0 | Panic in merge thread - Running on 0.15.3, the merge thread panics while trying to list segment positions. I couldn't find a small reproducible example that triggers the issue yet, and this seems to only happen in my project after a few days of usage with probably about a hundred mutations per day. After trying to reproduce the issue with random mutations in my quest of finding a small reproducing example, I have the feeling that it's the result of some corruption after many merges.
<details>
<summary>Backtrace</summary>
```
thread 'merge_thread0' panicked at 'attempt to add with overflow', /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/postings/segment_postings.rs:258:17
stack backtrace:
0: 0x55f464fc1b50 - std::backtrace_rs::backtrace::libunwind::trace::ha5edb8ba5c6b7a6c
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/../../backtrace/src/backtrace/libunwind.rs:90:5
1: 0x55f464fc1b50 - std::backtrace_rs::backtrace::trace_unsynchronized::h0de86d320a827db2
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x55f464fc1b50 - std::sys_common::backtrace::_print_fmt::h97b9ad6f0a1380ff
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:67:5
3: 0x55f464fc1b50 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h14be7eb08f97fe80
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:46:22
4: 0x55f464fec0ef - core::fmt::write::h2ca8877d3e0e52de
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/fmt/mod.rs:1094:17
5: 0x55f464fba6b5 - std::io::Write::write_fmt::h64f5987220b618f4
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/io/mod.rs:1584:15
6: 0x55f464fc405b - std::sys_common::backtrace::_print::h7f1a4097308f2e0a
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:49:5
7: 0x55f464fc405b - std::sys_common::backtrace::print::h1f799fc2ca7f5035
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:36:9
8: 0x55f464fc405b - std::panicking::default_hook::{{closure}}::hf38436e8a3ce1071
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:208:50
9: 0x55f464fc3b2d - std::panicking::default_hook::he2f8f3fae11ed1dd
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:225:9
10: 0x55f464fc47dd - std::panicking::rust_panic_with_hook::h79a18548bd90c7d4
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:591:17
11: 0x55f464fc4317 - std::panicking::begin_panic_handler::{{closure}}::h212a72cc08e25126
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:495:13
12: 0x55f464fc1fec - std::sys_common::backtrace::__rust_end_short_backtrace::hbd6897dd42bc0fcd
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys_common/backtrace.rs:141:18
13: 0x55f464fc42a9 - rust_begin_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:493:5
14: 0x55f464fe9771 - core::panicking::panic_fmt::h77ecd04e9b1dd84d
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:92:14
15: 0x55f464fe96bd - core::panicking::panic::h60569d8a39169222
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:50:5
16: 0x55f463b57ee9 - <tantivy::postings::segment_postings::SegmentPostings as tantivy::postings::postings::Postings>::positions_with_offset::h4123bf66e4e00f68
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/postings/segment_postings.rs:258:17
17: 0x55f463b57479 - tantivy::postings::postings::Postings::positions::hbd39bc6e49c80976
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/postings/postings.rs:24:9
18: 0x55f463a91743 - tantivy::indexer::merger::IndexMerger::write_postings_for_field::h57e445b57d077cf9
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/merger.rs:922:25
19: 0x55f463a9297d - tantivy::indexer::merger::IndexMerger::write_postings::h0fbf4e4079e39341
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/merger.rs:970:53
20: 0x55f463a94cf7 - tantivy::indexer::merger::IndexMerger::write::h7c6ea7dfcc8f779d
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/merger.rs:1071:33
21: 0x55f4639481e2 - tantivy::indexer::segment_updater::merge::h36b25a9851a827ec
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/segment_updater.rs:142:20
22: 0x55f46394d432 - tantivy::indexer::segment_updater::SegmentUpdater::start_merge::{{closure}}::h8d47bf6a2d5d64b7
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/indexer/segment_updater.rs:498:19
23: 0x55f463b7dc2c - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::h8a901b06a716ed55
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80:19
24: 0x55f464945269 - <futures_task::future_obj::LocalFutureObj<T> as core::future::future::Future>::poll::h089823b4a04b8205
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-task-0.3.16/src/future_obj.rs:84:18
25: 0x55f4649451e1 - <futures_task::future_obj::FutureObj<T> as core::future::future::Future>::poll::h122379c8c36bd25a
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-task-0.3.16/src/future_obj.rs:127:9
26: 0x55f464944c3c - futures_util::future::future::FutureExt::poll_unpin::h992a41cd45e9a023
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.16/src/future/future/mod.rs:562:9
27: 0x55f464946a75 - futures_executor::thread_pool::Task::run::h67abfdd088d67ee6
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.16/src/thread_pool.rs:322:27
28: 0x55f4649458e7 - futures_executor::thread_pool::PoolState::work::habcdb6f78988cc91
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.16/src/thread_pool.rs:154:39
29: 0x55f4649468fb - futures_executor::thread_pool::ThreadPoolBuilder::create::{{closure}}::h62dba45c12cd055b
at /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-executor-0.3.16/src/thread_pool.rs:284:42
30: 0x55f464958edc - std::sys_common::backtrace::__rust_begin_short_backtrace::h5af3b7a895a3f7bc
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/sys_common/backtrace.rs:125:18
31: 0x55f46495a741 - std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}}::hfb907fbc1f766a9a
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:481:17
32: 0x55f464958eb1 - <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once::ha4715ad11f2a71a4
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:344:9
33: 0x55f464948373 - std::panicking::try::do_call::hcdd310d3de191f62
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:379:40
34: 0x55f46494876d - __rust_try
35: 0x55f4649482b1 - std::panicking::try::h0354e8d9732084ca
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:343:19
36: 0x55f464959b91 - std::panic::catch_unwind::h4b478974cca8c20c
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:431:14
37: 0x55f46495a53d - std::thread::Builder::spawn_unchecked::{{closure}}::h65ed724ce8ee9f8a
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:480:30
38: 0x55f46495a8bf - core::ops::function::FnOnce::call_once{{vtable.shim}}::h496b9f8e92515106
at /home/appaquet/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227:5
39: 0x55f464fcb3da - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h75c2ca1daad47228
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/alloc/src/boxed.rs:1546:9
40: 0x55f464fcb3da - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hdf9f8afc9d34e311
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/alloc/src/boxed.rs:1546:9
41: 0x55f464fcb3da - std::sys::unix::thread::Thread::new::thread_start::hc238bac7748b195d
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/sys/unix/thread.rs:71:17
42: 0x7fb5f6eaf609 - start_thread
43: 0x7fb5f6c7f293 - clone
44: 0x0 - <unknown>
```
</details>
After adding some debugging right before the panic point, it seems that the positions explode in value; they get summed in a loop and eventually overflow the u32.
**Debugging code**
```
diff --git a/src/postings/segment_postings.rs b/src/postings/segment_postings.rs
index 67c3aa48a..78b958f72 100644
--- a/src/postings/segment_postings.rs
+++ b/src/postings/segment_postings.rs
@@ -255,6 +255,9 @@ impl Postings for SegmentPostings {
position_reader.read(read_offset, &mut output[..]);
let mut cum = offset;
for output_mut in output.iter_mut() {
+ if cum > 100000 {
+ println!("tf={} out={} cum={}", term_freq, *output_mut, cum);
+ }
cum += *output_mut;
*output_mut = cum;
}
```
**Outputs**
```
tf=17 out=537053 cum=214909
tf=17 out=1564834 cum=751962
tf=17 out=4131190 cum=2316796
tf=17 out=10060076 cum=6447986
tf=17 out=22897404 cum=16508062
tf=17 out=49205305 cum=39405466
tf=17 out=100625901 cum=88610771
tf=17 out=197071398 cum=189236672
tf=17 out=371523100 cum=386308070
tf=17 out=677080003 cum=757831170
tf=17 out=1197093556 cum=1434911173
tf=17 out=2059465165 cum=2632004729
...
thread 'merge_thread0' panicked at 'attempt to add with overflow', /home/appaquet/.cargo/registry/src/github.com-1ecc6299db9ec823/tantivy-0.15.3/src/postings/segment_postings.rs:258:17
...
```
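The runaway prefix sum in the debug output is easy to demonstrate in isolation: the decoded values are deltas that get accumulated with plain `u32` addition, so once the (presumably corrupt) deltas grow large enough the running total overflows. A minimal sketch of the same summation pattern — not Tantivy's actual code — using `checked_add` to surface the overflow explicitly instead of panicking:

```rust
// Sketch of the cumulative-sum loop from segment_postings.rs: each decoded
// value is a delta from the previous position, and the running total must
// fit in a u32. With corrupt deltas the plain `cum += *out` wraps (or
// panics in debug builds); `checked_add` makes the failure explicit.
fn prefix_sum_checked(offset: u32, deltas: &mut [u32]) -> Result<(), usize> {
    let mut cum = offset;
    for (i, out) in deltas.iter_mut().enumerate() {
        // Err(i) reports the index of the delta that overflowed.
        cum = cum.checked_add(*out).ok_or(i)?;
        *out = cum;
    }
    Ok(())
}

fn main() {
    // Well-formed deltas: decodes in place to absolute positions.
    let mut ok = [3u32, 5, 2];
    assert_eq!(prefix_sum_checked(10, &mut ok), Ok(()));
    assert_eq!(ok, [13, 18, 20]);

    // Exploding deltas like the ones in the debug output above:
    // the checked version reports the overflow instead of panicking.
    let mut bad = [2_059_465_165u32, 2_632_004_729, 17];
    assert_eq!(prefix_sum_checked(214_909, &mut bad), Err(1));
    println!("overflow detected at delta index 1");
}
```

This only localizes the symptom; the deltas themselves being that large points at corrupt position data being read during the merge.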
Let me know if there is anything I could provide to help. Even though my code that uses Tantivy is [open source](https://github.com/appaquet/exocore/tree/master/store/src/local/mutation_index), it's probably too complex to reproduce easily on your side and would require my corrupted index that contains private data. Feel free to contact me privately if it can help debugging (my gh username @ gmail.com)
619,903 | 19,539,120,803 | IssuesEvent | 2021-12-31 15:31:00 | KA-Huis/repair-tool | https://api.github.com/repos/KA-Huis/repair-tool | closed | Report a repair | Estimation: 3 Priority: High User Story | **As a** volunteer **I want** to be able to report a repair, **so that** broken items can be picked up quickly.
### Description
A volunteer must be able to report a repair easily so that the handyman can pick it up.
### Acceptance criteria
1. The volunteer must be able to give the job a description.
2. The volunteer must be able to add a location to the job.
3. The volunteer must be able to add a priority to the job.
4. A reported job must appear in a list of open jobs. #3
5. The volunteer must be able to view a job by clicking on it. #5
### Tasks
- [x] #7
- [x] #17
- [x] #18
- [x] #19
228,966 | 7,570,021,889 | IssuesEvent | 2018-04-23 07:36:34 | ballerina-platform/ballerina-examples | https://api.github.com/repos/ballerina-platform/ballerina-examples | closed | [Sample] The websub-internal-hub-sample fails :ballerina-0.970.0-beta1-SNAPSHOT | Priority/High bug | The websub-internal-hub-sample fails
./ballerina run /ballerina/ballerina-examples/examples/websub-internal-hub-sample/publisher.bal
openjdk version "1.8.0_162"
ubuntu 16.04
```
/ballerina/ballerina-examples/examples/websub-internal-hub-sample/
publisher.bal websub-internal-hub-sample.description
subscriber.bal websub-internal-hub-sample.sh
/ballerina/ballerina-examples/examples/websub-internal-hub-sample/publisher.bal /ballerina-0.970.0-beta1-SNAPSHOT/bin$ ./ballerina run ~/products/
error: ballerina/websub/subscriber_service_endpoint.bal:81:30: incompatible types: expected 'ballerina.http:Filter', found 'ballerina.http:Filter|error'
error: ballerina/websub/subscriber_service_endpoint.bal:147:79: A matching pattern cannot be guaranteed for types '[ballerina.http:AuthConfig]'
error: ballerina/websub/commons.bal:187:54: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
error: ballerina/websub/hub_client_connector.bal:140:57: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
error: ballerina/websub/hub_client_connector.bal:155:69: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
error: ballerina/websub/hub_client_connector.bal:156:73: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
compilation contains errors
/ballerina/ballerina-examples/examples/websub-internal-hub-sample/subscriber.bal
error: ballerina/websub/subscriber_service_endpoint.bal:81:30: incompatible types: expected 'ballerina.http:Filter', found 'ballerina.http:Filter|error'
error: ballerina/websub/subscriber_service_endpoint.bal:147:79: A matching pattern cannot be guaranteed for types '[ballerina.http:AuthConfig]'
error: ballerina/websub/commons.bal:187:54: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
error: ballerina/websub/hub_client_connector.bal:140:57: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
error: ballerina/websub/hub_client_connector.bal:155:69: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
error: ballerina/websub/hub_client_connector.bal:156:73: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
error: ./subscriber.bal:43:73: unreachable pattern: preceding patterns are too general or the pattern ordering is not correct
compilation contains errors
shavantha@shavantha-ThinkPad-X1-Carbon-4th:~/products/ballerina/build/18042018_1/ballerina-0.970.0-beta1-SNAPSHOT/bin$
```
601,510 | 18,412,464,018 | IssuesEvent | 2021-10-13 07:46:48 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Debug log Errors | bug [Priority: HIGH] | 1) *PHP Warning: Undefined variable $height in /home/showmetechcom/public_html/wp-content/plugins/accelerated-mobile-pages/includes/vendor/amp/includes/utils/class-amp-image-dimension-extractor.php on line 247*
2) *PHP Warning: Trying to access array offset on value of type null in /home/showmetechcom/public_html/wp-content/plugins/accelerated-mobile-pages/includes/vendor/amp/includes/utils/class-amp-image-dimension-extractor.php on line 247*
3) *PHP Warning: Attempt to read property "ID" on null in /home/showmetechcom/public_html/wp-content/plugins/accelerated-mobile-pages/templates/design-manager/design-3/elements/featured-image.php on line 15*
4) *PHP Warning: Undefined array key "ampforwp-ia-on-off" in /home/showmetechcom/public_html/wp-content/plugins/accelerated-mobile-pages/templates/features.php on line 6999*
5) *PHP Warning: Trying to access array offset on value of type null in /home/showmetechcom/public_html/wp-content/plugins/accelerated-mobile-pages/templates/features.php on line 6999*
6) *PHP Warning: Undefined array key "HTTP_USER_AGENT" in /home/showmetechcom/corporate.showmetech.com.br/wp-content/plugins/accelerated-mobile-pages/classes/class-ampforwp-walker-nav-menu.php on line 80*
---
Only user end issue
PHP version on the user's site: 8.0.10
**HelpScout Conversation** - https://secure.helpscout.net/conversation/1501634003/194083?folderId=4672784
162,478 | 6,154,095,347 | IssuesEvent | 2017-06-28 11:47:11 | sunwww/ecom-ast-ru | https://api.github.com/repos/sunwww/ecom-ast-ru | closed | Rework the "Bed-day plan fulfillment by department" report | auto-migrated Priority-High Type-Enhancement | ```
Restore the names of the polyclinic departments.
Remove the financial indicators.
Set up a local sort order for the departments
(for DGKB 2: pediatrics, PNO, CNS, pulmonology).
```
Original issue reported on code.google.com by `avzvia...@gmail.com` on 11 Mar 2013 at 9:30
4,567 | 2,559,541,854 | IssuesEvent | 2015-02-05 01:49:18 | chessmasterhong/WaterEmblem | https://api.github.com/repos/chessmasterhong/WaterEmblem | closed | Game does not progress after battle animation when enemy unit is still alive. | bug duplicate high priority | After the battle animation (regardless of who initiated the attack), the game will be perpetually stuck at the active unit's turn, rendering the game unable to progress or be played past the first battle animation.
This behavior has been tested on Firefox and Chrome.
395,844 | 11,697,319,768 | IssuesEvent | 2020-03-06 11:33:33 | Dijky/sv-des | https://api.github.com/repos/Dijky/sv-des | opened | Enable decryption support | HIGH PRIORITY enhancement | This should be as simple as making the `DESKeyRotate` invertible (rotate right instead of left) and slightly modifying the rotation schedule
- [ ] Decide what kind of parameter to use: `input` or `parameter`?
- [ ] Deliver parameter to where it's needed
- [ ] Invert key schedule
- [ ] Add decryption test vectors to test bench ([memory file](https://github.com/Dijky/sv-des/blob/master/DES.srcs/sim_1/DES/des_dec.test.mem))
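The inversion this task describes is mechanical: the DES key schedule rotates the two 28-bit key halves left by 1 or 2 positions per round, so the decryption key schedule replays the same per-round amounts as right-rotations in reverse order. A hedged software sketch of that round-trip property (illustrative only — the names here are not the actual `DESKeyRotate` ports):

```rust
// 28-bit circular rotations as used by the DES key schedule
// (values are kept in the low 28 bits of a u32).
const MASK28: u32 = (1 << 28) - 1;

fn rotl28(x: u32, n: u32) -> u32 {
    ((x << n) | (x >> (28 - n))) & MASK28
}

fn rotr28(x: u32, n: u32) -> u32 {
    ((x >> n) | (x << (28 - n))) & MASK28
}

// Per-round rotation amounts from the DES key schedule (FIPS 46-3).
const SHIFTS: [u32; 16] = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1];

fn main() {
    let c0: u32 = 0x0ABC_DEF & MASK28; // arbitrary 28-bit half
    // Encryption direction: rotate left per the schedule.
    let mut c = c0;
    for s in SHIFTS {
        c = rotl28(c, s);
    }
    // Decryption direction: same schedule, applied as right-rotations in
    // reverse order, restores the original half -- the "invertible"
    // property the issue asks DESKeyRotate to expose.
    for s in SHIFTS.iter().rev() {
        c = rotr28(c, *s);
    }
    assert_eq!(c, c0);
    println!("key-schedule rotation round-trip OK");
}
```

In hardware the same effect needs only a direction input on the rotate unit plus the reversed shift schedule.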
185,040 | 6,718,405,552 | IssuesEvent | 2017-10-15 12:29:12 | phansch/axolotl | https://api.github.com/repos/phansch/axolotl | closed | Setup heroku deployment | maintenance priority:high | * When pushing to master, travis should deploy to heroku in a new build stage
314,544 | 9,598,536,033 | IssuesEvent | 2019-05-10 02:06:35 | conan-community/community | https://api.github.com/repos/conan-community/community | closed | Boost 1.69.0 does not build on macOS/gcc-8.2 | complex: high priority: medium stage: queue type: bug | Description of Problem, Request, or Question
Package Details (Include if Applicable)
Package Name/Version: boost/1.69.0
Operating System: macOS
Operation System Version: 10.14
Compiler+version: gcc-8.2
Conan version: conan 1.11.2
Steps to reproduce (Include if Applicable)
Use conan to install boost on macOS using gcc.
Build logs (Include if Available)
https://travis-ci.org/acgetchell/CDT-plusplus/jobs/512902318#L2638
https://travis-ci.org/acgetchell/CDT-plusplus/jobs/512902319#L2351