Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1
value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3
values | title stringlengths 1 957 | labels stringlengths 4 795 | body stringlengths 1 259k | index stringclasses 12
values | text_combine stringlengths 96 259k | label stringclasses 2
values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
742,417 | 25,853,459,248 | IssuesEvent | 2022-12-13 12:04:44 | dodona-edu/dodona | https://api.github.com/repos/dodona-edu/dodona | closed | Order activities by popularity | feature medium priority | Popularity of an exercise is definitely a strong indicator of interestingness of the exercise so if possible we should surface that information in the table. I don't think we want to list the exact number of courses where an exercises is used, but maybe we can use an icon with different states: low, medium, high, featured. We will then have to pick sensible threshold values. I don't know what the best strategy is for fetching this information: calculate it on demand, store it in the DB or store it in cache. In addition, we could add a filter for it.
_Originally posted by @bmesuere in https://github.com/dodona-edu/dodona/pull/4203#pullrequestreview-1199499438_
| 1.0 | Order activities by popularity - Popularity of an exercise is definitely a strong indicator of interestingness of the exercise so if possible we should surface that information in the table. I don't think we want to list the exact number of courses where an exercises is used, but maybe we can use an icon with different states: low, medium, high, featured. We will then have to pick sensible threshold values. I don't know what the best strategy is for fetching this information: calculate it on demand, store it in the DB or store it in cache. In addition, we could add a filter for it.
_Originally posted by @bmesuere in https://github.com/dodona-edu/dodona/pull/4203#pullrequestreview-1199499438_
| priority | order activities by popularity popularity of an exercise is definitely a strong indicator of interestingness of the exercise so if possible we should surface that information in the table i don t think we want to list the exact number of courses where an exercises is used but maybe we can use an icon with different states low medium high featured we will then have to pick sensible threshold values i don t know what the best strategy is for fetching this information calculate it on demand store it in the db or store it in cache in addition we could add a filter for it originally posted by bmesuere in | 1 |
601,835 | 18,436,208,964 | IssuesEvent | 2021-10-14 13:18:51 | AY2122S1-CS2103T-F11-3/tp | https://api.github.com/repos/AY2122S1-CS2103T-F11-3/tp | closed | Update test cases for leave related functionality | priority.MEDIUM | Writing/Editing test cases for the add/remove leaves command. | 1.0 | Update test cases for leave related functionality - Writing/Editing test cases for the add/remove leaves command. | priority | update test cases for leave related functionality writing editing test cases for the add remove leaves command | 1 |
323,530 | 9,856,088,895 | IssuesEvent | 2019-06-19 21:05:50 | USGS-Astrogeology/PyHAT_Point_Spectra_GUI | https://api.github.com/repos/USGS-Astrogeology/PyHAT_Point_Spectra_GUI | opened | Fix how model coefficients are saved for local regression | Difficulty Intermediate Priority: Medium | Currently intercept and model name are incorrect. Intercept should be the actual intercept of the model, model name should convey the algorithm, unknown spectrum name, number of local spectra used, etc. | 1.0 | Fix how model coefficients are saved for local regression - Currently intercept and model name are incorrect. Intercept should be the actual intercept of the model, model name should convey the algorithm, unknown spectrum name, number of local spectra used, etc. | priority | fix how model coefficients are saved for local regression currently intercept and model name are incorrect intercept should be the actual intercept of the model model name should convey the algorithm unknown spectrum name number of local spectra used etc | 1 |
666,463 | 22,356,729,703 | IssuesEvent | 2022-06-15 16:16:56 | oslopride/oslopride.no | https://api.github.com/repos/oslopride/oslopride.no | closed | Events - Improve UI | priority-medium | Link til Figma:
https://www.figma.com/file/0iN4qC7XqR4UagR3o7VEHn/Oslo-Pride-%E2%80%93-Design-System-%26-Web?node-id=829%3A20951
Legg samme event-filter som Skeivtkulturkalender.
Endringer på selve event-kort. | 1.0 | Events - Improve UI - Link til Figma:
https://www.figma.com/file/0iN4qC7XqR4UagR3o7VEHn/Oslo-Pride-%E2%80%93-Design-System-%26-Web?node-id=829%3A20951
Legg samme event-filter som Skeivtkulturkalender.
Endringer på selve event-kort. | priority | events improve ui link til figma legg samme event filter som skeivtkulturkalender endringer på selve event kort | 1 |
351,248 | 10,514,571,364 | IssuesEvent | 2019-09-28 01:43:42 | AY1920S1-CS2113T-W17-4/main | https://api.github.com/repos/AY1920S1-CS2113T-W17-4/main | opened | As a Computing student, I can have a do-after task | priority.Medium type.Story | so that I know what tasks need to be done after completing a specific task. | 1.0 | As a Computing student, I can have a do-after task - so that I know what tasks need to be done after completing a specific task. | priority | as a computing student i can have a do after task so that i know what tasks need to be done after completing a specific task | 1 |
150,164 | 5,738,811,117 | IssuesEvent | 2017-04-23 08:40:53 | diamm/diamm | https://api.github.com/repos/diamm/diamm | closed | Image display order | Component: Metadata Priority: Medium Type: Support | Need to be able to edit image order number to adjust incorrect display order if necessary | 1.0 | Image display order - Need to be able to edit image order number to adjust incorrect display order if necessary | priority | image display order need to be able to edit image order number to adjust incorrect display order if necessary | 1 |
577,797 | 17,135,069,201 | IssuesEvent | 2021-07-13 00:08:02 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | net: ip: Assertion fails when tcp_send_data() with zero length packet | bug priority: medium | **Describe the bug**
When testing TCP stack, assertion fails as following:
```
ASSERTION FAIL [frag] @ /zephyr/zephyrmaster/zephyr/subsys/net/buf.c:640
@ /zephyr/zephyrmaster/zephyr/lib/os/assert.c:45
E: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
E: Current thread: 0x3d87e0 (tcp_work)
E: Halting system
```
The flow of this failure is:
```tcp_work thread``` -> tcp_resend_data() -> tcp_send_data()
```
{
/* 1. send_data_totoal and unacked_len are all 0 in this case */
len = 0 = MIN3(conn->send_data_total - conn->unacked_len,
conn->send_win - conn->unacked_len,
conn_mss(conn));
/* 2. zero length packet (pkt->buffer=NULL) allocated */
pkt = tcp_pkt_alloc(conn, len=0);
ret = tcp_pkt_peek(pkt, conn->send_data, pos, len=0);
/* 3. zero length packet passed to tcp_out_ext() */
ret = tcp_out_ext(conn, PSH | ACK, pkt, conn->seq + conn->unacked_len);
}
```
-> tcp_out_ext() -> net_pkt_append_buffer(pkt, data->buffer=```NULL```) -> net_buf_frag_insert(..., buffer=```NULL```) -> _ASSERT_NO_MSG(frag) -> ```ASSERTION FAIL```
**To Solve**
Don't proceed if TCP resend length is zero.
```
len = MIN3(conn->send_data_total - conn->unacked_len,
conn->send_win - conn->unacked_len,
conn_mss(conn));
if (len == 0) {
goto out;
}
```
**Environment**
- OS: Linux
- Toolchain: Zephyr SDK
- Commit 89212a7fbf5fcd8e4d661c016344ae4bf2d46f53
| 1.0 | net: ip: Assertion fails when tcp_send_data() with zero length packet - **Describe the bug**
When testing TCP stack, assertion fails as following:
```
ASSERTION FAIL [frag] @ /zephyr/zephyrmaster/zephyr/subsys/net/buf.c:640
@ /zephyr/zephyrmaster/zephyr/lib/os/assert.c:45
E: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
E: Current thread: 0x3d87e0 (tcp_work)
E: Halting system
```
The flow of this failure is:
```tcp_work thread``` -> tcp_resend_data() -> tcp_send_data()
```
{
/* 1. send_data_totoal and unacked_len are all 0 in this case */
len = 0 = MIN3(conn->send_data_total - conn->unacked_len,
conn->send_win - conn->unacked_len,
conn_mss(conn));
/* 2. zero length packet (pkt->buffer=NULL) allocated */
pkt = tcp_pkt_alloc(conn, len=0);
ret = tcp_pkt_peek(pkt, conn->send_data, pos, len=0);
/* 3. zero length packet passed to tcp_out_ext() */
ret = tcp_out_ext(conn, PSH | ACK, pkt, conn->seq + conn->unacked_len);
}
```
-> tcp_out_ext() -> net_pkt_append_buffer(pkt, data->buffer=```NULL```) -> net_buf_frag_insert(..., buffer=```NULL```) -> _ASSERT_NO_MSG(frag) -> ```ASSERTION FAIL```
**To Solve**
Don't proceed if TCP resend length is zero.
```
len = MIN3(conn->send_data_total - conn->unacked_len,
conn->send_win - conn->unacked_len,
conn_mss(conn));
if (len == 0) {
goto out;
}
```
**Environment**
- OS: Linux
- Toolchain: Zephyr SDK
- Commit 89212a7fbf5fcd8e4d661c016344ae4bf2d46f53
| priority | net ip assertion fails when tcp send data with zero length packet describe the bug when testing tcp stack assertion fails as following assertion fail zephyr zephyrmaster zephyr subsys net buf c zephyr zephyrmaster zephyr lib os assert c e zephyr fatal error kernel panic on cpu e current thread tcp work e halting system the flow of this failure is tcp work thread tcp resend data tcp send data send data totoal and unacked len are all in this case len conn send data total conn unacked len conn send win conn unacked len conn mss conn zero length packet pkt buffer null allocated pkt tcp pkt alloc conn len ret tcp pkt peek pkt conn send data pos len zero length packet passed to tcp out ext ret tcp out ext conn psh ack pkt conn seq conn unacked len tcp out ext net pkt append buffer pkt data buffer null net buf frag insert buffer null assert no msg frag assertion fail to solve don t proceed if tcp resend length is zero len conn send data total conn unacked len conn send win conn unacked len conn mss conn if len goto out environment os linux toolchain zephyr sdk commit | 1 |
796,217 | 28,102,406,553 | IssuesEvent | 2023-03-30 20:37:46 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Tablet splitting should be blocked on system_postgres.sequences_data | kind/enhancement area/docdb priority/medium 2.14 Backport Required 2.16 Backport Required | Jira Link: [DB-5577](https://yugabyte.atlassian.net/browse/DB-5577)
### Description
Tablet splitting can currently be applied to the `system_postgres.sequences_data`. This is a metadata table, even though it is not a system catalog table, and therefore similar to system catalog tables, tablet splitting should be disabled for it.
Reproduction:
```
.bin/yb-admin --master_addresses 127.0.0.1:7100 list_tablets ysql.system_postgres sequences_data
.bin/yb-admin --master_addresses 127.0.0.1:7100 split_tablet [tablet id]
```
[DB-5577]: https://yugabyte.atlassian.net/browse/DB-5577?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [DocDB] Tablet splitting should be blocked on system_postgres.sequences_data - Jira Link: [DB-5577](https://yugabyte.atlassian.net/browse/DB-5577)
### Description
Tablet splitting can currently be applied to the `system_postgres.sequences_data`. This is a metadata table, even though it is not a system catalog table, and therefore similar to system catalog tables, tablet splitting should be disabled for it.
Reproduction:
```
.bin/yb-admin --master_addresses 127.0.0.1:7100 list_tablets ysql.system_postgres sequences_data
.bin/yb-admin --master_addresses 127.0.0.1:7100 split_tablet [tablet id]
```
[DB-5577]: https://yugabyte.atlassian.net/browse/DB-5577?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | tablet splitting should be blocked on system postgres sequences data jira link description tablet splitting can currently be applied to the system postgres sequences data this is a metadata table even though it is not a system catalog table and therefore similar to system catalog tables tablet splitting should be disabled for it reproduction bin yb admin master addresses list tablets ysql system postgres sequences data bin yb admin master addresses split tablet | 1 |
690,192 | 23,648,910,081 | IssuesEvent | 2022-08-26 03:21:18 | Kong/gateway-operator | https://api.github.com/repos/Kong/gateway-operator | closed | Update dataplane env vars in controlplane when dataplane service changes | size/medium area/maintenance priority/medium | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem Statement
The `ControlPlane` deployments are pointed to the correct `DataPlane` service via 2 different env vars: `CONTROLLER_PUBLISH_SERVICE`, and `CONTROLLER_KONG_ADMIN_URL`. These 2 vars contain the correct dataplane service address to talk to. With https://github.com/Kong/gateway-operator/pull/91, when the `DataPlane` service is deleted, it is recreated automatically by the `dataplane-controller`; the problem is that the service name is not static, but created through the `generateName` field. Hence, every time the `DataPlane`service is recreated by the `dataplane-controller`, the `controlplane-controller` should also update the `ControlPlane` deployment env vars.
### Proposed Solution
We should trigger an update on the `controlplane-controller` every time the `DataPlane` service changes. To do so, we can watch the `Dataplane` services in the `controlplane-controller` and trigger a reconciliation loop every time an event occurs.
### Additional information
_No response_
### Acceptance Criteria
- [ ] Every time the dataplane service is created, the dataplane service env vars are set in the controlplane deployment
- [ ] Every time the dataplane service is deleted, the dataplane service env vars are unset from the controlplane deployment | 1.0 | Update dataplane env vars in controlplane when dataplane service changes - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem Statement
The `ControlPlane` deployments are pointed to the correct `DataPlane` service via 2 different env vars: `CONTROLLER_PUBLISH_SERVICE`, and `CONTROLLER_KONG_ADMIN_URL`. These 2 vars contain the correct dataplane service address to talk to. With https://github.com/Kong/gateway-operator/pull/91, when the `DataPlane` service is deleted, it is recreated automatically by the `dataplane-controller`; the problem is that the service name is not static, but created through the `generateName` field. Hence, every time the `DataPlane`service is recreated by the `dataplane-controller`, the `controlplane-controller` should also update the `ControlPlane` deployment env vars.
### Proposed Solution
We should trigger an update on the `controlplane-controller` every time the `DataPlane` service changes. To do so, we can watch the `Dataplane` services in the `controlplane-controller` and trigger a reconciliation loop every time an event occurs.
### Additional information
_No response_
### Acceptance Criteria
- [ ] Every time the dataplane service is created, the dataplane service env vars are set in the controlplane deployment
- [ ] Every time the dataplane service is deleted, the dataplane service env vars are unset from the controlplane deployment | priority | update dataplane env vars in controlplane when dataplane service changes is there an existing issue for this i have searched the existing issues problem statement the controlplane deployments are pointed to the correct dataplane service via different env vars controller publish service and controller kong admin url these vars contain the correct dataplane service address to talk to with when the dataplane service is deleted it is recreated automatically by the dataplane controller the problem is that the service name is not static but created through the generatename field hence every time the dataplane service is recreated by the dataplane controller the controlplane controller should also update the controlplane deployment env vars proposed solution we should trigger an update on the controlplane controller every time the dataplane service changes to do so we can watch the dataplane services in the controlplane controller and trigger a reconciliation loop every time an event occurs additional information no response acceptance criteria every time the dataplane service is created the dataplane service env vars are set in the controlplane deployment every time the dataplane service is deleted the dataplane service env vars are unset from the controlplane deployment | 1 |
267,345 | 8,388,015,029 | IssuesEvent | 2018-10-09 03:52:22 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | opened | Implement sampling strategy in analytics | Priority-Medium enhancement | We will most likely overwhelm the free tier in google analytics, but there's an easy answer - sampling strategy.
Need to implement this in the upstream library and tune it hear based on the usage we see | 1.0 | Implement sampling strategy in analytics - We will most likely overwhelm the free tier in google analytics, but there's an easy answer - sampling strategy.
Need to implement this in the upstream library and tune it hear based on the usage we see | priority | implement sampling strategy in analytics we will most likely overwhelm the free tier in google analytics but there s an easy answer sampling strategy need to implement this in the upstream library and tune it hear based on the usage we see | 1 |
794,887 | 28,053,515,452 | IssuesEvent | 2023-03-29 07:51:22 | masastack/MASA.MC | https://api.github.com/repos/masastack/MASA.MC | closed | Send messages and upload recipients in batches. After uploading files, select File Upload again. The upload process is displayed | type/bug status/resolved severity/medium site/staging priority/p2 | 发送消息,批量上传收件人,上传文件后,再次选择文件上传,应显示上传的过程
 | 1.0 | Send messages and upload recipients in batches. After uploading files, select File Upload again. The upload process is displayed - 发送消息,批量上传收件人,上传文件后,再次选择文件上传,应显示上传的过程
 | priority | send messages and upload recipients in batches after uploading files select file upload again the upload process is displayed 发送消息,批量上传收件人,上传文件后,再次选择文件上传,应显示上传的过程 | 1 |
770,685 | 27,050,682,526 | IssuesEvent | 2023-02-13 13:02:48 | union-platform/union-app | https://api.github.com/repos/union-platform/union-app | closed | Rework functions to expect full data structure, not PK only | type: enhancement priority: medium ss: backend | # Clarification and motivation
<!--
Clarify what you want to be done and why.
-->
With current implementation there two problems:
1. It's easier to construct wrong argument and receive error at some point.
2. Sql exception in case of non existing foreign key will be translated as 500, but in some handlers we should trait it as 404 because provided pk is part of url.
# Acceptance criteria
<!--
Clarify how we can verify that the task is done.
-->
All functions where it's possible receives full data as argument and extracts PK in local scope. | 1.0 | Rework functions to expect full data structure, not PK only - # Clarification and motivation
<!--
Clarify what you want to be done and why.
-->
With current implementation there two problems:
1. It's easier to construct wrong argument and receive error at some point.
2. Sql exception in case of non existing foreign key will be translated as 500, but in some handlers we should trait it as 404 because provided pk is part of url.
# Acceptance criteria
<!--
Clarify how we can verify that the task is done.
-->
All functions where it's possible receives full data as argument and extracts PK in local scope. | priority | rework functions to expect full data structure not pk only clarification and motivation clarify what you want to be done and why with current implementation there two problems it s easier to construct wrong argument and receive error at some point sql exception in case of non existing foreign key will be translated as but in some handlers we should trait it as because provided pk is part of url acceptance criteria clarify how we can verify that the task is done all functions where it s possible receives full data as argument and extracts pk in local scope | 1 |
670,723 | 22,701,391,129 | IssuesEvent | 2022-07-05 10:59:46 | FEeasy404/GameUs | https://api.github.com/repos/FEeasy404/GameUs | opened | 채팅 목록 마크업 구현 | ✨new feature 🖐Priority: Medium | ## 추가 기능 설명
채팅 목록과 채팅방은 마크업 구현 및 스타일 적용만 진행합니다. 현재 대화가 진행 중인 채팅 목록이 표시됩니다.
## 할 일
- [ ] 채팅 목록 리스트 컴포넌트 생성
- [ ] 채팅 목록 페이지 구현
## ETC
| 1.0 | 채팅 목록 마크업 구현 - ## 추가 기능 설명
채팅 목록과 채팅방은 마크업 구현 및 스타일 적용만 진행합니다. 현재 대화가 진행 중인 채팅 목록이 표시됩니다.
## 할 일
- [ ] 채팅 목록 리스트 컴포넌트 생성
- [ ] 채팅 목록 페이지 구현
## ETC
| priority | 채팅 목록 마크업 구현 추가 기능 설명 채팅 목록과 채팅방은 마크업 구현 및 스타일 적용만 진행합니다 현재 대화가 진행 중인 채팅 목록이 표시됩니다 할 일 채팅 목록 리스트 컴포넌트 생성 채팅 목록 페이지 구현 etc | 1 |
756,317 | 26,466,441,706 | IssuesEvent | 2023-01-17 00:29:27 | cs-utulsa/Encrypted-Chat-Service | https://api.github.com/repos/cs-utulsa/Encrypted-Chat-Service | closed | Coding: GUI Working with Chat | coding Priority 2 Medium Effort | Issue exists to ensure all screens created in other issues work with the chat class and have all existing functionality included.
- [x] Properly commented
- [x] GUIs do not block use of functions
- [x] All functions available to user via the GUI
- [x] Add Boolean at top of chat to indicate GUI or console mode. Functionality for both must be put in place as part of this. | 1.0 | Coding: GUI Working with Chat - Issue exists to ensure all screens created in other issues work with the chat class and have all existing functionality included.
- [x] Properly commented
- [x] GUIs do not block use of functions
- [x] All functions available to user via the GUI
- [x] Add Boolean at top of chat to indicate GUI or console mode. Functionality for both must be put in place as part of this. | priority | coding gui working with chat issue exists to ensure all screens created in other issues work with the chat class and have all existing functionality included properly commented guis do not block use of functions all functions available to user via the gui add boolean at top of chat to indicate gui or console mode functionality for both must be put in place as part of this | 1 |
581,654 | 17,314,079,662 | IssuesEvent | 2021-07-27 01:54:41 | lewisjwilson/kmj | https://api.github.com/repos/lewisjwilson/kmj | opened | Fix percentage in FlashcardsComplete.kt | Medium Priority bug | Easy fix, percentage is showing to too many sf. Change to 1/2 s.f.
Also change text to `Accuracy ${value}`? | 1.0 | Fix percentage in FlashcardsComplete.kt - Easy fix, percentage is showing to too many sf. Change to 1/2 s.f.
Also change text to `Accuracy ${value}`? | priority | fix percentage in flashcardscomplete kt easy fix percentage is showing to too many sf change to s f also change text to accuracy value | 1 |
84,276 | 3,662,593,620 | IssuesEvent | 2016-02-19 00:02:48 | sandialabs/slycat | https://api.github.com/repos/sandialabs/slycat | opened | Scientific Notation not supported in hyperchunk query language | bug Medium Priority PS Model | Filtering using either a slider or categorical buttons fails if the input data includes values using scientific notation. HQL doesn't handle the 'e-' portion of the number. | 1.0 | Scientific Notation not supported in hyperchunk query language - Filtering using either a slider or categorical buttons fails if the input data includes values using scientific notation. HQL doesn't handle the 'e-' portion of the number. | priority | scientific notation not supported in hyperchunk query language filtering using either a slider or categorical buttons fails if the input data includes values using scientific notation hql doesn t handle the e portion of the number | 1 |
548,588 | 16,067,153,218 | IssuesEvent | 2021-04-23 21:11:32 | Qiskit/qiskit-terra | https://api.github.com/repos/Qiskit/qiskit-terra | closed | Incorrect deserialization result with ByType polymorphic field | bug priority: medium | # Information
- **Qiskit Terra version**: 0.10.0
- **Python version**: 3.7.3
- **Operating system**: Darwin Kernel Version 18.7.0
### What is the current behavior?
When using polymorphic field `qiskit.validation.fields.ByType` combined with multi-nested schemas leads to incorrect deserialization result. If you have a nested schema listed in `ByType` polymorphic field, it might not get correctly deserialized. An example is provided below.
### Steps to reproduce the problem
Consider the following setup:
```
from qiskit.validation import BaseSchema
from qiskit.validation.fields import ByType, Nested, Integer, String
class IntSchema(BaseSchema):
int_field = Integer(required=True)
class FirstNestedSchema(BaseSchema):
string = String(required=True)
class SecondNestedchema(BaseSchema):
nested_int = Nested(IntSchema)
class TestSchema(BaseSchema):
test = ByType([Nested(FirstNestedSchema), Nested(SecondNestedchema)])
```
And corresponding model bindings:
```
from qiskit.validation import BaseModel, bind_schema
@bind_schema(TestSchema)
class TestModel(BaseModel):
pass
@bind_schema(SecondNestedchema)
class TestSecondNested(BaseModel):
pass
@bind_schema(IntSchema)
class TestInt(BaseModel):
pass
```
The following test case will always fail:
```
from qiskit.test import QiskitTestCase
class Test(QiskitTestCase):
def test_schema(self):
int_model = TestInt(int_field=1)
second_nested = TestSecondNested(nested_int=int_model)
result = TestModel(test=second_nested).to_dict()
# result == {'test': {'nested_int': TestInt(int_field=1)}}
expected = {'test': {'nested_int': {'int_field': 1}}}
self.assertEqual(result, expected)
```
### What is the expected behavior?
When you create a dict from given schemas:
```
int_model = TestInt(int_field=1)
second_nested = TestSecondNested(nested_int=int_model)
result = TestModel(test=second_nested).to_dict()
```
`result` dict is supposed to match the following dict:
```
{'test': {'nested_int': {'int_field': 1}}}
```
Instead, `TestInt` will not get deserialized properly, and will result in:
```
{'test': {'nested_int': TestInt(int_field=1)}}
```
### Observations
Interestingly enough, the deserialization result depends on the **order of nested schemas** in the list of `ByType` fields.
For example, if we declare `TestSchema` as:
```
class TestSchema(BaseSchema):
test = ByType([Nested(SecondNestedchema), Nested(FirstNestedSchema)])
```
i.e. changing the order of nested schemas or swapping `FirstNestedSchema` and `SecondNestedchema` it will result in correct deserialized dict:
```
{'test': {'nested_int': {'int_field': 1}}}
```
¯\\\_(ツ)_/¯ | 1.0 | Incorrect deserialization result with ByType polymorphic field - # Information
- **Qiskit Terra version**: 0.10.0
- **Python version**: 3.7.3
- **Operating system**: Darwin Kernel Version 18.7.0
### What is the current behavior?
When using polymorphic field `qiskit.validation.fields.ByType` combined with multi-nested schemas leads to incorrect deserialization result. If you have a nested schema listed in `ByType` polymorphic field, it might not get correctly deserialized. An example is provided below.
### Steps to reproduce the problem
Consider the following setup:
```
from qiskit.validation import BaseSchema
from qiskit.validation.fields import ByType, Nested, Integer, String
class IntSchema(BaseSchema):
int_field = Integer(required=True)
class FirstNestedSchema(BaseSchema):
string = String(required=True)
class SecondNestedchema(BaseSchema):
nested_int = Nested(IntSchema)
class TestSchema(BaseSchema):
test = ByType([Nested(FirstNestedSchema), Nested(SecondNestedchema)])
```
And corresponding model bindings:
```
from qiskit.validation import BaseModel, bind_schema
@bind_schema(TestSchema)
class TestModel(BaseModel):
pass
@bind_schema(SecondNestedchema)
class TestSecondNested(BaseModel):
pass
@bind_schema(IntSchema)
class TestInt(BaseModel):
pass
```
The following test case will always fail:
```
from qiskit.test import QiskitTestCase
class Test(QiskitTestCase):
def test_schema(self):
int_model = TestInt(int_field=1)
second_nested = TestSecondNested(nested_int=int_model)
result = TestModel(test=second_nested).to_dict()
# result == {'test': {'nested_int': TestInt(int_field=1)}}
expected = {'test': {'nested_int': {'int_field': 1}}}
self.assertEqual(result, expected)
```
### What is the expected behavior?
When you create a dict from given schemas:
```
int_model = TestInt(int_field=1)
second_nested = TestSecondNested(nested_int=int_model)
result = TestModel(test=second_nested).to_dict()
```
`result` dict is supposed to match the following dict:
```
{'test': {'nested_int': {'int_field': 1}}}
```
Instead, `TestInt` will not get deserialized properly, and will result in:
```
{'test': {'nested_int': TestInt(int_field=1)}}
```
### Observations
Interestingly enough, the deserialization result depends on the **order of nested schemas** in the list of `ByType` fields.
For example, if we declare `TestSchema` as:
```
class TestSchema(BaseSchema):
test = ByType([Nested(SecondNestedchema), Nested(FirstNestedSchema)])
```
i.e. changing the order of nested schemas or swapping `FirstNestedSchema` and `SecondNestedchema` it will result in correct deserialized dict:
```
{'test': {'nested_int': {'int_field': 1}}}
```
¯\\\_(ツ)_/¯ | priority | incorrect deserialization result with bytype polymorphic field information qiskit terra version python version operating system darwin kernel version what is the current behavior when using polymorphic field qiskit validation fields bytype combined with multi nested schemas leads to incorrect deserialization result if you have a nested schema listed in bytype polymorphic field it might not get correctly deserialized an example is provided below steps to reproduce the problem consider the following setup from qiskit validation import baseschema from qiskit validation fields import bytype nested integer string class intschema baseschema int field integer required true class firstnestedschema baseschema string string required true class secondnestedchema baseschema nested int nested intschema class testschema baseschema test bytype and corresponding model bindings from qiskit validation import basemodel bind schema bind schema testschema class testmodel basemodel pass bind schema secondnestedchema class testsecondnested basemodel pass bind schema intschema class testint basemodel pass the following test case will always fail from qiskit test import qiskittestcase class test qiskittestcase def test schema self int model testint int field second nested testsecondnested nested int int model result testmodel test second nested to dict result test nested int testint int field expected test nested int int field self assertequal result expected what is the expected behavior when you create a dict from given schemas int model testint int field second nested testsecondnested nested int int model result testmodel test second nested to dict result dict is supposed to match the following dict test nested int int field instead testint will not get deserialized properly and will result in test nested int testint int field observations interestingly enough the deserialization result depends on the order of nested schemas in the list of bytype fields for example if we declare testschema as class testschema baseschema test bytype i e changing the order of nested schemas or swapping firstnestedschema and secondnestedchema it will result in correct deserialized dict test nested int int field ¯ ツ ¯ | 1 |
532,964 | 15,574,451,210 | IssuesEvent | 2021-03-17 09:51:55 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | workspace admin can not view the workspace quote management. | kind/bug priority/medium | **Describe**
A user with global role `regular` or `Cluster Management` and workspace role `admin` can not view the workspace quote management.
**Environment**
kubespheredev/ks-console:latest
**Preset conditions**
A user with global role `regular` and workspace role `admin`
**Expected behavior**
A user with global role `regular` and workspace role `admin` can view the workspace quote management.
**Actual behavior**

/kind bug
/assign @wansir
/milestone 3.1.0
/priority medium
| 1.0 | workspace admin can not view the workspace quote management. - **Describe**
A user with global role `regular` or `Cluster Management` and workspace role `admin` can not view the workspace quote management.
**Environment**
kubespheredev/ks-console:latest
**Preset conditions**
A user with global role `regular` and workspace role `admin`
**Expected behavior**
A user with global role `regular` and workspace role `admin` can view the workspace quote management.
**Actual behavior**

/kind bug
/assign @wansir
/milestone 3.1.0
/priority medium
| priority | workspace admin can not view the workspace quote management describe a user with global role regular or cluster management and workspace role admin can not view the workspace quote management environment kubespheredev ks console latest preset conditions a user with global role regular and workspace role admin expected behavior a user with global role regular and workspace role admin can view the workspace quote management actual behavior kind bug assign wansir milestone priority medium | 1 |
80,097 | 3,550,700,507 | IssuesEvent | 2016-01-20 23:03:07 | ualbertalib/discovery | https://api.github.com/repos/ualbertalib/discovery | opened | Find more by this author link not providing results | bug Medium priority | Clicking on [Cordasco, Francesco, 1920-2001](https://www.library.ualberta.ca/catalog?f%5Bauthor_display%5D%5B%5D=Cordasco,+Francesco,+1920-2001) from [this record](https://www.library.ualberta.ca/catalog/466240) leads to the following screen:

NEOS shows 52 items for this author. | 1.0 | Find more by this author link not providing results - Clicking on [Cordasco, Francesco, 1920-2001](https://www.library.ualberta.ca/catalog?f%5Bauthor_display%5D%5B%5D=Cordasco,+Francesco,+1920-2001) from [this record](https://www.library.ualberta.ca/catalog/466240) leads to the following screen:

NEOS shows 52 items for this author. | priority | find more by this author link not providing results clicking on from leads to the following screen neos shows items for this author | 1 |
738,186 | 25,548,414,315 | IssuesEvent | 2022-11-29 21:02:07 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] `yb_build.sh --no-tests reinitdb` fails | kind/bug area/ysql priority/medium status/awaiting-triage | Jira Link: [DB-4276](https://yugabyte.atlassian.net/browse/DB-4276)
### Description
Error message:
```
Standard output from external program {{ '/home/amartsin/code/yugabyte-db/build-support/run-test.sh' /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/tests-pgwrapper/create_initial_sys_catalog_snapshot }} running in '/home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja', saving stdout to {{ /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/create_initial_sys_catalog_snapshot.out }}, stderr to {{ /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/create_initial_sys_catalog_snapshot.err }}:
Test is running on host amartsin-laptop, arguments: /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/tests-pgwrapper/create_initial_sys_catalog_snapshot
/home/amartsin/code/yugabyte-db/build-support/common-test-env.sh: line 912: /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/bin/run-with-timeout: No such file or directory
(end of standard output)
```
Internally the `initial_sys_catalog_snapshot` build target uses run-with-timeout tool, which considered a part of test infrastructure and is not built if `--no-tests` is specified. | 1.0 | [YSQL] `yb_build.sh --no-tests reinitdb` fails - Jira Link: [DB-4276](https://yugabyte.atlassian.net/browse/DB-4276)
### Description
Error message:
```
Standard output from external program {{ '/home/amartsin/code/yugabyte-db/build-support/run-test.sh' /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/tests-pgwrapper/create_initial_sys_catalog_snapshot }} running in '/home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja', saving stdout to {{ /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/create_initial_sys_catalog_snapshot.out }}, stderr to {{ /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/create_initial_sys_catalog_snapshot.err }}:
Test is running on host amartsin-laptop, arguments: /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/tests-pgwrapper/create_initial_sys_catalog_snapshot
/home/amartsin/code/yugabyte-db/build-support/common-test-env.sh: line 912: /home/amartsin/code/yugabyte-db/build/debug-clang15-dynamic-ninja/bin/run-with-timeout: No such file or directory
(end of standard output)
```
Internally the `initial_sys_catalog_snapshot` build target uses run-with-timeout tool, which considered a part of test infrastructure and is not built if `--no-tests` is specified. | priority | yb build sh no tests reinitdb fails jira link description error message standard output from external program home amartsin code yugabyte db build support run test sh home amartsin code yugabyte db build debug dynamic ninja tests pgwrapper create initial sys catalog snapshot running in home amartsin code yugabyte db build debug dynamic ninja saving stdout to home amartsin code yugabyte db build debug dynamic ninja create initial sys catalog snapshot out stderr to home amartsin code yugabyte db build debug dynamic ninja create initial sys catalog snapshot err test is running on host amartsin laptop arguments home amartsin code yugabyte db build debug dynamic ninja tests pgwrapper create initial sys catalog snapshot home amartsin code yugabyte db build support common test env sh line home amartsin code yugabyte db build debug dynamic ninja bin run with timeout no such file or directory end of standard output internally the initial sys catalog snapshot build target uses run with timeout tool which considered a part of test infrastructure and is not built if no tests is specified | 1 |
689,796 | 23,634,522,521 | IssuesEvent | 2022-08-25 12:16:02 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [CDCSDK] Stale entry in CDC Cache causes Steam Expiration. | priority/medium 2.12 Backport Required 2.14 Backport Required | Consider a cluster with 3 tservers (TS1, TS2, TS3) and a table with a single tablet. Today we maintain and track the stream active time in the cache. During starting time tablet LEADER is TS1, so there is a Cache entry for the tablet, to track its active time. After some time TS2 becomes the tablet LEADER, so an entry will be created in TS2's cache to track the active time of the tablet. now after cdc_intent_retention_ms expiration time, TS1 becomes a LEADER, but its existing cache entry is not in sync, so if we call GetChanges stream will expire.
| 1.0 | [CDCSDK] Stale entry in CDC Cache causes Steam Expiration. - Consider a cluster with 3 tservers (TS1, TS2, TS3) and a table with a single tablet. Today we maintain and track the stream active time in the cache. During starting time tablet LEADER is TS1, so there is a Cache entry for the tablet, to track its active time. After some time TS2 becomes the tablet LEADER, so an entry will be created in TS2's cache to track the active time of the tablet. now after cdc_intent_retention_ms expiration time, TS1 becomes a LEADER, but its existing cache entry is not in sync, so if we call GetChanges stream will expire.
| priority | stale entry in cdc cache causes steam expiration consider a cluster with tservers and a table with a single tablet today we maintain and track the stream active time in the cache during starting time tablet leader is so there is a cache entry for the tablet to track its active time after some time becomes the tablet leader so an entry will be created in s cache to track the active time of the tablet now after cdc intent retention ms expiration time becomes a leader but its existing cache entry is not in sync so if we call getchanges stream will expire | 1 |
459,142 | 13,187,092,154 | IssuesEvent | 2020-08-13 02:19:27 | Twin-Cities-Mutual-Aid/twin-cities-aid-distribution-locations | https://api.github.com/repos/Twin-Cities-Mutual-Aid/twin-cities-aid-distribution-locations | closed | Replacing the back end sheet with a copy | Priority: Medium Type: Discussion Type: Feature Request From TCMAP Type: Maintenance | The current owner of the Google Sheets document that is our backend...is someone who hasn't been part of this project for more than a month and hasn't answered emails. I'd like to do a "make a copy" on it so that one of our registered email addresses can become the owner, to remove the possibility that she cleans up her Drive and deletes the whole thing.
Here are things I know will need to change to make that happen:
- The public version of the data will need to draw from the new private sheet
- Everyone who has edit access will need to be given access to the new sheet/lose access to the old one so they don't update in the wrong place
- Pinned links/Slackbot auto-responses that spit out the link will need to be updated
@maxine and/or @mc-funk, what will need to be done in Twilio to make sure texts go to the new sheet?
Everyone else, what else am I missing? | 1.0 | Replacing the back end sheet with a copy - The current owner of the Google Sheets document that is our backend...is someone who hasn't been part of this project for more than a month and hasn't answered emails. I'd like to do a "make a copy" on it so that one of our registered email addresses can become the owner, to remove the possibility that she cleans up her Drive and deletes the whole thing.
Here are things I know will need to change to make that happen:
- The public version of the data will need to draw from the new private sheet
- Everyone who has edit access will need to be given access to the new sheet/lose access to the old one so they don't update in the wrong place
- Pinned links/Slackbot auto-responses that spit out the link will need to be updated
@maxine and/or @mc-funk, what will need to be done in Twilio to make sure texts go to the new sheet?
Everyone else, what else am I missing? | priority | replacing the back end sheet with a copy the current owner of the google sheets document that is our backend is someone who hasn t been part of this project for more than a month and hasn t answered emails i d like to do a make a copy on it so that one of our registered email addresses can become the owner to remove the possibility that she cleans up her drive and deletes the whole thing here are things i know will need to change to make that happen the public version of the data will need to draw from the new private sheet everyone who has edit access will need to be given access to the new sheet lose access to the old one so they don t update in the wrong place pinned links slackbot auto responses that spit out the link will need to be updated maxine and or mc funk what will need to be done in twilio to make sure texts go to the new sheet everyone else what else am i missing | 1 |
741,296 | 25,787,875,737 | IssuesEvent | 2022-12-09 22:44:25 | scs-lab/ChronoLog | https://api.github.com/repos/scs-lab/ChronoLog | opened | Add show Chronicle/Story to ClientAPI | medium priority | Add two methods to ClientAPI that would provide a list of chronicles and /or stories on the cluster that the client is authorized to access.
showChronicles() - returns a list of chronicle names
showStories(const string& chronicle_name) - returns a list of story names that belong to a chronicle given the chronicle name | 1.0 | Add show Chronicle/Story to ClientAPI - Add two methods to ClientAPI that would provide a list of chronicles and /or stories on the cluster that the client is authorized to access.
showChronicles() - returns a list of chronicle names
showStories(const string& chronicle_name) - returns a list of story names that belong to a chronicle given the chronicle name | priority | add show chronicle story to clientapi add two methods to clientapi that would provide a list of chronicles and or stories on the cluster that the client is authorized to access showchronicles returns a list of chronicle names showstories const string chronicle name returns a list of story names that belong to a chronicle given the chronicle name | 1 |
54,969 | 3,071,738,239 | IssuesEvent | 2015-08-19 13:48:40 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Ошибки в отображении мгновенной скорости сразу после старта передачи | bug imported Priority-Medium | _From [infinitysky7](https://code.google.com/u/infinitysky7/) on November 25, 2010 13:14:06_
Неверное отображение текущей скорости, скачки. На скриншоте канал 100 Мбит (12,5 Миб\с).
**Attachment:** [Speed.png](http://code.google.com/p/flylinkdc/issues/detail?id=230)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=230_ | 1.0 | Ошибки в отображении мгновенной скорости сразу после старта передачи - _From [infinitysky7](https://code.google.com/u/infinitysky7/) on November 25, 2010 13:14:06_
Неверное отображение текущей скорости, скачки. На скриншоте канал 100 Мбит (12,5 Миб\с).
**Attachment:** [Speed.png](http://code.google.com/p/flylinkdc/issues/detail?id=230)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=230_ | priority | ошибки в отображении мгновенной скорости сразу после старта передачи from on november неверное отображение текущей скорости скачки на скриншоте канал мбит миб с attachment original issue | 1 |
241,632 | 7,818,430,332 | IssuesEvent | 2018-06-13 12:18:52 | weglot/translate-laravel | https://api.github.com/repos/weglot/translate-laravel | closed | Update request tracking | priority: medium status: confirmed type: enhancement | <!-- This form is for bug reports and feature requests ONLY! -->
**Is this a BUG REPORT or FEATURE REQUEST?**: FEATURE REQUEST
**What happened**: Update request tracking with new one | 1.0 | Update request tracking - <!-- This form is for bug reports and feature requests ONLY! -->
**Is this a BUG REPORT or FEATURE REQUEST?**: FEATURE REQUEST
**What happened**: Update request tracking with new one | priority | update request tracking is this a bug report or feature request feature request what happened update request tracking with new one | 1 |
271,828 | 8,489,991,905 | IssuesEvent | 2018-10-26 22:02:29 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | closed | Scope - Create/Edit Resource workflow | (3) Large Effort Medium Priority enhancement | This issue will be a checklist of all of the separate functionality to develop for implementing the create/edit workflow updates found in the Scope work.
* [ ] #???? Ensure and enforce empty resources to be unable to be marked public
* [ ] #???? Open Resource in edit mode when a resource is created
* [ ] #???? Hide Ratings and comments in Edit mode
* [ ] #???? Include on-hover information boxes for each item on the resource page in edit mode. Make the information boxes editable in mezzanine interface.
* [ ] #???? Dynamic keyword suggestions based on existing keywords as user is typing. Setup a page that is editable through mezzanine for hydroshare common keywords.
* [ ] #???? sharing status rules revamp
This issue is derived from the Creating a Resource and Editing it section in https://docs.google.com/document/d/1td4-tb-cMOjeHZt0cJGE8xVSSlBL4vG3MUzkZtpL4r0
| 1.0 | Scope - Create/Edit Resource workflow - This issue will be a checklist of all of the separate functionality to develop for implementing the create/edit workflow updates found in the Scope work.
* [ ] #???? Ensure and enforce empty resources to be unable to be marked public
* [ ] #???? Open Resource in edit mode when a resource is created
* [ ] #???? Hide Ratings and comments in Edit mode
* [ ] #???? Include on-hover information boxes for each item on the resource page in edit mode. Make the information boxes editable in mezzanine interface.
* [ ] #???? Dynamic keyword suggestions based on existing keywords as user is typing. Setup a page that is editable through mezzanine for hydroshare common keywords.
* [ ] #???? sharing status rules revamp
This issue is derived from the Creating a Resource and Editing it section in https://docs.google.com/document/d/1td4-tb-cMOjeHZt0cJGE8xVSSlBL4vG3MUzkZtpL4r0
| priority | scope create edit resource workflow this issue will be a checklist of all of the separate functionality to develop for implementing the create edit workflow updates found in the scope work ensure and enforce empty resources to be unable to be marked public open resource in edit mode when a resource is created hide ratings and comments in edit mode include on hover information boxes for each item on the resource page in edit mode make the information boxes editable in mezzanine interface dynamic keyword suggestions based on existing keywords as user is typing setup a page that is editable through mezzanine for hydroshare common keywords sharing status rules revamp this issue is derived from the creating a resource and editing it section in | 1 |
522,646 | 15,164,501,648 | IssuesEvent | 2021-02-12 13:49:56 | erlang/otp | https://api.github.com/repos/erlang/otp | closed | ERL-1465: Child erl.exe process must be killed when erlsrv.exe is terminated | bug priority:medium team:VM |
Original reporter: `ilyan`
Affected version: `Not Specified`
Component: `Not Specified`
Migrated from: https://bugs.erlang.org/browse/ERL-1465
---
```
Killing erlsrv.exe (manually or 30 seconds after stopping the service) leaves erl.exe running.
The service can't be started until erl.exe is manually killed.
This link explains how to create child processes in the same job:
[https://stackoverflow.com/questions/24012773/c-winapi-how-to-kill-child-processes-when-the-calling-parent-process-is-for/24020820]
```
| 1.0 | ERL-1465: Child erl.exe process must be killed when erlsrv.exe is terminated -
Original reporter: `ilyan`
Affected version: `Not Specified`
Component: `Not Specified`
Migrated from: https://bugs.erlang.org/browse/ERL-1465
---
```
Killing erlsrv.exe (manually or 30 seconds after stopping the service) leaves erl.exe running.
The service can't be started until erl.exe is manually killed.
This link explains how to create child processes in the same job:
[https://stackoverflow.com/questions/24012773/c-winapi-how-to-kill-child-processes-when-the-calling-parent-process-is-for/24020820]
```
| priority | erl child erl exe process must be killed when erlsrv exe is terminated original reporter ilyan affected version not specified component not specified migrated from killing erlsrv exe manually or seconds after stopping the service leaves erl exe running the service can t be started until erl exe is manually killed this link explains how to create child processes in the same job | 1 |
790,082 | 27,815,055,400 | IssuesEvent | 2023-03-18 15:34:36 | WordPress/openverse | https://api.github.com/repos/WordPress/openverse | opened | Line heights are inconsistent | 🟨 priority: medium 🛠 goal: fix 🕹 aspect: interface 🧱 stack: frontend | ## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
Tailwind config parameter for setting custom line heights is called `lineHeight`. In our `tailwind.config` it is called `lineHeights` (with an extra s at the end). Because of this, all the named line heights (larger, large, normal, snug, tight) fall back on the Tailwind default values instead of using our custom values. This makes our line heights inconsistent.
We need to rename `lineHeights` to `lineHeight` in `tailwind.config` and update all the snapshots.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
Check the homepage h1 element. Instead of using our `tight` value of 1.2 for line height, it uses the Tailwind default of 1.25.
## Additional context
There is a `FIXME` comment about it in the code. We did not update it when developing the new header/footer/homepage to prevent a massive update in snapshots. Now, it should be useful to update before the `Core UI improvements` project. | 1.0 | Line heights are inconsistent - ## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
Tailwind config parameter for setting custom line heights is called `lineHeight`. In our `tailwind.config` it is called `lineHeights` (with an extra s at the end). Because of this, all the named line heights (larger, large, normal, snug, tight) fall back on the Tailwind default values instead of using our custom values. This makes our line heights inconsistent.
We need to rename `lineHeights` to `lineHeight` in `tailwind.config` and update all the snapshots.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
Check the homepage h1 element. Instead of using our `tight` value of 1.2 for line height, it uses the Tailwind default of 1.25.
## Additional context
There is a `FIXME` comment about it in the code. We did not update it when developing the new header/footer/homepage to prevent a massive update in snapshots. Now, it should be useful to update before the `Core UI improvements` project. | priority | line heights are inconsistent description tailwind config parameter for setting custom line heights is called lineheight in our tailwind config it is called lineheights with an extra s at the end because of this all the named line heights larger large normal snug tight fall back on the tailwind default values instead of using our custom values this makes our line heights inconsistent we need to rename lineheights to lineheight in tailwind config and update all the snapshots reproduction check the homepage element instead of using our tight value of for line height it uses the tailwind default of additional context there is a fixme comment about it in the code we did not update it when developing the new header footer homepage to prevent a massive update in snapshots now it should be useful to update before the core ui improvements project | 1 |
426,419 | 12,372,216,126 | IssuesEvent | 2020-05-18 19:59:21 | Big-Brain-Crew/learn_ml | https://api.github.com/repos/Big-Brain-Crew/learn_ml | opened | Define streaming interface for inference results | feature medium priority | Create an interface to stream data from the Coral board to microcontrollers and other computers.
Need to be able to access:
* Raw sensor feeds
* Inference results
Additionally, there must be an interface on the master computer/microcontroller that makes it easy to parse the transmitted data. For example, a python package, Arduino Package, and UI. | 1.0 | Define streaming interface for inference results - Create an interface to stream data from the Coral board to microcontrollers and other computers.
Need to be able to access:
* Raw sensor feeds
* Inference results
Additionally, there must be an interface on the master computer/microcontroller that makes it easy to parse the transmitted data. For example, a python package, Arduino Package, and UI. | priority | define streaming interface for inference results create an interface to stream data from the coral board to microcontrollers and other computers need to be able to access raw sensor feeds inference results additionally there must be an interface on the master computer microcontroller that makes it easy to parse the transmitted data for example a python package arduino package and ui | 1 |
602,802 | 18,507,075,829 | IssuesEvent | 2021-10-19 20:00:58 | r-lib/styler | https://api.github.com/repos/r-lib/styler | closed | Other style guides | Priority: Medium Complexity: High Status: Unassigned Type: Meta | What would it take to support the following styles?
- @yihui's, e.g. in [tinytex](https://github.com/yihui/tinytex)
- e.g., use `=` for assignment, use `'` for strings
- @wlandau's in [drake](https://github.com/ropensci/drake)
- start argument list on the following line in a function declaration, no space between `)` and `{`
I'm sure there's a related postponed issue, but I haven't looked. | 1.0 | Other style guides - What would it take to support the following styles?
- @yihui's, e.g. in [tinytex](https://github.com/yihui/tinytex)
- e.g., use `=` for assignment, use `'` for strings
- @wlandau's in [drake](https://github.com/ropensci/drake)
- start argument list on the following line in a function declaration, no space between `)` and `{`
I'm sure there's a related postponed issue, but I haven't looked. | priority | other style guides what would it take to support the following styles yihui s e g in e g use for assignment use for strings wlandau s in start argument list on the following line in a function declaration no space between and i m sure there s a related postponed issue but i haven t looked | 1 |
461,962 | 13,239,023,415 | IssuesEvent | 2020-08-19 02:11:01 | pingcap/dumpling | https://api.github.com/repos/pingcap/dumpling | closed | dump column name as `--complete-insert` did in mydumper | difficulty/2-medium priority/P2 | ## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
we may need to dump columns' name in some cases, like:
- will load data to a table with more columns than the dumped one
- different column orders between the target and source tables.
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
add a flag like `--complete-insert` in mydumper to support dump the columns' name (or make it as the default behavior).
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Teachability, Documentation, Adoption, Optimization:**
<!-- If you can, explain some scenarios how users might use this, situations it would be helpful in. Any API designs, mockups, or diagrams are also helpful. --> | 1.0 | dump column name as `--complete-insert` did in mydumper - ## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
we may need to dump columns' name in some cases, like:
- will load data to a table with more columns than the dumped one
- different column orders between the target and source tables.
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
add a flag like `--complete-insert` in mydumper to support dump the columns' name (or make it as the default behavior).
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Teachability, Documentation, Adoption, Optimization:**
<!-- If you can, explain some scenarios how users might use this, situations it would be helpful in. Any API designs, mockups, or diagrams are also helpful. --> | priority | dump column name as complete insert did in mydumper feature request is your feature request related to a problem please describe we may need to dump columns name in some cases like will load data to a table with more columns than the dumped one different column orders between the target and source tables describe the feature you d like add a flag like complete insert in mydumper to support dump the columns name or make it as the default behavior describe alternatives you ve considered teachability documentation adoption optimization | 1 |
358,606 | 10,618,582,796 | IssuesEvent | 2019-10-13 06:00:02 | AY1920S1-CS2113T-W17-3/main | https://api.github.com/repos/AY1920S1-CS2113T-W17-3/main | closed | As an existing user, I can delete my investment account (bond) | priority.Medium type.Story | so that I can sell it before the maturity date.
| 1.0 | As an existing user, I can delete my investment account (bond) - so that I can sell it before the maturity date.
| priority | as an existing user i can delete my investment account bond so that i can sell it before the maturity date | 1 |
84,145 | 3,654,276,718 | IssuesEvent | 2016-02-17 11:44:48 | brunoais/javadude | https://api.github.com/repos/brunoais/javadude | closed | test trailing comma after properties & document error message from APT | auto-migrated Priority-Medium Project-Annotations Type-Task | ```
test trailing comma after properties & document error message from APT
```
Original issue reported on code.google.com by `scott%ja...@gtempaccount.com` on 23 Jul 2008 at 1:55 | 1.0 | test trailing comma after properties & document error message from APT - ```
test trailing comma after properties & document error message from APT
```
Original issue reported on code.google.com by `scott%ja...@gtempaccount.com` on 23 Jul 2008 at 1:55 | priority | test trailing comma after properties document error message from apt test trailing comma after properties document error message from apt original issue reported on code google com by scott ja gtempaccount com on jul at | 1 |
812,831 | 30,385,558,398 | IssuesEvent | 2023-07-13 00:08:26 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | [CAL-1688] Inconsistent time availability slots on current day in the middle of availability | 🐛 bug Medium priority bookings | ### Issue Summary
Inconsistent availability slots when the current day is in the middle of an availability time interval.
### Steps to Reproduce
I will give an example on reproduction for my scenario, you can edit the times based on your current date/time. Right now, it is 15:16 local time for me on Sunday.
1. Set availability for 09:00 - 17:00 (with event length of 35 minutes) on Sunday, and any other day in the future just for reference.
2. Since my local time is 15:16, it will obviously only show me the availabilities in the future, in my case: [15:45, 16:20]
3. Then go to a day in the future and look at the availabilities: [..., 14:50, 15:25, 16:00, ...]
The problem is this inconsistency. The availability seems to be computed depending on the current time, which is not correct for a booking system. As someone who wants to have time slots throughout the day, you want them to be static and consistent. So in this example, I am expecting time slots to be [15:25, 16:00]. It's a bit difficult to explain, so let me know if that didn't make any sense.
<sub>[CAL-1688](https://linear.app/calcom/issue/CAL-1688/inconsistent-time-availability-slots-on-current-day-in-the-middle-of)</sub> | 1.0 | [CAL-1688] Inconsistent time availability slots on current day in the middle of availability - ### Issue Summary
Inconsistent availability slots when the current day is in the middle of an availability time interval.
### Steps to Reproduce
I will give an example on reproduction for my scenario, you can edit the times based on your current date/time. Right now, it is 15:16 local time for me on Sunday.
1. Set availability for 09:00 - 17:00 (with event length of 35 minutes) on Sunday, and any other day in the future just for reference.
2. Since my local time is 15:16, it will obviously only show me the availabilities in the future, in my case: [15:45, 16:20]
3. Then go to a day in the future and look at the availabilities: [..., 14:50, 15:25, 16:00, ...]
The problem is this inconsistency. The availability seems to be computed depending on the current time, which is not correct for a booking system. As someone who wants to have time slots throughout the day, you want them to be static and consistent. So in this example, I am expecting time slots to be [15:25, 16:00]. It's a bit difficult to explain, so let me know if that didn't make any sense.
<sub>[CAL-1688](https://linear.app/calcom/issue/CAL-1688/inconsistent-time-availability-slots-on-current-day-in-the-middle-of)</sub> | priority | inconsistent time availability slots on current day in the middle of availability issue summary inconsistent availability slots when the current day is in the middle of an availability time interval steps to reproduce i will give an example on reproduction for my scenario you can edit the times based on your current date time right now it is local time for me on sunday set availability for with event length of minutes on sunday and any other day in the future just for reference since my local time is it will obviously only show me the availabilities in the future in my case then go to a day in the future and look at the availabilities the problem is this inconsistency the availability seems to be computed depending on the current time which is not correct for a booking system as someone who wants to have time slots throughout the day you want them to be static and consistent so in this example i am expecting time slots to be it s a bit difficult to explain so let me know if that didn t make any sense | 1 |
2,072 | 2,523,069,026 | IssuesEvent | 2015-01-20 06:26:27 | dartsim/dart | https://api.github.com/repos/dartsim/dart | closed | const correctness violations | Comp: API Priority: Medium | I noticed that there are some issues with the way pointers are passed around by const member functions. In particular, it's very easy to violate const correctness without the user needing to use a const_cast. To illustrate this, consider the following example functions which are perfectly valid in the current master of DART, but which circumvent const correctness:
```
BodyNode* const_breaking(const BodyNode* const_node)
{
return const_node->getChildBodyNode(0)->getParentBodyNode();
}
Skeleton* const_breaking(const Skeleton* const_skel)
{
return const_skel->getBodyNode(0)->getSkeleton();
}
```
These functions are able to return non-const versions of their const input without explicitly using a const_cast. I see this as a safety liability.
I have changed the API to offer overloaded nonconst/const versions of these getter functions. This does not seem to have broken any code (which is good, because any broken code would have been violating const-correctness), although I did have to overload SoftContactConstraint::selectCollidingPointMass(~) with a const and non-const version.
I also added getter functions that return the child BodyNode and parent BodyNode of a Joint, because I frequently find that useful.
The changes are all available in this commit: 0af26387ac1524b5076ddb30d0458efa7a29101e
(Note that the commit also contains my changes to the name mapping from #261) | 1.0 | const correctness violations - I noticed that there are some issues with the way pointers are passed around by const member functions. In particular, it's very easy to violate const correctness without the user needing to use a const_cast. To illustrate this, consider the following example functions which are perfectly valid in the current master of DART, but which circumvent const correctness:
```
BodyNode* const_breaking(const BodyNode* const_node)
{
return const_node->getChildBodyNode(0)->getParentBodyNode();
}
Skeleton* const_breaking(const Skeleton* const_skel)
{
return const_skel->getBodyNode(0)->getSkeleton();
}
```
These functions are able to return non-const versions of their const input without explicitly using a const_cast. I see this as a safety liability.
I have changed the API to offer overloaded nonconst/const versions of these getter functions. This does not seem to have broken any code (which is good, because any broken code would have been violating const-correctness), although I did have to overload SoftContactConstraint::selectCollidingPointMass(~) with a const and non-const version.
I also added getter functions that return the child BodyNode and parent BodyNode of a Joint, because I frequently find that useful.
The changes are all available in this commit: 0af26387ac1524b5076ddb30d0458efa7a29101e
(Note that the commit also contains my changes to the name mapping from #261) | priority | const correctness violations i noticed that there are some issues with the way pointers are passed around by const member functions in particular it s very easy to violate const correctness without the user needing to use a const cast to illustrate this consider the following example functions which are perfectly valid in the current master of dart but which circumvent const correctness bodynode const breaking const bodynode const node return const node getchildbodynode getparentbodynode skeleton const breaking const skeleton const skel return const skel getbodynode getskeleton these functions are able to return non const versions of their const input without explicitly using a const cast i see this as a safety liability i have changed the api to offer overloaded nonconst const versions of these getter functions this does not seem to have broken any code which is good because any broken code would have been violating const correctness although i did have to overload softcontactconstraint selectcollidingpointmass with a const and non const version i also added getter functions that return the child bodynode and parent bodynode of a joint because i frequently find that useful the changes are all available in this commit note that the commit also contains my changes to the name mapping from | 1 |
825,874 | 31,476,115,132 | IssuesEvent | 2023-08-30 10:49:11 | vscentrum/vsc-software-stack | https://api.github.com/repos/vscentrum/vsc-software-stack | closed | Omnipose | difficulty: easy new priority: medium Python site:t1_ugent_hortense GPU sources-only | * link to support ticket: [#2023013160000892](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=108644)
* website: https://omnipose.readthedocs.io/
* installation docs: https://omnipose.readthedocs.io/installation.html
* toolchain: `foss/2022a`
* easyblock to use: `PythonBundle`
* required dependencies:
* [ ] [cellpose-omni](https://github.com/kevinjohncutler/cellpose-omni)
* see https://github.com/kevinjohncutler/omnipose/blob/main/setup.py
* notes:
* ...
* effort: *(TBD)*
| 1.0 | Omnipose - * link to support ticket: [#2023013160000892](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=108644)
* website: https://omnipose.readthedocs.io/
* installation docs: https://omnipose.readthedocs.io/installation.html
* toolchain: `foss/2022a`
* easyblock to use: `PythonBundle`
* required dependencies:
* [ ] [cellpose-omni](https://github.com/kevinjohncutler/cellpose-omni)
* see https://github.com/kevinjohncutler/omnipose/blob/main/setup.py
* notes:
* ...
* effort: *(TBD)*
| priority | omnipose link to support ticket website installation docs toolchain foss easyblock to use pythonbundle required dependencies see notes effort tbd | 1 |
595,443 | 18,067,037,865 | IssuesEvent | 2021-09-20 20:30:58 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | [Blackrock Depths - Boss] Summoner's Tomb/ The Seven Event too fast | Priority-Medium Instance - Dungeon - Classic Confirmed 50-59 | ### What client do you play on?
enUS
### Faction
- [X] Alliance
- [X] Horde
### Content Phase:
- [ ] Generic
- [ ] 1-19
- [ ] 20-29
- [ ] 30-39
- [ ] 40-49
- [x] 50-59
### Current Behaviour
When you get to Summoner's Tomb and when you pull the event. Bosses that should spawn every 45 seconds or so, but here they spawn every 20 sec or so, might be wrong, never counted. But everyone in multiple runs (over 20) stated that they spawn faster than expected leading to fight 3 bosses at once instead max 2.
https://classic.wowhead.com/npc=9039/doomrel#drops;mode:normal
### Expected Blizzlike Behaviour
https://www.youtube.com/watch?v=wW84027e2JU&ab_channel=Chou-chiChang
At lower levels in BRD you on Blizzlike WoW have time to kill 1 by 1 boss, while here bosses spawns so fast that you get 3 at some point even with BiS lvl 49 gear and so fast killing with WoTLK talents. Our spawn times are insane.
### Source
_No response_
### Steps to reproduce the problem
1. Enter BRD and get to Summoner's Tob (Near the end of BRD full run)
2. Start the event by talking to one of bosses (straight forward when you enter door)
3. Bosses spawning so fast, literally 20 sec or 25, get 3 bosses at some point to handle.
### Extra Notes
Original Report: https://github.com/chromiecraft/chromiecraft/issues/1659
Triager notes:
https://www.youtube.com/watch?v=QoEwUz7Bdq8
24:41 Anger'rel
25:18 Seeth'rel (killed before next one activated)
25:40 Dope'rel (killed before next one activated)
26:03 Gloom'rel (killed before next one activated)
26:26 Vile'rel (killed before next one activated)
26:51 Hate'rel (killed before next one activated)
27:12 Doom'rel
Right now, they spawn in 14-20 seconds on CC.
They should either spawn on a timer, or when they are too quickly die, the next one should activate.
https://wowpedia.fandom.com/wiki/The_Seven
`All of the mobs in the encounter are immune to curses. Also the event is entirely on a timer. After a set amount of seconds, the next mob will release and aggro the first player it sees.`
### AC rev. hash/commit
https://github.com/chromiecraft/azerothcore-wotlk/commit/595bb6adccbabc714469f3935541978283b8bdfb
### Operating system
Ubuntu 20.04
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna-lua-engine](https://github.com/azerothcore/mod-eluna-lua-engine)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-server-auto-shutdown](https://github.com/azerothcore/mod-server-auto-shutdown)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-custom-corldboss](https://github.com/55Honey/Acore_CustomWorldboss)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
| 1.0 | [Blackrock Depths - Boss] Summoner's Tomb/ The Seven Event too fast - ### What client do you play on?
enUS
### Faction
- [X] Alliance
- [X] Horde
### Content Phase:
- [ ] Generic
- [ ] 1-19
- [ ] 20-29
- [ ] 30-39
- [ ] 40-49
- [x] 50-59
### Current Behaviour
When you get to Summoner's Tomb and when you pull the event. Bosses that should spawn every 45 seconds or so, but here they spawn every 20 sec or so, might be wrong, never counted. But everyone in multiple runs (over 20) stated that they spawn faster than expected leading to fight 3 bosses at once instead max 2.
https://classic.wowhead.com/npc=9039/doomrel#drops;mode:normal
### Expected Blizzlike Behaviour
https://www.youtube.com/watch?v=wW84027e2JU&ab_channel=Chou-chiChang
At lower levels in BRD you on Blizzlike WoW have time to kill 1 by 1 boss, while here bosses spawns so fast that you get 3 at some point even with BiS lvl 49 gear and so fast killing with WoTLK talents. Our spawn times are insane.
### Source
_No response_
### Steps to reproduce the problem
1. Enter BRD and get to Summoner's Tob (Near the end of BRD full run)
2. Start the event by talking to one of bosses (straight forward when you enter door)
3. Bosses spawning so fast, literally 20 sec or 25, get 3 bosses at some point to handle.
### Extra Notes
Original Report: https://github.com/chromiecraft/chromiecraft/issues/1659
Triager notes:
https://www.youtube.com/watch?v=QoEwUz7Bdq8
24:41 Anger'rel
25:18 Seeth'rel (killed before next one activated)
25:40 Dope'rel (killed before next one activated)
26:03 Gloom'rel (killed before next one activated)
26:26 Vile'rel (killed before next one activated)
26:51 Hate'rel (killed before next one activated)
27:12 Doom'rel
Right now, they spawn in 14-20 seconds on CC.
They should either spawn on a timer, or when they are too quickly die, the next one should activate.
https://wowpedia.fandom.com/wiki/The_Seven
`All of the mobs in the encounter are immune to curses. Also the event is entirely on a timer. After a set amount of seconds, the next mob will release and aggro the first player it sees.`
### AC rev. hash/commit
https://github.com/chromiecraft/azerothcore-wotlk/commit/595bb6adccbabc714469f3935541978283b8bdfb
### Operating system
Ubuntu 20.04
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna-lua-engine](https://github.com/azerothcore/mod-eluna-lua-engine)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-server-auto-shutdown](https://github.com/azerothcore/mod-server-auto-shutdown)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-custom-corldboss](https://github.com/55Honey/Acore_CustomWorldboss)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
| priority | summoner s tomb the seven event too fast what client do you play on enus faction alliance horde content phase generic current behaviour when you get to summoner s tomb and when you pull the event bosses that should spawn every seconds or so but here they spawn every sec or so might be wrong never counted but everyone in multiple runs over stated that they spawn faster than expected leading to fight bosses at once instead max expected blizzlike behaviour at lower levels in brd you on blizzlike wow have time to kill by boss while here bosses spawns so fast that you get at some point even with bis lvl gear and so fast killing with wotlk talents our spawn times are insane source no response steps to reproduce the problem enter brd and get to summoner s tob near the end of brd full run start the event by talking to one of bosses straight forward when you enter door bosses spawning so fast literally sec or get bosses at some point to handle extra notes original report triager notes anger rel seeth rel killed before next one activated dope rel killed before next one activated gloom rel killed before next one activated vile rel killed before next one activated hate rel killed before next one activated doom rel right now they spawn in seconds on cc they should either spawn on a timer or when they are too quickly die the next one should activate all of the mobs in the encounter are immune to curses also the event is entirely on a timer after a set amount of seconds the next mob will release and aggro the first player it sees ac rev hash commit operating system ubuntu modules customizations none server chromiecraft | 1 |
807,840 | 30,020,902,113 | IssuesEvent | 2023-06-26 23:14:59 | WordPress/openverse | https://api.github.com/repos/WordPress/openverse | closed | Implementation Plan: Copy updates `mature` -> `sensitive` | 🟨 priority: medium 🌟 goal: addition 📄 aspect: text 🧱 stack: mgmt 🧭 project: implementation plan | ## Description
<!-- Describe the feature and how it solves the problem. -->
Project proposal: https://docs.openverse.org/projects/proposals/trust_and_safety/content_report_moderation/20230411-project_proposal_content_report_moderation.html#update-copy-to-use-the-more-general-sensitive-language-requirement-1
Write an implementation plan to update copy from `mature` to `sensitive` in the frontend and Django API. To reiterate what is said in the project proposal, there should be no design changes or new features as part of this plan, only changes to copy. Field and model names should also be updated when possible, but without the need for database migrations (i.e., use [Django's built in tools for specifying different underlying names for fields and tables](https://docs.djangoproject.com/en/4.2/ref/models/options/#table-names)).
| 1.0 | Implementation Plan: Copy updates `mature` -> `sensitive` - ## Description
<!-- Describe the feature and how it solves the problem. -->
Project proposal: https://docs.openverse.org/projects/proposals/trust_and_safety/content_report_moderation/20230411-project_proposal_content_report_moderation.html#update-copy-to-use-the-more-general-sensitive-language-requirement-1
Write an implementation plan to update copy from `mature` to `sensitive` in the frontend and Django API. To reiterate what is said in the project proposal, there should be no design changes or new features as part of this plan, only changes to copy. Field and model names should also be updated when possible, but without the need for database migrations (i.e., use [Django's built in tools for specifying different underlying names for fields and tables](https://docs.djangoproject.com/en/4.2/ref/models/options/#table-names)).
| priority | implementation plan copy updates mature sensitive description project proposal write an implementation plan to update copy from mature to sensitive in the frontend and django api to reiterate what is said in the project proposal there should be no design changes or new features as part of this plan only changes to copy field and model names should also be updated when possible but without the need for database migrations i e use | 1 |
145,945 | 5,584,316,622 | IssuesEvent | 2017-03-29 04:26:19 | CS2103JAN2017-W09-B2/main | https://api.github.com/repos/CS2103JAN2017-W09-B2/main | opened | Implement Load feature | Feature priority.medium status.ongoing type.task | More importantly, should be able to **save to wherever that is specified**, i.e. after saving if you continue to add, the new changes are reflected in the new location. | 1.0 | Implement Load feature - More importantly, should be able to **save to wherever that is specified**, i.e. after saving if you continue to add, the new changes are reflected in the new location. | priority | implement load feature more importantly should be able to save to wherever that is specified i e after saving if you continue to add the new changes are reflected in the new location | 1 |
354,412 | 10,567,156,276 | IssuesEvent | 2019-10-06 01:04:47 | ESAPI/esapi-java-legacy | https://api.github.com/repos/ESAPI/esapi-java-legacy | closed | org.owasp.esapi.reference.DefaultValidator reports ValidationException with IE 9 | Priority-Medium bug imported | _From [christof...@gmail.com](https://code.google.com/u/106863696289161512808/) on February 15, 2012 09:45:54_
What steps will reproduce the problem? 1. Build a simple web application with a simple index.jsp page in a subfolder /protected
2. Setup ESAPIfilter filter, url pattern /protected/*
3. Call ESAPI.validator().assertIsValidHTTPRequest(); What is the expected output? What do you see instead? No validation exception is expected. However, we instead see below error message:
WARNING: SECURITY-FAILURE Anonymous@unknown:511637 -- Input exceeds maximum allowed length of 150 by 23 characters: context=HTTP header value (USER-AGENT): Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C), type=HTTPHeaderValue), input=Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C)
ValidationException @ org.owasp.esapi.reference.DefaultValidator.getValidInput(null:-1)
That is because the header exceeds the 150 length limit, which is hard coded in org.owasp.esapi.filters.SafeRequest What version of the product are you using? On what operating system? 1.4 , Windows 7 Professional Does this issue affect only a specified browser or set of browsers? IE 9 Please provide any additional information below. Maybe the validation limits in Safe request should be moved to the configuration?
_Original issue: http://code.google.com/p/owasp-esapi-java/issues/detail?id=262_
| 1.0 | org.owasp.esapi.reference.DefaultValidator reports ValidationException with IE 9 - _From [christof...@gmail.com](https://code.google.com/u/106863696289161512808/) on February 15, 2012 09:45:54_
What steps will reproduce the problem? 1. Build a simple web application with a simple index.jsp page in a subfolder /protected
2. Setup ESAPIfilter filter, url pattern /protected/*
3. Call ESAPI.validator().assertIsValidHTTPRequest(); What is the expected output? What do you see instead? No validation exception is expected. However, we instead see below error message:
WARNING: SECURITY-FAILURE Anonymous@unknown:511637 -- Input exceeds maximum allowed length of 150 by 23 characters: context=HTTP header value (USER-AGENT): Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C), type=HTTPHeaderValue), input=Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C)
ValidationException @ org.owasp.esapi.reference.DefaultValidator.getValidInput(null:-1)
That is because the header exceeds the 150 length limit, which is hard coded in org.owasp.esapi.filters.SafeRequest What version of the product are you using? On what operating system? 1.4 , Windows 7 Professional Does this issue affect only a specified browser or set of browsers? IE 9 Please provide any additional information below. Maybe the validation limits in Safe request should be moved to the configuration?
_Original issue: http://code.google.com/p/owasp-esapi-java/issues/detail?id=262_
| priority | org owasp esapi reference defaultvalidator reports validationexception with ie from on february what steps will reproduce the problem build a simple web application with a simple index jsp page in a subfolder protected setup esapifilter filter url pattern protected call esapi validator assertisvalidhttprequest what is the expected output what do you see instead no validation exception is expected however we instead see below error message warning security failure anonymous unknown input exceeds maximum allowed length of by characters context http header value user agent mozilla compatible msie windows nt trident net clr net clr net clr media center pc type httpheadervalue input mozilla compatible msie windows nt trident net clr net clr net clr media center pc validationexception org owasp esapi reference defaultvalidator getvalidinput null that is because the header exceeds the length limit which is hard coded in org owasp esapi filters saferequest what version of the product are you using on what operating system windows professional does this issue affect only a specified browser or set of browsers ie please provide any additional information below maybe the validation limits in safe request should be moved to the configuration original issue | 1 |
804,434 | 29,487,947,030 | IssuesEvent | 2023-06-02 11:15:04 | ow2-proactive/scheduling | https://api.github.com/repos/ow2-proactive/scheduling | closed | Improve NODE_ACCESS_TOKEN | resolution:fixed type:improvement severity:major priority:medium | Property `-Dproactive.node.access.token` allows to define to a tag per ProActive node. This way, a user can define on which node with which tag a task must be executed by defining `NODE_ACCESS_TOKEN=X` in the generic information section of a task.
This manner to proceed is a really convenient manner for users to select a specific set of nodes for execution (e.g. Enterome and OpenOcean use case). Unfortunately, the implementation has currently some drawbacks:
- Suppose that all ProActive nodes have a tag defined with '-Dproactive.node.access.token', if 'NODE_ACCESS_TOKEN' is not defined, then the job remains in pending state
- 'NODE_ACCESS_TOKEN' is ignored if defined in generic information at the job level. This is really annoying for jobs with a lot of tasks.
Some new interesting features could also be added:
- Support for a list of tags per ProActive node
- Support for dynamic addition/removal of tags
The idea is not to replace selection scripts that are more powerful, but to provide a convenient manner for users to select nodes based on non-dynamic properties without having to write code.
| 1.0 | Improve NODE_ACCESS_TOKEN - Property `-Dproactive.node.access.token` allows to define to a tag per ProActive node. This way, a user can define on which node with which tag a task must be executed by defining `NODE_ACCESS_TOKEN=X` in the generic information section of a task.
This manner to proceed is a really convenient manner for users to select a specific set of nodes for execution (e.g. Enterome and OpenOcean use case). Unfortunately, the implementation has currently some drawbacks:
- Suppose that all ProActive nodes have a tag defined with '-Dproactive.node.access.token', if 'NODE_ACCESS_TOKEN' is not defined, then the job remains in pending state
- 'NODE_ACCESS_TOKEN' is ignored if defined in generic information at the job level. This is really annoying for jobs with a lot of tasks.
Some new interesting features could also be added:
- Support for a list of tags per ProActive node
- Support for dynamic addition/removal of tags
The idea is not to replace selection scripts that are more powerful, but to provide a convenient manner for users to select nodes based on non-dynamic properties without having to write code.
| priority | improve node access token property dproactive node access token allows to define to a tag per proactive node this way a user can define on which node with which tag a task must be executed by defining node access token x in the generic information section of a task this manner to proceed is a really convenient manner for users to select a specific set of nodes for execution e g enterome and openocean use case unfortunately the implementation has currently some drawbacks suppose that all proactive nodes have a tag defined with dproactive node access token if node access token is not defined then the job remains in pending state node access token is ignored if defined in generic information at the job level this is really annoying for jobs with a lot of tasks some new interesting features could also be added support for a list of tags per proactive node support for dynamic addition removal of tags the idea is not to replace selection scripts that are more powerful but to provide a convenient manner for users to select nodes based on non dynamic properties without having to write code | 1 |
202,784 | 7,054,981,956 | IssuesEvent | 2018-01-04 05:03:34 | nylas-mail-lives/nylas-mail | https://api.github.com/repos/nylas-mail-lives/nylas-mail | closed | Issue in FileDownloadStore that forces re-download of files | Priority: Medium Status: In Progress Type: Bug | This is a weird issue that I have been facing but didn't have the time to dig into. The way the `FileDownloadStore` checks, if the file is already downloaded, seems to be broken (or it's just me!).
The `_checkForDownloadedFile` method checks for the file size in the `~/.nylas-mail/downloads` directory and the file size captured from the header. If the file sizes are same, it won't initiate a download, else it would. In my case, none of the attachments has the same file size (physical) as the size mentioned in the header! Therefore, Nylas always re-downloads the file and re-write the physical one. This also delays the availability of the attachment in case of large files.
Does this sound familiar? Is anyone else facing the same issue? | 1.0 | Issue in FileDownloadStore that forces re-download of files - This is a weird issue that I have been facing but didn't have the time to dig into. The way the `FileDownloadStore` checks, if the file is already downloaded, seems to be broken (or it's just me!).
The `_checkForDownloadedFile` method checks for the file size in the `~/.nylas-mail/downloads` directory and the file size captured from the header. If the file sizes are same, it won't initiate a download, else it would. In my case, none of the attachments has the same file size (physical) as the size mentioned in the header! Therefore, Nylas always re-downloads the file and re-write the physical one. This also delays the availability of the attachment in case of large files.
Does this sound familiar? Is anyone else facing the same issue? | priority | issue in filedownloadstore that forces re download of files this is a weird issue that i have been facing but didn t have the time to dig into the way the filedownloadstore checks if the file is already downloaded seems to be broken or it s just me the checkfordownloadedfile method checks for the file size in the nylas mail downloads directory and the file size captured from the header if the file sizes are same it won t initiate a download else it would in my case none of the attachments has the same file size physical as the size mentioned in the header therefore nylas always re downloads the file and re write the physical one this also delays the availability of the attachment in case of large files does this sound familiar is anyone else facing the same issue | 1 |
222,850 | 7,439,827,600 | IssuesEvent | 2018-03-27 08:08:33 | CS2103JAN2018-W09-B1/main | https://api.github.com/repos/CS2103JAN2018-W09-B1/main | closed | As a user, I want CarviciM to auto-fill missing data so that they can be imported. | priority.medium status.ongoing type.epic type.story | Able to:
- Detect new employees and add them
- Take missing employee or client from the previous entry | 1.0 | As a user, I want CarviciM to auto-fill missing data so that they can be imported. - Able to:
- Detect new employees and add them
- Take missing employee or client from the previous entry | priority | as a user i want carvicim to auto fill missing data so that they can be imported able to detect new employees and add them take missing employee or client from the previous entry | 1 |
686,800 | 23,504,796,228 | IssuesEvent | 2022-08-18 11:40:40 | dnd-side-project/dnd-7th-2-backend | https://api.github.com/repos/dnd-side-project/dnd-7th-2-backend | closed | [Feature] 내 팀플 상세 조회 | Priority: Medium Status: On Hold | ## 사전 조건
* [ ] 모집글 조회 Service 메서드 (강의/분야 정보)
## 구현
* [ ] ProjectService getProject 메서드 추가
* [ ] ProjectController API 추가
## 테스트
* [ ] | 1.0 | [Feature] 내 팀플 상세 조회 - ## 사전 조건
* [ ] 모집글 조회 Service 메서드 (강의/분야 정보)
## 구현
* [ ] ProjectService getProject 메서드 추가
* [ ] ProjectController API 추가
## 테스트
* [ ] | priority | 내 팀플 상세 조회 사전 조건 모집글 조회 service 메서드 강의 분야 정보 구현 projectservice getproject 메서드 추가 projectcontroller api 추가 테스트 | 1 |
790,626 | 27,830,773,721 | IssuesEvent | 2023-03-20 04:32:00 | AY2223S2-CS2103-F10-2/tp | https://api.github.com/repos/AY2223S2-CS2103-F10-2/tp | closed | As a user, I can find specific module/lecture/video by keyword from current context | type.Story priority.Medium | so that I can check if a specific module/lecture/video is in the current context quickly. | 1.0 | As a user, I can find specific module/lecture/video by keyword from current context - so that I can check if a specific module/lecture/video is in the current context quickly. | priority | as a user i can find specific module lecture video by keyword from current context so that i can check if a specific module lecture video is in the current context quickly | 1 |
665,364 | 22,310,435,681 | IssuesEvent | 2022-06-13 16:28:31 | FuelLabs/swayswap | https://api.github.com/repos/FuelLabs/swayswap | closed | Add on CI create pool with big liquidity | priority:medium config | When deploying a new contract we need to start a pool with big liquidity as users can only mint `0.5 ETH` that is not enough.
Today we do the process manually on local, we should ad on CI a step, when contracts change (this already exists on CI), deploy and create the initial liquidity `1000 ETH to 2000000 DAI`.
Wallet secret is present on the CI `WALLET_SECRET`. | 1.0 | Add on CI create pool with big liquidity - When deploying a new contract we need to start a pool with big liquidity as users can only mint `0.5 ETH` that is not enough.
Today we do the process manually on local, we should ad on CI a step, when contracts change (this already exists on CI), deploy and create the initial liquidity `1000 ETH to 2000000 DAI`.
Wallet secret is present on the CI `WALLET_SECRET`. | priority | add on ci create pool with big liquidity when deploying a new contract we need to start a pool with big liquidity as users can only mint eth that is not enough today we do the process manually on local we should ad on ci a step when contracts change this already exists on ci deploy and create the initial liquidity eth to dai wallet secret is present on the ci wallet secret | 1 |
753,416 | 26,347,023,309 | IssuesEvent | 2023-01-10 23:16:58 | pdx-blurp/blurp-frontend | https://api.github.com/repos/pdx-blurp/blurp-frontend | closed | Refactor landing page background image to Tailwind styles | bug medium priority | Currently , landing page bg image is using native CSS styling.
This needs to be converted to Tailwind to become reactive and future-proof. | 1.0 | Refactor landing page background image to Tailwind styles - Currently , landing page bg image is using native CSS styling.
This needs to be converted to Tailwind to become reactive and future-proof. | priority | refactor landing page background image to tailwind styles currently landing page bg image is using native css styling this needs to be converted to tailwind to become reactive and future proof | 1 |
654,179 | 21,640,215,763 | IssuesEvent | 2022-05-05 17:58:11 | coders-camp-2021-best-team/FitaTAM | https://api.github.com/repos/coders-camp-2021-best-team/FitaTAM | closed | chore/frontend-setup-routing | priority: medium type: chore frontend | **AC**
Setup routing as it was in the last project
Remember about netlify routing
| 1.0 | chore/frontend-setup-routing - **AC**
Setup routing as it was in the last project
Remember about netlify routing
| priority | chore frontend setup routing ac setup routing as it was in the last project remember about netlify routing | 1 |
709,633 | 24,385,336,474 | IssuesEvent | 2022-10-04 11:15:59 | impactMarket/impact-market-smart-contracts | https://api.github.com/repos/impactMarket/impact-market-smart-contracts | closed | Allow communities to use other currencies | type: feature priority-2: medium | ## Is your feature request related to a problem? Please describe.
Currently, every community uses cUSD.
## Describe the solution you'd like
Every community should be able to select a currency to use.
It should also be possible to change the currency at any moment. Changing the currency should swap existing balance of the community with the new currency.
## Additional context
This allowed currencies should be governable by the DAO.
Make sure this currencies addresses are upgradable by the DAO, converting all funds. | 1.0 | Allow communities to use other currencies - ## Is your feature request related to a problem? Please describe.
Currently, every community uses cUSD.
## Describe the solution you'd like
Every community should be able to select a currency to use.
It should also be possible to change the currency at any moment. Changing the currency should swap existing balance of the community with the new currency.
## Additional context
This allowed currencies should be governable by the DAO.
Make sure this currencies addresses are upgradable by the DAO, converting all funds. | priority | allow communities to use other currencies is your feature request related to a problem please describe currently every community uses cusd describe the solution you d like every community should be able to select a currency to use it should also be possible to change the currency at any moment changing the currency should swap existing balance of the community with the new currency additional context this allowed currencies should be governable by the dao make sure this currencies addresses are upgradable by the dao converting all funds | 1 |
386,301 | 11,435,021,880 | IssuesEvent | 2020-02-04 18:33:06 | PMEAL/OpenPNM | https://api.github.com/repos/PMEAL/OpenPNM | opened | Better handling of iterative properties | Enhancement Priority - Medium | At the moment, the user needs to manually specify which properties are iterative. This is fine since we can't/shouldn't make any assumptions in this regard. But, once a property is declared as `iterative_prop` by the user, `OpenPNM` **should** look up the dependency hierarchy and set upstream properties as `iterative_prop` as well. A good example is conductance, which could potentially depend on iterative properties. | 1.0 | Better handling of iterative properties - At the moment, the user needs to manually specify which properties are iterative. This is fine since we can't/shouldn't make any assumptions in this regard. But, once a property is declared as `iterative_prop` by the user, `OpenPNM` **should** look up the dependency hierarchy and set upstream properties as `iterative_prop` as well. A good example is conductance, which could potentially depend on iterative properties. | priority | better handling of iterative properties at the moment the user needs to manually specify which properties are iterative this is fine since we can t shouldn t make any assumptions in this regard but once a property is declared as iterative prop by the user openpnm should look up the dependency hierarchy and set upstream properties as iterative prop as well a good example is conductance which could potentially depend on iterative properties | 1 |
169,064 | 6,394,103,909 | IssuesEvent | 2017-08-04 09:23:36 | HAS-CRM/IssueTracker | https://api.github.com/repos/HAS-CRM/IssueTracker | closed | Capture line break/carriage return for description at BSI Pipeline Report (Excel) [Irene] | Priority.Medium Status.Done Type.ChangeRequest | Background:
Currently, when data is exported from CRM to Excel, the text formatting (tab/enter) will not be captured. This causes description column at BSI Pipeline Report to look messy with only a single line of data. | 1.0 | Capture line break/carriage return for description at BSI Pipeline Report (Excel) [Irene] - Background:
Currently, when data is exported from CRM to Excel, the text formatting (tab/enter) will not be captured. This causes description column at BSI Pipeline Report to look messy with only a single line of data. | priority | capture line break carriage return for description at bsi pipeline report excel background currently when data is exported from crm to excel the text formatting tab enter will not be captured this causes description column at bsi pipeline report to look messy with only a single line of data | 1 |
42,220 | 2,869,619,518 | IssuesEvent | 2015-06-06 10:50:03 | Miniand/brdg.me-issues | https://api.github.com/repos/Miniand/brdg.me-issues | closed | Include winner in recentlyFinished (winners: Array[0]) | priority:medium project:server type:bug | Populate winners array so it can be surfaced in recently finished list | 1.0 | Include winner in recentlyFinished (winners: Array[0]) - Populate winners array so it can be surfaced in recently finished list | priority | include winner in recentlyfinished winners array populate winners array so it can be surfaced in recently finished list | 1 |
518,406 | 15,027,854,990 | IssuesEvent | 2021-02-02 01:44:37 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | intel_adsp_cavs15: use twister to run kernel testcases has no output | bug priority: medium | **Describe the bug**
Use twister to run kernel cases on ADSP_cavs15 still failed
**To Reproduce**
Steps to reproduce the behavior:
1. west build -b intel_adsp_cavs15 tests/kernel/mutex/mutex_api/
2. west sign -t rimage -p ../modules/audio/sof/zephyr/ext/rimage/build/rimage -D ../modules/audio/sof/zephyr/ext/rimage/config/ -- -k ../modules/audio/sof/keys/otc_private_key.pem
3. boards/xtensa/intel_adsp_cavs15/tools/fw_loader.py -f build/zephyr/zephyr.ri
4. See error
**Logs and console output**
Start firmware downloading...
Open HDA device: /dev/hda
Reset DSP...
Firmware Status Register (0xFFFFFFFF)
Boot: 0xFFFFFF (UNKNOWN)
Wait: 0x0F (UNKNOWN)
Module: 0x07
Error: 0x01
IPC CMD : 0xFFFFFFFF
IPC LEN : 0xFFFFFFFF
Booting up DSP...
Firmware Status Register (0x05000001)
Boot: 0x000001 (INIT_DONE)
Wait: 0x05 (DMA_BUFFER_FULL)
Module: 0x00
Error: 0x00
Downloading firmware...
Start firmware downloading...
**Failed to receive expected status**
Checking firmware status...
Firmware Status Register (0x80000007)
Boot: 0x000007 (UNKNOWN)
Wait: 0x00 (UNKNOWN)
Module: 0x00
**Error: 0x01**
Traceback (most recent call last):
File "/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/fw_loader.py", line 81, in <module>
main()
File "/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/fw_loader.py", line 66, in main
if fw_loader.check_fw_boot_status(plat_def.BOOT_STATUS_FW_ENTERED):
File "/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/lib/loader.py", line 133, in check_fw_boot_status
raise RuntimeError(output)
**RuntimeError: Firmware Status error: Status: 0x80000007
Error Code 0x0000002F**
**Environment (please complete the following information):**
OS: Fedora28
Toolchain: Zephyr-sdk-0.12.1
Commit ID: af67564573f5d7a | 1.0 | intel_adsp_cavs15: use twister to run kernel testcases has no output - **Describe the bug**
Use twister to run kernel cases on ADSP_cavs15 still failed
**To Reproduce**
Steps to reproduce the behavior:
1. west build -b intel_adsp_cavs15 tests/kernel/mutex/mutex_api/
2. west sign -t rimage -p ../modules/audio/sof/zephyr/ext/rimage/build/rimage -D ../modules/audio/sof/zephyr/ext/rimage/config/ -- -k ../modules/audio/sof/keys/otc_private_key.pem
3. boards/xtensa/intel_adsp_cavs15/tools/fw_loader.py -f build/zephyr/zephyr.ri
4. See error
**Logs and console output**
Start firmware downloading...
Open HDA device: /dev/hda
Reset DSP...
Firmware Status Register (0xFFFFFFFF)
Boot: 0xFFFFFF (UNKNOWN)
Wait: 0x0F (UNKNOWN)
Module: 0x07
Error: 0x01
IPC CMD : 0xFFFFFFFF
IPC LEN : 0xFFFFFFFF
Booting up DSP...
Firmware Status Register (0x05000001)
Boot: 0x000001 (INIT_DONE)
Wait: 0x05 (DMA_BUFFER_FULL)
Module: 0x00
Error: 0x00
Downloading firmware...
Start firmware downloading...
**Failed to receive expected status**
Checking firmware status...
Firmware Status Register (0x80000007)
Boot: 0x000007 (UNKNOWN)
Wait: 0x00 (UNKNOWN)
Module: 0x00
**Error: 0x01**
Traceback (most recent call last):
File "/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/fw_loader.py", line 81, in <module>
main()
File "/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/fw_loader.py", line 66, in main
if fw_loader.check_fw_boot_status(plat_def.BOOT_STATUS_FW_ENTERED):
File "/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/lib/loader.py", line 133, in check_fw_boot_status
raise RuntimeError(output)
**RuntimeError: Firmware Status error: Status: 0x80000007
Error Code 0x0000002F**
**Environment (please complete the following information):**
OS: Fedora28
Toolchain: Zephyr-sdk-0.12.1
Commit ID: af67564573f5d7a | priority | intel adsp use twister to run kernel testcases has no output describe the bug use twister to run kernel cases on adsp still failed to reproduce steps to reproduce the behavior west build b intel adsp tests kernel mutex mutex api west sign t rimage p modules audio sof zephyr ext rimage build rimage d modules audio sof zephyr ext rimage config k modules audio sof keys otc private key pem boards xtensa intel adsp tools fw loader py f build zephyr zephyr ri see error logs and console output start firmware downloading open hda device dev hda reset dsp firmware status register boot unknown wait unknown module error ipc cmd ipc len booting up dsp firmware status register boot init done wait dma buffer full module error downloading firmware start firmware downloading failed to receive expected status checking firmware status firmware status register boot unknown wait unknown module error traceback most recent call last file home ztest work zephyrproject zephyr boards xtensa intel adsp tools fw loader py line in main file home ztest work zephyrproject zephyr boards xtensa intel adsp tools fw loader py line in main if fw loader check fw boot status plat def boot status fw entered file home ztest work zephyrproject zephyr boards xtensa intel adsp tools lib loader py line in check fw boot status raise runtimeerror output runtimeerror firmware status error status error code environment please complete the following information os toolchain zephyr sdk commit id | 1 |
567,221 | 16,850,647,175 | IssuesEvent | 2021-06-20 12:42:47 | cdklabs/construct-hub-webapp | https://api.github.com/repos/cdklabs/construct-hub-webapp | closed | Implement: Landing Page | effort/medium priority/p1 | The landing page has the minimal information needed so customers can identify the website they are at, it features a search bar front-and-center, and has a list of "hot items" pre-populated. | 1.0 | Implement: Landing Page - The landing page has the minimal information needed so customers can identify the website they are at, it features a search bar front-and-center, and has a list of "hot items" pre-populated. | priority | implement landing page the landing page has the minimal information needed so customers can identify the website they are at it features a search bar front and center and has a list of hot items pre populated | 1 |
47,963 | 2,990,046,956 | IssuesEvent | 2015-07-21 06:21:55 | jayway/rest-assured | https://api.github.com/repos/jayway/rest-assured | closed | move to new Jackson 2.0 | bug imported Priority-Medium | _From [hanrisel...@gmail.com](https://code.google.com/u/106356216998200153067/) on August 16, 2012 17:48:28_
Using rest-assured 1.6.2, if the classpath contains the new Jackson library (2.0) instead of the old one (1.x), rest-assured fails with:
java.lang.IllegalStateException: Cannot serialize object because no JSON serializer found in classpath. Please put either Jackson or Gson in the classpath.
This is because the new Jackson library isn't backwards compatible with the old one ( https://github.com/FasterXML/jackson-core ).
Spring for example solves this with:
final ClassLoader classLoader = getClass().getClassLoader();
if (ClassUtils.isPresent("com.fasterxml.jackson.databind.ObjectMapper", classLoader)) {
messageConverters.add(new MappingJackson2HttpMessageConverter());
} else if (ClassUtils.isPresent("org.codehaus.jackson.map.ObjectMapper", classLoader)) {
messageConverters.add(new MappingJacksonHttpMessageConverter());
}
A similar solution should be included in rest-assured as well, so that it knows how to use Jackson 2.0, instead of just 1.x.
Thanks.
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=189_ | 1.0 | move to new Jackson 2.0 - _From [hanrisel...@gmail.com](https://code.google.com/u/106356216998200153067/) on August 16, 2012 17:48:28_
Using rest-assured 1.6.2, if the classpath contains the new Jackson library (2.0) instead of the old one (1.x), rest-assured fails with:
java.lang.IllegalStateException: Cannot serialize object because no JSON serializer found in classpath. Please put either Jackson or Gson in the classpath.
This is because the new Jackson library isn't backwards compatible with the old one ( https://github.com/FasterXML/jackson-core ).
Spring for example solves this with:
final ClassLoader classLoader = getClass().getClassLoader();
if (ClassUtils.isPresent("com.fasterxml.jackson.databind.ObjectMapper", classLoader)) {
messageConverters.add(new MappingJackson2HttpMessageConverter());
} else if (ClassUtils.isPresent("org.codehaus.jackson.map.ObjectMapper", classLoader)) {
messageConverters.add(new MappingJacksonHttpMessageConverter());
}
A similar solution should be included in rest-assured as well, so that it knows how to use Jackson 2.0, instead of just 1.x.
Thanks.
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=189_ | priority | move to new jackson from on august using rest assured if the classpath contains the new jackson library instead of the old one x rest assured fails with java lang illegalstateexception cannot serialize object because no json serializer found in classpath please put either jackson or gson in the classpath this is because the new jackson library isn t backwards compatible with the old one spring for example solves this with final classloader classloader getclass getclassloader if classutils ispresent com fasterxml jackson databind objectmapper classloader messageconverters add new else if classutils ispresent org codehaus jackson map objectmapper classloader messageconverters add new mappingjacksonhttpmessageconverter a similar solution should be included in rest assured as well so that it knows how to use jackson instead of just x thanks original issue | 1 |
101,238 | 4,111,127,212 | IssuesEvent | 2016-06-07 03:48:24 | SharpScratchMod/SharpScratchMod.github.io | https://api.github.com/repos/SharpScratchMod/SharpScratchMod.github.io | closed | No padding on site? | bug enhancement medium priority style | As you can see on the site, there is no space between the left of the webpage and the text on the webpage. I came up with some code that should help fix it.
You should add this CSS code:
`
.padding-main {
padding-left: 15px;
}
`
HTML for navbar the Sharp text:
`
<div class="padding-main"><a href="#" class="brand-logo">Sharp</a></div>
`
For the rest of it, surround the main content (no footer included or navbar) with the div with the padding-main class:
`
<div class="padding-main"><!-- content here! --></div>
`
It's just a simple design choice, not so sure if you want to use it but in my personal opinion it looks a bit better. | 1.0 | No padding on site? - As you can see on the site, there is no space between the left of the webpage and the text on the webpage. I came up with some code that should help fix it.
You should add this CSS code:
`
.padding-main {
padding-left: 15px;
}
`
HTML for navbar the Sharp text:
`
<div class="padding-main"><a href="#" class="brand-logo">Sharp</a></div>
`
For the rest of it, surround the main content (no footer included or navbar) with the div with the padding-main class:
`
<div class="padding-main"><!-- content here! --></div>
`
It's just a simple design choice, not so sure if you want to use it but in my personal opinion it looks a bit better. | priority | no padding on site as you can see on the site there is no space between the left of the webpage and the text on the webpage i came up with some code that should help fix it you should add this css code padding main padding left html for navbar the sharp text sharp for the rest of it surround the main content no footer included or navbar with the div with the padding main class it s just a simple design choice not so sure if you want to use it but in my personal opinion it looks a bit better | 1 |
476,755 | 13,749,447,174 | IssuesEvent | 2020-10-06 10:30:22 | canonical-web-and-design/microk8s.io | https://api.github.com/repos/canonical-web-and-design/microk8s.io | closed | Offline install fails on RHEL | Docs 📚 Priority: Medium | Offline install of microk8s on RHEL 7.7 fails
## Process
https://microk8s.io/docs/install-alternatives#heading--offline
Current results:
sudo snap ack microk8s_1379.assert
sudo install microk8s_1379.snap
error: This revision of snap "microk8s_1379.snap" was published using classic confinement and thus may perform arbitrary system changes outside of the security sandbox that snaps are usually confined tom which may put your system at risk
If you understand and want to proceed repeat the command including --classic
sudo install microk8s.snap --classic
error: cannot perform the following tasks:
-Ensure prerequisites for "microk8s" are available (cannot install snap base "core": Post https://api.snapcraft.io/v2/snaps/refresh: dial tcp: lookup api.snapcraft.io on xx.xx.xx.xx:53: read tcp: xx.xx.xx.xx:35831->xx.xx.xx.xx:53: i/o timeout)
Help!
## Screenshot
[If relevant, include a screenshot.]
## Browser details
[Optionally - if you can, copy the report generated by [mybrowser.fyi](https://mybrowser.fyi/) - this might help us debug certain types of issues.]
| 1.0 | Offline install fails on RHEL - Offline install of microk8s on RHEL 7.7 fails
## Process
https://microk8s.io/docs/install-alternatives#heading--offline
Current results:
sudo snap ack microk8s_1379.assert
sudo install microk8s_1379.snap
error: This revision of snap "microk8s_1379.snap" was published using classic confinement and thus may perform arbitrary system changes outside of the security sandbox that snaps are usually confined tom which may put your system at risk
If you understand and want to proceed repeat the command including --classic
sudo install microk8s.snap --classic
error: cannot perform the following tasks:
-Ensure prerequisites for "microk8s" are available (cannot install snap base "core": Post https://api.snapcraft.io/v2/snaps/refresh: dial tcp: lookup api.snapcraft.io on xx.xx.xx.xx:53: read tcp: xx.xx.xx.xx:35831->xx.xx.xx.xx:53: i/o timeout)
Help!
## Screenshot
[If relevant, include a screenshot.]
## Browser details
[Optionally - if you can, copy the report generated by [mybrowser.fyi](https://mybrowser.fyi/) - this might help us debug certain types of issues.]
| priority | offline install fails on rhel offline install of on rhel fails process current results sudo snap ack assert sudo install snap error this revision of snap snap was published using classic confinement and thus may perform arbitrary system changes outside of the security sandbox that snaps are usually confined tom which may put your system at risk if you understand and want to proceed repeat the command including classic sudo install snap classic error cannot perform the following tasks ensure prerequisites for are available cannot install snap base core post dial tcp lookup api snapcraft io on xx xx xx xx read tcp xx xx xx xx xx xx xx xx i o timeout help screenshot browser details this might help us debug certain types of issues | 1 |
235,026 | 7,733,878,932 | IssuesEvent | 2018-05-26 17:05:48 | vinitkumar/googlecl | https://api.github.com/repos/vinitkumar/googlecl | closed | Default browser compat on osx is not correct | Priority-Medium bug imported | _From [dcol...@gmail.com](https://code.google.com/u/112934480117459012974/) on June 20, 2010 11:34:02_
Try running a browser on osx 10.6.4 and you will get an error because the osascript command is not correct. This diff fixes it.
**Attachment:** [browsers.diff](http://code.google.com/p/googlecl/issues/detail?id=95)
_Original issue: http://code.google.com/p/googlecl/issues/detail?id=95_
| 1.0 | Default browser compat on osx is not correct - _From [dcol...@gmail.com](https://code.google.com/u/112934480117459012974/) on June 20, 2010 11:34:02_
Try running a browser on osx 10.6.4 and you will get an error because the osascript command is not correct. This diff fixes it.
**Attachment:** [browsers.diff](http://code.google.com/p/googlecl/issues/detail?id=95)
_Original issue: http://code.google.com/p/googlecl/issues/detail?id=95_
| priority | default browser compat on osx is not correct from on june try running a browser on osx and you will get an error because the osascript command is not correct this diff fixes it attachment original issue | 1 |
797,411 | 28,145,938,072 | IssuesEvent | 2023-04-02 13:37:02 | berkeli/My-Coursework-Planner | https://api.github.com/repos/berkeli/My-Coursework-Planner | opened | Complete NodeJS coursework | 🐂 Size Medium 🏕 Priority Mandatory | ### Link to the coursework
https://github.com/CodeYourFuture/Node-Coursework-Week1
### Why are we doing this?
This is for testing purposes
### Maximum time in hours (Tech has max 16 per week total)
5
### How to get help
testing
### How to submit
Fork it
### How to review
tesrt
### Anything else?
test | 1.0 | Complete NodeJS coursework - ### Link to the coursework
https://github.com/CodeYourFuture/Node-Coursework-Week1
### Why are we doing this?
This is for testing purposes
### Maximum time in hours (Tech has max 16 per week total)
5
### How to get help
testing
### How to submit
Fork it
### How to review
tesrt
### Anything else?
test | priority | complete nodejs coursework link to the coursework why are we doing this this is for testing purposes maximum time in hours tech has max per week total how to get help testing how to submit fork it how to review tesrt anything else test | 1 |
754,919 | 26,408,576,970 | IssuesEvent | 2023-01-13 10:13:00 | Avaiga/taipy-config | https://api.github.com/repos/Avaiga/taipy-config | closed | Address mypy issues | ⚙️Configuration 📈 Improvement 🟨 Priority: Medium | **Description**
The propose is to address the following mypy error messages.

| 1.0 | Address mypy issues - **Description**
The propose is to address the following mypy error messages.

| priority | address mypy issues description the propose is to address the following mypy error messages | 1 |
542,379 | 15,859,357,300 | IssuesEvent | 2021-04-08 07:55:42 | AY2021S2-CS2113-T10-1/tp | https://api.github.com/repos/AY2021S2-CS2113-T10-1/tp | closed | [PE-D] List Feature with parameter OTHER | priority.High severity.Medium | 
`list other` command only lists the food that has both `category` and `location` as `OTHER`.
As shown above, the `meat` in `other` location is not listed.
<!--session: 1617437377510-3c93c0e6-8609-4fd0-9f26-7eb717a196bb-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: tzexern/ped#11 | 1.0 | [PE-D] List Feature with parameter OTHER - 
`list other` command only lists the food that has both `category` and `location` as `OTHER`.
As shown above, the `meat` in `other` location is not listed.
<!--session: 1617437377510-3c93c0e6-8609-4fd0-9f26-7eb717a196bb-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: tzexern/ped#11 | priority | list feature with parameter other list other command only lists the food that has both category and location as other as shown above the meat in other location is not listed labels severity medium type functionalitybug original tzexern ped | 1 |
405,672 | 11,880,872,030 | IssuesEvent | 2020-03-27 11:31:35 | Codaone/DEXBot | https://api.github.com/repos/Codaone/DEXBot | closed | bitshares/market.py: ValueError: could not convert string to float: | [3] Type: Bug [3] Type: Maintenance [4] Priority: Medium component: Relative Orders | This is a library issue but affects dexbot leading to "Worker xxx is disabled".
```
Traceback (most recent call last):
File "/home/vvk/devel/DEXBot-prod/dexbot/worker.py", line 122, in on_block
self.workers[worker_name].ontick(data)
File "/home/vvk/.local/share/virtualenvs/DEXBot-prod-N9mtHQyI/lib/python3.6/site-packages/Events-0.3-py3.6.egg/events/events.py", line 95, in __call__
f(*a, **kw)
File "/home/vvk/devel/DEXBot-prod/dexbot/strategies/staggered_orders.py", line 1952, in tick
self.maintain_strategy()
File "/home/vvk/devel/DEXBot-prod/dexbot/strategies/staggered_orders.py", line 194, in maintain_strategy
self.market_center_price = self.get_market_center_price(suppress_errors=True)
File "/home/vvk/devel/DEXBot-prod/dexbot/strategies/base.py", line 633, in get_market_center_price
base_amount=base_amount, exclude_own_orders=False)
File "/home/vvk/devel/DEXBot-prod/dexbot/strategies/base.py", line 678, in get_market_buy_price
return float(self.ticker().get('highestBid'))
File "/home/vvk/.local/share/virtualenvs/DEXBot-prod-N9mtHQyI/lib/python3.6/site-packages/bitshares-0.2.1-py3.6.egg/bitshares/market.py", line 174, in ticker
data["percentChange"] = float(ticker["percent_change"])
ValueError: could not convert string to float:
``` | 1.0 | bitshares/market.py: ValueError: could not convert string to float: - This is a library issue but affects dexbot leading to "Worker xxx is disabled".
```
Traceback (most recent call last):
File "/home/vvk/devel/DEXBot-prod/dexbot/worker.py", line 122, in on_block
self.workers[worker_name].ontick(data)
File "/home/vvk/.local/share/virtualenvs/DEXBot-prod-N9mtHQyI/lib/python3.6/site-packages/Events-0.3-py3.6.egg/events/events.py", line 95, in __call__
f(*a, **kw)
File "/home/vvk/devel/DEXBot-prod/dexbot/strategies/staggered_orders.py", line 1952, in tick
self.maintain_strategy()
File "/home/vvk/devel/DEXBot-prod/dexbot/strategies/staggered_orders.py", line 194, in maintain_strategy
self.market_center_price = self.get_market_center_price(suppress_errors=True)
File "/home/vvk/devel/DEXBot-prod/dexbot/strategies/base.py", line 633, in get_market_center_price
base_amount=base_amount, exclude_own_orders=False)
File "/home/vvk/devel/DEXBot-prod/dexbot/strategies/base.py", line 678, in get_market_buy_price
return float(self.ticker().get('highestBid'))
File "/home/vvk/.local/share/virtualenvs/DEXBot-prod-N9mtHQyI/lib/python3.6/site-packages/bitshares-0.2.1-py3.6.egg/bitshares/market.py", line 174, in ticker
data["percentChange"] = float(ticker["percent_change"])
ValueError: could not convert string to float:
``` | priority | bitshares market py valueerror could not convert string to float this is a library issue but affects dexbot leading to worker xxx is disabled traceback most recent call last file home vvk devel dexbot prod dexbot worker py line in on block self workers ontick data file home vvk local share virtualenvs dexbot prod lib site packages events egg events events py line in call f a kw file home vvk devel dexbot prod dexbot strategies staggered orders py line in tick self maintain strategy file home vvk devel dexbot prod dexbot strategies staggered orders py line in maintain strategy self market center price self get market center price suppress errors true file home vvk devel dexbot prod dexbot strategies base py line in get market center price base amount base amount exclude own orders false file home vvk devel dexbot prod dexbot strategies base py line in get market buy price return float self ticker get highestbid file home vvk local share virtualenvs dexbot prod lib site packages bitshares egg bitshares market py line in ticker data float ticker valueerror could not convert string to float | 1 |
673,309 | 22,957,272,645 | IssuesEvent | 2022-07-19 12:42:40 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | refactor(util/exec): remove deprecated encoding option within RawExecOptions | priority-3-medium type:refactor status:ready | ### Describe the proposed change(s).
Blocked by -
- https://github.com/renovatebot/renovate/pull/16414
Context -
- https://github.com/renovatebot/renovate/pull/16414#discussion_r917392468
Remove prop and all references to it
In `lib/util/exec/types.ts` -
```js
export interface RawExecOptions extends ChildProcessSpawnOptions {
/**
* @deprecated renovate uses utf8, encoding property is ignored.
*/
encoding: string;
``` | 1.0 | refactor(util/exec): remove deprecated encoding option within RawExecOptions - ### Describe the proposed change(s).
Blocked by -
- https://github.com/renovatebot/renovate/pull/16414
Context -
- https://github.com/renovatebot/renovate/pull/16414#discussion_r917392468
Remove prop and all references to it
In `lib/util/exec/types.ts` -
```js
export interface RawExecOptions extends ChildProcessSpawnOptions {
/**
* @deprecated renovate uses utf8, encoding property is ignored.
*/
encoding: string;
``` | priority | refactor util exec remove deprecated encoding option within rawexecoptions describe the proposed change s blocked by context remove prop and all references to it in lib util exec types ts js export interface rawexecoptions extends childprocessspawnoptions deprecated renovate uses encoding property is ignored encoding string | 1 |
127,427 | 5,030,326,885 | IssuesEvent | 2016-12-16 00:16:03 | dpn-admin/dpn-server | https://api.github.com/repos/dpn-admin/dpn-server | closed | [Deployment] git submodule commands | Priority: Medium Status: Available Type: Enhancement | The swagger-ui at /api-docs is linked to code from the `dpn-rest-spec` using a git submodule. In the parent project (this one), the submodule path is empty. It is populated after cloning this repo and issuing a `git submodule update --init --recursive` command.
So, application deployment processes must pull in the code for the `dpn-rest-spec` project in the git submodule, which looks something like this:
``` sh
$ git clone https://github.com/dpn-admin/dpn-server.git dpn-admin/dpn-server
Cloning into 'dpn-admin/dpn-server'...
$ cd dpn-admin/dpn-server
$ git submodule update --init --recursive
Submodule 'dpn-rest-spec' (https://github.com/dpn-admin/dpn-rest-spec.git) registered for path 'dpn-rest-spec'
Cloning into 'dpn-rest-spec'...
Submodule path 'dpn-rest-spec': checked out '4c1cbba135051a0a520cf183f3445a740757086a'
```
When using Capistrano, it should be taken care of with a plugin like
https://github.com/ekho/capistrano-git-submodule-strategy
Since there is no deployment code in this `dpn-server` project, this issue must be addressed by each DPN node deploying the server. There should be an example of using Capistrano for deployment in the SDR fork of this project, see https://github.com/sul-dlss/dpn-server
| 1.0 | [Deployment] git submodule commands - The swagger-ui at /api-docs is linked to code from the `dpn-rest-spec` using a git submodule. In the parent project (this one), the submodule path is empty. It is populated after cloning this repo and issuing a `git submodule update --init --recursive` command.
So, application deployment processes must pull in the code for the `dpn-rest-spec` project in the git submodule, which looks something like this:
``` sh
$ git clone https://github.com/dpn-admin/dpn-server.git dpn-admin/dpn-server
Cloning into 'dpn-admin/dpn-server'...
$ cd dpn-admin/dpn-server
$ git submodule update --init --recursive
Submodule 'dpn-rest-spec' (https://github.com/dpn-admin/dpn-rest-spec.git) registered for path 'dpn-rest-spec'
Cloning into 'dpn-rest-spec'...
Submodule path 'dpn-rest-spec': checked out '4c1cbba135051a0a520cf183f3445a740757086a'
```
When using Capistrano, it should be taken care of with a plugin like
https://github.com/ekho/capistrano-git-submodule-strategy
Since there is no deployment code in this `dpn-server` project, this issue must be addressed by each DPN node deploying the server. There should be an example of using Capistrano for deployment in the SDR fork of this project, see https://github.com/sul-dlss/dpn-server
| priority | git submodule commands the swagger ui at api docs is linked to code from the dpn rest spec using a git submodule in the parent project this one the submodule path is empty it is populated after cloning this repo and issuing a git submodule update init recursive command so application deployment processes must pull in the code for the dpn rest spec project in the git submodule which looks something like this sh git clone dpn admin dpn server cloning into dpn admin dpn server cd dpn admin dpn server git submodule update init recursive submodule dpn rest spec registered for path dpn rest spec cloning into dpn rest spec submodule path dpn rest spec checked out when using capistrano it should be taken care of with a plugin like since there is no deployment code in this dpn server project this issue must be addressed by each dpn node deploying the server there should be an example of using capistrano for deployment in the sdr fork of this project see | 1 |
438,234 | 12,624,693,543 | IssuesEvent | 2020-06-14 07:46:39 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Icon being downloaded from Github for every Google Drive upload | bug export google priority: Medium | **Describe the bug**
As stated in subject. Not sure how this made it through code review.
https://github.com/OpenRefine/OpenRefine/blob/11fbd01a1bd66dbf6858114a43d49b8edee465ff/extensions/gdata/src/com/google/refine/extension/gdata/UploadCommand.java#L174
https://github.com/OpenRefine/OpenRefine/blob/11fbd01a1bd66dbf6858114a43d49b8edee465ff/extensions/gdata/src/com/google/refine/extension/gdata/UploadCommand.java#L72
**Expected behavior**
Icon should be included in distribution and uploaded directly without being downloaded first
**OpenRefine <!--(please complete the following information)-->:**
- Version 3.4 current master
| 1.0 | Icon being downloaded from Github for every Google Drive upload - **Describe the bug**
As stated in subject. Not sure how this made it through code review.
https://github.com/OpenRefine/OpenRefine/blob/11fbd01a1bd66dbf6858114a43d49b8edee465ff/extensions/gdata/src/com/google/refine/extension/gdata/UploadCommand.java#L174
https://github.com/OpenRefine/OpenRefine/blob/11fbd01a1bd66dbf6858114a43d49b8edee465ff/extensions/gdata/src/com/google/refine/extension/gdata/UploadCommand.java#L72
**Expected behavior**
Icon should be included in distribution and uploaded directly without being downloaded first
**OpenRefine <!--(please complete the following information)-->:**
- Version 3.4 current master
| priority | icon being downloaded from github for every google drive upload describe the bug as stated in subject not sure how this made it through code review expected behavior icon should be included in distribution and uploaded directly without being downloaded first openrefine version current master | 1 |
164,947 | 6,259,324,232 | IssuesEvent | 2017-07-14 17:48:02 | tootsuite/mastodon | https://api.github.com/repos/tootsuite/mastodon | closed | Emoji Auto-complete + UTF8 representation | enhancement help wanted priority - medium ui | 
This screenshot was taken from quitter.is, the other side of the federation. Maybe Mastodon handles emoji/unicode in a nonstandard way? | 1.0 | Emoji Auto-complete + UTF8 representation - 
This screenshot was taken from quitter.is, the other side of the federation. Maybe Mastodon handles emoji/unicode in a nonstandard way? | priority | emoji auto complete representation this screenshot was taken from quitter is the other side of the federation maybe mastodon handles emoji unicode in a nonstandard way | 1 |
829,915 | 31,929,651,016 | IssuesEvent | 2023-09-19 06:22:33 | kubebb/core | https://api.github.com/repos/kubebb/core | closed | create a ConfigMap with the values.yaml when updating Component's version | enhancement help wanted priority-high difficulty-medium | When installing components, the values.yaml file is required.

| 1.0 | create a ConfigMap with the values.yaml when updating Component's version - When installing components, the values.yaml file is required.

| priority | create a configmap with the values yaml when updating component s version when installing components the values yaml file is required | 1 |
44,099 | 2,899,138,042 | IssuesEvent | 2015-06-17 09:26:11 | greenlion/PHP-SQL-Parser | https://api.github.com/repos/greenlion/PHP-SQL-Parser | closed | Line breaks making trouble detecting correct table name | bug imported Priority-Medium wontfix | _From [h.leith...@gmail.com](https://code.google.com/u/115105406225264640830/) on April 20, 2012 19:05:54_
What steps will reproduce the problem? Parse the following query:
SELECT title, introtext
FROM kj9un_content
WHERE `id`='159' What is the expected output? What do you see instead? $parsed['FROM'][0]['table'] = 'kj9un_content'
I get
$parsed['FROM'][0]['table'] = 'kj9un_content
WHERE'
_Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=43_ | 1.0 | Line breaks making trouble detecting correct table name - _From [h.leith...@gmail.com](https://code.google.com/u/115105406225264640830/) on April 20, 2012 19:05:54_
What steps will reproduce the problem? Parse the following query:
SELECT title, introtext
FROM kj9un_content
WHERE `id`='159' What is the expected output? What do you see instead? $parsed['FROM'][0]['table'] = 'kj9un_content'
I get
$parsed['FROM'][0]['table'] = 'kj9un_content
WHERE'
_Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=43_ | priority | line breaks making trouble detecting correct table name from on april what steps will reproduce the problem parse the following query select title introtext from content where id what is the expected output what do you see instead parsed content i get parsed content where original issue | 1 |
246,044 | 7,893,128,463 | IssuesEvent | 2018-06-28 17:01:04 | visit-dav/issues-test | https://api.github.com/repos/visit-dav/issues-test | closed | host profile for rzgpu? | Expected Use: 3 - Occasional Feature Impact: 3 - Medium OS: All Priority: Normal Support Group: Any | Dan Laney needed a host profile for rzgpu.
He created a personal one by copying from rzalastor, but then couldn't use it with his script that utilized visit_utils.
Made me think, should we have a public host profile for rz gpu?
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 10/17/2014 11:44 am
Original update: 10/22/2014 02:58 pm
Ticket number: 2022 | 1.0 | host profile for rzgpu? - Dan Laney needed a host profile for rzgpu.
He created a personal one by copying from rzalastor, but then couldn't use it with his script that utilized visit_utils.
Made me think, should we have a public host profile for rz gpu?
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 10/17/2014 11:44 am
Original update: 10/22/2014 02:58 pm
Ticket number: 2022 | priority | host profile for rzgpu dan laney needed a host profile for rzgpu he created a personal one by copying from rzalastor but then couldn t use it with his script that utilized visit utils made me think should we have a public host profile for rz gpu redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author kathleen biagas original creation am original update pm ticket number | 1 |
610,074 | 18,893,335,112 | IssuesEvent | 2021-11-15 15:22:43 | bounswe/2021SpringGroup4 | https://api.github.com/repos/bounswe/2021SpringGroup4 | closed | Review the Project Documentations and SRS Document to Catch-up the Project | individual Priority: Medium Status: Completed Type: Organization | As a new-coming team member, I should read the project plans and documentations to catch-up with the project and start the hands-on coding. | 1.0 | Review the Project Documentations and SRS Document to Catch-up the Project - As a new-coming team member, I should read the project plans and documentations to catch-up with the project and start the hands-on coding. | priority | review the project documentations and srs document to catch up the project as a new coming team member i should read the project plans and documentations to catch up with the project and start the hands on coding | 1 |
586,637 | 17,594,077,296 | IssuesEvent | 2021-08-17 00:49:58 | dataware-tools/dataware-tools | https://api.github.com/repos/dataware-tools/dataware-tools | closed | COMFIRMでなくDELETEのほうが良い | kind/feature priority/medium wg/web-app area/UIUX | COMFIRMでなくDELETEのほうが良い
色は赤がよく使われているので赤を検討したほうがいいと思います
<img width="656" alt="スクリーンショット 2021-07-02 12 33 35" src="https://user-images.githubusercontent.com/61043090/124219131-a3567000-db36-11eb-8c1d-537aa13e2b4d.png">
例)

_Originally posted by @yuri-sakai in https://github.com/dataware-tools/dataware-tools/issues/2#issuecomment-872697950_ | 1.0 | COMFIRMでなくDELETEのほうが良い - COMFIRMでなくDELETEのほうが良い
色は赤がよく使われているので赤を検討したほうがいいと思います
<img width="656" alt="スクリーンショット 2021-07-02 12 33 35" src="https://user-images.githubusercontent.com/61043090/124219131-a3567000-db36-11eb-8c1d-537aa13e2b4d.png">
例)

_Originally posted by @yuri-sakai in https://github.com/dataware-tools/dataware-tools/issues/2#issuecomment-872697950_ | priority | comfirmでなくdeleteのほうが良い comfirmでなくdeleteのほうが良い 色は赤がよく使われているので赤を検討したほうがいいと思います img width alt スクリーンショット src 例) originally posted by yuri sakai in | 1 |
61,426 | 3,145,697,513 | IssuesEvent | 2015-09-14 19:14:50 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | opened | File Download: Download (selected) button not working, results in exception. | Component: File Upload & Handling Priority: Medium Status: Dev Type: Bug |
Download (all/selected) is not working, selecting files and clicking results in an exception. This button will change to be just "Download" with behavior being download selected, which encompasses all. | 1.0 | File Download: Download (selected) button not working, results in exception. -
Download (all/selected) is not working, selecting files and clicking results in an exception. This button will change to be just "Download" with behavior being download selected, which encompasses all. | priority | file download download selected button not working results in exception download all selected is not working selecting files and clicking results in an exception this button will change to be just download with behavior being download selected which encompasses all | 1 |
718,732 | 24,730,138,591 | IssuesEvent | 2022-10-20 16:51:50 | AY2223S1-CS2103T-W13-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-W13-4/tp | closed | As a team leader, I want to be able to randomly assign a task to any team member | type.Story priority.Medium | ...so that I can assign tasks easily if nobody has any preference. | 1.0 | As a team leader, I want to be able to randomly assign a task to any team member - ...so that I can assign tasks easily if nobody has any preference. | priority | as a team leader i want to be able to randomly assign a task to any team member so that i can assign tasks easily if nobody has any preference | 1 |
424,454 | 12,311,408,749 | IssuesEvent | 2020-05-12 12:22:22 | debops/debops | https://api.github.com/repos/debops/debops | closed | [debops.ldap] fusiondirectory will not work on a DIT initialized by ldap/init-directory | bug priority: medium tag: LDAP | Trying to setup an apt installed `fusiondirectory` on top of a fresh `ldap/init-directory` blocks right after it binds and displays the following errors:
* __Warning__: Missing optional object class "fdTemplate"! Schema "template-fd.schema": Used to store templates.
* __Error__: Missing required object class "fdLockEntry"! Schema "core-fd.schema": Main FusionDirectory schema
* __Error__: Missing required object class "fusionDirectoryConf"! Schema "core-fd-conf.schema": Schema used to store FusionDirectory configuration
* __Error__: Your schema is configured to support mixed groups, but this plugin is not present. Schema "nis.schema": The objectClass "posixGroup" must be STRUCTURAL
Tested on a fresh `buster` vm with very minimal other configuration.
Just dropping those here for anyone interested, I'm just going to use Apache Directory Studio. | 1.0 | [debops.ldap] fusiondirectory will not work on a DIT initialized by ldap/init-directory - Trying to setup an apt installed `fusiondirectory` on top of a fresh `ldap/init-directory` blocks right after it binds and displays the following errors:
* __Warning__: Missing optional object class "fdTemplate"! Schema "template-fd.schema": Used to store templates.
* __Error__: Missing required object class "fdLockEntry"! Schema "core-fd.schema": Main FusionDirectory schema
* __Error__: Missing required object class "fusionDirectoryConf"! Schema "core-fd-conf.schema": Schema used to store FusionDirectory configuration
* __Error__: Your schema is configured to support mixed groups, but this plugin is not present. Schema "nis.schema": The objectClass "posixGroup" must be STRUCTURAL
Tested on a fresh `buster` vm with very minimal other configuration.
Just dropping those here for anyone interested, I'm just going to use Apache Directory Studio. | priority | fusiondirectory will not work on a dit initialized by ldap init directory trying to setup an apt installed fusiondirectory on top of a fresh ldap init directory blocks right after it binds and displays the following errors warning missing optional object class fdtemplate schema template fd schema used to store templates error missing required object class fdlockentry schema core fd schema main fusiondirectory schema error missing required object class fusiondirectoryconf schema core fd conf schema schema used to store fusiondirectory configuration error your schema is configured to support mixed groups but this plugin is not present schema nis schema the objectclass posixgroup must be structural tested on a fresh buster vm with very minimal other configuration just dropping those here for anyone interested i m just going to use apache directory studio | 1 |
126,255 | 4,981,766,861 | IssuesEvent | 2016-12-07 09:11:03 | ludo237/vuejs-carousel | https://api.github.com/repos/ludo237/vuejs-carousel | closed | Improve the example | Priority: Medium Status: Completed Type: Improvement | The current example is really ugly and there's a lot of room for improvements. Here's a list of things that should be done:
- Add a better graphics for the thumbnail
- Add a little bit of text to explain whats is going on
- Use different size for the images in order to demonstrate how well the theater works with all kind of images | 1.0 | Improve the example - The current example is really ugly and there's a lot of room for improvements. Here's a list of things that should be done:
- Add a better graphics for the thumbnail
- Add a little bit of text to explain whats is going on
- Use different size for the images in order to demonstrate how well the theater works with all kind of images | priority | improve the example the current example is really ugly and there s a lot of room for improvements here s a list of things that should be done add a better graphics for the thumbnail add a little bit of text to explain whats is going on use different size for the images in order to demonstrate how well the theater works with all kind of images | 1 |
147,692 | 5,650,863,518 | IssuesEvent | 2017-04-08 00:06:07 | redcross/arcdata | https://api.github.com/repos/redcross/arcdata | opened | Incidents created in Cassia and Minidoka Counties (Idaho) affiliate with the old Cassia Territory instead of the newly-created Twin Falls County Territory | affects-dispatch medium-priority | Newly-created incidents should affiliate with the new Twin Falls County Territory. | 1.0 | Incidents created in Cassia and Minidoka Counties (Idaho) affiliate with the old Cassia Territory instead of the newly-created Twin Falls County Territory - Newly-created incidents should affiliate with the new Twin Falls County Territory. | priority | incidents created in cassia and minidoka counties idaho affiliate with the old cassia territory instead of the newly created twin falls county territory newly created incidents should affiliate with the new twin falls county territory | 1 |
529,535 | 15,390,815,082 | IssuesEvent | 2021-03-03 13:52:49 | AY2021S2-CS2103T-W14-4/tp | https://api.github.com/repos/AY2021S2-CS2103T-W14-4/tp | closed | Tag task | priority.MEDIUM type.story | As a lazy university student, I can tag the task based on category, so that I can organise my tasks efficiently. | 1.0 | Tag task - As a lazy university student, I can tag the task based on category, so that I can organise my tasks efficiently. | priority | tag task as a lazy university student i can tag the task based on category so that i can organise my tasks efficiently | 1 |
765,089 | 26,832,986,762 | IssuesEvent | 2023-02-02 17:16:03 | Waygone/JuiceJam | https://api.github.com/repos/Waygone/JuiceJam | closed | Player can sometimes change x velocity instantly when changing direction. | bug physics player priority: medium | Video of the issue:
https://user-images.githubusercontent.com/122162160/215269249-61c45467-3b7e-4c16-b4d5-0a2138d99810.mp4
Environment:
itch.io version 1 build
Reproducibility rate:
5%
Steps to reproduce:
1. Run the game and make sure you can move the player.
2. Move left or right until you're about to hit the maximum speed and then change directions instantly.
I think this only happens when you change direction at certain points while accelerating in the current direction.
For example, I can't reproduce the bug while moving at the maximum speed. | 1.0 | Player can sometimes change x velocity instantly when changing direction. - Video of the issue:
https://user-images.githubusercontent.com/122162160/215269249-61c45467-3b7e-4c16-b4d5-0a2138d99810.mp4
Environment:
itch.io version 1 build
Reproducibility rate:
5%
Steps to reproduce:
1. Run the game and make sure you can move the player.
2. Move left or right until you're about to hit the maximum speed and then change directions instantly.
I think this only happens when you change direction at certain points while accelerating in the current direction.
For example, I can't reproduce the bug while moving at the maximum speed. | priority | player can sometimes change x velocity instantly when changing direction video of the issue environment itch io version build reproducibility rate steps to reproduce run the game and make sure you can move the player move left or right until you re about to hit the maximum speed and then change directions instantly i think this only happens when you change direction at certain points while accelerating in the current direction for example i can t reproduce the bug while moving at the maximum speed | 1 |
162,016 | 6,145,577,717 | IssuesEvent | 2017-06-27 11:54:32 | ressec/thot | https://api.github.com/repos/ressec/thot | opened | Create the room entity | Domain: Actor Priority: Medium Type: New Feature | ## Purpose
The **room** actor represents a container for **user**s. The users contained in a room can discuss together.
It is responsible for handling all messages of the **RoomMessageProtocol** (see #26). | 1.0 | Create the room entity - ## Purpose
The **room** actor represents a container for **user**s. The users contained in a room can discuss together.
It is responsible for handling all messages of the **RoomMessageProtocol** (see #26). | priority | create the room entity purpose the room actor represents a container for user s the users contained in a room can discuss together it is responsible for handling all messages of the roommessageprotocol see | 1 |
475,776 | 13,725,700,252 | IssuesEvent | 2020-10-03 19:42:06 | sous-chefs/nodejs | https://api.github.com/repos/sous-chefs/nodejs | closed | nodejs cookbook does not install npm on ubuntu | Bug Priority: Medium hacktoberfest | ### Cookbook version
[Version of the cookbook where you are encountering the issue]
3.0.0
### Chef-client version
[Version of chef-client in your environment]
Chef: 12.18.31
### Platform Details
[Operating system distribution and release version. Cloud provider if running in the cloud]
Ubuntu 16.04
### Scenario:
[What you are trying to achieve and you can't?]
Trying to install npm cli.
### Steps to Reproduce:
[If you are filing an issue what are the things we need to do in order to repro your problem? How are you using this cookbook or any resources it includes?]
My cookbook contains:
```
include_recipe "nodejs"
include_recipe "nodejs::npm"
nodejs_npm "bower"
```
### Expected Result:
[What are you expecting to happen as the consequence of above reproduction steps?]
npm should install bower.
### Actual Result:
[What actually happens after the reproduction steps? Include the error output or a link to a gist if possible.]
The npm cli is not installed. So the lwrp nodejs_npm does not work.
Error output:
```
Error executing action `run` on resource 'execute[install NPM package bower]'
Errno::ENOENT
No such file or directory - npm
``` | 1.0 | nodejs cookbook does not install npm on ubuntu - ### Cookbook version
[Version of the cookbook where you are encountering the issue]
3.0.0
### Chef-client version
[Version of chef-client in your environment]
Chef: 12.18.31
### Platform Details
[Operating system distribution and release version. Cloud provider if running in the cloud]
Ubuntu 16.04
### Scenario:
[What you are trying to achieve and you can't?]
Trying to install npm cli.
### Steps to Reproduce:
[If you are filing an issue what are the things we need to do in order to repro your problem? How are you using this cookbook or any resources it includes?]
My cookbook contains:
```
include_recipe "nodejs"
include_recipe "nodejs::npm"
nodejs_npm "bower"
```
### Expected Result:
[What are you expecting to happen as the consequence of above reproduction steps?]
npm should install bower.
### Actual Result:
[What actually happens after the reproduction steps? Include the error output or a link to a gist if possible.]
The npm cli is not installed. So the lwrp nodejs_npm does not work.
Error output:
```
Error executing action `run` on resource 'execute[install NPM package bower]'
Errno::ENOENT
No such file or directory - npm
``` | priority | nodejs cookbook does not install npm on ubuntu cookbook version chef client version chef platform details ubuntu scenario trying to install npm cli steps to reproduce my cookbook contains include recipe nodejs include recipe nodejs npm nodejs npm bower expected result npm should install bower actual result the npm cli is not installed so the lwrp nodejs npm does not work error output error executing action run on resource execute errno enoent no such file or directory npm | 1 |
544,822 | 15,897,892,620 | IssuesEvent | 2021-04-11 23:01:32 | QuangTran304/Instagram-Clone | https://api.github.com/repos/QuangTran304/Instagram-Clone | closed | [USER STORY] As a user, I want to unfollow another user. | Epic #002 Medium Priority Medium Risk User Story | **Description**: _As a_ user, _I want to_ unfollow another user _so that_ I can manage my connections.
**Steps**:
- Design an unfollow button.
- Remove unfollowed user from database containing followed users.
**Story Points**: 3
**Risk**: Medium
**Priority**: Medium
**Acceptance Test: **
Note: The task #52 must be completed before attempting this.
1. A list with all the usernames of the people being followed by this user should appear.
2. The unfollow button should be seen next to the username that all the users that are currently being followed.
3. Once the unfollow button is pressed, the name should disappear from the list of followed people.
4. In the database's collection under the following users, the name of the unfollowed user should not be included.
5. Once the unfollow button is implemented and pressing it makes the current user unfollow the user it has selected, the user story is complete. | 1.0 | [USER STORY] As a user, I want to unfollow another user. - **Description**: _As a_ user, _I want to_ unfollow another user _so that_ I can manage my connections.
**Steps**:
- Design an unfollow button.
- Remove unfollowed user from database containing followed users.
**Story Points**: 3
**Risk**: Medium
**Priority**: Medium
**Acceptance Test: **
Note: The task #52 must be completed before attempting this.
1. A list with all the usernames of the people being followed by this user should appear.
2. The unfollow button should be seen next to the username that all the users that are currently being followed.
3. Once the unfollow button is pressed, the name should disappear from the list of followed people.
4. In the database's collection under the following users, the name of the unfollowed user should not be included.
5. Once the unfollow button is implemented and pressing it makes the current user unfollow the user it has selected, the user story is complete. | priority | as a user i want to unfollow another user description as a user i want to unfollow another user so that i can manage my connections steps design an unfollow button remove unfollowed user from database containing followed users story points risk medium priority medium acceptance test note the task must be completed before attempting this a list with all the usernames of the people being followed by this user should appear the unfollow button should be seen next to the username that all the users that are currently being followed once the unfollow button is pressed the name should disappear from the list of followed people in the database s collection under the following users the name of the unfollowed user should not be included once the unfollow button is implemented and pressing it makes the current user unfollow the user it has selected the user story is complete | 1 |
679,559 | 23,237,198,994 | IssuesEvent | 2022-08-03 12:50:49 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | opened | Full name for MS user | enhancement Priority: Medium Internal Good first issue | ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
As part of the user creation the full name should be included.

It should be also possible to search on it using the search field in the user manager.
**What kind of improvement you want to add?** (check one with "x", remove the others)
- [X] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
I | 1.0 | Full name for MS user - ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
As part of the user creation the full name should be included.

It should be also possible to search on it using the search field in the user manager.
**What kind of improvement you want to add?** (check one with "x", remove the others)
- [X] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
I | priority | full name for ms user description as part of the user creation the full name should be included it should be also possible to search on it using the search field in the user manager what kind of improvement you want to add check one with x remove the others minor changes to existing features code style update formatting local variables refactoring no functional changes no api changes build related changes ci related changes other please describe other useful information i | 1 |
203,038 | 7,057,390,831 | IssuesEvent | 2018-01-04 16:17:34 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | achievement sound plays inappropriately | priority: medium section: Achievements/Popups/Notifications status: issue: in progress | As reported by Embrace_ ( 9030eb72-300a-4d82-997d-8094504a3c99 ) in the Report a Bug guild:
"There's a sound playing when it shouldn't be... it just played four times within about 10 seconds while I was organizing my To-dos, including a To-do that has checklist items. I think it's the same sound that normally plays when I first log into Habitica, and some other times when I achieve something. I'm using Gokul's theme and it's a bold, loud trumpet sound. Normally I like it but when it's blaring through my speakers over and over and I'm only organizing my To-dos, obviously it's unwanted."
| 1.0 | achievement sound plays inappropriately - As reported by Embrace_ ( 9030eb72-300a-4d82-997d-8094504a3c99 ) in the Report a Bug guild:
"There's a sound playing when it shouldn't be... it just played four times within about 10 seconds while I was organizing my To-dos, including a To-do that has checklist items. I think it's the same sound that normally plays when I first log into Habitica, and some other times when I achieve something. I'm using Gokul's theme and it's a bold, loud trumpet sound. Normally I like it but when it's blaring through my speakers over and over and I'm only organizing my To-dos, obviously it's unwanted."
| priority | achievement sound plays inappropriately as reported by embrace in the report a bug guild there s a sound playing when it shouldn t be it just played four times within about seconds while i was organizing my to dos including a to do that has checklist items i think it s the same sound that normally plays when i first log into habitica and some other times when i achieve something i m using gokul s theme and it s a bold loud trumpet sound normally i like it but when it s blaring through my speakers over and over and i m only organizing my to dos obviously it s unwanted | 1 |
249,416 | 7,961,564,596 | IssuesEvent | 2018-07-13 11:15:18 | poanetwork/token-wizard | https://api.github.com/repos/poanetwork/token-wizard | closed | (Fix) configure Token Wizard 2.0 to support Auth-os Proxy for crowdsales | awaiting for review medium priority migration to auth-os | Configure Token Wizard 2.0 to support Auth-os Proxy contracts for crowdsales:
https://ropsten.etherscan.io/address/0x86501d6bbe1e876db117e313ed7bcf58691cf378#code | 1.0 | (Fix) configure Token Wizard 2.0 to support Auth-os Proxy for crowdsales - Configure Token Wizard 2.0 to support Auth-os Proxy contracts for crowdsales:
https://ropsten.etherscan.io/address/0x86501d6bbe1e876db117e313ed7bcf58691cf378#code | priority | fix configure token wizard to support auth os proxy for crowdsales configure token wizard to support auth os proxy contracts for crowdsales | 1 |
455,387 | 13,125,743,208 | IssuesEvent | 2020-08-06 07:19:22 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | opened | give coordinators ability to mark legacy travel board hearings as requested video or virtual hearings | Priority: Medium Product: caseflow-hearings Stakeholder: BVA Team: Tango 💃 | Connects #14857
Follows #14898
## Description
Currently, if a coordinator loads the case details page for a legacy appeal with a connected travel board hearing, they cannot take any action on the appeal. Instead, we want them to be able to bring the hearing into Caseflow by "converting" the travel board hearing into a request to schedule a video or virtual hearing.
Create a new task type that offers two actions: `Convert hearing type to video`, and `Convert hearing type to virtual`. When an action is selected, the new request type will be recorded as described in #14898.
In a future ticket, additional steps will be added after either action is taken (a confirmation modal; fields to add notes, an email address, or other metadata; etc.). Those additional steps are out of scope for this ticket.
## Acceptance criteria
- [ ] Please put this work behind the feature toggle: `convert_travel_board_to_video_or_virtual`
- [ ] A hearing coordinator user sees a task with available actions when loading the case details page of a legacy appeal with a travel board hearing
- [ ] Selecting an available action records the new request type and completes the task, opening its parent `ScheduleHearingTask`
- [ ] Selecting either available action also changes the hearing type in VACOLS to video, by setting `bfdocind` on the legacy appeal record to `'V'`
## Technical requirements/suggestions
- The new task will be the child of a `ScheduleHearingTask`, maybe called `ConvertTravelBoardHearingRequestTask`.
- Actions on the new task will be available to users with the "Build HearSched" role.
- The new task (and its parents) will be created when the case details page for a legacy travel board hearing is loaded, if they don't already exist, or haven't ever existed.
- The `ConvertTravelBoardHearingRequestTask` will be assigned to [To Be Decided], but will not display in any task queue.
- When the `ConvertTravelBoardHearingRequestTask` is completed (by taking either of the available actions), the request type will be recorded as described in #14898, and its parent `ScheduleHearingTask` will switch to `assigned` status.
## Background/context/resources
[Figma link for design in progress](https://www.figma.com/proto/V87TZArfdurCGJiEjQ73ES/Virtual-Hearings?node-id=6058%3A3298&viewport=-131%2C299%2C0.041808679699897766&scaling=min-zoom). Implementing this design is out of scope for this ticket.
| 1.0 | give coordinators ability to mark legacy travel board hearings as requested video or virtual hearings - Connects #14857
Follows #14898
## Description
Currently, if a coordinator loads the case details page for a legacy appeal with a connected travel board hearing, they cannot take any action on the appeal. Instead, we want them to be able to bring the hearing into Caseflow by "converting" the travel board hearing into a request to schedule a video or virtual hearing.
Create a new task type that offers two actions: `Convert hearing type to video`, and `Convert hearing type to virtual`. When an action is selected, the new request type will be recorded as described in #14898.
In a future ticket, additional steps will be added after either action is taken (a confirmation modal; fields to add notes, an email address, or other metadata; etc.). Those additional steps are out of scope for this ticket.
## Acceptance criteria
- [ ] Please put this work behind the feature toggle: `convert_travel_board_to_video_or_virtual`
- [ ] A hearing coordinator user sees a task with available actions when loading the case details page of a legacy appeal with a travel board hearing
- [ ] Selecting an available action records the new request type and completes the task, opening its parent `ScheduleHearingTask`
- [ ] Selecting either available action also changes the hearing type in VACOLS to video, by setting `bfdocind` on the legacy appeal record to `'V'`
## Technical requirements/suggestions
- The new task will be the child of a `ScheduleHearingTask`, maybe called `ConvertTravelBoardHearingRequestTask`.
- Actions on the new task will be available to users with the "Build HearSched" role.
- The new task (and its parents) will be created when the case details page for a legacy travel board hearing is loaded, if they don't already exist, or haven't ever existed.
- The `ConvertTravelBoardHearingRequestTask` will be assigned to [To Be Decided], but will not display in any task queue.
- When the `ConvertTravelBoardHearingRequestTask` is completed (by taking either of the available actions), the request type will be recorded as described in #14898, and its parent `ScheduleHearingTask` will switch to `assigned` status.
## Background/context/resources
[Figma link for design in progress](https://www.figma.com/proto/V87TZArfdurCGJiEjQ73ES/Virtual-Hearings?node-id=6058%3A3298&viewport=-131%2C299%2C0.041808679699897766&scaling=min-zoom). Implementing this design is out of scope for this ticket.
| priority | give coordinators ability to mark legacy travel board hearings as requested video or virtual hearings connects follows description currently if a coordinator loads the case details page for a legacy appeal with a connected travel board hearing they cannot take any action on the appeal instead we want them to be able to bring the hearing into caseflow by converting the travel board hearing into a request to schedule a video or virtual hearing create a new task type that offers two actions convert hearing type to video and convert hearing type to virtual when an action is selected the new request type will be recorded as described in in a future ticket additional steps will be added after either action is taken a confirmation modal fields to add notes an email address or other metadata etc those additional steps are out of scope for this ticket acceptance criteria please put this work behind the feature toggle convert travel board to video or virtual a hearing coordinator user sees a task with available actions when loading the case details page of a legacy appeal with a travel board hearing selecting an available action records the new request type and completes the task opening its parent schedulehearingtask selecting either available action also changes the hearing type in vacols to video by setting bfdocind on the legacy appeal record to v technical requirements suggestions the new task will be the child of a schedulehearingtask maybe called converttravelboardhearingrequesttask actions on the new task will be available to users with the build hearsched role the new task and its parents will be created when the case details page for a legacy travel board hearing is loaded if they don t already exist or haven t ever existed the converttravelboardhearingrequesttask will be assigned to but will not display in any task queue when the converttravelboardhearingrequesttask is completed by taking either of the available actions the request type will be recorded as described in and its parent schedulehearingtask will switch to assigned status background context resources implementing this design is out of scope for this ticket | 1 |
673,440 | 22,969,670,241 | IssuesEvent | 2022-07-20 00:59:50 | stackcollision/Nebulous-BugReporting | https://api.github.com/repos/stackcollision/Nebulous-BugReporting | closed | AI does not use MSL3. | bug branch-modmis incorrect behavior priority medium | **Describe the bug**
AI does not use MSL3.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to skirmish mode, use any fleet that has a MSL3 and is controlled by the AI.
2. During fight, AI will not attack using torpedos.
**Expected behavior**
AI will shoot torpedo at the player.
**Additional context**
None
**Attachments**


| 1.0 | AI does not use MSL3. - **Describe the bug**
AI does not use MSL3.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to skirmish mode, use any fleet that has a MSL3 and is controlled by the AI.
2. During fight, AI will not attack using torpedos.
**Expected behavior**
AI will shoot torpedo at the player.
**Additional context**
None
**Attachments**


| priority | ai does not use describe the bug ai does not use to reproduce steps to reproduce the behavior go to skirmish mode use any fleet that has a and is controlled by the ai during fight ai will not attack using torpedos expected behavior ai will shoot torpedo at the player additional context none attachments | 1 |
647,993 | 21,161,783,860 | IssuesEvent | 2022-04-07 10:00:24 | bevy-cheatbook/bevy-cheatbook | https://api.github.com/repos/bevy-cheatbook/bevy-cheatbook | closed | Pitfall: in bevy 0.6 meshes require an exact order of vertex attributes | C-enhancement S-pitfalls V-current 0- high priority Z-medium | A lot of users have been burned on this. bevy_pbr 0.6 has hardcoded vertex attribute layout and needs: positions, normals, uvs. | 1.0 | Pitfall: in bevy 0.6 meshes require an exact order of vertex attributes - A lot of users have been burned on this. bevy_pbr 0.6 has hardcoded vertex attribute layout and needs: positions, normals, uvs. | priority | pitfall in bevy meshes require an exact order of vertex attributes a lot of users have been burned on this bevy pbr has hardcoded vertex attribute layout and needs positions normals uvs | 1 |
821,850 | 30,839,362,403 | IssuesEvent | 2023-08-02 09:35:51 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | [CAL-2097] RFC: Migrating next/router hooks to next/navigation hooks | ✨ feature 🧹 Improvements Medium priority ui performance | Part of the App Router migration plan #9923
[Official docs about migrating to next/navigation](https://nextjs.org/docs/app/building-your-application/upgrading/app-router-migration#step-5-migrating-routing-hooks)
* [ ] Replace router.query usages with useSearchParams
* [ ] Replace router.asPath with usePathname
* [ ] Replace router.pathname with usePathname
* [ ] Remove the router.isFallback usages
* [ ] Replace router.isReady with true
* [ ] Reduce number of router.push and router.replace arguments to 1
* [ ] Remove router.basePath usages and implement this feature in a different way
* [ ] Remove router.locale, router.locales , router.defaultLocale, and router.domainLocales and implement internationalization in a different way
* [ ] Remove router.events
* [ ] Replace useRouter from next/router to useRouter from next/navigation
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-2097](https://linear.app/calcom/issue/CAL-2097/rfc-migrating-nextrouter-hooks-to-nextnavigation-hooks)</sub> | 1.0 | [CAL-2097] RFC: Migrating next/router hooks to next/navigation hooks - Part of the App Router migration plan #9923
[Official docs about migrating to next/navigation](https://nextjs.org/docs/app/building-your-application/upgrading/app-router-migration#step-5-migrating-routing-hooks)
* [ ] Replace router.query usages with useSearchParams
* [ ] Replace router.asPath with usePathname
* [ ] Replace router.pathname with usePathname
* [ ] Remove the router.isFallback usages
* [ ] Replace router.isReady with true
* [ ] Reduce number of router.push and router.replace arguments to 1
* [ ] Remove router.basePath usages and implement this feature in a different way
* [ ] Remove router.locale, router.locales , router.defaultLocale, and router.domainLocales and implement internationalization in a different way
* [ ] Remove router.events
* [ ] Replace useRouter from next/router to useRouter from next/navigation
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-2097](https://linear.app/calcom/issue/CAL-2097/rfc-migrating-nextrouter-hooks-to-nextnavigation-hooks)</sub> | priority | rfc migrating next router hooks to next navigation hooks part of the app router migration plan replace router query usages with usesearchparams replace router aspath with usepathname replace router pathname with usepathname remove the router isfallback usages replace router isready with true reduce number of router push and router replace arguments to remove router basepath usages and implement this feature in a different way remove router locale router locales router defaultlocale and router domainlocales and implement internationalization in a different way remove router events replace userouter from next router to userouter from next navigation from | 1 |
176,710 | 6,564,280,378 | IssuesEvent | 2017-09-08 00:21:22 | OpenBazaar/openbazaar-desktop | https://api.github.com/repos/OpenBazaar/openbazaar-desktop | closed | Free shipping filter not working | bug Medium Priority | Looks like the client might not be detecting ALL in the free shipping field in the index | 1.0 | Free shipping filter not working - Looks like the client might not be detecting ALL in the free shipping field in the index | priority | free shipping filter not working looks like the client might not be detecting all in the free shipping field in the index | 1 |
55,858 | 3,074,977,065 | IssuesEvent | 2015-08-20 10:47:29 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | mediainfo - возможность выбирать папки, из медиа-файлов в которых Flylink будет получать медиа данные | bug imported Priority-Medium | _From [avgust.m...@gmail.com](https://code.google.com/u/114150837111940798320/) on February 06, 2011 17:08:41_
Реализовать возможность выбирать папки, из медиа-файлов в которых Flylink (mediainfo) будет получать данные.
К примеру, у многих отшарена папка Games с установленными играми. Так в ней медиа файлов (wav, mp3, avi и т.д.) бывает больше чем в других папках. Получать из них данные нет смысла.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=341_ | 1.0 | mediainfo - возможность выбирать папки, из медиа-файлов в которых Flylink будет получать медиа данные - _From [avgust.m...@gmail.com](https://code.google.com/u/114150837111940798320/) on February 06, 2011 17:08:41_
Реализовать возможность выбирать папки, из медиа-файлов в которых Flylink (mediainfo) будет получать данные.
К примеру, у многих отшарена папка Games с установленными играми. Так в ней медиа файлов (wav, mp3, avi и т.д.) бывает больше чем в других папках. Получать из них данные нет смысла.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=341_ | priority | mediainfo возможность выбирать папки из медиа файлов в которых flylink будет получать медиа данные from on february реализовать возможность выбирать папки из медиа файлов в которых flylink mediainfo будет получать данные к примеру у многих отшарена папка games с установленными играми так в ней медиа файлов wav avi и т д бывает больше чем в других папках получать из них данные нет смысла original issue | 1 |
89,329 | 3,792,791,825 | IssuesEvent | 2016-03-22 11:13:23 | BugBusterSWE/documentation | https://api.github.com/repos/BugBusterSWE/documentation | opened | Consuntivo fase progettazione architetturale | Manager priority:medium | *Documento in cui si trova il problema*: Piano di Progetto
*Descrizione del problema*:
Manca consuntivo per la fase di progettazione architetturale
*Punti da svolgere per risolvere il problema*:
- [ ] Effettuare la stesura del consuntivo per la fase di progettazione architetturale
Link task: [https://bugbusters.teamwork.com/tasks/5958183](https://bugbusters.teamwork.com/tasks/5958183) | 1.0 | Consuntivo fase progettazione architetturale - *Documento in cui si trova il problema*: Piano di Progetto
*Descrizione del problema*:
Manca consuntivo per la fase di progettazione architetturale
*Punti da svolgere per risolvere il problema*:
- [ ] Effettuare la stesura del consuntivo per la fase di progettazione architetturale
Link task: [https://bugbusters.teamwork.com/tasks/5958183](https://bugbusters.teamwork.com/tasks/5958183) | priority | consuntivo fase progettazione architetturale documento in cui si trova il problema piano di progetto descrizione del problema manca consuntivo per la fase di progettazione architetturale punti da svolgere per risolvere il problema effettuare la stesura del consuntivo per la fase di progettazione architetturale link task | 1 |
807,379 | 29,999,293,682 | IssuesEvent | 2023-06-26 08:13:35 | Field-Passer/newFieldPasser-BE | https://api.github.com/repos/Field-Passer/newFieldPasser-BE | closed | feat: 관심 글 등록, 조회, 삭제 기능 구현 | For: API Priority: Medium Status: Completed Type: Feature | ## Description(설명)
관심 글 등록, 조회, 삭제 구현하겠습니다.
## Tasks(New feature)
- [x] 관심 글 등록
- [x] 관심 글 조회
- [x] 관심 글 삭제
## References
| 1.0 | feat: 관심 글 등록, 조회, 삭제 기능 구현 - ## Description(설명)
관심 글 등록, 조회, 삭제 구현하겠습니다.
## Tasks(New feature)
- [x] 관심 글 등록
- [x] 관심 글 조회
- [x] 관심 글 삭제
## References
| priority | feat 관심 글 등록 조회 삭제 기능 구현 description 설명 관심 글 등록 조회 삭제 구현하겠습니다 tasks new feature 관심 글 등록 관심 글 조회 관심 글 삭제 references | 1 |
365,663 | 10,790,356,181 | IssuesEvent | 2019-11-05 14:41:05 | AY1920S1-CS2113T-W17-3/main | https://api.github.com/repos/AY1920S1-CS2113T-W17-3/main | closed | As an achievement oriented user, I can gain achievements when I achieve system pre-defined goals | priority.Medium type.Story | So that I am motivated to pursue my financial goal. | 1.0 | As an achievement oriented user, I can gain achievements when I achieve system pre-defined goals - So that I am motivated to pursue my financial goal. | priority | as an achievement oriented user i can gain achievements when i achieve system pre defined goals so that i am motivated to pursue my financial goal | 1 |
31,100 | 2,731,808,100 | IssuesEvent | 2015-04-16 22:40:18 | metapolator/metapolator | https://api.github.com/repos/metapolator/metapolator | closed | Tab can misalign pages | bug Priority Medium UI | Without diving into the full complexity of https://github.com/metapolator/metapolator/issues/368 I noticed that when on Parameters page, I double click a master's name to rename it, type a new name, and type the tab key, then the focus shifts to a widget in the design spaces page, and this offsets the page-sliding mechanism, and I can't recover it. | 1.0 | Tab can misalign pages - Without diving into the full complexity of https://github.com/metapolator/metapolator/issues/368 I noticed that when on Parameters page, I double click a master's name to rename it, type a new name, and type the tab key, then the focus shifts to a widget in the design spaces page, and this offsets the page-sliding mechanism, and I can't recover it. | priority | tab can misalign pages without diving into the full complexity of i noticed that when on parameters page i double click a master s name to rename it type a new name and type the tab key then the focus shifts to a widget in the design spaces page and this offsets the page sliding mechanism and i can t recover it | 1 |
105,934 | 4,249,623,894 | IssuesEvent | 2016-07-08 01:05:26 | Lord-Ptolemy/Rosalina-Bottings | https://api.github.com/repos/Lord-Ptolemy/Rosalina-Bottings | opened | Mysql: Fix presence and message insertion | bug Medium Priority | Currently not all message entries are being added to the database correctly. It seems like it skips 5-6 messages then inserts one. Rinse and repeat. | 1.0 | Mysql: Fix presence and message insertion - Currently not all message entries are being added to the database correctly. It seems like it skips 5-6 messages then inserts one. Rinse and repeat. | priority | mysql fix presence and message insertion currently not all message entries are being added to the database correctly it seems like it skips messages then inserts one rinse and repeat | 1 |
148,662 | 5,694,327,690 | IssuesEvent | 2017-04-15 12:08:32 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | unable to curl the docker stats API in the CI environment | priority/medium status/needs-triage | There are two tests in the stats integration suite that curl the VCH engine endpoint for containerVM metrics. When running these in robot locally, they work as merged. When run in the CI environment the curl returns a zero rc, but no data.
The two tests impacted are: `Stats API Memory Validation` & `Stats API CPU Validation`. Will need to determine why they don't work in the CI environment and correct the tests.
This is not a code issue -- the stats API works as demonstrated by the `Stats No Stream` test which uses the docker client to get container stats and validates the memory metric. The client is making the same API calls that the aforementioned tests utilize. | 1.0 | unable to curl the docker stats API in the CI environment - There are two tests in the stats integration suite that curl the VCH engine endpoint for containerVM metrics. When running these in robot locally, they work as merged. When run in the CI environment the curl returns a zero rc, but no data.
The two tests impacted are: `Stats API Memory Validation` & `Stats API CPU Validation`. Will need to determine why they don't work in the CI environment and correct the tests.
This is not a code issue -- the stats API works as demonstrated by the `Stats No Stream` test which uses the docker client to get container stats and validates the memory metric. The client is making the same API calls that the aforementioned tests utilize. | priority | unable to curl the docker stats api in the ci environment there are two tests in the stats integration suite that curl the vch engine endpoint for containervm metrics when running these in robot locally they work as merged when run in the ci environment the curl returns a zero rc but no data the two tests impacted are stats api memory validation stats api cpu validation will need to determine why they don t work in the ci environment and correct the tests this is not a code issue the stats api works as demonstrated by the stats no stream test which uses the docker client to get container stats and validates the memory metric the client is making the same api calls that the aforementioned tests utilize | 1 |
169,953 | 6,422,078,393 | IssuesEvent | 2017-08-09 07:26:58 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | opened | [Declaration request] Check uploaded documents | kind/bug priority/medium status/todo | Improve uploaded documents verification:
- [ ] Show full list of missing documents
Current response:
Shows first missing document
```
{
"meta": {
"url": "http://dev.ehealth.world/api/declaration_requests/20e0282a-06f5-4faf-bfb4-aed825804225/actions/approve",
"type": "object",
"request_id": "nj6470g5nhj0lk4rvee4fse2p4lcgmtr",
"code": 409
},
"error": {
"type": "request_conflict",
"message": "Document person.SSN is not uploaded"
}
}
```
Expected result:
Full list of missing documents
#799
#734 | 1.0 | [Declaration request] Check uploaded documents - Improve uploaded documents verification:
- [ ] Show full list of missing documents
Current response:
Shows first missing document
```
{
"meta": {
"url": "http://dev.ehealth.world/api/declaration_requests/20e0282a-06f5-4faf-bfb4-aed825804225/actions/approve",
"type": "object",
"request_id": "nj6470g5nhj0lk4rvee4fse2p4lcgmtr",
"code": 409
},
"error": {
"type": "request_conflict",
"message": "Document person.SSN is not uploaded"
}
}
```
Expected result:
Full list of missing documents
#799
#734 | priority | check uploaded documents improve uploaded documents verification show full list of missing documents current response shows first missing document meta url type object request id code error type request conflict message document person ssn is not uploaded expected result full list of missing documents | 1 |
540,889 | 15,819,006,653 | IssuesEvent | 2021-04-05 16:51:45 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | closed | [MR] (Project Highlight) Revamp for the Project Highlight section in MARLO mock-up | Priority - Medium Type -Task | According to ticket #1442 in fresh desk, we need to revamp the mock-up for the project highlight section in MARLO. We have to condensate the information and try to make fewer questions and let open space for the people to right.
- [x] Review the message from Andy Jarvis and understand what is his suggestion
- [x] Start the new mock-up
- [x] Send to David and Hector for their approval
- [x] Discuss with Marissa if the section is useful and have her approval
- [x] Send the mock-up to Andy and if is positive (HT-DA)
- [ ] Send the mock-up to MARLO family (HT-DA)
**Move to Review when: We send the mock-up to Marissa and Andy
**Move to Closed when: MARLO family approve it
| 1.0 | [MR] (Project Highlight) Revamp for the Project Highlight section in MARLO mock-up - According to ticket #1442 in fresh desk, we need to revamp the mock-up for the project highlight section in MARLO. We have to condensate the information and try to make fewer questions and let open space for the people to right.
- [x] Review the message from Andy Jarvis and understand what is his suggestion
- [x] Start the new mock-up
- [x] Send to David and Hector for their approval
- [x] Discuss with Marissa if the section is useful and have her approval
- [x] Send the mock-up to Andy and if is positive (HT-DA)
- [ ] Send the mock-up to MARLO family (HT-DA)
**Move to Review when: We send the mock-up to Marissa and Andy
**Move to Closed when: MARLO family approve it
| priority | project highlight revamp for the project highlight section in marlo mock up according to ticket in fresh desk we need to revamp the mock up for the project highlight section in marlo we have to condensate the information and try to make fewer questions and let open space for the people to right review the message from andy jarvis and understand what is his suggestion start the new mock up send to david and hector for their approval discuss with marissa if the section is useful and have her approval send the mock up to andy and if is positive ht da send the mock up to marlo family ht da move to review when we send the mock up to marissa and andy move to closed when marlo family approve it | 1 |
434,815 | 12,528,178,774 | IssuesEvent | 2020-06-04 09:09:11 | geosolutions-it/MapStore2-C028 | https://api.github.com/repos/geosolutions-it/MapStore2-C028 | closed | Project Update | Epic Priority: Medium Project: C028 deploy needed | As part of this work the developments provided for the styles localization support will be provided on MapStore master branch and backported to the closest stable branch, so the MapStore revision on C028 MS project will be updated accordingly.
After the revision update (and related config files/project review) the following custom plugins need to be checked to verify the involved functionalities with the new MS version:
- [Road Accident](http://sit.comune.bolzano.it/mapstore2//#/roadAccidents/openlayers/incidentiMap) Plugin
- Search plugin for cadastral parcel (#54): this also includes the most recent implementation of custom logic related to the support of query parameters for searching parcels on the fly during the first loading of the viewer:
For a manual test by using the search tool, do searches similar to the following:
_Search in the search bar "Gries / Gries" (Building Particle) then search .4442 at this point you have to see the particle on the map, test desktop, mobile_
For a general test, put in the URL of a map the following params:
_?particella=.4442&comCat=669&tipoPart=partedif_
- Catalog Plugin: localization of layer titles (this customization works with specific keywords defined GeoServer side inside the layer configuration
There are also some relevant issues to consider during the revision update: #86, #87, #69
The current production instance is available here:
http://sit.comune.bolzano.it/mapstore2/#/
That instance can be useful to double check the updated MS project comparing it with the existing production instance.
| 1.0 | Project Update - As part of this work the developments provided for the styles localization support will be provided on MapStore master branch and backported to the closest stable branch, so the MapStore revision on C028 MS project will be updated accordingly.
After the revision update (and related config files/project review) the following custom plugins need to be checked to verify the involved functionalities with the new MS version:
- [Road Accident](http://sit.comune.bolzano.it/mapstore2//#/roadAccidents/openlayers/incidentiMap) Plugin
- Search plugin for cadastral parcel (#54): this also includes the most recent implementation of custom logic related to the support of query parameters for searching parcels on the fly during the first loading of the viewer:
For a manual test by using the search tool, do searches similar to the following:
_Search in the search bar "Gries / Gries" (Building Particle) then search .4442 at this point you have to see the particle on the map, test desktop, mobile_
For a general test, put in the URL of a map the following params:
_?particella=.4442&comCat=669&tipoPart=partedif_
- Catalog Plugin: localization of layer titles (this customization works with specific keywords defined GeoServer side inside the layer configuration
There are also some relevant issues to consider during the revision update: #86, #87, #69
The current production instance is available here:
http://sit.comune.bolzano.it/mapstore2/#/
That instance can be useful to double check the updated MS project comparing it with the existing production instance.
| priority | project update as part of this work the developments provided for the styles localization support will be provided on mapstore master branch and backported to the closest stable branch so the mapstore revision on ms project will be updated accordingly after the revision update and related config files project review the following custom plugins need to be checked to verify the involved functionalities with the new ms version plugin search plugin for cadastral parcel this also includes the most recent implementation of custom logic related to the support of query parameters for searching parcels on the fly during the first loading of the viewer for a manual test by using the search tool do searches similar to the following search in the search bar gries gries building particle then search at this point you have to see the particle on the map test desktop mobile for a general test put in the url of a map the following params particella comcat tipopart partedif catalog plugin localization of layer titles this customization works with specific keywords defined geoserver side inside the layer configuration there are also some relevant issues to consider during the revision update the current production instance is available here that instance can be useful to double check the updated ms project comparing it with the existing production instance | 1 |
81,603 | 3,592,725,981 | IssuesEvent | 2016-02-01 17:01:34 | PMEAL/OpenPNM | https://api.github.com/repos/PMEAL/OpenPNM | closed | Allow multiple throats between 2 pores | enhancement Priority - Medium | The code may already do this, but we should check and adjust if necessary | 1.0 | Allow multiple throats between 2 pores - The code may already do this, but we should check and adjust if necessary | priority | allow multiple throats between pores the code may already do this but we should check and adjust if necessary | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.