| Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 | title stringlengths 1 957 | labels stringlengths 4 795 | body stringlengths 1 259k | index stringclasses 12 | text_combine stringlengths 96 259k | label stringclasses 2 | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
458,991 | 13,184,743,695 | IssuesEvent | 2020-08-12 20:00:50 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Test suite mslab_threadsafe fails randomly | bug priority: medium | Been seeing this in a few different unrelated PRs now:
```
1/1 mps2_an385 tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs FAILED: timeout
---out-2nd-pass/mps2_an385/tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs/handler.log---
***** Booting Zephyr OS v1.14.0-rc1-573-g2e89905f86 *****
Running test suite mslab_threadsafe
===================================================================
starting test - test_mslab_threadsafe
***** MPU FAULT *****
Data Access Violation
MMFAR Address: 0x0
***** Hardware exception *****
Current thread ID = 0x20000134
Faulting instruction address = 0x49ea
Fatal fault in ISR! Spinning...
---out-2nd-pass/mps2_an385/tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs/handler.log---
0 of 1 tests passed with 0 warnings in 84 seconds
+ sleep 10
+ /home/buildslave/src/github.com/zephyrproject-rtos/zephyr/scripts/sanitycheck --inline-logs --enable-coverage -N --only-failed --outdir=out-3nd-pass
JOBS: 12
Selecting default platforms per test case
Building testcase defconfigs...
1 tests selected, 91522 tests discarded due to filters
1/1 mps2_an385 tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs FAILED: timeout
---out-3nd-pass/mps2_an385/tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs/handler.log---
***** Booting Zephyr OS v1.14.0-rc1-573-g2e89905f86 *****
Running test suite mslab_threadsafe
===================================================================
starting test - test_mslab_threadsafe
***** MPU FAULT *****
Data Access Violation
MMFAR Address: 0x0
***** Hardware exception *****
Current thread ID = 0x200000b0
Faulting instruction address = 0x49ea
Fatal fault in ISR! Spinning...
---out-3nd-pass/mps2_an385/tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs/handler.log---
0 of 1 tests passed with 0 warnings in 83 seconds
``` | 1.0 | Test suite mslab_threadsafe fails randomly - Been seeing this in a few different unrelated PRs now:
```
1/1 mps2_an385 tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs FAILED: timeout
---out-2nd-pass/mps2_an385/tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs/handler.log---
***** Booting Zephyr OS v1.14.0-rc1-573-g2e89905f86 *****
Running test suite mslab_threadsafe
===================================================================
starting test - test_mslab_threadsafe
***** MPU FAULT *****
Data Access Violation
MMFAR Address: 0x0
***** Hardware exception *****
Current thread ID = 0x20000134
Faulting instruction address = 0x49ea
Fatal fault in ISR! Spinning...
---out-2nd-pass/mps2_an385/tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs/handler.log---
0 of 1 tests passed with 0 warnings in 84 seconds
+ sleep 10
+ /home/buildslave/src/github.com/zephyrproject-rtos/zephyr/scripts/sanitycheck --inline-logs --enable-coverage -N --only-failed --outdir=out-3nd-pass
JOBS: 12
Selecting default platforms per test case
Building testcase defconfigs...
1 tests selected, 91522 tests discarded due to filters
1/1 mps2_an385 tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs FAILED: timeout
---out-3nd-pass/mps2_an385/tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs/handler.log---
***** Booting Zephyr OS v1.14.0-rc1-573-g2e89905f86 *****
Running test suite mslab_threadsafe
===================================================================
starting test - test_mslab_threadsafe
***** MPU FAULT *****
Data Access Violation
MMFAR Address: 0x0
***** Hardware exception *****
Current thread ID = 0x200000b0
Faulting instruction address = 0x49ea
Fatal fault in ISR! Spinning...
---out-3nd-pass/mps2_an385/tests/kernel/mem_slab/mslab_threadsafe/kernel.memory_slabs/handler.log---
0 of 1 tests passed with 0 warnings in 83 seconds
``` | priority | test suite mslab threadsafe fails randomly been seeing this in a few different unrelated prs now tests kernel mem slab mslab threadsafe kernel memory slabs failed timeout out pass tests kernel mem slab mslab threadsafe kernel memory slabs handler log booting zephyr os running test suite mslab threadsafe starting test test mslab threadsafe mpu fault data access violation mmfar address hardware exception current thread id faulting instruction address fatal fault in isr spinning out pass tests kernel mem slab mslab threadsafe kernel memory slabs handler log of tests passed with warnings in seconds sleep home buildslave src github com zephyrproject rtos zephyr scripts sanitycheck inline logs enable coverage n only failed outdir out pass jobs selecting default platforms per test case building testcase defconfigs tests selected tests discarded due to filters tests kernel mem slab mslab threadsafe kernel memory slabs failed timeout out pass tests kernel mem slab mslab threadsafe kernel memory slabs handler log booting zephyr os running test suite mslab threadsafe starting test test mslab threadsafe mpu fault data access violation mmfar address hardware exception current thread id faulting instruction address fatal fault in isr spinning out pass tests kernel mem slab mslab threadsafe kernel memory slabs handler log of tests passed with warnings in seconds | 1 |
428,703 | 12,415,408,489 | IssuesEvent | 2020-05-22 16:15:33 | netdata/netdata | https://api.github.com/repos/netdata/netdata | closed | Add support for eBPF to kickstart_static64.sh installation | area/ci area/packaging priority/medium stale | As per title.
See [this comment](https://github.com/netdata/netdata/issues/8242#issuecomment-602451624) on #8242
cc @thiagoftsm as watcher and @hvisage as requestor. | 1.0 | Add support for eBPF to kickstart_static64.sh installation - As per title.
See [this comment](https://github.com/netdata/netdata/issues/8242#issuecomment-602451624) on #8242
cc @thiagoftsm as watcher and @hvisage as requestor. | priority | add support for ebpf to kickstart sh installation as per title see on cc thiagoftsm as watcher and hvisage as requestor | 1 |
342,335 | 10,315,306,571 | IssuesEvent | 2019-08-30 07:09:06 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Can not build link_board_can shield | bug priority: medium | **Describe the bug**
Using the link_board_can shield fails building due to an invalid field in the overlay, 'status'. It states "ok" but should be "okay"
**To Reproduce**
cmake -GNinja -DBOARD=reel_board -DSHIELD=link_board_can /home//karsten/zephyrproject/zephyr/samples/drivers/CAN/
**Expected behavior**
CAN sample would be built for the link_board_can shield
**Impact**
Can not use the shield in current state.
**Screenshots or console output**
Zephyr version: 2.0.0
-- Selected BOARD reel_board
-- Found west: /home/karsten/.local/bin/west (found suitable version "0.6.0", minimum required is "0.6.0")
-- Loading /home/karsten/zephyrproject/zephyr/boards/arm/reel_board/reel_board.dts as base
-- Overlaying /home/karsten/zephyrproject/zephyr/dts/common/common.dts
-- Overlaying /home/karsten/zephyrproject/zephyr/boards/shields/link_board_can/link_board_can.overlay
device tree error: unknown 'status' value "ok" in /soc/spi@40004000 in reel_board.dts.pre.tmp, expected one of fail-sss, reserved, disabled, fail, okay (see the devicetree specification)
CMake Error at /home/karsten/zephyrproject/zephyr/cmake/dts.cmake:176 (message):
new extractor failed with return code: 1
| 1.0 | Can not build link_board_can shield - **Describe the bug**
Using the link_board_can shield fails building due to an invalid field in the overlay, 'status'. It states "ok" but should be "okay"
**To Reproduce**
cmake -GNinja -DBOARD=reel_board -DSHIELD=link_board_can /home//karsten/zephyrproject/zephyr/samples/drivers/CAN/
**Expected behavior**
CAN sample would be built for the link_board_can shield
**Impact**
Can not use the shield in current state.
**Screenshots or console output**
Zephyr version: 2.0.0
-- Selected BOARD reel_board
-- Found west: /home/karsten/.local/bin/west (found suitable version "0.6.0", minimum required is "0.6.0")
-- Loading /home/karsten/zephyrproject/zephyr/boards/arm/reel_board/reel_board.dts as base
-- Overlaying /home/karsten/zephyrproject/zephyr/dts/common/common.dts
-- Overlaying /home/karsten/zephyrproject/zephyr/boards/shields/link_board_can/link_board_can.overlay
device tree error: unknown 'status' value "ok" in /soc/spi@40004000 in reel_board.dts.pre.tmp, expected one of fail-sss, reserved, disabled, fail, okay (see the devicetree specification)
CMake Error at /home/karsten/zephyrproject/zephyr/cmake/dts.cmake:176 (message):
new extractor failed with return code: 1
| priority | can not build link board can shield describe the bug using the link board can shield fails building due to an invalid field in the overlay status it states ok but should be okay to reproduce cmake gninja dboard reel board dshield link board can home karsten zephyrproject zephyr samples drivers can expected behavior can sample would be built for the link board can shield impact can not use the shield in current state screenshots or console output zephyr version selected board reel board found west home karsten local bin west found suitable version minimum required is loading home karsten zephyrproject zephyr boards arm reel board reel board dts as base overlaying home karsten zephyrproject zephyr dts common common dts overlaying home karsten zephyrproject zephyr boards shields link board can link board can overlay device tree error unknown status value ok in soc spi in reel board dts pre tmp expected one of fail sss reserved disabled fail okay see the devicetree specification cmake error at home karsten zephyrproject zephyr cmake dts cmake message new extractor failed with return code | 1 |
461,920 | 13,238,346,061 | IssuesEvent | 2020-08-19 00:04:31 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | New Concept: Crowdfunding for BuddyBoss joint custom feature development | feature: enhancement priority: medium | **Is your feature request related to a problem? Please describe.**
Many BuddyBoss customers would like to have their custom features implemented in the theme/platform. BuddyBoss has a customisation service as a revenue model so a conflict of interest. BuddyBoss customers then end up getting the same done remotely at a cost and usually such developments are ad-hoc and no clear documentation to further enhance and usually leads to duplication and waste.
**Describe the solution you'd like**
The ideal solution would be to have an in-between (joint customer and BuddyBoss) business model whereby such custom developments are crowdfunded whereby making it more affordable and yet maintaining such developments within the BuddyBoss team with little to zero revenue sacrifice.
**Describe alternatives you've considered**
Create a new "Custom Features" section listing all the requested features as a separate project with a crowdfunding element like the total project cost estimated, minimum funding needed to start the project, etc... This way it will provide the transparency to the customers, connect likewise customers together for that project funding AND the decision to roll-out such new features and functionalities are transparent and fact based rather than judgmental on one or few members.
This will open up new business and revenue model for BuddyBoss as well as happy and satisfied clients.
@eisenwasser , if you like this concept then i have a few more.
| 1.0 | New Concept: Crowdfunding for BuddyBoss joint custom feature development - **Is your feature request related to a problem? Please describe.**
Many BuddyBoss customers would like to have their custom features implemented in the theme/platform. BuddyBoss has a customisation service as a revenue model so a conflict of interest. BuddyBoss customers then end up getting the same done remotely at a cost and usually such developments are ad-hoc and no clear documentation to further enhance and usually leads to duplication and waste.
**Describe the solution you'd like**
The ideal solution would be to have an in-between (joint customer and BuddyBoss) business model whereby such custom developments are crowdfunded whereby making it more affordable and yet maintaining such developments within the BuddyBoss team with little to zero revenue sacrifice.
**Describe alternatives you've considered**
Create a new "Custom Features" section listing all the requested features as a separate project with a crowdfunding element like the total project cost estimated, minimum funding needed to start the project, etc... This way it will provide the transparency to the customers, connect likewise customers together for that project funding AND the decision to roll-out such new features and functionalities are transparent and fact based rather than judgmental on one or few members.
This will open up new business and revenue model for BuddyBoss as well as happy and satisfied clients.
@eisenwasser , if you like this concept then i have a few more.
| priority | new concept crowdfunding for buddyboss joint custom feature development is your feature request related to a problem please describe many buddyboss customers would like to have their custom features implemented in the theme platform buddyboss has a customisation service as a revenue model so a conflict of interest buddyboss customers then end up getting the same done remotely at a cost and usually such developments are ad hoc and no clear documentation to further enhance and usually leads to duplication and waste describe the solution you d like the ideal solution would be to have an in between joint customer and buddyboss business model whereby such custom developments are crowdfunded whereby making it more affordable and yet maintaining such developments within the buddyboss team with little to zero revenue sacrifice describe alternatives you ve considered create a new custom features section listing all the requested features as a separate project with a crowdfunding element like the total project cost estimated minimum funding needed to start the project etc this way it will provide the transparency to the customers connect likewise customers together for that project funding and the decision to roll out such new features and functionalities are transparent and fact based rather than judgmental on one or few members this will open up new business and revenue model for buddyboss as well as happy and satisfied clients eisenwasser if you like this concept then i have a few more | 1 |
96,238 | 3,966,391,627 | IssuesEvent | 2016-05-03 12:50:08 | daronco/test-issue-migrate2 | https://api.github.com/repos/daronco/test-issue-migrate2 | closed | Automatically detect mobile devices to join using the mobile client | Priority: Medium Status: Resolved Type: Feature | ---
Author Name: **Leonardo Daronco** (@daronco)
Original Redmine Issue: 1247, http://dev.mconf.org/redmine/issues/1247
Original Assignee: Leonardo Daronco
---
Instead of having an specific view to join via mobile clients (@join_mobile@), the default join action (@join@) should detect whether a user is in a mobile client and automatically launch the mobile application.
A page should still be displayed to users in mobile devices with information about where to download the mobile client and troubleshooting information (try again link, join via flash client, etc). But the mobile client should still be automatically launched.
| 1.0 | Automatically detect mobile devices to join using the mobile client - ---
Author Name: **Leonardo Daronco** (@daronco)
Original Redmine Issue: 1247, http://dev.mconf.org/redmine/issues/1247
Original Assignee: Leonardo Daronco
---
Instead of having an specific view to join via mobile clients (@join_mobile@), the default join action (@join@) should detect whether a user is in a mobile client and automatically launch the mobile application.
A page should still be displayed to users in mobile devices with information about where to download the mobile client and troubleshooting information (try again link, join via flash client, etc). But the mobile client should still be automatically launched.
| priority | automatically detect mobile devices to join using the mobile client author name leonardo daronco daronco original redmine issue original assignee leonardo daronco instead of having an specific view to join via mobile clients join mobile the default join action join should detect whether a user is in a mobile client and automatically launch the mobile application a page should still be displayed to users in mobile devices with information about where to download the mobile client and troubleshooting information try again link join via flash client etc but the mobile client should still be automatically launched | 1 |
318,343 | 9,691,105,683 | IssuesEvent | 2019-05-24 10:16:05 | conan-io/conan | https://api.github.com/repos/conan-io/conan | reopened | [question] Artifactory CE: Not showing "requires" or "settings" | complex: low component: artifactory priority: medium type: bug | Artifactory version: 6.8.2 rev 60802900
Conan (client) version: 1.11.2
I've created multiple packages (pre-built binaries) in local cache, two build configurations.
When uploaded to local conan server the "conan search <reference>" will show "requires" fields correctly.
When uploaded to Artifactory CE the "conan search <reference>" will list all the packages under the reference but it does not show "requires" fields at all? The "settings" are shown.
When inspecting the packages via Artifactory Web UI interface / "Conan package info" I can see the "settings" there, but "requires" is not visible anywhere either?
If I download a conaninfo.txt of one of the components (declares dependencies) the file does contain:
[requires]
A/1.Y.Z
B/1.Y.Z
Names of the dependency projects are correct but minor and patch level version numbers are obscured?
Is this by design? Is there a way to inspect the real dependencies of a given package <reference> residing in the Artifactory CE? | 1.0 | [question] Artifactory CE: Not showing "requires" or "settings" - Artifactory version: 6.8.2 rev 60802900
Conan (client) version: 1.11.2
I've created multiple packages (pre-built binaries) in local cache, two build configurations.
When uploaded to local conan server the "conan search <reference>" will show "requires" fields correctly.
When uploaded to Artifactory CE the "conan search <reference>" will list all the packages under the reference but it does not show "requires" fields at all? The "settings" are shown.
When inspecting the packages via Artifactory Web UI interface / "Conan package info" I can see the "settings" there, but "requires" is not visible anywhere either?
If I download a conaninfo.txt of one of the components (declares dependencies) the file does contain:
[requires]
A/1.Y.Z
B/1.Y.Z
Names of the dependency projects are correct but minor and patch level version numbers are obscured?
Is this by design? Is there a way to inspect the real dependencies of a given package <reference> residing in the Artifactory CE? | priority | artifactory ce not showing requires or settings artifactory version rev conan client version i ve created multiple packages pre built binaries in local cache two build configurations when uploaded to local conan server the conan search will show requires fields correctly when uploaded to artifactory ce the conan search will list all the packages under the reference but it does not show requires fields at all the settings are shown when inspecting the packages via artifactory web ui interface conan package info i can see the settings there but requires is not visible anywhere either if i download a conaninfo txt of one of the components declares dependencies the file does contain a y z b y z names of the dependency projects are correct but minor and patch level version numbers are obscured is this by design is there a way to inspect the real dependencies of a given package residing in the artifactory ce | 1 |
587,189 | 17,606,636,635 | IssuesEvent | 2021-08-17 17:58:58 | WordPress/openverse-catalog | https://api.github.com/repos/WordPress/openverse-catalog | opened | [Quality] Add documentation about `.env` setup | 🟨 priority: medium 📄 aspect: text 🧰 goal: internal improvement | ## Current Situation
<!-- Describe the part of the code you think should improve -->
The current `env.template` is great, but doesn't provide information about which values, if any, need to be updated for local development.
## Suggested Improvement
<!-- Describe your proposed change -->
- [ ] Add README documentation about which keys, if any, should be updated locally
- [ ] Make it clear which API keys relate to which DAGS (probably a comment in `env.template` above each value that relates to a DAG)
- [ ] Explain in README how some vars are only necessary to run particular DAGs
## Benefit
<!-- Describe the benefit of the change (E.g., increase test coverage, reduce running time, etc.) -->
## Additional context
<!-- Add any other context suggestion here. -->
## Implementation
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in implementing this feature.
| 1.0 | [Quality] Add documentation about `.env` setup - ## Current Situation
<!-- Describe the part of the code you think should improve -->
The current `env.template` is great, but doesn't provide information about which values, if any, need to be updated for local development.
## Suggested Improvement
<!-- Describe your proposed change -->
- [ ] Add README documentation about which keys, if any, should be updated locally
- [ ] Make it clear which API keys relate to which DAGS (probably a comment in `env.template` above each value that relates to a DAG)
- [ ] Explain in README how some vars are only necessary to run particular DAGs
## Benefit
<!-- Describe the benefit of the change (E.g., increase test coverage, reduce running time, etc.) -->
## Additional context
<!-- Add any other context suggestion here. -->
## Implementation
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in implementing this feature.
| priority | add documentation about env setup current situation the current env template is great but doesn t provide information about which values if any need to be updated for local development suggested improvement add readme documentation about which keys if any should be updated locally make it clear which api keys relate to which dags probably a comment in env template above each value that relates to a dag explain in readme how some vars are only necessary to run particular dags benefit additional context implementation 🙋 i would be interested in implementing this feature | 1 |
705,919 | 24,254,414,298 | IssuesEvent | 2022-09-27 16:30:40 | phylum-dev/cli | https://api.github.com/repos/phylum-dev/cli | closed | Allow extension upgrades | enhancement medium priority extensions | We may want to implement the `phylum extension upgrade` subcommand on top of `install`/`uninstall`. In that case, we may allow overwriting the extension installation directory, which we disallow at the moment. | 1.0 | Allow extension upgrades - We may want to implement the `phylum extension upgrade` subcommand on top of `install`/`uninstall`. In that case, we may allow overwriting the extension installation directory, which we disallow at the moment. | priority | allow extension upgrades we may want to implement the phylum extension upgrade subcommand on top of install uninstall in that case we may allow overwriting the extension installation directory which we disallow at the moment | 1 |
416,486 | 12,147,211,255 | IssuesEvent | 2020-04-24 12:38:16 | luna/enso | https://api.github.com/repos/luna/enso | opened | Implement the Java FFI | Category: Compiler Category: RTS Change: Non-Breaking Difficulty: Core Contributor Priority: Medium Type: Enhancement | ### Summary
<!--
- A summary of the task.
-->
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
### Specification
- [ ] Extend the parser (if necessary) to support the determined surface syntax.
- [ ] Extend `AstView` as needed to support the surface syntax for Java FFI.
- [ ] Using the research and experimental code, connect up the reflection API to allow users to write Enso-level Java FFI.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
| 1.0 | Implement the Java FFI - ### Summary
<!--
- A summary of the task.
-->
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
### Specification
- [ ] Extend the parser (if necessary) to support the determined surface syntax.
- [ ] Extend `AstView` as needed to support the surface syntax for Java FFI.
- [ ] Using the research and experimental code, connect up the reflection API to allow users to write Enso-level Java FFI.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
| priority | implement the java ffi summary a summary of the task value this section should describe the value of this task this value can be for users to the team etc specification extend the parser if necessary to support the determined surface syntax extend astview as needed to support the surface syntax for java ffi using the research and experimental code connect up the reflection api to allow users to write enso level java ffi acceptance criteria test cases any criteria that must be satisfied for the task to be accepted the test plan for the feature related to the acceptance criteria | 1 |
473,130 | 13,637,310,530 | IssuesEvent | 2020-09-25 07:37:22 | zowe/api-layer | https://api.github.com/repos/zowe/api-layer | opened | HA: Distinguish Internal/External service accessibility | Priority: Medium enhancement new | **Is your feature request related to a problem? Please describe.**
The feature supports the overall [Zowe HA plan](https://github.com/zowe/zowe-install-packaging/issues/1477).
**Describe the solution you'd like**
Some form of restriction for service accessibility is needed. This will serve the purpose of isolating the caching api from outside, since all services onboarded in Discovery are public now.
**Describe alternatives you've considered**
**Additional context**
| 1.0 | HA: Distinguish Internal/External service accessibility - **Is your feature request related to a problem? Please describe.**
The feature supports the overall [Zowe HA plan](https://github.com/zowe/zowe-install-packaging/issues/1477).
**Describe the solution you'd like**
Some form of restriction for service accessibility is needed. This will serve the purpose of isolating the caching api from outside, since all services onboarded in Discovery are public now.
**Describe alternatives you've considered**
**Additional context**
| priority | ha distinguish internal external service accessibility is your feature request related to a problem please describe the feature supports the overall describe the solution you d like some form of restriction for service accessibility is needed this will serve the purpose of isolating the caching api from outside since all services onboarded in discovery are public now describe alternatives you ve considered additional context | 1 |
83,458 | 3,635,140,604 | IssuesEvent | 2016-02-11 20:38:57 | sandialabs/slycat | https://api.github.com/repos/sandialabs/slycat | opened | Enable arbitrary start/end values for filters (axes) | Medium Priority PS Model | A customer request is to enable setting axis range min/max values to values outside the range of the variable. Currently, we limit the edited min/max values to lie within the value range of the variable. | 1.0 | Enable arbitrary start/end values for filters (axes) - A customer request is to enable setting axis range min/max values to values outside the range of the variable. Currently, we limit the edited min/max values to lie within the value range of the variable. | priority | enable arbitrary start end values for filters axes a customer request is to enable setting axis range min max values to values outside the range of the variable currently we limit the edited min max values to lie within the value range of the variable | 1 |
529,891 | 15,397,192,866 | IssuesEvent | 2021-03-03 21:46:05 | openscd/open-scd | https://api.github.com/repos/openscd/open-scd | opened | Data model overview in IED configuration Editor | Kind: Feature Priority: Medium | As a user of OpenSCD I want to manage `IED` configuration. I want to see and navigate through the data model and want to use filter options on logical nodes, logical devices as well as data objects and data attributes.
**Requirement**:
- The data model shall be interpreted with a tree or folded lists.
- Parse through the `DataTypeTemplates` and show the structure of logical nodes, data objects as well as data attributes
- Show predefined values allocated in the `Val` element. Can be instantiated in `DAI` and `DOI` as well as in the `DataTypeTemplates` section
**Type Definition**:
- No definition necessary as a read only feature only.
**Restriction** :
- no
**Edition 1 vs Edition 2**
- TBD
| 1.0 | Data model overview in IED configuration Editor - As a user of OpenSCD I want to manage `IED` configuration. I want to see and navigate through the data model and want to use filter options on logical nodes, logical devices as well as data objects and data attributes.
**Requirement**:
- The data model shall be interpreted with a tree or folded lists.
- Parse through the `DataTypeTemplates` and show the structure of logical nodes, data objects as well as data attributes
- Show predefined values allocated in the `Val` element. Can be instantiated in `DAI` and `DOI` as well as in the `DataTypeTemplates` section
**Type Definition**:
- No definition necessary as a read only feature only.
**Restriction** :
- no
**Edition 1 vs Edition 2**
- TBD
| priority | data model overview in ied configuration editor as a user of openscd i want to manage ied configuration i want to see and navigate through the data model and want to use filter options on logical nodes logical devices as well as data objects and data attributes requirement the data model shall be interpreted with a tree or folded lists parse through the datatypetemplates and show the structure of logical nodes data objects as well as data attributes show predefined values allocated in the val element can be instantiated in dai and doi as well as in the datatypetemplates section type definition no definition necessary as a read only feature only restriction no edition vs edition tbd | 1 |
744,026 | 25,924,409,031 | IssuesEvent | 2022-12-16 02:10:33 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Fix project metadata dates | bug priority: Medium persistence metadata | In #1383 the project creation and modification dates were changed to the `LocalDateTime` datatype, but this can't represent a point in time (because it has no time zone info), so is completely inappropriate for this use.
This was actually one of the few areas of OpenRefine where `java.util.Date` was perfectly appropriate and suited to the use, so it didn't need to be change. ~~All projects which were written since this change are going to have dates which are only knowable within a 24 hour window, not precisely.~~
We can go back to `java.util.Date` or use fancy new `OffsetDateTime` or `Instant` but it should get serialized as ISO 8601 at UTC and formatted using the user's locale & timezone.
| 1.0 | Fix project metadata dates - In #1383 the project creation and modification dates were changed to the `LocalDateTime` datatype, but this can't represent a point in time (because it has no time zone info), so is completely inappropriate for this use.
This was actually one of the few areas of OpenRefine where `java.util.Date` was perfectly appropriate and suited to the use, so it didn't need to be changed. ~~All projects which were written since this change are going to have dates which are only knowable within a 24 hour window, not precisely.~~
We can go back to `java.util.Date` or use fancy new `OffsetDateTime` or `Instant` but it should get serialized as ISO 8601 at UTC and formatted using the user's locale & timezone.
| priority | fix project metadata dates in the project creation and modification dates were changed to the localdatetime datatype but this can t represent a point in time because it has no time zone info so is completely inappropriate for this use this was actually one of the few areas of openrefine where java util date was perfectly appropriate and suited to the use so it didn t need to be change all projects which were written since this change are going to have dates which are only knowable within a hour window not precisely we can go back to java util date or use fancy new offsetdatetime or instant but it should get serialized as iso at utc and formatted using the user s locale timezone | 1 |
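The fix described in the OpenRefine row above (store an unambiguous point in time, serialize it as ISO 8601 at UTC, and apply the user's locale and timezone only at display time) has a direct analogue in Python's naive vs. aware datetimes. Python is used here purely to keep the sketch short; the actual OpenRefine fix would use Java's `Instant` or `OffsetDateTime`:

```python
from datetime import datetime, timezone

# A naive datetime (no tzinfo) is analogous to Java's LocalDateTime:
# it names a wall-clock reading but not a point in time.
naive = datetime(2018, 6, 1, 12, 0, 0)
assert naive.tzinfo is None  # ambiguous: could be in any UTC offset

# An aware datetime pinned to UTC is analogous to java.time.Instant.
instant = datetime(2018, 6, 1, 12, 0, 0, tzinfo=timezone.utc)

# Serialize as ISO 8601 at UTC for storage in project metadata...
serialized = instant.isoformat().replace("+00:00", "Z")
print(serialized)  # 2018-06-01T12:00:00Z

# ...and convert to the user's local timezone only at display time.
local = instant.astimezone()
```

The key design point matches the issue: persistence always uses the unambiguous UTC form, and locale/timezone formatting is a presentation concern only.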
822,924 | 30,914,382,644 | IssuesEvent | 2023-08-05 04:53:04 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | opened | Stopping stunbaton / esword / welder copy-paste | Priority: 2-Before Release Holy Shit Issue: Needs Cleanup Difficulty: 2-Medium | I see at least one PR a week where someone just copies the aforementioned components onto an entirely new system defeating the purpose of using ECS.
Ideally:
- Stun baton uses the generic toggle system.
- E-Sword also uses the generic toggle system.
- Welders are moved to a generic fuel system instead (split out the control to its own component like AmmoCounter is for guns) and also use toggles.
Component-wise need:
- A generic toggle component
- Some component that ties the generic toggle to battery usage (UseDraw or whatever)
- Component to adjust light on toggle
- Fuel component required for toggle (as above). | 1.0 | Stopping stunbaton / esword / welder copy-paste - I see at least one PR a week where someone just copies the aforementioned components onto an entirely new system defeating the purpose of using ECS.
Ideally:
- Stun baton uses the generic toggle system.
- E-Sword also uses the generic toggle system.
- Welders are moved to a generic fuel system instead (split out the control to its own component like AmmoCounter is for guns) and also use toggles.
Component-wise need:
- A generic toggle component
- Some component that ties the generic toggle to battery usage (UseDraw or whatever)
- Component to adjust light on toggle
- Fuel component required for toggle (as above). | priority | stopping stunbaton esword welder copy paste i see at least one pr a week where someone just copies the aforementioned components onto an entirely new system defeating the purpose of using ecs ideally stun baton uses the generic toggle system e sword also uses the generic toggle system welders are moved to a generic fuel system instead split out the control to its own component like ammocounter is for guns and also use toggles component wise need a generic toggle component some component that ties the generic toggle to battery useage usedraw or whatever component to adjust light on toggle fuel component required for toggle as above | 1 |
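The component split proposed in the issue above (one generic toggle component plus small companions that react to it, instead of copy-pasted per-item systems) can be sketched in miniature. Every name below is invented for illustration; none of this is Space Station 14's actual API, and Python stands in for C# only to keep the sketch compact:

```python
from dataclasses import dataclass, field


@dataclass
class ItemToggle:
    """Generic on/off state shared by stun batons, e-swords, welders, etc."""
    activated: bool = False


@dataclass
class ToggleBatteryDraw:
    """Companion component: drain charge only while the toggle is active."""
    draw_per_second: float = 1.0


@dataclass
class Entity:
    components: dict = field(default_factory=dict)


def try_toggle(entity: Entity) -> bool:
    """Flip the generic toggle if present; battery, light, and fuel systems
    then react to the shared state instead of each reimplementing it."""
    toggle = entity.components.get("ItemToggle")
    if toggle is None:
        return False
    toggle.activated = not toggle.activated
    return True


baton = Entity({
    "ItemToggle": ItemToggle(),
    "ToggleBatteryDraw": ToggleBatteryDraw(draw_per_second=3.0),
})
try_toggle(baton)
print(baton.components["ItemToggle"].activated)  # True
```

This is the ECS payoff the issue is after: adding welders only means attaching a fuel companion to the same `ItemToggle`, not cloning the stun-baton code.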
39,027 | 2,850,647,902 | IssuesEvent | 2015-05-31 19:09:01 | damonkohler/android-scripting | https://api.github.com/repos/damonkohler/android-scripting | closed | Make Webview display flot charts | auto-migrated Priority-Medium Type-Enhancement | ```
flot (http://code.google.com/p/flot/) is a wonderful javascript library for
plotting. Right now, webviews opened from sl4a (via a script or directly in
sl4a) do not display flot, although the same HTML showed flot plots when opened
in Browser or Dolphin. If webviews can show flot, sl4a scripts can draw charts,
graphs, etc, which is great.
Perhaps this link can help (http://rapidandroid.org/wiki/Graphing).
```
Original issue reported on code.google.com by `truong.n...@gmail.com` on 18 Mar 2011 at 4:32 | 1.0 | Make Webview display flot charts - ```
flot (http://code.google.com/p/flot/) is a wonderful javascript library for
plotting. Right now, webviews opened from sl4a (via a script or directly in
sl4a) do not display flot, although the same HTML showed flot plots when opened
in Browser or Dolphin. If webviews can show flot, sl4a scripts can draw charts,
graphs, etc, which is great.
Perhaps this link can help (http://rapidandroid.org/wiki/Graphing).
```
Original issue reported on code.google.com by `truong.n...@gmail.com` on 18 Mar 2011 at 4:32 | priority | make webview display flot charts flot is a wonderful javascript library for plotting right now webviews opened from via a script or directly in do not display flot although the same html showed flot plots when opened in browser or dolphin if webviews can show flot scripts can draw charts graphs etc which is great perhaps this link can help original issue reported on code google com by truong n gmail com on mar at | 1 |
29,606 | 2,716,627,527 | IssuesEvent | 2015-04-10 20:19:52 | CruxFramework/crux | https://api.github.com/repos/CruxFramework/crux | closed | Create the combobox component | enhancement imported invalid Milestone-M14-C4 Priority-Medium | _From [br...@triggolabs.com](https://code.google.com/u/111363211444989689915/) on August 27, 2014 13:29:28_
It is a combination of a drop-down list or list box and a single-line editable textbox.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=493_ | 1.0 | Create the combobox component - _From [br...@triggolabs.com](https://code.google.com/u/111363211444989689915/) on August 27, 2014 13:29:28_
It is a combination of a drop-down list or list box and a single-line editable textbox.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=493_ | priority | create the combobox component from on august it is a combination of a drop down list or list box and a single line editable textbox original issue | 1 |
609,095 | 18,853,942,143 | IssuesEvent | 2021-11-12 02:03:37 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | Remove un-necessary delay from IiwaCommandReceiver | type: cleanup team: manipulation priority: medium | The controller path should use DiscreteTimeDerivative (or equivalent) for a commanded velocity. But the actual LCM converter should be (only) direct-feedthrough. | 1.0 | Remove un-necessary delay from IiwaCommandReceiver - The controller path should use DiscreteTimeDerivative (or equivalent) for a commanded velocity. But the actual LCM converter should be (only) direct-feedthrough. | priority | remove un necessary delay from iiwacommandreceiver the controller path should use discretetimederivative or equivalent for a commanded velocity but the actual lcm converter should be only direct feedthrough | 1 |
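The Drake cleanup above asks the controller path to derive commanded velocity with a DiscreteTimeDerivative-style block rather than an internal delay; the underlying computation is just a backward difference over one sample period. A minimal sketch with illustrative names (this is not Drake's API):

```python
class DiscreteTimeDerivative:
    """Backward-difference velocity estimate: v[k] = (q[k] - q[k-1]) / dt.

    Illustrative stand-in for a discrete-derivative block; not Drake's API.
    """

    def __init__(self, dt: float):
        self.dt = dt
        self.prev = None

    def update(self, q: float) -> float:
        # The first sample has no history, so report zero velocity.
        if self.prev is None:
            v = 0.0
        else:
            v = (q - self.prev) / self.dt
        self.prev = q
        return v


deriv = DiscreteTimeDerivative(dt=0.005)
deriv.update(0.00)
print(deriv.update(0.01))  # 2.0, i.e. a 0.01 rad step over 5 ms
```

The LCM-message-to-signal converter itself then stays purely direct-feedthrough, as the issue requests, with all state confined to the derivative block.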
604,260 | 18,680,556,275 | IssuesEvent | 2021-11-01 04:40:57 | AY2122S1-CS2103T-W08-2/tp | https://api.github.com/repos/AY2122S1-CS2103T-W08-2/tp | opened | Allocations using the "edit" command does not actually add student to the Group's students | type.Bug priority.High severity.Medium | Steps to reproduce
1. `add group -g Test`
2. `edit 1 -g Test`
3. `show -g Test`
Expected
- The student at index 1 will be displayed as one of the group's students
Actual
- Group is empty
| 1.0 | Allocations using the "edit" command does not actually add student to the Group's students - Steps to reproduce
1. `add group -g Test`
2. `edit 1 -g Test`
3. `show -g Test`
Expected
- The student at index 1 will be displayed as one of the group's students
Actual
- Group is empty
| priority | allocations using the edit command does not actually add student to the group s students steps to reproduce add group g test edit g test show g test expected the student at index will be displayed as one of the group s students actual group is empty | 1 |
40,661 | 2,868,934,741 | IssuesEvent | 2015-06-05 22:03:15 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Broken links on http://pub.dartlang.org/doc | bug CannotReproduce Priority-Medium | <a href="https://github.com/bgourlie"><img src="https://avatars.githubusercontent.com/u/996556?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [bgourlie](https://github.com/bgourlie)**
_Originally opened as dart-lang/sdk#6901_
----
There are a couple links pointing to http://pub.dartlang.org/glossary.html#application-package which are returning 404. The links should be pointing to http://pub.dartlang.org/doc/glossary.html#application-package | 1.0 | Broken links on http://pub.dartlang.org/doc - <a href="https://github.com/bgourlie"><img src="https://avatars.githubusercontent.com/u/996556?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [bgourlie](https://github.com/bgourlie)**
_Originally opened as dart-lang/sdk#6901_
----
There are a couple links pointing to http://pub.dartlang.org/glossary.html#application-package which are returning 404. The links should be pointing to http://pub.dartlang.org/doc/glossary.html#application-package | priority | broken links on issue by originally opened as dart lang sdk there are a couple links pointing to which are returning the links should be pointing to | 1 |
254,791 | 8,093,529,441 | IssuesEvent | 2018-08-10 01:28:24 | chasecaleb/OverwatchVision | https://api.github.com/repos/chasecaleb/OverwatchVision | closed | Implement diagnostics: upload recognized images to Amazon S3 | priority:medium type:task | In order to improve recognition in the future, upload image frames, recognition results, and potentially additional metadata such as phone model to S3. This would allow fine-tuning and testing the computer vision logic against a larger, more-representative data set instead of just images that I've captured myself.
- MVP: upload only the final image frame used for each recognition in the background | 1.0 | Implement diagnostics: upload recognized images to Amazon S3 - In order to improve recognition in the future, upload image frames, recognition results, and potentially additional metadata such as phone model to S3. This would allow fine-tuning and testing the computer vision logic against a larger, more-representative data set instead of just images that I've captured myself.
- MVP: upload only the final image frame used for each recognition in the background | priority | implement diagnostics upload recognized images to amazon in order to improve recognition in the future upload image frames recognition results and potentially additional metadata such as phone model to this would allow fine tuning and testing the computer vision logic against a larger more representative data set instead of just images that i ve captured myself mvp upload only the final image frame used for each recognition in the background | 1 |
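For the S3 diagnostics upload sketched in the issue above, the part worth pinning down is a deterministic object key so frames from different phones and sessions never collide; the upload itself is a single `put_object` call. A hedged sketch, where the bucket layout, key format, and helper names are all invented for illustration (not from the project):

```python
import hashlib
from datetime import datetime, timezone


def diagnostic_key(phone_model: str, captured_at: datetime, frame: bytes) -> str:
    """Build a collision-resistant S3 key for one recognized frame."""
    digest = hashlib.sha256(frame).hexdigest()[:12]
    stamp = captured_at.strftime("%Y/%m/%d/%H%M%S")
    model = phone_model.lower().replace(" ", "-")
    return f"frames/{model}/{stamp}-{digest}.png"


def upload_frame(s3_client, bucket: str, key: str, frame: bytes) -> None:
    # boto3's put_object performs the actual upload; with a real client this
    # makes a network request, so it is deliberately not invoked here.
    s3_client.put_object(Bucket=bucket, Key=key, Body=frame,
                         ContentType="image/png")


key = diagnostic_key(
    "Pixel 2",
    datetime(2018, 8, 9, 21, 5, 7, tzinfo=timezone.utc),
    b"\x89PNG...",
)
print(key)  # frames/pixel-2/2018/08/09/210507-<digest>.png
```

Hashing the frame bytes into the key also deduplicates identical frames for free, which matters if the MVP uploads only the final frame per recognition.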
422,054 | 12,265,643,577 | IssuesEvent | 2020-05-07 07:33:36 | aeternity/superhero-ui | https://api.github.com/repos/aeternity/superhero-ui | closed | Navigation (hamburger) does not appear on smaller widths | area/ui kind/bug priority/medium | 
I guess the problem comes from not very well defined breakpoints.
In general we should not use "mobile" detection for css breakpoints but rather use mainly generic width (or height in rare cases). So I guess the fix is just to modify the breakpoints of the navigation (loosen the ruleset - remove the mobile specific rules) | 1.0 | Navigation (hamburger) does not appear on smaller widths - 
I guess the problem comes from not very well defined breakpoints.
In general we should not use "mobile" detection for css breakpoints but rather use mainly generic width (or height in rare cases). So I guess the fix is just to modify the breakpoints of the navigation (loosen the ruleset - remove the mobile specific rules) | priority | navigation hamburger does not appear on smaller widths i guess the problem comes from not very well defined breakpoints in general we should not use mobile detection for css breakpoints but rather use mainly generic width or height in rare cases so i guess the fix is just to modify the breakpoints of the navigation loosen the ruleset remove the mobile specific rules | 1 |
653,130 | 21,572,702,479 | IssuesEvent | 2022-05-02 10:11:13 | open62541/open62541 | https://api.github.com/repos/open62541/open62541 | closed | Upcoming mbedtls V3 contains breaking changes to API | Type: Enhancement Priority: Medium Component: Encryption | <!--
!ATTENTION!
Please read the following page carefully and provide us with all the
information requested:
https://github.com/open62541/open62541/wiki/Writing-Good-Issue-Reports
Use Github Markdown to format your text:
https://help.github.com/articles/basic-writing-and-formatting-syntax/
Fill out the sections and checklist below (add text at the end of each line).
!ATTENTION!
--------------------------------------------------------------------------------
-->
## Description
With mbedtls moving to V3 they are unifying the internal headers into library/* which breaks current implementations.
see https://github.com/ARMmbed/mbedtls/pull/4164
## Background Information / Reproduction Steps
The [sample server](https://github.com/umati/Sample-Server) uses mbedtls with automated [dependency updates](https://github.com/umati/Sample-Server/pull/222).
The latest changes, from ARMmbed/mbedtls@b5939e8 to ARMmbed/mbedtls@12f93f4, break the CI, as they upgrade from V2 to V3.
In detail it is this commit ARMmbed/mbedtls@ea0a865
```
Move entropy_poll.h to library
`entropy_poll.h` is not supposed to be used by application code and
is therefore being made internal.
```
Used CMake options:
<!--
Include all CMake options here, which you modified or used for your build.
If you are using cmake-gui, go to "Tools > Show my Changes" and paste the content of "Command Line Options"
On the command line use `cmake -L` (or `cmake -LA` if you changed advanced variables)
-->
```bash
cmake -DUA_ENABLE_SUBSCRIPTIONS_ALARMS_CONDITIONS:BOOL=ON -DUA_ENABLE_SUBSCRIPTIONS_EVENTS:BOOL=ON -DUA_NAMESPACE_ZERO:STRING=FULL -DUA_ENABLE_ENCRYPTION:BOOL=1 -DUA_ENABLE_ENCRYPTION_MBEDTLS:BOOL=1
```
https://github.com/umati/Sample-Server/blob/2d282e64dcccaf0b2591b6df46436e3c5785a37a/.github/CMakeLists.txt#L28
## Checklist
Please provide the following information:
- [X] open62541 Version (release number or git tag): 0c2cb1c
- [ ] Other OPC UA SDKs used (client or server):
- [X] Operating system: Linux
- [ ] Logs (with `UA_LOGLEVEL` set as low as necessary) attached
- [ ] Wireshark network dump attached
- [ ] Self-contained code example attached
- [ ] Critical issue
| 1.0 | Upcoming mbedtls V3 contains breaking changes to API - <!--
!ATTENTION!
Please read the following page carefully and provide us with all the
information requested:
https://github.com/open62541/open62541/wiki/Writing-Good-Issue-Reports
Use Github Markdown to format your text:
https://help.github.com/articles/basic-writing-and-formatting-syntax/
Fill out the sections and checklist below (add text at the end of each line).
!ATTENTION!
--------------------------------------------------------------------------------
-->
## Description
With mbedtls moving to V3 they are unifying the internal headers into library/* which breaks current implementations.
see https://github.com/ARMmbed/mbedtls/pull/4164
## Background Information / Reproduction Steps
The [sample server](https://github.com/umati/Sample-Server) uses mbedtls with automated [dependency updates](https://github.com/umati/Sample-Server/pull/222).
The latest changes, from ARMmbed/mbedtls@b5939e8 to ARMmbed/mbedtls@12f93f4, break the CI, as they upgrade from V2 to V3.
In detail it is this commit ARMmbed/mbedtls@ea0a865
```
Move entropy_poll.h to library
`entropy_poll.h` is not supposed to be used by application code and
is therefore being made internal.
```
Used CMake options:
<!--
Include all CMake options here, which you modified or used for your build.
If you are using cmake-gui, go to "Tools > Show my Changes" and paste the content of "Command Line Options"
On the command line use `cmake -L` (or `cmake -LA` if you changed advanced variables)
-->
```bash
cmake -DUA_ENABLE_SUBSCRIPTIONS_ALARMS_CONDITIONS:BOOL=ON -DUA_ENABLE_SUBSCRIPTIONS_EVENTS:BOOL=ON -DUA_NAMESPACE_ZERO:STRING=FULL -DUA_ENABLE_ENCRYPTION:BOOL=1 -DUA_ENABLE_ENCRYPTION_MBEDTLS:BOOL=1
```
https://github.com/umati/Sample-Server/blob/2d282e64dcccaf0b2591b6df46436e3c5785a37a/.github/CMakeLists.txt#L28
## Checklist
Please provide the following information:
- [X] open62541 Version (release number or git tag): 0c2cb1c
- [ ] Other OPC UA SDKs used (client or server):
- [X] Operating system: Linux
- [ ] Logs (with `UA_LOGLEVEL` set as low as necessary) attached
- [ ] Wireshark network dump attached
- [ ] Self-contained code example attached
- [ ] Critical issue
| priority | upcoming mbedtls contains breaking changes to api attention please read the following page carefully and provide us with all the information requested use github markdown to format your text fill out the sections and checklist below add text at the end of each line attention description with mbedtls moving to they are unifying the internal headers into library which breaks current implementations see background information reproduction steps the uses mbedtls with automated the latest changes from armmbed mbedtls to armmbed mbedtls breaks the ci as it upgrades from to in detail it is this commit armmbed mbedtls move entropy poll h to library entropy poll h is not supposed to be used by application code and is therefore being made internal used cmake options include all cmake options here which you modified or used for your build if you are using cmake gui go to tools show my changes and paste the content of command line options on the command line use cmake l or cmake la if you changed advanced variables bash cmake dua enable subscriptions alarms conditions bool on dua enable subscriptions events bool on dua namespace zero string full dua enable encryption bool dua enable encryption mbedtls bool checklist please provide the following information version release number or git tag other opc ua sdks used client or server operating system linux logs with ua loglevel set as low as necessary attached wireshark network dump attached self contained code example attached critical issue | 1 |
748,261 | 26,114,688,153 | IssuesEvent | 2022-12-28 03:30:12 | szwathub/LeetCode.swift | https://api.github.com/repos/szwathub/LeetCode.swift | closed | 1753. Maximum Score from Removing Stones | question: medium math greedy heap (priority queue) | # [1753. Maximum Score from Removing Stones](https://leetcode.cn/problems/maximum-score-from-removing-stones/)
You are playing a solitaire game with **three piles** of stones of sizes `a`, `b`, and `c` respectively.
Each turn you choose two **different non-empty piles**, take one stone from each, and add `1` point to your score. The game stops when there are **two or more** empty piles.
Given three integers `a`, `b`, and `c`, return the **maximum score** you can get.
**Example 1:**
```
Input: a = 2, b = 4, c = 6
Output: 6
Explanation: The starting state is (2, 4, 6). One optimal sequence of moves is:
- Take from the first and third piles; the state is now (1, 4, 5)
- Take from the first and third piles; the state is now (0, 4, 4)
- Take from the second and third piles; the state is now (0, 3, 3)
- Take from the second and third piles; the state is now (0, 2, 2)
- Take from the second and third piles; the state is now (0, 1, 1)
- Take from the second and third piles; the state is now (0, 0, 0)
Total score: 6 points.
```
**Example 2:**
```
Input: a = 4, b = 4, c = 6
Output: 7
Explanation: The starting state is (4, 4, 6). One optimal sequence of moves is:
- Take from the first and second piles; the state is now (3, 3, 6)
- Take from the first and third piles; the state is now (2, 3, 5)
- Take from the first and third piles; the state is now (1, 3, 4)
- Take from the first and third piles; the state is now (0, 3, 3)
- Take from the second and third piles; the state is now (0, 2, 2)
- Take from the second and third piles; the state is now (0, 1, 1)
- Take from the second and third piles; the state is now (0, 0, 0)
Total score: 7 points.
```
**Example 3:**
```
Input: a = 1, b = 8, c = 8
Output: 8
Explanation: One optimal strategy is to take from the second and third piles for 8 consecutive turns until both are empty.
Note that because the second and third piles are now empty, the game ends and no more stones can be taken from the first pile.
```
**Constraints:**
- 1 <= a, b, c <= 10<sup>5</sup>
---
Source: LeetCode (力扣)
Link: https://leetcode.cn/problems/maximum-score-from-removing-stones | 1.0 | 1753. Maximum Score from Removing Stones - # [1753. Maximum Score from Removing Stones](https://leetcode.cn/problems/maximum-score-from-removing-stones/)
You are playing a solitaire game with **three piles** of stones of sizes `a`, `b`, and `c` respectively.
Each turn you choose two **different non-empty piles**, take one stone from each, and add `1` point to your score. The game stops when there are **two or more** empty piles.
Given three integers `a`, `b`, and `c`, return the **maximum score** you can get.
**Example 1:**
```
Input: a = 2, b = 4, c = 6
Output: 6
Explanation: The starting state is (2, 4, 6). One optimal sequence of moves is:
- Take from the first and third piles; the state is now (1, 4, 5)
- Take from the first and third piles; the state is now (0, 4, 4)
- Take from the second and third piles; the state is now (0, 3, 3)
- Take from the second and third piles; the state is now (0, 2, 2)
- Take from the second and third piles; the state is now (0, 1, 1)
- Take from the second and third piles; the state is now (0, 0, 0)
Total score: 6 points.
```
**Example 2:**
```
Input: a = 4, b = 4, c = 6
Output: 7
Explanation: The starting state is (4, 4, 6). One optimal sequence of moves is:
- Take from the first and second piles; the state is now (3, 3, 6)
- Take from the first and third piles; the state is now (2, 3, 5)
- Take from the first and third piles; the state is now (1, 3, 4)
- Take from the first and third piles; the state is now (0, 3, 3)
- Take from the second and third piles; the state is now (0, 2, 2)
- Take from the second and third piles; the state is now (0, 1, 1)
- Take from the second and third piles; the state is now (0, 0, 0)
Total score: 7 points.
```
**Example 3:**
```
Input: a = 1, b = 8, c = 8
Output: 8
Explanation: One optimal strategy is to take from the second and third piles for 8 consecutive turns until both are empty.
Note that because the second and third piles are now empty, the game ends and no more stones can be taken from the first pile.
```
**Constraints:**
- 1 <= a, b, c <= 10<sup>5</sup>
---
Source: LeetCode (力扣)
Link: https://leetcode.cn/problems/maximum-score-from-removing-stones | priority | maximum score from removing stones you are playing a solitaire game with three piles of stones of sizes a b and c respectively each turn you choose two different non empty piles take one stone from each and add point to your score the game stops when there are two or more empty piles given three integers a b and c return the maximum score you can get example input a b c output explanation the starting state is one optimal sequence of moves is take from the first and third piles the state is now take from the first and third piles the state is now take from the second and third piles the state is now take from the second and third piles the state is now take from the second and third piles the state is now take from the second and third piles the state is now total score points example input a b c output explanation the starting state is one optimal sequence of moves is take from the first and second piles the state is now take from the first and third piles the state is now take from the first and third piles the state is now take from the first and third piles the state is now take from the second and third piles the state is now take from the second and third piles the state is now take from the second and third piles the state is now total score points example input a b c output explanation one optimal strategy is to take from the second and third piles for consecutive turns until both are empty note that because the second and third piles are now empty the game ends and no more stones can be taken from the first pile constraints a b c source leetcode 力扣 link | 1 |
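The stones problem above has a closed-form greedy answer: sort the piles so that a <= b <= c; if a + b <= c, the largest pile can absorb every stone from the other two, and otherwise pairing can continue until at most one stone remains. A sketch in Python (the host repository collects Swift solutions; Python is used here only to keep the illustration compact):

```python
def maximum_score(a: int, b: int, c: int) -> int:
    """Max turns taking one stone from two different non-empty piles."""
    a, b, c = sorted((a, b, c))
    if a + b <= c:
        # The largest pile outlasts the other two combined, so every
        # stone in the two smaller piles pairs against it.
        return a + b
    # Otherwise the piles can be kept balanced; at most one stone of the
    # a + b + c total is ever left unpaired.
    return (a + b + c) // 2


# The three worked examples from the problem statement:
print(maximum_score(2, 4, 6))  # 6
print(maximum_score(4, 4, 6))  # 7
print(maximum_score(1, 8, 8))  # 8
```

This runs in O(1) time, which is why the problem is tagged greedy/math even though a heap-based simulation also works within the given bounds.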
309,719 | 9,479,096,414 | IssuesEvent | 2019-04-20 04:44:55 | bounswe/bounswe2019group4 | https://api.github.com/repos/bounswe/bounswe2019group4 | opened | View events page | Front-End Priority: Medium Type: Development | The user should be able to list some past or upcoming events on that page. It doesn't have to be interactive or show much detail, as the purpose of this project is just getting familiar with API's and teamwork, not starting the actual implementation of our project. :) | 1.0 | View events page - The user should be able to list some past or upcoming events on that page. It doesn't have to be interactive or show much detail, as the purpose of this project is just getting familiar with API's and teamwork, not starting the actual implementation of our project. :) | priority | view events page the user should be able to list some past or upcoming events on that page it doesn t have to be interactive or show much detail as the purpose of this project is just getting familiar with api s and teamwork not starting the actual implementation of our project | 1 |
106,359 | 4,270,822,831 | IssuesEvent | 2016-07-13 08:48:44 | ChristianKuehnel/fhem-venetianblinds | https://api.github.com/repos/ChristianKuehnel/fhem-venetianblinds | opened | use attributes instead of define statements | Priority:Medium | Need to find out if we can set different attributes for the three different device classes | 1.0 | use attributes instead of define statements - Need to find out if we can set different attributes for the three different device classes | priority | use attributes instead of define statements need to find out if we can set different attributes for the three different device classes | 1 |
101,344 | 4,113,291,233 | IssuesEvent | 2016-06-07 13:42:27 | laconalabs/LaconaApp | https://api.github.com/repos/laconalabs/LaconaApp | closed | List all options when pressing enter on a generic command | Priority: Medium Status: Accepted Type: Enhancement | I don't know how to best title this issue. I'll explain it using an example.
When I type `eject`, I get this options:
- `eject all`
- `eject` <kbd>volume</kbd>
It would be nice (I don't know if possible, at least in this case), if, when I press <kbd>enter</kbd> on the second option, the list changes to:
- `eject ClipMenu`
- `eject Lacona`
and so forth. | 1.0 | List all options when pressing enter on a generic command - I don't know how to best title this issue. I'll explain it using an example.
When I type `eject`, I get this options:
- `eject all`
- `eject` <kbd>volume</kbd>
It would be nice (I don't know if possible, at least in this case), if, when I press <kbd>enter</kbd> on the second option, the list changes to:
- `eject ClipMenu`
- `eject Lacona`
and so forth. | priority | list all options when pressing enter on a generic command i don t know how to best title this issue i ll explain it using an example when i type eject i get this options eject all eject volume it would be nice i don t know if possible at least in this case if when i press enter on the second option the list changes to eject clipmenu eject lacona and so forth | 1 |
137,678 | 5,314,319,854 | IssuesEvent | 2017-02-13 14:47:08 | graphcool/console | https://api.github.com/repos/graphcool/console | closed | Settings icon next to Model Name confusing | area/models/databrowser enhancement priority/medium | 
I think "settings icon" is a little bit weird. People could assume, it's a schema editor. I would expect a "pen", since the only functionality is to rename the model. | 1.0 | Settings icon next to Model Name confusing - 
I think "settings icon" is a little bit weird. People could assume, it's a schema editor. I would expect a "pen", since the only functionality is to rename the model. | priority | settings icon next to model name confusing i think settings icon is a little bit weird people could assume it s a schema editor i would expect a pen since the only functionality is to rename the model | 1 |
828,127 | 31,812,838,215 | IssuesEvent | 2023-09-13 18:07:21 | 389ds/389-ds-base | https://api.github.com/repos/389ds/389-ds-base | closed | libdb deadlock while performing modrdns | priority_medium | Cloned from Pagure issue: https://pagure.io/389-ds-base/issue/48166
- Created at 2015-04-23 20:41:12 by [mreynolds](https://pagure.io/user/mreynolds) (@mreynolds389)
- Assigned to nobody
---
While running a modrdn stress test it appears that there is a deadlock between the modrdn operation and the checkpointing thread
0 0x00007f92ddb7eca0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
1 0x00007f92d687a1c3 in __db_hybrid_mutex_suspend () from /lib64/libdb-5.3.so
2 0x00007f92d68795a8 in __db_tas_mutex_lock () from /lib64/libdb-5.3.so
3 0x00007f92d69946fb in __memp_fget () from /lib64/libdb-5.3.so
4 0x00007f92d68970b1 in __bam_search () from /lib64/libdb-5.3.so
5 0x00007f92d6882126 in __bamc_search () from /lib64/libdb-5.3.so
6 0x00007f92d68865a4 in __bamc_put () from /lib64/libdb-5.3.so
7 0x00007f92d693db85 in __dbc_iput () from /lib64/libdb-5.3.so
8 0x00007f92d6938d1e in __db_put () from /lib64/libdb-5.3.so
9 0x00007f92d694e5d4 in __db_put_pp () from /lib64/libdb-5.3.so
10 0x00007f92d46de02f in idl_new_insert_key (be=<optimized out>, db=<optimized out>, key=<optimized out>, id=4, txn=<optimized out>,
a=<optimized out>, disposition=0x0) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/idl_new.c:830
11 0x00007f92d46dcec5 in idl_insert_key (be=be@entry=0xd2e9d0, db=db@entry=0x7f92340029f0, key=key@entry=0x7f92b2fefc10, id=id@entry=4,
txn=txn@entry=0x7f9268004e40, a=a@entry=0xe09f70, disposition=disposition@entry=0x0)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/idl_shim.c:144
12 0x00007f92d46ecbe8 in addordel_values_sv (be=be@entry=0xd2e9d0, db=0x7f92340029f0, indextype=<optimized out>, vals=<optimized out>,
id=id@entry=4, flags=flags@entry=1, txn=txn@entry=0x7f92b2ff2550, a=0xe09f70, idl_disposition=idl_disposition@entry=0x0,
buffer_handle=buffer_handle@entry=0x0, type=<optimized out>)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/index.c:1962
13 0x00007f92d46ed642 in index_addordel_values_ext_sv (be=be@entry=0xd2e9d0, type=<optimized out>, vals=0x7f9268008a10,
evals=evals@entry=0x0, id=id@entry=4, flags=flags@entry=1, txn=txn@entry=0x7f92b2ff2550, idl_disposition=idl_disposition@entry=0x0,
buffer_handle=buffer_handle@entry=0x0) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/index.c:2156
14 0x00007f92d46edbe4 in index_addordel_values_sv (be=be@entry=0xd2e9d0, type=<optimized out>, vals=<optimized out>,
evals=evals@entry=0x0, id=id@entry=4, flags=flags@entry=1, txn=txn@entry=0x7f92b2ff2550)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/index.c:2021
15 0x00007f92d46ee3ab in index_add_mods (be=0xd2e9d0, mods=<optimized out>, olde=olde@entry=0x7f9294009a50,
newe=newe@entry=0x7f926802e960, txn=txn@entry=0x7f92b2ff2550)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/index.c:674
16 0x00007f92d470bde0 in modrdn_rename_entry_update_indexes (ptxn=ptxn@entry=0x7f92b2ff2550, pb=pb@entry=0x7f92b2ff4ae0, e=0x7f9294009a50,
ec=ec@entry=0x7f92b2ff2578, smods1=smods1@entry=0x7f92b2ff2670, smods2=smods2@entry=0x7f92b2ff2690, smods3=smods3@entry=0x7f92b2ff2650,
li=<optimized out>) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c:1901
17 0x00007f92d470c76b in ldbm_back_modrdn (pb=<optimized out>)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c:1053
18 0x00007f92e045d047 in op_shared_rename (pb=pb@entry=0x7f92b2ff4ae0, passin_args=0)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/modrdn.c:652
19 0x00007f92e045d885 in do_modrdn (pb=pb@entry=0x7f92b2ff4ae0) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/modrdn.c:256
20 0x00000000004183b9 in connection_dispatch_operation (pb=0x7f92b2ff4ae0, op=0xf4b530, conn=0x7f92e0800aa0)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/connection.c:655
21 connection_threadmain () at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/connection.c:2534
The checkpointing thread appears to own the db lock:
This is the modrdn thread
(gdb) thread 13
[Switching to thread 13 (Thread 0x7f4e697f2700 (LWP 4177))]
0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 62: movl (%rsp), %edi
(gdb) up
1 0x00007f4e8e0e1570 in __db_pthread_mutex_condwait (env=0x2616fa0, mutex=1965, mutexp=0x7f4e888a6090, timespec=0x0) at ../src/mutex/mut_pthread.c:321
321 RET_SET((pthread_cond_wait(&mutexp->u.m.cond,
(gdb) p *mutexp
$1 = {u = {m = {mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 1, __kind = 128, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}},
__size = '\000' <repeats 12 times>, "\001\000\000\000\200", '\000' <repeats 22 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 1, __total_seq = 1, __wakeup_seq = 0, __woken_seq = 0,
__mutex = 0xffffffffffffffff, __nwaiters = 2, __broadcast_seq = 0},
__size = "\000\000\000\000\001\000\000\000\001", '\000' <repeats 23 times>, "\377\377\377\377\377\377\377\377\002\000\000\000\000\000\000", __align = 4294967296}}, rwlock = {__data = {__lock = 0,
__nr_readers = 0, __readers_wakeup = 0, __writer_wakeup = 1, __nr_readers_queued = 128, __nr_writers_queued = 0, __writer = 0, __shared = 0, __pad1 = 0, __pad2 = 4294967296, __flags = 1},
__size = '\000' <repeats 12 times>, "\001\000\000\000\200", '\000' <repeats 27 times>, "\001\000\000\000\001\000\000\000\000\000\000", __align = 0}}, tas = 0 '\000', sharecount = {value = 1}, wait = 1,
pid = 4145, tid = 139975259846400, mutex_next_link = 0, alloc_id = 15, mutex_set_wait = 0, mutex_set_nowait = 17, mutex_set_rd_wait = 0, mutex_set_rd_nowait = 48, hybrid_wait = 0, hybrid_wakeup = 0,
flags = 49}
not sure what all the fields mean, but tid = 139975259846400 == 0x7f4e87a3f700
and this is the checkpoint thread
Thread 41 (Thread 0x7f4e87a3f700 (LWP 4149)):
0 0x00007f4e9519e5f3 in select () at ../sysdeps/unix/syscall-template.S:81
1 0x00007f4e8e29fd19 in __os_sleep (env=0x2616fa0, secs=1, usecs=0) at ../src/os/os_yield.c:90
2 0x00007f4e8e29fcb9 in __os_yield (env=0x2616fa0, secs=1, usecs=0) at ../src/os/os_yield.c:48
3 0x00007f4e8e299d8e in __memp_sync_int (env=0x2616fa0, dbmfp=0x0, trickle_max=0, flags=4, wrote_totalp=0x0, interruptedp=0x0) at ../src/mp/mp_sync.c:483
4 0x00007f4e8e2b1a7a in __txn_checkpoint (env=0x2616fa0, kbytes=0, minutes=0, flags=0) at ../src/txn/txn_chkpt.c:242
5 0x00007f4e8e2b14fd in __txn_checkpoint_pp (dbenv=0x26ecf90, kbytes=0, minutes=0, flags=0) at ../src/txn/txn_chkpt.c:81
6 0x00007f4e8bf3aaa7 in checkpoint_threadmain (param=<optimized out>) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/dblayer.c:4769
7 0x00007f4e95ad7c2b in _pt_root (arg=0x26f65f0) at ../../../nspr/pr/src/pthreads/ptthread.c:212
8 0x00007f4e95477ee5 in start_thread (arg=0x7f4e87a3f700) at pthread_create.c:309
9 0x00007f4e951a6d1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 | 1.0 | libdb deadlock while performing modrdns - Cloned from Pagure issue: https://pagure.io/389-ds-base/issue/48166
- Created at 2015-04-23 20:41:12 by [mreynolds](https://pagure.io/user/mreynolds) (@mreynolds389)
- Assigned to nobody
---
While running a modrdn stress test it appears that there is a deadlock when between the modrdn operation and the checkpointing thread
0 0x00007f92ddb7eca0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
1 0x00007f92d687a1c3 in __db_hybrid_mutex_suspend () from /lib64/libdb-5.3.so
2 0x00007f92d68795a8 in __db_tas_mutex_lock () from /lib64/libdb-5.3.so
3 0x00007f92d69946fb in __memp_fget () from /lib64/libdb-5.3.so
4 0x00007f92d68970b1 in __bam_search () from /lib64/libdb-5.3.so
5 0x00007f92d6882126 in __bamc_search () from /lib64/libdb-5.3.so
6 0x00007f92d68865a4 in __bamc_put () from /lib64/libdb-5.3.so
7 0x00007f92d693db85 in __dbc_iput () from /lib64/libdb-5.3.so
8 0x00007f92d6938d1e in __db_put () from /lib64/libdb-5.3.so
9 0x00007f92d694e5d4 in __db_put_pp () from /lib64/libdb-5.3.so
10 0x00007f92d46de02f in idl_new_insert_key (be=<optimized out>, db=<optimized out>, key=<optimized out>, id=4, txn=<optimized out>,
a=<optimized out>, disposition=0x0) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/idl_new.c:830
11 0x00007f92d46dcec5 in idl_insert_key (be=be@entry=0xd2e9d0, db=db@entry=0x7f92340029f0, key=key@entry=0x7f92b2fefc10, id=id@entry=4,
txn=txn@entry=0x7f9268004e40, a=a@entry=0xe09f70, disposition=disposition@entry=0x0)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/idl_shim.c:144
12 0x00007f92d46ecbe8 in addordel_values_sv (be=be@entry=0xd2e9d0, db=0x7f92340029f0, indextype=<optimized out>, vals=<optimized out>,
id=id@entry=4, flags=flags@entry=1, txn=txn@entry=0x7f92b2ff2550, a=0xe09f70, idl_disposition=idl_disposition@entry=0x0,
buffer_handle=buffer_handle@entry=0x0, type=<optimized out>)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/index.c:1962
13 0x00007f92d46ed642 in index_addordel_values_ext_sv (be=be@entry=0xd2e9d0, type=<optimized out>, vals=0x7f9268008a10,
evals=evals@entry=0x0, id=id@entry=4, flags=flags@entry=1, txn=txn@entry=0x7f92b2ff2550, idl_disposition=idl_disposition@entry=0x0,
buffer_handle=buffer_handle@entry=0x0) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/index.c:2156
14 0x00007f92d46edbe4 in index_addordel_values_sv (be=be@entry=0xd2e9d0, type=<optimized out>, vals=<optimized out>,
evals=evals@entry=0x0, id=id@entry=4, flags=flags@entry=1, txn=txn@entry=0x7f92b2ff2550)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/index.c:2021
15 0x00007f92d46ee3ab in index_add_mods (be=0xd2e9d0, mods=<optimized out>, olde=olde@entry=0x7f9294009a50,
newe=newe@entry=0x7f926802e960, txn=txn@entry=0x7f92b2ff2550)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/index.c:674
16 0x00007f92d470bde0 in modrdn_rename_entry_update_indexes (ptxn=ptxn@entry=0x7f92b2ff2550, pb=pb@entry=0x7f92b2ff4ae0, e=0x7f9294009a50,
ec=ec@entry=0x7f92b2ff2578, smods1=smods1@entry=0x7f92b2ff2670, smods2=smods2@entry=0x7f92b2ff2690, smods3=smods3@entry=0x7f92b2ff2650,
li=<optimized out>) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c:1901
17 0x00007f92d470c76b in ldbm_back_modrdn (pb=<optimized out>)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/ldbm_modrdn.c:1053
18 0x00007f92e045d047 in op_shared_rename (pb=pb@entry=0x7f92b2ff4ae0, passin_args=0)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/modrdn.c:652
19 0x00007f92e045d885 in do_modrdn (pb=pb@entry=0x7f92b2ff4ae0) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/modrdn.c:256
20 0x00000000004183b9 in connection_dispatch_operation (pb=0x7f92b2ff4ae0, op=0xf4b530, conn=0x7f92e0800aa0)
at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/connection.c:655
21 connection_threadmain () at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/connection.c:2534
The checkpointing thread appears to own the db lock:
This is the modrdn thread
(gdb) thread 13
[Switching to thread 13 (Thread 0x7f4e697f2700 (LWP 4177))]
0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 62: movl (%rsp), %edi
(gdb) up
1 0x00007f4e8e0e1570 in __db_pthread_mutex_condwait (env=0x2616fa0, mutex=1965, mutexp=0x7f4e888a6090, timespec=0x0) at ../src/mutex/mut_pthread.c:321
321 RET_SET((pthread_cond_wait(&mutexp->u.m.cond,
(gdb) p *mutexp
$1 = {u = {m = {mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 1, __kind = 128, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}},
__size = '\000' <repeats 12 times>, "\001\000\000\000\200", '\000' <repeats 22 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 1, __total_seq = 1, __wakeup_seq = 0, __woken_seq = 0,
__mutex = 0xffffffffffffffff, __nwaiters = 2, __broadcast_seq = 0},
__size = "\000\000\000\000\001\000\000\000\001", '\000' <repeats 23 times>, "\377\377\377\377\377\377\377\377\002\000\000\000\000\000\000", __align = 4294967296}}, rwlock = {__data = {__lock = 0,
__nr_readers = 0, __readers_wakeup = 0, __writer_wakeup = 1, __nr_readers_queued = 128, __nr_writers_queued = 0, __writer = 0, __shared = 0, __pad1 = 0, __pad2 = 4294967296, __flags = 1},
__size = '\000' <repeats 12 times>, "\001\000\000\000\200", '\000' <repeats 27 times>, "\001\000\000\000\001\000\000\000\000\000\000", __align = 0}}, tas = 0 '\000', sharecount = {value = 1}, wait = 1,
pid = 4145, tid = 139975259846400, mutex_next_link = 0, alloc_id = 15, mutex_set_wait = 0, mutex_set_nowait = 17, mutex_set_rd_wait = 0, mutex_set_rd_nowait = 48, hybrid_wait = 0, hybrid_wakeup = 0,
flags = 49}
not sure what all the fields mean, but tid = 139975259846400 == 0x7f4e87a3f700
and this is the checkpoint thread
Thread 41 (Thread 0x7f4e87a3f700 (LWP 4149)):
0 0x00007f4e9519e5f3 in select () at ../sysdeps/unix/syscall-template.S:81
1 0x00007f4e8e29fd19 in __os_sleep (env=0x2616fa0, secs=1, usecs=0) at ../src/os/os_yield.c:90
2 0x00007f4e8e29fcb9 in __os_yield (env=0x2616fa0, secs=1, usecs=0) at ../src/os/os_yield.c:48
3 0x00007f4e8e299d8e in __memp_sync_int (env=0x2616fa0, dbmfp=0x0, trickle_max=0, flags=4, wrote_totalp=0x0, interruptedp=0x0) at ../src/mp/mp_sync.c:483
4 0x00007f4e8e2b1a7a in __txn_checkpoint (env=0x2616fa0, kbytes=0, minutes=0, flags=0) at ../src/txn/txn_chkpt.c:242
5 0x00007f4e8e2b14fd in __txn_checkpoint_pp (dbenv=0x26ecf90, kbytes=0, minutes=0, flags=0) at ../src/txn/txn_chkpt.c:81
6 0x00007f4e8bf3aaa7 in checkpoint_threadmain (param=<optimized out>) at /home/mareynol/workspaces/389-ds-base/ds/ldap/servers/slapd/back-ldbm/dblayer.c:4769
7 0x00007f4e95ad7c2b in _pt_root (arg=0x26f65f0) at ../../../nspr/pr/src/pthreads/ptthread.c:212
8 0x00007f4e95477ee5 in start_thread (arg=0x7f4e87a3f700) at pthread_create.c:309
9 0x00007f4e951a6d1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 | priority | libdb deadlock while performing modrdns cloned from pagure issue created at by assigned to nobody while running a modrdn stress test it appears that there is a deadlock when between the modrdn operation and the checkpointing thread in pthread cond wait glibc from libpthread so in db hybrid mutex suspend from libdb so in db tas mutex lock from libdb so in memp fget from libdb so in bam search from libdb so in bamc search from libdb so in bamc put from libdb so in dbc iput from libdb so in db put from libdb so in db put pp from libdb so in idl new insert key be db key id txn a disposition at home mareynol workspaces ds base ds ldap servers slapd back ldbm idl new c in idl insert key be be entry db db entry key key entry id id entry txn txn entry a a entry disposition disposition entry at home mareynol workspaces ds base ds ldap servers slapd back ldbm idl shim c in addordel values sv be be entry db indextype vals id id entry flags flags entry txn txn entry a idl disposition idl disposition entry buffer handle buffer handle entry type at home mareynol workspaces ds base ds ldap servers slapd back ldbm index c in index addordel values ext sv be be entry type vals evals evals entry id id entry flags flags entry txn txn entry idl disposition idl disposition entry buffer handle buffer handle entry at home mareynol workspaces ds base ds ldap servers slapd back ldbm index c in index addordel values sv be be entry type vals evals evals entry id id entry flags flags entry txn txn entry at home mareynol workspaces ds base ds ldap servers slapd back ldbm index c in index add mods be mods olde olde entry newe newe entry txn txn entry at home mareynol workspaces ds base ds ldap servers slapd back ldbm index c in modrdn rename entry update indexes ptxn ptxn entry pb pb entry e ec ec entry entry entry entry li at home mareynol workspaces ds base ds ldap servers slapd back ldbm ldbm modrdn 
c in ldbm back modrdn pb at home mareynol workspaces ds base ds ldap servers slapd back ldbm ldbm modrdn c in op shared rename pb pb entry passin args at home mareynol workspaces ds base ds ldap servers slapd modrdn c in do modrdn pb pb entry at home mareynol workspaces ds base ds ldap servers slapd modrdn c in connection dispatch operation pb op conn at home mareynol workspaces ds base ds ldap servers slapd connection c connection threadmain at home mareynol workspaces ds base ds ldap servers slapd connection c the checkpointing thread appears to own the db lock this is the modrdn thread gdb thread pthread cond wait glibc at nptl sysdeps unix sysv linux pthread cond wait s movl rsp edi gdb up in db pthread mutex condwait env mutex mutexp timespec at src mutex mut pthread c ret set pthread cond wait mutexp u m cond gdb p mutexp u m mutex data lock count owner nusers kind spins elision list prev next size align cond data lock futex total seq wakeup seq woken seq mutex nwaiters broadcast seq size align rwlock data lock nr readers readers wakeup writer wakeup nr readers queued nr writers queued writer shared flags size align tas sharecount value wait pid tid mutex next link alloc id mutex set wait mutex set nowait mutex set rd wait mutex set rd nowait hybrid wait hybrid wakeup flags not sure what all the fields mean but tid and this is the checkpoint thread thread thread lwp in select at sysdeps unix syscall template s in os sleep env secs usecs at src os os yield c in os yield env secs usecs at src os os yield c in memp sync int env dbmfp trickle max flags wrote totalp interruptedp at src mp mp sync c in txn checkpoint env kbytes minutes flags at src txn txn chkpt c in txn checkpoint pp dbenv kbytes minutes flags at src txn txn chkpt c in checkpoint threadmain param at home mareynol workspaces ds base ds ldap servers slapd back ldbm dblayer c in pt root arg at nspr pr src pthreads ptthread c in start thread arg at pthread create c in clone at sysdeps unix sysv linux 
clone s | 1 |
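The key step in the analysis above is matching the mutex owner's decimal `tid` from `p *mutexp` against the hex thread id gdb prints for the checkpoint thread. That cross-check is just a decimal-to-hex conversion, verifiable in two lines of Python (values are the ones from the gdb dump in this record):

```python
# The DB_MUTEX dump reports its owner as a decimal tid; gdb's thread list
# shows thread ids in hex. Converting confirms Thread 41 (the checkpoint
# thread) owns the mutex the modrdn thread is waiting on.
owner_tid = 139975259846400          # `tid` field from `p *mutexp`
checkpoint_thread = 0x7f4e87a3f700   # Thread 41 in the backtrace

assert owner_tid == checkpoint_thread
print(hex(owner_tid))  # 0x7f4e87a3f700
```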
57,768 | 3,083,772,634 | IssuesEvent | 2015-08-24 11:12:34 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | [Log analysis from Gap51] Unsuccessful attempts to open the file C:\FlylinkDC++\.antifrag | bug imported Priority-Medium | _From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on July 17, 2013 04:52:37_
Log excerpt http://www.flickr.com/photos/96019675@N02/9302854303/ captured with the application http://technet.microsoft.com/ru-RU/sysinternals/bb896645 Probable location:
QueueItemPtr QueueManager::FileQueue::add
qi->setTempTarget(aTempTarget);
if (!File::isExist(aTempTarget) && File::isExist(aTempTarget + ".antifrag"))
{
    // load old antifrag file
    File::renameFile(aTempTarget + ".antifrag", qi->getTempTarget());
}
Occurs when aTempTarget is empty
Investigate this code
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1084_ | 1.0 | [Log analysis from Gap51] Unsuccessful attempts to open the file C:\FlylinkDC++\.antifrag - _From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on July 17, 2013 04:52:37_
Log excerpt http://www.flickr.com/photos/96019675@N02/9302854303/ captured with the application http://technet.microsoft.com/ru-RU/sysinternals/bb896645 Probable location:
QueueItemPtr QueueManager::FileQueue::add
qi->setTempTarget(aTempTarget);
if (!File::isExist(aTempTarget) && File::isExist(aTempTarget + ".antifrag"))
{
    // load old antifrag file
    File::renameFile(aTempTarget + ".antifrag", qi->getTempTarget());
}
Occurs when aTempTarget is empty
Investigate this code
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1084_ | priority | unsuccessful attempts to open the file c flylinkdc antifrag from on july log excerpt captured with the application probable location queueitemptr queuemanager filequeue add qi settemptarget atemptarget if file isexist atemptarget file isexist atemptarget antifrag load old antifrag file file renamefile atemptarget antifrag qi gettemptarget occurs when atemptarget is empty investigate this code original issue | 1 |
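The C++ guard quoted in this record never rejects an empty aTempTarget, so `File::isExist` ends up probing the bare ".antifrag" name — the failed open attempts captured in the log. A Python sketch of the same decision (illustrative only; `resolve_temp_target` and the injectable `exists` parameter are my names, not FlylinkDC code):

```python
import os

def resolve_temp_target(temp_target: str, exists=os.path.exists) -> str:
    """Return the path the queue item should adopt (illustrative sketch).

    `exists` is injectable so the decision logic can be exercised without
    touching the filesystem.
    """
    if not temp_target:
        # The missing guard: with an empty target, the original code probes
        # a bare ".antifrag" path in the working directory.
        return ""
    if not exists(temp_target) and exists(temp_target + ".antifrag"):
        return temp_target + ".antifrag"   # old antifrag file to rename
    return temp_target
```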
76,838 | 3,494,071,848 | IssuesEvent | 2016-01-05 08:14:55 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | opened | New Contribute Flow: update/edit dataset link/button | Priority-Medium | update/edit dataset link/button should point to the new contribute flow as we have it for "add data" | 1.0 | New Contribute Flow: update/edit dataset link/button - update/edit dataset link/button should point to the new contribute flow as we have it for "add data" | priority | new contribute flow update edit dataset link button update edit dataset link button should point to the new contribute flow as we have it for add data | 1 |
737,100 | 25,500,041,373 | IssuesEvent | 2022-11-28 02:38:11 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [CDCSDK] Restrict compaction when CDC snapshot and before-image features are enabled | priority/medium area/cdcsdk | Jira Link: [DB-4290](https://yugabyte.atlassian.net/browse/DB-4290)
CDC should restrict compaction during the before-image and snapshot operations. | 1.0 | [CDCSDK] Restrict compaction when CDC snapshot and before-image features are enabled - Jira Link: [DB-4290](https://yugabyte.atlassian.net/browse/DB-4290)
CDC should restrict compaction during the before image and snapshot operation. | priority | restrict compaction when cdc snapshot and before the image feature is enabled jira link cdc should restrict compaction during the before image and snapshot operation | 1 |
484,155 | 13,935,101,707 | IssuesEvent | 2020-10-22 11:00:18 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | k_wakeup followed by k_thread_resume call causes system freeze | bug platform: nRF priority: medium | **Describe the bug**
I've added a while loop with an 8-second sleep in a thread. Now, by mistake, I called both k_wakeup and k_thread_resume, which causes the system to freeze without any crash logs.
**To Reproduce**
Steps to reproduce the behaviour:
1. $ git clone -b thread-handling-issue https://github.com/rhardik/reproduce-zephyr-issues.git
2. $ cd _reproduce-zephyr-issues_
3. copy _issue-1_ folder to zephyr compilation environment
4. $ cd issue-1 && west build -b <board_name> (I'm using nRF52 boards)
5. west flash
6. press button-0 to suspend thread.
7. press button-1 to resume thread (here I've used both k_wakeup and k_thread_resume)
8. See the UART terminal: the system freezes
9. Now edit the code. Enable _k_thread_suspend(dummy_tid)_
```c
while (1) {
    k_sleep(K_MSEC(8000));
    printk("flag = %d\n", dummy_flg);
    printk("%s\n", __func__);
    //k_thread_suspend(dummy_tid);
}
```
10. Recompile and flash the code.
11. Press button-0 to see if suspend thread works or not.
12. Press button-1 to resume the thread.
13. See the UART terminal: the while loop keeps running continuously, ignoring _k_sleep(8000)_
**Expected behavior**
The system should not freeze if I call k_thread_resume after k_wakeup (note: the thread is in the sleep state here)
**Impact**
annoyance.
**Logs and console output**

**Environment (please complete the following information):**
- OS: Linux)
- Toolchain - Zephyr SDK
- Commit a3fae2f (zephyr v2.2.99)
| 1.0 | k_wakeup followed by k_thread_resume call causes system freeze - **Describe the bug**
I've added a while loop with an 8-second sleep in a thread. Now, by mistake, I called both k_wakeup and k_thread_resume, which causes the system to freeze without any crash logs.
**To Reproduce**
Steps to reproduce the behaviour:
1. $ git clone -b thread-handling-issue https://github.com/rhardik/reproduce-zephyr-issues.git
2. $ cd _reproduce-zephyr-issues_
3. copy _issue-1_ folder to zephyr compilation environment
4. $ cd issue-1 && west build -b <board_name> (I'm using nRF52 boards)
5. west flash
6. press button-0 to suspend thread.
7. press button-1 to resume thread (here I've used both k_wakeup and k_thread_resume)
8. See the UART terminal: the system freezes
9. Now edit the code. Enable _k_thread_suspend(dummy_tid)_
```c
while (1) {
    k_sleep(K_MSEC(8000));
    printk("flag = %d\n", dummy_flg);
    printk("%s\n", __func__);
    //k_thread_suspend(dummy_tid);
}
```
10. Recompile and flash the code.
11. Press button-0 to see if suspend thread works or not.
12. Press button-1 to resume the thread.
13. See the UART terminal: the while loop keeps running continuously, ignoring _k_sleep(8000)_
**Expected behavior**
The system should not freeze if I call k_thread_resume after k_wakeup (note: the thread is in the sleep state here)
**Impact**
annoyance.
**Logs and console output**

**Environment (please complete the following information):**
- OS: Linux)
- Toolchain - Zephyr SDK
- Commit a3fae2f (zephyr v2.2.99)
| priority | k wakeup follwed by k thread resume call causes system freeze describe the bug i ve added a while loop with seconds sleep in a thread now by mistake i called both k wakeup and k thread resume which is causing system to get freeze without any crash logs to reproduce steps to reproduce the behaviour git clone b thread handling issue cd reproduce zephyr issues copy issue folder to zephyr compilation environment cd issue west build b i m using boards west flash press button to suspend thread press button to resume thread here i ve used both k wakeup and k thread resume see at uart terminal system gets freeze now edit the code enable k thread suspend dummy tid while k sleep k msec printk flag d n dummy flg printk s n func k thread suspend dummy tid recompile and flash the code press button to see if suspend thread works or not press button to resume the thread see at uart terminal while loop continuously running without considering k sleep expected behavior system should not get freeze if i call k thread resume after k wakeup note thread in sleep state here impact annoyance logs and console output environment please complete the following information os linux toolchain zephyr sdk commit zephyr | 1 |
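The distinction this report runs into — k_wakeup only ends a sleep, while k_thread_resume only undoes a k_thread_suspend — can be sketched as a toy state machine. This is a plain-Python model built on my own assumptions, not Zephyr's actual scheduler code:

```python
class ToyThread:
    """Two independent state bits, loosely modelling a sleep timeout vs. a
    suspended flag. Not real Zephyr code; names mirror the kernel API only
    for readability."""

    def __init__(self):
        self.sleeping = False
        self.suspended = False

    def k_sleep(self):
        self.sleeping = True

    def k_thread_suspend(self):
        self.suspended = True

    def k_wakeup(self):
        self.sleeping = False            # clears only the sleep state

    def k_thread_resume(self):
        if not self.suspended:
            # Resuming a thread that was never suspended is the hazardous
            # combination from steps 6-8 of the report above.
            raise RuntimeError("resume on a thread that is not suspended")
        self.suspended = False
```

In the model, calling both `k_wakeup()` and `k_thread_resume()` on a thread that was merely sleeping hits the error path, which is one way to picture why the combined calls misbehave.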
285,937 | 8,781,174,522 | IssuesEvent | 2018-12-19 19:37:35 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | Profile bio truncate behaviour | SEV3-medium area/UX area/auth area/user/profile priority/P4 team/ui team/ux type/bug | > The bio code prevents users from typing more than 250 characters by removing the last X characters in the bio text s.t. it is 255 characters long.
>
> This means if a user has something 255 characters long, and edits somewhere in the middle of his bio, the code will remove text at the end of the bio. This is definitely not a good thing to do.
>
> I'd prefer the update to be disabled if a user goes over the limit and the bio section should show text, Bio cannot be over 255 characters or something similar so users are aware they need to modify their bio to be shorter.
>
> Also curiously, new lines that a user enters in a bio are shown as spaces. This is definitely unexpected behavior from a user perspective.


Based on @jiekang's comment on a [PR](https://github.com/fabric8-ui/fabric8-ui/pull/2777). I think this issue needs help from the UX team.
cc: @alexeykazakov
| 1.0 | Profile bio truncate behaviour - > The bio code prevents users from typing more than 250 characters by removing the last X characters in the bio text s.t. it is 255 characters long.
>
> This means if a user has something 255 characters long, and edits somewhere in the middle of his bio, the code will remove text at the end of the bio. This is definitely not a good thing to do.
>
> I'd prefer the update to be disabled if a user goes over the limit and the bio section should show text, Bio cannot be over 255 characters or something similar so users are aware they need to modify their bio to be shorter.
>
> Also curiously, new lines that a user enters in a bio are shown as spaces. This is definitely unexpected behavior from a user perspective.


Based on @jiekang's comment on a [PR](https://github.com/fabric8-ui/fabric8-ui/pull/2777). I think this issue needs help from the UX team.
cc: @alexeykazakov
| priority | profile bio truncate behaviour the bio code prevents users from typing more than characters by removing the last x characters in the bio text s t it is characters long this means if a user has something characters long and edits somewhere in the middle of his bio the code will remove text at the end of the bio this is definitely not a good thing to do i d prefer the update to be disabled if a user goes over the limit and the bio section should show text bio cannot be over characters or something similar so users are aware they need to modify their bio to be shorter also curiously new lines that a user enters in a bio are shown as spaces this is definitely unexpected behavior from a user perspective based on jiekang s comment on a i think this issue needs help from the ux team cc alexeykazakov | 1 |
203,120 | 7,058,020,439 | IssuesEvent | 2018-01-04 18:39:21 | opencurrents/opencurrents | https://api.github.com/repos/opencurrents/opencurrents | closed | Org Sign-up: Improper functionality on org sign-up when accessed from home - join as Nonprofit | mvp yes priority high priority medium | This error is only happening when using the bottom form of opencurrents.com/nonprofit which is also accessed from home clicking register a nonprofit. The top form of opencurrents.com/nonprofit is working properly.
@dannypernik @nickolashe note that the Org user is being created in the DB but with no affiliation. I had registered Costas Verdes as npf on sign up, and when visiting "Orgs" it appeared with no affiliation, although when clicking on the particular org it appeared to be registered with biz. When changing the org from biz to npf, proper functionality was achieved.


Below: Costas Verdes had originally no affiliation with Orgs - this is after changing it.

| 2.0 | Org Sign-up: Improper functionality on org sign-up when accessed from home - join as Nonprofit - This error is only happening when using the bottom form of opencurrents.com/nonprofit which is also accessed from home clicking register a nonprofit. The top form of opencurrents.com/nonprofit is working properly.
@dannypernik @nickolashe note that the Org user is being created in the DB but with no affiliation. I had registered Costas Verdes as npf on sign up, and when visiting "Orgs" it appeared with no affiliation, although when clicking on the particular org it appeared to be registered with biz. When changing the org from biz to npf, proper functionality was achieved.


Below: Costas Verdes had originally no affiliation with Orgs - this is after changing it.

| priority | org sign up improper functionality on org sign up when accessed from home join as nonprofit this error is only happening when using the bottom form of opencurrents com nonprofit which is also accessed from home clicking register a nonprofit the top form of opencurrents com nonprofit is working properly dannypernik nickolashe note that org user is being created in db but with no affiliation i had registered costas verdes as npf on sign up and when visiting orgs it appeared with no affiliation although when clicking on the particular org it appeared to be registered with biz when changing org from biz to npf the user was proper functionality was achieved below costas verdes had originally no affiliation with orgs this is after changing it | 1 |
762,236 | 26,712,307,754 | IssuesEvent | 2023-01-28 03:07:53 | containrrr/watchtower | https://api.github.com/repos/containrrr/watchtower | opened | Registry responded to head request with "404 Not Found" for single image | Type: Bug Priority: Medium Status: Available | ### Describe the bug
For some reason one image from docker hub [weblate/weblate](https://hub.docker.com/r/weblate/weblate) keeps triggering the error email:
```
Could not do a head request for "weblate/weblate:latest", falling back to regular pull.
Reason: registry responded to head request with "404 Not Found", auth: "not present"
```
No other image on my instance is getting this error, and the image seems to be present on Docker Hub. So I'm not sure if it's some issue with their APIs or a bug that sometimes gets triggered within Watchtower when checking for image updates.
Based on the logs it seems to happen intermittently.
### Steps to reproduce
1. Run image `weblate/weblate:latest`
2. Run watchtower to update weblate
3. Check logs or wait for error email
### Expected behavior
The image gets checked for updates without issues
### Screenshots
_No response_
### Environment
- Platform: Ubuntu (Linux)
- Architecture: x86_64
- Docker Version: 20.10.23
### Your logs
```text
time="2023-01-28T02:35:19Z" level=warning msg="Could not do a head request for \"weblate/weblate:latest\", falling back to regular pull." container=/weblate-server image="weblate/weblate:latest"
time="2023-01-28T02:35:19Z" level=warning msg="Reason: registry responded to head request with \"404 Not Found\", auth: \"not present\"" container=/weblate-server image="weblate/weblate:latest"
time="2023-01-28T02:36:08Z" level=info msg="Session done" Failed=0 Scanned=39 Updated=0 notify=no
time="2023-01-28T02:51:07Z" level=info msg="Session done" Failed=0 Scanned=38 Updated=0 notify=no
```
### Additional context
_No response_ | 1.0 | Registry responded to head request with "404 Not Found" for single image - ### Describe the bug
For some reason one image from docker hub [weblate/weblate](https://hub.docker.com/r/weblate/weblate) keeps triggering the error email:
```
Could not do a head request for "weblate/weblate:latest", falling back to regular pull.
Reason: registry responded to head request with "404 Not Found", auth: "not present"
```
No other image on my instance is getting this error, and the image seems to be present on Docker Hub. So I'm not sure if it's some issue with their APIs or a bug that sometimes gets triggered within Watchtower when checking for image updates.
Based on the logs it seems to happen intermittently.
### Steps to reproduce
1. Run image `weblate/weblate:latest`
2. Run watchtower to update weblate
3. Check logs or wait for error email
### Expected behavior
The image gets checked for updates without issues
### Screenshots
_No response_
### Environment
- Platform: Ubuntu (Linux)
- Architecture: x86_64
- Docker Version: 20.10.23
### Your logs
```text
time="2023-01-28T02:35:19Z" level=warning msg="Could not do a head request for \"weblate/weblate:latest\", falling back to regular pull." container=/weblate-server image="weblate/weblate:latest"
time="2023-01-28T02:35:19Z" level=warning msg="Reason: registry responded to head request with \"404 Not Found\", auth: \"not present\"" container=/weblate-server image="weblate/weblate:latest"
time="2023-01-28T02:36:08Z" level=info msg="Session done" Failed=0 Scanned=39 Updated=0 notify=no
time="2023-01-28T02:51:07Z" level=info msg="Session done" Failed=0 Scanned=38 Updated=0 notify=no
```
### Additional context
_No response_ | priority | registry responded to head request with not found for single image describe the bug for some reason one image from docker hub keeps triggering the error email could not do a head request for weblate weblate latest falling back to regular pull reason registry responded to head request with not found auth not present no other image on my instance is getting this error and the image seem to be present on docker hub so not sure if its some issue with their apis or a bug that sometimes gets triggered within watchtower in checking for image updates based on the logs it seems to happen intermittently steps to reproduce run image weblate weblate latest run watchtower to update weblate check logs or wait for error email expected behavior the image gets checked for updates without issues screenshots no response environment platform ubuntu linux architecture docker version your logs text time level warning msg could not do a head request for weblate weblate latest falling back to regular pull container weblate server image weblate weblate latest time level warning msg reason registry responded to head request with not found auth not present container weblate server image weblate weblate latest time level info msg session done failed scanned updated notify no time level info msg session done failed scanned updated notify no additional context no response | 1 |
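The warning in this record is benign fallback logic: try a cheap HEAD against the registry's manifest endpoint, and do a regular pull when the registry answers 404/405. A minimal sketch of that control flow follows; watchtower's real implementation is Go and differs, so the callables here (`head`, `pull`) are injected stand-ins for the actual HTTP transport:

```python
def fetch_manifest(head, pull):
    """Try a cheap HEAD first; fall back to a full pull on 404/405.

    `head` returns an HTTP status code; `pull` performs the regular pull.
    Both are injected so the decision logic is testable without a network.
    """
    status = head()
    if status == 200:
        return "head"
    if status in (404, 405):
        # registry responded to head request with "404 Not Found":
        # fall back to a regular pull (the warning in the logs above)
        return pull()
    raise RuntimeError(f"unexpected registry status {status}")
```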
564,689 | 16,738,804,916 | IssuesEvent | 2021-06-11 07:15:50 | niji-co/Fusion-Web | https://api.github.com/repos/niji-co/Fusion-Web | opened | Vulnerability found in "glob-parent" package | Priority: Medium Status: Help Wanted Type: Vulnerability | ## Description
Dependabot found a vulnerability in the `glob-parent` package in our yarn.lock. Dependabot was unable to generate an automatic fix, so we would have to figure something out ourselves.
## Dependabot Log
[Link to alert](https://github.com/niji-co/Fusion-Web/security/dependabot/yarn.lock/glob-parent/open)
### Remediation
Upgrade **glob-parent** to version **5.1.2** or later. For example:
```
glob-parent@^5.1.2:
  version "5.1.2"
```
###### _Always verify the validity and compatibility of suggestions with your codebase._
### Details
>**CVE-2020-28469**
moderate severity
**Vulnerable versions**: < 5.1.2
**Patched version**: 5.1.2
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator. | 1.0 | Vulnerability found in "glob-parent" package - ## Description
Dependabot found a vulnerability in the `glob-parent` package in our yarn.lock. Dependabot was unable to generate an automatic fix, so we would have to figure something out ourselves.
## Dependabot Log
[Link to alert](https://github.com/niji-co/Fusion-Web/security/dependabot/yarn.lock/glob-parent/open)
### Remediation
Upgrade **glob-parent** to version **5.1.2** or later. For example:
```
glob-parent@^5.1.2:
  version "5.1.2"
```
###### _Always verify the validity and compatibility of suggestions with your codebase._
### Details
>**CVE-2020-28469**
moderate severity
**Vulnerable versions**: < 5.1.2
**Patched version**: 5.1.2
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator. | priority | vulnerability found in glob parent package description dependabot found a vulnerability in the glob parent package in our yarn lock dependabot was unable to generate an automatic fix so we would have to figure something out ourselves dependabot log remediation upgrade glob parent to version or later for example glob parent version always verify the validity and compatibility of suggestions with your codebase details cve moderate severity vulnerable versions patched version this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator | 1 |
650,847 | 21,419,031,689 | IssuesEvent | 2022-04-22 13:52:23 | cds-snc/notification-planning | https://api.github.com/repos/cds-snc/notification-planning | closed | Enable CloudWatch Lambda Insights for the API function | Medium Priority | Priorité moyenne Enhancement l Amélioration | ## Description
As a developer,
I want fine-grained Lambda performance metrics,
so that I can have more data available to me when I troubleshoot Lambda function issues, startup times and scaling requirements.
## Acceptance Criteria** (Definition of done)
- [ ] CloudWatch Lambda Insights enabled for the `api-lambda` function in Staging and Production.
- [ ] Lambda Insights are enabled via code changes.
## QA Steps
- [ ] Login to the Staging AWS account and confirm that [Lambda Insights](https://ca-central-1.console.aws.amazon.com/cloudwatch/home?region=ca-central-1#lambda-insights:performance) exist for the `api-lambda` function.
- [ ] Login to the Production AWS account and confirm that [Lambda Insights](https://ca-central-1.console.aws.amazon.com/cloudwatch/home?region=ca-central-1#lambda-insights:performance) exist for the `api-lambda` function.
| 1.0 | Enable CloudWatch Lambda Insights for the API function - ## Description
As a developer,
I want fine-grained Lambda performance metrics,
so that I can have more data available to me when I troubleshoot Lambda function issues, startup times and scaling requirements.
## Acceptance Criteria** (Definition of done)
- [ ] CloudWatch Lambda Insights enabled for the `api-lambda` function in Staging and Production.
- [ ] Lambda Insights are enabled via code changes.
## QA Steps
- [ ] Login to the Staging AWS account and confirm that [Lambda Insights](https://ca-central-1.console.aws.amazon.com/cloudwatch/home?region=ca-central-1#lambda-insights:performance) exist for the `api-lambda` function.
- [ ] Login to the Production AWS account and confirm that [Lambda Insights](https://ca-central-1.console.aws.amazon.com/cloudwatch/home?region=ca-central-1#lambda-insights:performance) exist for the `api-lambda` function.
| priority | enable cloudwatch lambda insights for the api function description as a developer i want fine grained lambda performance metrics so that i can have more data available to me when i troubleshoot lambda function issues startup times and scaling requirements acceptance criteria definition of done cloudwatch lambda insights enabled for the api lambda function in staging and production lambda insights are enabled via code changes qa steps login to the staging aws account and confirm that exist for the api lambda function login to the production aws account and confirm that exist for the api lambda function | 1 |
441,410 | 12,717,391,514 | IssuesEvent | 2020-06-24 05:01:37 | pingcap/tidb-lightning | https://api.github.com/repos/pingcap/tidb-lightning | closed | panic after loading csv data to tidb backend | bug difficulty/2-medium priority/P1 | ## Bug Report
Please answer these questions before submitting your issue. Thanks!
1. What did you do? If possible, provide a recipe for reproducing the error.

the data is loaded successfully.
2. What did you expect to see?
3. What did you see instead?
4. Versions of the cluster
- TiDB-Lightning version (run `tidb-lightning -V`):
```
(paste TiDB-Lightning version here)
```
368d52e605b1b4299d84dfea95b9f05e3f061110
- TiKV-Importer version (run `tikv-importer -V`)
```
(paste TiKV-Importer version here)
```
- TiKV version (run `tikv-server -V`):
```
(paste TiKV version here)
```
- TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```
(paste TiDB cluster version here)
```
- Other interesting information (system version, hardware config, etc):
>
>
5. Operation logs
[error.log](https://github.com/pingcap/tidb-lightning/files/4791648/error.log)
6. Configuration of the cluster and the task
- `tidb-lightning.toml` for TiDB-Lightning if possible
- `tikv-importer.toml` for TiKV-Importer if possible
- `inventory.ini` if deployed by Ansible
7. Screenshot/exported-PDF of Grafana dashboard or metrics' graph in Prometheus for TiDB-Lightning if possible
| 1.0 | panic after loading csv data to tidb backend - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
1. What did you do? If possible, provide a recipe for reproducing the error.

the data is loaded successfully.
2. What did you expect to see?
3. What did you see instead?
4. Versions of the cluster
- TiDB-Lightning version (run `tidb-lightning -V`):
```
(paste TiDB-Lightning version here)
```
368d52e605b1b4299d84dfea95b9f05e3f061110
- TiKV-Importer version (run `tikv-importer -V`)
```
(paste TiKV-Importer version here)
```
- TiKV version (run `tikv-server -V`):
```
(paste TiKV version here)
```
- TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
```
(paste TiDB cluster version here)
```
- Other interesting information (system version, hardware config, etc):
>
>
5. Operation logs
[error.log](https://github.com/pingcap/tidb-lightning/files/4791648/error.log)
6. Configuration of the cluster and the task
- `tidb-lightning.toml` for TiDB-Lightning if possible
- `tikv-importer.toml` for TiKV-Importer if possible
- `inventory.ini` if deployed by Ansible
7. Screenshot/exported-PDF of Grafana dashboard or metrics' graph in Prometheus for TiDB-Lightning if possible
| priority | panic after loading csv data to tidb backend bug report please answer these questions before submitting your issue thanks what did you do if possible provide a recipe for reproducing the error the data is loaded successfully what did you expect to see what did you see instead versions of the cluster tidb lightning version run tidb lightning v paste tidb lightning version here tikv importer version run tikv importer v paste tikv importer version here tikv version run tikv server v paste tikv version here tidb cluster version execute select tidb version in a mysql client paste tidb cluster version here other interesting information system version hardware config etc operation logs configuration of the cluster and the task tidb lightning toml for tidb lightning if possible tikv importer toml for tikv importer if possible inventory ini if deployed by ansible screenshot exported pdf of grafana dashboard or metrics graph in prometheus for tidb lightning if possible | 1 |
246,398 | 7,895,190,789 | IssuesEvent | 2018-06-29 01:39:46 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | SPH Resampling Operator | Expected Use: 3 - Occasional Feature Impact: 3 - Medium OS: All Priority: Normal Support Group: Any | Finish implementing and parallelizing Cody Raskin's SPH Resampling Operator.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kevin Griffin
Original creation: 07/14/2015 09:14 pm
Original update: 07/22/2015 09:56 pm
Ticket number: 2338 | 1.0 | SPH Resampling Operator - Finish implementing and parallelizing Cody Raskin's SPH Resampling Operator.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kevin Griffin
Original creation: 07/14/2015 09:14 pm
Original update: 07/22/2015 09:56 pm
Ticket number: 2338 | priority | sph resampling operator finish implementing and parallelizing cody raskin s sph resampling operator redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author kevin griffin original creation pm original update pm ticket number | 1 |
826,376 | 31,592,924,923 | IssuesEvent | 2023-09-05 01:24:57 | davidfstr/Crystal-Web-Archiver | https://api.github.com/repos/davidfstr/Crystal-Web-Archiver | opened | Long delay opening the Save/Load dialog | priority-medium os-mac type-bug topic-firstrun | Repro Steps:
* Open Crystal on macOS.
* In the initial "Select a Project" dialog, choose "New Project"
Expected Results:
* Either:
* A Save dialog appears immediately, or
* The "Select a Project" dialog stays visible, the buttons dim, and a loading spinner appears in the dialog. After some time the "New Project" dialog appears.
Actual Results:
* The "Select a Project" dialog disappears.
* About 5 seconds pass without any UI feedback.
* The "New Project" dialog appears.
A new user might wonder if Crystal had unexpectedly quit after pressing the "New Project" button, since no UI feedback was given. We can do better.
Note: The need for a **Save** dialog will probably go away when ["Support untitled projects"] is implemented. Nevertheless the **Load** dialog will still remain and still require some kind of feedback.
["Support untitled projects"]: https://github.com/davidfstr/Crystal-Web-Archiver/issues/102 | 1.0 | Long delay opening the Save/Load dialog - Repro Steps:
* Open Crystal on macOS.
* In the initial "Select a Project" dialog, choose "New Project"
Expected Results:
* Either:
* A Save dialog appears immediately, or
* The "Select a Project" dialog stays visible, the buttons dim, and a loading spinner appears in the dialog. After some time the "New Project" dialog appears.
Actual Results:
* The "Select a Project" dialog disappears.
* About 5 seconds pass without any UI feedback.
* The "New Project" dialog appears.
A new user might wonder if Crystal had unexpectedly quit after pressing the "New Project" button, since no UI feedback was given. We can do better.
Note: The need for a **Save** dialog will probably go away when ["Support untitled projects"] is implemented. Nevertheless the **Load** dialog will still remain and still require some kind of feedback.
["Support untitled projects"]: https://github.com/davidfstr/Crystal-Web-Archiver/issues/102 | priority | long delay opening the save load dialog repro steps open crystal on macos in the initial select a project dialog choose new project expected results either a save dialog appears immediately or the select a project dialog stays visible the buttons dim and a loading spinner appears in the dialog after some time the new project dialog appears actual results the select a project dialog disappears about seconds pass without any ui feedback the new project dialog appears a new user might wonder if crystal had unexpectedly quit after pressing the new project button since no ui feedback was given we can do better note the need for a save dialog will probably go away when is implemented nevertheless the load dialog will still remain and still require some kind of feedback | 1 |
451,760 | 13,041,068,043 | IssuesEvent | 2020-07-28 19:40:47 | Clebal/karumi-test | https://api.github.com/repos/Clebal/karumi-test | opened | Create layout for login view | priority:medium team:UI/UX type:enhancement | #### Description
Once the button (#22) and input (#23) are done, create a view for the app login.
#### Requirements
The view should be as similar as possible with the following image:

#### Additional context
The logo can be taken from this URL: https://avatars2.githubusercontent.com/u/6469715?s=280&v=4
| 1.0 | Create layout for login view - #### Description
Once the button (#22) and input (#23) are done, create a view for the app login.
#### Requirements
The view should be as similar as possible with the following image:

#### Additional context
The logo can be taken from this URL: https://avatars2.githubusercontent.com/u/6469715?s=280&v=4
| priority | create layout for login view description once the button and input are done create a view for the app login requirements the view should be as similar as possible with the following image additional context the logo can be taken from this url | 1 |
493,716 | 14,236,989,807 | IssuesEvent | 2020-11-18 16:39:28 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio] Switch Studio 2.5 to use MariaDB instead of DerbyDB as a default database | enhancement priority: medium | Per our conversation, we can no longer use DerbyDB due to query complexity and we must switch to another default/out of the box DB. Please switch to use MariaDB as a default while continuing to allow for other databases. | 1.0 | [studio] Switch Studio 2.5 to use MariaDB instead of DerbyDB as a default database - Per our conversation, we can no longer use DerbyDB due to query complexity and we must switch to another default/out of the box DB. Please switch to use MariaDB as a default while continuing to allow for other databases. | priority | switch studio to use mariadb instead of derbydb as a default database per our conversation we can no longer use derbydb due to query complexity and we must switch to another default out of the box db please switch to use mariadb as a default while continuing to allow for other databases | 1 |
3,202 | 2,537,495,353 | IssuesEvent | 2015-01-26 20:59:32 | web2py/web2py | https://api.github.com/repos/web2py/web2py | opened | when trying to use IS_IN_DB with a composite reference key, there is a no _id error reported | 1 star bug imported Priority-Medium | _From [anto..._at_tzorvas.com](https://code.google.com/u/108498664281537156435/) on August 19, 2013 12:51:12_
1. model: http://paste.kde.org/p90a9d2d6/09069137/ 2. from the appadmin create a station then a station_source and then a data record
3. in data record insert, there is a no _id error -- http://paste.kde.org/p48068f25/08969137/ The expected is to have a unique combination of stations_sources.station_id & stations_sources.source (which works) and for db.data.source to have a drop-down list with the referenced values
2.5.1-stable+timestamp.2013.06.06.15.39.19 on Linux x64(Gentoo)
with a combination of IS_NOT_IN_DB i can get the preferred behavior http://goo.gl/lgv88R
_Original issue: http://code.google.com/p/web2py/issues/detail?id=1637_ | 1.0 | when trying to use IS_IN_DB with a composite reference key, there is a no _id error reported - _From [anto..._at_tzorvas.com](https://code.google.com/u/108498664281537156435/) on August 19, 2013 12:51:12_
1. model: http://paste.kde.org/p90a9d2d6/09069137/ 2. from the appadmin create a station then a station_source and then a data record
3. in data record insert, there is a no _id error -- http://paste.kde.org/p48068f25/08969137/ The expected is to have a unique combination of stations_sources.station_id & stations_sources.source (which works) and for db.data.source to have a drop-down list with the referenced values
2.5.1-stable+timestamp.2013.06.06.15.39.19 on Linux x64(Gentoo)
with a combination of IS_NOT_IN_DB i can get the preferred behavior http://goo.gl/lgv88R
_Original issue: http://code.google.com/p/web2py/issues/detail?id=1637_ | priority | when trying to use is in db with a composite reference key there is a no id error reported from on august model from the appadmin create a station then a station source and then a data record in data record insert there is a no id error the expected is to have a unique combination of stations sources station id stations sources source which works and for db data source to have a drop down list with the referenced values stable timestamp on linux gentoo with a combination of is not in db i can get the preferred behavior original issue | 1 |
782,255 | 27,491,338,897 | IssuesEvent | 2023-03-04 17:01:29 | Vatsim-Scandinavia/controlcenter | https://api.github.com/repos/Vatsim-Scandinavia/controlcenter | closed | Booking as finished S1 | bug medium priority | When a student has finished their S1 training and have a permanent endorsement, does not have access to book positions anymore | 1.0 | Booking as finished S1 - When a student has finished their S1 training and have a permanent endorsement, does not have access to book positions anymore | priority | booking as finished when a student has finished their training and have a permanent endorsement does not have access to book positions anymore | 1 |
467,859 | 13,456,880,619 | IssuesEvent | 2020-09-09 08:28:00 | ansible-collections/azure | https://api.github.com/repos/ansible-collections/azure | closed | Back up an Azure VM using Azure Backup | has_pr medium_priority new_module_issue | <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Currently, there is no module available for back up an Azure VM using Azure Backup
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
azure_rm_backupAzureVM
azure_rm_backupAzureVM_info
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
This module will introduce backup feature for azure vm and will be offering feature listed below
1. Enabling protection for the Azure VM
2. Trigger an on-demand backup for a protected Azure VM
3. Modify the backup configuration for a protected Azure VM
4. Stop protection but retain existing data
5. Stop protection and delete data
I am planning on writing the modules myself, but wanted to post an Issue ahead of time to solicit feedback and comments and to make sure no one else is already working on it
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| 1.0 | Back up an Azure VM using Azure Backup - <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Currently, there is no module available for back up an Azure VM using Azure Backup
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
azure_rm_backupAzureVM
azure_rm_backupAzureVM_info
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
This module will introduce backup feature for azure vm and will be offering feature listed below
1. Enabling protection for the Azure VM
2. Trigger an on-demand backup for a protected Azure VM
3. Modify the backup configuration for a protected Azure VM
4. Stop protection but retain existing data
5. Stop protection and delete data
I am planning on writing the modules myself, but wanted to post an Issue ahead of time to solicit feedback and comments and to make sure no one else is already working on it
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| priority | back up an azure vm using azure backup summary currently there is no module available for back up an azure vm using azure backup issue type feature idea component name azure rm backupazurevm azure rm backupazurevm info additional information this module will introduce backup feature for azure vm and will be offering feature listed below enabling protection for the azure vm trigger an on demand backup for a protected azure vm modify the backup configuration for a protected azure vm stop protection but retain existing data stop protection and delete data i am planning on writing the modules myself but wanted to post an issue ahead of time to solicit feedback and comments and to make sure no one else is already working on it yaml | 1 |
727,063 | 25,022,304,615 | IssuesEvent | 2022-11-04 02:48:30 | AY2223S1-CS2103T-F11-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-F11-1/tp | closed | [PE-D][Tester A] Dates on leap years cannot be entered correctly | priority.Medium type.FunctionalityBug | When attempting to enter an invalid date in a leap year (e.g. 29/02/2001) which does not exist, it will result in a rounding of the date to the nearest valid date (e.g. 28/02/2001).
Tested leap year: 2001
https://www.timeanddate.com/calendar/?year=2001&country=63
To reproduce:

(Notice the date is rounded to 28 of feb, 2001)

<!--session: 1666943752235-02b32324-70a5-4510-bc91-daa4085a5311-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: Ferusel/ped#1 | 1.0 | [PE-D][Tester A] Dates on leap years cannot be entered correctly - When attempting to enter an invalid date in a leap year (e.g. 29/02/2001) which does not exist, it will result in a rounding of the date to the nearest valid date (e.g. 28/02/2001).
Tested leap year: 2001
https://www.timeanddate.com/calendar/?year=2001&country=63
To reproduce:

(Notice the date is rounded to 28 of feb, 2001)

<!--session: 1666943752235-02b32324-70a5-4510-bc91-daa4085a5311-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: Ferusel/ped#1 | priority | dates on leap years cannot be entered correctly when attempting to enter an invalid date in a leap year e g which does not exist it will result in a rounding of the date to the nearest valid date e g tested leap year to reproduce notice the date is rounded to of feb labels severity medium type functionalitybug original ferusel ped | 1 |
108,877 | 4,357,221,815 | IssuesEvent | 2016-08-02 00:34:35 | Polymer/polymer-cli | https://api.github.com/repos/Polymer/polymer-cli | closed | Error code return as 0 on failed polymer test run | Priority: Medium Status: Pending Type: Bug | ### Description
I am running test on custom elements using the Polymer CLI on a build server. The build server is setup to report a successful build on an exit code of 0 and failed builds on an exit code of !0. Currently Polymer CLI does not return any codes on a error which is resulting in the build I run to be successful all the time, even on failures.
I will put a PR adding process.exit(1) on a error from cli.run()
### Versions & Environment
- Polymer CLI: 0.12.0
- node: v6.3.1
#### Steps to Reproduce
1. Create an test element: `polymer init element`
2. Run test: `polymer test`
3. Check exit code: `echo $?`
4. Edit test to result in error: change assert.equal(element.is, 'test-element') to assert.equal(element.is, 'foo');
5. Re-run test: `polymer test`
6. Check exit code: `echo $?`
#### Expected Results
Error code to be 0 on successful test and error code to be !0 on a failed test.
#### Actual Results
Error code is 0 on successful test and on a failed test.
| 1.0 | Error code return as 0 on failed polymer test run - ### Description
I am running test on custom elements using the Polymer CLI on a build server. The build server is setup to report a successful build on an exit code of 0 and failed builds on an exit code of !0. Currently Polymer CLI does not return any codes on a error which is resulting in the build I run to be successful all the time, even on failures.
I will put a PR adding process.exit(1) on a error from cli.run()
### Versions & Environment
- Polymer CLI: 0.12.0
- node: v6.3.1
#### Steps to Reproduce
1. Create an test element: `polymer init element`
2. Run test: `polymer test`
3. Check exit code: `echo $?`
4. Edit test to result in error: change assert.equal(element.is, 'test-element') to assert.equal(element.is, 'foo');
5. Re-run test: `polymer test`
6. Check exit code: `echo $?`
#### Expected Results
Error code to be 0 on successful test and error code to be !0 on a failed test.
#### Actual Results
Error code is 0 on successful test and on a failed test.
| priority | error code return as on failed polymer test run description i am running test on custom elements using the polymer cli on a build server the build server is setup to report a successful build on an exit code of and failed builds on an exit code of currently polymer cli does not return any codes on a error which is resulting in the build i run to be successful all the time even on failures i will put a pr adding process exit on a error from cli run versions environment polymer cli node steps to reproduce create an test element polymer init element run test polymer test check exit code echo edit test to result in error change assert equal element is test element to assert equal element is foo re run test polymer test check exit code echo expected results error code to be on successful test and error code to be on a failed test actual results error code is on successful test and on a failed test | 1 |
326,474 | 9,956,695,082 | IssuesEvent | 2019-07-05 14:35:23 | gitcoinco/web | https://api.github.com/repos/gitcoinco/web | closed | As a Gitcoin Admin, I'd like the issues I've re-marketed to surface to the top of the issue explorer as a newly opened issue. | Gitcoin Issue Explorer OKR Up Next priority: medium |
**To be worked on after #2184 is completed**
### User Story
As a Gitcoin Admin, I'd like the issues I've re-marketed to surface to the top of the issue explorer as a newly opened issue.
As a Contributor, I'd like to see re-marketed issues at the top of the issue explorer so I'll know that it is open and available to be worked on.
### What
When an issue is remarketed, it should surface to the top of the issue explorer and show up as a new issue in notifiers-gitcoin on slack.
### Why
This will help us move issues through the platform.
### Definition of Done
- [ ] [Reuse this functionality](https://github.com/gitcoinco/web/issues/2184) for surfacing to the top of the issue explorer that has been built for Expired Issues.
- [ ] Post a video or screenshots of the feature built
- [ ] This issue passes all of the tests and revisions from the Gitcoin Core engineering team.
| 1.0 | As a Gitcoin Admin, I'd like the issues I've re-marketed to surface to the top of the issue explorer as a newly opened issue. -
**To be worked on after #2184 is completed**
### User Story
As a Gitcoin Admin, I'd like the issues I've re-marketed to surface to the top of the issue explorer as a newly opened issue.
As a Contributor, I'd like to see re-marketed issues at the top of the issue explorer so I'll know that it is open and available to be worked on.
### What
When an issue is remarketed, it should surface to the top of the issue explorer and show up as a new issue in notifiers-gitcoin on slack.
### Why
This will help us move issues through the platform.
### Definition of Done
- [ ] [Reuse this functionality](https://github.com/gitcoinco/web/issues/2184) for surfacing to the top of the issue explorer that has been built for Expired Issues.
- [ ] Post a video or screenshots of the feature built
- [ ] This issue passes all of the tests and revisions from the Gitcoin Core engineering team.
| priority | as a gitcoin admin i d like the issues i ve re marketed to surface to the top of the issue explorer as a newly opened issue to be worked on after is completed user story as a gitcoin admin i d like the issues i ve re marketed to surface to the top of the issue explorer as a newly opened issue as a contributor i d like to see re marketed issues at the top of the issue explorer so i ll know that it is open and available to be worked on what when an issue is remarketed it should surface to the top of the issue explorer and show up as a new issue in notifiers gitcoin on slack why this will help us move issues through the platform definition of done for surfacing to the top of the issue explorer that has been built for expired issues post a video or screenshots of the feature built this issue passes all of the tests and revisions from the gitcoin core engineering team | 1 |
433,858 | 12,511,631,749 | IssuesEvent | 2020-06-02 20:57:01 | carbon-design-system/ibm-dotcom-library | https://api.github.com/repos/carbon-design-system/ibm-dotcom-library | closed | [Feature Card] Fix type spec | Airtable Done Feature request dev priority: medium | https://ibmdotcom-react-canary.mybluemix.net/?path=/story/patterns-sub-patterns-featurecard--default
Type spec is incorrect here - that's my bad, I had labeled it incorrectly. Should be $expressive-heading-03. I've fixed the visual specs in the box folder to reflect
<img width="1440" alt="Screen Shot 2020-05-27 at 12 57 55 PM" src="https://user-images.githubusercontent.com/47458576/83055627-b3d61600-a019-11ea-99b5-142d3ab41074.png">
| 1.0 | [Feature Card] Fix type spec - https://ibmdotcom-react-canary.mybluemix.net/?path=/story/patterns-sub-patterns-featurecard--default
Type spec is incorrect here - that's my bad, I had labeled it incorrectly. Should be $expressive-heading-03. I've fixed the visual specs in the box folder to reflect
<img width="1440" alt="Screen Shot 2020-05-27 at 12 57 55 PM" src="https://user-images.githubusercontent.com/47458576/83055627-b3d61600-a019-11ea-99b5-142d3ab41074.png">
| priority | fix type spec type spec is incorrect here that s my bad i had labeled it incorrectly should be expressive heading i ve fixed the visual specs in the box folder to reflect img width alt screen shot at pm src | 1 |
515,957 | 14,972,726,763 | IssuesEvent | 2021-01-27 23:26:15 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | CoAP discovery response does not follow CoRE link format specification | bug priority: medium | **Describe the bug**
CoAP RFC ([RFC7252](https://tools.ietf.org/html/rfc7252)) states that end points should support the CoRE Link Format of discoverable resources as described in [RFC6690](https://tools.ietf.org/html/rfc6690) (refer [section 7.2](https://tools.ietf.org/html/rfc7252#section-7.2) of the RFC7252).
However, the current CoAP implementation does not place comma “,” and semicolon “;” in appropriate places in the payload according to the RFC6690.
**To Reproduce**
Follow the usual building and flashing steps for "samples/net/sockets/coap_server" application on any board.
The payload of the discovery response is as shown below.
`</test>;</seg1/seg2/seg3>;</wild1/+/wild3>;</wild2/#>;</query>;</separate>;</location-query>;</large>;</large-update>;</large-create>;</obs>;</core1>title="Core 1";rt=core1;</core2>title="Core 1";rt=core1;`
**Expected behavior**
According to the RFC6690, the payload should be formatted as follows.
`</test>,</seg1/seg2/seg3>,</wild1/+/wild3>,</wild2/#>,</query>,</separate>,</location-query>,</large>,</large-update>,</large-create>,</obs>,</core1>;title="Core 1";rt=core1,</core2>;title="Core 1";rt=core1`
**Impact**
CoAP client libraries which follow RFC6690 may fail to parse the discovery response.
| 1.0 | CoAP discovery response does not follow CoRE link format specification - **Describe the bug**
CoAP RFC ([RFC7252](https://tools.ietf.org/html/rfc7252)) states that end points should support the CoRE Link Format of discoverable resources as described in [RFC6690](https://tools.ietf.org/html/rfc6690) (refer [section 7.2](https://tools.ietf.org/html/rfc7252#section-7.2) of the RFC7252).
However, the current CoAP implementation does not place comma “,” and semicolon “;” in appropriate places in the payload according to the RFC6690.
**To Reproduce**
Follow the usual building and flashing steps for "samples/net/sockets/coap_server" application on any board.
The payload of the discovery response is as shown below.
`</test>;</seg1/seg2/seg3>;</wild1/+/wild3>;</wild2/#>;</query>;</separate>;</location-query>;</large>;</large-update>;</large-create>;</obs>;</core1>title="Core 1";rt=core1;</core2>title="Core 1";rt=core1;`
**Expected behavior**
According to the RFC6690, the payload should be formatted as follows.
`</test>,</seg1/seg2/seg3>,</wild1/+/wild3>,</wild2/#>,</query>,</separate>,</location-query>,</large>,</large-update>,</large-create>,</obs>,</core1>;title="Core 1";rt=core1,</core2>;title="Core 1";rt=core1`
**Impact**
CoAP client libraries which follow RFC6690 may fail to parse the discovery response.
| priority | coap discovery response does not follow core link format specification describe the bug coap rfc states that end points should support the core link format of discoverable resources as described in refer of the however the current coap implementation does not place comma “ ” and semicolon “ ” in appropriate places in the payload according to the to reproduce follow the usual building and flashing steps for samples net sockets coap server application on any board the payload of the discovery response is as shown below title core rt title core rt expected behavior according to the the payload should be formatted as follows title core rt title core rt impact coap client libraries which follow may fail to parse the discovery response | 1 |
248,175 | 7,928,245,825 | IssuesEvent | 2018-07-06 10:54:11 | redbadger/pride-london-app | https://api.github.com/repos/redbadger/pride-london-app | opened | Optimize image loading | :fire: medium priority :racehorse: performance | Contentful has an API to crop and resize images. Currently we just load the original image regardless of size. To help with the Android out of memory issue and decrease the bandwidth consumption of the app we should load the appropriate sized image for where we are displaying it. As an example [Cabaret_finalists.jpg?w=500&h=500&fit=fill](https://images.ctfassets.net/n2o4hgsv6wcx/4iXietXWjecIke4K4m8mii/f51a9ba4d860859cc3286d23cc29c0e7/Cabaret_finalists.jpg?w=500&h=500&fit=fill).
This image originally returns the following data:
```
{
id: "4iXietXWjecIke4K4m8mii"
revision: 1
uri: "https://images.ctfassets.net/n2o4hgsv6wcx/4iXietXWjecIke4K4m8mii/f51a9ba4d860859cc3286d23cc29c0e7/Cabaret_finalists.jpg"
width: 1500
height: 1000
}
```
We would need to figure out the size of image to request. Perhaps by using React Natives onLayout handler (although not using this API would be preferred). | 1.0 | Optimize image loading - Contentful has an API to crop and resize images. Currently we just load the original image regardless of size. To help with the Android out of memory issue and decrease the bandwidth consumption of the app we should load the appropriate sized image for where we are displaying it. As an example [Cabaret_finalists.jpg?w=500&h=500&fit=fill](https://images.ctfassets.net/n2o4hgsv6wcx/4iXietXWjecIke4K4m8mii/f51a9ba4d860859cc3286d23cc29c0e7/Cabaret_finalists.jpg?w=500&h=500&fit=fill).
This image originally returns the following data:
```
{
id: "4iXietXWjecIke4K4m8mii"
revision: 1
uri: "https://images.ctfassets.net/n2o4hgsv6wcx/4iXietXWjecIke4K4m8mii/f51a9ba4d860859cc3286d23cc29c0e7/Cabaret_finalists.jpg"
width: 1500
height: 1000
}
```
We would need to figure out the size of image to request. Perhaps by using React Natives onLayout handler (although not using this API would be preferred). | priority | optimize image loading contentful has an api to crop and resize images currently we just load the original image regardless of size to help with the android out of memory issue and decrease the bandwidth consumption of the app we should load the appropriate sized image for where we are displaying it as an example this image originally returns the following data id revision uri width height we would need to figure out the size of image to request perhaps by using react natives onlayout handler although not using this api would be preferred | 1 |
450,129 | 12,990,723,567 | IssuesEvent | 2020-07-23 01:00:52 | MyMICDS/MyMICDS-v2 | https://api.github.com/repos/MyMICDS/MyMICDS-v2 | closed | Switch back to ESLint | effort: easy priority: medium work length: short | Since TSLint is eventually being [deprecated in favor of ESLint](https://medium.com/palantir/tslint-in-2019-1a144c2317a9), we should look into making a new [`@mymicds/configs`](https://github.com/MyMICDS/configs) for TypeScript. | 1.0 | Switch back to ESLint - Since TSLint is eventually being [deprecated in favor of ESLint](https://medium.com/palantir/tslint-in-2019-1a144c2317a9), we should look into making a new [`@mymicds/configs`](https://github.com/MyMICDS/configs) for TypeScript. | priority | switch back to eslint since tslint is eventually being we should look into making a new for typescript | 1 |
790,389 | 27,824,536,154 | IssuesEvent | 2023-03-19 16:07:42 | AY2223S2-CS2103T-W11-4/tp | https://api.github.com/repos/AY2223S2-CS2103T-W11-4/tp | closed | As a user I can view test reports(images or pdf) to specific patient. | enhancement priority.Medium type.Story type.Epic | Can be done at v1.3b or v1.4 depends on the project management | 1.0 | As a user I can view test reports(images or pdf) to specific patient. - Can be done at v1.3b or v1.4 depends on the project management | priority | as a user i can view test reports images or pdf to specific patient can be done at or depends on the project management | 1 |
500,907 | 14,517,016,964 | IssuesEvent | 2020-12-13 18:00:16 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Smart Host Filter Accepts Value then Crashes on Edit | component:ui_next help wanted priority:medium state:needs_devel type:bug | ##### ISSUE TYPE
- Bug Report
##### SUMMARY
When creating Smart Host Filter, in the Dynamic Hosts I am able to use this filter
inventory.name:"Server Inventory"



awx version: 2.9.2

##### ENVIRONMENT
* AWX install method: openshift
* Ansible version: 2.9.1
* Operating System: Centos 7
* Web Browser: Chrome
##### STEPS TO REPRODUCE
See images above
##### EXPECTED RESULTS
Be able to edit the Search Filter
Thank you,
HS | 1.0 | Smart Host Filter Accepts Value then Crashes on Edit - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
When creating Smart Host Filter, in the Dynamic Hosts I am able to use this filter
inventory.name:"Server Inventory"



awx version: 2.9.2

##### ENVIRONMENT
* AWX install method: openshift
* Ansible version: 2.9.1
* Operating System: Centos 7
* Web Browser: Chrome
##### STEPS TO REPRODUCE
See images above
##### EXPECTED RESULTS
Be able to edit the Search Filter
Thank you,
HS | priority | smart host filter accepts value then crashes on edit issue type bug report summary when creating smart host filter in the dynamic hosts i am able to use this filter inventory name server inventory awx version environment awx install method openshift ansible version operating system centos web browser chrome steps to reproduce see images above expected results be able to edit the search filter thank you hs | 1 |
303,723 | 9,309,991,937 | IssuesEvent | 2019-03-25 17:42:13 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Market Creator Mailbox Mapping | Feature Priority: Medium Product Critical V2 | Currently the market creator mailbox is a new contract per market.
It would be much more efficient to maintain a mapping of address to mailbox in a new contract. | 1.0 | Market Creator Mailbox Mapping - Currently the market creator mailbox is a new contract per market.
It would be much more efficient to maintain a mapping of address to mailbox in a new contract. | priority | market creator mailbox mapping currently the market creator mailbox is a new contract per market it would be much more efficient to maintain a mapping of address to mailbox in a new contract | 1 |
77,484 | 3,506,397,490 | IssuesEvent | 2016-01-08 06:27:46 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Warlock - Health Funnel (BB #529) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** smoldar
**Original Date:** 04.03.2014 15:08:17 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/529
<hr>
Health Funnel not working. Casting funnel on pet, but without drain life from warlock and not healing pet. | 1.0 | Warlock - Health Funnel (BB #529) - This issue was migrated from bitbucket.
**Original Reporter:** smoldar
**Original Date:** 04.03.2014 15:08:17 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/529
<hr>
Health Funnel not working. Casting funnel on pet, but without drain life from warlock and not healing pet. | priority | warlock health funnel bb this issue was migrated from bitbucket original reporter smoldar original date gmt original priority major original type bug original state resolved direct link health funnel not working casting funnel on pet but without drain life from warlock and not healing pet | 1 |
502,279 | 14,543,597,580 | IssuesEvent | 2020-12-15 17:04:08 | willowtreeapps/vocable-ios | https://api.github.com/repos/willowtreeapps/vocable-ios | closed | Contact Developers link is broken | bug priority - medium | **Describe the bug**
The `Contact Developers` button in settings does not open the link
**To Reproduce**
Open settings on version 1.3.4-1354 and tap or gaze at the `Contact Developers` button
**Expected behavior**
The link opens
**Actual behavior**
The alert saying you're about to lose head tracking does not open the link
**Device Information**
<!-- Please complete the following information: -->
- Device: [e.g. iPhone 11] iPhone X
- OS: [e.g. iOS 13.4] iOS 14 (pretty sure it's not a 14.0 only bug)
| 1.0 | Contact Developers link is broken - **Describe the bug**
The `Contact Developers` button in settings does not open the link
**To Reproduce**
Open settings on version 1.3.4-1354 and tap or gaze at the `Contact Developers` button
**Expected behavior**
The link opens
**Actual behavior**
The alert saying you're about to lose head tracking does not open the link
**Device Information**
<!-- Please complete the following information: -->
- Device: [e.g. iPhone 11] iPhone X
- OS: [e.g. iOS 13.4] iOS 14 (pretty sure it's not a 14.0 only bug)
| priority | contact developers link is broken describe the bug the contact developers button in settings does not open the link to reproduce open settings on version and tap or gaze at the contact developers button expected behavior the link opens actual behavior the alert saying you re about to lose head tracking does not open the link device information device iphone x os ios pretty sure it s not a only bug | 1 |
500,734 | 14,512,991,894 | IssuesEvent | 2020-12-13 03:23:26 | codidact/qpixel | https://api.github.com/repos/codidact/qpixel | opened | Add place for preferences to user profile | area: frontend complexity: unassessed priority: medium type: change request | Related: https://meta.codidact.com/questions/278490
Currently we have one preference (keyboard shortcuts), which is at the bottom of the "edit" section of the profile. We're going to want to add more, so I think it would be better to add a "preferences" tab to the profile. If one more is too many, I think subscriptions could be part of preferences.
This area would be for *community-specific* preferences. The network profile, when we create it, should have a place to set network-wide preferences. Consider the future possibility that the local profile might override network-wide preferences, but don't build that yet until we actually have network profiles. For now the request is: a tab for preferences, move the keyboard-shortcuts preference there, and add other preferences as covered in other issues, forthcoming. | 1.0 | Add place for preferences to user profile - Related: https://meta.codidact.com/questions/278490
Currently we have one preference (keyboard shortcuts), which is at the bottom of the "edit" section of the profile. We're going to want to add more, so I think it would be better to add a "preferences" tab to the profile. If one more is too many, I think subscriptions could be part of preferences.
This area would be for *community-specific* preferences. The network profile, when we create it, should have a place to set network-wide preferences. Consider the future possibility that the local profile might override network-wide preferences, but don't build that yet until we actually have network profiles. For now the request is: a tab for preferences, move the keyboard-shortcuts preference there, and add other preferences as covered in other issues, forthcoming. | priority | add place for preferences to user profile related currently we have one preference keyboard shortcuts which is at the bottom of the edit section of the profile we re going to want to add more so i think it would be better to add a preferences tab to the profile if one more is too many i think subscriptions could be part of preferences this area would be for community specific preferences the network profile when we create it should have a place to set network wide preferences consider the future possibility that the local profile might override network wide preferences but don t build that yet until we actually have network profiles for now the request is a tab for preferences move the keyboard shortcuts preference there and add other preferences as covered in other issues forthcoming | 1 |
454,530 | 13,102,915,625 | IssuesEvent | 2020-08-04 07:38:34 | AbsaOSS/enceladus | https://api.github.com/repos/AbsaOSS/enceladus | closed | Helper script for combined Standardization&Conformance job | Conformance Standardization feature priority: medium | ## Background
After refactoring it's possible to run Standardization and Conformance in succession in one go.
## Feature
Create a helper script that call the new combined Standardization&Conformance job main function.
| 1.0 | Helper script for combined Standardization&Conformance job - ## Background
After refactoring it's possible to run Standardization and Conformance in succession in one go.
## Feature
Create a helper script that call the new combined Standardization&Conformance job main function.
| priority | helper script for combined standardization conformance job background after refactoring it s possible to run standardization and conformance in succession in one go feature create a helper script that call the new combined standardization conformance job main function | 1 |
122,012 | 4,826,558,593 | IssuesEvent | 2016-11-07 10:33:53 | KainosSoftwareLtd/TechRadar | https://api.github.com/repos/KainosSoftwareLtd/TechRadar | closed | Signing out of ADFS when signing out of application seems weird | Medium Priority | This issue was copied from KainosSoftwareLtd/answerit#7
| 1.0 | Signing out of ADFS when signing out of application seems weird - This issue was copied from KainosSoftwareLtd/answerit#7
| priority | signing out of adfs when signing out of application seems weird this issue was copied from kainossoftwareltd answerit | 1 |
353,461 | 10,552,950,453 | IssuesEvent | 2019-10-03 16:08:24 | minio/mc | https://api.github.com/repos/minio/mc | closed | Unable to specify s3v2 when using MC_HOST_ env | community priority: medium |
In the documentation there's a note about selecting s3v2 when using gcs.
```
NOTE: Google Cloud Storage only supports Legacy Signature Version 2, so you have to pick - S3v2
```
Following on from that the option to specify configuration for the session using `MC_HOST_` env is provided. I cannot see an option here to provide the API without going via `mc config` which is what I'm looking to avoid by using `MC_HOST_`.
## Expected behavior
I should be able to specify the api as S3v2 without using `mc config`, either by cli flag to existing commands, or via an env (making it consistent with `MC_HOST_`).
## Actual behavior
Specifying protocol as part of url (as indicated searching through issues/merge_requests) gives a parsing error:
Specifying with `https://` and using `--debug` appears to be using Authorization: AWS4-HMAC-SHA256, as opposed to just AWS when going via `mc config` and using the `--api` option.
## Steps to reproduce the behavior
Specifying api as part of protocol gives parsing error:
```
MC_HOST_s3=https+s3v2://ACCESS-KEY-HERE:SECRET-KEY-HERE@storage.googleapis.com mc ls s3/bucketname/
```
Using storage.googleapis.com defaults to s3v4:
```
MC_HOST_s3=https://ACCESS-KEY-HERE:SECRET-KEY-HERE@storage.googleapis.com mc ls s3/bucketname/ --debug
```
## mc version
```
Version: 2019-09-24T01:36:20Z
Release-tag: RELEASE.2019-09-24T01-36-20Z
Commit-id: 643835013047aa27ed35de0ac5a4ab9538f0cd68
```
## System information
```
Release-Tag:RELEASE.2019-09-24T01-36-20Z | Commit:643835013047 | Host:redacted | OS:linux | Arch:amd64 | Lang:go1.13 | Mem:6.8 MB/72 MB | Heap:6.8 MB/66 MB
```
| 1.0 | Unable to specify s3v2 when using MC_HOST_ env -
In the documentation there's a note about selecting s3v2 when using gcs.
```
NOTE: Google Cloud Storage only supports Legacy Signature Version 2, so you have to pick - S3v2
```
Following on from that the option to specify configuration for the session using `MC_HOST_` env is provided. I cannot see an option here to provide the API without going via `mc config` which is what I'm looking to avoid by using `MC_HOST_`.
## Expected behavior
I should be able to specify the api as S3v2 without using `mc config`, either by cli flag to existing commands, or via an env (making it consistent with `MC_HOST_`).
## Actual behavior
Specifying protocol as part of url (as indicated searching through issues/merge_requests) gives a parsing error:
Specifying with `https://` and using `--debug` appears to be using Authorization: AWS4-HMAC-SHA256, as opposed to just AWS when going via `mc config` and using the `--api` option.
## Steps to reproduce the behavior
Specifying api as part of protocol gives parsing error:
```
MC_HOST_s3=https+s3v2://ACCESS-KEY-HERE:SECRET-KEY-HERE@storage.googleapis.com mc ls s3/bucketname/
```
Using storage.googleapis.com defaults to s3v4:
```
MC_HOST_s3=https://ACCESS-KEY-HERE:SECRET-KEY-HERE@storage.googleapis.com mc ls s3/bucketname/ --debug
```
## mc version
```
Version: 2019-09-24T01:36:20Z
Release-tag: RELEASE.2019-09-24T01-36-20Z
Commit-id: 643835013047aa27ed35de0ac5a4ab9538f0cd68
```
## System information
```
Release-Tag:RELEASE.2019-09-24T01-36-20Z | Commit:643835013047 | Host:redacted | OS:linux | Arch:amd64 | Lang:go1.13 | Mem:6.8 MB/72 MB | Heap:6.8 MB/66 MB
```
| priority | unable to specify when using mc host env in the documentation there s a note about selecting when using gcs note google cloud storage only supports legacy signature version so you have to pick following on from that the option to specify configuration for the session using mc host env is provided i cannot see an option here to provide the api without going via mc config which is what i m looking to avoid by using mc host expected behavior i should be able to specify the api as without using mc config either by cli flag to existing commands or via an env making it consistent with mc host actual behavior specifying protocol as part of url as indicated searching through issues merge requests gives a parsing error specifying with and using debug appears to be using authorization hmac as opposed to just aws when going via mc config and using the api option steps to reproduce the behavior specifying api as part of protocol gives parsing error mc host https access key here secret key here storage googleapis com mc ls bucketname using storage googleapis com defaults to mc host mc ls bucketname debug mc version version release tag release commit id system information release tag release commit host redacted os linux arch lang mem mb mb heap mb mb | 1 |
48,035 | 2,990,132,307 | IssuesEvent | 2015-07-21 07:10:44 | jayway/rest-assured | https://api.github.com/repos/jayway/rest-assured | closed | Getting ResponseParseException "No such field: CONTENT_TYPE" when getting JSON response without content-type header | bug imported Priority-Medium | _From [fila...@gmail.com](https://code.google.com/u/100953504275866446270/) on August 06, 2013 07:38:53_
What steps will reproduce the problem? 1. Add RestAssured.defaultParser = Parser.JSON; since response does not contain content-type header
2. Send get request
given().
.header("X-Accept-Version", "2.0")
.expect().statusCode(200)
.when().get("/catalog"); What is the expected output? What do you see instead? I expect to get success on the test. The response contains JSON object.
Actual result - getting following exception even though I have all the dependencies from the zip file in place and recreated test project from scratch using only supplied jars. Without setting default parser I'm getting just warning "WARNING: Could not parse content-type: Response does not have a content-type header".
Aug 05, 2013 10:27:24 PM com.jayway.restassured.internal.http.HTTPBuilder parseResponse
WARNING: Could not parse content-type: No such field: CONTENT_TYPE for class: com.jayway.restassured.internal.RequestSpecificationImpl$RestAssuredHttpBuilder
FAILED: getCatalog
com.jayway.restassured.internal.http.ResponseParseException: OK
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:77)
at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:102)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:57)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:182)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:194)
at com.jayway.restassured.internal.RequestSpecificationImpl$RestAssuredHttpBuilder.doRequest(RequestSpecificationImpl.groovy:1400)
at com.jayway.restassured.internal.http.HTTPBuilder.doRequest(HTTPBuilder.java:490)
at com.jayway.restassured.internal.http.HTTPBuilder.request(HTTPBuilder.java:439)
at com.jayway.restassured.internal.http.HTTPBuilder$request.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
at com.jayway.restassured.internal.RequestSpecificationImpl.sendHttpRequest(RequestSpecificationImpl.groovy:953)
at com.jayway.restassured.internal.RequestSpecificationImpl.this$2$sendHttpRequest(RequestSpecificationImpl.groovy)
at com.jayway.restassured.internal.RequestSpecificationImpl$this$2$sendHttpRequest.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at com.jayway.restassured.internal.RequestSpecificationImpl.sendRequest(RequestSpecificationImpl.groovy:820)
at com.jayway.restassured.internal.RequestSpecificationImpl.this$2$sendRequest(RequestSpecificationImpl.groovy)
at com.jayway.restassured.internal.RequestSpecificationImpl$this$2$sendRequest.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
at com.jayway.restassured.internal.filter.RootFilter.filter(RootFilter.groovy:30)
at com.jayway.restassured.filter.Filter$filter.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
at com.jayway.restassured.internal.filter.FilterContextImpl.next(FilterContextImpl.groovy:49)
at com.jayway.restassured.filter.FilterContext$next.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at com.jayway.restassured.internal.RequestSpecificationImpl.invokeFilterChain(RequestSpecificationImpl.groovy:758)
at com.jayway.restassured.internal.RequestSpecificationImpl$invokeFilterChain.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:149)
at com.jayway.restassured.internal.RequestSpecificationImpl.applyPathParamsAndSendRequest(RequestSpecificationImpl.groovy:1142)
at com.jayway.restassured.internal.RequestSpecificationImpl.this$2$applyPathParamsAndSendRequest(RequestSpecificationImpl.groovy)
at com.jayway.restassured.internal.RequestSpecificationImpl$this$2$applyPathParamsAndSendRequest.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:149)
at com.jayway.restassured.internal.RequestSpecificationImpl.get(RequestSpecificationImpl.groovy:131)
at com.jayway.restassured.specification.RequestSender$get.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at com.jayway.restassured.internal.ResponseSpecificationImpl.get(ResponseSpecificationImpl.groovy:226)
at motif.qa.rest.assured.GetExample.getCatalog(GetExample.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(...
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=253_ | 1.0 | Getting ResponseParseException "No such field: CONTENT_TYPE" when getting JSON response without content-type header - _From [fila...@gmail.com](https://code.google.com/u/100953504275866446270/) on August 06, 2013 07:38:53_
What steps will reproduce the problem? 1. Add RestAssured.defaultParser = Parser.JSON; since response does not contain content-type header
2. Send get request
given().
.header("X-Accept-Version", "2.0")
.expect().statusCode(200)
.when().get("/catalog"); What is the expected output? What do you see instead? I expect to get success on the test. The response contains JSON object.
Actual result - getting following exception even though I have all the dependencies from the zip file in place and recreated test project from scratch using only supplied jars. Without setting default parser I'm getting just warning "WARNING: Could not parse content-type: Response does not have a content-type header".
Aug 05, 2013 10:27:24 PM com.jayway.restassured.internal.http.HTTPBuilder parseResponse
WARNING: Could not parse content-type: No such field: CONTENT_TYPE for class: com.jayway.restassured.internal.RequestSpecificationImpl$RestAssuredHttpBuilder
FAILED: getCatalog
com.jayway.restassured.internal.http.ResponseParseException: OK
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:77)
at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:102)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:57)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:182)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:194)
at com.jayway.restassured.internal.RequestSpecificationImpl$RestAssuredHttpBuilder.doRequest(RequestSpecificationImpl.groovy:1400)
at com.jayway.restassured.internal.http.HTTPBuilder.doRequest(HTTPBuilder.java:490)
at com.jayway.restassured.internal.http.HTTPBuilder.request(HTTPBuilder.java:439)
at com.jayway.restassured.internal.http.HTTPBuilder$request.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
at com.jayway.restassured.internal.RequestSpecificationImpl.sendHttpRequest(RequestSpecificationImpl.groovy:953)
at com.jayway.restassured.internal.RequestSpecificationImpl.this$2$sendHttpRequest(RequestSpecificationImpl.groovy)
at com.jayway.restassured.internal.RequestSpecificationImpl$this$2$sendHttpRequest.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at com.jayway.restassured.internal.RequestSpecificationImpl.sendRequest(RequestSpecificationImpl.groovy:820)
at com.jayway.restassured.internal.RequestSpecificationImpl.this$2$sendRequest(RequestSpecificationImpl.groovy)
at com.jayway.restassured.internal.RequestSpecificationImpl$this$2$sendRequest.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
at com.jayway.restassured.internal.filter.RootFilter.filter(RootFilter.groovy:30)
at com.jayway.restassured.filter.Filter$filter.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:124)
at com.jayway.restassured.internal.filter.FilterContextImpl.next(FilterContextImpl.groovy:49)
at com.jayway.restassured.filter.FilterContext$next.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at com.jayway.restassured.internal.RequestSpecificationImpl.invokeFilterChain(RequestSpecificationImpl.groovy:758)
at com.jayway.restassured.internal.RequestSpecificationImpl$invokeFilterChain.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:149)
at com.jayway.restassured.internal.RequestSpecificationImpl.applyPathParamsAndSendRequest(RequestSpecificationImpl.groovy:1142)
at com.jayway.restassured.internal.RequestSpecificationImpl.this$2$applyPathParamsAndSendRequest(RequestSpecificationImpl.groovy)
at com.jayway.restassured.internal.RequestSpecificationImpl$this$2$applyPathParamsAndSendRequest.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:149)
at com.jayway.restassured.internal.RequestSpecificationImpl.get(RequestSpecificationImpl.groovy:131)
at com.jayway.restassured.specification.RequestSender$get.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at com.jayway.restassured.internal.ResponseSpecificationImpl.get(ResponseSpecificationImpl.groovy:226)
at motif.qa.rest.assured.GetExample.getCatalog(GetExample.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(...
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=253_ | priority | getting responseparseexception no such field content type when getting json response without content type header from on august what steps will reproduce the problem add restassured defaultparser parser json since response does not contain content type header send get request given header x accept version expect statuscode when get catalog what is the expected output what do you see instead i expect to get success on the test the response contains json object actual result getting following exception even though i have all the dependencies from the zip file in place and recreated test project from scratch using only supplied jars without setting default parser i m getting just warning warning could not parse content type response does not have a content type header aug pm com jayway restassured internal http httpbuilder parseresponse warning could not parse content type no such field content type for class com jayway restassured internal requestspecificationimpl restassuredhttpbuilder failed getcatalog com jayway restassured internal http responseparseexception ok at sun reflect nativeconstructoraccessorimpl native method at sun reflect nativeconstructoraccessorimpl newinstance nativeconstructoraccessorimpl java at sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at java lang reflect constructor newinstance constructor java at org codehaus groovy reflection cachedconstructor invoke cachedconstructor java at org codehaus groovy runtime callsite constructorsite constructorsitenounwrapnocoerce callconstructor constructorsite java at org codehaus groovy runtime callsite callsitearray defaultcallconstructor callsitearray java at org codehaus groovy runtime callsite abstractcallsite callconstructor abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite callconstructor abstractcallsite java at com jayway 
restassured internal requestspecificationimpl restassuredhttpbuilder dorequest requestspecificationimpl groovy at com jayway restassured internal http httpbuilder dorequest httpbuilder java at com jayway restassured internal http httpbuilder request httpbuilder java at com jayway restassured internal http httpbuilder request call unknown source at org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at com jayway restassured internal requestspecificationimpl sendhttprequest requestspecificationimpl groovy at com jayway restassured internal requestspecificationimpl this sendhttprequest requestspecificationimpl groovy at com jayway restassured internal requestspecificationimpl this sendhttprequest callcurrent unknown source at org codehaus groovy runtime callsite callsitearray defaultcallcurrent callsitearray java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at com jayway restassured internal requestspecificationimpl sendrequest requestspecificationimpl groovy at com jayway restassured internal requestspecificationimpl this sendrequest requestspecificationimpl groovy at com jayway restassured internal requestspecificationimpl this sendrequest call unknown source at org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at com jayway restassured internal filter rootfilter filter rootfilter groovy at com jayway restassured filter filter filter call unknown source at org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at org 
codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at com jayway restassured internal filter filtercontextimpl next filtercontextimpl groovy at com jayway restassured filter filtercontext next call unknown source at org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at com jayway restassured internal requestspecificationimpl invokefilterchain requestspecificationimpl groovy at com jayway restassured internal requestspecificationimpl invokefilterchain callcurrent unknown source at org codehaus groovy runtime callsite callsitearray defaultcallcurrent callsitearray java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at com jayway restassured internal requestspecificationimpl applypathparamsandsendrequest requestspecificationimpl groovy at com jayway restassured internal requestspecificationimpl this applypathparamsandsendrequest requestspecificationimpl groovy at com jayway restassured internal requestspecificationimpl this applypathparamsandsendrequest callcurrent unknown source at org codehaus groovy runtime callsite callsitearray defaultcallcurrent callsitearray java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at com jayway restassured internal requestspecificationimpl get requestspecificationimpl groovy at com jayway restassured specification requestsender get call unknown source at org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at org codehaus groovy runtime 
callsite abstractcallsite call abstractcallsite java at com jayway restassured internal responsespecificationimpl get responsespecificationimpl groovy at motif qa rest assured getexample getcatalog getexample java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invoker invokemethod invoker java at org testng internal invoker invoketestmethod invoker java at org testng internal invoker invoketestmethods original issue | 1 |
721,849 | 24,839,928,778 | IssuesEvent | 2022-10-26 11:56:35 | owncloud/web | https://api.github.com/repos/owncloud/web | opened | Wrong share permissions for space resources on search result page | Type:Bug Priority:p3-medium | ### Steps to reproduce
1. Create a space `space` and upload `somefile.txt` into it
2. Search for `somefile.txt` and press enter
3. On the search result page, open the shares panel for `somefile.txt`
### Expected behaviour
The share panel should give the user the possibility to share the file with people or via link.
### Actual behaviour
It says "You don't have permission to share this folder."

Additional note: Sometimes the space members are shown in the shares panel, and sometimes they aren't. I think it's related to where you search. Searching from inside the space reveals the space members on the search result page. After reloading the search result page, however, no members are shown.
| 1.0 | Wrong share permissions for space resources on search result page - ### Steps to reproduce
1. Create a space `space` and upload `somefile.txt` into it
2. Search for `somefile.txt` and press enter
3. On the search result page, open the shares panel for `somefile.txt`
### Expected behaviour
The share panel should give the user the possibility to share the file with people or via link.
### Actual behaviour
It says "You don't have permission to share this folder."

Additional note: Sometimes the space members are shown in the shares panel, and sometimes they aren't. I think it's related to where you search. Searching from inside the space reveals the space members on the search result page. After reloading the search result page, however, no members are shown.
| priority | wrong share permissions for space resources on search result page steps to reproduce create a space space and upload somefile txt into it search for somefile txt and press enter on the search result page open the shares panel for somefile txt expected behaviour the share panel should give the user the possibility to share the file with people or via link actual behaviour it says you don t have permission to share this folder additional note sometimes the space members are showing in the shares panel and sometimes they don t i think it s related to where you search searching from inside the space reveals the space members on the search result page when re loading the search result page then no members are showing | 1 |
80,617 | 3,568,285,814 | IssuesEvent | 2016-01-26 04:17:46 | aic-collections/aicdams-lakeshore | https://api.github.com/repos/aic-collections/aicdams-lakeshore | closed | Restrict search results to items with isPreferredRepresentationOf relationships | MEDIUM priority use case | User wants to restrict results to assets which have “aic:isPreferredRepresentationOf” relationship with Objects | 1.0 | Restrict search results to items with isPreferredRepresentationOf relationships - User wants to restrict results to assets which have “aic:isPreferredRepresentationOf” relationship with Objects | priority | restrict search results to items with ispreferredrepresentationof relationships user wants to restrict results to assets which have “aic ispreferredrepresentationof” relationship with objects | 1 |
105,075 | 4,229,976,694 | IssuesEvent | 2016-07-04 10:01:40 | kulish-alina/HR_Project | https://api.github.com/repos/kulish-alina/HR_Project | opened | Candidate refactoring | medium priority refactoring ui | refactoring of candidate service (CandidateService.js) and candidate editing controller (candidate.edit.js); | 1.0 | Candidate refactoring - refactoring of candidate service (CandidateService.js) and candidate editing controller (candidate.edit.js); | priority | candidate refactoring refactoring of candidate service candidateservice js and candidate editing controller candidate edit js | 1 |
176,978 | 6,572,322,584 | IssuesEvent | 2017-09-11 01:22:16 | chasecaleb/OverwatchVision | https://api.github.com/repos/chasecaleb/OverwatchVision | opened | Extract sections of image containing stats | points:3 priority:medium type:feature | Depends on: #3
After detecting the stat screen's location, the individual scores should be extracted (as images) in preparation for OCR. | 1.0 | Extract sections of image containing stats - Depends on: #3
After detecting the stat screen's location, the individual scores should be extracted (as images) in preparation for OCR. | priority | extract sections of image containing stats depends on after detecting the stat screen s location the individual scores should be extracted as images in preparation for ocr | 1 |
490,396 | 14,119,062,309 | IssuesEvent | 2020-11-08 16:07:20 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Group Leader not synching as an Organizer in Groups | bug learndash priority: medium | **Describe the bug**
When LD Group to Social Groups and Social Groups to LD Group sync are activated in LearnDash Integrations
**To Reproduce**
Steps to reproduce the behavior:
1. Learndash must be activated
2. In BuddyBoss > Learndash > Integrations, activate LD Group to Social Groups and Social Groups to LD Group
3. Still in integrations, Social Group settings, match Organizer to Group Leader
4. Still in integrations, In LearnDash Group settings, match Group Leader to Organizer
5. Add user in a Social Group, then assign this as an Organizer
6. Though the user is counted as Leader in Learndash Groups view, when you click edit on the LearnDash Group, he is not added as a Leader
**Expected behavior**
When you add a Group Leader in a Learndash Group, this get synched as an Organizer in Social Group
But when you add an Organizer in Social Group, he is not shown as Group Leader in Learndash group
**Screenshots**
https://drive.google.com/file/d/1fJCfifpz2A0aHg9d9hu5vJ0vxMnstRlZ/view?usp=sharing
**Support ticket links**
https://secure.helpscout.net/conversation/1328087095/106893
| 1.0 | Group Leader not synching as an Organizer in Groups - **Describe the bug**
When LD Group to Social Groups and Social Groups to LD Group sync are activated in LearnDash Integrations
**To Reproduce**
Steps to reproduce the behavior:
1. Learndash must be activated
2. In BuddyBoss > Learndash > Integrations, activate LD Group to Social Groups and Social Groups to LD Group
3. Still in integrations, Social Group settings, match Organizer to Group Leader
4. Still in integrations, In LearnDash Group settings, match Group Leader to Organizer
5. Add user in a Social Group, then assign this as an Organizer
6. Though the user is counted as Leader in Learndash Groups view, when you click edit on the LearnDash Group, he is not added as a Leader
**Expected behavior**
When you add a Group Leader in a Learndash Group, this get synched as an Organizer in Social Group
But when you add an Organizer in Social Group, he is not shown as Group Leader in Learndash group
**Screenshots**
https://drive.google.com/file/d/1fJCfifpz2A0aHg9d9hu5vJ0vxMnstRlZ/view?usp=sharing
**Support ticket links**
https://secure.helpscout.net/conversation/1328087095/106893
| priority | group leader not synching as an organizer in groups describe the bug when ld groups and social groups is activated and vice versa in learndash integrations to reproduce steps to reproduce the behavior learndash must be activated in buddyboss learndash integrations activate ld group to social groups and social groups to ld group still in integrations social group settings match organizer to group leader still in integrations in learndash group settings match group leader to organizer add user in a social group then assign this as an organizer though the user is counted as leader in learndash groups view when you click edit on the learndash group he is not added as a leader expected behavior when you add a group leader in a learndash group this get synched as an organizer in social group but when you add an organizer in social group he is not shown as group leader in learndash group screenshots support ticket links | 1 |
513,503 | 14,922,602,991 | IssuesEvent | 2021-01-23 15:30:02 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Hosts listing for Smart inventory | component:ui priority:medium state:needs_info type:bug | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- UI
##### SUMMARY
Using the Hosts button on smart inventory pages does not show the hosts as it does on a normal inventory, while using the magnifier on "Smart Host Filter" shows them.
##### ENVIRONMENT
* AWX version: 3.0.0
* AWX install method: docker on linux
* Ansible version: 2.7.6
* Operating System: Windows 7.
* Web Browser: firefox 65.0.1 64bits / Chrome Version 72.0.3626.109 (Build officiel) (64 bits)/ Chrome Version 72.0.3626.119 (Build officiel) (64 bits)
##### STEPS TO REPRODUCE
On INVENTORIES, choose a working Smart Inventory:

Use the magnifier on "Smart Host Filter" and we should be able to see a list of hosts:

Click on the "Hosts" button here, and nothing happen other than an "Working" on the bottom right :

##### EXPECTED RESULTS
Being able to see the hosts list like on a normal inventory
##### ACTUAL RESULTS
Nothing except the "working" gear on the bottom right
##### ADDITIONAL INFORMATION
Using the API to navigate to the hosts list works (the smart inventory id was 3):
https://awx_server/api/v2/inventories/3/hosts/
From the Firefox inspector, the code of the "Hosts" button on a smart inventory is:
```html
<div id="hosts_tab" class="Form-tab" ng-click="$state.go('inventories.editSmartInventory.hosts');" ng-class="{'is-selected' : $state.includes('inventories.editSmartInventory.hosts') || $state.includes('inventories.edit.hosts')}" translate="">Hosts</div>
```
While the one on a normal inventory is:
```html
<div id="hosts_tab" class="Form-tab is-selected" ng-click="$state.go('inventories.edit.hosts')" ng-class="{'is-selected' : $state.includes('inventories.edit.hosts')}" translate="">Hosts</div>
``` | 1.0 | Hosts listing for Smart inventory - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- UI
##### SUMMARY
Using the Hosts button on smart inventory pages does not show the hosts as it does on a normal inventory, while using the magnifier on "Smart Host Filter" shows them.
##### ENVIRONMENT
* AWX version: 3.0.0
* AWX install method: docker on linux
* Ansible version: 2.7.6
* Operating System: Windows 7.
* Web Browser: firefox 65.0.1 64bits / Chrome Version 72.0.3626.109 (Build officiel) (64 bits)/ Chrome Version 72.0.3626.119 (Build officiel) (64 bits)
##### STEPS TO REPRODUCE
On INVENTORIES, choose a working Smart Inventory:

Use the magnifier on "Smart Host Filter" and we should be able to see a list of hosts:

Click on the "Hosts" button here, and nothing happen other than an "Working" on the bottom right :

##### EXPECTED RESULTS
Being able to see the hosts list like on a normal inventory
##### ACTUAL RESULTS
Nothing except the "working" gear on the bottom right
##### ADDITIONAL INFORMATION
Using the API to navigate to the hosts list works (the smart inventory id was 3):
https://awx_server/api/v2/inventories/3/hosts/
From the Firefox inspector, the code of the "Hosts" button on a smart inventory is:
```html
<div id="hosts_tab" class="Form-tab" ng-click="$state.go('inventories.editSmartInventory.hosts');" ng-class="{'is-selected' : $state.includes('inventories.editSmartInventory.hosts') || $state.includes('inventories.edit.hosts')}" translate="">Hosts</div>
```
While the one on a normal inventory is:
```html
<div id="hosts_tab" class="Form-tab is-selected" ng-click="$state.go('inventories.edit.hosts')" ng-class="{'is-selected' : $state.includes('inventories.edit.hosts')}" translate="">Hosts</div>
``` | priority | hosts listing for smart inventory issue type bug report component name ui summary using the hosts button on smart inventory pages is not showing them like normal inventory while using the magnifier on smart host filter show them environment awx version awx install method docker on linux ansible version operating system windows web browser firefox chrome version build officiel bits chrome version build officiel bits steps to reproduce on inventories choose a working smart inventory use the magnifier on smart host filter and we should be able to see a list of hosts click on the hosts button here and nothing happen other than an working on the bottom right expected results being able to see the hosts list like on a normal inventory actual results nothing except the working gear on the bottom right additional information using the api to navigate to the hosts list is ok the smart inventory id was the from the inspectorcode of firefox the code of hosts button on smart inventory is html hosts while the one on normal inventory is html hosts | 1 |
123,924 | 4,888,887,262 | IssuesEvent | 2016-11-18 08:06:23 | bounswe/bounswe2016group7 | https://api.github.com/repos/bounswe/bounswe2016group7 | closed | Top topics implementation | backend priority: medium | Backend shall implement top topics information according to topic ratings. | 1.0 | Top topics implementation - Backend shall implement top topics information according to topic ratings. | priority | top topics implementation backend shall implement top topics information according to topic ratings | 1 |
147,652 | 5,643,154,426 | IssuesEvent | 2017-04-06 23:11:18 | Tour-de-Force/btc-app | https://api.github.com/repos/Tour-de-Force/btc-app | closed | We should use ~ for all Node dependencies | enhancement Priority: Medium | This will make auto-updating more conservative and fix some of the random bugs we see from libraries updating too far. | 1.0 | We should use ~ for all Node dependencies - This will make auto-updating more conservative and fix some of the random bugs we see from libraries updating too far. | priority | we should use for all node dependencies this will make auto updating more conservative and fix some of the random bugs we see from libraries updating too far | 1 |
659,401 | 21,926,043,070 | IssuesEvent | 2022-05-23 04:22:02 | encorelab/ck-board | https://api.github.com/repos/encorelab/ck-board | closed | Bug: Users Notified of their own actions | bug good first issue medium priority 1 | Users should not receive notifications when they comment on, tag or like their own post. We should check if the notification recipient is the same user as the one who triggered the notification and prevent the system from sending the notification.
Potential Solution:
Add a check in canvas.service.ts for userIDs | 1.0 | Bug: Users Notified of their own actions - Users should not receive notifications when they comment on, tag or like their own post. We should check if the notification recipient is the same user as the one who triggered the notification and prevent the system from sending the notification.
Potential Solution:
Add a check in canvas.service.ts for userIDs | priority | bug users notified of their own actions users should not receive notifications when they comment on tag or like their own post we should check if the notification recipient is the same user as the one who triggered the notification and prevent the system from sending the notification potential solution add a check in canvas service ts for userids | 1 |
760,390 | 26,638,341,211 | IssuesEvent | 2023-01-25 00:42:18 | flaviogp27/portifolio | https://api.github.com/repos/flaviogp27/portifolio | closed | Add my contact details | Priority: Medium Weight: 3 Type: Feature | ## Information
- [ ] Linkedin
- [ ] Email
- [ ] Phone (Whatsapp/ Telegram)
- [ ] Github | 1.0 | Add my contact details - ## Information
- [ ] Linkedin
- [ ] Email
- [ ] Phone (Whatsapp/ Telegram)
- [ ] Github | priority | add my contact details information linkedin email phone whatsapp telegram github | 1
601,304 | 18,397,318,008 | IssuesEvent | 2021-10-12 12:52:11 | AY2122S1-CS2103T-T11-3/tp | https://api.github.com/repos/AY2122S1-CS2103T-T11-3/tp | closed | Prevent the user from swapping tabs using the mouse | type.Task priority.Medium | Tabs switches should only be done with the toggle command | 1.0 | Prevent the user from swapping tabs using the mouse - Tabs switches should only be done with the toggle command | priority | prevent the user from swapping tabs using the mouse tabs switches should only be done with the toggle command | 1 |
474,635 | 13,672,904,241 | IssuesEvent | 2020-09-29 09:07:45 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | opened | network should be restarted after step 1 of configurator | Priority: Medium Type: Feature / Enhancement | **Is your feature request related to a problem? Please describe.**
When you configure network interfaces during Step 1 of configurator, PacketFence will start to manage network interfaces using network-scripts on CentOS and ifup/ifdown on Debian.
It means that if NetworkManager (or another network manager daemon) managed your network settings before Step 1 of configurator, network changes will only occur after:
* network has been restarted
or
* interfaces modified have been restarted
**Describe the solution you'd like**
Restart network when we click on next step during Step 1 of configurator.
**Describe alternatives you've considered**
- Restart only interfaces modified during Step 1 of configurator
- Warn users they should restart after they finish configurator (network will be restarted) | 1.0 | network should be restarted after step 1 of configurator - **Is your feature request related to a problem? Please describe.**
When you configure network interfaces during Step 1 of configurator, PacketFence will start to manage network interfaces using network-scripts on CentOS and ifup/ifdown on Debian.
It means that if NetworkManager (or another network manager daemon) managed your network settings before Step 1 of configurator, network changes will only occur after:
* network has been restarted
or
* interfaces modified have been restarted
**Describe the solution you'd like**
Restart network when we click on next step during Step 1 of configurator.
**Describe alternatives you've considered**
- Restart only interfaces modified during Step 1 of configurator
- Warn users they should restart after they finish configurator (network will be restarted) | priority | network should be restart after step of configurator is your feature request related to a problem please describe when you configure network interfaces during step of configurator packetfence will start to manage network interfaces using network scripts on centos and ifup down and debian it means that if networkmanager or another network manager daemon managed your network settings before step of configurator network changes will only occurs after network has been restarted or interfaces modified have been restarted describe the solution you d like restart network when we click on next step during step of configurator describe alternatives you ve considered restart only interfaces modified during step of configurator warn users they should restart after they finish configurator network will be restarted | 1 |
271,032 | 8,474,816,582 | IssuesEvent | 2018-10-24 17:10:35 | dojot/dojot | https://api.github.com/repos/dojot/dojot | opened | [GUI] Forgot Password: field validation | Priority:Medium Team:Frontend Type:Bug | The username field is not validated. The email-sent confirmation message is displayed even though the username field is not filled in.
**Steps to reproduce the problem:**
1. Click "Forgot Password"

2. Do not fill in the username field
3. Click "Submit"

| 1.0 | [GUI] Forgot Password: field validation - The username field is not validated. The email-sent confirmation message is displayed even though the username field is not filled in.
**Steps to reproduce the problem:**
1. Click "Forgot Password"

2. Do not fill in the username field
3. Click "Submit"

| priority | forgot password field validation the username field is not validated the email sending message is displayed even though the username field is not filled steps to reproduce the problem click forgot password do not fill in the username field click submit | 1 |
111,628 | 4,479,508,599 | IssuesEvent | 2016-08-27 17:07:18 | ZeusWPI/zeus.ugent.be | https://api.github.com/repos/ZeusWPI/zeus.ugent.be | closed | Add a link to this repo (these issues) | enhancement medium priority | If people see something that doesn't look right or have general suggestions, invite them to make an issue or pull request.
Something along the lines of "See something off? Make a pull request here:" | 1.0 | Add a link to this repo (these issues) - If people see something that doesn't look right or have general suggestions, invite them to make an issue or pull request.
Something along the lines of "See something off? Make a pull request here:" | priority | add a link to this repo these issues if people see something that doesn t look right or have general suggestions invite them to make an issue or pull request something along the lines of see something off make a pull request here | 1
548,619 | 16,067,732,207 | IssuesEvent | 2021-04-23 22:26:55 | E3SM-Project/scream | https://api.github.com/repos/E3SM-Project/scream | closed | Finalize F90 version of Simple Prescribed Aerosol (SPA) | p3 priority:medium radiation | - [ ] complete 5 yr ne30 simulations with and without prescribed aerosol and create e3sm_diags to confirm that the difference between these runs isn't large.
- [ ] Create SPA forcing files (including both CCN and aerosol optics properties for each climatological month)
- [ ] Put SPA forcing files in the E3SM inputdata server
- [ ] Modify SCREAM code to use these new files by default. | 1.0 | Finalize F90 version of Simple Prescribed Aerosol (SPA) - - [ ] complete 5 yr ne30 simulations with and without prescribed aerosol and create e3sm_diags to confirm that the difference between these runs isn't large.
- [ ] Create SPA forcing files (including both CCN and aerosol optics properties for each climatological month)
- [ ] Put SPA forcing files in the E3SM inputdata server
- [ ] Modify SCREAM code to use these new files by default. | priority | finalize version of simple prescribed aerosol spa complete yr simulations with and without prescribed aerosol and create diags to confirm that the difference between these runs isn t large create spa forcing files including both ccn and aerosol optics properties for each climatological month put spa forcing files in the inputdata server modify scream code to use these new files by default | 1 |
47,972 | 2,990,079,018 | IssuesEvent | 2015-07-21 06:40:39 | jayway/rest-assured | https://api.github.com/repos/jayway/rest-assured | closed | Rest Assured - can't POST with Parameters and Body | bug imported invalid Priority-Medium | _From [JaccoZi...@gmail.com](https://code.google.com/u/104040070436058765040/) on August 28, 2012 00:17:42_
What steps will reproduce the problem? 1. Post to a URL containing a parameter with body content What is the expected output? What do you see instead? This is expected to work, testing manually works. Instead the following error is thrown:
"You can either send parameters OR body content in the POST, not both!" What version of the product are you using? On what operating system? Using Im using Rest Assured 1.1.6, which is rather old. However, looking at the code on github this still appears to be an issue Please provide any additional information below. Stack overflow post: http://stackoverflow.com/questions/12101297/rest-assured-cant-post-with-parameters-and-body
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=196_ | 1.0 | Rest Assured - can't POST with Parameters and Body - _From [JaccoZi...@gmail.com](https://code.google.com/u/104040070436058765040/) on August 28, 2012 00:17:42_
What steps will reproduce the problem? 1. Post to a URL containing a parameter with body content What is the expected output? What do you see instead? This is expected to work, testing manually works. Instead the following error is thrown:
"You can either send parameters OR body content in the POST, not both!" What version of the product are you using? On what operating system? Using Im using Rest Assured 1.1.6, which is rather old. However, looking at the code on github this still appears to be an issue Please provide any additional information below. Stack overflow post: http://stackoverflow.com/questions/12101297/rest-assured-cant-post-with-parameters-and-body
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=196_ | priority | rest assured can t post with parameters and body from on august what steps will reproduce the problem post to a url containing a parameter with body content what is the expected output what do you see instead this is expected to work testing manually works instead the following error is thrown you can either send parameters or body content in the post not both what version of the product are you using on what operating system using im using rest assured which is rather old however looking at the code on github this still appears to be an issue please provide any additional information below stack overflow post original issue | 1 |
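Editorial note on the row above: HTTP itself allows a POST to carry both URL query parameters and a request body; the "parameters OR body, not both" restriction was a check inside old REST Assured, not a protocol rule. A minimal Python sketch (hypothetical endpoint URL; the request is built but never sent) shows the two coexisting:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical endpoint; the query parameters live in the URL...
params = urlencode({"mode": "strict"})
url = "http://example.com/api/items?" + params

# ...while the JSON payload travels separately in the request body.
body = b'{"name": "widget"}'
req = Request(url, data=body,
              headers={"Content-Type": "application/json"},
              method="POST")

print(req.get_method())  # POST
print(req.full_url)      # the URL keeps its query string
print(req.data)          # the body is carried alongside it
```

The same separation is what "testing manually works" in the report relies on: query string and entity body occupy different parts of the HTTP message.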
46,360 | 2,956,391,479 | IssuesEvent | 2015-07-08 10:56:56 | PowerPointLabs/powerpointlabs | https://api.github.com/repos/PowerPointLabs/powerpointlabs | closed | Installation fails due to not recognizing URL as trusted | Feature.Installer Priority-Medium type-bug | _From [dam...@gmail.com](https://code.google.com/u/114919422028123491955/) on February 08, 2014 18:56:59_
A small number of cases where the registry edit does not seem to be effective
_Original issue: http://code.google.com/p/powerpointlabs/issues/detail?id=186_ | 1.0 | Installation fails due to not recognizing URL as trusted - _From [dam...@gmail.com](https://code.google.com/u/114919422028123491955/) on February 08, 2014 18:56:59_
A small number of cases where the registry edit does not seem to be effective
_Original issue: http://code.google.com/p/powerpointlabs/issues/detail?id=186_ | priority | installation fails due to not recognizing url as trusted from on february a small number of cases where the registry edit does not seem to be effective original issue | 1 |
480,462 | 13,852,579,437 | IssuesEvent | 2020-10-15 06:45:07 | AY2021S1-CS2113T-F12-2/tp | https://api.github.com/repos/AY2021S1-CS2113T-F12-2/tp | closed | As a new user, I want to be able to view all available commands supported by AniChan | priority.Medium type.Story | So that I can quickly identify the command I need to use for a certain task | 1.0 | As a new user, I want to be able to view all available commands supported by AniChan - So that I can quickly identify the command I need to use for a certain task | priority | as a new user i want to be able to view all available commands supported by anichan so that i can quickly identify the command i need to use for a certain task | 1 |
146,515 | 5,623,547,251 | IssuesEvent | 2017-04-04 15:10:46 | CS2103JAN2017-W15-B2/main | https://api.github.com/repos/CS2103JAN2017-W15-B2/main | closed | As a new user I can Sort the tasks based on completion status, date, priority, category, etc. | priority.medium type.story | so I can organize my tasks more intuitively | 1.0 | As a new user I can Sort the tasks based on completion status, date, priority, category, etc. - so I can organize my tasks more intuitively | priority | as a new user i can sort the tasks based on completion status date priority category etc so i can organize my tasks more intuitively | 1 |
541,783 | 15,833,766,881 | IssuesEvent | 2021-04-06 15:59:46 | graknlabs/console | https://api.github.com/repos/graknlabs/console | closed | Console: automatically set query limit when not specified | priority: medium type: feature | At the moment, it's possible for someone to write a query that produces a very long result, e.g. printing the database by doing `match $x isa thing; get;`. This may not be favourable in most cases, so let's automatically set the query limit (e.g. `limit 30;`) which the user can configure, and obviously also override. | 1.0 | Console: automatically set query limit when not specified - At the moment, it's possible for someone to write a query that produces a very long result, e.g. printing the database by doing `match $x isa thing; get;`. This may not be favourable in most cases, so let's automatically set the query limit (e.g. `limit 30;`) which the user can configure, and obviously also override. | priority | console automatically set query limit when not specified at the moment it s possible for someone to write a query that produces a very long result e g printing the database by doing match x isa thing get this may not be favourable in most cases so let s automatically set the query limit e g limit which the user can configure and obviously also override | 1 |
711,996 | 24,481,644,927 | IssuesEvent | 2022-10-08 23:01:03 | JasonBock/Rocks | https://api.github.com/repos/JasonBock/Rocks | closed | Overgenerating Constraints | bug Medium Priority | To reproduce this:
```csharp
public class PropertyBuilder { }
public class ValueGenerator { }
public class HasConstraints<T>
{
public virtual PropertyBuilder HasValueGenerator<TGenerator>()
where TGenerator : ValueGenerator => default!;
}
public static class Test
{
public static void Go()
{
var expectations = Rock.Create<HasConstraints<object>>();
}
}
```
According to `CS0460`, the constraints shouldn't be included: "Constraints for override and explicit interface implementation methods are inherited from the base method, so they cannot be specified directly, except for either a 'class', or a 'struct' constraint." | 1.0 | Overgenerating Constraints - To reproduce this:
```csharp
public class PropertyBuilder { }
public class ValueGenerator { }
public class HasConstraints<T>
{
public virtual PropertyBuilder HasValueGenerator<TGenerator>()
where TGenerator : ValueGenerator => default!;
}
public static class Test
{
public static void Go()
{
var expectations = Rock.Create<HasConstraints<object>>();
}
}
```
According to `CS0460`, the constraints shouldn't be included: "Constraints for override and explicit interface implementation methods are inherited from the base method, so they cannot be specified directly, except for either a 'class', or a 'struct' constraint." | priority | overgenerating constraints to reproduce this csharp public class propertybuilder public class valuegenerator public class hasconstraints public virtual propertybuilder hasvaluegenerator where tgenerator valuegenerator default public static class test public static void go var expectations rock create according to the constraints shouldn t be included constraints for override and explicit interface implementation methods are inherited from the base method so they cannot be specified directly except for either a class or a struct constraint | 1 |
498,222 | 14,403,656,854 | IssuesEvent | 2020-12-03 16:19:12 | NCAR/GeoCAT-examples | https://api.github.com/repos/NCAR/GeoCAT-examples | closed | Is it going to replace PyNGL? | medium priority support | PyNGL has almost all GeoCAT-examples. If one use matplotlib/cartopy and etc, why to mimic the style of NCL/PyNGL? What's the advantages of GeoCAT-examples over PyNGL? e.g. GeoCAT-examples usually begin with a dozen of "import" before any meaningful things, it looks lumpish. | 1.0 | Is it going to replace PyNGL? - PyNGL has almost all GeoCAT-examples. If one use matplotlib/cartopy and etc, why to mimic the style of NCL/PyNGL? What's the advantages of GeoCAT-examples over PyNGL? e.g. GeoCAT-examples usually begin with a dozen of "import" before any meaningful things, it looks lumpish. | priority | is it going to replace pyngl pyngl has almost all geocat examples if one use matplotlib cartopy and etc why to mimic the style of ncl pyngl what s the advantages of geocat examples over pyngl e g geocat examples usually begin with a dozen of import before any meaningful things it looks lumpish | 1 |
169,559 | 6,404,173,154 | IssuesEvent | 2017-08-07 01:23:08 | ChrisALee/twitch-stocks | https://api.github.com/repos/ChrisALee/twitch-stocks | opened | Remove Husky or replace pre-commit with it | Easy to Medium Priority: Low Project Setup | Husky gives us access to other hooks [here](https://github.com/typicode/husky/blob/master/HOOKS.md).
We'd want to use Husky if we want any of those hooks. One situation I can think of is using pre-push when one file relies on another and they both get committed separately.
All the CI stuff like pushing the code to our host in prod could be probably just be handled by TravisCI. | 1.0 | Remove Husky or replace pre-commit with it - Husky gives us access to other hooks [here](https://github.com/typicode/husky/blob/master/HOOKS.md).
We'd want to use Husky if we want any of those hooks. One situation I can think of is using pre-push when one file relies on another and they both get committed separately.
All the CI stuff like pushing the code to our host in prod could be probably just be handled by TravisCI. | priority | remove husky or replace pre commit with it husky gives us access to other hooks we d want to use husky if we want any of those hooks one situation i can think of is using pre push when one file relies on another and they both get committed separately all the ci stuff like pushing the code to our host in prod could be probably just be handled by travisci | 1 |
495,698 | 14,286,479,496 | IssuesEvent | 2020-11-23 15:12:11 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.2 staging-1852] Government account creation option visible without permissions | Category: UI Priority: Medium Type: Regression | I can see the option to create government account without the actual permission to do so.
Making it very confusing why i would have a button i cannot click on.
It used to be hidden unless i had a title permission for creating government acconts?
 | 1.0 | [0.9.2 staging-1852] Government account creation option visible without permissions - I can see the option to create government account without the actual permission to do so.
Making it very confusing why i would have a button i cannot click on.
It used to be hidden unless i had a title permission for creating government acconts?
 | priority | government account creation option visible without permissions i can see the option to create government account without the actual permission to do so making it very confusing why i would have a button i cannot click on it used to be hidden unless i had a title permission for creating government acconts | 1 |
194,946 | 6,900,856,685 | IssuesEvent | 2017-11-24 22:22:54 | compodoc/compodoc | https://api.github.com/repos/compodoc/compodoc | closed | [BUG] Empty description for accessors | 1. Type: Bug Priority: Medium Status: Completed Time: ~1 hour | ```
/**
*
*
* @readonly
* @type {boolean}
* @memberof LoadStatusComponent
*/
public set isLoading(val: boolean) {
this.status = val ? LoadStatus.LOADING : LoadStatus.IDLE;
}
```
produces a parsing error. | 1.0 | [BUG] Empty description for accessors - ```
/**
*
*
* @readonly
* @type {boolean}
* @memberof LoadStatusComponent
*/
public set isLoading(val: boolean) {
this.status = val ? LoadStatus.LOADING : LoadStatus.IDLE;
}
```
produces a parsing error. | priority | empty description for accessors readonly type boolean memberof loadstatuscomponent public set isloading val boolean this status val loadstatus loading loadstatus idle produces a parsing error | 1 |
630,207 | 20,100,810,727 | IssuesEvent | 2022-02-07 03:44:39 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | opened | Map shouldn't be created until StartRound | Type: Bug Priority: 2-Before Release Difficulty: 2-Medium | If it's deleted somehow between PreRoundSetup and StartRound the game crashes. This shouldn't happen normally but apparently it can and it'd be better to just protect against it IMO. | 1.0 | Map shouldn't be created until StartRound - If it's deleted somehow between PreRoundSetup and StartRound the game crashes. This shouldn't happen normally but apparently it can and it'd be better to just protect against it IMO. | priority | map shouldn t be created until startround if it s deleted somehow between preroundsetup and startround the game crashes this shouldn t happen normally but apparently it can and it d be better to just protect against it imo | 1 |
499,170 | 14,442,275,073 | IssuesEvent | 2020-12-07 17:55:17 | vmware/singleton | https://api.github.com/repos/vmware/singleton | opened | [BUG] Java client - Get supported locales from offline data source fails in nested-jar applications | area/java-client kind/bug priority/medium | **Describe the bug**
Get supported locales from offline data source fails in nested-jar applications because FileSystems walk in nested jars is not supported in Java 11 and below.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
**Screenshots**
**Additional context**
| 1.0 | [BUG] Java client - Get supported locales from offline data source fails in nested-jar applications - **Describe the bug**
Get supported locales from offline data source fails in nested-jar applications because FileSystems walk in nested jars is not supported in Java 11 and below.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
**Screenshots**
**Additional context**
| priority | java client get supported locales from offline data source fails in nested jar applications describe the bug get supported locales from offline data source fails in nested jar applications because filesystems walk in nested jars is not supported in java and below to reproduce steps to reproduce the behavior expected behavior screenshots additional context | 1 |
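Editorial note on the row above: the bug is that a `FileSystems` walk cannot descend into a jar nested inside another jar on Java 11 and below, because the inner archive has no filesystem path of its own. The workaround shape is generic to any zip-in-zip layout and can be sketched in Python's `zipfile` (all archive and entry names below are hypothetical fixtures built in memory):

```python
import io
import zipfile

# Build a jar-in-jar fixture in memory (hypothetical names).
inner_buf = io.BytesIO()
with zipfile.ZipFile(inner_buf, "w") as inner:
    inner.writestr("locales/messages_de.json", "{}")

outer_buf = io.BytesIO()
with zipfile.ZipFile(outer_buf, "w") as outer:
    outer.writestr("BOOT-INF/lib/client.jar", inner_buf.getvalue())

# Walking the nested archive means first reading the inner jar's bytes
# out of the outer one -- there is no path into it that a plain
# filesystem walk could follow.
with zipfile.ZipFile(outer_buf) as outer:
    inner_bytes = outer.read("BOOT-INF/lib/client.jar")
    with zipfile.ZipFile(io.BytesIO(inner_bytes)) as inner:
        names = inner.namelist()

print(names)  # entries of the nested archive
```

The extra read-into-memory step is exactly what a naive directory walk skips, which is why listing supported locales fails in nested-jar (e.g. Spring Boot style) deployments.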
196,980 | 6,951,429,972 | IssuesEvent | 2017-12-06 14:27:35 | wordpress-mobile/AztecEditor-Android | https://api.github.com/repos/wordpress-mobile/AztecEditor-Android | closed | Toggling From HTML View Merges Consecutive Spaces | bug medium priority wontfix | ### Expected
All spaces are retained when toggling between visual and HTML views.
### Observed
Consecutive spaces are merged into a single space when toggling between visual and HTML views.
### Reproduced
1. Enter text with multiple spaces.
2. Tap ***HTML*** format button.
3. Notice multiple spaces are retained.
4. Tap ***HTML*** format button.
5. Notice multiple spaces are merged into a single space.
#### Tested
Google Pixel on Android 7.1.2 with AztecDemo 1.0 | 1.0 | Toggling From HTML View Merges Consecutive Spaces - ### Expected
All spaces are retained when toggling between visual and HTML views.
### Observed
Consecutive spaces are merged into a single space when toggling between visual and HTML views.
### Reproduced
1. Enter text with multiple spaces.
2. Tap ***HTML*** format button.
3. Notice multiple spaces are retained.
4. Tap ***HTML*** format button.
5. Notice multiple spaces are merged into a single space.
#### Tested
Google Pixel on Android 7.1.2 with AztecDemo 1.0 | priority | toggling from html view merges consecutive spaces expected all spaces are retained when toggling between visual and html views observed consecutive spaces are merged into a single space when toggling between visual and html views reproduced enter text with multiple spaces tap html format button notice multiple spaces are retained tap html format button notice multiple spaces are merged into a single space tested google pixel on android with aztecdemo | 1 |
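Editorial note on the row above: merging consecutive spaces is standard HTML whitespace collapsing, so a round-trip through an HTML view loses runs of spaces unless they are escaped. A hypothetical helper (not Aztec's actual fix) shows the usual escape strategy in Python:

```python
import re

def preserve_spaces(text: str) -> str:
    """Escape runs of 2+ spaces so an HTML renderer keeps them.

    Hypothetical sketch: every space in a run except the last becomes
    a non-breaking-space entity, which HTML will not collapse.
    """
    return re.sub(r" {2,}",
                  lambda m: "&nbsp;" * (len(m.group()) - 1) + " ",
                  text)

print(preserve_spaces("a   b"))  # a&nbsp;&nbsp; b
```

Keeping one ordinary space at the end of each run preserves normal line-wrapping behavior while still surviving the visual/HTML toggle.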
740,943 | 25,775,378,380 | IssuesEvent | 2022-12-09 11:29:32 | bounswe/bounswe2022group8 | https://api.github.com/repos/bounswe/bounswe2022group8 | closed | MOB-18: Fetching images from AWS S3 | Effort: High Effort: Medium Priority: High Status: Completed Coding Team: Mobile | ### What's up?
Backend team has uploaded images to S3. Now we should fetch those images from the S3 bucket.
### To Do
- [x] Getting credentials to connect AWS S3 from backend team.
- [x] Fetching images from S3.
### Deadline
02.12.2022 @11.59
### Additional Information
AWS S3 url : https://cmpe451-production.s3.amazonaws.com
AWS Access Key and Secret key are shared with mobile team internally. Thanks to @KarahanS for sharing AWS credentials.
### Reviewers
@MustafaEmreErengul @mustafa-cihan @dundarmete | 1.0 | MOB-18: Fetching images from AWS S3 - ### What's up?
Backend team has uploaded images to S3. Now we should fetch those images from the S3 bucket.
### To Do
- [x] Getting credentials to connect AWS S3 from backend team.
- [x] Fetching images from S3.
### Deadline
02.12.2022 @11.59
### Additional Information
AWS S3 url : https://cmpe451-production.s3.amazonaws.com
AWS Access Key and Secret key are shared with mobile team internally. Thanks to @KarahanS for sharing AWS credentials.
### Reviewers
@MustafaEmreErengul @mustafa-cihan @dundarmete | priority | mob fetching images from aws what s up backend team has uploaded images to now we should fetch those images from the bucket to do getting credentials to connect aws from backend team fetching images from deadline additional information aws url aws access key and secret key are shared with mobile team internally thanks to karahans for sharing aws credentials reviewers mustafaemreerengul mustafa cihan dundarmete | 1 |
16,608 | 2,615,119,811 | IssuesEvent | 2015-03-01 05:45:52 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | Support for parsing field declared as Array | auto-migrated Component-Util Milestone-Version1.4.0 Priority-Medium Type-Enhancement | ```
External references, such as a standards document, or specification?
N/A
Java environments (e.g. Java 6, Android 2.3, App Engine 1.4.2, or All)?
All
Please describe the feature requested.
Should be able to parse an array pretty much the same as a collection. For
example:
public class Something {
@Key public String[] values;
}
This applies to everywhere @Key is used: JSON, XML, URL encoder, and HTTP
headers.
```
Original issue reported on code.google.com by `yan...@google.com` on 12 Apr 2011 at 4:59 | 1.0 | Support for parsing field declared as Array - ```
External references, such as a standards document, or specification?
N/A
Java environments (e.g. Java 6, Android 2.3, App Engine 1.4.2, or All)?
All
Please describe the feature requested.
Should be able to parse an array pretty much the same as a collection. For
example:
public class Something {
@Key public String[] values;
}
This applies to everywhere @Key is used: JSON, XML, URL encoder, and HTTP
headers.
```
Original issue reported on code.google.com by `yan...@google.com` on 12 Apr 2011 at 4:59 | priority | support for parsing field declared as array external references such as a standards document or specification n a java environments e g java android app engine or all all please describe the feature requested should be able to parse an array pretty much the same as a collection for example public class something key public string values this applies to everywhere key is used json xml url encoder and http headers original issue reported on code google com by yan google com on apr at | 1 |
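Editorial note on the row above: the request is that a `@Key`-annotated array field deserialize the same way a collection field does. The symmetry being asked for is easy to see in Python, where a JSON array parses to a list and converting to a fixed-size, array-like tuple is trivial (field and key names below are illustrative):

```python
import json

raw = '{"values": ["a", "b"]}'
parsed = json.loads(raw)

# json.loads yields a mutable list (the "collection" case)...
values_list = parsed["values"]

# ...and a fixed-size tuple stands in for the String[] array case the
# Java client was asked to support for @Key fields.
values_array = tuple(values_list)

print(values_list)   # ['a', 'b']
print(values_array)  # ('a', 'b')
```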
652,605 | 21,556,851,971 | IssuesEvent | 2022-04-30 15:14:23 | QuiltMC/quiltflower | https://api.github.com/repos/QuiltMC/quiltflower | opened | Finally resugaring is a black box, and it's doesn't always work | bug Subsystem: Statement Structure Priority: Medium | Currently we have no one in the team who understands what exactly is going on to resugar finally statements. Furthermore, it seems to be produce incorrect result or just crashes in some cases where there is a return in the finally block.
The following tests are known error cases
* `TestLoopFinally::test3` and reduced form `TestLoopFinally::test4`. Seems to produce incorrect dgraphs after a round of finally resugaring, but whether this is cause dgraph generation is wrong for catchall statements or because finally resugaring made an incorrect transformation is not clear.
* Converting the `test4` do while loop into a while loop results in the same cfg graph, but with a different start point, which after parsing into a statement succeeds to be parsed into a dgraph. (My gut tells me the graph is wrong though)
* `TestTryWithResources::testFinaly` produces semantically different results while also including a try finally with an empty finally block, which is essentially a no op.
* If the constructor of Scanner throws, it would throw that exception instead of returning null
* `TestTryWithResources::testFinalyNested` fails to parse the cfg graph after a round (at least 1, could be more) of finally resugaring. | 1.0 | Finally resugaring is a black box, and it's doesn't always work - Currently we have no one in the team who understands what exactly is going on to resugar finally statements. Furthermore, it seems to be produce incorrect result or just crashes in some cases where there is a return in the finally block.
The following tests are known error cases
* `TestLoopFinally::test3` and reduced form `TestLoopFinally::test4`. Seems to produce incorrect dgraphs after a round of finally resugaring, but whether this is cause dgraph generation is wrong for catchall statements or because finally resugaring made an incorrect transformation is not clear.
* Converting the `test4` do while loop into a while loop results in the same cfg graph, but with a different start point, which after parsing into a statement succeeds to be parsed into a dgraph. (My gut tells me the graph is wrong though)
* `TestTryWithResources::testFinaly` produces semantically different results while also including a try finally with an empty finally block, which is essentially a no op.
* If the constructor of Scanner throws, it would throw that exception instead of returning null
* `TestTryWithResources::testFinalyNested` fails to parse the cfg graph after a round (at least 1, could be more) of finally resugaring. | priority | finally resugaring is a black box and it s doesn t always work currently we have no one in the team who understands what exactly is going on to resugar finally statements furthermore it seems to be produce incorrect result or just crashes in some cases where there is a return in the finally block the following tests are known error cases testloopfinally and reduced form testloopfinally seems to produce incorrect dgraphs after a round of finally resugaring but whether this is cause dgraph generation is wrong for catchall statements or because finally resugaring made an incorrect transformation is not clear converting the do while loop into a while loop results in the same cfg graph but with a different start point which after parsing into a statement succeeds to be parsed into a dgraph my gut tells me the graph is wrong though testtrywithresources testfinaly produces semantically different results while also including a try finally with an empty finally block which is essentially a no op if the constructor of scanner throws it would throw that exception instead of returning null testtrywithresources testfinalynested fails to parse the cfg graph after a round at least could be more of finally resugaring | 1 |
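Editorial note on the row above: the "return in the finally block" cases it lists are hard precisely because such a return rewrites control flow, discarding any in-flight exception or pending return. Python shares these try/finally semantics with JVM languages, so the wrinkle can be demonstrated directly (hypothetical function name):

```python
def swallow():
    try:
        raise ValueError("lost")
    finally:
        # This return discards the in-flight ValueError entirely --
        # the control-flow edge that makes finally resugaring tricky.
        return "from finally"

print(swallow())  # prints "from finally"; the ValueError never escapes
```

A decompiler resugaring a catch-all handler back into `finally` has to account for this edge, which is one reason the tests above produce inconsistent control-flow graphs.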
93,689 | 3,907,877,048 | IssuesEvent | 2016-04-19 14:17:00 | Nexxado/ProjectHands | https://api.github.com/repos/Nexxado/ProjectHands | closed | Basic Chat System | 4 - Done Backend Points: 13 Priority: Medium | - [x] Chat room per renovation page
- [x] Send Messages
- [x] Receive Messages
- [x] Chat History
<!---
@huboard:{"order":6.5,"milestone_order":20,"custom_state":""}
-->
| 1.0 | Basic Chat System - - [x] Chat room per renovation page
- [x] Send Messages
- [x] Receive Messages
- [x] Chat History
<!---
@huboard:{"order":6.5,"milestone_order":20,"custom_state":""}
-->
| priority | basic chat system chat room per renovation page send messages receive messages chat history huboard order milestone order custom state | 1 |
173,774 | 6,530,710,331 | IssuesEvent | 2017-08-30 15:58:15 | openshift-evangelists/intro-katacoda | https://api.github.com/repos/openshift-evangelists/intro-katacoda | closed | Need "VMs" with more memory and disk | enhancement priority-medium | Some of the new people who want to write scenarios have much larger containers and disk space requirements that we need for some of our intro scenarios. How do we have different size base VMs and call them in the init script. This is related to #16 | 1.0 | Need "VMs" with more memory and disk - Some of the new people who want to write scenarios have much larger containers and disk space requirements that we need for some of our intro scenarios. How do we have different size base VMs and call them in the init script. This is related to #16 | priority | need vms with more memory and disk some of the new people who want to write scenarios have much larger containers and disk space requirements that we need for some of our intro scenarios how do we have different size base vms and call them in the init script this is related to | 1 |
463,868 | 13,302,824,157 | IssuesEvent | 2020-08-25 14:44:24 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | closed | Pathfinding chunks break on map remove | Feature: Entity AI Priority: 2-medium Type: Bug | When running `rmmap` on a map with a grid, the pathfinding chunks break.
https://github.com/space-wizards/space-station-14/blob/520e523d30940d8110b7338b8a4e67acff887a48/Content.Server/GameObjects/EntitySystems/AI/Pathfinding/PathfindingChunk.cs#L70 | 1.0 | Pathfinding chunks break on map remove - When running `rmmap` on a map with a grid, the pathfinding chunks break.
https://github.com/space-wizards/space-station-14/blob/520e523d30940d8110b7338b8a4e67acff887a48/Content.Server/GameObjects/EntitySystems/AI/Pathfinding/PathfindingChunk.cs#L70 | priority | pathfinding chunks break on map remove when running rmmap on a map with a grid the pathfinding chunks break | 1 |
427,914 | 12,400,787,706 | IssuesEvent | 2020-05-21 08:34:50 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | 7.2.0 Staging 08551408-Turning on/off braziers | Priority: Medium Status: Fixed | When attempting to turn on/off a brazier, it does not have an effect on it until you re-log. | 1.0 | 7.2.0 Staging 08551408-Turning on/off braziers - When attempting to turn on/off a brazier, it does not have an effect on it until you re-log. | priority | staging turning on off braziers when attempting to turn on off a brazier it does not have an effect on it until you re log | 1 |
157,699 | 6,011,116,578 | IssuesEvent | 2017-06-06 14:36:19 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | opened | [studio] Uploading a large video via Upload From Desktop Data Source OOMs | bug Priority: Medium | Steps to Reproduce
--------------------
- Create a site based on an existing BP
- Edit one of the content types and add a Video from Desktop Data Source (DS)
- Add a Video item to the content type and wire it to the aforementioned DS
- Save and edit an instance of the type
- Upload a video from your desktop that's >550MB and watch the logs
- You'll see an OOM like this one:
```
[ERROR] 2017-06-06 09:44:55,076 [http-nio-8080-exec-3] [impl.DefaultExceptionHandler] | POST http://localhost:8080/studio/asset-upload?nocache=Tue%20Jun%2006%202017%2009:43:36%20GMT-0400%20(EDT) failed
org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: Java heap space
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:978)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:208)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.engine.servlet.filter.SiteContextResolvingFilter.doFilter(SiteContextResolvingFilter.java:46)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.engine.servlet.filter.ExceptionHandlingFilter.doFilter(ExceptionHandlingFilter.java:56)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.commons.http.RequestContextBindingFilter.doFilter(RequestContextBindingFilter.java:79)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.studio.impl.v1.web.filter.MultiReadHttpServletRequestWrapperFilter.doFilter(MultiReadHttpServletRequestWrapperFilter.java:32)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:108)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:349)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:784)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:802)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1410)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.StringCoding.safeTrim(StringCoding.java:89)
at java.lang.StringCoding.access$100(StringCoding.java:50)
at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:154)
at java.lang.StringCoding.decode(StringCoding.java:193)
at java.lang.String.<init>(String.java:426)
at java.io.ByteArrayOutputStream.toString(ByteArrayOutputStream.java:245)
at org.craftercms.studio.impl.v1.web.http.MultiReadHttpServletRequestWrapper.getPostBodyAsString(MultiReadHttpServletRequestWrapper.java:132)
at org.craftercms.studio.impl.v1.web.http.MultiReadHttpServletRequestWrapper.getParameterMap(MultiReadHttpServletRequestWrapper.java:102)
at org.craftercms.studio.impl.v1.web.http.MultiReadHttpServletRequestWrapper.getParameter(MultiReadHttpServletRequestWrapper.java:86)
at javax.servlet.ServletRequestWrapper.getParameter(ServletRequestWrapper.java:153)
at org.springframework.web.servlet.i18n.LocaleChangeInterceptor.preHandle(LocaleChangeInterceptor.java:139)
at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:134)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:958)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:208)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.engine.servlet.filter.SiteContextResolvingFilter.doFilter(SiteContextResolvingFilter.java:46)
[INFO] 2017-06-06 09:44:55,126 [http-nio-8080-exec-3] [freemarker.CrafterFreeMarkerConfigurer] | ClassTemplateLoader for Spring macros added to FreeMarker configuration
``` | 1.0 | [studio] Uploading a large video via Upload From Desktop Data Source OOMs - Steps to Reproduce
--------------------
- Create a site based on an existing BP
- Edit one of the content types and add a Video from Desktop Data Source (DS)
- Add a Video item to the content type and wire it to the aforementioned DS
- Save and edit an instance of the type
- Upload a video from your desktop that's >550MB and watch the logs
- You'll see an OOM like this one:
```
[ERROR] 2017-06-06 09:44:55,076 [http-nio-8080-exec-3] [impl.DefaultExceptionHandler] | POST http://localhost:8080/studio/asset-upload?nocache=Tue%20Jun%2006%202017%2009:43:36%20GMT-0400%20(EDT) failed
org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: Java heap space
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:978)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:208)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.engine.servlet.filter.SiteContextResolvingFilter.doFilter(SiteContextResolvingFilter.java:46)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.engine.servlet.filter.ExceptionHandlingFilter.doFilter(ExceptionHandlingFilter.java:56)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.commons.http.RequestContextBindingFilter.doFilter(RequestContextBindingFilter.java:79)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.studio.impl.v1.web.filter.MultiReadHttpServletRequestWrapperFilter.doFilter(MultiReadHttpServletRequestWrapperFilter.java:32)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:108)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:349)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:784)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:802)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1410)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.StringCoding.safeTrim(StringCoding.java:89)
at java.lang.StringCoding.access$100(StringCoding.java:50)
at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:154)
at java.lang.StringCoding.decode(StringCoding.java:193)
at java.lang.String.<init>(String.java:426)
at java.io.ByteArrayOutputStream.toString(ByteArrayOutputStream.java:245)
at org.craftercms.studio.impl.v1.web.http.MultiReadHttpServletRequestWrapper.getPostBodyAsString(MultiReadHttpServletRequestWrapper.java:132)
at org.craftercms.studio.impl.v1.web.http.MultiReadHttpServletRequestWrapper.getParameterMap(MultiReadHttpServletRequestWrapper.java:102)
at org.craftercms.studio.impl.v1.web.http.MultiReadHttpServletRequestWrapper.getParameter(MultiReadHttpServletRequestWrapper.java:86)
at javax.servlet.ServletRequestWrapper.getParameter(ServletRequestWrapper.java:153)
at org.springframework.web.servlet.i18n.LocaleChangeInterceptor.preHandle(LocaleChangeInterceptor.java:139)
at org.springframework.web.servlet.HandlerExecutionChain.applyPreHandle(HandlerExecutionChain.java:134)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:958)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:208)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.craftercms.engine.servlet.filter.SiteContextResolvingFilter.doFilter(SiteContextResolvingFilter.java:46)
[INFO] 2017-06-06 09:44:55,126 [http-nio-8080-exec-3] [freemarker.CrafterFreeMarkerConfigurer] | ClassTemplateLoader for Spring macros added to FreeMarker configuration
``` | priority | uploading a large video via upload from desktop data source ooms steps to reproduce create a site based on an existing bp edit one of the content types and add a video from desktop data source ds add a video item to the content type and wire it to the aforementioned ds save and edit an instance of the type upload a video from your desktop that s and watch the logs you ll see an oom like this one post failed org springframework web util nestedservletexception handler dispatch failed nested exception is java lang outofmemoryerror java heap space at org springframework web servlet dispatcherservlet dodispatch dispatcherservlet java at org springframework web servlet dispatcherservlet doservice dispatcherservlet java at org springframework web servlet frameworkservlet processrequest frameworkservlet java at org springframework web servlet frameworkservlet dopost frameworkservlet java at javax servlet http httpservlet service httpservlet java at org springframework web servlet frameworkservlet service frameworkservlet java at javax servlet http httpservlet service httpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org springframework security web filterchainproxy dofilterinternal filterchainproxy java at org springframework security web filterchainproxy dofilter filterchainproxy java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter 
applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms engine servlet filter sitecontextresolvingfilter dofilter sitecontextresolvingfilter java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms engine servlet filter exceptionhandlingfilter dofilter exceptionhandlingfilter java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms commons http requestcontextbindingfilter dofilter requestcontextbindingfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms studio impl web filter multireadhttpservletrequestwrapperfilter dofilter multireadhttpservletrequestwrapperfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve 
java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java caused by java lang outofmemoryerror java heap space at java util arrays copyof arrays java at java lang stringcoding safetrim stringcoding java at java lang stringcoding access stringcoding java at java lang stringcoding stringdecoder decode stringcoding java at java lang stringcoding decode stringcoding java at java lang string string java at java io bytearrayoutputstream tostring bytearrayoutputstream java at org craftercms studio impl web http multireadhttpservletrequestwrapper getpostbodyasstring multireadhttpservletrequestwrapper java at org craftercms studio impl web http multireadhttpservletrequestwrapper getparametermap multireadhttpservletrequestwrapper java at org craftercms studio impl web http multireadhttpservletrequestwrapper getparameter multireadhttpservletrequestwrapper java at javax servlet servletrequestwrapper getparameter servletrequestwrapper java at org springframework web servlet localechangeinterceptor prehandle localechangeinterceptor java at org springframework web servlet handlerexecutionchain 
applyprehandle handlerexecutionchain java at org springframework web servlet dispatcherservlet dodispatch dispatcherservlet java at org springframework web servlet dispatcherservlet doservice dispatcherservlet java at org springframework web servlet frameworkservlet processrequest frameworkservlet java at org springframework web servlet frameworkservlet dopost frameworkservlet java at javax servlet http httpservlet service httpservlet java at org springframework web servlet frameworkservlet service frameworkservlet java at javax servlet http httpservlet service httpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org springframework security web filterchainproxy dofilterinternal filterchainproxy java at org springframework security web filterchainproxy dofilter filterchainproxy java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org craftercms engine servlet filter sitecontextresolvingfilter dofilter sitecontextresolvingfilter java classtemplateloader for spring macros added to freemarker configuration | 1 |
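The row above traces the heap exhaustion to `MultiReadHttpServletRequestWrapper.getPostBodyAsString`, which decodes the entire multipart body (a >550 MB video) into a single in-memory `String`. As a hedged illustration of the usual remedy — not the project's actual fix, and written in plain C rather than the servlet code, with a hypothetical 1 MiB limit — the idea is to cap how much of a request body may be buffered and bail out (so the caller can spool to disk) once the cap is exceeded:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical cap: once the body exceeds this, the caller should spool to
 * a temp file instead of decoding hundreds of MB into one heap allocation. */
#define MAX_BUFFERED_BODY (1u << 20) /* 1 MiB */

/* Reads `in` into `dst` (capacity `cap`) in fixed-size chunks.
 * Returns 0 on success, or -1 as soon as the data would exceed either the
 * buffer capacity or the in-memory limit, mirroring how a bounded request
 * wrapper would refuse to materialize an oversized POST body. */
static int read_bounded(FILE *in, char *dst, size_t cap, size_t *out_len)
{
    char chunk[8192];
    size_t total = 0, n;

    while ((n = fread(chunk, 1, sizeof chunk, in)) > 0) {
        if (total + n > MAX_BUFFERED_BODY || total + n > cap) {
            return -1; /* too big to buffer: fall back to streaming */
        }
        memcpy(dst + total, chunk, n);
        total += n;
    }
    *out_len = total;
    return 0;
}
```

With a bound like this in place, a `getParameter` call on an oversized upload would fail fast (or read from the disk spool) instead of raising `OutOfMemoryError` inside `String` decoding, as the stack trace shows.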
429,781 | 12,427,303,225 | IssuesEvent | 2020-05-25 01:50:37 | confidantstation/Confidant-Station | https://api.github.com/repos/confidantstation/Confidant-Station | closed | renew app push reginfo interface,add PushMsgSend api | Priority: Medium enhancement | 1.add appPushInfoGet,appPushInfoReg api
2.add PushMsgSend
| 1.0 | renew app push reginfo interface,add PushMsgSend api - 1.add appPushInfoGet,appPushInfoReg api
2.add PushMsgSend
| priority | renew app push reginfo interface,add pushmsgsend api add apppushinfoget,apppushinforeg api add pushmsgsend | 1 |
707,312 | 24,301,923,206 | IssuesEvent | 2022-09-29 14:27:13 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | MCUboot update breaks compilation for boards without CONFIG_WATCHDOG=y | bug priority: medium area: MCUBoot | **Describe the bug**
The MCUboot update in cfc96eaacceeaa09bb19d8df35af7bdbbf7af189 breaks compilation for at least the `twr_ke18f` board (and likely all other boards having a DTS `watchdog0` alias with status `okay` but not setting `CONFIG_WATCHDOG=y`).
**To Reproduce**
Steps to reproduce the behavior:
1. `west build -b twr_ke18f ../bootloader/mcuboot/boot/zephyr/`
2. See error:
```
[ 95%] Linking C executable zephyr_pre0.elf
/opt/zephyr-sdk/0.15.0/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/12.1.0/../../../../arm-zephyr-eabi/bin/ld.bfd: ../app/libapp.a(main.c.obj): in function `main':
/home/hebad/Projects/zephyrproject/bootloader/mcuboot/boot/zephyr/main.c:612: undefined reference to `__device_dts_ord_105'
/opt/zephyr-sdk/0.15.0/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/12.1.0/../../../../arm-zephyr-eabi/bin/ld.bfd: ../app/libapp.a(loader.c.obj): in function `boot_copy_region':
/home/hebad/Projects/zephyrproject/bootloader/mcuboot/boot/bootutil/src/loader.c:1035: undefined reference to `__device_dts_ord_105'
collect2: error: ld returned 1 exit status
make[2]: *** [zephyr/CMakeFiles/zephyr_pre0.dir/build.make:120: zephyr/zephyr_pre0.elf] Error 1
make[1]: *** [CMakeFiles/Makefile2:2921: zephyr/CMakeFiles/zephyr_pre0.dir/all] Error 2
make: *** [Makefile:91: all] Error 2
FATAL ERROR: command exited with status 2: /home/hebad/Projects/zephyrproject/venv/bin/cmake --build /home/hebad/Projects/zephyrproject/zephyr/build
```
**Expected behavior**
MCUboot compiles without errors.
**Impact**
Unable to build MCUboot for affected boards.
**Environment (please complete the following information):**
- OS: Ubuntu Linux 20.04 LTS
- Toolchain: Zephyr SDK v0.15.0
- Commit SHA: a5ce3da6ab425d104d73d34f47352ae721c5d216
| 1.0 | MCUboot update breaks compilation for boards without CONFIG_WATCHDOG=y - **Describe the bug**
The MCUboot update in cfc96eaacceeaa09bb19d8df35af7bdbbf7af189 breaks compilation for at least the `twr_ke18f` board (and likely all other boards having a DTS `watchdog0` alias with status `okay` but not setting `CONFIG_WATCHDOG=y`).
**To Reproduce**
Steps to reproduce the behavior:
1. `west build -b twr_ke18f ../bootloader/mcuboot/boot/zephyr/`
2. See error:
```
[ 95%] Linking C executable zephyr_pre0.elf
/opt/zephyr-sdk/0.15.0/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/12.1.0/../../../../arm-zephyr-eabi/bin/ld.bfd: ../app/libapp.a(main.c.obj): in function `main':
/home/hebad/Projects/zephyrproject/bootloader/mcuboot/boot/zephyr/main.c:612: undefined reference to `__device_dts_ord_105'
/opt/zephyr-sdk/0.15.0/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/12.1.0/../../../../arm-zephyr-eabi/bin/ld.bfd: ../app/libapp.a(loader.c.obj): in function `boot_copy_region':
/home/hebad/Projects/zephyrproject/bootloader/mcuboot/boot/bootutil/src/loader.c:1035: undefined reference to `__device_dts_ord_105'
collect2: error: ld returned 1 exit status
make[2]: *** [zephyr/CMakeFiles/zephyr_pre0.dir/build.make:120: zephyr/zephyr_pre0.elf] Error 1
make[1]: *** [CMakeFiles/Makefile2:2921: zephyr/CMakeFiles/zephyr_pre0.dir/all] Error 2
make: *** [Makefile:91: all] Error 2
FATAL ERROR: command exited with status 2: /home/hebad/Projects/zephyrproject/venv/bin/cmake --build /home/hebad/Projects/zephyrproject/zephyr/build
```
**Expected behavior**
MCUboot compiles without errors.
**Impact**
Unable to build MCUboot for affected boards.
**Environment (please complete the following information):**
- OS: Ubuntu Linux 20.04 LTS
- Toolchain: Zephyr SDK v0.15.0
- Commit SHA: a5ce3da6ab425d104d73d34f47352ae721c5d216
| priority | mcuboot update breaks compilation for boards without config watchdog y describe the bug the mcuboot update in breaks compilation for at least the twr board and likely all other boards having a dts alias with status okay but not setting config watchdog y to reproduce steps to reproduce the behavior west build b twr bootloader mcuboot boot zephyr see error linking c executable zephyr elf opt zephyr sdk arm zephyr eabi bin lib gcc arm zephyr eabi arm zephyr eabi bin ld bfd app libapp a main c obj in function main home hebad projects zephyrproject bootloader mcuboot boot zephyr main c undefined reference to device dts ord opt zephyr sdk arm zephyr eabi bin lib gcc arm zephyr eabi arm zephyr eabi bin ld bfd app libapp a loader c obj in function boot copy region home hebad projects zephyrproject bootloader mcuboot boot bootutil src loader c undefined reference to device dts ord error ld returned exit status make error make error make error fatal error command exited with status home hebad projects zephyrproject venv bin cmake build home hebad projects zephyrproject zephyr build expected behavior mcuboot compiles without errors impact unable to build mcuboot for affected boards environment please complete the following information os ubuntu linux lts toolchain zephyr sdk commit sha | 1 |
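The MCUboot row above shows the classic symptom of referencing a devicetree device whose driver Kconfig is disabled: with `CONFIG_WATCHDOG=n` the `__device_dts_ord_105` struct is never instantiated, so the link fails. Below is a minimal stand-alone sketch of the guard pattern; `CONFIG_WATCHDOG` here is a stand-in for Zephyr's generated Kconfig symbol (flip it with `-DCONFIG_WATCHDOG=1`), and `struct device` is deliberately simplified rather than Zephyr's real type:

```c
#include <string.h>

/* Stand-in for Zephyr's generated Kconfig macro: 0 models a board whose
 * devicetree has a watchdog0 alias but whose config leaves the driver off. */
#ifndef CONFIG_WATCHDOG
#define CONFIG_WATCHDOG 0
#endif

struct device { const char *name; };

#if CONFIG_WATCHDOG
static const struct device wdt_instance = { "watchdog0" };
static const struct device *const wdt = &wdt_instance;
#else
/* No driver, no device struct: an unconditional DEVICE_DT_GET()-style
 * reference here is exactly what produced
 * `undefined reference to __device_dts_ord_105` at link time. */
static const struct device *const wdt = 0;
#endif

/* Callers must tolerate a disabled watchdog instead of assuming the device. */
static const char *watchdog_status(void)
{
    return wdt ? wdt->name : "watchdog disabled";
}
```

In Zephyr itself the equivalent guard would typically wrap the `DEVICE_DT_GET(DT_ALIAS(watchdog0))` reference in a `defined(CONFIG_WATCHDOG)` (or driver-specific Kconfig) check, so a `watchdog0` alias with status `okay` alone cannot pull in a nonexistent device.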