| Unnamed: 0 (int64, 0 to 832k) | id (float64, 2.49B to 32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 5 to 112) | repo_url (string, length 34 to 141) | action (string, 3 classes) | title (string, length 1 to 957) | labels (string, length 4 to 795) | body (string, length 1 to 259k) | index (string, 12 classes) | text_combine (string, length 96 to 259k) | label (string, 2 classes) | text (string, length 96 to 252k) | binary_label (int64, 0 to 1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
381,589 | 11,276,862,402 | IssuesEvent | 2020-01-15 00:40:32 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | closed | Follow up job messaging work | AMO caseflow-intake foxtrot priority-medium | <!-- The goal of this template is to be a tool to communicate the requirements for a story related task. It is not intended as a mandate, adapt as needed. -->
## User or job story
User story: As an intake user, I need to be able to communicate to veterans when their end product establishment is failing, through uploading a screenshot of their job details page. I would like the job details page to include the inbox messages in notes, because the inbox messages page includes information for other veterans.
## Acceptance criteria
- [x] When we add an inbox message for a job failing after 24 hours, subsequently succeeding, or being canceled, also add a job note (job note messages are already shown)
- [x] Read Inbox messages should be hidden after 120 days
- [x] Update copy (see below for new requested copy)
- [x] Use veteran file numbers when labeling async jobs in Inbox
- [x] Include a screenshot in this GitHub issue of an inbox message appearing on the job details page
## Release notes
Inbox messages related to asyncable jobs will appear as job notes on the job detail page.
### Designs
New copy:
- For messaging a job failure after 24 hours, add:
>No further action is necessary as the (IT) support team has been notified. You will receive a separate message in your inbox when the issue has resolved.
- For messaging a job being manually cancelled, add:
>No further action is necessary. Please see the job details page for more information on why this job has been cancelled.
Sample screenshot of job page with new job notes that get automatically created alongside Inbox messages (for failures and successes after 24 hours):
<img width="401" alt="Screen Shot 2019-12-18 at 10 51 19" src="https://user-images.githubusercontent.com/282869/71101457-bc7e1280-2184-11ea-843a-589221e82a2b.png">
| 1.0 | Follow up job messaging work - <!-- The goal of this template is to be a tool to communicate the requirements for a story related task. It is not intended as a mandate, adapt as needed. -->
## User or job story
User story: As an intake user, I need to be able to communicate to veterans when their end product establishment is failing, through uploading a screenshot of their job details page. I would like the job details page to include the inbox messages in notes, because the inbox messages page includes information for other veterans.
## Acceptance criteria
- [x] When we add an inbox message for a job failing after 24 hours, subsequently succeeding, or being canceled, also add a job note (job note messages are already shown)
- [x] Read Inbox messages should be hidden after 120 days
- [x] Update copy (see below for new requested copy)
- [x] Use veteran file numbers when labeling async jobs in Inbox
- [x] Include a screenshot in this GitHub issue of an inbox message appearing on the job details page
## Release notes
Inbox messages related to asyncable jobs will appear as job notes on the job detail page.
### Designs
New copy:
- For messaging a job failure after 24 hours, add:
>No further action is necessary as the (IT) support team has been notified. You will receive a separate message in your inbox when the issue has resolved.
- For messaging a job being manually cancelled, add:
>No further action is necessary. Please see the job details page for more information on why this job has been cancelled.
Sample screenshot of job page with new job notes that get automatically created alongside Inbox messages (for failures and successes after 24 hours):
<img width="401" alt="Screen Shot 2019-12-18 at 10 51 19" src="https://user-images.githubusercontent.com/282869/71101457-bc7e1280-2184-11ea-843a-589221e82a2b.png">
| priority | follow up job messaging work user or job story user story as an intake user i need to be able to communicate to veterans when their end product establishment is failing through uploading a screenshot of their job details page i would like the job details page to include the inbox messages in notes because the inbox messages page includes information for other veterans acceptance criteria when we add an inbox message for a job failing after hours subsequently succeeding or being canceled also add a job note job note messages are already shown read inbox messages should be hidden after days update copy see below for new requested copy use veteran file numbers when labeling async jobs in inbox include a screenshot in this github issue of an inbox message appearing on the job details page release notes inbox messages related to asyncable jobs will appear as job notes on the job detail page designs new copy for messaging a job failure after hours add no further action is necessary as the it support team has been notified you will receive a separate message in your inbox when the issue has resolved for messaging a job being manually cancelled add no further action is necessary please see the job details page for more information on why this job has been cancelled sample screenshot of job page with new job notes that get automatically created alongside inbox messages for failures and successes after hours img width alt screen shot at src | 1 |
270,650 | 8,468,165,249 | IssuesEvent | 2018-10-23 18:57:15 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: cross over equator (x = 0) does strange things | Medium Priority | 
**Version:** 0.7.7.2 beta
**Steps to Reproduce:**
take a vehicle (steam truck) and slowly drive over the equator.
you can spot the point also by looking at the smoke coming out of the engine. it will disappear shortly when you drive over the point
**Expected behavior:**
nothing happens
**Actual behavior:**
the players drop for a short time (1/2 sec) into the ground and disappears then (after I drove over the point) they appear again. also nearby vehicles (steam tractor) are launched in the air and bug away. n
**Do you have mods installed? Does issue happen when no mods are installed?:**
no | 1.0 | USER ISSUE: cross over equator (x = 0) does strange things - 
**Version:** 0.7.7.2 beta
**Steps to Reproduce:**
take a vehicle (steam truck) and slowly drive over the equator.
you can spot the point also by looking at the smoke coming out of the engine. it will disappear shortly when you drive over the point
**Expected behavior:**
nothing happens
**Actual behavior:**
the players drop for a short time (1/2 sec) into the ground and disappears then (after I drove over the point) they appear again. also nearby vehicles (steam tractor) are launched in the air and bug away. n
**Do you have mods installed? Does issue happen when no mods are installed?:**
no | priority | user issue cross over equator x does strange things version beta steps to reproduce take a vehicle steam truck and slowly drive over the equator you can spot the point also by looking at the smoke comming out of the engine it will disappear shortly when you drive over the point expected behavior nothing happens actual behavior the players drop for a short time sec into the ground and disappears then after i drove over the point they appear again also nearby vehicles steam tractor are launched in the air and bug away n do you have mods installed does issue happen when no mods are installed no | 1 |
436,738 | 12,552,561,275 | IssuesEvent | 2020-06-06 18:29:53 | Twin-Cities-Mutual-Aid/twin-cities-aid-distribution-locations | https://api.github.com/repos/Twin-Cities-Mutual-Aid/twin-cities-aid-distribution-locations | closed | Error logging/notification | Priority: Medium Type: Feature | Since our persistence layer (Google Sheets) is pretty error-prone, I think it would be really helpful if we could set up a service to notify us if there are errors in the production app. I know these exist but don't know anything about them ... ideally something we could plug into slack. | 1.0 | Error logging/notification - Since our persistence layer (Google Sheets) is pretty error-prone, I think it would be really helpful if we could set up a service to notify us if there are errors in the production app. I know these exist but don't know anything about them ... ideally something we could plug into slack. | priority | error logging notification since our persistence layer google sheets is pretty error prone i think it would be really helpful if we could set up a service to notify us if there are errors in the production app i know these exist but don t know anything about them ideally something we could plug into slack | 1 |
761,354 | 26,677,007,561 | IssuesEvent | 2023-01-26 14:58:47 | vaticle/intellij-rust | https://api.github.com/repos/vaticle/intellij-rust | opened | When configuring Rust toolchain, set default path as `bazel-out/{toolchain_path}` | priority: medium type: bug | When opening a project and loading a Rust file before running the appropriate Bazel build command - or when loading a Rust file at _any_ time after having ever done the above - you'll see "No Rust toolchain configured".
I'm not sure how we could easily configure it to _always_ try to auto-detect the Bazel-installed Rust toolchain in this case, but what we can definitely do, is modify the behaviour of the "Set up toolchain" option:

Currently it auto-detects the Rust toolchain in `/Users/{user}/.cargo/bin`, installed by `rustup`.

We should modify it to search the same paths that we've defined the Rust toolchain detection (on first load of a Rust file) to search; namely `bazel-out` for the Rust toolchain, and `bazel-{projectName}` for the `stdlib` sources. | 1.0 | When configuring Rust toolchain, set default path as `bazel-out/{toolchain_path}` - When opening a project and loading a Rust file before running the appropriate Bazel build command - or when loading a Rust file at _any_ time after having ever done the above - you'll see "No Rust toolchain configured".
I'm not sure how we could easily configure it to _always_ try to auto-detect the Bazel-installed Rust toolchain in this case, but what we can definitely do, is modify the behaviour of the "Set up toolchain" option:

Currently it auto-detects the Rust toolchain in `/Users/{user}/.cargo/bin`, installed by `rustup`.

We should modify it to search the same paths that we've defined the Rust toolchain detection (on first load of a Rust file) to search; namely `bazel-out` for the Rust toolchain, and `bazel-{projectName}` for the `stdlib` sources. | priority | when configuring rust toolchain set default path as bazel out toolchain path when opening a project and loading a rust file before running the appropriate bazel build command or when loading a rust file at any time after having ever done the above you ll see no rust toolchain configured i m not sure how we could easily configure it to always try to auto detect the bazel installed rust toolchain in this case but what we can definitely do is modify the behaviour of the set up toolchain option currently it auto detects the rust toolchain in users user cargo bin installed by rustup we should modify it to search the same paths that we ve defined the rust toolchain detection on first load of a rust file to search namely bazel out for the rust toolchain and bazel projectname for the stdlib sources | 1 |
41,298 | 2,868,994,804 | IssuesEvent | 2015-06-05 22:26:55 | dart-lang/pub-dartlang | https://api.github.com/repos/dart-lang/pub-dartlang | closed | Pub server should support 3-legged OAuth for 3rd party integration | enhancement MovedToGithub Priority-Medium | <a href="https://github.com/sethladd"><img src="https://avatars.githubusercontent.com/u/5479?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [sethladd](https://github.com/sethladd)**
_Originally opened as dart-lang/sdk#6234_
----
Github has post commit hooks. A hook could ping pub server, which then talks to github and checks pubspec.yaml. If version number is different, rebuild pub package.
Neat! | 1.0 | Pub server should support 3-legged OAuth for 3rd party integration - <a href="https://github.com/sethladd"><img src="https://avatars.githubusercontent.com/u/5479?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [sethladd](https://github.com/sethladd)**
_Originally opened as dart-lang/sdk#6234_
----
Github has post commit hooks. A hook could ping pub server, which then talks to github and checks pubspec.yaml. If version number is different, rebuild pub package.
Neat! | priority | pub server should support legged oauth for party integration issue by originally opened as dart lang sdk github has post commit hooks a hook could ping pub server which then talks to github and checks pubspec yaml if version number is different rebuild pub package neat | 1 |
210,541 | 7,190,740,983 | IssuesEvent | 2018-02-02 18:21:55 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | two-handed weapons/shields aren't easily identifiable as such | priority: medium section: Equipment status: issue: in progress type: medium level coding | Most two-handed weapons aren't easily identified as such because they don't have anything in their text to state that they are.
We sometimes see bug reports from mobile app users when equipping a two-handed item unequips a shield. There are issues to give notifications about that when equipping them on the apps (https://github.com/HabitRPG/habitica-ios/issues/433 and https://github.com/HabitRPG/habitica-android/issues/688) but we should also include a message in each two-handed item's description on website and apps - i.e., the message should be in the data returned from the API's `content` route.
For consistency, the message should be automatically added to the end of the item's description when the item has a true value for the `twoHanded` attribute (e.g., the rancherLasso below). I.e., the PR for this change will NOT change any of the `common/locales/en/` json files (with one exception listed below) but instead will modify the API's code to insert the message.
The message should be something like "**Two-handed item.**" We won't use "Two-handed weapon" because future items might have more of a shield-like feel to them.
The PR should change `weaponSpecialCandycaneNotes` in `common/locales/en/gear.json` to remove the hard-coded two-handed description. I.e., change this:
`A powerful mage's staff. Powerfully DELICIOUS, we mean! Two-handed weapon. Increases Intelligence by <%= int %> and Perception by <%= per %>. Limited Edition 2013-2014 Winter Gear.`
to this:
`A powerful mage's staff. Powerfully DELICIOUS, we mean! Increases Intelligence by <%= int %> and Perception by <%= per %>. Limited Edition 2013-2014 Winter Gear.`
Example of a two-handed item:
```
rancherLasso: {
twoHanded: true,
text: t('weaponArmoireRancherLassoText'),
notes: t('weaponArmoireRancherLassoNotes', { str: 5, per: 5, int: 5 }),
value: 100,
str: 5,
per: 5,
int: 5,
set: 'rancher',
canOwn: ownsItem('weapon_armoire_rancherLasso'),
},
``` | 1.0 | two-handed weapons/shields aren't easily identifiable as such - Most two-handed weapons aren't easily identified as such because they don't have anything in their text to state that they are.
We sometimes see bug reports from mobile app users when equipping a two-handed item unequips a shield. There are issues to give notifications about that when equipping them on the apps (https://github.com/HabitRPG/habitica-ios/issues/433 and https://github.com/HabitRPG/habitica-android/issues/688) but we should also include a message in each two-handed item's description on website and apps - i.e., the message should be in the data returned from the API's `content` route.
For consistency, the message should be automatically added to the end of the item's description when the item has a true value for the `twoHanded` attribute (e.g., the rancherLasso below). I.e., the PR for this change will NOT change any of the `common/locales/en/` json files (with one exception listed below) but instead will modify the API's code to insert the message.
The message should be something like "**Two-handed item.**" We won't use "Two-handed weapon" because future items might have more of a shield-like feel to them.
The PR should change `weaponSpecialCandycaneNotes` in `common/locales/en/gear.json` to remove the hard-coded two-handed description. I.e., change this:
`A powerful mage's staff. Powerfully DELICIOUS, we mean! Two-handed weapon. Increases Intelligence by <%= int %> and Perception by <%= per %>. Limited Edition 2013-2014 Winter Gear.`
to this:
`A powerful mage's staff. Powerfully DELICIOUS, we mean! Increases Intelligence by <%= int %> and Perception by <%= per %>. Limited Edition 2013-2014 Winter Gear.`
Example of a two-handed item:
```
rancherLasso: {
twoHanded: true,
text: t('weaponArmoireRancherLassoText'),
notes: t('weaponArmoireRancherLassoNotes', { str: 5, per: 5, int: 5 }),
value: 100,
str: 5,
per: 5,
int: 5,
set: 'rancher',
canOwn: ownsItem('weapon_armoire_rancherLasso'),
},
``` | priority | two handed weapons shields aren t easily identifiable as such most two handed weapons aren t easily identified as such because they don t have anything in their text to state that they are we sometimes see bug reports from mobile app users when equipping a two handed item unequips a shield there are issues to give notifications about that when equipping them on the apps and but we should also include a message in each two handed item s description on website and apps i e the message should be in the data returned from the api s content route for consistency the message should be automatically added to the end of the item s description when the item has a true value for the twohanded attribute e g the rancherlasso below i e the pr for this change will not change any of the common locales en json files with one exception listed below but instead will modify the api s code to insert the message the message should be something like two handed item we won t use two handed weapon because future items might have more of a shield like feel to them the pr should change weaponspecialcandycanenotes in common locales en gear json to remove the hard coded two handed description i e change this a powerful mage s staff powerfully delicious we mean two handed weapon increases intelligence by and perception by limited edition winter gear to this a powerful mage s staff powerfully delicious we mean increases intelligence by and perception by limited edition winter gear example of a two handed item rancherlasso twohanded true text t weaponarmoirerancherlassotext notes t weaponarmoirerancherlassonotes str per int value str per int set rancher canown ownsitem weapon armoire rancherlasso | 1 |
151,774 | 5,827,350,298 | IssuesEvent | 2017-05-08 08:45:33 | dotkom/onlineweb4 | https://api.github.com/repos/dotkom/onlineweb4 | closed | Add link to subnavbar to "Om interessegrupper" | Priority: Medium Status: Available | Add a link to the subnavbar that points to "Om interessegrupper", after "Om Online" etc. Awaiting merge of #1787 for now. | 1.0 | Add link to subnavbar to "Om interessegrupper" - Add a link to the subnavbar that points to "Om interessegrupper", after "Om Online" etc. Awaiting merge of #1787 for now. | priority | add link to subnavbar to om interessegrupper add a link to the subnavbar that points to om interessegrupper after om online etc awaiting merge of for now | 1 |
781,552 | 27,441,706,144 | IssuesEvent | 2023-03-02 11:29:09 | TetieWasTaken/BobTheBot | https://api.github.com/repos/TetieWasTaken/BobTheBot | opened | (Error) Responses should be standardized | priority: medium type: feature request | Replies to interactions such as "Item not found" or "[This] does not exist" should be standardized using the same format everywhere.
- Same layout
- Similar language/typing style
Achievable by making a custom logger. | 1.0 | (Error) Responses should be standardized - Replies to interactions such as "Item not found" or "[This] does not exist" should be standardized using the same format everywhere.
- Same layout
- Similar language/typing style
Achievable by making a custom logger. | priority | error responses should be standardized replies to interactions such as item not found or does not exist should be standardized using the same format everywhere same layout similar language typing style achievable by making a custom logger | 1 |
44,880 | 2,917,690,758 | IssuesEvent | 2015-06-24 00:26:57 | andresriancho/w3af | https://api.github.com/repos/andresriancho/w3af | closed | Write REST API client | improvement priority:medium rest-api | Write REST API client
- [x] Create a different repo in github
- [x] Initial implementation
- [x] Unittests for basic REST API consumption
- [x] Pypi integration
- [x] Manually push the first version
- [x] Define the required variables in circleci to push to pypi on successful builds
- [x] Integration tests
- [x] Download django-moth (use django moth utils)
- [x] Download latest w3af
- [x] Run `w3af_api`
- [x] Connect to w3af_api using the REST API and run a test scan
- [x] Successful w3af build should trigger build of w3af-api-client (develop to develop, master to master)
- [x] Add link from the w3af documentation to the w3af-api-client pypi / github repo | 1.0 | Write REST API client - Write REST API client
- [x] Create a different repo in github
- [x] Initial implementation
- [x] Unittests for basic REST API consumption
- [x] Pypi integration
- [x] Manually push the first version
- [x] Define the required variables in circleci to push to pypi on successful builds
- [x] Integration tests
- [x] Download django-moth (use django moth utils)
- [x] Download latest w3af
- [x] Run `w3af_api`
- [x] Connect to w3af_api using the REST API and run a test scan
- [x] Successful w3af build should trigger build of w3af-api-client (develop to develop, master to master)
- [x] Add link from the w3af documentation to the w3af-api-client pypi / github repo | priority | write rest api client write rest api client create a different repo in github initial implementation unittests for basic rest api consumption pypi integration manually push the first version define the required variables in circleci to push to pypi on successful builds integration tests download django moth use django moth utils download latest run api connect to api using the rest api and run a test scan successful build should trigger build of api client develop to develop master to master add link from the documentation to the api client pypi github repo | 1 |
284,429 | 8,738,327,503 | IssuesEvent | 2018-12-12 02:35:45 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | VisIt VTK reader does not support VTM files. | bug likelihood medium priority reviewed severity medium | VTK has an XML "VTM" file that looks a lot like a .visit file for grouping domains into a single multidomain dataset. The current VTK reader does not support VTM files. This came up when trying to look at some OpenFOAM VTK data in VisIt - a classic "paraview can read this fine" situation.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2162
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: VisIt VTK reader does not support VTM files.
Assigned to: Kathleen Biagas
Category:
Target version: 2.10
Author: Brad Whitlock
Start: 02/26/2015
Due date:
% Done: 0
Estimated time:
Created: 02/26/2015 03:28 pm
Updated: 08/18/2015 06:48 pm
Likelihood: 3 - Occasional
Severity: 3 - Major Irritation
Found in version: 2.8.2
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
VTK has an XML "VTM" file that looks a lot like a .visit file for grouping domains into a single multidomain dataset. The current VTK reader does not support VTM files. This came up when trying to look at some OpenFOAM VTK data in VisIt - a classic "paraview can read this fine" situation.
Comments:
I attached the wrong file before. Added a parser for .vtm files. Currently only supports vtkMultiBlockDataSet flavor. (vtk has a sample file of vtkHierarchicalDataSet)M databases/VTK/VTKPluginInfo.CM databases/VTK/avtVTKFileReader.CM databases/VTK/avtVTKFileReader.hM databases/VTK/VTK.xmlA databases/VTK/VTMParser.CM databases/VTK/avtVTKFileFormat.CA databases/VTK/VTMParser.hM databases/VTK/CMakeLists.txt
| 1.0 | VisIt VTK reader does not support VTM files. - VTK has an XML "VTM" file that looks a lot like a .visit file for grouping domains into a single multidomain dataset. The current VTK reader does not support VTM files. This came up when trying to look at some OpenFOAM VTK data in VisIt - a classic "paraview can read this fine" situation.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2162
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: VisIt VTK reader does not support VTM files.
Assigned to: Kathleen Biagas
Category:
Target version: 2.10
Author: Brad Whitlock
Start: 02/26/2015
Due date:
% Done: 0
Estimated time:
Created: 02/26/2015 03:28 pm
Updated: 08/18/2015 06:48 pm
Likelihood: 3 - Occasional
Severity: 3 - Major Irritation
Found in version: 2.8.2
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
VTK has an XML "VTM" file that looks a lot like a .visit file for grouping domains into a single multidomain dataset. The current VTK reader does not support VTM files. This came up when trying to look at some OpenFOAM VTK data in VisIt - a classic "paraview can read this fine" situation.
Comments:
I attached the wrong file before. Added a parser for .vtm files. Currently only supports vtkMultiBlockDataSet flavor. (vtk has a sample file of vtkHierarchicalDataSet)M databases/VTK/VTKPluginInfo.CM databases/VTK/avtVTKFileReader.CM databases/VTK/avtVTKFileReader.hM databases/VTK/VTK.xmlA databases/VTK/VTMParser.CM databases/VTK/avtVTKFileFormat.CA databases/VTK/VTMParser.hM databases/VTK/CMakeLists.txt
| priority | visit vtk reader does not support vtm files vtk has an xml vtm file that looks a lot like a visit file for grouping domains into a single multidomain dataset the current vtk reader does not support vtm files this came up when trying to look at some openfoam vtk data in visit a classic paraview can read this fine situation redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject visit vtk reader does not support vtm files assigned to kathleen biagas category target version author brad whitlock start due date done estimated time created pm updated pm likelihood occasional severity major irritation found in version impact expected use os all support group any description vtk has an xml vtm file that looks a lot like a visit file for grouping domains into a single multidomain dataset the current vtk reader does not support vtm files this came up when trying to look at some openfoam vtk data in visit a classic paraview can read this fine situation comments i attached the wrong file before added a parser for vtm files currently only supports vtkmultiblockdataset flavor vtk has a sample file of vtkhierarchicaldataset m databases vtk vtkplugininfo cm databases vtk avtvtkfilereader cm databases vtk avtvtkfilereader hm databases vtk vtk xmla databases vtk vtmparser cm databases vtk avtvtkfileformat ca databases vtk vtmparser hm databases vtk cmakelists txt | 1 |
778,498 | 27,318,638,078 | IssuesEvent | 2023-02-24 17:43:32 | AY2223S2-CS2103T-W11-2/tp | https://api.github.com/repos/AY2223S2-CS2103T-W11-2/tp | opened | List all internships that clash in interview or test dates | type.Story priority.Medium | as an Expert user so that I can try to reschedule some of them | 1.0 | List all internships that clash in interview or test dates - as an Expert user so that I can try to reschedule some of them | priority | list all internships that clash in interview or test dates as an expert user so that i can try to reschedule some of them | 1 |
729,930 | 25,151,208,511 | IssuesEvent | 2022-11-10 10:08:55 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Bluetooth: Application with buffer that cannot unref it in disconnect handler leads to advertising issues | bug priority: medium area: mcumgr | **Describe the bug**
This issue has been found with a peripheral zephyr device which supports 1 connection, the application works by receiving BT data and then putting it into a fifo to be processed by a dedicated (non-system) workqueue which will send responses to it and unref the connection, an issue arises however in this application if the remote device disconnects prior to the task having finished, the application attempts to restart advertising using the system workqueue from the disconnection handler. The problem is that because the other task contains references to the connection, restarting advertising fails with error -12. Therefore the application is unable to restart advertising and then is in a dead state. Upping the connection count to 2 does work around this issue, but is not an acceptable solution due to the additional RAM usage (which is essentially wasted).
Some operations can also take longer than others, e.g. a image erase command can take upwards of 5 seconds to process.
**To Reproduce**
mcumgr has this issue
**Expected behavior**
To be able to restart advertising, from the system workqueue, triggered in the disconnect handler
**Impact**
Showstopper
**Environment (please complete the following information):**
- Commit SHA or Version used: This is apparent in an nRF connect SDK sample application (samples/matter/lock) using the softdevice, but should be reproducible with any application that uses bluetooth, I did not attempt seeing if this issue was present in the zephyr controller but would imagine it would be given that it is caused by the limit of connection references/handles being reached | 1.0 | Bluetooth: Application with buffer that cannot unref it in disconnect handler leads to advertising issues - **Describe the bug**
This issue has been found with a peripheral zephyr device which supports 1 connection, the application works by receiving BT data and then putting it into a fifo to be processed by a dedicated (non-system) workqueue which will send responses to it and unref the connection, an issue arises however in this application if the remote device disconnects prior to the task having finished, the application attempts to restart advertising using the system workqueue from the disconnection handler. The problem is that because the other task contains references to the connection, restarting advertising fails with error -12. Therefore the application is unable to restart advertising and then is in a dead state. Upping the connection count to 2 does work around this issue, but is not an acceptable solution due to the additional RAM usage (which is essentially wasted).
Some operations can also take longer than others, e.g. a image erase command can take upwards of 5 seconds to process.
**To Reproduce**
mcumgr has this issue
**Expected behavior**
To be able to restart advertising, from the system workqueue, triggered in the disconnect handler
**Impact**
Showstopper
**Environment (please complete the following information):**
- Commit SHA or Version used: This is apparent in an nRF connect SDK sample application (samples/matter/lock) using the softdevice, but should be reproducible with any application that uses bluetooth, I did not attempt seeing if this issue was present in the zephyr controller but would imagine it would be given that it is caused by the limit of connection references/handles being reached | priority | bluetooth application with buffer that cannot unref it in disconnect handler leads to advertising issues describe the bug this issue has been found with a peripheral zephyr device which supports connection the application works by receiving bt data and then putting it into a fifo to be processed by a dedicated non system workqueue which will send responses to it and unref the connection an issue arises however in this application if the remote device disconnects prior to the task having finished the application attempts to restart advertising using the system workqueue from the disconnection handler the problem is that because the other task contains references to the connection restarting advertising fails with error therefore the application is unable to restart advertising and then is in a dead state upping the connection count to does work around this issue but is not an acceptable solution due to the additional ram usage which is essentially wasted some operations can also take longer than others e g a image erase command can take upwards of seconds to process to reproduce mcumgr has this issue expected behavior to be able to restart advertising from the system workqueue triggered in the disconnect handler impact showstopper environment please complete the following information commit sha or version used this is apparent in an nrf connect sdk sample application samples matter lock using the softdevice but should be reproducible with any application that uses bluetooth i did not attempt seeing if this issue was present in the zephyr controller but would imagine it would be given that it is caused by the limit of connection references handles being reached | 1 |
703,372 | 24,155,879,481 | IssuesEvent | 2022-09-22 07:38:49 | enviroCar/enviroCar-app | https://api.github.com/repos/enviroCar/enviroCar-app | closed | App crashes when we stop the track before the track starts recording | bug 3 - Done Priority - 2 - Medium | ## Issue
In Develop branch, When a user stops the track before the track starts recording or the timer starts, The app crashes
## Step to reproduce
1. Go to GPS mode
2. Click on start tracking
3. Immediately after the recording screen comes up, click on stop button
4. Click yes for the dialog box.
## Possible Solution
1. We can make the stop button unclickable until the recording starts
2. Or we can check before exiting that recording is started or not
https://user-images.githubusercontent.com/85510030/163064806-dcd0faad-7fcb-4e05-a7e8-d09c76ec4764.mov
| 1.0 | App crashes when we stop the track before the track starts recording - ## Issue
In Develop branch, When a user stops the track before the track starts recording or the timer starts, The app crashes
## Step to reproduce
1. Go to GPS mode
2. Click on start tracking
3. Immediately after the recording screen comes up, click on stop button
4. Click yes for the dialog box.
## Possible Solution
1. We can make the stop button unclickable until the recording starts
2. Or we can check before exiting that recording is started or not
https://user-images.githubusercontent.com/85510030/163064806-dcd0faad-7fcb-4e05-a7e8-d09c76ec4764.mov
| priority | app crashes when we stop the track before the track starts recording issue in develop branch when a user stops the track before the track starts recording or the timer starts the app crashes step to reproduce go to gps mode click on start tracking immediately after the recording screen comes up click on stop button click yes for the dialog box possible solution we can make the stop button unclickable untill the recording starts or we can check before exiting that recording is started or not | 1 |
55,130 | 3,072,154,116 | IssuesEvent | 2015-08-19 15:36:38 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | waitForText with scroll set to true scrolls only first ListView it finds | bug imported Priority-Medium wontfix | _From [gaz...@gmail.com](https://code.google.com/u/113313170396315103068/) on September 06, 2011 01:43:13_
What steps will reproduce the problem? 1. Create a tabbed activity with a ListView in each tab
2. In the second tab in the list, put "needle" at the end of the list
3. call solo.waitForText("needle", 1, 2000, true)
What is the expected output?
list will scroll down until the match is found
What do you see instead?
list doesn't scroll, and we're stuck in an infinite loop in searchFor (timeout is ignored, opened another bug for that) What version of the product are you using? On what operating system? Robotium 2.5, Android 2.2 (Cyanogen 6). Please provide any additional information below. the reason for that is that the function:
public boolean scroll(int direction)
gets a list of all ListViews on screen, but scrolls only the first.
It should enter a for loop which scrolls all ListViews on screen.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=151_ | 1.0 | waitForText with scroll set to true scrolls only first ListView it finds - _From [gaz...@gmail.com](https://code.google.com/u/113313170396315103068/) on September 06, 2011 01:43:13_
What steps will reproduce the problem? 1. Create a tabbed activity with a ListView in each tab
2. In the second tab in the list, put "needle" at the end of the list
3. call solo.waitForText("needle", 1, 2000, true)
What is the expected output?
list will scroll down until the match is found
What do you see instead?
list doesn't scroll, and we're stuck in an infinite loop in searchFor (timeout is ignored, opened another bug for that) What version of the product are you using? On what operating system? Robotium 2.5, Android 2.2 (Cyanogen 6). Please provide any additional information below. the reason for that is that the function:
public boolean scroll(int direction)
gets a list of all ListViews on screen, but scrolls only the first.
It should enter a for loop which scrolls all ListViews on screen.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=151_ | priority | waitfortext with scroll set to true scrolls only first listview it finds from on september what steps will reproduce the problem create a tabbed activity with a listview in each tab in the second tab in the list put needle at the end of the list call solo waitfortext needle true what is the expected output list will scroll down until the match is found what do you see instead list doesn t scroll and we re stuck in an infinite loop in searchfor timeout is ignored opened another bug for that what version of the product are you using on what operating system robotium android cyanogen please provide any additional information below the reason for that is that the function public boolean scroll int direction gets a list of all listviews on screen but scrolls only the first it should enter a for loop which scrolls all listviews on screen original issue | 1 |
793,987 | 28,018,994,741 | IssuesEvent | 2023-03-28 02:43:58 | masastack/MASA.TSC | https://api.github.com/repos/masastack/MASA.TSC | closed | Indicator expression button position is inconsistent with the design draft | status/resolved type/ui severity/medium site/staging priority/p3 | 指标表达式按钮位置与设计稿不符
实际结果:

预期结果:

| 1.0 | Indicator expression button position is inconsistent with the design draft - 指标表达式按钮位置与设计稿不符
实际结果:

预期结果:

| priority | indicator expression button position is inconsistent with the design draft 指标表达式按钮位置与设计稿不符 实际结果: 预期结果: | 1 |
802,455 | 28,963,149,761 | IssuesEvent | 2023-05-10 05:29:31 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [xCluster] Enable consumer-side transactional consistency via setup replication options | kind/bug area/docdb priority/medium | Jira Link: [DB-6117](https://yugabyte.atlassian.net/browse/DB-6117)
### Description
Currently transactional consistency using apply_safe_time is enabled via a GFlag - xcluster_consistent_wal. Switch this to a parameter that is passed in using setup_replication.
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-6117]: https://yugabyte.atlassian.net/browse/DB-6117?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [xCluster] Enable consumer-side transactional consistency via setup replication options - Jira Link: [DB-6117](https://yugabyte.atlassian.net/browse/DB-6117)
### Description
Currently transactional consistency using apply_safe_time is enabled via a GFlag - xcluster_consistent_wal. Switch this to a parameter that is passed in using setup_replication.
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-6117]: https://yugabyte.atlassian.net/browse/DB-6117?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | enable consumer side transactional consistency via setup replication options jira link description currently transactional consistency using apply safe time is enabled via a gflag xcluster consistent wal switch this to a parameter that is passed in using setup replication warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
312,173 | 9,544,311,652 | IssuesEvent | 2019-05-01 13:50:28 | ukwa/w3act | https://api.github.com/repos/ukwa/w3act | closed | Report on Key Sites needed. | Enhancement Medium Priority | In order to manage the key sites list we need a report on Targets that are tagged as key sites (this functionality is only available to the Web Archivist using the Key sites checkbox in the Target record). Could the report be added to this page https://www.webarchive.org.uk/act/reportscreation/targets/ ?
The report can be viewed by all Users.
| 1.0 | Report on Key Sites needed. - In order to manage the key sites list we need a report on Targets that are tagged as key sites (this functionality is only available to the Web Archivist using the Key sites checkbox in the Target record). Could the report be added to this page https://www.webarchive.org.uk/act/reportscreation/targets/ ?
The report can be viewed by all Users.
| priority | report on key sites needed in order to manage the key sites list we need a report on targets that are tagged as key sites this functionality is only available to the web archivist using the key sites checkbox in the target record could the report be added to this page the report can be viewed by all users | 1 |
329,204 | 10,013,325,209 | IssuesEvent | 2019-07-15 14:59:47 | conan-io/conan | https://api.github.com/repos/conan-io/conan | opened | Proper error handling with 'conan get <ref> -r <remote>' when reference is not found | complex: low good first issue priority: medium stage: queue type: bug type: ux | Same as https://github.com/conan-io/conan/issues/5397 but for `conan get` command
Running `conan get non-existing/version@user/channel -r conan-center` it is printing the content of the whole 404 _file not found_ response from bintray.
| 1.0 | Proper error handling with 'conan get <ref> -r <remote>' when reference is not found - Same as https://github.com/conan-io/conan/issues/5397 but for `conan get` command
Running `conan get non-existing/version@user/channel -r conan-center` it is printing the content of the whole 404 _file not found_ response from bintray.
| priority | proper error handling with conan get r when reference is not found same as but for conan get command runnning conan get non existing version user channel r conan center it is printing the content of the whole file not found response from bintray | 1 |
80,525 | 3,563,413,171 | IssuesEvent | 2016-01-25 03:18:13 | HubTurbo/HubTurbo | https://api.github.com/repos/HubTurbo/HubTurbo | opened | Support user-defined lists in filters | feature-filters priority.medium type.enhancement | It would be nice if instead of typing `assignee:abc OR assignee:def OR assignee:xyz` we can type `assignee:in(group1)` or something like that where `group1` is a user defined list. | 1.0 | Support user-defined lists in filters - It would be nice if instead of typing `assignee:abc OR assignee:def OR assignee:xyz` we can type `assignee:in(group1)` or something like that where `group1` is a user defined list. | priority | support user defined lists in filters it would be nice if instead of typing assignee abc or assignee def or assignee xyz we can type assignee in or something like that where is a user defined list | 1 |
58,383 | 3,088,986,304 | IssuesEvent | 2015-08-25 19:18:33 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Зависание клиента при атаке множеством пользователей с хаба | bug imported Priority-Medium | _From [mike.kor...@gmail.com](https://code.google.com/u/101495626515388303633/) on October 14, 2014 22:34:47_
1. Подключаемся к специальному хабу.
2. Инициируем атаку на клиент множеством пользователей в ЛС.
3. GUI замерзает, клиент как-то шевелится и даже передает что-то, но после снятия атаки работоспособность не возвращается. https://yadi.sk/d/2-c1hDAac2Wtn В топе 2 нити (после атаки):
thread 224:
ntoskrnl.exe!KeWaitForMultipleObjects+0xc0a
ntoskrnl.exe!KeAcquireSpinLockAtDpcLevel+0x732
ntoskrnl.exe!KeWaitForMultipleObjects+0x26a
ntoskrnl.exe!FsRtlCancellableWaitForMultipleObjects+0xac
fltmgr.sys!FltSendMessage+0x4ea
MpFilter.sys!DllInitialize+0x2b2b5
MpFilter.sys!DllInitialize+0x233a8
fltmgr.sys!FltAcquirePushLockShared+0x907
fltmgr.sys!FltIsCallbackDataDirty+0xa39
fltmgr.sys+0x16c7
ntoskrnl.exe!MmCreateSection+0xbccf
ntoskrnl.exe!NtWaitForSingleObject+0xe04
ntoskrnl.exe!NtWaitForSingleObject+0xbc1
ntoskrnl.exe!NtWaitForSingleObject+0x1184
ntoskrnl.exe!KeSynchronizeExecution+0x3a23
ntdll.dll!ZwClose+0xa
KERNELBASE.dll!CloseHandle+0x13
kernel32.dll!CloseHandle+0x41
FlylinkDC_x64.exe+0x1e0621
FlylinkDC_x64.exe+0x7fe71
FlylinkDC_x64.exe+0x7ce44
FlylinkDC_x64.exe+0x74dec
FlylinkDC_x64.exe+0x532f3
USER32.dll!TranslateMessageEx+0x2a1
USER32.dll!TranslateMessage+0x1ea
FlylinkDC_x64.exe+0x9d03e
FlylinkDC_x64.exe+0x9c087
FlylinkDC_x64.exe+0x9c6f7
FlylinkDC_x64.exe+0x6e3294
kernel32.dll!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x21
thread 2772:
ntoskrnl.exe!KeWaitForMultipleObjects+0xc0a
ntoskrnl.exe!KeAcquireSpinLockAtDpcLevel+0x732
ntoskrnl.exe!KeWaitForSingleObject+0x19f
ntoskrnl.exe!NtWaitForSingleObject+0xde
ntoskrnl.exe!KeSynchronizeExecution+0x3a23
ntdll.dll!NtWaitForSingleObject+0xa
mswsock.dll+0x3d28
mswsock.dll!WSPStartup+0x8077
WS2_32.dll!select+0x15c
WS2_32.dll!select+0xdd
FlylinkDC_x64.exe+0x279619
FlylinkDC_x64.exe+0x2cb8d0
FlylinkDC_x64.exe+0x20181f
FlylinkDC_x64.exe+0x6e1caf
FlylinkDC_x64.exe+0x6e1d43
kernel32.dll!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x21
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1506_ | 1.0 | Зависание клиента при атаке множеством пользователей с хаба - _From [mike.kor...@gmail.com](https://code.google.com/u/101495626515388303633/) on October 14, 2014 22:34:47_
1. Подключаемся к специальному хабу.
2. Инициируем атаку на клиент множеством пользователей в ЛС.
3. GUI замерзает, клиент как-то шевелится и даже передает что-то, но после снятия атаки работоспособность не возвращается. https://yadi.sk/d/2-c1hDAac2Wtn В топе 2 нити (после атаки):
thread 224:
ntoskrnl.exe!KeWaitForMultipleObjects+0xc0a
ntoskrnl.exe!KeAcquireSpinLockAtDpcLevel+0x732
ntoskrnl.exe!KeWaitForMultipleObjects+0x26a
ntoskrnl.exe!FsRtlCancellableWaitForMultipleObjects+0xac
fltmgr.sys!FltSendMessage+0x4ea
MpFilter.sys!DllInitialize+0x2b2b5
MpFilter.sys!DllInitialize+0x233a8
fltmgr.sys!FltAcquirePushLockShared+0x907
fltmgr.sys!FltIsCallbackDataDirty+0xa39
fltmgr.sys+0x16c7
ntoskrnl.exe!MmCreateSection+0xbccf
ntoskrnl.exe!NtWaitForSingleObject+0xe04
ntoskrnl.exe!NtWaitForSingleObject+0xbc1
ntoskrnl.exe!NtWaitForSingleObject+0x1184
ntoskrnl.exe!KeSynchronizeExecution+0x3a23
ntdll.dll!ZwClose+0xa
KERNELBASE.dll!CloseHandle+0x13
kernel32.dll!CloseHandle+0x41
FlylinkDC_x64.exe+0x1e0621
FlylinkDC_x64.exe+0x7fe71
FlylinkDC_x64.exe+0x7ce44
FlylinkDC_x64.exe+0x74dec
FlylinkDC_x64.exe+0x532f3
USER32.dll!TranslateMessageEx+0x2a1
USER32.dll!TranslateMessage+0x1ea
FlylinkDC_x64.exe+0x9d03e
FlylinkDC_x64.exe+0x9c087
FlylinkDC_x64.exe+0x9c6f7
FlylinkDC_x64.exe+0x6e3294
kernel32.dll!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x21
thread 2772:
ntoskrnl.exe!KeWaitForMultipleObjects+0xc0a
ntoskrnl.exe!KeAcquireSpinLockAtDpcLevel+0x732
ntoskrnl.exe!KeWaitForSingleObject+0x19f
ntoskrnl.exe!NtWaitForSingleObject+0xde
ntoskrnl.exe!KeSynchronizeExecution+0x3a23
ntdll.dll!NtWaitForSingleObject+0xa
mswsock.dll+0x3d28
mswsock.dll!WSPStartup+0x8077
WS2_32.dll!select+0x15c
WS2_32.dll!select+0xdd
FlylinkDC_x64.exe+0x279619
FlylinkDC_x64.exe+0x2cb8d0
FlylinkDC_x64.exe+0x20181f
FlylinkDC_x64.exe+0x6e1caf
FlylinkDC_x64.exe+0x6e1d43
kernel32.dll!BaseThreadInitThunk+0xd
ntdll.dll!RtlUserThreadStart+0x21
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1506_ | priority | зависание клиента при атаке множеством пользователей с хаба from on october подключаемся к специальному хабу инициируем атаку на клиент множеством пользователей в лс gui замерзает клиент как то шевелится и даже передает что то но после снятия атаки работоспособность не возвращается в топе нити после атаки thread ntoskrnl exe kewaitformultipleobjects ntoskrnl exe keacquirespinlockatdpclevel ntoskrnl exe kewaitformultipleobjects ntoskrnl exe fsrtlcancellablewaitformultipleobjects fltmgr sys fltsendmessage mpfilter sys dllinitialize mpfilter sys dllinitialize fltmgr sys fltacquirepushlockshared fltmgr sys fltiscallbackdatadirty fltmgr sys ntoskrnl exe mmcreatesection ntoskrnl exe ntwaitforsingleobject ntoskrnl exe ntwaitforsingleobject ntoskrnl exe ntwaitforsingleobject ntoskrnl exe kesynchronizeexecution ntdll dll zwclose kernelbase dll closehandle dll closehandle flylinkdc exe flylinkdc exe flylinkdc exe flylinkdc exe flylinkdc exe dll translatemessageex dll translatemessage flylinkdc exe flylinkdc exe flylinkdc exe flylinkdc exe dll basethreadinitthunk ntdll dll rtluserthreadstart thread ntoskrnl exe kewaitformultipleobjects ntoskrnl exe keacquirespinlockatdpclevel ntoskrnl exe kewaitforsingleobject ntoskrnl exe ntwaitforsingleobject ntoskrnl exe kesynchronizeexecution ntdll dll ntwaitforsingleobject mswsock dll mswsock dll wspstartup dll select dll select flylinkdc exe flylinkdc exe flylinkdc exe flylinkdc exe flylinkdc exe dll basethreadinitthunk ntdll dll rtluserthreadstart original issue | 1 |
77,093 | 3,506,260,158 | IssuesEvent | 2016-01-08 05:03:20 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | [boss] Reliquary of Souls - Essence of Anger (BB #154) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 19.05.2010 20:54:11 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/154
<hr>
I cought the RoS, there was no seeth, and than, without reason, I've lost aggro, RoS has attacked an another warrior. It was after 4-5 seconds from pull, in this time there was no chance for this warrior to build aggro using thrown weapon. Misdirect was on me, so a whole situation was increadibly wrong.
After a wipe we went to kill some trash mobs. As a tank I had a second problem, the same as on Arcatraz - i wasn't generating any aggro. I couldn't make a reload during combat so there was no way to repair a bug. Moreover, after every pull i had the same problem.
To sum up - the RoS os bugged. | 1.0 | [boss] Reliquary of Souls - Essence of Anger (BB #154) - This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 19.05.2010 20:54:11 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/154
<hr>
I cought the RoS, there was no seeth, and than, without reason, I've lost aggro, RoS has attacked an another warrior. It was after 4-5 seconds from pull, in this time there was no chance for this warrior to build aggro using thrown weapon. Misdirect was on me, so a whole situation was increadibly wrong.
After a wipe we went to kill some trash mobs. As a tank I had a second problem, the same as on Arcatraz - i wasn't generating any aggro. I couldn't make a reload during combat so there was no way to repair a bug. Moreover, after every pull i had the same problem.
To sum up - the RoS os bugged. | priority | reliquary of souls essence of anger bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state invalid direct link i cought the ros there was no seeth and than without reason i ve lost aggro ros has attacked an another warrior it was after seconds from pull in this time there was no chance for this warrior to build aggro using thrown weapon misdirect was on me so a whole situation was increadibly wrong after a wipe we went to kill some trash mobs as a tank i had a second problem the same as on arcatraz i wasn t generating any aggro i couldn t make a reload during combat so there was no way to repair a bug moreover after every pull i had the same problem to sum up the ros os bugged | 1 |
828,407 | 31,826,594,858 | IssuesEvent | 2023-09-14 07:56:35 | nickhaf/eatPlot | https://api.github.com/repos/nickhaf/eatPlot | closed | `combine_plots`: Don't adjust the plot widths if only one has a bar - optional argument for the plot_widths | bug medium priority | Also concerncs #353 | 1.0 | `combine_plots`: Don't adjust the plot widths if only one has a bar - optional argument for the plot_widths - Also concerncs #353 | priority | combine plots don t adjust the plot widths if only one has a bar optional argument for the plot widths also concerncs | 1 |
463,802 | 13,301,073,066 | IssuesEvent | 2020-08-25 12:24:36 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | CW: Disable maximum worlds options for tier 11(dev)+ | Category: Accounts Priority: Medium Status: Fixed | Right now we have `world_maximum` option in DB and it set to 1 on staging.
Make Dev Tier (11) and up to have unlimited cloud worlds. | 1.0 | CW: Disable maximum worlds options for tier 11(dev)+ - Right now we have `world_maximum` option in DB and it set to 1 on staging.
Make Dev Tier (11) and up to have unlimited cloud worlds. | priority | cw disable maximum worlds options for tier dev right now we have world maximum option in db and it set to on staging make dev tier and up to have unlimited cloud worlds | 1 |
808,141 | 30,035,107,401 | IssuesEvent | 2023-06-27 12:16:15 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID: 321157] Logically dead code in subsys/bluetooth/audio/csip_set_coordinator.c | bug priority: medium area: Bluetooth Coverity area: Bluetooth Audio |
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/ce3317d03e46e1a218af3e712383d9216c48a992/subsys/bluetooth/audio/csip_set_coordinator.c
Category: Control flow issues
Function: `verify_members`
Component: Bluetooth
CID: [321157](https://scan9.scan.coverity.com/reports.htm#v29726/p12996/mergedDefectId=321157)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/ce3317d03e46e1a218af3e712383d9216c48a992/subsys/bluetooth/audio/csip_set_coordinator.c#L1500
Please fix or provide comments in coverity using the link:
https://scan9.scan.coverity.com/reports.htm#v29271/p12996.
For more information about the violation, check the [Coverity Reference](https://scan9.scan.coverity.com/doc/en/cov_checker_ref.html#static_checker_DEADCODE). ([CWE-561](http://cwe.mitre.org/data/definitions/561.html))
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| 1.0 | [Coverity CID: 321157] Logically dead code in subsys/bluetooth/audio/csip_set_coordinator.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/ce3317d03e46e1a218af3e712383d9216c48a992/subsys/bluetooth/audio/csip_set_coordinator.c
Category: Control flow issues
Function: `verify_members`
Component: Bluetooth
CID: [321157](https://scan9.scan.coverity.com/reports.htm#v29726/p12996/mergedDefectId=321157)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/ce3317d03e46e1a218af3e712383d9216c48a992/subsys/bluetooth/audio/csip_set_coordinator.c#L1500
Please fix or provide comments in coverity using the link:
https://scan9.scan.coverity.com/reports.htm#v29271/p12996.
For more information about the violation, check the [Coverity Reference](https://scan9.scan.coverity.com/doc/en/cov_checker_ref.html#static_checker_DEADCODE). ([CWE-561](http://cwe.mitre.org/data/definitions/561.html))
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| priority | logically dead code in subsys bluetooth audio csip set coordinator c static code scan issues found in file category control flow issues function verify members component bluetooth cid details please fix or provide comments in coverity using the link for more information about the violation check the note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file | 1 |
782,333 | 27,493,560,878 | IssuesEvent | 2023-03-04 22:41:34 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] ldb manifest_dump fails in 2.12 | kind/bug area/docdb priority/medium | Jira Link: [DB-5657](https://yugabyte.atlassian.net/browse/DB-5657)
### Description
In version 2.12, the `ldb manifest_dump` command fails when accessing a rocksdb MANIFEST file. The same file will open fine in the 2.17.1 version of the tool.
Run on a 2.12.10 cluster using the 2.12.10 version of the tool:
`./ldb manifest_dump --path=/mnt/d0/yb-data/tserver/data/rocksdb/table-8b2ad5af47ef46aabff43d2ad48bde03/tablet-0c19c426a5dc4123b577a319d4d9ba5c/MANIFEST-000013`
> Error in processing file /mnt/d0/yb-data/tserver/data/rocksdb/table-8b2ad5af47ef46aabff43d2ad48bde03/tablet-0c19c426a5dc4123b577a319d4d9ba5c/MANIFEST-000013 Illegal state (yb/rocksdb/db/version_edit.cc:180): Boundary values contains user frontier but extractor is not specified: key: "G\000\000S980a5363-2996-4c58-b5ab-b008f76c4362:1054314\000\000!!J\200#\200\001^\354B:\254\216\200J\0017\256\002\000\000\000\004" seqno: 1125899906842625 user_values { tag: 1 data: "\200\001^\354\206-\242\342\200J" } user_frontier { [type.googleapis.com/yb.docdb.ConsensusFrontierPB] { op_id { term: 1 index: 2 } hybrid_time: 6869422163067260928 history_cutoff: 18446744073709551614 max_value_level_ttl_expiration_time: 18446744073709551614 } }
Same command using the 2.17.1 tool:
> --------------- Column family "default" (ID 0) --------------
> log number: 4
> comparator: leveldb.BytewiseComparator
> --- level 0 --- version# 1 ---
> { number: 10 total_size: 5420804 base_size: 205661 being_compacted: 0 smallest: { seqno: 1125899906842625 user_frontier: 0x000055a8b9b0a2a0 -> { op_id: 1.2 hybrid_time: { physical: 1677105020280093 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <invalid> primary_schema_version: <NULL> cotable_schema_versions: [] } } largest: { seqno: 1125899907132804 user_frontier: 0x000055a8b9b0aaf0 -> { op_id: 2.145092 hybrid_time: { physical: 1677106773197952 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <initial> primary_schema_version: <NULL> cotable_schema_versions: [] } } }
> next_file_number 15 last_sequence 1125899907132838 prev_log_number 0 max_column_family 0 flushed_values 0x000055a8b9b0ab60 -> { op_id: 2.145092 hybrid_time: { physical: 1677106773197952 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <initial> primary_schema_version: <NULL> cotable_schema_versions: [] }
[DB-5657]: https://yugabyte.atlassian.net/browse/DB-5657?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [DocDB] ldb manifest_dump fails in 2.12 - Jira Link: [DB-5657](https://yugabyte.atlassian.net/browse/DB-5657)
### Description
In version 2.12, the `ldb manifest_dump` command fails when accessing a rocksdb MANIFEST file. The same file will open fine in the 2.17.1 version of the tool.
Run on a 2.12.10 cluster using the 2.12.10 version of the tool:
`./ldb manifest_dump --path=/mnt/d0/yb-data/tserver/data/rocksdb/table-8b2ad5af47ef46aabff43d2ad48bde03/tablet-0c19c426a5dc4123b577a319d4d9ba5c/MANIFEST-000013`
> Error in processing file /mnt/d0/yb-data/tserver/data/rocksdb/table-8b2ad5af47ef46aabff43d2ad48bde03/tablet-0c19c426a5dc4123b577a319d4d9ba5c/MANIFEST-000013 Illegal state (yb/rocksdb/db/version_edit.cc:180): Boundary values contains user frontier but extractor is not specified: key: "G\000\000S980a5363-2996-4c58-b5ab-b008f76c4362:1054314\000\000!!J\200#\200\001^\354B:\254\216\200J\0017\256\002\000\000\000\004" seqno: 1125899906842625 user_values { tag: 1 data: "\200\001^\354\206-\242\342\200J" } user_frontier { [type.googleapis.com/yb.docdb.ConsensusFrontierPB] { op_id { term: 1 index: 2 } hybrid_time: 6869422163067260928 history_cutoff: 18446744073709551614 max_value_level_ttl_expiration_time: 18446744073709551614 } }
Same command using the 2.17.1 tool:
> --------------- Column family "default" (ID 0) --------------
> log number: 4
> comparator: leveldb.BytewiseComparator
> --- level 0 --- version# 1 ---
> { number: 10 total_size: 5420804 base_size: 205661 being_compacted: 0 smallest: { seqno: 1125899906842625 user_frontier: 0x000055a8b9b0a2a0 -> { op_id: 1.2 hybrid_time: { physical: 1677105020280093 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <invalid> primary_schema_version: <NULL> cotable_schema_versions: [] } } largest: { seqno: 1125899907132804 user_frontier: 0x000055a8b9b0aaf0 -> { op_id: 2.145092 hybrid_time: { physical: 1677106773197952 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <initial> primary_schema_version: <NULL> cotable_schema_versions: [] } } }
> next_file_number 15 last_sequence 1125899907132838 prev_log_number 0 max_column_family 0 flushed_values 0x000055a8b9b0ab60 -> { op_id: 2.145092 hybrid_time: { physical: 1677106773197952 } history_cutoff: <invalid> hybrid_time_filter: <invalid> max_value_level_ttl_expiration_time: <initial> primary_schema_version: <NULL> cotable_schema_versions: [] }
[DB-5657]: https://yugabyte.atlassian.net/browse/DB-5657?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | ldb manifest dump fails in jira link description in version the ldb manifest dump command fails when accessing a rocksdb manifest file the same file will open fine in the version of the tool run on a cluster using the version of the tool ldb manifest dump path mnt yb data tserver data rocksdb table tablet manifest error in processing file mnt yb data tserver data rocksdb table tablet manifest illegal state yb rocksdb db version edit cc boundary values contains user frontier but extractor is not specified key g j seqno user values tag data user frontier op id term index hybrid time history cutoff max value level ttl expiration time same command using the tool column family default id log number comparator leveldb bytewisecomparator level version number total size base size being compacted smallest seqno user frontier op id hybrid time physical history cutoff hybrid time filter max value level ttl expiration time primary schema version cotable schema versions largest seqno user frontier op id hybrid time physical history cutoff hybrid time filter max value level ttl expiration time primary schema version cotable schema versions next file number last sequence prev log number max column family flushed values op id hybrid time physical history cutoff hybrid time filter max value level ttl expiration time primary schema version cotable schema versions | 1 |
585,595 | 17,501,424,518 | IssuesEvent | 2021-08-10 09:52:39 | nimblehq/nimble-medium-ios | https://api.github.com/repos/nimblehq/nimble-medium-ios | opened | As a user, I can edit my profile briefing from the left menu header when logged in | type : feature category : ui priority: medium | ## Why
Once users have logged in to the application successfully, they can opt to edit the profile from the menu header if needed.
## Acceptance Criteria
- [ ] Add the `Edit Profile` button with no text in the bottom right corner of the header view container.
- [ ] Set the tint of the `Edit Profile` button to white to match our green and white theme.
## Resources
- Sample menu header:
<img width="566" alt="Screen Shot 2021-08-10 at 16 31 18" src="https://user-images.githubusercontent.com/70877098/128844841-c1f48023-17f1-4de1-b222-eca915367375.png">
- Edit Profile button icon:
https://icon-library.com/icon/edit-icon-png-14.html | 1.0 | As a user, I can edit my profile briefing from the left menu header when logged in - ## Why
Once users have logged in to the application successfully, they can opt to edit the profile from the menu header if needed.
## Acceptance Criteria
- [ ] Add the `Edit Profile` button with no text in the bottom right corner of the header view container.
- [ ] Set the tint of the `Edit Profile` button to white to match our green and white theme.
## Resources
- Sample menu header:
<img width="566" alt="Screen Shot 2021-08-10 at 16 31 18" src="https://user-images.githubusercontent.com/70877098/128844841-c1f48023-17f1-4de1-b222-eca915367375.png">
- Edit Profile button icon:
https://icon-library.com/icon/edit-icon-png-14.html | priority | as a user i can edit my profile briefing from the left menu header when logged in why once the users logged in the application successfully they can opt to edit the profile from the menu header if needed acceptance criteria add the edit profile button with no text in the bottom right corner of the header view container set the tint of the edit profile button to white for matching with our green and white theme resources sample menu header img width alt screen shot at src edit profile button icon | 1 |
478,664 | 13,783,089,841 | IssuesEvent | 2020-10-08 18:39:18 | zeoflow/zeobot | https://api.github.com/repos/zeoflow/zeobot | closed | DraftRelease | Content | @bug @priority-medium | DraftRelease content contains the details of the last PR that was merged in the previous version | 1.0 | DraftRelease | Content - DraftRelease content contains the details of the last PR that was merged in the previous version | priority | draftrelease content draftrelease content contains the last pr details that was merged in the previous version | 1 |
362,644 | 10,730,129,096 | IssuesEvent | 2019-10-28 16:49:45 | AY1920S1-CS2103T-T12-3/main | https://api.github.com/repos/AY1920S1-CS2103T-T12-3/main | closed | As a coach/captain of male and female teams I want to filter players according to their gender | priority.Medium type.Story | To be able to plan for team/trainings more efficiently. | 1.0 | As a coach/captain of male and female teams I want to filter players according to their gender - To be able to plan for team/trainings more efficiently. | priority | as a coach captain of male and female teams i want to filter players according to their gender to be able to plan for team trainings more efficiently | 1 |
709,255 | 24,371,797,862 | IssuesEvent | 2022-10-03 20:00:08 | mit-cml/appinventor-sources | https://api.github.com/repos/mit-cml/appinventor-sources | opened | Custom label data for Charts | help wanted issue: noted for future Work status: forum feature request affects: ucr priority: medium | **Describe the desired feature**
Rather than trying to infer labels and colors for the legend, allow the user to specify the information using an associative list or dictionary.
**Give an example of how this feature would be used**
For a single data series, the Chart erroneously tries to treat each element in the line as its own entry in the legend, making the feature effectively useless.
**Why doesn't the current App Inventor system address this use case?**
See previous note.
**Why is this feature beneficial to App Inventor's educational mission?**
Explaining one's data is an important skill to learn, and giving students additional control over how their data are interpreted can help them build this skill. | 1.0 | Custom label data for Charts - **Describe the desired feature**
Rather than trying to infer labels and colors for the legend, allow the user to specify the information using an associative list or dictionary.
**Give an example of how this feature would be used**
For a single data series, the Chart erroneously tries to treat each element in the line as its own entry in the legend, making the feature effectively useless.
**Why doesn't the current App Inventor system address this use case?**
See previous note.
**Why is this feature beneficial to App Inventor's educational mission?**
Explaining one's data is an important skill to learn, and giving students additional control over how their data are interpreted can help them build this skill. | priority | custom label data for charts describe the desired feature rather than trying to infer labels and colors for the legend allow the user to specify the information using an associative list or dictionary give an example of how this feature would be used for a single data series the chart erroneously tries to treat each element in the line as its own entry in the legend making the feature effectively useless why doesn t the current app inventor system address this use case see previous note why is this feature beneficial to app inventor s educational mission explaining one s data is an important skill to learn and giving students additional control over how their data are interpreted can help them build this skill | 1 |
72,470 | 3,386,257,735 | IssuesEvent | 2015-11-27 16:22:22 | CosmosOS/Cosmos | https://api.github.com/repos/CosmosOS/Cosmos | closed | Dup tries to pop more stuff from analytical stack than there is! | area_compiler complexity_medium pending_verification priority_high | Log:
```
4> Error: Exception: System.Exception: Error compiling method 'SystemVoidKernelCommandsInputCommand': System.Exception: OpCode IL_014D: Dup tries to pop more stuff from analytical stack than there is!
4> at Cosmos.IL2CPU.ILOpCode.InterpretStackTypes(IDictionary`2 aOpCodes, Stack`1 aStack, Boolean& aSituationChanged, Int32 aMaxRecursionDepth) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\ILOpCode.cs:line 369
4> at Cosmos.IL2CPU.AppAssembler.InterpretInstructionsToDetermineStackTypes(List`1 aCurrentGroup) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 714
4> at Cosmos.IL2CPU.AppAssembler.EmitInstructions(MethodInfo aMethod, List`1 aCurrentGroup, Boolean& emitINT3) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 557
4> at Cosmos.IL2CPU.AppAssembler.ProcessMethod(MethodInfo aMethod, List`1 aOpCodes) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 514 ---> System.Exception: OpCode IL_014D: Dup tries to pop more stuff from analytical stack than there is!
4> at Cosmos.IL2CPU.ILOpCode.InterpretStackTypes(IDictionary`2 aOpCodes, Stack`1 aStack, Boolean& aSituationChanged, Int32 aMaxRecursionDepth) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\ILOpCode.cs:line 369
4> at Cosmos.IL2CPU.AppAssembler.InterpretInstructionsToDetermineStackTypes(List`1 aCurrentGroup) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 714
4> at Cosmos.IL2CPU.AppAssembler.EmitInstructions(MethodInfo aMethod, List`1 aCurrentGroup, Boolean& emitINT3) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 557
4> at Cosmos.IL2CPU.AppAssembler.ProcessMethod(MethodInfo aMethod, List`1 aOpCodes) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 514
4> --- End of inner exception stack trace ---
4> at Cosmos.IL2CPU.AppAssembler.ProcessMethod(MethodInfo aMethod, List`1 aOpCodes) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 529
4> at Cosmos.IL2CPU.ILScanner.Assemble() in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\ILScanner.cs:line 944
4> at Cosmos.IL2CPU.ILScanner.Execute(MethodBase aStartMethod) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\ILScanner.cs:line 256
4> at Cosmos.IL2CPU.CompilerEngine.Execute() in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\CompilerEngine.cs:line 238
```
And there is code where it heappen:
```C#
public static void InputCommand()
{
Console.Write("D:/command>");
comd = Console.ReadLine();
comd = comd.ToLower();
if (comd == "reboot") h.Power.Restart();
else if (comd == "shutdown") h.Power.Shutdown();
else if (comd == "echo")
{
Console.Write("Echo>");
arg = Console.ReadLine();
Console.WriteLine(arg);
}
else if (comd == "notepad")
{
System.CLI.Applications.Notepad();
}
else if (comd == "cls")
{
Console.Clear();
Console.WriteLine("TriangleOS");
Console.WriteLine("=============================");
}
else if (comd == "soundtest")
{
Console.Write("Frequency>");
arg = Console.ReadLine();
Console.Write("Duration>");
optarg = Console.ReadLine();
Console.Write("Eh, this isn't implemted right now.");
//h.Multimedia.Speakers.CallSound(int.Parse(arg), int.Parse(optarg));
}
else if (comd == "boot")
{
Console.WriteLine("Starting TriangleOS.Drivers . . .");
//ProcessManager.Process Audio = new ProcessManager.Process();
//ProcessManager.Process Graphics = new ProcessManager.Process();
//Graphics.ProcessThread = new System.Threading.Thread(
h.Graphics.LowLevel.init();
//);
//ProcessManager.Process Mouse = new ProcessManager.Process();
//Mouse.ProcessThread = new System.Threading.Thread(
h.Mouse.InitMouse();
//);
//Graphics.Start();
//Mouse.Start();
//Audio.ProcessThread = new System.Threading.Thread(
h.Multimedia.Speakers.IntailizeAudio();
//);
Kernel.GUI();
}
else if (comd == "cliboot")
{
System.CLI.Controls.TextBox Text = new System.CLI.Controls.TextBox();
Text.x = 1;
Text.y = 1;
Text.length = 20;
Text.DrawTextBox();
Text.TypeInto();
System.CLI.Controls.Button OK = new System.CLI.Controls.Button();
OK.y = 22;
OK.x = 1;
OK.width = 6;
OK.height = 1;
OK.text = "OK";
OK.DrawButton();
}
else if (comd == "calculator")
{
System.CLI.Applications.Calculator();
}
else if (comd == "cd")
{
h.Graphics.Console.ErrO("Impossible operation performed. Can't request I/O while it isn't running!");
}
else if (comd == "dir")
{
Console.WriteLine("This isn't folder. you can use <cd> to go up folder.");
}
else if (comd == "paint")
{
System.CLI.Applications.Paint();
}
else if (comd == "changelog")
{
Console.WriteLine("You are running version 0.0.3 Dev. Only Luka see the DEV!");
Console.WriteLine("v0.0.2:");
Console.WriteLine("Blue screen with cursor. not clearing.");
Console.WriteLine("v0.0.3:");
Console.WriteLine("Command Line Shell with Broken CLI, but working unresponsive GUI, but with DOS-like Shell. I/O, Audio, Multithreading, Shutdown doesn't work.");
}
else if (comd == "help")
{
Console.WriteLine("Copyright 2015 Thontelix TriangleOS. Special thanks to Cosmos .net ASM Compiler.");
Console.WriteLine("type CHANGELOG to get version changes.");
Console.WriteLine("How to use:");
Console.WriteLine("After every typed command, press enter. the output of command will be detailed. If there is blinking bottom line cursor, then you need to input something, if it doesn't give any feedback, then its operating a activity. If you want GUI, type 'boot' and press enter.");
Console.WriteLine("Commands:");
Console.WriteLine("shutdown - Gives you ability to safe turn off PC");
Console.WriteLine("boot - Boots you into TriangleOS");
Console.WriteLine("reboot - Reboots your PC");
Console.WriteLine("cd - Travel through directories");
Console.WriteLine("dir - Read content of directory");
Console.WriteLine("echo - Backs string you enter");
Console.WriteLine("shutdown - Backs string you enter");
Console.WriteLine("soundtest - Speakers Drivers. Doesnt work for now");
Console.WriteLine("help - Gives you list of commands.");
Console.WriteLine("paint - Console Paint App. CAUTION:Not for epilepsy persons.");
}
else
{
Console.WriteLine("That command doesn't exist. :(");
}
}
``` | 1.0 | Dup tries to pop more stuff from analytical stack than there is! - Log:
```
4> Error: Exception: System.Exception: Error compiling method 'SystemVoidKernelCommandsInputCommand': System.Exception: OpCode IL_014D: Dup tries to pop more stuff from analytical stack than there is!
4> at Cosmos.IL2CPU.ILOpCode.InterpretStackTypes(IDictionary`2 aOpCodes, Stack`1 aStack, Boolean& aSituationChanged, Int32 aMaxRecursionDepth) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\ILOpCode.cs:line 369
4> at Cosmos.IL2CPU.AppAssembler.InterpretInstructionsToDetermineStackTypes(List`1 aCurrentGroup) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 714
4> at Cosmos.IL2CPU.AppAssembler.EmitInstructions(MethodInfo aMethod, List`1 aCurrentGroup, Boolean& emitINT3) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 557
4> at Cosmos.IL2CPU.AppAssembler.ProcessMethod(MethodInfo aMethod, List`1 aOpCodes) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 514 ---> System.Exception: OpCode IL_014D: Dup tries to pop more stuff from analytical stack than there is!
4> at Cosmos.IL2CPU.ILOpCode.InterpretStackTypes(IDictionary`2 aOpCodes, Stack`1 aStack, Boolean& aSituationChanged, Int32 aMaxRecursionDepth) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\ILOpCode.cs:line 369
4> at Cosmos.IL2CPU.AppAssembler.InterpretInstructionsToDetermineStackTypes(List`1 aCurrentGroup) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 714
4> at Cosmos.IL2CPU.AppAssembler.EmitInstructions(MethodInfo aMethod, List`1 aCurrentGroup, Boolean& emitINT3) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 557
4> at Cosmos.IL2CPU.AppAssembler.ProcessMethod(MethodInfo aMethod, List`1 aOpCodes) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 514
4> --- End of inner exception stack trace ---
4> at Cosmos.IL2CPU.AppAssembler.ProcessMethod(MethodInfo aMethod, List`1 aOpCodes) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\AppAssembler.cs:line 529
4> at Cosmos.IL2CPU.ILScanner.Assemble() in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\ILScanner.cs:line 944
4> at Cosmos.IL2CPU.ILScanner.Execute(MethodBase aStartMethod) in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\ILScanner.cs:line 256
4> at Cosmos.IL2CPU.CompilerEngine.Execute() in C:\Users\Luka\Desktop\Cosmos-master\source\Cosmos.IL2CPU\CompilerEngine.cs:line 238
```
And there is code where it heappen:
```C#
public static void InputCommand()
{
Console.Write("D:/command>");
comd = Console.ReadLine();
comd = comd.ToLower();
if (comd == "reboot") h.Power.Restart();
else if (comd == "shutdown") h.Power.Shutdown();
else if (comd == "echo")
{
Console.Write("Echo>");
arg = Console.ReadLine();
Console.WriteLine(arg);
}
else if (comd == "notepad")
{
System.CLI.Applications.Notepad();
}
else if (comd == "cls")
{
Console.Clear();
Console.WriteLine("TriangleOS");
Console.WriteLine("=============================");
}
else if (comd == "soundtest")
{
Console.Write("Frequency>");
arg = Console.ReadLine();
Console.Write("Duration>");
optarg = Console.ReadLine();
Console.Write("Eh, this isn't implemted right now.");
//h.Multimedia.Speakers.CallSound(int.Parse(arg), int.Parse(optarg));
}
else if (comd == "boot")
{
Console.WriteLine("Starting TriangleOS.Drivers . . .");
//ProcessManager.Process Audio = new ProcessManager.Process();
//ProcessManager.Process Graphics = new ProcessManager.Process();
//Graphics.ProcessThread = new System.Threading.Thread(
h.Graphics.LowLevel.init();
//);
//ProcessManager.Process Mouse = new ProcessManager.Process();
//Mouse.ProcessThread = new System.Threading.Thread(
h.Mouse.InitMouse();
//);
//Graphics.Start();
//Mouse.Start();
//Audio.ProcessThread = new System.Threading.Thread(
h.Multimedia.Speakers.IntailizeAudio();
//);
Kernel.GUI();
}
else if (comd == "cliboot")
{
System.CLI.Controls.TextBox Text = new System.CLI.Controls.TextBox();
Text.x = 1;
Text.y = 1;
Text.length = 20;
Text.DrawTextBox();
Text.TypeInto();
System.CLI.Controls.Button OK = new System.CLI.Controls.Button();
OK.y = 22;
OK.x = 1;
OK.width = 6;
OK.height = 1;
OK.text = "OK";
OK.DrawButton();
}
else if (comd == "calculator")
{
System.CLI.Applications.Calculator();
}
else if (comd == "cd")
{
h.Graphics.Console.ErrO("Impossible operation performed. Can't request I/O while it isn't running!");
}
else if (comd == "dir")
{
Console.WriteLine("This isn't folder. you can use <cd> to go up folder.");
}
else if (comd == "paint")
{
System.CLI.Applications.Paint();
}
else if (comd == "changelog")
{
Console.WriteLine("You are running version 0.0.3 Dev. Only Luka see the DEV!");
Console.WriteLine("v0.0.2:");
Console.WriteLine("Blue screen with cursor. not clearing.");
Console.WriteLine("v0.0.3:");
Console.WriteLine("Command Line Shell with Broken CLI, but working unresponsive GUI, but with DOS-like Shell. I/O, Audio, Multithreading, Shutdown doesn't work.");
}
else if (comd == "help")
{
Console.WriteLine("Copyright 2015 Thontelix TriangleOS. Special thanks to Cosmos .net ASM Compiler.");
Console.WriteLine("type CHANGELOG to get version changes.");
Console.WriteLine("How to use:");
Console.WriteLine("After every typed command, press enter. the output of command will be detailed. If there is blinking bottom line cursor, then you need to input something, if it doesn't give any feedback, then its operating a activity. If you want GUI, type 'boot' and press enter.");
Console.WriteLine("Commands:");
Console.WriteLine("shutdown - Gives you ability to safe turn off PC");
Console.WriteLine("boot - Boots you into TriangleOS");
Console.WriteLine("reboot - Reboots your PC");
Console.WriteLine("cd - Travel through directories");
Console.WriteLine("dir - Read content of directory");
Console.WriteLine("echo - Backs string you enter");
Console.WriteLine("shutdown - Backs string you enter");
Console.WriteLine("soundtest - Speakers Drivers. Doesnt work for now");
Console.WriteLine("help - Gives you list of commands.");
Console.WriteLine("paint - Console Paint App. CAUTION:Not for epilepsy persons.");
}
else
{
Console.WriteLine("That command doesn't exist. :(");
}
}
``` | priority | dup tries to pop more stuff from analytical stack than there is log error exception system exception error compiling method systemvoidkernelcommandsinputcommand system exception opcode il dup tries to pop more stuff from analytical stack than there is at cosmos ilopcode interpretstacktypes idictionary aopcodes stack astack boolean asituationchanged amaxrecursiondepth in c users luka desktop cosmos master source cosmos ilopcode cs line at cosmos appassembler interpretinstructionstodeterminestacktypes list acurrentgroup in c users luka desktop cosmos master source cosmos appassembler cs line at cosmos appassembler emitinstructions methodinfo amethod list acurrentgroup boolean in c users luka desktop cosmos master source cosmos appassembler cs line at cosmos appassembler processmethod methodinfo amethod list aopcodes in c users luka desktop cosmos master source cosmos appassembler cs line system exception opcode il dup tries to pop more stuff from analytical stack than there is at cosmos ilopcode interpretstacktypes idictionary aopcodes stack astack boolean asituationchanged amaxrecursiondepth in c users luka desktop cosmos master source cosmos ilopcode cs line at cosmos appassembler interpretinstructionstodeterminestacktypes list acurrentgroup in c users luka desktop cosmos master source cosmos appassembler cs line at cosmos appassembler emitinstructions methodinfo amethod list acurrentgroup boolean in c users luka desktop cosmos master source cosmos appassembler cs line at cosmos appassembler processmethod methodinfo amethod list aopcodes in c users luka desktop cosmos master source cosmos appassembler cs line end of inner exception stack trace at cosmos appassembler processmethod methodinfo amethod list aopcodes in c users luka desktop cosmos master source cosmos appassembler cs line at cosmos ilscanner assemble in c users luka desktop cosmos master source cosmos ilscanner cs line at cosmos ilscanner execute methodbase astartmethod in c users luka desktop cosmos master source cosmos ilscanner cs line at cosmos compilerengine execute in c users luka desktop cosmos master source cosmos compilerengine cs line and there is code where it heappen c public static void inputcommand console write d command comd console readline comd comd tolower if comd reboot h power restart else if comd shutdown h power shutdown else if comd echo console write echo arg console readline console writeline arg else if comd notepad system cli applications notepad else if comd cls console clear console writeline triangleos console writeline else if comd soundtest console write frequency arg console readline console write duration optarg console readline console write eh this isn t implemted right now h multimedia speakers callsound int parse arg int parse optarg else if comd boot console writeline starting triangleos drivers processmanager process audio new processmanager process processmanager process graphics new processmanager process graphics processthread new system threading thread h graphics lowlevel init processmanager process mouse new processmanager process mouse processthread new system threading thread h mouse initmouse graphics start mouse start audio processthread new system threading thread h multimedia speakers intailizeaudio kernel gui else if comd cliboot system cli controls textbox text new system cli controls textbox text x text y text length text drawtextbox text typeinto system cli controls button ok new system cli controls button ok y ok x ok width ok height ok text ok ok 
drawbutton else if comd calculator system cli applications calculator else if comd cd h graphics console erro impossible operation performed can t request i o while it isn t running else if comd dir console writeline this isn t folder you can use to go up folder else if comd paint system cli applications paint else if comd changelog console writeline you are running version dev only luka see the dev console writeline console writeline blue screen with cursor not clearing console writeline console writeline command line shell with broken cli but working unresponsive gui but with dos like shell i o audio multithreading shutdown doesn t work else if comd help console writeline copyright thontelix triangleos special thanks to cosmos net asm compiler console writeline type changelog to get version changes console writeline how to use console writeline after every typed command press enter the output of command will be detailed if there is blinking bottom line cursor then you need to input something if it doesn t give any feedback then its operating a activity if you want gui type boot and press enter console writeline commands console writeline shutdown gives you ability to safe turn off pc console writeline boot boots you into triangleos console writeline reboot reboots your pc console writeline cd travel through directories console writeline dir read content of directory console writeline echo backs string you enter console writeline shutdown backs string you enter console writeline soundtest speakers drivers doesnt work for now console writeline help gives you list of commands console writeline paint console paint app caution not for epilepsy persons else console writeline that command doesn t exist | 1 |
669,309 | 22,619,097,781 | IssuesEvent | 2022-06-30 03:25:48 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB][Perf][Sysbench][oltp_read_only] sysbench threads are getting timed out after 15 sec. and one of the YB cluster node becomes unstable. | kind/bug area/docdb priority/medium | Jira Link: [DB-2728](https://yugabyte.atlassian.net/browse/DB-2728)
### Description:
Observed the below-listed yb processes blocked for more than 120 sec. while running the sysbench "oltp_read_only" workload
```
Jun 23 04:02:30 localhost kernel: INFO: task iotp_Master_1xx:19759 blocked for more than 120 seconds.
Jun 23 04:02:30 localhost kernel: INFO: task iotp_Master_3xx:19761 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task Master_reactorx:19762 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task Master_reactorx:19764 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task sq_acceptor:19771 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task sq_worker:7205 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task sq_acceptor:19971 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task acceptorxxxxxxx:19972 blocked for more than 120 seconds.
```
Below is the sysbench "RUN" phase command:
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.24.97,172.151.28.137,172.151.26.163,172.151.19.188 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=600 --create_secondary=false --thread-init-timeout=90 --threads=60 run`
OR
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.24.97,172.151.28.137,172.151.26.163,172.151.19.188 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=600 --create_secondary=false --thread-init-timeout=90 --threads=100 run`
### Setup:
- YB version: "**yugabyte-2.14.0.0-b62**"
- YB cluster: Newly created, 4-node CentOS cluster running with c5.xlarge instance type, GP3 with 15000 provisioned IOPS ( 300 gbps )
- Client: Ubuntu
### Steps:
**Note:** This issue was observed with a single "RUN" phase, but to reproduce it, it is recommended to run multiple "RUN" phases with increasing "threads" while keeping all other params the same and without a cleanup / load phase in between; this is an "oltp_read_only" workload, hence no data change is expected.
- Create a 4-node CentOS yb cluster
- After installing sysbench from the yb repository on any client machine, run the sysbench CREATE phase:
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.28.137 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=160 --create_secondary=false --threads=1 create`
- After the CREATE phase, run the LOAD phase on the client
`sysbench /usr/local/share/sysbench/oltp_update_index.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.28.137 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=160 --create_secondary=false --threads=10 load`
- After the LOAD phase, execute the RUN phase on the client
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.24.97,172.151.28.137,172.151.26.163,172.151.19.188 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=600 --create_secondary=false --thread-init-timeout=90 --threads=60 run`
- Sleep for some time and execute the "RUN" phase again with an increased thread count.
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.24.97,172.151.28.137,172.151.26.163,172.151.19.188 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=600 --create_secondary=false --thread-init-timeout=90 --threads=100 run`
- On the client, the RUN phase gives the below error after some time ( approx 30-45 min )
```
FATAL: `thread_run' function failed: /usr/local/share/sysbench/oltp_common.lua:499: SQL error, errno = 0, state = 'XX000': Network error: Connect timeout Connection (0x0000000001c79e78) client 172.151.24.97:54640 => 172.151.24.97:9100, passed: 15.000s, timeout: 15.000s: kConnectFailed
FATAL: PQexecPrepared() failed: 7 Network error: Connect timeout Connection (0x0000000001c79e78) client 172.151.24.97:54642 => 172.151.24.97:9100, passed: 14.999s, timeout: 15.000s: kConnectFailed
FATAL: `thread_run' function failed: /usr/local/share/sysbench/oltp_common.lua:499: SQL error, errno = 0, state = 'XX000': Network error: Connect timeout Connection (0x0000000001c79e78) client 172.151.24.97:54642 => 172.151.24.97:9100, passed: 14.999s, timeout: 15.000s: kConnectFailed
```
- Further debugging on YB host "172.151.24.97" showed the below hung-task timeouts in "/var/log/messages"
```
Jun 23 04:02:30 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:30 localhost kernel: khugepaged D ffff88021f803ca0 0 44 2 0x00000000
Jun 23 04:02:30 localhost kernel: ffff88021f803c78 0000000000000046 ffff88021ff8cf10 ffff88021f803fd8
Jun 23 04:02:30 localhost kernel: ffff88021f803fd8 ffff88021f803fd8 ffff88021ff8cf10 ffff88021ff8cf10
Jun 23 04:02:30 localhost kernel: ffff88021f8fac38 ffffffffffffffff ffff88021f8fac40 ffff88021f803ca0
Jun 23 04:02:30 localhost kernel: Call Trace:
Jun 23 04:02:30 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff816aabbd>] rwsem_down_read_failed+0x10d/0x1a0
Jun 23 04:02:30 localhost kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffff816a8820>] down_read+0x20/0x40
Jun 23 04:02:30 localhost kernel: [<ffffffff811ea887>] khugepaged_scan_mm_slot+0x67/0xcf0
Jun 23 04:02:30 localhost kernel: [<ffffffff81098b30>] ? internal_add_timer+0x70/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff811eb64b>] khugepaged+0x13b/0x480
Jun 23 04:02:30 localhost kernel: [<ffffffff810b1920>] ? wake_up_atomic_t+0x30/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffff811eb510>] ? khugepaged_scan_mm_slot+0xcf0/0xcf0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b099f>] kthread+0xcf/0xe0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b08d0>] ? insert_kthread_work+0x40/0x40
Jun 23 04:02:30 localhost kernel: [<ffffffff816b4fd8>] ret_from_fork+0x58/0x90
Jun 23 04:02:30 localhost kernel: [<ffffffff810b08d0>] ? insert_kthread_work+0x40/0x40
Jun 23 04:02:30 localhost kernel: INFO: task kworker/u8:2:6617 blocked for more than 120 seconds.
Jun 23 04:02:30 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:30 localhost kernel: kworker/u8:2 D ffff8800359ee940 0 6617 2 0x00000080
Jun 23 04:02:30 localhost kernel: Workqueue: nvme nvme_reset_work [nvme]
Jun 23 04:02:30 localhost kernel: ffff88006c0c7d50 0000000000000046 ffff88021471bf40 ffff88006c0c7fd8
Jun 23 04:02:30 localhost kernel: ffff88006c0c7fd8 ffff88006c0c7fd8 ffff88021471bf40 ffff88021fb98000
Jun 23 04:02:30 localhost kernel: ffff88021fb98730 ffff8800359eea68 0000000000000000 ffff8800359ee940
Jun 23 04:02:30 localhost kernel: Call Trace:
Jun 23 04:02:30 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff81301dc5>] blk_mq_freeze_queue_wait+0x75/0xe0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b1920>] ? wake_up_atomic_t+0x30/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffffc00782e9>] nvme_wait_freeze+0x39/0x50 [nvme_core]
Jun 23 04:02:30 localhost kernel: [<ffffffffc00a543a>] nvme_reset_work+0x59a/0x8a3 [nvme]
Jun 23 04:02:30 localhost kernel: [<ffffffff810a882a>] process_one_work+0x17a/0x440
Jun 23 04:02:30 localhost kernel: [<ffffffff810a94f6>] worker_thread+0x126/0x3c0
Jun 23 04:02:30 localhost kernel: [<ffffffff810a93d0>] ? manage_workers.isra.24+0x2a0/0x2a0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b099f>] kthread+0xcf/0xe0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b08d0>] ? insert_kthread_work+0x40/0x40
Jun 23 04:02:30 localhost kernel: [<ffffffff816b4fd8>] ret_from_fork+0x58/0x90
Jun 23 04:02:30 localhost kernel: [<ffffffff810b08d0>] ? insert_kthread_work+0x40/0x40
Jun 23 04:02:30 localhost kernel: INFO: task iotp_Master_1xx:19759 blocked for more than 120 seconds.
Jun 23 04:02:30 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:30 localhost kernel: iotp_Master_1xx D ffff880215aabdc0 0 19759 1 0x00000080
Jun 23 04:02:30 localhost kernel: ffff880215aabd98 0000000000000086 ffff880035868000 ffff880215aabfd8
Jun 23 04:02:30 localhost kernel: ffff880215aabfd8 ffff880215aabfd8 ffff880035868000 ffff880035868000
Jun 23 04:02:30 localhost kernel: ffff88021f8fe478 ffffffffffffffff ffff88021f8fe480 ffff880215aabdc0
Jun 23 04:02:30 localhost kernel: Call Trace:
Jun 23 04:02:30 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff816aabbd>] rwsem_down_read_failed+0x10d/0x1a0
Jun 23 04:02:30 localhost kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffff816a8820>] down_read+0x20/0x40
Jun 23 04:02:30 localhost kernel: [<ffffffff816b029c>] __do_page_fault+0x37c/0x450
Jun 23 04:02:30 localhost kernel: [<ffffffff816b0456>] trace_do_page_fault+0x56/0x150
Jun 23 04:02:30 localhost kernel: [<ffffffff816afaea>] do_async_page_fault+0x1a/0xd0
Jun 23 04:02:30 localhost kernel: [<ffffffff816ac5f8>] async_page_fault+0x28/0x30
Jun 23 04:02:30 localhost kernel: INFO: task iotp_Master_3xx:19761 blocked for more than 120 seconds.
Jun 23 04:02:30 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:30 localhost kernel: iotp_Master_3xx D ffffffff00000000 0 19761 1 0x00000080
Jun 23 04:02:30 localhost kernel: ffff88020a607d78 0000000000000086 ffff880035868fd0 ffff88020a607fd8
Jun 23 04:02:30 localhost kernel: ffff88020a607fd8 ffff88020a607fd8 ffff880035868fd0 ffff880035868fd0
Jun 23 04:02:30 localhost kernel: ffff88021f8fe480 ffff88021f8fe478 ffffffff00000001 ffffffff00000000
Jun 23 04:02:30 localhost kernel: Call Trace:
Jun 23 04:02:30 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:30 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffff812b78c0>] ? file_map_prot_check+0xd0/0xd0
Jun 23 04:02:30 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:30 localhost kernel: [<ffffffff811a11f0>] vm_mmap_pgoff+0xa0/0x110
Jun 23 04:02:31 localhost kernel: [<ffffffff811b6d86>] SyS_mmap_pgoff+0x116/0x270
Jun 23 04:02:31 localhost kernel: [<ffffffff8102fbd2>] SyS_mmap+0x22/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
Jun 23 04:02:31 localhost kernel: INFO: task Master_reactorx:19762 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: Master_reactorx D ffffffff00000000 0 19762 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff88020b757d78 0000000000000086 ffff88003586bf40 ffff88020b757fd8
Jun 23 04:02:31 localhost kernel: ffff88020b757fd8 ffff88020b757fd8 ffff88003586bf40 ffff88003586bf40
Jun 23 04:02:31 localhost kernel: ffff88021f8fe480 ffff88021f8fe478 ffffffff00000001 ffffffff00000000
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff812b78c0>] ? file_map_prot_check+0xd0/0xd0
Jun 23 04:02:31 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:31 localhost kernel: [<ffffffff811a11f0>] vm_mmap_pgoff+0xa0/0x110
Jun 23 04:02:31 localhost kernel: [<ffffffff816b0091>] ? __do_page_fault+0x171/0x450
Jun 23 04:02:31 localhost kernel: [<ffffffff811b6d86>] SyS_mmap_pgoff+0x116/0x270
Jun 23 04:02:31 localhost kernel: [<ffffffff8102fbd2>] SyS_mmap+0x22/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
Jun 23 04:02:31 localhost kernel: INFO: task Master_reactorx:19764 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: Master_reactorx D ffff88020a627dc0 0 19764 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff88020a627d98 0000000000000086 ffff88017cd7cf10 ffff88020a627fd8
Jun 23 04:02:31 localhost kernel: ffff88020a627fd8 ffff88020a627fd8 ffff88017cd7cf10 ffff88017cd7cf10
Jun 23 04:02:31 localhost kernel: ffff88021f8fe478 ffffffffffffffff ffff88021f8fe480 ffff88020a627dc0
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aabbd>] rwsem_down_read_failed+0x10d/0x1a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816a8820>] down_read+0x20/0x40
Jun 23 04:02:31 localhost kernel: [<ffffffff816b029c>] __do_page_fault+0x37c/0x450
Jun 23 04:02:31 localhost kernel: [<ffffffff816b0456>] trace_do_page_fault+0x56/0x150
Jun 23 04:02:31 localhost kernel: [<ffffffff816afaea>] do_async_page_fault+0x1a/0xd0
Jun 23 04:02:31 localhost kernel: [<ffffffff816ac5f8>] async_page_fault+0x28/0x30
Jun 23 04:02:31 localhost kernel: INFO: task sq_acceptor:19771 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: sq_acceptor D ffffffff00000000 0 19771 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff8800af7a7d78 0000000000000086 ffff8800ba340fd0 ffff8800af7a7fd8
Jun 23 04:02:31 localhost kernel: ffff8800af7a7fd8 ffff8800af7a7fd8 ffff8800ba340fd0 ffff8800ba340fd0
Jun 23 04:02:31 localhost kernel: ffff88021f8fe480 ffff88021f8fe478 ffffffff00000001 ffffffff00000000
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff812b78c0>] ? file_map_prot_check+0xd0/0xd0
Jun 23 04:02:31 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:31 localhost kernel: [<ffffffff811a11f0>] vm_mmap_pgoff+0xa0/0x110
Jun 23 04:02:31 localhost kernel: [<ffffffff8156e998>] ? release_sock+0x118/0x170
Jun 23 04:02:31 localhost kernel: [<ffffffff811b6d86>] SyS_mmap_pgoff+0x116/0x270
Jun 23 04:02:31 localhost kernel: [<ffffffff8102fbd2>] SyS_mmap+0x22/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
Jun 23 04:02:31 localhost kernel: INFO: task sq_worker:7205 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: sq_worker D ffff8800af753dc0 0 7205 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff8800af753d98 0000000000000086 ffff880097f69fa0 ffff8800af753fd8
Jun 23 04:02:31 localhost kernel: ffff8800af753fd8 ffff8800af753fd8 ffff880097f69fa0 ffff880097f69fa0
Jun 23 04:02:31 localhost kernel: ffff88021f8fe478 ffffffffffffffff ffff88021f8fe480 ffff8800af753dc0
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aabbd>] rwsem_down_read_failed+0x10d/0x1a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816a8820>] down_read+0x20/0x40
Jun 23 04:02:31 localhost kernel: [<ffffffff816b029c>] __do_page_fault+0x37c/0x450
Jun 23 04:02:31 localhost kernel: [<ffffffff816b0456>] trace_do_page_fault+0x56/0x150
Jun 23 04:02:31 localhost kernel: [<ffffffff816afaea>] do_async_page_fault+0x1a/0xd0
Jun 23 04:02:31 localhost kernel: [<ffffffff816ac5f8>] async_page_fault+0x28/0x30
Jun 23 04:02:31 localhost kernel: INFO: task sq_acceptor:19971 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: sq_acceptor D ffffffff00000000 0 19971 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff880074cefe18 0000000000000086 ffff88020d7cbf40 ffff880074ceffd8
Jun 23 04:02:31 localhost kernel: ffff880074ceffd8 ffff880074ceffd8 ffff88020d7cbf40 ffff88020d7cbf40
Jun 23 04:02:31 localhost kernel: ffff88021f8fac40 ffff88021f8fac38 ffffffff00000004 ffffffff00000000
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:31 localhost kernel: [<ffffffff811ba1f0>] SyS_mprotect+0xd0/0x290
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
Jun 23 04:02:31 localhost kernel: INFO: task acceptorxxxxxxx:19972 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: acceptorxxxxxxx D ffffffff00000000 0 19972 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff88021dba3e18 0000000000000086 ffff88020d7ceeb0 ffff88021dba3fd8
Jun 23 04:02:31 localhost kernel: ffff88021dba3fd8 ffff88021dba3fd8 ffff88020d7ceeb0 ffff88020d7ceeb0
Jun 23 04:02:31 localhost kernel: ffff88021f8fac40 ffff88021f8fac38 ffffffff00000001 ffffffff00000000
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:31 localhost kernel: [<ffffffff811b80d0>] SyS_brk+0x50/0x200
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
```
- Not able to access "/mnt/d0" to look at further messages, as it is in a hung state. | 1.0 | [DocDB][Perf][Sysbench][oltp_read_only] sysbench threads are getting timed out after 15 sec. and one of the YB cluster node becomes unstable. - Jira Link: [DB-2728](https://yugabyte.atlassian.net/browse/DB-2728)
### Description:
Observed the below-listed yb processes blocked for more than 120 sec. while running the sysbench "oltp_read_only" workload
```
Jun 23 04:02:30 localhost kernel: INFO: task iotp_Master_1xx:19759 blocked for more than 120 seconds.
Jun 23 04:02:30 localhost kernel: INFO: task iotp_Master_3xx:19761 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task Master_reactorx:19762 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task Master_reactorx:19764 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task sq_acceptor:19771 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task sq_worker:7205 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task sq_acceptor:19971 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: INFO: task acceptorxxxxxxx:19972 blocked for more than 120 seconds.
```
Below is the sysbench "RUN" phase command:
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.24.97,172.151.28.137,172.151.26.163,172.151.19.188 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=600 --create_secondary=false --thread-init-timeout=90 --threads=60 run`
OR
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.24.97,172.151.28.137,172.151.26.163,172.151.19.188 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=600 --create_secondary=false --thread-init-timeout=90 --threads=100 run`
### Setup:
- YB version: "**yugabyte-2.14.0.0-b62**"
- YB cluster: Newly created, 4-node CentOS cluster running with c5.xlarge instance type, GP3 with 15000 provisioned IOPS ( 300 gbps )
- Client: Ubuntu
### Steps:
**Note:** This issue was observed with a single "RUN" phase, but to reproduce it, it is recommended to run multiple "RUN" phases with increasing "threads" while keeping all other params the same and without a cleanup / load phase in between; this is an "oltp_read_only" workload, hence no data change is expected.
- Create a 4-node CentOS yb cluster
- After installing sysbench from the yb repository on any client machine, run the sysbench CREATE phase:
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.28.137 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=160 --create_secondary=false --threads=1 create`
- After the CREATE phase, run the LOAD phase on the client
`sysbench /usr/local/share/sysbench/oltp_update_index.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.28.137 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=160 --create_secondary=false --threads=10 load`
- After the LOAD phase, execute the RUN phase on the client
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.24.97,172.151.28.137,172.151.26.163,172.151.19.188 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=600 --create_secondary=false --thread-init-timeout=90 --threads=60 run`
- Sleep for some time and execute the "RUN" phase again with an increased thread count.
`sysbench /usr/local/share/sysbench/oltp_read_only.lua --db-driver=pgsql --pgsql-db=yugabyte --pgsql-host=172.151.24.97,172.151.28.137,172.151.26.163,172.151.19.188 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=1000 --range_selects=false --time=1800 --warmup-time=600 --create_secondary=false --thread-init-timeout=90 --threads=100 run`
- On the client, the RUN phase gives the below error after some time ( approx 30-45 min )
```
FATAL: `thread_run' function failed: /usr/local/share/sysbench/oltp_common.lua:499: SQL error, errno = 0, state = 'XX000': Network error: Connect timeout Connection (0x0000000001c79e78) client 172.151.24.97:54640 => 172.151.24.97:9100, passed: 15.000s, timeout: 15.000s: kConnectFailed
FATAL: PQexecPrepared() failed: 7 Network error: Connect timeout Connection (0x0000000001c79e78) client 172.151.24.97:54642 => 172.151.24.97:9100, passed: 14.999s, timeout: 15.000s: kConnectFailed
FATAL: `thread_run' function failed: /usr/local/share/sysbench/oltp_common.lua:499: SQL error, errno = 0, state = 'XX000': Network error: Connect timeout Connection (0x0000000001c79e78) client 172.151.24.97:54642 => 172.151.24.97:9100, passed: 14.999s, timeout: 15.000s: kConnectFailed
```
- Further debugging on YB host "172.151.24.97" showed the below hung-task timeouts in "/var/log/messages"
```
Jun 23 04:02:30 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:30 localhost kernel: khugepaged D ffff88021f803ca0 0 44 2 0x00000000
Jun 23 04:02:30 localhost kernel: ffff88021f803c78 0000000000000046 ffff88021ff8cf10 ffff88021f803fd8
Jun 23 04:02:30 localhost kernel: ffff88021f803fd8 ffff88021f803fd8 ffff88021ff8cf10 ffff88021ff8cf10
Jun 23 04:02:30 localhost kernel: ffff88021f8fac38 ffffffffffffffff ffff88021f8fac40 ffff88021f803ca0
Jun 23 04:02:30 localhost kernel: Call Trace:
Jun 23 04:02:30 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff816aabbd>] rwsem_down_read_failed+0x10d/0x1a0
Jun 23 04:02:30 localhost kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffff816a8820>] down_read+0x20/0x40
Jun 23 04:02:30 localhost kernel: [<ffffffff811ea887>] khugepaged_scan_mm_slot+0x67/0xcf0
Jun 23 04:02:30 localhost kernel: [<ffffffff81098b30>] ? internal_add_timer+0x70/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff811eb64b>] khugepaged+0x13b/0x480
Jun 23 04:02:30 localhost kernel: [<ffffffff810b1920>] ? wake_up_atomic_t+0x30/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffff811eb510>] ? khugepaged_scan_mm_slot+0xcf0/0xcf0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b099f>] kthread+0xcf/0xe0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b08d0>] ? insert_kthread_work+0x40/0x40
Jun 23 04:02:30 localhost kernel: [<ffffffff816b4fd8>] ret_from_fork+0x58/0x90
Jun 23 04:02:30 localhost kernel: [<ffffffff810b08d0>] ? insert_kthread_work+0x40/0x40
Jun 23 04:02:30 localhost kernel: INFO: task kworker/u8:2:6617 blocked for more than 120 seconds.
Jun 23 04:02:30 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:30 localhost kernel: kworker/u8:2 D ffff8800359ee940 0 6617 2 0x00000080
Jun 23 04:02:30 localhost kernel: Workqueue: nvme nvme_reset_work [nvme]
Jun 23 04:02:30 localhost kernel: ffff88006c0c7d50 0000000000000046 ffff88021471bf40 ffff88006c0c7fd8
Jun 23 04:02:30 localhost kernel: ffff88006c0c7fd8 ffff88006c0c7fd8 ffff88021471bf40 ffff88021fb98000
Jun 23 04:02:30 localhost kernel: ffff88021fb98730 ffff8800359eea68 0000000000000000 ffff8800359ee940
Jun 23 04:02:30 localhost kernel: Call Trace:
Jun 23 04:02:30 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff81301dc5>] blk_mq_freeze_queue_wait+0x75/0xe0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b1920>] ? wake_up_atomic_t+0x30/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffffc00782e9>] nvme_wait_freeze+0x39/0x50 [nvme_core]
Jun 23 04:02:30 localhost kernel: [<ffffffffc00a543a>] nvme_reset_work+0x59a/0x8a3 [nvme]
Jun 23 04:02:30 localhost kernel: [<ffffffff810a882a>] process_one_work+0x17a/0x440
Jun 23 04:02:30 localhost kernel: [<ffffffff810a94f6>] worker_thread+0x126/0x3c0
Jun 23 04:02:30 localhost kernel: [<ffffffff810a93d0>] ? manage_workers.isra.24+0x2a0/0x2a0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b099f>] kthread+0xcf/0xe0
Jun 23 04:02:30 localhost kernel: [<ffffffff810b08d0>] ? insert_kthread_work+0x40/0x40
Jun 23 04:02:30 localhost kernel: [<ffffffff816b4fd8>] ret_from_fork+0x58/0x90
Jun 23 04:02:30 localhost kernel: [<ffffffff810b08d0>] ? insert_kthread_work+0x40/0x40
Jun 23 04:02:30 localhost kernel: INFO: task iotp_Master_1xx:19759 blocked for more than 120 seconds.
Jun 23 04:02:30 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:30 localhost kernel: iotp_Master_1xx D ffff880215aabdc0 0 19759 1 0x00000080
Jun 23 04:02:30 localhost kernel: ffff880215aabd98 0000000000000086 ffff880035868000 ffff880215aabfd8
Jun 23 04:02:30 localhost kernel: ffff880215aabfd8 ffff880215aabfd8 ffff880035868000 ffff880035868000
Jun 23 04:02:30 localhost kernel: ffff88021f8fe478 ffffffffffffffff ffff88021f8fe480 ffff880215aabdc0
Jun 23 04:02:30 localhost kernel: Call Trace:
Jun 23 04:02:30 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff816aabbd>] rwsem_down_read_failed+0x10d/0x1a0
Jun 23 04:02:30 localhost kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffff816a8820>] down_read+0x20/0x40
Jun 23 04:02:30 localhost kernel: [<ffffffff816b029c>] __do_page_fault+0x37c/0x450
Jun 23 04:02:30 localhost kernel: [<ffffffff816b0456>] trace_do_page_fault+0x56/0x150
Jun 23 04:02:30 localhost kernel: [<ffffffff816afaea>] do_async_page_fault+0x1a/0xd0
Jun 23 04:02:30 localhost kernel: [<ffffffff816ac5f8>] async_page_fault+0x28/0x30
Jun 23 04:02:30 localhost kernel: INFO: task iotp_Master_3xx:19761 blocked for more than 120 seconds.
Jun 23 04:02:30 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:30 localhost kernel: iotp_Master_3xx D ffffffff00000000 0 19761 1 0x00000080
Jun 23 04:02:30 localhost kernel: ffff88020a607d78 0000000000000086 ffff880035868fd0 ffff88020a607fd8
Jun 23 04:02:30 localhost kernel: ffff88020a607fd8 ffff88020a607fd8 ffff880035868fd0 ffff880035868fd0
Jun 23 04:02:30 localhost kernel: ffff88021f8fe480 ffff88021f8fe478 ffffffff00000001 ffffffff00000000
Jun 23 04:02:30 localhost kernel: Call Trace:
Jun 23 04:02:30 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:30 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:30 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:30 localhost kernel: [<ffffffff812b78c0>] ? file_map_prot_check+0xd0/0xd0
Jun 23 04:02:30 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:30 localhost kernel: [<ffffffff811a11f0>] vm_mmap_pgoff+0xa0/0x110
Jun 23 04:02:31 localhost kernel: [<ffffffff811b6d86>] SyS_mmap_pgoff+0x116/0x270
Jun 23 04:02:31 localhost kernel: [<ffffffff8102fbd2>] SyS_mmap+0x22/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
Jun 23 04:02:31 localhost kernel: INFO: task Master_reactorx:19762 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: Master_reactorx D ffffffff00000000 0 19762 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff88020b757d78 0000000000000086 ffff88003586bf40 ffff88020b757fd8
Jun 23 04:02:31 localhost kernel: ffff88020b757fd8 ffff88020b757fd8 ffff88003586bf40 ffff88003586bf40
Jun 23 04:02:31 localhost kernel: ffff88021f8fe480 ffff88021f8fe478 ffffffff00000001 ffffffff00000000
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff812b78c0>] ? file_map_prot_check+0xd0/0xd0
Jun 23 04:02:31 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:31 localhost kernel: [<ffffffff811a11f0>] vm_mmap_pgoff+0xa0/0x110
Jun 23 04:02:31 localhost kernel: [<ffffffff816b0091>] ? __do_page_fault+0x171/0x450
Jun 23 04:02:31 localhost kernel: [<ffffffff811b6d86>] SyS_mmap_pgoff+0x116/0x270
Jun 23 04:02:31 localhost kernel: [<ffffffff8102fbd2>] SyS_mmap+0x22/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
Jun 23 04:02:31 localhost kernel: INFO: task Master_reactorx:19764 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: Master_reactorx D ffff88020a627dc0 0 19764 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff88020a627d98 0000000000000086 ffff88017cd7cf10 ffff88020a627fd8
Jun 23 04:02:31 localhost kernel: ffff88020a627fd8 ffff88020a627fd8 ffff88017cd7cf10 ffff88017cd7cf10
Jun 23 04:02:31 localhost kernel: ffff88021f8fe478 ffffffffffffffff ffff88021f8fe480 ffff88020a627dc0
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aabbd>] rwsem_down_read_failed+0x10d/0x1a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816a8820>] down_read+0x20/0x40
Jun 23 04:02:31 localhost kernel: [<ffffffff816b029c>] __do_page_fault+0x37c/0x450
Jun 23 04:02:31 localhost kernel: [<ffffffff816b0456>] trace_do_page_fault+0x56/0x150
Jun 23 04:02:31 localhost kernel: [<ffffffff816afaea>] do_async_page_fault+0x1a/0xd0
Jun 23 04:02:31 localhost kernel: [<ffffffff816ac5f8>] async_page_fault+0x28/0x30
Jun 23 04:02:31 localhost kernel: INFO: task sq_acceptor:19771 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: sq_acceptor D ffffffff00000000 0 19771 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff8800af7a7d78 0000000000000086 ffff8800ba340fd0 ffff8800af7a7fd8
Jun 23 04:02:31 localhost kernel: ffff8800af7a7fd8 ffff8800af7a7fd8 ffff8800ba340fd0 ffff8800ba340fd0
Jun 23 04:02:31 localhost kernel: ffff88021f8fe480 ffff88021f8fe478 ffffffff00000001 ffffffff00000000
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff812b78c0>] ? file_map_prot_check+0xd0/0xd0
Jun 23 04:02:31 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:31 localhost kernel: [<ffffffff811a11f0>] vm_mmap_pgoff+0xa0/0x110
Jun 23 04:02:31 localhost kernel: [<ffffffff8156e998>] ? release_sock+0x118/0x170
Jun 23 04:02:31 localhost kernel: [<ffffffff811b6d86>] SyS_mmap_pgoff+0x116/0x270
Jun 23 04:02:31 localhost kernel: [<ffffffff8102fbd2>] SyS_mmap+0x22/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
Jun 23 04:02:31 localhost kernel: INFO: task sq_worker:7205 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: sq_worker D ffff8800af753dc0 0 7205 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff8800af753d98 0000000000000086 ffff880097f69fa0 ffff8800af753fd8
Jun 23 04:02:31 localhost kernel: ffff8800af753fd8 ffff8800af753fd8 ffff880097f69fa0 ffff880097f69fa0
Jun 23 04:02:31 localhost kernel: ffff88021f8fe478 ffffffffffffffff ffff88021f8fe480 ffff8800af753dc0
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aabbd>] rwsem_down_read_failed+0x10d/0x1a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816a8820>] down_read+0x20/0x40
Jun 23 04:02:31 localhost kernel: [<ffffffff816b029c>] __do_page_fault+0x37c/0x450
Jun 23 04:02:31 localhost kernel: [<ffffffff816b0456>] trace_do_page_fault+0x56/0x150
Jun 23 04:02:31 localhost kernel: [<ffffffff816afaea>] do_async_page_fault+0x1a/0xd0
Jun 23 04:02:31 localhost kernel: [<ffffffff816ac5f8>] async_page_fault+0x28/0x30
Jun 23 04:02:31 localhost kernel: INFO: task sq_acceptor:19971 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: sq_acceptor D ffffffff00000000 0 19971 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff880074cefe18 0000000000000086 ffff88020d7cbf40 ffff880074ceffd8
Jun 23 04:02:31 localhost kernel: ffff880074ceffd8 ffff880074ceffd8 ffff88020d7cbf40 ffff88020d7cbf40
Jun 23 04:02:31 localhost kernel: ffff88021f8fac40 ffff88021f8fac38 ffffffff00000004 ffffffff00000000
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:31 localhost kernel: [<ffffffff811ba1f0>] SyS_mprotect+0xd0/0x290
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
Jun 23 04:02:31 localhost kernel: INFO: task acceptorxxxxxxx:19972 blocked for more than 120 seconds.
Jun 23 04:02:31 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 23 04:02:31 localhost kernel: acceptorxxxxxxx D ffffffff00000000 0 19972 1 0x00000080
Jun 23 04:02:31 localhost kernel: ffff88021dba3e18 0000000000000086 ffff88020d7ceeb0 ffff88021dba3fd8
Jun 23 04:02:31 localhost kernel: ffff88021dba3fd8 ffff88021dba3fd8 ffff88020d7ceeb0 ffff88020d7ceeb0
Jun 23 04:02:31 localhost kernel: ffff88021f8fac40 ffff88021f8fac38 ffffffff00000001 ffffffff00000000
Jun 23 04:02:31 localhost kernel: Call Trace:
Jun 23 04:02:31 localhost kernel: [<ffffffff816a9589>] schedule+0x29/0x70
Jun 23 04:02:31 localhost kernel: [<ffffffff816aae75>] rwsem_down_write_failed+0x225/0x3a0
Jun 23 04:02:31 localhost kernel: [<ffffffff81331bd7>] call_rwsem_down_write_failed+0x17/0x30
Jun 23 04:02:31 localhost kernel: [<ffffffff816a886d>] down_write+0x2d/0x3d
Jun 23 04:02:31 localhost kernel: [<ffffffff811b80d0>] SyS_brk+0x50/0x200
Jun 23 04:02:31 localhost kernel: [<ffffffff816b5089>] system_call_fastpath+0x16/0x1b
```
- Not able to access "/mnt/d0" as it is in hung state to look further messages. | priority | sysbench threads are getting timed out after sec and one of the yb cluster node becomes unstable jira link description observed below listed yb process blocked for more than sec while running sysbench oltp read only work load jun localhost kernel info task iotp master blocked for more than seconds jun localhost kernel info task iotp master blocked for more than seconds jun localhost kernel info task master reactorx blocked for more than seconds jun localhost kernel info task master reactorx blocked for more than seconds jun localhost kernel info task sq acceptor blocked for more than seconds jun localhost kernel info task sq worker blocked for more than seconds jun localhost kernel info task sq acceptor blocked for more than seconds jun localhost kernel info task acceptorxxxxxxx blocked for more than seconds below sysbench run phase command sysbench usr local share sysbench oltp read only lua db driver pgsql pgsql db yugabyte pgsql host pgsql port pgsql user yugabyte tables table size serial cache size range selects false time warmup time create secondary false thread init timeout threads run or sysbench usr local share sysbench oltp read only lua db driver pgsql pgsql db yugabyte pgsql host pgsql port pgsql user yugabyte tables table size serial cache size range selects false time warmup time create secondary false thread init timeout threads run setup yb version yugabyte yb cluster newly created centoos node cluster running with xlarge instance type with provisioned iops gbps client ubuntu steps note observed this issue with single run phase but to reproduce this issue recommend to run multiple run phases with increasing threads by keeping all other params same without cleanup load phase this is oltp read only workload hence not expecting any data change create centos node yb cluster after installing sysbench from yb repository on any client machine run sysbench create sysbench usr local share sysbench oltp read only lua db driver pgsql pgsql db yugabyte pgsql host pgsql port pgsql user yugabyte tables table size serial cache size range selects false time warmup time create secondary false threads create after create phase run load phase on client sysbench usr local share sysbench oltp update index lua db driver pgsql pgsql db yugabyte pgsql host pgsql port pgsql user yugabyte tables table size serial cache size range selects false time warmup time create secondary false threads load after load phase execute run phase on client sysbench usr local share sysbench oltp read only lua db driver pgsql pgsql db yugabyte pgsql host pgsql port pgsql user yugabyte tables table size serial cache size range selects false time warmup time create secondary false thread init timeout threads run sleep for some time and again execute run phase with increasing run threads sysbench usr local share sysbench oltp read only lua db driver pgsql pgsql db yugabyte pgsql host pgsql port pgsql user yugabyte tables table size serial cache size range selects false time warmup time create secondary false thread init timeout threads run on client run phase gives below error after some time approx min fatal thread run function failed usr local share sysbench oltp common lua sql error errno state network error connect timeout connection client passed timeout kconnectfailed fatal pqexecprepared failed network error connect timeout connection client passed timeout kconnectfailed fatal thread run function failed usr local share 
sysbench oltp common lua sql error errno state network error connect timeout connection client passed timeout kconnectfailed further debugging on yb host saw below hung timeouts in var log messages jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel khugepaged d jun localhost kernel jun localhost kernel jun localhost kernel ffffffffffffffff jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down read failed jun localhost kernel call rwsem down read failed jun localhost kernel down read jun localhost kernel khugepaged scan mm slot jun localhost kernel internal add timer jun localhost kernel khugepaged jun localhost kernel wake up atomic t jun localhost kernel khugepaged scan mm slot jun localhost kernel kthread jun localhost kernel insert kthread work jun localhost kernel ret from fork jun localhost kernel insert kthread work jun localhost kernel info task kworker blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel kworker d jun localhost kernel workqueue nvme nvme reset work jun localhost kernel jun localhost kernel jun localhost kernel jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel blk mq freeze queue wait jun localhost kernel wake up atomic t jun localhost kernel nvme wait freeze jun localhost kernel nvme reset work jun localhost kernel process one work jun localhost kernel worker thread jun localhost kernel manage workers isra jun localhost kernel kthread jun localhost kernel insert kthread work jun localhost kernel ret from fork jun localhost kernel insert kthread work jun localhost kernel info task iotp master blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel iotp master d jun localhost kernel jun localhost kernel jun localhost kernel ffffffffffffffff jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down read failed jun localhost kernel call rwsem down read failed jun localhost kernel down read jun localhost kernel do page fault jun localhost kernel trace do page fault jun localhost kernel do async page fault jun localhost kernel async page fault jun localhost kernel info task iotp master blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel iotp master d jun localhost kernel jun localhost kernel jun localhost kernel jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down write failed jun localhost kernel call rwsem down write failed jun localhost kernel file map prot check jun localhost kernel down write jun localhost kernel vm mmap pgoff jun localhost kernel sys mmap pgoff jun localhost kernel sys mmap jun localhost kernel system call fastpath jun localhost kernel info task master reactorx blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel master reactorx d jun localhost kernel jun localhost kernel jun localhost kernel jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down write failed jun localhost kernel call rwsem down write failed jun localhost kernel file map prot check jun localhost kernel down write jun localhost kernel vm mmap pgoff jun localhost kernel do page fault jun localhost kernel sys 
mmap pgoff jun localhost kernel sys mmap jun localhost kernel system call fastpath jun localhost kernel info task master reactorx blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel master reactorx d jun localhost kernel jun localhost kernel jun localhost kernel ffffffffffffffff jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down read failed jun localhost kernel call rwsem down read failed jun localhost kernel down read jun localhost kernel do page fault jun localhost kernel trace do page fault jun localhost kernel do async page fault jun localhost kernel async page fault jun localhost kernel info task sq acceptor blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel sq acceptor d jun localhost kernel jun localhost kernel jun localhost kernel jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down write failed jun localhost kernel call rwsem down write failed jun localhost kernel file map prot check jun localhost kernel down write jun localhost kernel vm mmap pgoff jun localhost kernel release sock jun localhost kernel sys mmap pgoff jun localhost kernel sys mmap jun localhost kernel system call fastpath jun localhost kernel info task sq worker blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel sq worker d jun localhost kernel jun localhost kernel jun localhost kernel ffffffffffffffff jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down read failed jun localhost kernel call rwsem down read failed jun localhost kernel down read jun localhost kernel do page fault jun localhost kernel trace do page fault jun localhost kernel do async page fault jun localhost kernel async page fault jun localhost kernel info task sq acceptor blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel sq acceptor d jun localhost kernel jun localhost kernel jun localhost kernel jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down write failed jun localhost kernel call rwsem down write failed jun localhost kernel down write jun localhost kernel sys mprotect jun localhost kernel system call fastpath jun localhost kernel info task acceptorxxxxxxx blocked for more than seconds jun localhost kernel echo proc sys kernel hung task timeout secs disables this message jun localhost kernel acceptorxxxxxxx d jun localhost kernel jun localhost kernel jun localhost kernel jun localhost kernel call trace jun localhost kernel schedule jun localhost kernel rwsem down write failed jun localhost kernel call rwsem down write failed jun localhost kernel down write jun localhost kernel sys brk jun localhost kernel system call fastpath not able to access mnt as it is in hung state to look further messages | 1 |
25,978 | 2,684,075,709 | IssuesEvent | 2015-03-28 16:43:48 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | ConEmuC -new_console failed | 1 star bug imported Priority-Medium | _From [nanofo...@gmail.com](https://code.google.com/u/117990161848247711282/) on May 14, 2012 12:00:50_
OS version: Win7 SP1 x86; ConEmu version: 120513
Far version: 3.0 build 2619
When I call "ConEmuC.EXE -new_console /C ..." from FAR, a popup appears telling me:
" ConEmu C.M, PID=3956 Injecting hooks into PID=1968 FAILED, code=-732:0x00000006"
I press 'OK' and then everything works well. I tried to start VIM and MySQL consoles in another tab this way (maybe there is another way to open a program in another tab? Just a question).
I noticed this error today; older versions ran well. If I uncheck "Insert ConEmu HK", this popup isn't shown.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=544_ | 1.0 | ConEmuC -new_console failed - _From [nanofo...@gmail.com](https://code.google.com/u/117990161848247711282/) on May 14, 2012 12:00:50_
OS version: Win7 SP1 x86 ConEmu version: 120513
Far version: 3.0 build 2619
When I call " ConEmu C.EXE -new_console /C ..." from FAR, a popup pops up and saying me that:
" ConEmu C.M, PID=3956 Injecting hooks into PID=1968 FAILED, code=-732:0x00000006"
I press 'OK' then everything works well. I tried to start VIM and MySQL consoles in another tab with this way (maybe there is an other way too to open a program in another tab? (just a question)).
I detected this error today, older versions ran well. If I uncheck "Insert ConEmu HK" this popup isn't shown.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=544_ | priority | conemuc new console failed from on may os version conemu version far version build when i call conemu c exe new console c from far a popup pops up and saying me that conemu c m pid injecting hooks into pid failed code i press ok then everything works well i tried to start vim and mysql consoles in another tab with this way maybe there is an other way too to open a program in another tab just a question i detected this error today older versions ran well if i uncheck insert conemu hk this popup isn t shown original issue | 1 |
92,956 | 3,875,824,000 | IssuesEvent | 2016-04-12 03:49:23 | cs2103jan2016-t16-2j/main | https://api.github.com/repos/cs2103jan2016-t16-2j/main | closed | A user can know if there is a conflict or back-to-back tasks when encountered | priority.medium type.story | So that he knows if he has a packed day | 1.0 | A user can know if there is a conflict or back-to-back tasks when encountered - So that he knows if he has a packed day | priority | a user can know if there is conflict or back to back tasks when encoutered so that he knows if he got a packed day | 1 |
207,277 | 7,126,878,088 | IssuesEvent | 2018-01-20 15:36:28 | DASSL/ClassDB | https://api.github.com/repos/DASSL/ClassDB | opened | Some test scripts use COMMIT instead of ROLLBACK (W) | priority medium wrong | Some test scripts (e.g., [`testHelpers.sql`](https://github.com/DASSL/ClassDB/blob/06839500a3cea49f839eb4f5ad295021719e8296/tests/testHelpers.sql#L262)) use `COMMIT` unnecessarily.
Unless absolutely necessary all test scripts should use `ROLLBACK`. Then, tests should forego dropping/deleting objects as part of cleanup unless dropping/deleting is part of the test. | 1.0 | Some test scripts use COMMIT instead of ROLLBACK (W) - Some test scripts (e.g., [`testHelpters.sql`](https://github.com/DASSL/ClassDB/blob/06839500a3cea49f839eb4f5ad295021719e8296/tests/testHelpers.sql#L262) use `COMMIT` unnecessarily.
Unless absolutely necessary all test scripts should use `ROLLBACK`. Then, tests should forego dropping/deleting objects as part of cleanup unless dropping/deleting is part of the test. | priority | some test scripts use commit instead of rollback w some test scripts e g use commit unnecessarily unless absolutely necessary all test scripts should use rollback then tests should forego dropping deleting objects as part of cleanup unless dropping deleting is part of the test | 1 |
416,219 | 12,141,343,032 | IssuesEvent | 2020-04-23 22:19:12 | clearlinux/mixer-tools | https://api.github.com/repos/clearlinux/mixer-tools | closed | Swupd updates are leaving files behind after an update | bug medium-priority | **Describe the bug**
After an update some files that should be removed are not being removed.
**To Reproduce**
```
# sudo swupd os-install --path /opt/a -B gnome-base-libs -V 32480
# sudo swupd repair --extra-files-only --path /opt/a --no-scripts # no extra file found
# sudo swupd update --path /opt/a --no-scripts
# sudo swupd repair --extra-files-only --path /opt/a --no-scripts # Multiple extra files found
```
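For illustration only, the sequence above could be wrapped in a small script so the before/after repair output can be compared automatically; the swupd commands are copied verbatim from the steps above, and interpreting how extra files appear in the repair output is left to a human (or a test harness).

```python
# Hypothetical wrapper around the reproduction steps above. The swupd commands
# are copied verbatim from this report; the script only captures the output of
# the two repair runs so they can be compared.
import subprocess

def run(cmd):
    """Run a shell command and return its combined stdout/stderr."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

run("sudo swupd os-install --path /opt/a -B gnome-base-libs -V 32480")
before = run("sudo swupd repair --extra-files-only --path /opt/a --no-scripts")
run("sudo swupd update --path /opt/a --no-scripts")
after = run("sudo swupd repair --extra-files-only --path /opt/a --no-scripts")

print("repair output before update:\n", before)
print("repair output after update:\n", after)
```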
**Expected behavior**
- No files should be left behind
- We should have tests implemented to make sure this was really fixed
**Environment (please complete the following information):**
- Clear Linux OS Version: 32510
- Platform: Any
Problem reported by @rossburton on clearlinux/swupd-client#1346. More information available at that issue
| 1.0 | Swupd update are leaving files behind after an update - **Describe the bug**
After an update some files that should be removed are not being removed.
**To Reproduce**
```
# sudo swupd os-install --path /opt/a -B gnome-base-libs -V 32480
# sudo swupd repair --extra-files-only --path /opt/a --no-scripts # no extra file found
# sudo swupd update --path /opt/a --no-scripts
# sudo swupd repair --extra-files-only --path /opt/a --no-scripts # Multiple extra files found
```
**Expected behavior**
- No files should be left behind
- We should have tests implemented to make sure this was really fixed
**Environment (please complete the following information):**
- Clear Linux OS Version: 32510
- Platform: Any
Problem reported by @rossburton on clearlinux/swupd-client#1346. More information available at that issue
| priority | swupd update are leaving files behind after an update describe the bug after an update some files that should be removed are not being removed to reproduce sudo swupd os install path opt a b gnome base libs v sudo swupd repair extra files only path opt a no scripts no extra file found sudo swupd update path opt a no scripts sudo swupd repair extra files only path opt a no scripts multiple extra files found expected behavior no files should be left behind we should have tests implemented to make sure this was really fixed environment please complete the following information clear linux os version platform any problem reported by rossburton on clearlinux swupd client more information available at that issue | 1 |
409,171 | 11,958,013,128 | IssuesEvent | 2020-04-04 16:26:10 | osmontrouge/caresteouvert | https://api.github.com/repos/osmontrouge/caresteouvert | closed | Changer "Ajouter des détails si nécessaire" | priority: medium | In the third step there is "Ajouter des détails si nécessaire" ("Add details if necessary") / we end up with comments of no interest that get turned into notes, when they could have been direct contributions.
Perhaps "Ajouter des détails nécessaires au confinement" ("Add details needed for the lockdown") to further reduce the number of notes.
Avoid as far as possible notes containing opening hours, or things like "Des jeunes très agréables et une très bonne viande" ("Very friendly young staff and very good meat"), since those then require a human contribution afterwards. | 1.0 | Changer "Ajouter des détails si nécessaire" - Dans la troisième étape il y a "Ajouter des détails si nécessaire" / on se retrouve avec des commentaires sans intérêts qui tranforment en note, ce qui pourrait être une contribution directe.
Peut-être "Ajouter des détails nécessaire au confinement" pour réduire encore les notes.
Éviter au max les notes avec les horaires et ceci "Des jeunes très agréables et une très bonne viande", donc derrière nécessité d'une contribution humaine. | priority | changer ajouter des détails si nécessaire dans la troisième étape il y a ajouter des détails si nécessaire on se retrouve avec des commentaires sans intérêts qui tranforment en note ce qui pourrait être une contribution directe peut être ajouter des détails nécessaire au confinement pour réduire encore les notes éviter au max les notes avec les horaires et ceci des jeunes très agréables et une très bonne viande donc derrière nécessité d une contribution humaine | 1 |
740,006 | 25,731,942,540 | IssuesEvent | 2022-12-07 21:02:26 | CS320EZMeet/EZMeet_Backend | https://api.github.com/repos/CS320EZMeet/EZMeet_Backend | closed | Help Back End team set up Data objects for Django and PostgreSQL connection | priority: medium status: to do For Aditya Surbhit | Django has inbuilt database libraries which are better optimized for the server. Define the data objects (models) for the respective tables in Django. | 1.0 | Help Back End team set up Data objects for Django and PostgreSQL connection - Django has inbuilt database libraries which are better optimized for the server. Define the data objects (models) for the respective tables in Django. | priority | help back end team set up data objects for django and postgressql connection django has inbuilt database libraries which are better optimized for the server set the data objects for the respective tables on django | 1 |
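As a rough illustration of what "data objects" means here: in Django, each table is usually described by an ORM model class placed in an app's models.py, while the PostgreSQL connection itself is configured separately in settings.py. The model and field names below are hypothetical placeholders, not the actual EZMeet schema.

```python
# Illustrative Django models only: Meeting/Participant and their fields are
# hypothetical placeholders, not the real EZMeet_Backend schema. This belongs
# in an app's models.py; with the PostgreSQL backend configured in settings.py,
# `manage.py makemigrations` and `manage.py migrate` create the matching tables.
from django.db import models

class Meeting(models.Model):
    title = models.CharField(max_length=200)
    starts_at = models.DateTimeField()
    created_at = models.DateTimeField(auto_now_add=True)

class Participant(models.Model):
    meeting = models.ForeignKey(
        Meeting, on_delete=models.CASCADE, related_name="participants"
    )
    email = models.EmailField()
    has_accepted = models.BooleanField(default=False)
```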
779,210 | 27,344,341,182 | IssuesEvent | 2023-02-27 02:45:28 | ansible-collections/azure | https://api.github.com/repos/ansible-collections/azure | closed | No module named 'azure.cli.core.auth' | medium_priority work in | _From @kTipSSIoYv on Jul 31, 2022 18:34_
### Summary
Why am I getting the error `No module named 'azure.cli.core.auth'`? I've installed ansible[azure] and also all the requirements from the GitHub requirements-azure.txt.
```
`pip3 list | grep azure.cli` shows that it exists.
azure-cli-core 2.34.0
azure-cli-telemetry 1.0.6
```
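Since the log below reports `/usr/libexec/platform-python` as the discovered interpreter, one quick check (a generic diagnostic sketch, not part of the collection) is to run a tiny script both under that interpreter and under the one pip3 installed packages into, and see where the module actually resolves:

```python
# Diagnostic sketch: run it as `/usr/libexec/platform-python check_azure.py`
# (file name is hypothetical) and again with the interpreter pip3 installed
# into, to confirm whether the module is visible to the interpreter Ansible
# actually uses.
import importlib
import sys

print("interpreter:", sys.executable)
try:
    module = importlib.import_module("azure.cli.core.auth")
    print("azure.cli.core.auth found at", module.__file__)
except ModuleNotFoundError as exc:
    print("import failed:", exc)
```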
Here is the complete log. Any idea what the issue is or how to fix it?
```
{"uuid": "655b47de-ccbe-4acf-9eb3-7a28c3ba119e", "counter": 43, "stdout": "\u001b[0;31mThe full traceback is:\u001b[0m\r\n\u001b[0;31mTraceback (most recent call last):\u001b[0m\r\n\u001b[0;31m File \"/tmp/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload_gggcmape/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 232, in <module>\u001b[0m\r\n\u001b[0;31m from azure.cli.core.auth.adal_authentication import MSIAuthenticationWrapper\u001b[0m\r\n\u001b[0;31mModuleNotFoundError: No module named 'azure.cli.core.auth'\u001b[0m\r\n\u001b[0;31mfatal: [localhost]: FAILED! => {\u001b[0m\r\n\u001b[0;31m \"ansible_facts\": {\u001b[0m\r\n\u001b[0;31m \"discovered_interpreter_python\": \"/usr/libexec/platform-python\"\u001b[0m\r\n\u001b[0;31m },\u001b[0m\r\n\u001b[0;31m \"changed\": false,\u001b[0m\r\n\u001b[0;31m \"invocation\": {\u001b[0m\r\n\u001b[0;31m \"module_args\": {\u001b[0m\r\n\u001b[0;31m \"ad_user\": null,\u001b[0m\r\n\u001b[0;31m \"adfs_authority_url\": null,\u001b[0m\r\n\u001b[0;31m \"api_profile\": \"latest\",\u001b[0m\r\n\u001b[0;31m \"auth_source\": \"auto\",\u001b[0m\r\n\u001b[0;31m \"cert_validation_mode\": null,\u001b[0m\r\n\u001b[0;31m \"client_id\": null,\u001b[0m\r\n\u001b[0;31m \"cloud_environment\": \"AzureCloud\",\u001b[0m\r\n\u001b[0;31m \"log_mode\": null,\u001b[0m\r\n\u001b[0;31m \"log_path\": null,\u001b[0m\r\n\u001b[0;31m \"name\": \"Ubuntu973\",\u001b[0m\r\n\u001b[0;31m \"password\": null,\u001b[0m\r\n\u001b[0;31m \"profile\": null,\u001b[0m\r\n\u001b[0;31m \"resource_group\": \"cloud-shell-storage-centralindia\",\u001b[0m\r\n\u001b[0;31m \"secret\": null,\u001b[0m\r\n\u001b[0;31m \"subscription_id\": null,\u001b[0m\r\n\u001b[0;31m \"tags\": null,\u001b[0m\r\n\u001b[0;31m \"tenant\": null\u001b[0m\r\n\u001b[0;31m }\u001b[0m\r\n\u001b[0;31m },\u001b[0m\r\n\u001b[0;31m \"msg\": \"Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on d82ab5e3eee6's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. 
If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter\"\u001b[0m\r\n\u001b[0;31m}\u001b[0m", "start_line": 45, "end_line": 78, "runner_ident": "result", "event": "runner_on_failed", "pid": 30225, "created": "2022-07-31T17:57:17.520129", "parent_uuid": "0242ac11-0002-1a3a-a246-000000000008", "event_data": {"playbook": "azure_sg_create.yaml", "playbook_uuid": "7c92739c-49c8-4c46-8e6c-1784269d3172", "play": "localhost", "play_uuid": "0242ac11-0002-1a3a-a246-000000000006", "play_pattern": "localhost", "task": "Get facts by name", "task_uuid": "0242ac11-0002-1a3a-a246-000000000008", "task_action": "azure.azcollection.azure_rm_virtualmachine_info", "task_args": "", "task_path": "/tmp/ansible-runner-git20220731-321-1dey7u1/azure_sg_create.yaml:17", "host": "localhost", "remote_addr": "localhost", "res": {"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload_gggcmape/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 232, in <module>\n from azure.cli.core.auth.adal_authentication import MSIAuthenticationWrapper\nModuleNotFoundError: No module named 'azure.cli.core.auth'\n", "msg": "Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on d82ab5e3eee6's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter", "invocation": {"module_args": {"resource_group": "cloud-shell-storage-centralindia", "name": "Ubuntu973", "auth_source": "auto", "cloud_environment": "AzureCloud", "api_profile": "latest", "profile": null, "subscription_id": null, "client_id": null, "secret": null, "tenant": null, "ad_user": null, "password": null, "cert_validation_mode": null, "adfs_authority_url": null, "log_mode": null, "log_path": null, "tags": null}}, "ansible_facts": {"discovered_interpreter_python": "/usr/libexec/platform-python"}, "_ansible_no_log": false, "changed": false}, "start": "2022-07-31T17:57:16.944630", "end": "2022-07-31T17:57:17.519992", "duration": 0.575362, "ignore_errors": null, "event_loop": null, "uuid": "655b47de-ccbe-4acf-9eb3-7a28c3ba119e"}}
```
### Issue Type
Bug Report
### Component Name
pip
### Ansible Version
```console
$ ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current
version: 3.6.8 (default, Jan 14 2022, 11:04:20) [GCC 8.5.0 20210514 (Red Hat 8.5.0-7)]. This feature will be removed
from ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
ansible [core 2.11.12]
config file = /root/.ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Jan 14 2022, 11:04:20) [GCC 8.5.0 20210514 (Red Hat 8.5.0-7)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current
version: 3.6.8 (default, Jan 14 2022, 11:04:20) [GCC 8.5.0 20210514 (Red Hat 8.5.0-7)]. This feature will be removed
from ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
DEFAULT_ROLES_PATH(/root/.ansible.cfg) = ['/root/.ansible/roles', '/usr/share/ansible/roles', '/etc/ansible/roles', '/v>
DEFAULT_STDOUT_CALLBACK(/root/.ansible.cfg) = json
```
### OS / Environment
Centos 7 Docker instance
### Steps to Reproduce
Try to run the Ansible playbook on the Docker container; it throws the error.
### Expected Results
It should be able to find the azure.cli module.
### Actual Results
```console
{"uuid": "655b47de-ccbe-4acf-9eb3-7a28c3ba119e", "counter": 43, "stdout": "\u001b[0;31mThe full traceback is:\u001b[0m\r\n\u001b[0;31mTraceback (most recent call last):\u001b[0m\r\n\u001b[0;31m File \"/tmp/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload_gggcmape/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 232, in <module>\u001b[0m\r\n\u001b[0;31m from azure.cli.core.auth.adal_authentication import MSIAuthenticationWrapper\u001b[0m\r\n\u001b[0;31mModuleNotFoundError: No module named 'azure.cli.core.auth'\u001b[0m\r\n\u001b[0;31mfatal: [localhost]: FAILED! => {\u001b[0m\r\n\u001b[0;31m \"ansible_facts\": {\u001b[0m\r\n\u001b[0;31m \"discovered_interpreter_python\": \"/usr/libexec/platform-python\"\u001b[0m\r\n\u001b[0;31m },\u001b[0m\r\n\u001b[0;31m \"changed\": false,\u001b[0m\r\n\u001b[0;31m \"invocation\": {\u001b[0m\r\n\u001b[0;31m \"module_args\": {\u001b[0m\r\n\u001b[0;31m \"ad_user\": null,\u001b[0m\r\n\u001b[0;31m \"adfs_authority_url\": null,\u001b[0m\r\n\u001b[0;31m \"api_profile\": \"latest\",\u001b[0m\r\n\u001b[0;31m \"auth_source\": \"auto\",\u001b[0m\r\n\u001b[0;31m \"cert_validation_mode\": null,\u001b[0m\r\n\u001b[0;31m \"client_id\": null,\u001b[0m\r\n\u001b[0;31m \"cloud_environment\": \"AzureCloud\",\u001b[0m\r\n\u001b[0;31m \"log_mode\": null,\u001b[0m\r\n\u001b[0;31m \"log_path\": null,\u001b[0m\r\n\u001b[0;31m \"name\": \"Ubuntu973\",\u001b[0m\r\n\u001b[0;31m \"password\": null,\u001b[0m\r\n\u001b[0;31m \"profile\": null,\u001b[0m\r\n\u001b[0;31m \"resource_group\": \"cloud-shell-storage-centralindia\",\u001b[0m\r\n\u001b[0;31m \"secret\": null,\u001b[0m\r\n\u001b[0;31m \"subscription_id\": null,\u001b[0m\r\n\u001b[0;31m \"tags\": null,\u001b[0m\r\n\u001b[0;31m \"tenant\": null\u001b[0m\r\n\u001b[0;31m }\u001b[0m\r\n\u001b[0;31m },\u001b[0m\r\n\u001b[0;31m \"msg\": \"Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on d82ab5e3eee6's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. 
If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter\"\u001b[0m\r\n\u001b[0;31m}\u001b[0m", "start_line": 45, "end_line": 78, "runner_ident": "result", "event": "runner_on_failed", "pid": 30225, "created": "2022-07-31T17:57:17.520129", "parent_uuid": "0242ac11-0002-1a3a-a246-000000000008", "event_data": {"playbook": "azure_sg_create.yaml", "playbook_uuid": "7c92739c-49c8-4c46-8e6c-1784269d3172", "play": "localhost", "play_uuid": "0242ac11-0002-1a3a-a246-000000000006", "play_pattern": "localhost", "task": "Get facts by name", "task_uuid": "0242ac11-0002-1a3a-a246-000000000008", "task_action": "azure.azcollection.azure_rm_virtualmachine_info", "task_args": "", "task_path": "/tmp/ansible-runner-git20220731-321-1dey7u1/azure_sg_create.yaml:17", "host": "localhost", "remote_addr": "localhost", "res": {"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload_gggcmape/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 232, in <module>\n from azure.cli.core.auth.adal_authentication import MSIAuthenticationWrapper\nModuleNotFoundError: No module named 'azure.cli.core.auth'\n", "msg": "Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on d82ab5e3eee6's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter", "invocation": {"module_args": {"resource_group": "cloud-shell-storage-centralindia", "name": "Ubuntu973", "auth_source": "auto", "cloud_environment": "AzureCloud", "api_profile": "latest", "profile": null, "subscription_id": null, "client_id": null, "secret": null, "tenant": null, "ad_user": null, "password": null, "cert_validation_mode": null, "adfs_authority_url": null, "log_mode": null, "log_path": null, "tags": null}}, "ansible_facts": {"discovered_interpreter_python": "/usr/libexec/platform-python"}, "_ansible_no_log": false, "changed": false}, "start": "2022-07-31T17:57:16.944630", "end": "2022-07-31T17:57:17.519992", "duration": 0.575362, "ignore_errors": null, "event_loop": null, "uuid": "655b47de-ccbe-4acf-9eb3-7a28c3ba119e"}}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
_Copied from original issue: ansible/ansible#78394_ | 1.0 | No module named 'azure.cli.core.auth' - _From @kTipSSIoYv on Jul 31, 2022 18:34_
### Summary
'Why am I getting this error `No module named 'azure.cli.core.auth'`. I've installed ansible[azure] and also all the requirements from github requirements-azure.txt.
```
`pip3 list | grep azure.cli` shows that that it exists.
azure-cli-core 2.34.0
azure-cli-telemetry 1.0.6
```
Here is the complete log. Any idea what the issue is or on how to fix it?
```
{"uuid": "655b47de-ccbe-4acf-9eb3-7a28c3ba119e", "counter": 43, "stdout": "\u001b[0;31mThe full traceback is:\u001b[0m\r\n\u001b[0;31mTraceback (most recent call last):\u001b[0m\r\n\u001b[0;31m File \"/tmp/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload_gggcmape/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 232, in <module>\u001b[0m\r\n\u001b[0;31m from azure.cli.core.auth.adal_authentication import MSIAuthenticationWrapper\u001b[0m\r\n\u001b[0;31mModuleNotFoundError: No module named 'azure.cli.core.auth'\u001b[0m\r\n\u001b[0;31mfatal: [localhost]: FAILED! => {\u001b[0m\r\n\u001b[0;31m \"ansible_facts\": {\u001b[0m\r\n\u001b[0;31m \"discovered_interpreter_python\": \"/usr/libexec/platform-python\"\u001b[0m\r\n\u001b[0;31m },\u001b[0m\r\n\u001b[0;31m \"changed\": false,\u001b[0m\r\n\u001b[0;31m \"invocation\": {\u001b[0m\r\n\u001b[0;31m \"module_args\": {\u001b[0m\r\n\u001b[0;31m \"ad_user\": null,\u001b[0m\r\n\u001b[0;31m \"adfs_authority_url\": null,\u001b[0m\r\n\u001b[0;31m \"api_profile\": \"latest\",\u001b[0m\r\n\u001b[0;31m \"auth_source\": \"auto\",\u001b[0m\r\n\u001b[0;31m \"cert_validation_mode\": null,\u001b[0m\r\n\u001b[0;31m \"client_id\": null,\u001b[0m\r\n\u001b[0;31m \"cloud_environment\": \"AzureCloud\",\u001b[0m\r\n\u001b[0;31m \"log_mode\": null,\u001b[0m\r\n\u001b[0;31m \"log_path\": null,\u001b[0m\r\n\u001b[0;31m \"name\": \"Ubuntu973\",\u001b[0m\r\n\u001b[0;31m \"password\": null,\u001b[0m\r\n\u001b[0;31m \"profile\": null,\u001b[0m\r\n\u001b[0;31m \"resource_group\": \"cloud-shell-storage-centralindia\",\u001b[0m\r\n\u001b[0;31m \"secret\": null,\u001b[0m\r\n\u001b[0;31m \"subscription_id\": null,\u001b[0m\r\n\u001b[0;31m \"tags\": null,\u001b[0m\r\n\u001b[0;31m \"tenant\": null\u001b[0m\r\n\u001b[0;31m }\u001b[0m\r\n\u001b[0;31m },\u001b[0m\r\n\u001b[0;31m \"msg\": \"Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on d82ab5e3eee6's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. 
If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter\"\u001b[0m\r\n\u001b[0;31m}\u001b[0m", "start_line": 45, "end_line": 78, "runner_ident": "result", "event": "runner_on_failed", "pid": 30225, "created": "2022-07-31T17:57:17.520129", "parent_uuid": "0242ac11-0002-1a3a-a246-000000000008", "event_data": {"playbook": "azure_sg_create.yaml", "playbook_uuid": "7c92739c-49c8-4c46-8e6c-1784269d3172", "play": "localhost", "play_uuid": "0242ac11-0002-1a3a-a246-000000000006", "play_pattern": "localhost", "task": "Get facts by name", "task_uuid": "0242ac11-0002-1a3a-a246-000000000008", "task_action": "azure.azcollection.azure_rm_virtualmachine_info", "task_args": "", "task_path": "/tmp/ansible-runner-git20220731-321-1dey7u1/azure_sg_create.yaml:17", "host": "localhost", "remote_addr": "localhost", "res": {"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload_gggcmape/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 232, in <module>\n from azure.cli.core.auth.adal_authentication import MSIAuthenticationWrapper\nModuleNotFoundError: No module named 'azure.cli.core.auth'\n", "msg": "Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on d82ab5e3eee6's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter", "invocation": {"module_args": {"resource_group": "cloud-shell-storage-centralindia", "name": "Ubuntu973", "auth_source": "auto", "cloud_environment": "AzureCloud", "api_profile": "latest", "profile": null, "subscription_id": null, "client_id": null, "secret": null, "tenant": null, "ad_user": null, "password": null, "cert_validation_mode": null, "adfs_authority_url": null, "log_mode": null, "log_path": null, "tags": null}}, "ansible_facts": {"discovered_interpreter_python": "/usr/libexec/platform-python"}, "_ansible_no_log": false, "changed": false}, "start": "2022-07-31T17:57:16.944630", "end": "2022-07-31T17:57:17.519992", "duration": 0.575362, "ignore_errors": null, "event_loop": null, "uuid": "655b47de-ccbe-4acf-9eb3-7a28c3ba119e"}}
```
### Issue Type
Bug Report
### Component Name
pip
### Ansible Version
```console
$ ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current
version: 3.6.8 (default, Jan 14 2022, 11:04:20) [GCC 8.5.0 20210514 (Red Hat 8.5.0-7)]. This feature will be removed
from ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
ansible [core 2.11.12]
config file = /root/.ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Jan 14 2022, 11:04:20) [GCC 8.5.0 20210514 (Red Hat 8.5.0-7)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current
version: 3.6.8 (default, Jan 14 2022, 11:04:20) [GCC 8.5.0 20210514 (Red Hat 8.5.0-7)]. This feature will be removed
from ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
DEFAULT_ROLES_PATH(/root/.ansible.cfg) = ['/root/.ansible/roles', '/usr/share/ansible/roles', '/etc/ansible/roles', '/v>
DEFAULT_STDOUT_CALLBACK(/root/.ansible.cfg) = json
```
### OS / Environment
Centos 7 Docker instance
### Steps to Reproduce
Try to run ansible playbook on docker container and it throws the error.
### Expected Results
It should be able to find the azure.cli module.
### Actual Results
```console
{"uuid": "655b47de-ccbe-4acf-9eb3-7a28c3ba119e", "counter": 43, "stdout": "\u001b[0;31mThe full traceback is:\u001b[0m\r\n\u001b[0;31mTraceback (most recent call last):\u001b[0m\r\n\u001b[0;31m File \"/tmp/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload_gggcmape/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 232, in <module>\u001b[0m\r\n\u001b[0;31m from azure.cli.core.auth.adal_authentication import MSIAuthenticationWrapper\u001b[0m\r\n\u001b[0;31mModuleNotFoundError: No module named 'azure.cli.core.auth'\u001b[0m\r\n\u001b[0;31mfatal: [localhost]: FAILED! => {\u001b[0m\r\n\u001b[0;31m \"ansible_facts\": {\u001b[0m\r\n\u001b[0;31m \"discovered_interpreter_python\": \"/usr/libexec/platform-python\"\u001b[0m\r\n\u001b[0;31m },\u001b[0m\r\n\u001b[0;31m \"changed\": false,\u001b[0m\r\n\u001b[0;31m \"invocation\": {\u001b[0m\r\n\u001b[0;31m \"module_args\": {\u001b[0m\r\n\u001b[0;31m \"ad_user\": null,\u001b[0m\r\n\u001b[0;31m \"adfs_authority_url\": null,\u001b[0m\r\n\u001b[0;31m \"api_profile\": \"latest\",\u001b[0m\r\n\u001b[0;31m \"auth_source\": \"auto\",\u001b[0m\r\n\u001b[0;31m \"cert_validation_mode\": null,\u001b[0m\r\n\u001b[0;31m \"client_id\": null,\u001b[0m\r\n\u001b[0;31m \"cloud_environment\": \"AzureCloud\",\u001b[0m\r\n\u001b[0;31m \"log_mode\": null,\u001b[0m\r\n\u001b[0;31m \"log_path\": null,\u001b[0m\r\n\u001b[0;31m \"name\": \"Ubuntu973\",\u001b[0m\r\n\u001b[0;31m \"password\": null,\u001b[0m\r\n\u001b[0;31m \"profile\": null,\u001b[0m\r\n\u001b[0;31m \"resource_group\": \"cloud-shell-storage-centralindia\",\u001b[0m\r\n\u001b[0;31m \"secret\": null,\u001b[0m\r\n\u001b[0;31m \"subscription_id\": null,\u001b[0m\r\n\u001b[0;31m \"tags\": null,\u001b[0m\r\n\u001b[0;31m \"tenant\": null\u001b[0m\r\n\u001b[0;31m }\u001b[0m\r\n\u001b[0;31m },\u001b[0m\r\n\u001b[0;31m \"msg\": \"Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on d82ab5e3eee6's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. 
If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter\"\u001b[0m\r\n\u001b[0;31m}\u001b[0m", "start_line": 45, "end_line": 78, "runner_ident": "result", "event": "runner_on_failed", "pid": 30225, "created": "2022-07-31T17:57:17.520129", "parent_uuid": "0242ac11-0002-1a3a-a246-000000000008", "event_data": {"playbook": "azure_sg_create.yaml", "playbook_uuid": "7c92739c-49c8-4c46-8e6c-1784269d3172", "play": "localhost", "play_uuid": "0242ac11-0002-1a3a-a246-000000000006", "play_pattern": "localhost", "task": "Get facts by name", "task_uuid": "0242ac11-0002-1a3a-a246-000000000008", "task_action": "azure.azcollection.azure_rm_virtualmachine_info", "task_args": "", "task_path": "/tmp/ansible-runner-git20220731-321-1dey7u1/azure_sg_create.yaml:17", "host": "localhost", "remote_addr": "localhost", "res": {"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload_gggcmape/ansible_azure.azcollection.azure_rm_virtualmachine_info_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 232, in <module>\n from azure.cli.core.auth.adal_authentication import MSIAuthenticationWrapper\nModuleNotFoundError: No module named 'azure.cli.core.auth'\n", "msg": "Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on d82ab5e3eee6's Python /usr/libexec/platform-python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter", "invocation": {"module_args": {"resource_group": "cloud-shell-storage-centralindia", "name": "Ubuntu973", "auth_source": "auto", "cloud_environment": "AzureCloud", "api_profile": "latest", "profile": null, "subscription_id": null, "client_id": null, "secret": null, "tenant": null, "ad_user": null, "password": null, "cert_validation_mode": null, "adfs_authority_url": null, "log_mode": null, "log_path": null, "tags": null}}, "ansible_facts": {"discovered_interpreter_python": "/usr/libexec/platform-python"}, "_ansible_no_log": false, "changed": false}, "start": "2022-07-31T17:57:16.944630", "end": "2022-07-31T17:57:17.519992", "duration": 0.575362, "ignore_errors": null, "event_loop": null, "uuid": "655b47de-ccbe-4acf-9eb3-7a28c3ba119e"}}
```
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
_Copied from original issue: ansible/ansible#78394_ | priority | no module named azure cli core auth from ktipssioyv on jul summary why am i getting this error no module named azure cli core auth i ve installed ansible and also all the requirements from github requirements azure txt list grep azure cli shows that that it exists azure cli core azure cli telemetry here is the complete log any idea what the issue is or on how to fix it uuid ccbe counter stdout failed azure on s python usr libexec platform python please read the module documentation and install it in the appropriate location if the required library is installed but ansible is using the wrong python interpreter please consult the documentation on ansible python interpreter azure on s python usr libexec platform python please read the module documentation and install it in the appropriate location if the required library is installed but ansible is using the wrong python interpreter please consult the documentation on ansible python interpreter invocation module args resource group cloud shell storage centralindia name auth source auto cloud environment azurecloud api profile latest profile null subscription id null client id null secret null tenant null ad user null password null cert validation mode null adfs authority url null log mode null log path null tags null ansible facts discovered interpreter python usr libexec platform python ansible no log false changed false start end duration ignore errors null event loop null uuid ccbe issue type bug report component name pip ansible version console ansible version ansible will require python or newer on the controller starting with ansible current version default jan this feature will be removed from ansible core in version deprecation warnings can be disabled by setting deprecation warnings false in ansible cfg ansible config file root ansible cfg configured module search path ansible python module location usr local lib site packages ansible ansible collection location root ansible collections usr share ansible collections executable location usr local bin ansible python version default jan jinja version libyaml true configuration console if using a version older than ansible core you should omit the t all ansible config dump only changed t all ansible will require python or newer on the controller starting with ansible current version default jan this feature will be removed from ansible core in version deprecation warnings can be disabled by setting deprecation warnings false in ansible cfg default roles path root ansible cfg root ansible roles usr share ansible roles etc ansible roles v default stdout callback root ansible cfg json os environment centos docker instance steps to reproduce try to run ansible playbook on docker container and it throws the error expected results it should be able to find the azure cli module actual results console uuid ccbe counter stdout failed azure on s python usr libexec platform python please read the module documentation and install it in the appropriate location if the required library is installed but ansible is using the wrong python interpreter please consult the documentation on ansible python interpreter azure on s python usr libexec platform python please read the module documentation and install it in the appropriate location if the required library is installed but ansible is using the wrong python interpreter please consult the documentation on ansible python interpreter invocation module args resource group cloud shell storage 
centralindia name auth source auto cloud environment azurecloud api profile latest profile null subscription id null client id null secret null tenant null ad user null password null cert validation mode null adfs authority url null log mode null log path null tags null ansible facts discovered interpreter python usr libexec platform python ansible no log false changed false start end duration ignore errors null event loop null uuid ccbe code of conduct i agree to follow the ansible code of conduct copied from original issue ansible ansible | 1 |
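An illustrative aside on the record above: the failure is `ModuleNotFoundError: No module named 'azure.cli.core.auth'` under `/usr/libexec/platform-python`, the interpreter Ansible discovered. A quick way to check whether the collection's azure-cli dependency is visible to that exact interpreter is to probe it directly. This is only a hedged diagnostic sketch; the note that `azure.cli.core.auth` appears only in newer azure-cli-core releases is an assumption, and nothing here comes from the original report beyond the interpreter path and module names in the traceback.
```python
# Diagnostic sketch (assumption-laden): run with the interpreter Ansible picked,
# e.g. /usr/libexec/platform-python check_azure_deps.py
import importlib.util
import sys

CANDIDATES = [
    "azure.cli.core",        # provided by the azure-cli-core package
    "azure.cli.core.auth",   # assumed to exist only in newer azure-cli-core releases
]

print("interpreter:", sys.executable)
for name in CANDIDATES:
    try:
        spec = importlib.util.find_spec(name)
    except ImportError as exc:   # parent package is missing entirely
        print(f"{name}: import error ({exc})")
        continue
    print(f"{name}: {'found at ' + str(spec.origin) if spec else 'MISSING'}")
```
If the second entry is missing for that interpreter, installing the collection's pinned requirements into that same Python (rather than into a different one on the box) is the usual next step.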
376,640 | 11,149,640,324 | IssuesEvent | 2019-12-23 19:26:37 | SalesforceFoundation/Volunteers-for-Salesforce | https://api.github.com/repos/SalesforceFoundation/Volunteers-for-Salesforce | closed | Mass Email to Campaign Members | Enhancement Medium Priority | Support Mass Email Volunteers to everyone on a campaign, without requiring them to have been assigned to a job or shift. The current wizard requires Volunteer Hours records.
| 1.0 | Mass Email to Campaign Members - Support Mass Email Volunteers to everyone on a campaign, without requiring them to have been assigned to a job or shift. The current wizard requires Volunteer Hours records.
| priority | mass email to campaign members support mass email volunteers to everyone on a campaign without requiring them to have been assigned to a job or shift the current wizard requires volunteer hours records | 1 |
505,340 | 14,631,850,558 | IssuesEvent | 2020-12-23 20:52:08 | MikeVedsted/JoinMe | https://api.github.com/repos/MikeVedsted/JoinMe | closed | [FEAT] Add slider to EventSearch component | Priority: Medium :zap: Status: Done :heavy_check_mark: Type: Enhancement :rocket: | **💡 I would really like to solve or include**
Add an EventSearch component to the Event component.
This should include a FormDropdownField for the category of event, a GoogleAutoComplete field for the location of the event and a DistanceSlider for the distance the user is willing to travel to go to the event.
_Check the components division and naming in this figma file: https://www.figma.com/file/CZn7N02015QxKByVw9vRP1/JoinMe-Components?node-id=0%3A1_

**👶 How would a user describe this?**
This is where I select the options to filter the events I’m interested in.
**🏆 My dream solution would be**
Just like Chiran’s layout on figma.
**:2nd_place_medal: But I'd also consider it solved if**
If it’s just a little bit similar to Chiran’s layout.
**💭 If you were doing it, what would you do?**
- Import all the required components
**♻️ Additional context**
It is required by the Event component
It requires the FormDropdownField, GoogleAutoComplete and the DistanceSlider components
**🚀 I'm ready for take off**
Before submitting, please mark if you:
- [x] Checked that this feature doesn't already exist
- [x] Checked that a feature request doesn't already exist
- [x] Went through the user flow, and understand the impact
- [x] Made sure the request shows why it is important to users but doesn't exaggerate the value
| 1.0 | [FEAT] Add slider to EventSearch component - **💡 I would really like to solve or include**
Add an EventSearch component to the Event component.
This should include a FormDropdownField for the category of event, a GoogleAutoComplete field for the location of the event and a DistanceSlider for the distance the user is willing to travel to go to the event.
_Check the components division and naming in this figma file: https://www.figma.com/file/CZn7N02015QxKByVw9vRP1/JoinMe-Components?node-id=0%3A1_

**👶 How would a user describe this?**
This is where I select the options to filter the events I’m interested in.
**🏆 My dream solution would be**
Just like Chiran’s layout on figma.
**:2nd_place_medal: But I'd also consider it solved if**
If it’s just a little bit similar to Chiran’s layout.
**💭 If you were doing it, what would you do?**
- Import all the required components
**♻️ Additional context**
It is required by the Event component
It requires the FormDropdownField, GoogleAutoComplete and the DistanceSlider components
**🚀 I'm ready for take off**
Before submitting, please mark if you:
- [x] Checked that this feature doesn't already exist
- [x] Checked that a feature request doesn't already exist
- [x] Went through the user flow, and understand the impact
- [x] Made sure the request shows why it is important to users but doesn't exaggerate the value
| priority | add slider to eventsearch component 💡 i would really like to solve or include add an eventsearch component to the event component this should include a formdropdownfield for the category of event a googleautocomplete field for the location of the event and a distanceslider for the distance the user is willing to travel to go to the event check the components division and naming in this figma file 👶 how would a user describe this this is where i select the options to filter the events i’m interested on 🏆 my dream solution would be just like chiran’s layout on figma place medal but i d also consider it solved if if it’s just a little bit similar to chiran’s layout 💭 if you were doing it what would you do import all the required components ♻️ additional context it is required by the event component it requires the formdropdownfield googleautocomplete and the distanceslider components 🚀 i m ready for take off before submitting please mark if you checked that this feature doesn t already exists checked that a feature request doesn t already exists went through the user flow and understand the impact made sure the request shows why it is important to users but doesn t exaggerate the value | 1 |
689,038 | 23,604,798,379 | IssuesEvent | 2022-08-24 07:18:11 | stackabletech/t2 | https://api.github.com/repos/stackabletech/t2 | closed | self-host K3s stuff (simple version) | priority/medium | We have some problems with the download of K3s resources from time to time as the script by default relies on a 98% available REST API (https://update.k3s.io/v1-release/channels) and GitHub. Although this should be pretty solid stuff, we have some issues which also might be on our end (?)
As a quick win/simple version/test, we want to host the binaries of the most-used K3s versions in our Nexus and let T2 access them.
PRO: Hopefully reliable K3s installation.
CON: This means that we have to update our resources from time to time. | 1.0 | self-host K3s stuff (simple version) - We have some problems with the download of K3s resources from time to time as the script by default relies on a 98% available REST API (https://update.k3s.io/v1-release/channels) and GitHub. Although this should be pretty solid stuff, we have some issues which also might be on our end (?)
As a quick win/simple version/test, we want to host the binaries of the most-used K3s versions in our Nexus and let T2 access them.
PRO: Hopefully reliable K3s installation.
CON: This means that we have to update our resources from time to time. | priority | self host stuff simple version we have some problems with the download of resources from time to time as the script by default relies on a available rest api and github although this should be pretty solid stuff we have some issues which also might be on our end as a quick win simple version test we want to host the binaries of the most used versions in our nexus and let access them pro hopefully reliable installation con this means that we have to update our resources from time to time | 1 |
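As a rough sketch of the simple version described in the record above (hosting the most-used K3s binaries in Nexus so installs stop depending on update.k3s.io and GitHub), mirroring one release artifact into a Nexus raw hosted repository needs only two HTTP calls. The repository name, Nexus URL, credential handling and the exact GitHub download path are assumptions for illustration, not real T2 settings.
```python
# Hypothetical mirror step: copy one k3s release binary into a Nexus raw repo.
import os
import requests

VERSION = "v1.23.4+k3s1"   # example version only, not a pinned choice
SRC = f"https://github.com/k3s-io/k3s/releases/download/{VERSION}/k3s"
NEXUS_URL = os.environ.get("NEXUS_URL", "https://nexus.example.com")
REPO = "k3s-mirror"        # assumed raw hosted repository name
AUTH = (os.environ["NEXUS_USER"], os.environ["NEXUS_PASSWORD"])

def mirror() -> None:
    # Fetch the binary from the upstream GitHub release.
    upstream = requests.get(SRC, timeout=300)
    upstream.raise_for_status()
    # Upload it into the raw repository under a versioned path.
    dest = f"{NEXUS_URL}/repository/{REPO}/{VERSION}/k3s"
    response = requests.put(dest, data=upstream.content, auth=AUTH, timeout=300)
    response.raise_for_status()
    print(f"mirrored {len(upstream.content)} bytes to {dest}")

if __name__ == "__main__":
    mirror()
```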
365,512 | 10,788,343,667 | IssuesEvent | 2019-11-05 09:36:15 | hasse69/rar2fs | https://api.github.com/repos/hasse69/rar2fs | closed | Rar2fs not updating mount when rar files are still being added to folder | Enhancement Priority-Medium | Version info:
rar2fs v1.27.2-git8014672 (DLL version 8) Copyright (C) 2009 Hans Beckerus
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it under
certain conditions; see <http://www.gnu.org/licenses/> for details.
FUSE library version: 2.9.9
fusermount version: 2.9.9
using FUSE kernel interface version 7.19
In the past when I used rar2fs, I always moved my folders that contained rar files to the source of my rar2fs mount when all files were there. Now that I download or copy files to the source dir, I've noticed rar2fs doesn't update my mount once all the files have finished downloading or copying. Is this normal behaviour? Is there no way to invalidate the cache? The only way I can get the content from the rar archives to show up is by remounting the rar2fs mount, but that's no option since I'd have to stop my smbd services constantly to be able to unmount.
| 1.0 | Rar2fs not updating mount when rar files are still being added to folder - Version info:
rar2fs v1.27.2-git8014672 (DLL version 8) Copyright (C) 2009 Hans Beckerus
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it under
certain conditions; see <http://www.gnu.org/licenses/> for details.
FUSE library version: 2.9.9
fusermount version: 2.9.9
using FUSE kernel interface version 7.19
In the past when I used rar2fs, I always moved my folders that contained rar files to the source of my rar2fs mount when all files were there. Now that I download or copy files to the source dir, I've noticed rar2fs doesn't update my mount once all the files have finished downloading or copying. Is this normal behaviour? Is there no way to invalidate the cache? The only way I can get the content from the rar archives to show up is by remounting the rar2fs mount, but that's no option since I'd have to stop my smbd services constantly to be able to unmount.
| priority | not updating mount when rar files are still being added to folder version info dll version copyright c hans beckerus this program comes with absolutely no warranty this is free software and you are welcome to redistribute it under certain conditions see for details fuse library version fusermount version using fuse kernel interface version in the past when i used i always moved my folders that contained rar files to the source of my mount when all files were there now that i download or copy files to the source dir i ve noticed doesn t update my mount once all the files have finished downloading or copying is this normal behaviour is there no way to invalidate the cache the only way i can get the content from the rar archives to show up is by remounting the mount but that s no option since i d have to stop my smbd services constantly to be able to unmount | 1 |
705,811 | 24,249,897,796 | IssuesEvent | 2022-09-27 13:30:53 | submariner-io/subctl | https://api.github.com/repos/submariner-io/subctl | closed | subctl gather should have option to output to stdout | enhancement size:small priority:medium next-version-candidate | **What would you like to be added**:
Add option to `subctl gather` to output results to stdout instead of a file.
**Why is this needed**:
This will allow our CI/CD to run `subctl gather` as part of post-mortem, instead of using custom shell scripts. This way, we will not only be able to test `subctl gather` as part of CI/CD, but any newer additions to data required to troubleshoot will not require changes in multiple places. This will also help enable the use case to run `gather` from a pod within cluster. | 1.0 | subctl gather should have option to output to stdout - **What would you like to be added**:
Add option to `subctl gather` to output results to stdout instead of a file.
**Why is this needed**:
This will allow our CI/CD to run `subctl gather` as part of post-mortem, instead of using custom shell scripts. This way, we will not only be able to test `subctl gather` as part of CI/CD, but any newer additions to data required to troubleshoot will not require changes in multiple places. This will also help enable the use case to run `gather` from a pod within cluster. | priority | subctl gather should have option to output to stdout what would you like to be added add option to subctl gather to output results to stdout instead of a file why is this needed this will allow our ci cd to run subctl gather as part of post mortem instead of using custom shell scripts this way we will not only be able to test subctl gather as part of ci cd but any newer additions to data required to troubleshoot will not require changes in multiple places this will also help enable the use case to run gather from a pod within cluster | 1 |
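subctl is written in Go, so the snippet below is only a language-agnostic illustration (kept in Python like the other sketches added to this section) of the behaviour the record above asks for: let the output destination be a file path or '-' for stdout, so CI and in-cluster pods can capture the report without touching the filesystem. It does not reflect subctl's real flags or internals.
```python
# Illustration of an output option where "-" means "write to stdout".
import argparse
import contextlib
import json
import sys

def gather() -> dict:
    # Stand-in for the real data collection.
    return {"clusters": ["east", "west"], "status": "ok"}

@contextlib.contextmanager
def open_output(path: str):
    if path == "-":
        yield sys.stdout                     # no file involved
    else:
        with open(path, "w", encoding="utf-8") as handle:
            yield handle

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--output", default="-",
                        help="file to write to, '-' for stdout")
    args = parser.parse_args()
    with open_output(args.output) as out:
        json.dump(gather(), out, indent=2)
        out.write("\n")

if __name__ == "__main__":
    main()
```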
7,390 | 2,601,760,642 | IssuesEvent | 2015-02-24 00:34:58 | chrsmith/bwapi | https://api.github.com/repos/chrsmith/bwapi | closed | Add setAlliance, setVision | auto-migrated Milestone-Release NewFeature Offset-hunting Priority-Medium Type-Enhancement | ```
Add the following functions to Game::
void Game::setAlliance(BWAPI::Player player, bool allied);
void Game::setVision(BWAPI::Player player, bool vision);
void Game::setAlliedVictory(bool alliedVictory);
```
-----
Original issue reported on code.google.com by `AHeinerm` on 27 Feb 2011 at 8:16 | 1.0 | Add setAlliance, setVision - ```
Add the following functions to Game::
void Game::setAlliance(BWAPI::Player player, bool allied);
void Game::setVision(BWAPI::Player player, bool vision);
void Game::setAlliedVictory(bool alliedVictory);
```
-----
Original issue reported on code.google.com by `AHeinerm` on 27 Feb 2011 at 8:16 | priority | add setalliance setvision add the following functions to game void game setalliance bwapi player player bool allied void game setvision bwapi player player bool vision void game setalliedvictory bool alliedvictory original issue reported on code google com by aheinerm on feb at | 1 |
666,829 | 22,389,225,645 | IssuesEvent | 2022-06-17 05:27:18 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | In Performance, deactivated field agents are not showing when viewing records started by field agents | 👹Bug Priority: medium | **Bug Description:**
In Performance, when the Register selects an office and clicks on the view link beside Field Agents under the Sources of applications section, deactivated field agents don't show. If Deactivate is selected in the filter, it shows 'No user found'
**Steps:**
1. A deactivated field agent is created in the location of the register
2. log in as the Register
3. Go to Performance
4. From the location filter, select an office
5. Navigate to 'Sources of applications'
6. Click on View beside Field agents
7. Select Deactive in the filter
**Actual Result:**
- Deactivated field agent is not showing
**Expected Result:**
- Deactivated field agents should show if Deactive is selected in the filter
**Screenshot:**

**Tested on:**
https://login.farajaland-qa.opencrvs.org/
**Username & Password Used:**
Username: kennedy.mweene
password: test
**Desktop:**
OS: Windows 10
Browser: Chrome | 1.0 | In Performance, deactivated field agents are not showing when viewing records started by field agents - **Bug Description:**
In Performance, when the Register selects an office and clicks on the view link beside Field Agents under the Sources of applications section, deactivated field agents don't show. If Deactivate is selected in the filter, it shows 'No user found'
**Steps:**
1. A deactivated field agent is created in the location of the register
2. log in as the Register
3. Go to Performance
4. From the location filter, select an office
5. Navigate to 'Sources of applications'
6. Click on View beside Field agents
7. Select Deactive in the filter
**Actual Result:**
- Deactivated field agent is not showing
**Expected Result:**
- Deactivated field agents should show if Deactive is selected in the filter
**Screenshot:**

**Tested on:**
https://login.farajaland-qa.opencrvs.org/
**Username & Password Used:**
Username: kennedy.mweene
password: test
**Desktop:**
OS: Windows 10
Browser: Chrome | priority | in performance deactivated field agents are not showing when viewing records started by field agents bug description in the performance when register selects an office and clicks on the view link beside field agents under the sources of applications section deactivated field agents don t show if deactivate is selected in the filter it shows no user found steps an deactivated field agent is created in the location of the register log in as the register go to performance from the location filter select an office navigate to sources of applications click on view beside field agents select deactive in the filter actual result deactivate field agent is not showing expected result deactivated field agents should show if deactive is selected in the filter screenshot tested on username password used username kennedy mweene password test desktop os windows browser chrome | 1 |
346,289 | 10,410,355,708 | IssuesEvent | 2019-09-13 11:09:52 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | conan create -> conan upload | complex: medium priority: high stage: queue type: look into | It will create a file with the reference created (or the install reference if specified). It will be a pref (without revision).
Together with #5196, this would nicely streamline the typical `conan create` -> `conan upload` CI flow that now requires painful parsing.
@solvingj
| 1.0 | conan create -> conan upload - It will create a file with the reference created (or the install reference if specified). It will be a pref (without revision).
Together with #5196, this would nicely streamline the typical `conan create` -> `conan upload` CI flow that now requires painful parsing.
@solvingj
| priority | conan create conan upload it will create a file with the reference created or the install reference if specified it will be a pref without revision together with would alleviate nicely the conan create conan upload typical ci flow that now requires painful parsings solvingj | 1 |
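A sketch of the CI flow the record above is after, with the parsing pain removed: `conan create` would leave behind a small reference file that the upload step reads back. The file name and its JSON shape are part of the proposed feature and therefore invented here; the `conan upload` flags follow Conan 1.x conventions and should be double-checked against the version in use.
```python
# Hypothetical CI upload step built on the proposed reference file.
import json
import subprocess
from pathlib import Path

REF_FILE = Path("created_reference.json")   # assumed output of `conan create`

def upload_created_package(remote: str = "my-remote") -> None:
    # Assumed format: {"reference": "pkg/1.2.3@user/channel"} without revision.
    reference = json.loads(REF_FILE.read_text())["reference"]
    subprocess.run(
        ["conan", "upload", reference, "--all", "-r", remote, "--confirm"],
        check=True,
    )

if __name__ == "__main__":
    upload_created_package()
```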
57,162 | 3,081,245,130 | IssuesEvent | 2015-08-22 14:35:46 | bitfighter/bitfighter | https://api.github.com/repos/bitfighter/bitfighter | closed | Bitfighter Crashes when hitting Esc in Host/Passwords | bug imported Priority-Medium | _From [corteocarl](https://code.google.com/u/corteocarl/) on February 05, 2014 18:55:11_
What steps will reproduce the problem?
1. Go to Bitfighter menu, then Host a Game
2. Go into the Passwords page
3. Hit escape
What is the expected output? What do you see instead? It crashes :O
What version of the product are you using? On what operating system? 019a 9366:2beb93cf9887 tip
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=386_ | 1.0 | Bitfighter Crashes when hitting Esc in Host/Passwords - _From [corteocarl](https://code.google.com/u/corteocarl/) on February 05, 2014 18:55:11_
What steps will reproduce the problem?
1. Go to Bitfighter menu, then Host a Game
2. Go into the Passwords page
3. Hit escape
What is the expected output? What do you see instead? It crashes :O
What version of the product are you using? On what operating system? 019a 9366:2beb93cf9887 tip
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=386_ | priority | bitfighter crashes when hitting esc in host passwords from on february what steps will reproduce the problem go to bitfighter menu then host a game go into the passwords page hit escape what is the expected output what do you see instead it crashes o what version of the product are you using on what operating system tip original issue | 1 |
328,892 | 10,001,154,303 | IssuesEvent | 2019-07-12 14:59:19 | wazuh/wazuh-splunk | https://api.github.com/repos/wazuh/wazuh-splunk | closed | Granular options for agent configuration reports | delayed enhancement frontend priority/medium | In https://github.com/wazuh/wazuh-splunk/issues/640 we've created a new kind of reports for the Wazuh app, the agent configuration reports, with this ticket we are improving this new feature.
When the user clicks the PDF button, we want to show a dropdown with one check per component; this way the user can customize the report.
Example:
> I want a document where I can see the FIM configuration for the agent 003.
Something like this:

| 1.0 | Granular options for agent configuration reports - In https://github.com/wazuh/wazuh-splunk/issues/640 we've created a new kind of reports for the Wazuh app, the agent configuration reports, with this ticket we are improving this new feature.
When the user clicks the PDF button, we want to show a dropdown with one check per component; this way the user can customize the report.
Example:
> I want a document where I can see the FIM configuration for the agent 003.
Something like this:

| priority | granular options for agent configuration reports in we ve created a new kind of reports for the wazuh app the agent configuration reports with this ticket we are improving this new feature when the user clicks the pdf button we want to show a dropdown with the one check per component this way the user can customize the report example i want a document where i can see the fim configuration for the agent something like this | 1 |
332,039 | 10,083,528,618 | IssuesEvent | 2019-07-25 13:51:57 | open62541/open62541 | https://api.github.com/repos/open62541/open62541 | closed | Server cannot create instance of abstract event, e.g., GeneralModelChangeEventType | Component: Server Priority: Medium Status: Has PR Type: Enhancement | ## Description
My open62541-based server creates and removes nodes in the server AddressSpace at runtime. To notify the client side, I'd like to use the GeneralModelChangeEventType, but the creation of an event of that type fails. I'm using UaExpert V.1.5.1 on the client side.
## Background Information / Reproduction Steps
Trying to create the event using UA_Server_createEvent fails because GeneralModelChangeEventType is abstract. I'd like to point out that the standard says, you are allowed to instantiate abstract EventTypes, as long as they don't appear in the AddressSpace. See OPC UA Part 3, chapter 4.6.2 EventTypes:
> EventTypes defined in this document are specified as abstract and therefore never instantiated in the AddressSpace. Event occurrences of those EventTypes are only exposed via a Subscription. EventTypes exist in the AddressSpace to allow Clients to discover the EventType. This information is used by a client when establishing and working with Event Subscriptions. EventTypes defined by other parts of this series of standards or companion specifications as well as Server specific EventTypes may be defined as not abstract and therefore instances of those EventTypes may be visible in the AddressSpace although Events of those EventTypes are also accessible via the Event Notification mechanisms.
Because UA_Server_createEvent tries to create a node with no parent node, I think this requirement is fulfilled.
Another reference going in that direction can be found in chapter 9.32.7 Guidelines for ModelChangeEvents:
>Two types of ModelChangeEvents are defined: the BaseModelChangeEvent that does not contain any information about the changes and the GeneralModelChangeEvent that identifies the changed Nodes via an array. The precision used depends on both the capability of the OPC UA Server and the nature of the update. An OPC UA Server may use either ModelChangeEvent type depending on circumstances. It may also define subtypes of these EventTypes adding additional information.
In other words: inheriting from GeneralModelChangeEvent is optional, I may also use GeneralModelChangeEvent directly.
I also tried this with the Unified Automation ANSI C Demo Server, which provides methods for dynamic node creation/deletion in its address space at Root\Objects\Demo\0008_DynamicNodes. Those methods instantiate GeneralModelChangeEvents directly.
So IMHO this is a bug, do you agree? Or is there a different way to create instances of those abstract event types?
Used CMake options:
```bash
cmake -DUA_ENABLE_AMALGAMATION=OFF -DUA_NAMESPACE_ZERO=FULL -DUA_ENABLE_SUBSCRIPTIONS_EVENTS=ON
```
Code snippet:
```
UA_NodeId eventID;
// mServer points to a UA_Server object
UA_StatusCode result = UA_Server_createEvent(mServer, UA_NODEID_NUMERIC(0, UA_NS0ID_GENERALMODELCHANGEEVENTTYPE), &eventID);
// UA_STATUSCODE_BADTYPEDEFINITIONINVALID is returned
```
## Checklist
- [x] open62541 Version: commit 21e32dd9973096133a2fcf826809518e5fd0bdfc
- [x] Other OPC UA SDKs used (client or server): UaExpert V1.5.1
- [x] Operating system: Linux
- [ ] Logs (with `UA_LOGLEVEL` set as low as necessary) attached
- [ ] Wireshark network dump attached
- [ ] Self-contained code example attached
- [ ] Critical issue
| 1.0 | Server cannot create instance of abstract event, e.g., GeneralModelChangeEventType - ## Description
My open62541-based server creates and removes nodes in the server AddressSpace at runtime. To notify the client side, I'd like to use the GeneralModelChangeEventType, but the creation of an event of that type fails. I'm using UaExpert V.1.5.1 on the client side.
## Background Information / Reproduction Steps
Trying to create the event using UA_Server_createEvent fails because GeneralModelChangeEventType is abstract. I'd like to point out that the standard says, you are allowed to instantiate abstract EventTypes, as long as they don't appear in the AddressSpace. See OPC UA Part 3, chapter 4.6.2 EventTypes:
> EventTypes defined in this document are specified as abstract and therefore never instantiated in the AddressSpace. Event occurrences of those EventTypes are only exposed via a Subscription. EventTypes exist in the AddressSpace to allow Clients to discover the EventType. This information is used by a client when establishing and working with Event Subscriptions. EventTypes defined by other parts of this series of standards or companion specifications as well as Server specific EventTypes may be defined as not abstract and therefore instances of those EventTypes may be visible in the AddressSpace although Events of those EventTypes are also accessible via the Event Notification mechanisms.
Because UA_Server_createEvent tries to create a node with no parent node, I think this requirement is fulfilled.
Another reference going in that direction can be found in chapter 9.32.7 Guidelines for ModelChangeEvents:
>Two types of ModelChangeEvents are defined: the BaseModelChangeEvent that does not contain any information about the changes and the GeneralModelChangeEvent that identifies the changed Nodes via an array. The precision used depends on both the capability of the OPC UA Server and the nature of the update. An OPC UA Server may use either ModelChangeEvent type depending on circumstances. It may also define subtypes of these EventTypes adding additional information.
In other words: inheriting from GeneralModelChangeEvent is optional, I may also use GeneralModelChangeEvent directly.
I also tried this with the Unified Automation ANSI C Demo Server, which provides methods for dynamic node creation/deletion in its address space at Root\Objects\Demo\0008_DynamicNodes. Those methods instantiate GeneralModelChangeEvents directly.
So IMHO this is a bug, do you agree? Or is there a different way to create instances of those abstract event types?
Used CMake options:
```bash
cmake -DUA_ENABLE_AMALGAMATION=OFF -DUA_NAMESPACE_ZERO=FULL -DUA_ENABLE_SUBSCRIPTIONS_EVENTS=ON
```
Code snippet:
```
UA_NodeId eventID;
// mServer points to a UA_Server object
UA_StatusCode result = UA_Server_createEvent(mServer, UA_NODEID_NUMERIC(0, UA_NS0ID_GENERALMODELCHANGEEVENTTYPE), &eventID);
// UA_STATUSCODE_BADTYPEDEFINITIONINVALID is returned
```
## Checklist
- [x] open62541 Version: commit 21e32dd9973096133a2fcf826809518e5fd0bdfc
- [x] Other OPC UA SDKs used (client or server): UaExpert V1.5.1
- [x] Operating system: Linux
- [ ] Logs (with `UA_LOGLEVEL` set as low as necessary) attached
- [ ] Wireshark network dump attached
- [ ] Self-contained code example attached
- [ ] Critical issue
| priority | server cannot create instance of abstract event e g generalmodelchangeeventtype description my based server creates and removes nodes in the server addressspace at runtime to notify the client side i d like to use the generalmodelchangeeventtype but the creation of an event of that type fails i m using uaexpert v on the client side background information reproduction steps trying to create the event using ua server createevent fails because generalmodelchangeeventtype is abstract i d like to point out that the standard says you are allowed to instantiate abstract eventtypes as long as they don t appear in the addressspace see opc ua part chapter eventtypes eventtypes defined in this document are specified as abstract and therefore never instantiated in the addressspace event occurrences of those eventtypes are only exposed via a subscription eventtypes exist in the addressspace to allow clients to discover the eventtype this information is used by a client when establishing and working with event subscriptions eventtypes defined by other parts of this series of standards or companion specifications as well as server specific eventtypes may be defined as not abstract and therefore instances of those eventtypes may be visible in the addressspace although events of those eventtypes are also accessible via the event notification mechanisms because ua server createevent tries to create a node with no parent node i think this requirement is fulfilled another reference going in that direction can be found in chapter guidelines for modelchangeevents two types of modelchangeevents are defined the basemodelchangeevent that does not contain any information about the changes and the generalmodelchangeevent that identifies the changed nodes via an array the precision used depends on both the capability of the opc ua server and the nature of the update an opc ua server may use either modelchangeevent type depending on circumstances it may also define subtypes of these eventtypes adding additional information in other words inheriting from generalmodelchangeevent is optional i may also use generalmodelchangeevent directly i also tried this with the unified automation ansi c demo server which provides methods for dynamic node creation deletion in its address space at root objects demo dynamicnodes those methods instantiate generalmodelchangeevents directly so imho this is a bug do you agree or is there a different way to create instances of those abstract event types used cmake options bash cmake dua enable amalgamation off dua namespace zero full dua enable subscriptions events on code snippet ua nodeid eventid mserver points to a ua server object ua statuscode result ua server createevent mserver ua nodeid numeric ua generalmodelchangeeventtype eventid ua statuscode badtypedefinitioninvalid is returned checklist version commit other opc ua sdks used client or server uaexpert operating system linux logs with ua loglevel set as low as necessary attached wireshark network dump attached self contained code example attached critical issue | 1 |
184,950 | 6,717,618,545 | IssuesEvent | 2017-10-14 23:51:29 | google/error-prone | https://api.github.com/repos/google/error-prone | closed | Unintended type parameters named "String," etc. | migrated Priority-Medium Type-NewCheck | _[Original issue](https://code.google.com/p/error-prone/issues/detail?id=156) created by **cpovirk@google.com** on 2013-07-03 at 01:22 AM_
---
Occasionally, a user who intends to define a class that implements a specialized generic interface will inadvertently write "<String>" twice, once for the interface he's implementing and once for his own class. For example, note the "MyComparator<String>" below:
static class MyComparator<String> implements Comparator<String> {
@Override public int compare(String a, String b) {
return compareStrings(a, b);
}
}
This can lead to confusing errors like "incompatible types required: String found: java.lang.String," as in this StackOverflow question:
http://stackoverflow.com/q/8443892/28465
An error-prone check for type parameters named "<String>" would probably catch 95% of these errors. The check _might_ get a few more percent by looking for other common JDK class names, but I suspect that most of the remaining 5% would instead happen with custom user classes.
(In an ideal world, I'd be inclined to reject any type parameter with a ClassNameStyle name, conveniently catching all these errors, but that's unrealistic.)
Here are a couple searches that turn up instances of this problem in Google code. (One of those instance is about to be fixed. That fix is what led me to file this feature request.)
'(class|interface) \w+<String>'
'(class|interface) \w+<[A-Z][a-z]\w+>\s+implements Comparator'
| 1.0 | Unintended type parameters named "String," etc. - _[Original issue](https://code.google.com/p/error-prone/issues/detail?id=156) created by **cpovirk@google.com** on 2013-07-03 at 01:22 AM_
---
Occasionally, a user who intends to define a class that implements a specialized generic interface will inadvertently write "<String>" twice, once for the interface he's implementing and once for his own class. For example, note the "MyComparator<String>" below:
static class MyComparator<String> implements Comparator<String> {
@Override public int compare(String a, String b) {
return compareStrings(a, b);
}
}
This can lead to confusing errors like "incompatible types required: String found: java.lang.String," as in this StackOverflow question:
http://stackoverflow.com/q/8443892/28465
An error-prone check for type parameters named "<String>" would probably catch 95% of these errors. The check _might_ get a few more percent by looking for other common JDK class names, but I suspect that most of the remaining 5% would instead happen with custom user classes.
(In an ideal world, I'd be inclined to reject any type parameter with a ClassNameStyle name, conveniently catching all these errors, but that's unrealistic.)
Here are a couple searches that turn up instances of this problem in Google code. (One of those instance is about to be fixed. That fix is what led me to file this feature request.)
'(class|interface) \w+<String>'
'(class|interface) \w+<[A-Z][a-z]\w+>\s+implements Comparator'
| priority | unintended type parameters named string etc created by cpovirk google com on at am occasionally a user who intends to define a class that implements a specialized generic interface will inadvertently write lt string gt twice once for the interface he s implementing and once for his own class for example note the mycomparator lt string gt below nbsp nbsp static class mycomparator lt string gt implements comparator lt string gt nbsp nbsp nbsp nbsp override public int compare string a string b nbsp nbsp nbsp nbsp nbsp nbsp return comparestrings a b nbsp nbsp nbsp nbsp nbsp nbsp this can lead to confusing errors like incompatible types required string found java lang string as in this stackoverflow question an error prone check for type parameters named lt string gt would probably catch of these errors the check might get a few more percent by looking for other common jdk class names but i suspect that most of the remaining would instead happen with custom user classes in an ideal world i d be inclined to reject any type parameter with a classnamestyle name conveniently catching all these errors but that s unrealistic here are a couple searches that turn up instances of this problem in google code one of those instance is about to be fixed that fix is what led me to file this feature request class interface w lt string gt class interface w lt w gt s implements comparator | 1 |
143,425 | 5,515,958,154 | IssuesEvent | 2017-03-17 18:46:23 | certificate-helper/TLS-Inspector | https://api.github.com/repos/certificate-helper/TLS-Inspector | opened | Deintegrate Cocoapods | enhancement medium priority | Cocoapods, while useful, is harmful for the community in their abuse of Github as a CDN. Furthermore, it adds an unnecessary extra barrier for users to compile the app.
As part of our migration to use OpenSSL 1.1.0 series we will need to use an embedded framework, rather than a Pod. That leaves only a single other pod (MBProgressHUD) that can simply be embedded. | 1.0 | Deintegrate Cocoapods - Cocoapods, while useful, is harmful for the community in their abuse of Github as a CDN. Furthermore, it adds an unnecessary extra barrier for users to compile the app.
As part of our migration to use OpenSSL 1.1.0 series we will need to use an embedded framework, rather than a Pod. That leaves only a single other pod (MBProgressHUD) that can simply be embedded. | priority | deintegrate cocoapods cocoapods while useful is harmful for the community in their abuse of github as a cdn furthermore it adds an unnecessary extra barrier for users to compile the app as part of our migration to use openssl series we will need to use an embedded framework rather than a pod that leaves only a single other pod mbprogresshud that can simply be embedded | 1 |
618,390 | 19,434,082,790 | IssuesEvent | 2021-12-21 15:09:56 | canonical/hotsos | https://api.github.com/repos/canonical/hotsos | closed | analyse openstack live migrations | plugin:openstack priority:MEDIUM | it should be possible to identify migration sequences from the logs and do some analysis on them | 1.0 | analyse openstack live migrations - it should be possible to identify migration sequences from the logs and do some analysis on them | priority | analyse openstack live migrations it should be possible to identify migration sequences from the logs and do some analysis on them | 1 |
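For the hotsos record above, a minimal sketch of what identifying migration sequences from the logs could look like: scan a nova-compute log for lines that mention an instance UUID together with migration keywords and group them per instance. The keywords and log layout are assumptions for illustration; they are not the actual nova message strings or the hotsos plugin API.
```python
# Sketch: group migration-looking nova-compute log lines by instance UUID.
import re
import sys
from collections import defaultdict

UUID = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}")
KEYWORDS = ("live_migration", "live-migration",
            "Migrating instance", "Migration operation")

def migration_events(path: str) -> dict:
    events = defaultdict(list)
    with open(path, errors="replace") as log:
        for line in log:
            if not any(word in line for word in KEYWORDS):
                continue
            match = UUID.search(line)
            if match:
                events[match.group(0)].append(line.rstrip())
    return events

if __name__ == "__main__":
    for instance, lines in migration_events(sys.argv[1]).items():
        print(f"{instance}: {len(lines)} migration-related lines")
```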
416,446 | 12,146,594,174 | IssuesEvent | 2020-04-24 11:27:47 | loadimpact/jmeter-to-k6 | https://api.github.com/repos/loadimpact/jmeter-to-k6 | closed | Error: Unrecognized element: ResultCollector | Priority: Medium Status: Awaiting User Type: Bug | The tool is throwing an error on meeting ResultCollector element in jmx file:
```bash
(node:53160) UnhandledPromiseRejectionWarning: Error: Unrecognized element: ResultCollector
at element (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element.js:50:27)
at module.exports (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element.js:1:40)
at elements (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/elements.js:17:43)
at module.exports (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/elements.js:1:40)
at Object.hashTree (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element/hashTree.js:4:10)
at element (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element.js:53:36)
at module.exports (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element.js:1:40)
at elements (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/elements.js:17:43)
at module.exports (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/elements.js:1:40)
at Object.TestPlan (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element/TestPlan.js:14:17)
(node:53160) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:53160) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
Example of the snippet:
```xml
<ResultCollector guiclass="StatVisualizer" testclass="ResultCollector" testname="Aggregate Report" enabled="false">
<boolProp name="ResultCollector.error_logging">false</boolProp>
<objProp>
<name>saveConfig</name>
<value class="SampleSaveConfiguration">
<time>true</time>
<latency>true</latency>
<timestamp>true</timestamp>
<success>true</success>
<label>true</label>
<code>true</code>
<message>true</message>
<threadName>true</threadName>
<dataType>true</dataType>
<encoding>false</encoding>
<assertions>true</assertions>
<subresults>true</subresults>
<responseData>false</responseData>
<samplerData>false</samplerData>
<xml>false</xml>
<fieldNames>true</fieldNames>
<responseHeaders>false</responseHeaders>
<requestHeaders>false</requestHeaders>
<responseDataOnError>false</responseDataOnError>
<saveAssertionResultsFailureMessage>true</saveAssertionResultsFailureMessage>
<assertionsResultsToSave>0</assertionsResultsToSave>
<bytes>true</bytes>
<threadCounts>true</threadCounts>
<idleTime>true</idleTime>
</value>
</objProp>
<stringProp name="filename">/tmp/aggregate-jmeter-results.jtl</stringProp>
<stringProp name="TestPlan.comments">mpaf/tool/fragments/ce/aggregate_report.jmx</stringProp></ResultCollector>
``` | 1.0 | Error: Unrecognized element: ResultCollector - The tool is throwing an error on meeting ResultCollector element in jmx file:
```bash
(node:53160) UnhandledPromiseRejectionWarning: Error: Unrecognized element: ResultCollector
at element (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element.js:50:27)
at module.exports (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element.js:1:40)
at elements (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/elements.js:17:43)
at module.exports (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/elements.js:1:40)
at Object.hashTree (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element/hashTree.js:4:10)
at element (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element.js:53:36)
at module.exports (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element.js:1:40)
at elements (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/elements.js:17:43)
at module.exports (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/elements.js:1:40)
at Object.TestPlan (/Users/glushko/.nvm/versions/node/v12.14.1/lib/node_modules/jmeter-to-k6/src/element/TestPlan.js:14:17)
(node:53160) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:53160) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
Example of the snippet:
```xml
<ResultCollector guiclass="StatVisualizer" testclass="ResultCollector" testname="Aggregate Report" enabled="false">
<boolProp name="ResultCollector.error_logging">false</boolProp>
<objProp>
<name>saveConfig</name>
<value class="SampleSaveConfiguration">
<time>true</time>
<latency>true</latency>
<timestamp>true</timestamp>
<success>true</success>
<label>true</label>
<code>true</code>
<message>true</message>
<threadName>true</threadName>
<dataType>true</dataType>
<encoding>false</encoding>
<assertions>true</assertions>
<subresults>true</subresults>
<responseData>false</responseData>
<samplerData>false</samplerData>
<xml>false</xml>
<fieldNames>true</fieldNames>
<responseHeaders>false</responseHeaders>
<requestHeaders>false</requestHeaders>
<responseDataOnError>false</responseDataOnError>
<saveAssertionResultsFailureMessage>true</saveAssertionResultsFailureMessage>
<assertionsResultsToSave>0</assertionsResultsToSave>
<bytes>true</bytes>
<threadCounts>true</threadCounts>
<idleTime>true</idleTime>
</value>
</objProp>
<stringProp name="filename">/tmp/aggregate-jmeter-results.jtl</stringProp>
<stringProp name="TestPlan.comments">mpaf/tool/fragments/ce/aggregate_report.jmx</stringProp></ResultCollector>
``` | priority | error unrecognized element resultcollector the tool is throwing an error on meeting resultcollector element in jmx file bash node unhandledpromiserejectionwarning error unrecognized element resultcollector at element users glushko nvm versions node lib node modules jmeter to src element js at module exports users glushko nvm versions node lib node modules jmeter to src element js at elements users glushko nvm versions node lib node modules jmeter to src elements js at module exports users glushko nvm versions node lib node modules jmeter to src elements js at object hashtree users glushko nvm versions node lib node modules jmeter to src element hashtree js at element users glushko nvm versions node lib node modules jmeter to src element js at module exports users glushko nvm versions node lib node modules jmeter to src element js at elements users glushko nvm versions node lib node modules jmeter to src elements js at module exports users glushko nvm versions node lib node modules jmeter to src elements js at object testplan users glushko nvm versions node lib node modules jmeter to src element testplan js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code example of the snippet xml false saveconfig true true true true true true true true true false true true false false false true false false false true true true true tmp aggregate jmeter results jtl mpaf tool fragments ce aggregate report jmx | 1 |
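Until the converter learns to skip listeners, one workaround for the record above (sketched here, not a feature of jmeter-to-k6 itself) is to strip every ResultCollector from the plan before conversion. Because the JMX format pairs each element with the hashTree that follows it, that sibling has to go too.
```python
# Workaround sketch: drop ResultCollector listeners (plus their paired hashTree
# siblings) from a JMX file so the converter never sees them.
import sys
import xml.etree.ElementTree as ET

def strip_result_collectors(src: str, dst: str) -> int:
    tree = ET.parse(src)
    # Snapshot all elements first; ElementTree has no parent pointers and we
    # are about to mutate the tree.
    parents = list(tree.getroot().iter())
    removed = 0
    for parent in parents:
        children = list(parent)
        for index, child in enumerate(children):
            if child.tag != "ResultCollector":
                continue
            parent.remove(child)
            # Each JMX element is followed by its own hashTree; drop that too.
            if index + 1 < len(children) and children[index + 1].tag == "hashTree":
                parent.remove(children[index + 1])
            removed += 1
    tree.write(dst, encoding="utf-8", xml_declaration=True)
    return removed

if __name__ == "__main__":
    count = strip_result_collectors(sys.argv[1], sys.argv[2])
    print(f"removed {count} ResultCollector element(s)")
```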
641,060 | 20,816,693,319 | IssuesEvent | 2022-03-18 11:05:28 | AY2122S2-CS2103T-T13-4/tp | https://api.github.com/repos/AY2122S2-CS2103T-T13-4/tp | closed | Rename TAG to MODULE | priority.MEDIUM type.Enhancement | All instances of `TAG` will be renamed to `MODULE` to better fit the context of our application. | 1.0 | Rename TAG to MODULE - All instances of `TAG` will be renamed to `MODULE` to better fit the context of our application. | priority | rename tag to module all instances of tag will be renamed to module to better fit the context of our application | 1 |
378,994 | 11,211,821,714 | IssuesEvent | 2020-01-06 16:13:49 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | ZeroXTrade.doTrade should return actual amount traded. | Priority: Medium V2 Audit | https://github.com/AugurProject/augur/blob/057c31ddf1124acdcb200ed6ce425834fd86238c/packages/augur-core/source/contracts/trading/ZeroXTrade.sol#L255-L269
This function is expected to return the amount of the asset that was traded, but instead it returns the amount of the asset that was _requested_ to be traded. At a glance I wasn't able to spot an immediate vulnerability from this (@epheph may have better luck) since `exchange.fillOrder(...)` _should_ make it so `_amount` is always exactly equal to the amount traded. However, from a code health standpoint the current code is very dangerous since there is an implicit assumption in this function that the caller has already established that `_amout` is guaranteed to be exactly equal to the final amount traded.
Recommendation: Return the result of `fillOrder.fillZeroXOrder`, which does contain the amount actually traded. | 1.0 | ZeroXTrade.doTrade should return actual amount traded. - https://github.com/AugurProject/augur/blob/057c31ddf1124acdcb200ed6ce425834fd86238c/packages/augur-core/source/contracts/trading/ZeroXTrade.sol#L255-L269
This function is expected to return the amount of the asset that was traded, but instead it returns the amount of the asset that was _requested_ to be traded. At a glance I wasn't able to spot an immediate vulnerability from this (@epheph may have better luck) since `exchange.fillOrder(...)` _should_ make it so `_amount` is always exactly equal to the amount traded. However, from a code health standpoint the current code is very dangerous since there is an implicit assumption in this function that the caller has already established that `_amout` is guaranteed to be exactly equal to the final amount traded.
Recommendation: Return the result of `fillOrder.fillZeroXOrder`, which does contain the amount actually traded. | priority | zeroxtrade dotrade should return actual amount traded this function is expected to return the amount of the asset that was traded but instead it returns the amount of the asset that was requested to be traded at a glance i wasn t able to spot an immediate vulnerability from this epheph may have better luck since exchange fillorder should make it so amount is always exactly equal to the amount traded however from a code health standpoint the current code is very dangerous since there is an implicit assumption in this function that the caller has already established that amout is guaranteed to be exactly equal to the final amount traded recommendation return the result of fillorder fillzeroxorder which does contain the amount actually traded | 1 |
375,288 | 11,102,314,626 | IssuesEvent | 2019-12-16 23:38:03 | grimeyg/wheel-of-fortune | https://api.github.com/repos/grimeyg/wheel-of-fortune | closed | Create Player class and initial structure. | Functionality Iteration 0 Priority: Medium | Should include:
- Name (assigned via parameter)
- totalScore (default to 0)
- roundScore (default to 0)
Category (default to none)
Number of words (default to one)
Description (string)
Answer (string) | 1.0 | Create Player class and initial structure. - Should include:
- Name (assigned via parameter)
- totalScore (default to 0)
- roundScore (default to 0)
Category (default to none)
Number of words (default to one)
Description (string)
Answer (string) | priority | create player class and initial structure should include name assigned via parameter totalscore default to roundscore default to category default to none number of words default to one description string answer string | 1 |
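The record above lists the fields the initial Player structure should carry. The repository's actual language and naming conventions are not shown in the record, so the sketch below is a neutral illustration in Python; all fields are kept on one class to mirror the list literally, even though the last four read more like attributes of the puzzle than of the player.
```python
# Illustrative sketch of the fields listed in the record above.
from dataclasses import dataclass

@dataclass
class Player:
    name: str                  # assigned via parameter
    total_score: int = 0       # default to 0
    round_score: int = 0       # default to 0
    category: str = "none"     # default to none
    num_words: int = 1         # default to one
    description: str = ""
    answer: str = ""

if __name__ == "__main__":
    player = Player("Pat")
    player.round_score += 500
    player.total_score += player.round_score
    print(player)
```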
375,327 | 11,102,703,293 | IssuesEvent | 2019-12-17 01:02:34 | Seneca-CDOT/Osteppy | https://api.github.com/repos/Seneca-CDOT/Osteppy | opened | /eod should but today's date in title | difficulty: medium priority: medium type: enhancement | /eod slash command should have today's date in the title (today as in before 10AM).
Maybe add a /eod_past for people who need to submit eod's for previous days. | 1.0 | /eod should but today's date in title - /eod slash command should have today's date in the title (today as in before 10AM).
Maybe add a /eod_past for people who need to submit eod's for previous days. | priority | eod should but today s date in title eod slash command should have today s date in the title today as in before maybe add a eod past for people who need to submit eod s for previous days | 1 |
542,451 | 15,860,647,498 | IssuesEvent | 2021-04-08 09:24:14 | Team1-TeliaProject/team1_FrontEnd | https://api.github.com/repos/Team1-TeliaProject/team1_FrontEnd | closed | [Feat] Job Component for Employee | New feature Priority: Medium | A component that will provide job details at a glance for employee user. | 1.0 | [Feat] Job Component for Employee - A component that will provide job details at a glance for employee user. | priority | job component for employee a component that will provide job details at a glance for employee user | 1 |
82,595 | 3,615,820,446 | IssuesEvent | 2016-02-07 01:09:55 | Solinea/goldstone-server | https://api.github.com/repos/Solinea/goldstone-server | closed | event browser does not sort columns | component: ui priority 3: medium type: enhancement | The event browser (`http://goldstoneip:8888/#reports/eventbrowser`) does not allow the table to be sorted by column. Using Chrome 48.0.2564.82 (64-bit) on Mac.
<img width="592" alt="goldstone" src="https://cloud.githubusercontent.com/assets/9674/12668520/d7ec0a8e-c69a-11e5-8ac4-4c8ba2757167.png">
| 1.0 | event browser does not sort columns - The event browser (`http://goldstoneip:8888/#reports/eventbrowser`) does not allow the table to be sorted by column. Using Chrome 48.0.2564.82 (64-bit) on Mac.
<img width="592" alt="goldstone" src="https://cloud.githubusercontent.com/assets/9674/12668520/d7ec0a8e-c69a-11e5-8ac4-4c8ba2757167.png">
| priority | event browser does not sort columns the event browser does not allow the table to be sorted by column using chrome bit on mac img width alt goldstone src | 1 |
515,001 | 14,948,470,861 | IssuesEvent | 2021-01-26 10:07:29 | epiphany-platform/m-azure-basic-infrastructure | https://api.github.com/repos/epiphany-platform/m-azure-basic-infrastructure | closed | fix pipeline to be triggered only from organisation members since for tests azure resources are required | priority/medium type/improvement | Current way is possible security issue.
We should use it like this: https://github.com/microsoft/azure-pipelines-yaml/blob/master/design/pipeline-triggers.md#examples-pr-triggers
Consider adding these options:
"allowSecrets":false (DON'T Make secrets available to builds of forks)
"requireCommentsForNonTeamMembersOnly":true
See more info [here](https://docs.microsoft.com/en-us/azure/devops/pipelines/repos/github?view=azure-devops&tabs=yaml#ci-triggers) | 1.0 | fix pipeline to be triggered only from organisation members since for tests azure resources are required - Current way is possible security issue.
We should
Use it like this: https://github.com/microsoft/azure-pipelines-yaml/blob/master/design/pipeline-triggers.md#examples-pr-triggers
Consider to add options:
"allowSecrets":false (DON'T Make secrets available to builds of forks)
"requireCommentsForNonTeamMembersOnly":true
See more info [here](https://docs.microsoft.com/en-us/azure/devops/pipelines/repos/github?view=azure-devops&tabs=yaml#ci-triggers) | priority | fix pipeline to be triggered only from organisation members since for tests azure resources are required current way is possible security issue we should use it like this consider to add options allowsecrets false don t make secrets available to builds of forks requirecommentsfornonteammembersonly true see more info | 1 |
699,703 | 24,028,666,793 | IssuesEvent | 2022-09-15 13:32:01 | owncloud/ocis | https://api.github.com/repos/owncloud/ocis | closed | oCIS fails to start with `MICRO_REGISTRY` set to etcd | Type:Bug Priority:p3-medium | As already discussed in the chat, my local build of ocis (recent versions) fails to start with the following error message:
```
2022-05-01T13:40:49+02:00 INF running on 6 cpus service=ocis
{"level":"fatal","time":"2022-05-01T13:40:49+02:00","message":"Error configuring broker: cannot init while connected"}
Process 10537 has exited with status 1
```
The http broker of go-micro that we use by default checks in the Init method if the broker is already running: https://github.com/asim/go-micro/blob/master/broker/http.go#L462
While this patch:
```
diff --git a/broker/http.go b/broker/http.go
index 64388f24..07018a7f 100644
--- a/broker/http.go
+++ b/broker/http.go
@@ -5,7 +5,6 @@ import (
"bytes"
"context"
"crypto/tls"
- "errors"
"fmt"
"io"
"math/rand"
@@ -461,7 +460,7 @@ func (h *httpBroker) Init(opts ...Option) error {
h.RLock()
if h.running {
h.RUnlock()
- return errors.New("cannot init while connected")
+ return nil // errors.New("cannot init while connected")
}
h.RUnlock()
```
fixes the problem, this has a **smell**.
Some questions that need to be clarified:
- How many brokers should be started when running in a single binary?
- As I understand the code, I guess it should be only one. In that case, I don't think it makes sense to call the `Init` function of the Broker again for each service using the broker.
- What consequences does my proposed patch have?
- Why do only I see that problem on my machine :fearful:, it seems to work for anyone else?
I would feel better if I understood this problem ;-)
@refs let me reference you here as a go micro maintainer. | 1.0 | oCIS fails to start with `MICRO_REGISTRY` set to etcd - As already discussed in the chat, my local build of ocis (recent versions) fails to start with the following error message:
```
2022-05-01T13:40:49+02:00 INF running on 6 cpus service=ocis
{"level":"fatal","time":"2022-05-01T13:40:49+02:00","message":"Error configuring broker: cannot init while connected"}
Process 10537 has exited with status 1
```
The http broker of go-micro that we use by default checks in the Init method if the broker is already running: https://github.com/asim/go-micro/blob/master/broker/http.go#L462
While this patch:
```
diff --git a/broker/http.go b/broker/http.go
index 64388f24..07018a7f 100644
--- a/broker/http.go
+++ b/broker/http.go
@@ -5,7 +5,6 @@ import (
"bytes"
"context"
"crypto/tls"
- "errors"
"fmt"
"io"
"math/rand"
@@ -461,7 +460,7 @@ func (h *httpBroker) Init(opts ...Option) error {
h.RLock()
if h.running {
h.RUnlock()
- return errors.New("cannot init while connected")
+ return nil // errors.New("cannot init while connected")
}
h.RUnlock()
```
fixes the problem, this has a **smell**.
Some questions that need to be clarified:
- How many brokers should be started when running in a single binary?
- As I understand the code, I guess it should be only one. In that case, I don't think it makes sense to call the `Init` function of the Broker again for each service using the broker.
- What consequences does my proposed patch have?
- Why do only I see that problem on my machine :fearful:, it seems to work for anyone else?
I would feel better if I understood this problem ;-)
@refs let me reference you here as a go micro maintainer. | priority | ocis fails to start with micro registry set to etcd as already discussed in the chat my local build of ocis recent versions fails to start with the following error message inf running on cpus service ocis level fatal time message error configuring broker cannot init while connected process has exited with status the http broker of go micro that we use by default checks in the init method if the broker is already running while this patch diff git a broker http go b broker http go index a broker http go b broker http go import bytes context crypto tls errors fmt io math rand func h httpbroker init opts option error h rlock if h running h runlock return errors new cannot init while connected return nil errors new cannot init while connected h runlock fixes the problem this has a smell some questions that need to be clarified how many brokers should be started when running in a single binary as i understand the code i guess it should be only one in that case i don t think it makes sense to call the init function of the broker again for each service using the broker what consequences does my proposed patch have why do only i see that problem on my machine fearful it seems to work for anyone else i would feel better if i understood this problem refs let me reference you here as a go micro maintainer | 1 |
226,925 | 7,524,791,251 | IssuesEvent | 2018-04-13 08:26:41 | eriq-augustine/psl | https://api.github.com/repos/eriq-augustine/psl | closed | Expand Literal Constants In Parser | Difficulty - Medium Interfaces - Parser Priority - Normal Type - Enhancement | We have been debating what to allow in constant literals for a while (#97, #98).
Particularly, what non-alphanumeric characters to let in.
I cannot think of a reason to not just expand it to all characters.
Lets go with C-style quote escapes rather than SQL-style.
--- Original Issue (@binh-vu) ---
In ```PSL.g4```
```
constant
: SINGLE_QUOTE IDENTIFIER SINGLE_QUOTE
| DOUBLE_QUOTE IDENTIFIER DOUBLE_QUOTE
;
```
Since a constant is quoted within single double or double quote, it would be better if user can use any character instead of just letter and digit (e.g: using delimiter to make constant easier to read and debug)
In addition to this, may be lots of people will get confuse if they have this error: ```org.antlr.v4.runtime.NoViableAltException``` because of constant parsing. | 1.0 | Expand Literal Constants In Parser - We have been debating what to allow in constant literals for a while (#97, #98).
Particularly, what non-alphanumeric characters to let in.
I cannot think of a reason to not just expand it to all characters.
Lets go with C-style quote escapes rather than SQL-style.
--- Original Issue (@binh-vu) ---
In ```PSL.g4```
```
constant
: SINGLE_QUOTE IDENTIFIER SINGLE_QUOTE
| DOUBLE_QUOTE IDENTIFIER DOUBLE_QUOTE
;
```
Since a constant is quoted within single double or double quote, it would be better if user can use any character instead of just letter and digit (e.g: using delimiter to make constant easier to read and debug)
In addition to this, may be lots of people will get confuse if they have this error: ```org.antlr.v4.runtime.NoViableAltException``` because of constant parsing. | priority | expand literal constants in parser we have been debating what to allow in constant literals for a while particularly what non alphanumeric characters to let in i cannot think of a reason to not just expand it to all characters lets go with c style quote escapes rather than sql style original issue binh vu in psl constant single quote identifier single quote double quote identifier double quote since a constant is quoted within single double or double quote it would be better if user can use any character instead of just letter and digit e g using delimiter to make constant easier to read and debug in addition to this may be lots of people will get confuse if they have this error org antlr runtime noviablealtexception because of constant parsing | 1 |
499,281 | 14,444,013,896 | IssuesEvent | 2020-12-07 20:33:32 | Seneca-CDOT/plumadriver | https://api.github.com/repos/Seneca-CDOT/plumadriver | closed | Refactor types relating to Express Request | Difficulty: Medium Priority: Medium Type: Refactor/Cleanup | **What would you like to be changed**:
Refactor the types relating to express `request`
**Why is this needed**:
All the routes and endpoints share the same params object, therefore they are currently typed as optional. This makes us have to do redundant checking in some places where the params are guarantee to exist. | 1.0 | Refactor types relating to Express Request - **What would you like to be changed**:
Refactor the types relating to express `request`
**Why is this needed**:
All the routes and endpoints share the same params object, therefore they are currently typed as optional. This makes us have to do redundant checking in some places where the params are guarantee to exist. | priority | refactor types relating to express request what would you like to be changed refactor the types relating to express request why is this needed all the routes and endpoints share the same params object therefore they are currently typed as optional this makes us have to do redundant checking in some places where the params are guarantee to exist | 1 |
253,028 | 8,050,247,106 | IssuesEvent | 2018-08-01 12:54:56 | vuejs/vue-devtools | https://api.github.com/repos/vuejs/vue-devtools | closed | [Firefox] "Proxy injection failed" while using Responsive Design Mode. | bug priority: medium | **Firefox**: 54.0.1 (64-bit)
**Vue,js Devtools**: 3.1.6
**Vue.js**: 2.4.2
When using Firefox Responsive Design Mode, Vue.js Devtools is not initializing.
It's giving me the response: `Proxy injection failed`.
No error is given in the console, but 3 of our projects that's using Vue is giving me the same response.
One of these projects is using Sentry.io logging and we received an error:
```
TypeError: can't redefine non-configurable property "__VUE_DEVTOOLS_GLOBAL_HOOK__"
```
```
in installHook at line 66:3
```
Steps to reproduce:
1. Use Vue.js in your application
2. Have the official Vue.js Devtools installed in Firefox
3. Enable Responsive Design Mode
4. Open the Vue.js Devtools tab | 1.0 | [Firefox] "Proxy injection failed" while using Responsive Design Mode. - **Firefox**: 54.0.1 (64-bit)
**Vue,js Devtools**: 3.1.6
**Vue.js**: 2.4.2
When using Firefox Responsive Design Mode, Vue.js Devtools is not initializing.
It's giving me the response: `Proxy injection failed`.
No error is given in the console, but 3 of our projects that's using Vue is giving me the same response.
One of these projects is using Sentry.io logging and we received an error:
```
TypeError: can't redefine non-configurable property "__VUE_DEVTOOLS_GLOBAL_HOOK__"
```
```
in installHook at line 66:3
```
Steps to reproduce:
1. Use Vue.js in your application
2. Have the official Vue.js Devtools installed in Firefox
3. Enable Responsive Design Mode
4. Open the Vue.js Devtools tab | priority | proxy injection failed while using responsive design mode firefox bit vue js devtools vue js when using firefox responsive design mode vue js devtools is not initializing it s giving me the response proxy injection failed no error is given in the console but of our projects that s using vue is giving me the same response one of these projects is using sentry io logging and we received an error typeerror can t redefine non configurable property vue devtools global hook in installhook at line steps to reproduce use vue js in your application have the official vue js devtools installed in firefox enable responsive design mode open the vue js devtools tab | 1 |
233,351 | 7,696,874,760 | IssuesEvent | 2018-05-18 16:43:42 | Marri/glowfic | https://api.github.com/repos/Marri/glowfic | opened | Subcontinuity descriptions | 3. medium priority 7. easy type: new feature | Whole continuities get descriptions; it'd be nice to allow them on the subcontinuities, too. | 1.0 | Subcontinuity descriptions - Whole continuities get descriptions; it'd be nice to allow them on the subcontinuities, too. | priority | subcontinuity descriptions whole continuities get descriptions it d be nice to allow them on the subcontinuities too | 1 |
826,355 | 31,592,176,662 | IssuesEvent | 2023-09-05 00:12:26 | unstructuredstudio/zubhub | https://api.github.com/repos/unstructuredstudio/zubhub | reopened | Feature Request: Verification Email not sent upon User Signup | enhancement feature medium priority | As a user, I recently signed up for the service but did not receive any verification email.
Therefore, I would like to request the addition of a new feature where an email is automatically sent to users upon successful registration.
Improved User Experience: By sending a verification email, users will have a better experience and will be able to use the service promptly. The verification email will ensure that their email address is valid and they can be contacted by the service.
Reduced Support Requests: Without a verification email, users may have difficulty in accessing the service, and this could lead to an increase in support requests. By sending a verification email.
Enhanced Security: Sending a verification email helps to ensure that the user is genuine and has access to the email account they have provided. This helps to reduce the risk of fraudulent or spam accounts being created on the service.
Compliance with Regulations: In some countries, it is a legal requirement to verify the email address of users. By implementing this feature, the service can comply with these regulations and avoid any legal issues.
also, we could also add a page that tells the user that a verification email has been sent to their email.
| 1.0 | Feature Request: Verification Email not sent upon User Signup - As a user, I recently signed up for the service but did not receive any verification email.
Therefore, I would like to request the addition of a new feature where an email is automatically sent to users upon successful registration.
Improved User Experience: By sending a verification email, users will have a better experience and will be able to use the service promptly. The verification email will ensure that their email address is valid and they can be contacted by the service.
Reduced Support Requests: Without a verification email, users may have difficulty in accessing the service, and this could lead to an increase in support requests. By sending a verification email.
Enhanced Security: Sending a verification email helps to ensure that the user is genuine and has access to the email account they have provided. This helps to reduce the risk of fraudulent or spam accounts being created on the service.
Compliance with Regulations: In some countries, it is a legal requirement to verify the email address of users. By implementing this feature, the service can comply with these regulations and avoid any legal issues.
also, we could also add a page that tells the user that a verification email has been sent to their email.
| priority | feature request verification email not sent upon user signup as a user i recently signed up for the service but did not receive any verification email therefore i would like to request the addition of a new feature where an email is automatically sent to users upon successful registration improved user experience by sending a verification email users will have a better experience and will be able to use the service promptly the verification email will ensure that their email address is valid and they can be contacted by the service reduced support requests without a verification email users may have difficulty in accessing the service and this could lead to an increase in support requests by sending a verification email enhanced security sending a verification email helps to ensure that the user is genuine and has access to the email account they have provided this helps to reduce the risk of fraudulent or spam accounts being created on the service compliance with regulations in some countries it is a legal requirement to verify the email address of users by implementing this feature the service can comply with these regulations and avoid any legal issues also we could also add a page that tells the user that a verification email has been sent to their email | 1 |
340,004 | 10,265,602,481 | IssuesEvent | 2019-08-22 19:16:03 | ualbertalib/avalon | https://api.github.com/repos/ualbertalib/avalon | closed | Report/Survey of existing multimedia objects in ERA | Post-launch priority:medium | - [x] @zschoenb to send a report of zipped multifile objects that contains multimedia.
- [ ] generate reports from ERA for multimedia objects. | 1.0 | Report/Survey of existing multimedia objects in ERA - - [x] @zschoenb to send a report of zipped multifile objects that contains multimedia.
- [ ] generate reports from ERA for multimedia objects. | priority | report survey of existing multimedia objects in era zschoenb to send a report of zipped multifile objects that contains multimedia generate reports from era for multimedia objects | 1 |
186,271 | 6,734,841,729 | IssuesEvent | 2017-10-18 19:30:04 | choraleapp/chorale-web | https://api.github.com/repos/choraleapp/chorale-web | opened | Sidebar menu discussion | desktop only good first issue help wanted priority: medium ui & ux | 1. Branding: Where to put logo, if anywhere? Should text be included?
2. Sections: Is “Places” for generic views & “Collection” enough? Where does the create playlist button go? | 1.0 | Sidebar menu discussion - 1. Branding: Where to put logo, if anywhere? Should text be included?
2. Sections: Is “Places” for generic views & “Collection” enough? Where does the create playlist button go? | priority | sidebar menu discussion branding where to put logo if anywhere should text be included sections is “places” for generic views “collection” enough where does the create playlist button go | 1 |
440,815 | 12,704,422,745 | IssuesEvent | 2020-06-23 01:22:06 | vdaas/vald | https://api.github.com/repos/vdaas/vald | opened | pkg/agent/core/ngt/service/New blocks startup process of agent-ngt | priority/medium team/core type/bug | ### Describe the bug:
<!-- A clear and concise description of what the bug is. -->
with invalid NGT backup files, pkg/agent/core/ngt/service/New blocks startup process of agent-ngt.
### To Reproduce:
<!-- Please describe the steps to reproduce the behavior: -->
put invalid backup files in `index_path` and `enable_in_memory_mode=false`.
### Expected behavior:
<!-- A clear and concise description of what you expected to happen. -->
Maybe there are two options to fallback the application. :thinking:
- restart pod and retry to load
- start daemon without backup data.
### Environment:
<!--- Please change the versions below along with your environment -->
- Go Version: 1.14.3
- Docker Version: 19.03.8
- Kubernetes Version: 1.18.2
- NGT Version: 1.11.5
| 1.0 | pkg/agent/core/ngt/service/New blocks startup process of agent-ngt - ### Describe the bug:
<!-- A clear and concise description of what the bug is. -->
with invalid NGT backup files, pkg/agent/core/ngt/service/New blocks startup process of agent-ngt.
### To Reproduce:
<!-- Please describe the steps to reproduce the behavior: -->
put invalid backup files in `index_path` and `enable_in_memory_mode=false`.
### Expected behavior:
<!-- A clear and concise description of what you expected to happen. -->
Maybe there are two options to fallback the application. :thinking:
- restart pod and retry to load
- start daemon without backup data.
### Environment:
<!--- Please change the versions below along with your environment -->
- Go Version: 1.14.3
- Docker Version: 19.03.8
- Kubernetes Version: 1.18.2
- NGT Version: 1.11.5
| priority | pkg agent core ngt service new blocks startup process of agent ngt describe the bug with invalid ngt backup files pkg agent core ngt service new blocks startup process of agent ngt to reproduce put invalid backup files in index path and enable in memory mode false expected behavior maybe there are two options to fallback the application thinking restart pod and retry to load start daemon without backup data environment go version docker version kubernetes version ngt version | 1 |
383,236 | 11,353,027,468 | IssuesEvent | 2020-01-24 14:49:32 | FruitieX/homectl | https://api.github.com/repos/FruitieX/homectl | opened | get rid of app.emit usages in favor of sendMsg | medium priority | this way we don't have two separate buses for no good reason | 1.0 | get rid of app.emit usages in favor of sendMsg - this way we don't have two separate buses for no good reason | priority | get rid of app emit usages in favor of sendmsg this way we don t have two separate buses for no good reason | 1 |
567,807 | 16,892,260,592 | IssuesEvent | 2021-06-23 10:41:23 | danmarsden/moodle-mod_dialogue | https://api.github.com/repos/danmarsden/moodle-mod_dialogue | closed | No drop-down list in search field showing when many users in course | priority: medium status: needs information | When there are many users in a course, the drop-down list to find user to start dialogue with is not populated. | 1.0 | No drop-down list in search field showing when many users in course - When there are many users in a course, the drop-down list to find user to start dialogue with is not populated. | priority | no drop down list in search field showing when many users in course when there are many users in a course the drop down list to find user to start dialogue with is not populated | 1 |
26,587 | 2,684,882,325 | IssuesEvent | 2015-03-29 13:34:17 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Horizontal scrollbar is missing on certain window resize operations | 2–5 stars bug duplicate imported Priority-Medium | _From [pfeif...@tzi.de](https://code.google.com/u/113360039766170135044/) on June 24, 2013 11:59:01_
Required information! OS version: Windows 7 Ultimate, 64 bit, SP1, German ConEmu version: ConEmu 130621 [64] *Bug description* I would expect a horizontal scrollbar appearing if the window is resized to a smaller width. I would expect to have a viewport (width/height) matching the window size that can be scrolled around in a virtual buffer having a much larger width/height using horizontal and vertical scrollbars. In other words I would expect the same behaviour as it is realized in cmd.exe itself.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=1112_ | 1.0 | Horizontal scrollbar is missing on certain window resize operations - _From [pfeif...@tzi.de](https://code.google.com/u/113360039766170135044/) on June 24, 2013 11:59:01_
Required information! OS version: Windows 7 Ultimate, 64 bit, SP1, German ConEmu version: ConEmu 130621 [64] *Bug description* I would expect a horizontal scrollbar appearing if the window is resized to a smaller width. I would expect to have a viewport (width/height) matching the window size that can be scrolled around in a virtual buffer having a much larger width/height using horizontal and vertical scrollbars. In other words I would expect the same behaviour as it is realized in cmd.exe itself.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=1112_ | priority | horizontal scrollbar is missing on certain window resize operations from on june required information os version windows ultimate bit german conemu version conemu bug description i would expect a horizontal scrollbar appearing if the window is resized to a smaller width i would expect to have a viewport width height matching the window size that can be scrolled around in a virtual buffer having a much larger width height using horizontal and vertical scrollbars in other words i would expect the same behaviour as it is realized in cmd exe itself original issue | 1 |
338,628 | 10,232,474,148 | IssuesEvent | 2019-08-18 17:47:17 | opentx/opentx | https://api.github.com/repos/opentx/opentx | closed | Adding telemetry display for F.Thobois DIY hub & sensors | Priority-Medium Radio Firmware enhancement stale | _From [mue...@gmail.com](https://code.google.com/u/109305769914638086567/) on May 24, 2013 19:48:33_
Which board (stock / gruvin9x / sky9x / Taranis) are you using? Stock with frsky & audio mod Please provide any additional information below. Suggestion is to add a telemetry/voice function for the F.Thobois hub & sensors. It's actually working with D8RII and a specific display with voice.
I'm asking for this because the sensors (specially alti-vario3) are working very well and it would be nice to have this function on the 9x, without having to build the screen/voice for the TX. http://home.nordnet.fr/~fthobois/Telem-FrSky.htm http://home.nordnet.fr/~fthobois/mod-telem.htm
_Original issue: http://code.google.com/p/opentx/issues/detail?id=40_
| 1.0 | Adding telemetry display for F.Thobois DIY hub & sensors - _From [mue...@gmail.com](https://code.google.com/u/109305769914638086567/) on May 24, 2013 19:48:33_
Which board (stock / gruvin9x / sky9x / Taranis) are you using? Stock with frsky & audio mod Please provide any additional information below. Suggestion is to add a telemetry/voice function for the F.Thobois hub & sensors. It's actually working with D8RII and a specific display with voice.
I'm asking for this because the sensors (specially alti-vario3) are working very well and it would be nice to have this function on the 9x, without having to build the screen/voice for the TX. http://home.nordnet.fr/~fthobois/Telem-FrSky.htm http://home.nordnet.fr/~fthobois/mod-telem.htm
_Original issue: http://code.google.com/p/opentx/issues/detail?id=40_
| priority | adding telemetry display for f thobois diy hub sensors from on may which board stock taranis are you using stock with frsky audio mod please provide any additional information below suggestion is to add a telemetry voice function for the f thobois hub sensors it s actually working with and a specific display with voice i m asking for this because the sensors specially alti are working very well and it would be nice to have this function on the without having to build the screen voice for the tx original issue | 1 |
624,586 | 19,702,100,200 | IssuesEvent | 2022-01-12 17:37:35 | BTAA-Geospatial-Data-Project/geoportal | https://api.github.com/repos/BTAA-Geospatial-Data-Project/geoportal | opened | Customize Citation values by Resource Class | interface:item page priority:medium | Tentative plan:
### Datasets, Web Services:
Creator. (Date Issued). Title. **Provider.** information URL
### Maps, Websites, Other:
Creator. (Date Issued). Title. **Publisher.** information URL
### Collections:
no citation tool
If something has two Resource Class values, use the first one listed.
| 1.0 | Customize Citation values by Resource Class - Tentative plan:
### Datasets, Web Services:
Creator. (Date Issued). Title. **Provider.** information URL
### Maps, Websites, Other:
Creator. (Date Issued). Title. **Publisher.** information URL
### Collections:
no citation tool
If something has two Resource Class values, use the first one listed.
| priority | customize citation values by resource class tentative plan datasets web services creator date issued title provider information url maps websites other creator date issued title publisher information url collections no citation tool if something has two resource class values use the first one listed | 1 |
423,100 | 12,290,725,760 | IssuesEvent | 2020-05-10 05:57:56 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | Remove DX <= 80 / Shader Model <= 2.0 shaders | Priority: Medium Size: Small Type: Development / Internal Type: Enhancement | **What feature is your improvement idea related to? Please describe.**
Shader files
**Describe the solution you'd like**
We have a ton of shaders in the `materialsystem/stdshaders/` folder that we don't support due to being on a lower shader model, or just due to using the higher version anyways.
We require DX9 by default now so we can remove all the gross shader model 2.0 and below versions (assuming that there's at least a 2.0b+ version, otherwise keep the 2.0 version but delete the ones below). 2.0b is required for Linux, so keep those in at the very minimum.
Example:
`BlurFilter_ps11`, `BlurFilter_vs11`, `BlurFilterX_dx80`, and `BlurFilterY_dx80` can be removed due to there being a `BlurFilterX.cpp [shader model 2.0+]`, `BlurFilter_ps2x`, etc.
| 1.0 | Remove DX <= 80 / Shader Model <= 2.0 shaders - **What feature is your improvement idea related to? Please describe.**
Shader files
**Describe the solution you'd like**
We have a ton of shaders in the `materialsystem/stdshaders/` folder that we don't support due to being on a lower shader model, or just due to using the higher version anyways.
We require DX9 by default now so we can remove all the gross shader model 2.0 and below versions (assuming that there's at least a 2.0b+ version, otherwise keep the 2.0 version but delete the ones below). 2.0b is required for Linux, so keep those in at the very minimum.
Example:
`BlurFilter_ps11`, `BlurFilter_vs11`, `BlurFilterX_dx80`, and `BlurFilterY_dx80` can be removed due to there being a `BlurFilterX.cpp [shader model 2.0+]`, `BlurFilter_ps2x`, etc.
| priority | remove dx shader model shaders what feature is your improvement idea related to please describe shader files describe the solution you d like we have a ton of shaders in the materialsystem stdshaders folder that we don t support due to being on a lower shader model or just due to using the higher version anyways we require by default now so we can remove all the gross shader model and below versions assuming that there s at least a version otherwise keep the version but delete the ones below is required for linux so keep those in at the very minimum example blurfilter blurfilter blurfilterx and blurfiltery can be removed due to there being a blurfilterx cpp blurfilter etc | 1 |
397,383 | 11,727,653,362 | IssuesEvent | 2020-03-10 16:16:08 | perfect-things/perfect-home | https://api.github.com/repos/perfect-things/perfect-home | closed | Fix "Search" animation to match the overall UX | priority:medium size:S type:enhancement | from @Vldm11r
> Hello, I didn’t know in which branch to write, I recently noticed that a search appeared and I began to use it very actively, it would be great if the search menu was turned on with the same effect as editing a bookmark. | 1.0 | Fix "Search" animation to match the overall UX - from @Vldm11r
> Hello, I didn’t know in which branch to write, I recently noticed that a search appeared and I began to use it very actively, it would be great if the search menu was turned on with the same effect as editing a bookmark. | priority | fix search animation to match the overall ux from hello i didn’t know in which branch to write i recently noticed that a search appeared and i began to use it very actively it would be great if the search menu was turned on with the same effect as editing a bookmark | 1 |
290,865 | 8,908,716,841 | IssuesEvent | 2019-01-18 02:13:40 | Railcraft/Railcraft | https://api.github.com/repos/Railcraft/Railcraft | closed | Rolling machine recipes are not copied to crafting tables | bug implemented medium priority | **Description of the Bug**
When the factory module is disabled, rolling machine recipes are not copied to the crafting table.
~When disabeling the Factory module, things like tracks still use rails in their recipes. In this configuration the tracks would not be normally craftable. Also the rolling machine doesn't show recipes in JEI.~
**To Reproduce**
1. Disable Factory Module in Config
2. Look up recipes in JEI
**Expected behavior**
Rails, etc. craftable through crafting table.
~Alternative recipes that don't utilize items only craftable through the factory module.~
**Environment**
Railcraft 12.0.0-beta2
forge 14.23.2796
| 1.0 | Rolling machine recipes are not copied to crafting tables - **Description of the Bug**
When the factory module is disabled, rolling machine recipes are not copied to the crafting table.
~When disabeling the Factory module, things like tracks still use rails in their recipes. In this configuration the tracks would not be normally craftable. Also the rolling machine doesn't show recipes in JEI.~
**To Reproduce**
1. Disable Factory Module in Config
2. Look up recipes in JEI
**Expected behavior**
Rails, etc. craftable through crafting table.
~Alternative recipes that don't utilize items only craftable through the factory module.~
**Environment**
Railcraft 12.0.0-beta2
forge 14.23.2796
| priority | rolling machine recipes are not copied to crafting tables description of the bug when the factory module is disabled rolling machine recipes are not copied to the crafting table when disabeling the factory module things like tracks still use rails in their recipes in this configuration the tracks would not be normally craftable also the rolling machine doesn t show recipes in jei to reproduce disable factory module in config look up recipes in jei expected behavior rails etc craftable through crafting table alternative recipes that don t utilize items only craftable through the factory module environment railcraft forge | 1 |
9,293 | 2,607,934,684 | IssuesEvent | 2015-02-26 00:28:18 | chrsmithdemos/minify | https://api.github.com/repos/chrsmithdemos/minify | closed | Minify_HTML performance improvements (patch) | auto-migrated Priority-Medium Release-2.1.3 Type-Enhancement | ```
Minify version: 2.1.3
PHP version: 5.2.13 (not really relevant)
What steps will reproduce the problem?
1. Measure minification time of (for example) this page on Minify 2.1.3:
http://www.salir.com/antigua -- any large-ish "normal" page will do.
3. Apply the attached patch (again, on Minify 2.1.3)
4. Repeat the measurement
You'll observe a saving of 70-80% of time -- mostly CPU time.
Expected output:
n/a
Actual output:
n/a
Did any unit tests FAIL? (Please do not post the full list)
No. All tests pass before and after the patch. The patch contains some changes
in expected result, but you'll see that those new expected results are just as
good as the previous ones.
Description of the proposed changes:
The change in "remove ws outside of all elements" replaces a
preg_match_callback call with a preg_replace call, which is much faster.
All other changes use the fact that starting capturing during regular
expression matching is an expensive operation (see, e.g. man perlre, which
states: "you [...] pay a price for each pattern that contains capturing
parentheses.") This can be avoided by delaying the start of capture until the
regular expression has confirmed that there will be something to
capture/replace.
It should be possible to improve the minification of <script> tags by somehow
getting rid of the "(\\s*)" at the start of the regexp, but we've encountered
difficulties doing so and decided that 4 times faster is enough for us.
```
-----
Original issue reported on code.google.com by `jordi.sa...@salir.com` on 27 Sep 2010 at 4:01
Attachments:
* [performance-improvement.patch](https://storage.googleapis.com/google-code-attachments/minify/issue-192/comment-0/performance-improvement.patch)
| 1.0 | Minify_HTML performance improvements (patch) - ```
Minify version: 2.1.3
PHP version: 5.2.13 (not really relevant)
What steps will reproduce the problem?
1. Measure minification time of (for example) this page on Minify 2.1.3:
http://www.salir.com/antigua -- any large-ish "normal" page will do.
3. Apply the attached patch (again, on Minify 2.1.3)
4. Repeat the measurement
You'll observe a saving of 70-80% of time -- mostly CPU time.
Expected output:
n/a
Actual output:
n/a
Did any unit tests FAIL? (Please do not post the full list)
No. All tests pass before and after the patch. The patch contains some changes
in expected result, but you'll see that those new expected results are just as
good as the previous ones.
Description of the proposed changes:
The change in "remove ws outside of all elements" replaces a
preg_match_callback call with a preg_replace call, which is much faster.
All other changes use the fact that starting capturing during regular
expression matching is an expensive operation (see, e.g. man perlre, which
states: "you [...] pay a price for each pattern that contains capturing
parentheses.") This can be avoided by delaying the start of capture until the
regular expression has confirmed that there will be something to
capture/replace.
It should be possible to improve the minification of <script> tags by somehow
getting rid of the "(\\s*)" at the start of the regexp, but we've encountered
difficulties doing so and decided that 4 times faster is enough for us.
```
-----
Original issue reported on code.google.com by `jordi.sa...@salir.com` on 27 Sep 2010 at 4:01
Attachments:
* [performance-improvement.patch](https://storage.googleapis.com/google-code-attachments/minify/issue-192/comment-0/performance-improvement.patch)
| priority | minify html performance improvements patch minify version php version not really relevant what steps will reproduce the problem measure minification time of for example this page on minify any large ish normal page will do apply the attached patch again on minify repeat the measurement you ll observe a saving of of time mostly cpu time expected output n a actual output n a did any unit tests fail please do not post the full list no all tests pass before and after the patch the patch contains some changes in expected result but you ll see that those new expected results are just as good as the previous ones description of the proposed changes the change in remove ws outside of all elements replaces a preg match callback call with a preg replace call which is much faster all other changes use the fact that starting capturing during regular expression matching is an expensive operation see e g man perlre which states you pay a price for each pattern that contains capturing parentheses this can be avoided by delaying the start of capture until the regular expression has confirmed that there will be something to capture replace it should be possible to improve the minification of tags by somehow getting rid of the s at the start of the regexp but we ve encountered difficulties doing so and decided that times faster is enough for us original issue reported on code google com by jordi sa salir com on sep at attachments | 1 |
449,675 | 12,973,238,750 | IssuesEvent | 2020-07-21 13:46:05 | Materials-Consortia/optimade-python-tools | https://api.github.com/repos/Materials-Consortia/optimade-python-tools | closed | `/versions` endpoint content-type parameter "header=present" is provided in the wrong place | bug priority/medium server | Currently, we return `headers: "present"` as a separate header, but it should actually be a parameter of the `content-type` header, see [RFC 4180](https://tools.ietf.org/html/rfc4180.html). | 1.0 | `/versions` endpoint content-type parameter "header=present" is provided in the wrong place - Currently, we return `headers: "present"` as a separate header, but it should actually be a parameter of the `content-type` header, see [RFC 4180](https://tools.ietf.org/html/rfc4180.html). | priority | versions endpoint content type parameter header present is provided in the wrong place currently we return headers present as a separate header but it should actually be a parameter of the content type header see | 1 |
275,284 | 8,575,547,918 | IssuesEvent | 2018-11-12 17:34:56 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | Can we remove Sim V1 files? | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal | I checked with Brad, and he thinks we can remove the Sim version reader/write and library files.
He suggest we keep the V2 naming, in case a V3 comes along.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1910
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Can we remove Sim V1 files?
Assigned to: Kathleen Biagas
Category:
Target version: 2.8
Author: Kathleen Biagas
Start: 07/11/2014
Due date:
% Done: 0
Estimated time:
Created: 07/11/2014 06:41 pm
Updated: 08/19/2014 08:00 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
I checked with Brad, and he thinks we can remove the Sim version reader/write and library files.
He suggest we keep the V2 naming, in case a V3 comes along.
Comments:
I removed V1 from sim and removed the SimV1 reader and writer.
| 1.0 | Can we remove Sim V1 files? - I checked with Brad, and he thinks we can remove the Sim version reader/write and library files.
He suggest we keep the V2 naming, in case a V3 comes along.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1910
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Can we remove Sim V1 files?
Assigned to: Kathleen Biagas
Category:
Target version: 2.8
Author: Kathleen Biagas
Start: 07/11/2014
Due date:
% Done: 0
Estimated time:
Created: 07/11/2014 06:41 pm
Updated: 08/19/2014 08:00 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
I checked with Brad, and he thinks we can remove the Sim version reader/write and library files.
He suggest we keep the V2 naming, in case a V3 comes along.
Comments:
I removed V1 from sim and removed the SimV1 reader and writer.
| priority | can we remove sim files i checked with brad and he thinks we can remove the sim version reader write and library files he suggest we keep the naming in case a comes along redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker feature priority normal subject can we remove sim files assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood severity found in version impact medium expected use occasional os all support group any description i checked with brad and he thinks we can remove the sim version reader write and library files he suggest we keep the naming in case a comes along comments i removed from sim and removed the reader and writer | 1 |
480,308 | 13,839,597,760 | IssuesEvent | 2020-10-14 08:12:21 | AY2021S1-CS2113-T14-2/tp | https://api.github.com/repos/AY2021S1-CS2113-T14-2/tp | closed | As a fast typist, I want an easily-navigable interface to type up notes and store them | priority.Medium type.Story | ... so that I can create and manage notes quickly and efficiently. | 1.0 | As a fast typist, I want an easily-navigable interface to type up notes and store them - ... so that I can create and manage notes quickly and efficiently. | priority | as a fast typist i want an easily navigable interface to type up notes and store them so that i can create and manage notes quickly and efficiently | 1 |
232,406 | 7,659,653,656 | IssuesEvent | 2018-05-11 07:34:05 | senaite/senaite.health | https://api.github.com/repos/senaite/senaite.health | closed | Travis build badge renders to unknown | addition priority: medium | ## Steps to reproduce
Go to the main page of `senaite.health`'s repository. The readme renders the Travis build badge but it is set as unknown.
## Current behavior
The Travis badge is rendered as unknown.
## Expected behavior
The Travis build badge is rendered with the latest build status of the repository.
I think any of you, @xispa or @ramonski, should activate the Travis integration.
## Screenshot (optional)

| 1.0 | Travis build badge renders to unknown - ## Steps to reproduce
Go to the main page of `senaite.health`'s repository. The readme renders the Travis build badge but it is set as unknown.
## Current behavior
The Travis badge is rendered as unknown.
## Expected behavior
The Travis build badge is rendered with the latest build status of the repository.
I think any of you, @xispa or @ramonski, should activate the Travis integration.
## Screenshot (optional)

| priority | travis build badge renders to unknown steps to reproduce go to the main page of senaite health s repository the readme renders the travis build badge but it is set as unknown current behavior the travis badge is rendered as unknown expected behavior the travis build badge is rendered with the latest build status of the repository i think any of you xispa or ramonski should activate the travis integration screenshot optional | 1 |
823,031 | 30,924,816,317 | IssuesEvent | 2023-08-06 11:00:46 | jagaldol/chat-foodie | https://api.github.com/repos/jagaldol/chat-foodie | opened | (spring boot) /api/chat 완성하 | Priority: Medium Type: Enhancement | ## Description
#66 이거 먼저 해결 후 public-chat의 구조와 동일하게 가는데 history 대신 채팅방이 전달이 됩니다. 채팅방의 채팅을 조회하여 history를 서버에서 만듭니다.
챗봇에게 회원이름 기반으로 전달해야합니다.
챗봇에게 전달할때 맨처음 대화 내역으로 좋아하는 음식 리스트를 전달해줘야합니다.
또한 public-chat과는 다르게 요청과 응답을 한번에 저장하는게 아닌 `is_from_chatbot`을 기반으로 구별하여 message_tb에 집어 넣습니다.
사용자의 메시지를 집어넣고, 챗봇의 응답이 완료된 시점에 챗봇의 응답도 메시지 테이블에 집어넣습니다.
## Tasks
- [ ] 해당 회원의 채팅방인지 검사
- [ ] 선호도를 처리
- [ ] 채팅방의 내역 내역 들고와 history 생성
- [ ] 요청 메시지 및 챗봇이 생성한 메시지 db에 저장
- [ ] 기타 필요한 것들 | 1.0 | (spring boot) /api/chat 완성하 - ## Description
#66 이거 먼저 해결 후 public-chat의 구조와 동일하게 가는데 history 대신 채팅방이 전달이 됩니다. 채팅방의 채팅을 조회하여 history를 서버에서 만듭니다.
챗봇에게 회원이름 기반으로 전달해야합니다.
챗봇에게 전달할때 맨처음 대화 내역으로 좋아하는 음식 리스트를 전달해줘야합니다.
또한 public-chat과는 다르게 요청과 응답을 한번에 저장하는게 아닌 `is_from_chatbot`을 기반으로 구별하여 message_tb에 집어 넣습니다.
사용자의 메시지를 집어넣고, 챗봇의 응답이 완료된 시점에 챗봇의 응답도 메시지 테이블에 집어넣습니다.
## Tasks
- [ ] 해당 회원의 채팅방인지 검사
- [ ] 선호도를 처리
- [ ] 채팅방의 내역 내역 들고와 history 생성
- [ ] 요청 메시지 및 챗봇이 생성한 메시지 db에 저장
- [ ] 기타 필요한 것들 | priority | spring boot api chat 완성하 description 이거 먼저 해결 후 public chat의 구조와 동일하게 가는데 history 대신 채팅방이 전달이 됩니다 채팅방의 채팅을 조회하여 history를 서버에서 만듭니다 챗봇에게 회원이름 기반으로 전달해야합니다 챗봇에게 전달할때 맨처음 대화 내역으로 좋아하는 음식 리스트를 전달해줘야합니다 또한 public chat과는 다르게 요청과 응답을 한번에 저장하는게 아닌 is from chatbot 을 기반으로 구별하여 message tb에 집어 넣습니다 사용자의 메시지를 집어넣고 챗봇의 응답이 완료된 시점에 챗봇의 응답도 메시지 테이블에 집어넣습니다 tasks 해당 회원의 채팅방인지 검사 선호도를 처리 채팅방의 내역 내역 들고와 history 생성 요청 메시지 및 챗봇이 생성한 메시지 db에 저장 기타 필요한 것들 | 1 |
569,002 | 16,992,411,858 | IssuesEvent | 2021-06-30 22:51:28 | altmp/altv-issues | https://api.github.com/repos/altmp/altv-issues | closed | Voice Chat example broken | Class: bug Priority: medium Scope: web-api Side: client Side: server | <!--- Provide a general summary of the issue in the Title above -->
Voice chat example resource is broken.
`alt.createVoiceChannel` get s called which is not the way to create a channel it should be `new alt.VoiceChannel`
## Expected Behavior
<!--- Tell us what should happen -->
Example resource works out of the box
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
does not work | 1.0 | Voice Chat example broken - <!--- Provide a general summary of the issue in the Title above -->
Voice chat example resource is broken.
`alt.createVoiceChannel` get s called which is not the way to create a channel it should be `new alt.VoiceChannel`
## Expected Behavior
<!--- Tell us what should happen -->
Example resource works out of the box
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
does not work | priority | voice chat example broken voice chat example resource is broken alt createvoicechannel get s called which is not the way to create a channel it should be new alt voicechannel expected behavior example resource works out of the box current behavior does not work | 1 |
492,339 | 14,200,871,138 | IssuesEvent | 2020-11-16 06:31:56 | aevix/Health-Tracker-2.0 | https://api.github.com/repos/aevix/Health-Tracker-2.0 | closed | Adding footer | Priority level - medium enhancement good first issue | issue:
At the end of the page there is no information section
expected:
There should be a footer with social media and contact information | 1.0 | Adding footer - issue:
At the end of the page there is no information section
expected:
There should be a footer with social media and contact information | priority | adding footer issue at the end of the page there is no information section expected there should be a footer with social media and contact information | 1 |
385,620 | 11,423,402,812 | IssuesEvent | 2020-02-03 15:51:12 | radical-cybertools/radical.pilot | https://api.github.com/repos/radical-cybertools/radical.pilot | closed | Frontera agent error | comp:agent priority:medium topic:resource type:bug | ```
python : 2.7.16
pythonpath : /opt/apps/intel19/impi19_0/python2/2.7.16/lib/python2.7/site-packages
virtualenv : /home1/02855/mturilli/ve/test
radical.pilot : 0.73.0
radical.saga : 0.72.1
radical.utils : 0.72.0
```
```
$ cat agent_0.err
Traceback (most recent call last):
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/bin/radical-pilot-agent", line 71, in <module>
bootstrap_3(sys.argv[1])
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/bin/radical-pilot-agent", line 46, in bootstrap_3
agent.start(spawn=False)
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/utils/process.py", line 573, in start
self._ru_initialize()
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/utils/process.py", line 895, in _ru_initialize
self.ru_initialize_parent()
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/pilot/utils/component.py", line 593, in ru_initialize_parent
self.initialize_parent()
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/pilot/agent/agent_0.py", line 160, in initialize_parent
self._start_sub_agents()
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/pilot/agent/agent_0.py", line 388, in _start_sub_agents
launch_script_hop='/usr/bin/env RP_SPAWNER_HOP=TRUE "%s"' % ls_name)
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/pilot/agent/lm/srun.py", line 65, in construct_command
sbox = cu['unit_sandbox_path']
KeyError: 'unit_sandbox_path'
``` | 1.0 | Frontera agent error - ```
python : 2.7.16
pythonpath : /opt/apps/intel19/impi19_0/python2/2.7.16/lib/python2.7/site-packages
virtualenv : /home1/02855/mturilli/ve/test
radical.pilot : 0.73.0
radical.saga : 0.72.1
radical.utils : 0.72.0
```
```
$ cat agent_0.err
Traceback (most recent call last):
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/bin/radical-pilot-agent", line 71, in <module>
bootstrap_3(sys.argv[1])
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/bin/radical-pilot-agent", line 46, in bootstrap_3
agent.start(spawn=False)
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/utils/process.py", line 573, in start
self._ru_initialize()
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/utils/process.py", line 895, in _ru_initialize
self.ru_initialize_parent()
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/pilot/utils/component.py", line 593, in ru_initialize_parent
self.initialize_parent()
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/pilot/agent/agent_0.py", line 160, in initialize_parent
self._start_sub_agents()
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/pilot/agent/agent_0.py", line 388, in _start_sub_agents
launch_script_hop='/usr/bin/env RP_SPAWNER_HOP=TRUE "%s"' % ls_name)
File "/scratch1/02855/mturilli/radical.pilot.sandbox/rp.session.login4.frontera.tacc.utexas.edu.mturilli.018176.0000/pilot.0000/rp_install/lib/python2.7/site-packages/radical/pilot/agent/lm/srun.py", line 65, in construct_command
sbox = cu['unit_sandbox_path']
KeyError: 'unit_sandbox_path'
``` | priority | frontera agent error python pythonpath opt apps lib site packages virtualenv mturilli ve test radical pilot radical saga radical utils cat agent err traceback most recent call last file mturilli radical pilot sandbox rp session frontera tacc utexas edu mturilli pilot rp install bin radical pilot agent line in bootstrap sys argv file mturilli radical pilot sandbox rp session frontera tacc utexas edu mturilli pilot rp install bin radical pilot agent line in bootstrap agent start spawn false file mturilli radical pilot sandbox rp session frontera tacc utexas edu mturilli pilot rp install lib site packages radical utils process py line in start self ru initialize file mturilli radical pilot sandbox rp session frontera tacc utexas edu mturilli pilot rp install lib site packages radical utils process py line in ru initialize self ru initialize parent file mturilli radical pilot sandbox rp session frontera tacc utexas edu mturilli pilot rp install lib site packages radical pilot utils component py line in ru initialize parent self initialize parent file mturilli radical pilot sandbox rp session frontera tacc utexas edu mturilli pilot rp install lib site packages radical pilot agent agent py line in initialize parent self start sub agents file mturilli radical pilot sandbox rp session frontera tacc utexas edu mturilli pilot rp install lib site packages radical pilot agent agent py line in start sub agents launch script hop usr bin env rp spawner hop true s ls name file mturilli radical pilot sandbox rp session frontera tacc utexas edu mturilli pilot rp install lib site packages radical pilot agent lm srun py line in construct command sbox cu keyerror unit sandbox path | 1 |
814,526 | 30,510,460,372 | IssuesEvent | 2023-07-18 20:23:29 | rancher/rancher-docs | https://api.github.com/repos/rancher/rancher-docs | closed | Links to versioned pages don't redirect correctly | priority/medium docs-site-meta | Reported on Slack:
> I’m getting ‘Page not found’ for this one -> https://ranchermanager.docs.rancher.com/v2.6/how-to-guides/advanced-user-guides/manage-clusters/projects-and-namespaces
The page does exist, it's just in New User Guides, not Advanced User Guides.
Need to investigate if/where links pointing to the wrong directory are in our docset, and create a redirect in case there are external links using the incorrect path. | 1.0 | Links to versioned pages don't redirect correctly - Reported on Slack:
> I’m getting ‘Page not found’ for this one -> https://ranchermanager.docs.rancher.com/v2.6/how-to-guides/advanced-user-guides/manage-clusters/projects-and-namespaces
The page does exist, it's just in New User Guides, not Advanced User Guides.
Need to investigate if/where links pointing to the wrong directory are in our docset, and create a redirect in case there are external links using the incorrect path. | priority | links to versioned pages don t redirect correctly reported on slack i’m getting ‘page not found’ for this one the page does exist it s just in new user guides not advanced user guides need to investigate if where links pointing to the wrong directory are in our docset and create a redirect in case there are external links using the incorrect path | 1 |
504,446 | 14,618,963,032 | IssuesEvent | 2020-12-22 17:02:58 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Style list can not be loaded from this service | Accepted Priority: Medium bug | ## Description
The indicated WMS service/layer does not load the styles list, even though the styles are present in the list
## How to reproduce
- Add this WMS Catalog: `https://ifl.francophonelibre.org:443/llg/ows?SERVICE=WMS&REQUEST=GetCapabilities`
- Add this layer from the catalog: `SunuGox:SN_SunuGox_communes_quartiers_intervention_2017`
- From the TOC, open the styles tab
*Expected Result*
3 styles should be listed
*Current Result*
No style is listed.
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
| 1.0 | Style list can not be loaded from this service - ## Description
The indicated WMS service/layer does not load the styles list, even though the styles are present in the list
## How to reproduce
- Add this WMS Catalog: `https://ifl.francophonelibre.org:443/llg/ows?SERVICE=WMS&REQUEST=GetCapabilities`
- Add this layer from the catalog: `SunuGox:SN_SunuGox_communes_quartiers_intervention_2017`
- From the TOC, open the styles tab
*Expected Result*
3 styles should be listed
*Current Result*
No style is listed.
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
| priority | style list can not be loaded from this service description the wms service layer indicated do not load the styles list even if this is present in the list how to reproduce add this wms catalog add this layer from the catalog sunugox sn sunugox communes quartiers intervention from the toc open the styles tab expected result styles should be listed current result no style is listed not browser related browser info use this site a href for non expert users browser affected version internet explorer edge chrome firefox safari other useful information | 1 |
448,092 | 12,943,132,237 | IssuesEvent | 2020-07-18 05:21:32 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | Use UUID generation to allow correlating logs for same request | good-first-issue kind/enhancement priority/medium stale | - [ ] Integrate with a UUID library such as https://github.com/satori/go.uuid. NodeID must be set correctly.
- [ ] Generate (timestamp based?) UUID for:
- [ ] Endpoints | 1.0 | Use UUID generation to allow correlating logs for same request - - [ ] Integrate with a UUID library such as https://github.com/satori/go.uuid. NodeID must be set correctly.
- [ ] Generate (timestamp based?) UUID for:
- [ ] Endpoints | priority | use uuid generation to allow correlating logs for same request integrate with a uuid library such as nodeid must be set correctly generate timetamp based uuid for endpoints | 1 |
54,839 | 3,071,432,504 | IssuesEvent | 2015-08-19 12:05:24 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Add an "Enter hub address" item to the installer | Component-Scripts enhancement imported Priority-Medium Usability | _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on September 19, 2010 13:14:51_
During the initial installation, add an "Enter hub address and port" item to the installer (the same way it is done for the nickname and the share). This item is really needed, because many users cannot figure out afterwards where and how to add a hub.
(c) an anonymous commenter from the blog
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=167_ | 1.0 | Add an "Enter hub address" item to the installer - _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on September 19, 2010 13:14:51_
During the initial installation, add an "Enter hub address and port" item to the installer (the same way it is done for the nickname and the share). This item is really needed, because many users cannot figure out afterwards where and how to add a hub.
(c) an anonymous commenter from the blog
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=167_ | priority | add an enter hub address item to the installer from on september during the initial installation add an enter hub address and port item to the installer the same way it is done for the nickname and the share this item is really needed because many users cannot figure out afterwards where and how to add a hub c an anonymous commenter from the blog original issue | 1 |
624,288 | 19,693,134,638 | IssuesEvent | 2022-01-12 09:23:01 | SAP/xsk | https://api.github.com/repos/SAP/xsk | closed | [Migration] Remove HDI plugin versions as not needed | priority-medium effort-low tooling | Currently, the HDI config we are generating has hardcoded versions of all the plugins listed in it. As discussed offline, we do not need those versions and they could also potentially break something if not kept up to date. We should remove them now. | 1.0 | [Migration] Remove HDI plugin versions as not needed - Currently, the HDI config we are generating has hardcoded versions of all the plugins listed in it. As discussed offline, we do not need those versions and they could also potentially break something if not kept up to date. We should remove them now. | priority | remove hdi plugin versions as not needed currently the hdi config we are generating has hardcoded versions of all the plugins listed in it as discussed offline we do not need those versions and they could also potentially break something if not kept up to date we should remove them now | 1 |
52,121 | 3,021,656,715 | IssuesEvent | 2015-07-31 15:54:29 | ghaering/pysqlite | https://api.github.com/repos/ghaering/pysqlite | closed | Remove from documentation mentions about "FLOAT" fields in sqlite. Replace with "REAL". | bug imported Priority-Medium | _From [socketp...@gmail.com](https://code.google.com/u/106583726027089740866/) on December 06, 2011 10:37:45_
SQLite never had a native FLOAT type, only REAL. Please fix the documentation.
_Original issue: http://code.google.com/p/pysqlite/issues/detail?id=48_ | 1.0 | Remove from documentation mentions about "FLOAT" fields in sqlite. Replace with "REAL". - _From [socketp...@gmail.com](https://code.google.com/u/106583726027089740866/) on December 06, 2011 10:37:45_
SQLite never had a native FLOAT type, only REAL. Please fix the documentation.
_Original issue: http://code.google.com/p/pysqlite/issues/detail?id=48_ | priority | remove from documentation mentions about float fields in sqlite replace with real from on december sqlite never have native type float only real please fix documentation original issue | 1 |
209,750 | 7,179,251,739 | IssuesEvent | 2018-01-31 19:03:52 | mrmlnc/fast-glob | https://api.github.com/repos/mrmlnc/fast-glob | closed | Directories end with `/` with `markDirectories: false` and `//` with `markDirectories: true` | Motivation: Medium Priority: Medium Type: Bug | With the following directory tree:
```
+-- test/
| +-- dir
| +-- file.js
```
```js
fg.sync('test/**', {onlyFiles: false, markDirectories: false});
// Result => ['test/dir/', 'test/dir/file.js']
// Should be => ['test/dir', 'test/dir/file.js']
```
```js
fg.sync('test/**', {onlyFiles: false, markDirectories: true});
// Result => ['test/dir//', 'test/dir/file.js']
// Should be => ['test/dir/', 'test/dir/file.js']
``` | 1.0 | Directories end with `/` with `markDirectories: false` and `//` with `markDirectories: true` - With the following directory tree:
```
+-- test/
| +-- dir
| +-- file.js
```
```js
fg.sync('test/**', {onlyFiles: false, markDirectories: false});
// Result => ['test/dir/', 'test/dir/file.js']
// Should be => ['test/dir', 'test/dir/file.js']
```
```js
fg.sync('test/**', {onlyFiles: false, markDirectories: true});
// Result => ['test/dir//', 'test/dir/file.js']
// Should be => ['test/dir/', 'test/dir/file.js']
``` | priority | directories end with with markdirectories false and with markdirectories true with the following directory tree test dir file js js fg sync test onlyfiles false markdirectories false result should be js fg sync test onlyfiles false markdirectories true result should be | 1 |
113,742 | 4,568,186,644 | IssuesEvent | 2016-09-15 13:47:12 | PowerlineApp/powerline-mobile | https://api.github.com/repos/PowerlineApp/powerline-mobile | closed | Push notifications: Vote on UserPetition -- text is wrong | bug P2 - Medium Priority | when user petition is signed, the text says something about Post and voting:
<img width="300" alt="screen shot 2016-09-13 at 15 50 58" src="https://cloud.githubusercontent.com/assets/225506/18476357/fbd7e8fe-79c9-11e6-9267-0e83a87fa5c3.png">
| 1.0 | Push notifications: Vote on UserPetition -- text is wrong - when user petition is signed, the text says something about Post and voting:
<img width="300" alt="screen shot 2016-09-13 at 15 50 58" src="https://cloud.githubusercontent.com/assets/225506/18476357/fbd7e8fe-79c9-11e6-9267-0e83a87fa5c3.png">
| priority | push notifications vote on userpetition text is wrong when user petition is signed the text says something about post and voting img width alt screen shot at src | 1 |
422,121 | 12,266,464,191 | IssuesEvent | 2020-05-07 09:00:17 | bounswe/bounswe2020group8 | https://api.github.com/repos/bounswe/bounswe2020group8 | closed | Selecting the language that we will use for API-Homework | Priority: Medium Status: In Progress everyone help wanted | Please select one or multiple languages that you want to use during the implementation of APIs. If you have any other language preferences please do not hesitate to share it in the comments. Deadline: 1/05/2020 23:55
https://doodle.com/poll/vdvfb4v6cihbubrq | 1.0 | Selecting the language that we will use for API-Homework - Please select one or multiple languages that you want to use during the implementation of APIs. If you have any other language preferences please do not hesitate to share it in the comments. Deadline: 1/05/2020 23:55
https://doodle.com/poll/vdvfb4v6cihbubrq | priority | selecting the language that we will use for api homework please select one or multiple languages that you want to use during the implementation of apis if you have any other language preferences please do not hesitate to share it in the comments deadline | 1 |
413,519 | 12,069,118,553 | IssuesEvent | 2020-04-16 15:39:41 | osmontrouge/caresteouvert | https://api.github.com/repos/osmontrouge/caresteouvert | closed | Display every shops with direct link in URL | bug priority: medium | **Describe the bug**
A direct link to an OSM object that is not yet handled displays an invisible box.
**To Reproduce**
Example : this [pub](https://www.caresteouvert.fr/@46.673800,5.552386,18.12/place/n7409176494) ([n7409176494](https://www.openstreetmap.org/node/7409176494)) without `*:covid19` tags.
**Expected behavior**
Display its properties in the right panel, as for any shop with `*:covid19` tags.
I would like to give a link i.e. to **the owner shop** so he **could edit his shop** or at least display it.
**Screenshot**
<img width="1280" alt="Invisible right panel" src="https://user-images.githubusercontent.com/2158081/79415422-ae62c600-7fad-11ea-952a-3853236a7648.png">
| 1.0 | Display every shops with direct link in URL - **Describe the bug**
A direct link to an OSM object that is not yet handled displays an invisible box.
**To Reproduce**
Example : this [pub](https://www.caresteouvert.fr/@46.673800,5.552386,18.12/place/n7409176494) ([n7409176494](https://www.openstreetmap.org/node/7409176494)) without `*:covid19` tags.
**Expected behavior**
Display its properties in the right panel, as for any shop with `*:covid19` tags.
I would like to give a link i.e. to **the owner shop** so he **could edit his shop** or at least display it.
**Screenshot**
<img width="1280" alt="Invisible right panel" src="https://user-images.githubusercontent.com/2158081/79415422-ae62c600-7fad-11ea-952a-3853236a7648.png">
| priority | display every shops with direct link in url describe the bug a direct link to an osm object that is not yet handled display an invisible box to reproduce example this without tags expected behavior display it s properties in the right panel as any shops with tags i would like to give a link i e to the owner shop so he could edit his shop or at least display it screenshot img width alt invisible right panel src | 1 |
393,231 | 11,612,179,533 | IssuesEvent | 2020-02-26 08:26:38 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1326] Right Panel always closed when you open minimap. | Priority: Medium Status: Fixed | For me the best way will:
1. Right Panel always open in Full-size Minimap:

2. Right Panel saves its opened/closed state in non-full-size mode. If you open it for the first time, it will open without the right panel:

If you switch to full size, Right panel should open, but if you switch back Right panel should close , because last state in non-full-size minimap was closed. | 1.0 | [0.9.0 staging-1326] Right Panel always closed when you open minimap. - For me the best way will:
1. Right Panel always open in Full-size Minimap:

2. Right Panel saves its opened/closed state in non-full-size mode. If you open it for the first time, it will open without the right panel:

If you switch to full size, Right panel should open, but if you switch back Right panel should close , because last state in non-full-size minimap was closed. | priority | right panel always closed when you open minimap for me the best way will right panel always open in full size minimap right panel save opened closed state in non full size mode if you open first time it will op en without right panel if you switch to full size right panel should open but if you switch back right panel should close because last state in non full size minimap was closed | 1 |
41,311 | 2,868,995,691 | IssuesEvent | 2015-06-05 22:27:18 | dart-lang/pub-dartlang | https://api.github.com/repos/dart-lang/pub-dartlang | closed | Allow me to put my own google analytics code into my pub page | enhancement MovedToGithub Priority-Medium | <a href="https://github.com/sethladd"><img src="https://avatars.githubusercontent.com/u/5479?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [sethladd](https://github.com/sethladd)**
_Originally opened as dart-lang/sdk#8562_
----
What I wouldn't give to be able to put my own google analytics code into my pub.dartlang.org page.
For example, I'd love to know what traffic is going to http://pub.dartlang.org/packages/lawndart
Many sites, even github auto-pages, let you add your UA code. | 1.0 | Allow me to put my own google analytics code into my pub page - <a href="https://github.com/sethladd"><img src="https://avatars.githubusercontent.com/u/5479?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [sethladd](https://github.com/sethladd)**
_Originally opened as dart-lang/sdk#8562_
----
What I wouldn't give to be able to put my own google analytics code into my pub.dartlang.org page.
For example, I'd love to know what traffic is going to http://pub.dartlang.org/packages/lawndart
Many sites, even github auto-pages, let you add your UA code. | priority | allow me to put my own google analytics code into my pub page issue by originally opened as dart lang sdk what i wouldn t give to be able to put my own google analytics code into my pub dartlang org page for example i d love to know what traffic is going to many sites even github auto pages let you add your ua code | 1 |
483,458 | 13,925,068,820 | IssuesEvent | 2020-10-21 16:20:59 | AY2021S1-CS2103T-T17-1/tp | https://api.github.com/repos/AY2021S1-CS2103T-T17-1/tp | closed | As a NUS student, I want to be able to add my modules taken to reflect my CAP, and be able to update those modules when I S/U it to reflect my updated CAP | priority.Medium type.Story | ...so that I can view my new CAP after I S/U the module | 1.0 | As a NUS student, I want to be able to add my modules taken to reflect my CAP, and be able to update those modules when I S/U it to reflect my updated CAP - ...so that I can view my new CAP after I S/U the module | priority | as a nus student i want to be able to add my modules taken to reflect my cap and be able to update those modules when i s u it to reflect my updated cap so that i can view my new cap after i s u the module | 1 |
3,509 | 2,538,575,778 | IssuesEvent | 2015-01-27 08:24:31 | newca12/gapt | https://api.github.com/repos/newca12/gapt | closed | factories inheritance | 1 star bug Component-LogicalDataStructures imported Priority-Medium wontfix | _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on February 15, 2011 14:48:42_
the factories in fol and schema should inherit from the hol factory and not from the lambda factory. The reason is that this is the correct way to look at the factories. Right now it has no impact, as the interfaces of hol and lambda are equivalent, but if we later change the interface of the hol factory (for example by createConst), then algorithms designed for hol and using the hol factory won't work on fol and schema anymore.
The factories right now are objects (hol, fol and schema); one should create a trait FactoryA inheriting from the previous one and then create the object extending the trait, to give a singleton object to use, but we should not force people to use the singleton.
_Original issue: http://code.google.com/p/gapt/issues/detail?id=112_ | 1.0 | factories inheritance - _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on February 15, 2011 14:48:42_
the factories in fol and schema should inherit from the hol factory and not from the lambda factory. The reason is that this is the correct way to look at the factories. Right now it has no impact, as the interfaces of hol and lambda are equivalent, but if we later change the interface of the hol factory (for example by createConst), then algorithms designed for hol and using the hol factory won't work on fol and schema anymore.
The factories right now are objects (hol, fol and schema); one should create a trait FactoryA inheriting from the previous one and then create the object extending the trait, to give a singleton object to use, but we should not force people to use the singleton.
_Original issue: http://code.google.com/p/gapt/issues/detail?id=112_ | priority | factories inheritance from on february the factories in fol and schema should inherit from hol factory and not from lambda factory the reason is that this is the correct way to look on the factories right now it has no impact as the interface of hol and lambda are equivalent but if we change later the interface of hol factory for example by createconst then algorithms designed for hol and using the hol factory wont work on fol and schema anymore the factories are right now as objects hol fol and schema one should create a trait factorya inheriting from the previous one and then create the object as extending the trait to give a singleton object to use but we should not force people to use the singleton original issue | 1 |