Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 957 | labels stringlengths 4 795 | body stringlengths 1 259k | index stringclasses 12 values | text_combine stringlengths 96 259k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
142,869 | 5,478,171,421 | IssuesEvent | 2017-03-12 15:45:53 | benvenutti/hasm | https://api.github.com/repos/benvenutti/hasm | closed | Add clang 3.6 and 3.7 to Travis build. | priority: medium status: completed type: maintenance | Update Travis CI scripts to add support for the aforementioned compilers. | 1.0 | Add clang 3.6 and 3.7 to Travis build. - Update Travis CI scripts to add support for the aforementioned compilers. | priority | add clang and to travis build update travis ci scripts to add support for the aforementioned compilers | 1 |
520,912 | 15,097,241,781 | IssuesEvent | 2021-02-07 17:59:23 | noter-org/noter-client | https://api.github.com/repos/noter-org/noter-client | opened | Add ability for editing comments | priority:medium requires:server-change size:medium | Also show that the comment was edited (maybe if `modified_at` differs from `created_at`) | 1.0 | Add ability for editing comments - Also show that the comment was edited (maybe if `modified_at` differs from `created_at`) | priority | add ability for editing comments also show that the comment was edited maybe if modified at differs from created at | 1 |
660,847 | 22,033,266,365 | IssuesEvent | 2022-05-28 06:56:03 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | State isn't correctly saved in slide triggers on saveloc | Type: Bug Priority: Medium Size: Small Where: Game | if saving inside of a Slide Trigger ([trigger_momentum_slide](https://docs.momentum-mod.org/entity/trigger_momentum_slide/)), it does not save the properties of you being inside of a Slide Trigger or any effect of the trigger, meaning on teleporting to that save, there is a short time where it does not think you are in one, so friction takes over causing you to lose speed, or allowing to jump regardless of the triggers properties.<br>In general, what triggers you are in should be saved, or check what triggers that location has in it, applying it to the player, before teleporting the player to the saved location. | 1.0 | State isn't correctly saved in slide triggers on saveloc - if saving inside of a Slide Trigger ([trigger_momentum_slide](https://docs.momentum-mod.org/entity/trigger_momentum_slide/)), it does not save the properties of you being inside of a Slide Trigger or any effect of the trigger, meaning on teleporting to that save, there is a short time where it does not think you are in one, so friction takes over causing you to lose speed, or allowing to jump regardless of the triggers properties.<br>In general, what triggers you are in should be saved, or check what triggers that location has in it, applying it to the player, before teleporting the player to the saved location. | priority | state isn t correctly saved in slide triggers on saveloc if saving inside of a slide trigger it does not save the properties of you being inside of a slide trigger or any effect of the trigger meaning on teleporting to that save there is a short time where it does not think you are in one so friction takes over causing you to lose speed or allowing to jump regardless of the triggers properties in general what triggers you are in should be saved or check what triggers that location has in it applying it to the player before teleporting the player to the saved location | 1 |
511,892 | 14,884,484,757 | IssuesEvent | 2021-01-20 14:38:34 | OC-DA-JAVA-PROJETS/P4_PARK_IT | https://api.github.com/repos/OC-DA-JAVA-PROJETS/P4_PARK_IT | closed | STORY#2 : 5%-discount for recurring users | enhancement medium priority | > As a user, I want to get a discount when I use the parking garage regularly.<br>In order to improve user retention, we decided to offer recurring users a 5% discount every time they come back to our parking lot.<br>### Description<br>1. When a user enters the parking garage, they are asked for their license plate number.<br>2. When entering it, the system checks whether the user has entered it previously.<br>3. If this is the case, then the system displays a message saying "Welcome back! As a recurring user of our parking lot, you'll benefit from a 5% discount." and then proceeds normally.<br>4. When the user exits the parking garage, they will benefit from a 5% discount on the normal fee.<br>### Tasks<br>- [ ] Write the unit test that checks this behavior<br>- [ ] Implement the feature in the code<br>src : https://www.notion.so/STORY-2-5-discount-for-recurring-users-a75f51e971aa4679b0e0a40dd022c081 | 1.0 | STORY#2 : 5%-discount for recurring users - > As a user, I want to get a discount when I use the parking garage regularly.<br>In order to improve user retention, we decided to offer recurring users a 5% discount every time they come back to our parking lot.<br>### Description<br>1. When a user enters the parking garage, they are asked for their license plate number.<br>2. When entering it, the system checks whether the user has entered it previously.<br>3. If this is the case, then the system displays a message saying "Welcome back! As a recurring user of our parking lot, you'll benefit from a 5% discount." and then proceeds normally.<br>4. When the user exits the parking garage, they will benefit from a 5% discount on the normal fee.<br>### Tasks<br>- [ ] Write the unit test that checks this behavior<br>- [ ] Implement the feature in the code<br>src : https://www.notion.so/STORY-2-5-discount-for-recurring-users-a75f51e971aa4679b0e0a40dd022c081 | priority | story discount for recurring users as a user i want to get a discount when i use the parking garage regularly in order to improve user retention we decided to offer recurring users a discount every time they come back to our parking lot description when a user enters the parking garage they are asked for their license plate number when entering it the system checks whether the user has entered it previously if this is the case then the system displays a message saying welcome back as a recurring user of our parking lot you ll benefit from a discount and then proceeds normally when the user exits the parking garage they will benefit from a discount on the normal fee tasks write the unit test that checks this behavior implement the feature in the code src | 1 |
555,172 | 16,448,479,856 | IssuesEvent | 2021-05-20 23:32:29 | nilearn/nilearn | https://api.github.com/repos/nilearn/nilearn | closed | Axes Cutoff in Example 9.2.15.9 (plotting.plot_img_on_surf) | Bug effort: medium impact: medium priority: high | [`Example 9.2.15.9`](https://nilearn.github.io/auto_examples/01_plotting/plot_3d_map_to_surface_projection.html#plot-multiple-views-of-the-3d-volume-on-a-surface) features a quick plot showing multiple views of a volumetric stat map on an average surface.<br><br>However, the brain is cutoff on both axes (picture shown) both in the online example and when I use it on nilearn 0.7.1 within a jupyter notebook using the following code<br>```python<br>fig, ax = plotting.plot_img_on_surf(new_image,<br>views=['lateral','medial'],<br>hemispheres=['left', 'right'],<br>inflate=True,<br>colorbar=True<br>)<br>```<br>Perhaps this is easily fixed post-hoc by adjusting matplotlib parameters, but it is not obvious to me. | 1.0 | Axes Cutoff in Example 9.2.15.9 (plotting.plot_img_on_surf) - [`Example 9.2.15.9`](https://nilearn.github.io/auto_examples/01_plotting/plot_3d_map_to_surface_projection.html#plot-multiple-views-of-the-3d-volume-on-a-surface) features a quick plot showing multiple views of a volumetric stat map on an average surface.<br><br>However, the brain is cutoff on both axes (picture shown) both in the online example and when I use it on nilearn 0.7.1 within a jupyter notebook using the following code<br>```python<br>fig, ax = plotting.plot_img_on_surf(new_image,<br>views=['lateral','medial'],<br>hemispheres=['left', 'right'],<br>inflate=True,<br>colorbar=True<br>)<br>```<br>Perhaps this is easily fixed post-hoc by adjusting matplotlib parameters, but it is not obvious to me. | priority | axes cutoff in example plotting plot img on surf features a quick plot showing multiple views of a volumetric stat map on an average surface however the brain is cutoff on both axes picture shown both in the online example and when i use it on nilearn within a jupyter notebook using the following code python fig ax plotting plot img on surf new image views hemispheres inflate true colorbar true perhaps this is easily fixed post hoc by adjusting matplotlib parameters but it is not obvious to me | 1 |
214,119 | 7,266,890,287 | IssuesEvent | 2018-02-20 00:59:42 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | installer inventory should allow 'cmd line'/'run time' over-rides for its playbook default values | component:installer priority:medium state:needs_info type:enhancement | ##### ISSUE TYPE<br>- Feature Idea<br>##### COMPONENT NAME<br>- Installer<br>##### SUMMARY<br>the hard coded values for secrets, passwords, ports, directories etc should only use defaults if nothing else has been passed as an argument<br>##### ENVIRONMENT<br>* AWX version: 1.0.2<br>* AWX install method: openshift, minishift, docker on linux, docker for mac, boot2docker<br>* Ansible version: 2.4.2<br>##### STEPS TO REPRODUCE<br>##### EXPECTED RESULTS<br>##### ACTUAL RESULTS<br>##### ADDITIONAL INFORMATION<br>the following vars should be allowing and override<br>dockerhub_base=ansible<br>dockerhub_version=latest<br>awx_secret_key=awxsecret<br>openshift_host=127.0.0.1:8443<br>awx_openshift_project=awx<br>openshift_user=developer<br>awx_node_port=30083<br>postgres_data_dir=/tmp/pgdocker<br>host_port=80<br>docker_registry=172.30.1.1:5000<br>docker_registry_repository=awx<br>docker_registry_username=developer<br>docker_remove_local_images=False<br>pg_hostname=postgresql<br>pg_username=awx<br>pg_password=awxpass<br>pg_database=awx<br>pg_port=5432<br>use_container_for_build=true<br>awx_official=false<br>http_proxy=http://proxy:3128<br>https_proxy=http://proxy:3128<br>no_proxy=mycorp.org<br>awx_container_search_domains=example.com,ansible.com | 1.0 | installer inventory should allow 'cmd line'/'run time' over-rides for its playbook default values - ##### ISSUE TYPE<br>- Feature Idea<br>##### COMPONENT NAME<br>- Installer<br>##### SUMMARY<br>the hard coded values for secrets, passwords, ports, directories etc should only use defaults if nothing else has been passed as an argument<br>##### ENVIRONMENT<br>* AWX version: 1.0.2<br>* AWX install method: openshift, minishift, docker on linux, docker for mac, boot2docker<br>* Ansible version: 2.4.2<br>##### STEPS TO REPRODUCE<br>##### EXPECTED RESULTS<br>##### ACTUAL RESULTS<br>##### ADDITIONAL INFORMATION<br>the following vars should be allowing and override<br>dockerhub_base=ansible<br>dockerhub_version=latest<br>awx_secret_key=awxsecret<br>openshift_host=127.0.0.1:8443<br>awx_openshift_project=awx<br>openshift_user=developer<br>awx_node_port=30083<br>postgres_data_dir=/tmp/pgdocker<br>host_port=80<br>docker_registry=172.30.1.1:5000<br>docker_registry_repository=awx<br>docker_registry_username=developer<br>docker_remove_local_images=False<br>pg_hostname=postgresql<br>pg_username=awx<br>pg_password=awxpass<br>pg_database=awx<br>pg_port=5432<br>use_container_for_build=true<br>awx_official=false<br>http_proxy=http://proxy:3128<br>https_proxy=http://proxy:3128<br>no_proxy=mycorp.org<br>awx_container_search_domains=example.com,ansible.com | priority | installer inventory should allow cmd line run time over rides for its playbook default values issue type feature idea component name installer summary the hard coded values for secrets passwords ports directories etc should only use defaults if nothing else has been passed as an argument environment awx version awx install method openshift minishift docker on linux docker for mac ansible version steps to reproduce expected results actual results additional information the following vars should be allowing and override dockerhub base ansible dockerhub version latest awx secret key awxsecret openshift host awx openshift project awx openshift user developer awx node port postgres data dir tmp pgdocker host port docker registry docker registry repository awx docker registry username developer docker remove local images false pg hostname postgresql pg username awx pg password awxpass pg database awx pg port use container for build true awx official false http proxy https proxy no proxy mycorp org awx container search domains example com ansible com | 1 |
213,059 | 7,245,385,068 | IssuesEvent | 2018-02-14 17:54:47 | department-of-veterans-affairs/caseflow-efolder | https://api.github.com/repos/department-of-veterans-affairs/caseflow-efolder | opened | [FE] Make the copy to clipboard function work in react rewrite | bug-medium-priority eFolder Express v2 whiskey | In re-writing the efolder express UI in react, we left a few thing behind. One of those things was the ability to copy the veteran ID to the clipboard by clicking the button in the top right-hand side of the search and download progress pages (pictured below). This [was implemented](https://github.com/department-of-veterans-affairs/caseflow-efolder/blob/9b107e3af836c27016f2376842bc6f8af1e9ad74/app/assets/javascripts/application.js#L13) through the clipboard third-party library, we probably want to use [react-copy-to-clipboard](https://github.com/nkbt/react-copy-to-clipboard) for parity with the main caseflow repo.<br><br>## Acceptance criteria:<br>* Clicking the clipboard button results actually copies the text in the box to the clipboard | 1.0 | [FE] Make the copy to clipboard function work in react rewrite - In re-writing the efolder express UI in react, we left a few thing behind. One of those things was the ability to copy the veteran ID to the clipboard by clicking the button in the top right-hand side of the search and download progress pages (pictured below). This [was implemented](https://github.com/department-of-veterans-affairs/caseflow-efolder/blob/9b107e3af836c27016f2376842bc6f8af1e9ad74/app/assets/javascripts/application.js#L13) through the clipboard third-party library, we probably want to use [react-copy-to-clipboard](https://github.com/nkbt/react-copy-to-clipboard) for parity with the main caseflow repo.<br><br>## Acceptance criteria:<br>* Clicking the clipboard button results actually copies the text in the box to the clipboard | priority | make the copy to clipboard function work in react rewrite in re writing the efolder express ui in react we left a few thing behind one of those things was the ability to copy the veteran id to the clipboard by clicking the button in the top right hand side of the search and download progress pages pictured below this through the clipboard third party library we probably want to use for parity with the main caseflow repo acceptance criteria clicking the clipboard button results actually copies the text in the box to the clipboard | 1 |
25,628 | 2,683,869,206 | IssuesEvent | 2015-03-28 12:07:38 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | conemu 100213 (и раньше): перезагрузка по Ctrl+Alt+Tab | 2–5 stars bug imported Priority-Medium | _From [yury.fin...@gmail.com](https://code.google.com/u/103818921530185261007/) on February 16, 2010 04:17:49_<br>Версия ОС: Windows XP Home Edition SP2<br>Версия FAR: 2.0 build 1400<br>С тех пор как conemu стал отлавливать открепление окна FAR'а (по<br>Ctrl+Alt+Tab), у меня _на одной машине_ при попытке это сделать происходит<br>перезагрузка (в 100% случаев). Машина старая: Celeron 1.8 GHz, ОЗУ 480 Мб.<br>На другой, более современной машине - всё нормально.<br>_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=189_ | 1.0 | conemu 100213 (и раньше): перезагрузка по Ctrl+Alt+Tab - _From [yury.fin...@gmail.com](https://code.google.com/u/103818921530185261007/) on February 16, 2010 04:17:49_<br>Версия ОС: Windows XP Home Edition SP2<br>Версия FAR: 2.0 build 1400<br>С тех пор как conemu стал отлавливать открепление окна FAR'а (по<br>Ctrl+Alt+Tab), у меня _на одной машине_ при попытке это сделать происходит<br>перезагрузка (в 100% случаев). Машина старая: Celeron 1.8 GHz, ОЗУ 480 Мб.<br>На другой, более современной машине - всё нормально.<br>_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=189_ | priority | conemu и раньше перезагрузка по ctrl alt tab from on february версия ос windows xp home edition версия far build с тех пор как conemu стал отлавливать открепление окна far а по ctrl alt tab у меня на одной машине при попытке это сделать происходит перезагрузка в случаев машина старая celeron ghz озу мб на другой более современной машине всё нормально original issue | 1 |
155,113 | 5,949,305,186 | IssuesEvent | 2017-05-26 13:58:08 | ciena-frost/ember-frost-core | https://api.github.com/repos/ciena-frost/ember-frost-core | closed | Move blueprint generators to a new `ember-cli-frost-blueprints` repo | enhancement Low priority Medium cost | We've got a way to integration test our blueprint generators now, but it's rather slow, so let's move that code into a separate repo so that tests on it don't get run unless the actual blueprints are changing. | 1.0 | Move blueprint generators to a new `ember-cli-frost-blueprints` repo - We've got a way to integration test our blueprint generators now, but it's rather slow, so let's move that code into a separate repo so that tests on it don't get run unless the actual blueprints are changing. | priority | move blueprint generators to a new ember cli frost blueprints repo we ve got a way to integration test our blueprint generators now but it s rather slow so let s move that code into a separate repo so that tests on it don t get run unless the actual blueprints are changing | 1 |
672,781 | 22,840,760,469 | IssuesEvent | 2022-07-12 21:34:06 | codeforbtv/green-up-app | https://api.github.com/repos/codeforbtv/green-up-app | closed | Failing to set a profile photo | Type: Bug Priority: Medium | **Describe the bug**<br>Failing to set a profile photo<br>**To Reproduce**<br>1. Got to Menu --> My Profile<br>2. Tap the person icon next to the name<br>3. Choose a photo from photo library<br>4. Save Profile<br>5. Go back to "My Profile": Photo doesn't show<br>**Expected behavior**<br>Saved photo shows up in the profile<br>**App Version (found on "Menu" screen):<br>v5.0.29/5-22-2020/Environment:QA (From TestFlight)<br>**Smartphone (please complete the following information):**<br>- Device: iPhone X<br>- OS: iOS, 13.4.1 | 1.0 | Failing to set a profile photo - **Describe the bug**<br>Failing to set a profile photo<br>**To Reproduce**<br>1. Got to Menu --> My Profile<br>2. Tap the person icon next to the name<br>3. Choose a photo from photo library<br>4. Save Profile<br>5. Go back to "My Profile": Photo doesn't show<br>**Expected behavior**<br>Saved photo shows up in the profile<br>**App Version (found on "Menu" screen):<br>v5.0.29/5-22-2020/Environment:QA (From TestFlight)<br>**Smartphone (please complete the following information):**<br>- Device: iPhone X<br>- OS: iOS, 13.4.1 | priority | failing to set a profile photo describe the bug failing to set a profile photo to reproduce got to menu my profile tap the person icon next to the name choose a photo from photo library save profile go back to my profile photo doesn t show expected behavior saved photo shows up in the profile app version found on menu screen environment qa from testflight smartphone please complete the following information device iphone x os ios | 1 |
426,733 | 12,378,817,133 | IssuesEvent | 2020-05-19 11:23:05 | threefoldtech/3bot_wallet | https://api.github.com/repos/threefoldtech/3bot_wallet | closed | Stellar Staging - When not entering a message, validation is incorrect | priority_medium type_bug | 1) Restart the wallet<br>2) Send a transaction without message<br>Expected:<br>Works, message should be optional<br>Actual:<br>Sometimes it works, but after a restart it is mandatory again, with an incorrect error.<br> | 1.0 | Stellar Staging - When not entering a message, validation is incorrect - 1) Restart the wallet<br>2) Send a transaction without message<br>Expected:<br>Works, message should be optional<br>Actual:<br>Sometimes it works, but after a restart it is mandatory again, with an incorrect error.<br> | priority | stellar staging when not entering a message validation is incorrect restart the wallet send a transaction without message expected works message should be optional actual sometimes it works but after a restart it is mandatory again with an incorrect error | 1 |
516,474 | 14,982,916,517 | IssuesEvent | 2021-01-28 16:33:34 | cds-snc/covid-alert-server-metrics-extractor | https://api.github.com/repos/cds-snc/covid-alert-server-metrics-extractor | closed | Synchronize Originator values across Google Spreadsheets | medium priority | Originator values should be the same in all spreadsheets. | 1.0 | Synchronize Originator values across Google Spreadsheets - Originator values should be the same in all spreadsheets. | priority | synchronize originator values across google spreadsheets originator values should be the same in all spreadsheets | 1 |
130,669 | 5,119,271,674 | IssuesEvent | 2017-01-08 16:17:25 | PyFilesystem/pyfilesystem | https://api.github.com/repos/PyFilesystem/pyfilesystem | closed | RemoteFileBuffer range requests feature | auto-migrated Priority-Medium Type-Enhancement | ```<br>* What steps will reproduce the problem?<br>When I need just part of file stored on remote system (FS using<br>RemoteFileBuffer), I have to download whole file from start to requested<br>position (or whole file to end without on-demand feature).<br>* What is the expected output? What do you see instead?<br>Many file formats (mp3, video files) stores additional information to possition<br>near end of file. I expect some support for remote filesystems, which can<br>handle range requests to allow download just requested part of file. That<br>should greatly improve FS performance.<br>```<br>Original issue reported on code.google.com by `marekp...@gmail.com` on 8 Oct 2010 at 5:28 | 1.0 | RemoteFileBuffer range requests feature - ```<br>* What steps will reproduce the problem?<br>When I need just part of file stored on remote system (FS using<br>RemoteFileBuffer), I have to download whole file from start to requested<br>position (or whole file to end without on-demand feature).<br>* What is the expected output? What do you see instead?<br>Many file formats (mp3, video files) stores additional information to possition<br>near end of file. I expect some support for remote filesystems, which can<br>handle range requests to allow download just requested part of file. That<br>should greatly improve FS performance.<br>```<br>Original issue reported on code.google.com by `marekp...@gmail.com` on 8 Oct 2010 at 5:28 | priority | remotefilebuffer range requests feature what steps will reproduce the problem when i need just part of file stored on remote system fs using remotefilebuffer i have to download whole file from start to requested position or whole file to end without on demand feature what is the expected output what do you see instead many file formats video files stores additional information to possition near end of file i expect some support for remote filesystems which can handle range requests to allow download just requested part of file that should greatly improve fs performance original issue reported on code google com by marekp gmail com on oct at | 1 |
188,063 | 6,767,976,918 | IssuesEvent | 2017-10-26 06:57:47 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | closed | OTP SMS delivery metrics | epic/sms kind/user_story priority/medium status/wontfix | We should have a metrics for SMS delivery process<br>* Succesful/unsuccessful SMS submissions stats<br>* Undelivered SMS<br>* SMS delivery latency<br>- [ ] integration with life report to store counters<br>- [ ] new metrics on datadog<br>https://docs.google.com/spreadsheets/d/1X1gQEWQc02loG1OtNRZzzuN3NssLRoESIgpn-aRDMPQ/edit?usp=sharing | 1.0 | OTP SMS delivery metrics - We should have a metrics for SMS delivery process<br>* Succesful/unsuccessful SMS submissions stats<br>* Undelivered SMS<br>* SMS delivery latency<br>- [ ] integration with life report to store counters<br>- [ ] new metrics on datadog<br>https://docs.google.com/spreadsheets/d/1X1gQEWQc02loG1OtNRZzzuN3NssLRoESIgpn-aRDMPQ/edit?usp=sharing | priority | otp sms delivery metrics we should have a metrics for sms delivery process succesful unsuccessful sms submissions stats undelivered sms sms delivery latency integration with life report to store counters new metrics on datadog | 1 |
154,957 | 5,945,806,342 | IssuesEvent | 2017-05-26 00:17:31 | slackapi/node-slack-sdk | https://api.github.com/repos/slackapi/node-slack-sdk | closed | Confusing implications of "UNABLE_TO_RTM_START" | bug Priority—Medium | I think https://github.com/slackhq/node-slack-sdk/blob/master/lib/clients/rtm/client.js#L348 should be some event other than `UNABLE_TO_RTM_START`.<br>here: https://github.com/slackhq/node-slack-sdk/blob/master/lib/clients/rtm/client.js#L258, an `UNABLE_TO_RTM_START` message is published, but that doesn't mean anything definitively. If it happens to be an error it can't recover from, `DISCONNECT` is later published. If `autoReconnect` is enabled, it will attempt to reconnect. Which might publish `UNABLE_TO_RTM_START` because we've exceeded the max connection attempts.<br>I think in the case of: https://github.com/slackhq/node-slack-sdk/blob/master/lib/clients/rtm/client.js#L348, we should raise either a distinct message or call `DISCONNECT` so that at least we're consistent.<br>My goal is to know when an RTM connection is hosed, ie. `DISCONNECT` but right now `UNABLE_TO_RTM_START` can also sometimes indicate that the connection is lost. | 1.0 | Confusing implications of "UNABLE_TO_RTM_START" - I think https://github.com/slackhq/node-slack-sdk/blob/master/lib/clients/rtm/client.js#L348 should be some event other than `UNABLE_TO_RTM_START`.<br>here: https://github.com/slackhq/node-slack-sdk/blob/master/lib/clients/rtm/client.js#L258, an `UNABLE_TO_RTM_START` message is published, but that doesn't mean anything definitively. If it happens to be an error it can't recover from, `DISCONNECT` is later published. If `autoReconnect` is enabled, it will attempt to reconnect. Which might publish `UNABLE_TO_RTM_START` because we've exceeded the max connection attempts.<br>I think in the case of: https://github.com/slackhq/node-slack-sdk/blob/master/lib/clients/rtm/client.js#L348, we should raise either a distinct message or call `DISCONNECT` so that at least we're consistent.<br>My goal is to know when an RTM connection is hosed, ie. `DISCONNECT` but right now `UNABLE_TO_RTM_START` can also sometimes indicate that the connection is lost. | priority | confusing implications of unable to rtm start i think should be some event other than unable to rtm start here an unable to rtm start message is published but that doesn t mean anything definitively if it happens to be an error it can t recover from disconnect is later published if autoreconnect is enabled it will attempt to reconnect which might publish unable to rtm start because we ve exceeded the max connection attempts i think in the case of we should raise either a distinct message or call disconnect so that at least we re consistent my goal is to know when an rtm connection is hosed ie disconnect but right now unable to rtm start can also sometimes indicate that the connection is lost | 1 |
88,076 | 3,771,312,560 | IssuesEvent | 2016-03-16 17:12:30 | cs2103jan2016-f13-1j/main | https://api.github.com/repos/cs2103jan2016-f13-1j/main | closed | A user can add task by entering flexible commands | priority.medium type.story | so that task can be added easily without caring too much about typos | 1.0 | A user can add task by entering flexible commands - so that task can be added easily without caring too much about typos | priority | a user can add task by entering flexible commands so that task can be added easily without caring too much about typos | 1 |
729,114 | 25,109,788,437 | IssuesEvent | 2022-11-08 19:30:07 | teogor/ceres | https://api.github.com/repos/teogor/ceres | closed | Implement `Toolbar` compatible with M3 Guidelines | @priority-medium @feature m3 | Implement `Toolbar` compatible with M3 Guidelines as follows:<br>- when content is at top the color should be the same as the background (alpha 5%)<br>- else the color should be lighter by 2 levels (alpha 11%) | 1.0 | Implement `Toolbar` compatible with M3 Guidelines - Implement `Toolbar` compatible with M3 Guidelines as follows:<br>- when content is at top the color should be the same as the background (alpha 5%)<br>- else the color should be lighter by 2 levels (alpha 11%) | priority | implement toolbar compatible with guidelines implement toolbar compatible with guidelines as follows when content is at top the color should be the same as the background alpha else the color should be lighter by levels alpha | 1 |
993 | 2,506,547,014 | IssuesEvent | 2015-01-12 11:39:00 | WeAreAthlon/silla.io | https://api.github.com/repos/WeAreAthlon/silla.io | opened | Create a database Filter object | feature medium priority | This will be used for setting a conditions on queries to the database. | 1.0 | Create a database Filter object - This will be used for setting a conditions on queries to the database. | priority | create a database filter object this will be used for setting a conditions on queries to the database | 1 |
587,475 | 17,617,060,168 | IssuesEvent | 2021-08-18 11:04:23 | nimblehq/nimble-medium-ios | https://api.github.com/repos/nimblehq/nimble-medium-ios | opened | As a user, I can create a new article from the home screen | type : feature category: integration priority : medium | ## Why<br>When the users logged in the application successfully, they can create a new article from the `Home` screen.<br>## Acceptance Criteria<br>- [ ] When the users tap on the create new article button in the top right navigation bar of the `Home` screen, navigate to the `New Article` screen.<br>- [ ] Once in the `New Article ` screen, the users can tap on the `Back` button to go back to the previous screen. | 1.0 | As a user, I can create a new article from the home screen - ## Why<br>When the users logged in the application successfully, they can create a new article from the `Home` screen.<br>## Acceptance Criteria<br>- [ ] When the users tap on the create new article button in the top right navigation bar of the `Home` screen, navigate to the `New Article` screen.<br>- [ ] Once in the `New Article ` screen, the users can tap on the `Back` button to go back to the previous screen. | priority | as a user i can create a new article from the home screen why when the users logged in the application successfully they can create a new article from the home screen acceptance criteria when the users tap on the create new article button in the top right navigation bar of the home screen navigate to the new article screen once in the new article screen the users can tap on the back button to go back to the previous screen | 1 |
588,560 | 17,662,432,692 | IssuesEvent | 2021-08-21 19:48:00 | ZsgsDesign/NOJ | https://api.github.com/repos/ZsgsDesign/NOJ | opened | Native VSCode Theme Support | New Feature Priority 3 (Medium) | Now NOJ Editor only supports the built-in `Monach` themes.<br>We are going to support native VSCode themes in the next version.<br>Also, the next version would come with a brand new Language Service, including support for language configuration similar to VSCode and grammar analysis based on TextMate grammar. | 1.0 | Native VSCode Theme Support - Now NOJ Editor only supports the built-in `Monach` themes.<br>We are going to support native VSCode themes in the next version.<br>Also, the next version would come with a brand new Language Service, including support for language configuration similar to VSCode and grammar analysis based on TextMate grammar. | priority | native vscode theme support now noj editor only supports the built in monach themes we are going to support native vscode themes in the next version also the next version would come with a brand new language service including support for language configuration similar to vscode and grammar analysis based on textmate grammar | 1 |
428,864 | 12,418,375,502 | IssuesEvent | 2020-05-23 00:01:27 | rubyforgood/casa | https://api.github.com/repos/rubyforgood/casa | closed | Admin view/edit case page should include volunteer/s assigned to the case | :crown: Admin Priority: Medium Status: Available help wanted | Part of epic #4 (Admin Dashboard)
**What type of user is this for? [volunteer/supervisor/admin/all]**
This feature is for **admins** and should not change what non-admins can see or do.
**Where does/should this occur?**
Currently, [in the admin view/edit case view](https://casa-r4g-staging.herokuapp.com/casa_cases/12/edit), admins only see case info.
**Description**
Add to this view the volunteer/s currently **actively** assigned to the case. ( `case_assignment.is_active == true` )
Display volunteer as volunteer **name** not id or email. If volunteer name is not in the db schema yet, display volunteer email. The volunteer's name (or email) should be a link to the admin view/edit volunteer page.
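The selection and display-name fallback described above could be sketched as follows. This is an illustrative Python sketch only, not the actual casa Rails code; the class and field names are assumptions.

```python
# Illustrative sketch of the described behaviour; not the casa schema.
# Only active assignments are shown, and the display label falls back
# from volunteer name to email when the name is missing.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Volunteer:
    id: int
    email: str
    name: Optional[str] = None  # name may not exist in the db schema yet


@dataclass
class CaseAssignment:
    volunteer: Volunteer
    is_active: bool


def active_volunteer_labels(assignments: List[CaseAssignment]) -> List[str]:
    """Return display labels (name, falling back to email) for active assignments."""
    return [
        a.volunteer.name or a.volunteer.email
        for a in assignments
        if a.is_active
    ]
```

Each label would then be rendered as a link to the admin view/edit volunteer page.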
**Screenshots**
https://casa-r4g-staging.herokuapp.com/casa_cases/14/edit
<img width="797" alt="Screen Shot 2020-04-18 at 3 36 32 PM" src="https://user-images.githubusercontent.com/578159/79672871-7fee1200-818a-11ea-9956-169a8508d463.png">
565,436 | 16,761,265,247 | IssuesEvent | 2021-06-13 20:49:26 | peering-manager/peering-manager | https://api.github.com/repos/peering-manager/peering-manager | closed | Adding IX Peering Sessions manually from the AS page | priority: medium type: enhancement | ### Environment
* Python version: 3.9.0
* Peering Manager version: 7ba396768ebc (v1.2.1)
### Proposed Functionality
Make the IX Peering Sessions tab for an AS always visible and add an Add button to it to create IX Peering Sessions manually.
### Use Case
When a peer doesn't have any IX peering sessions yet, the IX Peering Sessions tab is not visible. But even when it is visible, there's no Add button on the tab, so sessions that haven't been imported from PeeringDB can't be added here.
Currently you have to go via Internet Exchanges -> (Exchange) -> Peering Sessions tab -> Add to add a session manually for an AS in an IXP. Here you’ll get the same form as when you import a session from PeeringDB (with an empty IP address field) but it would be easier to reach this from the AS itself.
404,481 | 11,858,131,290 | IssuesEvent | 2020-03-25 10:51:10 | cpeditor/cpeditor | https://api.github.com/repos/cpeditor/cpeditor | closed | Support `cf parse` and `cf race` | enhancement help wanted medium_priority | **Is your feature request related to a problem? Please describe.**
When we use the "open contest" option, we open multiple files, say A, B, C, etc. But when we parse the contest using Competitive Companion, new tabs for those problems A, B, C get created.
**Describe the solution you'd like**
My request is to add a feature that updates the test cases of the file with the corresponding name (i.e. test cases of A in A.cpp, B in B.cpp, etc.) rather than creating new tabs (if the files already exist). This will also help us during a contest by avoiding having to select the location every time.
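The requested routing (send parsed test cases to the already-open file whose name matches the problem, instead of opening a new tab) could be sketched like this. The function and its signature are hypothetical, not cpeditor's actual API:

```python
# Hypothetical sketch of the requested behaviour: route parsed test cases
# to an already-open file named after the problem (A -> A.cpp) instead of
# creating a new tab. This is not cpeditor's actual code.
from typing import List, Optional


def route_parsed_problem(problem_name: str, open_files: List[str]) -> Optional[str]:
    """Return the open file that should receive the test cases,
    or None if a new tab has to be created."""
    target = f"{problem_name}.cpp"
    for path in open_files:
        # compare only the file name, ignoring directories
        if path.rsplit("/", 1)[-1] == target:
            return path
    return None
```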
**Describe alternatives you've considered**
**Additional context**
831,659 | 32,057,366,124 | IssuesEvent | 2023-09-24 08:40:29 | Seprintour-Test/test | https://api.github.com/repos/Seprintour-Test/test | reopened | Update Documentation On The Ubiquity Readme | Time: <1 Hour Priority: 2 (Medium) Price: 25 USD | modify the documentation on the ubiquity readme
###### [ **[ View on Telegram ]** ](https://t.me/c/1975484291/223)
732,964 | 25,282,486,842 | IssuesEvent | 2022-11-16 16:43:06 | Clan-Attack/Core | https://api.github.com/repos/Clan-Attack/Core | closed | [Enchant]: Extend IPlayer | Priority: Medium Type: Enchant | ### Is your feature request related to a problem?
- [X] Check this if your feature request is related to a problem
### Please describe the problem
I needed to get a UUID and name from an `IPlayer`; there is no way to do so.
### Describe the solution you'd like
Extend the `IPlayer` with
- `IPlayer#uuid`
- `IPlayer#name`
- `IPlayer#bukkit`
- `IPlayer#offlineBukkit`
If the player isn't online, `bukkit` should return null.
If the player never joined the server, `offlineBukkit` should return null.
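The proposed contract's null semantics could be modelled as below. This is a Python sketch for illustration only; the real feature would be Java/Bukkit, and all names here are assumptions.

```python
# Python sketch of the proposed IPlayer contract (the real API is Java/Bukkit).
# bukkit() is None when the player is offline; offline_bukkit() is None when
# the player never joined the server.
from dataclasses import dataclass
from typing import Optional


@dataclass
class IPlayer:
    uuid: str
    name: str
    online: bool = False
    has_joined: bool = False

    def bukkit(self) -> Optional[str]:
        # stands in for the online Bukkit Player handle
        return f"Player({self.name})" if self.online else None

    def offline_bukkit(self) -> Optional[str]:
        # stands in for the Bukkit OfflinePlayer handle
        return f"OfflinePlayer({self.name})" if self.has_joined else None
```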
### Describe alternatives you've considered
Create the bukkit player from the uuid by myself
### Additional context
_No response_
796,372 | 28,108,554,407 | IssuesEvent | 2023-03-31 04:23:28 | WordPress/openverse | https://api.github.com/repos/WordPress/openverse | closed | Deployment workflow runs do not show in workflow run history | 🟨 priority: medium 🛠 goal: fix 🤖 aspect: dx 🧱 stack: mgmt | ## Description
Workflow runs triggered by the `workflow_call` event apparently do not show up in the workflow run history. This means that staging deployments do not show up in the staging deployment workflow run histories, because they are dispatched via `workflow_call`:
- https://github.com/WordPress/openverse/actions/workflows/deploy-staging-api.yml
- https://github.com/WordPress/openverse/actions/workflows/deploy-staging-nuxt.yml
This will also be an issue for the release-app workflow, as it also uses `workflow_call` to trigger the production frontend and API deployments.

To fix this, we can use the `gh` CLI to dispatch the workflow rather than using `workflow_call`. `workflow_dispatch`-triggered runs do show in the history. This would require updating the CI/CD and release-app workflows' frontend and API deployment jobs to run the following script:
```
gh workflow run "<workflow name>" -f tag=${{ needs.get-image-tag.outputs.image-tag }}
```
or
```
gh workflow run <workflow id> -f tag=${{ needs.get-image-tag.outputs.image-tag }}
```
Depending on whether we want to use the stable ID or the human-readable workflow name (I prefer using the workflow name).
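For illustration, assembling the `gh workflow run` invocation from the workflow name and image tag could look like this. This is a hedged Python sketch; in the real workflow this would simply be a shell step.

```python
# Sketch of assembling the `gh workflow run` command described above.
# shlex.join shows the exact command line that would be executed.
import shlex


def dispatch_command(workflow_name: str, image_tag: str) -> str:
    """Build the gh CLI command that dispatches a deployment workflow,
    passing the image tag as a workflow_dispatch input."""
    return shlex.join(
        ["gh", "workflow", "run", workflow_name, "-f", f"tag={image_tag}"]
    )
```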
## Reproduction
Notice that this staging API deployment does not show in the workflow run history linked above for staging API deployment: https://github.com/WordPress/openverse/actions/runs/4527118209/jobs/7972807708
274,001 | 8,555,989,483 | IssuesEvent | 2018-11-08 11:43:25 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | reopened | [7.8.0 #79e33fd9] Placing down a workbench does not fulfill the tutorials requirement anymore | Medium Priority | I tried several times, it does not tick the objective:
<img width="1920" alt="workbench" src="https://user-images.githubusercontent.com/25908592/47251103-e330a400-d42e-11e8-94ec-0b18606e75a3.png">
599,242 | 18,268,676,579 | IssuesEvent | 2021-10-04 11:30:18 | moducate/heimdall | https://api.github.com/repos/moducate/heimdall | reopened | Split REST/GraphQL endpoints into separate services | Priority: Medium Status: Available Type: Enhancement good first issue Hacktoberfest | Currently, the GraphQL endpoint (`/graphql`) is exposed on the same HTTP service as the REST endpoints (such as `/school`).
To promote more modularity, we should split up GraphQL and REST onto separate HTTP services (ports 1470 and 1471 respectively), with CLI commands to serve either: one of the services, or both at once. (`heimdall serve <all | graphql | rest>`)
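Resolving the proposed `serve` target into the services and ports to start could be sketched as follows. This is hypothetical Python for illustration; Heimdall itself is not written in Python, and these names are assumptions.

```python
# Hypothetical sketch of resolving `heimdall serve <all | graphql | rest>`
# into the HTTP services (and ports) to start.
from typing import Dict

PORTS = {"graphql": 1470, "rest": 1471}


def resolve_serve_target(target: str) -> Dict[str, int]:
    """Return the {service: port} mapping for a serve target."""
    if target == "all":
        return dict(PORTS)
    if target in PORTS:
        return {target: PORTS[target]}
    raise ValueError(f"unknown serve target: {target}")
```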
It will then be up to the end user to use a reverse proxy if they wish to have both APIs accessible through a single host.
30,744 | 2,725,078,287 | IssuesEvent | 2015-04-14 21:27:57 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Create new account: General ToU show incorrectly | Component: UX & Upgrade Priority: Medium Status: QA Type: Bug | - I think there are typos and/or formatting issues
- I see many question marks. I don't think that's correct.

25,448 | 2,683,802,491 | IssuesEvent | 2015-03-28 10:18:01 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | ConEmu 090620a minimize/restore to taskbar | 1 star bug imported Priority-Medium | _From [alexandr...@gmail.com](https://code.google.com/u/102266525303291005921/) on June 21, 2009 05:10:25_
OS version: Win7
FAR version: Far20b1001.x86.20090621
Bug description:
I launch Far (via ConEmu); everything is fine. I click the icon in the taskbar and the window
minimizes; I click it again and the window restores, but ConEmu no longer
responds to anything. More precisely, it responds, but does not show it. Movement across the
panels can only be followed in the window title, where everything is displayed.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=16_
29,607 | 2,716,628,253 | IssuesEvent | 2015-04-10 20:20:08 | CruxFramework/crux | https://api.github.com/repos/CruxFramework/crux | closed | [Smart-Faces] Create a Simple View Container | enhancement imported Milestone-M14-C4 Module-CruxSmartFaces Priority-Medium | _From [juli...@cruxframework.org](https://code.google.com/u/108392056359000771618/) on August 27, 2014 14:16:08_
Create a simple view container component on crux-smart-faces
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=494_
770,526 | 27,043,384,814 | IssuesEvent | 2023-02-13 07:54:22 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Rename `adoptium-java` datasource to `java-version` | priority-3-medium type:refactor status:ready | ### What would you like Renovate to be able to do?
We should do this to be in line with our other datasources.
- #20233
### If you have any ideas on how this should be implemented, please tell us here.
just rename and migrate
### Is this a feature you are interested in implementing yourself?
Maybe
17,051 | 2,615,129,766 | IssuesEvent | 2015-03-01 05:59:30 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | GDrive sample for android | auto-migrated Priority-Medium Type-Sample | ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
Google Drive api
What format (e.g. JSON, Atom)?
JSON
Java environment (e.g. Java 6, Android 2.3, App Engine)?
Android
External references, such as API reference guide?
Please provide any additional information below.
```
Original issue reported on code.google.com by `madhusud...@gmail.com` on 23 Oct 2012 at 1:04
* Merged into: #469
690,083 | 23,645,202,239 | IssuesEvent | 2022-08-25 21:15:09 | bcgov/cas-cif | https://api.github.com/repos/bcgov/cas-cif | closed | Record demo for the upcoming branch meeting | Task Backlog Refinement Medium Priority | #### Describe the task
The next Branch meeting is scheduled for July 26th. The presentation [slide deck](https://bcgov.sharepoint.com/:p:/t/00608-ScrumTeam/EV6WsJ49vrxMr1ZqMUV7nMYB68gvCWgv8E7NwGkEaQYurg?e=KdVHKc) is done. A pre-recorded demo remains to be completed.
#### Acceptance Criteria
- [x] Decide who and what to be recorded
- [x] Record and host in the MS Teams folder
- [x] Link the demo from the slide deck
#### Additional context
- See #545 for more details.
103,518 | 4,174,564,757 | IssuesEvent | 2016-06-21 14:25:42 | CascadesCarnivoreProject/Timelapse | https://api.github.com/repos/CascadesCarnivoreProject/Timelapse | opened | Bug: DialogDateRereadDatesFromImages doesn't update the dates | Medium Priority fix | To reproduce:
Update bug. Reread date and time from the images
1. Change some of the dates in the date field so that they differ from what had been read in
2. Select Re-read dates from images
3. The feedback says that the dates haven’t changed,
4. The changed dates in the field are not updated in either the datagrid or the db
A preliminary walk through the code suggests it was altered considerably from the original, so I have to figure out what it's now doing before I can fix it.
25,668 | 2,683,918,822 | IssuesEvent | 2015-03-28 13:26:26 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Ellipses in menus | 2–5 stars bug imported Priority-Medium | _From [yurivk...@gmail.com](https://code.google.com/u/109564459582005085765/) on April 06, 2010 23:12:34_
According to all guidelines, a menu item should have an ellipsis if
additional information is requested before the command is executed, and
should not have one if nothing is requested.
**Attachment:** [0001-Fix-menu-ellipses.patch](http://code.google.com/p/conemu-maximus5/issues/detail?id=220)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=220_
463,131 | 13,260,315,154 | IssuesEvent | 2020-08-20 18:00:32 | juju/js-libjuju | https://api.github.com/repos/juju/js-libjuju | closed | Can't get model from Charm store | Priority: Medium | As per our discussions on IRC today, it seems impossible to get a bundle staged via the getChanges API in the bundle facade.
Passing in the raw YAML to the same API call does work.
Whatever we pass in the bundleurl, the response is "at least one application must be specified".
80,330 | 3,560,934,653 | IssuesEvent | 2016-01-23 12:40:06 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | Anki skips to next card without user direction | bug Priority-Medium waitingforfeedback | Originally reported on Google Code with ID 2001
```
What steps will reproduce the problem?
(unfortunately, this is hard to reproduce. Something odd must be going on in the background)
1. Turn off 'automatically display answer'
2. Open a deck
3. Wait an unspecified length of time (sometimes 5 min, sometimes 30 seconds, sometimes
never happens!), and Anki will skip from Card1 to Card2.
What is the expected output? What do you see instead?
> This occurs regardless of whether you are on the front or the back of Card1. It also
occurs regardless of whether the phone is active or asleep.
Does it happen again every time you repeat the steps above? Or did it
happen only one time?
>This has happened to me over 100 times. However, I still cannot consistently reproduce
it! I am at a loss
What version of AnkiDroid are you using? (Decks list > menu > About > Look
at the title)
>> v. 2.0.4
On what version of Android? (Home screen > menu > About phone > Android
version)
>> 4.2.2
If it is a crash or "Force close" and you can reproduce it, the following
would help immensely: 1) Install the "SendLog" app, 2) Reproduce the crash,
3) Immediately after, launch SendLog, 4) Attach the resulting file to this
report. That will make the bug much easier to fix.
Please provide any additional information below.
I'm very willing to help, just let me know what I can do
```
Reported by `fischerlees` on 2014-02-22 22:34:55
227,160 | 7,527,604,565 | IssuesEvent | 2018-04-13 17:40:00 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | excavator cant dump mud in to water | Medium Priority | why cant the excavator dump mud in to the water like you used to be able to do ? makes trying to level a area very hard | 1.0 | excavator cant dump mud in to water - why cant the excavator dump mud in to the water like you used to be able to do ? makes trying to level a area very hard | priority | excavator cant dump mud in to water why cant the excavator dump mud in to the water like you used to be able to do makes trying to level a area very hard | 1 |
47,374 | 2,978,350,715 | IssuesEvent | 2015-07-16 05:24:42 | adobe/brackets | https://api.github.com/repos/adobe/brackets | opened | [IQE]: Health Data Report File Stat metric for working set does not capture the preferences file count opened from debug menu in the w | IQE medium priority | Steps:
1) Launch Brackets
2) Open preferences file from debug menu
3) Open Health Data Report
Result : Health Data Report File Stat metric for working set does not capture the preferences file count opened from debug menu, only openedFileExt count increases | 1.0 | [IQE]: Health Data Report File Stat metric for working set does not capture the preferences file count opened from debug menu in the w - Steps:
1) Launch Brackets
2) Open preferences file from debug menu
3) Open Health Data Report
Result : Health Data Report File Stat metric for working set does not capture the preferences file count opened from debug menu, only openedFileExt count increases | priority | health data report file stat metric for working set does not capture the preferences file count opened from debug menu in the w steps launch bracekts open preferences file from debug menu open health data report result health data report file stat metric for working set does not capture the preferences file count opened from debug menu only openedfileext count increases | 1 |
509,621 | 14,740,565,037 | IssuesEvent | 2021-01-07 09:17:30 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | Extend display of bans | Category: Accounts Priority: Medium | Bans of users should additionally display the reason and a note of
"If you think this is in error, contact support@strangeloopgames.com"
| 1.0 | Extend display of bans - Bans of users should additionally display the reason and a note of
"If you think this is in error, contact support@strangeloopgames.com"
| priority | extend display of bans bans of users should additionally display the reason and a note of if you think this is in error contact support strangeloopgames com | 1 |
209,463 | 7,176,332,112 | IssuesEvent | 2018-01-31 09:41:43 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Write Consistency Improvements | Priority: Medium Source: Internal Team: Core Type: Enhancement | - [x] Sync writes and sync backups should be consistent and synchronous from caller point of view.
- [x] Graceful shutdown should:
- [x] prevent receiving new operations
- [x] wait for in-flight operations to finish and send responses back to callers
https://hazelcast.atlassian.net/wiki/display/PM/Write+Consistency+Improvements
| 1.0 | Write Consistency Improvements - - [x] Sync writes and sync backups should be consistent and synchronous from caller point of view.
- [x] Graceful shutdown should:
- [x] prevent receiving new operations
- [x] wait for in-flight operations to finish and send responses back to callers
https://hazelcast.atlassian.net/wiki/display/PM/Write+Consistency+Improvements
| priority | write consistency improvements sync writes and sync backups should be consistent and synchronous from caller point of view graceful shutdown should prevent receiving new operations wait for in flight operations to finish and send responses back to callers | 1 |
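The shutdown contract in the checklist above (stop accepting new operations, then drain in-flight ones before responding) can be sketched in a few lines of Python. This is an illustrative toy model only, not Hazelcast's actual implementation; all names are invented:

```python
import threading

class GracefulService:
    """Toy model of the contract above: refuse new operations after
    shutdown starts, and wait for in-flight operations to finish."""

    def __init__(self):
        self._cond = threading.Condition()
        self._in_flight = 0
        self._accepting = True

    def submit(self, op):
        # Reject new work once shutdown has begun.
        with self._cond:
            if not self._accepting:
                raise RuntimeError("service is shutting down")
            self._in_flight += 1
        try:
            return op()  # run the operation and return its response to the caller
        finally:
            with self._cond:
                self._in_flight -= 1
                self._cond.notify_all()

    def shutdown(self):
        # Graceful shutdown: stop accepting, then block until in-flight work drains.
        with self._cond:
            self._accepting = False
            while self._in_flight:
                self._cond.wait()
```

The key design point mirrored here is ordering: the accepting flag is flipped before waiting, so no operation can slip in between the two steps.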
751,011 | 26,227,781,226 | IssuesEvent | 2023-01-04 20:27:21 | KingSupernova31/RulesGuru | https://api.github.com/repos/KingSupernova31/RulesGuru | closed | Allow ORing of template fields | enhancement medium priority | Allow separate template rules to be ORed together such that a card must only satisfy at least one in order to be returned. (The most common usage of this would be for "instant or sorcery".)
The UI should probably accomplish this by having some sort of drag-able line or box that can connect rules. Perhaps allow the rules to be reordered and then only adjacent rules can be ORed. Or there could be an overlay that allows for selection of any two rules. Any system that looks nice and is easy to use is fine. | 1.0 | Allow ORing of template fields - Allow separate template rules to be ORed together such that a card must only satisfy at least one in order to be returned. (The most common usage of this would be for "instant or sorcery".)
The UI should probably accomplish this by having some sort of drag-able line or box that can connect rules. Perhaps allow the rules to be reordered and then only adjacent rules can be ORed. Or there could be an overlay that allows for selection of any two rules. Any system that looks nice and is easy to use is fine. | priority | allow oring of template fields allow separate template rules to be ored together such that a card must only satisfy at least one in order to be returned the most common usage of this would be for instant or sorcery the ui should probably accomplish this by having some sort of drag able line or box that can connect rules perhaps allow the rules to be reordered and then only adjacent rules can be ored or there could be an overlay that allows for selection of any two rules any system that looks nice and is easy to use is fine | 1 |
32,536 | 2,755,726,815 | IssuesEvent | 2015-04-26 22:06:15 | IIsi50MHz/chromey-calculator | https://api.github.com/repos/IIsi50MHz/chromey-calculator | reopened | popout's input field not initially focused | auto-migrated bug Priority-Medium | ```
What steps will reproduce the problem?
Press the button or trigger it in some other way. It automatically takes
focus, requiring me to click in the text field every single time. Sure, it's
only one click, but it is still annoying.
What is the expected output?
What do you see instead?
What version of Chrome are you using?
What operating system are you using?
What country are you in?
Please provide any additional information below.
```
Original issue reported on code.google.com by `saumanah...@gmail.com` on 27 Sep 2012 at 9:31 | 1.0 | popout's input field not initially focused - ```
What steps will reproduce the problem?
Press the button or trigger it in some other way. It automatically takes
focus, requiring me to click in the text field every single time. Sure, it's
only one click, but it is still annoying.
What is the expected output?
What do you see instead?
What version of Chrome are you using?
What operating system are you using?
What country are you in?
Please provide any additional information below.
```
Original issue reported on code.google.com by `saumanah...@gmail.com` on 27 Sep 2012 at 9:31 | priority | popout s input field not initially focused what steps will reproduce the problem press the button or trigger it in some other way it automatically takes focus requiring me to click in the text field every single time sure it s only one click but it is still annoying what is the expected output what do you see instead what version of chrome are you using what operating system are you using what country are you in please provide any additional information below original issue reported on code google com by saumanah gmail com on sep at | 1 |
227,080 | 7,526,704,723 | IssuesEvent | 2018-04-13 14:46:47 | emoncms/MyHomeEnergyPlanner | https://api.github.com/repos/emoncms/MyHomeEnergyPlanner | closed | Merge 'roofs' and 'lofts' measures list. | For release Libraries Medium priority usability | When applying measures, sometimes need to change a 'loft' to a 'roof' and vice versa.
This is because sometimes works are planned by the householder that involve moving the insulation line from the ceiling to the rafters.
At the moment this option is locked out - you can't select a 'roof' measure when it was a 'loft' in the baseline scenario.
Simplest solution is probably to put all 'roofs' and 'lofts' into the same list | 1.0 | Merge 'roofs' and 'lofts' measures list. - When applying measures, sometimes need to change a 'loft' to a 'roof' and vice versa.
This is because sometimes works are planned by the householder that involve moving the insulation line from the ceiling to the rafters.
At the moment this option is locked out - you can't select a 'roof' measure when it was a 'loft' in the baseline scenario.
Simplest solution is probably to put all 'roofs' and 'lofts' into the same list | priority | merge roofs and lofts measures list when applying measures sometimes need to change a loft to a roof and vice versa this is because sometimes works are planned by the householder that involve moving the insulation line from the ceiling to the rafters at the moment this option is locked out you can t select a roof measure when it was a loft in the baseline scenario simplest solution is probably to put all roofs and lofts into the same list | 1 |
782,249 | 27,491,295,441 | IssuesEvent | 2023-03-04 16:55:27 | aleksbobic/csx | https://api.github.com/repos/aleksbobic/csx | closed | UI Guide & Survey | enhancement priority:medium Complexity:medium | **Is your feature request related to a problem? Please describe.**
Users should have the option to click "show me around". This should guide them through the UI and provide them with an overview of basic features.
| 1.0 | UI Guide & Survey - **Is your feature request related to a problem? Please describe.**
Users should have the option to click "show me around". This should guide them through the UI and provide them with an overview of basic features.
| priority | ui guide survey is your feature request related to a problem please describe users should have the option to click show me around this should guide them through the ui and provide them with an overview of basic features | 1 |
351,957 | 10,525,704,273 | IssuesEvent | 2019-09-30 15:33:48 | forceworkbench/forceworkbench | https://api.github.com/repos/forceworkbench/forceworkbench | closed | Add UI support for HAVING in SOQL | Component-Query Priority-Medium Scheduled-Backlog enhancement imported | _Original author: ryan.bra...@gmail.com (February 06, 2010 04:46:02)_
New HAVING Clause
There is a new HAVING clause in SOQL that is similar to HAVING in SQL.
You can use a HAVING clause with a GROUP BY clause to filter the results
returned by aggregate functions, such as SUM(). A HAVING clause is similar
to a WHERE clause. The difference is that you can include aggregate
functions in a HAVING clause, but not in a WHERE clause. For example, the
following query returns accounts with duplicate names:
```
SELECT Name, Count(Id)
FROM Account
GROUP BY Name
HAVING Count(Id) > 1
```
For more information, see “HAVING” in the Force.com Web Services API Developer's Guide.
_Original issue: http://code.google.com/p/forceworkbench/issues/detail?id=273_
| 1.0 | Add UI support for HAVING in SOQL - _Original author: ryan.bra...@gmail.com (February 06, 2010 04:46:02)_
New HAVING Clause
There is a new HAVING clause in SOQL that is similar to HAVING in SQL.
You can use a HAVING clause with a GROUP BY clause to filter the results
returned by aggregate functions, such as SUM(). A HAVING clause is similar
to a WHERE clause. The difference is that you can include aggregate
functions in a HAVING clause, but not in a WHERE clause. For example, the
following query returns accounts with duplicate names:
```
SELECT Name, Count(Id)
FROM Account
GROUP BY Name
HAVING Count(Id) > 1
```
For more information, see “HAVING” in the Force.com Web Services API Developer's Guide.
_Original issue: http://code.google.com/p/forceworkbench/issues/detail?id=273_
| priority | add ui support for having in soql original author ryan bra gmail com february new having clause there is a new having clause in soql that is similar to having in sql you can use a having clause with a group by clause to filter the results returned by aggregate functions such as sum a having clause is similar to a where clause the difference is that you can include aggregate functions in a having clause but not in a where clause for example the following query returns accounts with duplicate names select name count id from account group by name having count id gt for more information see “having” in the force com web services api developer s guide original issue | 1 |
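For readers less familiar with SOQL, the GROUP BY / HAVING semantics quoted in the record above can be mimicked in plain Python. The sample account names are invented for illustration:

```python
from collections import Counter

# Mirrors: SELECT Name, COUNT(Id) FROM Account GROUP BY Name HAVING COUNT(Id) > 1
account_names = ["Acme", "Globex", "Acme", "Initech"]  # hypothetical sample data

# GROUP BY Name + COUNT(Id): count rows per group
counts = Counter(account_names)

# HAVING filters the *grouped* result, which a WHERE clause cannot do
duplicates = {name: n for name, n in counts.items() if n > 1}
# duplicates is {"Acme": 2}, i.e. the accounts with duplicate names
```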
340,442 | 10,272,435,853 | IssuesEvent | 2019-08-23 16:26:03 | 0xfr34ky/webeng-viergewinnt | https://api.github.com/repos/0xfr34ky/webeng-viergewinnt | closed | Color selection for the pieces | medium priority | Generate a set of color combinations.
Before the game, ask the players which color combination they want. | 1.0 | Color selection for the pieces - Generate a set of color combinations.
Before the game, ask the players which color combination they want. | priority | color selection for the pieces generate a set of color combinations before the game ask the players which color combination they want | 1 |
443,075 | 12,759,318,767 | IssuesEvent | 2020-06-29 05:29:07 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | Documents: PDF document upload not working on Forum Discussions | bug component: document priority: medium | **Describe the bug**
Uploading PDF not working on Forum Discussions. But this is working on Profile document uploads
**To Reproduce**
Steps to reproduce the behavior:
1. Enable Documents on BuddyBoss Settings
2. allow Document uploads on Profile and Forums on Settings > Media
3. Go to a Forums, create a New Discussion and upload a PDF file
4. See error
**Expected behavior**
After the upload, this is showing an error
**Screenshots**
https://drive.google.com/file/d/1ztMfWYl1wVXKrFOnq8z5aA4_1bghbCLZ/view?usp=sharing
**Support ticket links**
https://secure.helpscout.net/conversation/1205941327/79973 | 1.0 | Documents: PDF document upload not working on Forum Discussions - **Describe the bug**
Uploading PDF not working on Forum Discussions. But this is working on Profile document uploads
**To Reproduce**
Steps to reproduce the behavior:
1. Enable Documents on BuddyBoss Settings
2. allow Document uploads on Profile and Forums on Settings > Media
3. Go to a Forums, create a New Discussion and upload a PDF file
4. See error
**Expected behavior**
After the upload, this is showing an error
**Screenshots**
https://drive.google.com/file/d/1ztMfWYl1wVXKrFOnq8z5aA4_1bghbCLZ/view?usp=sharing
**Support ticket links**
https://secure.helpscout.net/conversation/1205941327/79973 | priority | documents pdf document upload not working on forum discussions describe the bug uploading pdf not working on forum discussions but this is working on profile document uploads to reproduce steps to reproduce the behavior enable documents on buddyboss settings allow document uploads on profile and forums on settings media go to a forums create a new discussion and upload a pdf file see error expected behavior after the upload this is showing an error screenshots support ticket links | 1 |
766,430 | 26,883,548,510 | IssuesEvent | 2023-02-05 22:45:37 | dcs-retribution/dcs-retribution | https://api.github.com/repos/dcs-retribution/dcs-retribution | closed | Adjustment slider for AI purchase behavior | Enhancement Good First Issue Priority Medium | ### Is your feature request related to a problem? Please describe.
Currently the AI purchase behavior regarding the ratio of air forces to ground forces is defined by a default ratio (I think it's 50/50). The user should have more control over this ratio.
### Describe the solution you'd like
Create an adjustment slider within the campaign settings.
The adjustment slider should range from 0 to 100% and use 10% increments.
### Additional context
_No response_ | 1.0 | Adjustment slider for AI purchase behavior - ### Is your feature request related to a problem? Please describe.
Currently the AI purchase behavior regarding the ratio of air forces to ground forces is defined by a default ratio (I think it's 50/50). The user should have more control over this ratio.
### Describe the solution you'd like
Create an adjustment slider within the campaign settings.
The adjustment slider should range from 0 to 100% and use 10% increments.
### Additional context
_No response_ | priority | adjustment slider for ai purchase behavior is your feature request related to a problem please describe currently the ai purchase behavior regarding the ratio of air forces to ground forces is defined by a default ratio i think it s the user should have more control over this ratio describe the solution you d like create an adjustment slider within the campaign settings the adjustment slider should range from to and use increments additional context no response | 1 |
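The requested 0 to 100% slider with 10% increments amounts to a clamp-and-snap step. A minimal sketch follows; the function name is invented and not from the project:

```python
def snap_ratio(percent: float) -> int:
    """Clamp a slider value to the 0..100 range and snap it to 10% increments."""
    clamped = max(0.0, min(100.0, percent))
    return int(round(clamped / 10.0)) * 10
```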
536,857 | 15,715,912,844 | IssuesEvent | 2021-03-28 04:10:04 | AY2021S2-CS2103T-T12-4/tp | https://api.github.com/repos/AY2021S2-CS2103T-T12-4/tp | closed | UI: remove menu bar? | priority.Medium type.Enhancement | I'm working on the demo slides and thought that since we are going for a simplistic no bs UI, should we remove the menu bar? Right now all it does is to provide an exit button and a help button. For exit, users can simply click the top right corner to close, for help users can simply type in help. Wonder what the rest think?

| 1.0 | UI: remove menu bar? - I'm working on the demo slides and thought that since we are going for a simplistic no bs UI, should we remove the menu bar? Right now all it does is to provide an exit button and a help button. For exit, users can simply click the top right corner to close, for help users can simply type in help. Wonder what the rest think?

| priority | ui remove menu bar i m working on the demo slides and thought that since we are going for a simplistic no bs ui should we remove the menu bar right now all it does is to provide an exit button and a help button for exit users can simply click the top right corner to close for help users can simply type in help wonder what the rest think | 1 |
465,815 | 13,392,668,934 | IssuesEvent | 2020-09-03 02:01:36 | alanqchen/Bear-Blog-Engine | https://api.github.com/repos/alanqchen/Bear-Blog-Engine | opened | Add support for uploading images in the body of the post | Medium Priority backend enhancement frontend | Currently only uploading the feature image to the server is allowed. The only way to add an image is by using an URL. To change this, the following needs to be added:
- [ ] Add backend API endpoint to delete an image given the image name in the request URL
- [ ] Add parser in the frontend that compares the raw old vs new post value and determines which images it needs to delete. (Use regex if possible). All images will be in the standard markdown syntax, and the regex should also check if it contains the API URL. | 1.0 | Add support for uploading images in the body of the post - Currently only uploading the feature image to the server is allowed. The only way to add an image is by using an URL. To change this, the following needs to be added:
- [ ] Add backend API endpoint to delete an image given the image name in the request URL
- [ ] Add parser in the frontend that compares the raw old vs new post value and determines which images it needs to delete. (Use regex if possible). All images will be in the standard markdown syntax, and the regex should also check if it contains the API URL. | priority | add support for uploading images in the body of the post currently only uploading the feature image to the server is allowed the only way to add an image is by using an url to change this the following needs to be added add backend api endpoint to delete an image given the image name in the request url add parser in the frontend that compares the raw old vs new post value and determines which images it needs to delete use regex if possible all images will be in the standard markdown syntax and the regex should also check if it contains the api url | 1 |
487,644 | 14,049,809,255 | IssuesEvent | 2020-11-02 10:48:11 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | Workqueues appears briefly before 'Set new pin entry' flow | Priority: medium 👹Bug | **Describe the bug**
- Workqueues appear before 'Set new pin entry' flow
**To Reproduce**
Steps to reproduce the behaviour:
1. Create new user
2. Login as the user
3. Workqueue appears
4. Fraction of a second later the Set pin flow appears
**Expected behaviour**
Go directly to Pin entry on Login
**Screenshots**
..
| 1.0 | Workqueues appears briefly before 'Set new pin entry' flow - **Describe the bug**
- Workqueues appear before 'Set new pin entry' flow
**To Reproduce**
Steps to reproduce the behaviour:
1. Create new user
2. Login as the user
3. Workqueue appears
4. Fraction of a second later the Set pin flow appears
**Expected behaviour**
Go directly to Pin entry on Login
**Screenshots**
..
| priority | workqueues appears briefly before set new pin entry flow describe the bug workqueues appear before set new pin entry flow to reproduce steps to reproduce the behaviour create new user login as the user workqueue appears fraction of a second later the set pin flow appears expected behaviour go directly to pin entry on login screenshots | 1 |
3,358 | 2,537,767,328 | IssuesEvent | 2015-01-26 22:52:38 | newca12/gapt | https://api.github.com/repos/newca12/gapt | closed | add de-Bruijn indices | 1 star imported Milestone-Release2.0 Priority-Medium Type-Task | _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on December 22, 2009 13:56:05_
Add de-Bruijn (db) indices to Var as Option[Int]. None means the Var is
free. The following should be taken into account:
1) Abs should preserves its nominal form but will also place the index in
all the bound variables.
2) in substitution, beta reduction, etc the indices must be recomputed
3) add a new method to the terms factory which creates Var together with
the optional index (otherwise the terms must be mutable). The old factory
method for Var will be implemented already in the base class and pass None.
_Original issue: http://code.google.com/p/gapt/issues/detail?id=61_ | 1.0 | add de-Bruijn indices - _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on December 22, 2009 13:56:05_
Add de-Bruijn (db) indices to Var as Option[Int]. None means the Var is
free. The following should be taken into account:
1) Abs should preserves its nominal form but will also place the index in
all the bound variables.
2) in substitution, beta reduction, etc the indices must be recomputed
3) add a new method to the terms factory which creates Var together with
the optional index (otherwise the terms must be mutable). The old factory
method for Var will be implemented already in the base class and pass None.
_Original issue: http://code.google.com/p/gapt/issues/detail?id=61_ | priority | add de bruijn indices from on december add de bruijn db indices to var as option none means the var is free the following should be taken into account abs should preserves its nominal form but will also place the index in all the bound variables in substitution beta reduction etc the indices must be recomputed add a new method to the terms factory which creates var together with the optional index otherwise the terms must be mutable the old factory method for var will be implemented already in the base class and pass none original issue | 1 |
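As a toy illustration of the Option[Int] scheme described in that record (None meaning the variable is free), a minimal de Bruijn index lookup might look like this. It is a sketch for intuition, not the gapt code:

```python
def de_bruijn_index(name, binders):
    """Return the de Bruijn index of `name` given the enclosing binders
    (innermost binder last in the list), or None when the variable is free."""
    for distance, binder in enumerate(reversed(binders)):
        if binder == name:
            return distance
    return None
```

Substitution and beta reduction would then recompute these indices, as point 2 of the record requires.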
584,053 | 17,404,938,810 | IssuesEvent | 2021-08-03 03:38:43 | f-lab-edu/conference-reservation | https://api.github.com/repos/f-lab-edu/conference-reservation | closed | [Company] Add Exception handling for id and password checks | Priority: Medium Type: Feature/Function | - Addition / improvement
Common Exception handling is needed in the logic that checks for
duplicate member IDs and for member account deletion.
- Reason for the addition / improvement
When the Service returns a boolean after processing, we need to distinguish whether the
error occurred during the Insert or whether it is the return value of the duplicate check. | 1.0 | [Company] Add Exception handling for id and password checks - - Addition / improvement
Common Exception handling is needed in the logic that checks for
duplicate member IDs and for member account deletion.
- Reason for the addition / improvement
When the Service returns a boolean after processing, we need to distinguish whether the
error occurred during the Insert or whether it is the return value of the duplicate check. | priority | company add exception handling for id and password checks addition improvement common exception handling is needed in the logic that checks for duplicate member ids and for member account deletion reason for the addition improvement when the service returns a boolean after processing we need to distinguish whether the error occurred during the insert or whether it is the return value of the duplicate check | 1 |
132,266 | 5,173,985,041 | IssuesEvent | 2017-01-18 17:23:20 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | closed | User issues with MGCP / OSM Conflate | Category: Translation Priority: Medium Status: Defined Type: Bug | I have downloaded a planet.osm.pbf file and extracted a bbox using Osmosis. My next process was to filter the road and rail layers of my bbox:
**Rail**: osmosis --read-xml odesa.osm --way-key-value keyValueList="railway=tram,railway=light_rail,railway=rail" --used-node --write-xml rail.osm
**Road**: osmosis --read-xml odesa.osm --tf accept-ways highway=\* --tf reject-ways highway=unclassified,service,living_street,pedestrian,track,bus_guideway,raceway,footway,bridleway,steps,path,cycleway --tf reject-relations --used-node --write-xml road_reject.osm
The return was positive with no errors. I then wanted to conflate roads then rail data with corresponding thematic layers within MGCP. The rail data conflated without issue however the road layer has been processing for several days.
I have allocated 1 CPU and 2GB RAM to the Hootenanny OS. The files sizes are: OSM Road - 20.4MB (.osm), MGCP Road - 14.4MB (.shp).
Other than suspect that the OSM road file size is too large for my processor to handle, I am unsure what the problem is. Can anyone help?
Thanks
| 1.0 | User issues with MGCP / OSM Conflate - I have downloaded a planet.osm.pbf file and extracted a bbox using Osmosis. My next process was to filter the road and rail layers of my bbox:
**Rail**: osmosis --read-xml odesa.osm --way-key-value keyValueList="railway=tram,railway=light_rail,railway=rail" --used-node --write-xml rail.osm
**Road**: osmosis --read-xml odesa.osm --tf accept-ways highway=\* --tf reject-ways highway=unclassified,service,living_street,pedestrian,track,bus_guideway,raceway,footway,bridleway,steps,path,cycleway --tf reject-relations --used-node --write-xml road_reject.osm
The return was positive with no errors. I then wanted to conflate roads then rail data with corresponding thematic layers within MGCP. The rail data conflated without issue however the road layer has been processing for several days.
I have allocated 1 CPU and 2GB RAM to the Hootenanny OS. The files sizes are: OSM Road - 20.4MB (.osm), MGCP Road - 14.4MB (.shp).
Other than suspect that the OSM road file size is too large for my processor to handle, I am unsure what the problem is. Can anyone help?
Thanks
| priority | user issues with mgcp osm conflate i have downloaded a planet osm pbf file and extracted a bbox using osmosis my next process was to filter the road and rail layers of my bbox rail osmosis read xml odesa osm way key value keyvaluelist railway tram railway light rail railway rail used node write xml rail osm road osmosis read xml odesa osm tf accept ways highway tf reject ways highway unclassified service living street pedestrian track bus guideway raceway footway bridleway steps path cycleway tf reject relations used node write xml road reject osm the return was positive with no errors i then wanted to conflate roads then rail data with corresponding thematic layers within mgcp the rail data conflated without issue however the road layer has been processing for several days i have allocated cpu and ram to the hootenanny os the files sizes are osm road osm mgcp road shp other than suspect that the osm road file size is too large for my processor to handle i am unsure what the problem is can anyone help thanks | 1 |
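The --tf accept/reject flags in those osmosis commands implement a tag filter. A rough Python equivalent of the road filter, with the reject list copied from the command in the report, would be:

```python
# Reject list taken from the osmosis road command quoted above
REJECTED_HIGHWAYS = {
    "unclassified", "service", "living_street", "pedestrian", "track",
    "bus_guideway", "raceway", "footway", "bridleway", "steps",
    "path", "cycleway",
}

def keep_road(tags: dict) -> bool:
    """Accept ways tagged highway=* except those in the reject list,
    mirroring `--tf accept-ways highway=*` followed by `--tf reject-ways ...`."""
    highway = tags.get("highway")
    return highway is not None and highway not in REJECTED_HIGHWAYS
```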
251,078 | 7,999,564,385 | IssuesEvent | 2018-07-22 03:33:22 | Marri/glowfic | https://api.github.com/repos/Marri/glowfic | opened | In replies#search, the 'condensed' checkbox resets to true after each search | 3. medium priority 7. easy type: bug | It should persist its previous value if a search has been performed, while also defaulting (otherwise) to true. | 1.0 | In replies#search, the 'condensed' checkbox resets to true after each search - It should persist its previous value if a search has been performed, while also defaulting (otherwise) to true. | priority | in replies search the condensed checkbox resets to true after each search it should persist its previous value if a search has been performed while also defaulting otherwise to true | 1 |
259,375 | 8,198,070,949 | IssuesEvent | 2018-08-31 15:14:42 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | tests/kernel/mem_pool/mem_pool_concept/testcase.yaml#kernel.memory_pool fails on nrf52840_pca10056, nrf52_pca10040 and nrf51_pca10028 | area: ARM bug nRF priority: medium | Execution log:
```
Running test suite mpool_concept
===================================================================
starting test - test_mpool_alloc_wait_prio
Assertion failed at /home/kspoorth/work/latest_zephyr/tests/kernel/mem_pool/mem_pool_concept/src/test_mpool_alloc_wait.c:24: tmpool_alloc_wait_timeout: k_mem_pool_alloc(&mpool1, &block, BLK_SIZE_MIN, TIMEOUT) == -EAGAIN is false
FAIL - test_mpool_alloc_wait_prio
===================================================================
starting test - test_mpool_alloc_size_roundup
Assertion failed at /home/kspoorth/work/latest_zephyr/tests/kernel/mem_pool/mem_pool_concept/src/test_mpool_alloc_size.c:36: test_mpool_alloc_size_roundup: k_mem_pool_alloc(&mpool1, &block[i], TEST_SIZE, K_NO_WAIT) == 0 is false
FAIL - test_mpool_alloc_size_roundup
===================================================================
starting test - test_mpool_alloc_merge_failed_diff_size
PASS - test_mpool_alloc_merge_failed_diff_size
===================================================================
starting test - test_mpool_alloc_merge_failed_diff_parent
Assertion failed at /home/kspoorth/work/latest_zephyr/tests/kernel/mem_pool/mem_pool_concept/src/test_mpool_merge_fail_diff_parent.c:33: test_mpool_alloc_merge_failed_diff_parent: k_mem_pool_alloc(&mpool1, &block[i], BLK_SIZE_MIN, K_NO_WAIT) == 0 is false
FAIL - test_mpool_alloc_merge_failed_diff_parent
===================================================================
===================================================================
PROJECT EXECUTION FAILED
```
Steps to reproduce:
cd tests/kernel/mem_pool/mem_pool_concept
mkdir build && cd build
cmake -DBOARD=nrf51_pca10028 .. && make flash
Latest commit: afa0e0026fb94d37f50dc53f9d0981b1dc5c306b
Platforms tested: nrf52840_pca10056, nrf52_pca10040 and nrf51_pca10028 | 1.0 | tests/kernel/mem_pool/mem_pool_concept/testcase.yaml#kernel.memory_pool fails on nrf52840_pca10056, nrf52_pca10040 and nrf51_pca10028 - Execution log:
```
Running test suite mpool_concept
===================================================================
starting test - test_mpool_alloc_wait_prio
Assertion failed at /home/kspoorth/work/latest_zephyr/tests/kernel/mem_pool/mem_pool_concept/src/test_mpool_alloc_wait.c:24: tmpool_alloc_wait_timeout: k_mem_pool_alloc(&mpool1, &block, BLK_SIZE_MIN, TIMEOUT) == -EAGAIN is false
FAIL - test_mpool_alloc_wait_prio
===================================================================
starting test - test_mpool_alloc_size_roundup
Assertion failed at /home/kspoorth/work/latest_zephyr/tests/kernel/mem_pool/mem_pool_concept/src/test_mpool_alloc_size.c:36: test_mpool_alloc_size_roundup: k_mem_pool_alloc(&mpool1, &block[i], TEST_SIZE, K_NO_WAIT) == 0 is false
FAIL - test_mpool_alloc_size_roundup
===================================================================
starting test - test_mpool_alloc_merge_failed_diff_size
PASS - test_mpool_alloc_merge_failed_diff_size
===================================================================
starting test - test_mpool_alloc_merge_failed_diff_parent
Assertion failed at /home/kspoorth/work/latest_zephyr/tests/kernel/mem_pool/mem_pool_concept/src/test_mpool_merge_fail_diff_parent.c:33: test_mpool_alloc_merge_failed_diff_parent: k_mem_pool_alloc(&mpool1, &block[i], BLK_SIZE_MIN, K_NO_WAIT) == 0 is false
FAIL - test_mpool_alloc_merge_failed_diff_parent
===================================================================
===================================================================
PROJECT EXECUTION FAILED
```
Steps to reproduce:
cd tests/kernel/mem_pool/mem_pool_concept
mkdir build && cd build
cmake -DBOARD=nrf51_pca10028 .. && make flash
Latest commit: afa0e0026fb94d37f50dc53f9d0981b1dc5c306b
Platforms tested: nrf52840_pca10056, nrf52_pca10040 and nrf51_pca10028 | priority | tests kernel mem pool mem pool concept testcase yaml kernel memory pool fails on and execution log running test suite mpool concept starting test test mpool alloc wait prio assertion failed at home kspoorth work latest zephyr tests kernel mem pool mem pool concept src test mpool alloc wait c tmpool alloc wait timeout k mem pool alloc block blk size min timeout eagain is false fail test mpool alloc wait prio starting test test mpool alloc size roundup assertion failed at home kspoorth work latest zephyr tests kernel mem pool mem pool concept src test mpool alloc size c test mpool alloc size roundup k mem pool alloc block test size k no wait is false fail test mpool alloc size roundup starting test test mpool alloc merge failed diff size pass test mpool alloc merge failed diff size starting test test mpool alloc merge failed diff parent assertion failed at home kspoorth work latest zephyr tests kernel mem pool mem pool concept src test mpool merge fail diff parent c test mpool alloc merge failed diff parent k mem pool alloc block blk size min k no wait is false fail test mpool alloc merge failed diff parent project execution failed steps to reproduce cd tests kernel mem pool mem pool concept mkdir build cd build cmake dboard make flash latest commit platforms tested and | 1 |
88,706 | 3,783,890,475 | IssuesEvent | 2016-03-19 12:46:04 | VladSerdobintsev/zfcore | https://api.github.com/repos/VladSerdobintsev/zfcore | closed | Module Comments | auto-migrated Priority-Medium Type-Enhancement | ```
Add a page to the administrator section listing comments awaiting administrator approval
```
Original issue reported on code.google.com by `AntonShe...@gmail.com` on 2 Aug 2012 at 10:10 | 1.0 | Module Comments - ```
Add a page to the administrator section listing comments awaiting administrator approval
```
Original issue reported on code.google.com by `AntonShe...@gmail.com` on 2 Aug 2012 at 10:10 | priority | module comments add a page to the administrator section listing comments awaiting administrator approval original issue reported on code google com by antonshe gmail com on aug at | 1 |
271,837 | 8,490,023,459 | IssuesEvent | 2018-10-26 22:10:58 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | opened | Investigate case list | bug-medium-priority case-search/list foxtrot | Jebby reported a case where there are only 3 cases associated with the Veteran ID in Caseflow (showing in the Case list/search results) but many more in VACOLS. Can we investigate why Caseflow isn't showing all of the results?

 | 1.0 | Investigate case list - Jebby reported a case where there are only 3 cases associated with the Veteran ID in Caseflow (showing in the Case list/search results) but many more in VACOLS. Can we investigate why Caseflow isn't showing all of the results?

 | priority | investigate case list jebby reported a case where there are only cases associated with the veteran id in caseflow showing in the case list search results but many more in vacols can we investigate why caseflow isn t showing all of the results | 1 |
336,830 | 10,197,508,194 | IssuesEvent | 2019-08-13 00:43:00 | ZTLARTCC/ZTL_website | https://api.github.com/repos/ZTLARTCC/ZTL_website | closed | Emails Throw Random Errors | bug medium priority | Not really sure. I've done really all I can and anyone else that would like to take a look at it is more than welcome. It "works" but not really. | 1.0 | Emails Throw Random Errors - Not really sure. I've done really all I can and anyone else that would like to take a look at it is more than welcome. It "works" but not really. | priority | emails throw random errors not really sure i ve done really all i can and anyone else that would like to take a look at it is more than welcome it works but not really | 1 |
250,110 | 7,968,553,626 | IssuesEvent | 2018-07-16 04:00:21 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | closed | WayJoinerTest fails when quick tests are run sequentially, but passes when run in parallel. | Category: Core Priority: Medium Type: Bug | I noticed this a while ago...
The specific test is `N4hoot13WayJoinerTestE::runConflateTest`
Error:
`src/test/cpp/hoot/core/algorithms/WayJoinerTest.cpp(152) - Maps do not match: test-output/algorithms/wayjoiner/WayJoinerConflateOutput.osm test-files/algorithms/wayjoiner/WayJoinerConflateExpected.osm` | 1.0 | WayJoinerTest fails when quick tests are run sequentially, but passes when run in parallel. - I noticed this a while ago...
The specific test is `N4hoot13WayJoinerTestE::runConflateTest`
Error:
`src/test/cpp/hoot/core/algorithms/WayJoinerTest.cpp(152) - Maps do not match: test-output/algorithms/wayjoiner/WayJoinerConflateOutput.osm test-files/algorithms/wayjoiner/WayJoinerConflateExpected.osm` | priority | wayjoinertest fails when quick tests are run sequentially but passes when run in parallel i noticed this a while ago the specific test is runconflatetest error src test cpp hoot core algorithms wayjoinertest cpp maps do not match test output algorithms wayjoiner wayjoinerconflateoutput osm test files algorithms wayjoiner wayjoinerconflateexpected osm | 1 |
341,061 | 10,282,318,213 | IssuesEvent | 2019-08-26 10:48:11 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Upgrade packages empty lato font files | Bug Medium Priority | Same as the issues #5915 and #7331, the lato font files in the 7.11.x to 7.11.8 upgrade packages are empty. This is also the case with (at least) the upgrade package from 7.10.x to 7.10.20 and 7.10.x to 7.11.8.
The one from 7.8.x to 7.11.8 does contain the correct font files.
According to #7331 it should be fixed, but apparently it's not the case.
#### Issue
Font files should have content
#### Expected Behavior
Font files are overwritten by empty font files
#### Actual Behavior
After running the upgrade wizard, the font files are overwritten by empty ones from the upgrade package.
#### Possible Fix
Upgrade file should not contain empty font files, and/or the upgrader should check the filesize before overwriting existing files.
#### Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Download one of the upgrade packages mentioned above
2. Do the upgrade
3. Clear cache
4. A different font will be used in the browser
#### Context
It has been an issue for every upgrade package on 7.11.x I used (maybe not all, but every one I downloaded).
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.11.7 -> 7.11.8
* Browser name and version (e.g. Chrome Version 51.0.2704.63 (64-bit)): Version 76.0.3809.100 (64-bit)
* Environment name and version (e.g. MySQL, PHP 7): MySQL, PHP 7.2
* Operating System and version (e.g Ubuntu 16.04): Ubuntu 19.04
| 1.0 | Upgrade packages empty lato font files - Same as the issues #5915 and #7331, the lato font files in the 7.11.x to 7.11.8 upgrade packages are empty. This is also the case with (at least) the upgrade package from 7.10.x to 7.10.20 and 7.10.x to 7.11.8.
The one from 7.8.x to 7.11.8 does contain the correct font files.
According to #7331 it should be fixed, but apparently it's not the case.
#### Issue
Font files should have content
#### Expected Behavior
Font files are overwritten by empty font files
#### Actual Behavior
After running the upgrade wizard, the font files are overwritten by empty ones from the upgrade package.
#### Possible Fix
Upgrade file should not contain empty font files, and/or the upgrader should check the filesize before overwriting existing files.
#### Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Download one of the upgrade packages mentioned above
2. Do the upgrade
3. Clear cache
4. A different font will be used in the browser
#### Context
It has been an issue for every upgrade package on 7.11.x I used (maybe not all, but every one I downloaded).
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: 7.11.7 -> 7.11.8
* Browser name and version (e.g. Chrome Version 51.0.2704.63 (64-bit)): Version 76.0.3809.100 (64-bit)
* Environment name and version (e.g. MySQL, PHP 7): MySQL, PHP 7.2
* Operating System and version (e.g Ubuntu 16.04): Ubuntu 19.04
| priority | upgrade packages empty lato font files same as the issues and the lato font files in the x to upgrade packages are empty this is also the case with at least the upgrade package from x to and x to the one from x to does contain the correct font files according to it should be fixed but apparently it s not the case issue font files should have content expected behavior font files are overwritten by empty font files actual behavior after running the upgrade wizard the font files are overwritten by empty ones from the upgrade package possible fix upgrade file should not contain empty font files or and the upgrader should check the filesize before overwriting existing files steps to reproduce download one of the upgrade packages mentioned above do the upgrade clear cache a different font will be used in the browser context it has been an issue for every upgrade package on x i used maybe not all but every one i downloaded your environment suitecrm version used browser name and version e g chrome version bit version bit environment name and version e g mysql php mysql php operating system and version e g ubuntu ubuntu | 1 |
771,820 | 27,093,868,400 | IssuesEvent | 2023-02-15 00:02:40 | docker-mailserver/docker-mailserver | https://api.github.com/repos/docker-mailserver/docker-mailserver | closed | [BUG] extended parameters not parsed main.cf | kind/bug meta/needs triage priority/medium | ### Miscellaneous first checks
- [X] I checked that all ports are open and not blocked by my ISP / hosting provider.
- [X] I know that SSL errors are likely the result of a wrong setup on the user side and not caused by DMS itself. I'm confident my setup is correct.
### Affected Component(s)
cant add mynetworks by using extended parameters
### What happened and when does this occur?
Mailserver refuses my email from a 192.168.0.x address.
This is because I cant add such networks in the environment
so after reading
https://docker-mailserver.github.io/docker-mailserver/edge/config/advanced/override-defaults/postfix/
I have created a file on the host in the docker dir /root/containers/mailserver like so
```
root@mail [ ~/containers/mailserver ]# ls -all docker-data/dms/config/
total 16
drwxr-xr-x 2 root root 4096 Feb 14 21:42 .
drwxr-xr-x 3 root root 4096 Feb 7 22:27 ..
-rw-r--r-- 1 root root 143 Feb 7 22:27 postfix-accounts.cf
-rwxrwxrwx 1 root root 69 Feb 14 21:42 postfix-main.cf
root@mail [ ~/containers/mailserver ]# cat docker-data/dms/config/postfix-main.cf
mynetworks = 127.0.0.0/8 192.168.0.0/24 172.26.0.2/32 172.18.0.2/32
```
I hope this is what the documentation means since it does not say whether you should create these files inside or outside of the container
When I stop and start the container
docker-compose stop
docker-compose start
nothing happens and the networks do not end up in the container.
```
root@mail [ ~/containers/mailserver ]# docker exec -it mailserver cat /etc/postfix/main.cf| grep mynet
mynetworks =
smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, permit
```
### What did you expect to happen?
I expected mynetworks to be adjusted as the docs promise
### How do we replicate the issue?
1.
2.
3.
...
### DMS version
v11.3.1
### What operating system is DMS running on?
Linux
### Which operating system version?
photon 4
### What instruction set architecture is DMS running on?
AMD64 / x86_64
### What container orchestration tool are you using?
Docker
### docker-compose.yml
_No response_
### Relevant log output
_No response_
### Other relevant information
_No response_
### What level of experience do you have with Docker and mail servers?
- [ ] I am inexperienced with docker
- [X] I am rather experienced with docker
- [ ] I am inexperienced with mail servers
- [X] I am rather experienced with mail servers
- [ ] I am uncomfortable with the CLI
- [X] I am rather comfortable with the CLI
### Code of conduct
- [X] I have read this project's [Code of Conduct](https://github.com/docker-mailserver/docker-mailserver/blob/master/CODE_OF_CONDUCT.md) and I agree
- [X] I have read the [README](https://github.com/docker-mailserver/docker-mailserver/blob/master/README.md) and the [documentation](https://docker-mailserver.github.io/docker-mailserver/edge/) and I searched the [issue tracker](https://github.com/docker-mailserver/docker-mailserver/issues?q=is%3Aissue) but could not find a solution
### Improvements to this form?
_No response_ | 1.0 | [BUG] extended parameters not parsed main.cf - ### Miscellaneous first checks
- [X] I checked that all ports are open and not blocked by my ISP / hosting provider.
- [X] I know that SSL errors are likely the result of a wrong setup on the user side and not caused by DMS itself. I'm confident my setup is correct.
### Affected Component(s)
cant add mynetworks by using extended parameters
### What happened and when does this occur?
Mailserver refuses my email from a 192.168.0.x address.
This is because I cant add such networks in the environment
so after reading
https://docker-mailserver.github.io/docker-mailserver/edge/config/advanced/override-defaults/postfix/
I have created a file on the host in the docker dir /root/containers/mailserver like so
```
root@mail [ ~/containers/mailserver ]# ls -all docker-data/dms/config/
total 16
drwxr-xr-x 2 root root 4096 Feb 14 21:42 .
drwxr-xr-x 3 root root 4096 Feb 7 22:27 ..
-rw-r--r-- 1 root root 143 Feb 7 22:27 postfix-accounts.cf
-rwxrwxrwx 1 root root 69 Feb 14 21:42 postfix-main.cf
root@mail [ ~/containers/mailserver ]# cat docker-data/dms/config/postfix-main.cf
mynetworks = 127.0.0.0/8 192.168.0.0/24 172.26.0.2/32 172.18.0.2/32
```
I hope this is what the documentation means since it does not say whether you should create these files inside or outside of the container
When I stop and start the container
docker-compose stop
docker-compose start
nothing happens and the networks do not end up in the container.
```
root@mail [ ~/containers/mailserver ]# docker exec -it mailserver cat /etc/postfix/main.cf| grep mynet
mynetworks =
smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, permit
```
### What did you expect to happen?
I expected mynetworks to be adjusted as the docs promise
### How do we replicate the issue?
1.
2.
3.
...
### DMS version
v11.3.1
### What operating system is DMS running on?
Linux
### Which operating system version?
photon 4
### What instruction set architecture is DMS running on?
AMD64 / x86_64
### What container orchestration tool are you using?
Docker
### docker-compose.yml
_No response_
### Relevant log output
_No response_
### Other relevant information
_No response_
### What level of experience do you have with Docker and mail servers?
- [ ] I am inexperienced with docker
- [X] I am rather experienced with docker
- [ ] I am inexperienced with mail servers
- [X] I am rather experienced with mail servers
- [ ] I am uncomfortable with the CLI
- [X] I am rather comfortable with the CLI
### Code of conduct
- [X] I have read this project's [Code of Conduct](https://github.com/docker-mailserver/docker-mailserver/blob/master/CODE_OF_CONDUCT.md) and I agree
- [X] I have read the [README](https://github.com/docker-mailserver/docker-mailserver/blob/master/README.md) and the [documentation](https://docker-mailserver.github.io/docker-mailserver/edge/) and I searched the [issue tracker](https://github.com/docker-mailserver/docker-mailserver/issues?q=is%3Aissue) but could not find a solution
### Improvements to this form?
_No response_ | priority | extended parameters not parsed main cf miscellaneous first checks i checked that all ports are open and not blocked by my isp hosting provider i know that ssl errors are likely the result of a wrong setup on the user side and not caused by dms itself i m confident my setup is correct affected component s cant add mynetworks by using extended parameters what happened and when does this occur mailserver refuses my email from x address this is because i cant add such networks in the environment so after reading i have created a file on the host in the docker dir root containers mailserver like so root mail ls all docker data dms config total drwxr xr x root root feb drwxr xr x root root feb rw r r root root feb postfix accounts cf rwxrwxrwx root root feb postfix main cf root mail cat docker data dms config postfix main cf mynetworks i hope this is what the documentation means since it does not say if you should create these files inside or outside of the container when i stop and start the container docker compose stop docker compose start nothing happens and the networks do not end up in the container root mail docker exec it mailserver cat etc postfix main cf grep mynet mynetworks smtpd helo restrictions permit mynetworks reject invalid helo hostname permit what did you expect to happen i expected mynetworks to be adjusted as the docs promise how do we replicate the issue dms version what operating system is dms running on linux which operating system version photon what instruction set architecture is dms running on what container orchestration tool are you using docker docker compose yml no response relevant log output no response other relevant information no response what level of experience do you have with docker and mail servers i am inexperienced with docker i am rather experienced with docker i am inexperienced with mail servers i am rather experienced with mail servers i am uncomfortable with the cli i am rather comfortable with the cli code of conduct i have read this project s and i agree i have read the and the and i searched the but could not find a solution improvements to this form no response | 1 |
497,227 | 14,366,299,175 | IssuesEvent | 2020-12-01 04:01:35 | rich-iannone/pointblank | https://api.github.com/repos/rich-iannone/pointblank | closed | Add the `col_vals_increasing()` and `col_vals_decreasing()` functions | Difficulty: [3] Advanced Effort: [3] High Priority: [2] Medium Type: ★ Enhancement | These functions would validate for whether values are increasing or decreasing. Any NA/NULL values can be excluded with `na_pass`.
Whether to accept stationary values as passing should be an option. Further to this, a tolerance distance (for movement in the opposite direction) should be available. | 1.0 | Add the `col_vals_increasing()` and `col_vals_decreasing()` functions - These functions would validate for whether values are increasing or decreasing. Any NA/NULL values can be excluded with `na_pass`.
Whether to accept stationary values as passing should be an option. Further to this, a tolerance distance (for movement in the opposite direction) should be available. | priority | add the col vals increasing and col vals decreasing functions these functions would validate for whether values are increasing or decreasing any na null values can be excluded with na pass whether to accept stationary values as passing should be an option further to this a tolerance distance for movement in the opposite direction should be available | 1 |
23,991 | 2,665,345,662 | IssuesEvent | 2015-03-20 19:54:18 | jeffbryner/MozDef | https://api.github.com/repos/jeffbryner/MozDef | closed | Documentation for local accounts | category:doc category:question priority:medium | First, thanks for the help and the work.
I am trying to set up MozDef on a Debian system and I'm going crazy with the installation process.
I managed to install everything with Docker, although the installation manual is not clear, at least to those who have never used Docker.
I have a question about documentation.
For example, I can log in by going to the IP on port 3000, but I don't know how to disable PERSON and enable a local account. Following the manual, it tells me that I must enter Meteor and run some commands ...
From the meteor mozdef run directory '$ meteor remove mrt: accounts-person'
'Meteor add accounts-password'
Alter app / server / mozdef.js Accounts.config section to: forbidClientAccountCreation: false,
restart Meteor
Is there a slightly more detailed manual, or is it actually this complex?
Thank You !!!! | 1.0 | Documentation for local accounts - First, thanks for the help and the work.
I am trying to set up MozDef on a Debian system and I'm going crazy with the installation process.
I managed to install everything with Docker, although the installation manual is not clear, at least to those who have never used Docker.
I have a question about documentation.
For example, I can log in by going to the IP on port 3000, but I don't know how to disable PERSON and enable a local account. Following the manual, it tells me that I must enter Meteor and run some commands ...
From the meteor mozdef run directory '$ meteor remove mrt: accounts-person'
'Meteor add accounts-password'
Alter app / server / mozdef.js Accounts.config section to: forbidClientAccountCreation: false,
restart Meteor
Is there a slightly more detailed manual, or is it actually this complex?
Thank You !!!! | priority | documentation for local accounts first thanks for the help and the work i am trying to mount mozdef a debian and i m going crazy with the installation process i managed to install everything by docker although the installation manual on is not clear at least to those who have never used docker i have a question about documentation for example i want to login the ip it by going around but not how person disable and enable a local account following the manual tells me that i must enter meteor and make some commands from the meteor mozdef run directory meteor remove mrt accounts person meteor add accounts password alter app server mozdef js accounts config section to forbidclientaccountcreation false restart meteor there is a slightly more detailed manual or actually it s so complex thank you | 1 |
80,571 | 3,567,778,416 | IssuesEvent | 2016-01-26 00:40:52 | RaymondEllis/simpled | https://api.github.com/repos/RaymondEllis/simpled | closed | Need better name | auto-migrated Priority-Medium Type-Task wontfix | ```
What steps will reproduce the problem?
1. Go to http://google.com/
2. Search for "SimpleD"
What is the expected output? What do you see instead?
You see other stuff. You should see this project.
```
Original issue reported on code.google.com by `raymonde...@gmail.com` on 29 Oct 2012 at 5:02 | 1.0 | Need better name - ```
What steps will reproduce the problem?
1. Go to http://google.com/
2. Search for "SimpleD"
What is the expected output? What do you see instead?
You see other stuff. You should see this project.
```
Original issue reported on code.google.com by `raymonde...@gmail.com` on 29 Oct 2012 at 5:02 | priority | need better name what steps will reproduce the problem go to search for simpled what is the expected output what do you see instead you see other stuff you should see this project original issue reported on code google com by raymonde gmail com on oct at | 1 |
539,012 | 15,782,142,737 | IssuesEvent | 2021-04-01 12:24:16 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID: 220428] Out-of-bounds access in subsys/bluetooth/audio/vocs.c | Coverity bug priority: medium |
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/169144afa1826511ee6ec3f53d590b2c0d39d3d4/subsys/bluetooth/audio/vocs.c#L325
Category: Memory - corruptions
Function: `bt_vocs_init`
Component: Bluetooth
CID: [220428](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=220428)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/169144afa1826511ee6ec3f53d590b2c0d39d3d4/subsys/bluetooth/audio/vocs.c
```
313 * also update the characteristic declaration (always found at [i - 1]) with the
314 * BT_GATT_CHRC_WRITE_WITHOUT_RESP property.
315 */
316 for (int i = 1; i < vocs->srv.service_p->attr_count; i++) {
317 attr = &vocs->srv.service_p->attrs[i];
318
>>> CID 220428: (OVERRUN)
>>> Overrunning array "struct bt_uuid_16 [1]({{.uuid = {BT_UUID_TYPE_16}, .val = 11137}})" of 4 bytes by passing it to a function which accesses it at byte offset 16.
319 if (init->location_writable && !bt_uuid_cmp(attr->uuid, BT_UUID_VOCS_LOCATION)) {
320 /* Update attr and chrc to be writable */
321 chrc = vocs->srv.service_p->attrs[i - 1].user_data;
322 attr->write = write_location;
323 attr->perm |= BT_GATT_PERM_WRITE_ENCRYPT;
324 chrc->properties |= BT_GATT_CHRC_WRITE_WITHOUT_RESP;
319 if (init->location_writable && !bt_uuid_cmp(attr->uuid, BT_UUID_VOCS_LOCATION)) {
320 /* Update attr and chrc to be writable */
321 chrc = vocs->srv.service_p->attrs[i - 1].user_data;
322 attr->write = write_location;
323 attr->perm |= BT_GATT_PERM_WRITE_ENCRYPT;
324 chrc->properties |= BT_GATT_CHRC_WRITE_WITHOUT_RESP;
>>> CID 220428: (OVERRUN)
>>> Overrunning array "struct bt_uuid_16 [1]({{.uuid = {BT_UUID_TYPE_16}, .val = 11139}})" of 4 bytes by passing it to a function which accesses it at byte offset 16.
325 } else if (init->desc_writable &&
326 !bt_uuid_cmp(attr->uuid, BT_UUID_VOCS_DESCRIPTION)) {
327 /* Update attr and chrc to be writable */
328 chrc = vocs->srv.service_p->attrs[i - 1].user_data;
329 attr->write = write_output_desc;
330 attr->perm |= BT_GATT_PERM_WRITE_ENCRYPT;
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v29271/p12996
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| 1.0 | [Coverity CID: 220428] Out-of-bounds access in subsys/bluetooth/audio/vocs.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/169144afa1826511ee6ec3f53d590b2c0d39d3d4/subsys/bluetooth/audio/vocs.c#L325
Category: Memory - corruptions
Function: `bt_vocs_init`
Component: Bluetooth
CID: [220428](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=220428)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/169144afa1826511ee6ec3f53d590b2c0d39d3d4/subsys/bluetooth/audio/vocs.c
```
313 * also update the characteristic declaration (always found at [i - 1]) with the
314 * BT_GATT_CHRC_WRITE_WITHOUT_RESP property.
315 */
316 for (int i = 1; i < vocs->srv.service_p->attr_count; i++) {
317 attr = &vocs->srv.service_p->attrs[i];
318
>>> CID 220428: (OVERRUN)
>>> Overrunning array "struct bt_uuid_16 [1]({{.uuid = {BT_UUID_TYPE_16}, .val = 11137}})" of 4 bytes by passing it to a function which accesses it at byte offset 16.
319 if (init->location_writable && !bt_uuid_cmp(attr->uuid, BT_UUID_VOCS_LOCATION)) {
320 /* Update attr and chrc to be writable */
321 chrc = vocs->srv.service_p->attrs[i - 1].user_data;
322 attr->write = write_location;
323 attr->perm |= BT_GATT_PERM_WRITE_ENCRYPT;
324 chrc->properties |= BT_GATT_CHRC_WRITE_WITHOUT_RESP;
319 if (init->location_writable && !bt_uuid_cmp(attr->uuid, BT_UUID_VOCS_LOCATION)) {
320 /* Update attr and chrc to be writable */
321 chrc = vocs->srv.service_p->attrs[i - 1].user_data;
322 attr->write = write_location;
323 attr->perm |= BT_GATT_PERM_WRITE_ENCRYPT;
324 chrc->properties |= BT_GATT_CHRC_WRITE_WITHOUT_RESP;
>>> CID 220428: (OVERRUN)
>>> Overrunning array "struct bt_uuid_16 [1]({{.uuid = {BT_UUID_TYPE_16}, .val = 11139}})" of 4 bytes by passing it to a function which accesses it at byte offset 16.
325 } else if (init->desc_writable &&
326 !bt_uuid_cmp(attr->uuid, BT_UUID_VOCS_DESCRIPTION)) {
327 /* Update attr and chrc to be writable */
328 chrc = vocs->srv.service_p->attrs[i - 1].user_data;
329 attr->write = write_output_desc;
330 attr->perm |= BT_GATT_PERM_WRITE_ENCRYPT;
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v29271/p12996
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| priority | out of bounds access in subsys bluetooth audio vocs c static code scan issues found in file category memory corruptions function bt vocs init component bluetooth cid details also update the characteristic declaration always found at with the bt gatt chrc write without resp property for int i i srv service p attr count i attr vocs srv service p attrs cid overrun overrunning array struct bt uuid uuid bt uuid type val of bytes by passing it to a function which accesses it at byte offset if init location writable bt uuid cmp attr uuid bt uuid vocs location update attr and chrc to be writable chrc vocs srv service p attrs user data attr write write location attr perm bt gatt perm write encrypt chrc properties bt gatt chrc write without resp if init location writable bt uuid cmp attr uuid bt uuid vocs location update attr and chrc to be writable chrc vocs srv service p attrs user data attr write write location attr perm bt gatt perm write encrypt chrc properties bt gatt chrc write without resp cid overrun overrunning array struct bt uuid uuid bt uuid type val of bytes by passing it to a function which accesses it at byte offset else if init desc writable bt uuid cmp attr uuid bt uuid vocs description update attr and chrc to be writable chrc vocs srv service p attrs user data attr write write output desc attr perm bt gatt perm write encrypt please fix or provide comments in coverity using the link note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file | 1 |
214,704 | 7,276,066,497 | IssuesEvent | 2018-02-21 15:23:41 | ilestis/miscellany | https://api.github.com/repos/ilestis/miscellany | closed | Add a "New Entity" button in the view of each entity | feature good first issue medium priority | # Story
As a user, I want to quickly create a new entity of the same type of entity I am currently looking at.
# Work
1. Propose a change to the view crud interface of entities to add a button for "new entity"
2. Implement change
# Tests
* When viewing an entity, a "New EntityName" button is visible somewhere easy and pleasing.
* Clicking on the button brings to the /lang/entityType/create url.
* The button is visible on mobile and works. | 1.0 | Add a "New Entity" button in the view of each entity - # Story
As a user, I want to quickly create a new entity of the same type of entity I am currently looking at.
# Work
1. Propose a change to the view crud interface of entities to add a button for "new entity"
2. Implement change
# Tests
* When viewing an entity, a "New EntityName" button is visible somewhere easy and pleasing.
* Clicking on the button brings to the /lang/entityType/create url.
* The button is visible on mobile and works. | priority | add a new entity button in the view of each entity story as a user i want to quickly create a new entity of the same type of entity i am currently looking at work propose a change to the view crud interface of entities to add a button for new entity implement change tests when viewing an entity a new entityname button is visible somewhere easy and pleasing clicking on the button brings to the lang entitytype create url the button is visible on mobile and works | 1 |
544,147 | 15,890,130,203 | IssuesEvent | 2021-04-10 14:19:18 | beta-team/beta-recsys | https://api.github.com/repos/beta-team/beta-recsys | closed | Use Modin to replace pandas | priority:medium status:confirmed type:enhancement | Use the latest Modin package to replace our pandas package. The Modin can work in parallel, and it is supposed to faster. It also requires no additional knowledge if you are already familiar with pandas.
"Modin uses Ray or Dask to provide an effortless way to speed up your pandas notebooks, scripts, and libraries. Unlike other distributed DataFrame libraries, Modin provides seamless integration and compatibility with existing pandas code. Even using the DataFrame constructor is identical."
Webpage: https://modin.readthedocs.io/en/latest/
Github: https://github.com/modin-project/modin
`import modin.pandas as pd`
`df = pd.read_csv("my_dataset.csv")` | 1.0 | Use Modin to replace pandas - Use the latest Modin package to replace our pandas package. The Modin can work in parallel, and it is supposed to faster. It also requires no additional knowledge if you are already familiar with pandas.
"Modin uses Ray or Dask to provide an effortless way to speed up your pandas notebooks, scripts, and libraries. Unlike other distributed DataFrame libraries, Modin provides seamless integration and compatibility with existing pandas code. Even using the DataFrame constructor is identical."
Webpage: https://modin.readthedocs.io/en/latest/
Github: https://github.com/modin-project/modin
`import modin.pandas as pd`
`df = pd.read_csv("my_dataset.csv")` | priority | use modin to replace pandas use the latest modin package to replace our pandas package the modin can work in parallel and it is supposed to faster it also requires no additional knowledge if you are already familiar with pandas modin uses ray or dask to provide an effortless way to speed up your pandas notebooks scripts and libraries unlike other distributed dataframe libraries modin provides seamless integration and compatibility with existing pandas code even using the dataframe constructor is identical webpage github import modin pandas as pd df pd read csv my dataset csv | 1 |
250,061 | 7,967,123,268 | IssuesEvent | 2018-07-15 10:03:36 | pixijs/pixi.js | https://api.github.com/repos/pixijs/pixi.js | closed | Graphics disappears when cached and multiple filters are applied | Domain: API Plugin: Filters Plugin: Graphics Plugin: cacheAsBitmap Priority: Medium Status: Needs Investigation Type: Bug Version: v4.x | I've noticed following behavior of _PIXI.Graphics_ object (checked on v4.4.4).
It disappears completely from scene when there are 3 conditions fulfilled:
1) it has at least one filter applied
2) one of its parent containers also has at least one filter applied
3) cacheAsBitmap is set to _true_
Removing any of those points fixes the issue.
Please see here: http://codepen.io/miq/pen/XMONQz
The item without filters is visible all the time.
The second one only appears if you comment out one of 3 described lines.
Is there any explanation of this, or is it a bug? | 1.0 | Graphics disappears when cached and multiple filters are applied - I've noticed following behavior of _PIXI.Graphics_ object (checked on v4.4.4).
It disappears completely from scene when there are 3 conditions fulfilled:
1) it has at least one filter applied
2) one of its parent containers also has at least one filter applied
3) cacheAsBitmap is set to _true_
Removing any of those points fixes the issue.
Please see here: http://codepen.io/miq/pen/XMONQz
The item without filters is visible all the time.
The second one only appears if you comment out one of 3 described lines.
Is there any explanation of this, or is it a bug? | priority | graphics disappears when cached and multiple filters are applied i ve noticed following behavior of pixi graphics object checked on it disappears completely from scene when there are conditions fulfilled it has at least one filter applied one of its parent containers also has at least one filter applied cacheasbitmap is set to true removing any of those points fixes the issue please see here the item without filters is visible all the time the second one only appears if you comment out one of described lines is there any explanation of this or is it a bug | 1 |
31,573 | 2,734,096,529 | IssuesEvent | 2015-04-17 17:45:47 | Esri/briefing-book | https://api.github.com/repos/Esri/briefing-book | closed | 2-D array height for each page inconsistent | bug Medium Priority | In the JSON there is a 2D array height for each page (BookConfigData.BookPages[2].height[n][m]). We want to interpret the numbers in the height array to be the heights of the items on that particular page but then noticed that in some cases, the numer of elements in the "heights" array is not equal to the number of items on the corresponding page. Also, there is a "height" attribute in some of the item nodes (e.g., ModuleConfigData.BookPages[2].139585254473501.height) and the two "heights" are not always equal. It is not clear which "height" should be honored. | 1.0 | 2-D array height for each page inconsistent - In the JSON there is a 2D array height for each page (BookConfigData.BookPages[2].height[n][m]). We want to interpret the numbers in the height array to be the heights of the items on that particular page but then noticed that in some cases, the numer of elements in the "heights" array is not equal to the number of items on the corresponding page. Also, there is a "height" attribute in some of the item nodes (e.g., ModuleConfigData.BookPages[2].139585254473501.height) and the two "heights" are not always equal. It is not clear which "height" should be honored. | priority | d array height for each page inconsistent in the json there is a array height for each page bookconfigdata bookpages height we want to interpret the numbers in the height array to be the heights of the items on that particular page but then noticed that in some cases the numer of elements in the heights array is not equal to the number of items on the corresponding page also there is a height attribute in some of the item nodes e g moduleconfigdata bookpages height and the two heights are not always equal it is not clear which height should be honored | 1 |
467,356 | 13,446,454,158 | IssuesEvent | 2020-09-08 12:59:18 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | multibody: `ModelInstanceIndex` as an optional argument to the Jacobian methods. | priority: medium team: dynamics type: feature request | Working with Jacobians in a MBP that has multiple instances is a pain -- it's up the user to pull out the elements of the jacobian that they actually need. ([here is an example](https://colab.research.google.com/github/RussTedrake/manipulation/blob/master/pick.ipynb#scrollTo=ue9ofS7GHpXr))
It seems quite reasonable to support an optional argument for `ModelInstanceIndex` that would just compute/return the gradients wrt a subset of the qdot or v for a particular model. This could have even more value/meaning for the `CalcJacobianCenterOfMassTranslationalVelocity` method (if it would compute CM for just that model instance). | 1.0 | multibody: `ModelInstanceIndex` as an optional argument to the Jacobian methods. - Working with Jacobians in a MBP that has multiple instances is a pain -- it's up the user to pull out the elements of the jacobian that they actually need. ([here is an example](https://colab.research.google.com/github/RussTedrake/manipulation/blob/master/pick.ipynb#scrollTo=ue9ofS7GHpXr))
It seems quite reasonable to support an optional argument for `ModelInstanceIndex` that would just compute/return the gradients wrt a subset of the qdot or v for a particular model. This could have even more value/meaning for the `CalcJacobianCenterOfMassTranslationalVelocity` method (if it would compute CM for just that model instance). | priority | multibody modelinstanceindex as an optional argument to the jacobian methods working with jacobians in a mbp that has multiple instances is a pain it s up the user to pull out the elements of the jacobian that they actually need it seems quite reasonable to support an optional argument for modelinstanceindex that would just compute return the gradients wrt a subset of the qdot or v for a particular model this could have even more value meaning for the calcjacobiancenterofmasstranslationalvelocity method if it would compute cm for just that model instance | 1 |
483,674 | 13,928,107,130 | IssuesEvent | 2020-10-21 20:54:51 | Alluxio/alluxio | https://api.github.com/repos/Alluxio/alluxio | closed | Optimize the du -s command | machine-learning priority-medium type-feature | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
It's hard for me to do `alluxio fs du -sh /` if there are large amounts of files under `/`.
For instance, I've got about 3.8 million files in my Aliyun OSS which I've already mounted on Alluxio. Now, if I try to run `alluxio fs du -sh /`, I would get an OOM error.
I've tried to set a larger JVM heap size by setting env variable `ALLUXIO_USER_JAVA_OPTS` to `-Xmx8G`, but I've got the same OOM error.
```
bash-4.4# alluxio fs du -sh /
File Size In Alluxio Path
SLF4J: Failed toString() invocation on an object of type [java.util.ArrayList]
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
at java.lang.StringBuilder.append(StringBuilder.java:136)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at java.util.AbstractCollection.toString(AbstractCollection.java:462)
at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:304)
at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276)
at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230)
at org.slf4j.impl.Log4jLoggerAdapter.warn(Log4jLoggerAdapter.java:463)
at alluxio.AbstractClient.retryRPC(AbstractClient.java:372)
at alluxio.client.file.RetryHandlingFileSystemMasterClient.listStatus(RetryHandlingFileSystemMasterClient.java:228)
at alluxio.client.file.BaseFileSystem.lambda$listStatus$9(BaseFileSystem.java:274)
at alluxio.client.file.BaseFileSystem$$Lambda$71/825249556.call(Unknown Source)
at alluxio.client.file.BaseFileSystem.rpc(BaseFileSystem.java:531)
at alluxio.client.file.BaseFileSystem.listStatus(BaseFileSystem.java:270)
at alluxio.cli.fs.command.DuCommand.runPlainPath(DuCommand.java:94)
at alluxio.cli.fs.command.AbstractFileSystemCommand.runWildCardCmd(AbstractFileSystemCommand.java:92)
at alluxio.cli.fs.command.DuCommand.run(DuCommand.java:207)
at alluxio.cli.AbstractShell.run(AbstractShell.java:137)
at alluxio.cli.fs.FileSystemShell.main(FileSystemShell.java:66)
84.29GB 0B (0%) /
```
Here is my `jmap -histo` result:
```
bash-4.4# jps
2417 FileSystemShell
257 AlluxioJobMaster
258 AlluxioMaster
2504 Jps
bash-4.4# jmap -histo 2417 | head -20
num #instances #bytes class name
----------------------------------------------
1: 34261544 3187371480 [C
2: 34261508 822276192 java.lang.String
3: 3804849 639214632 alluxio.wire.FileInfo
4: 11414717 547906416 java.util.HashMap
5: 15219885 365277240 java.util.ArrayList
6: 3805045 304420728 [Ljava.util.HashMap$Node;
7: 7612891 198845528 [Ljava.lang.Object;
8: 7610004 182640096 java.lang.Long
9: 3807207 121830624 java.util.HashMap$Node
10: 3804850 121755200 alluxio.security.authorization.AccessControlList
11: 3804847 121755104 alluxio.wire.BlockInfo
12: 3804847 121755104 alluxio.wire.FileBlockInfo
13: 3804874 60877984 java.util.HashSet
14: 3804849 60877584 alluxio.client.file.URIStatus
15: 2149 34411984 [I
16: 537922 8606752 java.util.HashMap$KeySet
17: 4144 465776 java.lang.Class
```
It looks like all the `FileInfo` instances stored in JVM heap, and it won't be recycled for future use.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Can we change the behavior for the command `alluxio fs du -s <path>`, and do all the sum work on Alluxio master side instead of client side, thus no need for client side to get all the `FileInfo` instances.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Or maybe the client side don't have to sum file size after all the `FileInfo` instances have been instantiated, summed FileInfos can be recycled during next GC
**Urgency**
Explain why the feature is important
Urgent, an UFS with large amount of small files may be common in our scenario.
**Additional context**
Add any other context or screenshots about the feature request here.
I've found some related issue here: #12088 | 1.0 | Optimize the du -s command - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
It's hard for me to do `alluxio fs du -sh /` if there are large amounts of files under `/`.
For instance, I've got about 3.8 million files in my Aliyun OSS which I've already mounted on Alluxio. Now, if I try to run `alluxio fs du -sh /`, I would get an OOM error.
I've tried to set a larger JVM heap size by setting env variable `ALLUXIO_USER_JAVA_OPTS` to `-Xmx8G`, but I've got the same OOM error.
```
bash-4.4# alluxio fs du -sh /
File Size In Alluxio Path
SLF4J: Failed toString() invocation on an object of type [java.util.ArrayList]
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
at java.lang.StringBuilder.append(StringBuilder.java:136)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at java.util.AbstractCollection.toString(AbstractCollection.java:462)
at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:304)
at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276)
at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230)
at org.slf4j.impl.Log4jLoggerAdapter.warn(Log4jLoggerAdapter.java:463)
at alluxio.AbstractClient.retryRPC(AbstractClient.java:372)
at alluxio.client.file.RetryHandlingFileSystemMasterClient.listStatus(RetryHandlingFileSystemMasterClient.java:228)
at alluxio.client.file.BaseFileSystem.lambda$listStatus$9(BaseFileSystem.java:274)
at alluxio.client.file.BaseFileSystem$$Lambda$71/825249556.call(Unknown Source)
at alluxio.client.file.BaseFileSystem.rpc(BaseFileSystem.java:531)
at alluxio.client.file.BaseFileSystem.listStatus(BaseFileSystem.java:270)
at alluxio.cli.fs.command.DuCommand.runPlainPath(DuCommand.java:94)
at alluxio.cli.fs.command.AbstractFileSystemCommand.runWildCardCmd(AbstractFileSystemCommand.java:92)
at alluxio.cli.fs.command.DuCommand.run(DuCommand.java:207)
at alluxio.cli.AbstractShell.run(AbstractShell.java:137)
at alluxio.cli.fs.FileSystemShell.main(FileSystemShell.java:66)
84.29GB 0B (0%) /
```
Here is my `jmap -histo` result:
```
bash-4.4# jps
2417 FileSystemShell
257 AlluxioJobMaster
258 AlluxioMaster
2504 Jps
bash-4.4# jmap -histo 2417 | head -20
num #instances #bytes class name
----------------------------------------------
1: 34261544 3187371480 [C
2: 34261508 822276192 java.lang.String
3: 3804849 639214632 alluxio.wire.FileInfo
4: 11414717 547906416 java.util.HashMap
5: 15219885 365277240 java.util.ArrayList
6: 3805045 304420728 [Ljava.util.HashMap$Node;
7: 7612891 198845528 [Ljava.lang.Object;
8: 7610004 182640096 java.lang.Long
9: 3807207 121830624 java.util.HashMap$Node
10: 3804850 121755200 alluxio.security.authorization.AccessControlList
11: 3804847 121755104 alluxio.wire.BlockInfo
12: 3804847 121755104 alluxio.wire.FileBlockInfo
13: 3804874 60877984 java.util.HashSet
14: 3804849 60877584 alluxio.client.file.URIStatus
15: 2149 34411984 [I
16: 537922 8606752 java.util.HashMap$KeySet
17: 4144 465776 java.lang.Class
```
It looks like all the `FileInfo` instances stored in JVM heap, and it won't be recycled for future use.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Can we change the behavior for the command `alluxio fs du -s <path>`, and do all the sum work on Alluxio master side instead of client side, thus no need for client side to get all the `FileInfo` instances.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Or maybe the client side don't have to sum file size after all the `FileInfo` instances have been instantiated, summed FileInfos can be recycled during next GC
**Urgency**
Explain why the feature is important
Urgent, an UFS with large amount of small files may be common in our scenario.
**Additional context**
Add any other context or screenshots about the feature request here.
I've found some related issue here: #12088 | priority | optimize the du s command is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when it s hard for me to do alluxio fs du sh if there are large amounts of files under for instance i ve got about million files in my aliyun oss which i ve already mounted on alluxio now if i try to run alluxio fs du sh i would get an oom error i ve tried to set a larger jvm heap size by setting env variable alluxio user java opts to but i ve got the same oom error bash alluxio fs du sh file size in alluxio path failed tostring invocation on an object of type java lang outofmemoryerror java heap space at java util arrays copyof arrays java at java lang abstractstringbuilder ensurecapacityinternal abstractstringbuilder java at java lang abstractstringbuilder append abstractstringbuilder java at java lang stringbuilder append stringbuilder java at java lang stringbuilder append stringbuilder java at java util abstractcollection tostring abstractcollection java at org helpers messageformatter safeobjectappend messageformatter java at org helpers messageformatter deeplyappendparameter messageformatter java at org helpers messageformatter arrayformat messageformatter java at org impl warn java at alluxio abstractclient retryrpc abstractclient java at alluxio client file retryhandlingfilesystemmasterclient liststatus retryhandlingfilesystemmasterclient java at alluxio client file basefilesystem lambda liststatus basefilesystem java at alluxio client file basefilesystem lambda call unknown source at alluxio client file basefilesystem rpc basefilesystem java at alluxio client file basefilesystem liststatus basefilesystem java at alluxio cli fs command ducommand runplainpath ducommand java at alluxio cli fs command abstractfilesystemcommand runwildcardcmd abstractfilesystemcommand java at alluxio cli fs command ducommand run ducommand java at alluxio cli abstractshell run abstractshell java at alluxio cli fs filesystemshell main filesystemshell java here is my jmap histo result bash jps filesystemshell alluxiojobmaster alluxiomaster jps bash jmap histo head num instances bytes class name c java lang string alluxio wire fileinfo java util hashmap java util arraylist ljava util hashmap node ljava lang object java lang long java util hashmap node alluxio security authorization accesscontrollist alluxio wire blockinfo alluxio wire fileblockinfo java util hashset alluxio client file uristatus i java util hashmap keyset java lang class it looks like all the fileinfo instances stored in jvm heap and it won t be recycled for future use describe the solution you d like a clear and concise description of what you want to happen can we change the behavior for the command alluxio fs du s and do all the sum work on alluxio master side instead of client side thus no need for client side to get all the fileinfo instances describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered or maybe the client side don t have to sum file size after all the fileinfo instances have been instantiated summed fileinfos can be recycled during next gc urgency explain why the feature is important urgent an ufs with large amount of small files may be common in our scenario additional context add any other context or screenshots about the feature request here i ve found some related issue here | 1
407,562 | 11,923,660,952 | IssuesEvent | 2020-04-01 08:14:40 | georchestra/mapstore2-georchestra | https://api.github.com/repos/georchestra/mapstore2-georchestra | closed | Improve Plugin configuration in Application Contexts | New Priority: Medium | We can further improve the configuration tier of plugins inside the application context wizard (step 2) by including the "overrides" property and not only "cfg" as it is now. | 1.0 | Improve Plugin configuration in Application Contexts - We can further improve the configuration tier of plugins inside the application context wizard (step 2) by including the "overrides" property and not only "cfg" as it is now. | priority | improve plugin configuration in application contexts we can further improve the configuration tier of plugins inside the application context wizard step by including the overrides property and not only cfg as it is now | 1 |
249,119 | 7,953,839,933 | IssuesEvent | 2018-07-12 04:12:04 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | FPS drop when looking at glass in stockpile | Medium Priority Optimization | When looking at a stockpile full of glass my FPS are dropping from ~50 in town and ~75 in the wilderness to ~15.


Note the 14.9 fps in the bottom right, sadly my steam fps counter didnt get screenshotted.






Hardware: i7-4790k/GTX980*2/16gb
I didn't notice an increase in CPU load when looking at the glass stockpile.
| 1.0 | FPS drop when looking at glass in stockpile - When looking at a stockpile full of glass my FPS are dropping from ~50 in town and ~75 in the wilderness to ~15.


Note the 14.9 fps in the bottom right, sadly my steam fps counter didnt get screenshotted.






Hardware: i7-4790k/GTX980*2/16gb
I didn't notice an increase in CPU load when looking at the glass stockpile.
| priority | fps drop when looking at glass in stockpile when looking at a stockpile full of glass my fps are dropping from in town and in the wilderness to note the fps in the bottom right sadly my steam fps counter didnt get screenshotted hardware i didn t notice an increase in cpu load when looking at the glass stockpile | 1 |
809,674 | 30,205,028,658 | IssuesEvent | 2023-07-05 08:51:37 | ubiquity/action-conventional-commits | https://api.github.com/repos/ubiquity/action-conventional-commits | closed | "Initial commit" commit message does not pass conventional commits | Time: <1 Hour Priority: 1 (Medium) Price: 25 USD | It is unfortunate that when creating a new repository from this template, the first commit will always fail CI. Is there a solution to change either the default commit message, or should we modify [our version of conventional commits](https://github.com/ubiquity/action-conventional-commits) to explicitly support the "Initial commit" commit message?
https://github.com/ubiquity/ubiquibot-sandbox/commit/6d5006d4594aaf4f79bab6f1767b29cce987a921 | 1.0 | "Initial commit" commit message does not pass conventional commits - It is unfortunate that when creating a new repository from this template, the first commit will always fail CI. Is there a solution to change either the default commit message, or should we modify [our version of conventional commits](https://github.com/ubiquity/action-conventional-commits) to explicitly support the "Initial commit" commit message?
https://github.com/ubiquity/ubiquibot-sandbox/commit/6d5006d4594aaf4f79bab6f1767b29cce987a921 | priority | initial commit commit message does not pass conventional commits it is unfortunate that when creating a new repository from this template the first commit will always fail ci is there a solution to change either the default commit message or should we modify to explicitly support the initial commit commit message | 1 |
715,235 | 24,591,928,576 | IssuesEvent | 2022-10-14 03:38:53 | AY2223S1-CS2103T-W10-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-W10-4/tp | closed | bug(tasks): archive tasks not updating tasklist in module | type.Bug priority.Medium severity.Medium | Steps to reproduce:
1) Add a task
2) Archive it
3) Remove it
4) Task still counted in module tag & exists in `modulelist.json`
Reason: `isArchived` property of tasks in `Module`'s `taskList` isn't updated in `ArchiveTaskCommand::execute()` and `UnarchiveTaskCommand::execute()` so they don't pass the `.equals()` check during deletion and don't get deleted
Part of the problem with our current setup with two "sources of truth" | 1.0 | bug(tasks): archive tasks not updating tasklist in module - Steps to reproduce:
1) Add a task
2) Archive it
3) Remove it
4) Task still counted in module tag & exists in `modulelist.json`
Reason: `isArchived` property of tasks in `Module`'s `taskList` isn't updated in `ArchiveTaskCommand::execute()` and `UnarchiveTaskCommand::execute()` so they don't pass the `.equals()` check during deletion and don't get deleted
Part of the problem with our current setup with two "sources of truth" | priority | bug tasks archive tasks not updating tasklist in module steps to reproduce add a task archive it remove it task still counted in module tag exists in modulelist json reason isarchived property of tasks in module s tasklist isn t updated in archivetaskcommand execute and unarchivetaskcommand execute so they don t pass the equals check during deletion and don t get deleted part of the problem with our current setup with two sources of truth | 1 |
610,371 | 18,905,886,376 | IssuesEvent | 2021-11-16 09:02:39 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | closed | User Cache isnot working with Query monitor plugin | type: bug 3rd party compatibility module: cache priority: medium severity: major | **Before submitting an issue please check that you've completed the following steps:**
- Made sure you're on the latest version => 3.10.3
- Used the search feature to ensure that the bug hasn't been reported before
**Describe the bug**
Cache for logged in users is not working while the Query Monitor plugin is enabled
**To Reproduce**
Steps to reproduce the behavior:
1. Enable [Query Monitor ](https://wordpress.org/plugins/query-monitor/)plugin 3.7.1
2. Enable Cache for logged in users
3. Open home page for logged in user
4. No-cache signature in source and nothing added to the cached folder
**Expected behavior**
Cache for logged in users is working independently on active plugins
**Additional context**
Cache for logged in users working normally if we disable Query Monitor Plugin
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
 | 1.0 | User Cache isnot working with Query monitor plugin - **Before submitting an issue please check that you've completed the following steps:**
- Made sure you're on the latest version => 3.10.3
- Used the search feature to ensure that the bug hasn't been reported before
**Describe the bug**
Cache for logged in users is not working while the Query Monitor plugin is enabled
**To Reproduce**
Steps to reproduce the behavior:
1. Enable [Query Monitor ](https://wordpress.org/plugins/query-monitor/)plugin 3.7.1
2. Enable Cache for logged in users
3. Open home page for logged in user
4. No-cache signature in source and nothing added to the cached folder
**Expected behavior**
Cache for logged in users is working independently on active plugins
**Additional context**
Cache for logged in users working normally if we disable Query Monitor Plugin
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
 | priority | user cache isnot working with query monitor plugin before submitting an issue please check that you've completed the following steps made sure you're on the latest version used the search feature to ensure that the bug hasn't been reported before describe the bug cache for logged in users is not working while the query monitor plugin is enabled to reproduce steps to reproduce the behavior enable enable cache for logged in users open home page for logged in user no cache signature in source and nothing added to the cached folder expected behavior cache for logged in users is working independently on active plugins additional context cache for logged in users working normally if we disable query monitor plugin backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort | 1
462,001 | 13,239,666,087 | IssuesEvent | 2020-08-19 04:09:36 | medic/cht-core | https://api.github.com/repos/medic/cht-core | closed | Make sure we are storing all debug / output from android when trying to obtain GPS | Analytics Priority: 2 - Medium Type: Improvement | We often have trouble getting great GPS information against reports.
We should make sure that we're conveying as much debug and error information onto reports. For example, it looks like we're catching errors from Android and just logging them on Android (which means we won't see them): https://github.com/medic/medic-android/blob/master/src/main/java/org/medicmobile/webapp/mobile/MedicAndroidJavascript.java#L124-L126 | 1.0 | Make sure we are storing all debug / output from android when trying to obtain GPS - We often have trouble getting great GPS information against reports.
We should make sure that we're conveying as much debug and error information onto reports. For example, it looks like we're catching errors from Android and just logging them on Android (which means we won't see them): https://github.com/medic/medic-android/blob/master/src/main/java/org/medicmobile/webapp/mobile/MedicAndroidJavascript.java#L124-L126 | priority | make sure we are storing all debug output from android when trying to obtain gps we often have trouble getting great gps information against reports we should make sure that we re conveying as much debug and error information onto reports for example it looks like we re catching errors from android and just logging them on android which means we won t see them | 1 |
384,440 | 11,392,708,590 | IssuesEvent | 2020-01-30 03:40:43 | coder3101/cp-editor | https://api.github.com/repos/coder3101/cp-editor | opened | Add an option to disable the animation of the test cases | enhancement good first issue medium_priority | **Is your feature request related to a problem? Please describe.**
Some people may not like the resize animation of the test cases, so it should be possible to disable it.
**Describe the solution you'd like**
Add an option to disable the animation of the test cases.
**Describe alternatives you've considered**
N/A
**Additional context**
Here's the instruction:
1. Add a bool variable in `include/TestCases.hpp` `class TestCaseEdit` and a function to set it. You may also need to add functions in `class TestCase` and `class TestCases` to set their children's properties.
2. If this bool variable is set, set the `minimumHeight` instead of starting the animation in `TestCaseEdit::startAnimation`. Rename this function to `updateHeight` or something like that if you like.
3. Add a setting (probably in the `Extras/Misc` section), see [CONTRIBUTING#FAQ](https://github.com/coder3101/cp-editor/blob/master/CONTRIBUTING.md#faq) for more information. Update the setting of `TestCaseEdit`s in `AppWindow::onSettingsApplied`.
| 1.0 | Add an option to disable the animation of the test cases - **Is your feature request related to a problem? Please describe.**
Some people may not like the resize animation of the test cases, so it should be possible to disable it.
**Describe the solution you'd like**
Add an option to disable the animation of the test cases.
**Describe alternatives you've considered**
N/A
**Additional context**
Here's the instruction:
1. Add a bool variable in `include/TestCases.hpp` `class TestCaseEdit` and a function to set it. You may also need to add functions in `class TestCase` and `class TestCases` to set their children's properties.
2. If this bool variable is set, set the `minimumHeight` instead of starting the animation in `TestCaseEdit::startAnimation`. Rename this function to `updateHeight` or something like that if you like.
3. Add a setting (probably in the `Extras/Misc` section), see [CONTRIBUTING#FAQ](https://github.com/coder3101/cp-editor/blob/master/CONTRIBUTING.md#faq) for more information. Update the setting of `TestCaseEdit`s in `AppWindow::onSettingsApplied`.
| priority | add an option to disable the animation of the test cases is your feature request related to a problem please describe some people maybe don t like the resize animation of the test cases it should be able to disable describe the solution you d like add an option to disable the animation of the test cases describe alternatives you ve considered n a additional context here s the instruction add a bool variable in include testcases hpp class testcaseedit and a function to set it you may also need to add functions in class testcase and class testcases to set their children s properties if this bool variable is set set the minimumheight instead of starting the animation in testcaseedit startanimation rename this function to updateheight or something like that if you like add a setting probably in the extras misc section see for more information update the setting of testcaseedit s in appwindow onsettingsapplied | 1 |
518,328 | 15,026,961,009 | IssuesEvent | 2021-02-01 23:42:45 | stakeordie/scrt-auction | https://api.github.com/repos/stakeordie/scrt-auction | closed | UI Improvement for "sell" and "min bid" at top | medium priority | It wasn't clear to me that these were filters, initially, until I clicked the dropdowns and started choosing specific coins.

I think it would be great to make it more clear that those are search filters.
Also, the "min bid" label doesn't seem to fit because the user's not really filtering on minimum bid amounts.
| 1.0 | UI Improvement for "sell" and "min bid" at top - It wasn't clear to me that these were filters, initially, until I clicked the dropdowns and started choosing specific coins.

I think it would be great to make it more clear that those are search filters.
Also, the "min bid" label doesn't seem to fit because the user's not really filtering on minimum bid amounts.
| priority | ui improvement for sell and min bid at top it wasn t clear to me that these were filters initially until i clicked the dropdowns and started choosing specific coins i think it would be great to make it more clear that those are search filters also the min bid label doesn t seem to fit because the user s not really filtering on minimum bid amounts | 1 |
804,459 | 29,488,747,938 | IssuesEvent | 2023-06-02 11:54:27 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID: 316479] Macro compares unsigned to 0 in subsys/net/l2/ethernet/gptp/gptp.c | bug priority: medium area: Networking Coverity |
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/dae79cefaabf63086946a48ccca4094f26f146c8/subsys/net/l2/ethernet/gptp/gptp.c
Category: Integer handling issues
Function: `gptp_state_machine`
Component: Networking
CID: [316479](https://scan9.scan.coverity.com/reports.htm#v29726/p12996/mergedDefectId=316479)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/dae79cefaabf63086946a48ccca4094f26f146c8/subsys/net/l2/ethernet/gptp/gptp.c#L532
Please fix or provide comments in coverity using the link:
https://scan9.scan.coverity.com/reports.htm#v29271/p12996.
For more information about the violation, check the [Coverity Reference](https://scan9.scan.coverity.com/doc/en/cov_checker_ref.html#static_checker_NO_EFFECT). ([CWE-570](http://cwe.mitre.org/data/definitions/570.html))
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| 1.0 | [Coverity CID: 316479] Macro compares unsigned to 0 in subsys/net/l2/ethernet/gptp/gptp.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/dae79cefaabf63086946a48ccca4094f26f146c8/subsys/net/l2/ethernet/gptp/gptp.c
Category: Integer handling issues
Function: `gptp_state_machine`
Component: Networking
CID: [316479](https://scan9.scan.coverity.com/reports.htm#v29726/p12996/mergedDefectId=316479)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/dae79cefaabf63086946a48ccca4094f26f146c8/subsys/net/l2/ethernet/gptp/gptp.c#L532
Please fix or provide comments in coverity using the link:
https://scan9.scan.coverity.com/reports.htm#v29271/p12996.
For more information about the violation, check the [Coverity Reference](https://scan9.scan.coverity.com/doc/en/cov_checker_ref.html#static_checker_NO_EFFECT). ([CWE-570](http://cwe.mitre.org/data/definitions/570.html))
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| priority | macro compares unsigned to in subsys net ethernet gptp gptp c static code scan issues found in file category integer handling issues function gptp state machine component networking cid details please fix or provide comments in coverity using the link for more information about the violation check the note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file | 1 |
165,567 | 6,278,317,628 | IssuesEvent | 2017-07-18 14:09:42 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [commons] Fix typo in errors.properties | enhancement Priority: Medium | Fix typo on line number 4 and 5, in file `errors.properties` (replace the word "**doesnt**" with "**does not**") from:
```
security.permission.globalActionDenied=Current subject doesnt have permission to execute global action "{0}"
```
to:
```
security.permission.globalActionDenied=Current subject does not have permission to execute global action "{0}"
```
| 1.0 | [commons] Fix typo in errors.properties - Fix typo on line number 4 and 5, in file `errors.properties` (replace the word "**doesnt**" with "**does not**") from:
```
security.permission.globalActionDenied=Current subject doesnt have permission to execute global action "{0}"
```
to:
```
security.permission.globalActionDenied=Current subject does not have permission to execute global action "{0}"
```
| priority | fix typo in errors properties fix typo on line number and in file errors properties replace the word doesnt with does not from security permission globalactiondenied current subject doesnt have permission to execute global action to security permission globalactiondenied current subject does not have permission to execute global action | 1 |
55,268 | 3,072,600,730 | IssuesEvent | 2015-08-19 17:44:56 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | clickOnText can't scroll down ListView in a Fragment in a ViewPager | bug imported invalid Priority-Medium | _From [mrlhwlib...@gmail.com](https://code.google.com/u/107770464206980364909/) on April 14, 2012 09:12:47_
Robotium doesn't work well with a list view that can be scrolled down inside a ViewPager.
What steps will reproduce the problem?
1. Use AnyMemo's APK here https://code.google.com/p/anymemo/downloads/list
2. clickOnText("Misc"); clickOnText("About");
What is the expected output? What do you see instead? The "About" item should be clicked, but it is not.
What version of the product are you using? On what operating system? 3.1, Android 4.0.3, Android 2.3
Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=247_ | 1.0 | clickOnText can't scroll down ListView in a Fragment in a ViewPager - _From [mrlhwlib...@gmail.com](https://code.google.com/u/107770464206980364909/) on April 14, 2012 09:12:47_
Robotium doesn't work well with a list view that can be scrolled down inside a ViewPager.
What steps will reproduce the problem?
1. Use AnyMemo's APK here https://code.google.com/p/anymemo/downloads/list
2. clickOnText("Misc"); clickOnText("About");
What is the expected output? What do you see instead? The "About" item should be clicked, but it is not.
What version of the product are you using? On what operating system? 3.1, Android 4.0.3, Android 2.3
Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=247_ | priority | clickontext can t scroll down listview in a fragment in a viewpager from on april robotium doesn t work well with a list view that can be scroll down in a viewpager what steps will reproduce the problem use anymemo s apk here clickontext misc clickontext about what is the expected output what do you see instead the about should be clicked but actually not what version of the product are you using on what operating system android android please provide any additional information below original issue | 1 |
292,153 | 8,953,707,929 | IssuesEvent | 2019-01-25 20:18:21 | richelbilderbeek/djog_unos_2018 | https://api.github.com/repos/richelbilderbeek/djog_unos_2018 | closed | Cows eat grass | medium priority | **Is your feature request related to a problem? Please describe.**
Currently, cow and grass agents ignore each other.
**Describe the solution you'd like**
When cow and grass are at the same position,
* the grass should decrease in health, as it is eaten
* a cow should increase in health, as it is eating
Or, fix this test:
```c++
//#define FIX_ISSUE_301
#ifdef FIX_ISSUE_301
//Cows eat grass
{
const double cow_health{10.0};
const double grass_health{5.0};
game g(
create_default_tiles(),
{
agent(agent_type::grass, 0.0, 0.0, grass_health),
agent(agent_type::cow , 0.0, 0.0, cow_health)
}
);
assert(g.get_agents()[0].get_health() == grass_health);
assert(g.get_agents()[1].get_health() == cow_health);
g.process_events();
//Grass is eaten ...
assert(g.get_agents()[0].get_health() < grass_health);
//Cow is fed ...
assert(g.get_agents()[1].get_health() > cow_health);
}
#endif //FIX_ISSUE_301
```
**Describe alternatives you've considered**
None.
**Additional context**
None. | 1.0 | Cows eat grass - **Is your feature request related to a problem? Please describe.**
Currently, cow and grass agents ignore each other.
**Describe the solution you'd like**
When cow and grass are at the same position,
* the grass should decrease in health, as it is eaten
* a cow should increase in health, as it is eating
Or, fix this test:
```c++
//#define FIX_ISSUE_301
#ifdef FIX_ISSUE_301
//Cows eat grass
{
const double cow_health{10.0};
const double grass_health{5.0};
game g(
create_default_tiles(),
{
agent(agent_type::grass, 0.0, 0.0, grass_health),
agent(agent_type::cow , 0.0, 0.0, cow_health)
}
);
assert(g.get_agents()[0].get_health() == grass_health);
assert(g.get_agents()[1].get_health() == cow_health);
g.process_events();
//Grass is eaten ...
assert(g.get_agents()[0].get_health() < grass_health);
//Cow is fed ...
assert(g.get_agents()[1].get_health() > cow_health);
}
#endif //FIX_ISSUE_301
```
**Describe alternatives you've considered**
None.
**Additional context**
None. | priority | cows eat grass is your feature request related to a problem please describe current cow and grass agents ignore each other describe the solution you d like when cow and grass are at the same position the grass should decrease in health as it is eaten a cow should increase in health as it is eating or fix this test c define fix issue ifdef fix issue cows eat grass const double cow health const double grass health game g create default tiles agent agent type grass grass health agent agent type cow cow health assert g get agents get health grass health assert g get agents get health cow health g process events grass is eaten assert g get agents get health grass health cow is fed assert g get agents get health cow health endif fix issue describe alternatives you ve considered none additional context none | 1 |
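The interaction the C++ test above expects can also be sketched as a small, self-contained Python model. The class, function names, and the fixed `bite` amount here are illustrative assumptions, not part of the game's actual code:

```python
class Agent:
    """Minimal stand-in for the game's agent: a kind, a position, and health."""

    def __init__(self, kind: str, x: float, y: float, health: float):
        self.kind, self.x, self.y, self.health = kind, x, y, health


def process_feeding(agents: list[Agent], bite: float = 1.0) -> None:
    # When a cow shares a position with grass, the grass is eaten
    # (loses health) and the cow is fed (gains health).
    for cow in (a for a in agents if a.kind == "cow"):
        for grass in (a for a in agents if a.kind == "grass"):
            if (cow.x, cow.y) == (grass.x, grass.y) and grass.health > 0.0:
                grass.health -= bite
                cow.health += bite
```

After one call with a cow and grass at the same position, the grass's health decreases and the cow's increases, mirroring the two assertions in the test.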
24,269 | 2,667,018,898 | IssuesEvent | 2015-03-22 05:01:42 | NewCreature/EOF | https://api.github.com/repos/NewCreature/EOF | closed | Track undo for and document CTRL+click to redefine lyric pitch | bug imported Priority-Medium | _From [raynebc](https://code.google.com/u/raynebc/) on May 10, 2010 14:04:42_
I found that the CTRL+click on the full piano changes the pitch of the last
clicked lyric. However, making a pitch change this way doesn't mark the
chart as modified so it cannot be undone. This feature is also affected by
the click, CTRL+click bug that will allow a note operation to affect a note
that was deselected.
_Original issue: http://code.google.com/p/editor-on-fire/issues/detail?id=50_ | 1.0 | Track undo for and document CTRL+click to redefine lyric pitch - _From [raynebc](https://code.google.com/u/raynebc/) on May 10, 2010 14:04:42_
I found that the CTRL+click on the full piano changes the pitch of the last
clicked lyric. However, making a pitch change this way doesn't mark the
chart as modified so it cannot be undone. This feature is also affected by
the click, CTRL+click bug that will allow a note operation to affect a note
that was deselected.
_Original issue: http://code.google.com/p/editor-on-fire/issues/detail?id=50_ | priority | track undo for and document ctrl click to redefine lyric pitch from on may i found that the ctrl click on the full piano changes the pitch of the last clicked lyric however making a pitch change this way doesn t mark the chart as modified so it cannot be undone this feature is also affected by the click ctrl click bug that will allow a note operation to affect a note that was deselected original issue | 1 |
533,127 | 15,577,309,008 | IssuesEvent | 2021-03-17 13:24:06 | schemathesis/schemathesis | https://api.github.com/repos/schemathesis/schemathesis | closed | [FEATURE] Control the "code to reproduce" section style from CLI | Difficulty: Medium Priority: Low Type: Feature | **Is your feature request related to a problem? Please describe.**
At the moment, Schemathesis CLI always produces Python code that the end-user should run to reproduce the problem. It might not be desired, especially if Python is not the main language of the app under test or the user doesn't want to use it.
**Describe the solution you'd like**
Provide a way to control whether Python or cURL will be used in these code samples
Schemathesis 2.8.4
| 1.0 | [FEATURE] Control the "code to reproduce" section style from CLI - **Is your feature request related to a problem? Please describe.**
At the moment, Schemathesis CLI always produces Python code that the end-user should run to reproduce the problem. It might not be desired, especially if Python is not the main language of the app under test or the user doesn't want to use it.
**Describe the solution you'd like**
Provide a way to control whether Python or cURL will be used in these code samples
Schemathesis 2.8.4
| priority | control the code to reproduce section style from cli is your feature request related to a problem please describe at the moment schemathesis cli always produces python code that the end user should run to reproduce the problem it might not be desired especially if python is not the main language of the app under test or the user doesn t want to use it describe the solution you d like provide a way to control whether python or curl will be used in these code samples schemathesis | 1 |
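The requested option could look roughly like the sketch below: one rendering function per supported style, selected by a flag. The style names, function name, and output formats are assumptions for illustration, not Schemathesis's actual implementation:

```python
def render_code_sample(method: str, url: str, style: str = "python") -> str:
    # Render a "code to reproduce" snippet in the requested style.
    # "python" keeps the current behavior; "curl" avoids requiring
    # the end-user to have a Python environment at all.
    if style == "curl":
        return f"curl -X {method} {url}"
    return f"import requests\nrequests.request({method!r}, {url!r})"
```

A CLI could then expose this as something like `--code-sample-style=curl`, passing the chosen value straight through to the renderer.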
769,361 | 27,002,799,460 | IssuesEvent | 2023-02-10 09:16:46 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | Fix potential query injections in metrics service | Priority: medium 🔒 Security 🐤 Low Complexity | In many places where we query from InfluxDB data, we use string interpolation to formulate the query:
https://github.com/opencrvs/opencrvs-core/blob/bf117febf8de0120a05f39de6fca7494baa1ede3/packages/metrics/src/features/searchMetrics/service.ts#L24
This can be dangerous, as a malicious user could pass in a parameter that manipulates the query structure itself.
Consider for example `clientId = "'; DROP measurement search_requests; SELECT FROM search_requests WHERE clientId = 1 "`. This would evaluate the full query string to a form of
```
`SELECT COUNT(clientId) FROM search_requests WHERE clientId = ''; DROP measurement search_requests; SELECT FROM search_requests WHERE clientId = 1 "' AND time >= '${currentDate}'`
```
Not great! We have discovered that our current InfluxDB client, although quite outdated, supports a thing called "placeholders" that are meant to automatically escape the query variable and solve this problem for us. We are currently using that method in a few places, like in [here](https://github.com/opencrvs/opencrvs-core/blob/bf117febf8de0120a05f39de6fca7494baa1ede3/packages/metrics/src/features/certifications/service.ts#L35-L41).
Unfortunately the placeholders do not work for integer values in the version we're using. https://github.com/node-influx/node-influx/issues/587
**Dev tasks:**
- [x] Research if there is a more recent version / alternative available for connecting & querying influxdb from Node.js
- [x] Go through the current queries, use placeholders as much as possible
| 1.0 | Fix potential query injections in metrics service - In many places where we query from InfluxDB data, we use string interpolation to formulate the query:
https://github.com/opencrvs/opencrvs-core/blob/bf117febf8de0120a05f39de6fca7494baa1ede3/packages/metrics/src/features/searchMetrics/service.ts#L24
This can be dangerous, as a malicious user could pass in a parameter that manipulates the query structure itself.
Consider for example `clientId = "'; DROP measurement search_requests; SELECT FROM search_requests WHERE clientId = 1 "`. This would evaluate the full query string to a form of
```
`SELECT COUNT(clientId) FROM search_requests WHERE clientId = ''; DROP measurement search_requests; SELECT FROM search_requests WHERE clientId = 1 "' AND time >= '${currentDate}'`
```
Not great! We have discovered that our current InfluxDB client, although quite outdated, supports a thing called "placeholders" that are meant to automatically escape the query variable and solve this problem for us. We are currently using that method in a few places, like in [here](https://github.com/opencrvs/opencrvs-core/blob/bf117febf8de0120a05f39de6fca7494baa1ede3/packages/metrics/src/features/certifications/service.ts#L35-L41).
Unfortunately the placeholders do not work for integer values in the version we're using. https://github.com/node-influx/node-influx/issues/587
**Dev tasks:**
- [x] Research if there is a more recent version / alternative available for connecting & querying influxdb from Node.js
- [x] Go through the current queries, use placeholders as much as possible
| priority | fix potential query injections in metrics service in many places where we query from influxdb data we use string interpolation to formulate the query this can be dangerous as a malicious user could pass in a parameter that manipulates the query structure itself consider for example clientid drop measurement search requests select from search requests where clientid this would evaluate the full query string to a form of select count clientid from search requests where clientid drop measurement search requests select from search requests where clientid and time currentdate not great we have discovered that our current influxdb client although quite outdated supports a thing called placeholders that are meant to automatically escape the query variable and solve this problem for us we are currently using that method in few places like in unfortunately the placeholders do not work for integer values in the version we re using dev tasks research if there is a more recent version alternative available for connecting querying influxdb from node js go through the current queries use placeholders as much as possible | 1 |
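The protection the placeholders provide can be sketched in plain Python (this is illustrative, not the node-influx API): escape the parameter before interpolating it, so a payload like the one above can no longer terminate the surrounding string literal. The function names are hypothetical:

```python
def escape_influx_string(value: str) -> str:
    # Escape backslashes first, then single quotes, so the value can
    # never close the string literal it is interpolated into.
    return value.replace("\\", "\\\\").replace("'", "\\'")


def count_by_client(client_id: str) -> str:
    # Unlike the raw template string in the issue, the parameter is
    # escaped before interpolation, keeping it inside the literal.
    return (
        "SELECT COUNT(clientId) FROM search_requests "
        f"WHERE clientId = '{escape_influx_string(client_id)}'"
    )
```

With the malicious `clientId` from the example, the single quote in the payload comes out escaped, so the `DROP` stays inside the quoted literal instead of becoming a second statement.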
489,408 | 14,105,993,748 | IssuesEvent | 2020-11-06 14:18:54 | carbon-design-system/carbon-for-ibm-dotcom | https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom | closed | Add RTL artifact for Web Components gulp builds for CDN | Airtable Done RTL dev package: web components priority: medium | <!-- Avoid any type of solutions in this user story -->
<!-- replace _{{...}}_ with your own words or remove -->
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
Carbon for IBM.com adopter
> I need to:
have a RTL version of the web components dotcom shell CDN package
> so that I can:
Add RTL support to my application using the CDN version of Carbon for IBM.com Web Components
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- There is currently a regular build script for generating the LTR bundles for the web components dotcom shell, which is uploaded to the production server for CDN access.
- Script will need to be updated to also provide an rtl version of the script. For example:
```
<script type="module">
import 'https://www.ibm.com/common/carbon-for-ibm-dotcom/latest/ibmdotcom-web-components-dotcom-shell.rtl.min.js';
</script>
```
#### Acceptance criteria
- [ ] Create bundle version of the dotcom shell that support RTL
- [ ] Add to the CI/CD workflow to upload to the production/akamai environments | 1.0 | Add RTL artifact for Web Components gulp builds for CDN - <!-- Avoid any type of solutions in this user story -->
<!-- replace _{{...}}_ with your own words or remove -->
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
Carbon for IBM.com adopter
> I need to:
have a RTL version of the web components dotcom shell CDN package
> so that I can:
Add RTL support to my application using the CDN version of Carbon for IBM.com Web Components
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- There is currently a regular build script for generating the LTR bundles for the web components dotcom shell, which is uploaded to the production server for CDN access.
- Script will need to be updated to also provide an rtl version of the script. For example:
```
<script type="module">
import 'https://www.ibm.com/common/carbon-for-ibm-dotcom/latest/ibmdotcom-web-components-dotcom-shell.rtl.min.js';
</script>
```
#### Acceptance criteria
- [ ] Create bundle version of the dotcom shell that support RTL
- [ ] Add to the CI/CD workflow to upload to the production/akamai environments | priority | add rtl artifact for web components gulp builds for cdn user story as a carbon for ibm com adopter i need to have a rtl version of the web components dotcom shell cdn package so that i can add rtl support to my application using the cdn version of carbon for ibm com web components additional information there is currently a regular build script for generating the ltr bundles for the web components dotcom shell which is uploaded to the production server for cdn access script will need to be updated to also provide an rtl version of the script for example import acceptance criteria create bundle version of the dotcom shell that support rtl add to the ci cd workflow to upload to the production akamai environments | 1 |
579,804 | 17,198,755,687 | IssuesEvent | 2021-07-16 22:20:11 | bcgov/ols-router | https://api.github.com/repos/bcgov/ols-router | closed | In truck resources, followTruckRoute should default to true | medium priority usability | followTruckRoute should default to true for truck resources (e.g., /truck/directions). Rarely would an application need to use truck/route or truck/directions and not need trucks to follow truck routes. The default value of followTruckRoute when using non-truck resources (e.g., directions instead of truck/directions) should still be false. | 1.0 | In truck resources, followTruckRoute should default to true - followTruckRoute should default to true for truck resources (e.g., /truck/directions). Rarely would an application need to use truck/route or truck/directions and not need trucks to follow truck routes. The default value of followTruckRoute when using non-truck resources (e.g., directions instead of truck/directions) should still be false. | priority | in truck resources followtruckroute should default to true followtruckroute should default to true for truck resources e g truck directions rarely would an application need to use truck route or truck directions and not need trucks to follow truck routes the default value of followtruckroute when using non truck resources e g directions instead of truck directions should still be false | 1 |
48,950 | 3,001,057,041 | IssuesEvent | 2015-07-24 08:32:54 | lua-carbon/carbon | https://api.github.com/repos/lua-carbon/carbon | opened | Access inheritance tree from code | difficulty:medium feature priority:low | It'd be useful to know what the tree of inheritance looks like for a class for cases like inheritance or metadata. | 1.0 | Access inheritance tree from code - It'd be useful to know what the tree of inheritance looks like for a class for cases like inheritance or metadata. | priority | access inheritance tree from code it d be useful to know what the tree of inheritance looks like for a class for cases like inheritance or metadata | 1 |
152,026 | 5,831,801,740 | IssuesEvent | 2017-05-08 20:15:22 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | closed | Target Units are managed per CRP | Priority - Medium Type - Enhancement | Target Units should be defined in the System Admin section and each CRP should be able to select which of them are going to be managed on their impact pathways.
| 1.0 | Target Units are managed per CRP - Target Units should be defined in the System Admin section and each CRP should be able to select which of them are going to be managed on their impact pathways.
| priority | target units are managed per crp target units should be defined in the system admin section and each crp should be able to select which of them are going to be managed on their impact pathways | 1 |
476,741 | 13,749,291,437 | IssuesEvent | 2020-10-06 10:16:09 | svthalia/discord-bot | https://api.github.com/repos/svthalia/discord-bot | closed | Bot status activity always shows "!help for docs" | bug priority: medium | ### Describe the bug
Bot status activity always shows "!help for docs"
### How to reproduce
Steps to reproduce the behaviour:
1. Run the bot
2. Change the command prefix
3. Restart the bot
4. The bot still shows the same message
### Expected behaviour
The prefix in the activity message should also update
| 1.0 | Bot status activity always shows "!help for docs" - ### Describe the bug
Bot status activity always shows "!help for docs"
### How to reproduce
Steps to reproduce the behaviour:
1. Run the bot
2. Change the command prefix
3. Restart the bot
4. The bot still shows the same message
### Expected behaviour
The prefix in the activity message should also update
| priority | bot status activity always shows help for docs describe the bug bot status activity always shows help for docs how to reproduce steps to reproduce the behaviour run the bot change the command prefix restart the bot the bot still shows the same message expected behaviour the prefix in the activity message should also update | 1 |
136,435 | 5,282,432,206 | IssuesEvent | 2017-02-07 18:50:50 | octobercms/october | https://api.github.com/repos/octobercms/october | closed | Runaway system_request_log table. | Priority: Medium Status: Completed Type: Maintenance | ##### Expected behavior
system_request_log table should have some limits, like a monthly rotation or something. We have many other ways to log errors and count them so for us this table is just superfluous overhead, so perhaps we should have a way (by configuration & environment variable) to configure a limit or disable this feature.
##### Actual behavior
The table (in our case) blew up exponentially after a brief DDoS attack. Blocking the attack didn't renew service because the system_request_log table had hundreds of simultaneous queries trying to get (and increase) the count of the 404 errors on favicon.ico (and similar). This created a complete MySQL gridlock. I couldn't truncate the table due to the lock and had to create a new one matching the schema and swap it out.
##### Reproduce steps
Have about 500 users on the site at once, each one producing an error to be logged to this table. Starting with a table over 8m rows thanks to a DDoS attack. Yea... not the most likely scenario, but there it is :)
##### October build
~v1.0.389
| 1.0 | Runaway system_request_log table. - ##### Expected behavior
system_request_log table should have some limits, like a monthly rotation or something. We have many other ways to log errors and count them so for us this table is just superfluous overhead, so perhaps we should have a way (by configuration & environment variable) to configure a limit or disable this feature.
##### Actual behavior
The table (in our case) blew up exponentially after a brief DDoS attack. Blocking the attack didn't restore service because the system_request_log table had hundreds of simultaneous queries trying to get (and increase) the count of the 404 errors on favicon.ico (and similar). This created a complete MySQL gridlock. I couldn't truncate the table due to the lock and had to create a new one matching the schema and swap it out.
##### Reproduce steps
Have about 500 users on the site at once, each one producing an error to be logged to this table. Starting with a table over 8m rows thanks to a DDoS attack. Yea... not the most likely scenario, but there it is :)
##### October build
~v1.0.389
| priority | runaway system request log table expected behavior system request log table should have some limits like a monthly rotation or something we have many other ways to log errors and count them so for us this table is just superfluous overhead so perhaps we should have a way by configuration environment variable to configure a limit or disable this feature actual behavior the table in our case blew up exponentially after a brief ddos attack blocking the attack didn t renew service because the system request log table had hundreds of simultaneous queries trying to get and increase the count of the errors on favicon ico and similar this created a complete mysql gridlock i couldn t truncate the table due to the lock and had to create a new one matching the schema and swap it out reproduce steps have about users on the site at once each one producing an error to be logged to this table starting with a table over rows thanks to a a ddos attack yea not the most likely scenario but there is is october build | 1 |
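The monthly-rotation limit this report requests amounts to pruning rows older than a retention window. The sketch below illustrates the idea with an in-memory SQLite table standing in for October's MySQL system_request_log; the retention window, column names, and dates are assumptions for the example, not October's real schema.

```python
# Illustrative sketch of a rotation limit for a request log table, using
# SQLite in place of MySQL so the example is self-contained. The 30-day
# window and the (url, created_at) columns are assumptions for illustration.
import sqlite3
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # assumed "monthly rotation" window

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE system_request_log (url TEXT, created_at TEXT)")

now = datetime(2017, 2, 1)
rows = [
    ("/favicon.ico", (now - timedelta(days=90)).isoformat()),  # stale row
    ("/favicon.ico", (now - timedelta(days=1)).isoformat()),   # recent row
]
conn.executemany("INSERT INTO system_request_log VALUES (?, ?)", rows)

# The rotation step: delete everything older than the retention cutoff.
# ISO-8601 timestamps compare correctly as strings, so a plain < works here.
cutoff = (now - timedelta(days=RETENTION_DAYS)).isoformat()
conn.execute("DELETE FROM system_request_log WHERE created_at < ?", (cutoff,))
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM system_request_log").fetchone()[0]
print(remaining)  # only the recent row survives the rotation
```

Run periodically (or bounded by a configurable limit, as the report suggests), this keeps the table from growing without bound during an attack.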
357,706 | 10,617,044,740 | IssuesEvent | 2019-10-12 16:13:40 | x13pixels/remedybg-issues | https://api.github.com/repos/x13pixels/remedybg-issues | closed | Allow for showing ASCII characters in Memory window | Component: Memory Window Priority: 4 (Medium) Status: Completed | It would be useful, when inspecting long strings or searching for strings in a data structure, to be able to see a corresponding column for characters in the memory window, similar to how hex viewers would usually have it as a separate column. | 1.0 | Allow for showing ASCII characters in Memory window - It would be useful, when inspecting long strings or searching for strings in a data structure, to be able to see a corresponding column for characters in the memory window, similar to how hex viewers would usually have it as a separate column. | priority | allow for showing ascii characters in memory window it would be useful when inspecting long strings or searching for strings in a data structure to be able to see a corresponding column for characters in the memory window similar to how hex viewers would usually have it as a separate column | 1 |
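The hex-viewer layout this request describes — bytes on the left, a printable-ASCII column on the right — can be sketched in a few lines. The column widths and the `.` placeholder for non-printable bytes are arbitrary choices for the illustration, not RemedyBG's actual rendering.

```python
# Minimal sketch of a hex-dump row with a trailing ASCII column, the layout
# the feature request describes. Widths and formatting are illustrative.

def hex_dump_line(data: bytes, width: int = 8) -> str:
    """Render one row of a hex dump with a trailing ASCII column."""
    chunk = data[:width]
    # Hex column, padded so the ASCII column lines up across rows.
    hex_part = " ".join(f"{b:02x}" for b in chunk).ljust(width * 3 - 1)
    # ASCII column: printable bytes as-is, everything else as '.'.
    ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    return f"{hex_part}  {ascii_part}"
```

For example, `hex_dump_line(b"Hi\x00!")` renders the four hex bytes followed by `Hi.!`, making embedded strings easy to spot while scanning memory.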
742,946 | 25,879,420,609 | IssuesEvent | 2022-12-14 10:13:40 | Australian-Genomics/CTRL | https://api.github.com/repos/Australian-Genomics/CTRL | closed | Can the little information (i) drop down be made darker and/or bolder to stand out more. | priority: medium | Have received feedback that the little i is a bit hard to see.

| 1.0 | Can the little information (i) drop down be made darker and/or bolder to stand out more. - Have received feedback that the little i is a bit hard to see.

| priority | can the little information i drop down be made darker and or bolder to stand out more have received feedback that the little i is a bad hard to see | 1 |
40,348 | 2,868,621,389 | IssuesEvent | 2015-06-05 19:59:12 | dart-lang/dart_style | https://api.github.com/repos/dart-lang/dart_style | closed | If wrapping is required, avoid breaking within statements unless it increases the total number of lines created | AssumedStale enhancement Priority-Medium | <a href="https://github.com/butlermatt"><img src="https://avatars.githubusercontent.com/u/1148886?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [butlermatt](https://github.com/butlermatt)**
_Originally opened as dart-lang/sdk#16897_
----
Take the sample program below:
void main() {
  if (true)
    throw new FormatError('This is my Stupid long error. Do you like it?');
}
Run through dartfmt. Output is:
void main() {
  if (true) throw new FormatError(
      'This is my Stupid long error. Do you like it?');
}
I think that when a long line is part of a constructor call, dartfmt should at least move the class name to the next line (when possible), not just the argument.
That said, I'd prefer the output of my original with the throw statement also on the following line, which is achieved by wrapping the if block in curly braces. | 1.0 | If wrapping is required, avoid breaking within statements unless it increases the total number of lines created - <a href="https://github.com/butlermatt"><img src="https://avatars.githubusercontent.com/u/1148886?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [butlermatt](https://github.com/butlermatt)**
_Originally opened as dart-lang/sdk#16897_
----
Take the sample program below:
void main() {
  if (true)
    throw new FormatError('This is my Stupid long error. Do you like it?');
}
Run through dartfmt. Output is:
void main() {
  if (true) throw new FormatError(
      'This is my Stupid long error. Do you like it?');
}
I think that when a long line is part of a constructor call, dartfmt should at least move the class name to the next line (when possible), not just the argument.
That said, I'd prefer the output of my original with the throw statement also on the following line, which is achieved by wrapping the if block in curly braces. | priority | if wrapping is required avoid breaking within statements unless it increases the total number of lines created issue by originally opened as dart lang sdk take the sample program below void main nbsp nbsp if true nbsp nbsp nbsp nbsp throw new formaterror this is my stupid long error do you like it run through dartfmt output is void main nbsp nbsp if true throw new formaterror nbsp nbsp nbsp nbsp nbsp nbsp this is my stupid long error do you like it i think if a long line is part of the constructor if possible dartfmt should at the least move the class name to the next line if possible not just the argument that said i d prefer the output of my original with the throw statement also on the following line which is achieved by wrapping the if block in curly braces | 1 |
657,561 | 21,796,676,281 | IssuesEvent | 2022-05-15 18:39:07 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | closed | Practice App: Implementing the POST method for dropping lesson, lesson drop endpoint | priority-medium status-inprogress practice-app practice-app:back-end | ### Issue Description
We determined the endpoints to be included in the practice app project in our [weekly meeting-9](https://github.com/bounswe/bounswe2022group2/wiki/Meeting-%239-(01.05.2022)).
After the determination of the complete list, we divided the tasks within the team. I took responsibility for implementing a POST method for dropping a lesson. To implement the drop-lesson endpoint, we need User and Lesson models.
### Step Details
Steps that will be performed:
- [x] Check whether the incoming data is valid
- [x] Check whether the user and lesson exists
- [x] Check whether the user takes the lesson
- [x] Remove the lesson from user's enrollment list and return the user and lesson with a success message
- [x] Return an error message if an error occurs in any of the previous steps
- [x] Test the endpoint with Postman
- [x] Save requests as a collection in Postman
- [x] Document the endpoint
### Final Actions
After I create the drop-lesson endpoint, I will test it using Postman. While testing, I will save my successful and failed requests and create a collection in Postman. I will also post the link to the collection as a comment under this issue so that anyone can test the endpoint. Finally, I will document the endpoint and share the documentation as a comment under this issue so that it can be added to the corresponding wiki page later.
### Deadline of the Issue
15.05.2022 - Sunday - 23:59
### Reviewer
Mehmet Batuhan Çelik
### Deadline for the Review
16.05.2022 - Monday - 23:59 | 1.0 | Practice App: Implementing the POST method for dropping lesson, lesson drop endpoint - ### Issue Description
We determined the endpoints to be included in the practice app project in our [weekly meeting-9](https://github.com/bounswe/bounswe2022group2/wiki/Meeting-%239-(01.05.2022)).
After the determination of the complete list, we divided the tasks within the team. I took responsibility for implementing a POST method for dropping a lesson. To implement the drop-lesson endpoint, we need User and Lesson models.
### Step Details
Steps that will be performed:
- [x] Check whether the incoming data is valid
- [x] Check whether the user and lesson exists
- [x] Check whether the user takes the lesson
- [x] Remove the lesson from user's enrollment list and return the user and lesson with a success message
- [x] Return an error message if an error occurs in any of the previous steps
- [x] Test the endpoint with Postman
- [x] Save requests as a collection in Postman
- [x] Document the endpoint
### Final Actions
After I create the drop-lesson endpoint, I will test it using Postman. While testing, I will save my successful and failed requests and create a collection in Postman. I will also post the link to the collection as a comment under this issue so that anyone can test the endpoint. Finally, I will document the endpoint and share the documentation as a comment under this issue so that it can be added to the corresponding wiki page later.
### Deadline of the Issue
15.05.2022 - Sunday - 23:59
### Reviewer
Mehmet Batuhan Çelik
### Deadline for the Review
16.05.2022 - Monday - 23:59 | priority | practice app implementing the post method for dropping lesson lesson drop endpoint issue description we determined the endpoints to be included in the practice app project in our after the determination of the complete list we divided the tasks within the team i took the responsibility for the implementation of a post method for dropping lesson to be able to implement the signup endpoint we need user and lesson models step details steps that will be performed check whether the incoming data is valid check whether the user and lesson exists check whether the user takes the lesson remove the lesson from user s enrollment list and return the user and lesson with a success message return an error message if an error occurs in any of the previous steps test the endpoint with postman save requests as a collection in postman document the endpoint final actions after i created the drop lesson endpoint i will test it by using postman while testing i will save my successful and failed requests and i will create a collection in postman i will also give the link to the collection under this as a comment to enable anyone to test the endpoint finally i will document the endpoint and will share the documentation as a comment under this issue to add it to the corresponding wiki page in the future deadline of the issue sunday reviewer mehmet batuhan çelik deadline for the review monday | 1 |
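The checklist in the drop-lesson issue above is essentially a validation chain. The framework-free sketch below mirrors those steps; the in-memory "database", field names, and status codes are assumptions for illustration, not the project's actual models or API.

```python
# Framework-free sketch of the drop-lesson validation chain described in the
# issue's steps. The in-memory stores and field names are hypothetical.

users = {"u1": {"enrolled": ["l1"]}}
lessons = {"l1": {"title": "Algebra"}}

def drop_lesson(user_id, lesson_id):
    """Return (status, payload) mimicking the endpoint's success/error paths."""
    # Step 1: check whether the incoming data is valid.
    if not user_id or not lesson_id:
        return 400, {"error": "invalid request data"}
    # Step 2: check whether the user and lesson exist.
    if user_id not in users or lesson_id not in lessons:
        return 404, {"error": "user or lesson not found"}
    # Step 3: check whether the user actually takes the lesson.
    if lesson_id not in users[user_id]["enrolled"]:
        return 400, {"error": "user is not enrolled in this lesson"}
    # Step 4: remove the lesson and return a success message.
    users[user_id]["enrolled"].remove(lesson_id)
    return 200, {"message": "lesson dropped", "user": user_id, "lesson": lesson_id}
```

Each early return corresponds to one of the error cases the issue lists, which keeps the happy path at the bottom and easy to read.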
41,486 | 2,869,010,102 | IssuesEvent | 2015-06-05 22:33:32 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | Allow application-level deferred libraries to have separate message catalogs | Area-Pkg Pkg-Intl Priority-Medium Triaged Type-Enhancement | If the application wants to use deferred libraries for portions of itself, it would be best if the message catalog for that portion was loaded along with it. Right now the Intl.message calls are global, and we don't have a way to indicate a scope for the lookup, so that might require an API change. Alternatively, the implementation could remain as a single lookup but be a composite and have parts of it loaded when the application library is loaded. There are two issues there. One is identifying which messages need to be in which deferred chunks. That shouldn't be too bad, because we can base it off which file the message is implemented in and which chunk of the application that is. The second is having a runtime hook of some sort to load the messages when that deferred part of the application is loaded. That probably has to be done by convention, because we don't have that in the language. | 1.0 | Allow application-level deferred libraries to have separate message catalogs - If the application wants to use deferred libraries for portions of itself, it would be best if the message catalog for that portion was loaded along with it. Right now the Intl.message calls are global, and we don't have a way to indicate a scope for the lookup, so that might require an API change. Alternatively, the implementation could remain as a single lookup but be a composite and have parts of it loaded when the application library is loaded. There are two issues there. One is identifying which messages need to be in which deferred chunks. That shouldn't be too bad, because we can base it off which file the message is implemented in and which chunk of the application that is. 
The second is having a runtime hook of some sort to load the messages when that deferred part of the application is loaded. That probably has to be done by convention, because we don't have that in the language. | priority | allow application level deferred libraries to have separate message catalogs if the application wants to use deferred libraries for portions of itself it would be best if the message catalog for that portion was loaded along with it right now the intl message calls are global and we don t have a way to indicate a scope for the lookup so that might require an api change alternatively the implementation could remain as a single lookup but be a composite and have parts of it loaded when the application library is loaded there are two issues there one is identifying which messages need to be in which deferred chunks that shouldn t be too bad because we can base it off which file the message is implemented in and which chunk of the application that is the second is having a runtime hook of some sort to load the messages when that deferred part of the application is loaded that probably has to be done by convention because we don t have that in the language | 1 |