Unnamed: 0 (int64) | id (float64) | type (string, 1 class) | created_at (string, length 19) | repo (string) | repo_url (string) | action (string, 3 classes) | title (string) | labels (string) | body (string) | index (string, 13 classes) | text_combine (string) | label (string, 2 classes) | text (string) | binary_label (int64, 0/1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
683,633 | 23,389,473,981 | IssuesEvent | 2022-08-11 16:23:22 | archesproject/arches | https://api.github.com/repos/archesproject/arches | closed | i18n language switcher broken | Type: Bug Priority: High | After the webpack build merged, I can no longer switch languages with the language switcher in 7.x. After talking with @chrabyrd this may have to do with a missing i18n_patterns call in urls.py - more research is needed. | 1.0 | i18n language switcher broken - After the webpack build merged, I can no longer switch languages with the language switcher in 7.x. After talking with @chrabyrd this may have to do with a missing i18n_patterns call in urls.py - more research is needed. | priority | language switcher broken after the webpack build merged i can no longer switch languages with the language switcher in x after talking with chrabyrd this may have to do with a missing patterns call in urls py more research is needed | 1 |
765,825 | 26,862,475,805 | IssuesEvent | 2023-02-03 19:45:54 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Foreach ops don't follow aten level debug asserts | high priority triage review triaged actionable module: mta | Since https://github.com/pytorch/pytorch/pull/91846 has moved to make torch.nn.utils.clip_grad_norm_() use foreach ops, it throws the error message:
*** RuntimeError: t.storage().use_count() == 1 INTERNAL ASSERT FAILED at "caffe2/torch/csrc/autograd/autograd_not_implemented_fallback.cpp":189, please report a bug to PyTorch.
when PyTorch is built with debug asserts.
Given the assert, the error is that this foreach op should be returning a brand new Tensor but it is actually returning a Tensor that shares storage with at least another one.
- If this is done on purpose for a good reason, we should remove this assert
- If that is not expected, then it can be a bug in the implementation where it returns a view where it should not.
cc @ezyang @gchanan @zou3519 @crcrpar @mcarilli @ngimel | 1.0 | Foreach ops don't follow aten level debug asserts - Since https://github.com/pytorch/pytorch/pull/91846 has moved to make torch.nn.utils.clip_grad_norm_() use foreach ops, it throws the error message:
*** RuntimeError: t.storage().use_count() == 1 INTERNAL ASSERT FAILED at "caffe2/torch/csrc/autograd/autograd_not_implemented_fallback.cpp":189, please report a bug to PyTorch.
when PyTorch is built with debug asserts.
Given the assert, the error is that this foreach op should be returning a brand new Tensor but it is actually returning a Tensor that shares storage with at least another one.
- If this is done on purpose for a good reason, we should remove this assert
- If that is not expected, then it can be a bug in the implementation where it returns a view where it should not.
cc @ezyang @gchanan @zou3519 @crcrpar @mcarilli @ngimel | priority | foreach op don t follow aten level debug asserts since has moved to make torch nn utils clip grad norm use foreach ops it throws the error message runtimeerror t storage use count internal assert failed at torch csrc autograd autograd not implemented fallback cpp please report a bug to pytorch when pytorch is built with debug asserts given the assert the error is that this foreach op should be returning a brand new tensor but it is actually returning a tensor that shares storage with at least another one if this is done on purpose for a good reason we should remove this assert if that is not expected then it can be a bug in the implementation where it returns a view where it should not cc ezyang gchanan crcrpar mcarilli ngimel | 1 |
794,956 | 28,056,412,012 | IssuesEvent | 2023-03-29 09:39:39 | projectdiscovery/naabu | https://api.github.com/repos/projectdiscovery/naabu | closed | exclude `IP` / `Port` / `Host` option is not working after last update | Priority: High Status: Completed Type: Bug | Hello,
when i try to exclude ports using -ep or -exclude-ports
the output still has excluded ports in it.
```console
naabu -host hackerone.com -ep 80
__
___ ___ ___ _/ / __ __
/ _ \/ _ \/ _ \/ _ \/ // /
/_//_/\_,_/\_,_/_.__/\_,_/
projectdiscovery.io
[INF] Current naabu version 2.1.4 (latest)
[INF] Running CONNECT scan with non root privileges
[INF] Found 1 ports on host hackerone.com (104.16.100.52)
hackerone.com:80
```
tried more than once.
Thanks | 1.0 | exclude `IP` / `Port` / `Host` option is not working after last update - Hello,
when i try to exclude ports using -ep or -exclude-ports
the output still has excluded ports in it.
```console
naabu -host hackerone.com -ep 80
__
___ ___ ___ _/ / __ __
/ _ \/ _ \/ _ \/ _ \/ // /
/_//_/\_,_/\_,_/_.__/\_,_/
projectdiscovery.io
[INF] Current naabu version 2.1.4 (latest)
[INF] Running CONNECT scan with non root privileges
[INF] Found 1 ports on host hackerone.com (104.16.100.52)
hackerone.com:80
```
tried more than once.
Thanks | priority | exclude ip port host option is not working after last update hello when i try to exclude ports using ep or exclude ports the output still have excluded ports on it console naabu host hackerone com ep projectdiscovery io current naabu version latest running connect scan with non root privileges found ports on host hackerone com hackerone com tried more than once thanks | 1 |
364,081 | 10,758,645,096 | IssuesEvent | 2019-10-31 15:18:52 | smallhadroncollider/taskell | https://api.github.com/repos/smallhadroncollider/taskell | closed | Allow sub tasks reordering | enhancement high priority in progress | Would it be possible to allow sub tasks to move up or down with K and J? | 1.0 | Allow sub tasks reordering - Would it be possible to allow sub tasks to move up or down with K and J? | priority | allow sub tasks reordering would be possible to allow sub tasks to move up or down with k and j | 1 |
547,637 | 16,044,143,832 | IssuesEvent | 2021-04-22 11:41:47 | apluslms/mooc-grader | https://api.github.com/repos/apluslms/mooc-grader | closed | Question choice label does not connect to the input | area: UX student area: questionnaire effort: hours experience: good first issue priority: high requester: internal type: bug | Questionnaire HTML forms have labels that have outdated `for` attribute values. This concerns at least checkbox and radio button questions. The `for` value is not the same as the `id` attribute of the input and thus, clicking on the label does not tick the checkbox.
The id attributes were changed in A+ v1.8 (January 2021). The new id contains a random part so that the id is unique in the DOM even when the page (A+ content chapter) includes multiple exercises of the same type. The current label `for` values match the old ids that are not used any longer (the old ids were probably the Django default values).
Relevant code:
https://github.com/apluslms/mooc-grader/blob/2497d7874335e5cb65dea3fe50c6304874d14a8c/access/templates/access/graded_form.html#L124
https://github.com/apluslms/mooc-grader/blob/2497d7874335e5cb65dea3fe50c6304874d14a8c/access/types/forms.py#L46
| 1.0 | Question choice label does not connect to the input - Questionnaire HTML forms have labels that have outdated `for` attribute values. This concerns at least checkbox and radio button questions. The `for` value is not the same as the `id` attribute of the input and thus, clicking on the label does not tick the checkbox.
The id attributes were changed in A+ v1.8 (January 2021). The new id contains a random part so that the id is unique in the DOM even when the page (A+ content chapter) includes multiple exercises of the same type. The current label `for` values match the old ids that are not used any longer (the old ids were probably the Django default values).
Relevant code:
https://github.com/apluslms/mooc-grader/blob/2497d7874335e5cb65dea3fe50c6304874d14a8c/access/templates/access/graded_form.html#L124
https://github.com/apluslms/mooc-grader/blob/2497d7874335e5cb65dea3fe50c6304874d14a8c/access/types/forms.py#L46
| priority | question choice label does not connect to the input questionnaire html forms have labels that have outdated for attribute values this concerns at least checkbox and radio button questions the for value is not the same as the id attribute of the input and thus clicking on the label does not tick the checkbox the id attributes were changed in a january the new id contains a random part so that the id is unique in the dom even when the page a content chapter includes multiple exercises of the same type the current label for values match the old ids that are not used any longer the old ids were probably the django default values relevant code | 1 |
503,409 | 14,591,271,037 | IssuesEvent | 2020-12-19 12:09:18 | space-wizards/RobustToolbox | https://api.github.com/repos/space-wizards/RobustToolbox | closed | Defer move events out of TransformComponent.HandleComponentState() | Feature: Entities Feature: Networking Priority: 1-high Type: Bug | I just had a crash because an entity with a snap grid component got its states applied before the grid, and the snap grid's event handler for updating position hit an assert because the grid hadn't been given data yet.
Reproduction is to apply the following change, doing `cvar net.pvs 0` server side and then run a `restartround`, as far as I can tell:
```diff
index 91de2d007..ed8046ce1 100644
--- a/Robust.Server/GameObjects/ServerEntityManager.cs
+++ b/Robust.Server/GameObjects/ServerEntityManager.cs
@@ -12,6 +12,7 @@ using Robust.Server.Interfaces.Timing;
using Robust.Shared;
using Robust.Shared.GameObjects;
using Robust.Shared.GameObjects.Components;
+using Robust.Shared.GameObjects.Components.Map;
using Robust.Shared.GameObjects.Components.Transform;
using Robust.Shared.Interfaces.Configuration;
using Robust.Shared.Interfaces.GameObjects;
@@ -165,7 +166,7 @@ namespace Robust.Server.GameObjects
}
// no point sending an empty collection
- return stateEntities.Count == 0 ? default : stateEntities;
+ return stateEntities.Count == 0 ? default : stateEntities.OrderByDescending(e => GetEntity(e.Uid).HasComponent<IMapGridComponent>()).ToList();
}
private readonly Dictionary<IPlayerSession, SortedSet<EntityUid>> _seenMovers
``` | 1.0 | Defer move events out of TransformComponent.HandleComponentState() - I just had a crash because an entity with a snap grid component got its states applied before the grid, and the snap grid's event handler for updating position hit an assert because the grid hadn't been given data yet.
Reproduction is to apply the following change, doing `cvar net.pvs 0` server side and then run a `restartround`, as far as I can tell:
```diff
index 91de2d007..ed8046ce1 100644
--- a/Robust.Server/GameObjects/ServerEntityManager.cs
+++ b/Robust.Server/GameObjects/ServerEntityManager.cs
@@ -12,6 +12,7 @@ using Robust.Server.Interfaces.Timing;
using Robust.Shared;
using Robust.Shared.GameObjects;
using Robust.Shared.GameObjects.Components;
+using Robust.Shared.GameObjects.Components.Map;
using Robust.Shared.GameObjects.Components.Transform;
using Robust.Shared.Interfaces.Configuration;
using Robust.Shared.Interfaces.GameObjects;
@@ -165,7 +166,7 @@ namespace Robust.Server.GameObjects
}
// no point sending an empty collection
- return stateEntities.Count == 0 ? default : stateEntities;
+ return stateEntities.Count == 0 ? default : stateEntities.OrderByDescending(e => GetEntity(e.Uid).HasComponent<IMapGridComponent>()).ToList();
}
private readonly Dictionary<IPlayerSession, SortedSet<EntityUid>> _seenMovers
``` | priority | defer move events out of transformcomponent handlecomponentstate i just had a crash because an entity with a snap grid component got its states applied before the grid and the snap grid s event handler for updating position hit an assert because the grid hadn t been given data yet reproduction is to apply the following change doing cvar net pvs server side and then run a restartround as far as i can tell diff index a robust server gameobjects serverentitymanager cs b robust server gameobjects serverentitymanager cs using robust server interfaces timing using robust shared using robust shared gameobjects using robust shared gameobjects components using robust shared gameobjects components map using robust shared gameobjects components transform using robust shared interfaces configuration using robust shared interfaces gameobjects namespace robust server gameobjects no point sending an empty collection return stateentities count default stateentities return stateentities count default stateentities orderbydescending e getentity e uid hascomponent tolist private readonly dictionary seenmovers | 1 |
206,789 | 7,121,239,178 | IssuesEvent | 2018-01-19 06:36:28 | carbon-design-system/carbon-components-react | https://api.github.com/repos/carbon-design-system/carbon-components-react | closed | Is there any way to reset range DatePicker ? | bug priority: high | <!-- Feel free to remove sections that aren't relevant.
## Title line template: [Title]: Brief description
-->
## Detailed description
Describe in detail the issue you're having. Is this a feature request (new component, new icon), a bug, or a general issue?
Now I need to reset DataPicker (range type) - it means after I select a date range, I need to reset it to the initial state without re-rendering the component.
> Is this issue related to a specific component?
DatePicker/DatePickerInput
> What did you expect to happen? What happened instead? What would you like to see changed?
`<DatePickerInput
labelText="Start date"
onChange={this.handleDateChange}
onClick={this.handleFromToClick}
placeholder="mm/dd/yyyy"
id="fromDate"
value={this.state.startDate}
/>`
Now I bind a property value with a property "startDate" in local state, by reset startDate value in local state, the input text is gone, but in the popup, the date range is still there. The expected behavior is there should be a way to "completely" reset the component - the date range in popup should be reset as well.

> What browser are you working in?
Chrome
> What version of the Carbon Design System are you using?
7.26.10
> What offering/product do you work on? Any pressing ship or release dates we should be aware of?
Yes. It is better to be fixed ASAP.
## Steps to reproduce the issue
## Additional information
* Screenshots or code
* Notes
## Add labels
Please choose the appropriate label(s) from our existing label list to ensure that your issue is properly categorized. This will help us to better understand and address your issue.
| 1.0 | Is there any way to reset range DatePicker ? - <!-- Feel free to remove sections that aren't relevant.
## Title line template: [Title]: Brief description
-->
## Detailed description
Describe in detail the issue you're having. Is this a feature request (new component, new icon), a bug, or a general issue?
Now I need to reset DataPicker (range type) - it means after I select a date range, I need to reset it to the initial state without re-rendering the component.
> Is this issue related to a specific component?
DatePicker/DatePickerInput
> What did you expect to happen? What happened instead? What would you like to see changed?
`<DatePickerInput
labelText="Start date"
onChange={this.handleDateChange}
onClick={this.handleFromToClick}
placeholder="mm/dd/yyyy"
id="fromDate"
value={this.state.startDate}
/>`
Now I bind a property value with a property "startDate" in local state, by reset startDate value in local state, the input text is gone, but in the popup, the date range is still there. The expected behavior is there should be a way to "completely" reset the component - the date range in popup should be reset as well.

> What browser are you working in?
Chrome
> What version of the Carbon Design System are you using?
7.26.10
> What offering/product do you work on? Any pressing ship or release dates we should be aware of?
Yes. It is better to be fixed ASAP.
## Steps to reproduce the issue
## Additional information
* Screenshots or code
* Notes
## Add labels
Please choose the appropriate label(s) from our existing label list to ensure that your issue is properly categorized. This will help us to better understand and address your issue.
| priority | is there any way to reset range datepicker feel free to remove sections that aren t relevant title line template brief description detailed description describe in detail the issue you re having is this a feature request new component new icon a bug or a general issue now i need to reset datapicker range type it means after i select a date range i need to reset it to the initial state without re rendering the component is this issue related to a specific component datepicker datepickerinput what did you expect to happen what happened instead what would you like to see changed datepickerinput labeltext start date onchange this handledatechange onclick this handlefromtoclick placeholder mm dd yyyy id fromdate value this state startdate now i bind a property value with a property startdate in local state by reset startdate value in local state the input text is gone but in the popup the date range is still there the expected behavior is there should be a way to completely reset the component the date range in popup should be reset as well what browser are you working in chrome what version of the carbon design system are you using what offering product do you work on any pressing ship or release dates we should be aware of yes it is better to be fixed asap steps to reproduce the issue additional information screenshots or code notes add labels please choose the appropriate label s from our existing label list to ensure that your issue is properly categorized this will help us to better understand and address your issue | 1 |
413,804 | 12,092,366,727 | IssuesEvent | 2020-04-19 15:22:49 | Rammelkast/AntiCheatReloaded | https://api.github.com/repos/Rammelkast/AntiCheatReloaded | closed | False positive with Elytra | false-positive high priority | When sprintjumping and subsequently using an elytra, ACR sometimes gives the following false positive:
USER ascended 8 times in a row (max = 8)
caused by false positive in the checkAscension method (Backend.java line 408)
| 1.0 | False positive with Elytra - When sprintjumping and subsequently using an elytra, ACR sometimes gives the following false positive:
USER ascended 8 times in a row (max = 8)
caused by false positive in the checkAscension method (Backend.java line 408)
| priority | false positive with elytra when sprintjumping and subsequently using an elytra acr sometimes gives the following false positive user ascended times in a row max caused by false positive in the checkascension method backend java line | 1 |
747,735 | 26,096,859,135 | IssuesEvent | 2022-12-26 21:37:38 | bounswe/bounswe2022group4 | https://api.github.com/repos/bounswe/bounswe2022group4 | closed | Backend: Create tests for the text annotations | Category - Enhancement Priority - High Status: Completed Difficulty - Hard Backend Team - Backend | Description:
Tests are essential in the software development lifecycle. We need to create tests for the text annotation endpoints.
Steps:
- Create a test for TextAnnotationAPIView. It has only one endpoint GET. We can check the status code and the content of the response.
- Create a test for PostTextAnnotationAPIView. It has two endpoints GET and POST. We can check the status codes and the content of the responses.
Deadline: 25.12.2022 23.59 | 1.0 | Backend: Create tests for the text annotations - Description:
Tests are essential in the software development lifecycle. We need to create tests for the text annotation endpoints.
Steps:
- Create a test for TextAnnotationAPIView. It has only one endpoint GET. We can check the status code and the content of the response.
- Create a test for PostTextAnnotationAPIView. It has two endpoints GET and POST. We can check the status codes and the content of the responses.
Deadline: 25.12.2022 23.59 | priority | backend create tests for the text annotations description tests are essential in the software development lifecycle we need to create tests for the text annotation endpoints steps create a test for textannotationapiview it has only one endpoint get we can check the status code and the content of the response create a test for posttextannotationapiview it has two endpoints get and post we can check the status codes and the content of the responses deadline | 1 |
631,427 | 20,151,624,509 | IssuesEvent | 2022-02-09 12:58:43 | ita-social-projects/horondi_client_fe | https://api.github.com/repos/ita-social-projects/horondi_client_fe | closed | [Product Detail Page] Inconsistent product prices on Category page and Product detail page | bug UI priority: high severity: trivial | **Environment:** Windows 10 64bit, Firefox 91.0.4472.124 64bit
**Reproducible:** always
**Preconditions**
Go to: https://horondi-front-staging.azurewebsites.net/
**Description:**
**Steps to reproduce**
1. Click to the menu on the left corner on the top of the pages
2. Choose any point on "BAGS" tab (e.g. BANANA BAGS)
3. Pay attention to the price of the good (e.g. Banana bag-red)
3. Open "Banana bag-red" product detail page
4. Pay attention to the price of the good
**Actual result**
Inconsistent prices are displayed


**Expected result**
Product prices are appropriate
| 1.0 | [Product Detail Page] Inconsistent product prices on Category page and Product detail page - **Environment:** Windows 10 64bit, Firefox 91.0.4472.124 64bit
**Reproducible:** always
**Preconditions**
Go to: https://horondi-front-staging.azurewebsites.net/
**Description:**
**Steps to reproduce**
1. Click to the menu on the left corner on the top of the pages
2. Choose any point on "BAGS" tab (e.g. BANANA BAGS)
3. Pay attention to the price of the good (e.g. Banana bag-red)
3. Open "Banana bag-red" product detail page
4. Pay attention to the price of the good
**Actual result**
Inconsistent prices are displayed


**Expected result**
Product prices are appropriate
| priority | inconsistent of product prices on category page and product detail page environment windows firefox reproducible always preconditions go to description steps to reproduce click to the menu on the left corner on the top of the pages choose any point on bags tab e g banana bags pay attention to the price of the good e g banana bag red open banana bag red product detail page pay attention to the price of the good actual result inconsistent prices are displayed expected result product prices are appropriate | 1 |
416,429 | 12,146,313,969 | IssuesEvent | 2020-04-24 10:54:37 | luna/enso | https://api.github.com/repos/luna/enso | opened | Build the Transitive Closure of Invalidated External IDs | Category: Compiler Change: Non-Breaking Difficulty: Core Contributor Priority: High Type: Enhancement | ### Summary
<!--
- A summary of the task.
-->
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
### Specification
- [ ] On change, store the previous version of the IR for the module(s) being changed.
- [ ] From the change, determine the internal identifier 'root' that has been changed.
- [ ] Traverse the _previous_ IR representation, and gather the transitive closure of _external_ IDs that have been invalidated (using the results of dataflow analysis).
- [ ] If a node has an external ID, it should be added to the result set.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
| 1.0 | Build the Transitive Closure of Invalidated External IDs - ### Summary
<!--
- A summary of the task.
-->
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
### Specification
- [ ] On change, store the previous version of the IR for the module(s) being changed.
- [ ] From the change, determine the internal identifier 'root' that has been changed.
- [ ] Traverse the _previous_ IR representation, and gather the transitive closure of _external_ IDs that have been invalidated (using the results of dataflow analysis).
- [ ] If a node has an external ID, it should be added to the result set.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
| priority | build the transitive closure of invalidated external ids summary a summary of the task value this section should describe the value of this task this value can be for users to the team etc specification on change store the previous version of the ir for the module s being changed from the change determine the internal identifier root that has been changed traverse the previous ir representation and gather the transitive closure of external ids that have been invalidated using the results of dataflow analysis if a node has an external id it should be added to the result set acceptance criteria test cases any criteria that must be satisfied for the task to be accepted the test plan for the feature related to the acceptance criteria | 1 |
760,812 | 26,657,510,618 | IssuesEvent | 2023-01-25 18:02:28 | ThinkR-open/checkhelper | https://api.github.com/repos/ThinkR-open/checkhelper | closed | Final checks for the CRAN submission | priority: high chore project ok | ## Validation criteria
- [ ] Arthur added to DESCRIPTION

- [ ] `check_clean_userspace`
- [ ] has a default parameter

- [ ] has an example with a variable

- [ ] The instructions/checks in the `dev_history.Rmd` file have been run and are OK:
## How, technically?
Run everything in `dev_history.Rmd`
```
# Check package coverage
covr::package_coverage()
# _Check in interactive test-inflate for templates and Addins
pkgload::load_all()
devtools::test()
testthat::test_dir("tests/testthat/")
# Test no output generated in the user files
# Run examples in interactive mode too
devtools::run_examples()
# Check that the state is clean after check
checkhelper::check_clean_userspace(pkg = ".")
# Check package as CRAN
rcmdcheck::rcmdcheck(args = c("--no-manual", "--as-cran"))
# devtools::check(args = c("--no-manual", "--as-cran"))
# Check content
# remotes::install_github("ThinkR-open/checkhelper")
checkhelper::find_missing_tags() # Every function must have either `@noRd` or `@export`
checkhelper::check_clean_userspace(pkg = ".")
checkhelper::check_as_cran()
# Check spelling
# usethis::use_spell_check()
spelling::spell_check_package()
# Check URL are correct
# remotes::install_github("r-lib/urlchecker")
urlchecker::url_check()
urlchecker::url_update()
# Upgrade version number
usethis::use_version(which = c("patch", "minor", "major", "dev")[2])
# check on other distributions
# _rhub
# devtools::check_rhub()
rhub::platforms()
rhub::check_on_windows(check_args = "--force-multiarch", show_status = FALSE)
rhub::check_on_solaris(show_status = FALSE)
rhub::check(platform = "debian-clang-devel", show_status = FALSE)
rhub::check(platform = "debian-gcc-devel", show_status = FALSE)
rhub::check(platform = "fedora-clang-devel", show_status = FALSE)
rhub::check(platform = "macos-highsierra-release-cran", show_status = FALSE)
rhub::check_for_cran(show_status = FALSE)
# _win devel
devtools::check_win_devel()
devtools::check_win_release()
# remotes::install_github("r-lib/devtools")
devtools::check_mac_release() # Need to follow the URL proposed to see the results
# Update NEWS
# Bump version manually and add list of changes
# Add comments for CRAN
# Need to .gitignore this file
usethis::use_cran_comments(open = rlang::is_interactive())
# Upgrade version number if necessary
usethis::use_version(which = c("patch", "minor", "major", "dev")[1])
# Verify you're ready for release, and release
devtools::release()
```
| 1.0 | Final checks for the CRAN submission - ## Validation criteria
- [ ] Arthur added to DESCRIPTION

- [ ] `check_clean_userspace`
- [ ] has a default parameter

- [ ] has an example with a variable

- [ ] The instructions/checks in the `dev_history.Rmd` file have been run and are OK:
## How, technically?
Run everything in `dev_history.Rmd`
```
# Check package coverage
covr::package_coverage()
# _Check in interactive test-inflate for templates and Addins
pkgload::load_all()
devtools::test()
testthat::test_dir("tests/testthat/")
# Test no output generated in the user files
# Run examples in interactive mode too
devtools::run_examples()
# Check that the state is clean after check
checkhelper::check_clean_userspace(pkg = ".")
# Check package as CRAN
rcmdcheck::rcmdcheck(args = c("--no-manual", "--as-cran"))
# devtools::check(args = c("--no-manual", "--as-cran"))
# Check content
# remotes::install_github("ThinkR-open/checkhelper")
checkhelper::find_missing_tags() # Every function must have either `@noRd` or `@export`
checkhelper::check_clean_userspace(pkg = ".")
checkhelper::check_as_cran()
# Check spelling
# usethis::use_spell_check()
spelling::spell_check_package()
# Check URL are correct
# remotes::install_github("r-lib/urlchecker")
urlchecker::url_check()
urlchecker::url_update()
# Upgrade version number
usethis::use_version(which = c("patch", "minor", "major", "dev")[2])
# check on other distributions
# _rhub
# devtools::check_rhub()
rhub::platforms()
rhub::check_on_windows(check_args = "--force-multiarch", show_status = FALSE)
rhub::check_on_solaris(show_status = FALSE)
rhub::check(platform = "debian-clang-devel", show_status = FALSE)
rhub::check(platform = "debian-gcc-devel", show_status = FALSE)
rhub::check(platform = "fedora-clang-devel", show_status = FALSE)
rhub::check(platform = "macos-highsierra-release-cran", show_status = FALSE)
rhub::check_for_cran(show_status = FALSE)
# _win devel
devtools::check_win_devel()
devtools::check_win_release()
# remotes::install_github("r-lib/devtools")
devtools::check_mac_release() # Need to follow the URL proposed to see the results
# Update NEWS
# Bump version manually and add list of changes
# Add comments for CRAN
# Need to .gitignore this file
usethis::use_cran_comments(open = rlang::is_interactive())
# Upgrade version number if necessary
usethis::use_version(which = c("patch", "minor", "major", "dev")[1])
# Verify you're ready for release, and release
devtools::release()
```
| priority | final checks for the cran submission critères de validation arthur ajouté dans description check clean userspace a un paramètre par défaut a un exemple avec une variable les instructions checks du fichier dev history rmd sont exécutes et sont ok comment technique faire passer tout le contenu de dev history rmd check package coverage covr package coverage check in interactive test inflate for templates and addins pkgload load all devtools test testthat test dir tests testthat test no output generated in the user files run examples in interactive mode too devtools run examples check that the state is clean after check checkhelper check clean userspace pkg check package as cran rcmdcheck rcmdcheck args c no manual as cran devtools check args c no manual as cran check content remotes install github thinkr open checkhelper checkhelper find missing tags toutes les fonctions doivent avoir soit nord soit un export checkhelper check clean userspace pkg checkhelper check as cran check spelling usethis use spell check spelling spell check package check url are correct remotes install github r lib urlchecker urlchecker url check urlchecker url update upgrade version number usethis use version which c patch minor major dev check on other distributions rhub devtools check rhub rhub platforms rhub check on windows check args force multiarch show status false rhub check on solaris show status false rhub check platform debian clang devel show status false rhub check platform debian gcc devel show status false rhub check platform fedora clang devel show status false rhub check platform macos highsierra release cran show status false rhub check for cran show status false win devel devtools check win devel devtools check win release remotes install github r lib devtools devtools check mac release need to follow the url proposed to see the results update news bump version manually and add list of changes add comments for cran need to gitignore this file usethis use cran comments open rlang is interactive upgrade version number if necessary usethis use version which c patch minor major dev verify you re ready for release and release devtools release | 1 |
102,811 | 4,161,044,162 | IssuesEvent | 2016-06-17 15:18:03 | ALitttleBitDifferent/AmbientPrologueBugs | https://api.github.com/repos/ALitttleBitDifferent/AmbientPrologueBugs | opened | teleport through solid objects | bug Exploit High Priority | **Submitted by:** Zil
**Description:**
It is possible to make astrums horn poke through a wall/door, allowing spells to be cast through.
**How to reproduce:**
When standing with a wall or door on the right, looking straight down (or sometimes up) while in casting mode will put astrums head through it slightly, allowing the player to teleport into the wall/door and then once again into locked or hidden rooms. | 1.0 | teleport through solid objects - **Submitted by:** Zil
**Description:**
It is possible to make astrums horn poke through a wall/door, allowing spells to be cast through.
**How to reproduce:**
When standing with a wall or door on the right, looking straight down (or sometimes up) while in casting mode will put astrums head through it slightly, allowing the player to teleport into the wall/door and then once again into locked or hidden rooms. | priority | teleport through solid objects submitted by zil description it is possible to make astrums horn poke through a wall door allowing spells to be cast through how to reproduce when standing with a wall or door on the right looking straight down or sometimes up while in casting mode will put astrums head through it slightly allowing the player to teleport into the wall door and then once again into locked or hidden rooms | 1
659,091 | 21,916,230,304 | IssuesEvent | 2022-05-21 21:42:08 | SkriptLang/Skript | https://api.github.com/repos/SkriptLang/Skript | closed | Loop Issue | bug priority: high completed | ### Skript/Server Version
```markdown
[21:34:25 INFO]: [Skript] Skript's aliases can be found here: https://github.com/SkriptLang/skript-aliases
[21:34:25 INFO]: [Skript] Skript's documentation can be found here: https://skriptlang.github.io/Skript
[21:34:25 INFO]: [Skript] Server Version: git-Paper-794 (MC: 1.16.5)
[21:34:25 INFO]: [Skript] Skript Version: 2.6.1
[21:34:25 INFO]: [Skript] Installed Skript Addons:
[21:34:25 INFO]: [Skript] - Repuska v2.8.8
[21:34:25 INFO]: [Skript] Installed dependencies:
[21:34:25 INFO]: [Skript] - Vault v1.7.3-b131
[21:34:25 INFO]: [Skript] - WorldGuard v7.0.5+3827266
```
### Bug Description
Here is my code of looping:
```
command /tx:
trigger:
set {_x} to "123"
set {_y} to "234"
loop {_x} and {_y}:
send loop-value to console
```
the output should be "123" and "234"
but the actual console output is:

### Expected Behavior
Looping multiple variables should produce one iteration per value, i.e. the number of values, not "index of variables^2"
### Steps to Reproduce
```
command /tx:
trigger:
set {_x} to "123"
set {_y} to "234"
set {_z} to "345"
loop {_x}, {_y} and {_z}:
send loop-value to console
```
output screenshot:

```
command /tx:
trigger:
set {_x} to "123"
set {_y} to "234"
loop {_x} and {_y}:
send loop-value to console
```
output screenshot:
![Uploading image.png…]()
### Errors or Screenshots
None
### Other
_No response_
### Agreement
- [X] I have read the guidelines above and affirm I am following them with this report. | 1.0 | Loop Issue - ### Skript/Server Version
```markdown
[21:34:25 INFO]: [Skript] Skript's aliases can be found here: https://github.com/SkriptLang/skript-aliases
[21:34:25 INFO]: [Skript] Skript's documentation can be found here: https://skriptlang.github.io/Skript
[21:34:25 INFO]: [Skript] Server Version: git-Paper-794 (MC: 1.16.5)
[21:34:25 INFO]: [Skript] Skript Version: 2.6.1
[21:34:25 INFO]: [Skript] Installed Skript Addons:
[21:34:25 INFO]: [Skript] - Repuska v2.8.8
[21:34:25 INFO]: [Skript] Installed dependencies:
[21:34:25 INFO]: [Skript] - Vault v1.7.3-b131
[21:34:25 INFO]: [Skript] - WorldGuard v7.0.5+3827266
```
### Bug Description
Here is my code of looping:
```
command /tx:
trigger:
set {_x} to "123"
set {_y} to "234"
loop {_x} and {_y}:
send loop-value to console
```
the output should be "123" and "234"
but the actual console output is:

### Expected Behavior
Looping multiple variables should produce one iteration per value, i.e. the number of values, not "index of variables^2"
### Steps to Reproduce
```
command /tx:
trigger:
set {_x} to "123"
set {_y} to "234"
set {_z} to "345"
loop {_x}, {_y} and {_z}:
send loop-value to console
```
output screenshot:

```
command /tx:
trigger:
set {_x} to "123"
set {_y} to "234"
loop {_x} and {_y}:
send loop-value to console
```
output screenshot:
![Uploading image.png…]()
### Errors or Screenshots
None
### Other
_No response_
### Agreement
- [X] I have read the guidelines above and affirm I am following them with this report. | priority | loop issue skript server version markdown skript s aliases can be found here skript s documentation can be found here server version git paper mc skript version installed skript addons repuska installed dependencies vault worldguard bug description here is my code of looping command tx trigger set x to set y to loop x and y send loop value to console the output should be and but the actual console output is expected behavior times of looping multiple variables should be equal to the index of variables not index of variables steps to reproduce command tx trigger set x to set y to set z to loop x y and z send loop value to console output screenshot command tx trigger set x to set y to loop x and y send loop value to console output screenshot errors or screenshots none other no response agreement i have read the guidelines above and affirm i am following them with this report | 1 |
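The Skript record above reports that looping n values yields n² console outputs. A hypothetical Python model of that symptom (this is not Skript's actual implementation, only an illustration of the difference between visiting each value once and re-walking the combined list per element):

```python
def loop_once(values):
    # Expected behaviour: each value is visited exactly once.
    return [v for v in values]

def loop_reevaluated(values):
    # Model of the reported bug: the combined list is walked again for
    # every element, so n values produce n * n loop iterations.
    out = []
    for _ in values:
        out.extend(values)
    return out
```

With two values this yields 4 outputs and with three values 9, matching the "index of variables^2" behaviour described in the report.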
284,023 | 8,729,624,825 | IssuesEvent | 2018-12-10 20:50:57 | ilmtest/search-engine | https://api.github.com/repos/ilmtest/search-engine | opened | Book coverage stats do not take into consideration to_page | bug priority/high technical-logic | Only checks from_page. It should also take into account to_page-from_page+1 to know how many pages were actually covered. | 1.0 | Book coverage stats do not take into consideration to_page - Only checks from_page. It should also take into account to_page-from_page+1 to know how many pages were actually covered. | priority | book coverage stats do not take into consideration to page only checks from page it should also take into account to page from page to know how many pages were actually covered | 1 |
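The coverage fix described in the record above is simple inclusive-range arithmetic; a minimal sketch (the function name is illustrative, not from the project):

```python
def pages_covered(from_page, to_page):
    # Inclusive span: citing pages 3..5 covers 3 pages, not 1.
    return to_page - from_page + 1
```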
247,703 | 7,922,565,196 | IssuesEvent | 2018-07-05 11:16:46 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | Request fails in scale out scenario for pod to pod networking | area/datapath kind/bug kind/community-report priority/high |
## Bug reports
I have tried to communicate from a pod to pods that are scaling out, using the Service's Cluster IP. In Kubernetes, the Endpoints grouped under one Service can be scaled out in some cases (either manually or via auto-scaling).
In this case, while pods are scaling out, request failures occur.
Request failures should not happen in a scale-out scenario. If the HTTP connection state is not kept in sync, the web server sends an RST packet and the request fails. (It doesn't happen in Calico.)
---
In my humble opinion, it happens because of the hash algorithm.
--> https://github.com/cilium/cilium/blob/master/bpf/lib/lb.h#L184
### Title
Request fails in scale out scenario for pod to pod networking
### General Information
- Cilium version (run `cilium version`
```
Client: 1.0.90 aef051d 2018-05-23T12:01:01-07:00 go version go1.9 linux/amd64
Daemon: 1.0.90 aef051d 2018-05-23T12:01:01-07:00 go version go1.9 linux/amd64
```
- Kernel version (run `uname -a`)
```
4.12.2-1.20180114.el7.centos.x86_64
```
- Orchestration system version in use (e.g. `kubectl version`, Mesos, ...)
```
Kubernetes v1.9.6
```
### How to reproduce the issue
1. Make 2 sets of Deployment and target Service(ClusterIP) in Kubernetes
2. Generate packets(I used locust - https://github.com/kubernetes/charts/tree/master/stable/locust) from a pod to the ClusterIP.
3. Check correct connectivity
4. Try to change Replica in a Deployment. The target Deployment is web servers(SimpleNodeJS...just helloworld)
5. Check request fails
---
If you have any discussion, then please contact me anytime. I'm very interested in Cillium.
Thank you.
| 1.0 | Request fails in scale out scenario for pod to pod networking -
## Bug reports
I have tried to communicate from a pod to pods that are scaling out, using the Service's Cluster IP. In Kubernetes, the Endpoints grouped under one Service can be scaled out in some cases (either manually or via auto-scaling).
In this case, while pods are scaling out, request failures occur.
Request failures should not happen in a scale-out scenario. If the HTTP connection state is not kept in sync, the web server sends an RST packet and the request fails. (It doesn't happen in Calico.)
---
In my humble opinion, it happens because of the hash algorithm.
--> https://github.com/cilium/cilium/blob/master/bpf/lib/lb.h#L184
### Title
Request fails in scale out scenario for pod to pod networking
### General Information
- Cilium version (run `cilium version`
```
Client: 1.0.90 aef051d 2018-05-23T12:01:01-07:00 go version go1.9 linux/amd64
Daemon: 1.0.90 aef051d 2018-05-23T12:01:01-07:00 go version go1.9 linux/amd64
```
- Kernel version (run `uname -a`)
```
4.12.2-1.20180114.el7.centos.x86_64
```
- Orchestration system version in use (e.g. `kubectl version`, Mesos, ...)
```
Kubernetes v1.9.6
```
### How to reproduce the issue
1. Make 2 sets of Deployment and target Service(ClusterIP) in Kubernetes
2. Generate packets(I used locust - https://github.com/kubernetes/charts/tree/master/stable/locust) from a pod to the ClusterIP.
3. Check correct connectivity
4. Try to change Replica in a Deployment. The target Deployment is web servers(SimpleNodeJS...just helloworld)
5. Check request fails
---
If you have any discussion, then please contact me anytime. I'm very interested in Cillium.
Thank you.
| priority | request fails in scale out scenario for pod to pod networking bug reports i have tried to communicate from a pod to pods which are scaling out at a moment using service s cluster ip in kubernetes grouped endpoints as one service can be scaled out in some cases either manually and auto scaling in this case when pods are scaling out requests fail must happen request fails should not be happened in scaling out scenario if connection sync is not correct in http web server sends rst packet then request fails it doesn t happen in calico in my polite opinion it happens because of hash algorithm title request fails in scale out scenario for pod to pod networking general information cilium version run cilium version client go version linux daemon go version linux kernel version run uname a centos orchestration system version in use e g kubectl version mesos kubernetes how to reproduce the issue make sets of deployment and target service clusterip in kubernetes generate packets i used locust from a pod to the clusterip check correct connectivity try to change replica in a deployment the target deployment is web servers simplenodejs just helloworld check request fails if you have any discussion then please contact me anytime i m very interested in cillium thank you | 1 |
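The Cilium report above blames the load-balancer hash for failures during scale-out. A generic way this happens (not Cilium's actual lb.h logic) is naive modulo selection, where changing the backend count remaps most existing connections mid-flight:

```python
def pick_backend(conn_id, n_backends):
    # Naive selection in the spirit of hash % n.
    return conn_id % n_backends

def remapped_fraction(conn_ids, old_n, new_n):
    # Fraction of in-flight connections whose backend changes when a
    # Service scales from old_n to new_n endpoints.
    moved = sum(1 for c in conn_ids
                if pick_backend(c, old_n) != pick_backend(c, new_n))
    return moved / len(conn_ids)
```

Under this scheme, scaling from 2 to 3 backends remaps roughly two thirds of the connections, which is why consistent-hashing style selection is preferred when backends change while traffic is flowing.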
54,005 | 3,058,455,089 | IssuesEvent | 2015-08-14 08:19:35 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | closed | Location page: random number of datasets on page | bug Priority-High | For different locations there is a random number of datasets that are displayed on each page (same for testing and prod servers)


| 1.0 | Location page: random number of datasets on page - For different locations there is a random number of datasets that are displayed on each page (same for testing and prod servers)


| priority | location page random number of datasets on page for different locations there is a random number of datasets that are displayed on each page same for testing and prod servers | 1 |
683,625 | 23,389,183,601 | IssuesEvent | 2022-08-11 16:08:33 | insightsengineering/tern.mmrm | https://api.github.com/repos/insightsengineering/tern.mmrm | closed | [UAT] Test fitting of rank-deficient models | sme high priority | UAT the option `accept_singular` in `fit_mmrm()` which allows to estimate rank-deficient models (similar as `lm()` and `gls()` do) by omitting singular coefficients.
To do:
- [x] Use pre-release branch on enableR (see Dinakar's instructions)
- [x] See `?fit_mmrm` for description of the `accept_singular` and check if that is clear
- [ ] Play around with the argument and results and see if that works - will need very small data sets, or artificially introduced collinear columns in the formula, to see the effect. | 1.0 | [UAT] Test fitting of rank-deficient models - UAT the option `accept_singular` in `fit_mmrm()` which allows to estimate rank-deficient models (similar as `lm()` and `gls()` do) by omitting singular coefficients.
To do:
- [x] Use pre-release branch on enableR (see Dinakar's instructions)
- [x] See `?fit_mmrm` for description of the `accept_singular` and check if that is clear
- [ ] Play around with the argument and results and see if that works - will need very small data sets, or artificially introduced collinear columns in the formula, to see the effect. | priority | test fitting of rank deficient models uat the option accept singular in fit mmrm which allows to estimate rank deficient models similar as lm and gls do by omitting singular coefficients to do use pre release branch on enabler see dinakar s instructions see fit mmrm for description of the accept singular and check if that is clear play around with the argument and results and see if that works will need very small data sets or artificially introduced collinear columns in the formula to see the effect | 1 |
468,751 | 13,489,793,535 | IssuesEvent | 2020-09-11 14:19:09 | tal3898/Hummus | https://api.github.com/repos/tal3898/Hummus | closed | add support button, in the bottom left | Done new feature priority - high | add a floating button in the bottom left of the windows.
display there my name, and my email (טל?? + ctrl + k)
make a "chatbot" there, that writes only:
חמור איך אני יכול לענות לך פה? תשלח לי מייל
| 1.0 | add support button, in the bottom left - add a floating button in the bottom left of the windows.
display there my name, and my email (טל?? + ctrl + k)
make a "chatbot" there, that writes only:
חמור איך אני יכול לענות לך פה? תשלח לי מייל
| priority | add support button in the bottom left add a floating button in the bottom left of the windows display there my name and my email טל ctrl k make a chatbot there that writes only חמור איך אני יכול לענות לך פה תשלח לי מייל | 1 |
705,153 | 24,223,552,953 | IssuesEvent | 2022-09-26 12:51:10 | joomlahenk/fabrik | https://api.github.com/repos/joomlahenk/fabrik | closed | Overloading no longer work in J!4.2.2. Please test. | help wanted High Priority | List filters were working in J!4.1.5, but no longer works in J!4.2.2. I also get:
Notice: Indirect modification of overloaded property FabrikFEModelList::$_whereSQL has no effect in .../components/com_fabrik/models/list.php on line 3579
Changed all $this->_whereSQL into $whereSQL and added property public $whereSQL = array();
Now list filters work again, but I am not sure if this is a final solution. May cause other issues?
| 1.0 | Overloading no longer work in J!4.2.2. Please test. - List filters were working in J!4.1.5, but no longer works in J!4.2.2. I also get:
Notice: Indirect modification of overloaded property FabrikFEModelList::$_whereSQL has no effect in .../components/com_fabrik/models/list.php on line 3579
Changed all $this->_whereSQL into $whereSQL and added property public $whereSQL = array();
Now list filters work again, but I am not sure if this is a final solution. May cause other issues?
| priority | overloading no longer work in j please test list filters were working in j but no longer works in j i also get notice indirect modification of overloaded property fabrikfemodellist wheresql has no effect in components com fabrik models list php on line changed all this wheresql into wheresql and added property public wheresql array now list filters work again but i am not sure if this is a final solution may cause other issues | 1 |
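The PHP notice quoted in the record above ("Indirect modification of overloaded property has no effect") occurs because a magic getter returns a temporary value, so mutating it never touches the underlying storage. The same pitfall can be modelled in Python with a `__getattr__` that returns a copy (a conceptual analogue, not Fabrik's code):

```python
class OverloadedList:
    def __init__(self):
        self._data = {"whereSQL": []}

    def __getattr__(self, name):
        if name.startswith("_"):
            raise AttributeError(name)
        # Returns a *copy*: obj.whereSQL.append(...) mutates a temporary
        # and is silently lost (the "has no effect" notice).
        return list(self._data[name])


class PlainList:
    def __init__(self):
        # The reporter's fix: a real attribute, mutated in place.
        self.whereSQL = []
```

This is why switching from the overloaded `$this->_whereSQL` to a plain `public $whereSQL = array();` makes the filters work again.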
45,656 | 2,938,000,670 | IssuesEvent | 2015-07-01 07:52:49 | ndomar/megasoft-13 | https://api.github.com/repos/ndomar/megasoft-13 | closed | As a designer I should be able to react to events by adding a navigation link, animation or alert. | Component-1 Points-5 Priority-High | @Hoss93 @MennaAshraf @mayaammar @ahmadsoliman
This story is concerned with the events panel in terms of adding different actions to react to various events.
The following has to be added:
1- Listing different animations and events.
2- Defining an animation on a certain property
3- Validate the entered data.
Success scenarios:
1- Event and action are saved into the component's html.
Failure Scenario:
1- The action is not applied if it doesn't pass the validation procedure. | 1.0 | As a designer I should be able to react to events by adding a navigation link, animation or alert. - @Hoss93 @MennaAshraf @mayaammar @ahmadsoliman
This story is concerned with the events panel in terms of adding different actions to react to various events.
The following has to be added:
1- Listing different animations and events.
2- Defining an animation on a certain property
3- Validate the entered data.
Success scenarios:
1- Event and action are saved into the component's html.
Failure Scenario:
1- The action is not applied if it doesn't pass the validation procedure. | priority | as a designer i should be able to react to events by adding a navigation link animation or alert mennaashraf mayaammar ahmadsoliman this story is concerned with the events panel in terms of adding different actions to react to various events the following has to be added listing different animations and events defining an animation on a certain property validate the entered data success scenarios event and action are saved into the component s html failure scenario the action is not applied if it doesn t pass the validation procedure | 1 |
484,308 | 13,937,802,228 | IssuesEvent | 2020-10-22 14:33:03 | prysmaticlabs/prysm | https://api.github.com/repos/prysmaticlabs/prysm | closed | Beacon Node Still Returns Out of Order Blocks From Range Requests | Bug Priority: High Sync | # 🐞 Bug Report
### Description
Currently Lighthouse Peers score Prysm Nodes down due to the fact that prysm nodes occasionally
send back out of order blocks.
### Has this worked before in a previous version?
Supposed to.
## 🔬 Minimal Reproduction
Run a Prysm Node in medalla, and after a while lighthouse nodes start banning them.
## 🔥 Error
```
DEBUG sync: Peer has sent a goodbye message Reason=unknown goodbye value of 251 Received
```
Consistent goodbyes received from lighthouse nodes.
## 🌍 Your Environment
**Operating System:**
Ubuntu
**What version of Prysm are you running? (Which release)**
https://github.com/prysmaticlabs/prysm/commit/390a589afb6a38925b2d6da25a3df3d4f84a9bf7
**Anything else relevant (validator index / public key)?**
| 1.0 | Beacon Node Still Returns Out of Order Blocks From Range Requests - # 🐞 Bug Report
### Description
Currently Lighthouse Peers score Prysm Nodes down due to the fact that prysm nodes occasionally
send back out of order blocks.
### Has this worked before in a previous version?
Supposed to.
## 🔬 Minimal Reproduction
Run a Prysm Node in medalla, and after a while lighthouse nodes start banning them.
## 🔥 Error
```
DEBUG sync: Peer has sent a goodbye message Reason=unknown goodbye value of 251 Received
```
Consistent goodbyes received from lighthouse nodes.
## 🌍 Your Environment
**Operating System:**
Ubuntu
**What version of Prysm are you running? (Which release)**
https://github.com/prysmaticlabs/prysm/commit/390a589afb6a38925b2d6da25a3df3d4f84a9bf7
**Anything else relevant (validator index / public key)?**
| priority | beacon node still returns out of order blocks from range requests 🐞 bug report description currently lighthouse peers score prysm nodes down due to the fact that prysm nodes occasionally send back out of order blocks has this worked before in a previous version supposed to 🔬 minimal reproduction run a prysm node in medalla and after a while lighthouse nodes start banning them 🔥 error debug sync peer has sent a goodbye message reason unknown goodbye value of received consistent goodbyes received from lighthouse nodes 🌍 your environment operating system ubuntu what version of prysm are you running which release anything else relevant validator index public key | 1 |
141,526 | 5,437,233,518 | IssuesEvent | 2017-03-06 05:54:55 | uryoya/molt | https://api.github.com/repos/uryoya/molt | closed | マジックナンバーのソースコードベタ書きを修正したい | high priority refactoring | - [x] Redis Server HOST
- [x] Redis Server PORT
- [x] Flask DEBUG mode
- [x] Flask TEST mode
- [x] Flask HOST
- [x] Flask PORT
他にもあったら追記します | 1.0 | マジックナンバーのソースコードベタ書きを修正したい - - [x] Redis Server HOST
- [x] Redis Server PORT
- [x] Flask DEBUG mode
- [x] Flask TEST mode
- [x] Flask HOST
- [x] Flask PORT
他にもあったら追記します | priority | マジックナンバーのソースコードベタ書きを修正したい redis server host redis server port flask debug mode flask test mode flask host flask port 他にもあったら追記します | 1 |
89,966 | 3,807,427,595 | IssuesEvent | 2016-03-25 08:17:42 | Captianrock/android_PV | https://api.github.com/repos/Captianrock/android_PV | closed | User Authentication | High Priority New Feature | As a user of the web client, I would like to authenticate myself when logging in. | 1.0 | User Authentication - As a user of the web client, I would like to authenticate myself when logging in. | priority | user authentication as a user of the web client i would like to authenticate myself when logging in | 1 |
559,342 | 16,556,541,149 | IssuesEvent | 2021-05-28 14:31:51 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | profile.snapchat.com - site is not usable | browser-firefox-ios bugbug-probability-high os-ios priority-normal | <!-- @browser: Firefox iOS 33.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 13_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.1 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/74607 -->
**URL**: https://profile.snapchat.com/_popup?redirect=https%3A%2F%2Faccounts.snapchat.com%2Faccounts%2Flogin%3Fclient_id%3Dads-api%26referrer%3Dhttps%25253A%25252F%25252Fprofile.snapchat.com%25252F_popup%25253FparentId%25253D3d81e660-c9e9-409e-b35a-94a047fdc23d%252526close%25253Dtrue%252526callback%25253DonPopupCalled%26business%3Dtrue%26skip_login%3Dtrue%26multi_user%3Dtrue%26ignore_welcome_email%3Dfalse
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 13.3
**Tested Another Browser**: Yes Safari
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Won’t load at all just blank page
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | profile.snapchat.com - site is not usable - <!-- @browser: Firefox iOS 33.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 13_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.1 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/74607 -->
**URL**: https://profile.snapchat.com/_popup?redirect=https%3A%2F%2Faccounts.snapchat.com%2Faccounts%2Flogin%3Fclient_id%3Dads-api%26referrer%3Dhttps%25253A%25252F%25252Fprofile.snapchat.com%25252F_popup%25253FparentId%25253D3d81e660-c9e9-409e-b35a-94a047fdc23d%252526close%25253Dtrue%252526callback%25253DonPopupCalled%26business%3Dtrue%26skip_login%3Dtrue%26multi_user%3Dtrue%26ignore_welcome_email%3Dfalse
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 13.3
**Tested Another Browser**: Yes Safari
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Won’t load at all just blank page
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | profile snapchat com site is not usable url browser version firefox ios operating system ios tested another browser yes safari problem type site is not usable description page not loading correctly steps to reproduce won’t load at all just blank page browser configuration none from with ❤️ | 1 |
404,273 | 11,854,543,154 | IssuesEvent | 2020-03-25 01:10:45 | art-community/ART | https://api.github.com/repos/art-community/ART | closed | Bug with HTTP interceptors | HTTP server bug good first issue help wanted high priority | 
When an HTTP interceptor returns the 'STOP_HANDLING' strategy, none of the subsequent HTTP Filters (including servlets) will be called.
This issue calls for refactoring the HttpServer class and replacing the following result holders
```
private final static ThreadLocal<InterceptionStrategy> lastRequestInterceptionResult = new ThreadLocal<>();
private final static ThreadLocal<InterceptionStrategy> lastResponseInterceptionResult = new ThreadLocal<>();
```
for something safe and simple.
We need to clear last interception strategy between requests. | 1.0 | Bug with HTTP interceptors - 
When an HTTP interceptor returns the 'STOP_HANDLING' strategy, none of the subsequent HTTP Filters (including servlets) will be called.
This issue calls for refactoring the HttpServer class and replacing the following result holders
```
private final static ThreadLocal<InterceptionStrategy> lastRequestInterceptionResult = new ThreadLocal<>();
private final static ThreadLocal<InterceptionStrategy> lastResponseInterceptionResult = new ThreadLocal<>();
```
for something safe and simple.
We need to clear last interception strategy between requests. | priority | bug with http interceptors when http interceptor returns stop handling strategy then no one from subsequent http filters include servlets will be called in this issue needs to refactor httpserver class and replace next result holders private final static threadlocal lastrequestinterceptionresult new threadlocal private final static threadlocal lastresponseinterceptionresult new threadlocal for something safe and simple we need to clear last interception strategy between requests | 1 |
295,762 | 9,100,872,949 | IssuesEvent | 2019-02-20 09:41:39 | ESA-VirES/WebClient-Framework | https://api.github.com/repos/ESA-VirES/WebClient-Framework | closed | 'time selection outside the model validity' warning not displayed for the new Swarm SHA L2 models. | bug priority high | The recently added Swarm L2 SHA models, like the older models, are limited by the validity period.
A warning should be displayed when the time selection exceeds this validity period, just like it is done for the older models.
| 1.0 | 'time selection outside the model validity' warning not displayed for the new Swarm SHA L2 models. - The recently added Swarm L2 SHA models, like the older models, are limited by the validity period.
A warning should be displayed when the time selection exceeds this validity period, just like it is done for the older models.
| priority | time selection outside the model validity warning not displayed for the new swarm sha models the recently added swarm sha models like the older models are limited by the validity period a warning should be displayed when the time selection exceeds this validity period just like it is done for the older models | 1 |
470,015 | 13,529,699,799 | IssuesEvent | 2020-09-15 18:43:07 | HEXRD/hexrdgui | https://api.github.com/repos/HEXRD/hexrdgui | closed | distortion overhaul | enhancement hedm high priority llnl | I'll put you on notice that I am finally fixing the distortion, which remains either `None` or hard-coded to the 6-parameter GE-style function that is defined in distortion.py
Summary of the changes:
1. distortion.py is moving to a package
2. defining an abc to provide a single interface for the distortion application
3. start a registry of different adapters for the various distortion functions (which have different length parameter lists)
Questions:
- do we want to always have a distortion class generated and just have a null case rather than using `None`?
- different distortion functions will have different length parameter lists; I recognize this creates issues for the GUI -- do we need to generate a separate widget for handing the distortion? Tree view might not care, but the forms view will...
This is related to #418
I'm working this now... | 1.0 | distortion overhaul - I'll put you on notice that I am finally fixing the distortion, which remains to be `None` or hard coded the the 6-parameter GE-style function that is defined in distortion.py
Summary of the changes:
1. distortion.py is moving to a package
2. defining an abc to provide a single interface for the distortion application
3. start a registry of different adapters for the various distortion functions (which have different length parameter lists)
Questions:
- do we want to always have a distortion class generated and just have a null case rather than using `None`?
- different distortion functions will have different length parameter lists; I recognize this creates issues for the GUI -- do we need to generate a separate widget for handing the distortion? Tree view might not care, but the forms view will...
This is related to #418
I'm working this now... | priority | distortion overhaul i ll put you on notice that i am finally fixing the distortion which remains to be none or hard coded the the parameter ge style function that is defined in distortion py summary of the changes distortion py is moving to a package defining an abc to provide a single interface for the distortion application start a registry of different adapters for the various distortion functions which have different length parameter lists questions do we want to always have a distortion class generated and just have a null case rather than using none different distortion functions will have different length parameter lists i recognize this creates issues for the gui do we need to generate a separate widget for handing the distortion tree view might not care but the forms view will this is related to i m working this now | 1 |
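The refactor described in the record above (an abc providing a single interface, plus a registry of adapters, with an explicit null case instead of `None`) can be sketched as follows; class and registry names are illustrative, not hexrdgui's actual API:

```python
from abc import ABC, abstractmethod

DISTORTION_REGISTRY = {}  # hypothetical adapter registry

def register(name):
    def wrap(cls):
        DISTORTION_REGISTRY[name] = cls
        return cls
    return wrap

class Distortion(ABC):
    """Single interface every distortion adapter implements."""
    @abstractmethod
    def apply(self, xy, params=()): ...

@register("null")
class NullDistortion(Distortion):
    # Explicit null object: callers never need to special-case None.
    def apply(self, xy, params=()):
        return xy

@register("offset")
class OffsetDistortion(Distortion):
    # Stand-in for a real function with a fixed-length parameter list
    # (e.g. the 6-parameter GE-style distortion).
    def apply(self, xy, params=()):
        dx, dy = params
        return [(x + dx, y + dy) for x, y in xy]
```

A registry like this lets each adapter declare its own parameter-list length, which speaks to the question in the issue about functions with different-length parameter lists.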
166,909 | 6,314,524,325 | IssuesEvent | 2017-07-24 11:05:36 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | opened | Intermittent icon status in layer tree when loading | enhancement Priority: High task | The animated gif below to clarify the issue:

This issue could be fixed during the TOC review ( issue #2025 )
| 1.0 | Intermittent icon status in layer tree when loading - The animated gif below to clarify the issue:

This issue could be fixed during the TOC review ( issue #2025 )
| priority | intermittent icon status in layer tree when loading the animated gif below to clarify the issue this issue could be fixed during the toc review issue | 1 |
513,805 | 14,926,888,446 | IssuesEvent | 2021-01-24 13:24:14 | bounswe/bounswe2020group4 | https://api.github.com/repos/bounswe/bounswe2020group4 | closed | (BKND) Push Notifications | Backend Coding Effort: High Priority: High Status: In-Progress Task: Assignment | Realtime push notifications should be sent to clients.
Notifications should be stored to be returned in an endpoint.
Deadline: 24/01/21 | 1.0 | (BKND) Push Notifications - Realtime push notifications should be sent to clients.
Notifications should be stored to be returned in an endpoint.
Deadline: 24/01/21 | priority | bknd push notifications realtime push notifications should be sent to clients notifications should be stored to be returned in an endpoint deadline | 1 |
240,789 | 7,805,843,624 | IssuesEvent | 2018-06-11 12:18:35 | ColoredCow/employee-portal | https://api.github.com/repos/ColoredCow/employee-portal | opened | Applicant and college association | HR priority : high | An applicant can belong to a college. Currently, we're tracking the college names. We'll have to associate with the id instead once #414 is done. | 1.0 | Applicant and college association - An applicant can belong to a college. Currently, we're tracking the college names. We'll have to associate with the id instead once #414 is done. | priority | applicant and college association an applicant can belong to a college currently we re tracking the college names we ll have to associate with the id instead once is done | 1 |
563,747 | 16,704,905,501 | IssuesEvent | 2021-06-09 08:50:08 | enix/dothill-csi | https://api.github.com/repos/enix/dothill-csi | opened | volume is considered already unpublished as soon as lsblk cannot find one of the devices | priority/high type/bug | ```
DEBUG: 2021/06/09 08:45:50 iscsi.go:454: An error occured while looking info about SCSI devices: lsblk: /dev/sdd: not a block device, (exit status 64)
W0609 08:45:50.834330 1 node.go:224] assuming that ISCSI connection is already closed: lsblk: /dev/sdd: not a block device, (exit status 64)
``` | 1.0 | volume is considered already unpublished as soon as lsblk cannot find one of the devices - ```
DEBUG: 2021/06/09 08:45:50 iscsi.go:454: An error occured while looking info about SCSI devices: lsblk: /dev/sdd: not a block device, (exit status 64)
W0609 08:45:50.834330 1 node.go:224] assuming that ISCSI connection is already closed: lsblk: /dev/sdd: not a block device, (exit status 64)
``` | priority | volume is considered already unpublished as soon as lsblk cannot find one of the devices debug iscsi go an error occured while looking info about scsi devices lsblk dev sdd not a block device exit status node go assuming that iscsi connection is already closed lsblk dev sdd not a block device exit status | 1 |
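The record above shows a CSI driver concluding "already unpublished" whenever `lsblk` exits 64 because one stale `/dev/sdX` is no longer a block device. The driver itself is Go; as a language-neutral sketch of the safer approach, one can filter stale paths before inspection instead of letting a single bad device abort (or mislead) the whole check:

```python
import os
import stat

def is_block_device(path: str) -> bool:
    """True only for an existing block device."""
    try:
        return stat.S_ISBLK(os.stat(path).st_mode)
    except (FileNotFoundError, NotADirectoryError):
        return False

def devices_to_inspect(paths):
    """Keep only paths lsblk can meaningfully describe, so one stale
    /dev/sdX does not make the whole set look unpublished."""
    return [p for p in paths if is_block_device(p)]

print(devices_to_inspect(["/definitely/not/there", os.devnull]))  # []
```

This is an assumption about the fix, not the project's actual patch: the point is that "path missing" and "path exists but is not a block device" are distinguishable states, and neither implies the iSCSI connection is closed for the remaining devices.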
52,780 | 3,029,469,683 | IssuesEvent | 2015-08-04 12:49:36 | PolarisSS13/Polaris | https://api.github.com/repos/PolarisSS13/Polaris | opened | You can join as jobs that don't exist or have roundstart markers | NRV Dauntless Priority: High | 
Jobs that don't exist:
>Librarian
>Atmos Tech (General Engineer as replacement)
>Cargo Tech
>IA Agent
>Chaplain
>Warden
>Xenobiologist
>RD
>Psych
>Geneticist
>Gardener | 1.0 | You can join as jobs that don't exist or have roundstart markers - 
Jobs that don't exist:
>Librarian
>Atmos Tech (General Engineer as replacement)
>Cargo Tech
>IA Agent
>Chaplain
>Warden
>Xenobiologist
>RD
>Psych
>Geneticist
>Gardener | priority | you can join as jobs that don t exist or have roundstart markers jobs that don t exist librarian atmos tech general engineer as replacement cargo tech ia agent chaplain warden xenobiologist rd psych geneticist gardener | 1 |
292,445 | 8,958,093,881 | IssuesEvent | 2019-01-27 11:22:04 | stancl/tenancy | https://api.github.com/repos/stancl/tenancy | opened | HTTPS certificates | enhancement high priority | When the `yourclient.yourapp.com`, `yourclient2.yourapp.com` model is used, a wildcard cert can take care of HTTPS. However, when the `yourapp.yourclient.com`, `yourapp.yourclient2.com` model is used, there needs to be some feature for HTTPS management. Luckily file-based verification can be used with Let's Encrypt, so perhaps creating a route to verify the domain ownership is sufficient? Auto renewal etc could be added too. | 1.0 | HTTPS certificates - When the `yourclient.yourapp.com`, `yourclient2.yourapp.com` model is used, a wildcard cert can take care of HTTPS. However, when the `yourapp.yourclient.com`, `yourapp.yourclient2.com` model is used, there needs to be some feature for HTTPS management. Luckily file-based verification can be used with Let's Encrypt, so perhaps creating a route to verify the domain ownership is sufficient? Auto renewal etc could be added too. | priority | https certificates when the yourclient yourapp com yourapp com model is used a wildcard cert can take care of https however when the yourapp yourclient com yourapp com model is used there needs to be some feature for https management luckily file based verification can be used with let s encrypt so perhaps creating a route to verify the domain ownership is sufficient auto renewal etc could be added too | 1 |
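The record above proposes file-based (HTTP-01) verification for per-tenant domains. The app in question is PHP/Laravel; purely as an illustrative Python sketch of how little the application side needs — one route serving a token under `/.well-known/acme-challenge/` — with all names hypothetical:

```python
# CHALLENGES would be filled by whatever code orders the certificate
# (token -> key authorization); the handler below is the only
# app-side piece HTTP-01 validation requires.
CHALLENGES = {}

def acme_challenge(token: str) -> str:
    """Body a GET /.well-known/acme-challenge/<token> route would return."""
    try:
        return CHALLENGES[token]
    except KeyError:
        raise KeyError("unknown ACME token: " + token) from None

CHALLENGES["tok123"] = "tok123.keyAuthFingerprint"
print(acme_challenge("tok123"))  # tok123.keyAuthFingerprint
```

Auto-renewal then reduces to re-running the order flow on a schedule; the route itself never changes.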
234,420 | 7,720,826,640 | IssuesEvent | 2018-05-24 01:29:52 | AtlasOfLivingAustralia/layers-service | https://api.github.com/repos/AtlasOfLivingAustralia/layers-service | closed | Load Layer: States and Territories Polygons 2011 | enhancement priority-high status-started | _migrated from:_ https://code.google.com/p/ala/issues/detail?id=58
_date:_ Thu Aug 8 05:31:55 2013
_author:_ moyesyside
---
Original Issue - [https://code.google.com/p/alageospatialportal/issues/detail?id=801](https://code.google.com/p/alageospatialportal/issues/detail?id=801)
Reported by d...@dougashton.com, Nov 29, 2011
load "layers->political->Australia States and Territories" and then look at border area. I looked at the border between Canberra and Queanbeyan and also Cameron Corner [http://en.wikipedia.org/wiki/Cameron_Corner](http://en.wikipedia.org/wiki/Cameron_Corner). The Google supplied borders matched the base layers but "layers->political->Australia States and Territories" matched neither.
The end result is confusing if you are trying to work out visually which state a point close to the border is in. Important for understanding the operation of Sensitive data rules.
In addition the "layers->political->Australia States and Territories" shows a dog leg in the border near Cameron Corner that I have not been able to see on other maps.
Unfortunately the "View metadata for Australian States .." page [http://spatial.ala.org.au/alaspatial/layers/22](http://spatial.ala.org.au/alaspatial/layers/22) has broken links.
The same issues apply to the LGA Boundaries layer as well
Nov 29, 2011 `#1` d...@dougashton.com
I have just tested "layers->Marine->boundaries->States including coastal waters" and the State boundaries it includes match very closely with the google boundaries and base layers.
This layer also has excellent metadata associated with it and appears authoritative.
Nov 29, 2011 `#2` ajay.ranipeta
setting to Lee for triage
Owner: leebel...@gmail.com Nov 29, 2011 Project Member `#3` leebel...@gmail.com
It boils down to a need for a better States and Territories layer and fixing the broken links in the layer table. Meanwhile, the states/territories with coastal waters will need to suffice.
There is a 2011 states/territories shapefile at [http://abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/1270.0.55.001July%202011?OpenDocument](http://abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/1270.0.55.001July%202011?OpenDocument). Metadata? Who knows.
Status: Accepted Owner: yuanfang...@gmail.com Cc: leebel...@gmail.com Nov 29, 2011 Project Member `#4` leebel...@gmail.com
(No comment was entered for this change.)
Cc: d...@dougashton.com Feb 15, 2012 `#5` ajay.ranipeta
Should this be done by Chris as he is set to update the LGA boundaries? [Issue 670](https://code.google.com/p/ala/issues/detail?id=670)
Feb 15, 2012 Project Member `#6` leebel...@gmail.com
Yes. Over to Chris to lead. Added the updated states/territories to the ListOfOutstandingLayersToBeIngested spreadsheet on Google Docs.
Owner: chris.fl...@gmail.com Labels: -Priority-Medium Priority-High Apr 24, 2012 Project Member `#7` chris.fl...@gmail.com
Changing title to reflect that this layer needs to be loaded.
Summary: LOAD LAYER: States and Territories Polygons 2011 Apr 25, 2012 Project Member `#8` leebel...@gmail.com
(No comment was entered for this change.)
Labels: -Priority-High Priority-Low Jul 10, 2013 Project Member `#9` leebel...@gmail.com
This is a basic layer used in many applications and there appears to be an offset of up to 1km on the ACT boundary so it would be a good idea to update this layer asap. Agree with Doug that the layer "States and territories including coastal waters" is far more accurate
Cc: -d...@dougashton.com moyesyside Labels: -Priority-Low Priority-High
| 1.0 | Load Layer: States and Territories Polygons 2011 - _migrated from:_ https://code.google.com/p/ala/issues/detail?id=58
_date:_ Thu Aug 8 05:31:55 2013
_author:_ moyesyside
---
Original Issue - [https://code.google.com/p/alageospatialportal/issues/detail?id=801](https://code.google.com/p/alageospatialportal/issues/detail?id=801)
Reported by d...@dougashton.com, Nov 29, 2011
load "layers->political->Australia States and Territories" and then look at border area. I looked at the border between Canberra and Queanbeyan and also Cameron Corner [http://en.wikipedia.org/wiki/Cameron_Corner](http://en.wikipedia.org/wiki/Cameron_Corner). The Google supplied borders matched the base layers but "layers->political->Australia States and Territories" matched neither.
The end result is confusing if you are trying to work out visually which state a point close to the border is in. Important for understanding the operation of Sensitive data rules.
In addition the "layers->political->Australia States and Territories" shows a dog leg in the border near Cameron Corner that I have not been able to see on other maps.
Unfortunately the "View metadata for Australian States .." page [http://spatial.ala.org.au/alaspatial/layers/22](http://spatial.ala.org.au/alaspatial/layers/22) has broken links.
The same issues apply to the LGA Boundaries layer as well
Nov 29, 2011 `#1` d...@dougashton.com
I have just tested "layers->Marine->boundaries->States including coastal waters" and the State boundaries it includes match very closely with the google boundaries and base layers.
This layer also has excellent metadata associated with it and appears authoritative.
Nov 29, 2011 `#2` ajay.ranipeta
setting to Lee for triage
Owner: leebel...@gmail.com Nov 29, 2011 Project Member `#3` leebel...@gmail.com
It boils down to a need for a better States and Territories layer and fixing the broken links in the layer table. Meanwhile, the states/territories with coastal waters will need to suffice.
There is a 2011 states/territories shapefile at [http://abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/1270.0.55.001July%202011?OpenDocument](http://abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/1270.0.55.001July%202011?OpenDocument). Metadata? Who knows.
Status: Accepted Owner: yuanfang...@gmail.com Cc: leebel...@gmail.com Nov 29, 2011 Project Member `#4` leebel...@gmail.com
(No comment was entered for this change.)
Cc: d...@dougashton.com Feb 15, 2012 `#5` ajay.ranipeta
Should this be done by Chris as he is set to update the LGA boundaries? [Issue 670](https://code.google.com/p/ala/issues/detail?id=670)
Feb 15, 2012 Project Member `#6` leebel...@gmail.com
Yes. Over to Chris to lead. Added the updated states/territories to the ListOfOutstandingLayersToBeIngested spreadsheet on Google Docs.
Owner: chris.fl...@gmail.com Labels: -Priority-Medium Priority-High Apr 24, 2012 Project Member `#7` chris.fl...@gmail.com
Changing title to reflect that this layer needs to be loaded.
Summary: LOAD LAYER: States and Territories Polygons 2011 Apr 25, 2012 Project Member `#8` leebel...@gmail.com
(No comment was entered for this change.)
Labels: -Priority-High Priority-Low Jul 10, 2013 Project Member `#9` leebel...@gmail.com
This is a basic layer used in many applications and there appears to be an offset of up to 1km on the ACT boundary so it would be a good idea to update this layer asap. Agree with Doug that the layer "States and territories including coastal waters" is far more accurate
Cc: -d...@dougashton.com moyesyside Labels: -Priority-Low Priority-High
| priority | load layer states and territories polygons migrated from date thu aug author moyesyside original issue reported by d dougashton com nov load layers political australia states and territories and then look at border area i looked at the border between canberra and queanbeyan and also cameron corner the google supplied borders matched the base layers but layers political australia states and territories matched neither the end result is confusing if you are trying to work out visually which state a point close to the border is in important for understanding the operation of sensitive data rules in addition the layers political australia states and territories shows a dog leg in the border near cameron corner that i have not been able to see on other maps unfortunately the view metadata for australian states page has broken links the same issues apply to the lga boundaries layer as well nov d dougashton com i have just tested layers marine boundaries states including coastal waters and the state boundaries it includes match very closely with the google boundaries and base layers this layer also has excellent metadata associated with it and appears authoritative nov ajay ranipeta setting to lee for triage owner leebel gmail com nov project member leebel gmail com it boils down to a need for a better states and territories layer and fixing the broken links in the layer table meanwhile the states territories with coastal waters will need to suffice there is a states territories shapefile at metadata who knows status accepted owner yuanfang gmail com cc leebel gmail com nov project member leebel gmail com no comment was entered for this change cc d dougashton com feb ajay ranipeta should this be done by chris as he is set to update the lga boundaries feb project member leebel gmail com yes over to chris to lead added the updated states territories to the listofoutstandinglayerstobeingested spreadsheet on google docs owner chris fl gmail com labels priority 
medium priority high apr project member chris fl gmail com changing title to reflect that this layer needs to be loaded summary load layer states and territories polygons apr project member leebel gmail com no comment was entered for this change labels priority high priority low jul project member leebel gmail com this is a basic layer used in many applications and there appears to be an offset of up to on the act boundary so it would be a good idea to update this layer asap agree with doug that the layer states and territories including coastal waters is far more accurate cc d dougashton com moyesyside labels priority low priority high | 1 |
85,308 | 3,689,332,167 | IssuesEvent | 2016-02-25 16:07:04 | Valhalla-Gaming/Tracker | https://api.github.com/repos/Valhalla-Gaming/Tracker | closed | weapon issue - fist weapons | Class-Druid Priority-High Type-Item | "It lists in the weapons a druid is supposed to be capable of using ""daggers, fist weapons, polearms, staves, maces""
However I can't use the heirloom fist weapon I bought for the shop. It says she doesn't have the skill to use it." | 1.0 | weapon issue - fist weapons - "It lists in the weapons a druid is supposed to be capable of using ""daggers, fist weapons, polearms, staves, maces""
However I can't use the heirloom fist weapon I bought for the shop. It says she doesn't have the skill to use it." | priority | weapon issue fist weapons it lists in the weapons a druid is supposed to be capable of using daggers fist weapons polearms staves maces however i can t use the heirloom fist weapon i bought for the shop it says she doesn t have the skill to use it | 1 |
763,336 | 26,752,819,011 | IssuesEvent | 2023-01-30 21:06:39 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | `torch.amax` will fail when computing forward mode jacobian | high priority triaged module: functorch | ### 🐛 Describe the bug
`torch.amax` will fail when computing forward mode jacobian with error `RuntimeError: Could not allocate memory to change Tensor SizesAndStrides! `. But its reverse mode jacobian can be computed without any error.
The forward mode jacobian computation
```py
input = torch.rand([2], dtype=torch.float32)
def func(input):
res = torch.amax(input)
return res
jacobian(func, (input.requires_grad_(), ), strategy='forward-mode', vectorize=True)
# RuntimeError: Could not allocate memory to change Tensor SizesAndStrides!
```
For the reverse mode,
```py
input = torch.rand([2], dtype=torch.float32)
def func(input):
res = torch.amax(input)
return res
jacobian(func, (input.requires_grad_(), ), strategy='reverse-mode', vectorize=True)
# (tensor([0., 1.]),)
```
### Versions
```
PyTorch version: 2.0.0.dev20230105
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230105
[pip3] torchaudio==2.0.0.dev20230105
[pip3] torchvision==0.15.0.dev20230105
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_2 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.0.0.dev20230105 py39_cu117 pytorch-nightly
[conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly
[conda] torchvision 0.15.0.dev20230105 py39_cu117 pytorch-nightly
```
cc @ezyang @gchanan @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99 | 1.0 | `torch.amax` will fail when computing forward mode jacobian - ### 🐛 Describe the bug
`torch.amax` will fail when computing forward mode jacobian with error `RuntimeError: Could not allocate memory to change Tensor SizesAndStrides! `. But its reverse mode jacobian can be computed without any error.
The forward mode jacobian computation
```py
input = torch.rand([2], dtype=torch.float32)
def func(input):
res = torch.amax(input)
return res
jacobian(func, (input.requires_grad_(), ), strategy='forward-mode', vectorize=True)
# RuntimeError: Could not allocate memory to change Tensor SizesAndStrides!
```
For the reverse mode,
```py
input = torch.rand([2], dtype=torch.float32)
def func(input):
res = torch.amax(input)
return res
jacobian(func, (input.requires_grad_(), ), strategy='reverse-mode', vectorize=True)
# (tensor([0., 1.]),)
```
### Versions
```
PyTorch version: 2.0.0.dev20230105
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230105
[pip3] torchaudio==2.0.0.dev20230105
[pip3] torchvision==0.15.0.dev20230105
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.23.5 py39h14f4228_0
[conda] numpy-base 1.23.5 py39h31eccc5_0
[conda] pytorch 2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0 pytorch-nightly
[conda] pytorch-cuda 11.7 h67b0de4_2 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torchaudio 2.0.0.dev20230105 py39_cu117 pytorch-nightly
[conda] torchtriton 2.0.0+0d7e753227 py39 pytorch-nightly
[conda] torchvision 0.15.0.dev20230105 py39_cu117 pytorch-nightly
```
cc @ezyang @gchanan @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99 | priority | torch amax will fail when computing forward mode jacobian 🐛 describe the bug torch amax will fail when computing forward mode jacobian with error runtimeerror could not allocate memory to change tensor sizesandstrides but its reverse mode jacobian can be computed without any error the forward mode jacobian computation py input torch rand dtype torch def func input res torch amax input return res jacobian func input requires grad strategy forward mode vectorize true runtimeerror could not allocate memory to change tensor sizesandstrides for the reverse mode py input torch rand dtype torch def func input res torch amax input return res jacobian func input requires grad strategy reverse mode vectorize true tensor versions pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version could not collect cmake version version libc version glibc python version main nov bit runtime python platform linux generic with is cuda available true cuda runtime version cuda module loading set to lazy gpu models and configuration gpu nvidia geforce rtx gpu nvidia geforce rtx gpu nvidia geforce rtx nvidia driver version cudnn version probably one of the following usr lib linux gnu libcudnn so usr lib linux gnu libcudnn adv infer so usr lib linux gnu libcudnn adv train so usr lib linux gnu libcudnn cnn infer so usr lib linux gnu libcudnn cnn train so usr lib linux gnu libcudnn ops infer so usr lib linux gnu libcudnn ops train so hip runtime version n a miopen runtime version n a is xnnpack available true versions of relevant libraries numpy torch torchaudio torchvision blas mkl mkl mkl service mkl fft mkl random numpy numpy base pytorch pytorch nightly pytorch cuda pytorch nightly pytorch mutex cuda pytorch nightly torchaudio pytorch nightly torchtriton pytorch nightly torchvision pytorch nightly cc ezyang gchanan 
chillee samdow soumith | 1 |
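The record above contrasts forward-mode and reverse-mode Jacobians of a max reduction. Independent of PyTorch (this is not its implementation), the forward-mode mechanics can be sketched with plain tangent propagation — one tangent basis vector pushed through per input dimension:

```python
def amax_with_tangent(xs, ts):
    """Return (max(xs), tangent of the max) for primal xs and tangents ts."""
    best = 0
    for i in range(1, len(xs)):
        if xs[i] > xs[best]:
            best = i
    return xs[best], ts[best]

def forward_mode_jacobian(xs):
    """Jacobian row of amax w.r.t. xs, one basis tangent at a time."""
    row = []
    for i in range(len(xs)):
        seed = [1.0 if j == i else 0.0 for j in range(len(xs))]
        _, dy = amax_with_tangent(xs, seed)
        row.append(dy)
    return row

print(forward_mode_jacobian([0.3, 0.9]))  # [0.0, 1.0]
```

The result matches the reverse-mode output quoted in the record (`tensor([0., 1.])`), which is why the forward-mode failure there is a bug in the op's machinery rather than a mathematical limitation.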
500,587 | 14,502,537,904 | IssuesEvent | 2020-12-11 21:11:23 | dell/helm-charts | https://api.github.com/repos/dell/helm-charts | closed | [BUG]: [karavi-topology & karavi-topology]: Fails in Openshift | priority/high release-found/karavi-observability release-found/karavi-powerflex-metrics release-found/karavi-topology type/bug | <!-- Thanks for filing an issue! Before hitting the button, please answer these questions. It's helpful to search the existing GitHub issues first. It's likely that another user has already reported the issue you're facing, or it's a known issue that we're already aware of.
Fill in as much of the template below as you can. If you leave out information, we can't help you as well.
Be ready for followup questions, and please respond in a timely manner. If we can't reproduce a bug or think a feature already exists, we might close your issue. If we're wrong, PLEASE feel free to reopen it and explain why.
-->
**Describe the bug**
After installing Karavi on Openshift with: `helm install karavi-observability dell/karavi-observability -n vxflexos -f karavi.powerflex.values.yaml --render-subchart-notes --set-file karavi-topology.certificateFile=karavi-topology.crt --set-file karavi-topology.privateKeyFile=karavi-topology.key --set-file karavi-metrics-powerflex.otelCollector.certificateFile=otel-collector.crt --set-file karavi-metrics-powerflex.otelCollector.privateKeyFile=otel-collector.key` the components `otel-collector/nginx-proxy` and `karavi-topology` fails to start with errors like:
```
2020/12/01 08:47:10 [emerg] 1#1: bind() to 0.0.0.0:443 failed (13: Permission denied)
nginx: [emerg] bind() to 0.0.0.0:443 failed (13: Permission denied)
```
The problem is that Openshift runs containers as unprivileged by default and don't allow to run containers on default ports.
Please consider using the [unprivileged nginx image](https://hub.docker.com/r/nginxinc/nginx-unprivileged) and make the topology component listen on a non-default port.
**Version of Helm and Kubernetes**:
```
[root@floshift-bastion ~]# oc version
Client Version: 4.5.15
Server Version: 4.5.15
Kubernetes Version: v1.18.3+2fbd7c7
[root@floshift-bastion ~]# helm version
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/floshift/auth/kubeconfig
version.BuildInfo{Version:"v3.3.4+5.el8", GitCommit:"1e63a4770a20072ed9f574013c01cc6e59881e48", GitTreeState:"clean", GoVersion:"go1.13.15"}
```
**Which chart**:
karavi-topology & karavi-topology
| 1.0 | [BUG]: [karavi-topology & karavi-topology]: Fails in Openshift - <!-- Thanks for filing an issue! Before hitting the button, please answer these questions. It's helpful to search the existing GitHub issues first. It's likely that another user has already reported the issue you're facing, or it's a known issue that we're already aware of.
Fill in as much of the template below as you can. If you leave out information, we can't help you as well.
Be ready for followup questions, and please respond in a timely manner. If we can't reproduce a bug or think a feature already exists, we might close your issue. If we're wrong, PLEASE feel free to reopen it and explain why.
-->
**Describe the bug**
After installing Karavi on Openshift with: `helm install karavi-observability dell/karavi-observability -n vxflexos -f karavi.powerflex.values.yaml --render-subchart-notes --set-file karavi-topology.certificateFile=karavi-topology.crt --set-file karavi-topology.privateKeyFile=karavi-topology.key --set-file karavi-metrics-powerflex.otelCollector.certificateFile=otel-collector.crt --set-file karavi-metrics-powerflex.otelCollector.privateKeyFile=otel-collector.key` the components `otel-collector/nginx-proxy` and `karavi-topology` fails to start with errors like:
```
2020/12/01 08:47:10 [emerg] 1#1: bind() to 0.0.0.0:443 failed (13: Permission denied)
nginx: [emerg] bind() to 0.0.0.0:443 failed (13: Permission denied)
```
The problem is that Openshift runs containers as unprivileged by default and don't allow to run containers on default ports.
Please consider using the [unprivileged nginx image](https://hub.docker.com/r/nginxinc/nginx-unprivileged) and make the topology component listen on a non-default port.
**Version of Helm and Kubernetes**:
```
[root@floshift-bastion ~]# oc version
Client Version: 4.5.15
Server Version: 4.5.15
Kubernetes Version: v1.18.3+2fbd7c7
[root@floshift-bastion ~]# helm version
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/floshift/auth/kubeconfig
version.BuildInfo{Version:"v3.3.4+5.el8", GitCommit:"1e63a4770a20072ed9f574013c01cc6e59881e48", GitTreeState:"clean", GoVersion:"go1.13.15"}
```
**Which chart**:
karavi-topology & karavi-topology
| priority | fails in openshift thanks for filing an issue before hitting the button please answer these questions it s helpful to search the existing github issues first it s likely that another user has already reported the issue you re facing or it s a known issue that we re already aware of fill in as much of the template below as you can if you leave out information we can t help you as well be ready for followup questions and please respond in a timely manner if we can t reproduce a bug or think a feature already exists we might close your issue if we re wrong please feel free to reopen it and explain why describe the bug after installing karavi on openshift with helm install karavi observability dell karavi observability n vxflexos f karavi powerflex values yaml render subchart notes set file karavi topology certificatefile karavi topology crt set file karavi topology privatekeyfile karavi topology key set file karavi metrics powerflex otelcollector certificatefile otel collector crt set file karavi metrics powerflex otelcollector privatekeyfile otel collector key the components otel collector nginx proxy and karavi topology fails to start with errors like bind to failed permission denied nginx bind to failed permission denied the problem is that openshift runs containers as unprivileged by default and don t allow to run containers on default ports please consider using the and make the topology component listen on a non default port version of helm and kubernetes oc version client version server version kubernetes version helm version warning kubernetes configuration file is group readable this is insecure location root floshift auth kubeconfig version buildinfo version gitcommit gittreestate clean goversion which chart karavi topology karavi topology | 1 |
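The `bind() to 0.0.0.0:443 failed (13: Permission denied)` in the record above is the classic symptom of a non-root container binding a privileged port (below 1024), which OpenShift's default unprivileged policy forbids. A tiny illustrative sketch of the usual remap (the 8443 fallback is a convention, not this chart's actual fix):

```python
PRIVILEGED_PORT_MAX = 1023  # Unix convention: ports <= 1023 need root

def rebind_port(port: int, fallback: int = 8443) -> int:
    """Return a port a non-root container can bind: keep unprivileged
    ports as-is, remap privileged ones to the fallback."""
    return port if port > PRIVILEGED_PORT_MAX else fallback

print(rebind_port(443))   # 8443
print(rebind_port(8080))  # 8080
```

This is also why the record suggests the unprivileged nginx image: it listens on a high port by default, and a Kubernetes Service can still expose 443 externally while targeting the high container port.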
187,672 | 6,760,149,226 | IssuesEvent | 2017-10-24 19:35:17 | bounswe/bounswe2017group10 | https://api.github.com/repos/bounswe/bounswe2017group10 | closed | Disallow going back to login/signup after successful login | Android Bug Report High-priority | A user should not be able to go back to login/signup activity after a successful login. In this case, back button should prompt the user if he/she wants to quit the app. | 1.0 | Disallow going back to login/signup after successful login - A user should not be able to go back to login/signup activity after a successful login. In this case, back button should prompt the user if he/she wants to quit the app. | priority | disallow going back to login signup after successful login a user should not be able to go back to login signup activity after a successful login in this case back button should prompt the user if he she wants to quit the app | 1 |
549,160 | 16,087,046,861 | IssuesEvent | 2021-04-26 12:35:55 | InteractiveFaultLocalization/iFL4Eclipse | https://api.github.com/repos/InteractiveFaultLocalization/iFL4Eclipse | opened | Unhandled event loop exception, regex.PatternSyntaxException using valid regular expression | bug high priority | **Precondition**
* .../eclipse.exe has launched.
* iFL plugin has been installed.
**Steps**
1. Select a project.
1. Click iFL button.
1. Load scores.
1. Click Show filters button.
1. Click Add rule button.
1. Add a Name rule like this:
- Enter string: create???????
- Containment: contains
- Matching: partial
- Case-sensitive: no
- Regular expression: yes
**Expected results**
* There will be results to this expression like "createProgram" elements.
**Received results**
* Results in: -
* Error in log:
```
!MESSAGE Unhandled event loop exception
!STACK 0
java.util.regex.PatternSyntaxException: Dangling meta character '?' near index 8
create???????
^
at java.util.regex.Pattern.error(Unknown Source)
at java.util.regex.Pattern.sequence(Unknown Source)
at java.util.regex.Pattern.expr(Unknown Source)
```
**Environment:**
* Package: https://github.com/InteractiveFaultLocalization/iFL4Eclipse/releases/tag/V2.general_fixes.2
* Operating System: Windows 10 Pro, 64 bit
* Eclipse version: 2019-09 R (4.13.0) | 1.0 | Unhandled event loop exception, regex.PatternSyntaxException using valid regular expression - **Precondition**
* .../eclipse.exe has launched.
* iFL plugin has been installed.
**Steps**
1. Select a project.
1. Click iFL button.
1. Load scores.
1. Click Show filters button.
1. Click Add rule button.
1. Add a Name rule like this:
- Enter string: create???????
- Containment: contains
- Matching: partial
- Case-sensitive: no
- Regular expression: yes
**Expected results**
* There will be results to this expression like "createProgram" elements.
**Received results**
* Results in: -
* Error in log:
```
!MESSAGE Unhandled event loop exception
!STACK 0
java.util.regex.PatternSyntaxException: Dangling meta character '?' near index 8
create???????
^
at java.util.regex.Pattern.error(Unknown Source)
at java.util.regex.Pattern.sequence(Unknown Source)
at java.util.regex.Pattern.expr(Unknown Source)
```
**Environment:**
* Package: https://github.com/InteractiveFaultLocalization/iFL4Eclipse/releases/tag/V2.general_fixes.2
* Operating System: Windows 10 Pro, 64 bit
* Eclipse version: 2019-09 R (4.13.0) | priority | unhandled event loop exception regex patternsyntaxexception using valid regular expression precondition eclipse exe has launched ifl plugin has been installed steps select a project click ifl button load scores click show filters button click add rule button add a name rule like this enter string create containment contains matching partial case sensitive no regular expression yes expected results there will be results to this expression like createprogram elements received results results in error in log message unhandled event loop exception stack java util regex patternsyntaxexception dangling meta character near index create at java util regex pattern error unknown source at java util regex pattern sequence unknown source at java util regex pattern expr unknown source environment package operating system windows pro bit eclipse version r | 1 |
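The stack trace above has a reproducible cause: a bare run of `?` has no preceding token to repeat, so the pattern compiler rejects it. The bug itself is in `java.util.regex`, but the same failure mode reproduces in Python's `re`, sketched below; treating the input as a glob (each `?` matching one character) via `fnmatch.translate` is one hypothetical fix (Java's analogue for literal matching is `Pattern.quote`).

```python
import fnmatch
import re

user_input = "create???????"  # the literal search string from the bug report

# Compiling raw user input as a regex fails: the extra '?' characters have
# nothing to repeat, mirroring Java's PatternSyntaxException near index 8.
try:
    re.compile(user_input)
except re.error as exc:
    print("compile failed:", exc)

# If each '?' is meant as a glob-style single-character wildcard, translate
# the glob into a valid regex instead of compiling the raw string.
pattern = re.compile(fnmatch.translate(user_input), re.IGNORECASE)
print(bool(pattern.match("createProgram")))  # True: "Program" is 7 characters
```

For literal (non-wildcard) matching, escaping the input first (`re.escape` here, `Pattern.quote` in Java) avoids the exception instead.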
771,372 | 27,082,665,272 | IssuesEvent | 2023-02-14 15:01:18 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | messenger: sometimes history is not fetched | bug priority 1: high messenger | # Bug Report
Sometimes, right after startup, only very recent message(s) are displayed but history is empty. On another start, history is back to normal
Empty history:

Correct history:

### Additional Information
- Status desktop version: master
- Operating System: linux
| 1.0 | messenger: sometimes history is not fetched - # Bug Report
Sometimes, right after startup, only very recent message(s) are displayed but history is empty. On another start, history is back to normal
Empty history:

Correct history:

### Additional Information
- Status desktop version: master
- Operating System: linux
| priority | messenger sometimes history is not fetched bug report sometimes right after startup only very recent message s are displayed but history is empty on another start history is back to normal empty history correct history additional information status desktop version master operating system linux | 1
372,097 | 11,009,128,275 | IssuesEvent | 2019-12-04 11:59:10 | ooni/probe-engine | https://api.github.com/repos/ooni/probe-engine | closed | psiphon: make sure we're using latest version of dependencies | cycle backlog effort/XL enhancement priority/high | Last time I checked, Psiphon was not using `go.mod`. If that's still the case, we need to play with `go get -v` to make sure we use the right version of the dependencies. | 1.0 | psiphon: make sure we're using latest version of dependencies - Last time I checked, Psiphon was not using `go.mod`. If that's still the case, we need to play with `go get -v` to make sure we use the right version of the dependencies. | priority | psiphon make sure we re using latest version of dependencies last time i checked psiphon was not using go mod if that s still the case we need to play with go get v to make sure we use the right version of the dependencies | 1 |
455,961 | 13,134,548,493 | IssuesEvent | 2020-08-06 23:49:41 | geopm/geopm | https://api.github.com/repos/geopm/geopm | closed | Integration test decorators are firing during the import phase of running an unrelated test suite | bug bug-exposure-low bug-priority-low bug-quality-high | In Python2, importing modules using the ```from <PACKAGE> import <MODULE>``` syntax is causing the ```__init__.py``` to fire. Once init.py starts, it tries to issue ```from test_integration.geopm_test_integration import *``` which starts executing random test decorators. An example:
```
$ python -m pdb ./test_frequency_hint_usage.py
> /home/bgeltz/git/geopm/test_integration/test_frequency_hint_usage.py(40)<module>()
-> """
(Pdb) break /home/bgeltz/git/geopm/scripts/geopmpy/launcher.py:535
Breakpoint 1 at /home/bgeltz/git/geopm/scripts/geopmpy/launcher.py:535
(Pdb) c
> /home/bgeltz/git/geopm/scripts/geopmpy/launcher.py(535)run()
-> pid = subprocess.Popen(argv_mod, env=self.environ(),
(Pdb) argv_mod
'srun -N 1 -n 1 -J geopm_allocation_test --cpu-freq=Performance -- stat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq'
(Pdb) bt
/usr/lib64/python2.7/bdb.py(400)run()
-> exec cmd in globals, locals
<string>(1)<module>()
/home/bgeltz/git/geopm/test_integration/test_frequency_hint_usage.py(48)<module>()
-> from test_integration import geopm_context
/home/bgeltz/git/geopm/test_integration/__init__.py(40)<module>()
-> from test_integration.geopm_test_integration import *
/home/bgeltz/git/geopm/test_integration/geopm_test_integration.py(100)<module>()
-> class TestIntegration(unittest.TestCase):
/home/bgeltz/git/geopm/test_integration/geopm_test_integration.py(1196)TestIntegration()
-> @util.skip_unless_cpufreq()
/home/bgeltz/git/geopm/test_integration/util.py(111)skip_unless_cpufreq()
-> geopm_test_launcher.allocation_node_test(test_exec, dev_null, dev_null)
/home/bgeltz/git/geopm/test_integration/geopm_test_launcher.py(100)allocation_node_test()
-> launcher.run(stdout, stderr)
> /home/bgeltz/git/geopm/scripts/geopmpy/launcher.py(535)run()
-> pid = subprocess.Popen(argv_mod, env=self.environ(),
(Pdb)
```
(I set a breakpoint set in launcher.py just before the Popen() call.)
The problem above starts with ```from test_integration import geopm_context```. If this were changed to just ```import geopm_context``` the problem goes away because ```__init__.py``` does not fire. I think we can simply change all of the imports in the integration test dir to *not* use ``` from <MODULE> ...``` and this will go away.
The above example illustrates that we start in ```hint_usage.py```. We execute until we get to the first ```from <MODULE>``` import, then ```__init__.py``` fires. Once that happens the main ```integration_test.py``` is imported and one of the test's decorators starting on line 1196 fires.
This code path is erroneous and has nothing to do with where we started (i.e. the imports for the ```hint_usage.py```). | 1.0 | Integration test decorators are firing during the import phase of running an unrelated test suite - In Python2, importing modules using the ```from <PACKAGE> import <MODULE>``` syntax is causing the ```__init__.py``` to fire. Once init.py starts, it tries to issue ```from test_integration.geopm_test_integration import *``` which starts executing random test decorators. An example:
```
$ python -m pdb ./test_frequency_hint_usage.py
> /home/bgeltz/git/geopm/test_integration/test_frequency_hint_usage.py(40)<module>()
-> """
(Pdb) break /home/bgeltz/git/geopm/scripts/geopmpy/launcher.py:535
Breakpoint 1 at /home/bgeltz/git/geopm/scripts/geopmpy/launcher.py:535
(Pdb) c
> /home/bgeltz/git/geopm/scripts/geopmpy/launcher.py(535)run()
-> pid = subprocess.Popen(argv_mod, env=self.environ(),
(Pdb) argv_mod
'srun -N 1 -n 1 -J geopm_allocation_test --cpu-freq=Performance -- stat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq'
(Pdb) bt
/usr/lib64/python2.7/bdb.py(400)run()
-> exec cmd in globals, locals
<string>(1)<module>()
/home/bgeltz/git/geopm/test_integration/test_frequency_hint_usage.py(48)<module>()
-> from test_integration import geopm_context
/home/bgeltz/git/geopm/test_integration/__init__.py(40)<module>()
-> from test_integration.geopm_test_integration import *
/home/bgeltz/git/geopm/test_integration/geopm_test_integration.py(100)<module>()
-> class TestIntegration(unittest.TestCase):
/home/bgeltz/git/geopm/test_integration/geopm_test_integration.py(1196)TestIntegration()
-> @util.skip_unless_cpufreq()
/home/bgeltz/git/geopm/test_integration/util.py(111)skip_unless_cpufreq()
-> geopm_test_launcher.allocation_node_test(test_exec, dev_null, dev_null)
/home/bgeltz/git/geopm/test_integration/geopm_test_launcher.py(100)allocation_node_test()
-> launcher.run(stdout, stderr)
> /home/bgeltz/git/geopm/scripts/geopmpy/launcher.py(535)run()
-> pid = subprocess.Popen(argv_mod, env=self.environ(),
(Pdb)
```
(I set a breakpoint set in launcher.py just before the Popen() call.)
The problem above starts with ```from test_integration import geopm_context```. If this were changed to just ```import geopm_context``` the problem goes away because ```__init__.py``` does not fire. I think we can simply change all of the imports in the integration test dir to *not* use ``` from <MODULE> ...``` and this will go away.
The above example illustrates that we start in ```hint_usage.py```. We execute until we get to the first ```from <MODULE>``` import, then ```__init__.py``` fires. Once that happens the main ```integration_test.py``` is imported and one of the test's decorators starting on line 1196 fires.
This code path is erroneous and has nothing to do with where we started (i.e. the imports for the ```hint_usage.py```). | priority | integration test decorators are firing during the import phase of running an unrelated test suite in importing modules using the from import syntax is causing the init py to fire once init py starts it tries to issue from test integration geopm test integration import which starts executing random test decorators an example python m pdb test frequency hint usage py home bgeltz git geopm test integration test frequency hint usage py pdb break home bgeltz git geopm scripts geopmpy launcher py breakpoint at home bgeltz git geopm scripts geopmpy launcher py pdb c home bgeltz git geopm scripts geopmpy launcher py run pid subprocess popen argv mod env self environ pdb argv mod srun n n j geopm allocation test cpu freq performance stat sys devices system cpu cpufreq cpuinfo min freq pdb bt usr bdb py run exec cmd in globals locals home bgeltz git geopm test integration test frequency hint usage py from test integration import geopm context home bgeltz git geopm test integration init py from test integration geopm test integration import home bgeltz git geopm test integration geopm test integration py class testintegration unittest testcase home bgeltz git geopm test integration geopm test integration py testintegration util skip unless cpufreq home bgeltz git geopm test integration util py skip unless cpufreq geopm test launcher allocation node test test exec dev null dev null home bgeltz git geopm test integration geopm test launcher py allocation node test launcher run stdout stderr home bgeltz git geopm scripts geopmpy launcher py run pid subprocess popen argv mod env self environ pdb i set a breakpoint set in launcher py just before the popen call the problem above starts with from test integration import geopm context if this were changed to just import geopm context the problem goes away because init py does not fire i think we can simply change all of the imports in the integration test dir to not use from and this will go away the above example illustrates that we start in hint usage py we execute until we get to the first from import then init py fires once that happens the main integration test py is imported and one of the test s decorators starting on line fires this code path is erroneous and has nothing to do with where we started i e the imports for the hint usage py | 1
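The import chain above generalizes to a small rule: `from pkg import mod` (like `import pkg.mod`) always executes `pkg/__init__.py` before the submodule, so side effects in the package initializer fire at import time. A minimal sketch using a throwaway package built in a temp directory (hypothetical names, not the geopm tree):

```python
import os
import sys
import tempfile

# Build a tiny package whose __init__.py has an observable side effect.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "demo_pkg")
os.mkdir(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("INIT_RAN = True\nprint('demo_pkg/__init__.py executed')\n")
with open(os.path.join(pkg, "helper.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)

# The package-qualified import runs __init__.py first -- the point where
# the integration-test decorators (and their srun calls) fired.
from demo_pkg import helper

import demo_pkg
print(demo_pkg.INIT_RAN, helper.VALUE)
```

The proposed fix (`import geopm_context` rather than `from test_integration import geopm_context`) avoids this only because the test directory itself is on `sys.path`, so the module resolves as a top-level module and the package `__init__.py` never runs.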
641,584 | 20,829,983,038 | IssuesEvent | 2022-03-19 09:07:20 | AY2122S2-CS2113T-T10-1/tp | https://api.github.com/repos/AY2122S2-CS2113T-T10-1/tp | closed | Setup GroupCreateCommand class | priority.High backend.command | Include necessary constructors, attributes, methods and Javadocs | 1.0 | Setup GroupCreateCommand class - Include necessary constructors, attributes, methods and Javadocs | priority | setup groupcreatecommand class include necessary constructors attributes methods and javadocs | 1 |
758,610 | 26,561,994,472 | IssuesEvent | 2023-01-20 16:36:47 | bcgov/cas-cif | https://api.github.com/repos/bcgov/cas-cif | closed | As an Ops Team member, I want to see calculated payment amounts for this milestone, so that I can choose to override them | User Story High Priority Program Area QA | #### Description:
Using the "calculated + adjusted" UI pattern on the net and gross payment amounts for a milestone
[Formula with examples](https://bcgov.sharepoint.com/:w:/t/00608-ScrumTeam/EcXm1DnetBZNgFOkakDR_IEB1RS8r51PTHqH2h4C5L6URw?e=RbiNV2) - Please open it with the Word app instead of browser for formatting and better readability.
Discussion details in #656
Edit: Added `holdback_amount` to this card. It does not currently have fields in the schema for the calculated/adjusted holdback amount & will need to be added.
#### Acceptance Criteria:
Given I am an ops team member
When I enter milestone information
And I have entered the total eligible expenses for this milestone
Then I can see calculated values for holdback, net and gross payment amounts for this particular milestone
Given I am an ops team member
When I enter milestone information
Then I can choose to override the calculated holdback, net and gross payment information through "adjusted" fields
Edit:
Given I am an ops team member
When I am on the milestones page
Then I see a Maximum Amount This Milestone field (replaces the current `Maximum Amount` field, this is just a title change).
#### Development Checklist:
- [x] check in with @dleard to see what has been implemented as part of #986
- [x] milestone schema has `calculated_holdback_amount` and `adjusted_holdback_amount` fields for milestones with expenses.
- [x] milestone schema shows a title of `Maximum Amount This Milestone` for the `maximum_amount` field. Do not change the value of the json key, just the title attribute. Changing the key will be a larger refactor across the database that is probably unnecessary.
- [x] uses the "AdjustableCalculatedValueWidget" for `gross_payment_amount`, `holdback_amount` and `net_payment_amount` fields
- [x] Calculated values are retrieved from the form_change_**_this_milestone computed columns created in #986
- [ ] Meets the DOD
**Definition of Ready** (Note: If any of these points are not applicable, mark N/A)
- [x] User story is included
- [x] User role and type are identified
- [x] Acceptance criteria are included
- [x] Wireframes are included (if required)
- [x] Design / Solution is accepted by Product Owner
- [x] Dependencies are identified (technical, business, regulatory/policy)
- [x] Story has been estimated (under 13 pts)
·**Definition of Done** (Note: If any of these points are not applicable, mark N/A)
- [ ] Acceptance criteria are tested by the CI pipeline
- [ ] UI meets accessibility requirements
- [ ] Configuration changes are documented, documentation and designs are updated
- [ ] Passes code peer-review
- [ ] Passes QA of Acceptance Criteria with verification in Dev and Test
- [ ] Ticket is ready to be merged to main branch
- [ ] Can be demoed in Sprint Review
- [ ] Bugs or future work cards are identified and created
- [ ] Reviewed and approved by Product Owner
#### Notes:
-
| 1.0 | As an Ops Team member, I want to see calculated payment amounts for this milestone, so that I can choose to override them - #### Description:
Using the "calculated + adjusted" UI pattern on the net and gross payment amounts for a milestone
[Formula with examples](https://bcgov.sharepoint.com/:w:/t/00608-ScrumTeam/EcXm1DnetBZNgFOkakDR_IEB1RS8r51PTHqH2h4C5L6URw?e=RbiNV2) - Please open it with the Word app instead of browser for formatting and better readability.
Discussion details in #656
Edit: Added `holdback_amount` to this card. It does not currently have fields in the schema for the calculated/adjusted holdback amount & will need to be added.
#### Acceptance Criteria:
Given I am an ops team member
When I enter milestone information
And I have entered the total eligible expenses for this milestone
Then I can see calculated values for holdback, net and gross payment amounts for this particular milestone
Given I am an ops team member
When I enter milestone information
Then I can choose to override the calculated holdback, net and gross payment information through "adjusted" fields
Edit:
Given I am an ops team member
When I am on the milestones page
Then I see a Maximum Amount This Milestone field (replaces the current `Maximum Amount` field, this is just a title change).
#### Development Checklist:
- [x] check in with @dleard to see what has been implemented as part of #986
- [x] milestone schema has `calculated_holdback_amount` and `adjusted_holdback_amount` fields for milestones with expenses.
- [x] milestone schema shows a title of `Maximum Amount This Milestone` for the `maximum_amount` field. Do not change the value of the json key, just the title attribute. Changing the key will be a larger refactor across the database that is probably unnecessary.
- [x] uses the "AdjustableCalculatedValueWidget" for `gross_payment_amount`, `holdback_amount` and `net_payment_amount` fields
- [x] Calculated values are retrieved from the form_change_**_this_milestone computed columns created in #986
- [ ] Meets the DOD
**Definition of Ready** (Note: If any of these points are not applicable, mark N/A)
- [x] User story is included
- [x] User role and type are identified
- [x] Acceptance criteria are included
- [x] Wireframes are included (if required)
- [x] Design / Solution is accepted by Product Owner
- [x] Dependencies are identified (technical, business, regulatory/policy)
- [x] Story has been estimated (under 13 pts)
·**Definition of Done** (Note: If any of these points are not applicable, mark N/A)
- [ ] Acceptance criteria are tested by the CI pipeline
- [ ] UI meets accessibility requirements
- [ ] Configuration changes are documented, documentation and designs are updated
- [ ] Passes code peer-review
- [ ] Passes QA of Acceptance Criteria with verification in Dev and Test
- [ ] Ticket is ready to be merged to main branch
- [ ] Can be demoed in Sprint Review
- [ ] Bugs or future work cards are identified and created
- [ ] Reviewed and approved by Product Owner
#### Notes:
-
| priority | as an ops team member i want to see calculated payment amounts for this milestone so that i can choose to override them description using the calculated adjusted ui pattern on the net and gross payment amounts for a milestone please open it with the word app instead of browser for formatting and better readability discussion details in edit added holdback amount to this card it does not currently have fields in the schema for the calculated adjusted holdback amount will need to be added acceptance criteria given i am an ops team member when i enter milestone information and i have entered the total eligible expenses for this milestone then i can see calculated values for holdback net and gross payment amounts for this particular milestone given i am an ops team member when i enter milestone information then i can choose to override the calculated holdback net and gross payment information through adjusted fields edit given i am an ops team member when i am on the milestones page then i see a maximum amount this milestone field replaces the current maximum amount field this is just a title change development checklist check in with dleard to see what has been implemented as part of milestone schema has calculated holdback amount and adjusted holdback amount fields for milestones with expenses milestone schema shows a title of maximum amount this milestone for the maximum amount field do not change the value of the json key just the title attribute changing the key will be a larger refactor across the database that is probably unnecessary uses the adjustablecalculatedvaluewidget for gross payment amount holdback amount and net payment amount fields calculated values are retrieved from the form change this milestone computed columns created in meets the dod definition of ready note if any of these points are not applicable mark n a user story is included user role and type are identified acceptance criteria are included wireframes are included if required design solution is accepted by product owner dependencies are identified technical business regulatory policy story has been estimated under pts · definition of done note if any of these points are not applicable mark n a acceptance criteria are tested by the ci pipeline ui meets accessibility requirements configuration changes are documented documentation and designs are updated passes code peer review passes qa of acceptance criteria with verification in dev and test ticket is ready to be merged to main branch can be demoed in sprint review bugs or future work cards are identified and created reviewed and approved by product owner notes | 1
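The "calculated + adjusted" pattern in the card reduces to one resolution rule: an adjusted value entered by the ops team overrides the calculated one, which otherwise stands. A sketch of that rule (the `*_gross_payment_amount` keys are illustrative assumptions, patterned after the card's `calculated_holdback_amount` / `adjusted_holdback_amount` fields):

```python
# Hypothetical sketch of the "calculated + adjusted" resolution rule: an
# adjusted value, when present, overrides the calculated value.
def resolve(calculated, adjusted):
    return adjusted if adjusted is not None else calculated

milestone = {
    "calculated_gross_payment_amount": 1000.0,  # illustrative key name
    "adjusted_gross_payment_amount": None,      # no override entered
    "calculated_holdback_amount": 100.0,        # field name from the card
    "adjusted_holdback_amount": 80.0,           # ops team override
}

gross = resolve(milestone["calculated_gross_payment_amount"],
                milestone["adjusted_gross_payment_amount"])
holdback = resolve(milestone["calculated_holdback_amount"],
                   milestone["adjusted_holdback_amount"])
print(gross, holdback)  # 1000.0 80.0
```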
31,947 | 2,741,814,241 | IssuesEvent | 2015-04-21 13:35:42 | CenterForOpenScience/osf.io | https://api.github.com/repos/CenterForOpenScience/osf.io | closed | [dataverse] Update to Dataverse 4.0 | 2 - Ready 5 - Pending Review Priority - High | Dataverse 4.0 is slated for release this December: http://roadmap.datascience.iq.harvard.edu/milestones/milestone-roadmap/dataverse/. I haven't reviewed the migration documents extensively (at https://docs.google.com/document/d/11DpdKyp1tagmaJAAzRqQBEZEZ69WOOOYsoqhz8UsfNM/edit), but this will inevitably require some changes on our side. It would be ideal to have our add-on ready before Dataverse 4.0 ships, since the current code will stop working as soon as the new Dataverse release goes to production.
@JeffSpies | 1.0 | [dataverse] Update to Dataverse 4.0 - Dataverse 4.0 is slated for release this December: http://roadmap.datascience.iq.harvard.edu/milestones/milestone-roadmap/dataverse/. I haven't reviewed the migration documents extensively (at https://docs.google.com/document/d/11DpdKyp1tagmaJAAzRqQBEZEZ69WOOOYsoqhz8UsfNM/edit), but this will inevitably require some changes on our side. It would be ideal to have our add-on ready before Dataverse 4.0 ships, since the current code will stop working as soon as the new Dataverse release goes to production.
@JeffSpies | priority | update to dataverse dataverse is slated for release this december i haven t reviewed the migration documents extensively at but this will inevitably require some changes on our side it would be ideal to have our add on ready before dataverse ships since the current code will stop working as soon as the new dataverse release goes to production jeffspies | 1 |
526,567 | 15,295,613,337 | IssuesEvent | 2021-02-24 05:15:01 | jcsnorlax97/rentr | https://api.github.com/repos/jcsnorlax97/rentr | closed | [TASK] Backend Server Setup for Deployment into Heroku | High Priority backend database dev-task | ### Associated User Story:
N/A
### Task Description:
- [X] Heroku
- [X] Create new account
- [X] Create new Heroku app in DashBoard.
- [X] Add "Heroku Postgres" add-ons in the app->Resources tab.
- [X] Update Config Vars in <app_page>->"Settings" tab by adding `DB_ENVIRONMENT=production` in it.
- [X] Rentr Project (Back-end)
- [X] Add `npm start` script in `package.json` (Required by Heroku for running the app)
- [X] Update `config/config.js` and `db/db.js` to handle DB in Production environment.
- [X] Deploy Node.js app (`/server` folder) & current feature branch into Heroku as a first try.
- Note: Before the end of this sprint, the Node.js app (`/server` folder) & `master` branch need to be deployed into Heroku as Prod. This would be done after `develop` has been merged into `master`. As no code changes are involved to do that, no tickets would be created for that.
### Dependencies:
Heroku
### Acceptance Criteria:
- Able to call the API locally, i.e. `localhost`
- Able to call the API via the given domain name by Heroku. | 1.0 | [TASK] Backend Server Setup for Deployment into Heroku - ### Associated User Story:
N/A
### Task Description:
- [X] Heroku
- [X] Create new account
- [X] Create new Heroku app in DashBoard.
- [X] Add "Heroku Postgres" add-ons in the app->Resources tab.
- [X] Update Config Vars in <app_page>->"Settings" tab by adding `DB_ENVIRONMENT=production` in it.
- [X] Rentr Project (Back-end)
- [X] Add `npm start` script in `package.json` (Required by Heroku for running the app)
- [X] Update `config/config.js` and `db/db.js` to handle DB in Production environment.
- [X] Deploy Node.js app (`/server` folder) & current feature branch into Heroku as a first try.
- Note: Before the end of this sprint, the Node.js app (`/server` folder) & `master` branch need to be deployed into Heroku as Prod. This would be done after `develop` has been merged into `master`. As no code changes are involved to do that, no tickets would be created for that.
### Dependencies:
Heroku
### Acceptance Criteria:
- Able to call the API locally, i.e. `localhost`
- Able to call the API via the given domain name by Heroku. | priority | backend server setup for deployment into heroku associated user story n a task description heroku create new account create new heroku app in dashboard add heroku postgres add ons in the app resources tab update config vars in settings tab by adding db environment production in it rentr project back end add npm start script in package json required by heroku for running the app update config config js and db db js to handle db in production environment deploy node js app server folder current feature branch into heroku as a first try note before the end of this sprint the node js app server folder master branch need to be deployed into heroku as prod this would be done after develop has been merged into master as no code changes are involved to do that no tickets would be created for that dependencies heroku acceptance criteria able to call the api locally i e localhost able to call the api via the given domain name by heroku | 1
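The `DB_ENVIRONMENT=production` config var drives an environment switch in the database setup. The project does this in Node (`config/config.js` / `db/db.js`); the sketch below shows the same pattern in Python purely for illustration, assuming the Heroku Postgres add-on injects `DATABASE_URL` into the dyno environment (which it does), with a hypothetical local fallback URL:

```python
import os

def database_settings(environ=None):
    """Pick DB settings based on DB_ENVIRONMENT, defaulting to development."""
    environ = os.environ if environ is None else environ
    if environ.get("DB_ENVIRONMENT") == "production":
        # Heroku Postgres provisions DATABASE_URL automatically.
        return {"url": environ["DATABASE_URL"], "ssl": True}
    # Local development fallback (hypothetical local database name).
    return {"url": "postgres://localhost:5432/rentr_dev", "ssl": False}

prod = database_settings({"DB_ENVIRONMENT": "production",
                          "DATABASE_URL": "postgres://example/db"})
dev = database_settings({})
print(prod["ssl"], dev["ssl"])  # True False
```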
359,901 | 10,682,385,393 | IssuesEvent | 2019-10-22 05:08:19 | bounswe/bounswe2019group8 | https://api.github.com/repos/bounswe/bounswe2019group8 | closed | Connect mobile application to Backend | Effort: High Mobile Platform: Mobile Priority: High Status: Done | **Actions:**
1. Create API connection.
2. Determine the Base URL.
**Notes:**
- [x] Create API connection.
- [x] Determine the Base URL.
**Deadline:** 21.10.2019 - 23.59 | 1.0 | Connect mobile application to Backend - **Actions:**
1. Create API connection.
2. Determine the Base URL.
**Notes:**
- [x] Create API connection.
- [x] Determine the Base URL.
**Deadline:** 21.10.2019 - 23.59 | priority | connect mobile application to backend actions create api connection determine the base url notes create api connection determine the base url deadline | 1 |
2,630 | 2,531,793,723 | IssuesEvent | 2015-01-23 10:43:15 | handsontable/handsontable | https://api.github.com/repos/handsontable/handsontable | closed | The context menu breaks the manualColumnResize | Bug Guess: < 2 hours Priority: high Released | Check this fiddle:
http://jsfiddle.net/r879qc3p/
When the page loads, you can resize the columns as you wish. Right-click anywhere on the table to show the context menu (you don't have to choose an item) then left-click somewhere else to hide the context menu. The columns cannot be resized anymore. | 1.0 | The context menu breaks the manualColumnResize - Check this fiddle:
http://jsfiddle.net/r879qc3p/
When the page loads, you can resize the columns as you wish. Right-click anywhere on the table to show the context menu (you don't have to choose an item) then left-click somewhere else to hide the context menu. The columns cannot be resized anymore. | priority | the context menu breaks the manualcolumnresize check this fiddle when the page loads you can resize the columns as you wish right click anywhere on the table to show the context menu you don t have to choose an item then left click somewhere else to hide the context menu the columns cannot be resized anymore | 1 |
359,593 | 10,678,463,357 | IssuesEvent | 2019-10-21 17:19:41 | Esri/indoor-routing-xamarin | https://api.github.com/repos/Esri/indoor-routing-xamarin | closed | MMPK file is not being downloaded when using simulator iOS 13.1 | Effort - Small Priority - High Status - PR Open Type - Bug | Development Environment:
- Visual Studio for Mac version 8.3.2
- xCode 11.1
Using VS for Mac, when running the Indoors Application on iOS Simulator 12.x, the application works normally.
However, when using simulator 13.1 the simulator keeps showing the page (Downloading Map) and it will not download the mmpk file. | 1.0 | MMPK file is not being downloaded when using simulator iOS 13.1 - Development Environment:
- Visual Studio for Mac version 8.3.2
- xCode 11.1
Using VS for Mac, when running the Indoors Application on iOS Simulator 12.x, the application works normally.
However, when using simulator 13.1 the simulator keeps showing the page (Downloading Map) and it will not download the mmpk file. | priority | mmpk file is not being downloaded when using simulator ios development environment visual studio for mac version xcode using vs for mac when running the indoors application on ios simulator x the application works normally however when using simulator the simulator keeps showing the page downloading map and it will not download the mmpk file | 1
85,813 | 3,699,075,435 | IssuesEvent | 2016-02-28 19:11:45 | vicgen/newtonium-gui | https://api.github.com/repos/vicgen/newtonium-gui | opened | Create project structure | enhancement high priority | child of #1
As gui code I should have a project structure to live in.
The package structure should be: `org.vicgen.newtonium.gui.*` | 1.0 | Create project structure - child of #1
As gui code I should have a project structure to live in.
The package structure should be: `org.vicgen.newtonium.gui.*` | priority | create project structure child of as gui code i should have a project structure to live in the package structure should be org vicgen newtonium gui | 1
124,028 | 4,890,901,642 | IssuesEvent | 2016-11-18 15:15:18 | tlatoza/SeeCodeRun | https://api.github.com/repos/tlatoza/SeeCodeRun | opened | Call graph should have higher information density | high priority | Most of the space in the call graph view is blank, empty space which communicates no useful information to the user.
The space between rows should be reduced. Nodes in the call graph should be dynamically sized relative to content. The padding for nodes should be reduced, but not entirely eliminated.
| 1.0 | Call graph should have higher information density - Most of the space in the call graph view is blank, empty space which communicates no useful information to the user.
The space between rows should be reduced. Nodes in the call graph should be dynamically sized relative to content. The padding for nodes should be reduced, but not entirely eliminated.
| priority | call graph should have higher information density most of the space in the call graph view is blank empty space which communicates no useful information to the user the space between rows should be reduced nodes in the call graph should be dynamically sized relative to content the padding for nodes should be reduced but not entirely eliminated | 1 |
390,703 | 11,561,803,392 | IssuesEvent | 2020-02-20 00:23:26 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | opened | Warp Sync: Checkpoints must be generated based on Market End Time | Priority: Very High | Need to generate a warp sync hash once we're 30 blocks past the last block before the market end time. | 1.0 | Warp Sync: Checkpoints must be generated based on Market End Time - Need to generate a warp sync hash once we're 30 blocks past the last block before the market end time. | priority | warp sync checkpoints must be generated based on market end time need to generate a warp sync hash once we re blocks past the last block before the market end time | 1 |
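The checkpoint rule above is a simple block-height threshold. A hypothetical sketch (names are illustrative, not Augur's actual API):

```python
CONFIRMATIONS = 30  # blocks to wait past the last pre-end-time block

def should_generate_warp_sync_hash(current_block, last_block_before_end_time):
    """True once the chain is 30 blocks past the last block before market end."""
    return current_block >= last_block_before_end_time + CONFIRMATIONS

print(should_generate_warp_sync_hash(1030, 1000))  # True
print(should_generate_warp_sync_hash(1029, 1000))  # False
```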
405,227 | 11,870,070,233 | IssuesEvent | 2020-03-26 12:07:59 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Social Icons are out of alignment in Design 2 | NEXT UPDATE [Priority: HIGH] bug | Social Icons are out of alignment in Design 2
https://monosnap.com/file/Tg5OtsuA7JMKDUynDXtUPLp6VCBGan | 1.0 | Social Icons are out of alignment in Design 2 - Social Icons are out of alignment in Design 2
https://monosnap.com/file/Tg5OtsuA7JMKDUynDXtUPLp6VCBGan | priority | social icons are out of alignment in design social icons are out of alignment in design | 1 |
360,495 | 10,693,428,703 | IssuesEvent | 2019-10-23 08:49:37 | AY1920S1-CS2113T-W17-1/main | https://api.github.com/repos/AY1920S1-CS2113T-W17-1/main | closed | As a student, I want to edit due dates of tasks that I have so that I can update the description and deadlines of the tasks | priority.High type.Story | Feature: Edit deadlines and description of task | 1.0 | As a student, I want to edit due dates of tasks that I have so that I can update the description and deadlines of the tasks - Feature: Edit deadlines and description of task | priority | as a student i want to edit due dates of tasks that i have so that i can update the description and deadlines of the tasks feature edit deadlines and description of task | 1 |
182,136 | 6,667,448,553 | IssuesEvent | 2017-10-03 12:38:27 | Kirez/dat255 | https://api.github.com/repos/Kirez/dat255 | closed | Write Java method to get distance sensor value from SCU | high priority HW communication S3 | As a developer implementing the distance control part of the adaptive cruise control system I need a Java interface to read the distance sensor value | 1.0 | Write Java method to get distance sensor value from SCU - As a developer implementing the distance control part of the adaptive cruise control system I need a Java interface to read the distance sensor value | priority | write java method to get distance sensor value from scu as a developer implementing the distance control part of the adaptive cruise control system i need a java interface to read the distance sensor value | 1 |
88,169 | 3,774,278,687 | IssuesEvent | 2016-03-17 08:29:53 | BugBusterSWE/documentation | https://api.github.com/repos/BugBusterSWE/documentation | closed | Miglioramento AdR - 2 | Analyst priority:high | *Documento in cui si trova il problema*:
Analisi dei Requisiti
*Descrizione del problema*:
Vedere pdf esito RR
*Punti da svolgere per risolvere il problema*:
- [ ] Specificare meglio i requisiti
- [ ] Correggere punti indicati nel documento | 1.0 | Miglioramento AdR - 2 - *Documento in cui si trova il problema*:
Analisi dei Requisiti
*Descrizione del problema*:
Vedere pdf esito RR
*Punti da svolgere per risolvere il problema*:
- [ ] Specificare meglio i requisiti
- [ ] Correggere punti indicati nel documento | priority | miglioramento adr documento in cui si trova il problema analisi dei requisiti descrizione del problema vedere pdf esito rr punti da svolgere per risolvere il problema specificare meglio i requisiti correggere punti indicati nel documento | 1 |
459,460 | 13,193,554,386 | IssuesEvent | 2020-08-13 15:23:55 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | SQL Plan Cache | Estimation: M Module: SQL Priority: High Source: Internal Team: Core Type: Enhancement | We need to implement a cache for prepared SQL plans, to avoid costly parsing and optimization on every SQL request, | 1.0 | SQL Plan Cache - We need to implement a cache for prepared SQL plans, to avoid costly parsing and optimization on every SQL request, | priority | sql plan cache we need to implement a cache for prepared sql plans to avoid costly parsing and optimization on every sql request | 1 |
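A plan cache like the one requested above is typically an LRU map keyed by the SQL text (plus any settings that affect planning). A minimal sketch, where `planner` stands in for the costly parse-and-optimize step and is not Hazelcast's actual API:

```python
from collections import OrderedDict

class PlanCache:
    """LRU cache of prepared plans keyed by SQL text (illustrative sketch)."""

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._plans = OrderedDict()  # sql text -> prepared plan

    def get_or_plan(self, sql: str, planner):
        if sql in self._plans:
            self._plans.move_to_end(sql)       # mark as most recently used
            return self._plans[sql]
        plan = planner(sql)                    # the costly parse + optimize step
        self._plans[sql] = plan
        if len(self._plans) > self.capacity:
            self._plans.popitem(last=False)    # evict least recently used entry
        return plan
```

In practice the key would also include planning-relevant settings (schema version, optimizer flags), since the same SQL text can produce different plans under different configurations.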
825,804 | 31,472,764,061 | IssuesEvent | 2023-08-30 08:40:44 | bryntum/support | https://api.github.com/repos/bryntum/support | closed | Event bar disappears on drop when async listener used | bug resolved high-priority premium forum OEM | [Forum post](https://forum.bryntum.com/viewtopic.php?f=51&t=25924&p=129083#p129083)
Edit basic demo to have listener and eventDrag feature config
```
features : {
eventDrag: {
constrainDragToTimeline: false
},
}
});
scheduler.on('beforeEventDropFinalize', ({context})=>{
context.async = true;
// some async check
setTimeout(()=>{
context.finalize(true);
},10)
});
```
Try to drag and drop an event as shown in the video. The event bar disappears.
https://github.com/bryntum/support/assets/7203098/444f8419-7479-4f6f-aebf-5a2579d39e63
| 1.0 | Event bar disappears on drop when async listener used - [Forum post](https://forum.bryntum.com/viewtopic.php?f=51&t=25924&p=129083#p129083)
Edit basic demo to have listener and eventDrag feature config
```
const scheduler = new Scheduler({
    // ...rest of the basic demo config
    features : {
        eventDrag : {
            constrainDragToTimeline : false
        }
    }
});
scheduler.on('beforeEventDropFinalize', ({context})=>{
context.async = true;
// some async check
setTimeout(()=>{
context.finalize(true);
},10)
});
```
Try to drag and drop an event as shown in the video. The event bar disappears.
https://github.com/bryntum/support/assets/7203098/444f8419-7479-4f6f-aebf-5a2579d39e63
| priority | event bar disappears on drop when async listener used edit basic demo to have listener and eventdrag feature config features eventdrag constraindragtotimeline false scheduler on beforeeventdropfinalize context context async true some async check settimeout context finalize true try to drag and drop event as on video see event bar disappeared | 1 |
450,733 | 13,018,772,626 | IssuesEvent | 2020-07-26 19:03:01 | ION28/BLUESPAWN | https://api.github.com/repos/ION28/BLUESPAWN | closed | Add Default Values for Atomic Tests | difficulty/easy mode/other module/configuration priority/high type/enhancement | https://github.com/redcanaryco/atomic-red-team/tree/master/execution-frameworks/Invoke-AtomicRedTeam#specify-input-parameters-on-the-command-line
this will improve the accuracy of the tests run in the CI for specific hunts, particular those that use YARA | 1.0 | Add Default Values for Atomic Tests - https://github.com/redcanaryco/atomic-red-team/tree/master/execution-frameworks/Invoke-AtomicRedTeam#specify-input-parameters-on-the-command-line
this will improve the accuracy of the tests run in the CI for specific hunts, particular those that use YARA | priority | add default values for atomic tests this will improve the accuracy of the tests run in the ci for specific hunts particular those that use yara | 1 |
240,985 | 7,807,896,339 | IssuesEvent | 2018-06-11 18:23:10 | ansible/galaxy | https://api.github.com/repos/ansible/galaxy | closed | Manually test `ansible-galaxy` from 2.5 | priority/high status/fix-committed type/bug | Test all commands, and resolve any Galaxy v3.0 server bugs. | 1.0 | Manually test `ansible-galaxy` from 2.5 - Test all commands, and resolve any Galaxy v3.0 server bugs. | priority | manually test ansible galaxy from test all commands and resolve any galaxy server bugs | 1 |
409,128 | 11,957,262,309 | IssuesEvent | 2020-04-04 13:47:42 | apragacz/django-rest-registration | https://api.github.com/repos/apragacz/django-rest-registration | closed | UserProfileSerializer doesn't handle foreignKey fields? | priority:high type:bug | So I have a foreignKey on one of the fields of my User model (did this by subclassing AbstractUser and pointing AUTH_USER_MODEL in settings.py to this User model). These extra fields are showing up correctly in my calls to "accounts/register/". However when I'm submitting my registration form I'm submitting an integer that represents the primary key for this foreignKey field and I get the following error:
Cannot assign "4": "User.user_type" must be a "UserType" instance.
The error happens in _build_initial_user() on line 175 of [rest_registration/api/serializers.py](https://github.com/apragacz/django-rest-registration/blob/master/rest_registration/api/serializers.py) because it's trying to construct an instance of the usermodel there with self.initial_data which holds an integer for the ForeignKey field. | 1.0 | UserProfileSerializer doesn't handle foreignKey fields? - So I have a foreignKey on one of the fields of my User model (did this by subclassing AbstractUser and pointing AUTH_USER_MODEL in settings.py to this User model). These extra fields are showing up correctly in my calls to "accounts/register/". However when I'm submitting my registration form I'm submitting an integer that represents the primary key for this foreignKey field and I get the following error:
Cannot assign "4": "User.user_type" must be a "UserType" instance.
The error happens in _build_initial_user() on line 175 of [rest_registration/api/serializers.py](https://github.com/apragacz/django-rest-registration/blob/master/rest_registration/api/serializers.py) because it's trying to construct an instance of the usermodel there with self.initial_data which holds an integer for the ForeignKey field. | priority | userprofileserializer doesn t handle foreignkey fields so i have a foreignkey on one of the fields of my user model did this by subclassing abstractuser and pointing auth user model in settings py to this user model these extra fields are showing up correctly in my calls to accounts register however when i m submitting my registration form i m submitting an integer that represents the primary key for this foreignkey field and i get the following error cannot assign user user type must be a usertype instance the error happens in build initial user on line of because it s trying to construct an instance of the usermodel there with self initial data which holds an integer for the foreignkey field | 1 |
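The error above arises because a raw primary-key integer is passed where a model instance is required. One common fix pattern is to resolve foreign-key pks to instances before constructing the model; this is an illustrative sketch of that pattern in plain Python, not django-rest-registration's actual code:

```python
# Illustrative sketch: resolve raw pk values to instances before construction.
# In Django the resolver would be e.g. lambda pk: UserType.objects.get(pk=pk).
def build_initial_user(user_cls, initial_data, fk_resolvers):
    data = dict(initial_data)
    for field, resolve in fk_resolvers.items():
        value = data.get(field)
        if value is not None and not hasattr(value, "pk"):
            data[field] = resolve(value)   # turn the pk into an instance
    return user_cls(**data)
```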
69,125 | 3,295,524,535 | IssuesEvent | 2015-11-01 01:21:20 | ClinGen/clincoded | https://api.github.com/repos/ClinGen/clincoded | closed | PhenExplorer site has been replaced by "HPO browser"- need to update name & link | high priority merged v0.5 | We have links to PhenExplorer on several pages (Group, Family, Individual) - the HPO site now says the following:
"PhenExplorer: The PhenExplorere has been superceded by the new HPO browser, which has all the functionalities of PhenExplorer and has additional features such as Excel-Exports"
the words "HPO browser" in the above phrase link to:
http://www.human-phenotype-ontology.org/hpoweb/showterm?id=HP:0000118
Please change each instance of "PhenExplorer" to read "HPO browser" and change the link (which is now broken) to the URL above.
Thanks!
Selina
| 1.0 | PhenExplorer site has been replaced by "HPO browser"- need to update name & link - We have links to PhenExplorer on several pages (Group, Family, Individual) - the HPO site now says the following:
"PhenExplorer: The PhenExplorere has been superceded by the new HPO browser, which has all the functionalities of PhenExplorer and has additional features such as Excel-Exports"
the words "HPO browser" in the above phrase link to:
http://www.human-phenotype-ontology.org/hpoweb/showterm?id=HP:0000118
Please change each instance of "PhenExplorer" to read "HPO browser" and change the link (which is now broken) to the URL above.
Thanks!
Selina
| priority | phenexplorer site has been replaced by hpo browser need to update name link we have links to phenexplorer on several pages group family individual the hpo site now says the following phenexplorer the phenexplorere has been superceded by the new hpo browser which has all the functionalities of phenexplorer and has additional features such as excel exports the words hpo browser in the above phrase link to please change each instance of phenexplorer to read hpo browser and change the link which is now broken to the url above thanks selina | 1 |
814,229 | 30,495,793,589 | IssuesEvent | 2023-07-18 10:43:02 | VisActor/VChart | https://api.github.com/repos/VisActor/VChart | closed | [Bug] The `updateSpec` method does not handle the globalScale, cause color is not updated | bug high priority | ### Version
1.0.0
### Link to Minimal Reproduction
none
### Steps to Reproduce
```ts
const spec = {
type: 'pie',
categoryField: '20001',
valueField: '230718111907021',
data: [
{
id: 'data',
values: [
{
'20001': '消费者',
'230718111907021': 8025072,
'230718111907024': '消费者'
},
{
'20001': '公司',
'230718111907021': 5152793,
'230718111907024': '公司'
},
{
'20001': '小型企业',
'230718111907021': 2891088,
'230718111907024': '小型企业'
}
],
transform: [
{
type: 'fields',
options: {
fields: {
'20001': {
alias: '图例项 ',
domain: ['消费者', '公司', '小型企业']
},
'230718111907021': {
alias: '销售额'
},
'230718111907024': {
alias: '细分'
}
}
}
}
]
}
],
groupBy: '20001',
color: {
field: '20001',
type: 'ordinal',
range: ['#212121', '#656565', '#8ec8ac'],
specified: {
消费者: '#f5222d',
公司: '#7953CB',
小型企业: '#81c87c'
}
}
};
const vchart = new VChart(spec, { dom: document.getElementById('chart') as HTMLElement });
vchart.renderAsync();
// for console debugging convenience only; do not copy
window['vchart'] = vchart;
setTimeout(() => {
vchart.updateSpec({
type: 'pie',
categoryField: '20001',
valueField: '230718111907021',
data: [
{
id: 'data',
values: [
{
'20001': '消费者',
'230718111907021': 1000000,
'230718111907024': '消费者'
},
{
'20001': '公司',
'230718111907021': 5152793,
'230718111907024': '公司'
},
{
'20001': '小型企业',
'230718111907021': 2891088,
'230718111907024': '小型企业'
}
],
transform: [
{
type: 'fields',
options: {
fields: {
'20001': {
alias: '图例项 ',
domain: ['消费者', '公司', '小型企业']
},
'230718111907021': {
alias: '销售额'
},
'230718111907024': {
alias: '细分'
}
}
}
}
]
}
],
groupBy: '20001',
color: {
field: '20001',
type: 'ordinal',
range: ['yellow', 'red', 'black'],
specified: {
消费者: '#f5222d',
公司: '#7953CB',
小型企业: '#81c87c'
}
}
});
}, 2000);
```
### Current Behavior
the updated color range is not applied.
### Expected Behavior
the new color range should be applied
### Environment
```markdown
- OS:
- Browser:
- Framework:
```
### Any additional comments?
_No response_ | 1.0 | [Bug] The `updateSpec` method does not handle the globalScale, cause color is not updated - ### Version
1.0.0
### Link to Minimal Reproduction
none
### Steps to Reproduce
```ts
const spec = {
type: 'pie',
categoryField: '20001',
valueField: '230718111907021',
data: [
{
id: 'data',
values: [
{
'20001': '消费者',
'230718111907021': 8025072,
'230718111907024': '消费者'
},
{
'20001': '公司',
'230718111907021': 5152793,
'230718111907024': '公司'
},
{
'20001': '小型企业',
'230718111907021': 2891088,
'230718111907024': '小型企业'
}
],
transform: [
{
type: 'fields',
options: {
fields: {
'20001': {
alias: '图例项 ',
domain: ['消费者', '公司', '小型企业']
},
'230718111907021': {
alias: '销售额'
},
'230718111907024': {
alias: '细分'
}
}
}
}
]
}
],
groupBy: '20001',
color: {
field: '20001',
type: 'ordinal',
range: ['#212121', '#656565', '#8ec8ac'],
specified: {
消费者: '#f5222d',
公司: '#7953CB',
小型企业: '#81c87c'
}
}
};
const vchart = new VChart(spec, { dom: document.getElementById('chart') as HTMLElement });
vchart.renderAsync();
// for console debugging convenience only; do not copy
window['vchart'] = vchart;
setTimeout(() => {
vchart.updateSpec({
type: 'pie',
categoryField: '20001',
valueField: '230718111907021',
data: [
{
id: 'data',
values: [
{
'20001': '消费者',
'230718111907021': 1000000,
'230718111907024': '消费者'
},
{
'20001': '公司',
'230718111907021': 5152793,
'230718111907024': '公司'
},
{
'20001': '小型企业',
'230718111907021': 2891088,
'230718111907024': '小型企业'
}
],
transform: [
{
type: 'fields',
options: {
fields: {
'20001': {
alias: '图例项 ',
domain: ['消费者', '公司', '小型企业']
},
'230718111907021': {
alias: '销售额'
},
'230718111907024': {
alias: '细分'
}
}
}
}
]
}
],
groupBy: '20001',
color: {
field: '20001',
type: 'ordinal',
range: ['yellow', 'red', 'black'],
specified: {
消费者: '#f5222d',
公司: '#7953CB',
小型企业: '#81c87c'
}
}
});
}, 2000);
```
### Current Behavior
the updated color range is not applied.
### Expected Behavior
the new color range should be applied
### Environment
```markdown
- OS:
- Browser:
- Framework:
```
### Any additional comments?
_No response_ | priority | the updatespec method does not handle the globalscale cause color is not updated version link to minimal reproduction none steps to reproduce ts const spec type pie categoryfield valuefield data id data values 消费者 消费者 公司 公司 小型企业 小型企业 transform type fields options fields alias 图例项 domain alias 销售额 alias 细分 groupby color field type ordinal range specified 消费者 公司 小型企业 const vchart new vchart spec dom document getelementbyid chart as htmlelement vchart renderasync 只为了方便控制台调试用,不要拷贝 window vchart settimeout vchart updatespec type pie categoryfield valuefield data id data values 消费者 消费者 公司 公司 小型企业 小型企业 transform type fields options fields alias 图例项 domain alias 销售额 alias 细分 groupby color field type ordinal range specified 消费者 公司 小型企业 current behavior the color does not work expected behavior work environment markdown os browser framework any additional comments no response | 1 |
266,946 | 8,377,139,203 | IssuesEvent | 2018-10-05 22:49:05 | bluek8s/kubedirector | https://api.github.com/repos/bluek8s/kubedirector | closed | don't block handler during initial config | Priority: High Project: Cluster Reconcile Type: Enhancement | We shouldn't run an initial startscript invocation synchronously. Not only would a super-long (or indefinite) startscript block deletion of the cluster, it can also cause potential setup deadlock issues as described in issue #54 if the startscripts on different members try to coordinate. | 1.0 | don't block handler during initial config - We shouldn't run an initial startscript invocation synchronously. Not only would a super-long (or indefinite) startscript block deletion of the cluster, it can also cause potential setup deadlock issues as described in issue #54 if the startscripts on different members try to coordinate. | priority | don t block handler during initial config we shouldn t run an initial startscript invocation synchronously not only would a super long or indefinite startscript block deletion of the cluster it can also cause potential setup deadlock issues as described in issue if the startscripts on different members try to coordinate | 1 |
70,026 | 3,316,429,602 | IssuesEvent | 2015-11-06 16:51:37 | TeselaGen/Peony-Issue-Tracking | https://api.github.com/repos/TeselaGen/Peony-Issue-Tracking | opened | VE view ORF specific frames broken | Customer: DAS 90-Day Milestone #4 - Oracle Rewrite newcolumn1 Priority: High | _From @njhillson on October 18, 2015 17:56_
If, in VE, I select View -> ORFs with all frames rendered, it seems to work fine, but when I start toggling individual frames, the functionality breaks.
_Copied from original issue: TeselaGen/ve#1454_ | 1.0 | VE view ORF specific frames broken - _From @njhillson on October 18, 2015 17:56_
If, in VE, I select View -> ORFs with all frames rendered, it seems to work fine, but when I start toggling individual frames, the functionality breaks.
_Copied from original issue: TeselaGen/ve#1454_ | priority | ve view orf specific frames broken from njhillson on october if in ve i view orfs if all frames rendered seems to work fine but when i start toggling individual frames the functionality seems to break copied from original issue teselagen ve | 1 |
776,041 | 27,244,526,268 | IssuesEvent | 2023-02-22 00:09:55 | IgniteUI/igniteui-angular | https://api.github.com/repos/IgniteUI/igniteui-angular | closed | Input group not inheriting the correct density | :bug: bug :new: status: new input-group :runner: priority: high status: inactive | ## Description
The input group does not inherit the correct density class from its parent, for example, when the grid is set to compact the input stays comfortable. This is happening only if the input-group is inside another form component, for example, date-picker or select.
There is [an old issue in the sample repo](https://github.com/IgniteUI/igniteui-angular-samples/issues/3006)
* igniteui-angular version:
* browser: all
## Steps to reproduce
1. Open this [gridCellEditing](http://localhost:4200/gridCellEditing) sample in dev demos
2. Select compact density
3. Edit any of the date-picker inputs and compare with one that is pure input-group
## Result
The input-group does not inherit the correct density
## Expected result
The input-group should inherit the correct density
| 1.0 | Input group not inheriting the correct density - ## Description
The input group does not inherit the correct density class from its parent, for example, when the grid is set to compact the input stays comfortable. This is happening only if the input-group is inside another form component, for example, date-picker or select.
There is [an old issue in the sample repo](https://github.com/IgniteUI/igniteui-angular-samples/issues/3006)
* igniteui-angular version:
* browser: all
## Steps to reproduce
1. Open this [gridCellEditing](http://localhost:4200/gridCellEditing) sample in dev demos
2. Select compact density
3. Edit any of the date-picker inputs and compare with one that is pure input-group
## Result
The input-group does not inherit the correct density
## Expected result
The input-group should inherit the correct density
| priority | input group not inheriting the correct density description the input group does not inherit the correct density class from its parent for example when the grid is set to compact the input stays comfortable this is happening only if the input group is inside another form component for example date picker or select there is igniteui angular version browser all steps to reproduce open this sample in dev demos select compact density edit any of the date picker inputs and compare with one that is pure input group result the input group does not inherit the correct density expected result the input group should inherit the correct density | 1 |
439,554 | 12,683,658,164 | IssuesEvent | 2020-06-19 20:18:36 | SUSE/doc-cap | https://api.github.com/repos/SUSE/doc-cap | closed | [CAP-813] How do I get started on AKS? | 2.0 high priority | See also: https://github.com/SUSE/kubecf/issues/18
```
az aks create -g RESOURCE_GROUP -n kubecf-aks --node-count 3 --admin-username azuser --ssh-key-value /PATH/TO/id_rsa_.pub --node-vm-size Standard_DS4_v2 --node-osdisk-size=80 --nodepool-name NODEPOOL_NAME --service-principal .... --client-secret ....
k create ns cfo
followed by the 2 helm installs
``` | 1.0 | [CAP-813] How do I get started on AKS? - See also: https://github.com/SUSE/kubecf/issues/18
```
az aks create -g RESOURCE_GROUP -n kubecf-aks --node-count 3 --admin-username azuser --ssh-key-value /PATH/TO/id_rsa_.pub --node-vm-size Standard_DS4_v2 --node-osdisk-size=80 --nodepool-name NODEPOOL_NAME --service-principal .... --client-secret ....
k create ns cfo
# followed by the two helm installs
``` | priority | how do i get started on aks see also az aks create g resource group n kubecf aks node count admin username azuser ssh key value path to id rsa pub node vm size standard node osdisk size nodepool name nodepool name service principal client secret k create ns cfo followed by the helm installs | 1 |
248,651 | 7,934,719,465 | IssuesEvent | 2018-07-08 22:37:30 | commercialhaskell/hindent | https://api.github.com/repos/commercialhaskell/hindent | opened | Comments sometimes lost/deleted or misplaced (before/after where/let/do/=/etc) | component: hindent priority: high type: bug | Creating this issue as a catch-all for the comment handling situation in hindent. Redirect all issues here. | 1.0 | Comments sometimes lost/deleted or misplaced (before/after where/let/do/=/etc) - Creating this issue as a catch-all for the comment handling situation in hindent. Redirect all issues here. | priority | comments sometimes lost deleted or misplaced before after where let do etc creating this issue as a catch all for the comment handling situation in hindent redirect all issues here | 1 |
342,772 | 10,321,541,547 | IssuesEvent | 2019-08-31 03:03:32 | storybookjs/storybook | https://api.github.com/repos/storybookjs/storybook | closed | Addon-docs: Code will not be shown in docs if CSF story is camelCase'd | addon: docs bug high priority | If you are reporting a bug or requesting support, start here:
### Bug or support request summary
_Please provide issue details here - What did you expect to happen? What happened instead?_
I wrote my stories in the new CSF format; some had a name containing a single word and some multiple words, for which I used camelCase. For those with camelCased names, the source code of the story could not be shown. Oddly, camel_Case_With_Underscores did work. To summarize
```
export const foo = () => <Component /> // Works
export const Foo = () => <Component /> // Works
export const withFoo = () => <Component /> // Does not work
export const WithFoo = () => <Component /> // Does not work
export const with_Foo = () => <Component /> // Oddly works
```
### Steps to reproduce
Just write CSF stories with differently upper- and lower-cased names
### Please specify which version of Storybook and optionally any affected addons that you're running
all versions on 5.2.0-beta.46
### Affected platforms
Windows and Chrome
| 1.0 | Addon-docs: Code will not be shown in docs if CSF story is camelCase'd - If you are reporting a bug or requesting support, start here:
### Bug or support request summary
_Please provide issue details here - What did you expect to happen? What happened instead?_
I wrote my stories in the new CSF format; some had a name containing a single word and some multiple words, for which I used camelCase. For those with camelCased names, the source code of the story could not be shown. Oddly, camel_Case_With_Underscores did work. To summarize
```
export const foo = () => <Component /> // Works
export const Foo = () => <Component /> // Works
export const withFoo = () => <Component /> // Does not work
export const WithFoo = () => <Component /> // Does not work
export const with_Foo = () => <Component /> // Oddly works
```
### Steps to reproduce
Just write CSF stories with differently upper- and lower-cased names
### Please specify which version of Storybook and optionally any affected addons that you're running
all versions on 5.2.0-beta.46
### Affected platforms
Windows and Chrome
| priority | addon docs code will not be shown in docs if csf story is camelcase d if you are reporting a bug or requesting support start here bug or support request summary please provide issue details here what did you expect to happen what happened instead i ll wrote my stories in the new csf format some had a now containing a single word and some multiple which i then used camelcase for for those with camelcase d names the source code of the story could not be shown oddly camel case with underscores did work again to summarize export const foo works export const foo works export const withfoo does not work export const withfoo does not work export const with foo oddly works steps to reproduce just write csf stories with different upper und lower cases please specify which version of storybook and optionally any affected addons that you re running all versions on beta affected platforms windows and chrome | 1 |
605,148 | 18,725,727,015 | IssuesEvent | 2021-11-03 16:05:27 | AY2122S1-CS2103T-T17-1/tp | https://api.github.com/repos/AY2122S1-CS2103T-T17-1/tp | closed | Edit User Guide | priority.High severity.High | Edit user guide with regards to the flexibility of shift, new validations for shift, salary and leaves and loyalty points. | 1.0 | Edit User Guide - Edit user guide with regards to the flexibility of shift, new validations for shift, salary and leaves and loyalty points. | priority | edit user guide edit user guide with regards to the flexibility of shift new validations for shift salary and leaves and loyalty points | 1 |
771,419 | 27,084,214,395 | IssuesEvent | 2023-02-14 15:55:12 | morpho-dao/morpho-rewards | https://api.github.com/repos/morpho-dao/morpho-rewards | closed | Distribution mechanism V2 | 🔥 high priority 🦋 feature | ## Feature Request
**Is your feature request related to a problem? Please describe.**
The current mechanism distributes rewards using an index for each market side (supply/borrow side of one given market).
To distribute the total MORPHO emitted in one market side, we have to use a normalizer, i.e., to know the proportion of one given user between 2 market updates.
For a pool-only model, we can easily use scaled balances in pool units, which are constant between 2 market updates. With that, if Alice has 100 in pool units, and the total market size is 300 pool Units, Alice has 1/3 of the rewards during this period.
However, there are two balances on Morpho, onPool and inP2P, and we have only one speed.
So the V1 of the distribution mechanism uses underlying balances.
In a nutshell, we sum the underlying balances of the users at the moment of the interaction to distribute rewards. So if Alice supplies 100 USDC on m-aave, and the Morpho market size is 300 USDC, she earns 1/3 of the rewards. However, if Alice doesn't deposit again for one year, we still consider her initial balance, without the interest earned.
Anyone can game the mechanism by supplying just 1 wei in a given market, which adds the generated interest to the user's balance.
This mechanism worked well at the beginning of the protocol, but by now a lot of interest is being generated that earns no rewards.
<!-- A clear and concise description of what the problem is. Ex. I have an issue when [...] -->
**Describe the solution you'd like**
As said before, Morpho is computing two scaled balances that are static between 2 market updates: onPool in pool units and inP2P in P2P units. However, if we want to distribute to one given market (pool market or p2p market), we have to have two different MORPHO rates: the number of MORPHO per second for matched users and the number of MORPHO per second for people on the pool.
How to compute p2pSpeed and poolSpeed?
Between two market transactions, the matched percentage is assumed constant. This is not strictly true, since the rates and amounts change, but the approximation is acceptable at the timescale of a single market interaction period.
So, considering that, we can define two rates from the market rate:
p2pSupplyRate = supplyMatchingEfficiency * marketRate
poolSupplyRate = (1 - supplyMatchingEfficiency) * marketRate
And then distribute over the scaled balances.
NB: we want the subgraph to give the same value for already distributed rewards and start the new mechanism only for future epochs.
I suggest using the new mechanism at the current epoch (age 3 epoch 2), if the code is validated on time, i.e. from timestamp **1675263600**.
The mechanism is to be refactored both in the subgraph and in the different scripts
<!-- A clear and concise description of what you want to happen. Add any considered drawbacks. -->
**Describe alternatives you've considered**
- We thought about a rate described by an index to have an accurate distribution between two market updates, but the approximation of the mechanism described below is acceptable.
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Are you willing to resolve this issue by submitting a Pull Request?
This issue is going to be solved in 2 PR:
- [ ] One for updating the subgraph
- [ ] One for updating the script and adding tests
<!--
Remember that first-time contributors are welcome! 🙌
-->
- [x] Yes, I have the time, and I know how to start.
<!--
👋 Have a great day and thank you for the feature request!
--> | 1.0 | Distribution mechanism V2 - ## Feature Request
**Is your feature request related to a problem? Please describe.**
The current mechanism distributes rewards using an index for each market side (supply/borrow side of one given market).
To distribute the total MORPHO emitted in one market side, we have to use a normalizer, i.e., to know the proportion of one given user between 2 market updates.
For a pool-only model, we can easily use scaled balances in pool units, which are constant between 2 market updates. With that, if Alice has 100 in pool units, and the total market size is 300 pool Units, Alice has 1/3 of the rewards during this period.
However, there are two balances on Morpho, onPool and inP2P, and we have only one speed.
So the V1 of the distribution mechanism uses underlying balances.
In a nutshell, we sum the underlying balances of the users at the moment of the interaction to distribute rewards. So if Alice supplies 100 USDC on m-aave, and the Morpho market size is 300 USDC, she earns 1/3 of the rewards. However, if Alice doesn't deposit again for one year, we still consider her initial balance, without the interest earned.
Anyone can game the mechanism by supplying just 1 wei in a given market, which adds the generated interest to the user's balance.
This mechanism worked well at the beginning of the protocol, but by now a lot of interest is being generated that earns no rewards.
<!-- A clear and concise description of what the problem is. Ex. I have an issue when [...] -->
**Describe the solution you'd like**
As said before, Morpho is computing two scaled balances that are static between 2 market updates: onPool in pool units and inP2P in P2P units. However, if we want to distribute to one given market (pool market or p2p market), we have to have two different MORPHO rates: the number of MORPHO per second for matched users and the number of MORPHO per second for people on the pool.
How to compute p2pSpeed and poolSpeed?
Between 2 market transactions, the matched percent amount is constant. This assumption needs to be revised since the rates and amounts are different, but this approximation can be assumed at the magnitude of a market interaction time period.
So, considering that, we can define two rates from the market rate:
p2pSupplyRate = supplyMatchingEfficiency * marketRate
poolSupplyRate = (1 - supplyMatchingEfficiency) * marketRate
And then distribute over the scaled balances.
NB: we want the subgraph to give the same value for already distributed rewards and start the new mechanism only for future epochs.
I suggest using the new mechanism at the current epoch (age 3 epoch 2), if the code is validated on time, i.e. from timestamp **1675263600**.
The mechanism is to be refactored both in the subgraph and in the different scripts
<!-- A clear and concise description of what you want to happen. Add any considered drawbacks. -->
**Describe alternatives you've considered**
- We thought about a rate described by an index to have an accurate distribution between two market updates, but the approximation of the mechanism described below is acceptable.
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Are you willing to resolve this issue by submitting a Pull Request?
This issue is going to be solved in 2 PR:
- [ ] One for updating the subgraph
- [ ] One for updating the script and adding tests
<!--
Remember that first-time contributors are welcome! 🙌
-->
- [x] Yes, I have the time, and I know how to start.
<!--
👋 Have a great day and thank you for the feature request!
--> | priority | distribution mechanism feature request is your feature request related to a problem please describe the current mechanism distributes rewards using an index for each market side supply borrow side of one given market to distribute the total morpho emitted in one market side we have to use a normalizer i e to know the proportion of one given user between market updates for a pool only model we can easily use scaled balances in pool units which are constant between market updates with that if alice has in pool units and the total market size is pool units alice has of the rewards during this period however there is two balances on morpho onpool and and we have only one speed so the of the distribution mechanism uses underlying balances in a nutshell we are summing the underlying balance of the users at the moment of the interaction to distribute rewards so if alice supplies usdc on m aave and the morpho market size is usdc she earns of the rewards however if alice doesn t deposit for one year we are still considering his initial balance without the interest earned and anyone can game the mechanism by only supplying wei in one given market adding the interests generated to the user balance this mechanism worked well at the beginning of the protocol but now many interests are generated that are not generating rewards describe the solution you d like as said before morpho is computing two scaled balances that are static between market updates onpool in pool units and in units however if we want to distribute to one given market pool market or market we have to have two different morpho rates the number of morpho per second for matched users and the number of morpho per second for people on the pool how to compute and poolspeed between market transactions the matched percent amount is constant this assumption needs to be revised since the rates and amounts are different but this approximation can be assumed at the magnitude of a market interaction time 
period so considering that we can define two rates from the market rate supplymatchingefficiency marketrate poolsupplyrate supplymatchingefficiency marketrate and then distribute over the scaled balances nb we want the subgraph to give the same value for already distributed rewards and start the new mechanism only for future epochs i suggest using the new mechanism at the current epoch age epoch if the code is validated on time i e from timestamp the mechanism is to be refactored both in the subgraph and in the different scripts describe alternatives you ve considered we thought about a rate described by an index to have an accurate distribution between two market updates but the approximation of the mechanism described below is acceptable are you willing to resolve this issue by submitting a pull request this issue is going to be solved in pr one for updating the subgraph one for updating the script and adding tests remember that first time contributors are welcome 🙌 yes i have the time and i know how to start 👋 have a great day and thank you for the feature request | 1 |
409,055 | 11,955,972,585 | IssuesEvent | 2020-04-04 07:56:40 | ProductBoat/Registry2.0 | https://api.github.com/repos/ProductBoat/Registry2.0 | opened | Aki registry --> examination --> | High Priority bug | * Open chrome browser -->enter URL: http://qa.aprenalregistry.com
--> login page should be displaying.
Enter valid user name, and password
click on the login button.
* Test steps
1. After login, the dashboard page is displayed. Click on the registry and select the AKI registry --> Examination tab in the side menu --> then hide the BMI field; the 'bmi' value is still showing in the grid.
https://www.screencast.com/t/ZphdOB8Xc >> https://www.screencast.com/t/nKlCyQbPu6l
322,833 | 9,829,200,057 | IssuesEvent | 2019-06-15 18:29:32 | NCIOCPL/cgov-digital-platform | https://api.github.com/repos/NCIOCPL/cgov-digital-platform | closed | Migrated Content - Missing/Incorrect Card Titles and Themes | High priority Migration bug | HP CTHPs – Statistics cards are not displaying CTHP Card Title and CTHP Card Theme
http://ncigovcdode176.prod.acquia-sites.com/types/breast/hp
http://ncigovcdode176.prod.acquia-sites.com/types/lung/hp
Patient & HP CTHPs — Some internal feature cards are not displaying the CTHP Card Title and CTHP Card Theme
http://ncigovcdode176.prod.acquia-sites.com/types/breast
http://ncigovcdode176.prod.acquia-sites.com/types/lung
Incorrect Card Theme Displaying (After Treatment Ends)
http://ncigovcdode176.prod.acquia-sites.com/types/breast
Statistics Cards are not displaying the CTHP Card Title and CTHP Card Theme
498,133 | 14,401,570,759 | IssuesEvent | 2020-12-03 13:53:59 | enso-org/enso | https://api.github.com/repos/enso-org/enso | closed | Integrate the Logging Service with the Project Manager | Category: Tooling Change: Non-Breaking Difficulty: Core Contributor Priority: High Type: Enhancement | ### Summary
<!--
- A summary of the task.
-->
When the logging service (#1031) is implemented, it should also be integrated with the project manager.
As the project-manager is started independently of the launcher, it should start its own logging service, but the logs should be available in the same file structure in the distribution (possibly under a separate directory).
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
- Our logging is unified across components.
### Specification
<!--
- Detailed requirements for the feature.
- The performance requirements for the feature.
-->
- [ ] The project manager uses the new logger for all of its logging.
- [ ] The project manager can start its own instance of logging service.
- [ ] The protocol message querying logging service setup is implemented.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
- [ ] Manually check that relevant logs are reported in the logging service.
781,449 | 27,437,685,602 | IssuesEvent | 2023-03-02 08:48:33 | frequenz-floss/frequenz-sdk-python | https://api.github.com/repos/frequenz-floss/frequenz-sdk-python | closed | mypy checks are broken | part:tooling priority:high type:bug | ### What happened?
It turns out `mypy` has not been running on the sdk codebase for the past 3 months due to a bug in the noxfile, in: https://github.com/frequenz-floss/frequenz-sdk-python/blob/70d9d43f7f37807baf98e6896a330588b24673da/noxfile.py#L194-L198
### What did you expect instead?
mypy to run as part of the CI.
### Affected version(s)
_No response_
### Affected part(s)
Build script, CI, dependencies, etc. (part:tooling)
### Extra information
_No response_
463,365 | 13,263,845,111 | IssuesEvent | 2020-08-21 01:49:48 | EarthJournalismNetwork/jeo-theme | https://api.github.com/repos/EarthJournalismNetwork/jeo-theme | closed | [ Opinion template ] By <author name> below title should include a author's photo if the post has category = opinion. | priority: high | **Testes**

**DESKTOP**
Post = OPINION
- [x] post with a single author with a photo

- [x] post with a single author without a photo

- [x] post with multiple authors: the photo must not be shown for any of them.

**MOBILE**
Post = OPINION
- [x] post with a single author with a photo

- [x] post with a single author without a photo

- [x] post with multiple authors: the photo must not be shown for any of them.

**OTHER TEST**
- [x] Verify that other post types don't show author's photo.

| 1.0 | [ Opinion template ] By <author name> below title should include a author's photo if the post has category = opinion. - **Testes**

**DESKTOP**
Post = OPINION
- [x] post com um único autor com foto

- [x] post com um único autor sem foto

- [x] post com múltiples autores: Não deve ser mostrada a foto para nenhum deles.

**MOBILE**
Post = OPINION
- [x] post com um único autor com foto

- [x] post com um único autor sem foto

- [x] post com múltiples autores: Não deve ser mostrada a foto para nenhum deles.

**OTHER TEST**
- [x] Verify that other post types don't show author's photo.

| priority | by below title should include a author s photo if the post has category opinion testes desktop post opinion post com um único autor com foto post com um único autor sem foto post com múltiples autores não deve ser mostrada a foto para nenhum deles mobile post opinion post com um único autor com foto post com um único autor sem foto post com múltiples autores não deve ser mostrada a foto para nenhum deles other test verify that other post types don t show author s photo | 1 |
422,312 | 12,269,791,635 | IssuesEvent | 2020-05-07 14:34:40 | ProductBoat/Registry2.0 | https://api.github.com/repos/ProductBoat/Registry2.0 | opened | Transplant registry --> pre transplant clinical info --> procedure | High Priority bug | * Open chrome browser -->enter URL: http://qa.aprenalregistry.com
--> login page should be displaying.
Enter valid user name, and password
click on the login button.
* Test steps
1. After login, the dashboard page is displayed. Click on the registry and select the Transplant registry; the respective page is displayed. Then click on the pre transplant clinical info --> select the procedure --> click on kidney biopsy --> **click on the field customization icon; the "kidney biopsy" field is not showing in Kidney Biopsy Field Customization.
--> In kidney biopsy, hid the fields and clicked cancel, but then the tab changes to the dialysis information.
--> Hid the fields in field customization, but the fields are not hidden.**
https://www.screencast.com/t/uO2wKCzj >> https://www.screencast.com/t/WTEEizjg
137,005 | 5,291,536,210 | IssuesEvent | 2017-02-08 22:47:26 | rtrlib/rtrlib | https://api.github.com/repos/rtrlib/rtrlib | closed | Incorrect validation results | priority:high type:bug | For prefix 1.9.23.0/24 there are two ROAs:
```
ASN: 4788
Prefix: 1.9.0.0/16
Max length: 24
```
```
ASN: 65120
Prefix: 1.9.23.0/24
Max length: 24
```
I used `'pfx_table_validate_r(..)' ` to validate this:
```
ASN: 4788
Prefix: 1.9.23.0/24
```
And the result is `BGP_PFXV_STATE_INVALID`, but it should be `BGP_PFXV_STATE_VALID`.
I've confirmed this using the cli-validator:
```
rtrlib/tools $ ./cli-validator rpki-validator.realmv6.org 8282
Arguments required: IP Mask ASN
1.9.23.0 24 4788 //Input
1.9.23.0 24 4788|65120 1.9.23.0 24 24|2 //Output, 2 means INVALID
1.9.23.0 24 65120 //Input
1.9.23.0 24 65120|65120 1.9.23.0 24 24|0 //Output, 0 means VALID
```
I checked the ROAs at http://rpki-validator.realmv6.org:9090/roas (search for 1.9.23.0/24)
It looks like the validation code is only checking the ROA with the most specific prefix match, and if that ASN doesn't match it says INVALID.
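The expected behaviour can be sketched like this (illustrative Python, not rtrlib's C implementation; `origin_validate` and the ROA tuples are names invented for this sketch). Per RFC 6811, a route is VALID as soon as any covering ROA matches both the origin ASN and the max length — not just the most specific one:

```python
import ipaddress

def origin_validate(roas, prefix, asn):
    """RFC 6811 origin validation sketch: VALID if ANY covering ROA
    matches the origin ASN within its max length; INVALID only when
    covering ROAs exist but none matches; NOT_FOUND otherwise."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_asn, roa_prefix, max_length in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # this ROA covers the announced prefix
            if roa_asn == asn and net.prefixlen <= max_length:
                return "VALID"
    return "INVALID" if covered else "NOT_FOUND"
```

With the two ROAs above, `origin_validate(roas, "1.9.23.0/24", 4788)` returns VALID via the 1.9.0.0/16 ROA, which is the result the cli-validator should have produced.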
404,588 | 11,859,919,225 | IssuesEvent | 2020-03-25 14:08:18 | nhoizey/images-responsiver | https://api.github.com/repos/nhoizey/images-responsiver | closed | Make sure image settings don't leak from one image to another | priority: high type: bug | There's an issue with image width in example 03
400,896 | 11,782,459,670 | IssuesEvent | 2020-03-17 02:01:06 | Brian393/petropolis | https://api.github.com/repos/Brian393/petropolis | closed | When zooming out of Tar Sands, color is lost | bug high priority | https://map.environmentalobservatory.net/#/petropolis/tarsands
Click "-" three times and you lose all the colors.
521,185 | 15,104,361,394 | IssuesEvent | 2021-02-08 11:32:15 | georchestra/mapstore2-cadastrapp | https://api.github.com/repos/georchestra/mapstore2-cadastrapp | closed | Popup mouse hover | CADASTRAPP Contract #2 Priority: High | When you activate the cadastrapp tool, automatically on mouse move on the map, you can see a popup describing some info about the particlelle below the mouse. the floating Identify popup available in MapStore can be used to replicate this functionality.
 | 1.0 | Popup mouse hover - When you activate the cadastrapp tool, automatically on mouse move on the map, you can see a popup describing some info about the particlelle below the mouse. the floating Identify popup available in MapStore can be used to replicate this functionality.
 | priority | popup mouse hover when you activate the cadastrapp tool automatically on mouse move on the map you can see a popup describing some info about the particlelle below the mouse the floating identify popup available in mapstore can be used to replicate this functionality | 1 |
671,822 | 22,777,467,171 | IssuesEvent | 2022-07-08 15:46:57 | cds-snc/notification-planning | https://api.github.com/repos/cds-snc/notification-planning | closed | Create better understanding of how to format a spreadsheet. | High Priority | Haute priorité UX Refined | # Description
As a new or first-time Sender using templates with variables or doing bulk manual sending, I need to be able to create a good spreadsheet that matches my template so that I can have confidence about sending many emails.
WHY are we building?
We learned through usability testing (2022/01) that participants easily created variables, but had trouble linking the variables/template to the spreadsheet.
WHAT are we building?
A map to identify the un/happy paths
Probably iterating on the designs without creating a new feature
VALUE created by our solution
Senders save time and feel capable because they understand how to format a spreadsheet earlier on in the process and they can prepare their spreadsheet once.
# Acceptance Criteria** (Definition of done)
- [x] We identify what the un/happy paths are from gathering the information to sending a bulk email
- [ ] A design iteration in both English and French addresses the unhappy path
- [ ] Design review (with the team)
- [x] Content & interaction critique (incl. a11y)
- [x] The design iteration helps Senders comply with the Terms of Use items related to the spreadsheet
- [ ] Add story/hypothesis to Research Repo
- [x] Link research insight back to hypothesis from Epic or Objective in Airtable
- [ ] Once change/fix/feature is implemented, mark insight as "resolved" in Airtable
- [ ] Once change/fix/feature is implements, link insight to design artifacts (Figma) in Airtable
Given a Sender wants to do a bulk send with variables, when they get to the part in the flow and upload a spreadsheet, they already have a good spreadsheet that matches their template and they don't have to re-do their spreadsheet or they're not surprised.
[The guide section on variables was clear for some participants and many participants didn't have issues adding the parentheses to create a variable in the template. However, for participants who aren't as experienced in tech or spreadsheets, they wanted to find more information on formatting the spreadsheet and how the variables in the template relate to the spreadsheet.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwQUX3tUxbMxYS7C/recKhMerdcHixaXY2?blocks=hide)
[Without information about the spreadsheet on the guide, some initially misunderstood that the recipient information / variables would be input into Notify directly without uploading a spreadsheet. Assembling and formatting the data in a spreadsheet is an additional step that seemed unexpected initially](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwQUX3tUxbMxYS7C/recBBpp2E3BdXUEe1?blocks=hide)
[Didn't understand/catch that the name of the variable and the name of the spreadsheet column would need to match.
](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/reckf5VnPp8C04jUO?blocks=hide)
[One participant wondered if there was a limit to how many contacts the mailing list could include.
](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/rec2LB880BPaMGVUX?blocks=hide)
[The table explanation screen aided participants in understanding/confirming their understanding of how the variables apply to a spreadsheet. One participant who was expecting the mailing list to be input into Notify didn't find this page helpful.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/recaeKS1S9DlxH9fl?blocks=hide)
[Participants wondered about potential conflicts between their system and Notify for exporting contact info from their system and importing into GC Notify](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/reccM1dhJUGPgwPnH?blocks=hide)
[Seemed more familiar with Excel than CSV](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/rechjL7Zu95dUEeHU?blocks=hide)
[Participants were able to complete their tasks. For some participants, it wasn't a completely intuitive process, but they were able to complete it.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/rechtcFa7BOpqeP4u?blocks=hide)
[Unclear guidance on how to use “variables.” In the personalization guide, it would be helpful to connect the “variable” section back to the spreadsheet. Make that connection clearer.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwQUX3tUxbMxYS7C/reclsqBrM3842zWMU?blocks=hide)
[Most participants understood they would click 'add recipients' as a next step. However, for participants who weren't clear on how GC Notify was different from their email client, they didn't expect to upload the recipients spreadsheet file when clicking the "Add recipients" button](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/recmzf3LXWYDZEqr4?blocks=hide) | 1.0 | Create better understanding of how to format a spreadsheet. - # Description
As a new or first-time Sender using templates with variables or doing bulk manual sending, I need to be able to create a good spreadsheet that matches my template so that I can have confidence about sending many emails.
WHY are we building?
We learned through usability testing (2022/01) that participants easily created variables, but had trouble linking the variables/template to the spreadsheet.
WHAT are we building?
A map to identify the un/happy paths
Probably iterating on the designs without creating a new feature
VALUE created by our solution
Senders save time and feel capable because they understand how to format a spreadsheet earlier on in the process and they can prepare their spreadsheet once.
# Acceptance Criteria** (Definition of done)
- [x] We identify what the un/happy paths are from gathering the information to sending a bulk email
- [ ] A design iteration in both English and French addresses the unhappy path
- [ ] Design review (with the team)
- [x] Content & interaction critique (incl. a11y)
- [x] The design iteration helps Senders comply with the Terms of Use items related to the spreadsheet
- [ ] Add story/hypothesis to Research Repo
- [x] Link research insight back to hypothesis from Epic or Objective in Airtable
- [ ] Once change/fix/feature is implemented, mark insight as "resolved" in Airtable
- [ ] Once change/fix/feature is implements, link insight to design artifacts (Figma) in Airtable
Given a Sender wants to do a bulk send with variables, when they get to the part in the flow and upload a spreadsheet, they already have a good spreadsheet that matches their template and they don't have to re-do their spreadsheet or they're not surprised.
[The guide section on variables was clear for some participants and many participants didn't have issues adding the parentheses to create a variable in the template. However, for participants who aren't as experienced in tech or spreadsheets, they wanted to find more information on formatting the spreadsheet and how the variables in the template relate to the spreadsheet.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwQUX3tUxbMxYS7C/recKhMerdcHixaXY2?blocks=hide)
[Without information about the spreadsheet on the guide, some initially misunderstood that the recipient information / variables would be input into Notify directly without uploading a spreadsheet. Assembling and formatting the data in a spreadsheet is an additional step that seemed unexpected initially](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwQUX3tUxbMxYS7C/recBBpp2E3BdXUEe1?blocks=hide)
[Didn't understand/catch that the name of the variable and the name of the spreadsheet column would need to match.
](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/reckf5VnPp8C04jUO?blocks=hide)
[One participant wondered if there was a limit to how many contacts the mailing list could include.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/rec2LB880BPaMGVUX?blocks=hide)
[The table explanation screen aided participants in understanding/confirming their understanding of how the variables apply to a spreadsheet. One participant who was expecting the mailing list to be input into Notify didn't find this page helpful.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/recaeKS1S9DlxH9fl?blocks=hide)
[Participants wondered about potential conflicts between their system and Notify for exporting contact info from their system and importing into GC Notify](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/reccM1dhJUGPgwPnH?blocks=hide)
[Seemed more familiar with Excel than CSV](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/rechjL7Zu95dUEeHU?blocks=hide)
[Participants were able to complete their tasks. For some participants, it wasn't a completely intuitive process, but they were able to complete it.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/rechtcFa7BOpqeP4u?blocks=hide)
[Unclear guidance on how to use “variables.” In the personalization guide, it would be helpful to connect the “variable” section back to the spreadsheet. Make that connection clearer.](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwQUX3tUxbMxYS7C/reclsqBrM3842zWMU?blocks=hide)
[Most participants understood they would click 'add recipients' as a next step. However, for participants who weren't clear on how GC Notify was different from their email client, they didn't expect to upload the recipients spreadsheet file when clicking the "Add recipients" button](https://airtable.com/appWwAmHwDLtpIyko/tbl38n7ofWYBuezFc/viwEUFgBd0KLqH7oo/recmzf3LXWYDZEqr4?blocks=hide)
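The template-to-spreadsheet matching these insights keep circling can be sketched as a small check. This is a hypothetical helper, assuming the double-parentheses `((variable))` syntax the guide describes; the column and variable names are made up for illustration:

```python
import csv
import io
import re

def template_variables(template: str) -> set:
    """Collect ((variable)) placeholders from a message template."""
    return {m.strip().lower() for m in re.findall(r"\(\(\s*([^)]+?)\s*\)\)", template)}

def missing_columns(template: str, spreadsheet_csv: str) -> set:
    """Return template variables that have no matching spreadsheet column."""
    header = next(csv.reader(io.StringIO(spreadsheet_csv)))
    columns = {h.strip().lower() for h in header}
    return template_variables(template) - columns

template = "Hello ((first name)), your file number is ((file number))."
good_sheet = "email address,first name,file number\na@example.com,Ana,123\n"
bad_sheet = "email address,name\na@example.com,Ana\n"

print(missing_columns(template, good_sheet))  # empty set: every variable has a column
print(missing_columns(template, bad_sheet))   # variables with no matching column
```

A check like this is what would let the upload step tell Senders exactly which columns are missing, instead of leaving them to eyeball the match.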
295,677 | 9,100,239,575 | IssuesEvent | 2019-02-20 07:55:50 | antonwilc0x/simtactics | https://api.github.com/repos/antonwilc0x/simtactics | opened | Rewrite Iff chunks | complexity: unknown in progress investigation priority: high | Some Iff chunks naturally rely on MonoGame for texture loading, such as BMP. The current IO library is a direct adaption of FreeSO's and the parts that reference MonoGame (thankfully it isn't much) remain the same because I'm still testing the engine-independent parts, like the UIS interrupter. However, in order for these Iff chunks to work, the ImageLoader needs to be partially rewritten since that naturally makes use of MonoGame too right now.
### List (incomplete)
- [ ] ImageLoader
- [ ] BMP
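The decoupling the issue describes — chunk decoders that hand back plain pixel data instead of engine textures — can be sketched like this. This is a language-agnostic illustration in Python, not the actual C# code; all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RawImage:
    """Engine-independent decode result; the engine layer turns this into a texture."""
    width: int
    height: int
    pixels: bytes  # RGBA, row-major

def decode_bmp_stub(width: int, height: int) -> RawImage:
    """Stand-in for a BMP chunk decoder; real decoding logic omitted."""
    return RawImage(width, height, b"\x00" * (width * height * 4))

class MonoGameAdapter:
    """Only this thin adapter knows about the rendering engine."""
    def to_texture(self, image: RawImage):
        # Placeholder for Texture2D creation from raw pixel data
        return ("texture", image.width, image.height)

img = decode_bmp_stub(2, 2)
print(MonoGameAdapter().to_texture(img))  # → ('texture', 2, 2)
```

With this split, the Iff chunk decoders and the ImageLoader stay testable without MonoGame, and only the adapter changes per engine.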
390,654 | 11,551,307,622 | IssuesEvent | 2020-02-19 01:02:52 | rich-iannone/pointblank | https://api.github.com/repos/rich-iannone/pointblank | opened | Add print method for the x-list | Difficulty: ② Intermediate Effort: ③ High Priority: ③ High Type: ★ Enhancement | As a way to make the x-list object a bit more visually appealing in the console (and less annoying), a print method should be added.
612,568 | 19,025,904,341 | IssuesEvent | 2021-11-24 03:25:49 | AgoraCloud/ui | https://api.github.com/repos/AgoraCloud/ui | closed | Small bugs caught in our weekly demo | bug priority:high | - [x] wiki page causes react error
- [x] edit project name doesn't update project model
- [x] EditDeployment
- [x] change edit-> update
- [x] buttons should go to workspace home
- [x] add ScalingMethod to DeploymentCard
- [x] if there are only favorited deployments / only non-favorited deployments don't show the accordion
- [x] workspace should be able to be created with only a name provided (currently you have to include resources)
- [x] edit deployment should only be able to select higher versions
809,500 | 30,195,631,658 | IssuesEvent | 2023-07-04 20:35:39 | susanssky/careless-whisper | https://api.github.com/repos/susanssky/careless-whisper | closed | Load Transcripts (authenticated user) | priority: high | This task is to make sure that the information being displayed (work done in issue #10) can now only be seen if the user has been authenticated. If a non-authenticated user navigates to the browse transcripts page, it should show nothing and display a message like "Please login to see transcripts".
209,931 | 7,181,346,287 | IssuesEvent | 2018-02-01 04:22:12 | wso2/message-broker | https://api.github.com/repos/wso2/message-broker | closed | Implement exchanges REST API | Complexity/Moderate Module/broker-core Priority/High Severity/Major Type/New Feature | ### Description
Implement /exchanges REST API to create/delete exchanges bindings from the broker.
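A minimal sketch of the create/delete semantics such an endpoint would expose. These are hypothetical shapes for illustration only; the broker's actual routes, payloads, and status codes are not specified here:

```python
class ExchangeStore:
    """In-memory stand-in for the broker's exchange registry behind a REST handler."""

    def __init__(self):
        self._exchanges = {}

    def create(self, name: str, exchange_type: str = "direct") -> int:
        """PUT /exchanges/{name} (hypothetical): idempotency decision surfaces as 409."""
        if name in self._exchanges:
            return 409  # conflict: exchange already exists
        self._exchanges[name] = exchange_type
        return 201

    def delete(self, name: str) -> int:
        """DELETE /exchanges/{name} (hypothetical)."""
        if name not in self._exchanges:
            return 404
        del self._exchanges[name]
        return 200

store = ExchangeStore()
print(store.create("amq.direct"))  # → 201
print(store.create("amq.direct"))  # → 409
print(store.delete("amq.direct"))  # → 200
```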
762,408 | 26,717,717,161 | IssuesEvent | 2023-01-28 18:43:30 | FRCTeam3255/Robot2023 | https://api.github.com/repos/FRCTeam3255/Robot2023 | opened | Intake Cube Command | Intake Arm High Priority Collector | - Deploy collector
- Move intake to collector
- Spin intake
- Spin collector
- Stop collector and intake via switch (back up to proximity sensor)
- Detect object via color sensor
- Move arm mid shelf
- Retract collector
- Set LEDs | 1.0 | Intake Cube Command - - Deploy collector
- Move intake to collector
- Spin intake
- Spin collector
- Stop collector and intake via switch (back up to proximity sensor)
- Detect object via color sensor
- Move arm mid shelf
- Retract collector
- Set LEDs | priority | intake cube command deploy collector move intake to collector spin intake spin collector stop collector and intake via switch back up to proximity sensor detect object via color sensor move arm mid shelf retract collector set leds | 1 |
682,655 | 23,351,974,417 | IssuesEvent | 2022-08-10 01:39:07 | meshery/meshery | https://api.github.com/repos/meshery/meshery | closed | [CI] Cypress fails to install in Docker build workflow | issue/stale area/ci priority/high | #### Current Behavior
Please see https://github.com/meshery/meshery/runs/6850509428?check_suite_focus=true#step:4:1508 as an example.
#### Logs
```
#31 [provider-ui 3/3] RUN cd provider-ui; npm install --only=production; npm run build && npm run export; mv out /
#31 sha256:f65a797caf57b2e41f8fdd9dfe78f49e30ce938036fe042fb791ef3600a06b50
#31 60.52 npm notice
#31 60.52 npm notice New minor version of npm available! 8.5.5 -> 8.12.1
#31 60.52 npm notice Changelog: <https://github.com/npm/cli/releases/tag/v8.12.1>
#31 60.52 npm notice Run `npm install -g npm@8.12.1` to update!
#31 60.52 npm notice
#31 60.52 npm ERR! code 1
#31 60.53 npm ERR! path /provider-ui/node_modules/cypress
#31 60.53 npm ERR! command failed
#31 60.53 npm ERR! command sh -c node index.js --exec install
#31 60.53 npm ERR! Installing Cypress (version: 10.0.3)
#31 60.53 npm ERR!
#31 60.53 npm ERR! [STARTED] Task without title.
#31 60.53 npm ERR! The Cypress App could not be downloaded.
#31 60.53 npm ERR!
#31 60.53 npm ERR! Does your workplace require a proxy to be used to access the Internet? If so, you must configure the HTTP_PROXY environment variable before downloading Cypress. Read more: https://on.cypress.io/proxy-configuration
#31 60.53 npm ERR!
#31 60.53 npm ERR! Otherwise, please check network connectivity and try again:
#31 60.53 npm ERR!
#31 60.53 npm ERR! ----------
#31 60.53 npm ERR!
#31 60.53 npm ERR! URL: https://download.cypress.io/desktop/10.0.3?platform=linux&arch=x64
#31 60.53 npm ERR! Error: Failed downloading the Cypress binary.
#31 60.53 npm ERR! Response code: 403
#31 60.53 npm ERR! Response message: Forbidden
#31 60.53 npm ERR!
#31 60.53 npm ERR! ----------
#31 60.53 npm ERR!
#31 60.53 npm ERR! Platform: linux-x64 (Debian - 10.12)
#31 60.53 npm ERR! Cypress Version: 10.0.3
#31 60.53 npm ERR! [FAILED] The Cypress App could not be downloaded.
#31 60.54 npm ERR! [FAILED]
#31 60.54 npm ERR! [FAILED] Does your workplace require a proxy to be used to access the Internet? If so, you must configure the HTTP_PROXY environment variable before downloading Cypress. Read more: https://on.cypress.io/proxy-configuration
#31 60.54 npm ERR! [FAILED]
#31 60.54 npm ERR! [FAILED] Otherwise, please check network connectivity and try again:
#31 60.54 npm ERR! [FAILED]
#31 60.54 npm ERR! [FAILED] ----------
#31 60.54 npm ERR! [FAILED]
#31 60.54 npm ERR! [FAILED] URL: https://download.cypress.io/desktop/10.0.3?platform=linux&arch=x64
#31 60.54 npm ERR! [FAILED] Error: Failed downloading the Cypress binary.
#31 60.54 npm ERR! [FAILED] Response code: 403
#31 60.54 npm ERR! [FAILED] Response message: Forbidden
#31 60.54 npm ERR! [FAILED]
#31 60.54 npm ERR! [FAILED] ----------
#31 60.54 npm ERR! [FAILED]
#31 60.54 npm ERR! [FAILED] Platform: linux-x64 (Debian - 10.12)
#31 60.54 npm ERR! [FAILED] Cypress Version: 10.0.3
#31 60.54
#31 60.54 npm ERR! A complete log of this run can be found in:
#31 60.54 npm ERR! /root/.npm/_logs/2022-06-12T14_27_17_086Z-debug-0.log
```
---
#### Contributor [Guides](https://docs.meshery.io/project/contributing) and Resources
- 🛠 [Meshery Build & Release Strategy](https://docs.meshery.io/project/build-and-release)
- 📚 [Instructions for contributing to documentation](https://github.com/meshery/meshery/blob/master/CONTRIBUTING.md#documentation-contribution-flow)
- Meshery documentation [site](https://docs.meshery.io/) and [source](https://github.com/meshery/meshery/tree/master/docs)
- 🎨 Wireframes and designs for Meshery UI in [Figma](https://www.figma.com/file/SMP3zxOjZztdOLtgN4dS2W/Meshery-UI)
- 🙋🏾🙋🏼 Questions: [Layer5 Discussion Forum](https://discuss.layer5.io) and [Layer5 Community Slack](http://slack.layer5.io)
160,762 | 6,101,988,338 | IssuesEvent | 2017-06-20 15:36:09 | kuzzleio/kuzzle-backoffice | https://api.github.com/repos/kuzzleio/kuzzle-backoffice | closed | Edit the user mapping does not work as expected | bug priority-high | When trying to save without changes, there is no feedback.
When I add a new field, the save button empties the mapping textarea and leaves it with `{}` inside.
I get the following error in the browser console:
```
Unhandled rejection InternalError: [illegal_argument_exception] mapper [content] of different type, current_type [text], merged_type [ObjectMapper]
at respond (/var/app/node_modules/elasticsearch/src/lib/transport.js:295:15)
at checkRespForFailure (/var/app/node_modules/elasticsearch/src/lib/transport.js:254:7)
at HttpConnector.<anonymous> (/var/app/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)
at IncomingMessage.bound (/var/app/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:974:12)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickDomainCallback (internal/process/next_tick.js:122:9)
```
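The underlying failure is Elasticsearch refusing to merge a field whose type changes: `content` was previously mapped as `text`, and the new mapping treats it as an object. A toy illustration of that merge rule in plain Python (this mimics the behavior, it is not the Elasticsearch client):

```python
def merge_mappings(current: dict, incoming: dict) -> dict:
    """Mimic Elasticsearch's rule: a field's type cannot change during a mapping merge."""
    merged = dict(current)
    for field, new_type in incoming.items():
        old_type = merged.get(field)
        if old_type is not None and old_type != new_type:
            raise ValueError(
                f"mapper [{field}] of different type, "
                f"current_type [{old_type}], merged_type [{new_type}]"
            )
        merged[field] = new_type  # new fields merge cleanly
    return merged

current = {"content": "text"}
try:
    merge_mappings(current, {"content": "object"})
except ValueError as e:
    print(e)  # → mapper [content] of different type, current_type [text], merged_type [object]
```

This is why saving the edited mapping fails server-side: the fix is either to reindex with the new type or to keep the field's existing type, not to re-submit the conflicting mapping.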
140,931 | 5,426,262,847 | IssuesEvent | 2017-03-03 09:33:49 | wp-property/wp-property | https://api.github.com/repos/wp-property/wp-property | opened | Convert Property Search to use WP_Query | complexity/high priority/high | The legacy `WPP_F::get_properties` has old logic that needs to be replaced by a standard `WP_Query` handler. This will allow us to utilize our Elasticsearch service for all queries.
| 1.0 | Convert Property Search to use WP_Query - The legacy `WPP_F::get_properties` has old logic that needs to be replaced by a standard `WP_Query` handler. This will allow us to utilize our Elasticsearch service for all queries.
| priority | convert property search to use wp query the legacy wpp f get properties has old logic that needs to be replaced by a standard wp query handler this will allow us to utilize our elasticsearch service for all queries | 1 |
546,970 | 16,022,854,162 | IssuesEvent | 2021-04-21 04:01:10 | Seneca-PlaNA/body-contouring-clinic | https://api.github.com/repos/Seneca-PlaNA/body-contouring-clinic | closed | Create appointment with offer can't save | bug front-end priority: high | 
After I select the technician, the error message is still there, and the appointment cannot be saved. | 1.0 | Create appointment with offer can't save - 
After I select the technician, the error message is still there, and the appointment cannot be saved. | priority | create appointment with offer can t save after i select the technician the error message is still there and the appointment cannot be saved | 1 |
659,728 | 21,939,425,021 | IssuesEvent | 2022-05-23 16:30:06 | tnc-ca-geo/animl-frontend | https://api.github.com/repos/tnc-ca-geo/animl-frontend | opened | Move automation rules to Project level | high priority | See https://github.com/tnc-ca-geo/animl-api/issues/50
- [ ] Figure out new place to put automation rule config button (sidebar Nav was just for view-specific actions)
- [ ] Wire up automation rules form to new mutation resolvers | 1.0 | Move automation rules to Project level - See https://github.com/tnc-ca-geo/animl-api/issues/50
- [ ] Figure out new place to put automation rule config button (sidebar Nav was just for view-specific actions)
- [ ] Wire up automation rules form to new mutation resolvers | priority | move automation rules to project level see figure out new place to put automation rule config button sidebar nav was just for view specific actions wire up automation rules form to new mutation resolvers | 1 |
311,140 | 9,529,202,242 | IssuesEvent | 2019-04-29 10:34:18 | wso2/product-microgateway | https://api.github.com/repos/wso2/product-microgateway | closed | Error while executing "micro-gw help" command | Priority/Highest Type/Bug bug | **Description:**
<!-- Give a brief description of the issue -->
When I execute micro-gw help command as soon as I set the PATH variable, the following error could be seen (even before executing **micro-gw help** ).
```
$ micro-gw help
Error: Could not find or load main class org.wso2.apimgt.gateway.cli.cmd.Main
```
Then after **micro-gw init**, I executed micro-gw help and micro-gw setup commands and I got the following error.
```
$ micro-gw help
[micro-gw: unknown help topic ``, Run 'micro-gw help' for usage.]
$ micro-gw help setup
[micro-gw: unknown help topic `setup`, Run 'micro-gw help' for usage.]
```
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
3.0.0-alpha
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
Mentioned in the description.
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | 1.0 | Error while executing "micro-gw help" command - **Description:**
<!-- Give a brief description of the issue -->
When I execute micro-gw help command as soon as I set the PATH variable, the following error could be seen (even before executing **micro-gw help** ).
```
$ micro-gw help
Error: Could not find or load main class org.wso2.apimgt.gateway.cli.cmd.Main
```
Then after **micro-gw init**, I executed micro-gw help and micro-gw setup commands and I got the following error.
```
$ micro-gw help
[micro-gw: unknown help topic ``, Run 'micro-gw help' for usage.]
$ micro-gw help setup
[micro-gw: unknown help topic `setup`, Run 'micro-gw help' for usage.]
```
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
3.0.0-alpha
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
Mentioned in the description.
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> | priority | error while executing micro gw help command description when i execute micro gw help command as soon as i set the path variable the following error could be seen even before executing micro gw help micro gw help error could not find or load main class org apimgt gateway cli cmd main then after micro gw init i executed micro gw help and micro gw setup commands and i got the following error micro gw help micro gw help setup suggested labels suggested assignees affected product version alpha os db other environment details and versions steps to reproduce mentioned in the description related issues | 1 |
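The `unknown help topic` symptom in the record above is consistent with a CLI whose command table never loads (the earlier `Could not find or load main class` error suggests the tool's classpath was broken before `micro-gw init`). A hypothetical Python sketch of such a help-topic lookup — not the actual wso2 CLI code — shows how even the empty default topic falls through to the "unknown" branch:

```python
# Hypothetical help-topic dispatch: if the command table fails to load,
# every lookup -- including the empty default topic -- is reported as an
# unknown topic, matching the output in the issue above.

def help_text(commands, topic=""):
    """Return help for `topic`, or the generic usage error seen in the report."""
    if topic in commands:
        return commands[topic]
    return f"micro-gw: unknown help topic `{topic}`, Run 'micro-gw help' for usage."
```

With a populated table, `help_text(cmds, "setup")` returns the topic help; with an empty table, both `help` and `help setup` produce the exact errors quoted in the issue.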
88,567 | 3,779,208,387 | IssuesEvent | 2016-03-18 06:41:15 | cs2103jan2016-f14-2j/main | https://api.github.com/repos/cs2103jan2016-f14-2j/main | closed | Storage: Changing directory to a place where jimplefile is already present | priority.high type.bug type.task | Will not gather all the tasks saved in this file, instead it will wipe it clean and paste the old tasks in only | 1.0 | Storage: Changing directory to a place where jimplefile is already present - Will not gather all the tasks saved in this file, instead it will wipe it clean and paste the old tasks in only | priority | storage changing directory to a place where jimplefile is already present will not gather all the tasks saved in this file instead it will wipe it clean and paste the old tasks in only | 1 |
707,864 | 24,321,793,116 | IssuesEvent | 2022-09-30 11:27:10 | JonasPammer/ansible-roles | https://api.github.com/repos/JonasPammer/ansible-roles | closed | fix ansible-lint Error: schema Additional properties are not allowed ('__not_mandatory_to_role_itself' was unexpected) | priority/high | either we can ignore it specifically (which i do not think) or I need to put this self-made variable that's only being used by this repo's script to determine the soft dependency tree somewhere else (maybe `meta/.ansible-roles.yml`, `.github/.ansible-roles.yml`)
until then the CI for roles that make use of this is red, which is why i add a high label | 1.0 | fix ansible-lint Error: schema Additional properties are not allowed ('__not_mandatory_to_role_itself' was unexpected) - either we can ignore it specifically (which i do not think) or I need to put this self-made variable that's only being used by this repo's script to determine the soft dependency tree somewhere else (maybe `meta/.ansible-roles.yml`, `.github/.ansible-roles.yml`)
until then the CI for roles that make use of this is red, which is why i add a high label | priority | fix ansible lint error schema additional properties are not allowed not mandatory to role itself was unexpected either we can ignore it specifically which i do not think or i need to put this self made variable that s only being used by this repo s script to determine the soft dependency tree somewhere else maybe meta ansible roles yml github ansible roles yml until then the ci for roles that make use of this is red which is why i add a high label | 1 |
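The ansible-lint failure in the record above is a strict-schema rejection: with `additionalProperties: false`, any key outside the schema's allowed set is an error. A minimal sketch of that check (the allowed-key set here is an illustrative subset, not the real `meta/main.yml` schema):

```python
# Sketch of an additionalProperties-style check: keys outside the allowed
# set (here the repo's custom `__not_mandatory_to_role_itself`) are flagged
# as unexpected, which is what ansible-lint reports in the issue above.

ALLOWED_META_KEYS = {"galaxy_info", "dependencies", "allow_duplicates"}  # illustrative subset

def check_additional_properties(meta, allowed=ALLOWED_META_KEYS):
    """Return the keys a strict schema would flag as unexpected."""
    return sorted(set(meta) - set(allowed))
```

This is why the issue proposes moving the custom variable into a file the schema does not govern (e.g. a separate YAML alongside the role) rather than keeping it in `meta/main.yml`.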
181,895 | 6,665,113,299 | IssuesEvent | 2017-10-02 23:03:17 | OperationCode/operationcode_frontend | https://api.github.com/repos/OperationCode/operationcode_frontend | closed | Target classes instead of elements in stylesheets from a specific PR | beginner friendly hacktoberfest Priority: High Status: In Progress Type: Bug | # Bug Report
## What is the current behavior?
Many global styles are being affected by element-applied stylesheets from https://github.com/OperationCode/operationcode_frontend/pull/451 - I dropped the ball in not catching those styles, but we're very aware now. Going forward, all styles must be applied to classes or elements as children of classes.
## What is the expected behavior?
All styles must be applied to classes or children of classes.
Resolving this issue will also close #511
## How do I resolve this issue?
This ticket has been marked beginner-friendly, so I'd like to apply some hints. Looking at the PR above (#451) you can quickly find the stylesheets in question. Anywhere where you see styles applied to a specific element, you should instead put them as children of a class or directly apply them to a class. This may require you create classes and target them in the relevant .js file with `className="newClassName"` | 1.0 | Target classes instead of elements in stylesheets from a specific PR - # Bug Report
## What is the current behavior?
Many global styles are being affected by element-applied stylesheets from https://github.com/OperationCode/operationcode_frontend/pull/451 - I dropped the ball in not catching those styles, but we're very aware now. Going forward, all styles must be applied to classes or elements as children of classes.
## What is the expected behavior?
All styles must be applied to classes or children of classes.
Resolving this issue will also close #511
## How do I resolve this issue?
This ticket has been marked beginner-friendly, so I'd like to apply some hints. Looking at the PR above (#451) you can quickly find the stylesheets in question. Anywhere where you see styles applied to a specific element, you should instead put them as children of a class or directly apply them to a class. This may require you create classes and target them in the relevant .js file with `className="newClassName"` | priority | target classes instead of elements in stylesheets from a specific pr bug report what is the current behavior many global styles are being affected by element applied stylesheets from i dropped the ball in not catching those styles but we re very aware now going forward all styles must be applied to classes or elements as children of classes what is the expected behavior all styles must be applied to classes or children of classes resolving this issue will also close how do i resolve this issue this ticket has been marked beginner friendly so i d like to apply some hints looking at the pr above you can quickly find the stylesheets in question anywhere where you see styles applied to a specific element you should instead put them as children of a class or directly apply them to a class this may require you create classes and target them in the relevant js file with classname newclassname | 1 |
821,871 | 30,839,958,454 | IssuesEvent | 2023-08-02 09:58:08 | Agenta-AI/agenta | https://api.github.com/repos/Agenta-AI/agenta | opened | API keys view in frontend | Frontend High Priority |
We need a new simple view in the frontend that allows the users to save their API keys for openai (we will use these for the evaluation with an AI critic). The API keys should be saved locally across sessions for the user. The API keys should be fetchable in different components / views in the frontend.
The UX should be quite simple. Here is an example from a closed-source competitor:
<img width="1488" alt="Screenshot 2023-08-02 at 11 51 19" src="https://github.com/Agenta-AI/agenta/assets/4510758/ee48f71b-ffd0-40c0-b8c2-c78d87930327">
This issue is required for #178 and has high priority. | 1.0 | API keys view in frontend -
We need a new simple view in the frontend that allows the users to save their API keys for openai (we will use these for the evaluation with an AI critic). The API keys should be saved locally across sessions for the user. The API keys should be fetchable in different components / views in the frontend.
The UX should be quite simple. Here is an example from a closed-source competitor:
<img width="1488" alt="Screenshot 2023-08-02 at 11 51 19" src="https://github.com/Agenta-AI/agenta/assets/4510758/ee48f71b-ffd0-40c0-b8c2-c78d87930327">
This issue is required for #178 and has high priority. | priority | api keys view in frontend we need a new simple view in the frontend that allows the users to save their api keys for openai we will use these for the evaluation with an ai critic the api keys should be saved locally across sessions for the user the api keys should be fetchable in different components views in the frontend the ux should be quite simple here is an example from a closed source competitor img width alt screenshot at src this issue is required for and has high priority | 1 |
670,520 | 22,692,658,458 | IssuesEvent | 2022-07-04 23:39:26 | c2c-project/prytaneum | https://api.github.com/repos/c2c-project/prytaneum | closed | Create static components for chromatic; differentiating them from ones that use MSW. | high-priority old | MSW + chromatic don't play well together. Make a single MSW component and static component where applicable. | 1.0 | Create static components for chromatic; differentiating them from ones that use MSW. - MSW + chromatic don't play well together. Make a single MSW component and static component where applicable. | priority | create static components for chromatic differentiating them from ones that use msw msw chromatic don t play well together make a single msw component and static component where applicable | 1 |