| Unnamed: 0 (int64, 3–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, 19 chars) | repo (string, 7–112 chars) | repo_url (string, 36–141 chars) | action (string, 3 classes) | title (string, 2–742 chars) | labels (string, 4–431 chars) | body (string, 5–239k chars) | index (string, 10 classes) | text_combine (string, 96–240k chars) | label (string, 2 classes) | text (string, 96–200k chars) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
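The rows below show that `binary_label` mirrors the string `label` column (`usab` → 1, `non_usab` → 0). A minimal sketch of that relationship with pandas, using two tiny hypothetical stand-in rows (the preview gives no file name, so nothing is loaded from disk):

```python
import pandas as pd

# Tiny stand-in rows with a subset of the columns from the preview above.
rows = [
    {"type": "IssuesEvent", "action": "closed",
     "title": "Only the last 10 admin notices are shown", "label": "usab"},
    {"type": "IssuesEvent", "action": "opened",
     "title": "docs(readme): improve onboarding", "label": "non_usab"},
]
df = pd.DataFrame(rows)

# Derive binary_label from label: usab -> 1, non_usab -> 0.
df["binary_label"] = (df["label"] == "usab").astype(int)
print(df[["label", "binary_label"]].to_dict("records"))
```

The same mapping holds in every record shown below: rows labeled `usab` carry `binary_label` 1, rows labeled `non_usab` carry 0.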
| 22,630 | 15,339,793,647 | IssuesEvent | 2021-02-27 03:36:04 | binarythistle/DevWorkbench | https://api.github.com/repos/binarythistle/DevWorkbench | closed | Set up Azure Dev Ops Pipeline | Infrastructure |
- [x] Establish DevOps Pipeline
- [x] Card is done when commit leads to deploy on Azure
**NOTE:** At the time of writing BlazorWASM apps can only be deployed to a **Windows-based** Azure App Service (deploy to Linux "appears" to work, but does not).
|
1.0
|
Set up Azure Dev Ops Pipeline - - [x] Establish DevOps Pipeline
- [x] Card is done when commit leads to deploy on Azure
**NOTE:** At the time of writing BlazorWASM apps can only be deployed to a **Windows-based** Azure App Service (deploy to Linux "appears" to work, but does not).
|
non_usab
|
set up azure dev ops pipeline establish devops pipeline card is done when commit leads to deploy on azure note at the time of writing blazorwasm apps can only be deployed to a windows based azure app service deploy to linux appears to work but does not
| 0
|
| 39,574 | 16,046,608,048 | IssuesEvent | 2021-04-22 14:17:47 | dockstore/dockstore | https://api.github.com/repos/dockstore/dockstore | opened | Web-service to return whether an entire entry is verified or not | enhancement gui web-service |
**Is your feature request related to a problem? Please describe.**
What we want(verified tag) at e.g. https://dev.dockstore.net/organizations/bdcatalyst/collections/Cumulus:

Currently, the web service provides us with whether or not a **specific** entry version is verified or not, but not about the **entire** entry itself.
**Describe the solution you'd like**
Not too sure at the moment, will likely modify some classes/endpoints
**Describe alternatives you've considered**
Probably not a top priority since this is a very small feature, but eventually we will need to do this
**Additional context**
The verified tags here(https://dev.dockstore.net/search?entryType=workflows&searchMode=files) are from ElasticSearch but not from our web service, something to keep in mind.
|
1.0
|
Web-service to return whether an entire entry is verified or not - **Is your feature request related to a problem? Please describe.**
What we want(verified tag) at e.g. https://dev.dockstore.net/organizations/bdcatalyst/collections/Cumulus:

Currently, the web service provides us with whether or not a **specific** entry version is verified or not, but not about the **entire** entry itself.
**Describe the solution you'd like**
Not too sure at the moment, will likely modify some classes/endpoints
**Describe alternatives you've considered**
Probably not a top priority since this is a very small feature, but eventually we will need to do this
**Additional context**
The verified tags here(https://dev.dockstore.net/search?entryType=workflows&searchMode=files) are from ElasticSearch but not from our web service, something to keep in mind.
|
non_usab
|
web service to return whether an entire entry is verified or not is your feature request related to a problem please describe what we want verified tag at e g currently the web service provides us with whether or not a specific entry version is verified or not but not about the entire entry itself describe the solution you d like not too sure at the moment will likely modify some classes endpoints describe alternatives you ve considered probably not a top priority since this is a very small feature but eventually we will need to do this additional context the verified tags here are from elasticsearch but not from our web service something to keep in mind
| 0
|
| 11,268 | 7,132,294,778 | IssuesEvent | 2018-01-22 14:10:46 | Elgg/Elgg | https://api.github.com/repos/Elgg/Elgg | closed | Only the last 10 admin notices are shown | easy usability |
We need to remove the limit when fetching admin notices to display.
|
True
|
Only the last 10 admin notices are shown - We need to remove the limit when fetching admin notices to display.
|
usab
|
only the last admin notices are shown we need to remove the limit when fetching admin notices to display
| 1
|
| 235,957 | 7,744,433,481 | IssuesEvent | 2018-05-29 15:22:42 | Gloirin/m2gTest | https://api.github.com/repos/Gloirin/m2gTest | closed | 0004858: edit dialog: day change resets time | Calendar bug high priority |
**Reported by cweiss on 21 Sep 2011 16:10**
**Version:** Maischa (2011-05-3)
edit dialog: day change resets time
|
1.0
|
0004858:
edit dialog: day change resets time - **Reported by cweiss on 21 Sep 2011 16:10**
**Version:** Maischa (2011-05-3)
edit dialog: day change resets time
|
non_usab
|
edit dialog day change resets time reported by cweiss on sep version maischa edit dialog day change resets time
| 0
|
| 673,796 | 23,031,568,928 | IssuesEvent | 2022-07-22 14:23:17 | epam/Indigo | https://api.github.com/repos/epam/Indigo | closed | Versions 1.5 and later conflict with xgboost | Bug Indigo API High priority |
**Steps to Reproduce**
1. Install Indigo version 1.5 or later using pip3 from the repository or a downloaded wheel.
2. In a Python environment, import and initialize Indigo:
from indigo import *
indigo = Indigo()
3. Instantiate and run xgboost XGBClassifier on any dataset:
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
n = 1000
X = np.random.rand(n, 1)
y = np.random.rand(n)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
xgb = XGBClassifier()
y_pred = xgb.fit(X_train, y_train).predict(X_test)
4. Uninstall this version of Indigo and install version 1.2 - 1.4.
5. Repeat steps 2 and 3.
**Expected behavior**
XGB trains on the training dataset and then produces predictions for the test dataset.
**Actual behavior**
For Indigo versions 1.2 - 1.4, XGBClassifier performs as expected. For Indigo versions 1.5 - 1.7, it crashes and produces incorrect error messages about the type of its parameters (this was tested with multiple different types of training/test data and input parameters).
**Attachments**
If applicable, add attachment files to reproduce the issue.
**Indigo/Bingo version**
Tested versions 1.2.3 - 1.7.0
**Additional context**
Tested using Python versions 3.6, 3.9, 3.10 in a text console and a Jupyter notebook (jupyter v. 1.0.0 installed using pip3). The problem also occurs if indigo is imported without using the wild card:
import indigo
indigo = indigo.Indigo()
|
1.0
|
Versions 1.5 and later conflict with xgboost - **Steps to Reproduce**
1. Install Indigo version 1.5 or later using pip3 from the repository or a downloaded wheel.
2. In a Python environment, import and initialize Indigo:
from indigo import *
indigo = Indigo()
3. Instantiate and run xgboost XGBClassifier on any dataset:
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
n = 1000
X = np.random.rand(n, 1)
y = np.random.rand(n)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
xgb = XGBClassifier()
y_pred = xgb.fit(X_train, y_train).predict(X_test)
4. Uninstall this version of Indigo and install version 1.2 - 1.4.
5. Repeat steps 2 and 3.
**Expected behavior**
XGB trains on the training dataset and then produces predictions for the test dataset.
**Actual behavior**
For Indigo versions 1.2 - 1.4, XGBClassifier performs as expected. For Indigo versions 1.5 - 1.7, it crashes and produces incorrect error messages about the type of its parameters (this was tested with multiple different types of training/test data and input parameters).
**Attachments**
If applicable, add attachment files to reproduce the issue.
**Indigo/Bingo version**
Tested versions 1.2.3 - 1.7.0
**Additional context**
Tested using Python versions 3.6, 3.9, 3.10 in a text console and a Jupyter notebook (jupyter v. 1.0.0 installed using pip3). The problem also occurs if indigo is imported without using the wild card:
import indigo
indigo = indigo.Indigo()
|
non_usab
|
versions and later conflict with xgboost steps to reproduce install indigo version or later using from the repository or a downloaded wheel in a python environment import and initialize indigo from indigo import indigo indigo instantiate and run xgboost xgbclassifier on any dataset import numpy as np from sklearn model selection import train test split from xgboost import xgbclassifier n x np random rand n y np random rand n x train x test y train y test train test split x y test size xgb xgbclassifier y pred xgb fit x train y train predict x test uninstall this version of indigo and install version repeat steps and expected behavior xgb trains on the training dataset and then produces predictions for the test dataset actual behavior for indigo versions xgbclassifier performs as expected for indigo versions it crashes and produces incorrect error messages about the type of its parameters this was tested with multiple different types of training test data and input parameters attachments if applicable add attachment files to reproduce the issue indigo bingo version tested versions additional context tested using python versions in a text console and a jupyter notebook jupyter v installed using the problem also occurs if indigo is imported without using the wild card import indigo indigo indigo indigo
| 0
|
| 2,775 | 3,163,768,357 | IssuesEvent | 2015-09-20 16:38:00 | medialize/ally.js | https://api.github.com/repos/medialize/ally.js | opened | handle unwanted focus-events in focus/disable | bug focusable |
* if an element gains focus that has ['data-ally-disabled'] it must be blur()ed
* if an element gains focus that matches the filter-criteria it must be blur()ed
some elements might not dispatch `focus` event, so we need to observe `document.activeElement` to be sure.
|
True
|
handle unwanted focus-events in focus/disable - * if an element gains focus that has ['data-ally-disabled'] it must be blur()ed
* if an element gains focus that matches the filter-criteria it must be blur()ed
some elements might not dispatch `focus` event, so we need to observe `document.activeElement` to be sure.
|
usab
|
handle unwanted focus events in focus disable if an element gains focus that has it must be blur ed if an element gains focus that matches the filter criteria it must be blur ed some elements might not dispatch focus event so we need to observe document activeelement to be sure
| 1
|
| 26,579 | 26,993,346,575 | IssuesEvent | 2023-02-09 21:56:49 | bevyengine/bevy | https://api.github.com/repos/bevyengine/bevy | closed | Need a way to call the `setup()` of the plugins that have been registered in the App manually | C-Usability A-App |
## What problem does this solve or what need does it fill?
Due to changes in [#7046](https://github.com/bevyengine/bevy/pull/7046), it is now necessary to have a way to call the `setup` function of the plugins that have been registered in the App when using an external event loop to drive a bevy App.
https://github.com/bevyengine/bevy/blob/1ffeff19f9e01f68cd24d64d2bf281926da90d58/crates/bevy_app/src/app.rs#L295-L300
## What solution would you like?
Make `plugin_registry` public?
## What alternative(s) have you considered?
Move relevant code above into a separate `pub fn`?
## Additional context
External event loop demo: https://github.com/jinleili/bevy-in-app
|
True
|
Need a way to call the `setup()` of the plugins that have been registered in the App manually - ## What problem does this solve or what need does it fill?
Due to changes in [#7046](https://github.com/bevyengine/bevy/pull/7046), it is now necessary to have a way to call the `setup` function of the plugins that have been registered in the App when using an external event loop to drive a bevy App.
https://github.com/bevyengine/bevy/blob/1ffeff19f9e01f68cd24d64d2bf281926da90d58/crates/bevy_app/src/app.rs#L295-L300
## What solution would you like?
Make `plugin_registry` public?
## What alternative(s) have you considered?
Move relevant code above into a separate `pub fn`?
## Additional context
External event loop demo: https://github.com/jinleili/bevy-in-app
|
usab
|
need a way to call the setup of the plugins that have been registered in the app manually what problem does this solve or what need does it fill due to changes in it is now necessary to have a way to call the setup function of the plugins that have been registered in the app when using an external event loop to drive a bevy app what solution would you like make plugin registry public what alternative s have you considered move relevant code above into a separate pub fn additional context external event loop demo
| 1
|
| 19,333 | 13,880,295,394 | IssuesEvent | 2020-10-17 18:08:04 | rubyforgood/circulate | https://api.github.com/repos/rubyforgood/circulate | closed | Librarian Appointments page: Adjust display of date at top of page so that it doesn't wrap unnecessarily. | Good First Issue Help Wanted UX / Usability hacktoberfest |
On pages like https://circulate-staging.herokuapp.com/admin/appointments?day=2020-10-15 , the date at the top is wrapping. Adjust HTML/CSS so that it doesn't wrap unnecessarily.
|
True
|
Librarian Appointments page: Adjust display of date at top of page so that it doesn't wrap unnecessarily. - On pages like https://circulate-staging.herokuapp.com/admin/appointments?day=2020-10-15 , the date at the top is wrapping. Adjust HTML/CSS so that it doesn't wrap unnecessarily.
|
usab
|
librarian appointments page adjust display of date at top of page so that it doesn t wrap unnecessarily on pages like the date at the top is wrapping adjust html css so that it doesn t wrap unnecessarily
| 1
|
| 92,134 | 8,352,332,745 | IssuesEvent | 2018-10-02 05:54:36 | EyeSeeTea/malariapp | https://api.github.com/repos/EyeSeeTea/malariapp | closed | Introduce nextScheduleMatrix | HNQIS complexity - med (1-5hr) priority - critical testing type - maintenance |
- [ ] Instead of using hardcoded variables create an object returning the values. The object would obtain that value from a hardcoded matrix (List of Lists). When we ask for a value, we provide the server we are connected to. If the server doesn't exist, the default matrix is used (so the current values). Otherwise, then the appropriate value is provided.
The matrix would look like this:
Any server : +2, +4 +6
https://zw.hnqis.org/ : +1, +1, +6
|
1.0
|
Introduce nextScheduleMatrix - - [ ] Instead of using hardcoded variables create an object returning the values. The object would obtain that value from a hardcoded matrix (List of Lists). When we ask for a value, we provide the server we are connected to. If the server doesn't exist, the default matrix is used (so the current values). Otherwise, then the appropriate value is provided.
The matrix would look like this:
Any server : +2, +4 +6
https://zw.hnqis.org/ : +1, +1, +6
|
non_usab
|
introduce nextschedulematrix instead of using hardcoded variables create an object returning the values the object would obtain that value from a hardcoded matrix list of lists when we ask for a value we provide the server we are connected to if the server doesn t exist the default matrix is used so the current values otherwise then the appropriate value is provided the matrix would look like this any server
| 0
|
| 293,775 | 22,088,163,810 | IssuesEvent | 2022-06-01 02:10:46 | maevsi/maevsi | https://api.github.com/repos/maevsi/maevsi | opened | docs(readme): improve onboarding | documentation |
Write a step-by-step guide to get started with maevsi for new developers.
|
1.0
|
docs(readme): improve onboarding - Write a step-by-step guide to get started with maevsi for new developers.
|
non_usab
|
docs readme improve onboarding write a step by step guide to get started with maevsi for new developers
| 0
|
| 14,272 | 8,970,463,071 | IssuesEvent | 2019-01-29 13:43:09 | lumen-org/lumen | https://api.github.com/repos/lumen-org/lumen | closed | improve dragndrop | Usability |
Pain point: drag and drop still doesn't work quite right.
Suggested improvements:
- Increase the spacing between shelf items: this leaves more room for rearranging.
- Forbid replacing one shelf item with another: as a rule, that is not what you want anyway!
|
True
|
improve dragndrop - Pain point: drag and drop still doesn't work quite right.
Suggested improvements:
- Increase the spacing between shelf items: this leaves more room for rearranging.
- Forbid replacing one shelf item with another: as a rule, that is not what you want anyway!
|
usab
|
improve dragndrop pain point drag and drop still does not really work properly suggested improvements increase the spacing between shelf items leaves more room for rearranging forbid replacing one shelf item with another as a rule you do not want that anyway
| 1
|
| 14,067 | 16,890,488,014 | IssuesEvent | 2021-06-23 08:39:58 | arcus-azure/arcus.messaging | https://api.github.com/repos/arcus-azure/arcus.messaging | opened | Move `ServiceBusReceiver` to options model for future-proof message routing | area:message-processing enhancement integration:service-bus |
**Is your feature request related to a problem? Please describe.**
Move our `ServiceBusReceiver` model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the Azure Functions/message pump to the router.
**Describe alternatives you've considered**
Adding new stuff to the signature, but that requires breaking changes.
|
1.0
|
Move `ServiceBusReceiver` to options model for future-proof message routing - **Is your feature request related to a problem? Please describe.**
Move our `ServiceBusReceiver` model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the Azure Functions/message pump to the router.
**Describe alternatives you've considered**
Adding new stuff to the signature, but that requires breaking changes.
|
non_usab
|
move servicebusreceiver to options model for future proof message routing is your feature request related to a problem please describe move our servicebusreceiver model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the azure functions message pump to the router describe alternatives you ve considered adding new stuff to the signature but that requires breaking changes
| 0
|
| 140,932 | 11,383,794,053 | IssuesEvent | 2020-01-29 07:14:43 | GoogleContainerTools/skaffold | https://api.github.com/repos/GoogleContainerTools/skaffold | closed | flake: TestGracefulBuildCancel | kind/bug meta/test-flake priority/p2 |
Sometimes OSX travis tests fail with TestGracefulBuildCancel failures. We should eliminate these flakes.
|
1.0
|
flake: TestGracefulBuildCancel - Sometimes OSX travis tests fail with TestGracefulBuildCancel failures. We should eliminate these flakes.
|
non_usab
|
flake testgracefulbuildcancel sometimes osx travis tests fail with testgracefulbuildcancel failures we should eliminate these flakes
| 0
|
| 326,617 | 24,094,200,394 | IssuesEvent | 2022-09-19 17:13:01 | psakievich/spack-manager | https://api.github.com/repos/psakievich/spack-manager | closed | Broken link in docs | documentation |
On https://psakievich.github.io/spack-manager/user_profiles/developers/developer_spack_minimum.html, in the last full paragraph, there is a link to https://psakievich.github.io/spack-manager/user_profiles/developers/developer_tutorial.html, which is apparently broken.
@psakievich
|
1.0
|
Broken link in docs - On https://psakievich.github.io/spack-manager/user_profiles/developers/developer_spack_minimum.html, in the last full paragraph, there is a link to https://psakievich.github.io/spack-manager/user_profiles/developers/developer_tutorial.html, which is apparently broken.
@psakievich
|
non_usab
|
broken link in docs on in the last full paragraph there is a link to which is apparently broken psakievich
| 0
|
| 204,335 | 15,438,876,575 | IssuesEvent | 2021-03-07 22:01:20 | trevorNgo/Measure2.0 | https://api.github.com/repos/trevorNgo/Measure2.0 | opened | CS4ZP6 Tester Feedback: Clicking "View Past Jobs" under the "Archive Year Term" section on the Admin homepage does not do anything | tester |
**Description:** Clicking the **View Past Jobs** button under the **Archive Year Term** on the homepage when logged in as an **Admin** does not do anything.
**OS:** Windows 10 Enterprise
**Browser:** Chrome Version 89.0.4389.82
**Reproduction steps:**
* Sign in as an **Admin**.
* Click on **View Past Jobs** under the **Archive Year Term** section.
**Expected result:**
Past jobs should become visible.
**Actual result:**
Nothing happens when the button is clicked

|
1.0
|
CS4ZP6 Tester Feedback: Clicking "View Past Jobs" under the "Archive Year Term" section on the Admin homepage does not do anything - **Description:** Clicking the **View Past Jobs** button under the **Archive Year Term** on the homepage when logged in as an **Admin** does not do anything.
**OS:** Windows 10 Enterprise
**Browser:** Chrome Version 89.0.4389.82
**Reproduction steps:**
* Sign in as an **Admin**.
* Click on **View Past Jobs** under the **Archive Year Term** section.
**Expected result:**
Past jobs should become visible.
**Actual result:**
Nothing happens when the button is clicked

|
non_usab
|
tester feedback clicking view past jobs under the archive year term section on the admin homepage does not do anything description clicking the view past jobs button under the archive year term on the homepage when logged in as an admin does not do anything os windows enterprise browser chrome version reproduction steps sign in as an admin click on view past jobs under the archive year term section expected result past jobs should become visible actual result nothing happens when the button is clicked
| 0
|
| 21,009 | 16,444,554,849 | IssuesEvent | 2021-05-20 17:56:18 | pulumi/pulumi-kubernetes-operator | https://api.github.com/repos/pulumi/pulumi-kubernetes-operator | closed | repoDir not working | bug impact/usability |
Only stack files at the root of the repo are working, subdirs using `repoDir` property fails
## Expected behavior
moving the stack files in a subdir and defining the directory via `repoDir` should work
## Current behavior
error:
```json
{
"level": "error",
"ts": 1620396804.025099,
"logger": "controller-runtime.controller",
"msg": "Reconciler error",
"controller": "stack-controller",
"name": "sams-elasticache",
"namespace": "default",
"error": "failed to resolve git repository from working directory: /tmp/pulumi_auto521021910/testfolder: repository does not exist",
"errorVerbose": "repository does not exist
failed to resolve git repository from working directory: /tmp/pulumi_auto521021910/testfolder
github.com/pulumi/pulumi-kubernetes-operator/pkg/controller/stack.revisionAtWorkingDir
/home/runner/work/pulumi-kubernetes-operator/pulumi-kubernetes-operator/pkg/controller/stack/stack_controller.go:643
github.com/pulumi/pulumi-kubernetes-operator/pkg/controller/stack.(*ReconcileStack).Reconcile
/home/runner/work/pulumi-kubernetes-operator/pulumi-kubernetes-operator/pkg/controller/stack/stack_controller.go:162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:233
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:209
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:90
runtime.goexit
/opt/hostedtoolcache/go/1.16.2/x64/src/runtime/asm_amd64.s:1371",
"stacktrace": "github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:209
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:90"
}
```
## Steps to reproduce
cr:
```yaml
apiVersion: pulumi.com/v1alpha1
kind: Stack
metadata:
name: sams-elasticache
spec:
envRefs:
AWS_REGION:
type: Secret
secret:
name: pulumi-aws-secrets
key: AWS_REGION
AWS_DEFAULT_REGION:
type: Secret
secret:
name: pulumi-aws-secrets
key: AWS_REGION
AWS_ACCESS_KEY_ID:
type: Secret
secret:
name: pulumi-aws-secrets
key: AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY:
type: Secret
secret:
name: pulumi-aws-secrets
key: AWS_SECRET_ACCESS_KEY
PULUMI_CONFIG_PASSPHRASE:
type: Secret
secret:
name: pulumi-aws-secrets
key: PULUMI_CONFIG_PASSPHRASE
stack: elasticache
config:
aws:region: us-east-1
projectRepo: https://github.com/universam1/pulumi-operator
commit: 4679d57095f4b54862afad0f9d9c9e5d193a51bd
repoDir: testfolder
```
However using commit 41c3d56d77f02a3a1595f4f453514c614fd1b2fa that is on the root of the project it works as expected
## Context (Environment)
impossible to try these examples https://github.com/pulumi/examples
## Affected feature
mono-repos are impossible with this bug
|
True
|
repoDir not working - Only stack files at the root of the repo are working, subdirs using `repoDir` property fails
## Expected behavior
moving the stack files in a subdir and defining the directory via `repoDir` should work
## Current behavior
error:
```json
{
"level": "error",
"ts": 1620396804.025099,
"logger": "controller-runtime.controller",
"msg": "Reconciler error",
"controller": "stack-controller",
"name": "sams-elasticache",
"namespace": "default",
"error": "failed to resolve git repository from working directory: /tmp/pulumi_auto521021910/testfolder: repository does not exist",
"errorVerbose": "repository does not exist
failed to resolve git repository from working directory: /tmp/pulumi_auto521021910/testfolder
github.com/pulumi/pulumi-kubernetes-operator/pkg/controller/stack.revisionAtWorkingDir
/home/runner/work/pulumi-kubernetes-operator/pulumi-kubernetes-operator/pkg/controller/stack/stack_controller.go:643
github.com/pulumi/pulumi-kubernetes-operator/pkg/controller/stack.(*ReconcileStack).Reconcile
/home/runner/work/pulumi-kubernetes-operator/pulumi-kubernetes-operator/pkg/controller/stack/stack_controller.go:162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:233
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:209
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:90
runtime.goexit
/opt/hostedtoolcache/go/1.16.2/x64/src/runtime/asm_amd64.s:1371",
"stacktrace": "github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:209
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.1-0.20200724132623-e50c7b819263/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.18.4/pkg/util/wait/wait.go:90"
}
```
## Steps to reproduce
cr:
```yaml
apiVersion: pulumi.com/v1alpha1
kind: Stack
metadata:
name: sams-elasticache
spec:
envRefs:
AWS_REGION:
type: Secret
secret:
name: pulumi-aws-secrets
key: AWS_REGION
AWS_DEFAULT_REGION:
type: Secret
secret:
name: pulumi-aws-secrets
key: AWS_REGION
AWS_ACCESS_KEY_ID:
type: Secret
secret:
name: pulumi-aws-secrets
key: AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY:
type: Secret
secret:
name: pulumi-aws-secrets
key: AWS_SECRET_ACCESS_KEY
PULUMI_CONFIG_PASSPHRASE:
type: Secret
secret:
name: pulumi-aws-secrets
key: PULUMI_CONFIG_PASSPHRASE
stack: elasticache
config:
aws:region: us-east-1
projectRepo: https://github.com/universam1/pulumi-operator
commit: 4679d57095f4b54862afad0f9d9c9e5d193a51bd
repoDir: testfolder
```
However using commit 41c3d56d77f02a3a1595f4f453514c614fd1b2fa that is on the root of the project it works as expected
## Context (Environment)
impossible to try these examples https://github.com/pulumi/examples
## Affected feature
mono-repos are impossible with this bug
|
usab
|
repodir not working only stack files at the root of the repo are working subdirs using repodir property fails expected behavior moving the stack files in a subdir and defining the directory via repodir should work current behavior error json level error ts logger controller runtime controller msg reconciler error controller stack controller name sams elasticache namespace default error failed to resolve git repository from working directory tmp pulumi testfolder repository does not exist errorverbose repository does not exist failed to resolve git repos itory from working directory tmp pulumi testfolder github com pulumi pulumi kubernetes operator pkg controller stack revisionatworkingdir home runner work pulumi kubernetes operator pulumi kubernetes operator pkg controller stack stack controller go github com pulumi pulumi kubernetes operator pkg controller stack reconcilestack reconcile home runner work pulumi kubernetes operator pulumi kubernetes operator pkg controller stack stack controller go sigs io controller runtime pkg internal controller controller reconcilehandler home runner go pkg mod sigs io controller runtime pkg internal controller controller go sigs io controller runtime pkg internal controller controller processnextworkitem home runner go pkg mod sigs io controller runtime pkg internal controller controller go sigs io controller runtime pkg internal controller controller worker home runner go pkg mod sigs io controller runtime pkg internal controller controller go io apimachinery pkg util wait backoffuntil home runner go pkg mod io apimachiner y pkg util wait wait go io apimachinery pkg util wait backoffuntil home runner go pkg mod io apimachinery pkg util wait wait go io apimachinery pkg util wait jitteruntil home runner go pkg mod io apimachinery pkg util wait wait go io apimachinery pkg util wait until home runner go pkg mod io apimachinery pkg util wait wait go run time goexit opt hostedtoolcache go src runtime asm s stacktrace github com go 
logr zapr zaplogger error home runner go pkg mod github com go logr zapr zapr go sigs io controller runtime pkg internal controller controller reconcilehandler home runner go pkg mod sigs io controller runtime pkg internal controller contr oller go sigs io controller runtime pkg internal controller controller processnextworkitem home runner go pkg mod sigs io controller runtime pkg internal controller controller go sigs io controller runtime pkg internal controller controller worker home runner go pkg mod sigs io controller runtime pkg in ternal controller controller go io apimachinery pkg util wait backoffuntil home runner go pkg mod io apimachinery pkg util wait wait go io apimachinery pkg util wait backoffuntil home runner go pkg mod io apimachinery pkg util wait wait go io apimachinery pkg util wait jitteruntil home runner go pkg mod io apimachinery pkg util wait wa it go io apimachinery pkg util wait until home runner go pkg mod io apimachinery pkg util wait wait go steps to reproduce cr yaml apiversion pulumi com kind stack metadata name sams elasticache spec envrefs aws region type secret secret name pulumi aws secrets key aws region aws default region type secret secret name pulumi aws secrets key aws region aws access key id type secret secret name pulumi aws secrets key aws access key id aws secret access key type secret secret name pulumi aws secrets key aws secret access key pulumi config passphrase type secret secret name pulumi aws secrets key pulumi config passphrase stack elasticache config aws region us east projectrepo commit repodir testfolder however using commit that is on the root of the project it works as expected context environment impossible to try these examples affected feature mono repos are impossible with this bug
| 1
|
23,871
| 23,065,612,977
|
IssuesEvent
|
2022-07-25 13:42:00
|
pulumi/pulumi
|
https://api.github.com/repos/pulumi/pulumi
|
closed
|
deduplicate runtime error messages
|
area/cli kind/enhancement impact/usability resolution/duplicate
|
## overview
we sandwich error messages with..
```
error: Program failed with an unhandled exception:
```
and...
```
error: an unhandled error occurred: Program exited with a non-zero exit code: 1
```
are these useful to users? if not, should we consider removing them to reduce the wall of error text that many users hit?
|
True
|
deduplicate runtime error messages - ## overview
we sandwich error messages with..
```
error: Program failed with an unhandled exception:
```
and...
```
error: an unhandled error occurred: Program exited with a non-zero exit code: 1
```
are these useful to users? if not, should we consider removing them to reduce the wall of error text that many users hit?
|
usab
|
deduplicate runtime error messages overview we sandwich error messages with error program failed with an unhandled exception and error an unhandled error occurred program exited with a non zero exit code are these useful to users if not should we consider removing them to reduce the wall of error text that many users hit
| 1
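The duplication described in this record is the usual symptom of every layer wrapping an error with its own prefix. A minimal sketch of the de-duplication idea in Python (hypothetical illustration — not Pulumi's actual code, which is Go): only add the generic wrapper when the inner message does not already carry it.

```python
def wrap_once(prefix: str, msg: str) -> str:
    """Add prefix only if msg does not already start with it,
    so stacked error wrappers do not repeat the same text."""
    if msg.startswith(prefix):
        return msg
    return f"{prefix}: {msg}"


inner = "Program failed with an unhandled exception: boom"
# Wrapping again with the same prefix is a no-op, avoiding the "wall of error text".
outer = wrap_once("Program failed with an unhandled exception", inner)
```

The same check can be applied at each layer that currently prepends its own boilerplate, collapsing the sandwich into a single message.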
|
77,900
| 22,037,674,489
|
IssuesEvent
|
2022-05-28 21:21:14
|
dxx-rebirth/dxx-rebirth
|
https://api.github.com/repos/dxx-rebirth/dxx-rebirth
|
closed
|
Build failure: similar/main/automap.cpp:373:30: cannot convert ‘const d2x::player_marker_index’ to ‘d2x::game_marker_index’
|
build-failure
|
<!--
These instructions are wrapped in comment markers. Write your answers outside the comment markers. You may delete the commented text as you go, or leave it in and let the system remove the comments when you submit the issue.
Use this template if the code in master fails to build for you. If your problem happens at runtime, please use the issue template `Runtime crash` or the issue template `Runtime bug`, as appropriate.
Please use a descriptive title. Include in the title the _first_ error message. Attach the full output of `scons` or paste it inline inside triple backticks. Collect the output from a run with `verbosebuild=1`; `verbosebuild=1` is default-enabled when output is written to a file or can be set explicitly on the command line.
-->
### Environment
<!--
If you fetched the source from Git, state the Git commit you used, preferably as the full 40-digit commit hash. Please do **not** say "HEAD", "current", or similar relative references. The meaning of relative references can change as contributors publish new code. The 40-digit commit hash will not change.
If you received a pre-packaged source archive from someone, describe how others can get the same archive. For publicly linked downloads, the download URL of the archive is sufficient. Please link to the program archive, not to the web page which links to the program archive.
Good URL: https://www.dxx-rebirth.com/download/dxx/user/afuturepilot/dxx-rebirth_v0.60-weekly-04-14-18-win.zip
Bad URL: https://www.dxx-rebirth.com/download-dxx-rebirth/
-->
git commit: a8e7f6ad597d5036adb738a784dc15f50266dab6
#### Operating System Environment
<!--
State what host platform (Microsoft Windows, Mac OS X, or Linux, *BSD) you tried. If you tried multiple, list all of them.
-->
* [ ] Microsoft Windows (32-bit)
* [ ] Microsoft Windows (64-bit)
* [ ] Mac OS X
<!--
* For Linux, give the name of the distribution.
** For distributions with specific releases (Debian, Fedora, Ubuntu), give the name and number of the release.
** For rolling distributions (Arch, Gentoo), describe how recently the system was fully updated. Reports from out-of-date systems are not rejected. However, if your issue is known to be fixed by a particular update, the Rebirth maintainers may suggest that update instead of changing Rebirth.
Add versions as needed.
-->
* Debian
* [ ] Debian Stretch
* [ ] Debian Buster
* [ ] Debian Bullseye
* [ ] Debian Sid
* Fedora
* [ ] Fedora 28
* [ ] Fedora 29
* [ ] Fedora 30
* [ ] Fedora 31
* [ ] Fedora 32
* [ ] Rawhide
* Ubuntu
* [ ] Ubuntu 16.04 LTS (Xenial Xerus)
* [ ] Ubuntu 18.04 LTS (Bionic Beaver)
* [ ] Ubuntu 18.10 (Cosmic Cuttlefish)
* [ ] Ubuntu 19.04 (Disco Dingo)
* [ ] Ubuntu 19.10 (Eoan Ermine)
* [ ] Ubuntu 20.04 LTS (Focal Fossa)
* [X] Arch
* [ ] Gentoo
* [ ] OpenBSD
* [ ] FreeBSD
* [ ] NetBSD
#### CPU environment
<!--
Indicate which CPU families were targeted. Some bugs are only visible on certain architectures, since other architectures hide the consequences of the mistake.
If unsure, omit this section. Generally, if you are on an architecture that requires special consideration, you will know your architecture.
-->
* [ ] x86 (32-bit Intel/AMD)
* [X] x86\_64 (64-bit Intel/AMD)
* [ ] ARM (32-bit)
* [ ] ARM64 (64-bit; sometimes called AArch64)
### Description
<!--
Describe the issue here.
-->
I'm getting the following build error with dxx-rebirth git master when building d2x:
```
similar/main/automap.cpp: In function ‘d2x::game_marker_index d2x::convert_player_marker_index_to_game_marker_index(dcx::game_mode_flags, unsigned int, unsigned int, player_marker_index)’:
similar/main/automap.cpp:373:30: error: cannot convert ‘const d2x::player_marker_index’ to ‘d2x::game_marker_index’ in initialization
373 | return game_marker_index{player_marker_num};
| ^~~~~~~~~~~~~~~~~
| |
| const d2x::player_marker_index
```
#### Regression status
<!--
What is the oldest Git commit known to present the problem? What is the newest Git commit known not to present the problem? Ideally, the newest unaffected is an immediate parent of the oldest affected. However, if the reporter lacks the ability to test individual versions (or the time to do so), there may be a range of untested commits for which the affected/unaffected status is unknown. Reports are not rejected due to a wide range of untested commits. However, smaller commit ranges are often easier to debug, so better information here improves the chance of a quick resolution.
-->
No info (never tried with sdl2 before).
### Steps to Reproduce
<!--
For build failures, provide:
- The `scons` command executed:
- The contents of `site-local.py`, if present: not applicable
- All output from `scons`, starting at the prompt where the command was entered and ending at the first shell prompt after the error: already provided above as requested on top-page instructions
- If sconf.log is mentioned in the output, attach it. If it is mentioned, it will be in the last lines before SCons exits. You do not need to read the full output searching for references to it. If in doubt, attach it:
- If `dxxsconf.h` is generated, attach it. It will be in the root of the build directory. If you did not set a build directory, it will be in the same directory as `SConstruct`:
-->
- command: `scons -j7 -Cdxx-rebirth lto=1 builddir=./build prefix=/usr opengl=yes sdl2=yes sdlmixer=yes ipv6=yes use_udp=yes use_tracker=yes screenshot=png verbosebuild=1 d1x=0 d2x=1 sharepath=/usr/share/d2x-rebirth`
- [Scons full output](https://paste.ee/p/ezfZd) ([also attached](https://github.com/dxx-rebirth/dxx-rebirth/files/8788427/scons-full-output.txt)).
- [sconf.log](https://github.com/dxx-rebirth/dxx-rebirth/files/8788550/sconf.log)
- [dxxsconf.h.txt](https://github.com/dxx-rebirth/dxx-rebirth/files/8788555/dxxsconf.h.txt)
### Other
- scons: 4.3.0
- compiler: gcc 12.1.0
- sdl2: 2.0.22
- sdl2_image: 2.0.5
- sdl2_mixer: 2.0.4
### Note
It builds d1x fine with the same command (by switching `d1x=0 d2x=1` to `d1x=1 d2x=0`).
|
1.0
|
Build failure: similar/main/automap.cpp:373:30: cannot convert ‘const d2x::player_marker_index’ to ‘d2x::game_marker_index’ - <!--
These instructions are wrapped in comment markers. Write your answers outside the comment markers. You may delete the commented text as you go, or leave it in and let the system remove the comments when you submit the issue.
Use this template if the code in master fails to build for you. If your problem happens at runtime, please use the issue template `Runtime crash` or the issue template `Runtime bug`, as appropriate.
Please use a descriptive title. Include in the title the _first_ error message. Attach the full output of `scons` or paste it inline inside triple backticks. Collect the output from a run with `verbosebuild=1`; `verbosebuild=1` is default-enabled when output is written to a file or can be set explicitly on the command line.
-->
### Environment
<!--
If you fetched the source from Git, state the Git commit you used, preferably as the full 40-digit commit hash. Please do **not** say "HEAD", "current", or similar relative references. The meaning of relative references can change as contributors publish new code. The 40-digit commit hash will not change.
If you received a pre-packaged source archive from someone, describe how others can get the same archive. For publicly linked downloads, the download URL of the archive is sufficient. Please link to the program archive, not to the web page which links to the program archive.
Good URL: https://www.dxx-rebirth.com/download/dxx/user/afuturepilot/dxx-rebirth_v0.60-weekly-04-14-18-win.zip
Bad URL: https://www.dxx-rebirth.com/download-dxx-rebirth/
-->
git commit: a8e7f6ad597d5036adb738a784dc15f50266dab6
#### Operating System Environment
<!--
State what host platform (Microsoft Windows, Mac OS X, or Linux, *BSD) you tried. If you tried multiple, list all of them.
-->
* [ ] Microsoft Windows (32-bit)
* [ ] Microsoft Windows (64-bit)
* [ ] Mac OS X
<!--
* For Linux, give the name of the distribution.
** For distributions with specific releases (Debian, Fedora, Ubuntu), give the name and number of the release.
** For rolling distributions (Arch, Gentoo), describe how recently the system was fully updated. Reports from out-of-date systems are not rejected. However, if your issue is known to be fixed by a particular update, the Rebirth maintainers may suggest that update instead of changing Rebirth.
Add versions as needed.
-->
* Debian
* [ ] Debian Stretch
* [ ] Debian Buster
* [ ] Debian Bullseye
* [ ] Debian Sid
* Fedora
* [ ] Fedora 28
* [ ] Fedora 29
* [ ] Fedora 30
* [ ] Fedora 31
* [ ] Fedora 32
* [ ] Rawhide
* Ubuntu
* [ ] Ubuntu 16.04 LTS (Xenial Xerus)
* [ ] Ubuntu 18.04 LTS (Bionic Beaver)
* [ ] Ubuntu 18.10 (Cosmic Cuttlefish)
* [ ] Ubuntu 19.04 (Disco Dingo)
* [ ] Ubuntu 19.10 (Eoan Ermine)
* [ ] Ubuntu 20.04 LTS (Focal Fossa)
* [X] Arch
* [ ] Gentoo
* [ ] OpenBSD
* [ ] FreeBSD
* [ ] NetBSD
#### CPU environment
<!--
Indicate which CPU families were targeted. Some bugs are only visible on certain architectures, since other architectures hide the consequences of the mistake.
If unsure, omit this section. Generally, if you are on an architecture that requires special consideration, you will know your architecture.
-->
* [ ] x86 (32-bit Intel/AMD)
* [X] x86\_64 (64-bit Intel/AMD)
* [ ] ARM (32-bit)
* [ ] ARM64 (64-bit; sometimes called AArch64)
### Description
<!--
Describe the issue here.
-->
I'm getting the following build error with dxx-rebirth git master when building d2x:
```
similar/main/automap.cpp: In function ‘d2x::game_marker_index d2x::convert_player_marker_index_to_game_marker_index(dcx::game_mode_flags, unsigned int, unsigned int, player_marker_index)’:
similar/main/automap.cpp:373:30: error: cannot convert ‘const d2x::player_marker_index’ to ‘d2x::game_marker_index’ in initialization
373 | return game_marker_index{player_marker_num};
| ^~~~~~~~~~~~~~~~~
| |
| const d2x::player_marker_index
```
#### Regression status
<!--
What is the oldest Git commit known to present the problem? What is the newest Git commit known not to present the problem? Ideally, the newest unaffected is an immediate parent of the oldest affected. However, if the reporter lacks the ability to test individual versions (or the time to do so), there may be a range of untested commits for which the affected/unaffected status is unknown. Reports are not rejected due to a wide range of untested commits. However, smaller commit ranges are often easier to debug, so better information here improves the chance of a quick resolution.
-->
No info (never tried with sdl2 before).
### Steps to Reproduce
<!--
For build failures, provide:
- The `scons` command executed:
- The contents of `site-local.py`, if present: not applicable
- All output from `scons`, starting at the prompt where the command was entered and ending at the first shell prompt after the error: already provided above as requested on top-page instructions
- If sconf.log is mentioned in the output, attach it. If it is mentioned, it will be in the last lines before SCons exits. You do not need to read the full output searching for references to it. If in doubt, attach it:
- If `dxxsconf.h` is generated, attach it. It will be in the root of the build directory. If you did not set a build directory, it will be in the same directory as `SConstruct`:
-->
- command: `scons -j7 -Cdxx-rebirth lto=1 builddir=./build prefix=/usr opengl=yes sdl2=yes sdlmixer=yes ipv6=yes use_udp=yes use_tracker=yes screenshot=png verbosebuild=1 d1x=0 d2x=1 sharepath=/usr/share/d2x-rebirth`
- [Scons full output](https://paste.ee/p/ezfZd) ([also attached](https://github.com/dxx-rebirth/dxx-rebirth/files/8788427/scons-full-output.txt)).
- [sconf.log](https://github.com/dxx-rebirth/dxx-rebirth/files/8788550/sconf.log)
- [dxxsconf.h.txt](https://github.com/dxx-rebirth/dxx-rebirth/files/8788555/dxxsconf.h.txt)
### Other
- scons: 4.3.0
- compiler: gcc 12.1.0
- sdl2: 2.0.22
- sdl2_image: 2.0.5
- sdl2_mixer: 2.0.4
### Note
It builds d1x fine with the same command (by switching `d1x=0 d2x=1` to `d1x=1 d2x=0`).
|
non_usab
|
build failure similar main automap cpp cannot convert ‘const player marker index’ to ‘ game marker index’ these instructions are wrapped in comment markers write your answers outside the comment markers you may delete the commented text as you go or leave it in and let the system remove the comments when you submit the issue use this template if the code in master fails to build for you if your problem happens at runtime please use the issue template runtime crash or the issue template runtime bug as appropriate please use a descriptive title include in the title the first error message attach the full output of scons or paste it inline inside triple backticks collect the output from a run with verbosebuild verbosebuild is default enabled when output is written to a file or can be set explicitly on the command line environment if you fetched the source from git state the git commit you used preferably as the full digit commit hash please do not say head current or similar relative references the meaning of relative references can change as contributors publish new code the digit commit hash will not change if you received a pre packaged source archive from someone describe how others can get the same archive for publicly linked downloads the download url of the archive is sufficient please link to the program archive not to the web page which links to the program archive good url bad url git commit operating system environment state what host platform microsoft windows mac os x or linux bsd you tried if you tried multiple list all of them microsoft windows bit microsoft windows bit mac os x for linux give the name of the distribution for distributions with specific releases debian fedora ubuntu give the name and number of the release for rolling distributions arch gentoo describe how recently the system was fully updated reports from out of date systems are not rejected however if your issue is known to be fixed by a particular update the rebirth maintainers may 
suggest that update instead of changing rebirth add versions as needed debian debian stretch debian buster debian bullseye debian sid fedora fedora fedora fedora fedora fedora rawhide ubuntu ubuntu lts xenial xerus ubuntu lts bionic beaver ubuntu cosmic cuttlefish ubuntu disco dingo ubuntu eoan ermine ubuntu lts focal fossa arch gentoo openbsd freebsd netbsd cpu environment indicate which cpu families were targeted some bugs are only visible on certain architectures since other architectures hide the consequences of the mistake if unsure omit this section generally if you are on an architecture that requires special consideration you will know your architecture bit intel amd bit intel amd arm bit bit sometimes called description describe the issue here i m getting the following build error with dxx rebirth git master when building similar main automap cpp in function ‘ game marker index convert player marker index to game marker index dcx game mode flags unsigned int unsigned int player marker index ’ similar main automap cpp error cannot convert ‘const player marker index’ to ‘ game marker index’ in initialization return game marker index player marker num const player marker index regression status what is the oldest git commit known to present the problem what is the newest git commit known not to present the problem ideally the newest unaffected is an immediate parent of the oldest affected however if the reporter lacks the ability to test individual versions or the time to do so there may be a range of untested commits for which the affected unaffected status is unknown reports are not rejected due to a wide range of untested commits however smaller commit ranges are often easier to debug so better information here improves the chance of a quick resolution no info never tried with before steps to reproduce for build failures provide the scons command executed the contents of site local py if present not applicable all output from scons starting at the prompt 
where the command was entered and ending at the first shell prompt after the error already provided above as requested on top page instructions if sconf log is mentioned in the output attach it if it is mentioned it will be in the last lines before scons exits you do not need to read the full output searching for references to it if in doubt attach it if dxxsconf h is generated attach it it will be in the root of the build directory if you did not set a build directory it will be in the same directory as sconstruct command scons cdxx rebirth lto builddir build prefix usr opengl yes yes sdlmixer yes yes use udp yes use tracker yes screenshot png verbosebuild sharepath usr share rebirth other scons compiler gcc image mixer note it builds fine with the same command by switching to
| 0
|
10,631
| 6,827,919,857
|
IssuesEvent
|
2017-11-08 18:37:54
|
dssg/triage
|
https://api.github.com/repos/dssg/triage
|
closed
|
Helpfully report obvious problems with config file
|
usability-enhancement
|
There are some problems with input config files that can be easily and helpfully reported on. For instance: if no aggregates or categoricals are present in a feature_aggregation, that feature_aggregation will produce nothing.
Implementation idea: each component could define a common function to validate a config file without doing anything, so it can, for instance, report a problem with model scoring config before it spends a ton of time training models.
Checklist of items to validate:
- [ ] Ensure that items required to be lists are actually lists (e.g., update windows)
|
True
|
Helpfully report obvious problems with config file - There are some problems with input config files that can be easily and helpfully reported on. For instance: if no aggregates or categoricals are present in a feature_aggregation, that feature_aggregation will produce nothing.
Implementation idea: each component could define a common function to validate a config file without doing anything, so it can, for instance, report a problem with model scoring config before it spends a ton of time training models.
Checklist of items to validate:
- [ ] Ensure that items required to be lists are actually lists (e.g., update windows)
|
usab
|
helpfully report obvious problems with config file there are some problems with input config files that can be easily and helpfully reported on for instance if no aggregates or categoricals are present in a feature aggregation that feature aggregation will produce nothing implementation idea each component could define a common function to validate a config file without doing anything so it can for instance report a problem with model scoring config before it spends a ton of time training models checklist of items to validate ensure that items required to be lists are actually lists e g update windows
| 1
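The validation idea in this record can be sketched as a small pre-flight check that fails fast on obviously malformed config before any expensive model training starts. The field names below (`update_windows`, `feature_aggregations`, `aggregates`, `categoricals`) follow the issue text, but the function itself is a hypothetical illustration, not triage's actual validator:

```python
def validate_config(config: dict) -> list:
    """Return a list of human-readable problems; an empty list means the config passed."""
    problems = []
    # Items required to be lists must actually be lists (e.g., update windows).
    for key in ("update_windows",):
        if key in config and not isinstance(config[key], list):
            problems.append(
                f"'{key}' must be a list, got {type(config[key]).__name__}"
            )
    # A feature_aggregation with no aggregates or categoricals produces nothing.
    for i, agg in enumerate(config.get("feature_aggregations", [])):
        if not agg.get("aggregates") and not agg.get("categoricals"):
            problems.append(
                f"feature_aggregations[{i}] has no aggregates or categoricals"
            )
    return problems
```

Run once per component before doing any work, as the issue suggests, so a model-scoring typo is reported before hours are spent training.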
|
360,343
| 25,286,153,357
|
IssuesEvent
|
2022-11-16 19:28:49
|
supabase/supabase-js
|
https://api.github.com/repos/supabase/supabase-js
|
closed
|
Update supabase-js v2 setSession() example
|
documentation
|
# Improve documentation
https://supabase.com/docs/reference/javascript/auth-setsession
## Describe the problem
The example given is inaccurate and doesn't work with the current version of supabase-js. This has been discussed and mentioned in:
[gotrue-js PR473](https://github.com/supabase/gotrue-js/pull/473)
[gotrue-js PR511](https://github.com/supabase/gotrue-js/pull/511)
## Describe the improvement
Update the example to use the refresh token and access token in an object.
```TypeScript
supabase.auth.setSession({
  refresh_token,
  access_token
});
```
|
1.0
|
Update supabase-js v2 setSession() example - # Improve documentation
https://supabase.com/docs/reference/javascript/auth-setsession
## Describe the problem
The example given is inaccurate and doesn't work with the current version of supabase-js. This has been discussed and mentioned in:
[gotrue-js PR473](https://github.com/supabase/gotrue-js/pull/473)
[gotrue-js PR511](https://github.com/supabase/gotrue-js/pull/511)
## Describe the improvement
Update the example to use the refresh token and access token in an object.
```TypeScript
supabase.auth.setSession({
  refresh_token,
  access_token
});
```
|
non_usab
|
update supabase js setsession example improve documentation describe the problem the example given is inaccurate and doesn t work with the current version of supabase js this has been discussed and mentioned in describe the improvement update the example to use the refresh token and access token in an object typescript supabase auth setsession refresh token access token
| 0
|
95,669
| 10,885,186,380
|
IssuesEvent
|
2019-11-18 09:54:06
|
Perl/perl5
|
https://api.github.com/repos/Perl/perl5
|
opened
|
[doc] All man pages' SEE ALSO sections should have standard section numbers added
|
Needs Triage documentation
|
[Just the other day](https://debbugs.gnu.org/cgi/bugreport.cgi?bug=38154) we were having an argument about if
$ man perldoc
>
> SEE ALSO
> perlpod, Pod::Perldoc
should instead say
> SEE ALSO
> perlpod(1), Pod::Perldoc(3perl)
I.e., all perl man pages' SEE ALSO sections should have standard numbers attached to them,
like
$ man cat
has.
|
1.0
|
[doc] All man pages' SEE ALSO sections should have standard section numbers added - [Just the other day](https://debbugs.gnu.org/cgi/bugreport.cgi?bug=38154) we were having an argument about if
$ man perldoc
>
> SEE ALSO
> perlpod, Pod::Perldoc
should instead say
> SEE ALSO
> perlpod(1), Pod::Perldoc(3perl)
I.e., all perl man pages' SEE ALSO sections should have standard numbers attached to them,
like
$ man cat
has.
|
non_usab
|
all man pages see also sections should have standard section numbers added we were having an argument about if man perldoc see also perlpod pod perldoc should instead say see also perlpod pod perldoc i e all perl man pages see also sections should have standard numbers attached to them like man cat has
| 0
|
347,418
| 31,163,359,708
|
IssuesEvent
|
2023-08-16 17:40:26
|
gravitational/teleport
|
https://api.github.com/repos/gravitational/teleport
|
opened
|
`TestKubeServerWatcher ` flakiness
|
flaky tests
|
## Failure
#### Link(s) to logs
- https://github.com/gravitational/teleport/actions/runs/5881833472/job/15951108628
#### Relevant snippet
```
=== FAIL: lib/services TestKubeServerWatcher (0.04s)
watcher_test.go:1073:
Error Trace: /__w/teleport/teleport/lib/services/watcher_test.go:1073
Error: An error is expected but got nil.
Test: TestKubeServerWatcher
```
|
1.0
|
`TestKubeServerWatcher ` flakiness - ## Failure
#### Link(s) to logs
- https://github.com/gravitational/teleport/actions/runs/5881833472/job/15951108628
#### Relevant snippet
```
=== FAIL: lib/services TestKubeServerWatcher (0.04s)
watcher_test.go:1073:
Error Trace: /__w/teleport/teleport/lib/services/watcher_test.go:1073
Error: An error is expected but got nil.
Test: TestKubeServerWatcher
```
|
non_usab
|
testkubeserverwatcher flakiness failure link s to logs relevant snippet fail lib services testkubeserverwatcher watcher test go error trace w teleport teleport lib services watcher test go error an error is expected but got nil test testkubeserverwatcher
| 0
|
527,909
| 15,355,968,835
|
IssuesEvent
|
2021-03-01 11:48:10
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
opened
|
[0.9.3 staging-1942] Spin Melter doesn't have Crafting Component.
|
Category: Gameplay Priority: Medium Type: Bug
|
You can craft it:

And we already have some recipes here:

|
1.0
|
[0.9.3 staging-1942] Spin Melter doesn't have Crafting Component. - You can craft it:

And we already have some recipes here:

|
non_usab
|
spin melter doesn t have crafting component you can craft it and we already have some recipes here
| 0
|
283,975
| 24,574,395,942
|
IssuesEvent
|
2022-10-13 11:09:42
|
MartinaB91/project5-task-app-front
|
https://api.github.com/repos/MartinaB91/project5-task-app-front
|
opened
|
Test: view ongoing tasks, my scoreboard
|
test
|
_Test to check if the amount of ongoing tasks is shown when a user has assigned a task._
## Story: #23
## Testcases:
|Test id: #87 | |
|--------|------------------------------|
|**Purpose:**| Check if the amount of ongoing tasks is shown|
|**Requirements:**| As a **Family member** I want to **see how many ongoing tasks I have** so that I can **decide if I can take on some more tasks or not.** |
|**Data:**| Username: Tester Password: Only available for tester Family Member: Parent |
|**Preconditions:**| |
|**Procedure step:**|**Expected result:**|
|**Step 1:** | |
|**Step 2:** | |
|**Step 3:** | |
|
1.0
|
Test: view ongoing tasks, my scoreboard - _Test to check if the amount of ongoing tasks is shown when a user has assigned a task._
## Story: #23
## Testcases:
|Test id: #87 | |
|--------|------------------------------|
|**Purpose:**| Check if the amount of ongoing tasks is shown|
|**Requirements:**| As a **Family member** I want to **see how many ongoing tasks I have** so that I can **decide if I can take on some more tasks or not.** |
|**Data:**| Username: Tester Password: Only available for tester Family Member: Parent |
|**Preconditions:**| |
|**Procedure step:**|**Expected result:**|
|**Step 1:** | |
|**Step 2:** | |
|**Step 3:** | |
|
non_usab
|
test view ongoing tasks my scoreboard test to check if the amount of ongoing tasks is shown when a user has assigned a task story testcases test id purpose check if the amount of ongoing tasks is shown requirements as a family member i want to see how many ongoing tasks i have so that i can decide if i can take on some more tasks or not data username tester password only available for tester family member parent preconditions procedure step expected result step step step
| 0
|
401,830
| 27,339,859,549
|
IssuesEvent
|
2023-02-26 17:10:58
|
scylladb/scylladb
|
https://api.github.com/repos/scylladb/scylladb
|
closed
|
DOCS: Need to document the metrics
|
Documentation
|
The document
https://github.com/scylladb/scylla/blob/master/docs/design-notes/metrics.md is missing LWT metrics. In looking at this draft, it seems that the two styles are not aligned.
Should we document each metric like this example below or more generic like https://github.com/scylladb/scylla/blob/master/docs/design-notes/metrics.md?
Either way, LWT metrics need to be added.
==========================================
Lightweight Transactions Metrics Reference
==========================================
The Lightweight Transactions (LWT) Dashboard has many metrics. This table explains them.
.. list-table:: LWT Metrics
:widths: 10 90
:header-rows: 1
* - Metric Name
- Description
* - scylla_storage_proxy_coordinator_cas_read_timeouts
- Number of timeout exceptions when waiting for replicas during SELECTs with SERIAL consistency.
This also includes timeouts waiting on internal Paxos semaphore, owned by the coordinator and associated with the key and timeouts caused by Paxos protocol retries, also known as Paxos contention.
Replicas considered dead according to gossip are not included into Paxos peers, so the errors only happen for replicas which were contacted.
Typically an indication of node overload or a particularly frequently accessed key.
* - scylla_storage_proxy_coordinator_cas_write_timeouts
- Number of timeout exceptions when waiting for replicas during UPDATEs, INSERTs and DELETEs with SERIAL consistency.
This also includes timeouts waiting on internal Paxos semaphore, owned by the coordinator and associated with the key and timeouts caused by Paxos protocol retries, also known as Paxos contention.
Replicas considered dead according to gossip are not included into Paxos peers, so the errors only happen for replicas which were contacted.
Typically an indication of node overload or a particularly frequently updated key.
* - scylla_storage_proxy_coordinator_cas_read_latency
- Latency histogram for SELECTs with SERIAL consistency.
* - scylla_storage_proxy_coordinator_cas_write_latency
- Latency histogram for INSERTs, UPDATEs, DELETEs with SERIAL consistency.
* - cql_inserts{conditional=yes}
- Total number of CQL INSERT requests with conditions, e.g. INSERT … IF NOT EXISTS
* - cql_updates{conditional=yes}
- Total number of CQL UPDATE requests with conditions, for example UPDATE cf SET key = value WHERE pkey = pvalue IF EXISTS
* - cql_deletes{conditional=yes}
- Total number of CQL DELETE requests with conditions, e.g. DELETE … IF EXISTS
* - cql_batches{conditional=yes}
- Total number of CQL BATCH requests with conditions. If a batch request contains at least one conditional statement, the entire batch is counted as conditional.
* - cql_statements_in_batches{conditional=yes}
- Total number of statements in conditional CQL BATCHes. A CQL BATCH is conditional (atomic) if it contains at least one CQL statement with conditions. In this case all CQL statements in such a batch are accounted for in this metric.
* - scylla_storage_proxy_coordinator_cas_write_condition_not_met
- Total number of times INSERT, UPDATE or DELETE was not applied because the IF condition evaluated to False. Can be used as an indicator of data distribution.
* - storage_proxy_coordinator_cas_read_contention
- Total number of times some SELECT with SERIAL consistency had to retry because there was a concurrent conditional statement against the same key.
Each retry is performed after a randomized sleep interval, so can lead to statement timing out completely. Indicates contention over a hot row or key.
* - storage_proxy_coordinator_cas_write_contention
- Total number of times some INSERT, UPDATE or DELETE request with conditions had to retry because there was a concurrent conditional statement against the same key.
Each retry is performed after a randomized sleep interval, so can lead to statement timing out completely. Indicates contention over a hot row or key.
* - scylla_storage_proxy_coordinator_cas_read_unavailable
- Total number of times a SELECT with SERIAL consistency failed after being unable to contact a majority of replicas. Possible causes include network partitioning or a significant amount of down nodes.
* - scylla_storage_proxy_coordinator_cas_write_unavailable
- Total number of times an INSERT, UPDATE, or DELETE with conditions failed after being unable to contact a majority of replicas. Possible causes include network partitioning or a significant amount of down nodes.
* - scylla_storage_proxy_coordinator_cas_write_timeout_due_to_uncertainty
- Total number of partially succeeded conditional statements. These statements were not committed by the coordinator because some replicas responded with errors or timed out.
The coordinator had to propagate the error to the client. However, the statement succeeded on a minority of replicas, so the write may later be propagated to the rest during repair.
* - scylla_storage_proxy_coordinator_cas_read_unfinished_commit
- Total number of Paxos repairs performed by SELECT statements with SERIAL consistency. A repair is necessary when a previous Paxos round did not complete.
A subsequent statement may not proceed before completing the work of its predecessor. A repair is not guaranteed to succeed; the metric indicates the number of repair attempts made.
* - scylla_storage_proxy_coordinator_cas_write_unfinished_commit
- Total number of Paxos repairs performed by conditional INSERT, UPDATE or DELETE statements. A repair is necessary when a previous Paxos round did not complete.
A subsequent statement may not proceed before completing the work of its predecessor. A repair is not guaranteed to succeed; the metric indicates the number of repair attempts made.
* - storage_proxy_coordinator_cas_failed_read_round_optimization
- Normally a PREPARE Paxos round piggy-backs the previous value along with the PREPARE response.
This metric is incremented whenever the coordinator was not able to obtain the previous value (or its digest) from some of the participants, or when the digests did not match.
A separate repair round has to be performed in this case. Indicates that some Paxos queries did not run to completion, e.g. because some node is overloaded or down, or there was contention around a key.
* - storage_proxy_coordinator_cas_prune
- A successful conditional statement deletes the intermediate state from “system.paxos” table using “PRUNE” command. This metric reflects the total number of pruning requests executed on this replica.
* - storage_proxy_coordinator_cas_dropped_prune
- A successful conditional statement deletes the intermediate state from the “system.paxos” table using the “PRUNE” command.
If the system is busy it may not keep up with the PRUNE requests, so such requests are throttled. This metric indicates the total number of throttled PRUNE requests.
A high value suggests the system is overloaded and that the system.paxos table is taking up space.
If a prune is dropped, the system.paxos key and value for the respective LWT transaction will stay around until the next transaction against the same key, or until gc_grace_period, when it is removed by compaction.
* - cas_prepare_latency
- Histogram of CAS PREPARE round latency for this table. Contributes to the overall conditional statement latency.
* - cas_accept_latency
- Histogram of CAS ACCEPT round latency for this table. Contributes to the overall conditional statement latency.
* - cas_learn_latency
- Histogram of CAS LEARN round latency for this table. Contributes to the overall conditional statement latency.
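The conditional-statement semantics these counters describe can be sketched in plain Python. This is a toy single-node compare-and-set store, not Scylla's Paxos implementation; the counter attribute is named after the `cas_write_condition_not_met` metric above to show when it would be incremented:

```python
# Toy illustration of LWT-style conditional writes ("applied" vs.
# "condition not met"). Real Scylla runs Paxos across replicas.

class ToyLwtStore:
    def __init__(self):
        self.data = {}
        self.cas_write_condition_not_met = 0  # mirrors the metric name

    def insert_if_not_exists(self, key, value):
        """INSERT ... IF NOT EXISTS: applied only when the key is absent."""
        if key in self.data:
            self.cas_write_condition_not_met += 1
            return False  # [applied=False], row unchanged
        self.data[key] = value
        return True

    def update_if(self, key, expected, new_value):
        """UPDATE ... IF col = expected: applied only when the condition holds."""
        if self.data.get(key) != expected:
            self.cas_write_condition_not_met += 1
            return False
        self.data[key] = new_value
        return True

store = ToyLwtStore()
assert store.insert_if_not_exists("k", 1) is True
assert store.insert_if_not_exists("k", 2) is False   # IF NOT EXISTS fails
assert store.update_if("k", 1, 10) is True
assert store.update_if("k", 99, 0) is False          # IF condition not met
assert store.cas_write_condition_not_met == 2
```

Statements rejected this way are not errors: they complete normally with `applied=False`, which is why the metric is useful as an indicator of data distribution rather than of failures.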
|
1.0
|
DOCS: Need to document the metrics - The document
https://github.com/scylladb/scylla/blob/master/docs/design-notes/metrics.md is missing LWT metrics. In looking at this draft, it seems that the two styles are not aligned.
Should we document each metric like this example below or more generic like https://github.com/scylladb/scylla/blob/master/docs/design-notes/metrics.md?
Either way, LWT metrics need to be added.
==========================================
Lightweight Transactions Metrics Reference
==========================================
The Lightweight Transactions (LWT) Dashboard has many metrics. This table explains them.
.. list-table:: LWT Metrics
:widths: 10 90
:header-rows: 1
* - Metric Name
- Description
|
non_usab
|
| 0
|
14,466
| 9,199,824,132
|
IssuesEvent
|
2019-03-07 15:44:46
|
OctopusDeploy/Issues
|
https://api.github.com/repos/OctopusDeploy/Issues
|
opened
|
Add style to current space in the SpaceSwitcher
|
area/usability kind/enhancement
|
# The enhancement
When opening the SpaceSwitcher, it's not entirely obvious which space is the one the user is currently sitting on.
The idea is to add some styling to the current space to make it stand out more.
## Mockup
**Before**

**After**

|
True
|
Add style to current space in the SpaceSwitcher - # The enhancement
When opening the SpaceSwitcher, its not entirely obvious which space is the one the user is currently sitting on.
The Idea is to add some styles to the current space to make it stand out more.
## Mockup
**Before**

**After**

|
usab
|
add style to current space in the spaceswitcher the enhancement when opening the spaceswitcher its not entirely obvious which space is the one the user is currently sitting on the idea is to add some styles to the current space to make it stand out more mockup before after
| 1
|
279,100
| 21,111,331,197
|
IssuesEvent
|
2022-04-05 02:10:51
|
SE701-T1/backend
|
https://api.github.com/repos/SE701-T1/backend
|
opened
|
Update Wiki
|
Priority: High Status: Available Type: Documentation
|
**Describe the task that needs to be done.**
*(If this issue is about a bug, please describe the problem and steps to reproduce the issue. You can also include screenshots of any stack traces, or any other supporting images).*
Update the Backend and Frontend Wiki to reflect changes made to the Backend repository during the second assignment to bring it up-to-date for any future contributing.
**Describe how a solution to your proposed task might look like (and any alternatives considered).**
Review the current state of the repository, and make changes where necessary to bring the Backend and Frontend Wiki up-to-date with changes made to the Backend repository.
**Notes**
|
1.0
|
Update Wiki - **Describe the task that needs to be done.**
*(If this issue is about a bug, please describe the problem and steps to reproduce the issue. You can also include screenshots of any stack traces, or any other supporting images).*
Update the Backend and Frontend Wiki to reflect changes made to the Backend repository during the second assignment to bring it up-to-date for any future contributing.
**Describe how a solution to your proposed task might look like (and any alternatives considered).**
Review the current state of the repository, and make changes where necessary to bring the Backend and Frontend Wiki up-to-date with changes made to the Backend repository.
**Notes**
|
non_usab
|
update wiki describe the task that needs to be done if this issue is about a bug please describe the problem and steps to reproduce the issue you can also include screenshots of any stack traces or any other supporting images update the backend and frontend wiki to reflect changes made to the backend repository during the second assignment to bring it up to date for any future contributing describe how a solution to your proposed task might look like and any alternatives considered review the current state of the repository and make changes where necessary to bring the backend and frontend wiki up to date with changes made to the backend repository notes
| 0
|
4,757
| 3,882,221,824
|
IssuesEvent
|
2016-04-13 09:02:42
|
lionheart/openradar-mirror
|
https://api.github.com/repos/lionheart/openradar-mirror
|
opened
|
20796421: "More" button isn't aligned with App update notes
|
classification:ui/usability reproducible:always status:open
|
#### Description
Summary:
The "More" button that loads the full notes view isn't aligned with the description text.
Steps to Reproduce:
See attached screen shots.
Version:
App Store Mac 2.0 (376.24), Mac OS, 10.10.4 (14E11f)
Configuration:
MacBook10,1
-
Product Version: 2.0 (376.24)
Created: 2015-05-04T05:34:30.154660
Originated: 2015-05-03T00:00:00
Open Radar Link: http://www.openradar.me/20796421
|
True
|
20796421: "More" button isn't aligned with App update notes - #### Description
Summary:
The "More" button that loads the full notes view isn't aligned with the description text.
Steps to Reproduce:
See attached screen shots.
Version:
App Store Mac 2.0 (376.24), Mac OS, 10.10.4 (14E11f)
Configuration:
MacBook10,1
-
Product Version: 2.0 (376.24)
Created: 2015-05-04T05:34:30.154660
Originated: 2015-05-03T00:00:00
Open Radar Link: http://www.openradar.me/20796421
|
usab
|
more button isn t aligned with app update notes description summary the more button that loads the full notes view isn t aligned with the description text steps to reproduce see attached screen shots version app store mac mac os configuration product version created originated open radar link
| 1
|
61,053
| 17,023,589,562
|
IssuesEvent
|
2021-07-03 02:48:32
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Merkaartor does not load Sat Backgrounds
|
Component: merkaartor Priority: major Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 6.06pm, Wednesday, 12th May 2010]**
Merkaartor 0.15.3 does not load satellite backgrounds any more (Yahoo keeps Merkaartor blank, and even if I try to use DigitalGlobe or others I won't have success).
DigitalGlobe returns a gray picture in seconds; Yahoo stays blank (WMZ and tiled).
What to do?
|
1.0
|
Merkaartor does not load Sat Backgrounds - **[Submitted to the original trac issue database at 6.06pm, Wednesday, 12th May 2010]**
Merkaartor 0.15.3 does not load Sattelite backgrounds any more (Yahoo keeps Merkaartor blank and even if i want to use Digitalglobe or others i wont have sucess).
Digitalglobe will return a gray picture in seconds, Yahoo stays blank (WMZ and tiled).
What to do?
|
non_usab
|
merkaartor does not load sat backgrounds merkaartor does not load sattelite backgrounds any more yahoo keeps merkaartor blank and even if i want to use digitalglobe or others i wont have sucess digitalglobe will return a gray picture in seconds yahoo stays blank wmz and tiled what to do
| 0
|
18,477
| 3,067,270,280
|
IssuesEvent
|
2015-08-18 09:22:51
|
contao/core
|
https://api.github.com/repos/contao/core
|
closed
|
Cannot unset string offsets in Controller.php on line 1478
|
defect
|
In [Controller.php:1478](https://github.com/contao/core/blob/ac68761904694febb7636efacf34c30575e720a0/system/modules/core/library/Contao/Controller.php#L1478) we try to unset the third array item, but `$size` isn’t always an array and this leads to the fatal error “Cannot unset string offsets”. I think we should change to a `deserialize($arrItem['size'], true)` in [Controller.php:1432](https://github.com/contao/core/blob/ac68761904694febb7636efacf34c30575e720a0/system/modules/core/library/Contao/Controller.php#L1432).
Related issue: #7875
|
1.0
|
Cannot unset string offsets in Controller.php on line 1478 - In [Controller.php:1478](https://github.com/contao/core/blob/ac68761904694febb7636efacf34c30575e720a0/system/modules/core/library/Contao/Controller.php#L1478) we try to unset the third array item, but `$size` isn’t always an array and this leads to the fatal error “Cannot unset string offsets”. I think we should change to a `deserialize($arrItem['size'], true)` in [Controller.php:1432](https://github.com/contao/core/blob/ac68761904694febb7636efacf34c30575e720a0/system/modules/core/library/Contao/Controller.php#L1432).
Related issue: #7875
|
non_usab
|
cannot unset string offsets in controller php on line in we try to unset the third array item but size isn’t always an array and this leads to the fatal error “cannot unset string offsets” i think we should change to a deserialize arritem true in related issue
| 0
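The Contao bug above has a close analogue in Python: deleting an "offset" only works on a container type, not on a string. The proposed fix (`deserialize($arrItem['size'], true)`) normalizes the value to an array before touching its items; a sketch of the same defensive pattern, with a hypothetical helper name:

```python
# Analogy for "Cannot unset string offsets": $size may arrive either as a
# serialized string or as a list. Normalize to a list first, then the
# deletion of the third item is always safe.

def drop_third_item(size):
    # Hypothetical helper, not Contao code.
    if not isinstance(size, list):   # normalize, as deserialize(..., true) would
        size = [size]
    if len(size) >= 3:
        del size[2]                  # safe: lists support item deletion
    return size

assert drop_third_item(["100", "80", "crop"]) == ["100", "80"]
assert drop_third_item("100") == ["100"]  # no fatal error on a plain string
```

Without the normalization, `del "100"[2]` would raise `TypeError` in Python, just as `unset($size[2])` on a string is fatal in PHP.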
|
28,207
| 6,966,088,613
|
IssuesEvent
|
2017-12-09 14:43:55
|
iaserrat/roadmapster
|
https://api.github.com/repos/iaserrat/roadmapster
|
closed
|
Fix "Rubocop/Performance/StringReplacement" issue in http/helpers.rb
|
Codeclimate help wanted
|
Use `delete!` instead of `gsub!`.
https://codeclimate.com/github/iaserrat/roadmapster/http/helpers.rb#issue_59a773521e1f110001000021
|
1.0
|
Fix "Rubocop/Performance/StringReplacement" issue in http/helpers.rb - Use `delete!` instead of `gsub!`.
https://codeclimate.com/github/iaserrat/roadmapster/http/helpers.rb#issue_59a773521e1f110001000021
|
non_usab
|
fix rubocop performance stringreplacement issue in http helpers rb use delete instead of gsub
| 0
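The Rubocop hint above (`delete!` instead of `gsub!`) is about preferring a specialized deletion primitive when the replacement string is empty. The same idea expressed in Python, as a sketch: `str.translate` with a deletion table plays the role of Ruby's `delete`, while a regular-expression substitution plays the role of `gsub`.

```python
import re

s = "road-map-ster"

# General-purpose substitution (analogue of Ruby's gsub):
via_sub = re.sub("-", "", s)

# Specialized character deletion (analogue of Ruby's delete):
via_translate = s.translate({ord("-"): None})

assert via_sub == via_translate == "roadmapster"
```

Both produce the same result; the deletion form states the intent directly and avoids compiling a regular expression.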
|
645,797
| 21,015,766,845
|
IssuesEvent
|
2022-03-30 10:51:16
|
scribe-org/Scribe-Android
|
https://api.github.com/repos/scribe-org/Scribe-Android
|
opened
|
Link Kotlin codes to appropriate frameworks
|
help wanted -priority-
|
After the conversion of code from Scribe-iOS is completed in #10, the next task is to get everything linked to appropriate Android frameworks. The code translated from Swift will by no means be workable as it is based on UIKit, which cannot itself be translated. It will instead serve as a guide for what needs to be included in the conversion.
As discussed in #9, comparing iOS and Android keyboard extensions will be important going forward. This should allow the translated code to be linked to appropriate frameworks, although rewriting it will doubtless be necessary to a non-trivial degree.
|
1.0
|
Link Kotlin codes to appropriate frameworks - After the conversion of codes from Scribe-iOS is completed in #10, the next task is to try to get everything linked to appropriate Android frameworks. The codes translated from Swift will by no means be workable as they're based on UIKit, which cannot itself be translated. These will instead serve as guides for what needs to be included in the conversion.
As discussed in #9, comparing iOS and Android keyboard extensions will be important going forward. This should allow the translated code to be linked to appropriate frameworks, although rewriting it will doubtless be necessary to a non-trivial degree.
|
non_usab
|
link kotlin codes to appropriate frameworks after the conversion of codes from scribe ios is completed in the next task is to try to get everything linked to appropriate android frameworks the codes translated from swift will by no means be workable as they re based on uikit which cannot itself be translated these will instead serve as guides for what needs to be included in the conversion as discussed in comparing ios and android keyboard extensions will be important going forward this should allow the translated code to be linked to appropriate frameworks although rewriting it will doubtless be necessary to a non trivial degree
| 0
|
258,923
| 19,578,955,298
|
IssuesEvent
|
2022-01-04 18:35:03
|
vicariousinc/PGMax
|
https://api.github.com/repos/vicariousinc/PGMax
|
closed
|
Add more text explanation to examples so they're more understandable to new users
|
documentation
|
Currently, our examples are not easy to understand for a new user. I think it'd be nice to:
- Provide an explanation at the top of the file that gives some intuition for why this example exists/what the model is doing
- Provide more of a walkthrough for each cell
- Explain what the various outputs are showing
|
1.0
|
Add more text explanation to examples so they're more understandable to new users - Currently, our examples are not easy to understand for a new user. I think it'd be nice to:
- Provide an explanation at the top of the file that gives some intuition for why this example exists/what the model is doing
- Provide more of a walkthrough for each cell
- Explain what the various outputs are showing
|
non_usab
|
add more text explanation to examples so they re more understandable to new users currently our examples are not easy to understand for a new user i think it d be nice to provide an explanation at the top of the file that gives some intuition for why this example exists what the model is doing provide more of a walkthrough for each cell explain what the various outputs are showing
| 0
|
318,505
| 23,724,332,014
|
IssuesEvent
|
2022-08-30 18:05:52
|
tidymodels/parsnip
|
https://api.github.com/repos/tidymodels/parsnip
|
closed
|
Internals bug? `will_make_matrix()` returns `False` when given a matrix
|
documentation
|
Hello, I'm working on a PR to address #765. While doing so I ran into an issue with `will_make_matrix()`. Shouldn't the return value be True if `y` is a matrix or a vector?
When `y` is a numeric matrix it fails this check and is converted to a vector. This strips the colname of `y` whenever it is later passed to functions such as `xgb_train()`.
|
1.0
|
Internals bug? `will_make_matrix()` returns `False` when given a matrix - Hello, I'm working on a PR to address #765. While doing so I ran into an issue with `will_make_matrix()`. Shouldn't the return value be True if `y` is a matrix or a vector?
When `y` is a numeric matrix it fails this check and converts it to a vector. This is stripping the colname of `y` whenever it is passed to function later down the line such as `xgb_train()`
https://github.com/tidymodels/parsnip/blob/f0810910d0b69141af12c66ae210ec0880f95323/R/convert_data.R#L326-L336
|
non_usab
|
internals bug will make matrix returns false when given a matrix hello i m working on a pr to address while doing so i ran into an issue with will make matrix shouldn t the return value be true if y is a matrix or a vector when y is a numeric matrix it fails this check and converts it to a vector this is stripping the colname of y whenever it is passed to function later down the line such as xgb train
| 0
|
85,932
| 10,697,677,183
|
IssuesEvent
|
2019-10-23 17:02:00
|
phetsims/vector-addition
|
https://api.github.com/repos/phetsims/vector-addition
|
closed
|
Equations screen: change sum component color to black
|
design:polish status:ready-for-review
|
@kathy-phet pointed out that it's a bit odd that the sum components on the Equations screen are dark gray, but the other vector components have the same color as their parent.

I believe we chose dark gray to improve the contrast for the on-axis projection, but now that we've shifted the components off-axis this is unnecessary.
@pixelzoom let's change the color of the sum components to match the sum color (black).
|
1.0
|
Equations screen: change sum component color to black - @kathy-phet pointed out that it's a bit odd that the sum components on the Equations screen are dark gray, but the other vector components have the same color as their parent.

I believe we chose dark gray to improve the contrast for the on-axis projection, but now that we've shifted the components off-axis this is unnecessary.
@pixelzoom let's change the color of the sum components to match the sum color (black).
|
non_usab
|
equations screen change sum component color to black kathy phet pointed out that it s a bit odd that the sum components on the equations screen are dark gray but the other vector components have the same color as their parent i believe we chose dark gray to improve the contrast for the on axis projection but now that we ve shifted the components off axis this is unnecessary pixelzoom let s change the color of the sum components to match the sum color black
| 0
|
18,054
| 12,511,337,591
|
IssuesEvent
|
2020-06-02 20:22:20
|
GaloisInc/saw-script
|
https://api.github.com/repos/GaloisInc/saw-script
|
closed
|
literate Cryptol imports?
|
bug usability
|
saw doesn't seem to be able to `include` literate Cryptol. Obviously this makes literate Cryptol (latex, markdown, etc.?) less useful.
|
True
|
literate Cryptol imports? - saw doesn't seem to be able to `include` literate Cryptol. Obviously this makes literate Cryptol (latex, markdown, etc.?) less useful.
|
usab
|
literate cryptol imports saw doesn t seem to be able to include literate cryptol obviously this makes literate cryptol latex markdown etc less useful
| 1
|
33,526
| 2,769,372,896
|
IssuesEvent
|
2015-05-01 00:20:40
|
shgysk8zer0/core
|
https://api.github.com/repos/shgysk8zer0/core
|
opened
|
Enhance error events
|
enhancement High Priority PHP
|
# Update error class to be able to handle errors from `Error_Events`
Need to be able to handle:
* Database reporting
* Log reporting
* AJAX/JSON/JSON_Response
|
1.0
|
Enhance error events - # Update error class to be able to handle errors from `Error_Events`
Need to be able to handle:
* Database reporting
* Log reporting
* AJAX/JSON/JSON_Response
|
non_usab
|
enhance error events update error class to be able to handle errors from error events need to be able to handle database reporting log reporting ajax json json response
| 0
|
20,060
| 14,971,391,200
|
IssuesEvent
|
2021-01-27 21:04:39
|
micrometer-metrics/micrometer
|
https://api.github.com/repos/micrometer-metrics/micrometer
|
closed
|
Can't remove meters after a filter is added
|
usability
|
`MeterRegistry.remove()` fails to remove metrics that were created before a `commonTags()` filter was applied. It seems to apply the filter to the metric ID before trying to look it up, and so it finds nothing, and fails to delete it.
Tested using the latest version (`io.micrometer:micrometer-core:1.2.1`).
Here is my test case:
```java
import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.Timer;
import org.junit.Test;
import java.time.Duration;
import static org.junit.Assert.assertNull;
public class RemoveMetricTest {
/**
* This passes as expected
*/
@Test
public void testRemoveMetric() {
Timer testTimer = Metrics.globalRegistry.timer("testTimer");
testTimer.record(Duration.ofMillis(10));
Metrics.globalRegistry.remove(testTimer);
assertNull(Metrics.globalRegistry.find("testTimer").timer());
}
/**
* This fails - the Timer does not get removed
*/
@Test
public void testRemoveMetricAfterAddingFilter() {
Timer testTimer = Metrics.globalRegistry.timer("testTimer");
testTimer.record(Duration.ofMillis(10));
Metrics.globalRegistry.config().commonTags("foo", "bar");
Metrics.globalRegistry.remove(testTimer);
assertNull(Metrics.globalRegistry.find("testTimer").timer());
}
}
```
|
True
|
Can't remove meters after a filter is added - `MeterRegistry.remove()` fails to remove metrics that were created before a `commonTags()` filter was applied. It seems to apply the filter to the metric ID before trying to look it up, and so it finds nothing, and fails to delete it.
Tested using the latest version (`io.micrometer:micrometer-core:1.2.1`).
Here is my test case:
```java
import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.Timer;
import org.junit.Test;
import java.time.Duration;
import static org.junit.Assert.assertNull;
public class RemoveMetricTest {
/**
* This passes as expected
*/
@Test
public void testRemoveMetric() {
Timer testTimer = Metrics.globalRegistry.timer("testTimer");
testTimer.record(Duration.ofMillis(10));
Metrics.globalRegistry.remove(testTimer);
assertNull(Metrics.globalRegistry.find("testTimer").timer());
}
/**
* This fails - the Timer does not get removed
*/
@Test
public void testRemoveMetricAfterAddingFilter() {
Timer testTimer = Metrics.globalRegistry.timer("testTimer");
testTimer.record(Duration.ofMillis(10));
Metrics.globalRegistry.config().commonTags("foo", "bar");
Metrics.globalRegistry.remove(testTimer);
assertNull(Metrics.globalRegistry.find("testTimer").timer());
}
}
```
|
usab
|
can t remove meters after a filter is added meterregistry remove fails to remove metrics that were created before a commontags filter was applied it seems to apply the filter to the metric id before trying to look it up and so it finds nothing and fails to delete it tested using the latest version io micrometer micrometer core here is my test case java import io micrometer core instrument metrics import io micrometer core instrument timer import org junit test import java time duration import static org junit assert assertnull public class removemetrictest this passes as expected test public void testremovemetric timer testtimer metrics globalregistry timer testtimer testtimer record duration ofmillis metrics globalregistry remove testtimer assertnull metrics globalregistry find testtimer timer this fails the timer does not get removed test public void testremovemetricafteraddingfilter timer testtimer metrics globalregistry timer testtimer testtimer record duration ofmillis metrics globalregistry config commontags foo bar metrics globalregistry remove testtimer assertnull metrics globalregistry find testtimer timer
| 1
|
10,702
| 6,888,639,978
|
IssuesEvent
|
2017-11-22 07:08:19
|
git-cola/git-cola
|
https://api.github.com/repos/git-cola/git-cola
|
closed
|
Option to increase the amount of recent repositories?
|
good first issue help wanted usability
|
At the moment, it seems that git-cola only shows the latest 6 or 7 repositories when opening up. However, it might happen to work with ~10 repositories, which then requires using the "Select manually" option to choose repositories that have been pushed out of the list.
Could we imagine an option to choose in the preferences the amount of recent repositories we want git-cola to remember, and hence to show up in the start window?
Note: I know that "favorites" can be used, but when git-cola opens up it only shows the recent repositories, not the favorites. Maybe yet another feature could be to show the favorites in the opening window.
|
True
|
Option to increase the amount of recent repositories? - At the moment, it seems that git-cola only shows the latest 6 or 7 repositories when opening up. However, it might happen to work with ~10 repositories, which then requires using the "Select manually" option to choose repositories that have been pushed out of the list.
Could we imagine an option to choose in the preferences the amount of recent repositories we want git-cola to remember, and hence to show up in the start window?
Note: I know that "favorites" can be used, but when git-cola opens up it only shows the recent repositories, not the favorites. Maybe yet another feature could be to show the favorites in the opening window.
|
usab
|
option to increase the amount of recent repositories at the moment it seems that git cola only shows the latest or repositories when opening up however it might happen to work with repositories which then requires using the select manually option to choose repositories that have been pushed out of the list could we imagine an option to choose in the preferences the amount of recent repositories we want git cola to remember and hence to show up in the start window note i know that favorites can be used but when git cola opens up it only shows the recent repositories not the favorites maybe yet another feature could be to show the favorites in the opening window
| 1
|
1,489
| 2,860,406,132
|
IssuesEvent
|
2015-06-03 15:42:01
|
OpenSourceFieldlinguistics/FieldDB
|
https://api.github.com/repos/OpenSourceFieldlinguistics/FieldDB
|
closed
|
Andriod UI designs and patterns for elicitation and gamified psycholing experiments
|
Software Enginneering Usability User Interface
|
hi @kondrann here is a very interesting video about UI design and best practices for you to watch while you design and think about the flow of your "game"
http://www.youtube.com/watch?v=Jl3-lzlzOJI
when you are done, assign the video to senhorzinho for his elicitation app
|
True
|
Andriod UI designs and patterns for elicitation and gamified psycholing experiments - hi @kondrann here is a very interesting video about UI design and best practices for you to watch while you design and think about the flow of your "game"
http://www.youtube.com/watch?v=Jl3-lzlzOJI
when you are done, assign the video to senhorzinho for his elicitation app
|
usab
|
andriod ui designs and patterns for elicitation and gamified psycholing experiments hi kondrann here is a very interesting video about ui design and best practices for you to watch while you design and think about the flow of your game when you are done assign the video to senhorzinho for his elicitation app
| 1
|
27,073
| 27,619,826,888
|
IssuesEvent
|
2023-03-09 22:44:36
|
USEPA/TADA
|
https://api.github.com/repos/USEPA/TADA
|
closed
|
Ordering added columns
|
Usability
|
This topic applies to many TADA functions, such as TADAdataRetrieval, HarmonizeData, ConvertDepthUnits.
ConvertDepthUnits Example
- My original thought when we designed this function was that users would want to see the added columns next to the relevant columns in the dataframe, and not at the end. But I'm reconsidering now since R users are used to new fields being added at the end, not in the middle of the dataframe. Also since the reordering is only relevant to transform=FALSE now, it might more confusing than helpful. What do you think is more user friendly?
Related comment from Justin B: As a general rule I don't re-name/re-order cols unless otherwise they seem like something they aren't or are really ugly. It can make it harder to follow what the code did and the end user may want cols to have different names/order for their workflow anyway (doing it multiple times is inefficient). Particularly here it adds a good chunk of code. A third option is to add an arg where users can specify order with new cols next to their origin or at the end. A topic for discussion/the working group?
|
True
|
Ordering added columns - This topic applies to many TADA functions, such as TADAdataRetrieval, HarmonizeData, ConvertDepthUnits.
ConvertDepthUnits Example
- My original thought when we designed this function was that users would want to see the added columns next to the relevant columns in the dataframe, and not at the end. But I'm reconsidering now since R users are used to new fields being added at the end, not in the middle of the dataframe. Also since the reordering is only relevant to transform=FALSE now, it might more confusing than helpful. What do you think is more user friendly?
Related comment from Justin B: As a general rule I don't re-name/re-order cols unless otherwise they seem like something they aren't or are really ugly. It can make it harder to follow what the code did and the end user may want cols to have different names/order for their workflow anyway (doing it multiple times is inefficient). Particularly here it adds a good chunk of code. A third option is to add an arg where users can specify order with new cols next to their origin or at the end. A topic for discussion/the working group?
|
usab
|
ordering added columns this topic applies to many tada functions such as tadadataretrieval harmonizedata convertdepthunits convertdepthunits example my original thought when we designed this function was that users would want to see the added columns next to the relevant columns in the dataframe and not at the end but i m reconsidering now since r users are used to new fields being added at the end not in the middle of the dataframe also since the reordering is only relevant to transform false now it might more confusing than helpful what do you think is more user friendly related comment from justin b as a general rule i don t re name re order cols unless otherwise they seem like something they aren t or are really ugly it can make it harder to follow what the code did and the end user may want cols to have different names order for their workflow anyway doing it multiple times is inefficient particularly here it adds a good chunk of code a third option is to add an arg where users can specify order with new cols next to their origin or at the end a topic for discussion the working group
| 1
|
17,946
| 12,439,817,503
|
IssuesEvent
|
2020-05-26 10:48:18
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
File recovery on crash
|
feature proposal topic:editor usability
|
Godot shouldn't crash, but we all know that it does sometimes. It would be very nice that a data recovery function existed so not much of work is lost.
I believe an autosave to a temp path from time to time seems enough, along with a popup showing what can be recovered when opening a project that crashed.
|
True
|
File recovery on crash - Godot shouldn't crash, but we all know that it does sometimes. It would be very nice that a data recovery function existed so not much of work is lost.
I believe an autosave to a temp path from time to time seems enough, along with a popup showing what can be recovered when opening a project that crashed.
|
usab
|
file recovery on crash godot shouldn t crash but we all know that it does sometimes it would be very nice that a data recovery function existed so not much of work is lost i believe an autosave to a temp path from time to time seems enough along with a popup showing what can be recovered when opening a project that crashed
| 1
|
233,610
| 17,872,926,642
|
IssuesEvent
|
2021-09-06 19:11:45
|
fga-eps-mds/2021-1-Bot
|
https://api.github.com/repos/fga-eps-mds/2021-1-Bot
|
opened
|
Otimização da GH Page
|
documentation Time-PlusUltra GH Page
|
## Descrição da Issue
Issue com o objetivo de melhorar a página do projeto, a fim de otimizá-la e melhorar a experiência do usuário.
## Tasks:
- [ ] Melhorias na navegação
- [ ] Colocar a logo
- [ ] Paleta de cores no site
## Critérios de Aceitação:
- [ ] A página possui uma melhor navegação
- [ ] SIte possui a logo
- [ ] Mudanças na paleta de cores
|
1.0
|
Otimização da GH Page - ## Descrição da Issue
Issue com o objetivo de melhorar a página do projeto, a fim de otimizá-la e melhorar a experiência do usuário.
## Tasks:
- [ ] Melhorias na navegação
- [ ] Colocar a logo
- [ ] Paleta de cores no site
## Critérios de Aceitação:
- [ ] A página possui uma melhor navegação
- [ ] SIte possui a logo
- [ ] Mudanças na paleta de cores
|
non_usab
|
otimização da gh page descrição da issue issue com o objetivo de melhorar a página do projeto a fim de otimizá la e melhorar a experiência do usuário tasks melhorias na navegação colocar a logo paleta de cores no site critérios de aceitação a página possui uma melhor navegação site possui a logo mudanças na paleta de cores
| 0
|
142,942
| 19,142,260,926
|
IssuesEvent
|
2021-12-02 01:05:35
|
AlexWilson-GIS/WebGoat
|
https://api.github.com/repos/AlexWilson-GIS/WebGoat
|
opened
|
CVE-2021-22096 (Medium) detected in spring-web-5.3.1.jar
|
security vulnerability
|
## CVE-2021-22096 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-5.3.1.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: WebGoat/webwolf/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.1/spring-web-5.3.1.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.1/spring-web-5.3.1.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.1/spring-web-5.3.1.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.1/spring-web-5.3.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.4.0.jar (Root Library)
- :x: **spring-web-5.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
<p>Publish Date: 2021-10-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096>CVE-2021-22096</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22096">https://tanzu.vmware.com/security/cve-2021-22096</a></p>
<p>Release Date: 2021-10-28</p>
<p>Fix Resolution: org.springframework:spring:5.2.18.RELEASE,5.3.12</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"5.3.1","packageFilePaths":["/webwolf/pom.xml","/webgoat-container/pom.xml","/webgoat-integration-tests/pom.xml","/webgoat-server/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.4.0;org.springframework:spring-web:5.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring:5.2.18.RELEASE,5.3.12","isBinary":false}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2021-22096","vulnerabilityDetails":"In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096","cvss3Severity":"medium","cvss3Score":"4.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-22096 (Medium) detected in spring-web-5.3.1.jar - ## CVE-2021-22096 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-5.3.1.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: WebGoat/webwolf/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.1/spring-web-5.3.1.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.1/spring-web-5.3.1.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.1/spring-web-5.3.1.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.1/spring-web-5.3.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.4.0.jar (Root Library)
- :x: **spring-web-5.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
<p>Publish Date: 2021-10-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096>CVE-2021-22096</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22096">https://tanzu.vmware.com/security/cve-2021-22096</a></p>
<p>Release Date: 2021-10-28</p>
<p>Fix Resolution: org.springframework:spring:5.2.18.RELEASE,5.3.12</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"5.3.1","packageFilePaths":["/webwolf/pom.xml","/webgoat-container/pom.xml","/webgoat-integration-tests/pom.xml","/webgoat-server/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.4.0;org.springframework:spring-web:5.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring:5.2.18.RELEASE,5.3.12","isBinary":false}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2021-22096","vulnerabilityDetails":"In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096","cvss3Severity":"medium","cvss3Score":"4.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_usab
|
cve medium detected in spring web jar cve medium severity vulnerability vulnerable library spring web jar spring web library home page a href path to dependency file webgoat webwolf pom xml path to vulnerable library home wss scanner repository org springframework spring web spring web jar home wss scanner repository org springframework spring web spring web jar home wss scanner repository org springframework spring web spring web jar home wss scanner repository org springframework spring web spring web jar dependency hierarchy spring boot starter web jar root library x spring web jar vulnerable library found in base branch develop vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide malicious input to cause the insertion of additional log entries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring release isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter web org springframework spring web isminimumfixversionavailable true minimumfixversion org springframework spring release isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in spring framework versions and older unsupported versions it is possible for a user to provide malicious input to cause the insertion of additional log entries vulnerabilityurl
| 0
|
17,502
| 12,102,910,157
|
IssuesEvent
|
2020-04-20 17:28:55
|
publishpress/PublishPress-Permissions
|
https://api.github.com/repos/publishpress/PublishPress-Permissions
|
opened
|
Rename tabs in Permissions box
|
usability
|
Can we go with 3 name changes?

- User Roles
- Custom Groups
- Users
|
True
|
Rename tabs in Permissions box - Can we go with 3 name changes?

- User Roles
- Custom Groups
- Users
|
usab
|
rename tabs in permissions box can we go with name changes user roles custom groups users
| 1
|
20,778
| 16,044,312,658
|
IssuesEvent
|
2021-04-22 11:54:41
|
imchillin/Anamnesis
|
https://api.github.com/repos/imchillin/Anamnesis
|
closed
|
camera position temporary save/load
|
Scenes Usability
|
It would be cool if the temporary save/load for the camera from CMTool could be added. Very handy if you have your camera already setup, and then want to check/tweak something and can reload your camera setup.

|
True
|
camera position temporary save/load - It would be cool if the temporary save/load for the camera from CMTool could be added. Very handy if you have your camera already setup, and then want to check/tweak something and can reload your camera setup.

|
usab
|
camera position temporary save load it would be cool if the temporary save load for the camera from cmtool could be added very handy if you have your camera already setup and then want to check tweak something and can reload your camera setup
| 1
|
264,381
| 8,309,188,595
|
IssuesEvent
|
2018-09-24 04:36:04
|
SketchUp/testup-2
|
https://api.github.com/repos/SketchUp/testup-2
|
closed
|
Text size on Win 8 for preferences window
|
bug high priority
|

On Win 8 with DPI scaling set to 125% text sizing and scaling of other elements seem to be out of sync. If I am not mistaking the text alone is scaled but not the other elements. This applies to the preferences window but not the main window.
|
1.0
|
Text size on Win 8 for preferences window - 
On Win 8 with DPI scaling set to 125% text sizing and scaling of other elements seem to be out of sync. If I am not mistaking the text alone is scaled but not the other elements. This applies to the preferences window but not the main window.
|
non_usab
|
text size on win for preferences window on win with dpi scaling set to text sizing and scaling of other elements seem to be out of sync if i am not mistaking the text alone is scaled but not the other elements this applies to the preferences window but not the main window
| 0
|
59,896
| 6,666,091,576
|
IssuesEvent
|
2017-10-03 06:26:17
|
sgmap/mes-aides-ui
|
https://api.github.com/repos/sgmap/mes-aides-ui
|
closed
|
Préciser où regarder pour déclarer ses revenus n-1
|
irritant needs-consensus needs-user-testing
|
Sur la page de déclaration des ressources de l'année précédente en fin de simulation, il est demandé :
"Les revenus imposables de votre foyer en 2015
Ces informations se trouvent sur votre déclaration de revenus d'impôts 2015.
Vous pouvez la retrouver en ligne sur impots.gouv.fr."
Cependant, l'usage en France est plutôt de se baser pour toutes les démarches administratives sur l'avis d'imposition, décalé d'une année.
Ainsi, si la déclaration de revenus d'impôts 2015 contient bien les revenus d'impôts 2015, c'est l'avis 2016 qui contient ceux de 2015. L'avis 2015 contient ceux de 2014.
Faut-il ajouter un éclaircissement quelconque sur cette page, les deux documents sont-ils interchangeables par rapport à dont nous avons besoin ? Une solution serait de mettre :
"Ces informations se trouvent sur votre déclaration de revenus d'impôts 2015 et sur votre avis d'impôts 2016 sur les revenus 2015".
Cette réflexion fait suite au mail reçu le 02/03/17 à 00:05
|
1.0
|
Préciser où regarder pour déclarer ses revenus n-1 - Sur la page de déclaration des ressources de l'année précédente en fin de simulation, il est demandé :
"Les revenus imposables de votre foyer en 2015
Ces informations se trouvent sur votre déclaration de revenus d'impôts 2015.
Vous pouvez la retrouver en ligne sur impots.gouv.fr."
Cependant, l'usage en France est plutôt de se baser pour toutes les démarches administratives sur l'avis d'imposition, décalé d'une année.
Ainsi, si la déclaration de revenus d'impôts 2015 contient bien les revenus d'impôts 2015, c'est l'avis 2016 qui contient ceux de 2015. L'avis 2015 contient ceux de 2014.
Faut-il ajouter un éclaircissement quelconque sur cette page, les deux documents sont-ils interchangeables par rapport à dont nous avons besoin ? Une solution serait de mettre :
"Ces informations se trouvent sur votre déclaration de revenus d'impôts 2015 et sur votre avis d'impôts 2016 sur les revenus 2015".
Cette réflexion fait suite au mail reçu le 02/03/17 à 00:05
|
non_usab
|
préciser où regarder pour déclarer ses revenus n sur la page de déclaration des ressources de l année précédente en fin de simulation il est demandé les revenus imposables de votre foyer en ces informations se trouvent sur votre déclaration de revenus d impôts vous pouvez la retrouver en ligne sur impots gouv fr cependant l usage en france est plutôt de se baser pour toutes les démarches administratives sur l avis d imposition décalé d une année ainsi si la déclaration de revenus d impôts contient bien les revenus d impôts c est l avis qui contient ceux de l avis contient ceux de faut il ajouter un éclaircissement quelconque sur cette page les deux documents sont ils interchangeables par rapport à dont nous avons besoin une solution serait de mettre ces informations se trouvent sur votre déclaration de revenus d impôts et sur votre avis d impôts sur les revenus cette réflexion fait suite au mail reçu le à
| 0
|
16,525
| 11,027,379,491
|
IssuesEvent
|
2019-12-06 09:20:01
|
virtualsatellite/VirtualSatellite4-Core
|
https://api.github.com/repos/virtualsatellite/VirtualSatellite4-Core
|
opened
|
Have a warning message on deleting a discipline which has assigned elements
|
comfort/usability
|
Currently the role manager can remove a discipline, and will lead to dangling references in its subsystem which only superUser can fix.
Therefore, we need to make sure that a discipline cannot be deleted by mistake.
|
True
|
Have a warning message on deleting a discipline which has assigned elements - Currently the role manager can remove a discipline, and will lead to dangling references in its subsystem which only superUser can fix.
Therefore, we need to make sure that a discipline cannot be deleted by mistake.
|
usab
|
have a warning message on deleting a discipline which has assigned elements currently the role manager can remove a discipline and will lead to dangling references in its subsystem which only superuser can fix therefore we need to make sure that a discipline cannot be deleted by mistake
| 1
|
59,276
| 17,016,791,235
|
IssuesEvent
|
2021-07-02 13:10:55
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
opened
|
Incomplete information Coqueiros hamlet.
|
Component: nominatim Priority: minor Type: defect
|
**[Submitted to the original trac issue database at 2.34pm, Friday, 7th September 2018]**
Searching for point in Coqueiros hamlet don't return complete information in reverse searching.
See example:
https://nominatim.openstreetmap.org/reverse?format=json&lat=-19.870104&lon=-49.635855
The city is Itapagipe - state of MG (Minas Gerais).
Thanks
|
1.0
|
Incomplete information Coqueiros hamlet. - **[Submitted to the original trac issue database at 2.34pm, Friday, 7th September 2018]**
Searching for point in Coqueiros hamlet don't return complete information in reverse searching.
See example:
https://nominatim.openstreetmap.org/reverse?format=json&lat=-19.870104&lon=-49.635855
The city is Itapagipe - state of MG (Minas Gerais).
Thanks
|
non_usab
|
incomplete information coqueiros hamlet searching for point in coqueiros hamlet don t return complete information in reverse searching see example the city is itapagipe state of mg minas gerais thanks
| 0
|
8,061
| 5,374,852,327
|
IssuesEvent
|
2017-02-23 01:58:09
|
Tour-de-Force/btc-app
|
https://api.github.com/repos/Tour-de-Force/btc-app
|
opened
|
Hamburger Menu - Flex box needs to be fixed.
|
Priority: Medium usability
|
Hamburger menu - flex box need to be fixed. The menu disappears when you
go into Settings, Download track page, Filter, Publish, About page, Logout page, and you do
anything to resize the window. I know the menu bar is still there but the color that makes you be
able to see it is removed on resize. Everything collides as you expand and shrink the window.
It’s not using the same format as the “map”, “add service” and “add alert page”.
|
True
|
Hamburger Menu - Flex box needs to be fixed. - Hamburger menu - flex box need to be fixed. The menu disappears when you
go into Settings, Download track page, Filter, Publish, About page, Logout page, and you do
anything to resize the window. I know the menu bar is still there but the color that makes you be
able to see it is removed on resize. Everything collides as you expand and shrink the window.
It’s not using the same format as the “map”, “add service” and “add alert page”.
|
usab
|
hamburger menu flex box needs to be fixed hamburger menu flex box need to be fixed the menu disappears when you go into settings download track page filter publish about page logout page and you do anything to resize the window i know the menu bar is still there but the color that makes you be able to see it is removed on resize everything collides as you expand and shrink the window it’s not using the same format as the “map” “add service” and “add alert page”
| 1
|
12,225
| 7,756,565,496
|
IssuesEvent
|
2018-05-31 13:58:55
|
downshiftorg/prophoto7-issues
|
https://api.github.com/repos/downshiftorg/prophoto7-issues
|
closed
|
SPIKE: What should happen when you hide a column at a breakpoint?
|
usability
|
<a href="https://github.com/meatwad5675"><img src="https://avatars3.githubusercontent.com/u/11544705?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [meatwad5675](https://github.com/meatwad5675)**
_Monday Mar 26, 2018 at 19:52 GMT_
_Originally opened as https://github.com/downshiftorg/prophoto/issues/3060_
----
Currently there is just an empty space where the column should be (because we use display none). Users have communicated that they expect column spans of the remaining columns to expand to utilize all of the space. This "smart" column span distribution would require some sort of method to infer what we think user expectations are. Worth the time? How much of a problem is this (esp. considering it was the same behavior in 6 too).
|
True
|
SPIKE: What should happen when you hide a column at a breakpoint? - <a href="https://github.com/meatwad5675"><img src="https://avatars3.githubusercontent.com/u/11544705?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [meatwad5675](https://github.com/meatwad5675)**
_Monday Mar 26, 2018 at 19:52 GMT_
_Originally opened as https://github.com/downshiftorg/prophoto/issues/3060_
----
Currently there is just an empty space where the column should be (because we use display none). Users have communicated that they expect column spans of the remaining columns to expand to utilize all of the space. This "smart" column span distribution would require some sort of method to infer what we think user expectations are. Worth the time? How much of a problem is this (esp. considering it was the same behavior in 6 too).
|
usab
|
spike what should happen when you hide a column at a breakpoint issue by monday mar at gmt originally opened as currently there is just an empty space where the column should be because we use display none users have communicated that they expect column spans of the remaining columns to expand to utilize all of the space this smart column span distribution would require some sort of method to infer what we think user expectations are worth the time how much of a problem is this esp considering it was the same behavior in too
| 1
|
790,906
| 27,841,584,349
|
IssuesEvent
|
2023-03-20 13:06:55
|
Wizleap-Inc/wiz-ui
|
https://api.github.com/repos/Wizleap-Inc/wiz-ui
|
closed
|
Feat(checkbox, radio): チェック時に取り消し線を出すオプションを追加
|
📦 component 🔼 High Priority
|
**機能追加理由・詳細**

**解決策の提案(任意)**
**その他考慮事項(任意)**
|
1.0
|
Feat(checkbox, radio): チェック時に取り消し線を出すオプションを追加 - **機能追加理由・詳細**

**解決策の提案(任意)**
**その他考慮事項(任意)**
|
non_usab
|
feat checkbox radio チェック時に取り消し線を出すオプションを追加 機能追加理由・詳細 解決策の提案(任意) その他考慮事項(任意)
| 0
|
755
| 2,622,257,016
|
IssuesEvent
|
2015-03-04 00:57:05
|
golang/go
|
https://api.github.com/repos/golang/go
|
closed
|
build: too many open files in buildlet
|
builder
|
Just observed this new failure when starting a Windows buildlet:
```
builder: windows-amd64-gce
rev: 0cdf1d2bc265b6ed8e1b9834ec9f6c59dcecdcfd
vm name: buildlet-windows-amd64-gce-0cdf1d2b-rnbb9a0b
started: 2015-03-03 00:49:42.356721167 +0000 UTC
started: 2015-03-03 00:50:52.996985212 +0000 UTC
success: false
Events:
2015-03-03T00:49:43Z instance_create_requested
+23.2s 2015-03-03T00:50:06Z instance_created
+0.1s 2015-03-03T00:50:06Z waiting_for_buildlet
+39.0s 2015-03-03T00:50:45Z buildlet_up
+0.0s 2015-03-03T00:50:45Z start_write_version_tar
+0.0s 2015-03-03T00:50:45Z start_fetch_gerrit_tgz
+0.1s 2015-03-03T00:50:45Z start_write_go_tar
+0.0s 2015-03-03T00:50:45Z start_write_go14_tar
+7.2s 2015-03-03T00:50:52Z end_write_go_tar
Build log:
Error: Post http://10.240.20.29/writetgz?dir=go1.4: dial tcp 10.240.20.29:80: too many open files
```
Does the buildlet have an fd leak?
/cc @adg
|
1.0
|
build: too many open files in buildlet - Just observed this new failure when starting a Windows buildlet:
```
builder: windows-amd64-gce
rev: 0cdf1d2bc265b6ed8e1b9834ec9f6c59dcecdcfd
vm name: buildlet-windows-amd64-gce-0cdf1d2b-rnbb9a0b
started: 2015-03-03 00:49:42.356721167 +0000 UTC
started: 2015-03-03 00:50:52.996985212 +0000 UTC
success: false
Events:
2015-03-03T00:49:43Z instance_create_requested
+23.2s 2015-03-03T00:50:06Z instance_created
+0.1s 2015-03-03T00:50:06Z waiting_for_buildlet
+39.0s 2015-03-03T00:50:45Z buildlet_up
+0.0s 2015-03-03T00:50:45Z start_write_version_tar
+0.0s 2015-03-03T00:50:45Z start_fetch_gerrit_tgz
+0.1s 2015-03-03T00:50:45Z start_write_go_tar
+0.0s 2015-03-03T00:50:45Z start_write_go14_tar
+7.2s 2015-03-03T00:50:52Z end_write_go_tar
Build log:
Error: Post http://10.240.20.29/writetgz?dir=go1.4: dial tcp 10.240.20.29:80: too many open files
```
Does the buildlet have an fd leak?
/cc @adg
|
non_usab
|
build too many open files in buildlet just observed this new failure when starting a windows buildlet builder windows gce rev vm name buildlet windows gce started utc started utc success false events instance create requested instance created waiting for buildlet buildlet up start write version tar start fetch gerrit tgz start write go tar start write tar end write go tar build log error post dial tcp too many open files does the buildlet have an fd leak cc adg
| 0
|
51,708
| 27,205,976,533
|
IssuesEvent
|
2023-02-20 13:07:05
|
CommunityToolkit/dotnet
|
https://api.github.com/repos/CommunityToolkit/dotnet
|
closed
|
Add ArrayPoolBufferWriter<T>.DangerousGetArray() API
|
feature request :mailbox_with_mail: high-performance 🚂
|
### Overview
Related to #614. The `ArrayPoolBufferWriter<T>` lacks the `DangerousGetArray()` API which `MemoryOwner<T>` and `SpanOwner<T>` have. We should add it there too to make it easier and clearer how to get the underlying array from a writer.
### API breakdown
```csharp
namespace CommunityToolkit.HighPerformance.Buffers;
public sealed class ArrayPoolBufferWriter<T> : IBuffer<T>, IMemoryOwner<T>
{
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ArraySegment<T> DangerousGetArray();
}
```
### Usage example
```csharp
ArraySegment<byte> segment = bufferWriter.DangerousGetArray();
stream.Write(segment.Array!, segment.Offset, segment.Count);
```
### Breaking change?
No
### Alternatives
Use `MemoryMarshal.TryGetArray`. That's less clear and less discoverable, so not ideal.
|
True
|
Add ArrayPoolBufferWriter<T>.DangerousGetArray() API - ### Overview
Related to #614. The `ArrayPoolBufferWriter<T>` lacks the `DangerousGetArray()` API which `MemoryOwner<T>` and `SpanOwner<T>` have. We should add it there too to make it easier and clearer how to get the underlying array from a writer.
### API breakdown
```csharp
namespace CommunityToolkit.HighPerformance.Buffers;
public sealed class ArrayPoolBufferWriter<T> : IBuffer<T>, IMemoryOwner<T>
{
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ArraySegment<T> DangerousGetArray();
}
```
### Usage example
```csharp
ArraySegment<byte> segment = bufferWriter.DangerousGetArray();
stream.Write(segment.Array!, segment.Offset, segment.Count);
```
### Breaking change?
No
### Alternatives
Use `MemoryMarshal.TryGetArray`. That's less clear and less discoverable, so not ideal.
|
non_usab
|
add arraypoolbufferwriter dangerousgetarray api overview related to the arraypoolbufferwriter lacks the dangerousgetarray api which memoryowner and spanowner have we should add it there too to make it easier and clearer how to get the underlying array from a writer api breakdown csharp namespace communitytoolkit highperformance buffers public sealed class arraypoolbufferwriter ibuffer imemoryowner public arraysegment dangerousgetarray usage example csharp arraysegment segment bufferwriter dangerousgetarray stream write segment array segment offset segment count breaking change no alternatives use memorymarshal trygetarray that s less clear and less discoverable so not ideal
| 0
|
306,736
| 26,492,440,671
|
IssuesEvent
|
2023-01-18 00:32:38
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
opened
|
Fix jax_numpy_creation.test_jax_numpy_zeros_like
|
JAX Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744292715" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
|
1.0
|
Fix jax_numpy_creation.test_jax_numpy_zeros_like - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744292715" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
|
non_usab
|
fix jax numpy creation test jax numpy zeros like tensorflow img src torch img src numpy img src jax img src not found not found
| 0
|
175,322
| 13,546,382,618
|
IssuesEvent
|
2020-09-17 01:09:17
|
JoshSevy/dreamboard
|
https://api.github.com/repos/JoshSevy/dreamboard
|
opened
|
App Testing
|
testing
|
- [ ] Full integration testing -is everything tested for
- [ ] Async testing and event testing make sure all events are working correctly
- [ ] Check all fireEvents
- [ ] Full coverage in testing
|
1.0
|
App Testing - - [ ] Full integration testing -is everything tested for
- [ ] Async testing and event testing make sure all events are working correctly
- [ ] Check all fireEvents
- [ ] Full coverage in testing
|
non_usab
|
app testing full integration testing is everything tested for async testing and event testing make sure all events are working correctly check all fireevents full coverage in testing
| 0
|
250,848
| 27,112,934,429
|
IssuesEvent
|
2023-02-15 16:29:32
|
jgeraigery/jersey
|
https://api.github.com/repos/jgeraigery/jersey
|
opened
|
commons-codec-1.5.jar: 1 vulnerabilities (highest severity is: 6.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-codec-1.5.jar</b></p></summary>
<p>The codec package contains simple encoder and decoders for
various formats such as Base64 and Hexadecimal. In addition to these
widely used encoders and decoders, the codec package also maintains a
collection of phonetic encoding utilities.</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/jersey/commit/ef5e320d97e3e135b35c6de1079aa415ceb9c6f5">ef5e320d97e3e135b35c6de1079aa415ceb9c6f5</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (commons-codec version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2019-0379](https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | commons-codec-1.5.jar | Direct | 1.13 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2019-0379</summary>
### Vulnerable Library - <b>commons-codec-1.5.jar</b></p>
<p>The codec package contains simple encoder and decoders for
various formats such as Base64 and Hexadecimal. In addition to these
widely used encoders and decoders, the codec package also maintains a
collection of phonetic encoding utilities.</p>
<p>
Dependency Hierarchy:
- :x: **commons-codec-1.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/jersey/commit/ef5e320d97e3e135b35c6de1079aa415ceb9c6f5">ef5e320d97e3e135b35c6de1079aa415ceb9c6f5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache commons-codec before version “commons-codec-1.13-RC1” is vulnerable to information disclosure due to Improper Input validation.
<p>Publish Date: 2019-05-20
<p>URL: <a href=https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113>WS-2019-0379</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-05-20</p>
<p>Fix Resolution: 1.13</p>
</p>
<p></p>
</details>
|
True
|
commons-codec-1.5.jar: 1 vulnerabilities (highest severity is: 6.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-codec-1.5.jar</b></p></summary>
<p>The codec package contains simple encoder and decoders for
various formats such as Base64 and Hexadecimal. In addition to these
widely used encoders and decoders, the codec package also maintains a
collection of phonetic encoding utilities.</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/jersey/commit/ef5e320d97e3e135b35c6de1079aa415ceb9c6f5">ef5e320d97e3e135b35c6de1079aa415ceb9c6f5</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (commons-codec version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2019-0379](https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | commons-codec-1.5.jar | Direct | 1.13 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2019-0379</summary>
### Vulnerable Library - <b>commons-codec-1.5.jar</b></p>
<p>The codec package contains simple encoder and decoders for
various formats such as Base64 and Hexadecimal. In addition to these
widely used encoders and decoders, the codec package also maintains a
collection of phonetic encoding utilities.</p>
<p>
Dependency Hierarchy:
- :x: **commons-codec-1.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/jersey/commit/ef5e320d97e3e135b35c6de1079aa415ceb9c6f5">ef5e320d97e3e135b35c6de1079aa415ceb9c6f5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache commons-codec before version “commons-codec-1.13-RC1” is vulnerable to information disclosure due to Improper Input validation.
<p>Publish Date: 2019-05-20
<p>URL: <a href=https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113>WS-2019-0379</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-05-20</p>
<p>Fix Resolution: 1.13</p>
</p>
<p></p>
</details>
|
non_usab
|
commons codec jar vulnerabilities highest severity is vulnerable library commons codec jar the codec package contains simple encoder and decoders for various formats such as and hexadecimal in addition to these widely used encoders and decoders the codec package also maintains a collection of phonetic encoding utilities found in head commit a href vulnerabilities cve severity cvss dependency type fixed in commons codec version remediation available medium commons codec jar direct details ws vulnerable library commons codec jar the codec package contains simple encoder and decoders for various formats such as and hexadecimal in addition to these widely used encoders and decoders the codec package also maintains a collection of phonetic encoding utilities dependency hierarchy x commons codec jar vulnerable library found in head commit a href found in base branch master vulnerability details apache commons codec before version “commons codec ” is vulnerable to information disclosure due to improper input validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution
| 0
|
27,868
| 30,581,541,909
|
IssuesEvent
|
2023-07-21 10:03:45
|
informalsystems/quint
|
https://api.github.com/repos/informalsystems/quint
|
opened
|
Estimate the number of visited states in the simulator
|
W8 usability Fsimulator (phase 5a)
|
This is a more useful coverage estimate asked for in #1067. We could store the hashes of the states visited by the simulator and print out the number of visited states. Moreover, we could prune the search when we visit a previously visited state, similar to stateful model checking.
This will not give us a qualitative guarantee of spec correctness, since it would not be exhaustive state exploration (the way it is implemented in TLC), but if the user observes that simulations of `N` runs and `10*N` runs produce roughly the same number of visited states, they can conclude that the simulation was good enough and it does not make sense to increase the number of runs. This is all given that the simulator is doing random execution, so it can only give us estimates.
There are some technical challenges in implementing this approach, since the simulator is written in TypeScript. We will have to figure out how store large bitvectors, e.g., 1G large. This does not seem to be anything JavaScript/TypeScript were designed for, but we can try to do it in AssemblyScript, which seem to be ok with performance computations in Wasm.
|
True
|
Estimate the number of visited states in the simulator - This is a more useful coverage estimate asked for in #1067. We could store the hashes of the states visited by the simulator and print out the number of visited states. Moreover, we could prune the search when we visit a previously visited state, similar to stateful model checking.
This will not give us a qualitative guarantee of spec correctness, since it would not be exhaustive state exploration (the way it is implemented in TLC), but if the user observes that simulations of `N` runs and `10*N` runs produce roughly the same number of visited states, they can conclude that the simulation was good enough and it does not make sense to increase the number of runs. This is all given that the simulator is doing random execution, so it can only give us estimates.
There are some technical challenges in implementing this approach, since the simulator is written in TypeScript. We will have to figure out how store large bitvectors, e.g., 1G large. This does not seem to be anything JavaScript/TypeScript were designed for, but we can try to do it in AssemblyScript, which seem to be ok with performance computations in Wasm.
|
usab
|
estimate the number of visited states in the simulator this is a more useful coverage estimate asked for in we could store the hashes of the states visited by the simulator and print out the number of visited states moreover we could prune the search when we visit a previously visited state similar to stateful model checking this will not give us a qualitative guarantee of spec correctness since it would not be exhaustive state exploration the way it is implemented in tlc but if the user observes that simulations of n runs and n runs produce roughly the same number of visited states they can conclude that the simulation was good enough and it does not make sense to increase the number of runs this is all given that the simulator is doing random execution so it can only give us estimates there are some technical challenges in implementing this approach since the simulator is written in typescript we will have to figure out how store large bitvectors e g large this does not seem to be anything javascript typescript were designed for but we can try to do it in assemblyscript which seem to be ok with performance computations in wasm
| 1
|
283,145
| 21,316,049,139
|
IssuesEvent
|
2022-04-16 09:41:35
|
khseah/pe
|
https://api.github.com/repos/khseah/pe
|
opened
|
Incorrect notation
|
severity.VeryLow type.DocumentationBug
|
From the website, multiplicities shoud be "0..*" instead (2 dots only)


<!--session: 1650093881952-d12b2869-9e5c-4d55-9fd8-1891c2e1355c-->
<!--Version: Web v3.4.2-->
|
1.0
|
Incorrect notation - From the website, multiplicities shoud be "0..*" instead (2 dots only)


<!--session: 1650093881952-d12b2869-9e5c-4d55-9fd8-1891c2e1355c-->
<!--Version: Web v3.4.2-->
|
non_usab
|
incorrect notation from the website multiplicities shoud be instead dots only
| 0
|
40,260
| 16,439,656,504
|
IssuesEvent
|
2021-05-20 13:06:52
|
microsoft/BotFramework-Composer
|
https://api.github.com/repos/microsoft/BotFramework-Composer
|
closed
|
issues with update-schema script
|
Bot Services Type: Bug customer-replied-to customer-reported
|
<!-- Please search for your feature request before creating a new one. >
<!-- Complete the necessary portions of this template and delete the rest. -->
## Describe the bug
I am using nightly composer and I see below error when try to run update-schema script
.\update-schema.ps1
Running schema merge.
module.js:487
throw err;
^
Error: Cannot find module 'C:\Users\prvavill\AppData\Roaming\npm\node_modules\@microsoft\botframework-cli\bin\run'
at Function.Module._resolveFilename (module.js:485:15)
at Function.Module._load (module.js:437:25)
at Function.Module.runMain (module.js:605:10)
at startup (bootstrap_node.js:158:16)
at bootstrap_node.js:575:3
Schema merge failed. Restoring previous versions.
PS D:\Kustofeatures\bot\GaiaTestCustomActions\GaiaTestCustomActions\schemas> .\update-schema.ps1
Running schema merge.
module.js:487
throw err;
^
## Version
<!-- What version of the Composer are you using? Paste the build SHA found on the about page (`/about`). -->
## Browser
<!-- What browser are you using? -->
- [ ] Electron distribution
- [ ] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
## OS
<!-- What operating system are you using? -->
- [x] Windows
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Expected behavior
<!-- Give a clear and concise description of what you expected to happen. -->
## Screenshots
<!-- If applicable, add screenshots/gif/video to help explain your problem. -->
## Additional context
<!-- Add any other context about the problem here. -->
|
1.0
|
issues with update-schema script - <!-- Please search for your feature request before creating a new one. >
<!-- Complete the necessary portions of this template and delete the rest. -->
## Describe the bug
I am using nightly composer and I see below error when try to run update-schema script
.\update-schema.ps1
Running schema merge.
module.js:487
throw err;
^
Error: Cannot find module 'C:\Users\prvavill\AppData\Roaming\npm\node_modules\@microsoft\botframework-cli\bin\run'
at Function.Module._resolveFilename (module.js:485:15)
at Function.Module._load (module.js:437:25)
at Function.Module.runMain (module.js:605:10)
at startup (bootstrap_node.js:158:16)
at bootstrap_node.js:575:3
Schema merge failed. Restoring previous versions.
PS D:\Kustofeatures\bot\GaiaTestCustomActions\GaiaTestCustomActions\schemas> .\update-schema.ps1
Running schema merge.
module.js:487
throw err;
^
## Version
<!-- What version of the Composer are you using? Paste the build SHA found on the about page (`/about`). -->
## Browser
<!-- What browser are you using? -->
- [ ] Electron distribution
- [ ] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
## OS
<!-- What operating system are you using? -->
- [x] Windows
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Expected behavior
<!-- Give a clear and concise description of what you expected to happen. -->
## Screenshots
<!-- If applicable, add screenshots/gif/video to help explain your problem. -->
## Additional context
<!-- Add any other context about the problem here. -->
|
non_usab
|
issues with update schema script describe the bug i am using nightly composer and i see below error when try to run update schema script update schema running schema merge module js throw err error cannot find module c users prvavill appdata roaming npm node modules microsoft botframework cli bin run at function module resolvefilename module js at function module load module js at function module runmain module js at startup bootstrap node js at bootstrap node js schema merge failed restoring previous versions ps d kustofeatures bot gaiatestcustomactions gaiatestcustomactions schemas update schema running schema merge module js throw err version browser electron distribution chrome safari firefox edge os windows to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior screenshots additional context
| 0
|
22,534
| 3,663,764,336
|
IssuesEvent
|
2016-02-19 08:21:22
|
wbsoft/frescobaldi
|
https://api.github.com/repos/wbsoft/frescobaldi
|
opened
|
Update layout control options to LilyPond 2.19.21 syntax
|
defect
|
Many of the layout control options are affected by a fundamental syntax change about using the `parser` argument in music-etc-functions.
The task is to wrap the offending calls in version predicate conditionals because Frescobaldi has to support older *and* newer LilyPonds.
|
1.0
|
Update layout control options to LilyPond 2.19.21 syntax - Many of the layout control options are affected by a fundamental syntax change about using the `parser` argument in music-etc-functions.
The task is to wrap the offending calls in version predicate conditionals because Frescobaldi has to support older *and* newer LilyPonds.
|
non_usab
|
update layout control options to lilypond syntax many of the layout control options are affected by a fundamental syntax change about using the parser argument in music etc functions the task is to wrap the offending calls in version predicate conditionals because frescobaldi has to support older and newer lilyponds
| 0
|
28,715
| 23,463,326,230
|
IssuesEvent
|
2022-08-16 14:44:23
|
SasView/sasview
|
https://api.github.com/repos/SasView/sasview
|
closed
|
DataXD duplication
|
Infrastructure Discuss At The Call Housekeeping
|
There is a Data1D definition in both sasview (in Plotting for some reason) and sasmodels.data, are these expected to become a single type during the sasdata extraction? They probably should.
|
1.0
|
DataXD duplication - There is a Data1D definition in both sasview (in Plotting for some reason) and sasmodels.data, are these expected to become a single type during the sasdata extraction? They probably should.
|
non_usab
|
dataxd duplication there is a definition in both sasview in plotting for some reason and sasmodels data are these expected to become a single type during the sasdata extraction they probably should
| 0
|
26,303
| 26,673,309,412
|
IssuesEvent
|
2023-01-26 12:17:10
|
opentap/opentap
|
https://api.github.com/repos/opentap/opentap
|
closed
|
Let the user know which other version(s) of a package is available in case a release does not exist
|
Usability CLI
|
Can we have the CLI return a hint to the user when a package exist in other versions than release.
e.g. "There are no release version of 'Runner', but pre-releases exists, use --all for a complete list".
or simply show the complete list after the user was informed about the fact that no releases versions existed.

|
True
|
Let the user know which other version(s) of a package is available in case a release does not exist - Can we have the CLI return a hint to the user when a package exist in other versions than release.
e.g. "There are no release version of 'Runner', but pre-releases exists, use --all for a complete list".
or simply show the complete list after the user was informed about the fact that no releases versions existed.

|
usab
|
let the user know which other version s of a package is available in case a release does not exist can we have the cli return a hint to the user when a package exist in other versions than release e g there are no release version of runner but pre releases exists use all for a complete list or simply show the complete list after the user was informed about the fact that no releases versions existed
| 1
|
27,535
| 7,977,935,894
|
IssuesEvent
|
2018-07-17 16:43:49
|
zooniverse/Panoptes-Front-End
|
https://api.github.com/repos/zooniverse/Panoptes-Front-End
|
closed
|
viewing subject info in project builder should show hidden values
|
project builder stale ui
|
When viewing subjects and info in the project builder, if I'm a collaborator, I'd like to be able to see the _full_ set of values, with the ones that will be hidden from public view marked in some way.
This just came up because I thought some essential columns were missing from a gold-standard dataset in the pulsar SGL project, but in fact the project builder is just showing what the classifiers will, so the only way I can see all the columns is to download the subject export. (I don't have the original manifest because I'm a collaborator, not the project owner.)
|
1.0
|
viewing subject info in project builder should show hidden values - When viewing subjects and info in the project builder, if I'm a collaborator, I'd like to be able to see the _full_ set of values, with the ones that will be hidden from public view marked in some way.
This just came up because I thought some essential columns were missing from a gold-standard dataset in the pulsar SGL project, but in fact the project builder is just showing what the classifiers will, so the only way I can see all the columns is to download the subject export. (I don't have the original manifest because I'm a collaborator, not the project owner.)
|
non_usab
|
viewing subject info in project builder should show hidden values when viewing subjects and info in the project builder if i m a collaborator i d like to be able to see the full set of values with the ones that will be hidden from public view marked in some way this just came up because i thought some essential columns were missing from a gold standard dataset in the pulsar sgl project but in fact the project builder is just showing what the classifiers will so the only way i can see all the columns is to download the subject export i don t have the original manifest because i m a collaborator not the project owner
| 0
|
132,090
| 12,498,119,532
|
IssuesEvent
|
2020-06-01 17:42:35
|
kounch/knloader
|
https://api.github.com/repos/kounch/knloader
|
closed
|
Clarification on the knloader.bdt file
|
documentation
|
Not sure if you prefer issue here or FB, but starting here for now.
I suspect my own issue is not quite following the manual properly (and I'd be happy to help the docs via PR if you like).
The manual says
> Create knloader.bdt file (see below for more instructions).
But then below is reads "navigate to where … knloader.bdt are" - yet the file doesn't exist as part of the release (so I'm guessing I've got the wrong part).
The next section is called "Database file format" - it's descriptive, but I wasn't sure if this meant I should write this file myself or if this is how it was generated (and from the earlier instructions I thought it would be generated).
This section states the first line must be the root path (in my case `/ROMS`) - so I thought to create the file knloader.bdt with `/ROMS\n`, but this caused errors in the loader (`2\t6410`).
So if the file does need to be created manually, the first line is the search path, but the path for the game is included in the record, so I wasn't sure if this was relative or absolute, i.e.
```
/ROMS
Batman,4,/ROMS/Batman/,Batman.tap
```
Or if it's
```
/ROMS
Batman,4,/Batman/,Batman.tap
```
I do feel like I've just got the manual wrong (but again, I'd be happy to help propose any changes to help clarify for simpletons like me!)
|
1.0
|
Clarification on the knloader.bdt file - Not sure if you prefer issue here or FB, but starting here for now.
I suspect my own issue is not quite following the manual properly (and I'd be happy to help the docs via PR if you like).
The manual says
> Create knloader.bdt file (see below for more instructions).
But then below is reads "navigate to where … knloader.bdt are" - yet the file doesn't exist as part of the release (so I'm guessing I've got the wrong part).
The next section is called "Database file format" - it's descriptive, but I wasn't sure if this meant I should write this file myself or if this is how it was generated (and from the earlier instructions I thought it would be generated).
This section states the first line must be the root path (in my case `/ROMS`) - so I thought to create the file knloader.bdt with `/ROMS\n`, but this caused errors in the loader (`2\t6410`).
So if the file does need to be created manually, the first line is the search path, but the path for the game is included in the record, so I wasn't sure if this was relative or absolute, i.e.
```
/ROMS
Batman,4,/ROMS/Batman/,Batman.tap
```
Or if it's
```
/ROMS
Batman,4,/Batman/,Batman.tap
```
I do feel like I've just got the manual wrong (but again, I'd be happy to help propose any changes to help clarify for simpletons like me!)
|
non_usab
|
clarification on the knloader bdt file not sure if you prefer issue here or fb but starting here for now i suspect my own issue is not quite following the manual properly and i d be happy to help the docs via pr if you like the manual says create knloader bdt file see below for more instructions but then below is reads navigate to where … knloader bdt are yet the file doesn t exist as part of the release so i m guessing i ve got the wrong part the next section is called database file format it s descriptive but i wasn t sure if this meant i should write this file myself or if this is how it was generated and from the earlier instructions i thought it would be generated this section states the first line must be the root path in my case roms so i thought to create the file knloader bdt with roms n but this caused errors in the loader so if the file does need to be created manually the first line is the search path but the path for the game is included in the record so i wasn t sure if this was relative or absolute i e roms batman roms batman batman tap or if it s roms batman batman batman tap i do feel like i ve just got the manual wrong but again i d be happy to help propose any changes to help clarify for simpletons like me
| 0
|
159,290
| 20,048,346,840
|
IssuesEvent
|
2022-02-03 01:07:39
|
kapseliboi/coronavirus-dashboard
|
https://api.github.com/repos/kapseliboi/coronavirus-dashboard
|
opened
|
CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz
|
security vulnerability
|
## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.14.1.tgz (Root Library)
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>v3-development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution (json-schema): 0.4.0</p>
<p>Direct dependency fix Resolution (node-sass): 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz - ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.14.1.tgz (Root Library)
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>v3-development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution (json-schema): 0.4.0</p>
<p>Direct dependency fix Resolution (node-sass): 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_usab
|
cve high detected in json schema tgz cve high severity vulnerability vulnerable library json schema tgz json schema validation and specifications library home page a href path to dependency file package json path to vulnerable library node modules json schema package json dependency hierarchy node sass tgz root library request tgz http signature tgz jsprim tgz x json schema tgz vulnerable library found in base branch development vulnerability details json schema is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution json schema direct dependency fix resolution node sass step up your open source security game with whitesource
| 0
|
21,772
| 17,691,612,081
|
IssuesEvent
|
2021-08-24 10:39:12
|
goblint/analyzer
|
https://api.github.com/repos/goblint/analyzer
|
closed
|
Rework system outputting warnings
|
cleanup feature usability practical-course
|
Categories should be introduced to create a better understanding of the warnings.
#198 #199 #200
@vandah
@edincitaku
|
True
|
Rework system outputting warnings - Categories should be introduced to create a better understanding of the warnings.
#198 #199 #200
@vandah
@edincitaku
|
usab
|
rework system outputting warnings categories should be introduced to create a better understanding of the warnings vandah edincitaku
| 1
|
26,248
| 26,594,630,546
|
IssuesEvent
|
2023-01-23 11:27:59
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Clicking on `res://` folder is very slow
|
bug topic:editor usability regression performance
|
### Godot version
0056acf
### System information
Windows 10 x64
### Issue description
When you click `res://` folder in filesystem dock, it hangs the editor for a few seconds:

(this is release build)
### Steps to reproduce
1. Click `res://`
2. 🥶
### Minimal reproduction project
_No response_
|
True
|
Clicking on `res://` folder is very slow - ### Godot version
0056acf
### System information
Windows 10 x64
### Issue description
When you click `res://` folder in filesystem dock, it hangs the editor for a few seconds:

(this is release build)
### Steps to reproduce
1. Click `res://`
2. 🥶
### Minimal reproduction project
_No response_
|
usab
|
clicking on res folder is very slow godot version system information windows issue description when you click res folder in filesystem dock it hangs the editor for a few seconds this is release build steps to reproduce click res 🥶 minimal reproduction project no response
| 1
|
251,207
| 21,445,939,535
|
IssuesEvent
|
2022-04-25 06:19:09
|
Azure/azure-sdk-for-js
|
https://api.github.com/repos/Azure/azure-sdk-for-js
|
closed
|
[Recorder] Escape high codepoints in browser recordings
|
test-utils-recorder
|
Because `karma` determines whether a file is binary or not using this package https://www.npmjs.com/package/isbinaryfile which flags non-ASCII characters. After a certain number of "weird" characters, the file is flagged as binary. We do not want recording files to be flagged as binary so that they can actually work, so all these weird characters should be escaped.
One test that sends emojis had this issue in text analytics and the recording file was altered manually so emoji's codepoints were escaped: https://github.com/Azure/azure-sdk-for-js/pull/16126
|
1.0
|
[Recorder] Escape high codepoints in browser recordings - Because `karma` determines whether a file is binary or not using this package https://www.npmjs.com/package/isbinaryfile which flags non-ASCII characters. After a certain number of "weird" characters, the file is flagged as binary. We do not want recording files to be flagged as binary so that they can actually work, so all these weird characters should be escaped.
One test that sends emojis had this issue in text analytics and the recording file was altered manually so emoji's codepoints were escaped: https://github.com/Azure/azure-sdk-for-js/pull/16126
|
non_usab
|
escape high codepoints in browser recordings because karma determines whether a file is binary or not using this package which flags non ascii characters after a certain number of weird characters the file is flagged as binary we do not want recording files to be flagged as binary so that they can actually work so all these weird characters should be escaped one test that sends emojis had this issue in text analytics and the recording file was altered manually so emoji s codepoints were escaped
| 0
|
18,935
| 13,463,537,164
|
IssuesEvent
|
2020-09-09 17:45:26
|
openstreetmap/iD
|
https://api.github.com/repos/openstreetmap/iD
|
opened
|
Sort relation dropdown by most recently used
|
usability
|
_Broken out from https://github.com/openstreetmap/iD/issues/5997#issuecomment-689086673._
Relations in the "Add relation" dropdown selector are currently sorted by creation date. We could improve this by putting recently used relations at the top, even if they weren't created recently.
|
True
|
Sort relation dropdown by most recently used - _Broken out from https://github.com/openstreetmap/iD/issues/5997#issuecomment-689086673._
Relations in the "Add relation" dropdown selector are currently sorted by creation date. We could improve this by putting recently used relations at the top, even if they weren't created recently.
|
usab
|
sort relation dropdown by most recently used broken out from relations in the add relation dropdown selector are currently sorted by creation date we could improve this by putting recently used relations at the top even if they weren t created recently
| 1
|
16,591
| 11,101,178,387
|
IssuesEvent
|
2019-12-16 20:52:01
|
NCAR/VAPOR
|
https://api.github.com/repos/NCAR/VAPOR
|
closed
|
No longer able to edit Data Value
|
Fixed Usability
|

The Data value box highlighted in the picture within the appearance tab is no longer able to be typed in or edited with the keyboard.
To Reproduce
1. Go to appearance tab of the image. (I was using a barb renderer)
2. Attempt to click on and edit the highlighted data value box in the above image.
3. Number does not change regardless of clicks or key presses.
### Desktop
- OS and version: Redhat Linux Enterprise 7
- Version: Vapor 3.2.0(new)
|
True
|
No longer able to edit Data Value -

The Data value box highlighted in the picture within the appearance tab is no longer able to be typed in or edited with the keyboard.
To Reproduce
1. Go to appearance tab of the image. (I was using a barb renderer)
2. Attempt to click on and edit the highlighted data value box in the above image.
3. Number does not change regardless of clicks or key presses.
### Desktop
- OS and version: Redhat Linux Enterprise 7
- Version: Vapor 3.2.0(new)
|
usab
|
no longer able to edit data value the data value box highlighted in the picture within the appearance tab is no longer able to be typed in or edited with the keyboard to reproduce go to appearance tab of the image i was using a barb renderer attempt to click on and edit the highlighted data value box in the above image number does not change regardless of clicks or key presses desktop os and version redhat linux enterprise version vapor new
| 1
|
480,053
| 13,822,589,714
|
IssuesEvent
|
2020-10-13 05:27:03
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
opened
|
Bugreports: Update report page
|
Priority: Medium Type: Task
|
Update report page to list the data associated with the report, and a list of exceptions.
|
1.0
|
Bugreports: Update report page - Update report page to list the data associated with the report, and a list of exceptions.
|
non_usab
|
bugreports update report page update report page to list the data associated with the report and a list of exceptions
| 0
|
22,815
| 20,243,130,677
|
IssuesEvent
|
2022-02-14 11:10:00
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
Option `--format` for `clickhouse-local` should set both input and output formats.
|
easy task usability unexpected behaviour
|
`clickhouse-local` can take three options: `--input-format`, `--output-format` and `--format`.
Currently the `--format` option only sets the output format.
Let's make it set default input and output formats that can be overriden by `--input-format` and `--output-format`.
|
True
|
Option `--format` for `clickhouse-local` should set both input and output formats. - `clickhouse-local` can take three options: `--input-format`, `--output-format` and `--format`.
Currently the `--format` option only sets the output format.
Let's make it set default input and output formats that can be overriden by `--input-format` and `--output-format`.
|
usab
|
option format for clickhouse local should set both input and output formats clickhouse local can take three options input format output format and format currently the format option only sets the output format let s make it set default input and output formats that can be overriden by input format and output format
| 1
|
19,698
| 14,443,609,227
|
IssuesEvent
|
2020-12-07 19:55:15
|
openstreetmap/iD
|
https://api.github.com/repos/openstreetmap/iD
|
closed
|
Use blue for "waterway" group and waterfall to make them easier to recognize
|
icon usability
|
Currently icons for both of them are not really recognizable. I see that at leas some icons are using color, but I am not sure about strategy for doing this (I checked https://github.com/openstreetmap/iD/blob/master/data/presets/README.md ).

|
True
|
Use blue for "waterway" group and waterfall to make them easier to recognize - Currently icons for both of them are not really recognizable. I see that at leas some icons are using color, but I am not sure about strategy for doing this (I checked https://github.com/openstreetmap/iD/blob/master/data/presets/README.md ).

|
usab
|
use blue for waterway group and waterfall to make them easier to recognize currently icons for both of them are not really recognizable i see that at leas some icons are using color but i am not sure about strategy for doing this i checked
| 1
|
173,899
| 6,534,018,909
|
IssuesEvent
|
2017-08-31 09:04:07
|
Cadasta/cadasta-platform
|
https://api.github.com/repos/Cadasta/cadasta-platform
|
closed
|
AttributeError: 'WSGIRequest' object has no attribute 'user'
|
bug priority: critical
|
### Steps to reproduce the error
Without logging in, attempt to load a platform URL for creation, such as:
https://platform-staging.cadasta.org/organizations/allthethings/projects/quiet-car/records/locations/new
Also applies to /delete, /edit
_**Edit:** Same error occurs with or without active user session, if trailing slash is not appended to URL_
### Actual behavior
Unhandled Django exception causes a 500 error in uWSGI. See https://opbeat.com/cadasta/platform-staging/errors/209/ for trace.
### Expected behavior
Redirect to sign in page.
|
1.0
|
AttributeError: 'WSGIRequest' object has no attribute 'user' - ### Steps to reproduce the error
Without logging in, attempt to load a platform URL for creation, such as:
https://platform-staging.cadasta.org/organizations/allthethings/projects/quiet-car/records/locations/new
Also applies to /delete, /edit
_**Edit:** Same error occurs with or without active user session, if trailing slash is not appended to URL_
### Actual behavior
Unhandled Django exception causes a 500 error in uWSGI. See https://opbeat.com/cadasta/platform-staging/errors/209/ for trace.
### Expected behavior
Redirect to sign in page.
|
non_usab
|
attributeerror wsgirequest object has no attribute user steps to reproduce the error without logging in attempt to load a platform url for creation such as also applies to delete edit edit same error occurs with or without active user session if trailing slash is not appended to url actual behavior unhandled django exception causes a error in uwsgi see for trace expected behavior redirect to sign in page
| 0
|
286,857
| 8,794,030,851
|
IssuesEvent
|
2018-12-21 22:46:19
|
INN/umbrella-rivard-report
|
https://api.github.com/repos/INN/umbrella-rivard-report
|
closed
|
Donor logos for Businesses and Nonprofits
|
high priority
|
1. Go to: https://therivardreport.com/membership-donate/ (privately published)
RESULT: Doesn't have list of donors yet
EXPECT: Rivard would like something similar to "OUR MEMBERS" here: https://inn.org/ to appear at the bottom of the page (below the tabs of info)
NOTES:
- needs to be easily updatable and using a class in the text editor may not be ideal
- should include business and nonprofit logos from here: https://therivardreport.com/business-members/
- will need to decide how to handle the donation amounts (group them, show $ amount above or below logo, etc.)

|
1.0
|
Donor logos for Businesses and Nonprofits - 1. Go to: https://therivardreport.com/membership-donate/ (privately published)
RESULT: Doesn't have list of donors yet
EXPECT: Rivard would like something similar to "OUR MEMBERS" here: https://inn.org/ to appear at the bottom of the page (below the tabs of info)
NOTES:
- needs to be easily updatable and using a class in the text editor may not be ideal
- should include business and nonprofit logos from here: https://therivardreport.com/business-members/
- will need to decide how to handle the donation amounts (group them, show $ amount above or below logo, etc.)

|
non_usab
|
donor logos for businesses and nonprofits go to privately published result doesn t have list of donors yet expect rivard would like something similar to our members here to appear at the bottom of the page below the tabs of info notes needs to be easily updatable and using a class in the text editor may not be ideal should include business and nonprofit logos from here will need to decide how to handle the donation amounts group them show amount above or below logo etc
| 0
|
5,685
| 3,975,542,198
|
IssuesEvent
|
2016-05-05 06:12:44
|
kolliSuman/issues
|
https://api.github.com/repos/kolliSuman/issues
|
closed
|
QA_Video Tutorial_Back to experiments_p2
|
Category: Usability Developed By: VLEAD Release Number: Production Severity: S2 Status: Open
|
Defect Description :
In the "Video Tutorial" experiment,the back to experiments link is not present in the page instead the back to experiments link should be displayed on the screen in-order to view the list of experiments by the user.
Actual Result :
In the "Video Tutorial" experiment,the back to experiments link is not present in the page.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Test Step Link:
https://github.com/Virtual-Labs/virtual-english-iitg/blob/master/test-cases/integration_test-cases/Video%20Tutorial/Video%20Tutorial_05_Back%20to%20experiments_p2.org
|
True
|
QA_Video Tutorial_Back to experiments_p2 - Defect Description :
In the "Video Tutorial" experiment,the back to experiments link is not present in the page instead the back to experiments link should be displayed on the screen in-order to view the list of experiments by the user.
Actual Result :
In the "Video Tutorial" experiment,the back to experiments link is not present in the page.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Test Step Link:
https://github.com/Virtual-Labs/virtual-english-iitg/blob/master/test-cases/integration_test-cases/Video%20Tutorial/Video%20Tutorial_05_Back%20to%20experiments_p2.org
|
usab
|
qa video tutorial back to experiments defect description in the video tutorial experiment the back to experiments link is not present in the page instead the back to experiments link should be displayed on the screen in order to view the list of experiments by the user actual result in the video tutorial experiment the back to experiments link is not present in the page environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor test step link
| 1
|
11,290
| 7,138,142,806
|
IssuesEvent
|
2018-01-23 13:34:30
|
MISP/MISP
|
https://api.github.com/repos/MISP/MISP
|
closed
|
Disallow proposals that match an already existing attribute
|
bug enhancement usability
|
Disallow proposal that already have an attribute that match on the triplet type/value/ids so that there is no possible duplicate.
|
True
|
Disallow proposals that match an already existing attribute - Disallow proposal that already have an attribute that match on the triplet type/value/ids so that there is no possible duplicate.
|
usab
|
disallow proposals that match an already existing attribute disallow proposal that already have an attribute that match on the triplet type value ids so that there is no possible duplicate
| 1
|
3,378
| 3,428,212,543
|
IssuesEvent
|
2015-12-10 08:13:28
|
tgstation/-tg-station
|
https://api.github.com/repos/tgstation/-tg-station
|
opened
|
Entering the gateway should immobilize you for a moment, to prevent you from instantly going through it again.
|
Feature Request Usability
|
What it says on the tin, it's very easy to accidentally jump back into the mission when you didn't mean to because you moved one tile too many.
|
True
|
Entering the gateway should immobilize you for a moment, to prevent you from instantly going through it again. - What it says on the tin, it's very easy to accidentally jump back into the mission when you didn't mean to because you moved one tile too many.
|
usab
|
entering the gateway should immobilize you for a moment to prevent you from instantly going through it again what it says on the tin it s very easy to accidentally jump back into the mission when you didn t mean to because you moved one tile too many
| 1
|
293,017
| 8,971,817,770
|
IssuesEvent
|
2019-01-29 16:44:25
|
griffithlab/civic-client
|
https://api.github.com/repos/griffithlab/civic-client
|
closed
|
Can't add new DOID to record.
|
bug enhancement high priority
|
Per our request, chronic neutrophilic leukemia was added to disease-ontology. https://github.com/DiseaseOntology/HumanDiseaseOntology/issues/280#issuecomment-315180267
Shown here: http://www.disease-ontology.org/?id=DOID:0080187
However, I still can't update this evidence item.
https://civicdb.org/events/genes/1239/summary/variants/560/summary/evidence/1388/summary#evidence
I assume it's missing from the DOID file.
|
1.0
|
Can't add new DOID to record. - Per our request, chronic neutrophilic leukemia was added to disease-ontology. https://github.com/DiseaseOntology/HumanDiseaseOntology/issues/280#issuecomment-315180267
Shown here: http://www.disease-ontology.org/?id=DOID:0080187
However, I still can't update this evidence item.
https://civicdb.org/events/genes/1239/summary/variants/560/summary/evidence/1388/summary#evidence
I assume it's missing from the DOID file.
|
non_usab
|
can t add new doid to record per our request chronic neutrophilic leukemia was added to disease ontology shown here however i still can t update this evidence item i assume it s missing from the doid file
| 0
|
182,815
| 6,673,746,746
|
IssuesEvent
|
2017-10-04 16:00:34
|
knipferrc/pastey
|
https://api.github.com/repos/knipferrc/pastey
|
closed
|
next.js?
|
Priority: Maximum Type: Refactor
|
Thinking of porting to next.js for their prefetching, and SSR. Should give better time to interactive and going to include the server with the app to simplify the app a little as far as deployment.
|
1.0
|
next.js? - Thinking of porting to next.js for their prefetching, and SSR. Should give better time to interactive and going to include the server with the app to simplify the app a little as far as deployment.
|
non_usab
|
next js thinking of porting to next js for their prefetching and ssr should give better time to interactive and going to include the server with the app to simplify the app a little as far as deployment
| 0
|
26,899
| 27,334,327,188
|
IssuesEvent
|
2023-02-26 01:59:15
|
MarkBind/markbind
|
https://api.github.com/repos/MarkBind/markbind
|
closed
|
"Cap" / remove panel closing animations
|
p.Medium a-ReaderUsability d.easy
|
<!--
Before opening a new issue, please search existing issues: https://github.com/MarkBind/markbind/issues
-->
**Is your request related to a problem?**
When the user closes the panel, the viewport is "dragged" back up to the panel's start.
This works well for smaller panels but becomes rather disorienting for larger ones.
**Describe the solution you'd like**
Either:
- Only activate the closing transitions if the panel contents are less than some fixed height (I think preferably as the transition isn't too big)
- Completely remove the _closing_ transitions
|
True
|
"Cap" / remove panel closing animations - <!--
Before opening a new issue, please search existing issues: https://github.com/MarkBind/markbind/issues
-->
**Is your request related to a problem?**
When the user closes the panel, the viewport is "dragged" back up to the panel's start.
This works well for smaller panels but becomes rather disorienting for larger ones.
**Describe the solution you'd like**
Either:
- Only activate the closing transitions if the panel contents are less than some fixed height (I think preferably as the transition isn't too big)
- Completely remove the _closing_ transitions
|
usab
|
cap remove panel closing animations before opening a new issue please search existing issues is your request related to a problem when the user closes the panel the viewport is dragged back up to the panel s start this works well for smaller panels but becomes rather disorienting for larger ones describe the solution you d like either only activate the closing transitions if the panel contents are less than some fixed height i think preferably as the transition isn t too big completely remove the closing transitions
| 1
|
4,572
| 3,872,477,357
|
IssuesEvent
|
2016-04-11 14:01:37
|
lionheart/openradar-mirror
|
https://api.github.com/repos/lionheart/openradar-mirror
|
opened
|
22513959: Xcode-beta (7A192o): Refactor does not take into account casting, ObjC generics and __kindof class uses
|
classification:ui/usability reproducible:always status:open
|
#### Description
Summary:
Xcode 7 introduces ObjC generics and __kindof, however the refactoring tools have not been updated with support. Result is, when renaming a class and compiling, there are compilation errors on casting, generics, __kindof.
Steps to Reproduce:
Create a class
Cast an instance of an object to that class.
Declare an NSArray of __kindof that class objects.
Rename the class
Expected Results:
All uses of the class name should be renamed to the new name
Actual Results:
Some cases are handled, while others are not.
Regression:
This happens on all Xcode 7 betas
Notes:
N/A
-
Product Version: Xcode-beta (7A192o)
Created: 2015-09-01T05:59:52.816410
Originated: 2015-09-01T08:59:00
Open Radar Link: http://www.openradar.me/22513959
|
True
|
22513959: Xcode-beta (7A192o): Refactor does not take into account casting, ObjC generics and __kindof class uses - #### Description
Summary:
Xcode 7 introduces ObjC generics and __kindof, however the refactoring tools have not been updated with support. Result is, when renaming a class and compiling, there are compilation errors on casting, generics, __kindof.
Steps to Reproduce:
Create a class
Cast an instance of an object to that class.
Declare an NSArray of __kindof that class objects.
Rename the class
Expected Results:
All uses of the class name should be renamed to the new name
Actual Results:
Some cases are handled, while others are not.
Regression:
This happens on all Xcode 7 betas
Notes:
N/A
-
Product Version: Xcode-beta (7A192o)
Created: 2015-09-01T05:59:52.816410
Originated: 2015-09-01T08:59:00
Open Radar Link: http://www.openradar.me/22513959
|
usab
|
xcode beta refactor does not take into account casting objc generics and kindof class uses description summary xcode introduces objc generics and kindof however the refactoring tools have not been updated with support result is when renaming a class and compiling there are compilation errors on casting generics kindof steps to reproduce create a class cast an instance of an object to that class declare an nsarray of kindof that class objects rename the class expected results all uses of the class name should be renamed to the new name actual results some cases are handled while others are not regression this happens on all xcode betas notes n a product version xcode beta created originated open radar link
| 1
|
117,655
| 25,170,612,357
|
IssuesEvent
|
2022-11-11 02:40:35
|
MicrosoftDocs/live-share
|
https://api.github.com/repos/MicrosoftDocs/live-share
|
closed
|
Issues with NTLM proxy
|
bug client: vscode area: proxy product-feedback p2 Stale
|
We are using an NTLM proxy in our corporate environment.
The only way to get LiveShare to work on Windows 10 PC is:
1.) set the HTTP_PROXY and HTTPS_PROXY system environment variables
2.) set the property "http.proxy" in the settings.json file with the windows user name and password:
"http://[user]:[password]@[proxy FQDNS]:[port]"
Without that setting creating a shared sessions fails with a proxy authentication issue.
Setting the two system environment variables isn't a big issue. But putting the user's windows password into the settings.json could be an issue. The file is only stored on the user's PC and only accessible by the user itself and Administrators - but still this shouldn't be necessary
Using VS 2019 16.8.2. with the latest LiveShare extension works out of the box on the very same PC.
|
1.0
|
Issues with NTLM proxy - We are using an NTLM proxy in our corporate environment.
The only way to get LiveShare to work on Windows 10 PC is:
1.) set the HTTP_PROXY and HTTPS_PROXY system environment variables
2.) set the property "http.proxy" in the settings.json file with the windows user name and password:
"http://[user]:[password]@[proxy FQDNS]:[port]"
Without that setting creating a shared sessions fails with a proxy authentication issue.
Setting the two system environment variables isn't a big issue. But putting the user's windows password into the settings.json could be an issue. The file is only stored on the user's PC and only accessible by the user itself and Administrators - but still this shouldn't be necessary
Using VS 2019 16.8.2. with the latest LiveShare extension works out of the box on the very same PC.
|
non_usab
|
issues with ntlm proxy we are using an ntlm proxy in our corporate environment the only way to get liveshare to work on windows pc is set the http proxy and https proxy system environment variables set the property http proxy in the settings json file with the windows user name and password http without that setting creating a shared sessions fails with a proxy authentication issue setting the two system environment variables isn t a big issue but putting the user s windows password into the settings json could be an issue the file is only stored on the user s pc and only accessible by the user itself and administrators but still this shouldn t be necessary using vs with the latest liveshare extension works out of the box on the very same pc
| 0
|
55,742
| 14,664,270,369
|
IssuesEvent
|
2020-12-29 11:35:51
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
P-Table with VirtualScroll only header is resizing
|
defect
|
**I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Stackblitz**
https://stackblitz.com/edit/primeng-tablecolresize-demo-p886ik?file=src/app/app.component.html
**Current behavior**
Only the headers of the table are being resized when you add virtualScroll and virtualRowHeight. No errors are thrown.
https://imgur.com/a/GMthfJa
**Expected behavior**
The header as well as the rest of the rows are resized.
**Minimal reproduction of the problem with instructions**
I've used the last example of https://www.primefaces.org/primeng/showcase/#/table/colresize 'Scrollable with Variable Width' and added the following properties to the p-table:
`[rows]="100" [virtualScroll]="true" [virtualRowHeight]="34"`
* **Angular version:** 11.0.0
* **PrimeNG version:** 11.0.0-rc.2
|
1.0
|
P-Table with VirtualScroll only header is resizing - **I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Stackblitz**
https://stackblitz.com/edit/primeng-tablecolresize-demo-p886ik?file=src/app/app.component.html
**Current behavior**
Only the headers of the table are being resized when you add virtualScroll and virtualRowHeight. No errors are thrown.
https://imgur.com/a/GMthfJa
**Expected behavior**
The header as well as the rest of the rows are resized.
**Minimal reproduction of the problem with instructions**
I've used the last example of https://www.primefaces.org/primeng/showcase/#/table/colresize 'Scrollable with Variable Width' and added the following properties to the p-table:
`[rows]="100" [virtualScroll]="true" [virtualRowHeight]="34"`
* **Angular version:** 11.0.0
* **PrimeNG version:** 11.0.0-rc.2
|
non_usab
|
p table with virtualscroll only header is resizing i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see stackblitz current behavior only the headers of the table are being resized when you add virtualscroll and virtualrowheight no errors are thrown expected behavior the header as well as the rest of the rows are resized minimal reproduction of the problem with instructions i ve used the last example of scrollable with variable width and added the following properties to the p table true angular version primeng version rc
| 0
|
42,168
| 10,895,820,419
|
IssuesEvent
|
2019-11-19 11:26:02
|
bitcoin/bitcoin
|
https://api.github.com/repos/bitcoin/bitcoin
|
closed
|
openssl version 1.1.1 and Qt 5.12.4/5
|
Build system GUI
|
I noticed while trying to deploy for all platforms on Qt 5.12.4 that as of that version only openssl version 1.1.1+ is supported. This prompted me to check what version is used currently in this project (1.0.1k) and to also check the EOL.
I found that the 1.0.1 version reached EOL in 2016. 1.0.2 reaches EOL in December 2019, so I wanted to know if there is a reason the default is not 1.1.1? This also probably supports our efforts to encourage a bump from Qt 5.9 to 5.12.4/5. I think a comprehensive look across all platforms should be considered.
|
1.0
|
openssl version 1.1.1 and Qt 5.12.4/5 - I noticed while trying to deploy for all platforms on Qt 5.12.4 that as of that version only openssl version 1.1.1+ is supported. This prompted me to check what version is used currently in this project (1.0.1k) and to also check the EOL.
I found that the 1.0.1 version reached EOL in 2016. 1.0.2 reaches EOL in December 2019, so I wanted to know if there is a reason the default is not 1.1.1? This also probably supports our efforts to encourage a bump from Qt 5.9 to 5.12.4/5. I think a comprehensive look across all platforms should be considered.
|
non_usab
|
openssl version and qt i noticed while trying to deploy for all platforms on qt that as of that version only openssl version is supported this prompted me to check what version is used currently in this project and to also check the eol i found that version reached eol in reaches eol in december so i wanted to know if there is a reason the default is not this also probably supports our efforts to encourage a bump from qt to i think a comprehensive look across all platforms should be considered
| 0
|
4,263
| 3,789,533,342
|
IssuesEvent
|
2016-03-21 18:15:22
|
mesosphere/marathon
|
https://api.github.com/repos/mesosphere/marathon
|
opened
|
Browsing the app create/edit options should not add any data
|
enhancement gui ready usability
|

Browsing the app create/edit options should not add any data to the app definition nor trigger `handleAppConfigChange`.
|
True
|
Browsing the app create/edit options should not add any data - 
Browsing the app create/edit options should not add any data to the app definition nor trigger `handleAppConfigChange`.
|
usab
|
browsing the app create edit options should not add any data browsing the app create edit options should not add any data to the app definition nor trigger handleappconfigchange
| 1
|
15,728
| 10,265,610,960
|
IssuesEvent
|
2019-08-22 19:17:24
|
ualbertalib/avalon
|
https://api.github.com/repos/ualbertalib/avalon
|
closed
|
Manage Files - Relabel Associated Files to Uploaded Files
|
Post-launch usability
|
### Descriptive summary
On manage file(s) form relabel "Associated Files" to "Uploaded Files":

|
True
|
Manage Files - Relabel Associated Files to Uploaded Files - ### Descriptive summary
On manage file(s) form relabel "Associated Files" to "Uploaded Files":

|
usab
|
manage files relabel associated files to uploaded files descriptive summary on manage file s form relabel associated files to uploaded files
| 1
|
164,329
| 25,949,843,870
|
IssuesEvent
|
2022-12-17 12:49:23
|
Sunbird-cQube/community
|
https://api.github.com/repos/Sunbird-cQube/community
|
closed
|
Documentation -"How to Configure"
|
Backlog Connector Size-Medium Biz-Priority-P1 Tech-Priority-P1 Design Req
|
Prepare the Clear documentation on "How-to" configure with its limitations, rules, sample data.
**Assumption**
Create a doc on how to fill valid configuration file.
Depends on config file validations
|
1.0
|
Documentation -"How to Configure" - Prepare the Clear documentation on "How-to" configure with its limitations, rules, sample data.
**Assumption**
Create a doc on how to fill valid configuration file.
Depends on config file validations
|
non_usab
|
documentation how to configure prepare the clear documentation on how to configure with its limitations rules sample data assumption create a doc on how to fill valid configuration file depends on config file validations
| 0
|
193,251
| 6,883,033,178
|
IssuesEvent
|
2017-11-21 07:44:01
|
r-lib/styler
|
https://api.github.com/repos/r-lib/styler
|
closed
|
Also adding curly braces to expressions other than conditional statements
|
Complexity: High Priority: Low Status: Postponed Type: Enhancement
|
Since we change
```r
if (TRUE)
return(call(a))
```
Into
```r
if (TRUE) {
return(call(a))
}
```
We should probably also change
```r
while (FALSE)
xyz(v)
```
Into
```r
while (FALSE) {
xyz(v)
}
```
Reference: https://github.com/r-lib/styler/pull/279#discussion_r151879108
|
1.0
|
Also adding curly braces to expressions other than conditional statements - Since we change
```r
if (TRUE)
return(call(a))
```
Into
```r
if (TRUE) {
return(call(a))
}
```
We should probably also change
```r
while (FALSE)
xyz(v)
```
Into
```r
while (FALSE) {
xyz(v)
}
```
Reference: https://github.com/r-lib/styler/pull/279#discussion_r151879108
|
non_usab
|
also adding curly braces to expressions other than conditional statements since we change r if true return call a into r if true return call a we should probably also change r while false xyz v into r while false xyz v reference
| 0
|
289,534
| 8,872,179,395
|
IssuesEvent
|
2019-01-11 14:47:40
|
whole-tale/dashboard
|
https://api.github.com/repos/whole-tale/dashboard
|
opened
|
Tale browsing widget should be embedded in the landing page.
|
area/ui-ux kind/enhancement priority/low
|
and accessible to anonymous users.
|
1.0
|
Tale browsing widget should be embedded in the landing page. - and accessible to anonymous users.
|
non_usab
|
tale browsing widget should be embedded in the landing page and accessible to anonymous users
| 0
|
25,452
| 25,207,262,414
|
IssuesEvent
|
2022-11-13 20:43:44
|
matomo-org/matomo
|
https://api.github.com/repos/matomo-org/matomo
|
opened
|
Pages by Goals report is missing a tooltip explaining the "Goal conversions" metric
|
c: Usability
|
In the Goals > Goal X > Goals by Pages > Page URLs, the report table displayed has a column "Goal X Conversions".
Currently the column doesn't have an inline help.
Instead, it's expected there is an help on hover.
For example here is what the help on hover looks like for the "Goal X conversion rate" column.

|
True
|
Pages by Goals report is missing a tooltip explaining the "Goal conversions" metric - In the Goals > Goal X > Goals by Pages > Page URLs, the report table displayed has a column "Goal X Conversions".
Currently the column doesn't have an inline help.
Instead, it's expected there is an help on hover.
For example here is what the help on hover looks like for the "Goal X conversion rate" column.

|
usab
|
pages by goals report is missing a tooltip explaining the goal conversions metric in the goals goal x goals by pages page urls the report table displayed has a column goal x conversions currently the column doesn t have an inline help instead it s expected there is an help on hover for example here is what the help on hover looks like for the goal x conversion rate column
| 1
|
104,397
| 8,972,307,134
|
IssuesEvent
|
2019-01-29 17:56:18
|
nasa-gibs/worldview
|
https://api.github.com/repos/nasa-gibs/worldview
|
closed
|
Some measurement descriptions are not using the right description for the assigned measurements
|
bug testing
|
**Describe the bug**
Measurement descriptions (ones found in the layer picker categories) are not correctly matching up to the appropriate description after you pick subsequent measurements with the same satellite/sensor. I have noticed this with Terra/MODIS, Aqua/MODIS and Aura/OMI descriptions, it may be happening to others.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Add Layers
2. Click on All
3. Scroll down to Aerosol Optical Depth, click on Terra/MODIS. See correct description.
4. Scroll to Areas of No Data, click on Terra/MODIS. See description for Aerosol Optical Depth, not Areas of No Data.
**Expected behavior**
I would expect to see the correct measurement description for the correct group of layers.
**Screenshots**
<img width="964" alt="screen shot 2019-01-23 at 12 02 15 pm" src="https://user-images.githubusercontent.com/10747923/51623686-3e2ed700-1f07-11e9-997e-181b1fdbb840.png">
**Desktop (please complete the following information):**
- OS: OSX
- Browser Firefox and Chrome
- Seen on Production and UAT
|
1.0
|
Some measurement descriptions are not using the right description for the assigned measurements - **Describe the bug**
Measurement descriptions (ones found in the layer picker categories) are not correctly matching up to the appropriate description after you pick subsequent measurements with the same satellite/sensor. I have noticed this with Terra/MODIS, Aqua/MODIS and Aura/OMI descriptions, it may be happening to others.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Add Layers
2. Click on All
3. Scroll down to Aerosol Optical Depth, click on Terra/MODIS. See correct description.
4. Scroll to Areas of No Data, click on Terra/MODIS. See description for Aerosol Optical Depth, not Areas of No Data.
**Expected behavior**
I would expect to see the correct measurement description for the correct group of layers.
**Screenshots**
<img width="964" alt="screen shot 2019-01-23 at 12 02 15 pm" src="https://user-images.githubusercontent.com/10747923/51623686-3e2ed700-1f07-11e9-997e-181b1fdbb840.png">
**Desktop (please complete the following information):**
- OS: OSX
- Browser Firefox and Chrome
- Seen on Production and UAT
|
non_usab
|
some measurement descriptions are not using the right description for the assigned measurements describe the bug measurement descriptions ones found in the layer picker categories are not correctly matching up to the appropriate description after you pick subsequent measurements with the same satellite sensor i have noticed this with terra modis aqua modis and aura omi descriptions it may be happening to others to reproduce steps to reproduce the behavior go to add layers click on all scroll down to aerosol optical depth click on terra modis see correct description scroll to areas of no data click on terra modis see description for aerosol optical depth not areas of no data expected behavior i would expect to see the correct measurement description for the correct group of layers screenshots img width alt screen shot at pm src desktop please complete the following information os osx browser firefox and chrome seen on production and uat
| 0
|
13,138
| 8,303,814,672
|
IssuesEvent
|
2018-09-21 18:52:30
|
spacetx/starfish
|
https://api.github.com/repos/spacetx/starfish
|
closed
|
Allow Filter CLI + API to dump pngs
|
enhancement usability
|
Filter methods should have an option to dump pngs for easy feedback + viewing.
|
True
|
Allow Filter CLI + API to dump pngs - Filter methods should have an option to dump pngs for easy feedback + viewing.
|
usab
|
allow filter cli api to dump pngs filter methods should have an option to dump pngs for easy feedback viewing
| 1
|
24,368
| 23,676,150,325
|
IssuesEvent
|
2022-08-28 05:18:48
|
tailscale/tailscale
|
https://api.github.com/repos/tailscale/tailscale
|
closed
|
Only ipv6 IP obtained and DNS errors, v1.28.0
|
OS-linux L1 Very few P3 Can't get started T5 Usability bug
|
### What is the issue?
A tailscale node of mine is unable to properly bring up tailscale and join the VPN. It only gets an ipv6 address on the tailscale interface. However, I am ssh'ing into this machine via my "backup" service called [tmate](https://tmate.io/), that I keep using for the rare cases like this when tailscale doesn't work.
**Background information:**
_Ubuntu 18.04_
My nodes are deployed into my clients networks, which I basically consider hostile environments, I rely on tailscale to punch through whatever insane network things any individual client may have done, so I can maintain the server itself. The server is seemingly fine, it even upgraded tailscale to the latest version of 1.28.0 today (automatically, using unattended-upgrades, as expected). I can reach any website or IP I can dream up, yet the backend of tailscale is just non-stop DNS issues.
Output of: journalctl -r -u tailscaled.service
```
Jul 19 21:15:19 be-197 tailscaled[22960]: control: doLogin(regen=false, hasUrl=false)
Jul 19 21:15:18 be-197 tailscaled[22960]: Received error: fetch control key: Get "https://controlplane.tailscale.com/key?v=32": dial tcp [2a05:d014:386:203:4535:c15c:9ab:8258]:443: connect: network is unreachable
Jul 19 21:15:18 be-197 tailscaled[22960]: bootstrapDNS("derp1d.tailscale.com", "165.22.33.71") for "controlplane.tailscale.com" = [2a05:d014:386:202:6dd4:fc68:35ae:2b80 2a05:d014:386:202:a3e1:5be6:cbb1:1ac9 2a05:d014:386:203:4535:c15c:9ab:8258 2a05:d014:386:202:24dd:46aa:f98e:9997 2a05:d01
Jul 19 21:15:18 be-197 tailscaled[22960]: trying bootstrapDNS("derp1d.tailscale.com", "165.22.33.71") for "controlplane.tailscale.com" ...
Jul 19 21:15:18 be-197 tailscaled[22960]: bootstrapDNS("derp8d.tailscale.com", "2a03:b0c0:1:d0::e08:e001") for "controlplane.tailscale.com" error: Get "https://derp8d.tailscale.com/bootstrap-dns?q=controlplane.tailscale.com": dial tcp [2a03:b0c0:1:d0::e08:e001]:443: connect: network is unr
Jul 19 21:15:18 be-197 tailscaled[22960]: trying bootstrapDNS("derp8d.tailscale.com", "2a03:b0c0:1:d0::e08:e001") for "controlplane.tailscale.com" ...
Jul 19 21:15:18 be-197 tailscaled[22960]: bootstrapDNS("derp4c.tailscale.com", "134.122.77.138") for "controlplane.tailscale.com" error: Get "https://derp4c.tailscale.com/bootstrap-dns?q=controlplane.tailscale.com": context deadline exceeded
Jul 19 21:15:15 be-197 tailscaled[22960]: trying bootstrapDNS("derp4c.tailscale.com", "134.122.77.138") for "controlplane.tailscale.com" ...
Jul 19 21:15:15 be-197 tailscaled[22960]: bootstrapDNS("derp1e.tailscale.com", "2604:a880:800:10::873:4001") for "controlplane.tailscale.com" error: Get "https://derp1e.tailscale.com/bootstrap-dns?q=controlplane.tailscale.com": dial tcp [2604:a880:800:10::873:4001]:443: connect: network is
Jul 19 21:15:15 be-197 tailscaled[22960]: trying bootstrapDNS("derp1e.tailscale.com", "2604:a880:800:10::873:4001") for "controlplane.tailscale.com" ...
Jul 19 21:15:15 be-197 tailscaled[22960]: bootstrapDNS("derp4d.tailscale.com", "134.122.94.167") for "controlplane.tailscale.com" error: Get "https://derp4d.tailscale.com/bootstrap-dns?q=controlplane.tailscale.com": context deadline exceeded
Jul 19 21:15:12 be-197 tailscaled[22960]: trying bootstrapDNS("derp4d.tailscale.com", "134.122.94.167") for "controlplane.tailscale.com" ...
Jul 19 21:13:01 be-197 tailscaled[22960]: control: doLogin(regen=false, hasUrl=false)
```
After seeing this, I went and tried to adjust the netplan to try and get the DNS to resolve, to absolutely no effect. Prior, it was just simply "dhcp4: true", so I added a sections for nameservers as my hopeful fix, adding both their local DNS and google+cloudflares DNS.
Netplan Contents:
```
network:
ethernets:
eno1:
dhcp4: true
nameservers:
search: [mwci.local, whale-mountain.ts.net, blueeyemonitoring.com.beta.tailscale.net]
addresses: [172.16.1.30,172.16.5.20,8.8.8.8,1.1.1.1]
version: 2
```
Output of systemd-resolve --status prior to changing netplan:
```
Link 2 (eno1)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 172.16.1.30
172.16.5.20
DNS Domain: mwci.local
```
Output after changing Netplan (Still shows as offline in tailscale admin, and doesn't get an ipv4 assigned to the tailscale interface):
```
Link 2 (eno1)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 172.16.1.30
172.16.5.20
8.8.8.8
1.1.1.1
DNS Domain: mwci.local
whale-mountain.ts.net
blueeyemonitoring.com.beta.tailscale.net
```
This is what the tailscale interface looks like, nothing I've done seems to change anything about it:
```
Link 10 (tailscale0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
```
### Steps to reproduce
I actually do not know. This is the only one of my servers having this issue out of hundreds.
### Are there any recent changes that introduced the issue?
Tailscale did change a lot with DNS in 1.28.0 which I did automatically update to.
### OS
Linux
### OS version
Ubuntu 18.04
### Tailscale version
1.28.0
### Bug report
BUG-72d946068a0f1fa079298f5b08df7d1417db5df67b27c1009ade3ee983fefe00-20220719213029Z-ecda131f278aa306
|
True
|
Only ipv6 IP obtained and DNS errors, v1.28.0 - ### What is the issue?
A tailscale node of mine is unable to properly bring up tailscale and join the VPN. It only gets an ipv6 address on the tailscale interface. However, I am ssh'ing into this machine via my "backup" service called [tmate](https://tmate.io/), that I keep using for the rare cases like this when tailscale doesn't work.
**Background information:**
_Ubuntu 18.04_
My nodes are deployed into my clients networks, which I basically consider hostile environments, I rely on tailscale to punch through whatever insane network things any individual client may have done, so I can maintain the server itself. The server is seemingly fine, it even upgraded tailscale to the latest version of 1.28.0 today (automatically, using unattended-upgrades, as expected). I can reach any website or IP I can dream up, yet the backend of tailscale is just non-stop DNS issues.
Output of: journalctl -r -u tailscaled.service
```
Jul 19 21:15:19 be-197 tailscaled[22960]: control: doLogin(regen=false, hasUrl=false)
Jul 19 21:15:18 be-197 tailscaled[22960]: Received error: fetch control key: Get "https://controlplane.tailscale.com/key?v=32": dial tcp [2a05:d014:386:203:4535:c15c:9ab:8258]:443: connect: network is unreachable
Jul 19 21:15:18 be-197 tailscaled[22960]: bootstrapDNS("derp1d.tailscale.com", "165.22.33.71") for "controlplane.tailscale.com" = [2a05:d014:386:202:6dd4:fc68:35ae:2b80 2a05:d014:386:202:a3e1:5be6:cbb1:1ac9 2a05:d014:386:203:4535:c15c:9ab:8258 2a05:d014:386:202:24dd:46aa:f98e:9997 2a05:d01
Jul 19 21:15:18 be-197 tailscaled[22960]: trying bootstrapDNS("derp1d.tailscale.com", "165.22.33.71") for "controlplane.tailscale.com" ...
Jul 19 21:15:18 be-197 tailscaled[22960]: bootstrapDNS("derp8d.tailscale.com", "2a03:b0c0:1:d0::e08:e001") for "controlplane.tailscale.com" error: Get "https://derp8d.tailscale.com/bootstrap-dns?q=controlplane.tailscale.com": dial tcp [2a03:b0c0:1:d0::e08:e001]:443: connect: network is unr
Jul 19 21:15:18 be-197 tailscaled[22960]: trying bootstrapDNS("derp8d.tailscale.com", "2a03:b0c0:1:d0::e08:e001") for "controlplane.tailscale.com" ...
Jul 19 21:15:18 be-197 tailscaled[22960]: bootstrapDNS("derp4c.tailscale.com", "134.122.77.138") for "controlplane.tailscale.com" error: Get "https://derp4c.tailscale.com/bootstrap-dns?q=controlplane.tailscale.com": context deadline exceeded
Jul 19 21:15:15 be-197 tailscaled[22960]: trying bootstrapDNS("derp4c.tailscale.com", "134.122.77.138") for "controlplane.tailscale.com" ...
Jul 19 21:15:15 be-197 tailscaled[22960]: bootstrapDNS("derp1e.tailscale.com", "2604:a880:800:10::873:4001") for "controlplane.tailscale.com" error: Get "https://derp1e.tailscale.com/bootstrap-dns?q=controlplane.tailscale.com": dial tcp [2604:a880:800:10::873:4001]:443: connect: network is
Jul 19 21:15:15 be-197 tailscaled[22960]: trying bootstrapDNS("derp1e.tailscale.com", "2604:a880:800:10::873:4001") for "controlplane.tailscale.com" ...
Jul 19 21:15:15 be-197 tailscaled[22960]: bootstrapDNS("derp4d.tailscale.com", "134.122.94.167") for "controlplane.tailscale.com" error: Get "https://derp4d.tailscale.com/bootstrap-dns?q=controlplane.tailscale.com": context deadline exceeded
Jul 19 21:15:12 be-197 tailscaled[22960]: trying bootstrapDNS("derp4d.tailscale.com", "134.122.94.167") for "controlplane.tailscale.com" ...
Jul 19 21:13:01 be-197 tailscaled[22960]: control: doLogin(regen=false, hasUrl=false)
```
After seeing this, I went and adjusted the netplan to try to get DNS to resolve, to absolutely no effect. Previously it was simply `dhcp4: true`; as my hopeful fix I added a `nameservers` section with both their local DNS servers and Google's and Cloudflare's public DNS.
Netplan Contents:
```
network:
  ethernets:
    eno1:
      dhcp4: true
      nameservers:
        search: [mwci.local, whale-mountain.ts.net, blueeyemonitoring.com.beta.tailscale.net]
        addresses: [172.16.1.30,172.16.5.20,8.8.8.8,1.1.1.1]
  version: 2
```
Output of systemd-resolve --status prior to changing netplan:
```
Link 2 (eno1)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 172.16.1.30
172.16.5.20
DNS Domain: mwci.local
```
Output after changing netplan (the node still shows as offline in the Tailscale admin console, and no IPv4 address gets assigned to the tailscale interface):
```
Link 2 (eno1)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 172.16.1.30
172.16.5.20
8.8.8.8
1.1.1.1
DNS Domain: mwci.local
whale-mountain.ts.net
blueeyemonitoring.com.beta.tailscale.net
```
This is what the tailscale interface looks like; nothing I've done seems to change anything about it:
```
Link 10 (tailscale0)
Current Scopes: none
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
```
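To narrow down whether this is a host-level resolution failure or something Tailscale-specific, a small resolver probe can help. This is a hypothetical diagnostic sketch, not part of the original report; the hostnames are simply the ones appearing in the tailscaled log output above.

```python
import socket

# Hostnames taken from the tailscaled log output above (assumed targets).
HOSTS = ["controlplane.tailscale.com", "derp1d.tailscale.com"]

def probe(host: str) -> list[str]:
    """Return the addresses the system resolver gives for host, or [] on failure."""
    try:
        infos = socket.getaddrinfo(host, 443, type=socket.SOCK_STREAM)
    except socket.gaierror:
        return []
    # Deduplicate while preserving resolver order.
    seen, addrs = set(), []
    for *_ignored, sockaddr in infos:
        addr = sockaddr[0]
        if addr not in seen:
            seen.add(addr)
            addrs.append(addr)
    return addrs

if __name__ == "__main__":
    for host in HOSTS:
        addrs = probe(host)
        print(f"{host}: {'ok' if addrs else 'FAILED'} {addrs}")
```

If this script resolves the control-plane host but tailscaled still logs `network is unreachable` for IPv6 addresses, the problem is more likely broken IPv6 routing than DNS itself.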
### Steps to reproduce
I actually do not know. This is the only one of my servers having this issue out of hundreds.
### Are there any recent changes that introduced the issue?
Tailscale did change a lot with DNS in 1.28.0 which I did automatically update to.
### OS
Linux
### OS version
Ubuntu 18.04
### Tailscale version
1.28.0
### Bug report
BUG-72d946068a0f1fa079298f5b08df7d1417db5df67b27c1009ade3ee983fefe00-20220719213029Z-ecda131f278aa306
|
usab
|
only ip obtained and dns errors what is the issue a tailscale node of mine is unable to properly bring up tailscale and join the vpn it only gets an address on the tailscale interface however i am ssh ing into this machine via my backup service called that i keep using for the rare cases like this when tailscale doesn t work background information ubuntu my nodes are deployed into my clients networks which i basically consider hostile environments i rely on tailscale to punch through whatever insane network things any individual client may have done so i can maintain the server itself the server is seemingly fine it even upgraded tailscale to the latest version of today automatically using unattended upgrades as expected i can reach any website or ip i can dream up yet the backend of tailscale is just non stop dns issues output of journalctl r u tailscaled service jul be tailscaled control dologin regen false hasurl false jul be tailscaled received error fetch control key get dial tcp connect network is unreachable jul be tailscaled bootstrapdns tailscale com for controlplane tailscale com jul be tailscaled trying bootstrapdns tailscale com for controlplane tailscale com jul be tailscaled bootstrapdns tailscale com for controlplane tailscale com error get dial tcp connect network is unr jul be tailscaled trying bootstrapdns tailscale com for controlplane tailscale com jul be tailscaled bootstrapdns tailscale com for controlplane tailscale com error get context deadline exceeded jul be tailscaled trying bootstrapdns tailscale com for controlplane tailscale com jul be tailscaled bootstrapdns tailscale com for controlplane tailscale com error get dial tcp connect network is jul be tailscaled trying bootstrapdns tailscale com for controlplane tailscale com jul be tailscaled bootstrapdns tailscale com for controlplane tailscale com error get context deadline exceeded jul be tailscaled trying bootstrapdns tailscale com for controlplane tailscale com jul be tailscaled 
control dologin regen false hasurl false after seeing this i went and tried to adjust the netplan to try and get the dns to resolve to absolutely no effect prior it was just simply true so i added a sections for nameservers as my hopeful fix adding both their local dns and google cloudflares dns netplan contents network ethernets true nameservers search addresses version output of systemd resolve status prior to changing netplan link current scopes dns llmnr setting yes multicastdns setting no dnssec setting no dnssec supported no dns servers dns domain mwci local output after changing netplan still shows as offline in tailscale admin and doesn t get an assigned to the tailscale interface link current scopes dns llmnr setting yes multicastdns setting no dnssec setting no dnssec supported no dns servers dns domain mwci local whale mountain ts net blueeyemonitoring com beta tailscale net this is what the tailscale interface looks like nothing i ve done seems to change anything about it link current scopes none llmnr setting yes multicastdns setting no dnssec setting no dnssec supported no steps to reproduce i actually do not know this is the only one of my servers having this issue out of hundreds are there any recent changes that introduced the issue tailscale did change a lot with dns in which i did automatically update to os linux os version ubuntu tailscale version bug report bug
| 1
|
133,403
| 18,297,457,284
|
IssuesEvent
|
2021-10-05 21:59:36
|
ghc-dev/Jennifer-Poole
|
https://api.github.com/repos/ghc-dev/Jennifer-Poole
|
opened
|
WS-2016-0036 (Low) detected in cli-0.6.6.tgz
|
security vulnerability
|
## WS-2016-0036 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cli-0.6.6.tgz</b></p></summary>
<p>A tool for rapidly building command line apps</p>
<p>Library home page: <a href="https://registry.npmjs.org/cli/-/cli-0.6.6.tgz">https://registry.npmjs.org/cli/-/cli-0.6.6.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/htmlhint/node_modules/cli/package.json</p>
<p>
Dependency Hierarchy:
- grunt-htmlhint-0.9.13.tgz (Root Library)
- htmlhint-0.9.13.tgz
- jshint-2.8.0.tgz
- :x: **cli-0.6.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Jennifer-Poole/commit/a42e1f276b13ba57ab48e60289e7f00c2858fab6">a42e1f276b13ba57ab48e60289e7f00c2858fab6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package node-cli insecurely uses the lock_file and log_file. Both of these are temporary, but it allows the starting user to overwrite any file they have access to.
<p>Publish Date: 2016-08-16
<p>URL: <a href=https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=809252>WS-2016-0036</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>1.9</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/node-js-libs/cli/commit/fd6bc4d2a901aabe0bb6067fbcc14a4fe3faa8b9">https://github.com/node-js-libs/cli/commit/fd6bc4d2a901aabe0bb6067fbcc14a4fe3faa8b9</a></p>
<p>Release Date: 2017-01-31</p>
<p>Fix Resolution: 1.0.0</p>
</p>
</details>
<p></p>
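The insecure temp-file pattern the advisory describes (a predictable `lock_file`/`log_file` path that the starting user can pre-create or symlink) is easiest to see next to the safe alternative. The sketch below is illustrative Python, not the node-cli code itself:

```python
import os
import tempfile

# Insecure pattern (what the advisory describes, transliterated to Python):
# a fixed, predictable path in a shared directory. Any process that can
# pre-create or symlink this path can make us overwrite an arbitrary file
# the starting user has access to.
PREDICTABLE_LOCK = os.path.join(tempfile.gettempdir(), "myapp.lock")

def safe_lock_file() -> str:
    """Create a lock file safely: unpredictable name, O_EXCL, owner-only mode."""
    # mkstemp opens with O_CREAT | O_EXCL and mode 0o600, so a pre-planted
    # file or symlink at the chosen path makes creation fail instead of
    # silently following it.
    fd, path = tempfile.mkstemp(prefix="myapp-", suffix=".lock")
    os.close(fd)
    return path

if __name__ == "__main__":
    path = safe_lock_file()
    print(path)
    os.remove(path)
```

The fix referenced above takes the same general approach: stop trusting a fixed path in a world-writable directory.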
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"cli","packageVersion":"0.6.6","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-htmlhint:0.9.13;htmlhint:0.9.13;jshint:2.8.0;cli:0.6.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.0.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2016-0036","vulnerabilityDetails":"The package node-cli insecurely uses the lock_file and log_file. Both of these are temporary, but it allows the starting user to overwrite any file they have access to.","vulnerabilityUrl":"https://bugs.debian.org/cgi-bin/bugreport.cgi?bug\u003d809252","cvss2Severity":"low","cvss2Score":"1.9","extraData":{}}</REMEDIATE> -->
|
True
|
WS-2016-0036 (Low) detected in cli-0.6.6.tgz - ## WS-2016-0036 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cli-0.6.6.tgz</b></p></summary>
<p>A tool for rapidly building command line apps</p>
<p>Library home page: <a href="https://registry.npmjs.org/cli/-/cli-0.6.6.tgz">https://registry.npmjs.org/cli/-/cli-0.6.6.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/htmlhint/node_modules/cli/package.json</p>
<p>
Dependency Hierarchy:
- grunt-htmlhint-0.9.13.tgz (Root Library)
- htmlhint-0.9.13.tgz
- jshint-2.8.0.tgz
- :x: **cli-0.6.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Jennifer-Poole/commit/a42e1f276b13ba57ab48e60289e7f00c2858fab6">a42e1f276b13ba57ab48e60289e7f00c2858fab6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package node-cli insecurely uses the lock_file and log_file. Both of these are temporary, but it allows the starting user to overwrite any file they have access to.
<p>Publish Date: 2016-08-16
<p>URL: <a href=https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=809252>WS-2016-0036</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>1.9</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/node-js-libs/cli/commit/fd6bc4d2a901aabe0bb6067fbcc14a4fe3faa8b9">https://github.com/node-js-libs/cli/commit/fd6bc4d2a901aabe0bb6067fbcc14a4fe3faa8b9</a></p>
<p>Release Date: 2017-01-31</p>
<p>Fix Resolution: 1.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"cli","packageVersion":"0.6.6","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-htmlhint:0.9.13;htmlhint:0.9.13;jshint:2.8.0;cli:0.6.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.0.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2016-0036","vulnerabilityDetails":"The package node-cli insecurely uses the lock_file and log_file. Both of these are temporary, but it allows the starting user to overwrite any file they have access to.","vulnerabilityUrl":"https://bugs.debian.org/cgi-bin/bugreport.cgi?bug\u003d809252","cvss2Severity":"low","cvss2Score":"1.9","extraData":{}}</REMEDIATE> -->
|
non_usab
|
ws low detected in cli tgz ws low severity vulnerability vulnerable library cli tgz a tool for rapidly building command line apps library home page a href path to dependency file jennifer poole package json path to vulnerable library jennifer poole node modules htmlhint node modules cli package json dependency hierarchy grunt htmlhint tgz root library htmlhint tgz jshint tgz x cli tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package node cli insecurely uses the lock file and log file both of these are temporary but it allows the starting user to overwrite any file they have access to publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt htmlhint htmlhint jshint cli isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier ws vulnerabilitydetails the package node cli insecurely uses the lock file and log file both of these are temporary but it allows the starting user to overwrite any file they have access to vulnerabilityurl
| 0
|
10,290
| 6,664,376,304
|
IssuesEvent
|
2017-10-02 19:54:45
|
truetandem/e-QIP-prototype
|
https://api.github.com/repos/truetandem/e-QIP-prototype
|
opened
|
Marital/relationships - status - other explanation margin
|
bug usability
|
- [ ] Close gap between other block and provide explanation input
- [ ] Make sure there is a margin between the provide explanation input and the next headline
<img width="1178" alt="screen shot 2017-10-02 at 3 52 16 pm" src="https://user-images.githubusercontent.com/19935974/31096367-cd4c0c06-a789-11e7-8e15-6606bfabddc4.png">
|
True
|
Marital/relationships - status - other explanation margin - - [ ] Close gap between other block and provide explanation input
- [ ] Make sure there is a margin between the provide explanation input and the next headline
<img width="1178" alt="screen shot 2017-10-02 at 3 52 16 pm" src="https://user-images.githubusercontent.com/19935974/31096367-cd4c0c06-a789-11e7-8e15-6606bfabddc4.png">
|
usab
|
marital relationships status other explanation margin close gap between other block and provide explanation input make sure there is a margin between the provide explanation input and the next headline img width alt screen shot at pm src
| 1
|
33,914
| 14,237,128,824
|
IssuesEvent
|
2020-11-18 16:50:00
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Take Strengthsfinder 💪🧠
|
Service: Apps Service: Dev Service: Product Workgroup: DTS
|
Take [Strengthsfinder](https://www.gallup.com/cliftonstrengths/en/252137/home.aspx) and email results to Amenity.
- [x] Mike
- [x] Sergio
- [x] Tilly
- [x] Surbhi
- [x] Stephanie
- [x] Jace
- [x] Nate
- [x] Chrispin
|
3.0
|
Take Strengthsfinder 💪🧠 - Take [Strengthsfinder](https://www.gallup.com/cliftonstrengths/en/252137/home.aspx) and email results to Amenity.
- [x] Mike
- [x] Sergio
- [x] Tilly
- [x] Surbhi
- [x] Stephanie
- [x] Jace
- [x] Nate
- [x] Chrispin
|
non_usab
|
take strengthsfinder 💪🧠 take and email results to amenity mike sergio tilly surbhi stephanie jace nate chrispin
| 0
|
10,830
| 7,320,637,763
|
IssuesEvent
|
2018-03-02 08:20:24
|
ractivejs/ractive
|
https://api.github.com/repos/ractivejs/ractive
|
closed
|
Memory leak while passing data to child component using static delimiters
|
bug/major enhancement/performance
|
## Description:
Similar to #3183.
Passing data to a child component using static delimiters results in a memory leak.
## Versions affected:
0.9.13
## Platforms affected:
All (tested on Chrome)
## Reproduction:
```javascript
Ractive.components.MyComponent = Ractive.extend({
template: `
<div>
<h1>My component</h1>
<p>{{passedData}}</p>
</div>
`,
});
const r = window.r = new Ractive({
el: '#main',
template: `
<h1>Title</h1>
{{#if showComponent}}
<MyComponent passedData="[[@.get('description')]]"/>
{{/if}}
`,
data: {
showComponent: true,
description: 'Root description',
},
onrender() {
setTimeout(() => {
this.set({ showComponent: false });
}, 500);
},
});
```
https://jsfiddle.net/7xbr57b8/8/
1. Using Chrome, open the fiddle and wait 1 second.
2. Perform a memory heap snapshot.
<img width="1251" alt="heap snapshot" src="https://user-images.githubusercontent.com/628799/36725805-ccf2a772-1bb7-11e8-8006-b490142d25f5.png">
|
True
|
Memory leak while passing data to child component using static delimiters - ## Description:
Similar to #3183.
Passing data to a child component using static delimiters results in a memory leak.
## Versions affected:
0.9.13
## Platforms affected:
All (tested on Chrome)
## Reproduction:
```javascript
Ractive.components.MyComponent = Ractive.extend({
template: `
<div>
<h1>My component</h1>
<p>{{passedData}}</p>
</div>
`,
});
const r = window.r = new Ractive({
el: '#main',
template: `
<h1>Title</h1>
{{#if showComponent}}
<MyComponent passedData="[[@.get('description')]]"/>
{{/if}}
`,
data: {
showComponent: true,
description: 'Root description',
},
onrender() {
setTimeout(() => {
this.set({ showComponent: false });
}, 500);
},
});
```
https://jsfiddle.net/7xbr57b8/8/
1. Using Chrome open fiddle and wait 1 second.
2. Perform a memory heap snapshot.
<img width="1251" alt="heap snapshot" src="https://user-images.githubusercontent.com/628799/36725805-ccf2a772-1bb7-11e8-8006-b490142d25f5.png">
|
non_usab
|
memory leak while passing data to child component using static delimiters description similar to passing data to a child component using static delimiters results in a memory leak versions affected platforms affected all tested on chrome reproduction javascript ractive components mycomponent ractive extend template my component passeddata const r window r new ractive el main template title if showcomponent if data showcomponent true description root description onrender settimeout this set showcomponent false using chrome open fiddle and wait second perform a memory heap snapshot img width alt heap snapshot src
| 0
|